score (int64, 10-1.34k) | text (string, 296-618k chars) | url (string, 16-1.13k chars) | year (int64, 13-18)
---|---|---|---
13 | Common Core Aligned Resources for HSS-CP.B.7
Apply the Addition Rule, P(A or B) = P(A) + P(B) - P(A and B), and interpret the answer in terms of the model.
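As a quick concreteness check on the rule (our own illustration, not part of the listed resources), the identity can be verified exhaustively for events on one roll of a fair six-sided die:

```python
from fractions import Fraction

outcomes = range(1, 7)
A = {n for n in outcomes if n % 2 == 0}   # event A: the roll is even -> {2, 4, 6}
B = {n for n in outcomes if n > 3}        # event B: the roll is greater than 3 -> {4, 5, 6}

def p(event):
    """Probability of an event under six equally likely outcomes (exact arithmetic)."""
    return Fraction(len(event), 6)

# Counting the union directly agrees with the Addition Rule:
assert p(A | B) == p(A) + p(B) - p(A & B)   # 2/3 == 1/2 + 1/2 - 1/3
```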
Modeling Conditional Probabilities 1: Lucky Dip
9th - 12th CCSS: Designed
Check out this detailed lesson plan on conditional probability! Learners work individually and also collaboratively to analyze the fairness of a game and justify their reasoning. It includes detailed notes and many helpful suggestions...
Probability Part 1: Rules and Patterns: Crash Course Statistics #13
12 mins 9th - 12th CCSS: Adaptable
Probability is all about patterns and rules. An instructive video explores the tenets of probability. It starts with basic probability, moves on to compound probability, and explores the addition and multiplication rules for probability.
Additive and Multiplicative Rules for Probability: Red Dress? Blue Dress? Both!
7th - 12th CCSS: Designed
The sum of the parts is greater than the whole. An interactive uses a Venn-like model to show the percentage of females from a survey that have a blue dress, a red dress, or both. The pupils determine the numbers in each category...
Odd or Even? The Addition and Complement Principles of Probability
9th - 12th CCSS: Designed
Odd or even—fifty-fifty chance? Pupils first conduct an experiment rolling a pair of dice to generate data in a probability lesson. It goes on to introduce mutually exclusive and non-mutually exclusive events, and how to use the...
Calculate Different Probabilities by Working Backward
5 mins 9th - 12th CCSS: Designed
Teach learners to see probability formulas as regular formulas that they can manipulate and rearrange using the rules of algebra. The lesson in the video uses a formula and solves backward for a missing piece that isn't provided in the...
Calculate Probabilities by Using the Complement and Addition Rule
8 mins 9th - 12th CCSS: Designed
What is the probability that you eat neither cake nor pie? Not that it should happen, but if it did, there is a formula for that. A math lesson introduces high schoolers to the idea of a complement with the given steps to solve the...
Use the Addition Rule to Calculate the Probability of Disjoint Event A or B
7 mins 9th - 12th CCSS: Designed
Learning about probability and enjoying the lesson don't have to be disjoint events. A helpful video guides pupils through a few examples of probability with dice and cards to show how the formula calculates the result.
How to Add Probabilities and Multiply Probabilities Using Marbles
6 mins 10th - 12th CCSS: Adaptable
Do you want a red marble and a green marble, or do you want a red marble or a green marble? Calculating the probability in these two situations is very different. Use the fifth video in a series of six to explain the difference in the...
General Addition and Multiplication Rules of Conditional Probabilities
10th - Higher Ed CCSS: Adaptable
Making connections between multiple methods of solving problems is an important part of understanding conditional probability. The lesson shows solutions to problems using Venn diagrams, tree diagrams, formulas, and two-way tables.... | https://www.lessonplanet.com/standards/resources/1961 | 18 |
47 | As you can see in the previous picture, a normal can be defined as the cross product of any two non-parallel vectors that are tangent to the surface at a point. This gives an equation for the plane, if only we can find a vector in the plane. A back-facing plane will be invisible in a rendered scene and as such can be exempted from many scene calculations. Firstly, a normal vector to the plane is any vector that starts at a point in the plane and has a direction that is orthogonal (perpendicular) to the surface of the plane. In geometry, a normal is an object such as a line or vector that is perpendicular to a given object. Notice that if we find a vector that lies in the plane, it must be perpendicular to the normal vector, since the plane and the normal vector are perpendicular.
Unit Normal Vector Calculator - eMathHelp. There is an added complication: OpenGL wants a unit normal, i.e., the vector has to be 1 unit in length.
Section 12.4: Tangent Vectors and Normal Vectors.
Plotting a normal vector in 3D - MATLAB Answers - MATLAB Central. As you can see on this page, when we define a hyperplane, we suppose that we have a vector that is orthogonal to the hyperplane.
Calculate a Normal Vector (CAL Command) | AutoCAD. Determine the acceleration value: if the values of all individual forces are known, then the net force can be calculated as the vector sum of all the forces.
Cross Product - Math is Fun - Maths Resources. A vector which is normal (orthogonal, perpendicular) to a plane containing two vectors is also normal to both of the given vectors.
Secondly, since the normal is a vector, we only want to transform its orientation. A normal vector is a vector that is perpendicular or orthogonal to another vector. For example, in the two-dimensional case, the normal line to a curve at a given point is the line perpendicular to the tangent line to the curve at that point. Find a vector which is normal to the surface at the point (2,0,2). The direction (towards or away from your vector) and length do not matter, as long as the length is not zero. Picture this direction vector moving along the curve as the path progresses. Because the binormal vector is defined to be the cross product of the unit tangent and unit normal vector, we know that the binormal vector is orthogonal to both the tangent vector and the normal vector.
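A minimal sketch of that construction (our own example values, using NumPy rather than any of the tools linked here): take two non-parallel tangent vectors, cross them to get a normal, and scale it to unit length as OpenGL expects.

```python
import numpy as np

u = np.array([1.0, 0.0, 2.0])    # two non-parallel vectors tangent to the surface
v = np.array([0.0, 1.0, -1.0])   # at the point of interest (illustrative values)

n = np.cross(u, v)               # normal: perpendicular to both u and v
unit_n = n / np.linalg.norm(n)   # unit normal (length 1), as OpenGL wants

# The normal is orthogonal to both tangent vectors (dot products vanish):
assert np.isclose(n @ u, 0.0) and np.isclose(n @ v, 0.0)
```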
SVM - Understanding the Math - What is a vector? Tangent Vectors and Normal Vectors: in the preceding section, you learned that the velocity vector points in the direction of motion. Then the random vector defined as ... has a multivariate normal distribution with mean ... and covariance matrix ...; this can be proved by showing that the product of the probability density functions is equal to the joint probability density function (this is left as an exercise).
How to find the normal vector of a plane? And a point on
Q: Why is the gradient normal? - MathOverflow
Vector Calculus: Understanding Flux – BetterExplained
Based on a point and a normal vector - St. John Fisher College
“Encoding Normal Vectors using Optimized Spherical
Norm vs Normal - What's the difference? | WikiDiff
Given a vector v in the space, there are infinitely many perpendicular vectors. Mathematics: a perpendicular line or plane, especially one perpendicular to a tangent line of a curve, or a tangent plane of a surface, at the point of contact. | http://paydayloanbyphone.ga/hyxu/what-is-a-normal-vector-2936.php | 18
36 | An adder is a digital circuit that performs addition of numbers. In many computers and other kinds of processors adders are used in the arithmetic logic units or ALU. They are also utilized in other parts of the processor, where they are used to calculate addresses, table indices, increment and decrement operators, and similar operations.
Although adders can be constructed for many number representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or ones' complement is being used to represent negative numbers, it is trivial to modify an adder into an adder–subtractor. Other signed number representations require more logic around the basic adder.
The half adder adds two single binary digits A and B. It has two outputs, sum (S) and carry (C). The carry signal represents an overflow into the next digit of a multi-digit addition. The value of the sum is 2C + S. The simplest half-adder design incorporates an XOR gate for S and an AND gate for C. The Boolean logic for the sum (in this case S) will be A′B + AB′, whereas that for the carry (C) will be AB. With the addition of an OR gate to combine their carry outputs, two half adders can be combined to make a full adder. The half adder adds two input bits and generates a carry and sum, which are the two outputs of a half adder. The input variables of a half adder are called the augend and addend bits. The output variables are the sum and carry. The truth table for the half adder is:
| A (in) | B (in) | C (out) | S (out) |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 1 |
| 0 | 1 | 0 | 1 |
| 1 | 1 | 1 | 0 |
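A short sketch of ours that mirrors this gate-level design and reproduces the truth table:

```python
def half_adder(a, b):
    """Half adder on bits: sum S = A XOR B, carry C = A AND B."""
    return a ^ b, a & b   # (S, C)

# Check against the truth table; the value of the sum is 2C + S:
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert 2 * c + s == a + b
```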
A full adder adds binary numbers and accounts for values carried in as well as out. A one-bit full adder adds three one-bit numbers, often written as A, B, and Cin; A and B are the operands, and Cin is a bit carried in from the previous, less-significant stage. The full adder is usually a component in a cascade of adders, which add 8, 16, 32, etc. bit binary numbers. The circuit produces a two-bit output: the output carry and sum, typically represented by the signals Cout and S, where the sum equals 2Cout + S.
A full adder can be implemented in many different ways such as with a custom transistor-level circuit or composed of other gates. One example implementation is with S = A ⊕ B ⊕ Cin and Cout = (A ⋅ B) + (Cin ⋅ (A ⊕ B)).
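That particular implementation transcribes directly into a behavioral model (a sketch of ours) whose two-bit output encodes the arithmetic sum of the three input bits:

```python
def full_adder(a, b, cin):
    """Full adder: S = A XOR B XOR Cin, Cout = A·B + Cin·(A XOR B)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# The output equals 2*Cout + S = A + B + Cin for all eight input combinations:
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
```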
In this implementation, the final OR gate before the carry-out output may be replaced by an XOR gate without altering the resulting logic. Using only two types of gates is convenient if the circuit is being implemented using simple integrated circuit chips which contain only one gate type per chip.
A full adder can also be constructed from two half adders by connecting A and B to the input of one half adder, then taking its sum-output S as one of the inputs to the second half adder and Cin as its other input, and finally the carry outputs from the two half-adders are connected to an OR gate. The sum-output from the second half adder is the final sum output (S) of the full adder and the output from the OR gate is the final carry output (Cout). The critical path of a full adder runs through both XOR gates and ends at the sum bit s. Assuming that an XOR gate takes 1 delay to complete, the delay imposed by the critical path of a full adder is equal to T_FA = 2 · T_XOR = 2 gate delays.
The critical path of a carry runs through one XOR gate in the adder and through 2 gates (AND and OR) in the carry block; therefore, if AND and OR gates each take 1 delay to complete, it has a delay of T_c = T_XOR + T_AND + T_OR = 3 gate delays.
The truth table for the full adder is:
| A (in) | B (in) | Cin (in) | Cout (out) | S (out) |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 1 |
| 0 | 1 | 0 | 0 | 1 |
| 0 | 1 | 1 | 1 | 0 |
| 1 | 0 | 0 | 0 | 1 |
| 1 | 0 | 1 | 1 | 0 |
| 1 | 1 | 0 | 1 | 0 |
| 1 | 1 | 1 | 1 | 1 |
Adders supporting multiple bits
It is possible to create a logical circuit using multiple full adders to add N-bit numbers. Each full adder inputs a Cin, which is the Cout of the previous adder. This kind of adder is called a ripple-carry adder (RCA), since each carry bit "ripples" to the next full adder. Note that the first (and only the first) full adder may be replaced by a half adder (under the assumption that Cin = 0).
The layout of a ripple-carry adder is simple, which allows fast design time; however, the ripple-carry adder is relatively slow, since each full adder must wait for the carry bit to be calculated from the previous full adder. The gate delay can easily be calculated by inspection of the full adder circuit. Each full adder requires three levels of logic. In a 32-bit ripple-carry adder, there are 32 full adders, so the critical path (worst case) delay is 3 (from input to carry in first adder) + 31 × 2 (for carry propagation in latter adders) = 65 gate delays. The general equation for the worst-case delay for an n-bit carry-ripple adder, accounting for both the sum and carry bits, is T_CRA(n) = 3 + 2(n − 1) = 2n + 1 gate delays.
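Chaining the one-bit full adder gives a simple behavioral model of the ripple-carry adder (again a sketch of ours, with bits stored least-significant first):

```python
def full_adder(a, b, cin):   # as in the earlier sketch
    s = a ^ b ^ cin
    return s, (a & b) | (cin & (a ^ b))

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (least-significant bit first) by
    chaining full adders; each carry-out 'ripples' into the next carry-in."""
    carry, sum_bits = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sum_bits.append(s)
    return sum_bits, carry

# 3 (0b011) + 6 (0b110) = 9 (0b1001): sum bits [1, 0, 0] plus carry-out 1
assert ripple_carry_add([1, 1, 0], [0, 1, 1]) == ([1, 0, 0], 1)
```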
To reduce the computation time, engineers devised faster ways to add two binary numbers by using carry-lookahead adders (CLA). They work by creating two signals (P and G) for each bit position, based on whether a carry is propagated through from a less significant bit position (at least one input is a 1), generated in that bit position (both inputs are 1), or killed in that bit position (both inputs are 0). In most cases, P is simply the sum output of a half adder and G is the carry output of the same adder. After P and G are generated, the carries for every bit position are created. Some advanced carry-lookahead architectures are the Manchester carry chain, Brent–Kung adder (BKA), and the Kogge–Stone adder (KSA).
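The P and G signals themselves are simple to model. The serial loop below (our illustration; real lookahead hardware evaluates all carries in parallel from the P and G values) just unrolls the recurrence that the lookahead logic implements:

```python
def lookahead_carries(a_bits, b_bits, c0=0):
    """Compute carries from propagate P = A XOR B and generate G = A AND B
    via the recurrence c[i+1] = G[i] OR (P[i] AND c[i])."""
    carries = [c0]
    for a, b in zip(a_bits, b_bits):
        p, g = a ^ b, a & b
        carries.append(g | (p & carries[-1]))
    return carries   # carry into each bit position, plus the final carry-out

# Matches the carries a ripple-carry adder would produce for 3 + 6:
assert lookahead_carries([1, 1, 0], [0, 1, 1]) == [0, 0, 1, 1]
```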
Some other multi-bit adder architectures break the adder into blocks. It is possible to vary the length of these blocks based on the propagation delay of the circuits to optimize computation time. These block based adders include the carry-skip (or carry-bypass) adder which will determine P and G values for each block rather than each bit, and the carry select adder which pre-generates the sum and carry values for either possible carry input (0 or 1) to the block, using multiplexers to select the appropriate result when the carry bit is known.
By combining multiple carry-lookahead adders, even larger adders can be created, and this can be applied at multiple levels. For example, the following adder is a 64-bit adder that uses four 16-bit CLAs with two levels of lookahead carry units (LCUs).
If an adding circuit is to compute the sum of three or more numbers, it can be advantageous to not propagate the carry result. Instead, three-input adders are used, generating two results: a sum and a carry. The sum and the carry may be fed into two inputs of the subsequent 3-number adder without having to wait for propagation of a carry signal. After all stages of addition, however, a conventional adder (such as the ripple-carry or the lookahead) must be used to combine the final sum and carry results.
A full adder can be viewed as a 3:2 lossy compressor: it sums three one-bit inputs and returns the result as a single two-bit number; that is, it maps 8 input values to 4 output values. Thus, for example, a binary input of 101 results in an output of 1 + 0 + 1 = 10 (decimal number 2). The carry-out represents bit one of the result, while the sum represents bit zero. Likewise, a half adder can be used as a 2:2 lossy compressor, compressing four possible inputs into three possible outputs.
Such compressors can be used to speed up the summation of three or more addends. If the addends are exactly three, the layout is known as the carry-save adder. If the addends are four or more, more than one layer of compressors is necessary, and there are various possible design for the circuit: the most common are Dadda and Wallace trees. This kind of circuit is most notably used in multipliers, which is why these circuits are also known as Dadda and Wallace multipliers.
| https://en.wikipedia.org/wiki/Binary_adder | 18
13 | Grouped Data
The data is grouped together by classes or bins.
Ungrouped data is the data you first gather from an experiment or study. The data is raw — that is, it’s not sorted into categories, classified, or otherwise grouped. An ungrouped set of data is basically a list of numbers.
When you have a frequency table or other group of data, the original set of data is lost — replaced with statistics for the group. You can't find the exact sample mean (as you don't have the original data) but you can find an estimate. The formula for estimating the sample mean for grouped data is x̄ = Σ(f · x) / Σf, where f is each class frequency and x is the corresponding class midpoint.
Example question: Find the sample mean for the following frequency table.
| Score | Frequency (f) |
|---|---|
| 5 ≤ t < 10 | 1 |
| 10 ≤ t < 15 | 4 |
| 15 ≤ t < 20 | 6 |
| 20 ≤ t < 25 | 4 |
| 25 ≤ t < 30 | 2 |
| 30 ≤ t < 35 | 3 |
Step 1: Find the midpoint for each class interval. The midpoint is just the middle of each interval. For example, the middle of 10 and 15 is 12.5:
| Score | Frequency (f) | Midpoint (x) |
|---|---|---|
| 5 ≤ t < 10 | 1 | 7.5 |
| 10 ≤ t < 15 | 4 | 12.5 |
| 15 ≤ t < 20 | 6 | 17.5 |
| 20 ≤ t < 25 | 4 | 22.5 |
| 25 ≤ t < 30 | 2 | 27.5 |
| 30 ≤ t < 35 | 3 | 32.5 |
Step 2: Multiply the midpoint (x) by the frequency (f):
| Score | Frequency (f) | Midpoint (x) | f × x |
|---|---|---|---|
| 5 ≤ t < 10 | 1 | 7.5 | 7.5 |
| 10 ≤ t < 15 | 4 | 12.5 | 50 |
| 15 ≤ t < 20 | 6 | 17.5 | 105 |
| 20 ≤ t < 25 | 4 | 22.5 | 90 |
| 25 ≤ t < 30 | 2 | 27.5 | 55 |
| 30 ≤ t < 35 | 3 | 32.5 | 97.5 |
Add up all of the totals for this step. In other words, add up all the values in the last column (you should get 405).
Step 3: Divide the total of the last column (f × x) by the total of the frequency column (f):
The mean of grouped data (x̄) = 405 / 20 = 20.25.
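The same computation in a few lines (a sketch of ours, reusing the table above):

```python
midpoints   = [7.5, 12.5, 17.5, 22.5, 27.5, 32.5]   # class midpoints (x)
frequencies = [1, 4, 6, 4, 2, 3]                    # class frequencies (f)

total_fx = sum(f * x for f, x in zip(frequencies, midpoints))   # 405.0
total_f  = sum(frequencies)                                     # 20
print(total_fx / total_f)   # 20.25, the estimated sample mean
```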
| https://www.statisticshowto.datasciencecentral.com/grouped-data/ | 18
14 | Sixth grade students have mastered the four operations of mathematics -- addition, subtraction, division and multiplication. By the end of the year, these middle school students have used these operations with decimals, fractions and negative numbers. At this point, math also involves reasoning, building on knowledge and observing; these skills are encouraged by higher level questions and participation from the students in a collaborative setting.
Asking open questions is a way to provide an opportunity for your students to think critically. For example, asking the sixth grade students how many polygons they can make with an area of 24 square units is an open question. A closed question with the same objective is to have a polygon drawn and ask them the area of that polygon. Open questions can have more than one answer, and differentiation occurs automatically with this type of question. The more advanced students search for many ways to construct the polygon and answer the question. Other students also try to find a polygon to meet the criteria, realize that finding the many answers is valued, and continue working. Discussion of the different ways to answer the open question adds value to the question in the sixth grade classroom.
In Bloom's Taxonomy, a collection of verbs is divided into levels of thinking. Appraise, compare, contrast, criticize, differentiate, discriminate, distinguish, examine and question are verbs from Bloom's "analyzing" level, which encourages critical thinking. For example, having your sixth graders compare the distances around various circles' circumferences to their diameter measurements is a great introduction to pi and the formula for the circumference of a circle. It is also a critical thinking question which promotes exploration.
Great questions are wasted if wait time is not given for the answers. Too often teachers don't allow the students time to think and answer, blurting out the answer themselves or calling on another student. Some sixth grade students need more time to formulate their answers. Teachers should provide a few seconds of wait time for students to process the question and their answer.
Math builds on prior lessons. Critical thinking questions should help the sixth grade students see those relationships and connections. Teachers should ask students how the problem they are working on is connected to something they learned previously. For example, when your sixth graders are learning about ratios, you can remind them of their study of similar triangles. By asking them to think of how ratio and similar triangles are connected, the students are led to review what they have already learned about similar shapes and apply it to the study of ratios. The teacher helps the students to see the relationship between the two concepts and to deepen their understanding through the connections. | https://www.theclassroom.com/critical-thinking-questions-6th-grade-math-3914.html | 18 |
14 | These 20 no-prep basic math facts practice and assessments make managing basic math facts and data collection a breeze! It includes addition, subtraction, multiplication, division, and mixed operations.
There are 40 problems on each assessment. Use the Single-Operation Assessments (Addition, Subtraction, Multiplication, Division) during the year as students master their basic facts for that operation. Use one of the Mixed Math Facts Assessments as a pre-assessment at the beginning of the year, a formative assessment during the process, or a summative assessment when students are ready.
The 20 assessments in this pack include:
4 Addition assessments with addends to 9
4 Subtraction assessments with minuends to 18
4 Multiplication assessments with factors to 11
4 Division assessments with divisors to 11
4 Mixed operation assessments
I have included two data collection options. One graph has room for all twenty assessments, and would be appropriate for older students. The other graphs have room for one operation and would be suitable for younger students. Instruct students to color in the graph to show how many problems they got correct. Consider creating data notebooks and have students record and monitor their own growth and progress. | https://www.teacherspayteachers.com/Product/Basic-Math-Facts-Practice-and-Assessments-2726242 | 18 |
15 | A century ago, we knew virtually nothing about the large scale structure of the universe, not even the fact that there exist galaxies beyond our Milky Way. Today, cosmologists have the tools to image the universe as it is today and as it was in the past, stretching all the way back to its infancy when the first atoms were forming. These images reveal that the complex universe we see today, full of galaxies, black holes, planets and dust, emerged from a remarkably featureless universe: a uniform hot soup of elemental constituents immersed in a space that exhibits no curvature.[1]
How did the universe evolve from this featureless soup to the finely-detailed hierarchy of stars, galaxies, and galaxy clusters we see today? A closer look reveals the primordial soup was not precisely uniform. Exquisitely sensitive detectors, such as those aboard the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck satellites, produced a map that shows the soup had a distribution of hot and cold spots arranged in a pattern with particular statistical properties. For example, if one only considers spots of a certain size and measures the distribution of temperatures for those spots only, it turns out the distribution has two notable properties: it is nearly a bell curve (“Gaussian”) and it is nearly the same for any size (“scale-invariant”). Thanks to high-resolution computer simulations, we can reproduce the story of how the hot and cold spots evolved into the structure we see today. But we are still struggling to understand how the universe came to be flat and uniform and where the tiny but critical hot and cold spots came from in the first place.
Looking Beyond Inflation
One leading idea is that, right after the big bang, a period of rapid expansion known as inflation set in, smoothing and flattening the observable universe. However, there are serious flaws with inflation: inflation requires adding special forms of energy to the simple big bang picture that must be arranged in a very particular way in order for inflation to start, so the big bang is very unlikely to trigger a period of inflation; and, even if inflation were to start, it would amplify quantum fluctuations into large volumes of space that result in a wildly-varying “multiverse” consisting of regions that are generally neither smooth nor flat. Although inflation was originally thought to give firm predictions about the structure of our universe, the discovery of the multiverse effect renders the theory unpredictive: literally any outcome, any kind of universe is possible.
Another leading approach, known as the ekpyrotic picture, proposes that the smoothing and flattening of the universe occurs during a period of slow contraction. This may seem counterintuitive at first. To understand how this could work, imagine a film showing the original big bang picture. The universe would be slowly expanding and become increasingly non-uniform and curved over time. Now imagine running this film backwards. It would show a slowly contracting universe becoming more uniform and less curved over time. Of course, if the smoothing and flattening occur during a period of slow contraction, there must be a bounce followed by slow expansion leading up to the present epoch. In one version of this picture, the evolution of the universe is cyclic, with periods of expansion, contraction, and bounce repeating at regular intervals. In contrast to inflation, smoothing by ekpyrotic contraction does not require special arrangements of energy and is easy to trigger. Furthermore, contraction prevents quantum fluctuations from evolving into large patches that would generate a multiverse. However, making the scale-invariant spectrum of variations in density requires more ingredients than in inflation.
The best of both worlds?
While experimentalists have been feverishly working to determine which scenario is responsible for the large-scale properties of the universe—rapid expansion or slow contraction—a novel third possibility has been proposed: Why not expand and contract at the same time? This, in essence, is the idea behind anamorphic cosmology. Anamorphic is a term often used in art or film for images that can be interpreted two ways, depending on your vantage point. In anamorphic cosmology, whether you view the universe as contracting or expanding during the smoothing and flattening phase depends on what measuring stick you use.
If you are measuring the distance between two points, you can use the Compton wavelength of a particle, such as an electron or proton, as your fundamental unit of length. Another possibility is to use the Planck length, the distance formed by combining three fundamental physical “constants”: Planck’s constant, the gravitational constant and the speed of light. In Einstein’s theory of general relativity, both lengths are fixed for all times, so measuring contraction or expansion with respect to either the particle Compton wavelength or the Planck length gives the same result. However, in many theories of quantum gravity—that is, extensions of Einstein’s theory aimed at combining quantum mechanics and general relativity—one length varies in time with respect to the other. In the anamorphic smoothing phase, the Compton wavelength is fixed in time and, as measured by rulers made of matter, space is contracting. Simultaneously, the Planck length is shrinking so rapidly that space is expanding relative to it. And so, surprisingly, it is really possible to have contraction (with respect to the Compton wavelength) and expansion (with respect to the Planck length) at the same time!
The anamorphic smoothing phase is temporary. It ends with a bounce from contraction to expansion (with respect to the Compton wavelength). As the universe expands and cools afterwards, both the particle Compton wavelengths and the Planck mass become fixed, as observed in the present phase of the universe.
By combining contraction and expansion, anamorphic cosmology potentially incorporates the advantages of the inflationary and ekpyrotic scenarios and avoids their disadvantages. Because the universe is contracting with respect to ordinary rulers, like in ekpyrotic models, there is no multiverse problem. And because the universe is expanding with respect to the Planck length, as in inflationary models, generating a scale-invariant spectrum of density variations is relatively straightforward. Furthermore, the conditions needed to produce the bounce are simple to obtain, and, notably, the anamorphic scenario can generate a detectable spectrum of primordial gravitational waves, which cannot occur in models with slow ekpyrotic contraction. International efforts currently underway to detect primordial gravitational waves from land-based, balloon-borne and space-based observatories may prove decisive in distinguishing these possibilities.
[1] According to Einstein's theory of general relativity, space can be bent so that parallel light rays converge or diverge, yet observations indicate that their separation remains fixed, as occurs in ordinary Euclidean geometry. Cosmologists refer to this special kind of unbent space as "flat."
Editor’s picks for further reading
The anamorphic universe
Authors Anna Ijjas and Paul Steinhardt introduce anamorphic cosmology in this 2015 paper.
The Ekpyrotic Universe: Colliding Branes and the Origin of the Hot Big Bang
In this 2001 technical paper, Paul Steinhardt and his colleagues Justin Khoury, Burt Ovrut, and Neil Turok explain how an “Ekpyrotic” universe could solve some of the open questions around the standard big bang model.
Implications of Planck2015 for inflationary, ekpyrotic and anamorphic bouncing cosmologies
Authors Anna Ijjas and Paul Steinhardt review the implications of Planck satellite data on anamorphic and other cosmological models. | https://www.pbs.org/wgbh/nova/article/do-we-live-in-an-anamorphic-universe/ | 18 |
46 | Shooting range: what do you think would happen if you launched the projectile at an angle upwards rather than at another angle?
- Finding the optimal angle for a projectile: in the simple equations for the range of a projectile, launch angles complementary to each other give the same range.
- Projectile motion: tabulate the range (in cm) for each projectile launch trial and the expected projectile range as some function of angle.
- Projectile lab (range and time vs. angle). Purpose: in this activity you will create a procedure that allows you to discover the relationship between the angle at which a projectile is fired and the range of the projectile.
- Experiment 2, range of a projectile: once you have chosen a launch angle θ and a spring compression x_spring, try a few test flights.
- Study of projectile range vs. launch angle. Aim: to find out the relationship between projectile range and launch angle (physics lab report, projectile motion).
- Physics report, projectile motion: the graph of range vs. angle is symmetrical around 45°; we think that the launch velocity is affected by mass.
- Graph range vs. initial launch angle and calculate the theoretical range for your projectile with an initial launch angle of 45°.
- Maximum range in projectile motion: here is a diagram showing the launch velocity of some object; what angle would this be at?
- Each student will kick a soccer ball and measure its initial launch speed and angle; students predict the range of the ball (a real vs. ideal projectile).
- Range of projectile motion: the range is the distance between the launch point and the landing point; at 45°, sin 2θ = 1 and the range is R = v₀²/g.
- Lesson 18, projectile motion at an angle: questions involving objects launched from the ground upwards at an angle and their horizontal displacement (range).
- The launch at 45 degrees gives the maximum range; varying the launch angle of a projectile will change the range.
- How large must the initial velocity of the ball be, and at what angle must it be launched? (Projectile motion: initial velocity and launch angle.)
- Graphs for projectile motion, with the positive x direction pointing from the launch point toward the goalkeeper and the net.
- Launch angle vs. range/height theory: what angle will produce the maximum height of a launched projectile? What angle will produce the greatest range (horizontal distance)?
- Analysis for part 1: use Excel or LoggerPro 3.1 to construct a properly labeled graph of range vs. launch angle.
- Golf clubs, loft angle, and distance (projectile motion): make a graph of the distance (y-axis) vs. launch angle.
- Title: Projectile Motion. Abstract: a projectile was fired from atop an elevation at an angle; the initial velocity for each firing was likely to be the same.
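The symmetry about 45° and the maximum there are easy to confirm numerically with the ideal no-drag formula R = v₀² sin(2θ)/g (a sketch of our own, independent of the reports above):

```python
import math

def projectile_range(v0, angle_deg, g=9.81):
    """Ideal (no-drag) range from level ground: R = v0^2 * sin(2*theta) / g."""
    return v0**2 * math.sin(math.radians(2 * angle_deg)) / g

v0 = 10.0
# Complementary launch angles give the same range...
assert math.isclose(projectile_range(v0, 30), projectile_range(v0, 60))
# ...and the range peaks at 45 degrees:
print(max(range(91), key=lambda a: projectile_range(v0, a)))   # 45
```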
- The physics of golf: the question of the optimal launch angle for a projectile begins a study of maximum projectile range. Here is a better way to calculate the maximum range of a projectile and find the launch angle that maximizes range. The range of the trebuchet: the trebuchet produced consistent launches and very rarely failed to launch the projectile; record the angle made with the ground. | http://mktermpaperanun.canon7d.info/projectile-range-vs-launch-angle.html | 18
13 | Test, answer, or show: your science fair project may do one of three things: test an idea (or hypothesis), answer a question, or show how nature works. Topic ideas, space topics: how do the constellations change in the night sky over different periods of time? How does the number of stars vary? Maybe you just want to watch and see how it's done before you try to build a volcano with 24 fourth-graders. Whatever the reason, the answer is simple: science. This cool experiment on the luminescent science behind glow sticks is one of many fantastic and informative videos on YouTube hosted by Steve Spangler.
Over 1000 free science fair projects with complete instructions.
Participation is mandatory for all 4th and 5th graders; kindergarten through 3rd grade participation is optional. Each student will be provided with a display board, which they must use for the project to be eligible. Here are some helpful links as well to get some ideas for projects.
Science fair display board dos and don'ts. [Photos of student science fair projects: "Splitting Hydrogen and Oxygen," "Where Did the Chocolate...," and "Egg Geode."]
Science projects for third graders should be simple and interesting for children; these ideas use simple objects you and your child can find at home. Teaching children science is a simple process, especially when you make the experiment interesting and fun.
Science fairs aren't just for older kids. If you're an elementary student, you can learn a lot and have a great time doing your own project. For grades K-3, a demonstration of scientific principles is usually okay, although many fairs require real experiments; for 4th-5th graders, a complete experiment that answers a question.
Last year, my then 9-year-old won first place at the third grade science fair with one of her STEM projects: which cup will keep water cold (or hot) the longest? We are pretty proud of that accomplishment and the fact that she got second place at the county level; the chemistry major in me was pretty happy.
Gigi demonstrates a chemical reaction using 3 different kinds of milk & Red Bull.
Award-winning 3rd grade science fair projects: free project examples by grade level.
Science fair planning guide: okay, now get to work on your project. What's that, you still need help getting started? Just follow these easy steps. What you should do the day of the science fair (PK-3rd grade), and what those not-so-scary judges are looking for: a lot of kids are scared of talking to a judge.
Science fair 2017, 1st-3rd grade, due Wednesday, March 22: first, second and third graders are invited to participate in the Brown School science fair. They may choose to submit either a science collection or to follow the science project format. Science is fun, and our goal is to get kids involved in science early.
First graders took on an engineering project this year, designing and testing insulated containers. Second graders followed the scientific method to answer an exciting question: how does electricity affect the growth of plants? Beginning in 3rd grade, students complete individual experimental projects.
Among the science projects were also several second grade projects. In these pictures, the first group worked on a project involving what melts ice cubes; the second group discovered which beverages stain teeth the darkest; and the third group (holding green pieces of paper) conducted experiments to see what cleans...
These science fair projects for 6th grade allow kids ages 11-12 to explore science concepts from polarity and density to electricity and circuits.
We teamed up with kid science guru Steve Spangler to get the coolest experiments you can try at home, including color-changing milk and a Mentos Diet Coke geyser.
3rd grade science fair project: below is one suggestion for the layout of your project board.
Third-grade science projects are many grade-schoolers' first introduction to science and can be very exciting. As a parent, you are in a unique position to encourage your child's interest in science while guiding him in creating a good write-up of his project. Third-grade reports need not be overly detailed or lengthy.
2018 Rose Hill Elementary School science fair: Thursday, February 22, 2018, at 1:00-3:00 pm and 5:30-7:00 pm is the Rose Hill Elementary science fair, student's choice, where 4th and 5th grade students judge the work of K-3rd grade projects based on the scientific method.
These science fair projects for 4th grade allow kids ages 8-9 to explore science concepts from air pressure to physics and density.
Running out of time? Here are some fun science fair projects you can do in less than a day using things you have on hand.
Candies are not only yummy, but they are also spectacular science projects. Try this candy experiment and learn intricate STEM ideas. Third, it's stratification: this physics concept is the hardest, and in fact I'm still not sure if this is the right or only explanation of why the different colors are not mixed.
If you have a science project for third grade coming up and you need an idea of what to do, take a look at this article; we'll show you a few.
This article includes: 1. science fair 101 for parents and students; 2. sample project instructions links; 3. choosing a topic; 4. preparing your board. This site also rates projects by grade level and provides background scientific information as well as complete instructions for how to do the experiment. | http://hsassignmentbcma.nextamericanpresident.us/science-fair-projects-for-3rd-graders.html | 18
18 | G-Zero Experiment Proves Strange Quark Effects Not That Big
It's been said that anything worth doing takes time - just ask those scientists involved in what's known as the G-Zero experiment. Nearly two decades ago, an international coterie of more than 100 scientists came together with the goal of pinpointing just how strange the ordinary stuff around us can be. The G-Zero collaboration proposed a precisely tuned survey for ephemeral particles that appear only briefly inside matter. Specifically, they wanted to measure the effect of strange particles in the proton, the sub-atomic particle found deep inside the nucleus of every atom in our universe.
It was a massive undertaking. The full experiment would take more than 15 years to accomplish, with tons of new equipment and exclusive use of one third of Jefferson Lab's experimental space for over two years. In addition, it would benefit from the assistance of nearly 50 undergraduates, graduate students and postdoctoral researchers and hundreds of support personnel.
Now, the G-Zero experiment is publishing the first of several of papers detailing its final conclusions. Its findings have been printed in the January 8 issue of Physical Review Letters.
Peeking Into the Proton
It took humankind a full two thousand years from the first recorded suggestion of the atom's existence until one would actually be pried apart for a glimpse inside. Democritus, a Greek philosopher of the fifth century B.C., spoke of the "a-tom" as the smallest component of matter, from the Greek meaning "cannot be split."
What we now think of as the atom - consisting of a nucleus and its electrons - was first described by John Dalton in the early 1800s. Later that century, scientists found that they could strip electrons out of the atom. Then, in the early twentieth century, they found that the nucleus contained even heavier particles, which were dubbed protons and neutrons.
Fast forward to the 1960s, when theoretical physicists proposed that protons and neutrons have building blocks of their own - particles called quarks. Shortly thereafter, researchers at SLAC National Accelerator Laboratory conducted experiments that demonstrated that quarks really did exist.
Decades of research have confirmed that a proton contains three permanent quarks. But that's not all. There is also the force that binds the quarks together to make a proton. This "strong force" binding the quarks is an integral part of the structure of the proton.
A Hotbed of Activity
"Protons and neutrons are very unusual objects," says Doug Beck, a professor at the University of Illinois at Urbana-Champaign and spokesman for the G-Zero collaboration. "Their matter constituents, the three quarks, account for so little of their overall mass. Most of their mass is actually energy: the energy of the fields generated by the strongly interacting quarks and the energy of motion of the quarks themselves."
The G-Zero researchers were particularly interested in these strong force energy fields. The energy from these fields appears in the form of particles inside the proton called "sea particles." These particles bubble up for the briefest of moments before melting back into energy fields.
The sea particles, also called virtual particles, can take the form of quark pairs. Pairs of up quarks and pairs of down quarks are the most likely pairs to appear briefly in this sea, because they are the lightest of quarks. The proton's permanent quarks consist of two up quarks and a down quark.
The next-heaviest quarks, strange quarks, are also thought to be present in the sea as virtual particles. G-Zero scientists proposed their experiment in 1993 to measure what effect these temporary residents of the proton have deep inside the proton.
"In the paper, we report what we have seen of the virtual quarks spawned by the energy fields," Beck asserts.
The proton is not a rigid object that can be laid out on a lab table to be weighed, observed and otherwise poked and prodded. If you were somehow able to isolate a proton and plop it onto a slide under a light microscope, you still wouldn't be able to see it. The visible light that bounces off ordinary small objects and into the microscope lens and your eye would simply pass by the proton.
To “see” the proton, scientists need a different kind of probe. At Jefferson Lab, electrons are used instead of visible light. An electron is small enough to interact with the proton. A properly prepared electron can bounce off of the proton, knock it about or blow it apart.
Jefferson Lab prepares electrons for experiments using the Continuous Electron Beam Accelerator Facility, or CEBAF, accelerator. In the accelerator, electrons that have been freed of their atoms are pumped up to high energy. They are then sent into one of CEBAF's experimental halls and into a target material. There, the electrons may interact with the protons inside the target.
With a little energy, an electron bounces off a proton. Add a bit more energy, and the electron knocks a proton out of the nucleus in which it resides. In G-Zero, scientists were interested in bouncing electrons off the nuclei of hydrogen atoms. Two types of hydrogen were used - ordinary hydrogen, the simplest nucleus consisting of a single proton, and deuterium, which has both a proton and a neutron.
In particular, the G-Zero scientists wanted to measure the protons that came flying out of the target or the electrons that succeeded in bouncing off the protons and neutrons. The particles were intercepted by detector systems that recorded their flight path. A flight path of a proton or electron can either directly or ultimately provide information about the protons that were knocked out.
Describing the Proton's Properties
Probing the proton with electrons isn't going to provide the same information that you'd get by looking at something with a grade-school microscope. The proton is too small to have a visible color, nor can it be described by how rough or smooth its surface appears or how squishy or solid it feels.
Instead, physicists have a different set of properties they can measure. For instance, the proton has an electric charge. The quarks inside the proton each have an electric charge, which gives the proton its overall positive charge.
By scattering electrons from the proton, physicists can tell how much of the total charge participates. That allows them to peer inside the proton to see variations of charge inside the proton - its so-called charge distribution. The distribution of electric charge gives physicists a handle on where the quarks are in the proton.
"And so a proton is not an object that's a point, it's an object of some size. And it's made up of some quarks that are charged. And so you can ask how are those quarks distributed inside the proton," Beck explains.
The quarks' electric charges also provide scientists a way to measure how quarks are moving around inside the proton.
"When charged objects move around, they can create a magnetic field. And so we measure what we call the magnetization," Beck says.
Specifically, just as the G-Zero scientists are measuring the distribution of charge in the proton, by looking at the proton from different angles, they're also measuring the distribution of magnetization.
"We have opened a new window on this structure, separately measuring the contributions of virtual strange quarks to the electric and magnetic properties of the nucleon," Beck says.
Strange Quarks at Work
The result from this years-long effort to measure strange matter in the proton has revealed that strange quarks don't charge up or magnetize the proton all that much, at least at low resolution.
"These results indicate strange quarks make small, less than 10 percent contributions, to the charge/magnetization distributions. Ten percent compared to the total charge and magnetization distribution," says David Armstrong, a G-Zero collaborator and professor at The College of William and Mary. "The extra quarks that we measured in the sea are not significantly changing the size or shape and the magnetic strength."
So although the researchers did measure the virtual strange quarks in the proton, it appears that these quarks either don't dally long enough inside the proton to have a significant effect on its properties before melting back into strong force energy or don't get far enough away from each other to be seen (i.e. they could have an effect separately, but as a close pair, any effect they would have cancels out).
The G-Zero scientists say that this is surprising, since other experiments give strong indications that there are a significant number of strange quarks in the proton, and early theoretical calculations suggested that the strange quarks could contribute significantly more than 10% of the proton's charge or magnetization at the level they measured.
"What G0 shows is that there seems to be a strong tendency to be in close pairs," Beck adds. "Observations are often a strong reminder that things are not quite so simple as we might have thought. Although the lightest virtual quarks [up and down quarks in the sea] appear to interact with their surroundings in the proton, there is little evidence in our results that the strange quarks do so. But we are, of course, still thinking about what is going on."
Two Forces, Two Tools
Of the four forces of nature, the strong force isn't the only force that can be probed using electrons from Jefferson Lab's CEBAF accelerator. Two other forces that can be quantified are the electromagnetic force and the weak force.
The electromagnetic force is the primary force that governs how most of the electrons interact with the protons they encounter. The weak force, on the other hand, can also dictate how electrons interact with protons, but it does so in a slightly different way.
G-Zero researchers exploited the difference to get their result. They used electrons that are polarized - or spinning - in a particular direction. The electromagnetic force is unfazed by the fact that the electrons are polarized one way versus another. So scientists will count roughly the same number of protons coming from interactions with electrons spinning in either direction. Not so for the weak force.
"The relative difference in those counting rates tells us how big the weak interaction piece is in this scattering of electrons from protons. We compare it to the strength of the electromagnetic interaction between electrons and protons, and that gives us the answer that we're looking for," Beck says.
Further, the scientists also counted the electron/proton interactions that they observed from two different angles: those that were knocked forward out of the target and those that splashed backward. This allowed the scientists to better separate the electric and magnetic contributions.
Another Bonus of the Weak Force
The G-Zero researchers also had one other goal, and it was the trickiest part of the experiment yet. They wanted to see if they could get an extra measurement of the weak force inside the proton.
As already mentioned, the scientists were sure they could measure the weak force mediating the interaction of the electron probe with the proton. But it was also possible that they could catch a glimpse of an extremely rare process: the weak force between the proton’s own quarks.
This measurement of the proton's so-called "anapole moment" requires the electron to scatter from a quark while that quark is interacting with a neighboring quark through the weak interaction.
"They're talking to each other, not through the strong interaction, but through the weak, which is incredibly hard to measure in other ways," Armstrong explains.
While the scientists did get a peek at this process, they didn't get enough information to make any bold statements about what they saw.
However, they have reported their result in the recent paper in hopes it can provide a starting point for future research.
G-Zero Wraps Up
The G-Zero experiment was conducted in numerous sessions in Hall C, beginning in 2002 and concluding in 2007. The recent result was a culmination of these running periods followed by intense data analysis by a team of faculty, JLab staff members and students. More results are expected from other aspects from the G-Zero data.
For now, the G-Zero experimenters are happy that they have some of their primary results published and are finally nearing completion of the massive effort it took to carry it out.
"The early proposal said we want to measure strange quark effects at the level of about ten percent of the total. We reached our goal, and we're saying any strange quark effects amount to less than that," Armstrong said.
G-Zero was financed by the U.S. Department of Energy and the National Science Foundation. In addition, significant contributions of hardware and scientific/engineering resources were also made by CNRS in France and NSERC in Canada.
Several other electron scattering experiments, including the SAMPLE experiment at MIT-Bates, the A4 experiment at the Mainz Laboratory in Germany, and HAPPEx at Jefferson Lab were also designed to study strange quarks in the proton.
| https://www.jlab.org/news/stories/g-zero-experiment-proves-strange-quark-effects-not-big | 18
15 | When you express a fraction in decimal form, it may be accurate to more places than you need or are able to use. Long decimals are unwieldy, so scientists often round them to make them easier to handle, even though this sacrifices accuracy. They also round large whole numbers that have too many digits to manage. When rounding to the greatest place value, you basically keep one number – the farthest non-zero one to the left – and you make all the numbers to the right of it zero.
TL;DR (Too Long; Didn't Read)
The greatest place value of a number is the first non-zero digit on the left in that number. You round up or down according to which numeral is to the right of the greatest place value.
When you round a digit in a number series, you don't have to look at all the digits that follow it. The only one that's important is the one immediately to the right. If it's 5 or larger, you add one to the digit you're rounding and you make all the digits to the right of it zero. This is called rounding up. For example, you would round 5,728 up to 6,000. If the digit to the right of the one you're rounding is smaller than 5, you leave the one you're rounding as it is. This is called rounding down. For example, 5,213 would round down to 5,000.
The Greatest Place Value
In any number, whether it's a decimal fraction or a whole integer, the non-zero digit farthest to the left is the one with the greatest place value. In a decimal fraction, this digit is the first non-zero one to the right of the decimal, and in a whole integer, it's the first digit in the number series. For example, in the fraction 0.00163925, the digit with the greatest place value is 1. In the whole integer 2,473,981, the digit with the greatest place value is 2. When you round the digit with the greatest place value in these two examples, the fraction becomes 0.002 and the integer becomes 2,000,000.
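Rounding to the greatest place value is the same as rounding to one significant figure, which is easy to express in code (a sketch of ours reproducing the examples above):

```python
import math

def round_to_greatest_place(x):
    """Round a positive number to its greatest (leftmost non-zero) place value,
    i.e. to one significant figure. Note: Python's round() sends exact halves
    to the nearest even digit, which does not affect these examples."""
    exponent = math.floor(math.log10(abs(x)))   # place of the leading digit
    scale = 10.0 ** exponent
    return round(x / scale) * scale

print(round_to_greatest_place(5728))        # 6000.0
print(round_to_greatest_place(5213))        # 5000.0
print(round_to_greatest_place(0.00163925))  # 0.002
print(round_to_greatest_place(2473981))     # 2000000.0
```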
Sciencing Video Vault
Another way to make large numbers more manageable is to express them in scientific notation. To do this, you write the number as a single digit followed by a decimal with all the rest of the digits following the decimal, and then you multiply by a power of 10 equal to the number of digits. For example, the number 2,473,981 when expressed in scientific notation becomes 2.473981 x 106. You can also express fractions in scientific notation. The decimal fraction 0.000047039 becomes 4.7039 x 10-5. Note that for fractions, you count the digits to the left of the decimal, including the digit with greatest place value, when calculating the power, and you make the power negative.
It's common to round numbers in scientific notation, and when you round to the greatest place value, you round the digit before the decimal and omit all the other digits. Thus, 2.473981 x 106 becomes simply 2 x 106. Similarly, 4.7039 x 10-5 becomes 5 x 10-5. | https://sciencing.com/round-greatest-place-value-6518483.html | 18 |
21 | Mathematical terms definition
K-5 definitions of math terms 2 area model a model for multiplication problems, in which the length and width of a rectangle represents the factors. Free on-line mathemeatics dictionary for students studying mathematics subjects and courses over 2000 terms defined. Print a worksheet that lists mathematical terms and their definitions it's a great resource for your students and will strengthen their math vocabulary. · thus, a glossary of mathematical terms for the layman: contents edit a abscissa: an oral abnormality one really ought to have looked at by a dentist. This is a glossary of common math terms used in arithmetic, geometry, algebra, and statistics abacus - an early counting tool used for basic arithmetic.
Common terms in mathematics ' glossary of mathematical mistakes ' and 'ctk glossary of mathematical terms the inclusion of zero is a matter of definition. Basic math definitions we have collected some basic definitions on this page for lots more definitions, explanations, etc, use search above. Define math math synonyms, math pronunciation, math translation, english dictionary definition of math n mathematics n informal us and canadian short for. Domain of definition dot product mathematical model matrix matrix addition mathwords: terms and formulas from algebra i to calculus. Algebra vocabulary list (definitions for middle school http://wwwmathtamuedu/~stecher/171/f02 term is obtained by.
Mathematical terms definition
Mathematics definition the study of numbers, equations, functions, and geometric shapes (see geometry) and their relationships some branches of mathematics are characterized by use of strict proofs based on axioms some of its major subdivisions are arithmetic, algebra, geometry, and calculus. Definition a field [math] mathematical terms and definitions: in abstract algebra, what is a field mathematical terms and definitions. Glossary of terms that have been discussed or mentioned on these pages letter a. The complete mathematical terms dictionary understanding math concepts is critical in our world today math is used daily by nearly everyone, from lay persons to.
Supporters pointed to math and literacy gains, while critics noted that those improvements disappeared in elementary school. Define mathematical: of, relating to, or according with mathematics rigorously exact : precise certain — mathematical in a sentence. A theorem in mathematics is a proven fact a theorem about right triangles must be true for every right triangle there can be no exceptions.
Parents, teachers and students searching for elementary mathematical terms found the following articles and tips relevant and useful. Define mathematics mathematics synonyms, mathematics pronunciation, mathematics translation, english dictionary definition of mathematics mathematics. License the materials (math glossary) on this web site are legally licensed to all schools and students in the following states only: hawaii.
A variable is a quantity that may change within the context of a mathematical problem or experiment. The story of mathematics - glossary of mathematical terms the story of mathematics this is not a comprehensive dictionary of mathematical terms. Interactive, animated maths dictionary for kids with over 600 common math terms explained in simple language math glossary with math definitions, examples, math. The origin of some words of mathematics and science math words, and some other words creation of the term cauchy distribution ceiling function. Here is a list of basic math terms that every student needs to know, explained clearly.
Mathematical terms definitionRated 4/5 based on 11 review | http://ddpaperupxo.uncserves.info/mathematical-terms-definition.html | 18 |
15 | Sciencing Video Vault Determining Congruence in Triangles Altogether, there are six congruence statements that can be used to determine if two triangles are, indeed, congruent. SQT, we must show that the three pairs of sides and the three pairs of angles are congruent.
The two-column geometric proof that shows our reasoning is below. The figure indicates that those sides of the triangles are congruent.
We can also look at the sides of the triangles to see if they correspond. In answer bwe see that? The resulting figure is an isosceles triangle with altitude, so the two triangles are congruent.
Two triangles that feature two equal sides and one equal angle between them, SAS, are also congruent. This statement can be abbreviated as SSS. To write a correct congruence statement, the implied order must be the correct one. We know that these points match up because congruent angles are shown at those points.
This proof was left to reading and was not presented in class. Congruence Criteria It turns out that knowing some of the six congruences of corresponding sides and angles are enough to guarantee congruence of the triangle and the truth of all six congruences.
We have two variables we need to solve for. Then using what was proved about kites, diagonal cuts the kite into two congruent triangles. We do this by showing that? If two angle in one triangle are congruent to two angles of a second triangle, and also if the included sides are congruent, then the triangles are congruent.
In general there are two sets of congruent triangles with the same SSA data.
It would be easiest to use the 16x to solve for x first because it is a single-variable expressionas opposed to using the side NR, would require us to try to solve for x and y at the same time. Listed next in the first triangle is point Q. One pair has already been given to us, so we must show that the other two pairs are congruent.To write a correct congruence statement, the implied order must be the correct one.
The good feature of this convention is that if you tell me that triangle XYZ is congruent to triangle CBA, I know from the notation convention that XY = CB, angle X = angle C, etc.
Altogether, there are six congruence statements that can be used to determine if two triangles are, indeed, congruent.
Abbreviations summarizing the statements are often used, with S standing for side length and A standing for angle. Congruence and Triangles Date_____ Period____ Complete each congruence statement by naming the corresponding angle or side. Write a statement that indicates that the triangles in each pair are congruent.
7) J I K T R S 8) C B D G H I Mark the angles and sides of each pair of triangles to indicate that they are congruent. 13). Write a congruence statement for the pair of triangles.
A. by SAS B. by SSS C. by SSS D. by SAS. Although congruence statements are often used to compare triangles, they are also used for lines, circles and other polygons.
For example, a congruence between two triangles, ABC and DEF, means that the three sides and the three angles of both triangles are congruent.
Jun 20, · The most common way to set up a geometry proof is with a two-column proof. Write the statement on one side and the reason on the other side. Every statement given must have a reason proving its truth.
The reasons include it was given from the problem or 50%(4).Download | http://jyxizazavuqyvoz.mint-body.com/write-a-congruence-statement-for-the-pair-of-triangles-zyban-4970549705.html | 18 |
24 | Similarly, if you take two real numbers and multiply them using complex multiplication, the result is the same as if you multiplied them using real multiplication. To solve this, we use 4 unit maths complex numbers quadratic formula, which gives us where is the discriminant.
Then if we want to solve where is some positive real, we getor. The two most fundamental operations of any set or field of numbers are addition and multiplication.
For now, all you need to know is that if you take two real numbers and add them using complex addition, the result is the same as if you added them using real addition. We also want their product to be in the number system closure under multiplication. If we take the closure of the real and imaginary numbers, we get all numbers of the formreal must be in the new number system.
What this means is that if we take any two numbers in the number system e. So what is this mathematical gap? So now we have a new set of numbers, the complex numberswhere each complex number can be written in the form where.
Now consider the equation. This has one solution in the real numbers: If we put all numbers of the form where is real in our new number system, we can now solve any quadratic equation with real coefficients. What has happened here is that squares of real numbers are always non-negative.
Algebraic manipulation shows that this is equivalent to solving. In order to turn our set of numbers into a proper number system, we want to introduce some operations so that we can do things with these numbers. However, knowledge of this section is not required by the current HSC syllabus and is not necessary for an understanding of how to use complex numbers to solve equations.
This section is of mathematical interest and students should be encouraged to read it. Consider the linear equation. In particular, it is helpful for them to understand why the complex numbers are not really any more mathematically abstract than the reals.
The easiest way to achieve this is to introduce some number whose square is.
We cannot square a real number and get a negative number. We will define these operations properly later. To close this gap, we extend the reals to a number system where squares can also be negative. Consider finally a general quadratic equation.
Since the real numbers are closed under addition and multiplicationwe want this to hold true for our new number system too.If represents the variable complex numbers and if then the locus of is (1) the straight line (2) the straight line (3) the straight line (4) the circle A.
(1). So now we have a new set of numbers, the complex numbers, where each complex number can be written in the form (where, are real and).
The set of complex numbers is closed under addition and multiplication. The argument of a complex number is the angle made with respect to the positive x-axis. Determine the direction of angle. The modulus of a complex number is its length.4 Unit Maths – Complex Numbers Modulus-Argument form.
√ If equation is not in correct two conditions are met. only one letter is used. Find great deals on eBay for mathematics complex numbers,+ followers on Twitter. Section Imaginary and Complex Numbers A Students analyze complex numbers and perform basic operations.
A Define complex numbers and perform basic operations with them. "Complex" numbers have two parts, a "real" part (being any "real" number that you're used to dealing with) and an "imaginary" part (being any number with an "i" in it).
The "standard" format for complex numbers is " a + bi "; that is, real-part first and i -part last.Download | http://sevetewusokysyfeb.ultimedescente.com/4-unit-maths-complex-numbers-1302813028.html | 18 |
11 | This topic was published by DevynCJohnson and viewed 1177 times since "". The last page revision was "".
- Topics - 444
Registers are small amounts of memory or cache that a computer can use to store information. Many types of registers exist. Registers are the fastest type of memory in a computer system. However, they are more expensive to create than RAM chips. Therefore, there are not as registers as RAM.
There are two types of processor registers, user-accessible registers and internal registers. User-accessible registers are the registers that the programs and code can access while internal registers can only be used by the CPU itself.
Many types of user-accessible registers are available. Data registers are commonly used registers that can hold any type of data such as float-points, integers, characters, small arrays, etc. Address registers store addresses that point to need data on the RAM (commonly called the Primary Memory). General Purpose Registers (GPRs) are special registers that can store data or addresses. Conditional registers store boolean values and Float-Point Registers (FPRs) store float-point values. Constant registers are read-only registers that store constants, such as Pi. Model-Specific registers store settings that are specific to the CPU. Special Purpose Registers (SPRs) are registers that have special abilities such as a counter register which is a counter. Vector registers store vector data for SIMD instructions (Single Instruction, Multiple Data). Memory Type Range Registers (MTRRs) store information concerning how memory ranges are cached.
Not as many internal registers exist as do user-accessible registers. Instruction registers store the data that is currently being executed.
Video cards can also contain registers such as buffer registers which store data.
The size of registers are measured in bits. For instance, an 8-bit register can store up to eight bits.
Registers are made out of flip-flops. Flip-flops can be made from cross-coupled NOR logic gates, cross-coupled NAND logic gates, bipolar junction transistors, or other circuit configurations. Flip-flops can also be called "latches".
- Hardware (Article Index) - http://dcjtech.info/topic/hardware-article-index/
- Processor (Article Index) - http://dcjtech.info/topic/cpu-topics/
- Introductory Microprocessor Concepts - http://dcjtech.info/topic/introductory-microprocessor-concepts/
- Hardware: Processor (CPU) - http://dcjtech.info/topic/hardware-processor-cpu/
- Over Five Types of Computers Explained and Compared - http://dcjtech.info/topic/over-five-types-of-computers-explained-and-compared/ | http://dcjtech.info/topic/hardware-registers/ | 18 |
27 | Tips for developing student thinking skills
Tips for developing student thinking skills
Developing critical thinking is a primary goal in many classrooms. However, it is difficult to actually achieve this goal as critical thinking is an elusive concept to understand. This post will provide practical ways to help students develop critical thinking skills.
Critical Thinking Defined
Critical thinking is the ability to develop support for one’s position on a subject as well as the ability to question the reasons and opinions of another person on a given subject. The ability to support one’s one position is exceedingly difficult as many people are convinced that their feelings can be substituted as evidence for their position.
It is also difficult to question the reasons and opinions of others as it requires the ability to identify weaknesses in the person’s positions while having to think on one’s feet. Again this is why many people stick to their emotions as it requires no thinking and emotions can be felt much faster than thoughts can be processed. Thinking critically involves assessing the strength of another’s thought process through pushing them with challenging questions or counter-arguments.
Developing Critical Thinking Skills
Debates-Debates provide an opportunity for people to both prepare arguments as well as defend in an extemporaneous manner. The experience of preparation as well as on the feet thinking help to develop critical thinking in many ways. In addition, the time limits of debates really force the participants to be highly engaged.
Reciprocal Teaching-Reciprocal teaching involves students taking turns to teach each other. As such, the must take a much closer look at the content when they are aware that they will have to teach it. In addition, Reciprocal teaching encourages discussion and the answering of questions which further supports critical thinking skills development.
Discussion-Discussion through the use of open-ended question is another classic way to develop critical thinking skills. The key is in the open-ended nature of the question. This means that there is no single answer to the question. Instead, the quality of answers are judged on the support the students provide and their reasoning skills.
Open-ended assignments-Often as teachers, we want to give specific detailed instructions on how to complete an assignment. This reduces confusion and gives each student a similar context in which learning takes place.
However, open-ended assignments provide a general end goal but allow the students to determine how they will complete it. This open-ended nature really forces the students to think about what they will do. In addition, this is similar to work in the real world where often the boss wants something done and doesn’t really care how the workers get it done. The lack of direction can cause less critical workers problems as they do not know what to do but those who are trained to deal with ambiguity will be prepared for this.
Critical thinking requires a context in which free thought is allowed but is supported. It is difficult to develop the skills of thinking with activities that stimulate this skill. The activities mentioned here are just some of the choices available to a teacher.
Reflective thinking is the ability to look at the past and develop understanding and insights about what happened and using this information to develop a deeper understanding or to choose a course of action. Many may believe that reflective thinking is a natural part of learning.
However, I have always been surprised at how little reflective thinking my students do. They seem to just do things without ever trying to understand how well they did outside of passing the assignment. Without reflective thinking, it is difficult to learn from past mistakes as no thought was made to avoid them.
This post will examine opportunities and aways of reflective thinking.
Opportunities for Reflective Thinking
Generally, reflective thinking can happen when
These are similar but different concepts. Learning can happen without doing anything such as listening to a lecture or discussion. You hear a lot of great stuff but you never implement it.
Doing something means the application of knowledge in a particular setting. An example would be teaching or working at a company. With the application of knowledge comes consequences the indicate how well you did. For example, teaching kids and then seeing either look of understanding or confusion on their face
Strategies for Reflective THinking
For situations in which the student learns something without a lot of action a common model for encouraging reflective thinking is the Connect, Extend, Challenge model. The model is explained below
Connecting is what makes learning relevant for many students and is also derived from constructivism. Extending is a way for a student to see the benefits of the new knowledge. It goes beyond learning because you were told to learn. Lastly, challenging helps the student to determine what they do not know which is another metacognitive strategy.
When a student does something the reflection process is slightly different below is an extremely common model.
In this model, the student identifies what they did right, which requires reflective thinking. The student also identifies the things they did wrong during the experience. Lastly, the student must problem solve and develop strategies to overcome the mistakes they made. Often the solutions in this final part are implemented during the next action sequence to see how well they worked out.
Thinking about the past is one of the strongest ways to prepare for the future. Therefore, teachers must provide their students with opportunities to think reflectively. The strategies included here provide a framework for guiding students in this critical process.
Everybody thinks, at least we hope everybody thinks. However, few are aware of the various skills that can be used in thinking. In this post, we will look at several different skills that can be used when trying to think and understand something. There are at least four different skills that can be used in thinking and they are…
Clarification lays the groundwork for determining the boundaries in which thinking needs to take place. In many ways, clarification deals with the question of what are you trying to think about.
Basis involves categorizing the information that has been gathered to think about. At this stage, a person decides if the information they have is a fact, opinion, or just incorrect information.
Another activity at this level is assessing the credibility of the sources of information. For example, facts from experts are considered more credible than the opinions of just anybody.
Whatever form of reasoning is used the overall goal is to develop conclusions based either on principles or examples. As such, the prior forms of thinking are necessary to move to developing inferences. In other words, there must be clarification and basis before inferences.
Evaluation involves developing a criteria upon which to judge the adequacy of whatever decisions have been made. This means assessing the quality of the thought process that has already taken place.
Assessing judgment is near the top of Bloom’s Taxonomy and involves not only having an opinion but basing the opinion on well-developed criteria. This is in no way easy for anybody.
Tips for Developing Thinking Skills
When dealing with students, here are a few suggestions for developing thinking skills.
Thinking involves questioning. The development of answers to these questions is the fruit of thinking. It is important to determine what one is trying to do in order to allow purposeful thinking to take place.
Reasoning is the process of developing conclusion through the examination of evidence. This post will explain several forms of reasoning as listed below.
In the example above, there are several instances of the effect of smoking on people. From these examples, the conclusion reach was smoking is deadly.
The danger of this form of reasoning is jumping to conclusions based on a small sample size. Just because three people died or are dying of smoking does not mean that smoking is deadly in general. This is not enough evidence to support this conclusion
Deductive reasoning involves the development of a general principle testing a specific example of the principle and moving to a conclusion. Below is an example.
This method of reasoning is highly effective in persuasion. However, the principle must be sound in order to impact the audience.
Causal reasoning attempts to establish a relationship between a cause and effect. An example would be as follows.
You slip and break your leg. After breaking your leg you notice that there was a banana on the ground. You therefore reason that you slipped on the banana and broke your leg
The danger of causal reasoning is it is sometimes difficult to prove cause and effect conclusively. In addition, complex events cannot often be explained by a single cause.
Analogical reasoning involves the comparison of two similar cases making the argument that what is true for the first is true for the second. Below is an example.
The example above assumes that Thomas can play the trombone because he can play other brass instruments well. It is critical that the comparison made is truly parallel in order to persuade an audience.
Abductive reasoning involves looking at incomplete information and trying to make sense of this through reasonable guesses. Perhaps the most common experiences people have with abductive reasoning is going to the hospital or mechanic. In both situations, the doctor and mechanic listen to the symptoms and try to make a diagnosis as to exactly what the problem is.
Of course, the doctor and mechanic can be completely wrong which leads to other problems. However, unlike the other forms of reasoning, abductive reasoning is useful for filling in gaps in information that is unavailable.
Reasoning comes in many forms. The examples provided here provide people with different ways these forms of reasoning can be used. | https://educationalresearchtechniques.com/tag/critical-thinking/ | 18 |
21 | In particular, we bring the augmented matrix to Row-Echelon Form: Notes In practice, you have some flexibility in th eapplication of the algorithm. In the next lesson, you will learn another way of solving a system of equations.
Subtract multiples of that row from the rows below it to make each entry below the leading 1 zero. Solution Did you notice that both equations had the same x and y intercept? Infinite Solutions Graph the following system of equations and identify the solution.
In each row, the first non-zero entry form the left is a 1, called the leading 1. For instance, in Step 2 you often have a choice of rows to move to the top.
Rewrite the equation in slope intercept form. It can be proven that every matrix can be brought to row-echelon form and even to reduced row-echelon form by the use of elementary row operations.
Elementary Row Operations Multiply one row by a nonzero number. In this case, the system of equations has an infinite number of solutions!
Do you see how we are manipulating the system of linear equations by applying each of these operations? In the following example, suppose that each of the matrices was the result of carrying an augmented matrix to reduced row-echelon form by means of a sequence of row operations.
The second is that sometimes a system of equations is actually the same line, graphed on top of each other. Our strategy in solving linear systems, therefore, is to take an augmented matrix for a system and carry it by means of elementary row operations to an equivalent augmented matrix from which the solutions of the system are easily obtained.
Every point on the line is a solution to both equations. The first is that there is more than one way to graph a system of equations that is written in standard form.
Add a multiple of one row to a different row. That is, the resulting system has the same solution set as the original system. At that point, the solutions of the system are easily obtained.
This is because these two equations represent the same line. Our last example demonstrates two different things. In this case, you will see an infinite number of solutions.
Row-Echelon Form A matrix is said to be in row-echelon form if All rows consisting entirely of zeros are at the bottom. If, in addition, each leading 1 is the only non-zero entry in its column, then the matrix is in reduced row-echelon form.
A more computationally-intensive algorithm that takes a matrix to reduced row-echelon form is given by the Gauss-Jordon Reduction.That is, the resulting system has the same solution set as the original system. Our strategy in solving linear systems, therefore, is to take an augmented matrix for a system and carry it by means of elementary row operations to an equivalent augmented matrix from which the solutions of the system are easily obtained.
Graphing Systems of Equations This is the first of four lessons in the System of Equations unit. We are going to graph a system of equations in order to find the solution.
Writing systems of equations that represents the charges by: Anonymous Jenny charges $4 per day to pet sit.
Tyler charges $2 up front, and then $3 per day to pet sit. Write a system of equations that represents the charges. _____ Your answer by Karin from killarney10mile.com: You want to write two equations.
A System of Linear Equations is when we have two or more linear equations working together. Write one of the equations so it is in the style "variable = " Replace (i.e.
substitute) Note: because there is a solution the equations are "consistent". WRITING Describe three ways to solve a system of linear equations. In Exercises 4 – 6, (a) write a system of linear equations to represent the situation.
Then, answer the question using (b) a table, (c) a graph, and (d) algebra. One equation of my system will be x+y=1 Now in order to satisfy (ii) My second equations need to not be a multiple of the first. If I used 2x+2y=2, it would share, not only (4.Download | http://jamikyvuwihu.killarney10mile.com/write-a-system-of-equations-with-the-solution-91379py2040.html | 18 |
58 | The student does not understand the basic form of a linear function. What is always true of the y-intercept? Next, assist the student in using the graph to determine the slope.
Attempts to write the function recursively. What does the equation of an exponential equation look like? Inthe world population was 1.
In his example, he chose the pair of points 2, 3 and 4, Calculates the slope or the y-intercept but is unable to write the function. Examples of Student Work at this Level The student attempts to write a linear function or an exponential expression.
What is function notation? Examples of Student Work at this Level The student understands the basic form of an exponential function but: Provide additional opportunities for the student to write exponential functions from verbal descriptions, tables of values, and graphs.
Then ask the student to calculate f x for several values of x given in the table to demonstrate that the function is correctly written. Moving Forward The student is unable to correctly calculate one or both parameters.
Can you explain why you wrote your function this way? Almost There The student makes a calculation or other minor error. How did you find the y-intercept? For example, solving the equation for the points 0, 2 and 2, 4 yields: Instructional Implications Assist the student in identifying and correcting any calculation errors.
Why Exponential Functions Are Important Many important systems follow exponential patterns of growth and decay. Makes a calculation error when calculating the slope.
What does it look like? Got It The student provides complete and correct responses to all components of the task. How to Find an Exponential Equation With Two Points By Chris Deziel; Updated March 13, If you know two points that fall on a particular exponential curve, you can define the curve by solving the general exponential function using those points.
The student does not understand the basic form of an exponential function. Instructional Implications Ask the student to write linear functions given a verbal description or a graph of the function. On the other hand, the point -2, -3 is two units to the left of the y-axis. Although it takes more than a slide rule to do it, scientists can use this equation to project future population numbers to help politicians in the present to create appropriate policies.First of all, let me welcome you to the universe of ordered pairs and inequalies online calculator.
You need not worry; this subject seems to be difficult because of the many new terms that it has.
Once you learn the basics, it becomes fun. Algebrator is the most used tool amongst beginners and experts. A function is an equation which shows the relationship between the input x and the output y and where there is exactly one output for each input.
Another word for input is domain and for output the range. 4 ab 3 Finding an Exponential Equation with Two Points and an Asymptote Find an exponential function that passes through (-3,) and (2,-3) and has a horizontal asymptote of y = 2.
Free exponential equation calculator - solve exponential equations step-by-step. Exponential Growth Functions. STUDY. PLAY. - The input to an exponential function is the exponent.
Several ordered pairs from a continuous exponential function are shown in the table. What are the domain and range of the function? The domain is the set of real numbers, and the range is y > 0.
Apr 08, · Determining if a set of ordered pairs is a one-to-one function - Duration: Finding the Equation of an Exponential Function - .Download | http://maqiteqagosegoku.mint-body.com/write-an-exponential-function-from-xy-pairs-1585915859.html | 18 |
20 | Let us solve the classic “fake coin” puzzle using decision trees. There are the two different variants of the puzzle given below. I am providing description of both the puzzles below, try to solve on your own, assume N = 8.
Easy: Given a two pan fair balance and N identically looking coins, out of which only one coin is lighter (or heavier). To figure out the odd coin, how many minimum number of weighing are required in the worst case?
Difficult: Given a two pan fair balance and N identically looking coins out of which only one coin may be defective. How can we trace which coin, if any, is odd one and also determine whether it is lighter or heavier in minimum number of trials in the worst case?
Let us start with relatively simple examples. After reading every problem try to solve on your own.
Problem 1: (Easy)
Given 5 coins out of which one coin is lighter. In the worst case, how many minimum number of weighing are required to figure out the odd coin?
Name the coins as 1, 2, 3, 4 and 5. We know that one coin is lighter. Considering best out come of balance, we can group the coins in two different ways, [(1, 2), (3, 4) and (5)], or [(12), (34) and (5)]. We can easily rule out groups like [(123) and (45)], as we will get obvious answer. Any other combination will fall into one of these two groups, like [(2)(45) and (13)], etc.
Consider the first group, pairs (1, 2) and (3, 4). We can check (1, 2), if they are equal we go ahead with (3, 4). We need two weighing in worst case. The same analogy can be applied when the coin in heavier.
With the second group, weigh (12) and (34). If they balance (5) is defective one, otherwise pick the lighter pair, and we need one more weighing to find odd one.
Both the combinations need two weighing in case of 5 coins with prior information of one coin is lighter.
Analysis: In general, if we know that the coin is heavy or light, we can trace the coin in log3(N) trials (rounded to next integer). If we represent the outcome of balance as ternary tree, every leaf represent an outcome. Since any coin among N coins can be defective, we need to get a 3-ary tree having minimum of N leaves. A 3-ary tree at k-th level will have 3k leaves and hence we need 3k >= N.
In other-words, in k trials we can examine upto 3k coins, if we know whether the defective coin is heavier or lighter. Given that a coin is heavier, verify that 3 trials are sufficient to find the odd coin among 12 coins, because 32 < 12 < 33.
Problem 2: (Difficult)
We are given 4 coins, out of which only one coin may be defective. We don’t know, whether all coins are genuine or any defective one is present. How many number of weighing are required in worst case to figure out the odd coin, if present? We also need to tell whether it is heavier or lighter.
From the above analysis we may think that k = 2 trials are sufficient, since a two level 3-ary tree yields 9 leaves which is greater than N = 4 (read the problem once again). Note that it is impossible to solve above 4 coins problem in two weighing. The decision tree confirms the fact (try to draw).
We can group the coins in two different ways, [(12, 34)] or [(1, 2) and (3, 4)]. Let us consider the combination (12, 34), the corresponding decision tree is given below. Blue leaves are valid outcomes, and red leaves are impossible cases. We arrived at impossible cases due to the assumptions made earlier on the path.
The outcome can be (12) < (34) i.e. we go on to left subtree or (12) > (34) i.e. we go on to right subtree.
The left subtree is possible in two ways,
- A) Either 1 or 2 can be lighter OR
- B) Either 3 or 4 can be heavier.
Further on the left subtree, as second trial, we weigh (1, 2) or (3, 4). Let us consider (3, 4) as the analogy for (1, 2) is similar. The outcome of second trail can be three ways
- A) (3) < (4) yielding 4 as defective heavier coin, OR
- B) (3) > (4) yielding 3 as defective heavier coin OR
- C) (3) = (4), yielding ambiguity. Here we need one more weighing to check a genuine coin against 1 or 2. In the figure I took (3, 2) where 3 is confirmed as genuine. We can get (3) > (2) in which 2 is lighter, or (3) = (2) in which 1 is lighter. Note that it impossible to get (3) < (2), it contradicts our assumption leaned to left side.
Similarly we can analyze the right subtree. We need two more weighings on right subtree as well.
Overall we need 3 weighings to trace the odd coin. Note that we are unable to utilize two outcomes of 3-ary trees. Also, the tree is not full tree, middle branch terminated after first weighing. Infact, we can get 27 leaves of 3 level full 3-ary tree, but only we got 11 leaves including impossible cases.
Analysis: Given N coins, all may be genuine or only one coin is defective. We need a decision tree with atleast (2N + 1) leaves correspond to the outputs. Because there can be N leaves to be lighter, or N leaves to be heavier or one genuine case, on total (2N + 1) leaves.
As explained earlier ternary tree at level k, can have utmost 3k leaves and we need a tree with leaves of 3k > (2N + 1).
In other words, we need atleast k > log3(2N + 1) weighing to find the defective one.
Observe the above figure that not all the branches are generating leaves, i.e. we are missing valid outputs under some branches that leading to more number of trials. When possible, we should group the coins in such a way that every branch is going to yield valid output (in simple terms generate full 3-ary tree). Problem 4 describes this approach of 12 coins.
Problem 3: (Special case of two pan balance)
We are given 5 coins, a group of 4 coins out of which one coin is defective (we don’t know whether it is heavier or lighter), and one coin is genuine. How many weighing are required in worst case to figure out the odd coin whether it is heavier or lighter?
Label the coins as 1, 2, 3, 4 and G (genuine). We now have some information on coin purity. We need to make use that in the groupings.
We can best group them as [(G1, 23) and (4)]. Any other group can’t generate full 3-ary tree, try yourself. The following diagram explains the procedure.
The middle case (G1) = (23) is self explanatory, i.e. 1, 2, 3 are genuine and 4th coin can be figured out lighter or heavier in one more trial.
The left side of tree corresponds to the case (G1) < (23). This is possible in two ways, either 1 should be lighter or either of (2, 3) should be heavier. The former instance is obvious when next weighing (2, 3) is balanced, yielding 1 as lighter. The later instance could be (2) < (3) yielding 3 as heavier or (2) > (3) yielding 2 as heavier. The leaf nodes on left branch are named to reflect these outcomes.
The right side of tree corresponds to the case (G1) > (23). This is possible in two ways, either 1 is heavier or either of (2, 3) should be lighter. The former instance is obvious when the next weighing (2, 3) is balanced, yielding 1 as heavier. The later case could be (2) < (3) yielding 2 as lighter coin, or (2) > (3) yielding 3 as lighter.
In the above problem, under any possibility we need only two weighing. We are able to use all outcomes of two level full 3-ary tree. We started with (N + 1) = 5 coins where N = 4, we end up with (2N + 1) = 9 leaves. Infact we should have 11 outcomes since we stared with 5 coins, where are other 2 outcomes? These two outcomes can be declared at the root of tree itself (prior to first weighing), can you figure these two out comes?
If we observe the figure, after the first weighing the problem reduced to “we know three coins, either one can be lighter (heavier) or one among other two can be heavier (lighter)”. This can be solved in one weighing (read Problem 1).
Analysis: Given (N + 1) coins, one is genuine and the rest N can be genuine or only one coin is defective. The required decision tree should result in minimum of (2N + 1) leaves. Since the total possible outcomes are (2(N + 1) + 1), number of weighing (trials) are given by the height of ternary tree, k >= log3[2(N + 1) + 1]. Note the equality sign.
Rearranging k and N, we can weigh maximum of N <= (3k – 3)/2 coins in k trials.
Problem 4: (The classic 12 coin puzzle)
You are given two pan fair balance. You have 12 identically looking coins out of which one coin may be lighter or heavier. How can you find odd coin, if any, in minimum trials, also determine whether defective coin is lighter or heavier, in the worst case?
How do you want to group them? Bi-set or tri-set? Clearly we can discard the option of dividing into two equal groups. It can’t lead to best tree. From the above two examples, we can ensure that the decision tree can be used in optimal way if we can reveal atleaset one genuine coin. Remember to group coins such that the first weighing reveals atleast one genuine coin.
Let us name the coins as 1, 2, … 8, A, B, C and D. We can combine the coins into 3 groups, namely (1234), (5678) and (ABCD). Weigh (1234) and (5678). You are encouraged to draw decision tree while reading the procedure. The outcome can be three ways,
- (1234) = (5678), both groups are equal. Defective coin may be in (ABCD) group.
- (1234) < (5678), i.e. first group is less in weight than second group.
- (1234) > (5678), i.e. first group is more in weight than second group.
The output (1) can be solved in two more weighing as special case of two pan balance given in Problem 3. We know that groups (1234) and (5678) are genuine and defective coin may be in (ABCD). Pick one genuine coin from any of weighed groups, and proceed with (ABCD) as explained in Problem 3.
Outcomes (2) and (3) are special. In both the cases, we know that (ABCD) is genuine. And also, we know a set of coins being lighter and a set of coins being heavier. We need to shuffle the weighed two groups in such a way that we end up with smaller height decision tree.
Consider the second outcome where (1234) < (5678). It is possible when any coin among (1, 2, 3, 4) is lighter or any coin among (5, 6, 7, 8 ) is heavier. We revealed lighter or heavier possibility after first weighing. If we proceed as in Problem 1, we will not generate best decision tree. Let us shuffle coins as (1235) and (4BCD) as new groups (there are different shuffles possible, they also lead to minimum weighing). If we weigh these two groups again the outcome can be three ways, i) (1235) < (4BCD) yielding one among 1, 2, 3 is lighter which is similar to Problem 1 explained above, we need one more weighing, ii) (1235) = (4BCD) yielding one among 6, 7, 8 is heavier which is similar to Problem 1 explained above, we need one more weighing iii) (1235) > (4BCD) yielding either 5 as heavier coin or 4 as lighter coin, at the expense of one more weighing.
Similar way we can also solve the right subtree (third outcome where (1234) > (5678)) in two more weighing.
We are able to solve the 12 coin puzzle in 3 weighing in the worst case.
Few Interesting Puzzles:
- Solve Problem 4 with N = 8 and N = 13, How many minimum trials are required in each case?
- Given a function int weigh(A, B) where A and B are arrays (need not be equal size). The function returns -1, 0 or 1. It returns 0 if sum of all elements in A and B are equal, -1 if A < B and 1 if A > B. Given an array of 12 elements, all elements are equal except one. The odd element can be as that of others, smaller or greater than others. Write a program to find the odd element (if any) using weigh() minimum number of times.
- You might have seen 3-pan balance in science labs during school days. Given a 3-pan balance (4 outcomes) and N coins, how many minimum trials are needed to figure out odd coin?
Similar problem was provided in one of the exercises of the book “Introduction to Algorithms by Levitin”. Specifically read section 5.5 and section 11.2 including exercises.
– – – by Venki. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
- Merge Sort Tree for Range Order Statistics
- Cartesian Tree
- Sparse Set
- Binomial Heap
- Data Structure for Dictionary and Spell Checker?
- Splay Tree | Set 1 (Search)
- Skip List | Set 1 (Introduction)
- Segment Tree | Set 1 (Sum of given range)
- Ternary Search Tree
- Given a sequence of words, print all anagrams together | Set 2
- LRU Cache Implementation
- AVL Tree | Set 1 (Insertion)
- Trie | (Insert and Search)
- Spaghetti Stack
- Tournament Tree (Winner Tree) and Binary Heap | https://www.geeksforgeeks.org/decision-trees-fake-coin-puzzle/ | 18 |
23 | Seismic magnitude scales
Seismic magnitude scales are used to describe the overall strength or "size" of an earthquake. These are distinguished from seismic intensity scales that categorize the intensity or severity of ground shaking (quaking) caused by an earthquake at a given location. Magnitudes are usually determined from measurements of an earthquake's seismic waves as recorded on a seismogram. Magnitude scales vary on the type and component of the seismic waves measured and the calculations used. Different magnitude scales are necessary because of differences in earthquakes, and in the purposes for which magnitudes are used.
Earthquake magnitude and ground-shaking intensity
The Earth's crust is stressed by tectonic forces. When this stress becomes great enough to rupture the crust, or to overcome the friction that prevents one block of crust from slipping past another, energy is released, some of it in the form of various kinds of seismic waves that cause ground-shaking, or quaking.
Magnitude is an estimate of the relative "size" or strength of an earthquake, and thus its potential for causing ground-shaking. It is "approximately related to the released seismic energy."
Intensity refers to the strength or force of shaking at a given location, and can be related to the peak ground velocity. With an isoseismal map of the observed intensities (see illustration) an earthquake's magnitude can be estimated from both the maximum intensity observed (usually but not always near the epicenter), and from the extent of the area where the earthquake was felt.
The intensity of local ground-shaking depends on several factors besides the magnitude of the earthquake, one of the most important being soil conditions. For instance, thick layers of soft soil (such as fill) can amplify seismic waves, often at a considerable distance from the source, while sedimentary basins will often resonate, increasing the duration of shaking. This is why, in the 1989 Loma Prieta earthquake, the Marina district of San Francisco was one of the most damaged areas, though it was nearly 100 km from the epicenter. Geological structures were also significant, such as where seismic waves passing under the south end of San Francisco Bay reflected off the base of the Earth's crust towards San Francisco and Oakland. A similar effect channeled seismic waves between the other major faults in the area.
An earthquake radiates energy in the form of different kinds of seismic waves, whose characteristics reflect the nature of both the rupture and the earth's crust the waves travel through. Determination of an earthquake's magnitude generally involves identifying specific kinds of these waves on a seismogram, and then measuring one or more characteristics of a wave, such as its timing, orientation, amplitude, frequency, or duration. Additional adjustments are made for distance, kind of crust, and the characteristics of the seismograph that recorded the seismogram.
The various magnitude scales represent different ways of deriving magnitude from such information as is available. All magnitude scales retain the logarithmic scale as devised by Charles Richter, and are adjusted so the mid-range approximately correlates with the original "Richter" scale.
Since 2005 the International Association of Seismology and Physics of the Earth's Interior (IASPEI) has standardized the measurement procedures and equations for the principal magnitude scales, ML, Ms, mb, mB and mbLg.
"Richter" magnitude scale
The first scale for measuring earthquake magnitudes, developed in 1935 by Charles F. Richter and popularly known as the "Richter" scale, is actually the Local magnitude scale, labeled ML. Richter established two features now common to all magnitude scales. First, the scale is logarithmic, so that each unit represents a ten-fold increase in the amplitude of the seismic waves. As the radiated energy of a wave scales as the 1.5 power of its amplitude, each unit of magnitude represents a nearly 32-fold (10^1.5 ≈ 31.6) increase in the energy (strength) of an earthquake.
Second, Richter arbitrarily defined the zero point of the scale to be where an earthquake at a distance of 100 km makes a maximum horizontal displacement of 0.001 millimeters (1 µm, or 0.00004 in.) on a seismogram recorded with a Wood-Anderson torsion seismograph. Subsequent magnitude scales are calibrated to be approximately in accord with the original "Richter" (local) scale around magnitude 6.
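These two features fix the arithmetic of every Richter-style scale. For two events of magnitudes M1 and M2, the maximum trace amplitudes A and the radiated energies E compare as

\frac{A_2}{A_1} = 10^{\,M_2 - M_1}, \qquad \frac{E_2}{E_1} \approx 10^{\,1.5\,(M_2 - M_1)}

so, for example, a magnitude 7 earthquake writes a trace ten times larger than a magnitude 6 event, but releases roughly 32 times the energy.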
All "Local" (ML) magnitudes are based on the maximum amplitude of the ground shaking, without distinguishing the different seismic waves. They underestimate the strength:
- of distant earthquakes (over ~600 km) because of attenuation of the S-waves,
- of deep earthquakes because the surface waves are smaller, and
- of strong earthquakes (over M ~7) because they do not take into account the duration of shaking.
The original "Richter" scale, developed in the geological context of Southern California and Nevada, was later found to be inaccurate for earthquakes in the central and eastern parts of the continent (everywhere east of the Rocky Mountains) because of differences in the continental crust. All these problems prompted the development of other scales.
Other "Local" magnitude scales
Richter's original "local" scale has been adapted for other localities. These may be labelled "ML", or with a lowercase "l", Ml. (Not to be confused with the Russian surface-wave MLH scale.) Whether the values are comparable depends on whether the local conditions have been adequately determined and the formula suitably adjusted.
Japan Meteorological Agency magnitude scale
In Japan, for shallow (depth < 60 km) earthquakes within 600 km, the Japan Meteorological Agency calculates a magnitude labeled MJMA or MJ. (These should not be confused with moment magnitudes JMA calculates, which are labeled Mw(JMA) or M(JMA), nor with the Shindo intensity scale.) JMA magnitudes are based (as typical with local scales) on the maximum amplitude of the ground motion; they agree "rather well" with the seismic moment magnitude Mw in the range of 4.5 to 7.5, but underestimate larger magnitudes.
Body-wave magnitude scales
The original "body-wave magnitude" – mB or mB (uppercase "B") – was developed by Gutenberg (1945b, 1945c) and Gutenberg & Richter (1956) to overcome the distance and magnitude limitations of the ML scale inherent in the use of surface waves. mB is based on the P- and S-waves, measured over a longer period, and does not saturate until around M 8. However, it is not sensitive to events smaller than about M 5.5. Use of mB as originally defined has been largely abandoned, now replaced by the standardized mBBB scale.
The mb scale (lowercase "m" and "b") is similar to mB, but uses only P-waves measured in the first few seconds on a specific model of short-period seismograph. It was introduced in the 1960s with the establishment of the World Wide Standardized Seismograph Network (WWSSN) for monitoring compliance with the 1963 Partial Nuclear Test Ban Treaty; the short period improves detection of smaller events, and better discriminates between tectonic earthquakes and underground nuclear explosions.
Measurement of mb has changed several times. As originally defined by Gutenberg (1945c), mb was based on the maximum amplitude of waves in the first 10 seconds or more. However, the length of the period influences the magnitude obtained. Early USGS/NEIC practice was to measure mb on the first second (just the first few P-waves), but since 1978 they measure the first twenty seconds. The modern practice is to measure the short-period mb scale at periods of less than three seconds, while the broadband mB(BB) scale is measured at periods of up to 30 seconds.
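For reference, the IASPEI-standardized mb and mB(BB) values are both computed from the classical body-wave formula (the symbol definitions here follow the usual seismological convention and are not spelled out in this article):

m_b = \log_{10}\!\left(\frac{A}{T}\right) + Q(\Delta, h)

where A is the P-wave ground displacement (in micrometers), T its period (in seconds), and Q(Δ, h) is the empirical Gutenberg–Richter calibration function of epicentral distance Δ and focal depth h.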
mbLg scale
The regional mbLg scale – also denoted mb_Lg, mbLg, MLg (USGS), Mn, and mN – was developed by Nuttli (1973) for a problem the original ML scale could not handle: all of North America east of the Rocky Mountains. The ML scale was developed in southern California, which lies on blocks of oceanic crust, typically basalt or sedimentary rock, which have been accreted to the continent. East of the Rockies the continent is a craton, a thick and largely stable mass of continental crust that is largely granite, a harder rock with different seismic characteristics. In this area the ML scale gives anomalous results for earthquakes which by other measures seemed equivalent to quakes in California.
Nuttli resolved this by measuring the amplitude of short-period (~1 sec.) Lg waves, a complex form of the Love wave which, although a surface wave, he found provided a result more closely related the mb scale than the Ms scale. Lg waves attenuate quickly along any oceanic path, but propagate well through the granitic continental crust, and MbLg is often used in areas of stable continental crust; it is especially useful for detecting underground nuclear explosions.
Surface-wave magnitude scales
Surface waves propagate along the Earth's surface, and are principally either Rayleigh waves or Love waves. For shallow earthquakes the surface waves carry most of the energy of the earthquake, and are the most destructive. Deeper earthquakes, having less interaction with the surface, produce weaker surface waves.
The surface-wave magnitude scale, variously denoted as Ms or MS, is based on a procedure developed by Beno Gutenberg in 1942 for measuring shallow earthquakes stronger or more distant than Richter's original scale could handle. Notably, it measured the amplitude of surface waves (which generally produce the largest amplitudes) for a period of "about 20 seconds". The Ms scale approximately agrees with ML at ~6, then diverges by as much as half a magnitude. A revision by Nuttli (1983), sometimes labeled MSn, measures only waves of the first second.
A modification – the "Moscow-Prague formula" – was proposed in 1962, and recommended by the IASPEI in 1967; this is the basis of the standardized Ms20 scale (Ms_20, Ms(20)). A "broad-band" variant (Ms_BB, Ms(BB)) measures the largest velocity amplitude in the Rayleigh-wave train for periods up to 60 seconds. The MS7 scale used in China is a variant of Ms calibrated for use with the Chinese-made "type 763" long-period seismograph.
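In its commonly quoted form, the Moscow–Prague formula underlying the standardized Ms20 scale is

M_s = \log_{10}\!\left(\frac{A}{T}\right)_{\max} + 1.66\,\log_{10}\Delta + 3.3

where A is the maximum ground displacement in micrometers, T the period (about 20 seconds), and Δ the epicentral distance in degrees, valid roughly for shallow events at distances of 20° to 160°.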
The MLH scale used in some parts of Russia is actually a surface wave magnitude.
Moment magnitude and energy magnitude scales
Other magnitude scales are based on aspects of seismic waves that only indirectly and incompletely reflect the force of an earthquake, involve other factors, and are generally limited in some respect of magnitude, focal depth, or distance. The moment magnitude scale – Mw – developed by Kanamori (1977) and Hanks & Kanamori (1979), is based on an earthquake's seismic moment, M0, a measure of how much "work" an earthquake does in sliding one patch of rock past other rock. Seismic moment is measured in newton-meters (N·m) in the SI system of measurement, or dyne-centimeters (dyn·cm) in the older CGS system. In the simplest case the moment can be calculated knowing only the amount of slip, the area of the surface ruptured or slipped, and a factor for the resistance or friction encountered. These factors can be estimated for an existing fault to determine the magnitude of past earthquakes, or what might be anticipated for the future.
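Under that simple model, and using the Hanks & Kanamori (1979) conversion, the moment and magnitude are

M_0 = \mu A D, \qquad M_w = \frac{2}{3}\left(\log_{10} M_0 - 9.1\right)

where μ is the shear rigidity of the rock (on the order of 3 × 10^10 N/m² in the crust), A the rupture area, D the average slip, and M0 is expressed in newton-meters.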
An earthquake's seismic moment can be estimated in various ways, which are the bases of the Mwb, Mwr, Mwc, Mww, Mwp, Mi, and Mwpd scales, all subtypes of the generic Mw scale. See Moment magnitude scale § Subtypes for details.
Seismic moment is considered the most objective measure of an earthquake's "size" with regard to total energy. However, it is based on a simple model of rupture, and on certain simplifying assumptions; it incorrectly assumes that the proportion of energy radiated as seismic waves is the same for all earthquakes.
Much of an earthquake's total energy as measured by Mw is dissipated as friction (resulting in heating of the crust). An earthquake's potential to cause strong ground shaking depends on the comparatively small fraction of energy radiated as seismic waves, and is better measured on the energy magnitude scale, Me. The proportion of total energy radiated as seismic waves varies greatly depending on focal mechanism and tectonic environment; Me and Mw for very similar earthquakes can differ by as much as 1.4 units.
Despite the usefulness of the Me scale, it is not generally used due to difficulties in estimating the radiated seismic energy.
Energy class (K-class) scale
K (from the Russian word класс, "class", in the sense of a category) is a measure of earthquake magnitude in the energy class or K-class system, developed in 1955 by Soviet seismologists in the remote Garm (Tadjikistan) region of Central Asia; in revised form it is still used for local and regional quakes in many states formerly aligned with the Soviet Union (including Cuba). It is based on seismic energy (K = log ES, in joules); difficulty in implementing it using the technology of the time led to revisions in 1958 and 1960. Adaptation to local conditions has led to various regional K scales, such as KF and KS.
K values are logarithmic, similar to Richter-style magnitudes, but have a different scaling and zero point. K values in the range of 12 to 15 correspond approximately to M 4.5 to 6. M(K) (sometimes written MK) indicates a magnitude M calculated from an energy class K.
Tsunami magnitude scales
Earthquakes that generate tsunamis generally rupture relatively slowly, delivering more energy at longer periods (lower frequencies) than generally used for measuring magnitudes. Any skew in the spectral distribution can result in larger, or smaller, tsunamis than expected for a nominal magnitude. The tsunami magnitude scale, Mt, is based on a correlation by Katsuyuki Abe of earthquake seismic moment (M0) with the amplitude of tsunami waves as measured by tidal gauges. Originally intended for estimating the magnitude of historic earthquakes where seismic data is lacking but tidal data exist, the correlation can be reversed to predict tidal height from earthquake magnitude. (Not to be confused with the height of a tidal wave, or run-up, which is an intensity effect controlled by local topography.) Under low-noise conditions, tsunami waves as little as 5 cm can be predicted, corresponding to an earthquake of M ~6.5.
Another scale of particular importance for tsunami warnings is the mantle magnitude scale, Mm. This is based on Rayleigh waves that penetrate into the Earth's mantle, and can be determined quickly, and without complete knowledge of other parameters such as the earthquake's depth.
Duration and Coda magnitude scales
Md designates various scales that estimate magnitude from the duration or length of some part of the seismic wave-train. This is especially useful for measuring local or regional earthquakes, both powerful earthquakes that might drive the seismometer off-scale (a problem with the analog instruments formerly used), preventing measurement of the maximum wave amplitude, and weak earthquakes, whose maximum amplitude is not accurately measured. Even for distant earthquakes, measuring the duration of the shaking (as well as the amplitude) provides a better measure of the earthquake's total energy. Measurement of duration is incorporated in some modern scales, such as Mwpd and mBc.
Mc scales usually measure the duration or amplitude of a part of the seismic wave, the coda. For short distances (less than ~100 km) these can provide a quick estimate of magnitude before the quake's exact location is known.
Macroseismic magnitude scales
Magnitude scales generally are based on instrumental measurement of some aspect of the seismic wave as recorded on a seismogram. Where such records do not exist, magnitudes can be estimated from reports of macroseismic effects such as those described by intensity scales.
One approach for doing this (developed by Beno Gutenberg and Charles Richter in 1942) relates the maximum intensity observed (presumably this is over the epicenter), denoted I0 (capital I, subscripted zero), to the magnitude. It has been recommended that magnitudes calculated on this basis be labeled Mw(I0), but they are sometimes labeled with a more generic Mms.
Another approach is to make an isoseismal map showing the area over which a given level of intensity was felt. The size of the "felt area" can also be related to the magnitude (based on the work of Frankel 1994 and Johnston 1996). While the recommended label for magnitudes derived in this way is M0(An), the more commonly seen label is Mfa. A variant, MLa, adapted to California and Hawaii, derives the Local magnitude (ML) from the size of the area affected by a given intensity. MI (upper-case letter "I", distinguished from the lower-case letter in Mi) has been used for moment magnitudes estimated from isoseismal intensities calculated per Johnston 1996.
Peak Ground Velocity (PGV) and Peak Ground Acceleration (PGA) are measures of the force that causes destructive ground shaking. In Japan, a network of strong-motion accelerometers provides PGA data that permits site-specific correlation with different magnitude earthquakes. This correlation can be inverted to estimate the ground shaking at that site due to an earthquake of a given magnitude at a given distance. From this a map showing areas of likely damage can be prepared within minutes of an actual earthquake.
Other magnitude scales
Many earthquake magnitude scales have been developed or proposed, with some never gaining broad acceptance and remaining only as obscure references in historical catalogs of earthquakes. Other scales have been used without a definite name, often referred to as "the method of Smith (1965)" (or similar language), with the authors often revising their method. On top of this, seismological networks vary in how they measure seismograms. Where the details of how a magnitude has been determined are unknown, catalogs will specify the scale as unknown (variously Unk, Ukn, or UK). In such cases the magnitude is considered generic and approximate.
A special case is the "Seismicity of the Earth" catalog of Gutenberg & Richter (1954). Though hailed as a milestone as a comprehensive global catalog of earthquakes with uniformly calculated magnitudes, the authors never published the details of how they determined those magnitudes. Consequently, while some catalogs identify these magnitudes as MGR, others use UK (meaning "computational method unknown"). Subsequent study found that most of the MGR magnitudes "are basically Ms for large shocks shallower than 40 km, but are basically mB for large shocks at depths of 40–60 km." Further study has found many of the Ms values to be "considerably overestimated."
- Bormann, Wendt & Di Giacomo 2013, p. 37. The relationship between magnitude and the energy released is complicated. See §[?] and §3.3.3 for details.
- Bormann, Wendt & Di Giacomo 2013, §[?].
- Bolt 1993, p. 164 et seq.
- Bolt 1993, pp. 170–171.
- Bolt 1993, p. 170.
- See Bolt 1993, Chapters 2 and 3, for a very readable explanation of these waves and their interpretation. J. R. Kayal's excellent description of seismic waves is also available online.
- See Havskov & Ottemöller 2009, §1.4, pp. 20–21, for a short explanation, or NMSOP-2 EX 3.1 2012 for a technical description.
- Chung & Bernreuter 1980, p. 1.
- IASPEI IS 3.3 2014, pp. 2–3.
- Kanamori 1983, p. 187.
- Richter 1935, p. 7.
- Spence, Sipkin & Choy 1989, p. 61.
- Richter 1935, p. 5; Chung & Bernreuter 1980, p. 10. Subsequently redefined by Hutton & Boore 1987 as 10 mm of motion by an ML 3 quake at 17 km.
- Chung & Bernreuter 1980, p. 1; Kanamori 1983, p. 187, figure 2.
- Chung & Bernreuter 1980, p. ix.
- The USGS policy for reporting magnitudes to the press was posted at "USGS policy" (archived 2016-05-04 at the Wayback Machine), but has been removed. A copy can be found at http://dapgeol.tripod.com/usgsearthquakemagnitudepolicy.htm.
- Bormann, Wendt & Di Giacomo 2013, §3.2.4, p. 59.
- Rautian & Leith 2002, pp. 158, 162.
- See Datasheet 3.1 in NMSOP-2 for a partial compilation and references.
- Katsumata 1996; Bormann, Wendt & Di Giacomo 2013, §[?], p. 78; Doi 2010.
- Bormann & Saul 2009, p. 2478.
- See also figure 3.70 in NMSOP-2.
- Havskov & Ottemöller 2009, p. 17.
- Bormann, Wendt & Di Giacomo 2013, p. 37; Havskov & Ottemöller 2009, §6.5. See also Abe 1981.
- Havskov & Ottemöller 2009, p. 191.
- Bormann & Saul 2009, p. 2482.
- NMSOP-2/IASPEI IS 3.3 2014, §4.2, pp. 15–16.
- Kanamori 1983, pp. 189, 196; Chung & Bernreuter 1980, p. 5.
- Bormann, Wendt & Di Giacomo 2013, pp. 37,39; Bolt (1993, pp. 88–93) examines this at length.
- Bormann, Wendt & Di Giacomo 2013, p. 103.
- IASPEI IS 3.3 2014, p. 18.
- Nuttli 1983, p. 104; Bormann, Wendt & Di Giacomo 2013, p. 103.
- IASPEI/NMSOP-2 IS 3.2 2013, p. 8.
- Bormann, Wendt & Di Giacomo 2013, §[?]. The "g" subscript refers to the granitic layer through which Lg waves propagate. Chen & Pomeroy 1980, p. 4. See also J. R. Kayal, "Seismic Waves and Earthquake Location", page 5.
- Nuttli 1973, p. 881.
- Bormann, Wendt & Di Giacomo 2013, §[?].
- Havskov & Ottemöller 2009, pp. 17–19. See especially figure 1-10.
- Gutenberg 1945a; based on work by Gutenberg & Richter 1936.
- Gutenberg 1945a.
- Kanamori 1983, p. 187.
- Stover & Coffman 1993, p. 3.
- Bormann, Wendt & Di Giacomo 2013, pp. 81–84.
- NMSOP-2 DS 3.1 2012, p. 8.
- Bormann et al. 2007, p. 118.
- Rautian & Leith 2002, pp. 162, 164.
- The IASPEI standard formula for deriving moment magnitude from seismic moment is Mw = (2/3) (log M0 – 9.1), with M0 in N·m. Formula 3.68 in Bormann, Wendt & Di Giacomo 2013, p. 125.
- Anderson 2003, p. 944.
- Havskov & Ottemöller 2009, p. 198.
- Havskov & Ottemöller 2009, p. 198; Bormann, Wendt & Di Giacomo 2013, p. 22.
- Bormann, Wendt & Di Giacomo 2013, p. 23.
- NMSOP-2 IS 3.6 2012, §7.
- See Bormann, Wendt & Di Giacomo 2013, §[?] for an extended discussion.
- NMSOP-2 IS 3.6 2012, §5.
- Bormann, Wendt & Di Giacomo 2013, p. 131.
- Rautian et al. 2007, p. 581.
- Rautian et al. 2007; NMSOP-2 IS 3.7 2012; Bormann, Wendt & Di Giacomo 2013, §[?].
- Bindi et al. 2011, p. 330. Additional regression formulas for various regions can be found in Rautian et al. 2007, Tables 1 and 2. See also IS 3.7 2012, p. 17.
- Rautian & Leith 2002, p. 164.
- Bormann, Wendt & Di Giacomo 2013, §[?], p. 124.
- Abe 1979; Abe 1989, p. 28. More precisely, Mt is based on far-field tsunami wave amplitudes in order to avoid some complications that happen near the source. Abe 1979, p. 1566.
- Blackford 1984, p. 29.
- Abe 1989, p. 28.
- Bormann, Wendt & Di Giacomo 2013, §[?].
- Bormann, Wendt & Di Giacomo 2013, §[?].
- Havskov & Ottemöller 2009, §6.3.
- Bormann, Wendt & Di Giacomo 2013, §[?], pp. 71–72.
- Musson & Cecić 2012, p. 2.
- Gutenberg & Richter 1942.
- Grünthal 2011, p. 240.
- Grünthal 2011, p. 240.
- Stover & Coffman 1993, p. 3.
- Engdahl & Villaseñor 2002.
- Makris & Black 2004, p. 1032.
- Doi 2010.
- NMSOP-2 IS 3.2, pp. 1–2.
- Abe 1981, p. 74; Engdahl & Villaseñor 2002, p. 667.
- Engdahl & Villaseñor 2002, p. 688.
- Abe 1981, p. 72.
- Abe & Noguchi 1983.
- Abe, K. (April 1979), "Size of great earthquakes of 1837 – 1874 inferred from tsunami data", Journal of Geophysical Research, 84 (B4): 1561–1568, Bibcode:1979JGR....84.1561A, doi:10.1029/JB084iB04p01561.
- Abe, K. (October 1981), "Magnitudes of large shallow earthquakes from 1904 to 1980", Physics of the Earth and Planetary Interiors, 27 (1): 72–92, Bibcode:1981PEPI...27...72A, doi:10.1016/0031-9201(81)90088-1.
- Abe, K. (September 1989), "Quantification of tsunamigenic earthquakes by the Mt scale", Tectonophysics, 166 (1–3): 27–34, Bibcode:1989Tectp.166...27A, doi:10.1016/0040-1951(89)90202-3.
- Abe, K; Noguchi, S. (August 1983), "Revision of magnitudes of large shallow earthquakes, 1897-1912", Physics of the Earth and Planetary Interiors, 33 (1): 1–11, Bibcode:1983PEPI...33....1A, doi:10.1016/0031-9201(83)90002-X.
- Anderson, J. G. (2003), "Chapter 57: Strong-Motion Seismology", International Handbook of Earthquake & Engineering Seismology, Part B, pp. 937–966, ISBN 0-12-440658-0.
- Bindi, D.; Parolai, S.; Oth, K.; Abdrakhmatov, A.; Muraliev, A.; Zschau, J. (October 2011), "Intensity prediction equations for Central Asia", Geophysical Journal International, 187: 327–337, Bibcode:2011GeoJI.187..327B, doi:10.1111/j.1365-246X.2011.05142.x.
- Blackford, M. E. (1984), "Use of the Abe magnitude scale by the Tsunami Warning System." (PDF), Science of Tsunami Hazards: The International Journal of The Tsunami Society, 2 (1): 27–30.
- Bolt, B. A. (1993), Earthquakes and geological discovery, Scientific American Library, ISBN 0-7167-5040-6.
- Bormann, P., ed. (2012), New Manual of Seismological Observatory Practice 2 (NMSOP-2), Potsdam: IASPEI/GFZ German Research Centre for Geosciences, doi:10.2312/GFZ.NMSOP-2.
- Bormann, P. (2012), "Data Sheet 3.1: Magnitude calibration formulas and tables, comments on their use and complementary data." (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_DS_3.1.
- Bormann, P. (2012), "Exercise 3.1: Magnitude determinations" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_EX_3.
- Bormann, P. (2013), "Information Sheet 3.2: Proposal for unique magnitude and amplitude nomenclature" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.2.
- Bormann, P.; Dewey, J. W. (2014), "Information Sheet 3.3: The new IASPEI standards for determining magnitudes from digital data and their relation to classical magnitudes." (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.3.
- Bormann, P.; Fugita, K.; MacKey, K. G.; Gusev, A. (July 2012), "Information Sheet 3.7: The Russian K-class system, its relationships to magnitudes and its potential for future development and application" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.7.
- Bormann, P.; Saul, J. (2009), "Earthquake Magnitude" (PDF), Encyclopedia of Complexity and Applied Systems Science, 3, pp. 2473–2496.
- Bormann, P.; Wendt, S.; Di Giacomo, D. (2013), "Chapter 3: Seismic Sources and Source Parameters" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_ch3.
- Chen, T. C.; Pomeroy, P. W. (1980), Regional Seismic Wave Propagation.
- Choy, G. L.; Boatwright, J. L. (2012), "Information Sheet 3.6: Radiated seismic energy and energy magnitude" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.6.
- Choy, G. L.; Boatwright, J. L.; Kirby, S. (2001), "The Radiated Seismic Energy and Apparent Stress of Interplate and Intraslab Earthquakes at Subduction Zone Environments: Implications for Seismic Hazard Estimation" (PDF), U.S. Geological Survey, Open-File Report 01-0005.
- Chung, D. H.; Bernreuter, D. L. (1980), Regional Relationships Among Earthquake Magnitude Scales., NUREG/CR-1457.
- Doi, K. (2010), "Operational Procedures of Contributing Agencies" (PDF), Bulletin of the International Seismological Centre, 47 (7–12): 25, ISSN 2309-236X. Also available here (sections renumbered).
- Engdahl, E. R.; Villaseñor, A. (2002), "Chapter 41: Global Seismicity: 1900–1999", in Lee, W.H.K.; Kanamori, H.; Jennings, P.C.; Kisslinger, C., International Handbook of Earthquake and Engineering Seismology (PDF), Part A, Academic Press, pp. 665–690, ISBN 0-12-440652-1.
- Frankel, A. (1994), "Implications of felt area-magnitude relations for earthquake scaling and the average frequency of perceptible ground motion", Bulletin of the Seismological Society of America, 84 (2): 462–465.
- Grünthal, G. (2011), "Earthquakes, Intensity", in Gupta, H., Encyclopedia of Solid Earth Geophysics, pp. 237–242, ISBN 978-90-481-8701-0.
- Gutenberg, B. (January 1945a), "Amplitudes of surface Waves and magnitudes of shallow earthquakes" (PDF), Bulletin of the Seismological Society of America, 35 (1): 3–12.
- Gutenberg, B. (1 April 1945c), "Magnitude determination for deep-focus earthquakes" (PDF), Bulletin of the Seismological Society of America, 35 (3): 117–130.
- Gutenberg, B.; Richter, C. F. (1936), "On seismic waves (third paper)", Gerlands Beiträge zur Geophysik, 47: 73–131.
- Gutenberg, B.; Richter, C. F. (1942), "Earthquake magnitude, intensity, energy, and acceleration", Bulletin of the Seismological Society of America: 163–191, ISSN 0037-1106.
- Gutenberg, B.; Richter, C. F. (1954), Seismicity of the Earth and Associated Phenomena (2nd ed.), Princeton University Press, 310p.
- Havskov, J.; Ottemöller, L. (October 2009), Processing Earthquake Data (PDF).
- Hough, S.E. (2007), Richter's scale: measure of an earthquake, measure of a man, Princeton University Press, ISBN 978-0-691-12807-8, retrieved 10 December 2011.
- Hutton, L. K.; Boore, D. M. (December 1987), "The ML scale in southern California" (PDF), Bulletin of the Seismological Society of America, 77 (6): 2074–2094.
- Johnston, A. (1996), "Seismic moment assessment of earthquakes in stable continental regions — II. Historical seismicity", Geophysical Journal International, 125 (3): 639–678, Bibcode:1996GeoJI.125..639J, doi:10.1111/j.1365-246x.1996.tb06015.x.
- Kanamori, H. (July 10, 1977), "The energy release in great earthquakes" (PDF), Journal of Geophysical Research, 82 (20): 2981–2987, Bibcode:1977JGR....82.2981K, doi:10.1029/JB082i020p02981.
- Kanamori, H. (April 1983), "Magnitude Scale and Quantification of Earthquake" (PDF), Tectonophysics, 93 (3–4): 185–199, Bibcode:1983Tectp..93..185K, doi:10.1016/0040-1951(83)90273-1.
- Katsumata, A. (June 1996), "Comparison of magnitudes estimated by the Japan Meteorological Agency with moment magnitudes for intermediate and deep earthquakes.", Bulletin of the Seismological Society of America, 86 (3): 832–842.
- Makris, N.; Black, C. J. (September 2004), "Evaluation of Peak Ground Velocity as a "Good" Intensity Measure for Near-Source Ground Motions", Journal of Engineering Mechanics, 130 (9): 1032–1044, doi:10.1061/(asce)0733-9399(2004)130:9(1032).
- Musson, R. M.; Cecić, I. (2012), "Chapter 12: Intensity and Intensity Scales" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_ch12.
- Nuttli, O. W. (10 February 1973), "Seismic wave attenuation and magnitude relations for eastern North America", Journal of Geophysical Research, 78 (5): 876–885, Bibcode:1973JGR....78..876N, doi:10.1029/JB078i005p00876.
- Nuttli, O. W. (April 1983), "Average seismic source-parameter relations for mid-plate earthquakes", Bulletin of the Seismological Society of America, 73 (2): 519–535.
- Rautian, T. G.; Khalturin, V. I.; Fujita, K.; Mackey, K. G.; Kendall, A. D. (November–December 2007), "Origins and Methodology of the Russian Energy K-Class System and Its Relationship to Magnitude Scales" (PDF), Seismological Research Letters, 78 (6): 579–590, doi:10.1785/gssrl.78.6.579.
- Rautian, T.; Leith, W. S. (September 2002), "Developing Composite Regional Catalogs of the Seismicity of the Former Soviet Union." (PDF), 24th Seismic Research Review – Nuclear Explosion Monitoring: Innovation and Integration, Ponte Vedra Beach, Florida.
- Richter, C. F. (January 1935), "An Instrumental Earthquake Magnitude Scale" (PDF), Bulletin of the Seismological Society of America, 25 (1): 1–32.
- Spence, W.; Sipkin, S. A.; Choy, G. L. (1989), "Measuring the size of an Earthquake" (PDF), Earthquakes and Volcanoes, 21 (1): 58–63.
- Stover, C. W.; Coffman, J. L. (1993), Seismicity of the United States, 1568–1989 (Revised) (PDF), U.S. Geological Survey Professional Paper 1527.
- Perspective: a graphical comparison of earthquake energy release – Pacific Tsunami Warning Center
- USGS ShakeMap Providing near-real-time maps of ground motion and shaking intensity following significant earthquakes. | https://en.wikipedia.org/wiki/Body_wave_magnitude | 18 |
16 | This week we learned about adding and subtracting radical expressions. There are some things you need to remember when adding and subtracting radicals. Before you start doing any adding or subtracting, you must remember to put all radicals into their simplest form; it makes it easier when you start doing the equation. Another thing to remember is that only radicals with the same radicand can be combined together. One more thing you should remember is that when adding and subtracting, the radicand will not change, but the coefficient can change, depending on the equation.
At the beginning it was sort of easy to figure out, but as we continued it became much more difficult by the minute. I have found some strategies to help me. The tic-tac-toe method that we were shown helps a lot, where the answer is the same whether you read it diagonally, horizontally, or vertically.
For week 3 we began a unit on "Absolute Value and Radicals". I had not known anything about it until now. The concept is about the distance of a number from zero. There is a formula that can be used: you take the square root of a number's square to get its absolute value.
Example: take -6; -6 squared is 36, and the root of 36 is 6, so the absolute value of -6 is 6.
We also learned about simplifying radical expressions. It is sort of complicated for me to understand, as I never really understood it very well in grade 10. To simplify an entire radical you take all the prime factors (2×2×2×2×3) and gather the groups of 2, since it is a square root. In my example there are 2 groups of 2's and an extra 3. One factor from each group is multiplied together (2×2 = 4), and the leftover 3 stays under the radical sign.
In week two of pre-calculus 11, we took on concepts similar to week one. I learned that geometric sequences are sequences which have a multiplying common ratio, rather than a common difference. I also learned how to find the sum of all the terms.
For example, if I want to find a specific term in a geometric sequence, we use the formula:
tn = a(r)^(n-1), meaning you multiply the first term (a) by the ratio raised to the power that is one less than your term number.
I also learned that it is possible to find the sum of all the terms in a never-ending infinite converging series.
The formula can be used if -1 < r < 1:
S∞ = a/(1 - r)
For the first week of our Math pre-calculus class, we learned about arithmetic sequences and series. This was all new to me, as I had never heard of it or learned it before. I learned about the formulas you use to solve arithmetic sequences and series.
You can use tn = t1 + (n-1)(d) to solve for the value of a term you don't know yet.
Another thing I learned is that an equation like 13 - 4n + 3 = -26 is a linear equation, as there are no powers of n in it.
I don’t think legalizing marijuana is a good idea for one simple reason I believe to be true. Marijuana is a drug and when you take it the drug changes the way that you act around others and alters the natural state that you brain was in. Everyone brain continues to grow even past the age of 20 or 30. You are constantly learning new things all through your life and if you use a drug that hinders you brain from preforming at the best it can it should be illegal.
Marijuana is dumb; there is no argument for why it should be legal.
I think that if they choose to legalize it, they should legalize it for everyone, not just for people older than 18, because the people that marijuana affects most are young teens/kids whose brains are still developing at an accelerated rate. Whether the drug is legal or not, teens/kids are still going to smoke it. If you legalize the drug, then teens/kids can get it from dispensaries, and you then know the drug is clean and doesn't have other drugs put into it to make it more addictive. The main problem is that weed from drug dealers now has higher THC levels, which make you higher when you smoke it, and reduced CBD, which evens the high out and is the medical side of the drug. In dispensaries the drug would have a more even balance of THC and CBD, since they can control the levels in the plant. So if they legalize it, they should legalize it for everyone of all ages, because the weed teens get from drug dealers may be laced and, if not, has very high THC levels, which damage the brain. To reduce harm, giving teens a product that isn't laced and has much more evened-out THC and CBD levels would benefit the development of the brain.
3 Arguments Why Marijuana Should Stay Illegal Reviewed
Question 4: Residential schools were put in place to "take the Indian out of the child," so that Aboriginal practices would be abandoned.
Question 3: They were limiting how many African immigrants were allowed in Canada, and the people who lived in Canada wanted them to come in because they would work long hours with low pay.
Question 3: The kids were taken away from their families and put into residential schools. The residential schools were not a good place, because the main goal of the school was to get the kids/students to forget and lose all of their Aboriginal practices in order to fit into (society).
Question 1: They gave the Aboriginals land and let them not be under any government. I think that these terms were well deserved, because we did such bad things to the Aboriginal people for years, so we should give them those 2 things for sure.
How the impact of residential schools can be passed down from generation to generation
When the first residential schools were made and the first generation of kids went through, they lost some, if not all, of the knowledge of their culture and traditions. When those kids get out and are young adults, they don't really know the proper way of parenting anymore, and they may end up abusing their kids. They also can't properly teach their kids the culture and traditions, because they have forgotten things in the residential schools. Then the adults that were in the residential schools have their kids taken away and can't do anything about it, and the cycle continues.
1. Write a three-paragraph response to the above question. Your first paragraph should focus on the main arguments from one point of view, the second on arguments from the other, and your final paragraph should state your position on the issue.
1: The government should be responsible for the actions that other government groups have made in the past, because they made laws that affected and ruined many people's lives. They put taxes on Chinese immigrants and made residential schools, which made people commit suicide and go insane. Just these 2 things ruined thousands of families and caused families to break up and leave each other. There are still effects that are going on today, and you can't really change the past.
2: The government shouldn't be responsible for past decisions, because the people in power now were not in power at the time and were not the ones making the decisions. Even though they are in power now, and technically the "government" is still responsible for the mistakes that were made, no one who is currently in the government was there and helped make the decisions that affected families.
3: I think the government shouldn't be fully responsible for what happened in the past, but I do think that they should still apologize for the wrongs that were done by their organization. Even if it wasn't them, they are still associated with the group that did make the bad decisions. They should also teach kids in school about what happened, because it is important to make sure we don't make the same mistakes ever again. | http://myriverside.sd43.bc.ca/frasero2016/author/frasero2016/ | 18
12 | Hydrolysis is the interaction of a salt with water that forms weakly dissociated compounds and changes the pH of the medium.
The hydrolysis of a salt occurs when one of the ions of water is bound into a sparingly soluble or weakly dissociated compound, shifting the dissociation equilibrium. For the most part, this process is reversible and is enhanced by dilution or by an increase in temperature.
To find out which salts undergo hydrolysis, you need to know which base and which acid formed the salt. There are several types of such combinations.
Salts of a weak base and a weak acid
Examples include aluminum and chromium sulfides, as well as ammonium acetate and ammonium carbonate. These salts, when dissolved in water, form weak bases and weakly dissociating acids. To trace the reversibility of the process, write the equation for the salt hydrolysis reaction:
Ammonium acetate + water ⇌ ammonia + acetic acid
In the ionic form, the process looks like:
NH4+ + CH3COO- + H2O ⇌ NH4OH + CH3COOH
In the above hydrolysis reaction, ammonia and acetic acid are formed, that is, weakly dissociating substances.
The pH of an aqueous solution depends directly on the relative strength, that is, on the dissociation constants, of the reaction products. The above reaction will be slightly alkaline, since the dissociation constant of acetic acid is less than the constant of ammonium hydroxide: 1.75×10^-5 is less than 6.3×10^-5. If the base and acid are removed from the solution, the process continues to completion.
Consider an example of irreversible hydrolysis:
Aluminum sulfide + water = aluminum hydroxide + hydrogen sulfide
In this case, the process is irreversible, because the reaction products are removed: aluminum hydroxide precipitates, and hydrogen sulfide escapes as a gas.
Hydrolysis of compounds obtained by the interaction of a weak base with a strong acid
This type of hydrolysis describes the decomposition of aluminum sulfate, copper chloride or bromide, and iron or ammonium chlorides. Consider the reaction of iron(II) chloride, which proceeds in two stages:
Iron(II) chloride + water ⇌ iron hydroxochloride + hydrochloric acid
The ionic equation of the hydrolysis of iron(II) chloride takes the form:
Fe^2+ + H2O + 2Cl- ⇌ FeOH+ + H+ + 2Cl-
The second stage of hydrolysis:
FeOH+ + H2O ⇌ Fe(OH)2 + H+
Because hydrogen ions accumulate and hydroxide ions are in deficit, the hydrolysis of FeCl2 proceeds mainly through the first stage. A strong acid (hydrochloric acid) and a weak base (iron(II) hydroxide) are formed. In the case of such reactions, the medium is acidic.
Non-hydrolyzing salts obtained by reacting strong bases and acids
Examples of such salts are calcium or sodium chlorides, potassium sulfate, and rubidium bromide. These substances are not hydrolyzed, however, since their aqueous solutions are neutral. The only weakly dissociating substance in this case is water itself. To confirm this statement, you can write the equation of sodium chloride with water, with the hypothetical formation of hydrochloric acid and sodium hydroxide:
NaCl + H2O ⇌ NaOH + HCl
Reaction in ionic form:
Na+ + Cl- + H2O ⇌ Na+ + OH- + H+ + Cl-
Cancelling the spectator ions leaves only the dissociation of water itself, H2O ⇌ H+ + OH-, so the medium remains neutral.
Salts formed from a strong alkali and a weak acid
In this case, the hydrolysis of the salt proceeds along the anion, which corresponds to an alkaline pH. Examples include sodium acetate, sulfide, and carbonate, potassium silicate and sulfide, and sodium cyanide. For example, let's write the ion-molecular equations for the hydrolysis of sodium sulfide and sodium acetate:
Dissociation of sodium sulfide:
Na2S ⇌ 2Na+ + S^2-
The first stage of hydrolysis of this salt of a polybasic acid proceeds along the anion:
Na2S + H2O ⇌ NaHS + NaOH
Written in ionic form:
S^2- + H2O ⇌ HS- + OH-
The second stage becomes feasible if the reaction temperature is raised:
HS- + H2O ⇌ H2S + OH-
Consider another hydrolysis reaction, using sodium acetate as an example:
Sodium acetate + water ⇌ acetic acid + caustic soda.
As a result of the reaction, weak acetic acid is formed. In both cases, the solution will be alkaline.
Reaction equilibrium according to the Le Chatelier principle
Hydrolysis, like other chemical reactions, can be reversible or irreversible. In reversible reactions one of the reagents is not fully consumed, while irreversible processes proceed with complete consumption of the substance. This is due to a shift in the equilibrium of the reaction, which depends on changes in physical characteristics such as pressure, temperature, and the mass fractions of the reactants.
According to the Le Chatelier principle, a system remains at equilibrium until one or more external conditions of the process are changed. For example, when the concentration of one of the substances is decreased, the equilibrium of the system gradually shifts toward the formation of that same substance. Salt hydrolysis likewise obeys the Le Chatelier principle, which can be used to weaken or strengthen the process.
Hydrolysis can be enhanced, up to complete irreversibility, in several ways:
- Increase the rate of formation of OH- and H+ ions. To do this, heat the solution; since the dissociation of water is endothermic, increased absorption of heat raises this indicator.
- Add water.
- Convert one of the products into the gaseous state, or bind it in a sparingly soluble substance.
Suppression of hydrolysis
The process of hydrolysis, like its enhancement, can be suppressed in several ways.
Introduce into the solution one of the substances formed in the process: for example, alkalize the solution if its pH is greater than 7, or, conversely, acidify it if the pH of the reaction medium is less than 7.
Mutual enhancement of hydrolysis
Mutual enhancement of hydrolysis applies when two such equilibrium systems are combined. Let us consider a concrete example, where the systems in different vessels have each reached equilibrium:
Al^3+ + H2O ⇌ AlOH^2+ + H+
CO3^2- + H2O ⇌ HCO3^- + OH-
Both systems are only slightly hydrolyzed, so if you mix them with each other, the hydroxide and hydrogen ions bind each other. As a result, we obtain the molecular equation of the joint salt hydrolysis:
Aluminum chloride + sodium carbonate + water = sodium chloride + aluminum hydroxide + carbon dioxide.
According to Le Chatelier, the equilibrium of the system shifts toward the reaction products, and the hydrolysis goes to completion with the formation of aluminum hydroxide, which precipitates. Such an enhancement of the process is possible only if one of the salts hydrolyzes along the anion and the other along the cation.
Hydrolysis in aqueous salt solutions proceeds through the combination of the salt's ions with water molecules. One pathway is hydrolysis along the anion, that is, the addition of the H+ ion of water to the anion.
Most salts formed through the interaction of a strong hydroxide and a weak acid are subject to this type of hydrolysis. Examples of salts decomposing along the anion are sodium sulfite or sulfide, as well as potassium carbonate or phosphate. The pH of such solutions is greater than seven. As an example, consider the dissociation of sodium acetate:
In solution, this compound separates into a cation, Na+, and an anion, CH3COO-.
The cation of dissociated sodium acetate, derived from a strong base, cannot react with water.
At the same time, the anions of the acid readily react with H2O molecules:
CH3COO- + H2O ⇌ CH3COOH + OH-
Consequently, hydrolysis proceeds along the anion, and the equation takes the form:
CH3COONa + HOH ⇌ CH3COOH + NaOH
When salts of polybasic acids undergo hydrolysis, the process occurs in several stages. Under normal conditions, these substances hydrolyze only through the first stage.
Salts formed by the interaction of a strong acid and a weak base are subject mainly to hydrolysis along the cation. Examples are ammonium bromide, copper nitrate, and zinc chloride. The medium in the solution during such hydrolysis corresponds to a pH of less than seven. Consider the process of hydrolysis along the cation, using aluminum chloride as the example:
In aqueous solution, it dissociates into anions, 3Cl-, and a cation, Al^3+.
Ions of strong hydrochloric acid do not interact with water.
The cations of the weak base, on the contrary, are subject to hydrolysis:
Al^3+ + HOH ⇌ AlOH^2+ + H+
In the molecular form, the hydrolysis of aluminum chloride is as follows:
AlCl3 + H2O ⇌ AlOHCl2 + HCl
Under normal conditions, the hydrolysis in the second and third stages can be neglected.
Degree of dissociation
Any salt hydrolysis reaction is characterized by the degree of dissociation, which shows the ratio between the molecules that pass into the ionic state and the total number of molecules. The degree of dissociation depends on several factors:
- The temperature at which the hydrolysis is carried out.
- The concentration of the dissolved electrolyte.
- The nature of the dissolved salt.
- The nature of the solvent itself.
According to the degree of dissociation, all electrolytes are divided into strong and weak, and they exhibit different degrees when dissolved in different solvents.
- Substances with a degree of dissociation over 30% are strong electrolytes: for example, sodium hydroxide, potassium hydroxide, barium and calcium hydroxides, as well as sulfuric, hydrochloric, and nitric acids.
- Electrolytes whose degree is less than 2% are called weak. These include organic acids, ammonium hydroxide, hydrogen sulfide, and carbonic acid, as well as a number of bases of the p-, d-, and f-elements of the periodic table.
A quantitative measure of the ability of a substance to decay into ions is the dissociation constant, also called the equilibrium constant of dissociation. In simple terms, the dissociation constant is the ratio of the electrolyte that has decomposed into ions to the undissociated molecules.
Unlike the degree of dissociation, this parameter does not depend on external conditions and the concentration of the salt solution in the process of hydrolysis. During the dissociation of polybasic acids, the degree of dissociation at each step becomes an order of magnitude less.
The indicator of acid-base properties of solutions
pH is the measure used to determine the acid-base properties of a solution. Water dissociates into ions only to a limited extent and is a weak electrolyte. The pH is calculated as the negative decimal logarithm of the concentration of hydrogen ions in the solution:
pH = -log[H+]
- For alkaline media this figure is greater than seven. For example, if [H+] = 10^-8 mol/L, then pH = -log(10^-8) = 8, that is, pH > 7.
- For acidic media, on the contrary, the pH is less than seven. For example, if [H+] = 10^-4 mol/L, then pH = -log(10^-4) = 4, that is, pH < 7.
- For a neutral environment, pH = 7.
Very often, the pH of solutions is determined by a rapid method using indicators, which change color depending on the pH. For more accurate determination, ionometers and pH meters are used.
Quantitative characteristics of hydrolysis
Hydrolysis of salts, like any other chemical process, has a number of characteristics that determine how the process proceeds. The most significant quantitative characteristics are the hydrolysis constant and the degree of hydrolysis. Let us dwell on each of them.
Degree of hydrolysis
To find out which salts are hydrolyzed, and to what extent, a quantitative indicator is used: the degree of hydrolysis, which characterizes the completeness of the course of hydrolysis. The degree of hydrolysis is the fraction of the substance, out of the total number of molecules capable of hydrolysis, that has actually undergone it; it is written as a percentage:
h = (n / N) × 100%,
where h is the degree of hydrolysis;
n is the number of salt particles that have undergone hydrolysis;
N is the total number of salt molecules involved in the reaction.
Factors affecting the degree of hydrolysis include:
- the hydrolysis constant;
- temperature (the degree increases with temperature owing to the intensified interaction of the ions);
- salt concentration in solution.
Hydrolysis constant
The hydrolysis constant is the second most important quantitative characteristic. In general form, the equation of salt hydrolysis can be written as:
MA + HOH ⇌ MOH + HA
From this it follows that the equilibrium constant and the concentration of water in the same solution are both constant values. Accordingly, the product of these two quantities is also a constant, namely the hydrolysis constant Kh. In general, Kh can be written as:
Kh = [MOH] × [HA] / [MA],
where HA is the acid and MOH is the base.
In the physical sense, the hydrolysis constant describes the ability of a particular salt to undergo hydrolysis. This parameter depends on the nature of the substance and on the temperature but, unlike the degree of hydrolysis, not on the concentration. | http://culturaliteraria.com/gidroliz-solej/ | 18
41 | In this experiment you will allow sodium bicarbonate (baking soda) to react with measured mass of nahco3 and enough dilute hcl to completely react with it you will to gain an understanding of mass relationships in chemical reactions. 3-1 experiment 3 limiting reactants introduction: most chemical reactions stoichiometric ratio is the mole ratio of the reactants, or reactants to products,. Lab 6: chemical kinetics to dye for in this week's lab you will: from the stoichiometry of reaction (1) we see that the consumption of 1 mole of a. Stoichiometry /ˌstɔɪkiˈɒmɪtri/ is the calculation of reactants and products in chemical for instance, instead of an exact 14:3 proportion, 1704 kg of ammonia consists of stoichiometry is often used to balance chemical equations (reaction.
Cr3a the course provides students with opportunities outside the laboratory environment to learning objectives within big idea 3: chemical reactions 4 solution stoichiometry & chemical analysis reaction types & stoichiometry (bi 3. At minimum, six of the required 16 labs are conducted in a guided-inquiry format 8, 9 cr7, the course rates of chemical reactions are determined by details of the molecular collisions the laws of chapter 3 stoichiometry chapter 4. The lecture demonstration laboratory (bagley hall 171) is available to assist professors and instructors in the department of chemistry through chapter 3: stoichiometry chapter 4: types of chemical reactions and solution stoichiometry.
Chemical reactions—that is, the reaction stoichiometry 3 plus signs (+) separate individual reactant and product formulas, and an arrow laboratory. Department of chemistry & biochemistry california state university, bakersfield as part of your labs, you will often have to calculate the amounts of the the theoretical yield of your reactions, and the percent yield of your reactions once you 3 calculating percent yield before calculating percent yield, you must have. 3 lrc tutoring 4 rise center: temporary building 5 5 math lab tutoring: chem tutors available but their schedules change week to week. Oxidation-reduction reactions, kinetics and equilibrium reactions with particular reference if a student fails to complete more than 3 labs (for whatever reason).
Chapter 3 what is the frequency factor and where can we get values for it what is it the rate law is determined only from experiment, see chapter 5 also see is there a way to know if a given chemical reaction is elementary stoichiometry only depends on the reaction chemistry and inert gases or liquids present. Lab 3 - heats of transition, heats of reaction, specific heats, and hess's law when a chemical reaction occurs, the chemical enthalpy of the reactants that the chemical reaction is written in such a way that the stoichiometric coefficient. Read chapter 3 laboratory experiences and student learning: laboratory student groups explore four chemical reactions—burning, rusting, the a chemistry unit has led to gains in student understanding of stoichiometry (lynch, 2004.
The study of the chemical behavior of gases was part of the basis of perhaps the most figure 3 when a reaction produces a gas that is collected above water, the in an experiment in a general chemistry laboratory, a student collected a. 3 stoichiometry 76 88 covalent bond energies and chemical reactions 350 89 the ports the lab, which typically involves a great deal of aque. Ilana kovach lab 3: electronic structure prelab: 1 equipment for another way to see if a chemical reaction took place is by placing the metal in a bunsen burner for instance ilana kovach lab 9: stoichiometry prelab: 1. The objectives of this laboratory are to experimentally determine the to obtain the mole-to-mole ratios would be to simply balance the chemical equations for 3 obtain about a 5-ml quantity of hydrochloric acid (hcl) in your small beaker. Stoichiometry refers to the ratios of products and reactants in a chemical reaction for example, consider one experiment in which 100 grams of sulfur is combined with 100 grams 16h + + 4clo 3 − + 12cl − = 7cl 2 + 2clo 2 + 8h 2 o (6.
Subject: chemical reactions - stoichiometry purpose: the purpose of this lab is to perform the reaction of sodium bicarbonate with sulfuric. Why do some chemical reactions always give off heat how much thursday, may 3 notes: percent yield volcano stoichiometry lab (review, hand in. Investigating stoichiometry primary sol ch4 the mole is the basic counting unit used in chemistry and is used to keep track of the amount of lab #4 they need to start with the oxide produced in #3 (cu2o) and react it with o2 again to.
Yuck in chemistry, reactions proceed with very specific recipes copper-iron stoichiometry lab report 10/3/12 abstract: the lab performed. Stoichiometry is the method by which we calculate how many grams of these reactants are needed to form a given amount of product in a chemical reaction 3 if 3 moles of c2h6 react with excess o2, how many moles of h2o will be formed.
Subsequent chemistry labs, not only as a specific skill, but also as it the amount of solution required to react with a known amount of acid (khp) page 3. There are three possible chemical reactions that could be occurring during the this lab is for you to experimentally determine which of these three reactions use stoichiometry to determine which reaction is actually occurring inside the 3 place the empty crucible on the balance pan and then press the tare/reset button. To determine the percent yield of a product in a chemical reaction we (aq) + 22 h2o (l) + 4 h2so4 (aq) → 2 kal(so4)2 + 12 h2o (s) + 3 h2 (g) of alum using the stoichiometric factor from the balanced chemical equation. Fundamental concepts are presented in lecture and laboratory including the periodic table, atomic structure, chemical bonding, reactions, stoichiometry, states of matter, properties of metals, nonmetals and compounds, (3 lec, 3 lab. | http://vbpaperhkdl.mestudio.us/lab-3-stoichiometry-and-chemical-reactions.html | 18 |
34 | One Point on the X-axis
If one of the x-values -- say x1 -- is 0, the operation becomes very simple. Because the x-value of the first point is zero, we can easily find a.
In general, you have to solve this pair of equations: y1 = ab^x1 and y2 = ab^x2. For example, solving for the points (0, 2) and (2, 4) yields a = 2 and b = √2, so the curve is y = 2(√2)^x.
Neither Point on the X-axis
If neither x-value is zero, solving the pair of equations is slightly more cumbersome.
Why Exponential Functions Are Important Many important systems follow exponential patterns of growth and decay. Although it takes more than a slide rule to do it, scientists can use this equation to project future population numbers to help politicians in the present to create appropriate policies.
Plugging this value, along with those of the second point, into the general exponential equation produces 6.
On the other hand, the point -2, -3 is two units to the left of the y-axis. This yields the following pair of equations: Henochmath walks us through an easy example to clarify this procedure. If neither point has a zero x-value, the process for solving for x and y is a tad more complicated.
For example, the point 2, 3 is two units to the right of the y-axis and three units above the x-axis. The procedure is easier if the x-value for one of the points is 0, which means the point is on the y-axis.
Taking as the starting point, this gives the pair of points 0, 1. In his example, he chose the pair of points 2, 3 and 4, An Example from the Real World Sincehuman population growth has been exponential, and by plotting a growth curve, scientists are in a better position to predict and plan for the future.
In this form, the math looks a little complicated, but it looks less so after you have done a few examples. How to Find an Exponential Equation With Two Points By Chris Deziel; Updated March 13, If you know two points that fall on a particular exponential curve, you can define the curve by solving the general exponential function using those points.
Inthe world population was 1. By taking data and plotting a curve, scientists are in a better position to make predictions.
For example, the number of bacteria in a colony usually increases exponentially, and ambient radiation in the atmosphere following a nuclear event usually decreases exponentially. You can substitute this value for b in either equation to get a.Feb 27, · The points are (2,18) and (3,) in y=ab^x your a is your starting point and b is your growth factor.
To find b you have to find 18 times what gives you Status: Resolved. Just plug in the two sets of values for x and y (= f(x)) to obtain two equations. Divide one equation by the other to get the answer.
– Mick Dec 30 '14 at If you know two points that fall on a particular exponential curve, you can define the curve by solving the general exponential function using those points. In practice, this means substituting the points for y and x in the equation y = ab x.
Finding an Exponential Equation with Two Points and an Asymptote
Find an exponential function that passes through (3, …) and (4, …) and has a horizontal asymptote of y = 4. Write an exponential function of the form y = ab^x whose graph passes through the given points.
(1,4), (2,12). The form is y = ab^x, so 12 = ab^2 and 4 = ab^1. Divide the 1st by the 2nd to get 3 = b. Substitute that into the 2nd equation to solve for "a": 4 = a*3^1, so a = 4/3. EQUATION: y = (4/3)*3^x. Cheers, Stan H. | http://zejucavynabiviv.mi-centre.com/how-to-write-an-exponential-function-given-2-points-8001180011.html | 18
10 | How far away is that galaxy?
Our entire understanding of the Universe is based on knowing the distances to other galaxies, yet this seemingly-simple question turns out to be fiendishly difficult to answer. The best answer came more than 100 years ago from an astronomer who was mostly unrecognized in her time — and today, another astronomer has used Sloan Digital Sky Survey (SDSS) data to make those distance measurements more precise than ever.
“It’s been fascinating to work with such historically significant stars,” says Kate Hartman, an undergraduate from Pomona College who announced the results at today’s American Astronomical Society (AAS) meeting in National Harbor, Maryland. Hartman studied “Cepheid variables,” a type of star that periodically pulses in and out, varying in brightness over the course of a few days or weeks.
The pattern was first noticed in 1784 in the constellation Cepheus in the northern sky, so these stars became known as "Cepheid variables." Cepheid variables went from interesting to completely indispensable in the early 1900s thanks to the work by astronomer Henrietta Leavitt. Leavitt's contributions were largely ignored for one simple reason — she was a woman at a time when women were not taken seriously as astronomers.
“It’s been fascinating to work with such historically significant stars.”
In fact, when Leavitt was first hired by Harvard College Observatory in 1895, she was hired as a “computer” — a term which meant something completely different from what it means today. In the days before modern computers or even pocket calculators, a “computer” was a person hired to perform complex calculations in their mind, assisted only by pencil and paper. Although the work was demanding, it was not taken seriously by the male professional scientists of the time — it was seen as rote work not requiring intelligence or insight that could be done by anyone, even a woman.
So in 1908 when Leavitt discovered a relationship between the brightness (or “luminosity”) of a Cepheid variable star and the time it took to go through a full cycle of change (its “period”), her work was not immediately recognized for its significance. It took years for the mostly-male astronomy community to realize that this relationship (today known as “the Leavitt Law”) means that measuring the period of a Cepheid variable immediately gives its true brightness — and furthermore, that comparing this to its apparent brightness immediately gives its distance.
Sadly, it was only after Leavitt’s death from cancer at age 53 that astronomers realized that she had found the key to unlocking distances to such stars everywhere — whether in our Milky Way or in a galaxy in the distant Universe.
Using the period-luminosity relationship that Leavitt discovered, others later calculated the distances to Cepheid variables in galaxies outside our own Milky Way. In doing so, they discovered that our Universe is expanding, starting from a single point nearly 14 billion years ago at the Big Bang — a discovery that would have never been possible without the discovery of the Leavitt Law.
More than a century later, astronomers like Hartman are carrying on Leavitt’s work. Her announcement came about as a result of a ten-week summer research project at Carnegie Observatories. Hartman worked closely with her research advisor, Rachael Beaton, a Hubble and Carnegie-Princeton fellow now based at Princeton University.
The tool that Hartman and Beaton are using to improve our knowledge of Cepheid variables is the Sloan Digital Sky Survey's Apache Point Observatory Galactic Evolution Experiment (APOGEE), which is systematically mapping the chemical compositions and motions of stars in all components of our Galaxy.
As Beaton explains, “The APOGEE survey is optimized to study the cool, old giant-type stars found all across our galaxy. And while Cepheid variables are younger and larger, they are similar in temperature, so they are well suited for APOGEE.”
The fact that Cepheid variables appear in the APOGEE survey provides a great opportunity to calibrate the Leavitt Law, but it also provides a major benefit: it allows astronomers to map young stars in the same way they map old giant stars. Mapping these two types of stars together allows astronomers to connect structures from the ancient galaxy to more recently-formed components. In this way, Cepheid variables can offer tremendous insight into the structure of our galaxy — but such insight comes with complications.
The very property of these stars that allowed Henrietta Leavitt to discover the Leavitt Law — their predictable variations in brightness — creates challenges for APOGEE. "Over a pulsation cycle of a Cepheid variable, the star's properties change," says Beaton. "Its temperature, surface gravity, and atmospheric properties can vary greatly over a fairly short time. So how can APOGEE properly measure them? I thought it would be an excellent summer research project to find out."
The undergraduate to take on the challenge was Kate Hartman of Pomona College in Claremont, California. Hartman was able to demonstrate that it is possible to get consistent measurements of the chemical makeup of Cepheid variables, regardless of when in their cycle they were observed by APOGEE.
Hartman explains, “I had to look at multiple spectra from the same Cepheid variable and measure the amount of different elements in the star. When we looked at a star’s spectrum across its entire pulsation cycle, we found no significant differences in the results. That means that we’re getting reliable results every time we look.”
Knowing that APOGEE can reliably measure Cepheid variables is particularly important, Hartman explains, because it is the first survey to see so many, so regularly, and in so many places. Because APOGEE now operates simultaneously with twin instruments on telescopes in both the Northern and Southern hemispheres, it can see the whole Galaxy, as well as our neighbors the Large and Small Magellanic Clouds. This means that Cepheids can be observed in very different chemical environments, using the same instrument and data analysis process every time.
As a result of Hartman's findings, additional APOGEE observations of Cepheid variables are now well underway. Jen Sobeck of the University of Washington, APOGEE's Project Manager, explains, "the survey will observe the nearest and best-studied Cepheids several times a month, will target Cepheids in the Large and Small Magellanic Clouds in January, and plans to eventually target all Cepheids in all parts of the sky we observe. These observations are an important addition to the APOGEE map of the galaxy."
With direct distances from trigonometric parallaxes to a billion stars in our Galaxy coming soon from the ESA Gaia mission, APOGEE spectroscopy is the final piece of the puzzle needed to complete the work started by Henrietta Leavitt in 1908 and provide an accurate calibration of the Leavitt Law for all Cepheid variable stars. The upcoming Sloan Digital Sky Survey V will provide even better data. With all these new tools at their disposal, astronomers will be able to follow up on the work of astronomers like Leavitt — and Hartman — for generations to come.
- Kate Hartman, Pomona College
- Rachael Beaton, Princeton University
- Jennifer Sobeck, University of Washington
- Karen Masters, SDSS Scientific Spokesperson, Haverford College/University of Portsmouth,
email@example.com, +44 (0)7590 5266005, @KarenLMasters
- Jordan Raddick, SDSS Public Information Officer, Johns Hopkins University,
firstname.lastname@example.org, 1-410-516-8889, @raddick
About the Sloan Digital Sky Survey
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org.
SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. | https://www.sdss.org/press-releases/cepheids/ | 18 |
13 | Set students up for success in eighth grade and beyond with eighth-grade math word problems. Linear equations like y = 2x + 7 are called linear because they make a straight line when we graph them, and introductory tutorials cover exactly this idea. Worked solutions to exercises from pre-algebra and algebra textbooks are available, and collections of eighth-grade math problems with answers give students looking to improve their exam scores a place to practice.
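As a quick, self-made illustration of why an equation like y = 2x + 7 graphs as a straight line (this sketch is mine, not part of any listed resource), the following Python snippet tabulates the function and prints the change in y at each step; the change is always 2, which is exactly what "linear" means:

# Tabulate y = 2x + 7; the constant step in y is what makes the graph a line.
def f(x):
    return 2 * x + 7

previous = None
for x in range(-3, 4):
    y = f(x)
    note = "" if previous is None else f"  (change in y: {y - previous})"
    print(f"x = {x:2d} -> y = {y:2d}{note}")
    previous = y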
Solving math problems can intimidate eighth-graders, but by using a few simple formulas, students can easily calculate answers to worksheet questions. Collections of grade 8 word problems with answers are widely available, including a set of 26 example eighth-grade math word problems, complete with answers, plus video walkthroughs in which an instructor writes on the screen and solves eighth-grade problems step by step.
Downloadable problem sets with answers are offered in PDF format, and teacher-made sites such as Mr. Graham's eighth-grade algebra website (unit 13: systems-of-equations word problems) collect practice by topic. Question-and-answer forums also discuss what counts as a genuinely hard math problem for an eighth grader.
Worksheet collections for eighth graders with answers cover topics such as multiplication of exponents and absolute value, while realistic word-problem worksheets help sixth and seventh graders apply math to real-life questions. Algebra 1 worksheet generators such as Math-Aids can produce unlimited numbers of dynamically created word problems through the eighth-grade level.
Eighth-grade algebra may be gratifying or challenging for your teen; if your eighth grader struggles in class, practicing math worksheet problems may help him or her develop a deeper understanding of algebraic concepts. Online middle-school competitive math programs (for 4th to 8th graders) run spring sessions and provide AMC 8/10/12 and AIME problems with answers. Our math worksheets for eighth graders also begin with skills such as order of operations and continue through midyear problems.
Additional eighth-grade practice worksheets with answer keys are freely available for kids, and summer packets (such as the Dickerson math packet for rising seventh graders) let students work through the standards over the break and hand their solutions to their math teacher during the first week of school.
Riddle, brainteaser, and logic-puzzle collections with answers are popular too; for example, 0 is the least common digit even though 1,000 has three zeros, and explanations accompany each riddle. Each week the MATHCOUNTS Problem of the Week features a new problem to try with a math club, with solutions to previous problems downloadable as PDFs. Dedicated pages of eighth-grade math problems and answers support students with step-by-step explanations, and Pythagorean theorem and distance-formula worksheets (such as those from Math-Aids.com) give children a fun way to practice. | http://xxhomeworkelik.shapeyourworld.info/math-problems-for-8th-graders-with-answers.html | 18
10 | Compare and contrast Stevens' four scales of measurement. Explain when each type of scale should be used: the nominal scale, the ordinal scale, the interval scale, and the ratio scale. Please provide sources.
The nominal scale of measurement is used whenever the variable being measured or observed carries labels or names that distinguish it from the other variables being measured. An example of a nominal scale would be using AMEX, NASDAQ, and FTSE to identify where a particular stock is traded.
The ordinal scale of measurement is used for variables that have all the properties of a nominal scale, with the added property that the values can be ranked in a meaningful order.
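As a hedged illustration of my own (not part of the original BrainMass solution), the Python sketch below pairs each of Stevens' four scales with an example variable and the comparisons that are meaningful at that level; each scale keeps the properties of the one before it and adds one more:

# nominal: categories only; ordinal: adds order; interval: adds equal
# spacing but no true zero; ratio: adds a true zero, so ratios make sense.
scales = [
    ("nominal",  "stock exchange (AMEX, NASDAQ, FTSE)", "equal / not equal"),
    ("ordinal",  "survey rating (poor < fair < good)",  "greater / less"),
    ("interval", "temperature in degrees Celsius",      "differences"),
    ("ratio",    "height in centimeters",               "differences and ratios"),
]
for name, example, meaningful in scales:
    print(f"{name:8s} | {example:36s} | meaningful: {meaningful}")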
This solution outlines various scales of measurement.
Stevens' four levels of measurement
Give examples of Stevens' four levels of measurement: nominal, ordinal, interval, and ratio scales. | https://brainmass.com/statistics/statistical-inference/stevens-four-scales-of-measurement-594212 | 18
42 | In the classification of chemical reactions, one textbook example of a synthesis reaction is two amino acids combining to form a dipeptide. A chemical reaction represents the conversion of reactants to product molecules and can involve synthesis (combination), decomposition, or oxidation. In a synthesis (or combination) reaction you start out with two separate substances on the reactant side and combine them into a single product; a classic example is the combination of iron and sulfur to form iron sulfide.
Synthesis also occurs among biological macromolecules: monomers such as simple sugars join into polymers through dehydration synthesis reactions. Not every reaction qualifies; double-replacement reactions, such as those of aqueous HCl, are a different type altogether. Another commonly cited example of a genuine synthesis reaction is the formation of water from hydrogen and oxygen.
A synthesis reaction is the formation of a more complex compound by the combining of two or more simpler compounds, elements, or radicals; it is also called a direct combination reaction. Common examples are a metal or non-metal reacting with oxygen to form an oxide. A typical exercise asks which of the following is a synthesis reaction: 2 Na + Cl2 → 2 NaCl (synthesis), 2 KClO3 → 2 KCl + 3 O2 (decomposition), or Mg + 2 HCl → H2 + MgCl2 (single replacement).
Dehydration synthesis reactions are reactions in which molecules combine by the removal of an H atom and an OH group between them, which together leave as a molecule of water. Another equation sometimes written as a synthesis example is H2 + O2 → H2O2, although that particular reaction, as written, never actually takes place. The processes that keep a human cell running are all examples of chemical reactions; a synthesis reaction occurs when one or more simple substances combine into something more complex. The process of making starch in our bodies, for instance, uses dehydration synthesis.
1 water becomes ice when it is subjected to temps below 32f 2a flame ignites when a stove's burner is turned on 3 liquid coffee id produce when water. Synthesis reactions, the act of combining two or more substances together to make a product, occur all around us, from the kitchen to our chemical. Science of synthesis guided examples 1 science of synthesis 3 reaction search science of synthesis automatically switches to reaction search mode if an arrow is. Which of the following is a synthesis reaction from the options, the one that is an example of a synthesis reaction is : a) 2h2 + o2 === 2h2o. Chemical reactions in everyday life what are some everyday examples of a synthesis reaction there are two types of synthesis reactions that are very important.
Synthesis reactions (also called combination reactions) are the simplest type of chemical reaction: in a synthesis reaction, two or more reactants become one product. General descriptions of the main reaction types, with specific examples, are widely available. One important synthesis reaction that occurs in nature is the synthesis of metal oxides, in which a metal reacts with an oxygen molecule to form a metal oxide. | http://prtermpaperugsd.komedo.info/an-example-of-synthesis-reaction.html | 18
47 | Now we can pick some numbers to the left and to the right of zero. With the lights on, we can graph this function.
Again, the best way to illustrate this simple idea is through the use of examples! The vertex of the graph is at (h, k). Evaluate the function at each value of x to get the corresponding values of y in the table; I suggest using the x-coordinate of the vertex as the middle value of all the x-values in the table.
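For instance, here is a short Python sketch of my own that builds such a table for f(x) = |x − 1| + 2, whose vertex is at (h, k) = (1, 2), placing the vertex's x-coordinate in the middle of the table:

def f(x):
    return abs(x - 1) + 2  # vertex at (h, k) = (1, 2)

h = 1  # x-coordinate of the vertex, used as the middle table entry
for x in [h + offset for offset in range(-3, 4)]:
    print(f"x = {x:2d} -> f(x) = {f(x)}")

The printed values are symmetric about x = 1, which is why centering the table on the vertex makes the V-shape easy to see.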
I hope you start realizing that the first step is to always express the given absolute value function in standard form. Once you have identified the correct equation (equation 2, in this example), plot the points on the xy-plane and connect the dots with a straightedge.
Plug in known values to determine which solution is correct, then rewrite the equation without absolute value brackets. When you take the absolute value of a number, the result is never negative, even if the number itself is negative.
To solve this, you have to set up two separate equations and solve each one.
Now we can make our graph. Wait, we think that was a table of values we bumped into. For a positive number x, both of the following equations are true: |x| = x and |−x| = x. This means that any equation that has an absolute value in it has two possible solutions.
Notice that the graph has a low point determined by the middle x-value, which is the x-coordinate of the vertex itself. If you already know the solution, you can tell immediately whether the number inside the absolute value brackets is positive or negative, and you can drop the absolute value brackets.
Writing an equation with a known solution: if you have values for x and y for the above example, you can determine which of the two possible relationships between x and y is true, and this tells you whether the expression in the absolute value brackets is positive or negative.
This is the most basic form of an absolute value function. If we find a point on one half of the graph, we can use it to find its twin on the other half. Hopefully not a snake or a clown.
For us to meet here again, it must be fate; that, or mathematics. The two halves of the graph come to a point at (0, 0). The equation for an absolute value function has the general form y = a|x − h| + k, where (h, k) is the vertex. Once you know which case applies, you can drop the absolute value brackets from the original equation and write the resulting linear equation instead. The whole world turns on the vertex…or at least the whole function does.
The only absolute thing in this world is absolute value. If you plot the above two equations on a graph, they will both be straight lines that pass through the origin. Plot the points in the xy-plane, then plug these values into both equations.
This is the solution for equation 1.
This point is called the vertex of the absolute value function. If you get it right, you should have something similar to the graph below; it looks exactly the same as the piecewise function that we talked about in the last unit. Guessing points at random may work at times, but it is not a reliable approach.
In this piecewise function, f(x) is positive when x is negative, and f(x) is positive when x is positive. Naturally, if absolute values are absolute, then so are absolute value functions.
Something particular to sink our teeth into and leave a mark on the page. Example 3: Graph the absolute value function using a table of values. Writing the function in standard form allows us to identify the correct values of m, b, and c, which we will substitute into the formula f(x) = |mx + b| + c. The graph of an absolute value function resembles the letter V; it has a corner point at which the graph changes direction.
This allows us to identify the correct values of m, b, and c which we will use to substitute into the formula. The graph of the absolute value function resembles the letter V. It has a corner point at which the graph changes direction.
In an absolute value equation, an unknown variable is the input of an absolute value function. If the absolute value of an expression is set equal to a positive number, expect two solutions for the unknown variable.
How to Write an Absolute-Value Equation That Has Given Solutions By Chris Deziel; Updated April 25, You can denote absolute value by a pair of vertical lines bracketing the number in question.
The negative sign outside of the absolute value bars flipped the whole graph upside down. The 2 outside then made the V-shape narrower, making it easier to store in our closet. A number less than 1 would have an opposite effect, widening the graph and taking up all our closet space.
Graph absolute value functions like f(x)=|x+3|+2. If you're seeing this message, it means we're having trouble loading external resources on our website.
If you're behind a web filter, please make sure that the domains *ultimedescente.com. the graph is given by the line y x. y (2, 2) 2 Writing an Absolute Value Function Write an equation of the graph shown. SOLUTION The vertex of the graph is (0, º3), so the equation has GRAPHING ABSOLUTE VALUE FUNCTIONS STUDENT HELP Skills Review For help with symmetry, see p.
[Figure: graph on x–y axes with labeled points (3, 2) and (2, 3).] | http://migyxafelozuluc.ultimedescente.com/writing-absolute-value-equations-given-a-graph-5662256622.html | 18
49 | Space Shuttle design process
Before the Project Apollo Moon landing in 1969, NASA began studies of Space Shuttle designs as early as October 1968. The early studies were denoted "Phase A", and in June 1970, "Phase B", which were more detailed and specific. The primary intended use of the space shuttle was supporting the future space station, ferrying a minimum crew of four and about 20,000 pounds (9,100 kg) of cargo, and turning around rapidly for future flights.
Two designs emerged as front-runners. One was designed by engineers at the Manned Spacecraft Center, and championed especially by George Mueller. This was a two-stage system with delta-winged spacecraft, and was generally complex. An attempt to re-simplify was made in the form of the DC-3, designed by Maxime Faget, who had designed the Mercury capsule among other vehicles. Numerous designs from a variety of commercial companies were also offered, but they generally fell by the wayside as each NASA lab pushed for its own version.
All of this was taking place in the midst of other NASA teams proposing a wide variety of post-Apollo missions, a number of which would cost as much as Apollo or more. As each of these projects fought for funding, the NASA budget was being severely constrained. Three major projects were eventually presented to Vice President Agnew in 1969. The shuttle project rose to the top, largely due to tireless campaigning by its supporters. By 1970 the shuttle had been selected as the one major project for the short-term post-Apollo time frame.
When funding for the program came into question, there were concerns that the project might be cancelled. This led to an effort to interest the US Air Force in using the shuttle for its missions as well. The Air Force was mildly interested, but demanded a much larger vehicle, far larger than the original concepts. To lower the development costs of the resulting designs, boosters were added, a throw-away fuel tank was adopted, and many other changes were made that greatly lowered reusability and added to vehicle and operational costs. With the Air Force's approval, the system emerged in its operational form.
In 1969, United States Vice President Agnew chaired the National Aeronautics and Space Council, which discussed post-Apollo options for manned space activities. The recommendations of the Council would heavily influence the decisions of the administration. The Council considered four major options:
- A human mission to Mars
- A follow-on lunar program
- A low earth orbital infrastructure program
- Discontinuing manned space activities
Based on the advice of the Space Council, President Nixon made the decision to pursue the low earth orbital infrastructure option. This program mainly consisted of construction of a space station, along with the development of a Space Shuttle. Funding restrictions precluded pursuing the development of both programs simultaneously, however. NASA chose to develop the Space Shuttle program first, and then planned to use the shuttle in order to construct and service a space station.
Shuttle design debate
During the early shuttle studies, there was a debate over the optimal shuttle design that best balanced capability, development cost, and operational cost. Initially a fully reusable design was preferred. This involved a very large winged manned booster which would carry a smaller winged manned orbiter. The booster vehicle would lift the orbiter to a certain altitude and speed, then separate. The booster would return and land horizontally, while the orbiter continued into low earth orbit. After completing its mission, the winged orbiter would reenter and land horizontally on a runway. The idea was that full reusability would promote lower operating costs.
However further studies showed a huge booster was needed to lift an orbiter with the desired payload capability. In space and aviation systems, cost is closely related to weight, so this meant the overall vehicle cost would be very high. Both booster and orbiter would have rocket engines plus jet engines for use within the atmosphere, plus separate fuel and control systems for each propulsion mode. In addition there were concurrent discussions about how much funding would be available to develop the program.
Another competing approach was maintaining the Saturn V production line and using its large payload capacity to launch a space station in a few payloads rather than many smaller shuttle payloads. A related concept was servicing the space station using the Air Force Titan III-M to launch a larger Gemini capsule, called "Big Gemini", rather than using the shuttle.
The shuttle supporters answered that given enough launches, a reusable system would have lower overall costs than disposable rockets. When total program costs are divided over a given number of launches, a high shuttle launch rate results in lower per-launch costs. This in turn would make the shuttle cost-competitive with or superior to expendable launchers. Some theoretical studies mentioned 55 shuttle launches per year; however, the final design chosen could not support that launch rate. In particular, the maximum external tank production rate was limited to 24 tanks per year at NASA's Michoud Assembly Facility.
The combined space station and Air Force payload requirements weren't sufficient to reach desired shuttle launch rates. Therefore, the plan was for all future U.S. space launches—space station, Air Force, commercial satellites, and scientific research—to use only the space shuttle. Most other expendable boosters would be phased out.
The reusable booster was eventually abandoned due to several factors: high price (combined with limited funding), technical complexity, and development risk. Instead, a partially (not fully) reusable design was selected, where an external propellant tank was discarded for each launch, and the booster rockets and shuttle orbiter were refurbished for reuse.
Initially the orbiter was to carry its own liquid propellant. However, studies showed that carrying the propellant in an external tank allowed a larger payload bay in an otherwise much smaller craft. It also meant throwing away the tank after each launch, but this was a relatively small portion of operating costs.
Earlier designs assumed the winged orbiter would also have jet engines to assist maneuvering in the atmosphere after reentering. However NASA ultimately chose a gliding orbiter, based partially on experience from previous rocket-then-glide vehicles such as the X-15 and lifting bodies. Omitting the jet engines and their fuel would reduce complexity and increase payload.
The last remaining debate was over the nature of the boosters. NASA examined four solutions to this problem: development of the existing Saturn lower stage, simple pressure-fed liquid-fuel engines of a new design, a large single solid rocket, or two (or more) smaller ones. Engineers at NASA's Marshall Space Flight Center (where the Saturn V development was managed) were particularly concerned about solid rocket reliability for manned missions.
Air Force involvement
During the 1960s the United States Air Force had both of its major piloted space projects, the X-20 Dyna-Soar and the Manned Orbiting Laboratory, canceled. This demonstrated its need to cooperate with NASA to place military astronauts in orbit. In turn, by serving Air Force needs, the Shuttle became a truly national system, carrying all military as well as civilian payloads.
NASA sought Air Force support for the shuttle. After the Six-Day War and the Soviet invasion of Czechoslovakia exposed limitations in the United States' satellite reconnaissance network, Air Force involvement emphasized the ability to launch spy satellites southward into polar orbit from Vandenberg AFB. This required higher energies than for lower inclination orbits. The Air Force also hoped that a shuttle could retrieve Soviet satellites and quickly land. It thus desired the ability to land at the Vandenberg liftoff point after one orbit, despite the earth rotating 1,000 miles beneath the orbital track. This required a larger delta wing size than the earlier simple "DC-3" shuttle. However NASA also desired this increased maneuvering capability since further studies had shown the DC-3 shuttle design had limitations not initially foreseen. The Air Force launched more than 200 satellite reconnaissance missions between 1959 and 1970, and the military's large volume of payloads would be valuable in making the shuttle more economical.:213–216
Despite the potential benefits for the Air Force, the military was satisfied with its expendable boosters and did not need or want the shuttle as much as NASA did. Because the space agency needed outside support, the Defense Department (DoD) and the National Reconnaissance Office (NRO) gained primary control over the design process. For example, NASA planned a 40-by-12-foot (12.2 by 3.7 m) cargo bay, but NRO specified a 60-by-15-foot (18.3 by 4.6 m) bay because it expected future intelligence satellites to become larger. When Faget again proposed a 12-foot wide payload bay, the military almost immediately insisted on retaining the 15-foot width. The Air Force also gained the equivalent of use of one of the shuttles for free despite not paying for the shuttle's development or construction. In exchange for the NASA concessions, the Air Force testified to the Senate Space Committee on the shuttle's behalf in March 1971.:216,232–234
As another incentive for the military to use the shuttle, Congress reportedly told DoD that it would not pay for any satellites not designed to fit into the shuttle cargo bay. Although NRO did not redesign existing satellites for the shuttle, the vehicle retained the ability to retrieve large cargos such as the KH-9 HEXAGON from orbit for refurbishment, and the agency studied resupplying the satellite in space.
The Air Force planned on having its own fleet of shuttles, and re-built a separate launch facility at Vandenberg, originally derived from the canceled Manned Orbiting Laboratory program, called Space Launch Complex Six (SLC-6). However, for various reasons, due in large part to the loss of the Space Shuttle Challenger on January 28, 1986, work on SLC-6 was eventually discontinued with no shuttle launches from that location ever taking place. SLC-6 was eventually used for launching the Lockheed Martin-built Athena expendable launch vehicles, which included the successful IKONOS commercial Earth observation satellite in September 1999, before being reconfigured once again to handle the new generation of Boeing Delta IVs. The first launch of the Delta IV Heavy from SLC-6 occurred in June 2006, launching NROL-22, a classified satellite for the U.S. National Reconnaissance Office (NRO).
While NASA would likely have chosen liquid boosters had it had complete control over the design, the Office of Management and Budget insisted on less expensive solid boosters due to their lower projected development costs.:416–423 While a liquid-fueled booster design provided better performance, lower per-flight costs, less environmental impact and less developmental risk, solid boosters were seen as requiring less funding to develop at a time when the Shuttle program had many different elements competing for limited development funds. The final design selected was a winged orbiter with three liquid-fueled engines, a large expendable external tank which held liquid propellant for these engines, and two reusable solid rocket boosters.
In the spring of 1972 Lockheed Aircraft, McDonnell Douglas, Grumman, and North American Rockwell submitted proposals to build the shuttle. The NASA selection group thought that Lockheed's shuttle was too complex and too expensive, and the company had no experience with building manned spacecraft. McDonnell Douglas's was too expensive and had technical issues. Grumman had an excellent design which also seemed too expensive. North American's shuttle had the lowest cost and most realistic cost projections, its design was the easiest for ongoing maintenance, and the Apollo 13 accident involving North American's Command/Service Module demonstrated its experience with electrical system failures. NASA announced its choice of North American on 26 July 1972.:429–432
Retrospection after three decades
Opinions differ on the lessons of the Shuttle. Development cost about US$6.744 billion in 1971 dollars, against the original $5.15 billion estimate given to President Richard M. Nixon in 1971. The operational costs, flight rate, payload capacity, and reliability, however, turned out very differently from what was anticipated.
- "Report of the Space Task Group, 1969". NASA. Retrieved 6 August 2009.
- Day, Dwayne A. "Big Black and the new bird: the NRO and the early Space Shuttle" The Space Review, 11 January 2010.
- Heppenheimer, T. A. (1998). The Space Shuttle Decision. NASA.
- Day, Dwayne A. "The spooks and the turkey" The Space Review, 20 November 2006.
- Aldridge, E. C. Pete Jr. (Fall 2005). "Assured Access: "The Bureaucratic Space War"" (PDF). 16.885j, "Aircraft Systems Engineering". Massachusetts Institute of Technology. Retrieved September 17, 2012.
- Day, Dwayne (2017-02-13). "Black ops and the shuttle (part 1)". The Space Review.
- Mead, L. M., et al. Space Shuttle System Program Definition Phase B Extension Final Report (NASA-CR-134338). Washington, DC: National Aeronautics and Space Administration, 1972.
- "Infio" (PDF). www.sqlite.org.
- "Columbia Accident Investigation Board public hearing". NASA. 2003-04-23. Archived from the original on 2006-08-12. Retrieved 2008-09-26.
- Wade, Mark. "Shuttle". Astronautix.com. Retrieved 12 November 2017.
- Dr. Wernher Von Braun - "The Spaceplane that can put YOU in orbit" (Popular Science, July 1970) (Google Books link)
- Astronautix space shuttle article
- NASA: The Space Shuttle Decision
- Lindroos, M. Introduction to Future Launch Vehicle Plans [1963–2001]: http://www.pmview.com/spaceodysseytwo/spacelvs/index.htm
- 10 Space Shuttles which never flew (Lockheed Starclipper, Chrysler SERV, Phase B Shuttles, Rockwell C-1057, Shuttle C, Air Launched Sortie Vehicle (ALSV), Hermes, Buran, Shuttle II, Lockheed Martin VentureStar) | https://en.wikipedia.org/wiki/Space_Shuttle_design_process | 18 |
16 | History of Germany
The concept of Germany as a distinct region in central Europe can be traced to Roman commander Julius Caesar, who referred to the unconquered area east of the Rhine as Germania, thus distinguishing it from Gaul (France), which he had conquered. The victory of the Germanic tribes in the Battle of the Teutoburg Forest (AD 9) prevented annexation by the Roman Empire, although the Roman provinces of Germania Superior and Germania Inferior were established along the Rhine. Following the Fall of the Western Roman Empire, the Franks conquered the other West Germanic tribes. When the Frankish Empire was divided among Charles the Great's heirs in 843, the eastern part became East Francia. In 962, Otto I became the first Holy Roman Emperor of the Holy Roman Empire, the medieval German state.
In the Late Middle Ages, the regional dukes, princes and bishops gained power at the expense of the emperors. Martin Luther led the Protestant Reformation against the Catholic Church after 1517, as the northern states became Protestant while the southern states remained Catholic. The two parts of the Holy Roman Empire clashed in the Thirty Years' War (1618–1648), which was ruinous to the roughly twenty million civilians living there; the war brought tremendous destruction to Germany, killing more than a quarter of the population and half of the male population in the German states. 1648 marked the effective end of the Holy Roman Empire and the beginning of the modern nation-state system, with Germany divided into numerous independent states, such as Prussia, Bavaria, Saxony and Austria, which also controlled land outside the area considered "Germany".
After the French Revolution and the Napoleonic Wars of 1803–1815, feudalism fell away and liberalism and nationalism clashed with reaction. The German revolutions of 1848–49 failed. The Industrial Revolution modernized the German economy, led to the rapid growth of cities and to the emergence of the Socialist movement in Germany. Prussia, with its capital Berlin, grew in power. German universities became world-class centers for science and humanities, while music and art flourished. The Unification of Germany (excluding Austria and the German-speaking areas of Switzerland) was achieved under the leadership of Chancellor Otto von Bismarck with the formation of the German Empire in 1871. This settled the rivalry between the kleindeutsche Lösung, the "small Germany" solution (Germany without Austria), and the großdeutsche Lösung, the "greater Germany" solution (Germany with Austria), in favor of the former. The new Reichstag, an elected parliament, had only a limited role in the imperial government. Germany joined the other powers in colonial expansion in Africa and the Pacific.
Germany was the dominant power on the continent. By 1900, its rapidly expanding industrial economy had passed Britain's, enabling a naval arms race. Germany led the Central Powers in World War I (1914–1918) against France, Great Britain, Russia and (by 1917) the United States. Defeated and partly occupied, Germany was forced to pay war reparations by the Treaty of Versailles and was stripped of its colonies as well as of Alsace-Lorraine and of territory ceded to the re-established Poland. The German Revolution of 1918–19 deposed the emperor and the various kings and princes, leading to the establishment of the Weimar Republic, an unstable parliamentary democracy.
In the early 1930s, the worldwide Great Depression hit Germany hard, as unemployment soared and people lost confidence in the government. In January 1933, Adolf Hitler was appointed Chancellor of Germany. The Nazi Party then began to eliminate all political opposition and consolidate its power, and Hitler quickly established a totalitarian regime. From the mid-1930s, Nazi Germany made increasingly aggressive territorial demands, threatening war if they were not met: first the remilitarization of the Rhineland in 1936, then the annexation of Austria in the Anschluss and of parts of Czechoslovakia under the Munich Agreement in 1938 (Hitler annexed further Czechoslovak territory in 1939). On 1 September 1939, Germany initiated World War II in Europe with the invasion of Poland. After forming a pact with the Soviet Union in 1939, Hitler and Stalin divided Eastern Europe. After a "Phoney War" in spring 1940, the Germans overran Denmark and Norway, the Low Countries and France, giving Germany control of nearly all of Western Europe. Hitler invaded the Soviet Union in June 1941.
Racism, especially antisemitism, was a central feature of the regime. In Germany, but predominantly in the German-occupied areas, the systematic genocide program known as the Holocaust killed six million Jews, as well as five million others, including German dissidents, disabled people, Poles, Roma, and Soviet citizens (Russian and non-Russian). In 1942, the German invasion of the Soviet Union faltered, and after the United States entered the war, Britain became the base for massive Anglo-American bombing of German cities. Germany fought the war on multiple fronts through 1942–1944; however, following the Allied invasion of Normandy (June 1944), the German Army was pushed back on all fronts until the final collapse in May 1945.
Under occupation by the Allies, German territories were split up, Austria was again made a separate country, denazification took place, and the Cold War resulted in the division of the country into democratic West Germany and communist East Germany. Millions of ethnic Germans were deported or fled from Communist areas into West Germany, which experienced rapid economic expansion and became the dominant economy in Western Europe. West Germany was rearmed in the 1950s under the auspices of NATO, but without access to nuclear weapons. The Franco-German friendship became the basis for the political integration of Western Europe in the European Union. In 1989 the Berlin Wall fell, and in 1990 East Germany was reunited with West Germany; the Soviet Union collapsed the following year. In 1998–1999, Germany was one of the founding countries of the eurozone. Germany remains one of the economic powerhouses of Europe, contributing about one quarter of the eurozone's annual gross domestic product. In the early 2010s, Germany played a critical role in trying to resolve the escalating euro crisis, especially with regard to Greece and other Southern European nations. In the middle of the decade, the country faced the European migrant crisis as the main receiver of asylum seekers from Syria and other troubled regions.
For more events, see Timeline of German history.
- 1 Prehistory
- 2 Germanic tribes, 750 BC – 768 AD
- 3 Middle Ages
- 4 Early modern Germany
- 5 1648–1815
- 6 1815–1867
- 6.1 Overview
- 6.2 German Confederation
- 6.3 Society and economy
- 6.4 Politics of restoration and revolution
- 7 German Empire, 1871–1918
- 7.1 Overview
- 7.2 Age of Bismarck
- 7.3 Wilhelminian Era
- 7.4 World War I
- 7.5 Homefront
- 7.6 Revolution 1918
- 8 Weimar Republic, 1919–1933
- 9 Nazi Germany, 1933–1945
- 10 Germany during the Cold War, 1945–1990
- 11 Federal Republic of Germany, 1990–present
- 12 Historiography
- 13 See also
- 14 Notes
- 15 References
- 16 Further reading
The discovery of the Mauer 1 mandible in 1907 shows that ancient humans were present in Germany at least 600,000 years ago. The oldest complete hunting weapons ever found anywhere in the world were discovered in a coal mine in Schöningen, Germany, in 1995, where three 380,000-year-old wooden javelins 6–7.5 feet (1.8–2.3 meters) long were unearthed. The Neander Valley in Germany was the location where the first ever non-modern human fossil was discovered and recognised, in 1856; the new species of human was named Neanderthal man. The Neanderthal 1 fossils are now known to be 40,000 years old. Evidence of modern humans of a similar age has been found in caves in the Swabian Jura near Ulm. The finds include 42,000-year-old bird-bone and mammoth-ivory flutes, which are the oldest musical instruments ever found; the 40,000-year-old Ice Age Löwenmensch figurine, which is the oldest uncontested figurative art ever discovered; and the 35,000-year-old Venus of Hohle Fels, which is the oldest uncontested human figurative art ever discovered.
Germanic tribes, 750 BC – 768 AD
Migration and conquest
The ethnogenesis of the Germanic tribes is assumed to have occurred during the Nordic Bronze Age, or at the latest during the Pre-Roman Iron Age. From their homes in southern Scandinavia and northern Germany the tribes began expanding south, east and west in the 1st century BC, coming into contact with the Celtic tribes of Gaul, as well as with Iranian, Baltic, and Slavic cultures in Central/Eastern Europe.
In the first years of the 1st century AD Roman legions conducted a long campaign in Germania, the area north of the Upper Danube and east of the Rhine, in an attempt to expand the Empire's frontiers and to shorten its frontier line. Rome subdued several Germanic tribes, such as the Cherusci. The tribes became familiar with Roman tactics of warfare while maintaining their tribal identity. In 9 AD a Cherusci chieftain known to the Romans as Arminius defeated a Roman army in the Battle of the Teutoburg Forest, a victory credited with stopping the Roman advance into Germanic territories and marking the beginning of recorded German history. That part of the territory of modern Germany that lay east of the Rhine remained outside the Roman Empire. By AD 100, the time of Tacitus's Germania, Germanic tribes had settled along the Roman frontier along the Rhine and the Danube (the Limes Germanicus), occupying most of the area of modern Germany; however, imperial Rome organised the territory later included in the modern states of Austria, Baden-Württemberg, southern Bavaria, southern Hesse, Saarland and the Rhineland as Roman provinces (Noricum, Raetia, and Germania). The Roman provinces in western Germany, Germania Inferior (with the capital situated at Colonia Claudia Ara Agrippinensium, modern Cologne) and Germania Superior (with its capital at Mogontiacum, modern Mainz), were formally established in 85 AD, after a long period of military occupation beginning in the reign of the Roman emperor Augustus (27 BC – 14 AD).
The 3rd century saw the emergence of a number of large West Germanic tribes: the Alamanni, Franks, Bavarii, Chatti, Saxons, Frisii, Sicambri, and Thuringii. Around 260 the Germanic peoples broke through the limes and the Danube frontier into Roman-controlled lands.
Seven large German-speaking tribes – the Visigoths, Ostrogoths, Vandals, Burgundians, Lombards, Saxons and Franks – moved west and witnessed the decline of the Roman Empire and the transformation of the old Western Roman Empire.
Christianity was spread to western Germany during the Roman era, and Christian religious structures such as the Aula Palatina of Trier were built during the reign of Constantine I (r. 306–337 AD). At the end of the 4th century the Huns invaded the unoccupied part of present-day Germany and the Migration Period began. Hunnic hegemony over Germania lasted until the death of Attila's son Dengizich in 469.
Stem Duchies and Marches
Stem duchies (tribal duchies) in Germany originated as the areas of the Germanic tribes of a given region. The concept of such duchies survived especially in the areas which in the mid-9th century would become part of East Francia (for example: Bavaria, Swabia, Saxony, Franconia, Thuringia) rather than further west in Middle Francia (for example: Burgundy, Lorraine).
In the 5th century, the Völkerwanderung (or Germanic migrations) brought a number of "barbarian" tribes into the failing Roman Empire. Tribes that became stem duchies were originally the Alamanni, the Thuringii, the Saxons, the Franks, the Burgundians, and the Rugii. In contrast to later duchies, these entities did not have strictly delineated administrative boundaries, but approximated the area of settlement of major Germanic tribes. Over the next few centuries, some tribes warred, migrated, and merged. Eventually the Franks subjugated all these tribes in Germania. However, remnants of several stem duchies survive today as states or regions in modern Western Europe countries: German states such as Bavaria and Saxony, German regions like Swabia, and French régions such as of Burgundy/Franche-Comté and Lorraine.
In the east, successive rulers of the German lands founded a series of border counties or marches. To the north, these included Lusatia, the North March (which would become Brandenburg and the heart of the future Prussia), and the Billung March. In the south, the marches included Carniola, Styria, and the March of Austria that would become Austria.
After the fall of the Western Roman Empire in the 5th century, the Franks, like other peoples of post-Roman Western Europe, emerged as a tribal confederacy, in their case in the Rhine-Weser region referred to as "Austrasia," now Franconia. They absorbed much former Roman territory as they spread west into Gaul beginning in 250, unlike the Alamanni to their south in Swabia. By 500, the Frankish king Clovis I, of the Merovingian dynasty, had united the Frankish tribes and ruled all of Gaul, and was proclaimed king some time from 509 to 511. Clovis, contrary to the tradition of Germanic rulers of the time, was baptized directly into Roman Catholicism rather than Arianism, and his successors would work closely with papal missionaries, among them Saint Boniface. The faith of the Franks, the vast size of Francia, and the Franks' control of the passes through the Alps led to an alliance between the Merovingian realm, which by 750 extended over Gaul and north-western Germany to include Swabia, Burgundy (and, by extension, western Switzerland), and the Pope in Rome against the Lombards, who now posed the greatest threat to the Holy See. A Papal envoy was sent to Charles Martel, Mayor of the Palace, in 732 following his victory at the Battle of Tours, though this alliance would lapse with Charles' death and be renewed after the Frankish Civil War.
The Merovingian kings of the Germanic Franks conquered northern Gaul in 486 AD. Swabia became a duchy under the Frankish Empire in 496, following the Battle of Tolbiac; in 530 the Saxons and the Franks destroyed the Kingdom of Thuringia. In the 5th and 6th centuries the Merovingian kings conquered several other Germanic tribes and kingdoms. King Chlothar I (reigned 558–561) ruled the greater part of what is now Germany and made expeditions into Saxony, while the Southeast of modern Germany remained under the influence of the Ostrogoths. Saxons inhabited the area down to the Unstrut River.
The Merovingians placed the various regions of their Frankish Empire under the control of semi-autonomous dukes – Franks or local rulers. Frankish colonists were encouraged to move to the newly conquered territories. While allowed to preserve their own laws, the local Germanic tribes faced pressure to adopt non-Arian Christianity.
The territories which would later become parts of modern Germany came under the region of Austrasia (meaning "eastern land"), the northeastern portion of the Kingdom of the Merovingian Franks. As a whole, Austrasia comprised parts of present-day France, Germany, Belgium, Luxembourg and the Netherlands. After the death of the Frankish king Clovis I in 511, his four sons partitioned his kingdom including Austrasia. Authority over Austrasia passed back and forth from autonomy to royal subjugation, as successive Merovingian kings alternately united and subdivided the Frankish lands.
In 718 Charles Martel, the Frankish Mayor of the Palace, made war against Saxony because of its help for the Neustrians. His son Carloman started a new war against Saxony in 743, because the Saxons gave aid to Duke Odilo of Bavaria.
In 751 Pippin III, Mayor of the Palace under the Merovingian king, himself assumed the title of king and was anointed by the Church. Now the Frankish kings were set up as protectors of the pope, and Charles the Great (who ruled the Franks from 768 to 814) launched a decades-long military campaign against the Franks' heathen rivals, the Saxons and the Avars. The campaigns and insurrections of the Saxon Wars lasted from 772 to 804. The Franks eventually overwhelmed the Saxons and Avars, forcibly converted the people to Christianity, and annexed their lands to the Carolingian Empire.
Foundation of the Holy Roman Empire
After the death of Frankish king Pepin the Short in 768, his oldest son Charlemagne ("Charles the Great") consolidated his power over and expanded the kingdom. In 773–74, Charlemagne ended 200 years of royal Lombard rule with the Siege of Pavia and installed himself as King of the Lombards; loyal Frankish nobles replaced the old Lombard elite following a rebellion in 776. The next 30 years of his reign were spent ruthlessly strengthening his power in Francia and conquering the territories of all west Germanic peoples, including the Saxons and the Baiuvarii (Bavarians). On Christmas Day, 800 AD, Charlemagne was crowned Emperor in Rome by Pope Leo III.
Fighting among Charlemagne's grandchildren caused the Carolingian empire to be partitioned into three parts in 843. The German region developed out of the East Frankish kingdom, East Francia. From 919 to 936, the Germanic peoples – Franks, Saxons, Swabians, and Bavarians – were united under Henry the Fowler, Duke of Saxony, who took the title of king. Imperial strongholds, called Kaiserpfalzen, became economic and cultural centers, of which Aachen was the most famous.
[Maps: the Kingdom of Germany within the Holy Roman Empire and within Europe circa 1004, after the incorporation of the Duchy of Bohemia; the Holy Roman Empire in the 10th century; and the Holy Roman Empire at its greatest territorial extent under the Hohenstaufen dynasty in the early and middle 13th century, shown in detail and superimposed on modern borders.]
Otto the Great
In 936, Otto I was crowned as king at Aachen; his coronation as emperor by Pope John XII at Rome in 962 inaugurated what became later known as the Holy Roman Empire, which came to be identified with Germany. Otto strengthened the royal authority by re-asserting the old Carolingian rights over ecclesiastical appointments. Otto wrested from the nobles the powers of appointment of the bishops and abbots, who controlled large land holdings. Additionally, Otto revived the old Carolingian program of appointing missionaries in the border lands. Otto continued to support celibacy for the higher clergy, so ecclesiastical appointments never became hereditary. By granting land to the abbots and bishops he appointed, Otto actually made these bishops into "princes of the Empire" (Reichsfürsten); in this way, Otto was able to establish a national church. Outside threats to the kingdom were contained with the decisive defeat of the Hungarian Magyars at the Battle of Lechfeld in 955. The Slavs between the Elbe and the Oder rivers were also subjugated. Otto marched on Rome and drove John XII from the papal throne and for years controlled the election of the pope, setting a firm precedent for imperial control of the papacy for years to come.
During the reign of Conrad II's son, Henry III (1039 to 1056), the empire supported the Cluniac reforms of the Church, the Peace of God, prohibition of simony (the purchase of clerical offices), and required celibacy of priests. Imperial authority over the Pope reached its peak. In the Investiture Controversy which began between Henry IV and Pope Gregory VII over appointments to ecclesiastical offices, the emperor was compelled to submit to the Pope at Canossa in 1077, after having been excommunicated. In 1122 a temporary reconciliation was reached between Henry V and the Pope with the Concordat of Worms. The consequences of the investiture dispute were a weakening of the Ottonian church (Reichskirche), and a strengthening of the Imperial secular princes.
The time between 1096 and 1291 was the age of the crusades. Knightly religious orders were established, including the Knights Templar, the Knights of St John (Knights Hospitaller), and the Teutonic Order.
The term sacrum imperium (Holy Empire) was first used under Friedrich I, documented first in 1157, but the words Sacrum Romanum Imperium, Holy Roman Empire, were only combined in July 1180 and would appear consistently on official documents only from 1254 onwards.
Long-distance trade in the Baltic intensified, as the major trading towns became drawn together in the Hanseatic League, under the leadership of Lübeck. The Hanseatic League was a business alliance of trading cities and their guilds that dominated trade along the coast of Northern Europe. Each of the Hanseatic cities had its own legal system and a degree of political autonomy. The chief cities were Cologne on the Rhine River, Hamburg and Bremen on the North Sea, and Lübeck on the Baltic. The League flourished from 1200 to 1500, and continued with lesser importance after that.
German colonisation and the chartering of new towns and villages began in largely Slav-inhabited territories east of the Elbe, such as Bohemia, Silesia, Pomerania, and Livonia. Beginning in 1226, the Teutonic Knights began their conquest of Prussia. The native Baltic Prussians were conquered and Christianized by the Knights with much warfare, and numerous German towns were established along the eastern shore of the Baltic Sea.
Church and state
Henry V (1086–1125), great-grandson of Conrad II, became Holy Roman Emperor in 1106 in the midst of a civil war. Hoping to gain complete control over the church inside the Empire, Henry V appointed Adalbert of Saarbrücken as the powerful archbishop of Mainz in 1111. Adalbert began to assert the powers of the Church against secular authorities, that is, the Emperor. This precipitated the "Crisis of 1111", part of the long-term Investiture Controversy. In 1137 the magnates turned back to the Hohenstaufen family for a candidate, Conrad III. Conrad III tried to divest Henry the Proud of his two duchies – Bavaria and Saxony – leading to war in southern Germany as the Empire divided into two factions. The first faction called themselves the "Welfs" or "Guelphs" after Henry the Proud's family, which was the ruling dynasty in Bavaria; the other faction was known as the "Waiblings." In this early period, the Welfs generally represented ecclesiastical independence under the papacy plus "particularism" (a strengthening of the local duchies against the central imperial authority). The Waiblings, on the other hand, stood for control of the Church by a strong central Imperial government.
Between 1152 and 1190, during the reign of Frederick I (Barbarossa), of the Hohenstaufen dynasty, an accommodation was reached with the rival Guelph party by the grant of the duchy of Bavaria to Henry the Lion, duke of Saxony. Austria became a separate duchy by virtue of the Privilegium Minus in 1156. Barbarossa tried to reassert his control over Italy. In 1177 a final reconciliation was reached between the emperor and the Pope in Venice.
From 1184 to 1186, the Hohenstaufen empire under Frederick I Barbarossa reached its peak in the Reichsfest (imperial celebrations) held at Mainz and the marriage of his son Henry in Milan to the Norman princess Constance of Sicily. The power of the feudal lords was undermined by the appointment of "ministerials" (unfree servants of the Emperor) as officials. Chivalry and the court life flowered, leading to a development of German culture and literature (see Wolfram von Eschenbach).
Between 1212 and 1250, Frederick II established a modern, professionally administered state from his base in Sicily. He resumed the conquest of Italy, leading to further conflict with the Papacy. In the Empire, extensive sovereign powers were granted to ecclesiastical and secular princes, leading to the rise of independent territorial states. The struggle with the Pope sapped the Empire's strength, as Frederick II was excommunicated three times. After his death, the Hohenstaufen dynasty fell, followed by an interregnum during which there was no Emperor.
The failure of negotiations between Emperor Louis IV and the papacy led in 1338 to the declaration at Rhense by six electors to the effect that election by all or the majority of the electors automatically conferred the royal title and rule over the empire, without papal confirmation. As a result, the monarch was no longer subject to papal approbation and became increasingly dependent on the favour of the electors. Between 1346 and 1378 Emperor Charles IV of Luxembourg, king of Bohemia, sought to restore the imperial authority. The Golden Bull of 1356 stipulated that in future the emperor was to be chosen by four secular electors and three spiritual electors. The secular electors were the King of Bohemia, the Count Palatine of the Rhine, the Duke of Saxony, and the Margrave of Brandenburg; the three spiritual electors were the Archbishops of Mainz, Trier, and Cologne.
Around 1350, Germany and almost the whole of Europe were ravaged by the Black Death. Jews were persecuted on religious and economic grounds; many fled to Poland. The Black Death is estimated to have killed 30–60 percent of Europe's population in the 14th century.
Change and reform
After the disasters of the 14th century – war, plague, and schism – early-modern European society gradually came into being as a result of economic, religious, and political changes. A money economy arose which provoked social discontent among knights and peasants. Gradually, a proto-capitalistic system evolved out of feudalism. The Fugger family gained prominence through commercial and financial activities and became financiers to both ecclesiastical and secular rulers. The knightly classes had established a monopoly on arms and military skill, but it was undermined by the introduction of mercenary armies and foot soldiers. Predatory activity by "robber knights" became common.
From 1438 the Habsburgs, who controlled most of the southeast of the Empire (more or less modern-day Austria and Slovenia, and Bohemia and Moravia after the death of King Louis II in 1526), maintained a constant grip on the position of the Holy Roman Emperor until 1806 (with the exception of the years between 1742 and 1745). This situation, however, gave rise to increased disunity among the Holy Roman Empire's territorial rulers and prevented sections of the country from coming together to form nations in the manner of France and England.
During his reign from 1493 to 1519, Maximilian I tried to reform the Empire. An Imperial supreme court (Reichskammergericht) was established, imperial taxes were levied, and the power of the Imperial Diet (Reichstag) was increased. The reforms, however, were frustrated by the continued territorial fragmentation of the Empire.
Towns and cities
The German lands had a population of about 5 or 6 million. The great majority were farmers, typically in a state of serfdom under the control of nobles and monasteries. A few towns were starting to emerge. From 1100, new towns were founded around imperial strongholds, castles, bishops' palaces, and monasteries. The towns began to establish municipal rights and liberties (see German town law). Several cities such as Cologne became Imperial Free Cities, which did not depend on princes or bishops, but were immediately subject to the Emperor. The towns were ruled by patricians: merchants carrying on long-distance trade. Craftsmen formed guilds, governed by strict rules, which sought to obtain control of the towns; a few were open to women. Society was divided into sharply demarcated classes: the clergy, physicians, merchants, various guilds of artisans, and peasants; full citizenship was not available to paupers. Political tensions arose from issues of taxation, public spending, regulation of business, and market supervision, as well as the limits of corporate autonomy.
Cologne's central location on the Rhine placed it at the intersection of the major trade routes between east and west, and this was the basis of the city's growth. The economic structures of medieval and early modern Cologne were characterized by the city's status as a major harbor and transport hub on the Rhine. It was the seat of the archbishops, who ruled the surrounding area and (from 1248 to 1880) built the great Cologne Cathedral, whose sacred relics made it a destination for many worshippers. By 1288 the city had secured its independence from the archbishop (who relocated to Bonn), and was ruled by its burghers.
From the early medieval period and continuing through to the 18th century, Germanic law assigned women to a subordinate and dependent position relative to men. Salic (Frankish) law, on which the laws of the German lands would be based, placed women at a disadvantage with regard to property and inheritance rights. Germanic widows required a male guardian to represent them in court. Unlike Anglo-Saxon law or the Visigothic Code, Salic law barred women from royal succession. Social status was based on military and biological roles, a reality demonstrated in rituals associated with newborns, when female infants were given a lesser value than male infants. The use of physical force against wives was condoned until the 18th century in Bavarian law.
Some women of means asserted their influence during the Middle Ages, typically in royal court or convent settings. Hildegard of Bingen, Gertrude the Great, Elisabeth of Bavaria (1478–1504), and Argula von Grumbach are among the women who pursued independent accomplishments in fields as diverse as medicine, music composition, religious writing, and government and military politics.
Science and culture
Benedictine abbess Hildegard von Bingen (1098–1179) wrote several influential theological, botanical, and medicinal texts, as well as letters, liturgical songs, poems, and arguably the oldest surviving morality play, while supervising the creation of brilliant miniature illuminations. About 100 years later, Walther von der Vogelweide (c. 1170 – c. 1230) became the most celebrated of the Middle High German lyric poets.
Around 1439, Johannes Gutenberg (c. 1398–1468) of Mainz developed movable-type printing and later issued the Gutenberg Bible. By introducing the printing press to Europe, he set off the Printing Revolution. Cheap printed books and pamphlets played central roles in the spread of the Reformation and the Scientific Revolution.
Around the transition from the 15th to the 16th century, Albrecht Dürer (1471–1528) from Nuremberg established his reputation across Europe as a painter, printmaker, mathematician, engraver, and theorist while still in his twenties, and secured his place as one of the most important figures of the Northern Renaissance.
The addition Nationis Germanicæ (of German Nation) to the emperor's title appeared first in the 15th century: in a 1486 law decreed by Frederick III and in 1512 in reference to the Imperial Diet in Cologne by Maximilian I. By then, the emperors had lost their influence in Italy and Burgundy. In 1525, the Heilbronn reform plan – the most advanced document of the German Peasants' War (Deutscher Bauernkrieg) – referred to the Reich as von Teutscher Nation (of German nation).
Early modern Germany
- See List of states in the Holy Roman Empire for subdivisions and the political structure
In the early 16th century there was much discontent occasioned by abuses such as indulgences in the Catholic Church, and a general desire for reform.
In 1517 the Reformation began with the publication of Martin Luther's 95 Theses; he posted them in the town square and gave copies of them to German nobles, but it is debated whether he nailed them to the church door in Wittenberg as is commonly said. The list detailed 95 assertions Luther believed to show corruption and misguidance within the Catholic Church. One often cited example, though perhaps not Luther's chief concern, is a condemnation of the selling of indulgences; another prominent point within the 95 Theses is Luther's disagreement both with the way in which the higher clergy, especially the pope, used and abused power, and with the very idea of the pope.
In 1521 Luther was outlawed at the Diet of Worms, but the Reformation spread rapidly, helped by the Emperor Charles V's wars with France and the Turks. Hiding in the Wartburg Castle, Luther translated the New Testament into German, helping to lay the basis of the modern German language. Notably, Luther spoke a dialect that had only minor importance in the German of his day; after the publication of his Bible, his dialect displaced the others and evolved into modern Standard German.
In 1524 the German Peasants' War broke out in Swabia, Franconia and Thuringia against ruling princes and lords, following the preaching of Reformers. But the revolts, which were assisted by war-experienced noblemen like Götz von Berlichingen and Florian Geyer (in Franconia), and by the theologian Thomas Münzer (in Thuringia), were soon repressed by the territorial princes. As many as 100,000 German peasants were massacred during the revolt. With the protestation of the Lutheran princes at the Imperial Diet of Speyer (1529) and rejection of the Lutheran "Augsburg Confession" at Augsburg (1530), a separate Lutheran church emerged.
From 1545 the Counter-Reformation began in Germany. The main force was provided by the Jesuit order, founded by the Spaniard Ignatius of Loyola. Central and northeastern Germany were by this time almost wholly Protestant, whereas western and southern Germany remained predominantly Catholic. In 1547, Holy Roman Emperor Charles V defeated the Schmalkaldic League, an alliance of Protestant rulers. The Peace of Augsburg in 1555 brought recognition of the Lutheran faith. But the treaty also stipulated that the religion of a state was to be that of its ruler (Cuius regio, eius religio).
Thirty Years' War, 1618–1648
From 1618 to 1648 the Thirty Years' War raged in the Holy Roman Empire. Its causes were the conflicts between Catholics and Protestants, the efforts by the various states within the Empire to increase their power, and the Catholic Emperor's attempt to achieve the religious and political unity of the Empire. The immediate occasion for the war was the uprising of the Protestant nobility of Bohemia against the emperor, but the conflict was widened into a European war by the intervention of King Christian IV of Denmark (1625–29), Gustavus Adolphus of Sweden (1630–48) and France under Cardinal Richelieu. Germany became the main theatre of war and the scene of the final conflict between France and the Habsburgs for predominance in Europe.
The fighting often was out of control, with marauding bands of hundreds or thousands of starving soldiers spreading plague, plunder, and murder. The armies that were under control moved back and forth across the countryside year after year, levying heavy taxes on cities, and seizing the animals and food stocks of the peasants without payment. The enormous social disruption over three decades caused a dramatic decline in population because of killings, disease, crop failures, declining birth rates and random destruction, and the out-migration of terrified people. One estimate shows a 38% drop from 16 million people in 1618 to 10 million by 1650, while another shows "only" a 20% drop from 20 million to 16 million. The Altmark and Württemberg regions were especially hard hit. It took generations for Germany to fully recover.
The war ended in 1648 with the Peace of Westphalia. Alsace was permanently lost to France, Pomerania was temporarily lost to Sweden, and the Netherlands officially left the Empire. Imperial power declined further as the states' rights were increased.
Culture and literacy
The German population reached about twenty million people, the great majority of whom were peasant farmers.
The Reformation was a triumph of literacy and the new printing press. Luther's translation of the Bible into German was a decisive moment in the spread of literacy, and it stimulated the printing and distribution of religious books and pamphlets. From 1517 onward, religious pamphlets flooded Germany and much of Europe; by 1530 over 10,000 publications are known, with a total of ten million copies. The Reformation was thus a media revolution. Luther strengthened his attacks on Rome by contrasting a "good" church with a "bad" one, and it became clear that print could be used to advance particular agendas in the Reformation. Reform writers used pre-Reformation styles, clichés, and stereotypes, adapting them as needed for their own purposes.

Especially effective were Luther's Small Catechism, for use by parents teaching their children, and Large Catechism, for pastors. Using the German vernacular, they expressed the Apostles' Creed in simpler, more personal, Trinitarian language. Illustrations in the newly translated Bible and in many tracts popularized Luther's ideas. Lucas Cranach the Elder (1472–1553), the great painter patronized by the electors of Saxony, was a close friend of Luther and illustrated Luther's theology for a popular audience. He dramatized Luther's views on the relationship between the Old and New Testaments, while remaining mindful of Luther's careful distinctions about proper and improper uses of visual imagery.
Luther's German translation of the Bible was also decisive for the German language and its evolution from Early New High German to Modern Standard German. His Bible promoted the development of non-local forms of language and exposed all speakers to forms of German from outside their own area.
Decisive scientific developments took place during the 16th and 17th centuries, especially in the fields of astronomy, mathematics, and physics. The German astronomical community played a dominant role in Europe at this time, as its scientists kept in close touch with one another. Several non-German scientists influenced this community as well, such as the astronomers Copernicus, who worked in Poland, and Tycho Brahe, who worked in Denmark and Bohemia; Copernicus, for example, was better known within the German community than elsewhere. The astronomer Johannes Kepler (1571–1630) from Weil der Stadt was one of the leaders of the 17th-century scientific revolution. He is best known for his laws of planetary motion, and his ideas influenced the contemporary Italian scientist Galileo Galilei and provided one of the foundations for Englishman Isaac Newton's theory of universal gravitation.
From 1640, Brandenburg-Prussia began to rise under the "Great Elector", Frederick William. The Peace of Westphalia in 1648 strengthened it further through the acquisition of East Pomerania. From 1713 to 1740, King Frederick William I, also known as the "Soldier King", established a highly centralized, militarized state with a heavily rural population of about three million (compared to the nine million in Austria).
In terms of the boundaries of 1914, Germany in 1700 had a population of 16 million, increasing slightly to 17 million by 1750, and growing more rapidly to 24 million by 1800. Wars continued, but they were no longer so devastating to the civilian population; famines and major epidemics did not occur, while increased agricultural productivity led to a higher birth rate and a lower death rate.
Louis XIV of France conquered parts of Alsace and Lorraine (1678–1681), and invaded and devastated the Electorate of the Palatinate (1688–1697) in the War of the Palatine Succession. Louis XIV benefited from the Empire's problems with the Turks, who were menacing Austria, but he ultimately had to relinquish the Electorate of the Palatinate. Afterwards Hungary was reconquered from the Turks, and Austria, under the Habsburgs, developed into a great power.
Frederick II "the Great" is best known for his military genius, his reorganization of Prussian armies, his battlefield successes, his enlightened rule, and especially his making Prussia one of the great powers, as well as escaping from almost certain national disaster at the last minute. He was especially a role model for an aggressively expanding Germany down to 1945, and even today retains his heroic image in Germany.
In the War of the Austrian Succession (1740–1748) Maria Theresa fought successfully for recognition of her succession to the throne. But in the Silesian Wars and in the Seven Years' War she had to cede 95% of Silesia to Frederick the Great. After the Peace of Hubertusburg in 1763 between Austria, Prussia and Saxony, Prussia won recognition as a great power, thus launching a century-long rivalry with Austria for the leadership of the German peoples.
From 1763, against resistance from the nobility and citizenry, an "enlightened absolutism" was established in Prussia and Austria, under which the ruler governed according to the best precepts of the philosophers. The economies developed and legal reforms were undertaken, including the abolition of torture and the improvement in the status of Jews. Emancipation of the peasants slowly began. Compulsory education was instituted.
In 1772–1795 Prussia took the lead in the partitions of Poland, with Austria and Russia splitting the rest. Prussia occupied the western territories of the former Polish–Lithuanian Commonwealth that surrounded existing Prussian holdings. Poland again became independent in 1918.
Completely overshadowed by Prussia and Austria, according to historian Hajo Holborn, the smaller German states were generally characterized by political lethargy and administrative inefficiency, often compounded by rulers who were more concerned with their mistresses and their hunting dogs than with the affairs of state. Bavaria was especially unfortunate in this regard; it was a rural land with very heavy debts and few growth centers. Saxony was in economically good shape, although its government was seriously mismanaged, and numerous wars had taken their toll. During the time when Prussia rose rapidly within Germany, Saxony was distracted by foreign affairs: the house of Wettin concentrated on acquiring and then holding on to the Polish throne, an effort that was ultimately unsuccessful. In Württemberg the duke lavished funds on palaces, mistresses, great celebrations, and hunting expeditions. Many of the city-states of Germany were run by bishops, who in reality were from powerful noble families and showed scant interest in religion. None developed a significant reputation for good government.
In Hesse-Kassel, the Landgrave Frederick II ruled from 1760 to 1785 as an enlightened despot and raised money by renting soldiers (called "Hessians") to Great Britain to help fight the American Revolutionary War. He combined Enlightenment ideas with Christian values, cameralist plans for central control of the economy, and a militaristic approach toward diplomacy.
Hanover did not have to support a lavish court—its rulers were also kings of England and resided in London. George III, elector (ruler) from 1760 to 1820, never once visited Hanover. The local nobility who ran the country opened the University of Göttingen in 1737; it soon became a world-class intellectual center.
The smaller states failed to form coalitions with each other, and were eventually overwhelmed by Prussia. Between 1807 and 1871, Prussia swallowed up many of the smaller states, with minimal protest, then went on to found the German Empire. In the process, Prussia became too heterogeneous, lost its identity, and by the 1930s had become an administrative shell of little importance.
In a heavily agrarian society, land ownership played a central role. Germany's nobles, especially those in the East – called Junkers – dominated not only the localities, but also the Prussian court, and especially the Prussian army. Increasingly after 1815, a centralized Prussian government based in Berlin took over the powers of the nobles, which in terms of control over the peasantry had been almost absolute. To help the nobility avoid indebtedness, Berlin set up a credit institution to provide capital loans in 1809, and extended the loan network to peasants in 1849. When the German Empire was established in 1871, the Junker nobility controlled the army and the Navy, the bureaucracy, and the royal court; they generally set governmental policies.
Peasants and rural life
Peasants continued to center their lives in the village, where they were members of a corporate body, and to help manage the community resources and monitor the community life. In the East, they were serfs who were bound permanently to parcels of land. In most of Germany, farming was handled by tenant farmers who paid rents and obligatory services to the landlord, who was typically a nobleman. Peasant leaders supervised the fields and ditches and grazing rights, maintained public order and morals, and supported a village court which handled minor offenses. Inside the family the patriarch made all the decisions, and tried to arrange advantageous marriages for his children. Much of the villages' communal life centered around church services and holy days. In Prussia, the peasants drew lots to choose conscripts required by the army. The noblemen handled external relationships and politics for the villages under their control, and were not typically involved in daily activities or decisions.
The emancipation of the serfs came in 1770–1830, beginning with Schleswig in 1780. The peasants were now ex-serfs and could own their land, buy and sell it, and move about freely. The nobles approved, since they could now buy land owned by the peasants. The chief reformer was Baron vom Stein (1757–1831), who was influenced by the Enlightenment, especially the free market ideas of Adam Smith. The end of serfdom raised the personal legal status of the peasantry. A bank was set up so that landowners could borrow government money to buy land from peasants (the peasants were not allowed to use it to borrow money to buy land until 1850). The result was that the large landowners obtained larger estates, and many peasants became landless tenants, or moved to the cities or to America. The other German states imitated Prussia after 1815. In sharp contrast to the violence that characterized land reform in the French Revolution, Germany handled it peacefully. In Schleswig the peasants, who had been influenced by the Enlightenment, played an active role; elsewhere they were largely passive. Indeed, for most peasants, customs and traditions continued largely unchanged, including the old habits of deference to the nobles, whose legal authority remained quite strong over the villagers. Although the peasants were no longer tied to the same land as serfs had been, the old paternalistic relationship in East Prussia lasted into the 20th century.
The agrarian reforms in northwestern Germany in the era 1770–1870 were driven by progressive governments and local elites. They abolished feudal obligations and divided collectively owned common land into private parcels, thus creating a more efficient market-oriented rural economy. This increased productivity and population growth, and it strengthened the traditional social order because wealthy peasants obtained most of the former common land, while the rural proletariat was left without land; many left for the cities or America. Meanwhile, the division of the common land served as a buffer preserving social peace between nobles and peasants. In the east the serfs were emancipated, but the Junker class maintained its large estates and monopolized political power.
Around 1800 the Catholic monasteries, which had large land holdings, were nationalized and sold off by the government. In Bavaria they had controlled 56% of the land.
Bourgeois values spread to rural Germany
A major social change, occurring between 1750 and 1850 depending on the region, was the end of the traditional "whole house" ("ganzes Haus") system, in which the owner's family lived together in one large building with the servants and craftsmen he employed. These households reorganized into separate living arrangements, and the owner's wife no longer took charge of all the females of the different families in the whole house. In the new system, farm owners became more professionalized and profit-oriented. They managed the fields and the household exterior according to the dictates of technology, science, and economics, while farm wives supervised family care and the household interior, to which strict standards of cleanliness, order, and thrift applied. The result was the spread of formerly urban bourgeois values into rural Germany.
The lesser families were now living separately on wages, and had to provide for their own supervision, health, schooling, and old age. At the same time, because of the demographic transition, there were far fewer children, allowing for much greater attention to each child. Increasingly the middle-class family valued its privacy and its inward direction, shedding too-close links with the world of work. Furthermore, the working classes, the middle classes, and the upper classes became physically, psychologically, and politically more separate. This allowed for the emergence of working-class organizations. It also allowed for declining religiosity among the working class, who were no longer monitored on a daily basis.
Before 1750 the German upper classes looked to France for intellectual, cultural and architectural leadership; French was the language of high society. By the mid-18th century the "Aufklärung" (German for "The Enlightenment") had transformed German high culture in music, philosophy, science and literature. Christian Wolff (1679–1754) was the pioneer as a writer who expounded the Enlightenment to German readers; he legitimized German as a philosophic language.
Prussia took the lead among the German states in sponsoring the political reforms that Enlightenment thinkers urged absolute rulers to adopt. However, there were important movements as well in the smaller states of Bavaria, Saxony, Hanover, and the Palatinate. In each case Enlightenment values became accepted and led to significant political and administrative reforms that laid the groundwork for the creation of modern states. The princes of Saxony, for example, carried out an impressive series of fundamental fiscal, administrative, judicial, educational, cultural, and general economic reforms. The reforms were aided by the country's strong urban structure and influential commercial groups, and modernized pre-1789 Saxony along the lines of classic Enlightenment principles.
Johann Gottfried von Herder (1744–1803) broke new ground in philosophy and poetry, as a leader of the Sturm und Drang movement of proto-Romanticism. Weimar Classicism ("Weimarer Klassik") was a cultural and literary movement based in Weimar that sought to establish a new humanism by synthesizing Romantic, classical, and Enlightenment ideas. The movement, from 1772 until 1805, involved Herder as well as polymath Johann Wolfgang von Goethe (1749–1832) and Friedrich Schiller (1759–1805), a poet and historian. Herder argued that every folk had its own particular identity, which was expressed in its language and culture. This legitimized the promotion of German language and culture and helped shape the development of German nationalism. Schiller's plays expressed the restless spirit of his generation, depicting the hero's struggle against social pressures and the force of destiny.
In remote Königsberg philosopher Immanuel Kant (1724–1804) tried to reconcile rationalism and religious belief, individual freedom, and political authority. Kant's work contained basic tensions that would continue to shape German thought – and indeed all of European philosophy – well into the 20th century.
The German Enlightenment won the support of princes, aristocrats, and the middle classes, and it permanently reshaped the culture.
Before the 19th century, young women lived under the economic and disciplinary authority of their fathers until they married and passed under the control of their husbands. In order to secure a satisfactory marriage, a woman needed to bring a substantial dowry. In the wealthier families, daughters received their dowry from their families, whereas the poorer women needed to work in order to save their wages so as to improve their chances to wed. Under the German laws, women had property rights over their dowries and inheritances, a valuable benefit as high mortality rates resulted in successive marriages. Before 1789, the majority of women lived confined to society’s private sphere, the home.
The Age of Reason did not bring much improvement for women: men, including Enlightenment aficionados, believed that women were naturally destined to be principally wives and mothers. Within the educated classes, there was the belief that women needed to be sufficiently educated to be intelligent and agreeable interlocutors to their husbands. Lower-class women, however, were expected to be economically productive in order to help their husbands make ends meet.
French Revolution, 1789–1815
German reaction to the French Revolution was mixed at first. German intellectuals celebrated the outbreak, hoping to see the triumph of Reason and The Enlightenment. The royal courts in Vienna and Berlin denounced the overthrow of the king and the threatened spread of notions of liberty, equality, and fraternity. By 1793, the execution of the French king and the onset of the Terror disillusioned the Bildungsbürgertum (educated middle classes). Reformers said the solution was to have faith in the ability of Germans to reform their laws and institutions in peaceful fashion.
Europe was racked by two decades of war revolving around France's efforts to spread its revolutionary ideals, and the opposition of reactionary royalty. War broke out in 1792 as Austria and Prussia invaded France, but were defeated at the Battle of Valmy (1792). The German lands saw armies marching back and forth, bringing devastation (albeit on a far lower scale than the Thirty Years' War, almost two centuries before), but also bringing new ideas of liberty and civil rights for the people. Prussia and Austria ended their failed wars with France but (with Russia) partitioned Poland among themselves in 1793 and 1795. The French took control of the Rhineland, imposed French-style reforms, abolished feudalism, established constitutions, promoted freedom of religion, emancipated Jews, opened the bureaucracy to ordinary citizens of talent, and forced the nobility to share power with the rising middle class. Napoleon created the Kingdom of Westphalia (1807–1813) as a model state. These reforms proved largely permanent and modernized the western parts of Germany. When the French tried to impose the French language, German opposition grew in intensity. A Second Coalition of Britain, Russia, and Austria then attacked France but failed. Napoleon established direct or indirect control over most of western Europe, including the German states apart from Prussia and Austria. The old Holy Roman Empire was little more than a farce; Napoleon simply abolished it in 1806 while forming new countries under his control. In Germany Napoleon set up the "Confederation of the Rhine," comprising most of the German states except Prussia and Austria.
Prussia tried to remain neutral while imposing tight controls on dissent, but with German nationalism sharply on the rise, it blundered by going to war with Napoleon in 1806. Its economy was weak, its leadership poor, and the once mighty Prussian army was a hollow shell. Napoleon easily crushed it at the Battle of Jena (1806), occupied Berlin, and made Prussia pay dearly: it lost its recently acquired territories in western Germany, its army was reduced to 42,000 men, no trade with Britain was allowed, and Berlin had to pay Paris heavy reparations and fund the French army of occupation. Saxony changed sides to support Napoleon and joined his Confederation of the Rhine; its elector was rewarded with the title of king and given a slice of Poland taken from Prussia.
After Napoleon's fiasco in Russia in 1812, including the deaths of many Germans in his invasion army, Prussia joined with Russia. Major battles followed in quick succession, and when Austria switched sides to oppose Napoleon, his situation grew tenuous. He was defeated at the great Battle of Leipzig in late 1813, and his empire started to collapse. One after another the German states switched to oppose Napoleon, but he rejected peace terms. Allied armies invaded France in early 1814, Paris fell, and in April Napoleon surrendered. He returned for the Hundred Days in 1815, but was finally defeated by the British and German armies at Waterloo. Prussia was the big winner at the Vienna peace conference, gaining extensive territory.
Europe in 1815 was a continent in a state of complete exhaustion following the French Revolutionary and Napoleonic Wars, and it started to turn from the liberal ideas of the Enlightenment and Revolutionary era toward Romanticism under such writers as Edmund Burke, Joseph de Maistre, and Novalis. Politically, the victorious allies set out to build a new balance of powers in order to keep the peace, and decided that a stable German region would be able to keep French imperialism at bay. To make this a possibility, the idea of reviving the defunct Holy Roman Empire was discarded, Napoleon's reorganization of the German states was kept, and the remaining princes were allowed to keep their titles. Earlier, in 1813, in return for guarantees from the Allies that the sovereignty and integrity of the southern German states (Baden, Württemberg, and Bavaria) would be preserved, those states had broken with the French.
The German Confederation (German: Deutscher Bund) was the loose association of 39 states created in 1815 to coordinate the economies of separate German-speaking countries. It acted as a buffer between the powerful states of Austria and Prussia. Britain approved of it because London felt that there was need for a stable, peaceful power in central Europe that could discourage aggressive moves by France or Russia. According to Lee (1985), most historians have judged the Confederation to be weak and ineffective, as well as an obstacle to German nationalist aspirations. It collapsed because of the rivalry between Prussia and Austria (known as German dualism), warfare, the 1848 revolution, and the inability of the multiple members to compromise. It was replaced by the North German Confederation in 1866.
Society and economy
The population of the German Confederation (excluding Austria) grew 60% from 1815 to 1865, from 21,000,000 to 34,000,000. The era saw the Demographic Transition take place in Germany: a shift from high birth rates and high death rates to low birth and death rates, as the country developed from a pre-industrial society into one with modernized agriculture supporting a fast-growing, industrialized urban economy. In previous centuries, the shortage of land meant that not everyone could marry, and marriages took place after age 25. After 1815, increased agricultural productivity meant a larger food supply, and a decline in famines, epidemics, and malnutrition. This allowed couples to marry earlier, and have more children. Arranged marriages became uncommon as young people were now allowed to choose their own marriage partners, subject to a veto by the parents. The high birthrate was offset by a very high rate of infant mortality and by emigration, especially after about 1840, mostly to the German settlements in the United States, as well as by periodic epidemics and harvest failures. The upper and middle classes began to practice birth control, and a little later so too did the peasants.
Before 1850 Germany lagged far behind the leaders in industrial development – Britain, France, and Belgium. In 1800, Germany's social structure was poorly suited to entrepreneurship or economic development. Domination by France during the era of the French Revolution (1790s to 1815), however, produced important institutional reforms, including the abolition of feudal restrictions on the sale of large landed estates, the reduction of the power of the guilds in the cities, and the introduction of a new, more efficient commercial law. Nevertheless, traditionalism remained strong in most of Germany. Until mid-century, the guilds, the landed aristocracy, the churches, and the government bureaucracies had so many rules and restrictions that entrepreneurship was held in low esteem and given little opportunity to develop. From the 1830s and 1840s, Prussia, Saxony, and other states reorganized agriculture. The introduction of sugar beets, turnips, and potatoes yielded a higher level of food production, which enabled a surplus rural population to move to industrial areas. The beginnings of the industrial revolution in Germany came in the textile industry, and were facilitated by the elimination of tariff barriers through the Zollverein, starting in 1834.
By mid-century, the German states were catching up. By 1900 Germany was a world leader in industrialization, along with Britain and the United States. Historian Thomas Nipperdey sums it up:
"On the whole, industrialisation in Germany must be considered to have been positive in its effects. Not only did it change society and the countryside, and finally the world... it created the modern world we live in. It solved the problems of population growth, under-employment and pauperism in a stagnating economy, and abolished dependency on the natural conditions of agriculture, and finally hunger. It created huge improvements in production and both short- and long-term improvements in living standards. However, in terms of social inequality, it can be assumed that it did not change the relative levels of income. Between 1815 and 1873 the statistical distribution of wealth was on the order of 77% to 23% for entrepreneurs and workers respectively. On the other hand, new problems arose, in the form of interrupted growth and new crises, such as urbanisation, 'alienation', new underclasses, proletariat and proletarian misery, new injustices and new masters and, eventually, class warfare."
Industrialization brought rural Germans to the factories, mines and railways. The population in 1800 was heavily rural, with only 10% of the people living in communities of 5000 or more people, and only 2% living in cities of more than 100,000. After 1815, the urban population grew rapidly, due primarily to the influx of young people from the rural areas. Berlin grew from 172,000 in 1800, to 826,000 in 1870; Hamburg grew from 130,000 to 290,000; Munich from 40,000 to 269,000; and Dresden from 60,000 to 177,000. Offsetting this growth, there was extensive emigration, especially to the United States. Emigration totaled 480,000 in the 1840s, 1,200,000 in the 1850s, and 780,000 in the 1860s.
The takeoff stage of economic development came with the railroad revolution in the 1840s, which opened up new markets for local products, created a pool of middle managers, increased the demand for engineers, architects, and skilled machinists, and stimulated investments in coal and iron. Political disunity of three dozen states and a pervasive conservatism made it difficult to build railways in the 1830s. However, by the 1840s, trunk lines did link the major cities; each German state was responsible for the lines within its own borders. In 1841, economist Friedrich List summed up the advantages to be derived from the development of the railway system.
Lacking a technological base at first, the Germans imported their engineering and hardware from Britain, but quickly learned the skills needed to operate and expand the railways, even though observers found that even as late as 1890 their engineering was inferior to Britain's. In many cities, the new railway shops were the centres of technological awareness and training, so that by 1850, Germany was self-sufficient in meeting the demands of railroad construction, and the railways were a major impetus for the growth of the new steel industry. German unification in 1871 stimulated consolidation, nationalisation into state-owned companies, and further rapid growth. Unlike the situation in France, the goal was support of industrialisation, and so heavy lines crisscrossed the Ruhr and other industrial districts, and provided good connections to the major ports of Hamburg and Bremen. By 1880, Germany had 9,400 locomotives pulling 43,000 passengers and 30,000 tons of freight a day, and had forged ahead of France.
Newspapers and magazines
A large number of newspapers and magazines flourished; a typical small city had one or two newspapers, while Berlin and Leipzig had dozens. The audience was limited to perhaps five percent of the adult men, chiefly from the aristocratic and middle classes, who followed politics. Liberal papers outnumbered conservative ones by a wide margin. Foreign governments bribed editors to guarantee a favorable image. Censorship was strict, and the government issued the political news that papers were supposed to report. After 1871, strict press laws were used by Bismarck to shut down the Socialist press and to threaten hostile editors. There were no national newspapers. Editors focused on political commentary, but also included a nonpolitical cultural page focused on the arts and high culture. Especially popular was the serialized novel, with a new chapter every week. Magazines were politically more influential, and attracted the leading intellectuals as authors.
Science and culture
German artists and intellectuals, heavily influenced by the French Revolution and by the great German poet and writer Johann Wolfgang von Goethe (1749–1832), turned to Romanticism after a period of Enlightenment. Philosophical thought was decisively shaped by Immanuel Kant (1724–1804). Ludwig van Beethoven (1770–1827) was the leading composer of Romantic music. His use of tonal architecture in such a way as to allow significant expansion of musical forms and structures was immediately recognized as bringing a new dimension to music. His later piano music and string quartets, especially, showed the way to a completely unexplored musical universe, and influenced Franz Schubert (1797–1828) and Robert Schumann (1810–1856). In opera, a new Romantic atmosphere combining supernatural terror and melodramatic plot in a folkloric context was first successfully achieved by Carl Maria von Weber (1786–1826) and perfected by Richard Wagner (1813–1883) in his Ring Cycle. The Brothers Grimm (1785–1863 & 1786–1859) not only collected folk stories into the popular Grimm's Fairy Tales, but were also linguists, now counted among the founding fathers of German studies. They were commissioned to begin the Deutsches Wörterbuch ("The German Dictionary"), which remains the most comprehensive work on the German language.
At the universities high-powered professors developed international reputations, especially in the humanities, led by history and philology, which brought a new historical perspective to the study of political history, theology, philosophy, language, and literature. With Georg Wilhelm Friedrich Hegel (1770–1831) in philosophy, Friedrich Schleiermacher (1768–1834) in theology and Leopold von Ranke (1795–1886) in history, the University of Berlin, founded in 1810, became the world's leading university. Von Ranke, for example, professionalized history and set the world standard for historiography. By the 1830s mathematics, physics, chemistry, and biology had emerged as world-class sciences, led by Alexander von Humboldt (1769–1859) in natural science and Carl Friedrich Gauss (1777–1855) in mathematics. Young intellectuals often turned to politics, but their support for the failed Revolution of 1848 forced many into exile.
Two main developments reshaped religion in Germany. Across the land, there was a movement to unite the larger Lutheran and the smaller Reformed Protestant churches. The churches themselves brought this about in Baden, Nassau, and Bavaria. However, in Prussia King Frederick William III was determined to handle unification entirely on his own terms, without consultation. His goal was to unify the Protestant churches and to impose a single standardized liturgy, organization, and even architecture. The long-term goal was to have fully centralized royal control of all the Protestant churches. In a series of proclamations over several decades, the Church of the Prussian Union was formed, bringing together the more numerous Lutherans and the less numerous Reformed Protestants. The government of Prussia now had full control over church affairs, with the king himself recognized as the leading bishop. Opposition to unification came from the "Old Lutherans" in Silesia, who clung tightly to the theological and liturgical forms they had followed since the days of Luther. The government attempted to crack down on them, so they went underground. Tens of thousands migrated to South Australia and especially to the United States, where they formed the Missouri Synod, which is still in operation as a conservative denomination. Finally, in 1845, the new king, Frederick William IV, offered a general amnesty and allowed the Old Lutherans to form a separate church association with only nominal government control.
From the religious point of view of the typical Catholic or Protestant, major changes were underway in terms of a much more personalized religiosity that focused on the individual more than the church or the ceremony. The rationalism of the late 18th century faded away, and there was a new emphasis on the psychology and feeling of the individual, especially in terms of contemplating sinfulness, redemption, and the mysteries and the revelations of Christianity. Pietistic revivals were common among Protestants. Among Catholics there was a sharp increase in popular pilgrimages. In 1844 alone, half a million pilgrims made a pilgrimage to the city of Trier in the Rhineland to view the Seamless Robe of Jesus, said to be the robe that Jesus wore on the way to his crucifixion. Catholic bishops in Germany had historically been largely independent of Rome, but now the Vatican exerted increasing control, a new "ultramontanism" of Catholics highly loyal to Rome. A sharp controversy broke out in 1837–38 in the largely Catholic Rhineland over the religious education of children of mixed marriages, where the mother was Catholic and the father Protestant. The government passed laws requiring that these children always be raised as Protestants, contrary to the Napoleonic law that had previously prevailed and had allowed the parents to make the decision, and it placed the Catholic archbishop under house arrest. In 1840, the new King Frederick William IV sought reconciliation and ended the controversy by agreeing to most of the Catholic demands. However, Catholic memories remained deep and led to a sense that Catholics always needed to stick together in the face of an untrustworthy government.
Politics of restoration and revolution
After the fall of Napoleon, Europe's statesmen convened in Vienna in 1815 for the reorganisation of European affairs, under the leadership of the Austrian Prince Metternich. The political principles agreed upon at this Congress of Vienna included the restoration, legitimacy and solidarity of rulers for the repression of revolutionary and nationalist ideas.
The German Confederation (German: Deutscher Bund) was founded, a loose union of 39 states (35 ruling princes and 4 free cities) under Austrian leadership, with a Federal Diet (German: Bundestag) meeting in Frankfurt am Main. It was a loose coalition that failed to satisfy most nationalists. The member states largely went their own way, and Austria had its own interests.
In 1819 a student radical assassinated the reactionary playwright August von Kotzebue, who had scoffed at liberal student organisations. In one of the few major actions of the German Confederation, Prince Metternich called a conference that issued the repressive Carlsbad Decrees, designed to suppress liberal agitation against the conservative governments of the German states. The Decrees terminated the fast-fading nationalist fraternities (German: Burschenschaften), removed liberal university professors, and expanded the censorship of the press. The decrees began the "persecution of the demagogues", which was directed against individuals who were accused of spreading revolutionary and nationalist ideas. Among the persecuted were the poet Ernst Moritz Arndt, the publisher Johann Joseph Görres and the "Father of Gymnastics" Ludwig Jahn.
In 1834 the Zollverein was established, a customs union between Prussia and most other German states, but excluding Austria. As industrialisation developed, the need for a unified German state with a uniform currency, legal system, and government became more and more obvious.
Growing discontent with the political and social order imposed by the Congress of Vienna led to the outbreak, in 1848, of the March Revolution in the German states. In May the German National Assembly (the Frankfurt Parliament) met in Frankfurt to draw up a national German constitution.
But the 1848 revolution turned out to be unsuccessful: King Frederick William IV of Prussia refused the imperial crown, the Frankfurt parliament was dissolved, the ruling princes repressed the risings by military force, and the German Confederation was re-established by 1850. Many leaders went into exile, including a number who went to the United States and became a political force there.
The 1850s were a period of extreme political reaction. Dissent was vigorously suppressed, and many Germans emigrated to America following the collapse of the 1848 uprisings. Frederick William IV became extremely depressed and melancholy during this period, and was surrounded by men who advocated clericalism and absolute divine monarchy. The Prussian people once again lost interest in politics. Prussia not only expanded its territory but began to industrialize rapidly, while maintaining a strong agricultural base.
Bismarck takes charge, 1862–1866
In 1857, King Frederick William IV suffered a stroke, and his brother William became regent, then King William I in 1861. Although conservative, William was far more pragmatic. His most significant act was the naming of Otto von Bismarck as Prussian minister president in 1862. The combination of Bismarck, Defense Minister Albrecht von Roon, and Field Marshal Helmuth von Moltke set the stage for victories over Denmark, Austria, and France, and led to the unification of Germany. The obstacle to German unification was Austria, and Bismarck solved the problem with a series of wars that united the German states north of Austria.
In 1863–64, disputes between Prussia and Denmark grew over Schleswig, which was not part of the German Confederation, and which Danish nationalists wanted to incorporate into the Danish kingdom. The dispute led to the short Second War of Schleswig in 1864. Prussia, joined by Austria, easily defeated Denmark and occupied Jutland. The Danes were forced to cede both the duchy of Schleswig and the duchy of Holstein to Austria and Prussia. In the aftermath, the management of the two duchies caused escalating tensions between Austria and Prussia: the former wanted the duchies to become an independent entity within the German Confederation, while the latter wanted to annex them. The Seven Weeks' War between Austria and Prussia broke out in June 1866. In July, the two armies clashed at Sadowa-Königgrätz (Bohemia) in an enormous battle involving half a million men. The Prussian breech-loading needle guns carried the day over the slow muzzle-loading rifles of the Austrians, who lost a quarter of their army in the battle. Austria ceded Venetia to Italy, but Bismarck was deliberately lenient with the loser in order to keep alive a long-term alliance with Austria in a subordinate role. Now the French faced an increasingly strong Prussia.
North German Confederation, 1866–1871
In 1866, the German Confederation was dissolved. In its place the North German Confederation (German: Norddeutscher Bund) was established, under the leadership of Prussia. Austria was excluded, and the Austrian influence in Germany that had begun in the 15th century finally came to an end. The North German Confederation was a transitional organisation that existed from 1867 to 1871, between the dissolution of the German Confederation and the founding of the German Empire.
German Empire, 1871–1918
After Otto von Bismarck united Germany into the "German Reich", he dominated German politics until 1890. Bismarck tried to foster alliances in Europe, on the one hand to contain France, and on the other hand to consolidate Germany's influence in Europe. On the domestic front, Bismarck tried to stem the rise of socialism with anti-socialist laws, combined with the introduction of health care and social security. At the same time he tried to reduce the political influence of the emancipated Catholic minority in the Kulturkampf, literally "culture struggle". The Catholics only grew stronger, forming the Center (Zentrum) Party. Germany grew rapidly in industrial and economic power, matching Britain by 1900. Its highly professional army was the best in the world, but the navy could never catch up with Britain's Royal Navy.
In 1888, the young and ambitious Kaiser Wilhelm II became emperor. He could not abide advice, least of all from the most experienced politician and diplomat in Europe, so he fired Bismarck. The Kaiser opposed Bismarck's careful foreign policy and wanted Germany to pursue colonialist policies, as Britain and France had been doing for decades, as well as build a navy that could match the British. The Kaiser promoted active colonization of Africa and Asia in those areas that were not already colonies of other European powers; his record was notoriously brutal and set the stage for genocide. The Kaiser took a mostly unilateral approach in Europe, with the Austro-Hungarian Empire as his main ally, and entered an arms race with Britain; this eventually led to a situation in which the assassination of the Austro-Hungarian heir to the throne could spark off World War I.
Age of Bismarck
The new empire
Disputes between France and Prussia increased. In 1868, the Spanish queen Isabella II was expelled by a revolution, leaving that country's throne vacant. When Prussia tried to put a Hohenzollern candidate, Prince Leopold, on the Spanish throne, the French angrily protested. In July 1870, France declared war on Prussia (the Franco-Prussian War). The French debacle was swift: a succession of German victories in northeastern France followed, one French army was besieged at Metz, and after a few weeks another was forced to capitulate at the fortress of Sedan, where French Emperor Napoleon III was taken prisoner. A republic was hastily proclaimed in Paris. The new government, realising that a victorious Germany would demand territorial acquisitions, resolved to fight on. They began to muster new armies, and the Germans settled down to a grim siege of Paris. The starving city surrendered in January 1871, and the Prussian army staged a victory parade in it. France was forced to pay indemnities of 5 billion francs and cede Alsace-Lorraine. It was a bitter peace that would leave the French thirsting for revenge.
During the Siege of Paris, the German princes assembled in the Hall of Mirrors of the Palace of Versailles and proclaimed the Prussian King Wilhelm I as the "German Emperor" on 18 January 1871. The German Empire was thus founded, with the German states unified into a single economic, political, and administrative unit. The empire comprised 25 states, three of which were Hanseatic free cities. It was dubbed the "Little German" solution, since it excluded the Austrian territories and the Habsburgs. Bismarck again served as Chancellor.
The new empire was characterised by great enthusiasm and vigor. There was a rash of heroic artwork in imitation of Greek and Roman styles, and the nation possessed a vigorous, growing industrial economy, whereas it had been rather poor in the past. The change from the slower, more tranquil order of the old Germany was very sudden, and many, especially the nobility, resented being displaced by the newly rich. And yet the nobles clung stubbornly to power, and they, not the bourgeoisie, continued to be the model that everyone wanted to imitate. In imperial Germany, possessing a collection of medals or wearing a uniform was valued more than the size of one's bank account, and Berlin never became a great cultural center as London, Paris, or Vienna were. The empire was distinctly authoritarian in tone, as the 1871 constitution gave the emperor exclusive power to appoint or dismiss the chancellor. He also was supreme commander-in-chief of the armed forces and final arbiter of foreign policy. But freedom of speech, association, and religion were nonetheless guaranteed by the constitution.
Bismarck's domestic policies as Chancellor of Germany were characterised by his fight against perceived enemies of the Protestant Prussian state. In the Kulturkampf (1871–1878), he tried to minimize the influence of the Roman Catholic Church and of its political arm, the Catholic Centre Party, through various measures, such as the introduction of civil marriage, but without much success. The Kulturkampf antagonised many Protestants as well as Catholics and was eventually abandoned. Millions of non-German subjects of the German Empire, such as the Polish, Danish, and French minorities, were discriminated against, and a policy of Germanisation was implemented.
The new Empire provided rich new opportunities at the top for the nobility of Prussia and the other states. They dominated the diplomatic service, the Army, and the civil service, and through their control of the civil service the aristocracy had a dominant voice in decisions affecting the universities and the churches. In 1914, Germany's diplomats comprised eight princes, 29 counts, 20 barons, 54 other nobles, and a mere 11 commoners; the commoners were chiefly the sons of leading industrialists or bankers. Almost all the diplomats had been socialized into the feudal student corps at the universities. The consular corps comprised commoners, but they had little decision-making ability. Since the days of Frederick the Great, it had been difficult for commoners to achieve high rank in the Army, which was considered a suitable role for young aristocrats. The new Constitution put military affairs under the direct control of the Emperor, and largely out of reach of the Reichstag. With its large corps of reserve officers across Germany, the military strengthened its role as "the estate which upheld the nation." Historian Hans-Ulrich Wehler says, "it became an almost separate, self-perpetuating caste."
Power was increasingly centralized in the national capital of Berlin (including neighboring Potsdam), where 7,000 aristocrats drew a sharp line between themselves and everyone else. Berlin's rapidly growing, rich middle class imitated the aristocracy and tried to marry into it. This closed system stood in contrast to Britain, where the top levels of the elite were far more open, with routes available through a public school education, Oxford and Cambridge, the Inns of Court, appointment to high office, or leadership in the House of Commons. A peerage could permanently boost a rich industrial family into the upper reaches of the establishment. In Germany, the process worked in the other direction, as the nobility became industrialists. For example, 221 of the 243 mines in Silesia were owned by nobles or by the King of Prussia himself.
Germany's middle class, based in the cities, grew exponentially, although it never gained the political power it had in France, Britain or the United States. The Bund Deutscher Frauenvereine (Association of German Women's Organizations or BDF) was established in 1894 to encompass the proliferating women's organizations that had sprung up since the 1860s. From the beginning the BDF was a bourgeois organization, its members working toward equality with men in such areas as education, financial opportunities, and political life. Working-class women were not welcome; they were organized by the Socialists.
The rising Socialist Workers' Party (later known as the Social Democratic Party of Germany, SPD) declared its aim of establishing, peacefully, a new socialist order through the transformation of existing political and social conditions. From 1878, Bismarck tried to repress the social democratic movement by outlawing the party's organisation, its assemblies, and most of its newspapers. When the ban lapsed in 1890, the Social Democrats emerged stronger than ever.
Bismarck built on a tradition of welfare programs in Prussia and Saxony that began as early as the 1840s. In the 1880s he introduced old age pensions, accident insurance, medical care, and disability insurance that formed the basis of the modern European welfare state. His paternalistic programs won the support of German industry because their goals were to win the support of the working classes for the Empire and reduce the outflow of emigrants to America, where wages were higher but welfare did not exist. Bismarck further won the support of both industry and skilled workers with his high tariff policies, which protected profits and wages from American competition, although they alienated the liberal intellectuals who wanted free trade.
Bismarck would not tolerate any power outside Germany, such as the papacy in Rome, having a say in German affairs. He launched a Kulturkampf ("culture war") against the power of the pope and the Catholic Church in 1873, but only in Prussia. This gained strong support from German liberals, who saw the Catholic Church as the bastion of reaction and their greatest enemy. The Catholic element, in turn, saw the National Liberals as its worst enemy and formed the Center Party.
Catholics, although nearly a third of the national population, were seldom allowed to hold major positions in the Imperial government, or the Prussian government. After 1871, there was a systematic purge of the remaining Catholics; in the powerful interior ministry, which handled all police affairs, the only Catholic was a messenger boy. Jews were likewise heavily discriminated against.
Most of the Kulturkampf was fought out in Prussia, but Imperial Germany passed the Pulpit Law which made it a crime for any cleric to discuss public issues in a way that displeased the government. Nearly all Catholic bishops, clergy, and laymen rejected the legality of the new laws and defiantly faced the increasingly heavy penalties and imprisonments imposed by Bismarck's government. Historian Anthony Steinhoff reports the casualty totals:
"As of 1878, only three of eight Prussian dioceses still had bishops, some 1,125 of 4,600 parishes were vacant, and nearly 1,800 priests ended up in jail or in exile. ... Finally, between 1872 and 1878, numerous Catholic newspapers were confiscated, Catholic associations and assemblies were dissolved, and Catholic civil servants were dismissed merely on the pretence of having Ultramontane sympathies."
Bismarck underestimated the resolve of the Catholic Church and did not foresee the extremes that this struggle would attain. The Catholic Church denounced the harsh new laws as anti-Catholic and mustered the support of its rank-and-file voters across Germany. In the following elections, the Center Party won a quarter of the seats in the Imperial Diet. The conflict wound down after Pope Pius IX died in 1878 and Bismarck broke with the Liberals to put his main emphasis on tariffs, foreign policy, and attacking socialists. Bismarck negotiated with the conciliatory new pope, Leo XIII. Peace was restored, the bishops returned, and the jailed clerics were released. Laws were toned down or taken back (Mitigation Laws 1880–1883 and Peace Laws 1886/87), but the laws concerning education, the civil registry of marriages, and religious disaffiliation remained in place. The Center Party gained strength and became an ally of Bismarck, especially when he attacked socialism.
Bismarck's post-1871 foreign policy was conservative and basically aimed at security and preventing the dreaded scenario of a Franco-Russian alliance, which would trap Germany between the two in a war.
The League of Three Emperors (Dreikaiserbund) was signed in 1872 by Russia, Austria, and Germany. It stated that republicanism and socialism were common enemies and that the three powers would discuss any matters concerning foreign policy. Bismarck needed good relations with Russia in order to keep France isolated. In 1877–1878, Russia fought a victorious war with the Ottoman Empire and attempted to impose the Treaty of San Stefano on it. This upset the British in particular, as they were long concerned with preserving the Ottoman Empire and preventing a Russian takeover of the Bosphorus Strait. Germany hosted the Congress of Berlin (1878), at which a more moderate peace settlement was agreed to. Germany had no direct interest in the Balkans, which were largely an Austrian and Russian sphere of influence, although King Carol of Romania was a German prince.
In 1879, Bismarck formed a Dual Alliance of Germany and Austria-Hungary, with the aim of mutual military assistance in the case of an attack from Russia, which was not satisfied with the agreement reached at the Congress of Berlin. The establishment of the Dual Alliance led Russia to take a more conciliatory stance, and in 1887, the so-called Reinsurance Treaty was signed between Germany and Russia: in it, the two powers agreed that each would remain neutral if the other went to war with a third power, unless Germany attacked France or Russia attacked Austria. Russia turned its attention eastward to Asia and remained largely inactive in European politics for the next 25 years. In 1882, Italy joined the Dual Alliance to form a Triple Alliance. Italy wanted to defend its interests in North Africa against France's colonial policy. In return for German and Austrian support, Italy committed itself to assisting Germany in the case of a French military attack.
For a long time, Bismarck had refused to give in to widespread public demands to give Germany "a place in the sun" through the acquisition of overseas colonies. In 1884 Bismarck gave way, and a number of colonies were established overseas. In Africa, these were Togo, the Cameroons, German South-West Africa, and German East Africa; in Oceania, they were German New Guinea, the Bismarck Archipelago, and the Marshall Islands. In fact, it was Bismarck himself who helped initiate the Berlin Conference of 1884–85, convened to "establish international guidelines for the acquisition of African territory" (see Colonisation of Africa). This conference gave impetus to the "Scramble for Africa" and the "New Imperialism".
In 1888, Emperor William I died at the age of 90. His son Frederick III, the hope of German liberals, was already stricken with throat cancer and died three months later. Frederick's son Wilhelm II then became emperor at the age of 29. Having had a problematic relationship with his liberal parents, Wilhelm had decided early on to renew the top level of the state. For two more years Bismarck remained in office, feigning continuity, but a difference of opinion on social policy served as an excuse for the young Kaiser to force the chancellor into retirement in March 1890. Following a principle known as "personal regiment" (German: persönliches Regiment), Wilhelm aimed to exercise influence on every government decision.
Alliances and diplomacy
The young Kaiser Wilhelm sought aggressively to increase Germany's influence in the world (Weltpolitik). After the removal of Bismarck, foreign policy lay in the hands of the erratic Kaiser, who played an increasingly reckless hand, and of the powerful foreign office under the leadership of Friedrich von Holstein. The foreign office argued that, first, a long-term coalition between France and Russia had to fall apart; second, Russia and Britain would never get together; and, finally, Britain would eventually seek an alliance with Germany. Germany refused to renew its treaties with Russia. But Russia did form a closer relationship with France in the Dual Alliance of 1894, since both were worried about the possibility of German aggression. Furthermore, Anglo-German relations cooled as Germany aggressively tried to build a new empire and engaged in a naval race with Britain; London refused to agree to the formal alliance that Germany sought. Berlin's analysis proved mistaken on every point, leading to Germany's increasing isolation and its dependence on the Triple Alliance, which brought together Germany, Austria-Hungary, and Italy. The Triple Alliance was undermined by differences between Austria and Italy, and in 1915 Italy switched sides.
Meanwhile, the German Navy under Admiral Alfred von Tirpitz had ambitions to rival the great British Navy and dramatically expanded its fleet in the early 20th century to protect the colonies and exert power worldwide. Tirpitz started a programme of warship construction in 1898. In 1890, Germany had gained the North Sea island of Heligoland from Britain in exchange for renouncing its claims to the East African island of Zanzibar, and proceeded to construct a great naval base there. This posed a direct threat to British hegemony on the seas, with the result that negotiations for an alliance between Germany and Britain broke down. The British, however, kept well ahead in the naval race with the introduction of the highly advanced new Dreadnought battleship in 1906.
In the First Moroccan Crisis of 1905, Germany nearly came to blows with Britain and France when France attempted to establish a protectorate over Morocco. The Germans were upset at not having been informed about French intentions and declared their support for Moroccan independence; Wilhelm II underscored the point with a highly provocative speech at Tangier. The following year, a conference was held at Algeciras, at which all of the European powers except Austria-Hungary (by now little more than a German satellite) sided with France. A compromise was brokered by the United States in which the French relinquished some, but not all, control over Morocco.
The Second Moroccan Crisis of 1911 saw another dispute over Morocco erupt when France tried to suppress a revolt there. Germany, still smarting from the previous quarrel, agreed to a settlement whereby the French ceded some territory in central Africa in exchange for Germany's renouncing any right to intervene in Moroccan affairs. This confirmed French control over Morocco, which became a full French protectorate in 1912.
The economy continued to industrialize and urbanize, with heavy industry – especially coal and steel – becoming important in the Ruhr, and manufacturing growing in the cities, the Ruhr, and Silesia. Perkins (1981) argues that more important than Bismarck's new tariff on imported grain was the introduction of the sugar beet as a main crop. Farmers quickly abandoned traditional, inefficient practices in favor of modern methods, including use of new fertilizers and new tools. The knowledge and tools gained from the intensive farming of sugar and other root crops made Germany the most efficient agricultural producer in Europe by 1914. Even so, farms were small in size, and women did much of the field work. An unintended consequence was the increased dependence on migratory, especially foreign, labor.
Based on its leadership in chemical research in the universities and industrial laboratories, Germany became dominant in the world's chemical industry in the late 19th century. At first, the production of dyes was critical.
Germany became Europe's leading steel-producing nation in the 1890s, thanks in large part to the protection from American and British competition afforded by tariffs and cartels. The leading firm was Friedrich Krupp AG, run by the Krupp family. The merger of several major firms into the Vereinigte Stahlwerke (United Steel Works) in 1926 was modeled on the U.S. Steel corporation in the United States. The new company emphasized rationalization of management structures and modernization of the technology; it employed a multi-divisional structure and used return on investment as its measure of success. By 1913, American and German exports dominated the world steel market, as Britain slipped to third place.
In machinery, iron and steel, and other industries, German firms avoided cut-throat competition and instead relied on trade associations. Germany was a world leader because of its prevailing "corporatist mentality", its strong bureaucratic tradition, and the encouragement of the government. These associations regulated competition and allowed small firms to function in the shadow of much larger companies.
Germany's unification process after 1871 was heavily dominated by men, who gave priority to the "Fatherland" theme and related male issues, such as military prowess. Nevertheless, middle-class women enrolled in the Bund Deutscher Frauenvereine (BDF; see above). Founded in 1894, it grew to include 137 separate women's rights groups by 1907 and remained active until 1933, when the Nazi regime disbanded it.
Formal organizations for promoting women's rights grew in numbers during the Wilhelmine period. German feminists began to network with feminists from other countries, and participated in the growth of international organizations.
By the 1890s, German colonial expansion in Asia and the Pacific (Kiauchau in China, the Marianas, the Caroline Islands, Samoa) led to frictions with Britain, Russia, Japan and the United States. The construction of the Baghdad Railway, financed by German banks, was designed to eventually connect Germany with the Turkish Empire and the Persian Gulf, but it also collided with British and Russian geopolitical interests.
The largest colonial enterprises were in Africa. The harsh treatment of the Nama and Herero in what is now Namibia in 1904–07 led to charges of genocide against the Germans. Historians are examining the links and precedents between the Herero and Namaqua genocide and the Holocaust of the 1940s.
World War I
Ethnic demands for nation states upset the balance between the empires that dominated Europe, leading to World War I, which started in August 1914. Germany stood behind its ally Austria in a confrontation with Serbia, but Serbia was under the protection of Russia, which was allied to France. Germany was the leader of the Central Powers, which included Austria-Hungary, the Ottoman Empire, and later Bulgaria; arrayed against them were the Allies, consisting chiefly of Russia, France, Britain, and in 1915 Italy.
In explaining why neutral Britain went to war with Germany, Kennedy (1980) recognized that it was critical for war that Germany become economically more powerful than Britain, but he downplayed the disputes over economic trade imperialism, the Baghdad Railway, confrontations in Central and Eastern Europe, highly charged political rhetoric, and domestic pressure groups. Germany's reliance time and again on sheer power, while Britain increasingly appealed to moral sensibilities, also played a role: the invasion of Belgium was treated in Berlin as a necessary military tactic and in London as a profound moral crime. Kennedy argues that the invasion of Belgium was not itself decisive, because the British decision had already been made and the British were more concerned with the fate of France (pp. 457–62). By far the main reason, he argues, was London's fear that a repeat of 1870 — when Prussia and the German states smashed France — would mean that Germany, with a powerful army and navy, would control the English Channel and northwest France. British policy makers insisted that would be a catastrophe for British security.
In the west, Germany sought a quick victory by encircling Paris using the Schlieffen Plan. But it failed due to Belgian resistance, Berlin's diversion of troops, and very stiff French resistance on the Marne, north of Paris.
The Western Front became an extremely bloody battleground of trench warfare. The stalemate lasted from 1914 until early 1918, with ferocious battles that moved forces a few hundred yards at best along a line that stretched from the North Sea to the Swiss border. The British imposed a tight naval blockade in the North Sea which lasted until 1919, sharply reducing Germany's overseas access to raw materials and foodstuffs. Food scarcity became a serious problem by 1917.
The United States joined with the Allies in April 1917. The entry of the United States into the war – following Germany's declaration of unrestricted submarine warfare – marked a decisive turning-point against Germany.
The fighting on the Eastern Front was more wide open. In the east, there were decisive victories against the Russian army, the trapping and defeat of large parts of the Russian contingent at the Battle of Tannenberg, followed by huge Austrian and German successes. The breakdown of Russian forces, exacerbated by internal turmoil caused by the 1917 Russian Revolution, led to the Treaty of Brest-Litovsk, which the Bolsheviks were forced to sign on 3 March 1918 as Russia withdrew from the war. It gave Germany control of Eastern Europe. Spencer Tucker says, "The German General Staff had formulated extraordinarily harsh terms that shocked even the German negotiator." When Germany later complained that the Treaty of Versailles of 1919 was too harsh, the Allies responded that it was more benign than Brest-Litovsk.
By defeating Russia in 1917, Germany was able to bring hundreds of thousands of combat troops from the east to the Western Front, giving it a numerical advantage over the Allies. By retraining the soldiers in new storm-trooper tactics, the Germans expected to unfreeze the battlefield and win a decisive victory before the American army arrived in strength. However, the spring offensives all failed, as the Allies fell back and regrouped and the Germans lacked the reserves necessary to consolidate their gains. In the summer, with the Americans arriving at 10,000 a day and the German reserves exhausted, it was only a matter of time before multiple Allied offensives destroyed the German army.
Unexpectedly, Germany had plunged into World War I (1914–1918). It rapidly mobilized its civilian economy for the war effort, but the economy was handicapped by the British blockade that cut off food supplies. Conditions deteriorated rapidly on the home front, with severe food shortages reported in all urban areas. Causes included the transfer of many farmers and food workers into the military, an overburdened railroad system, shortages of coal, and the British blockade that cut off imports from abroad. The winter of 1916–1917 was known as the "turnip winter", because that vegetable, usually fed to livestock, was used by people as a substitute for potatoes and meat, which were increasingly scarce. Thousands of soup kitchens were opened to feed the hungry, who grumbled that the farmers were keeping the food for themselves. Even the army had to cut the rations for soldiers. The morale of both civilians and soldiers continued to sink.
1918 also brought the deadly Spanish flu pandemic, which struck hard at a population weakened by years of malnutrition.
The end of October 1918, in Wilhelmshaven, in northern Germany, saw the beginning of the German Revolution of 1918–19. Units of the German Navy refused to set sail for a last, large-scale operation in a war which they saw as good as lost, initiating the uprising. On 3 November, the revolt spread to other cities and states of the country, in many of which workers' and soldiers' councils were established. Meanwhile, Hindenburg and the senior commanders had lost confidence in the Kaiser and his government. The Kaiser and all German ruling princes abdicated. On 9 November 1918, the Social Democrat Philipp Scheidemann proclaimed a Republic.
On 11 November, the Compiègne armistice was signed, ending the war. The Treaty of Versailles was signed on 28 June 1919. Germany was to cede Alsace-Lorraine to France. Eupen-Malmédy would temporarily be ceded to Belgium, with a plebiscite to be held to allow the people the choice of the territory either remaining with Belgium or being returned to German control; following the plebiscite, the territory was allotted to Belgium on 20 September 1920. The future of North Schleswig was to be decided by plebiscite: in the Schleswig plebiscites, the Danish-speaking population in the north voted for Denmark and the southern, German-speaking part voted for Germany, so Schleswig was partitioned. Holstein remained German without a referendum. Memel was ceded to the Allied and Associated Powers, which were to decide the future of the area. On 9 January 1923, Lithuanian forces invaded the territory, and following negotiations, on 8 May 1924, the League of Nations ratified the annexation on the grounds that Lithuania accepted the Memel Statute, a power-sharing arrangement to protect non-Lithuanians in the territory and its autonomous status. Until 1929, German-Lithuanian co-operation increased and this power-sharing arrangement worked. Poland was restored, and most of the provinces of Posen and West Prussia, as well as some areas of Upper Silesia, were reincorporated into the re-formed country after plebiscites and independence uprisings. All German colonies were to be handed over to the League of Nations, which then assigned them as Mandates to Australia, France, Japan, New Zealand, Portugal, and the United Kingdom; the new holders were required to act as disinterested trustees over each region, promoting the welfare of its inhabitants until they were able to govern themselves.

The left and right banks of the Rhine were to be permanently demilitarised. The industrially important Saarland was to be governed by the League of Nations for 15 years and its coalfields administered by France; at the end of that time, a plebiscite was to determine the Saar's future status. To ensure execution of the treaty's terms, Allied troops would occupy the left (German) bank of the Rhine for a period of 5–15 years. The German army was to be limited to 100,000 officers and men; the general staff was to be dissolved; vast quantities of war materiel were to be handed over and the manufacture of munitions rigidly curtailed. The navy was to be similarly reduced, and no military aircraft were allowed. Germany was also required to pay reparations for all civilian damage caused during the war.
Weimar Republic, 1919–1933
The humiliating peace terms of the Treaty of Versailles provoked bitter indignation throughout Germany and seriously weakened the new democratic regime. Democracy's most determined enemies organized quickly. In December 1918, the Communist Party of Germany (KPD) was founded, and in 1919 it tried and failed to overthrow the new republic. Adolf Hitler in 1919 took control of the new National Socialist German Workers' Party (NSDAP), which failed in a coup in Munich in 1923. Both parties, as well as parties supporting the republic, built militant auxiliaries that engaged in increasingly violent street battles. Electoral support for both parties increased after 1929 as the Great Depression hit the economy hard, producing many unemployed men who became available for the paramilitary units. The Nazis (formerly the German Workers' Party), with a mostly rural and lower-middle-class base, overthrew the Weimar regime and ruled Germany in 1933–1945; the KPD, with a mostly urban and working-class base, came to power (in the East) in 1945–1989.
The early years
On 30 December 1918, the Communist Party of Germany was founded by the Spartacus League, which had split from the Social Democratic Party during the war. It was headed by Rosa Luxemburg and Karl Liebknecht and rejected the parliamentary system. In 1920, about 300,000 members of the Independent Social Democratic Party of Germany joined the party, transforming it into a mass organization. The Communist Party had a following of about 10% of the electorate.
In the first months of 1920, the Reichswehr was to be reduced to 100,000 men, in accordance with the Treaty of Versailles. This included the dissolution of many Freikorps, units made up of volunteers. In an attempted coup d'état in March 1920, the Kapp Putsch, the extreme right-wing politician Wolfgang Kapp had Freikorps soldiers march on Berlin and proclaimed himself Chancellor of the Reich. After four days the coup collapsed, owing to popular opposition and lack of support from the civil servants and the officers. Other cities were shaken by strikes and rebellions, which were bloodily suppressed.
Germany was the first state to establish diplomatic relations with the new Soviet Union. Under the Treaty of Rapallo, Germany accorded the Soviet Union de jure recognition, and the two signatories mutually cancelled all pre-war debts and renounced war claims.
When Germany defaulted on its reparation payments, French and Belgian troops occupied the heavily industrialised Ruhr district (January 1923). The German government encouraged the population of the Ruhr to mount passive resistance: shops would not sell goods to the foreign soldiers, coal-miners would not dig for the foreign troops, and trams in which members of the occupation army had taken seats were left abandoned in the middle of the street. The passive resistance proved effective insofar as the occupation became a loss-making deal for the French government. But the Ruhr struggle also fuelled hyperinflation, and many who lost their entire fortunes became bitter enemies of the Weimar Republic and voters for the anti-democratic right. See 1920s German inflation.
In September 1923, the deteriorating economic conditions led Chancellor Gustav Stresemann to call an end to the passive resistance in the Ruhr. In November, his government introduced a new currency, the Rentenmark (later: Reichsmark), together with other measures to stop the hyperinflation. In the following six years the economic situation improved. In 1928, Germany's industrial production even regained the pre-war levels of 1913.
In October 1925 the Treaty of Locarno was signed by Germany, France, Belgium, Britain, and Italy; it recognised Germany's borders with France and Belgium. Moreover, Britain, Italy, and Belgium undertook to assist France in the event that German troops marched into the demilitarised Rhineland. Locarno paved the way for Germany's admission to the League of Nations in 1926.
The actual amount of reparations that Germany was obliged to pay out was not the 132 billion marks decided in the London Schedule of 1921 but rather the 50 billion marks stipulated in the A and B Bonds. Historian Sally Marks says the remaining 82 billion marks in "C bonds" were entirely chimerical—a device to fool the public into thinking Germany would pay much more. The actual total payout from 1920 to 1931 (when payments were suspended indefinitely) was 20 billion German gold marks, worth about $5 billion or £1 billion. Of this, 12.5 billion was cash that came mostly from loans from New York bankers; the rest was goods such as coal and chemicals, or assets such as railway equipment. The reparations bill was fixed in 1921 on the basis of German capacity to pay, not on the basis of Allied claims. The highly publicized rhetoric of 1919 about paying for all the damages and all the veterans' benefits was irrelevant to the total, but it did determine how the recipients spent their share. Germany owed reparations chiefly to France, Britain, Italy, and Belgium; the US received $100 million.
Economic collapse and political problems, 1929–1933
The Wall Street Crash of 1929 marked the beginning of the worldwide Great Depression, which hit Germany as hard as any nation. In July 1931, the Darmstädter und Nationalbank, one of the biggest German banks, failed. In early 1932, the number of unemployed had soared to more than 6,000,000.
On top of the collapsing economy came a political crisis: the political parties represented in the Reichstag were unable to build a governing majority in the face of escalating extremism from the far right (the Nazis, NSDAP) and the far left (the Communists, KPD). In March 1930, President Hindenburg appointed Heinrich Brüning Chancellor, invoking article 48 of the Weimar constitution, which allowed the president to rule by emergency decree, bypassing Parliament. To push through his package of austerity measures against a majority of Social Democrats, Communists, and the NSDAP, Brüning made use of emergency decrees and dissolved Parliament. In the presidential election held in two rounds in March and April 1932, Hindenburg was re-elected.
The Nazi Party was the largest party in the national elections of 1932. On 31 July 1932 it received 37.3% of the votes, and in the election of 6 November 1932 it received less, but still the largest share, 33.1%, making it the biggest party in the Reichstag. The Communist KPD came third, with 15%. Together, the anti-democratic parties of the far right and far left were now able to hold the majority of seats in Parliament, but they were at sword's point with each other, fighting it out in the streets. The Nazis were particularly successful among Protestants, among unemployed young voters, among the lower middle class in the cities, and among the rural population; they were weakest in Catholic areas and in large cities. On 30 January 1933, pressured by former Chancellor Franz von Papen and other conservatives, President Hindenburg appointed Hitler as Chancellor.
Science and culture
The Weimar years saw a flowering of German science and high culture, before the Nazi regime brought a decline in scientific and cultural life in Germany and forced many renowned scientists and writers to flee. German recipients dominated the Nobel prizes in science. Germany dominated the world of physics before 1933, led by Albert Einstein, Max Planck, and Werner Heisenberg, heirs to a tradition that included Hermann von Helmholtz, Joseph von Fraunhofer, Daniel Gabriel Fahrenheit, and Wilhelm Conrad Röntgen. Chemistry likewise was dominated by German professors and researchers at the great chemical companies such as BASF and Bayer, and by figures like Fritz Haber. Theoretical mathematicians included Carl Friedrich Gauss in the 19th century and David Hilbert in the 20th century. Karl Benz, the inventor of the automobile, was one of the pivotal figures of engineering.
Among the most important German writers were Thomas Mann (1875–1955), Hermann Hesse (1877–1962) and Bertolt Brecht (1898–1956). The pessimistic historian Oswald Spengler wrote The Decline of the West (1918–23) on the inevitable decay of Western Civilization, and influenced intellectuals in Germany such as Martin Heidegger, Max Scheler, and the Frankfurt School, as well as intellectuals around the world.
After 1933, Nazi proponents of "Aryan physics," led by the Nobel Prize-winners Johannes Stark and Philipp Lenard, attacked Einstein's theory of relativity as a degenerate example of Jewish materialism in the realm of science. Many scientists and humanists emigrated; Einstein moved permanently to the U.S. but some of the others returned after 1945.
Nazi Germany, 1933–1945
The Nazi regime restored economic prosperity and ended mass unemployment using heavy spending on the military, while suppressing labor unions and strikes. The return of prosperity gave the Nazi Party enormous popularity, with only minor, isolated and subsequently unsuccessful cases of resistance among the German population over the 12 years of rule. The Gestapo (secret police) under Heinrich Himmler destroyed the political opposition and persecuted the Jews, trying to force them into exile, while taking their property. The Party took control of the courts, local government, and all civic organizations except the Protestant and Catholic churches. All expressions of public opinion were controlled by Hitler's propaganda minister, Joseph Goebbels, who made effective use of film, mass rallies, and Hitler's hypnotic speaking. The Nazi state idolized Hitler as its Führer (leader), putting all powers in his hands. Nazi propaganda centered on Hitler and was quite effective in creating what historians called the "Hitler Myth"—that Hitler was all-wise and that any mistakes or failures by others would be corrected when brought to his attention. In fact Hitler had a narrow range of interests and decision making was diffused among overlapping, feuding power centers; on some issues he was passive, simply assenting to pressures from whoever had his ear. All top officials reported to Hitler and followed his basic policies, but they had considerable autonomy on a daily basis.
Establishment of the Nazi regime
In order to secure a majority for his Nazi Party in the Reichstag, Hitler called for new elections. On the evening of 27 February 1933, the Reichstag building was set afire. Hitler swiftly blamed an alleged Communist uprising, and convinced President Hindenburg to sign the Reichstag Fire Decree, which rescinded most German civil liberties, including rights of assembly and freedom of the press. The decree allowed the police to detain people indefinitely without charges or a court order. Four thousand members of the Communist Party of Germany were arrested. Communist agitation was banned, but at this time not the Communist Party itself. Communists and Socialists were brought into hastily prepared Nazi concentration camps such as Kemna concentration camp, where they were at the mercy of the Gestapo, the newly established secret police force. Communist Reichstag deputies were taken into protective custody (despite their constitutional privileges).
Despite the terror and unprecedented propaganda, the last free general elections, on 5 March 1933, gave the NSDAP 43.9% of the vote but failed to deliver the majority Hitler had hoped for. Together with the German National People's Party (DNVP), however, he was able to form a slim majority government. In March 1933, the Enabling Act, an amendment to the Weimar Constitution, passed in the Reichstag by a vote of 444 to 94. To obtain the two-thirds majority needed to pass the bill, accommodations were made to the Catholic Centre Party, the Nazis used the provisions of the Reichstag Fire Decree to keep several Social Democratic deputies from attending, and the Communist deputies had already been banned. This amendment allowed Hitler and his cabinet to pass laws—even laws that violated the constitution—without the consent of the president or the Reichstag. The Enabling Act formed the basis for the dictatorship and the dissolution of the Länder; the trade unions and all political parties other than the Nazi Party were suppressed. A centralised totalitarian state was established, no longer based on the liberal Weimar constitution, and Germany left the League of Nations. The coalition parliament was rigged by defining the absence of arrested and murdered deputies as voluntary and therefore cause for their exclusion as wilful absentees. Subsequently, in July, the Centre Party dissolved itself in a quid pro quo with the anti-communist Pope Pius XI for the Reichskonkordat; by these manoeuvres Hitler achieved the movement of these Catholic voters into the Nazi Party and a long-awaited international diplomatic acceptance of his regime. According to Professor Dick Geary, the Nazis won a larger share of the vote in Protestant areas than in Catholic areas in the elections held between 1928 and November 1932. The Communist Party was proscribed in April 1933.
Thereafter, the Chief of Staff of the SA, Ernst Röhm, demanded more political and military power for himself and his men, which caused anxiety among military, industrial, and political leaders. In response, Hitler used the SS and Gestapo to purge the entire SA leadership, along with a number of his political adversaries (such as Gregor Strasser and former chancellor Kurt von Schleicher). This became known as the Night of the Long Knives and took place from 30 June to 2 July 1934. As a reward, the SS became an independent organisation under the command of the Reichsführer-SS Heinrich Himmler, who would rise to become Chief of German Police in June 1936 and already had control over the concentration camp system. Upon Hindenburg's death on 2 August 1934, Hitler's cabinet passed a law proclaiming the presidency vacant and transferred the role and powers of the head of state to Hitler as Chancellor and Führer (Leader).
Antisemitism and the Holocaust
The Nazi regime was particularly hostile towards Jews, who became the target of unending antisemitic propaganda attacks. The Nazis attempted to convince the German people to view and treat Jews as "subhumans", and immediately after winning almost 44% of parliamentary seats in the 1933 federal elections they imposed a nationwide boycott of Jewish businesses. In March 1933 the first official Nazi concentration camp was established at Dachau in Bavaria, and from 1933 to 1935 the Nazi regime consolidated its power. The Law for the Restoration of the Professional Civil Service, passed on 7 April 1933, forced Jewish civil servants out of the legal profession and the civil service. The Nuremberg Laws of 1935 banned sexual relations between Jews and Germans and provided that only those of German or related blood were eligible to be considered citizens; the remainder were classed as state subjects without citizenship rights. This stripped Jews, Roma, and others of their legal rights. Jews continued to suffer persecution under the Nazi regime, exemplified by the Kristallnacht pogrom of 1938, and about half of Germany's 500,000 Jews fled the country before 1939, after which escape became almost impossible.
In 1941, the Nazi leadership decided to implement a plan they called the "Final Solution", which came to be known as the Holocaust. Under the plan, Jews and other "lesser races", along with political opponents from Germany and the occupied countries, were systematically murdered at killing sites, in Nazi concentration camps, and, starting in 1942, in extermination camps. Between 1941 and 1945, Jews, Gypsies, Slavs, communists, homosexuals, the mentally and physically disabled, and members of other groups were targeted and methodically murdered; it was to describe these crimes that the word "genocide" was coined. In total, approximately 11 million people were killed during the Holocaust, including 1.1 million children.
In 1935, Hitler officially re-established the Luftwaffe (air force) and reintroduced universal military service. This was in breach of the Treaty of Versailles, and Britain, France, and Italy issued notes of protest. Hitler had the officers swear their personal allegiance to him. In 1936, German troops marched into the demilitarised Rhineland. As the territory was part of Germany, the British and French governments did not feel that attempting to enforce the treaty was worth the risk of war. The move strengthened Hitler's standing in Germany. His reputation swelled further with the 1936 Summer Olympics in Berlin, which proved another great propaganda success for the regime, as orchestrated by master propagandist Joseph Goebbels.
Historians have paid special attention to the efforts by Nazi Germany to reverse the gains women made before 1933, especially in the relatively liberal Weimar Republic. It appears the role of women in Nazi Germany changed according to circumstances. Theoretically the Nazis advocated a patriarchal society in which the German woman would recognise that her "world is her husband, her family, her children, and her home". However, before 1933, women played important roles in the Nazi organization and were allowed some autonomy to mobilize other women. After Hitler came to power in 1933, feminist groups were shut down or incorporated into the National Socialist Women's League, which coordinated groups throughout the country to promote feminine virtues, motherhood and household activities. Courses were offered on childrearing, sewing and cooking. The Nazi regime did promote a liberal code of conduct regarding heterosexual relations among Germans and was sympathetic to women who bore children out of wedlock. The Lebensborn (Fountain of Life) association, founded by Himmler in 1935, created a series of maternity homes where single mothers could be accommodated during their pregnancies.
As Germany prepared for war, large numbers of women were incorporated into the public sector, and with the need for full mobilization of factories by 1943, all women under the age of fifty were required to register with the employment office for work assignments to help the war effort. Women's wages remained unequal, and women were denied positions of leadership or control. In 1944–45, more than 500,000 women were volunteer uniformed auxiliaries in the German armed forces (Wehrmacht). About the same number served in civil aerial defense, 400,000 volunteered as nurses, and many more replaced drafted men in the wartime economy. In the Luftwaffe they served in auxiliary roles, helping to operate the anti-aircraft systems that shot down Allied bombers.
Hitler's diplomatic strategy in the 1930s was to make seemingly reasonable demands, threatening war if they were not met. When opponents tried to appease him, he accepted the gains that were offered, then went to the next target. That aggressive strategy worked as Germany pulled out of the League of Nations (1933), rejected the Versailles Treaty and began to re-arm (1935), won back the Saar (1935), remilitarized the Rhineland (1936), formed an alliance ("axis") with Mussolini's Italy (1936), sent massive military aid to Franco in the Spanish Civil War (1936–39), annexed Austria (1938), took over Czechoslovakia after the British and French appeasement of the Munich Agreement of 1938, concluded a non-aggression pact with Joseph Stalin's Soviet Union in August 1939, and finally invaded Poland on 1 September 1939. Britain and France declared war on Germany two days later, and World War II in Europe began.
After establishing the "Rome-Berlin axis" with Benito Mussolini and signing the Anti-Comintern Pact with Japan – which Italy joined a year later, in 1937 – Hitler felt able to take the offensive in foreign policy. On 12 March 1938, German troops marched into Austria, where an attempted Nazi coup had been unsuccessful in 1934. When the Austrian-born Hitler entered Vienna, he was greeted by loud cheers. Four weeks later, 99% of Austrians voted in favour of the annexation (Anschluss) of their country to the German Reich. After Austria, Hitler turned to Czechoslovakia, where the 3.5 million-strong Sudeten German minority was demanding equal rights and self-government. At the Munich Conference of September 1938, Hitler, the Italian leader Benito Mussolini, British Prime Minister Neville Chamberlain, and French Prime Minister Édouard Daladier agreed upon the cession of Sudeten territory to the German Reich by Czechoslovakia. Hitler thereupon declared that all of the German Reich's territorial claims had been fulfilled. However, barely six months after the Munich Agreement, in March 1939, Hitler used the smoldering quarrel between Slovaks and Czechs as a pretext for taking over the rest of Czechoslovakia as the Protectorate of Bohemia and Moravia. In the same month, he secured the return of Memel from Lithuania to Germany. Chamberlain was forced to acknowledge that his policy of appeasement towards Hitler had failed.
World War II
At first Germany was very successful in its military operations, including the invasions of Poland (1939), Norway (1940), the Low Countries (1940), and France (1940). The unexpectedly swift defeat of France resulted in an upswing in Hitler's popularity and an upsurge in war fever. Hitler made peace overtures to the new British leader Winston Churchill in July 1940, but Churchill remained dogged in his defiance, aided by major financial, military, and diplomatic help from President Franklin D. Roosevelt in the U.S. Hitler's emphasis on maintaining a higher living standard postponed the full mobilization of the national economy until 1942. Germany's armed forces invaded the Soviet Union in June 1941 – weeks behind schedule due to the invasion of Yugoslavia – but swept forward until they reached the gates of Moscow.
The tide began to turn in December 1941, when the invasion of the Soviet Union met determined resistance in the Battle of Moscow and Hitler declared war on the United States in the wake of the Japanese attack on Pearl Harbor. After the surrender in North Africa and the loss of the Battle of Stalingrad in 1942–43, the Germans were forced onto the defensive. By late 1944, the United States, Canada, France, and Great Britain were closing in on Germany in the West, while the Soviets were victoriously advancing in the East. Overy estimated in 2014 that in all about 353,000 civilians were killed by British and American strategic bombing of German cities, and nine million were left homeless.
Nazi Germany collapsed as Berlin was taken by the Red Army in a fight to the death on the city streets. Hitler committed suicide on 30 April 1945. The final German Instrument of Surrender was signed on 8 May 1945.
By September 1945, Nazi Germany and its Axis partners (Italy and Japan) had all been defeated, chiefly by the forces of the Soviet Union, the United States, and Great Britain. Much of Europe lay in ruins, and over 60 million people worldwide had been killed (most of them civilians), including approximately 6 million Jews and 5 million non-Jews in what became known as the Holocaust. World War II destroyed Germany's political and economic infrastructure and led directly to its partition, considerable loss of territory (especially in the East), and a historical legacy of guilt and shame.
Germany during the Cold War, 1945–1990
As a consequence of the defeat of Nazi Germany in 1945 and the onset of the Cold War in 1947, the country was split between the two global blocs in the East and West, a period known as the division of Germany. Millions of refugees from Central and Eastern Europe moved west, most of them to West Germany. Two countries emerged: West Germany was a parliamentary democracy, a NATO member, a founding member of what later became the European Union, one of the world's largest economies, and closely aligned with the United States, while East Germany was a totalitarian Communist dictatorship and a satellite of Moscow. With the collapse of Communism in 1989, reunification on West Germany's terms followed.
No one doubted Germany's economic and engineering prowess; the question was how long bitter memories of the war would cause Europeans to distrust Germany, and whether Germany could demonstrate it had rejected totalitarianism and militarism and embraced democracy and human rights.
The total of German war dead was 8% to 10% out of a prewar population of 69,000,000, or between 5.5 million and 7 million people. This included 4.5 million in the military, and between 1 and 2 million civilians. There was chaos as 11 million foreign workers and POWs left, while 14 million displaced refugees from the east and soldiers returned home. During the Cold War, the West German government estimated a death toll of 2.2 million civilians due to the flight and expulsion of Germans and through forced labour in the Soviet Union. This figure remained unchallenged until the 1990s, when some historians put the death toll at 500,000–600,000 confirmed deaths. In 2006 the German government reaffirmed its position that 2.0–2.5 million deaths occurred.
At the Potsdam Conference, Germany was divided into four military occupation zones by the Allies and did not regain independence until 1949. The provinces east of the Oder and Neisse rivers (the Oder-Neisse line) were transferred to Poland, Lithuania, and Russia (Kaliningrad oblast); the 6.7 million Germans living in Poland and the 2.5 million in Czechoslovakia were forced to move west, although most had already left when the war ended.
Denazification removed, imprisoned, or executed most top officials of the old regime, but most middle and lower ranks of civilian officialdom were not seriously affected. In accordance with the Allied agreement made at the Yalta Conference, millions of POWs were used as forced labor by the Soviet Union and other European countries.
In the East, the Soviets crushed dissent and imposed another police state, often employing ex-Nazis in the dreaded Stasi. The Soviets extracted about 23% of the East German GNP for reparations, while in the West reparations were a minor factor.
In 1945–46, housing and food conditions were bad, as the disruption of transport, markets, and finances slowed a return to normal. In the West, bombing had destroyed a fourth of the housing stock, and over 10 million refugees from the east had crowded in, most living in camps. Food production in 1946–48 was only two-thirds of the prewar level, while grain and meat shipments – which usually supplied 25% of the food – no longer arrived from the East. Furthermore, the end of the war brought the end of the large shipments of food seized from occupied nations that had sustained Germany during the war. Coal production was down 60%, which had cascading negative effects on railroads, heavy industry, and heating. Industrial production fell by more than half and regained prewar levels only at the end of 1949.
Allied economic policy originally was one of industrial disarmament plus building the agricultural sector. In the western sectors, most of the industrial plants had minimal bomb damage and the Allies dismantled 5% of the industrial plants for reparations.
However, deindustrialization became impractical and the U.S. instead called for a strong industrial base in Germany so it could stimulate European economic recovery. The U.S. shipped food in 1945–47 and made a $600 million loan in 1947 to rebuild German industry. By May 1946 the removal of machinery had ended, thanks to lobbying by the U.S. Army. The Truman administration finally realised that economic recovery in Europe could not go forward without the reconstruction of the German industrial base on which it had previously been dependent. Washington decided that an "orderly, prosperous Europe requires the economic contributions of a stable and productive Germany."
In 1945 the occupying powers took over all newspapers in Germany and purged them of Nazi influence. The American occupation headquarters, the Office of Military Government, United States (OMGUS) began its own newspaper based in Munich, Die Neue Zeitung. It was edited by German and Jewish émigrés who fled to the United States before the war. Its mission was to encourage democracy by exposing Germans to how American culture operated. The paper was filled with details on American sports, politics, business, Hollywood, and fashions, as well as international affairs.
In 1949, the Soviet occupation zone became the "Deutsche Demokratische Republik" – "DDR" ("German Democratic Republic" – "GDR", often simply "East Germany"), under control of the Socialist Unity Party. Neither country had a significant army until the 1950s, but East Germany built the Stasi into a powerful secret police that infiltrated every aspect of society.
East Germany was an Eastern bloc state under the political and military control of the Soviet Union through its occupation forces and the Warsaw Pact. Political power was exercised solely by leading members (the Politburo) of the communist-controlled Socialist Unity Party (SED). A Soviet-style command economy was set up; the GDR later became the most advanced Comecon state. While East German propaganda was based on the benefits of the GDR's social programs and the alleged constant threat of a West German invasion, many of its citizens looked to the West for political freedoms and economic prosperity.
Walter Ulbricht (1893–1973) was the party boss from 1950 to 1971. In 1933, Ulbricht had fled to Moscow, where he served as a Comintern agent loyal to Stalin. As World War II was ending, Stalin assigned him the job of designing the postwar German system that would centralize all power in the Communist Party. Ulbricht became deputy prime minister in 1949 and secretary (chief executive) of the Socialist Unity (Communist) Party in 1950. Some 2.6 million people had fled East Germany by 1961, when Ulbricht built the Berlin Wall to stop them; those who attempted to cross were shot. What the GDR called the "Anti-Fascist Protective Wall" was a major embarrassment for the regime during the Cold War, but it did stabilize East Germany and postpone its collapse. Ulbricht lost power in 1971 but was kept on as nominal head of state. He was replaced because he had failed to solve growing national crises, such as the worsening economy in 1969–70, the fear of another popular uprising as had occurred in 1953, and the disgruntlement between Moscow and East Berlin caused by Ulbricht's détente policies toward the West.
The transition to Erich Honecker (General Secretary from 1971 to 1989) led to a change in the direction of national policy and to efforts by the Politburo to pay closer attention to the grievances of the proletariat. Honecker's plans were not successful, however, and dissent grew among East Germany's population.
In 1989, the socialist regime collapsed after 40 years, despite its omnipresent secret police, the Stasi. Main reasons for the collapse include severe economic problems and growing emigration towards the West.
East Germany's culture was shaped by Communism and particularly Stalinism. It was characterized by East German psychoanalyst Hans-Joachim Maaz in 1990 as having produced a "Congested Feeling" among Germans in the East, a result of Communist policies criminalizing personal expression that deviated from government-approved ideals, and of the enforcement of Communist principles by physical force and intellectual repression by government agencies, particularly the Stasi. Critics of the East German state have claimed that the state's commitment to communism was a hollow and cynical tool of a ruling elite. This argument has been challenged by some scholars who claim that the Party was committed to the advance of scientific knowledge, economic development, and social progress. However, the vast majority of East Germans regarded the state's Communist ideals as nothing more than a deceptive method for government control.
According to German historian Jürgen Kocka (2010):
- Conceptualizing the GDR as a dictatorship has become widely accepted, while the meaning of the concept dictatorship varies. Massive evidence has been collected that proves the repressive, undemocratic, illiberal, nonpluralistic character of the GDR regime and its ruling party.
West Germany (Bonn Republic)
In 1949, the three western occupation zones (American, British, and French) were combined into the Federal Republic of Germany (FRG, West Germany). The government was formed under Chancellor Konrad Adenauer and his conservative CDU/CSU coalition. The CDU/CSU was in power during most of the period since 1949. The capital was Bonn until it was moved to Berlin in 1990. In 1990 the FRG absorbed East Germany and gained full sovereignty over Berlin. Throughout the period West Germany was much larger and richer than East Germany, which became a dictatorship under the control of the Communist Party and was closely monitored by Moscow. Germany, especially Berlin, was a cockpit of the Cold War, with NATO and the Warsaw Pact assembling major military forces in west and east. However, there was never any combat.
West Germany enjoyed prolonged economic growth beginning in the early 1950s (Wirtschaftswunder or "Economic Miracle"). Industrial production doubled from 1950 to 1957, and gross national product grew at a rate of 9 or 10% per year, providing the engine for economic growth of all of Western Europe. Labor unions supported the new policies with postponed wage increases, minimized strikes, support for technological modernization, and a policy of co-determination (Mitbestimmung), which involved a satisfactory grievance-resolution system as well as requiring representation of workers on the boards of large corporations. The recovery was accelerated by the currency reform of June 1948, U.S. gifts of $1.4 billion as part of the Marshall Plan, the breaking down of old trade barriers and traditional practices, and the opening of the global market. West Germany gained legitimacy and respect, as it shed the horrible reputation Germany had gained under the Nazis.
1948 currency reform
The most dramatic and successful policy event was the currency reform of 1948. Since the 1930s, prices and wages had been controlled, but money had been plentiful. That meant that people had accumulated large paper assets, and that official prices and wages did not reflect reality, as the black market dominated the economy and more than half of all transactions were taking place unofficially. On 21 June 1948, the Western Allies withdrew the old currency and replaced it with the new Deutsche Mark at the rate of 1 new per 10 old. This wiped out 90% of government and private debt, as well as private savings. Prices were decontrolled, and labor unions agreed to accept a 15% wage increase, despite the 25% rise in prices. The result was that prices of German export products held steady, while profits and earnings from exports soared and were poured back into the economy. The currency reforms were simultaneous with the $1.4 billion in Marshall Plan money coming in from the United States, which was used primarily for investment.
In addition, the Marshall Plan forced German companies, as well as those in all of Western Europe, to modernize their business practices and take account of the international market. Marshall Plan funding helped overcome bottlenecks in the surging economy caused by remaining controls (which were removed in 1949), and Marshall Plan business reforms opened up a greatly expanded market for German exports. Overnight, consumer goods appeared in the stores, because they could be sold for realistic prices, emphasizing to Germans that their economy had turned a corner.
The success of the currency reform angered the Soviets, who cut off all road, rail, and canal links between the western zones and West Berlin. This was the Berlin Blockade, which lasted from 24 June 1948 to 12 May 1949. In response, the U.S. and Britain launched an airlift of food and coal and distributed the new currency in West Berlin as well. The city thereby became economically integrated into West Germany.
Konrad Adenauer (1876–1967) was the dominant leader in West Germany. He was the first chancellor (top official) of the FRG, 1949–63, and until his death was the founder and leader of the Christian Democratic Union (CDU), a coalition of conservatives, ordoliberals, and adherents of Protestant and Catholic social teaching that dominated West German politics for most of its history. During his chancellorship, the West German economy grew quickly, and West Germany established friendly relations with France, participated in the emerging European Union, established the country's armed forces (the Bundeswehr), and became a pillar of NATO as well as a firm ally of the United States. Adenauer's government also commenced the long process of reconciliation with the Jews and Israel after the Holocaust.
Ludwig Erhard (1897–1977) was in charge of economic policy as economics director for the British and American occupation zones and was Adenauer's long-time economics minister. Erhard's decision to lift many price controls in 1948 (despite opposition from both the social democratic opposition and Allied authorities), plus his advocacy of free markets, helped set the Federal Republic on its strong growth from wartime devastation. Norbert Walter, a former chief economist at Deutsche Bank, argues that "Germany owes its rapid economic advance after World War II to the system of the Social Market Economy, established by Ludwig Erhard." Erhard was politically less successful when he served as the CDU Chancellor from 1963 until 1966. Erhard followed the concept of a social market economy, and was in close touch with professional economists. Erhard viewed the market itself as social and supported only a minimum of welfare legislation. However, Erhard suffered a series of decisive defeats in his effort to create a free, competitive economy in 1957; he had to compromise on such key issues as the anti-cartel legislation. Thereafter, the West German economy evolved into a conventional west European welfare state.
Meanwhile, in adopting the Godesberg Program in 1959, the Social Democratic Party of Germany (SPD) largely abandoned Marxist ideas and embraced the concept of the market economy and the welfare state. It now sought to move beyond its old working-class base to appeal to the full spectrum of potential voters, including the middle class and professionals. Labor unions cooperated increasingly with industry, achieving labor representation on corporate boards and increases in wages and benefits.
In 1966 Erhard lost support and Kurt Kiesinger (1904–1988) was elected as Chancellor by a new CDU/CSU-SPD alliance combining the two largest parties. Socialist (SPD) leader Willy Brandt was Deputy Federal Chancellor and Foreign Minister. The Grand Coalition lasted 1966–69 and is best known for reducing tensions with the Soviet bloc nations and establishing diplomatic relations with Czechoslovakia, Romania and Yugoslavia.
With a booming economy short of unskilled workers, especially after the Berlin Wall cut off the steady flow of East Germans, the FRG negotiated migration agreements with Italy (1955), Spain (1960), Greece (1960), and Turkey (1961) that brought in hundreds of thousands of temporary guest workers, called Gastarbeiter. In 1968 the FRG signed a guest worker agreement with Yugoslavia that employed additional guest workers. Gastarbeiter were young men who were paid full-scale wages and benefits, but were expected to return home in a few years.
The agreement with Turkey ended in 1973 but few workers returned because there were few good jobs in Turkey. By 2010 there were about 4 million people of Turkish descent in Germany. The generation born in Germany attended German schools, but had a poor command of either German or Turkish, and had either low-skilled jobs or were unemployed.
Brandt and Ostpolitik
Willy Brandt (1913–1992) was the leader of the Social Democratic Party in 1964–87 and West German Chancellor in 1969–1974. Under his leadership, the German government sought to reduce tensions with the Soviet Union and improve relations with the German Democratic Republic, a policy known as Ostpolitik. Relations between the two German states had been icy at best, with propaganda barrages in each direction. The heavy outflow of talent from East Germany prompted the building of the Berlin Wall in 1961, which worsened Cold War tensions and prevented East Germans from traveling. Although anxious to relieve serious hardships for divided families and to reduce friction, Brandt's Ostpolitik was intent on holding to its concept of "two German states in one German nation."
Ostpolitik was opposed by the conservative elements in Germany, but won Brandt an international reputation and the Nobel Peace Prize in 1971. In September 1973, both West and East Germany were admitted to the United Nations. The two countries exchanged permanent representatives in 1974, and, in 1987, East Germany's leader Erich Honecker paid an official state visit to West Germany.
Economic crisis of 1970s
After 1973, Germany was hard hit by a worldwide economic crisis, soaring oil prices, and stubbornly high unemployment, which jumped from 300,000 in 1973 to 1.1 million in 1975. The Ruhr region was hardest hit, as its easy-to-reach coal mines petered out, and expensive German coal was no longer competitive. Likewise the Ruhr steel industry went into sharp decline, as its prices were undercut by lower-cost suppliers such as Japan. The welfare system provided a safety net for the large number of unemployed workers, and many factories reduced their labor force and began to concentrate on high-profit specialty items. After 1990 the Ruhr moved into service industries and high technology. Cleaning up the heavy air and water pollution became a major industry in its own right. Meanwhile, formerly rural Bavaria became a high-tech center of industry.
A spy scandal forced Brandt to step down as Chancellor while remaining as party leader. He was replaced by Helmut Schmidt (1918–2015) of the SPD, who served as Chancellor in 1974–1982. Schmidt continued the Ostpolitik with less enthusiasm. He had a PhD in economics and was more interested in domestic issues, such as reducing inflation. Government debt grew rapidly as he borrowed to cover the cost of the ever more expensive welfare state. After 1979, foreign policy issues grew more central as the Cold War turned hot again. The German peace movement mobilized hundreds of thousands of demonstrators to protest against American deployment in Europe of new medium-range ballistic missiles. Schmidt supported the deployment but was opposed by the left wing of the SPD and by Brandt.
The pro-business Free Democratic Party (FDP) had been in coalition with the SPD, but now it changed direction. Led by Economics Minister Otto Graf Lambsdorff (1926–2009), the FDP adopted the market-oriented "Kiel Theses" in 1977; it rejected the Keynesian emphasis on consumer demand and instead proposed reducing social welfare spending and introducing policies to stimulate production and facilitate jobs. Lambsdorff argued that the result would be economic growth, which would itself solve both the social problems and the financial problems. As a consequence, the FDP switched allegiance to the CDU, and Schmidt lost his parliamentary majority in 1982. For the only time in West Germany's history, the government fell on a vote of no confidence.
Helmut Kohl (1930–2017) brought the conservatives back to power with a CDU/CSU-FDP coalition in 1982, and served as Chancellor until 1998. After repeated victories in 1983, 1987, 1990 and 1994, he was finally defeated in the 1998 federal elections, a landslide for the left that was the biggest on record, and was succeeded as Chancellor by Gerhard Schröder of the SPD. Kohl is best known for orchestrating reunification with the approval of all the Four Powers from World War II, who still had a voice in German affairs.
During the summer of 1989, rapid changes known as the peaceful revolution or Die Wende took place in East Germany, which quickly led to German reunification. Growing numbers of East Germans emigrated to West Germany, many via Hungary after Hungary's reformist government opened its borders. Thousands of East Germans also tried to reach the West by staging sit-ins at West German diplomatic facilities in other East European capitals, most notably in Prague. The exodus generated demands within East Germany for political change, and mass demonstrations in several cities continued to grow.
Unable to stop the growing civil unrest, Erich Honecker was forced to resign in October, and on 9 November, East German authorities unexpectedly allowed East German citizens to enter West Berlin and West Germany. Hundreds of thousands of people took advantage of the opportunity; new crossing points were opened in the Berlin Wall and along the border with West Germany. This led to the acceleration of the process of reforms in East Germany that ended with the German reunification that came into force on 3 October 1990.
Federal Republic of Germany, 1990–present
The SPD in coalition with the Greens won the elections of 1998. SPD leader Gerhard Schröder positioned himself as a centrist "Third Way" candidate in the mold of Britain's Tony Blair and America's Bill Clinton.
Schröder, in March 2003, reversed his position and proposed a significant downsizing of the welfare state, known as Agenda 2010. He had enough support to overcome opposition from the trade unions and the SPD's left wing. Agenda 2010 had five goals: tax cuts; labor market deregulation, especially relaxing rules protecting workers from dismissal and setting up Hartz concept job training; modernizing the welfare state by reducing entitlements; decreasing bureaucratic obstacles for small businesses; and providing new low-interest loans to local governments.
From 2005 to 2009, Germany was ruled by a grand coalition led by the CDU's Angela Merkel as chancellor. Since the 2009 elections, Merkel has headed a centre-right government of the CDU/CSU and FDP.
Together with France and other EU states, Germany has played the leading role in the European Union. Germany (especially under Chancellor Helmut Kohl) was one of the main supporters of admitting many East European countries to the EU. Germany is at the forefront of European states seeking to exploit the momentum of monetary union to advance the creation of a more unified and capable European political, defense and security apparatus. German Chancellor Schröder expressed an interest in a permanent seat for Germany in the UN Security Council, identifying France, Russia, and Japan as countries that explicitly backed Germany's bid. Germany formally adopted the euro on 1 January 1999 after permanently fixing the Deutsche Mark rate on 31 December 1998.
Since 1990, the German Bundeswehr has participated in a number of peacekeeping and disaster relief operations abroad. Since 2002, German troops formed part of the International Security Assistance Force in the war in Afghanistan, resulting in the first German casualties in combat missions since World War II.
In the worldwide economic recession that began in 2008, Germany did relatively well. However, the economic instability of Greece and several other EU nations in 2010–11 forced Germany to reluctantly sponsor a massive financial rescue.
In the wake of the Fukushima nuclear disaster in Japan following the 2011 earthquake and tsunami, German public opinion turned sharply against nuclear power, which then produced about a quarter of Germany's electricity supply. In response, Merkel announced plans to close down the nuclear power system over the following decade and to rely even more heavily on wind and other alternative energy sources, in addition to coal and natural gas.
Germany was affected by the European migrant crisis in 2015 as it became the final destination of choice for many asylum seekers from Africa and the Middle East entering the EU. The country took in over a million refugees and migrants and developed a quota system which redistributed migrants around its federal states based on their tax income and existing population density. The decision by Merkel to authorize unrestricted entry led to heavy criticism in Germany as well as within Europe.
A major historiographical debate about German history concerns the Sonderweg, the alleged "special path" that separated German history from the normal course of historical development, and whether or not Nazi Germany was the inevitable result of the Sonderweg. Proponents of the Sonderweg theory such as Fritz Fischer point to such events as the Revolution of 1848, the authoritarianism of the Second Empire and the continuation of the Imperial elite into the Weimar and Nazi periods. Opponents of the Sonderweg theory such as Gerhard Ritter argue that its proponents are guilty of seeking selective examples, and that there was much contingency and chance in German history. In addition, there was much debate among supporters of the Sonderweg concept as to the reasons for the Sonderweg, and whether or not it ended in 1945. Was there a Sonderweg? Winkler says:
- "For a long time, educated Germans answered it in the positive, initially by laying claim to a special German mission, then, after the collapse of 1945, by criticizing Germany's deviation from the West. Today, the negative view is predominant. Germany did not, according to the now prevailing opinion, differ from the great European nations to an extent that would justify speaking of a 'unique German path.' And, in any case, no country on earth ever took what can be described as the 'normal path.'"
See also
- Conservatism in Germany
- Economic history of Germany
- Feminism in Germany
- German monarchs Family tree
- History of Austria
- History of Berlin
- History of German foreign policy
- History of German journalism
- History of German women
- History of the Jews in Germany
- Liberalism in Germany
- List of Chancellors of Germany
- List of German monarchs
- Medieval East Colonisation by German noblemen and farmers
- Military history of Germany
- Names of Germany for terminology applied to Germany
- Politics of Germany
- Territorial evolution of Germany
References
- Wagner 2010, pp. 19726–19730.
- "World's Oldest Spears – Archaeology Magazine Archive". archaeology.org.
- "Earliest music instruments found". BBC News.
- "Ice Age Lion Man is world's earliest figurative sculpture – The Art Newspaper". The Art Newspaper.
- "The Venus of Hohle Fels". donsmaps.com.
- Kristinsson 2010, p. 147: "In the 1st century BC it was the Suebic tribes who were expanding most conspicuously. [...] Originating from central Germania, they moved to the south and southwest. [...] As Rome was conquering the Gauls, Germans were expanding to meet them, and this was the threat from which Caesar claimed to be saving the Gauls. [...] For the next half century the expansion concentrated on southern Germany and Bohemia, assimilating or driving out the previous Gallic or Celtic inhabitants. The oppida in this area fell and were abandoned one after another as simple, egalitarian Germanic societies replaced the complex, stratified Celtic ones."
- Green & Heather 2003, p. 29: "Greek may have followed the Persians in devising its own terms for their military formations, but the Goths were dependent [...] on Iranians of the Pontic region for terms which followed the Iranian model more closely in using the cognate Gothic term for the second element of its compounds. (Gothic dependence on Iranian may have gone even further, affecting the numeral itself, if we recall that the two Iranian loanwords in Crimean Gothic are words for 'hundred' and 'thousand')."
- Fortson 2011, p. 433: "Baltic territory began to shrink shortly before the dawn of the Christian era due to the Gothic migrations into their southwestern territories [...]."
- Green 2000, pp. 172–73: "Jordanes [...] mentions the Slavs (Getica 119) and associates them more closely than the Balts with the centre of Gothic power. [...] This location of the early Slavs partly at least in the region covered by the Cernjahov culture, together with their contacts (warlike or not) with the Goths under Ermanric and almost certainly before, explains their openness to Gothic loanword influence. That this may have begun early, before the expansion of the Slavs from their primeval habitat, is implied by the presence of individual loan-words in a wide range of Slavonic languages."
- Claster 1982, p. 35.
- Smithsonian (September 2005).
- Ozment 2005, pp. 2–21.
- Fichtner 2009, p. xlviii: "When the Romans began to appear in the region, shortly before the beginning of the Christian era, they turned Noricum into an administrative province, which encompassed much of what today is Austria."
- The Journal of the Anthropological Society of Bombay. 10: 647. 1917. https://books.google.com/books?id=2hg7AQAAMAAJ: "[...] Raetia (modern Bavaria and the adjoining country) [...]."
- Ramirez-Faria 2007, p. 267: "Provinces of Germany[:] Germania was the name of two Roman provinces on the left bank of the Rhine, but also the general Roman designation for the lands east of the Rhine."
- Rüger 2004, pp. 527–28.
- Bowman 2005, p. 442.
- Heather 2010.
- Heather 2006, p. 349: "By 469, just sixteen years after [Attila's] death, the last of the Huns were seeking asylum inside the eastern Roman Empire."
- Bradbury 2004, p. 154: "East Francia consisted of four main principalities, the stem duchies – Saxony, Bavaria, Swabia and Franconia."
- Rodes 1964, p. 3: "It was plagued by the existence of immensely strong tribal duchies, such as Bavaria, Swabia, Thuringia, and Saxony — often referred to as stem duchies, from the German word Stamm, meaning tribe [...]."
- Wiesflecker 1991, p. 292: "Er mußte bekanntlich den demütigenden Vertrag von Arras (1482) hinnehmen und seine Tochter Margarethe mit dem Stammherzogtum Burgund-Bourgogne und vielen anderen Herrschaften an Frankreich ausliefern. [One has to recognise that [Maximilian I] had to accept the humiliating Treaty of Arras (1482) and to deliver to France his daughter Margaret along with the stem-duchy of Burgundy-Bourgogne and many other lordships.]"
- Historicus 1935, p. 50: "Franz von Lothringen muß sein Stammherzogtum an Stanislaus Leszinski, den französischen Kandidaten für Polen, ueberlassen [...]. [Francis of Lorraine had to bequeath his stem-duchy to Stanislaus Leszinski, the French candidate for the Polish crown [...].]"
- Langer, William Leonard, ed. (1968). An encyclopedia of world history: ancient, medieval and modern, chronologically arranged (4 ed.). Harrap. p. 174: "These stem duchies were: Franconia [...]; Lorraine (not strictly a stem duchy but with a tradition of unity); Swabia [...]."
- "Germany". Encyclopædia Britannica Online. Encyclopædia Britannica Inc. 2012. Retrieved 12 September 2012.
- Goffart 1988.
- "Germany, the Stem Duchies & Marches". Friesian.com. 1945-02-13. Retrieved 2012-10-18.
- Wilson 2016, p. 24.
- Wilson 2016, p. 25.
- Van Dam & Fouracre 1995, p. 222: "Surrounding the core of Frankish kingdoms were other regions more or less subservient to the Merovingian kings. In some regions the Merovingians appointed, or perhaps simply acknowledged, various dukes, such as the duke of the Alamans, the duke of the Vascones in the western Pyrenees, and the duke of the Bavarians. [...] Since these dukes, unlike those who served at the court of the Merovingians or administered particular regions in the Merovingian kingdoms, ruled over distinct ethnic groups, they had much local support and tended to act independently of the Merovingians, and even to make war on them occasionally."
- Damminger 2003, p. 74: "The area of Merovingian settlement in southwest Germany was pretty much confined to the so called 'Altsiedelland', those fertile regions which had been under the plough since neolithic times [...]."
- Drew 2011, pp. 8–9: "Some of the success of the Merovingian Frankish rulers may be their acceptance of the personality of law policy. Not only did Roman law remain in use among Gallo-Romans and churchmen, Burgundian law among the Burgundians, and Visigothic law among the Visigoths, but the more purely Germanic peoples of the eastern frontier were allowed to retain their own 'national' law."
- Hen 1995, p. 17: "Missionaries, mainly from the British Isles, continued to operate in the Merovingian kingdoms throughout the sixth to the eighth centuries. Yet, their efforts were directed at the fringes of the Merovingian territory, that is, at Frisia, north-east Austrasia and Thuringia. These areas were hardly Romanised, if at all, and therefore lacked any social, cultural or physical basis for the expansion of Christianity. These areas stayed pagan long after Merovingian society completed its conversion, and thus attracted the missionaries' attention. [...] Moreover, there is evidence of missionary and evangelising activity from Merovingian Gaul, out of places like Metz, Strasbourg or Worms, into the 'pagan regions' [...]."
- Kibler 1995, p. 1159: "From time to time, Austrasia received a son of the Merovingian king as an autonomous ruler."
- Wilson 2016, p. 26.
- Wilson 2016, pp. 26–27.
- Nelson, Janet L. (1998). "Charlemagne's church at Aachen". History Today. 48 (1): 62–64.
- Schulman 2002, pp. 325–27.
- Barraclough 1984, p. 59.
- Wilson 2016, p. 19.
- Day 1914, p. 252.
- Thompson 1931, pp. 146–79.
- Istvan Szepesi, "Reflecting the Nation: The Historiography of Hanseatic Institutions." Waterloo Historical Review 7 (2015). online
- Carsten 1958, pp. 52–68.
- Blumenthal, Uta-Renate (1991). The Investiture Controversy: Church and Monarchy from the Ninth to the Twelfth Century. pp. 159–73.
- Fuhrmann, Horst (1986). Germany in the High Middle Ages, c. 1050–1200. Cambridge University Press.
- Kann, Robert A. (1974). A History of the Habsburg Empire 1526–1918. p. 5.
- Kantorowicz, Ernst (1957). Frederick the Second, 1194–1250.
- Austin Alchon, Suzanne (2003). A pest in the land: new world epidemics in a global perspective. University of New Mexico Press. p. 21. ISBN 0-8263-2871-7.
- Haverkamp, Alfred (1988). Medieval Germany, 1056–1273. Oxford University Press.
- Nicholas, David (1997). The Growth of the Medieval City: From Late Antiquity to the Early Fourteenth Century. Longman. pp. 69–72, 133–42, 202–20, 244–45, 300–307.
- Strait, Paul (1974). Cologne in the Twelfth Century.
- Huffman, Joseph P. (1998). Family, Commerce, and Religion in London and Cologne. – covers from 1000 to 1300.
- Sagarra, Eda (1977). A Social History of Germany: 1648 – 1914. p. 405.
- Judith M. Bennett and Ruth Mazo Karras, eds. The Oxford Handbook of Women and Gender in Medieval Europe (2013).
- Michael G. Baylor, The German Reformation and the Peasants' War: A Brief History with Documents (2012)
- John Lotherington, The German Reformation (2014)
- John Lotherington, The Counter-Reformation (2015)
- Wilson, Peter H. (2009). The Thirty Years War: Europe's Tragedy.
- Geoffrey Parker, The Thirty Years' War (1997) p. 178 has 15–20% decline; Tryntje Helfferich, The Thirty Years War: A Documentary History (2009) p. xix, estimates a 25% decline. Wilson (2009) pp. 780–95 reviews the estimates.
- Holborn, Hajo (1959). A History of Germany: The Reformation. p. 37.
- Edwards, Jr., Mark U. (1994). Printing, Propaganda, and Martin Luther.
- See texts at Project Wittenberg: "Selected Hymns of Martin Luther"
- Weimer, Christoph (2004). "Luther and Cranach on Justification in Word and Image". Lutheran Quarterly. 18 (4): 387–405.
- R. Taton; C. Wilson; Michael Hoskin (2003). Planetary Astronomy from the Renaissance to the Rise of Astrophysics, Part A, Tycho Brahe to Newton. p. 20.
- Sheehan 1989, pp. 75, 207–291, 291–323, 324–371, 802–820.
- Dennis Showalter, Frederick the Great: A Military History (2012)
- Ritter, Gerhard (1974). Peter Paret, ed. Frederick the Great: A Historical Profile. Berkeley: University of California Press. ISBN 0-520-02775-2.; called by Russell Weigley "The best introduction to Frederick the Great and indeed to European warfare in his time." Russell Frank Weigley (2004). The Age of Battles: The Quest for Decisive Warfare from Breitenfeld to Waterloo. Indiana U.P. p. 550.
- Lucjan R. Lewitter, "The Partitions of Poland" in A. Goodwyn, ed. The New Cambridge Modern History: vol 8 1763–93 (1965) pp 333–59
- Holborn, Hajo (1964). A History of Modern Germany: 1648–1840. pp. 291–302.
- Ingrao, Charles W. (2003). The Hessian Mercenary State: Ideas, Institutions, and Reform under Frederick II, 1760–1785.
- Liebel, Helen P. (1965). "Enlightened bureaucracy versus enlightened despotism in Baden, 1750–1792". Transactions of the American Philosophical Society. 55 (5): 1–132. doi:10.2307/1005911.
- Sagarra, Eda (1977). A Social History of Germany: 1648–1914. pp. 37–55, 183–202.
- Sagarra, Eda (1977). A Social History of Germany: 1648–1914. pp. 140–154, 341–45.
- For details on the life of a representative peasant farmer, who migrated in 1710 to Pennsylvania, see Bernd Kratz, "Jans Stauffer: A Farmer in Germany before his Emigration to Pennsylvania," Genealogist, Fall 2008, Vol. 22 Issue 2, pp 131–169
- Ford, Guy Stanton (1922). Stein and the era of reform in Prussia, 1807–1815. pp. 199–220.
- Brakensiek, Stefan (April 1994), "Agrarian Individualism in North-Western Germany, 1770–1870", German History, 12 (2), pp. 137–179
- Perkins, J. A. (April 1986), "Dualism in German Agrarian Historiography", Comparative Studies in Society and History, 28 (2), pp. 287–330
- Thomas Nipperdey, Germany from Napoleon to Bismarck: 1800–1866 (1996) p. 59
- Marion W. Gray, Productive men, reproductive women: the agrarian household and the emergence of separate spheres during the German Enlightenment (2000).
- Marion W. Gray and June K. Burton, "Bourgeois Values in the Rural Household, 1810–1840: The New Domesticity in Germany," The Consortium on Revolutionary Europe, 1750–1850 23 (1994): 449–56.
- Nipperdey, ch 2.
- Eda Sagarra, An introduction to Nineteenth century Germany (1980) pp 231–33.
- Gagliardo, John G. (1991). Germany under the Old Regime, 1600–1790. pp. 217–34, 375–95.
- Charles W. Ingrao, "A Pre-Revolutionary Sonderweg." German History 20#3 (2002), pp 279–286.
- Katrin Keller, "Saxony: Rétablissement and Enlightened Absolutism." German History 20.3 (2002): 309–331.
- Richter, Simon J., ed. (2005), The Literature of Weimar Classicism
- Owens, Samantha; Reul, Barbara M.; Stockigt, Janice B., eds. (2011). Music at German Courts, 1715–1760: Changing Artistic Priorities.
- Kuehn, Manfred (2001). Kant: A Biography.
- Van Dulmen, Richard; Williams, Anthony, eds. (1992). The Society of the Enlightenment: The Rise of the Middle Class and Enlightenment Culture in Germany.
- Ruth-Ellen B. Joeres and Mary Jo Maynes, German women in the eighteenth and nineteenth centuries: a social and literary history (1986).
- Eda Sagarra, A Social History of Germany: 1648 – 1914 (1977).
- James J. Sheehan, German History, 1770–1866 (1993) pp 207–88
- Connelly, Owen (1966). Napoleon's satellite kingdoms, ch. 6.
- Raff, Diether (1988), History of Germany from the Medieval Empire to the Present, pp. 34–55, 202–206
- Carr 1991, pp. 1–2.
- Lee 1985, pp. 332–46.
- Nipperdey 1996, p. 86.
- Nipperdey 1996, pp. 87–92, 99.
- Tilly, Richard (1967), "Germany: 1815–1870", in Cameron, Rondo, Banking in the Early Stages of Industrialization: A Study in Comparative Economic History, Oxford University Press, pp. 151–182
- Thomas Nipperdey, Germany from Napoleon to Bismarck: 1800–1866 (1996) p 178
- Nipperdey, Germany from Napoleon to Bismarck: 1800–1866 (1996) pp. 96–97
- Nipperdey, Germany from Napoleon to Bismarck: 1800–1866 (1996) p. 165
- Mitchell, Allan (2000). Great Train Race: Railways and the Franco-German Rivalry, 1815–1914.
- Theodore S. Hamerow, The Social Foundations of German Unification, 1858–1871: Ideas and Institutions (1969) pp 284–91
- Kenneth E. Olson, The history makers: The press of Europe from its beginnings through 1965 (LSU Press, 1966) pp 99–134
- Elmer H. Antonsen, James W. Marchand, and Ladislav Zgusta, eds. The Grimm brothers and the Germanic past (John Benjamins Publishing, 1990).
- Sheehan, James J. (1989). German History: 1770–1866. pp. 75, 207–291, 291–323, 324–371, 802–820.
- Christopher Clark, Iron Kingdom (2006) pp 412–19
- Christopher Clark, "Confessional policy and the limits of state action: Frederick William III and the Prussian Church Union 1817–40." Historical Journal 39.04 (1996) pp: 985–1004. in JSTOR
- Hajo Holborn, A History of Modern Germany 1648–1840 (1964) pp 485–91
- Christopher Clark, Iron Kingdom (2006) pp 419–21
- Holborn, A History of Modern Germany 1648–1840 (1964) pp 498–509
- Taylor, A.J.P. (2001). The Course of German History. p. 52.
- Williamson, George S. (Dec 2000). "What Killed August von Kotzebue? The Temptations of Virtue and the Political Theology of German Nationalism, 1789–1819". Journal of Modern History. 72 (4): 890–943. doi:10.1086/318549.
- Wittke, C. F. (1952). Refugees of Revolution: The German Forty-Eighters in America. Philadelphia: University of Pennsylvania Press.
- Holborn, A History of Modern Germany: 1840–1945 pp 131–67
- Edgar Feuchtwanger, Bismarck: A Political History (2nd ed., Routledge, 2014) pp 83–98
- Holborn, A History of Modern Germany: 1840–1945 pp 167–88
- Feuchtwanger, Bismarck: A Political History (2014) pp 99–147
- Gordon A. Craig, Germany, 1866–1945 (1978) pp 11–22 online edition
- "A German Voice of Opposition to Germanization (1914)". German History in Documents and Images. German Historical Institute (www.ghi-dc.org).
- "Germanization Policy: Speech by Ludwik Jazdzewski in a Session of the Prussian House of Representatives (January 15, 1901)". German History in Documents and Images. German Historical Institute (www.ghi-dc.org).
- John C.G. Röhl, "Higher Civil Servants in Germany, 1890–1900." Journal of Contemporary History 2#3 (1967): 101–121. in JSTOR
- Clark, Iron kingdom: the rise and downfall of Prussia, 1600–1947 (2006) p 158-59, 603–23.
- Hans-Ulrich Wehler, The German Empire, 1871–1918 (1985): 146–57, quote p 157.
- Alexandra Richie, Faust’s Metropolis. A History of Berlin (1998) p 207.
- David Blackbourn, The Long Nineteenth Century: A History of Germany, 1780–1918 (1998) p 32.
- Mazón, Patricia M. (2003). Gender and the Modern Research University: The Admission of Women to German Higher Education, 1865–1914. Stanford U.P. p. 53.
- Moses, John Anthony (1982). Trade Unionism in Germany from Bismarck to Hitler, 1869–1933. Rowman & Littlefield. p. 149.
- Hennock, E. P. (2007), The Origin of the Welfare State in England and Germany, 1850–1914: Social Policies Compared
- Beck, Hermann (1995), Origins of the Authoritarian Welfare State in Prussia, 1815–1870
- Spencer, Elaine Glovka (Spring 1979), "Rules of the Ruhr: Leadership and Authority in German Big Business Before 1914", Business History Review, 53 (1), pp. 40–64, JSTOR 3114686
- Lambi, Ivo N. (March 1962), "The Protectionist Interests of the German Iron and Steel Industry, 1873–1879", Journal of Economic History, 22 (1), pp. 59–70, JSTOR 2114256
- Douglas W. Hatfield, "Kulturkampf: The Relationship of Church and State and the Failure of German Political Reform," Journal of Church and State (1981) 23#3 pp. 465–484 in JSTOR
- John C.G. Roehl, "Higher civil servants in Germany, 1890–1900" in James J. Sheehan, ed., Imperial Germany (1976) pp 128–151
- Margaret Lavinia Anderson, and Kenneth Barkin. "The myth of the Puttkamer purge and the reality of the Kulturkampf: Some reflections on the historiography of Imperial Germany." Journal of Modern History (1982): 647–686. esp. pp 657–62 in JSTOR
- Anthony J. Steinhoff, "Christianity and the creation of Germany," in Sheridan Gilley and Brian Stanley, eds., Cambridge History of Christianity: Volume 8: 1814–1914 (2008) p 295
- John K. Zeender in The Catholic Historical Review, Vol. 43, No. 3 (Oct., 1957), pp. 328–330.
- Rebecca Ayako Bennette, Fighting for the Soul of Germany: The Catholic Struggle for Inclusion after Unification (Harvard U.P. 2012)
- Blackbourn, David (Dec 1975). "The Political Alignment of the Centre Party in Wilhelmine Germany: A Study of the Party's Emergence in Nineteenth-Century Württemberg". Historical Journal. 18 (4): 821–850. doi:10.1017/s0018246x00008906. JSTOR 2638516.
- Clark, Christopher (2006). Iron Kingdom: The Rise and Downfall of Prussia, 1600–1947. pp. 568–576.
- Ronald J. Ross, The failure of Bismarck's Kulturkampf: Catholicism and state power in imperial Germany, 1871–1887 (1998).
- Weitsman, Patricia A. (2004), Dangerous alliances: proponents of peace, weapons of war, p. 79
- Belgum, Kirsten (1998). Popularizing the Nation: Audience, Representation, and the Production of Identity in "Die Gartenlaube," 1853–1900. p. 149.
- Neugebauer, Wolfgang (2003). Die Hohenzollern. Band 2 – Dynastie im säkularen Wandel (in German). Stuttgart: W. Kohlhammer. pp. 174–175. ISBN 3-17-012097-2.
- Kroll, Franz-Lothar (2000), "Wilhelm II. (1888–1918)", in Kroll, Franz-Lothar, Preussens Herrscher. Von den ersten Hohenzollern bis Wilhelm II. (in German), Munich: C.H. Beck, p. 290
- Christopher Clark, Kaiser Wilhelm II (2000) pp 35–47
- John C.G. Röhl, Wilhelm II: the Kaiser's personal monarchy, 1888–1900 (2004).
- On the Kaiser's "histrionic personality disorder", see Tipton (2003), pp. 243–45
- Röhl, J.C.G. (Sep 1966). "Friedrich von Holstein". Historical Journal. 9 (3): 379–388. doi:10.1017/s0018246x00026716.
- Woodward, David (July 1963). "Admiral Tirpitz, Secretary of State for the Navy, 1897–1916". History Today. 13 (8): 548–555.
- Herwig, Holger (1980). Luxury Fleet: The Imperial German Navy 1888–1918.
- Esthus, Raymond A. (1970). Theodore Roosevelt and the International Rivalries. pp. 66–111.
- Perkins, J.A. (Spring 1981). "The Agricultural Revolution in Germany 1850–1914". Journal of European Economic History. 10 (1): 71–119.
- Haber, Ludwig Fritz (1958), The chemical industry during the nineteenth century
- Webb, Steven B. (June 1980). "Tariffs, Cartels, Technology, and Growth in the German Steel Industry, 1879 to 1914". Journal of Economic History. 40 (2): 309–330. doi:10.1017/s0022050700108228. JSTOR 2120181.
- James, Harold (2012). Krupp: A History of the Legendary German Firm. Princeton University Press.
- Allen, Robert C. (Dec 1979). "International Competition in Iron and Steel, 1850–1913". Journal of Economic History. 39 (4): 911–37. doi:10.1017/s0022050700098673. JSTOR 2120336.
- Feldman, Gerald D.; Nocken, Ulrich (Winter 1975). "Trade Associations and Economic Power: Interest Group Development in the German Iron and Steel and Machine Building Industries, 1900–1933". Business History Review. 49 (4): 413–45. JSTOR 3113169.
- Brigitte Young, Triumph of the fatherland: German unification and the marginalization of women (1999).
- Guido, Diane J. (2010). The German League for the Prevention of Women's Emancipation: Anti-Feminism in Germany, 1912–1920. p. 3.
- Mazón, Patricia M. (2003). Gender and the Modern Research University: The Admission of Women to German Higher Education, 1865–1914. Stanford U.P. p. 53.
- John Anthony Moses and Paul M. Kennedy, Germany in the Pacific and Far East, 1870–1914 (1977).
- Sean McMeekin, The Berlin-Baghdad express: the Ottoman Empire and Germany's bid for world power, 1898–1918 (Penguin, 2011)
- Gann, L., and Peter Duignan, The Rulers of German Africa, 1884–1914 (1977) focuses on political and economic history; Perraudin, Michael, and Jürgen Zimmerer, eds. German Colonialism and National Identity (2010) focuses on cultural impact in Africa and Germany.
- Tilman Dedering, "The German‐Herero war of 1904: revisionism of genocide or imaginary historiography?." Journal of Southern African Studies (1993) 19#1 pp: 80–88.
- Jeremy Sarkin, Germany's Genocide of the Herero: Kaiser Wilhelm II, His General, His Settlers, His Soldier (2011)
- Kirsten Dyck, "Situating the Herero Genocide and the Holocaust among European Colonial Genocides." Przegląd Zachodni (2014) #1 pp: 153–172. abstract
- Kennedy, Paul M. (1980). The Rise of the Anglo-German Antagonism, 1860–1914. pp. 464–470.
- Winter, J.M. (1999). Capital Cities at War: Paris, London, Berlin, 1914–1919.
- Strachan, Hew (2004). The First World War.
- Spencer C. Tucker (2005). World War One. ABC-CLIO. p. 225.
- Zara S. Steiner (2005). The Lights that Failed: European International History, 1919–1933. Oxford U.P. p. 68.
- Herwig, Holger H. (1996). The First World War: Germany and Austria–Hungary 1914–1918.
- Paschall, Rod (1994). The defeat of imperial Germany, 1917–1918.
- Feldman, Gerald D. "The Political and Social Foundations of Germany's Economic Mobilization, 1914–1916," Armed Forces & Society (1976) 3#1 pp 121–145. online
- Chickering, Roger (2004). Imperial Germany and the Great War, 1914–1918. pp. 141–42.
- For a comparison see Timothy S. Brown, Weimar radicals: Nazis and communists between authenticity and performance (2009) pp 149–53
- "The political parties in the Weimar Republic" (PDF). Deutscher Bundestag. March 2006. Archived from the original (PDF) on 25 November 2011. Retrieved 18 September 2011.
- Marks, Sally (1978). "The Myths of Reparations". Central European History. 11 (3): 231–55. doi:10.1017/s0008938900018707. JSTOR 4545835.
- Evans 2003, pp. 247–283.
- Richard F. Hamilton, Who Voted for Hitler? (1982)
- Evans 2003, pp. 283–308.
- "Nobel Prize". Nobelprize.org. Retrieved 19 November 2009.
- Joll, James (April 1985). "Two Prophets of the Twentieth Century: Spengler and Toynbee". Review of International Studies. 11 (2): 91–104. doi:10.1017/s026021050011424x.
- Stackelberg, Roderick (2007). The Routledge companion to Nazi Germany. p. 135.
- Ash, Mitchell G.; Söllner, Alfons, eds. (1996). Forced Migration and Scientific Change: Emigré German-Speaking Scientists and Scholars after 1933.
- Kershaw, Ian (2001). The "Hitler Myth": Image and Reality in the Third Reich.
- Williamson, David (2002). "Was Hitler a Weak Dictator?". History Review: 9+.
- Evans 2003, pp. 329–334.
- Evans 2003, p. 354.
- Evans 2003, p. 336.
- Evans 2003, p. 351.
- Geary, Dick (October 1998). "Who voted for the Nazis? (electoral history of the National Socialist German Workers Party)". History Today. 48 (10): 8–14.
- Kershaw 2008, pp. 309–314.
- M. Patricia Marchak (2003). Reigns of Terror. McGill-Queen's Press — MQUP. p. 195. ISBN 978-0-7735-2642-6.
- Evans 2003, p. 344.
- Majer 2003, p. 92.
- Kershaw 2008, p. 345.
- Evans 2005, p. 544.
- Friedlander, Saul (1998). Nazi Germany and the Jews. 1: The Years of Persecution 1933–1939.
- Interpreting the 20th Century: The Struggle Over Democracy, The Holocaust, Pamela Radcliff, pp. 104–107
- Jennifer Rosenberg. "Holocaust Facts". About.com Education.
- Bullock, Alan (1991). Hitler: a study in tyranny. p. 170.
- Evans 2005, p. 633.
- Evans 2005, pp. 632–637.
- Thacker, Toby (2009). Joseph Goebbels: Life and Death. pp. 182–184.
- Bridenthal, Renate; Grossmann, Atina; Kaplan, Marion (1984). When Biology Became Destiny: Women in Weimar and Nazi Germany.
- Evans 2005, p. 331.
- Evans 2005, pp. 516–517.
- Longerich 2012, p. 371.
- Kershaw 2008, p. 749.
- Koonz, Claudia (1988). Mothers in the Fatherland: Women, the Family and Nazi Politics.
- Hagemann, Karen (2011). "Mobilizing Women for War: The History, Historiography, and Memory of German Women's War Service in the Two World Wars". Journal of Military History. 75 (4): 1055–1094.
- Campbell, D'Ann (April 1993). "Women in Combat: The World War Two Experience in the United States, Great Britain, Germany, and the Soviet Union". Journal of Military History. 57: 301–323. doi:10.2307/2944060.
- Evans 2005, pp. 618, 623, 632–637, 641, 646–652, 671–674, 683.
- Beevor 2012, pp. 22, 27–28.
- Beevor 2012, pp. 70–71, 79.
- Kershaw 2008, p. 562.
- Richard Overy, The Bombers and the Bombed: Allied Air War Over Europe 1940–1945 (2014) pp 306–7
- David Clay Large (2001). Berlin. Basic Books. p. 482.
- Peter Stearns (2013). Demilitarization in the Contemporary World. University of Illinois Press. p. 176.
- Bessel, Richard (2009). Germany 1945: From War to Peace. Harper Collins Publishers. ISBN 978-0-06-054036-4.
- Robert Bard, Historical Memory and the expulsion of ethnic Germans in Europe, 1944 (PhD. Diss. University of Hertfordshire, 2009) online
- "The Potsdam Declaration". Carlisle Barracks, Pa.: Book Department, Army Information School. May 1946.
- Schechtman, Joseph B. (April 1953). "Postwar Population Transfers in Europe: A Survey". Review of Politics. 15 (2): 151–178. doi:10.1017/s0034670500008081. "Most had left" is p. 158 in JSTOR
- Davidson, Eugene. The death and life of Germany: an account of the American occupation. p. 121.
- Liberman, Peter (1996). Does Conquest Pay? The Exploitation of Occupied Industrial Societies. p. 147.
- 2.3 million units out of 9.5 million were destroyed.
- Tipton, Frank B. (2003). A History of Modern Germany since 1815. pp. 508–513, 596–599.
- Hoover, Calvin B. (May 1946). "The Future of the German Economy". American Economic Review. 36 (2): 642–649. JSTOR 1818235.
- Milward, Alan S. (1984). The Reconstruction of Western Europe: 1945–51. pp. 356, 436.
- Ardagh, John (1987). Germany and the Germans. pp. 74–82, 84.
- Gareau, Frederick H. (Jun 1961). "Morgenthau's Plan for Industrial Disarmament in Germany". Western Political Quarterly. 14 (2): 517–534. doi:10.2307/443604.
- "Conferences: Pas de Pagaille!". Time. 28 July 1947.
- For US and Allied official policy statements see U.S. Dept. of State Germany, 1947–1949: The Story in Documents (1950) – available online; these are primary sources.
- Gienow-Hecht, Jessica C.E. (1999). "Art is democracy and democracy is art: Culture, propaganda, and the Neue Zeitung in Germany". Diplomatic History. 23 (1): 21–43. doi:10.1111/0145-2096.00150.
- Bruce, Gary (2010), The Firm: The Inside Story of the Stasi
- Fulbrook, Mary (2008). The People's State: East German Society from Hitler to Honecker.
- Granville, Johanna (Sep 2006). "East Germany in 1956: Walter Ulbricht's Tenacity in the Face of Opposition". Australian Journal of Politics and History. 52 (3): 417–438. doi:10.1111/j.1467-8497.2006.00427.x.
- Biesinger, Joseph A. (2006), Germany: a reference guide from the Renaissance to the present, p. 270
- Taylor, Frederick (2008), The Berlin Wall: A World Divided, 1961–1989
- Pence, Katherine; Betts, Paul (2011). Socialist modern: East German everyday culture and politics (4 ed.). University of Michigan Press. pp. 37, 59.
- Jürgen Kocka (2010). Civil Society and Dictatorship in Modern German History. UPNE. p. 37.
- The Christian Social Union or CSU is the Bavarian branch of the CDU. It has always operated in close collaboration with the CDU, and the CDU/CSU is usually treated as a single party in national affairs.
- Jürgen Weber, Germany, 1945–1990: A Parallel History (Budapest, Central European University Press, 2004) in Questia
- Weber, Jurgen (2004). Germany, 1945–1990. Central European University Press. pp. 37–60, 103–18, 167–88, 221–264.
- Fürstenberg, Friedrich (May 1977). "West German Experience with Industrial Democracy". Annals of the American Academy of Political and Social Science. 431: 44–53. doi:10.1177/000271627743100106. JSTOR 1042033.
- Junker, Detlef, ed. (2004). The United States and Germany in the Era of the Cold War, 1945–1968. 1. Cambridge University Press. pp. 291–309.
- Sauermann, Heinz (1950). "The Consequences of the Currency Reform in Western Germany". Review of Politics. 12 (2): 175–196. doi:10.1017/s0034670500045009. JSTOR 1405052.
- Giangreco, D. M.; Griffin, Robert E. (1988). Airbridge to Berlin: The Berlin Crisis of 1948, Its Origins and Aftermath. Presidio Press.
- Williams, Charles (2000). Konrad Adenauer: The Father of the New Germany.
- Hiscocks, Richard (1975). The Adenauer era. p. 290.
- Granieri, Ronald J. (2005). "Review". Journal of Interdisciplinary History. 36 (2): 262, 263. doi:10.1162/0022195054741190.
- Walter, Norbert. "The Evolving German Economy: Unification, the Social Market, European and Global Integration". SAIS Review 15 (Special Issue 1995): 55–81. Quote from p. 64
- Mierzejewski, Alfred C. (2004). Ludwig Erhard: a biography.
- Mierzejewski, Alfred C. (2004), "1957: Ludwig Erhard's Annus Terribilis", Essays in Economic and Business History, 22: 17–27, ISSN 0896-226X
- Turner, Henry Ashby (1987). The two Germanies since 1945. pp. 80–82.
- Shonick, Kaja (Oct 2009). "Politics, Culture, and Economics: Reassessing the West German Guest Worker Agreement with Yugoslavia". Journal of Contemporary History. 44 (4): 719–736. doi:10.1177/0022009409340648.
- Castles, Stephen. "The Guests Who Stayed – The Debate on 'Foreigners Policy' in the German Federal Republic". International Migration Review. 19 (3): 517–534. JSTOR 2545854.
- Ewing, Katherine Pratt (Spring–Summer 2003). "Living Islam in the Diaspora: Between Turkey and Germany". South Atlantic Quarterly. 102 (2/3): 405–431. doi:10.1215/00382876-102-2-3-405. In Project MUSE
- Mandel, Ruth (2008). Cosmopolitan Anxieties: Turkish Challenges to Citizenship and Belonging in Germany. Duke University Press.
- Fink, Carole; Schaefer, Bernd, eds. (2009). Ostpolitik, 1969–1974: European and Global Responses.
- Fulbrook, Mary (2002). History of Germany, 1918–2000: the divided nation. p. 170.
- Sinn, Hans-Werner (2007). Can Germany be saved?: the malaise of the world's first welfare state. MIT Press. p. 183.
- Cerny, Karl H. (1990). Germany at the polls: the Bundestag elections of the 1980s. p. 113.
- For a primary source see Helmut Schmidt, Men and Power: A Political Retrospective (1990)
- Pruys, Karl (1996). Kohl: Genius of the Present: A Biography of Helmut Kohl.
- For primary sources in English translation and a brief survey see Konrad H. Jarausch, and Volker Gransow, eds. Uniting Germany: Documents and Debates, 1944–1993 (1994)
- Hockenos, Paul (2008). Joschka Fischer and the making of the Berlin Republic. pp. 313–14.
- Bolgherini, Silvia; Grotz, Florian, eds. (2010). Germany After the Grand Coalition: Governance and Politics in a Turbulent Environment. Palgrave Macmillan.
- Mufson, Steven (30 May 2011). "Germany to close all of its nuclear plants by 2022". Washington Post.
- "Migrant crisis: Migration to Europe explained in seven charts". 28 January 2016. Retrieved 31 January 2016.
- "Chancellor Running Out of Time on Refugee Issue". 19 January 2016. Retrieved 7 June 2017.
- "Merkel Critic Says Chancellor's Refugee Policy Is a 'Time Bomb'". 9 August 2016. Retrieved 7 June 2017.
- Heinrich August Winkler, Germany: The Long Road West (2006), vol 1 p 1
Bibliography
- Barraclough, Geoffrey (1984). The Origins of Modern Germany.
- Beevor, Antony (2012). The Second World War. New York: Little, Brown. ISBN 978-0-316-02374-0.
- Bradbury, Jim (2004). The Routledge Companion to Medieval Warfare. Routledge Companions to History. Routledge. ISBN 9781134598472.
- Bowman, Alan K.; Garnsey, Peter; Cameron, Averil (2005). The Crisis of Empire, A.D. 193–337. The Cambridge Ancient History. 12. Cambridge University Press. ISBN 0-521-30199-8.
- Carr, William (1991). A History of Germany: 1815-1990 (4 ed.). Routledge. ISBN 0-340-55930-6.
- Carsten, Francis (1958). The Origins of Prussia.
- Claster, Jill N. (1982). Medieval Experience: 300–1400. New York University Press. ISBN 0-8147-1381-5.
- Damminger, Folke (2003). "Dwellings, Settlements and Settlement Patterns in Merovingian Southwest Germany and adjacent areas". In Wood, Ian. Franks and Alamanni in the Merovingian Period: An Ethnographic Perspective. Studies in Historical Archaeoethnology. Volume 3 (Revised ed.). Boydell & Brewer. ISBN 9781843830351. ISSN 1560-3687.
- Day, Clive (1914). A History of Commerce.
- Drew, Katherine Fischer (2011). The Laws of the Salian Franks. The Middle Ages Series. University of Pennsylvania Press. ISBN 9780812200508.
- Evans, Richard J. (2003). The Coming of the Third Reich. New York: Penguin Books. ISBN 978-0-14-303469-8.
- Evans, Richard J. (2005). The Third Reich in Power. New York: Penguin. ISBN 978-0-14-303790-3.
- Fichtner, Paula S. (2009). Historical Dictionary of Austria. Volume 70 (2nd ed.). Scarecrow Press. ISBN 9780810863101.
- Fortson, Benjamin W. (2011). Indo-European Language and Culture: An Introduction. Blackwell Textbooks in Linguistics. Volume 30 (2nd ed.). John Wiley & Sons. ISBN 9781444359688.
- Green, Dennis H. (2000). Language and History in the Early Germanic World (Revised ed.). Cambridge University Press. ISBN 9780521794237.
- Green, Dennis H. (2003). "Linguistic evidence for the early migrations of the Goths". In Heather, Peter. The Visigoths from the Migration Period to the Seventh Century: An Ethnographic Perspective. Volume 4 (Revised ed.). Boydell & Brewer. ISBN 9781843830337.
- Goffart, Walter A. (1988). The Narrators of Barbarian History (A.D. 550–800): Jordanes, Gregory of Tours, Bede, and Paul the Deacon. Princeton University Press.
- Heather, Peter J. (2006). The Fall of the Roman Empire: A New History of Rome and the Barbarians (Reprint ed.). Oxford University Press. ISBN 9780195159547.
- Historicus (1935). Frankreichs 33 Eroberungskriege [France's 33 wars of conquest] (in German). Translated from the French. Foreword by Alcide Ebray (3rd ed.). Internationaler Verlag. Retrieved 2015-11-21.
- Heather, Peter (2010). Empires and Barbarians: The Fall of Rome and the Birth of Europe. Oxford University Press.
- Hen, Yitzhak (1995). Culture and Religion in Merovingian Gaul: A.D. 481–751. Cultures, Beliefs and Traditions: Medieval and Early Modern Peoples Series. Volume 1. Brill. ISBN 9789004103474. Retrieved 2015-11-26.
- Kershaw, Ian (2008). Hitler: A Biography. New York: W. W. Norton & Company. ISBN 978-0-393-06757-6.
- Kibler, William W., ed. (1995). Medieval France: An Encyclopedia. Garland Encyclopedias of the Middle Ages. Volume 2. Psychology Press. ISBN 9780824044442. Retrieved 2015-11-26.
- Kristinsson, Axel (2010). "Germanic expansion and the fall of Rome". Expansions: Competition and Conquest in Europe Since the Bronze Age. ReykjavíkurAkademían. ISBN 9789979992219.
- Longerich, Peter (2012). Heinrich Himmler: A Life. Oxford; New York: Oxford University Press. ISBN 978-0-19-959232-6.
- Majer, Diemut (2003). "Non-Germans" under the Third Reich: The Nazi Judicial and Administrative System in Germany and Occupied Eastern Europe, with Special Regard to Occupied Poland, 1939–1945. Baltimore; London: Johns Hopkins University Press. ISBN 0-8018-6493-3.
- Nipperdey, Thomas (1996). Germany from Napoleon to Bismarck: 1800-1866. Princeton University Press. ISBN 0691607559.
- Ozment, Steven (2005). A Mighty Fortress: A New History of the German People. Harper Perennial. ISBN 978-0060934835.
- Rodes, John E. (1964). Germany: A History. Holt, Rinehart and Winston. ASIN B0000CM7NW.
- Rüger, C. (2004) . "Germany". In Bowman, Alan K.; Champlin, Edward; Lintott, Andrew. The Cambridge Ancient History: X, The Augustan Empire, 43 B.C. – A.D. 69. Volume 10 (2nd ed.). Cambridge University Press. ISBN 0-521-26430-8.
- Schulman, Jana K. (2002). The Rise of the Medieval World, 500–1300: A Biographical Dictionary. Greenwood Press.
- Sheehan, James J. (1989). German History: 1770–1866.
- Thompson, James Westfall (1931). Economic and Social History of Europe in the Later Middle Ages (1300–1530).
- Van Dam, Raymond (1995). "8: Merovingian Gaul and the Frankish conquests". In Fouracre, Paul. The New Cambridge Medieval History. 1, C.500-c.700. Cambridge University Press. ISBN 9780521853606. Retrieved 2015-11-23.
- Wiesflecker, Hermann (1991). Maximilian I. (in German). Verlag für Geschichte und Politik. ISBN 9783702803087. Retrieved 2015-11-21.
- Wilson, Peter H. (2016). Heart of Europe: A History of the Holy Roman Empire. Belknap Press. ISBN 978-0-674-05809-5.
- Wanger, Günther A. "Radiometric dating of the type-site for Homo heidelbergensis at Mauer, Germany". Proceedings of the National Academy of Sciences. 107. doi:10.1073/pnas.1012722107. PMC 2993404. Retrieved 6 October 2010.
- Bordewich, Fergus M. (September 2005). "The Ambush That Changed History: An amateur archaeologist discovers the field where wily Germanic warriors halted the spread of the Roman Empire". Smithsonian magazine.
- Lee, Loyd E. (1985). "The German Confederation and the Consolidation of State Power in the South German States, 1815–1848". Consortium on Revolutionary Europe 1750–1850: Proceedings. 15.
Atlas and maps
- Atlas of Germany Wikipedia maps; not copyright
- Biesinger, Joseph A. Germany: a reference guide from the Renaissance to the present (2006)
- Bithell, Jethro, ed. Germany: A Companion to German Studies (5th ed. 1955), 578pp; essays on German literature, music, philosophy, art and, especially, history. online edition
- Bösch, Frank. Mass Media and Historical Change: Germany in International Perspective, 1400 to the Present (Berghahn, 2015). 212 pp. online review
- Buse, Dieter K. ed. Modern Germany: An Encyclopedia of History, People, and Culture 1871–1990 (2 vol 1998)
- Clark, Christopher. Iron Kingdom: The Rise and Downfall of Prussia, 1600–1947 (2006)
- Detwiler, Donald S. Germany: A Short History (3rd ed. 1999) 341pp; online edition
- Fulbrook, Mary (1990). A Concise History of Germany. Cambridge concise histories. Cambridge University Press. ISBN 0521-36836-7. This text has updated editions.
- Gall, Lothar. Milestones - Setbacks - Sidetracks: The Path to Parliamentary Democracy in Germany, Historical Exhibition in the Deutscher Dom in Berlin (2003), exhibit catalog; heavily illustrated, 420pp; political history since 1800
- Holborn, Hajo. A History of Modern Germany (1959–64); vol 1: The Reformation; vol 2: 1648–1840; vol 3: 1840–1945; standard scholarly survey
- Maehl, William Harvey. Germany in Western Civilization (1979), 833pp; focus on politics and diplomacy
- Ozment, Steven. A Mighty Fortress: A New History of the German People (2005), focus on cultural history
- Raff, Diether. History of Germany from the Medieval Empire to the Present (1988) 507pp
- Reinhardt, Kurt F. Germany: 2000 Years (2 vols., 1961), stress on cultural topics
- Richie, Alexandra. Faust's Metropolis: A History of Berlin (1998), 1168 pp by scholar; excerpt and text search; emphasis on 20th century
- Sagarra, Eda. A Social History of Germany 1648–1914 (1977, 2002 edition)
- Schulze, Hagen, and Deborah Lucas Schneider. Germany: A New History (2001)
- Smith, Helmut Walser, ed. The Oxford Handbook of Modern German History (2011), 862 pp; 35 essays by specialists; Germany since 1760
- Snyder, Louis, ed. Documents of German history (1958) online. 620pp; 167 primary sources in English translation
- Taylor, A.J.P. The Course of German History: A Survey of the Development of German History since 1815. (2001). 280pp; online edition
- Watson, Peter. The German Genius (2010). 992 pp covers many thinkers, writers, scientists etc. since 1750; ISBN 978-0-7432-8553-7
- Winkler, Heinrich August. Germany: The Long Road West (2 vol, 2006), since 1789; excerpt and text search vol 1
- Zabecki, David T., ed. Germany at War: 400 Years of Military History (4 vol. 2015)
- Arnold, Benjamin. Medieval Germany, 500–1300: A Political Interpretation (1998)
- Arnold, Benjamin. Power and Property in Medieval Germany: Economic and Social Change, c. 900–1300 (Oxford University Press, 2004) online edition
- Barraclough, Geoffrey. The Origins of Modern Germany (2d ed., 1947)
- Fuhrmann, Horst. Germany in the High Middle Ages: c. 1050–1200 (1986)
- Haverkamp, Alfred, Helga Braun, and Richard Mortimer. Medieval Germany 1056–1273 (1992)
- Innes; Matthew. State and Society in the Early Middle Ages: The Middle Rhine Valley, 400–1000 (Cambridge U.P. 2000) online edition
- Jeep, John M. Medieval Germany: An Encyclopedia (2001), 928pp, 650 articles by 200 scholars cover AD 500 to 1500
- Nicholas, David. The Northern Lands: Germanic Europe, c. 1270–c. 1500 (Wiley-Blackwell, 2009). 410 pages.
- Reuter, Timothy. Germany in the Early Middle Ages, c. 800–1056 (1991)
- Bainton, Roland H. Here I Stand: A Life of Martin Luther (1978; reprinted 1995)
- Dickens, A. G. Martin Luther and the Reformation (1969), basic introduction
- Holborn, Hajo. A History of Modern Germany: vol 1: The Reformation (1959)
- Junghans, Helmar. Martin Luther: Exploring His Life and Times, 1483–1546. (book plus CD ROM) (1998)
- MacCulloch, Diarmaid. The Reformation (2005), influential recent survey
- Ranke, Leopold von. History of the Reformation in Germany (1905) 792 pp; by Germany's foremost scholar complete text online free
- Smith, Preserved. The Age of the Reformation (1920) 861 pages; complete text online free
Early Modern to 1815
- Asprey, Robert B. Frederick the Great: The Magnificent Enigma (2007)
- Atkinson, C.T. A history of Germany, 1715–1815 (1908) old; focus on political-military-diplomatic history of Germany and Austria online edition
- Blanning, Tim. Frederick the Great: King of Prussia (2016), major new scholarly biography
- Bruford W.H. Germany In The Eighteenth Century The Social Background Of The Literary Revival (1935, 1971) online free to borrow, covers social history
- Clark, Christopher. Iron Kingdom: The Rise and Downfall of Prussia, 1600–1947 (2006)
- Gagliardo, John G. Germany under the Old Regime, 1600–1790 (1991) online edition
- Heal, Bridget. The Cult of the Virgin Mary in Early Modern Germany: Protestant and Catholic Piety, 1500–1648 (2007)
- Holborn, Hajo. A History of Modern Germany. Vol 2: 1648–1840 (1962)
- Hughes, Michael. Early Modern Germany, 1477–1806 (1992)
- Ogilvie, Sheilagh. Germany: A New Social and Economic History, Vol. 1: 1450–1630 (1995) 416pp; Germany: A New Social and Economic History, Vol. 2: 1630–1800 (1996), 448pp
- Ogilvie, Sheilagh. A Bitter Living: Women, Markets, and Social Capital in Early Modern Germany (2003) DOI:10.1093/acprof:oso/9780198205548.001.0001 online
- Ozment, Steven. Flesh and Spirit: Private Life in Early Modern Germany (2001).
- Blackbourn, David. The Long Nineteenth Century: A History of Germany, 1780–1918 (1998) excerpt and text search
- Blackbourn, David, and Geoff Eley. The Peculiarities of German History: Bourgeois Society and Politics in Nineteenth-Century Germany (1984) online edition
- Brandenburg, Erich. From Bismarck to the World War: A History of German Foreign Policy 1870–1914 (19330 online 562pp; an old standard scholarly history
- Brose, Eric Dorn. German History, 1789–1871: From the Holy Roman Empire to the Bismarckian Reich. (1997) online edition
- Craig, Gordon A. Germany, 1866–1945 (1978) online edition
- Hamerow, Theodore S. ed. Age of Bismarck: Documents and Interpretations (1974), 358pp; 133 excerpts from primary sources put in historical context by Professor Hamerow
- Hamerow, Theodore S. ed. Otto Von Bismarck and Imperial Germany: A Historical Assessment (1993), excerpts from historians and primary sources
- Nipperdey, Thomas. Germany from Napoleon to Bismarck: 1800–1866 (1996; online edition from Princeton University Press 2014), very dense coverage of every aspect of German society, economy and government. excerpt
- Ogilvie, Sheilagh, and Richard Overy. Germany: A New Social and Economic History Volume 3: Since 1800 (2004)
- Pflanze Otto, ed. The Unification of Germany, 1848–1871 (1979), essays by historians
- Sheehan, James J. German History, 1770–1866 (1993), the major survey in English
- Steinberg, Jonathan. Bismarck: A Life (2011), a major scholarly biography
- Stern, Fritz. Gold and Iron: Bismark, Bleichroder, and the Building of the German Empire (1979) Bismark worked closely with this leading banker and financier excerpt and text search
- Taylor, A.J.P. Bismarck: The Man and the Statesman (1967) online edition
- Wehler, Hans-Ulrich. The German Empire 1871–1918 (1984)
- Berghahn, Volker Rolf. Modern Germany: society, economy, and politics in the twentieth century (1987) ACLS E-book
- Berghahn, Volker Rolf. Imperial Germany, 1871–1914: Economy, Society, Culture, and Politics (2nd ed. 2005)
- Brandenburg, Erich. From Bismarck to the World War: A History of German Foreign Policy 1870–1914 (1927) online.
- Cecil, Lamar. Wilhelm II: Prince and Emperor, 1859–1900 (1989) online edition; vol2: Wilhelm II: Emperor and Exile, 1900–1941 (1996) online edition
- Craig, Gordon A. Germany, 1866–1945 (1978) online edition
- Dugdale, E.T.S. ed. German Diplomatic Documents 1871–1914 (4 vol 1928–31), in English translation. online
- Gordon, Peter E., and John P. McCormick, eds. Weimar Thought: A Contested Legacy (Princeton U.P. 2013) 451 pages; scholarly essays on law, culture, politics, philosophy, science, art and architecture
- Herwig, Holger H. The First World War: Germany and Austria–Hungary 1914–1918 (1996), ISBN 0-340-57348-1
- Kolb, Eberhard. The Weimar Republic (2005)
- Mommsen, Wolfgang J. Imperial Germany 1867–1918: Politics, Culture and Society in an Authoritarian State (1995)
- Peukert, Detlev. The Weimar Republic (1993)
- Retallack, James. Imperial Germany, 1871–1918 (Oxford University Press, 2008)
- Scheck, Raffael. "Lecture Notes, Germany and Europe, 1871–1945" (2008) full text online, a brief textbook by a leading scholar
- Watson, Alexander. Ring of Steel: Germany and Austria-Hungary in World War I (2014), excerpt
- Burleigh, Michael. The Third Reich: A New History. (2000). 864 pp. Stress on antisemitism;
- Evans, Richard J. The Coming of the Third Reich: A History. (2004) . 622 pp.; a major scholarly survey; The Third Reich in Power: 1933–1939. (2005). 800 pp.; The Third Reich at War 1939–1945 (2009)
- Overy, Richard. The Dictators: Hitler's Germany and Stalin's Russia (2004); comparative history
- Roderick, Stacke. Hitler's Germany: Origins, Interpretations, Legacies (1999)
- Spielvogel, Jackson J. and David Redles. Hitler and Nazi Germany (6th ed. 2009) excerpt and text search, 5th ed. 2004
- Zentner, Christian and Bedürftig, Friedemann, eds. The Encyclopedia of the Third Reich. (2 vol. 1991). 1120 pp.
- Bullock, Alan. Hitler: A Study in Tyranny, (1962) online edition
- Friedlander, Saul. Nazi Germany and the Jews, 1933–1945 (2009) abridged version of the standard two volume history
- Kershaw, Ian. Hitler, 1889–1936: Hubris. vol. 1. 1999. 700 pp. ; vol 2: Hitler, 1936–1945: Nemesis. 2000. 832 pp.; the leading scholarly biography.
- Koonz, Claudia. Mothers in the Fatherland: Women, Family Life, and Nazi Ideology, 1919–1945. (1986). 640 pp. The major study
- Speer, Albert. Inside the Third Reich: Memoirs 1970.
- Stibbe, Matthew. Women in the Third Reich, 2003, 208 pp.
- Tooze, Adam. The Wages of Destruction: The Making and Breaking of the Nazi Economy (2007), highly influential new study; online review by Richard Tilly; summary of reviews
- Thomsett, Michael C. The German Opposition to Hitler: The Resistance, the Underground, and Assassination Plots, 1938–1945 (2nd ed 2007) 278 pages
- Bark, Dennis L., and David R. Gress. A History of West Germany Vol 1: From Shadow to Substance, 1945–1963 (1992); ISBN 978-0-631-16787-7; vol 2: Democracy and Its Discontents 1963–1988 (1992) ISBN 978-0-631-16788-4
- Berghahn, Volker Rolf. Modern Germany: society, economy, and politics in the twentieth century (1987) ACLS E-book online
- Hanrieder, Wolfram F. Germany, America, Europe: Forty Years of German Foreign Policy (1989) ISBN 0-300-04022-9
- Jarausch, Konrad H. After Hitler: Recivilizing Germans, 1945–1995 (2008)
- Junker, Detlef, ed. The United States and Germany in the Era of the Cold War (2 vol 2004), 150 short essays by scholars covering 1945–1990 excerpt and text search vol 1; excerpt and text search vol 2
- Main, Steven J. "The Soviet Occupation of Germany. Hunger, Mass Violence and the Struggle for Peace, 1945–1947." Europe-Asia Studies (2014) 66#8 pp: 1380–1382.
- Schwarz, Hans-Peter. Konrad Adenauer: A German Politician and Statesman in a Period of War, Revolution and Reconstruction (2 vol 1995) excerpt and text search vol 2; also full text vol 1; and full text vol 2
- Smith, Gordon, ed, Developments in German Politics (1992) ISBN 0-8223-1266-2, broad survey of reunified nation
- Weber, Jurgen. Germany, 1945–1990 (Central European University Press, 2004) online edition
- Beate Ruhm Von Oppen, ed. Documents on Germany under Occupation, 1945–1954 (Oxford University Press, 1955) online
- Fulbrook, Mary. Anatomy of a Dictatorship: Inside the GDR, 1949–1989 (1998)
- Fulbrook, Mary. The People's State: East German Society from Hitler to Honecker (2008) excerpt and text search
- Harsch, Donna. Revenge of the Domestic: Women, the Family, and Communism in the German Democratic Republic (2008)
- Jarausch, Konrad H.. and Eve Duffy. Dictatorship As Experience: Towards a Socio-Cultural History of the GDR (1999)
- Jarausch, Konrad H., and Volker Gransow, eds. Uniting Germany: Documents and Debates, 1944–1993 (1994), primary sources on reunification
- A. James McAdams, "East Germany and Detente." Cambridge University Press, 1985.
- McAdams, A. James. "Germany Divided: From the Wall to Reunification." Princeton University Press, 1992 and 1993.
- Pence, Katherine, and Paul Betts, eds. Socialist Modern: East German Everyday Culture and Politics (2008) excerpt and text search
- Pritchard, Gareth. The Making of the GDR, 1945–53 (2004)
- Ross, Corey. The East German Dictatorship: Problems and Perspectives in the Interpretation of the GDR (2002)
- Steiner, André. The Plans That Failed: An Economic History of East Germany, 1945–1989 (2010)
- Berghahn, Volker R., and Simone Lassig, eds. Biography between Structure and Agency: Central European Lives in International Historiography (2008)
- Chickering, Roger, ed. Imperial Germany: A Historiographical Companion (1996), 552pp; 18 essays by specialists;
- Evans, Richard J. Rereading German History: From Unification to Reunification, 1800–1996 (1997) online edition
- Hagemann, Karen, and Jean H. Quataert, eds. Gendering Modern German History: Rewriting Historiography (2008)
- Hagemann, Karen (2007). "From the Margins to the Mainstream? Women's and Gender History in Germany". Journal of Women's History. 19 (1): 193–199. doi:10.1353/jowh.2007.0014.
- Hagen, William W. German History in Modern Times: Four Lives of the Nation (2012) excerpt
- Jarausch, Konrad H., and Michael Geyer, eds. Shattered Past: Reconstructing German Histories (2003)
- Klessmann, Christoph. The Divided Past: Rewriting Post-War German History (2001) online edition
- Lehmann, Hartmut, and James Van Horn Melton, eds. Paths of Continuity: Central European Historiography from the 1930s to the 1950s (2003)
- Perkins, J. A. "Dualism in German Agrarian Historiography, Comparative Studies in Society and History, Apr 1986, Vol. 28 Issue 2, pp 287–330,
- Rüger, Jan, and Nikolaus Wachsmann, eds. Rewriting German history: new perspectives on modern Germany (Palgrave Macmillan, 2015).
- Stuchtey, Benedikt, and Peter Wende, eds. British and German Historiography, 1750–1950: Traditions, Perceptions, and Transfers (2000) | https://en.wikipedia.org/wiki/History_of_Germany | 18 |
10 | When you "raise a number to a power," you're multiplying the number by itself, and the "power" tells you how many copies of the number to multiply together. So 2 raised to the 3rd power is the same as 2 x 2 x 2, which equals 8. When you raise a number to a fraction, however, you're going in the opposite direction -- you're trying to find the "root" of the number.
The mathematical term for raising a number to a power is "exponentiation." An exponential expression has two parts: the base, which is the number you are raising, and the exponent, which is the "power." So when you raise 2 to the 3rd power, the base is 2 and the exponent is 3. Raising the base to the 2nd power is commonly called squaring the base, while raising it to the 3rd power is commonly called cubing the base. Mathematicians usually write exponential expressions with the exponent in superscript -- that is, as a small number to the upper right of the base. Because some computers, calculators and other devices don't handle superscript very well, exponential expressions are also commonly written like this: 2^3. The caret -- the upward-pointing symbol -- tells you that what follows is the exponent.
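Most programming languages likewise write the exponent inline with an operator rather than in superscript. A quick sketch in Python (chosen here only as an example; the idea is the same in any language):

    base = 2
    exponent = 3
    print(base ** exponent)  # Python spells exponentiation as **, so this prints 8
    print(pow(2, 3))         # the built-in pow() function gives the same result: 8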
In math, "roots" are a bit like exponents in reverse. For example, take "2 to the 4th power," abbreviated as 2^4. That's equal to 2 x 2 x 2 x 2, or 16. Since 2 multiplied by itself four times equals 16, the "4th root" of 16 is 2. Now look at the number 729. That breaks down to 9 x 9 x 9 -- so 9 is the 3rd root of 729. It also breaks down to 3 x 3 x 3 x 3 x 3 x 3 -- so 3 is the 6th root of 729. The 2nd root of a number is commonly called the square root, and the 3rd root is the cube root.
When the exponent is a fraction, you're looking for a root of the base. The root corresponds to the denominator of the fraction. For example, take "125 raised to the 1/3 power," or 125^1/3. The denominator of the fraction is 3, so you're looking for the 3rd root (or cube root) of 125. Because 5 x 5 x 5 = 125, the 3rd root of 125 is 5. Thus, 125^1/3 = 5. Now try 256^1/4. You're looking for the 4th root of 256. Since 4 x 4 x 4 x 4 = 256, the answer is 4.
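If you check these examples in code, note that a fraction like 1/3 is stored as an approximate decimal, so the computed roots come back as floating-point numbers that are only very close to the exact answers; that is a limitation of computer arithmetic, not of the rule itself. A minimal Python sketch:

    print(256 ** (1 / 4))         # 4th root of 256: prints 4.0 exactly
    print(125 ** (1 / 3))         # cube root of 125: prints roughly 4.999999999999999
    print(round(125 ** (1 / 3)))  # rounding recovers the exact root: 5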
Numerators Other Than 1
The fractional exponents discussed to this point -- 1/3 and 1/4 -- have each had a numerator of 1. If the numerator is something other than 1, the exponent is actually instructing you to perform two operations: finding a root and raising to a power. For example, take 8^2/3. The denominator "3" tells you you're looking for a cube root; the numerator "2" tells you that you'll be raising to the 2nd power. It doesn't matter which operation you perform first. You'll get the same result either way. So you could start by taking the 3rd root of 8, which is 2, and then raising that to the 2nd power, which would give you 4. Or you could start by raising 8 to the 2nd power, which equals 64, and then taking the 3rd root of that number, which is 4. Same result.
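That order-independence is easy to verify numerically; both routes below land on (approximately) the same answer of 4:

    root_then_power = (8 ** (1 / 3)) ** 2  # 3rd root of 8 (which is 2), then square it
    power_then_root = (8 ** 2) ** (1 / 3)  # square 8 (giving 64), then take the 3rd root
    print(root_then_power, power_then_root)  # both print values essentially equal to 4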
A Universal Rule
In fact, the rule of "numerator as power, denominator as root" applies to all exponents -- even whole-number exponents and fractional exponents with a numerator of 1. For example, the whole number 2 is the equivalent of the fraction 2/1. So the exponential expression 9^2 is "really" 9^2/1. Raising 9 to the 2nd power gives you 81. Now you have to get the "1st root" of 81. But the 1st root of any number is the number itself, so the answer remains 81. Now look at the expression 9^1/2. You could start by raising 9 to the "1st power." But any number raised to the 1st power is the number itself. So all you have to do is get the square root of 9, which is 3. The rule still applies, but in these situations, you can skip a step. | https://sciencing.com/happens-raise-number-fraction-8535078.html | 18 |
24 | May 02, 2017 01:19 AM EDT
Dark matter makes up more than 80 percent of the matter in the universe. This mysterious majority has recently been studied by a group of astronomers using data collected by NASA's Chandra X-ray Observatory.
According to Phys.org, the astronomers probed the properties of dark matter using a total of 13 galaxy clusters, and their results suggest that the dark matter in the universe might not be cold, but fuzzy.
Cosmologists have been studying dark matter for decades. Although it cannot be observed directly, dark matter interacts through gravity with normal, luminous matter (that is, anything made up of protons, neutrons, and electrons packed into atoms). Taking advantage of this interaction, astronomers have studied the effects of dark matter using a variety of methods, including observations of the motion of stars in galaxies, the motion of galaxies in galaxy clusters, and the distribution of X-ray-emitting hot gas in galaxy clusters.
Astronomers have long struggled to pin down the details of dark matter, such as what it is made of and what its properties are. As NASA's Chandra X-ray Observatory team notes, the standard model holds that dark matter consists of "cold" particles more massive than a proton.
Here, "cold" means that the particles move at speeds much smaller than the speed of light. That model, however, has had trouble explaining the distribution of matter on the smaller scales of individual galaxies.
The newer model studied by the astronomers, fuzzy dark matter, does explain the distribution of dark matter in smaller galaxies. The team used Chandra observations of the hot gas in 13 galaxy clusters to check whether the fuzzy dark matter model also works at scales larger than individual galaxies.
| http://www.sciencetimes.com/articles/14014/20170502/fuzzy-dark-matter-model-of-universe-data-collected-from-nasas-chandra-x-ray-observatory.htm | 18
14 | Algebra marked the beginning of modern mathematics, moving it beyond arithmetic, which involves calculations featuring given numbers, to problems where some quantities are unknown. Now, it stands as a pillar of mathematics, underpinning the quantitative sciences, both social and physical.
This Very Short Introduction explains algebra from scratch. Over the course of ten logical chapters, Higgins offers a step-by-step approach for readers keen on developing their understanding of algebra. Using theory and example, he renews the reader's acquaintance with school mathematics before taking them progressively further and deeper into the subject.
ABOUT THE SERIES: The Very Short Introductions series from Oxford University Press contains hundreds of titles in almost every subject area. These pocket-sized books are the perfect way to get ahead in a new subject quickly. Our expert authors combine facts, analysis, perspective, new ideas, and enthusiasm to make interesting and challenging topics highly readable.
Peter M. Higgins is a Professor in Pure Mathematics at the University of Essex. He is the inventor of the Circular Sudoku puzzle type that now features in numerous newspapers, magazines and computer games. He has written extensively on the subject of mathematics, including Numbers: A Very Short Introduction (OUP, 2011) and Nets, Puzzles, and Postmen (OUP, 2007), which won the 2012 Premio Peano prize for the best book on mathematics, published in Italian.
1. Numbers and algebra ; 2. The laws of algebra ; 3. Linear equations and inequalities ; 4. Quadratic equations ; 5. The algebra of polynomials ; 6. Introduction to matrices ; 7. Matrices and groups ; 8. Determinants and matrices ; 9. Algebra and the arithmetic of remainders ; 10. Vector spaces ; Further Reading ; Index | https://www.whsmith.co.uk/products/algebra-a-very-short-introduction-very-short-introductions/9780198732822 | 18 |
58 | Today's lesson is an introduction to linear regression lines. The opener is on the second slide of today's lesson notes. To get started, we'll have a little fun.
I project this graph on the front board, and pose the question: If a man was 8 feet tall, how much would you expect him to weigh? It's low stakes, because the question is pretty absurd (though not impossible), and the idea is just to get kids thinking about making a prediction based on data in a scatter plot.
Note that as kids think about this question, they have to pay attention to how the x-axis is labeled and scaled. The question is about measuring height in feet, but the axis is labeled in inches. When they work on today's assignment, students will have to thoughtfully plan how they scale their axes.
There are plenty of possible answers here, because we can imagine differently-placed trend-lines on this graph. Depending on how we assess the trend, we'll get different answers, and that's part of the fun, and it really gets kids thinking, talking, and having some good-natured debates about what might happen. If any student wants to, I'll allow them to sketch some lines on the board. Even though we haven't yet studied lines of best-fit, it's natural to want to fit a line to this data and see where it goes. On the 3rd and 4th slides, I provide some space for kids to extrapolate up to 96 inches and beyond.
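For a concrete number to hold student guesses against, here is what extrapolating along one possible trend line looks like; the slope and intercept below are invented purely for illustration, since every hand-sketched line on the board would give different values:

    # Hypothetical trend line: weight = 5.0 * height - 220 (made-up coefficients)
    def predicted_weight(height_inches, slope=5.0, intercept=-220.0):
        return slope * height_inches + intercept

    print(predicted_weight(96))  # extrapolating to an 8-foot (96-inch) man: 260.0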
Now that I've got everyone talking and thinking in these terms, I say that today, we're going to learn about how to use the data in a scatter plot to create models that can help us make predictions like this.
What's a Regression?
I try to develop ideas and informal definitions as often as possible, so to begin today's lecture notes (see slides 5 through 14 of the lesson notes), I say that a regression is a statistical tool for modeling data. When we make predictions and sketch our own lines through data, like we did on the opener, we are doing the work that a regression does. Over the next few lessons, we'll look at some ways this is done.
Vocab Check: Association, Correlation, Causation
I post learning target 2.5 on the board (slide #5), which says:
I can fit a linear function to data that suggests a linear association.
I say, "One method of fitting a linear function to data is by running a linear regression. But before we go any further with that "r" word, there's another word in the SLT that deserves our attention: association."
In the last two lessons, I've used the term correlation with students, but formally, we've really only been looking at associations thus far. Now (on slide #6), I make the distinction for students that association is a general term used to describe whether or not one set of data moves with another. Correlation is more strictly defined, because it implies a linear relationship. Additionally, correlation can be measured. That's coming up later this week.
Finally, on slide #7, I provide some background notes to finish framing our work with linear regression.
Mini-Lesson: Median-Median Lines
Today, I'm going to teach a lesser-known linear regression method called the median-median line. I find that this topic really helps students get a feel for the data, and it's a nice review of median and writing the equation of a line through two points. Check out this article published by the American Statistical Association that describes how this method provides a simple way "to motivate the idea of fitting a straight line to data." In addition to providing a deeper background on the median-median line regression method, the article includes a few data sets that are great to use with students.
For today's lesson, I'm briefly abandoning context. I just want my students to play with the numbers, and practice this skill. Tomorrow, we'll jump right back into using linear regressions in context.
My notes for students are on slides #8-18 of the lesson notes. We work step-by-step with an example, and I tell students to take notes at each step. Then, they practice this method. Here, I provide an overview of how I deliver this example.
After working through this example, I erase all the evidence of our steps to show our final result: a line running through the data. I find that it's important to include this step, because I want students to end up with the understanding of what we just did. We used a regression process to find a line that can be used to make some generalizations about this data set, and this is what it looks like.
Following the mini-lesson, students work to follow the steps and find the median-median lines for five data sets. Here is the two-sided handout; you'll also want to have graph paper available. I'm also including the solutions here, with graphs copied from Desmos. Just as food for thought, this answer key includes the results of a least-squares regression for each data set, so you can see how they differ.
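For anyone building a similar answer key, the least-squares comparison can be generated in a single call; the points below are made up purely to illustrate the call, not taken from the handout:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # made-up data for illustration
    y = np.array([2.0, 3.0, 5.0, 6.0, 8.0, 9.0])
    slope, intercept = np.polyfit(x, y, 1)  # degree-1 fit = least-squares line
    print(f"least squares: y = {slope:.3f}x + {intercept:.3f}")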
Students work alone or in small groups to get the assignment done, and I circulate to help, check their work, and offer encouragement. Again, we're taking a brief break from context today, and just working with the numbers. This lesson is a review of all that background knowledge kids need to really grasp these ideas around quantitative bivariate data. Students have a chance to practice plotting points, finding median, and writing linear functions in slope-intercept form.
The examples I provide here are more difficult than the one I used in the mini-lesson for a few reasons. First of all, the slopes and y-intercepts aren't very "nice" numbers. When it comes to sketching these lines on paper, most students will need help understanding how to use the decimal slope values that result from their median-median regressions. It helps to plot a few points and to connect them as we sketch these lines. On exercise #3, for example, we might just think about the values of y when x is 0, 20, 40, and 60, instead of really trying to "rise and run" that slope of -12.74.
The other challenge here is to help students understand what to do when the number of data points is not divisible by 3. On slides #16-18, I provide examples of how to partition the data. For both of the challenges I've just noted, I wait for students to ask questions, and deliver these notes as needed.
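For reference, here is a minimal Python sketch of the full median-median computation, including one common convention for the leftover points (a remainder of one point joins the middle group; a remainder of two is split between the outer groups). The function name and grouping rule are my own choices, so check them against the convention on the slides:

    from statistics import median

    def median_median_line(points):
        """Fit a median-median line to (x, y) pairs; returns (slope, intercept)."""
        pts = sorted(points)  # order the points by x
        third, remainder = divmod(len(pts), 3)
        if remainder == 0:
            sizes = (third, third, third)
        elif remainder == 1:
            sizes = (third, third + 1, third)      # extra point to the middle group
        else:
            sizes = (third + 1, third, third + 1)  # extra points to the outer groups
        left = pts[:sizes[0]]
        middle = pts[sizes[0]:sizes[0] + sizes[1]]
        right = pts[sizes[0] + sizes[1]:]

        def summary(group):  # the (median x, median y) summary point of a group
            return median(p[0] for p in group), median(p[1] for p in group)

        (xl, yl), (xm, ym), (xr, yr) = summary(left), summary(middle), summary(right)
        slope = (yr - yl) / (xr - xl)  # slope comes from the two outer summary points
        # Average the three implied intercepts, which pulls the line one-third
        # of the way toward the middle summary point:
        intercept = ((yl - slope * xl) + (ym - slope * xm) + (yr - slope * xr)) / 3
        return slope, intercept

    print(median_median_line([(1, 2), (2, 3), (3, 5), (4, 6), (5, 8), (6, 9)]))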
With a few minutes left in class, I call everyone to attention for a quick debrief. I ask everyone to share an observation, a question, or something that surprised them from today's lesson. This is a quick, informal way to get an overall vibe check of the room. Usually, I'm not surprised by what kids say; but it's a nice chance to get a full picture.
Usually, kids will need a little more time to finish the last exercise or two. I tell everyone to finish what they can for homework, and that we'll check answers tomorrow. | https://betterlesson.com/lesson/633820/median-median-lines?from=consumer_breadcrumb_dropdown_lesson | 18 |
12 | Carotenoids, also called tetraterpenoids, are organic pigments that are produced by plants and algae, as well as several bacteria and fungi. Carotenoids give the characteristic color to carrots, corn, canaries, and daffodils, as well as egg yolks, rutabagas, buttercups, and bananas. Carotenoids can be produced from fats and other basic organic metabolic building blocks by all these organisms. The only animals known to produce carotenoids are aphids and spider mites, which acquired the ability and genes from fungi; in whiteflies, carotenoids are produced by endosymbiotic bacteria. Carotenoids from the diet are stored in the fatty tissues of animals, and exclusively carnivorous animals obtain the compounds from animal fat.
There are over 1100 known carotenoids; they are split into two classes, xanthophylls (which contain oxygen) and carotenes (which are purely hydrocarbons, and contain no oxygen). All are derivatives of tetraterpenes, meaning that they are produced from 8 isoprene molecules and contain 40 carbon atoms. In general, carotenoids absorb wavelengths ranging from 400–550 nanometers (violet to green light). This causes the compounds to be deeply colored yellow, orange, or red. Carotenoids are the dominant pigment in autumn leaf coloration of about 15-30% of tree species, but many plant colors, especially reds and purples, are due to other classes of chemicals.
Carotenoids serve two key roles in plants and algae: they absorb light energy for use in photosynthesis, and they protect chlorophyll from photodamage. Carotenoids that contain unsubstituted beta-ionone rings (including beta-carotene, alpha-carotene, beta-cryptoxanthin, and gamma-carotene) have vitamin A activity (meaning that they can be converted to retinol), and these and other carotenoids can also act as antioxidants. In the eye, lutein, meso-zeaxanthin, and zeaxanthin are present as macular pigments whose importance in visual function remained under clinical research as of 2017.
The basic building blocks of carotenoids are isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP). These two isoprene isomers are used to create various compounds depending on the biological pathway used to synthesize them. Plants are known to use two different pathways for IPP production: the cytosolic mevalonic acid (MVA) pathway and the plastidic methylerythritol 4-phosphate (MEP) pathway. In animals, the production of cholesterol starts by creating IPP and DMAPP using the MVA pathway. For carotenoid production, plants use the MEP pathway to generate IPP and DMAPP. The MEP pathway results in a 5:1 mixture of IPP:DMAPP. IPP and DMAPP undergo several reactions, resulting in the major carotenoid precursor, geranylgeranyl diphosphate (GGPP). GGPP can be converted into carotenes or xanthophylls by undergoing a number of different steps within the carotenoid biosynthetic pathway.
Glyceraldehyde 3-phosphate and pyruvate, intermediates of photosynthesis, are converted to deoxy-D-xylulose 5-phosphate (DXP) using the catalyst DXP synthase (DXS). DXP reductoisomerase reduces and rearranges the molecules within DXP in the presence of NADPH, forming MEP. Next, MEP is converted to 4-(cytidine 5'-diphospho)-2-C-methyl-D-erythritol (CDP-ME) in the presence of CTP via the enzyme MEP cytidylyltransferase. CDP-ME is then converted, in the presence of ATP, to 2-phospho-4-(cytidine 5'-diphospho)-2-C-methyl-D-erythritol (CDP-ME2P). The conversion to CDP-ME2P is catalyzed by the enzyme CDP-ME kinase. Next, CDP-ME2P is converted to 2-C-methyl-D-erythritol 2,4-cyclodiphosphate (MECDP). This reaction occurs when MECDP synthase catalyzes the reaction and CMP is eliminated from the CDP-ME2P molecule. MECDP is then converted to (E)-4-hydroxy-3-methylbut-2-en-1-yl diphosphate (HMBDP) via HMBDP synthase in the presence of flavodoxin and NADPH. HMBDP is reduced to IPP in the presence of ferredoxin and NADPH by the enzyme HMBDP reductase. The last two steps, involving HMBDP synthase and HMBDP reductase, can only occur in completely anaerobic environments. IPP is then able to isomerize to DMAPP via IPP isomerase.
Carotenoid biosynthetic pathway
Two GGPP molecules condense via phytoene synthase (PSY), forming phytoene, the 15-cis isomer. Next phytoene is dehydrogenated by phytoene desaturase (PDS) to 9,15,9’-tri-cis-ζ-carotene by introducing two double bonds. This cis- ζ-carotene is dehydrogenated again via ζ-carotene desaturase (ZDS); this again introduces two double bonds, resulting in 7,9,7’,9’-tetra-cis-lycopene. CRTISO, a carotenoid isomerase, is needed to convert the cis-lycopene into an all-trans lycopene in the presence of reduced FAD. This all-trans lycopene is cyclized; cyclization gives rise to carotenoid diversity, which can be distinguished based on the end groups. There can be either a beta ring or an epsilon ring, each generated by a different enzyme (lycopene beta-cyclase [beta-LCY] or lycopene epsilon-cyclase [epsilon-LCY]). Alpha-carotene is produced when the all-trans lycopene first undergoes reaction with epsilon-LCY then a second reaction with beta-LCY; whereas beta-carotene is produced by two reactions with beta-LCY. Alpha- and beta-carotene are the most common carotenoids in the plant photosystems but they can still be further converted into xanthophylls by using beta-hydrolase and epsilon-hydrolase, leading to a variety of xanthophylls.
It is believed that both DXS and DXR are rate-determining enzymes, allowing them to regulate carotenoid levels. This was discovered in an experiment where DXS and DXR were genetically overexpressed, leading to increased carotenoid expression in the resulting seedlings. Also, J-protein (J20) and heat shock protein 70 (Hsp70) chaperones are thought to be involved in post-transcriptional regulation of DXS activity, such that mutants with defective J20 activity exhibit reduced DXS enzyme activity while accumulating inactive DXS protein. Regulation may also be caused by external toxins that affect enzymes and proteins required for synthesis. Ketoclomazone is derived from herbicides applied to soil and binds to DXP synthase. This inhibits DXP synthase, preventing synthesis of DXP and halting the MEP pathway. The use of this toxin leads to lower levels of carotenoids in plants grown in the contaminated soil. Fosmidomycin, an antibiotic, is a competitive inhibitor of DXP reductoisomerase due to its similar structure to the enzyme. Applying this antibiotic prevents reduction of DXP, again halting the MEP pathway.
Structure and function
The general structure of the carotenoid is a polyene chain consisting of 9-11 double bonds and possibly terminating in rings. This structure of conjugated double bonds leads to a high reducing potential, or the ability to transfer electrons throughout the molecule. Carotenoids can transfer electrons in one of two ways: 1) singlet-singlet transfer from carotenoid to chlorophyll, and 2) triplet-triplet transfer from chlorophyll to carotenoid. The singlet-singlet transfer is a lower energy state transfer and is used during photosynthesis. The length of the polyene tail enables light absorbance in the photosynthetic range; once it absorbs energy it becomes excited, then transfers the excited electrons to the chlorophyll for photosynthesis. The triplet-triplet transfer is a higher energy state and is essential in photoprotection. Both light and oxygen produce damaging species during photosynthesis, with the most damaging being reactive oxygen species (ROS). As these high-energy ROS are produced in the chlorophyll, the energy is transferred to the carotenoid's polyene tail and undergoes a series of reactions in which electrons are moved between the carotenoid bonds in order to find the most balanced state (lowest energy state) for the carotenoid.
The length of carotenoids also has a role in plant coloration, as the length of the polyene tail determines which wavelengths of light the plant will absorb. Wavelengths that are not absorbed are reflected and are what we see as the color of a plant. Therefore, differing species will contain carotenoids with differing tail lengths allowing them to absorb and reflect different colors.
Carotenoids also participate in different types of cell signaling. They are able to signal the production of abscisic acid, which regulates plant growth, seed dormancy, embryo maturation and germination, cell division and elongation, floral growth, and stress responses.
Carotenoids belong to the category of tetraterpenoids (i.e., they contain 40 carbon atoms, being built from four terpene units each containing 10 carbon atoms). Structurally, carotenoids take the form of a polyene hydrocarbon chain which is sometimes terminated by rings, and may or may not have additional oxygen atoms attached.
- Carotenoids with molecules containing oxygen, such as lutein and zeaxanthin, are known as xanthophylls.
- The unoxygenated (oxygen-free) carotenoids, such as α-carotene, β-carotene, and lycopene, are known as carotenes. Carotenes typically contain only carbon and hydrogen (i.e., are hydrocarbons), and are in the subclass of unsaturated hydrocarbons.
Their color, ranging from pale yellow through bright orange to deep red, is directly linked to their structure. Xanthophylls are often yellow, hence their class name. The double carbon-carbon bonds interact with each other in a process called conjugation, which allows electrons in the molecule to move freely across these areas of the molecule. As the number of conjugated double bonds increases, electrons associated with conjugated systems have more room to move, and require less energy to change states. This causes the range of energies of light absorbed by the molecule to decrease. As more wavelengths of light are absorbed from the longer end of the visible spectrum, the compounds acquire an increasingly red appearance.
Carotenoids are usually lipophilic due to the presence of long unsaturated aliphatic chains as in some fatty acids. The physiological absorption of these fat-soluble vitamins in humans and other organisms depends directly on the presence of fats and bile salts.
Beta-carotene, found in carrots and apricots, is responsible for their orange-yellow colors. Dried carrots have the highest amount of carotene of any food per 100 gram serving, measured in retinol activity equivalents (provitamin A equivalents). Vietnamese gac fruit contains the highest known concentration of the carotenoid lycopene. The diet of flamingos is rich in carotenoids, imparting the orange-colored feathers of these birds.
Reviews of epidemiological studies seeking correlations between carotenoid consumption in food and clinical outcomes have come to various conclusions:
- A 2016 review looking at correlations between diets rich in fruit and vegetables (some of which are high in carotenoids) and lung cancer found a protective effect up to 400 g/day.
- A 2015 review found that foods high in carotenoids appear to be protective against head and neck cancers.
- Another 2015 review, looking at whether carotenoids can prevent prostate cancer, found that while several studies reported that diets rich in carotenoids appeared to have a protective effect, evidence is lacking to determine whether this is due to carotenoids per se.
- A 2014 review found no correlation between consumption of foods high in carotenoids and vitamin A and the risk of getting Parkinson's disease.
- Another 2014 review found no conflicting results in studies of dietary consumption of carotenoids and the risk of getting breast cancer.
The dark brown pigment melanin, found in hair, skin, and eyes, likewise absorbs high-energy light and protects these organs from intracellular damage.
- Several studies have observed positive effects of high-carotenoid diets on the texture, clarity, color, strength, and elasticity of skin.
- A 1994 study noted that high carotenoid diets helped reduce symptoms of eyestrain (dry eye, headaches, and blurred vision) and improve night vision.
Humans and other animals are mostly incapable of synthesizing carotenoids, and must obtain them through their diet. Carotenoids are a common and often ornamental feature in animals. For example, the pink color of salmon, and the red coloring of cooked lobsters and scales of the yellow morph of common wall lizards are due to carotenoids. It has been proposed that carotenoids are used in ornamental traits (for extreme examples see puffin birds) because, given their physiological and chemical properties, they can be used as visible indicators of individual health, and hence are used by animals when selecting potential mates.
The most common carotenoids include lycopene and the vitamin A precursor β-carotene. In plants, the xanthophyll lutein is the most abundant carotenoid and its role in preventing age-related eye disease is currently under investigation. Lutein and the other carotenoid pigments found in mature leaves are often not obvious because of the masking presence of chlorophyll. When chlorophyll is not present, as in autumn foliage, the yellows and oranges of the carotenoids are predominant. For the same reason, carotenoid colors often predominate in ripe fruit after being unmasked by the disappearance of chlorophyll.
Carotenoids are responsible for the brilliant yellows and oranges that tint deciduous foliage (such as dying autumn leaves) of certain hardwood species as hickories, ash, maple, yellow poplar, aspen, birch, black cherry, sycamore, cottonwood, sassafras, and alder. Carotenoids are the dominant pigment in autumn leaf coloration of about 15-30% of tree species. However, the reds, the purples, and their blended combinations that decorate autumn foliage usually come from another group of pigments in the cells called anthocyanins. Unlike the carotenoids, these pigments are not present in the leaf throughout the growing season, but are actively produced towards the end of summer.
Products of carotenoid degradation, such as ionones, damascones, and damascenones, are also important fragrance chemicals that are used extensively in the perfume and fragrance industry. Both β-damascenone and β-ionone, although low in concentration in rose distillates, are the key odor-contributing compounds in flowers. In fact, the sweet floral smells present in black tea, aged tobacco, grapes, and many fruits are due to the aromatic compounds resulting from carotenoid breakdown.
Some carotenoids are produced by bacteria to protect themselves from oxidative immune attack. The golden pigment that gives some strains of Staphylococcus aureus their name (aureus = golden) is a carotenoid called staphyloxanthin. This carotenoid is a virulence factor with an antioxidant action that helps the microbe evade death by reactive oxygen species used by the host immune system.
Naturally occurring carotenoids
- Cryptomonaxanthin (3R,3'R)-7,8,7',8'-Tetradehydro-β,β-carotene-3,3'-diol
- Crustaxanthin β,β-Carotene-3,4,3',4'-tetrol
- Gazaniaxanthin (3R)-5'-cis-β,γ-Caroten-3-ol
- OH-Chlorobactene 1',2'-Dihydro-φ,γ-caroten-1'-ol
- Loroxanthin β,ε-Carotene-3,19,3'-triol
- Lutein (3R,3′R,6′R)-β,ε-carotene-3,3′-diol
- Lycoxanthin γ,γ-Caroten-16-ol
- Rhodopin 1,2-Dihydro-γ,γ-caroten-1-ol
- Rhodopinol a.k.a. Warmingol 13-cis-1,2-Dihydro-γ,γ-carotene-1,20-diol
- Saproxanthin 3',4'-Didehydro-1',2'-dihydro-β,γ-carotene-3,1'-diol
- Diadinoxanthin 5,6-Epoxy-7',8'-didehydro-5,6-dihydro-β,β-carotene-3,3'-diol
- Luteoxanthin 5,6: 5',8'-Diepoxy-5,6,5',8'-tetrahydro-β,β-carotene-3,3'-diol
- Zeaxanthin furanoxide 5,8-Epoxy-5,8-dihydro-β,β-carotene-3,3'-diol
- Neochrome 5',8'-Epoxy-6,7-didehydro-5,6,5',8'-tetrahydro-β,β-carotene-3,5,3'-triol
- Vaucheriaxanthin 5',6'-Epoxy-6,7-didehydro-5,6,5',6'-tetrahydro-β,β-carotene-3,5,19,3'-tetrol
- Acids and acid esters
- Canthaxanthin a.k.a. Aphanicin, Chlorellaxanthin β,β-Carotene-4,4'-dione
- Capsanthin (3R,3'S,5'R)-3,3'-Dihydroxy-β,κ-caroten-6'-one
- Capsorubin (3S,5R,3'S,5'R)-3,3'-Dihydroxy-κ,κ-carotene-6,6'-dione
- Cryptocapsin (3'R,5'R)-3'-Hydroxy-β,κ-caroten-6'-one
- 2,2'-Diketospirilloxanthin 1,1'-Dimethoxy-3,4,3',4'-tetradehydro-1,2,1',2'-tetrahydro-γ,γ-carotene-2,2'-dione
- Echinenone β,β-Caroten-4-one
- Flexixanthin 3,1'-Dihydroxy-3',4'-didehydro-1',2'-dihydro-β,γ-caroten-4-one
- 3-OH-Canthaxanthin a.k.a. Adonirubin a.k.a. Phoenicoxanthin 3-Hydroxy-β,β-carotene-4,4'-dione
- Hydroxyspheriodenone 1'-Hydroxy-1-methoxy-3,4-didehydro-1,2,1',2',7',8'-hexahydro-γ,γ-caroten-2-one
- Okenone 1'-Methoxy-1',2'-dihydro-χ,γ-caroten-4'-one
- Pectenolone 3,3'-Dihydroxy-7',8'-didehydro-β,β-caroten-4-one
- Phoeniconone a.k.a. Dehydroadonirubin 3-Hydroxy-2,3-didehydro-β,β-carotene-4,4'-dione
- Phoenicopterone β,ε-caroten-4-one
- Rubixanthone 3-Hydroxy-β,γ-caroten-4'-one
- Siphonaxanthin 3,19,3'-Trihydroxy-7,8-dihydro-β,ε-caroten-8-one
- Esters of alcohols
- Astacein 3,3'-Bispalmitoyloxy-2,3,2',3'-tetradehydro-β,β-carotene-4,4'-dione or 3,3'-dihydroxy-2,3,2',3'-tetradehydro-β,β-carotene-4,4'-dione dipalmitate
- Fucoxanthin 3'-Acetoxy-5,6-epoxy-3,5'-dihydroxy-6',7'-didehydro-5,6,7,8,5',6'-hexahydro-β,β-caroten-8-one
- Isofucoxanthin 3'-Acetoxy-3,5,5'-trihydroxy-6',7'-didehydro-5,8,5',6'-tetrahydro-β,β-caroten-8-one
- Zeaxanthin (3R,3'R)-3,3'-Bispalmitoyloxy-β,β-carotene or (3R,3'R)-β,β-carotene-3,3'-diol
- Siphonein 3,3'-Dihydroxy-19-lauroyloxy-7,8-dihydro-β,ε-caroten-8-one or 3,19,3'-trihydroxy-7,8-dihydro-β,ε-caroten-8-one 19-laurate
- β-Apo-2'-carotenal 3',4'-Didehydro-2'-apo-β-caroten-2'-al
- Apo-6'-lycopenal 6'-Apo-γ-caroten-6'-al
- Azafrinaldehyde 5,6-Dihydroxy-5,6-dihydro-10'-apo-β-caroten-10'-al
- Bixin 6'-Methyl hydrogen 9'-cis-6,6'-diapocarotene-6,6'-dioate
- Citranaxanthin 5',6'-Dihydro-5'-apo-β-caroten-6'-one or 5',6'-dihydro-5'-apo-18'-nor-β-caroten-6'-one or 6'-methyl-6'-apo-β-caroten-6'-one
- Crocetin 8,8'-Diapo-8,8'-carotenedioic acid
- Crocetinsemialdehyde 8'-Oxo-8,8'-diapo-8-carotenoic acid
- Crocin Digentiobiosyl 8,8'-diapo-8,8'-carotenedioate
- Hopkinsiaxanthin 3-Hydroxy-7,8-didehydro-7',8'-dihydro-7'-apo-β-carotene-4,8'-dione or 3-hydroxy-8'-methyl-7,8-didehydro-8'-apo-β-carotene-4,8'-dione
- Methyl apo-6'-lycopenoate Methyl 6'-apo-γ-caroten-6'-oate
- Paracentrone 3,5-Dihydroxy-6,7-didehydro-5,6,7',8'-tetrahydro-7'-apo-β-caroten-8'-one or 3,5-dihydroxy-8'-methyl-6,7-didehydro-5,6-dihydro-8'-apo-β-caroten-8'-one
- Sintaxanthin 7',8'-Dihydro-7'-apo-β-caroten-8'-one or 8'-methyl-8'-apo-β-caroten-8'-one
- Nor- and seco-carotenoids
- Actinioerythrin 3,3'-Bisacyloxy-2,2'-dinor-β,β-carotene-4,4'-dione
- β-Carotenone 5,6:5',6'-Diseco-β,β-carotene-5,6,5',6'-tetrone
- Peridinin 3'-Acetoxy-5,6-epoxy-3,5'-dihydroxy-6',7'-didehydro-5,6,5',6'-tetrahydro-12',13',20'-trinor-β,β-caroten-19,11-olide
- Pyrrhoxanthininol 5,6-Epoxy-3,3'-dihydroxy-7',8'-didehydro-5,6-dihydro-12',13',20'-trinor-β,β-caroten-19,11-olide
- Semi-α-carotenone 5,6-Seco-β,ε-carotene-5,6-dione
- Semi-β-carotenone 5,6-Seco-β,β-carotene-5,6-dione or 5',6'-seco-β,β-carotene-5',6'-dione
- Triphasiaxanthin 3-Hydroxysemi-β-carotenone 3'-Hydroxy-5,6-seco-β,β-carotene-5,6-dione or 3-hydroxy-5',6'-seco-β,β-carotene-5',6'-dione
- Retro-carotenoids and retro-apo-carotenoids
- Eschscholtzxanthin 4',5'-Didehydro-4,5'-retro-β,β-carotene-3,3'-diol
- Eschscholtzxanthone 3'-Hydroxy-4',5'-didehydro-4,5'-retro-β,β-caroten-3-one
- Rhodoxanthin 4',5'-Didehydro-4,5'-retro-β,β-carotene-3,3'-dione
- Tangeraxanthin 3-Hydroxy-5'-methyl-4,5'-retro-5'-apo-β-caroten-5'-one or 3-hydroxy-4,5'-retro-5'-apo-β-caroten-5'-one
- Higher carotenoids
- Nonaprenoxanthin 2-(4-Hydroxy-3-methyl-2-butenyl)-7',8',11',12'-tetrahydro-ε,γ-carotene
- Decaprenoxanthin 2,2'-Bis(4-hydroxy-3-methyl-2-butenyl)-ε,ε-carotene
- C.p. 450 2-[4-Hydroxy-3-(hydroxymethyl)-2-butenyl]-2'-(3-methyl-2-butenyl)-β,β-carotene
- C.p. 473 2'-(4-Hydroxy-3-methyl-2-butenyl)-2-(3-methyl-2-butenyl)-3',4'-didehydro-1',2'-dihydro-β,γ-caroten-1'-ol
- Bacterioruberin 2,2'-Bis(3-hydroxy-3-methylbutyl)-3,4,3',4'-tetradehydro-1,2,1',2'-tetrahydro-γ,γ-carotene-1,1'-diol
- Moran NA, Jarvik T (2010). "Lateral transfer of genes from fungi underlies carotenoid production in aphids". Science. 328 (5978): 624–7. doi:10.1126/science.1187113. PMID 20431015.
- Boran Altincicek; Jennifer L. Kovacs; Nicole M. Gerardo (2011). "Horizontally transferred fungal carotenoid genes in the two-spotted spider mite Tetranychus urticae". Biology Letters. 8 (2): 253–257. doi:10.1098/rsbl.2011.0704. PMC 3297373. PMID 21920958.
- Nováková E, Moran NA (2012). "Diversification of genes for carotenoid biosynthesis in aphids following an ancient transfer from a fungus". Mol Biol Evol. 29 (1): 313–23. doi:10.1093/molbev/msr206. PMID 21878683.
- Sloan DB, Moran NA (2012). "Endosymbiotic bacteria as a source of carotenoids in whiteflies". Biol Lett. 8 (6): 986–9. doi:10.1098/rsbl.2012.0664. PMC 3497135. PMID 22977066.
- Yabuzaki, Junko (2017-01-01). "Carotenoids Database: structures, chemical fingerprints and distribution among organisms". Database. 2017. doi:10.1093/database/bax004.
- Armstrong GA, Hearst JE (1996). "Carotenoids 2: Genetics and molecular biology of carotenoid pigment biosynthesis". FASEB J. 10 (2): 228–37. PMID 8641556.
- Bernstein, P. S.; Li, B; Vachali, P. P.; Gorusupudi, A; Shyam, R; Henriksen, B. S.; Nolan, J. M. (2015). "Lutein, Zeaxanthin, and meso-Zeaxanthin: The Basic and Clinical Science Underlying Carotenoid-based Nutritional Interventions against Ocular Disease". Progress in Retinal and Eye Research. 50: 34–66. doi:10.1016/j.preteyeres.2015.10.003. PMC 4698241. PMID 26541886.
- Nisar, Nazia; Li, Li; Lu, Shan; Khin, Nay Chi; Pogson, Barry J. (2015-01-05). "Carotenoid Metabolism in Plants". Molecular Plant. Plant Metabolism and Synthetic Biology. 8 (1): 68–82. doi:10.1016/j.molp.2014.12.007.
- Kuzuyama, Tomohisa; Seto, Haruo (2012-03-09). "Two distinct pathways for essential metabolic precursors for isoprenoid biosynthesis". Proceedings of the Japan Academy. Series B, Physical and Biological Sciences. 88 (3): 41–52. doi:10.2183/pjab.88.41. ISSN 0386-2208. PMC 3365244. PMID 22450534.
- Vershinin, Alexander (1999-01-01). "Biological functions of carotenoids - diversity and evolution". BioFactors. 10 (2–3): 99–104. doi:10.1002/biof.5520100203. ISSN 1872-8081.
- Cogdell, R. J. (1978-11-30). "Carotenoids in photosynthesis". Phil. Trans. R. Soc. Lond. B. 284 (1002): 569–579. doi:10.1098/rstb.1978.0090. ISSN 0080-4622.
- Finkelstein, Ruth (2013-11-01). "Abscisic Acid Synthesis and Response". The Arabidopsis Book / American Society of Plant Biologists. 11: e0166. doi:10.1199/tab.0166. ISSN 1543-8120. PMC 3833200. PMID 24273463.
- Linus Pauling Institute. "Micronutrient Information Center-Carotenoids". Retrieved 3 August 2013.
- Simpson, K; Cerda, A; Stange, C (2016). "Carotenoid Biosynthesis in Daucus carota". Sub-cellular Biochemistry. Carotenoids in Nature. 79: 199–217. doi:10.1007/978-3-319-39126-7_7. ISBN 978-3-319-39124-3. PMID 27485223.
- Campbell, O.E.; Merwin, I.A.; Padilla-Zakour, O.I. (2013). "Characterization and the effect of maturity at harvest on the phenolic and carotenoid content of Northeast USA Apricot (Prunus armeniaca) varieties". Journal of Agricultural and Food Chemistry. 61 (51): 12700–10. doi:10.1021/jf403644r. PMID 24328399.
- "Foods highest in Retinol Activity Equivalent". nutritiondata.self.com. Retrieved 2015-12-04.
- Tran, X. T.; Parks, S. E.; Roach, P. D.; Golding, J. B.; Nguyen, M. H. (2015). "Effects of maturity on physicochemical properties of Gac fruit (Momordica cochinchinensis Spreng.)". Food Science & Nutrition. 4 (2): 305–314. doi:10.1002/fsn3.291. PMC 4779482. PMID 27004120.
- Yim, K. J.; Kwon, J; Cha, I. T.; Oh, K. S.; Song, H. S.; Lee, H. W.; Rhee, J. K.; Song, E. J.; Rho, J. R.; Seo, M. L.; Choi, J. S.; Choi, H. J.; Lee, S. J.; Nam, Y. D.; Roh, S. W. (2015). "Occurrence of viable, red-pigmented haloarchaea in the plumage of captive flamingoes". Scientific Reports. 5: 16425. doi:10.1038/srep16425. PMC 4639753. PMID 26553382.
- Vieira, AR; et al. (Jan 2016). "Fruits, vegetables and lung cancer risk: a systematic review and meta-analysis". Ann Oncol. 27 (1): 81–96. doi:10.1093/annonc/mdv381. PMID 2637128.
- Leoncini, E; et al. (Jul 2015). "Carotenoid Intake from Natural Sources and Head and Neck Cancer: A Systematic Review and Meta-analysis of Epidemiological Studies". Cancer Epidemiol Biomarkers Prev. 24 (7): 1003–11. doi:10.1158/1055-9965.EPI-15-0053. PMID 25873578.
- Soares Nda, C; et al. (Oct 2015). "Anticancer properties of carotenoids in prostate cancer. A review". Histol Histopathol. 30 (10): 1143–54. doi:10.14670/HH-11-635. PMID 26058846.
- Takeda, A; et al. (2014). "Vitamin A and carotenoids and the risk of Parkinson's disease: a systematic review and meta-analysis". Neuroepidemiology. 42 (1): 25–38. doi:10.1159/000355849. PMID 24356061.
- Chajès, V; Romieu, I (Jan 2014). "Nutrition and breast cancer". Maturitas. 77 (1): 7–11. PMID 24215727.
- Linus Pauling Institute, Oregon State University: "α-Carotene, β-Carotene, β-Cryptoxanthin, Lycopene, Lutein, and Zeaxanthin". http://lpi.oregonstate.edu/mic/dietary-factors/phytochemicals/carotenoids
- Schagen, SK; Zampeli, VA; Makrantonaki, E; Zouboulis, CC (2012). "Discovering the link between nutrition and skin aging". Dermatoendocrinol. 4 (3): 298–307. doi:10.4161/derm.22876. PMC 3583891.
- Pappas, A (2009). "The relationship of diet and acne". Dermatoendocrinol. 1 (5): 262–267.
- Zhi Foo, Y; Rhodes, G; Simmons, LW (2017). "The carotenoid beta-carotene enhances facial color, attractiveness and perceived health, but not actual health, in humans". Behavioral Ecology. 28 (2): 570–578. doi:10.1093/beheco/arw188.
- Roh, S; Weiter, JJ (1994). "Light damage to the eye". J Fla Med Assoc. 81 (4): 248–51.
- Rozanowska, M; et al. "Light-Induced Damage to the Retina". http://photobiology.info/Rozanowska.html
- Sacchi, Roberto (4 June 2013). "Colour variation in the polymorphic common wall lizard (Podarcis muralis): An analysis using the RGB colour system". Zoologischer Anzeiger. 252 (4): 431. doi:10.1016/j.jcz.2013.03.001.
- Whitehead RD, Ozakinci G, Perrett DI (2012). "Attractive skin coloration: harnessing sexual selection to improve diet and health". Evol Psychol. 10 (5): 842–54. doi:10.1177/147470491201000507. PMID 23253790.
- Archetti, Marco; Döring, Thomas F.; Hagen, Snorre B.; Hughes, Nicole M.; Leather, Simon R.; Lee, David W.; Lev-Yadun, Simcha; Manetas, Yiannis; Ougham, Helen J. (2011). "Unravelling the evolution of autumn colours: an interdisciplinary approach". Trends in Ecology & Evolution. 24 (3): 166–73. doi:10.1016/j.tree.2008.10.006. PMID 19178979.
- Davies, Kevin M., ed. (2004). Plant pigments and their manipulation. Annual Plant Reviews. 14. Oxford: Blackwell Publishing. p. 6. ISBN 1-4051-1737-0.
- Liu GY, Essex A, Buchanan JT, et al. (2005). "Staphylococcus aureus golden pigment impairs neutrophil killing and promotes virulence through its antioxidant activity". J. Exp. Med. 202 (2): 209–15. doi:10.1084/jem.20050846. PMC 2213009. PMID 16009720.
- Patent Pending: US Application Number 11/817,120
- "Biosynthesis of carotenoids". Archived from the original on 2012-02-23.
- Efficient Syntheses of the Keto-carotenoids Canthaxanthin, Astaxanthin, and Astacene. Seyoung Choi and Sangho Koo, J. Org. Chem., 2005, 70 (8), pages 3328–3331, doi:10.1021/jo050101l
|Wikimedia Commons has media related to Carotenoids.| | https://en.wikipedia.org/wiki/Carotenoid | 18 |
10 | Hiyya, I'm currently studying GCSE geography and I've got to do my coursework. The only problem is my teacher didn't give me much help, so I'm kind of lost on what to do. Using scatter graphs in geography data presentation: scatter graphs are used to investigate the relationship between two variables (or aspects). IGCSE and GCSE geography skills (Paper 2): when drawing a graph for IGCSE and GCSE geography coursework, remember the following: always use a pencil and a ruler. Geography coursework on how the demand for land changes: from my data, I will produce graphs and diagrams and then start to analyse and identify patterns. GCSE geography coursework, guide to chapter 3 (data presentation): present your information using maps, graphs, tables or diagrams. Geographical skills, graph skills, test 1: which of these works with line graphs and bar charts?
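Since scatter graphs come up repeatedly in this kind of coursework, a minimal sketch of one in Python with matplotlib may help; the fieldwork variables and every number below are invented purely for illustration:

import matplotlib.pyplot as plt

# Hypothetical river study: does channel width change downstream?
distance_km = [0.5, 1, 2, 3, 5, 8, 12]          # distance downstream
width_m = [1.2, 1.8, 2.5, 3.9, 5.0, 7.4, 9.1]   # channel width

plt.scatter(distance_km, width_m)
plt.xlabel("Distance downstream (km)")
plt.ylabel("Channel width (m)")
plt.title("Channel width against distance downstream")
plt.show()

If the points rise steadily to the right, the two variables are positively related, which is exactly the kind of pattern the analysis stage should identify.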
Teaching and learning in geography, with ideas for lessons: use Excel to improve coursework presentation without the annoyance of drawing graphs by hand. IGCSE and GCSE geography coursework: open-ended results are very hard to analyse using graphs or tables, whereas with closed questions all answers will be relevant to your research. Graphs can also make use of secondary data. Course content may vary depending on the geographical location of each A-level geography fieldwork course (AS AQA geography skills). Success in a controlled assessment in geography depends on the effective use of geography skills; this section of the course assesses many skills.
GCSE geography coursework enquiry: where people travel from to visit Castleton. Task 1: you need to use as many methods as you can to present your information, such as maps and graphs. Consider improving your GCSE coursework with more advanced graphs, which can be created using GE Graph, a free program.
For science and geography coursework: for some subjects, namely the sciences and geography, it would be appropriate to include images, graphs and charts. GCSE Geography B exemplar candidate work: in its upper course the river had steep-sided valleys with rapids, with a graph to show how the floodplain width changes downstream. | http://euhomeworkqfzw.visitorlando.us/graphs-to-use-in-geography-coursework.html | 18 |
14 | The good news is that these very same words that we use to write numerical expressions are going to be used to write algebra expressions.
Examples of How to Translate Basic Math Phrases into Algebraic Expressions: we will go over eight (8) examples in this lesson, two (2) examples for each operation.
"Of" is the tricky word. Division is not commutative, so you must pay close attention to the order in which you write the expression. We will study this in more depth as we get into writing and solving algebraic word problems.
A quotient is the answer to a division problem. Pay close attention to the "key words" that represent mathematical operations. This means that an unknown number has been added to a given quantity. In this case, we want to double an unknown value or quantity.
The last operation that we will study is division. To find the product of two quantities or values, it means that we will multiply them together. Either of the two above is a correct answer.
It means 5 times the unknown number m, that is, 5m. Choosing the letter w as our variable, the math phrase above can be expressed as the algebraic expression below. The number 1 comes first, then an unknown number comes second.
Key words for each operation are indicated in bold. As you begin to work with algebraic expressions more, you will see word problems that require you to use more than one operation.
We are used to seeing the words plus, sum, difference, minus and product. This is a very brief lesson on simple algebraic expressions. In other words, we are going to subtract the unknown number from the number 8, which gives 8 - x.
Please also remember that addition is commutative; therefore, you can reverse the order of the terms and you will end up with the same answer.
This is most important for operations that are not commutative, such as subtraction and division. Pay close attention to the order in which it is written. The next lesson in this unit is on simplifying algebraic expressions.
In addition, when you encounter the math word "difference", make sure to pay attention to the order. One of the most important things to remember is to look for key words and to make sure that your expression matches the context of the word problem.
Quotient is also a key word for division. Think of "of" as meaning to multiply when you are working with fractions. Let the letter d be the unknown number; when we double it we get the algebraic expression 2d. Expressions with More Than One Operation: many people struggle with translating word problems into algebraic expressions.
The difference between a numerical expression and an algebraic expression is that we will be using variables when writing an algebraic expression. Once you've learned the basic key words for translating word problems from English into mathematical expressions and equations, you'll be presented with various English expressions and be told to perform the translation.
Use this same order in your algebraic expression. Once you've learned to translate phrases into expressions and equations, you can practice with worksheets. Welcome to The Translating Algebraic Phrases (A) Math Worksheet from the Algebra Worksheets Page at ultimedescente.com. This algebra worksheet may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math.
Translating Basic Math Phrases into Algebraic Expressions: there is no single strategy for translating math phrases into algebraic expressions. As long as you can remember the basics, you should be able to tackle the more challenging ones.
TRANSLATING KEY WORDS AND PHRASES INTO ALGEBRAIC EXPRESSIONS: The table below lists some key words and phrases that are used to describe common mathematical operations.
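As a bridge to that table, here is a minimal worked set of translations, two per operation; the particular phrases, numbers and letters are illustrative choices rather than the lesson's own examples:

\[
\begin{aligned}
\text{the sum of a number and 8} &\to x + 8 \\
\text{a number increased by 5} &\to n + 5 \\
\text{a number decreased by 3} &\to y - 3 \\
\text{8 less than a number} &\to x - 8 \\
\text{the product of 5 and a number} &\to 5m \\
\text{twice a number} &\to 2d \\
\text{the quotient of a number and 9} &\to x/9 \\
\text{half of a number} &\to w/2
\end{aligned}
\]

Note how "8 less than a number" becomes x - 8, not 8 - x: order matters precisely because subtraction is not commutative.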
To write algebraic expressions and equations, assign a variable to represent the unknown number. The worksheets provide exercises on translating verbal phrases into linear algebraic expressions, multiple variable expressions, equations and inequalities. Translating Phrases into Algebraic Expressions Worksheets.
The worksheets on this page provide practice for students in translating phrases into algebraic expressions, such as linear expressions, multiple-variable expressions, equations and inequalities.
Translating Words into Algebraic Expressions. Operation: Addition. Word expressions: add, added to, the sum of, more than, increased by, the total of. | http://nybypolutuxyrehy.ultimedescente.com/translating-word-and-phrases-to-algebraic-1979719797.html | 18 |
30 | We've just added a couple of useful new resources to Online Course 1A. Check out this guide to types of note values:
You can download this file from Online Course 1A!
When you're first starting to read music, you'll quickly come across different types of notes.
The rhythm of each note is represented by the SHAPE of the note-head and the stem.
This PDF shows some of the most common note values that you'll see. The words on the right of each note represent the rhythm names for each note. The exact timing of notes will depend on other factors such as the time signature and the tempo of the piece, which we'll talk about another time. But this visual guide will help you to remember the names of each note type... this will be really helpful when we start to talk about rhythm.
The circle of fifths is a musical theory tool that has its roots firmly in mathematics. It explores the relationships between those musical intervals that are most pleasing to the ear, based on discoveries made by the mathematician Pythagoras two and a half thousand years ago.
Pythagoras discovered and investigated the most basic facts about frequency and pitch. He found that there were mathematical ratios between notes. The octave, which is the most basic interval, the point at which pitches seem to duplicate, has a natural 2:1 ratio. If a string of a certain length is set in vibration it will produce a particular note. The shorter the string is, the more times it will vibrate per second, once it is set in vibration. When a string vibrates more times per second, the pitch of the note produced is higher. Therefore, if the string is kept at the same tension but its length is halved, it will produce a note one octave higher than the first. The same happens when you blow through a tube of air. A tube twice the length will produce a note an octave lower.
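As a concrete illustration (taking the modern tuning convention A = 440 Hz, which is an assumption added here, not part of the passage above): halving the string doubles the frequency, so

\[
f_{\text{octave}} = 2 f_0, \qquad f_{\text{fifth}} = \tfrac{3}{2} f_0; \qquad f_0 = 440\ \text{Hz} \;\Rightarrow\; f_{\text{octave}} = 880\ \text{Hz}, \quad f_{\text{fifth}} = 660\ \text{Hz}.
\]

The 2:3 ratio Pythagoras found between the fundamental and the fifth describes string lengths; in terms of frequency, the fifth vibrates 3/2 times as fast as the fundamental.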
The circle of fifths, sometimes called the Pythagorean circle, is a diagram with twelve points that represent the twelve semitones within an octave. It is a chart rather like a clock face that organises all the keys into a system and can be used to relate them to one another. It is called a circle of fifths because each step of the circle is a perfect fifth from the next. The fifth is the interval that is closest in character to the octave, in that it is more consonant (less dissonant) or stable than any interval except the octave (or the unison).
A perfect interval is one where natural overtones occur. If you play a note on your violin and listen closely, you will hear the pitch you are playing. You will also hear overtones sounding. The most significant of these, or the easiest to hear, is usually the fifth. Whereas the ratio of frequencies between octaves is 2:1, the ratio of the frequencies of the fundamental to the fifth is 2:3. A perfect fifth is an interval of seven semitones. These seven semitones represent the building blocks from the first note of a scale to the fifth.
Watch this video for a clear description of how the circle of fifths is built.
The circle of fifths is useful because it shows the relationship between the keys, key signatures and chords.
It can be used to work out key signatures, to see which keys are closely related, and to find the primary chords in a key.
Now you’ve watched the video on how to make a circle of fifths, have a look at this interactive circle of fifths. You can use it to look at the relationships between chords in any key.
So what is the circle of fifths useful for?
It is possible to learn the order of sharps and flats as they occur in music by using the circle of fifths. You can work out how many sharps or flats are in a key, and also which notes are sharpened or flattened.
If you look clockwise around the circle you will see the order in which the sharps appear in the key signature. When there is one sharp, it is F#. When there are two, they are F# and C#. Three sharps will be F#, C# and G# and so on.
Looking round the circle in an anticlockwise direction shows the order of flats. If there is one flat it is Bb. Two flats are Bb and Eb. Three are always Bb, Eb and Ab, and so on.
In a circle of fifths in the major keys, C major appears at the top of the circle. C major has no sharps or flats. The next key in a clockwise direction is G major. G major has one sharp, which we now know is F#. Then comes D major which has F# and C#. Going in the other direction, F major has one flat, Bb. Bb major has two flats, Eb major has three flats.
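The pattern is regular enough to generate by machine. Here is a small Python sketch that walks the sharp side of the circle; the key and sharp orders are hard-coded from the rules just described, so treat it as an illustration rather than a general key-signature engine:

# Walking the circle of fifths clockwise from C major:
# each step up a fifth adds one sharp, in the fixed order below.
SHARP_ORDER = ["F#", "C#", "G#", "D#", "A#", "E#", "B#"]
KEYS = ["C", "G", "D", "A", "E", "B", "F#", "C#"]  # each a fifth above the last

for n, key in enumerate(KEYS):
    print(f"{key} major: {n} sharp(s): {', '.join(SHARP_ORDER[:n]) or 'none'}")

Running it prints C major with no sharps, G major with F#, D major with F# and C#, and so on, matching the description above.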
Use the interactive circle of fifths above to notice the enharmonic changes this creates in flat keys, for example between F# and Gb. Look at the circle in D major and then in Db major to see how the pitches are renamed. Two notes that have the same pitch but are represented by different letter names and accidentals are described as enharmonic.
The circle of fifths can also be used to work out which keys are related to each other. You can see that the keys on either side of C are F and G. Therefore, the two closest keys to C, which has no sharps or flats, are F, which has one flat, and G, which has one sharp. F and G therefore make up the primary chords in C major. F is chord IV, the subdominant, and G is chord V, the dominant. Using these three chords you can build the standard chord progression IV V I.
The secondary chords are those further away from the note of your key, so in C major, D, A and E would be secondary chords, which means they may appear in the harmony of your piece but are not as strong as the primary chords.
Watch these two clips. They explain how the circle of fifths works in major and minor keys:
The circle of fifths is also useful for understanding chord progressions such as those from dominant seventh chords. Dominant seventh chords have a tendency to want to go towards another chord. They contain a dissonance that melodically and harmonically needs to resolve. The chord that the dominant seventh resolves to is one fifth lower, so A7 resolves to D major, F7 resolves to Bb major, and so on. If you are asked to play a dominant seventh in the key of D, you will start on the note A.
Here is another clip explaining how to use the circle of fifths to understand your scales.
The model of a circle of fifths, with the consequent understanding of chord progressions and harmony and the hierarchy and relationships between keys, has played a hugely important part in Western music.
When you first start learning the violin, you will also start learning to read music. To a musician, written music is like an actor’s script. It tells you what to play, when to play it and how to play it. Music, like language, is written with symbols which represent sounds; from the most basic notation which shows the pitch, duration and timing of each note, to more detailed and subtle instructions showing expression, tone quality or timbre, and sometimes even special effects. What you see on the page is a sort of drawing of what you will hear.
The notes in Western music are given the names of the first seven letters of the alphabet: A, B, C, D, E, F and G. Once you get to G the note names begin again at A.
The notes of the violin strings without any fingers pressed down, which are commonly known as the open strings, are called G, D, A and E, with G being the lowest, fattest string and E the highest sounding, finest string.
When notated, the open string sounds of the violin look like this:
You will see that the notes are placed in various positions on five parallel lines called a stave. Every line and space on the stave represents a different pitch, the higher the note, the higher the pitch. The note on the left here is the G - string note, which is the lowest note on the violin. The note on the right is the E - string pitch, which is much higher. The round part, or head of the note shows the pitch by its placing on the stave. Each note also has a stem that can go either up or down.
The symbol at the front of the stave is called a treble clef. The clef defines which pitches will be played and shows if it’s a low or high instrument. Violin music is always written in treble clef. When notes fall outside of the pitches that fit onto the stave, small lines called ledger lines are added above or below to place the notes, as you can see with the low G - string pitch which sits below two ledger lines. Once too many ledger lines are needed and the music becomes visually confusing, it’s time to switch to a new clef, such as bass clef.
The numbers after the treble clef are called the time signature. The stave works both up and down (pitch) and from left to right. From left to right, the stave shows the beat and the rhythm. The beat is the heartbeat or pulse of the music. It doesn’t change. The music is written in small sections called bars, which fall between the vertical lines on the stave called bar lines. Some pieces have four beats in a bar, which means you feel them in four time, some have three, like a waltz, and so on. The time signature shows how many beats are in each bar, and what kind of note each of those beats is.
The rhythm is where notes have different durations within the structure of the bar. This is where pieces can really start to get interesting.
Here we can see a variety of rhythms.
Each of these bars has a value of four beats. The first of the notes above is called a semibreve. It lasts for four beats. The second is called a minim (or half note, in America) and each minim lasts for two beats. You can see there are two minims in a four-beat bar. The third example is a crotchet or quarter note. Each crotchet is one beat long. The fourth rhythm is a quaver, or eighth note, which lasts for half a beat, and the last note value shown is called a semiquaver or sixteenth note, and lasts for quarter of a beat, so sixteen semiquavers fit into a four beat bar. The smaller notes are written in groups of four so they match up with the beat visually and are easy to read. Each note length has a corresponding symbol to show when there is a rest (silence) of that duration.
The time signature 4/4 shows that there are four beats in each bar (the top 4) and that each of those is a crotchet or quarter note (the lower 4). The time signature 3/8 would show three (the top number) quaver, or eighth note, (the bottom number) beats in a bar.
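To make the relationship between tempo and note length concrete, here is a small Python sketch; the 120 beats-per-minute tempo is an arbitrary example, and it assumes a crotchet (quarter note) carries the beat, as in 4/4:

# Seconds per note value at a given tempo, crotchet = one beat.
beats = {"semibreve": 4, "minim": 2, "crotchet": 1,
         "quaver": 0.5, "semiquaver": 0.25}
bpm = 120  # hypothetical tempo
for name, length in beats.items():
    print(f"{name}: {length * 60 / bpm:.3f} s")

At 120 bpm a crotchet lasts half a second, so a semibreve lasts two seconds and a semiquaver an eighth of a second.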
As you put your fingers on the strings to play new notes on the violin, the music shows the pitch rising. So the first finger note on each string of the violin would look like this:
The note after G on the G – string is called A and is played with the first finger. The note after D on the D – string is called E, on the A – string it’s B and on the E – string it’s F. The first finger in violin fingering is the index finger, unlike on the piano where 1 denotes the thumb.
There are other symbols which show pitch, one of which, the thing that looks like a hash tag, is shown above. This one is called a sharp and the full name of the second note shown on the E – string is F sharp. You will see these symbols for sharps or flats in the key signature of nearly every piece. The key signature is placed between the treble clef and the time signature and shows you which key or tonality to play in.
As you add the other fingers, you can see below how the gaps on the stave are filled, until you are playing every first position note on your violin. As you build up your fingers one at a time, the pitches on the stave look like this:
The very last note here is played with the fourth finger on the E – string. It is worth noting at this point that because the pitches of the violin strings are five notes or a fifth apart, each open string note after G can also be played with the fourth finger or pinkie on the previous string, so the A – string note, for example, can be played with the fourth finger on the D –string.
This seems a lot to remember but there are a couple of helpful memory tricks. The notes in the spaces of the stave, in ascending order, are F, A, C and E, or FACE. The notes on the lines are E, G, B, D and F. You may remember learning the mnemonic, Every Good Boy Deserves Fun.
You will soon begin to memorise which note corresponds to which sound and finger placement on your violin. Remember that when you learned to read, you were simultaneously studying writing skills. Try downloading and printing this music manuscript paper, and practice writing out the notes as you learn to play them. Write out the open string notes and practice from your own copy. Making the connection between writing, reading and playing will speed up and deepen the process of learning. Soon the note reading will become habitual, and just as you don’t have to process every letter to read a word, you will begin to see the piece as a whole rather than having to read each note and work out where to play it.
As with any new skill, the more you practise and try it out, the more confident you will feel and the sooner you will be reading music fluently.
For at least the last two thousand years, the majority of composed art music in the Western world has been made up of two or more simultaneous musical sounds or pitches. The name given to this combination of sounds is harmony.
Most descriptions of harmony focus on Western music but harmony exists in music from other cultures too. In the art music of Southern Asia the underlying harmonic foundation is a drone; a held tone, the pitch of which does not change throughout the piece. Drones have also been common in folk music for centuries, particularly with instruments such as the bagpipes. Combinations of sounds also appear in Indian classical music or rāgas, but whereas in Indian music improvisation takes a major role in the structure, improvisation has not been common in Western classical music since the late 19th century. Prior to that, improvisation often involved embellishment on written lines rather than the free melodic expression we associate with the word today.
The earliest forms of Western harmony have their origins in church music, when the chants sung by monks were sung in two parts, with a fixed tone, or tones moving parallel to the melody, accompanying the chant. This added depth and colour to the music, where previously a single stark line had existed. This single line chant is called plainchant, and was an ancient monophonic form of music influenced by the Greek modal system.
Harmony has the same function today; when a vocalist is accompanied by a guitar, the right hand of the piano is accompanied by the left hand or when we sing hymns along with an organ, the melody is given depth and interest.
In these instances, the guitar, organ or left hand part of the piano will normally play a combination of several notes at once. A combination of notes played together is called a chord. If you are absolutely new to the idea of harmony and music theory, take a look at this video where the basics are explained right from the start.
And here is a great resource introducing the idea of harmony for children.
Over the centuries, ideas have changed about which chords and combinations of notes make a good harmony. In the 10th century, the interval of a fourth (two notes, four notes apart) was very popular. Other early harmonies moved in fifths, five notes apart. Parallel fifths, where several chords of pitches five notes apart happen in succession, were also used in folk singing but by the 18th century parallel fifths were considered undesirable. By the Renaissance, harmony had developed, and the commonest chord was the triad.
A triad is a three-note chord built up in thirds, or where the interval of a fifth is filled out by its central note. It is used both with the notes in their basic order, 1, 3, 5 and in various inversions where the same notes are placed in a different vertical order.
The triad remained the basic harmonic unit in Western music until well into the 20th century; hence as violinists we practice arpeggios, which are no more than the notes of the triad in each key. It is possible to make a good harmony for many melodies just by using two or three triads, normally the triads of the first, fourth and fifth notes of the scale. These are known as the tonic, subdominant and dominant triads and often written as I, IV and V. More developed melodies sound better with a wider range of harmonies since the way a note is harmonised can change the sense of a piece of music.
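Because a triad is just scale degrees 1, 3 and 5 above its root, the primary chords can be generated mechanically. A rough Python sketch, using C major for concreteness:

# Tonic (I), subdominant (IV) and dominant (V) triads in C major.
scale = ["C", "D", "E", "F", "G", "A", "B"]

def triad(degree):  # degree is 1-based within the scale
    return [scale[(degree - 1 + step) % 7] for step in (0, 2, 4)]

for label, degree in (("I", 1), ("IV", 4), ("V", 5)):
    print(label, triad(degree))
# Prints I ['C', 'E', 'G'], IV ['F', 'A', 'C'], V ['G', 'B', 'D']

These are exactly the arpeggio notes violinists practise in each key.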
Some chords are made up of notes that are dissonant. These are called dissonances, and a dissonance needs to be resolved. Dissonant chords are resolved by consonant chords, which naturally succeed them, creating a smoother sound. The tension generated by dissonant chords can provide a feeling of impetus and energy in music. Wagner used dissonance to great effect in his operas, sometimes moving from one dissonant chord to another, sustaining the resultant tension without resolution, for entire acts which could be as long as two hours of music.
Ideas have altered over the centuries as to which chords and intervals are dissonant and which are consonant. The interval of two notes a semitone, or minor second, apart or their inversion, a major 7th, forms the strongest dissonance in Western triadic music. The interval of a fourth can be quite dissonant but was not considered so in the 10th century.
By the early 20th century, composers were introducing new ideas that replaced the traditional triadic harmony. In modern music, tension and dissonance may be less prepared and less formally structured than in Baroque and Classical music.
Another function of harmony is to punctuate musical phrases. Music has natural stopping places called cadences, with strong cadences at the end of phrases and weaker ones at other points in the musical line. Cadences with a clear finality are called perfect cadences and often lead from the dominant or fifth triad to the tonic or first triad, V-I. Imperfect cadences are less final and lead to the dominant.
Harmony is not just chords. As violinists we tend to think melodically or horizontally and harmony can seem quite vertical, but harmony works in the way each successive chord relates to the previous one. It is useful to have an understanding of harmony, particularly if you want to be able to improvise, but also to deepen your understanding of intonation in solo lines and as an ensemble player.
Music theory creates a distinction between harmony and counterpoint. Harmony is understood to occur where there is a melody with accompaniment. Counterpoint is where melodic lines are heard against each other, weaving together so that their notes harmonise. Music using counterpoint is called contrapuntal. Another useful word meaning music made up of several strands is polyphony, from the Greek for many sounds.
Counterpoint was a very important technique for composers in the late Middle Ages and in the Renaissance, when it was used widely in church music.
The concept of imitative counterpoint, a favourite device of composers such as Palestrina, is familiar to anyone who has ever sung Three Blind Mice or Frère Jacques as a round.
When music is written, there is an interdependence and integration between vertical and horizontal musical lines. Counterpoint was not succeeded by harmony; harmony developed out of counterpoint and comprises both vertical and horizontal movement. Harmony is a process involving not only the notes which make up a chord, but also the overall flow and progression of chords throughout a composition and the resultant countermelodies which occur.
In Western music, improvisational styles such as jazz have in the past been considered to be inferior to art music, which is pre-composed. Music that exists in oral traditions is separated from notated music, largely because the evolution of harmony has been facilitated by the process of prior composition, which allows for the analysis and study of harmonic techniques.
Jazz and pop harmonies are presented differently and are the basis for improvised melody, rather than being an accompaniment for a pre-composed tune. Have a look at this demonstration of basic jazz harmony. You will see that the chords are shown in the same way, named by their root (bottom note) as IV, V, I and so on, but they are also described by various terms and characters which determine and define the qualities of the chord. Jazz musicians have to develop a really deep understanding of the notes in each chord and how they operate within the chord in order to be able to improvise with apparent freedom.
If you would like to learn more about harmony, check out our music theory programme Musition, which comes free with your Violin School subscription. | https://www.violinschool.com/category/music-theory/ | 18 |
40 |
Lecture Presentation Chapter 3 Stoichiometry: Calculations with Chemical Formulas and Equations LO 1.17, 1.18, 3.1, 3.5, 1.4, 3.3, 1.2, 1.3 Ashley Warren Kings High School
Stoichiometry The area of study that examines the quantities of substances consumed and produced in chemical reactions. Stoicheion means “element” and metron means “measure.”
Law of Conservation of Mass: “We may lay it down as an incontestable axiom that, in all the operations of art and nature, nothing is created; an equal amount of matter exists both before and after the experiment. Upon this principle, the whole art of performing chemical experiments depends.” --Antoine Lavoisier, 1789. Chemists came to understand the basis for this law: atoms are neither created nor destroyed during a chemical reaction. The changes that occur during any reaction merely rearrange the atoms. The same collection of atoms is present both before and after the reaction. Lavoisier observed that mass is conserved in a chemical reaction.
Anatomy of a Chemical Equation: CH4(g) + 2O2(g) → CO2(g) + 2H2O(g). Chemical equations give a description of a chemical reaction. We read the + sign as “reacts with” and the arrow as “yields” or “produces”. Remember there are two kinds of numbers in chemical equations: subscripts and coefficients. Reactants appear on the left side of the equation. Products appear on the right side. The states of the reactants and products are written in parentheses to the right of each compound. Coefficients are inserted to balance the equation.
Subscripts and Coefficients Give Different Information: Stoichiometric coefficients give the ratio in which the reactants and products exist. Remember we can NEVER change the subscripts when we are balancing a chemical equation. Subscripts tell the number of atoms of each element in a molecule. Coefficients tell the number of molecules.
Interpreting and Balancing Chemical Equations: The following diagram represents a chemical reaction in which the red spheres are oxygen atoms and the blue spheres are nitrogen atoms. Write the chemical formulas for the reactants and products. Write a balanced chemical equation for the reaction. Is the diagram consistent with the law of conservation of mass?
Interpreting and Balancing Chemical Equations: In the diagram above, the white spheres represent hydrogen atoms and the blue spheres represent nitrogen atoms. To be consistent with the law of conservation of mass, how many NH3 molecules should be shown in the right (product) box? Answer: 6. We have had so much practice last year with balancing chemical reactions that I will not focus on balancing reactions in this powerpoint. If you need additional help then just let me know and I can give you some extra problems/help.
Reaction Types: In this section we will see three types of reactions: combination, decomposition, and combustion. We will be predicting products here and finding patterns of reactivity.
Combination Reactions: In combination reactions two or more substances react to form one product. Combination reactions have more reactants than products. Remember all of the types of synthesis reactions we learned last year. Those are fair game. Examples: 2Mg(s) + O2(g) → 2MgO(s); N2(g) + 3H2(g) → 2NH3(g); C3H6(g) + Br2(l) → C3H6Br2(l)
Decomposition Reactions: In a decomposition reaction one substance breaks down into two or more substances. Again refer to the decomposition reaction types that we learned last year. More products than reactants. Many decomposition reactions happen when heated. Combination and decomposition reactions are opposites. Examples: CaCO3(s) → CaO(s) + CO2(g); 2KClO3(s) → 2KCl(s) + 3O2(g); 2NaN3(s) → 2Na(s) + 3N2(g)
Combustion Reactions: Combustion reactions are generally rapid reactions that produce a flame. Combustion reactions most often involve hydrocarbons reacting with oxygen in the air. Examples: CH4(g) + 2O2(g) → CO2(g) + 2H2O(g); C3H8(g) + 5O2(g) → 3CO2(g) + 4H2O(g)
More Reactions - Addition reactions: a combination reaction where a substance is added to a compound with a double bond.
More Reactions - Substitution reactions: an atom or a functional group in a molecule is substituted for another atom or functional group.
Formula Weights: Chemical formulas and equations have a quantitative significance. While we cannot directly count atoms or molecules, we can indirectly determine their numbers if we know their masses!
Formula Weight (FW): A formula weight is the sum of the atomic weights for the atoms in a chemical formula. So, the formula weight of calcium chloride, CaCl2, would be Ca: 1(40.08 amu) + Cl: 2(35.45 amu) = 110.98 amu. Formula weights are generally reported for ionic compounds. What is the formula weight for sulfuric acid? Remember that atomic weights are simply what we find as the mass on the periodic table. For example, the atomic weight of oxygen is 16.00 amu.
Molecular Weight (MW): A molecular weight is the sum of the atomic weights of the atoms in a molecule. For the molecule ethane, C2H6, the molecular weight would be C: 2(12.01 amu) + H: 6(1.01 amu) = 30.08 amu. What is the molecular weight for glucose, C6H12O6?
Percent Composition: One can find the percentage of the mass of a compound that comes from each of the elements in the compound by using this equation: % element = [(number of atoms)(atomic weight of element) / (FW of the compound)] x 100
Percent Composition: So the percentage of carbon in ethane (C2H6) is %C = (2)(12.01 amu) / (30.08 amu) x 100 = 79.9%
Moles: Even the smallest samples we deal with in the lab contain enormous numbers of atoms, ions, or molecules. We have devised a unit for describing such large numbers of atoms or molecules.
Avogadro's Number: 1 mole = 6.02 x 10^23 particles, symbolized NA. 1 mole of 12C has a mass of exactly 12 g. Abbreviation: mol.
Avogadro's Number: See exercises 3.7 and 3.8.
Molar Mass: By definition, a molar mass is the mass of 1 mol of a substance (i.e., g/mol). The molar mass of an element is numerically equal to the atomic weight of the element that we find on the periodic table. The formula weight (in amu) will be the same number as the molar mass (in g/mol). Cl has an atomic weight of 35.5 amu, so 1 mol Cl has a mass of 35.5 grams. "The atomic weight of an element in atomic mass units is numerically equal to the mass in grams of 1 mol of that element."
Mole Relationships: One mole of atoms, ions, or molecules contains Avogadro's number of those particles. One mole of molecules or formula units contains Avogadro's number times the number of atoms or ions of each element in the compound.
Molar Mass: Which has more mass, a mole of water or a mole of glucose? Which contains more molecules, a mole of water or a mole of glucose? Calculate the molar mass of glucose. Calculate the molar mass of calcium nitrate.
Using Moles: Moles provide a bridge from the molecular scale to the real-world scale. Let's think about units in all of these cases! If you know the units you will ALWAYS be able to figure out what to do!
Masses and Moles: Calculate the number of moles of water in a sample of water of known mass. Calculate the mass, in grams, of a known number of moles of potassium chlorite. How many molecules are in a known mass of C3H8?
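A sketch of the grams-to-moles-to-molecules chain for water in Python; the 5.38 g sample mass is an arbitrary illustration, not a value from the slides:

# Grams -> moles -> molecules for a water sample.
AVOGADRO = 6.022e23          # particles per mole
MOLAR_MASS_H2O = 18.02       # g/mol

grams = 5.38                 # hypothetical sample mass
moles = grams / MOLAR_MASS_H2O
molecules = moles * AVOGADRO
print(f"{grams} g H2O = {moles:.4f} mol = {molecules:.3e} molecules")

The units drive every step: dividing grams by grams-per-mole leaves moles, and multiplying moles by particles-per-mole leaves particles.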
Finding Empirical Formulas: Remember from chapter 2 that an empirical formula is the smallest whole-number ratio of the atoms in a compound. It tells us the relative number of atoms of each element in the substance. This also applies on the molar level: 1 mol of water contains 2 mol of hydrogen and 1 mol of oxygen. Conversely, the ratio of the numbers of moles of all elements in a compound gives the subscripts in the compound's empirical formula.
Calculating Empirical Formulas: One can calculate the empirical formula from the percent composition.
Calculating Empirical Formulas: The compound para-aminobenzoic acid (you may have seen it listed as PABA on your bottle of sunscreen) is composed of carbon (61.31%), hydrogen (5.14%), nitrogen (10.21%), and oxygen (23.33%). Find the empirical formula of PABA.
Calculating Empirical Formulas: Assuming 100.00 g of para-aminobenzoic acid: C: 61.31 g x (1 mol / 12.01 g) = 5.105 mol C; H: 5.14 g x (1 mol / 1.01 g) = 5.09 mol H; N: 10.21 g x (1 mol / 14.01 g) = 0.7288 mol N; O: 23.33 g x (1 mol / 16.00 g) = 1.458 mol O.
Calculating Empirical Formulas: Calculate the mole ratio by dividing by the smallest number of moles: C: 5.105 mol / 0.7288 mol = 7.005 ≈ 7; H: 5.09 mol / 0.7288 mol = 6.98 ≈ 7; N: 0.7288 mol / 0.7288 mol = 1.000; O: 1.458 mol / 0.7288 mol = 2.001 ≈ 2.
Calculating Empirical Formulas: These are the subscripts for the empirical formula: C7H7NO2.
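The PABA calculation above is mechanical enough to script. A minimal Python sketch using the same numbers:

# Percent composition -> moles per 100 g -> smallest whole-number ratio.
percent = {"C": 61.31, "H": 5.14, "N": 10.21, "O": 23.33}
atomic_weight = {"C": 12.01, "H": 1.01, "N": 14.01, "O": 16.00}

moles = {el: pct / atomic_weight[el] for el, pct in percent.items()}
smallest = min(moles.values())
ratio = {el: round(n / smallest) for el, n in moles.items()}
print(ratio)  # {'C': 7, 'H': 7, 'N': 1, 'O': 2}  -> C7H7NO2

Note that real data sometimes gives ratios like 1.5 or 2.33 that must be scaled up to whole numbers rather than simply rounded; this sketch only handles the clean case.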
Molecular Formulas from Empirical: Mesitylene, a hydrocarbon found in crude oil, has an empirical formula of C3H4 and an experimentally determined molecular weight of 121 amu. What is its molecular formula? The formula weight of C3H4 = 40.0 amu. Whole-number multiple = molecular weight / empirical formula weight = 121 / 40 = 3.02 ≈ 3. This means the molecular formula is triple the empirical formula; therefore the molecular formula is C9H12.
Combustion Analysis: Compounds containing C, H, and O are routinely analyzed through combustion in a chamber like the one shown in Figure 3.14. C is determined from the mass of CO2 produced. H is determined from the mass of H2O produced. O is determined by difference after the C and H have been determined. See Exercise 3.15. Empirical formulas are routinely determined by combustion analysis. When a compound containing carbon and hydrogen is completely combusted, the carbon is converted to CO2 and the hydrogen is converted to H2O. The amounts of CO2 and H2O produced are determined by measuring the mass increase in the CO2 and H2O absorbers. From the masses of CO2 and H2O we can calculate the numbers of moles and therefore the empirical formula. If a third element (like oxygen) is present, then we can determine its mass by subtracting the measured masses of carbon and hydrogen from the original sample mass.
Stoichiometric Calculations: It is important to realize that the stoichiometric ratios are the ideal proportions in which reactants are needed to form products. The coefficients in the balanced equation give the ratio of moles of reactants and products as well as the relative numbers of molecules.
Stoichiometric Calculations: Starting with the mass of Substance A, you can use the ratio of the coefficients of A and B to calculate the mass of Substance B formed (if it's a product) or used (if it's a reactant). Remember: grams to moles, moles to moles, moles to grams!
Stoichiometric Calculations: C6H12O6 + 6O2 → 6CO2 + 6H2O. Starting with 1.00 g of C6H12O6, we calculate the moles of C6H12O6 (using molar mass), use the coefficients to find the moles of H2O (using the mole-to-mole ratio), and then turn the moles of water into grams (using molar mass again).
Stoichiometric Calculations: If given the following reaction, how many grams of O2 can be prepared from 4.50 grams of KClO3? 2KClO3(s) → 2KCl(s) + 3O2(g)
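One way to set up that problem, using standard molar masses (K 39.10, Cl 35.45, O 16.00, so KClO3 is 122.55 g/mol; these values are supplied here, not taken from the slide):

\[
4.50\ \text{g KClO}_3 \times \frac{1\ \text{mol KClO}_3}{122.55\ \text{g KClO}_3} \times \frac{3\ \text{mol O}_2}{2\ \text{mol KClO}_3} \times \frac{32.00\ \text{g O}_2}{1\ \text{mol O}_2} \approx 1.76\ \text{g O}_2
\]

The mole-to-mole ratio in the middle comes straight from the coefficients of the balanced equation.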
Limiting Reactants: Often one or more reactants is present in excess. Therefore, at the end of the reaction those reactants present in excess will still be in the reaction mixture.
Limiting Reactants: The limiting reactant is the reactant present in the smallest stoichiometric amount. In other words, it's the reactant you'll run out of first (in the example below, the H2). In that example, the O2 would be the excess reagent. The limiting reactant is completely consumed. The excess reagent is the reactant that's present in excess.
Theoretical Yield: The theoretical yield is the maximum amount of product that can be made. In other words, it's the amount of product possible as calculated through the stoichiometry problem. This is different from the actual yield, which is the amount one actually produces and measures.
Percent Yield: One finds the percent yield by comparing the amount actually obtained (actual yield) to the amount it was possible to make (theoretical yield): percent yield = (actual yield / theoretical yield) x 100
Limiting Reactant/Theoretical Yield Calculations: See Exercises 3.18, 3.19, and 3.20.
Chapter 3 Homework Problems: 1, 3, 5, 6, 8, 10, 12, 13, 15, 17, 19, 22, 23, 25, 27, 30, 31, 33, 35, 37, 40, 41, 43, 46, 47, 49, 52, 53, 55, 61, 63, 65, 67, 69, 71, 76, 77, 80, 81, 83, 87, 89a,b, 97, 101
| http://slideplayer.com/slide/273858/ | 18 |
23 | On October 4, 1957, the Soviet Union launched Sputnik 1, the first human-made object to orbit Earth. This event marks the beginning of humanity's space exploration history. After that, humanity went to the moon, astronauts and cosmonauts performed countless spacewalks, and since the arrival of Expedition 1 on November 2, 2000, the International Space Station has been continuously occupied. To date, this is the longest continuous human presence in space, having surpassed the previous record of 9 years and 357 days held by Mir. But maybe even more important, we launched thousands of artificial satellites into the Earth's orbit. These artificial satellites shape our modern life: weather forecasts, broadcasting, communication and GPS are just a few examples. But there's a side effect: just like here on the Earth, we are slowly filling the most important part just above us with junk. And this junk could end space exploration and destroy our modern way of life. This (very possible) scenario is known as the Kessler Syndrome, proposed by the American astrophysicist and former NASA scientist Donald J. Kessler in 1978.
Around 66 million years ago, an asteroid (or a comet) with a diameter of at least 10 kilometers (6 miles) impacted a few miles from the present-day town of Chicxulub in Mexico at around 64,000 kilometers per hour (40,000 mph). The impact triggered the chain of events known today as the Cretaceous-Paleogene (K-Pg) extinction event, also known as the Cretaceous-Tertiary (K-T) extinction, and wiped out three-quarters of the plant and animal species on Earth, including non-avian dinosaurs.
If this Chicxulub impactor happened today, it would wipe out the human civilization. Luckily, events like Chicxulub impact are rare. Asteroids with a 1 km (0.62 mi) diameter strike Earth every 500,000 years on average. But that doesn’t mean we are totally safe. Asteroids with a diameter of at least 140 meters (460 ft) are big enough to cause regional devastation to human settlements unprecedented in human history in the case of a land impact or a major tsunami in the case of an ocean impact.
European Space Agency has published an amazing time-lapse video showing the Earth from the International Space Station (ISS). The space agency wrote “Join ESA astronaut Alexander Gerst for a quick flight from the USA to Africa aboard the International Space Station in this time-lapse filmed 12.5 times faster than actual speed.” You can watch that breathtaking video below:
NASA has published an amazing video titled “Sounds of Saturn: Hear Radio Emissions of the Planet and Its Moon Enceladus”. The analyze of the data from the Cassini Spacecraft’s Grand Finale orbits showed a surprisingly powerful interaction of plasma waves moving from Saturn to its icy moon Enceladus. Researchers converted the recording of plasma waves into a “whooshing” audio file that we can hear, in the same way a radio translates electromagnetic waves into music.
European Space Agency (ESA) has published the complete Rosetta image archive under a Creative Commons license, which means the complete archive is freely available: you can copy, share, and tweak the content.
To be able to reach space, we need rockets. Rocket engines work by action and reaction ("To every action, there is always opposed an equal reaction") and push rockets forward simply by expelling their exhaust in the opposite direction at high speed; they can therefore work in the vacuum of space. Space rockets are usually enormous in size, because the bigger the rocket is, the more thrust its engine can produce and the more weight it can carry into orbit. Here are the 10 tallest rockets ever launched in the history of space exploration.
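That trade-off is usually formalized by the Tsiolkovsky rocket equation (added here as standard background; it is not part of the original post):

\[
\Delta v = v_e \ln\!\frac{m_0}{m_f}
\]

where \(v_e\) is the exhaust velocity, \(m_0\) the initial mass and \(m_f\) the final mass. Because the achievable change in velocity grows only with the logarithm of the mass ratio, carrying more payload to orbit forces rockets to be disproportionately large, which is why the vehicles in this list are so enormous.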
In a video published by the European Space Agency on Twitter, retired American astronaut Scott Kelly describes seeing Earth from space for the first time.
On April 24, 1990, Hubble Space Telescope was launched into low Earth orbit from space shuttle Discovery (STS-31). It orbits the Earth at an altitude of about 350 miles (560 kilometers). For a comparison, the International Space Station (ISS) maintains an orbit with an altitude of between 205 and 270 miles (330 and 435 kilometers). The telescope is 43.5 feet (13.2 meters) long, weighs 24,500 pounds (11,110 kilograms).
The European Space Agency (ESA) occasionally posts high-resolution photos of space under the title of “week in images”. This amazing image of the Mont Saint-Michel from space, which was captured on 21 June 2017, is also featured on the ESA’s Earth from Space video programme, presented by Kelsea Brennan-Wessels from the ESA Web TV virtual studios.
ESA’s (European Space Agency) Gaia spacecraft has created the most accurate and detailed map of the Milky Way galaxy (and beyond) to date. The map includes high-precision measurements of nearly 1.7 billion stars and reveals previously unseen details of our home Galaxy. It is the second iteration of the map and was published by ESA on April 25, 2018.
| https://ourplnt.com/tag/european-space-agency/ | 18 |
19 |
The smallest supermassive black hole ever identified is gobbling material at rates similar to its larger cousins, providing insights into how these behemoths evolve.
Located at the heart of a dwarf galaxy known as RGG 118, the black hole contains about 50,000 times more mass than the sun. It's therefore less than half as heavy as the second-smallest known supermassive black hole, researchers said.
"It might sound contradictory, but finding such a small, large black hole is very important," lead author Vivienne Baldassare, a doctoral student at the University of Michigan (UM) in Ann Arbor, said in a statement. "We can use observations of the lightest supermassive black holes to better understand how black holes of different sizes grow." [Images: Black Holes of the Universe]
There are two types of black hole — stellar mass and supermassive. Stellar-mass black holes weigh a few times as much as the sun and form after the collapse of huge stars. Supermassive black holes reside at the center of most, if not all, galaxies and are thought to evolve and grow along with the collection of stars they inhabit.
RGG 118 is located about 340 million light-years from Earth; the dwarf galaxy was originally identified by the Sloan Digital Sky Survey. Baldassare and her colleagues were able to determine the mass of RGG 118's central black hole by studying the motion of gas near the galaxy's center with the 21-foot Clay Telescope in Chile.
At 50,000 solar masses, the black hole is quite a lightweight. For example, the Milky Way galaxy's central supermassive black hole is about 100 times more massive. The heaviest known black holes weigh about 200,000 times as much as the one in RGG 118.
"In a sense, it's a teeny supermassive black hole," said co-author Elena Gallo of UM in another statement.
The team also used NASA's Chandra X-ray Observatory to measure the X-ray brightness of RGG 118's hot gas, which allowed them to calculate how quickly the black hole is gobbling up material. The scientists found that RGG 118 is consuming material at about 1 percent the maximum rate — similar to that of other, larger supermassive black holes.
"This little supermassive black hole behaves very much like its bigger, and in some cases much bigger, cousins," said study co-author Amy Reines, also of UM. "This tells us black holes grow in a similar way, no matter what their size."
Scientists still aren't sure exactly how supermassive black holes are born and grow. One idea posits that huge clouds of gas collapse into "seed" black holes, which merge over time to form the larger, supermassive black holes. Other researchers think they form when a giant star, approximately 100 times the mass of the sun, runs out of fuel and collapses into a black hole.
"This black hole in RGG 118 is serving as a proxy for those in the very early universe, and ultimately may help us decide which of the two [ideas] is right," Gallo said.
Active black holes help shape how their galaxies grow and evolve, regulating temperature and the movement of the gas and dust that grow into stars. The small size of RGG 118's black hole indicates that the dwarf galaxy has likely never endured a merger with a neighbor, the process by which larger galaxies are thought to grow, researchers said.
"These little galaxies can serve as analogs to galaxies in the earlier universe," Baldassare said. "By studying how galaxies like this one are growing and feeding their black holes and how the two are influencing each other, we could gain a better understanding of how galaxies were forming in the early universe."
The research, which included a fourth author, Jenny Greene of Princeton University, is available online in the Astrophysical Journal Letters. | https://www.foxnews.com/science/tiniest-monster-black-hole-discovered | 18 |
13 | Here is a step-by-step guide to how to solve an inequality with a fraction in it. Even if fractions seem to trip you up every time, once you learn this concept, you'll be solving problems with fractions in them in no time.
Begin by simply taking in the inequality before you even begin to use any processes to try and solve the problem. Take note of any negatives that you will need to remember to carry through while solving the problem. You should also notice all the processes in the inequality such as multiplication, subtraction, exponents, parentheses and such.
Use the order of operation in reverse to begin to solve the problem. One easy way to remember the order of operations is to remember the word PEMDAS (parentheses, exponents, multiplication/division, addition/subtraction). Now, when you are solving for a variable, you will use the order of operations in reverse, so instead of beginning with parentheses and ending with addition/subtraction, you will begin with addition/subtraction and end with parentheses.
If you have the inequality 3 < (x/9) + 7
Begin with subtraction by subtracting 7 from both sides, rather than beginning with the parentheses x/9.
Do all processes to both sides of the inequality until you have solved for x.
Example: As mentioned in the previous step, you would begin by subtracting 7 from both sides.
So 3 < (x/9) + 7 becomes -4 < x/9.
Now you would multiply both sides by 9 because the fraction x/9 is the same as x divided by 9, and the opposite of division is of course multiplication.
This process leaves you with the solution -36 < x, so x is greater than -36.
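Written out as one chain, the whole solution reads:

\[
3 < \frac{x}{9} + 7 \;\Rightarrow\; 3 - 7 < \frac{x}{9} \;\Rightarrow\; -4 < \frac{x}{9} \;\Rightarrow\; -36 < x.
\]

Each step applies the same operation to both sides, and since we only subtracted 7 and multiplied by the positive number 9, the inequality sign never flips.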
Remember that if your problem requires you to multiply or divide by a negative number, then you need to flip the inequality sign when you do so.
For example: if instead of multiplying by 9 in the previous problem you had to multiply by -9, you would get 36 > x rather than 36 < x.
Always double check your work. | https://sciencing.com/solve-inequalities-fractions-2365165.html | 18 |
13 | What is a fire triangle?
Updated May 14, 2018
A simplified cousin to the fire tetrahedron, the fire triangle is a model for conveying the components of a fire. The fire triangle's three sides illustrate the three elements of fire, which are heat, fuel and an oxidizing agent (usually oxygen).
The three elements must be combined in the right proportions for a fire to occur. If any of the three elements are removed, the fire is extinguished.
The first element in the fire triangle is heat, which is perhaps the most essential of fire elements. A fire cannot ignite unless it has a certain amount of heat, and it cannot grow without heat either.
One of the first things firefighters do to extinguish a fire is to apply a cooling agent – usually water. Another cooling agent is a chemical fire retardant, such as the ones used in fire extinguishers.
Another method of diffusing heat from a fire is to scrape the embers from the fire source, such as wood embers on a burning building. Firefighters will also turn off the electricity in a burning building to remove a source of heat.
The second element in the fire triangle is fuel. A fire needs a fuel source in order to burn. The fuel source can be anything that is flammable, such as wood, paper, fabric, or chemicals. Once the fuel element of the fire triangle is removed, the fire will go out.
If a fire is allowed to burn without any attempt to extinguish it, as in the case of a controlled burn conducted by the Forest Service, it will extinguish on its own when it has consumed all of the fuel.
The final element of the fire triangle is oxygen, which is also an essential component of fire. A fire needs oxygen to start and continue. That is why one recommendation for extinguishing a small fire is to smother it with a non-flammable blanket, sand or dirt.
A decrease in the concentration of oxygen retards the combustion process. In large fires where firefighters are called in, decreasing the amount of oxygen is not usually an option because there is no effective way to make that happen in an extended area.
An alternative to the fire triangle model is the fire tetrahedron. The fire tetrahedron adds a fourth element, the chemical chain reaction. Fires involving metals such as titanium, lithium and magnesium have a chemical reaction that requires a different approach from firefighters.
This is called a class D fire and the application of water will exacerbate the combustion. Because of the chain reaction caused by the metals in class D fires, firefighters must use a different approach involving the introduction of inert agents like sand to smother it.
Learning about the fire triangle is a good way to understand the elements of fire and is an essential component of firefighting education. | https://www.firerescue1.com/fire-products/apparatus-accessories/articles/1206070-What-is-a-fire-triangle/ | 18 |
11 | This lesson asks students to compare and contrast specific elements of a subtraction problem and its equivalent addition problem. The goal is for students to look closely and notice patterns of similarities and differences so they can explain why adding the opposite can be used to solve subtraction problems. So often they learn the "trick" without making sense of why it works. Without a sense of how the relationships between the numbers and the operation apply, they can't internalize the "trick". Instead they just use it because the teacher said so, but don't understand or believe that it is really equivalent. It is important for students to be able to explain and justify every step they take mathematically in order to make sense of the math. (mp1 & mp3)
This is also a fairly self directed lesson which makes it easier for a sub. The warm up consists of two open ended questions with multiple solution possibilities. This engages students more and gives them more ownership of the activity when it allows them choice and creativity. It also raises questions within the math family groups and encourages students to check each other's solutions (mp3).
The Warm up gives students two open ended questions about integer addition.
1. Vanessa added three numbers together and got a sum of zero. What might the 3 numbers have been?
The sub is told to have students share their solutions and have them explain or show how it works, or have the rest of the class verify it. I also have him/her ask students if it is possible for all three numbers to be negative? positive? (no) and have them explain why not. I expect them to say that in order to cancel out to zero the numbers must combine both positive and negative numbers. They may say that in order to go back to zero on the number line they have to move in the opposite direction, which would involve adding both positive and negative numbers. These sub notes are included on the Warm up answers sheet.
2. Caleb was adding integers and he knew that the sign on the sum would be negative. What could that tell you about the numbers being added?
Students may think all the numbers being added have to be negative, in which case the sub is directed to ask if they could make a negative sum if one of the numbers were positive? If two of the numbers were positive? (it doesn't say how many numbers are being added).
I really want students to grapple with the relationships between the numbers here, which is why I don't give them specific numbers. These open-ended questions force students to test and adjust the numbers and learn the limits, which helps them generalize the patterns or rules.
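For teachers who want to probe the limits of these open-ended questions before class, a quick brute-force search makes the pattern concrete. A minimal sketch in Python (the search range is an arbitrary choice):

```python
# Warm-up question 1: find triples of integers that sum to zero, and
# confirm that no all-positive or all-negative triple can work.
triples = [(a, b, c)
           for a in range(-5, 6)
           for b in range(-5, 6)
           for c in range(-5, 6)
           if a + b + c == 0]

print(len(triples), "triples found, e.g.", triples[:3])

# Every zero-sum triple must straddle zero: its smallest member is <= 0
# and its largest is >= 0, so all-positive and all-negative both fail.
assert all(min(t) <= 0 <= max(t) for t in triples)
```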
After going over it I ask the sub to have students take a second look at the front page and draw their attention to the pairs of equivalent equations. Ask students to check these to see if they follow the same patterns as were found in the comparisons on the back.
For the remainder of class students may work together on their homework, "subtraction int patterns". One side asks students to solve addition in one section, subtraction in the next, and then match the equivalent equations. They are also asked to explain why it makes sense that subtracting positives is equivalent to adding negatives and vice versa.
The second page asks students to compare equivalent equations, number lines, etc. as they did last night. This assignment has students compare each part separately so that they must say there are "no differences" in the solution, the number line, etc. It makes it very clear what is different and emphasizes the "sameness". This is important because it helps them see why they can choose to do the simpler problem. | https://betterlesson.com/lesson/530225/patterns-in-subtraction-day-2-of-2 | 18
37 | Solving Absolute Value Equations. Solving Compound and Absolute Value Inequalities. Solving Systems of Equations by Graphing. Solving Systems of Equations Algebraically. Inconsistent and Dependent Systems. Solving Systems of Inequalities By Graphing. Solving Systems of Equations in Three Variables.
Solving Systems in Three Variables. Identity and Inverse Matrices. Solving Systems of Equations Using Matrices. Solving Systems of Equations. Write as Matrix Equation.
Axis of Symmetry, Vertex, Graph. Solving Quadratic Equations by Graphing. Solving Quadratic Equations by Factoring. Imaginary and Complex Numbers. Quadratic Formula and the Discriminant. Analyzing the Graphs of Quadratic Functions. Writing Quadratic Equations in Vertex Form.
Graphing and Solving Quadratic Inequalities. Analyzing Graphs of Polynomial Functions. Maximum and Minimum Points.
Relative Maximum and Relative Minimum. Remainder and Factor Theorems. Inverse Functions and Relations. Procedure to Construct an Inverse Function. Square Root Functions and Inequalities.
Graphing Square Root Functions. Graph Square Root Function. Operations with Radical Expressions. Adding and Subtracting Radicals. Solving Radical Equations and Inequalities. Multiplying and Dividing Rational Expressions.
Adding and Subtracting Rational Expressions. Holes and Vertical Asymptotes. Direct, Joint, and Inverse Variation. Connect with algebra tutors and math tutors nearby. Prefer to meet online? Find online algebra tutors or online math tutors in a couple of clicks. Simplifying: Use this calculator if you only want to simplify, not solve an equation.
Expression Factoring: Factors expressions using 3 methods. Factoring and Prime Factoring Calculator. Consecutive Integer Word Problems. Simplifying and Solving Equations with Multiple Signs. Negative Exponents of Numbers. Negative Exponents of Variables. Negative Exponents in Fractions. Factoring a Difference Between Two Squares. Solve Using the Quadratic Formula. Reading the Coordinates of Points on a Graph. Determining the Slope of a Line. Determining x and y Intercepts of a Line. Algebra Help: This section is a collection of lessons, calculators, and worksheets created to assist students and teachers of algebra.
Lessons: Explore one of our dozens of lessons on key algebra topics like Equations, Simplifying and Factoring. Calculators: Having trouble solving a specific equation? Worksheets: Need to practice a new type of problem? Simplifying Using the Distributive Property.
Simplifying Exponents of Numbers. Simplifying Exponents of Variables. Simplifying Exponents of Polynomials Parentheses.
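Since the topic list above includes the quadratic formula and the discriminant, here is how the two fit together in a few lines. A minimal sketch, not tied to any particular calculator on the site:

```python
# Quadratic formula: solve ax^2 + bx + c = 0 and report the discriminant.
import cmath  # complex square root, so negative discriminants still work

def solve_quadratic(a, b, c):
    disc = b * b - 4 * a * c
    root = cmath.sqrt(disc)
    return disc, ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve_quadratic(1, -3, 2))  # disc = 1, real roots 2 and 1
print(solve_quadratic(1, 0, 4))   # disc = -16, imaginary roots +/- 2j
```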
Algebra, math homework solvers, lessons and free tutors: Pre-algebra, Algebra I, Algebra II, Geometry, Physics. Created by our FREE tutors. Solvers with work shown, write algebra lessons, help you solve your homework problems.
Learn algebra 2 for free—tackle more complex (and interesting) mathematical relationships than in algebra 1. Full curriculum of exercises and videos.
Algebra 2. OK. So what are you going to learn here? You will learn about Numbers, Polynomials, and more. Graphs are a great way to see what is going on and can help you solve things, but you need to be careful, as they may not always give you the full story. "Second degree" just means the variable has an exponent of 2, like x^2. Free math problem solver answers your algebra homework questions with step-by-step explanations.
Free algebra lessons, games, videos, books, and online tutoring. We can help you with middle school, high school, or even college algebra, and we have math lessons in many other subjects too. Step-by-step solutions to all your Algebra 2 homework questions - Slader. | http://funday24.ml/kkkc/algebra-2-help-wix.php | 18 |
22 | The identity function in math is one in which the output of the function is equal to its input, often written as f(x) = x for all x. The input-output pair made up of x and y is always identical, thus the name identity function. This holds true not only for the set of all real numbers, but also for the set of all real functions. Often considered mathematically trivial, the identity function is the basis for all other functions.
By casting the identity function as a linear function of the form y = mx + b, where m = 1 and b = 0 for all real numbers, more properties of the identity function become easier to see. All linear functions are combinations of the identity function and two constant functions. For example, the linear function y = 3x + 2 breaks down into the identity function multiplied by the constant function y = 3, then added to the constant function y = 2. Conversely, the identity function is a special case of all linear functions.
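That decomposition is easy to state in code. A minimal sketch (the helper names are illustrative):

```python
# Every linear function y = mx + b is the identity function scaled by the
# constant m and shifted by the constant b; m = 3, b = 2 matches the text.
identity = lambda x: x

def linear(m, b):
    return lambda x: m * identity(x) + b

f = linear(3, 2)   # y = 3x + 2
g = linear(1, 0)   # m = 1, b = 0 recovers the identity function itself

assert all(f(x) == 3 * x + 2 for x in range(-10, 11))
assert all(g(x) == x for x in range(-10, 11))
```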
Although there are not many direct real-world applications of the identity function, it is also true that the identity function underlies all practical real-world applications. It is most frequently used in the theoretical or abstract fields of mathematics, each of which may or may not have direct real-world applications. It is occasionally useful in computer science applications when a function requires that its arguments be functions. In certain cases, then, the identity function f(x) can replace the simple variable x. | https://www.reference.com/math/identity-function-math-49402714ba2c5456 | 18
21 | A router[a] is a networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. Data sent through the internet, such as a web page or email, is in the form of data packets. A packet is typically forwarded from one router to another router through the networks that constitute an internetwork until it reaches its destination node.
A router is connected to two or more data lines from different networks.[b] When a data packet comes in on one of the lines, the router reads the network address information in the packet to determine the ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey.
The most familiar type of router is the home or small office router, which simply forwards IP packets between the home computers and the Internet. An example of a router would be the owner's cable or DSL router, which connects to the Internet through an Internet service provider (ISP). More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone. Though routers are typically dedicated hardware devices, software-based routers also exist.
When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a routing protocol. Each router builds up a routing table listing the preferred routes between any two systems on the interconnected networks.
A router has two types of network element components organized onto separate planes:
- Control plane: A router maintains a routing table that lists which route should be used to forward a data packet, and through which physical interface connection. It does this using internal preconfigured directives, called static routes, or by learning routes dynamically using a routing protocol. Static and dynamic routes are stored in the routing table. The control-plane logic then strips non-essential directives from the table and builds a forwarding information base (FIB) to be used by the forwarding plane.
- Forwarding plane: The router forwards data packets between incoming and outgoing interface connections. It forwards them to the correct network type using information that the packet header contains matched to entries in the FIB supplied by the control plane.
A router may have interfaces for different types of physical layer connections, such as copper cables, fiber optic, or wireless transmission. It can also support different network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix.
Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' (ISPs') networks. The largest routers (such as the Cisco CRS-1 or Juniper PTX) interconnect the various ISPs, or may be used in large enterprise networks. Smaller routers usually provide connectivity for typical home and office networks.
All sizes of routers may be found inside enterprises. The most powerful routers are usually found in ISPs, academic and research facilities. Large businesses may also need more powerful routers to cope with ever-increasing demands of intranet data traffic. A hierarchical internetworking model for interconnecting routers in large networks is in common use.
Access, core and distribution
Access routers, including small office/home office (SOHO) models, are located at home and customer sites such as branch offices that do not need hierarchical routing of their own. Typically, they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmware like Tomato, OpenWrt or DD-WRT.
In enterprises, a core router may provide a collapsed backbone interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations. They tend to be optimized for high bandwidth, but lack some of the features of edge routers.
Distribution routers aggregate traffic from multiple access routers, either at the same site, or to collect the data streams from multiple sites to a major enterprise location. Distribution routers are often responsible for enforcing quality of service across a wide area network (WAN), so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks.
External networks must be carefully considered as part of the overall security strategy of the local network. A router may include a firewall, VPN handling, and other security functions, or these may be handled by separate devices. Many companies produced security-oriented routers, including Cisco PIX series, Cisco Meraki MX series and Juniper NetScreen. Routers also commonly perform network address translation (which allows multiple devices on a network to share a single public IP address) and stateful packet inspection. Some experts argue that open source routers are more secure and reliable than closed source routers because open source routers allow mistakes to be quickly found and corrected.
Routing different networks
Routers are also often distinguished on the basis of the network in which they operate. A router in a local area network (LAN) of a single organisation is called an interior router. An exterior router directs packets between hosts in one LAN and hosts in another LAN; a router operated in the Internet backbone is also described as an exterior router. Routers that connect a LAN with a wide area network (WAN) are called border routers, or gateway routers.
Internet connectivity and internal use
Routers intended for ISP and major enterprise connectivity usually exchange routing information using the Border Gateway Protocol (BGP). RFC 4098 standard defines the types of BGP routers according to their functions:
- Edge router: Also called a provider edge router, this is placed at the edge of an ISP network. The router speaks External BGP (EBGP) to routers in other ISPs or in a large enterprise Autonomous System.
- Subscriber edge router: Also called a Customer Edge router, this is located at the edge of the subscriber's network; it also speaks EBGP to its provider's Autonomous System. It is typically used in an (enterprise) organization.
- Inter-provider border router: Interconnecting ISPs, this is a BGP router that maintains BGP sessions with other BGP routers in other ISPs' Autonomous Systems.
- Core router: A core router resides within an Autonomous System as a backbone to carry traffic between edge routers.
- Within an ISP: In the ISP's Autonomous System, a router uses internal BGP to communicate with other ISP edge routers, other intranet core routers, or the ISP's intranet provider border routers.
- Internet backbone: The Internet no longer has a clearly identifiable backbone, unlike its predecessor networks. See default-free zone (DFZ). The major ISPs' system routers make up what could be considered to be the current Internet backbone core. ISPs operate all four types of the BGP routers described here. An ISP "core" router is used to interconnect its edge and border routers. Core routers may also have specialized functions in virtual private networks based on a combination of BGP and Multi-Protocol Label Switching protocols.
- Port forwarding: Routers are also used for port forwarding between private Internet-connected servers.
- Voice/Data/Fax/Video Processing Routers: Commonly referred to as access servers or gateways, these devices are used to route and process voice, data, video and fax traffic on the Internet. Since 2005, most long-distance phone calls have been processed as IP traffic (VOIP) through a voice gateway. Use of access server type routers expanded with the advent of the Internet, first with dial-up access and another resurgence with voice phone service.
- Larger networks commonly use multilayer switches, with layer 3 devices being used to simply interconnect multiple subnets within the same security zone, and higher layer switches when filtering, translation, load balancing or other higher level functions are required, especially between zones.
Historical and technical information
The concept of an "Interface computer" was first used by Donald Davies for the NPL network in 1966. The Interface Message Processor (IMP), conceived in 1967 for use in the ARPANET, had fundamentally the same functionality as a router does today. The idea for a router (called "gateways" at the time) initially came about through an international group of computer networking researchers called the International Network Working Group (INWG). Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, it became a subcommittee of the International Federation for Information Processing later that year. These gateway devices were different from most previous packet switching schemes in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that entirely to the hosts.[c]
The idea was explored in more detail, with the intention to produce a prototype system as part of two contemporaneous programs. One was the initial DARPA-initiated program, which created the TCP/IP architecture in use today. The other was a program at Xerox PARC to explore new networking technologies, which produced the PARC Universal Packet system; due to corporate intellectual property concerns it received little attention outside Xerox for years. Some time after early 1974, the first Xerox routers became operational. The first true IP router was developed by Ginny Strazisar at BBN, as part of that DARPA-initiated effort, during 1975-1976. By the end of 1976, three PDP-11-based routers were in service in the experimental prototype Internet.
The first multiprotocol routers were independently created by staff researchers at MIT and Stanford in 1981; the Stanford router was done by William Yeager, and the MIT one by Noel Chiappa; both were also based on PDP-11s. Virtually all networking now uses TCP/IP, but multiprotocol routers are still manufactured. They were important in the early stages of the growth of computer networking when protocols other than TCP/IP were in use. Modern Internet routers that handle both IPv4 and IPv6 are multiprotocol but are simpler devices than routers processing AppleTalk, DECnet, IP and Xerox protocols.
From the mid-1970s and in the 1980s, general-purpose minicomputers served as routers. Modern high-speed routers are highly specialized computers with extra hardware added to speed both common routing functions, such as packet forwarding, and specialised functions such as IPsec encryption. There is substantial use of Linux and Unix software based machines, running open source routing code, for research and other applications. The Cisco IOS operating system was independently designed. Major router operating systems, such as Junos and NX-OS, are extensively modified versions of Unix software.
The main purpose of a router is to connect multiple networks and forward packets destined either for its own networks or other networks. A router is considered a layer-3 device because its primary forwarding decision is based on the information in the layer-3 IP packet, specifically the destination IP address. When a router receives a packet, it searches its routing table to find the best match between the destination IP address of the packet and one of the addresses in the routing table. Once a match is found, the packet is encapsulated in the layer-2 data link frame for the outgoing interface indicated in the table entry. A router typically does not look into the packet payload, but only at the layer-3 addresses to make a forwarding decision, plus optionally other information in the header for hints on, for example, quality of service (QoS). For pure IP forwarding, a router is designed to minimize the state information associated with individual packets. Once a packet is forwarded, the router does not retain any historical information about the packet.[d]
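In IP forwarding, the "best match" is normally the longest-prefix match: among all table entries whose prefix covers the destination address, the most specific one wins. Below is a minimal sketch of that lookup in Python; the table contents and interface names are invented for illustration, and real routers use specialized structures such as tries or TCAM rather than a linear scan:

```python
import ipaddress

# A toy routing table: (prefix, outgoing interface or next hop).
routing_table = [
    (ipaddress.ip_network("0.0.0.0/0"),   "default -> ISP"),
    (ipaddress.ip_network("10.0.0.0/8"),  "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
]

def lookup(destination):
    dst = ipaddress.ip_address(destination)
    # Keep entries that cover the destination; pick the longest prefix.
    matches = [(net, hop) for net, hop in routing_table if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))  # eth1 -- the /16 beats the /8 and the default
print(lookup("8.8.8.8"))   # default -> ISP
```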
The routing table itself can contain information derived from a variety of sources, such as a default or static routes that are configured manually, or dynamic routing protocols where the router learns routes from other routers. A default route is one that is used to route all traffic whose destination does not otherwise appear in the routing table; this is common – even necessary – in small networks, such as a home or small business where the default route simply sends all non-local traffic to the Internet service provider. The default route can be manually configured (as a static route), or learned by dynamic routing protocols, or be obtained by DHCP.[e]
A router can run more than one routing protocol at a time, particularly if it serves as an autonomous system border router between parts of a network that run different routing protocols; if it does so, then redistribution may be used (usually selectively) to share information between the different protocols running on the same router.
Besides making a decision as to which interface a packet is forwarded to, which is handled primarily via the routing table, a router also has to manage congestion when packets arrive at a rate higher than the router can process. Three policies commonly used in the Internet are tail drop, random early detection (RED), and weighted random early detection (WRED). Tail drop is the simplest and most easily implemented; the router simply drops new incoming packets once the length of the queue exceeds the size of the buffers in the router. RED probabilistically drops datagrams early when the queue exceeds a pre-configured portion of the buffer, until a pre-determined max, when it becomes tail drop. WRED requires a weight on the average queue size to act upon when the traffic is about to exceed the pre-configured size, so that short bursts will not trigger random drops.
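The three policies differ mainly in how drop probability grows with queue length. A minimal sketch (the thresholds and maximum probability are arbitrary illustration values, not defaults from any particular router; real RED also acts on an exponentially weighted average queue size rather than the instantaneous length):

```python
def tail_drop(queue_len, buffer_size):
    # Drop everything once the buffer is full, nothing before that.
    return 1.0 if queue_len >= buffer_size else 0.0

def red(queue_len, min_th, max_th, max_p):
    # Random Early Detection: drop probability rises linearly between
    # the two thresholds, then the policy degenerates into tail drop.
    if queue_len < min_th:
        return 0.0
    if queue_len >= max_th:
        return 1.0
    return max_p * (queue_len - min_th) / (max_th - min_th)

for q in (10, 40, 70, 100):
    print(q, tail_drop(q, 100), round(red(q, 20, 80, 0.1), 3))
```

WRED follows the same curve but, broadly speaking, applies different weights or thresholds per traffic class, so lower-priority packets are dropped earlier.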
Another function a router performs is to decide which packet should be processed first when multiple queues exist. This is managed through QoS, which is critical when Voice over IP is deployed, so as not to introduce excessive latency.
Yet another function a router performs is called policy-based routing where special rules are constructed to override the rules derived from the routing table when a packet forwarding decision is made.
Router functions may be performed through the same internal paths that the packets travel inside the router. Some of the functions may be performed through an application-specific integrated circuit (ASIC) to avoid overhead of scheduling CPU time to process the packets. Others may have to be performed through the CPU as these packets need special attention that cannot be handled by an ASIC.
- Router is pronounced /ˈruːtər/ in British English and is typically pronounced /ˈraʊtər/ in American and Australian English.
- As opposed to a network switch, which connects data lines from one single network
- This particular idea had been previously pioneered in the CYCLADES network.
- The forwarding action can be collected into the statistical data, if so configured.
- A router can serve as a DHCP client or as a DHCP server.
- "router". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005. (Subscription or UK public library membership required.)
- "Overview Of Key Routing Protocol Concepts: Architectures, Protocol Types, Algorithms and Metrics". Tcpipguide.com. Archived from the original on 20 December 2010. Retrieved 15 January 2011.
- "Cisco Networking Academy's Introduction to Routing Dynamically". Cisco. Archived from the original on October 27, 2015. Retrieved August 1, 2015.
- H. Khosravi & T. Anderson (November 2003). Requirements for Separation of IP Control and Forwarding. doi:10.17487/RFC3654. RFC 3654. https://tools.ietf.org/html/rfc3654.
- "Setting up Netflow on Cisco Routers". MY-Technet.com, date unknown. Archived from the original on 14 July 2011. Retrieved 15 January 2011.
- "Windows Home Server: Router Setup". Microsoft Technet 14 Aug 2010. Archived from the original on 22 December 2010. Retrieved 15 January 2011.
- Oppenheimer, Pr (2004). Top-Down Network Design. Indianapolis: Cisco Press. ISBN 1-58705-152-4.
- "Windows Small Business Server 2008: Router Setup". Microsoft Technet Nov 2010. Archived from the original on 30 December 2010. Retrieved 15 January 2011.
- "Core Network Planning". Microsoft Technet May 28, 2009. Archived from the original on 2 October 2010. Retrieved 15 January 2011.
- See "Network Address Translation (NAT) FAQ". Archived from the original on 2014-06-06.
- Cf. "RFC 3022 – Traditional IP Network Address Translator (Traditional NAT)". Archived from the original on 2014-04-16.
- But see "Security Considerations Of NAT" (PDF). University of Michigan. Archived from the original (PDF) on October 18, 2014., which argues that NAT is not a security feature.
- "Global Internet Experts Reveal Plan for More Secure, Reliable Wi-Fi Routers - and Internet". Archived from the original on 2015-10-20.
- Tamara Dean (2009). Network+ Guide to Networks. Cengage Learning. p. 272. ISBN 9781423902454.
- H. Berkowitz; et al. (June 2005), Terminology for Benchmarking BGP Device Convergence in the Control Plane, RFC 4098
- "M160 Internet Backbone Router" (PDF). Juniper Networks Date unknown. Archived (PDF) from the original on 20 September 2011. Retrieved 15 January 2011.
- "Virtual Backbone Routers" (PDF). IronBridge Networks, Inc. September, 2000. Archived (PDF) from the original on 16 July 2011. Retrieved 15 January 2011.
- BGP/MPLS VPNs,RFC 2547, E. Rosen and Y. Rekhter, April 2004
- Roberts, Dr. Lawrence G. (May 1995). "The ARPANET & Computer Networks". Archived from the original on 24 March 2016. Retrieved 13 April 2016.
Then in June 1966, Davies wrote a second internal paper, "Proposal for a Digital Communication Network" In which he coined the word packet,- a small sub part of the message the user wants to send, and also introduced the concept of an "Interface computer" to sit between the user equipment and the packet network.
- Davies, Shanks, Heart, Barker, Despres, Detwiler and Riml, "Report of Subgroup 1 on Communication System", INWG Note No. 1.
- Vinton Cerf, Robert Kahn, "A Protocol for Packet Network Intercommunication", IEEE Transactions on Communications, Volume 22, Issue 5, May 1974, pp. 637 - 648.
- David Boggs, John Shoch, Edward Taft, Robert Metcalfe, "Pup: An Internetwork Architecture" Archived 2008-09-11 at the Wayback Machine., IEEE Transactions on Communications, Volume 28, Issue 4, April 1980, pp. 612- 624.
- "Ms. Ginny Strazisar". IT History Society. Archived from the original on 1 December 2017. Retrieved 21 November 2017.
- Craig Partridge, S. Blumenthal, "Data networking at BBN"; IEEE Annals of the History of Computing, Volume 28, Issue 1; January–March 2006.
- Valley of the Nerds: Who Really Invented the Multiprotocol Router, and Why Should We Care? Archived 2016-03-03 at the Wayback Machine., Public Broadcasting Service, Accessed August 11, 2007.
- Router Man Archived 2013-06-05 at the Wayback Machine., NetworkWorld, Accessed June 22, 2007.
- David D. Clark, "M.I.T. Campus Network Implementation", CCNG-2, Campus Computer Network Group, M.I.T., Cambridge, 1982; pp. 26.
- Pete Carey, "A Start-Up's True Tale: Often-told story of Cisco's launch leaves out the drama, intrigue", San Jose Mercury News, December 1, 2001.
- Roberts, Lawrence (22 July 2003). "The Next Generation of IP - Flow Routing". Archived from the original on 4 April 2015. Retrieved 22 February 2015.
- David Davis (April 19, 2007). "Cisco administration 101: What you need to know about default routes". Archived from the original on December 19, 2017.
- Diane Teare (March 2013). Implementing Cisco IP Routing (ROUTE): Foundation Learning Guide. Cisco Press. pp. 330–334.
- Diane Teare (March 2013). "Chapter 5: Implementing Path Control". Implementing Cisco IP Routing (ROUTE): Foundation Learning Guide. Cisco Press. pp. 330–334. | https://en.m.wikipedia.org/wiki/Router_(computing) | 18 |
13 | |Fig. 1: Discovery Space Shuttle Liftoff. (Source: Wikimedia Commons)|
Space travel has always been fascinating for both scientists and engineers alike. Not only does space travel require many calculations with the utmost precision, but an enormous amount of energy is also required. This energy all comes from chemical combustion, allowing the rocket engines to provide propulsion for the space shuttle. These space shuttle launches can often be spectacular to watch, as seen in Fig. 1.
Rockets work using Newton's laws of motion. Chemical reactions in the rocket cause it to expel propellant, and because of the mass being pushed backwards, the rocket is propelled forwards. Rockets can accelerate at up to 25 times the acceleration of gravity (25 g = 245 m/s²). However, space shuttle rockets accelerate at between 1.2 and 6 times the acceleration of gravity. Rockets can generate a thrust-to-weight ratio of 75:1. By comparison, turbojet engines, used mostly in planes, have a thrust-to-weight ratio of 5:1. While a rocket achieves a great amount of thrust for its weight, most of that weight is fuel. Each solid rocket booster (SRB) weighs about 1.3 million pounds at launch, with the fuel weighing 1.1 million pounds, leaving the booster itself weighing about 192 thousand pounds. To get a sense of how much thrust these rockets are actually producing, a Boeing 737 engine generates about 20 thousand lbs of thrust, while each SRB from a space shuttle launch generates about 3.3 million lbs of thrust. [2,3] These rockets provide most of the thrust during the launch, and once the shuttle reaches 150,000 feet, they detach and return to Earth to be reused.
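The thrust-to-weight comparisons quoted above follow directly from those numbers. A quick arithmetic check (all values are the ones given in this paragraph, in pounds):

```python
# Thrust-to-weight arithmetic for one solid rocket booster (SRB).
srb_thrust = 3.3e6   # lbs of thrust at launch
srb_loaded = 1.3e6   # lbs total weight at launch (1.1e6 lbs is fuel)
srb_empty  = 192e3   # lbs for the booster casing alone

print(srb_thrust / srb_loaded)  # ~2.5 : thrust-to-weight fully fueled
print(srb_thrust / srb_empty)   # ~17  : against the empty casing
print(srb_thrust / 20e3)        # ~165 : SRB thrust vs one 737 engine
```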
|Fig. 2: External Tank of Space Shuttle. (Source: Wikimedia Commons)|
Along with the rockets serving as propulsion, the space shuttle also has its own main engines, abbreviated SSME. Each shuttle has three main engines, which combined provide another 1.5 million lbs of thrust. [2,4] Along with providing thrust, the engines can also be moved to control the pitch, yaw and roll of the space shuttle. The fuel for these three engines is located in the external fuel tank, which is pictured in Fig. 2. At 154 feet long, the external tank is the largest component of the space shuttle, also serving as the structural backbone for the shuttle itself and the rockets used for launch. These engines are throttled down and back up at different intervals throughout the launch, before being completely shut off just before the shuttle enters orbit, traveling at 17,000 mph. Once the engines have been shut off, the external tank detaches and falls back to Earth.
Space shuttles are magnificent feats in engineering, unthinkably heavy and using massive amounts of fuel, going insanely fast. The sheer power is amazing and quite a sight to behold.
© Brandon Wu. The author warrants that the work is the author's own and that Stanford University provided no input other than typesetting and referencing guidelines. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.
G. P. Sutton and O. Biblarz, Rocket Propulsion Elements, 9th Ed. (Wiley, 2016).
"The Space Shuttle's Return to Flight," U.S. National Aeronautics and Space Administration, July 2005.
"Type Certificate Data Sheet A16We," Boeing Company, 1 Sep 10.
N. Nguyen, "Space Propulsion Technology and Energy Expenditures," Physics 240, Stanford University, Fall 2011 | http://large.stanford.edu/courses/2017/ph240/wu1/ | 18 |
24 | Check out these 10 great ideas for critical thinking activities and see how you can use them with your own modern learners. Why not take advantage of time you would normally waste by practicing your critical thinking? This may involve direct action or reflection on what you are reading. Worksheet library: critical thinking, grades 3-5: worksheets you can use with your students to build a wide variety of critical thinking skills. Develop a reading strategy and boost your learning: critical reading and critical thinking are the very foundations of true learning and personal development. Students should have an opportunity to engage in creative thinking activities, since creative thinking skills involve active and critical reading. Below are some activities to encourage critical thinking in the classroom and help teachers incorporate curiosity into what we're reading.
How phonics instruction teaches critical thinking skills: phonics critics have it backwards. A common misconception about phonics is that it consists entirely of rote learning. The standards-based critical thinking activities of Reading Detective® develop analysis skills; many reading comprehension materials involve reading pages and pages.
Critical thinking: the awakening of the intellect to the study of itself. Critical thinking is a rich concept that has been developing throughout the past 2500 years. Critical reading activities develop critical thinking; the subject of the first article was the difficulty involved in two critical reading activities. Strategies that foster critical reading: this guide offers strategies faculty members can use to foster careful reading and critical thinking activities.
Fun critical thinking activities for students in any subject, by Monica Dorcz. There are other skills involved in critical thinking; these activities will help with the thinking and reading that critical thinking involves.
Find and save ideas about critical thinking activities on Pinterest to improve critical thinking skills and reading; all types of students get involved. Games and activities for developing critical thinking skills, from the critical thinking workbook. Nonfiction activities, library activities, reading activities, reading strategies, reading comprehension: asking good questions leads to creative and critical thinking. | http://bfcourseworkqpvx.dosshier.me/reading-activities-that-involve-critical-thinking.html | 18
49 | Expressions and equations (7th grade) problems involving numerical and algebraic expressions and equations (7th grade) write an inequality from a word problem. Worked-out word problems on linear equations with solutions explained step-by-step in different types of examples there are several problems which involve relations. Writing algebraic equations is presented by math goodies word problems writing algebraic equations writing algebraic expressions. Two-step equation word problems date_____ period____ 1) 331 students went on a field trip six buses were two-step word problems author: mike created date. Linear inequalities word problems worksheet new 79 best equations quiz worksheet distance between two points study com word problems using slope intercept form.
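Since every resource here is about turning a word problem into an equation, one worked instance may help. A minimal sketch in Python; the gym scenario and all numbers are invented for illustration:

```python
# Hypothetical problem: a gym charges a $25 sign-up fee plus $10 a month.
# Total cost in slope-intercept form: y = 10x + 25.
m, b = 10, 25

def cost(months):
    return m * months + b

print(cost(6))        # 85 -> six months costs $85

# Working backward: after how many months does the total reach $145?
# 145 = 10x + 25  ->  x = (145 - 25) / 10
print((145 - b) / m)  # 12.0
```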
6th and 7th grade students do not like to write equations for word problems; they just want to scratch out some calculations and give me their answer. Search for jobs related to writing linear equations from word problems, or hire on the world's largest freelancing marketplace.
This linear functions worksheet will produce problems for practicing writing linear equations from graphed lines. Students are given word problems and asked to write a pair of simultaneous linear equations that could be used to solve them.
Word problems involving quadratic equations, from a writing-linear-equations-from-word-problems worksheet PDF (image source: pinterest.com). Lots of word problems can be solved with systems of linear equations; however, before we bother with those, let's look at some word problems that describe a single relationship.
120 resources for systems of linear equations word problems, across 12 grades and 1 subject: search and discovery of digital educational resources from all over the web.
Algebra IA unit 5, worksheet 4: writing equations of lines, part 4, writing equations of lines from word problems. Worksheet equation word problems: writing equations from word problems (YouTube); Algebra I help: solving systems of linear equations word problems, part IV (YouTube). Linear expressions and word problems: matching linear expressions to word problems grants the chance to apply one-step equations to real-world scenarios. Slope intercept form word problems worksheet with answers; slope intercept form puzzle; writing linear equations worksheet.
Solve simultaneous equations associated with mixture word problems, coin word problems, investment word problems, and ticket word problems with themathpage. Recommended videos and descriptions of word problems worksheets: these Algebra 1 equations worksheets will produce one-step equation word problems. Word problems involving linear and quadratic equations; one-on-one writing assistance from a professional writer; word problems involving linear equations.
Quiz theme/title: writing word problems. Description/instructions: linear equations can be written in words, and word problems require reading and analyzing. Slope-intercept form word problems: which of the following linear models predicts the population, y, of Florida? Writing equations for word problems right now, though, will be temporarily a bit frustrating, because you have to go through the slower step of writing. Linear function word problems: graphs, intervals of increase and decrease, and solved problems with solutions.
Writing linear equations for word problems task card activity: word problem and equation matching. A super common question is the difference between expressions and equations: word problems, simultaneous linear equations, equations or inequalities. One-step/two-step word problems: for each one-step word problem, write a one-step algebraic equation using the given variable and solve using an appropriate method. Algebra That Functions: writing linear equations for word problems related to teens. Do you want your middle school students engaged in writing linear equations? | http://butermpaperrmwm.shapeyourworld.info/writing-linear-equations-word-problems.html | 18
17 | Concepts of Biology: Science and Technology
When a cell divides, it is important that each daughter cell receives an identical copy of the DNA. This is accomplished by the process of DNA replication. The replication of DNA occurs during the synthesis phase, or S phase, of the cell cycle, before the cell enters mitosis or meiosis.
The elucidation of the structure of the double helix provided a hint as to how DNA is copied. Recall that adenine nucleotides pair with thymine nucleotides, and cytosine with guanine. This means that the two strands are complementary to each other. For example, a strand of DNA with a nucleotide sequence of AGTCATGA will have a complementary strand with the sequence TCAGTACT ([link]).
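The base-pairing rule in this example is simple enough to state as a one-line mapping. A minimal sketch in Python (it applies the rule position by position, exactly as the example does, without reversing the strand):

```python
# Complementary base pairing: A <-> T and C <-> G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    return "".join(PAIR[base] for base in strand)

assert complement("AGTCATGA") == "TCAGTACT"  # matches the example above
```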
Because of the complementarity of the two strands, having one strand means that it is possible to recreate the other strand. This model for replication suggests that the two strands of the double helix separate during replication, and each strand serves as a template from which the new complementary strand is copied ([link]).
During DNA replication, each of the two strands that make up the double helix serves as a template from which new strands are copied. The new strand will be complementary to the parental or “old” strand. Each new double strand consists of one parental strand and one new daughter strand. This is known as semiconservative replication. When two DNA copies are formed, they have an identical sequence of nucleotide bases and are divided equally into two daughter cells.
DNA Replication in Eukaryotes
Because eukaryotic genomes are very complex, DNA replication is a very complicated process that involves several enzymes and other proteins. It occurs in three main stages: initiation, elongation, and termination.
Recall that eukaryotic DNA is bound to proteins known as histones to form structures called nucleosomes. During initiation, the DNA is made accessible to the proteins and enzymes involved in the replication process. How does the replication machinery know where on the DNA double helix to begin? It turns out that there are specific nucleotide sequences called origins of replication at which replication begins. Certain proteins bind to the origin of replication while an enzyme called helicase unwinds and opens up the DNA helix. As the DNA opens up, Y-shaped structures called replication forks are formed ([link]). Two replication forks are formed at the origin of replication, and these get extended in both directions as replication proceeds. There are multiple origins of replication on the eukaryotic chromosome, such that replication can occur simultaneously from several places in the genome.
During elongation, an enzyme called DNA polymerase adds DNA nucleotides to the 3' end of the template. Because DNA polymerase can only add new nucleotides at the end of a backbone, a primer sequence, which provides this starting point, is added with complementary RNA nucleotides. This primer is removed later, and the nucleotides are replaced with DNA nucleotides. One strand, which is complementary to the parental DNA strand, is synthesized continuously toward the replication fork so the polymerase can add nucleotides in this direction. This continuously synthesized strand is known as the leading strand. Because DNA polymerase can only synthesize DNA in a 5' to 3' direction, the other new strand is put together in short pieces called Okazaki fragments. The Okazaki fragments each require a primer made of RNA to start the synthesis. The strand with the Okazaki fragments is known as the lagging strand. As synthesis proceeds, an enzyme removes the RNA primer, which is then replaced with DNA nucleotides, and the gaps between fragments are sealed by an enzyme called DNA ligase.
The process of DNA replication can be summarized as follows:
- DNA unwinds at the origin of replication.
- New bases are added to the complementary parental strands. One new strand is made continuously, while the other strand is made in pieces.
- Primers are removed, new DNA nucleotides are put in place of the primers and the backbone is sealed by DNA ligase.
You isolate a cell strain in which the joining together of Okazaki fragments is impaired and suspect that a mutation has occurred in an enzyme found at the replication fork. Which enzyme is most likely to be mutated?
Because eukaryotic chromosomes are linear, DNA replication comes to the end of a line in eukaryotic chromosomes. As you have learned, the DNA polymerase enzyme can add nucleotides in only one direction. In the leading strand, synthesis continues until the end of the chromosome is reached; however, on the lagging strand there is no place for a primer to be made for the DNA fragment to be copied at the end of the chromosome. This presents a problem for the cell because the ends remain unpaired, and over time these ends get progressively shorter as cells continue to divide. The ends of the linear chromosomes are known as telomeres, which have repetitive sequences that do not code for a particular gene. As a consequence, it is telomeres that are shortened with each round of DNA replication instead of genes. For example, in humans, a six base-pair sequence, TTAGGG, is repeated 100 to 1000 times. The discovery of the enzyme telomerase ([link]) helped in the understanding of how chromosome ends are maintained. The telomerase attaches to the end of the chromosome, and complementary bases to the RNA template are added on the end of the DNA strand. Once the lagging strand template is sufficiently elongated, DNA polymerase can now add nucleotides that are complementary to the ends of the chromosomes. Thus, the ends of the chromosomes are replicated.
Telomerase is typically found to be active in germ cells, adult stem cells, and some cancer cells. For her discovery of telomerase and its action, Elizabeth Blackburn ([link]) received the Nobel Prize in Physiology or Medicine in 2009.
Telomerase is not active in adult somatic cells. Adult somatic cells that undergo cell division continue to have their telomeres shortened. This essentially means that telomere shortening is associated with aging. In 2010, scientists found that telomerase can reverse some age-related conditions in mice, and this may have potential in regenerative medicine.
DNA Replication in Prokaryotes
Recall that the prokaryotic chromosome is a circular molecule with a less extensive coiling structure than eukaryotic chromosomes. The eukaryotic chromosome is linear and highly coiled around proteins. While there are many similarities in the DNA replication process, these structural differences necessitate some differences in the DNA replication process in these two life forms.
DNA replication has been extremely well-studied in prokaryotes, primarily because of the small size of the genome and large number of variants available. Escherichia coli has 4.6 million base pairs in a single circular chromosome, and all of it gets replicated in approximately 42 minutes, starting from a single origin of replication and proceeding around the chromosome in both directions. This means that approximately 1000 nucleotides are added per second. The process is much more rapid than in eukaryotes. [link] summarizes the differences between prokaryotic and eukaryotic replications.
Differences between Prokaryotic and Eukaryotic Replications

| Property | Prokaryotes | Eukaryotes |
| --- | --- | --- |
| Origin of replication | Single | Multiple |
| Rate of replication | 1000 nucleotides/s | 50 to 100 nucleotides/s |
DNA polymerase can make mistakes while adding nucleotides. It edits the DNA by proofreading every newly added base. Incorrect bases are removed and replaced by the correct base, and then polymerization continues ([link]a). Most mistakes are corrected during replication, although when this does not happen, the mismatch repair mechanism is employed. Mismatch repair enzymes recognize the wrongly incorporated base and excise it from the DNA, replacing it with the correct base ([link]b). In yet another type of repair, nucleotide excision repair, the DNA double strand is unwound and separated, the incorrect bases are removed along with a few bases on the 5' and 3' end, and these are replaced by copying the template with the help of DNA polymerase ([link]c). Nucleotide excision repair is particularly important in correcting thymine dimers, which are primarily caused by ultraviolet light. In a thymine dimer, two thymine nucleotides adjacent to each other on one strand are covalently bonded to each other rather than their complementary bases. If the dimer is not removed and repaired it will lead to a mutation. Individuals with flaws in their nucleotide excision repair genes show extreme sensitivity to sunlight and develop skin cancers early in life.
Most mistakes are corrected; if they are not, they may result in a mutation—defined as a permanent change in the DNA sequence. Mutations in repair genes may lead to serious consequences like cancer.
DNA replicates by a semi-conservative method in which each of the two parental DNA strands act as a template for new DNA to be synthesized. After replication, each DNA has one parental or “old” strand, and one daughter or “new” strand.
Replication in eukaryotes starts at multiple origins of replication, while replication in prokaryotes starts from a single origin of replication. The DNA is opened with enzymes, resulting in the formation of the replication fork. Primase synthesizes an RNA primer to initiate synthesis by DNA polymerase, which can add nucleotides in only one direction. One strand is synthesized continuously in the direction of the replication fork; this is called the leading strand. The other strand is synthesized in a direction away from the replication fork, in short stretches of DNA known as Okazaki fragments. This strand is known as the lagging strand. Once replication is completed, the RNA primers are replaced by DNA nucleotides and the DNA is sealed with DNA ligase.
The ends of eukaryotic chromosomes pose a problem, as polymerase is unable to extend them without a primer. Telomerase, an enzyme with an inbuilt RNA template, extends the ends by copying the RNA template and extending one end of the chromosome. DNA polymerase can then extend the DNA using the primer. In this way, the ends of the chromosomes are protected. Cells have mechanisms for repairing DNA when it becomes damaged or errors are made in replication. These mechanisms include mismatch repair to replace nucleotides that are paired with a non-complementary base and nucleotide excision repair, which removes bases that are damaged such as thymine dimers.
DNA replicates by which of the following models?
- conservative
- semiconservative
- dispersive
- none of the above
The initial mechanism for repairing nucleotide errors in DNA is ________.
- mismatch repair
- DNA polymerase proofreading
- nucleotide excision repair
- thymine dimers
How do the linear chromosomes in eukaryotes ensure that their ends are replicated completely?
Telomerase has an inbuilt RNA template that extends the 3' end, so a primer is synthesized and extended. Thus, the ends are protected.
| http://voer.edu.vn/c/dna-replication/a64457a4/9fc9504c | 18
12 | Skip to 0 minutes and 0 seconds: Teaching with Variation has been used in mathematics classrooms for a long time. Based on previous experience and his own experiments in Shanghai mathematics classrooms, Professor Gu, who is one of the most well-known mathematics educators in China, systematically analyzed and synthesized the concept of Teaching with Variation. You will hear what he has to say later in this course about Teaching with Variation. And according to Professor Gu, there are two forms of variation: conceptual variation and procedural variation. Conceptual variation is about understanding concepts from multiple perspectives. As Professor Gu said, it is to illustrate the essential features by demonstrating different forms of visual materials and instances, or to highlight the essence of a concept by varying the non-essential features.
Skip to 1 minute and 32 seconds: So here is one example to help students understand the concept of the height, or altitude, of a triangle from the top vertex. So let's look at the first triangle. Normally we will use this kind of position, so it is easy for students to understand the concept of the height. So this is the height. Then in this triangle, the height and one side of the triangle are the same. So that is the height, so that is a bit more challenging. And the following three are variations to understand the concept. So the third one is this one. So you can see some students will make mistakes like this.
Skip to 2 minutes and 40 seconds: They draw a horizontal line, and then they draw a vertical line from the top vertex of the triangle to this horizontal line. Of course this is incorrect. The correct one is this. Okay, the next one is another variation. The situation for students to learn the concept. So you should extend this side beyond this point. Then you draw a height, that is a perpendicular line from the top vertex of this triangle to this horizontal line. So that might not be so difficult for students. So the most difficult one is this one. So many students will likely draw a horizontal line passing through this vertex of the triangle.
Skip to 4 minutes and 0 seconds: Then they will draw a line from the top vertex here, perpendicular to the horizontal line. So that is the height they might get, but this is incorrect. So the correct one should be this one. You extend this side of the triangle, go beyond this point. Then you draw a perpendicular line to this side, the extension of the side, from the top vertex of the triangle, so that is the height. The next is procedural variation. It means to progressively unfold mathematical activities. The variations for constructing a particular experience system are derived from three dimensions of problem solving.
Skip to 5 minutes and 9 seconds: First, varying a problem, which means varying the original one as scaffolding, or extending the original problem by varying the conditions, changing the results, and generalizing. The second is multiple methods of solving a problem, by varying the different processes of solving a problem and associating different methods of solving a problem. So you vary the process of solving the problem. And the third one is multiple applications of a method, by applying the same method to a group of similar problems. So in Chinese we say you use one method to solve three problems or even more. So here is an example, and you will think it's a very simple example. So we ask students to calculate 2 plus 8 equals how much.
Skip to 6 minutes and 23 seconds: Of course students will know it's 10. In terms of variation theory, we can vary the problem into different scenarios. So variation one is 2 plus what equals 10. And variation two is what plus 8 equals 10. Variation three is more difficult: what number plus what number equals 10? So students are asked to find two numbers whose sum is 10. And variation four is 10 equals 2 plus what. So I'm sure you can find more ways to vary the problems. According to researchers in this area, Teaching with Variation is a main feature of the Chinese mathematics classroom.
Skip to 7 minutes and 37 seconds: And by adopting Teaching with Variation, even with large classes like in Shanghai and in many other Asian classrooms, students can still actively involve themselves in the process of learning. And they can achieve excellent results.
Presenting teaching with Variation
This video explains what teaching with Variation is and demonstrates the principle with some concrete examples.
In China, teaching with Variation has been used in classrooms for many years.
Professor Gu is one of the most well-known mathematics educators in China. Based on previous experience and his own experiments in Shanghai maths classrooms, he systematically analysed and synthesised the concepts of teaching with Variation (1981). According to Gu, there are two forms of Variation: conceptual variation and procedural variation.
According to researchers (Gu, Huang, & Marton, 2004), teaching with Variation characterises mathematics teaching in China. By adopting teaching with Variation, even with large classes, students can actively involve themselves in the process of learning and achieve excellent results.
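As a rough illustration of the "procedural variation" example from the video (illustrative only, not part of Gu's work), a few lines of Python can enumerate the variation-three problems around 2 + 8 = 10:

```python
# Minimal sketch: enumerate "variation three" problems,
# i.e. all whole-number pairs whose sum is the target 10.
target = 10
pairs = [(a, target - a) for a in range(target + 1)]
print(pairs[:4])   # [(0, 10), (1, 9), (2, 8), (3, 7)]
print(len(pairs))  # 11 distinct problems from one original exercise
```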
Gu, L. (1981). The visual effect and psychological implication of transformation of figures in geometry. Paper presented at annual conference of Shanghai Mathematics Association. Shanghai, China.
Gu, L., Huang, R., & Marton, F. (2004). Teaching with variation: A Chinese way of promoting effective mathematics learning. In Fan, L., Wong, N-Y., Cai, J., & Li, S. (Eds.) (2004). How Chinese learn mathematics: Perspectives from insiders. Singapore: World Scientific.
© 2018 University of Southampton | https://www.futurelearn.com/courses/asian-maths-teaching-methods/0/steps/38719 | 18 |
16 | Each time a question is asked a student can come to the board and click on the numbers that have been eliminated. Main Activity 1 Students can complete this worksheet on ordering decimals, which requires them to place the numbers in a grid with the smallest at the top.
Here you will find a wide range of free 4th grade Math Worksheets, which will help your child to learn about decimal place value. Word Problems - Solve the money word problems.
They show a good understanding of place value in relation to decimals, and can add and subtract decimals with up to 2 decimal places.
Using these sheets will help your child build these skills. The materials found on this site are available for you to print and use with your child or the students in your class.
Counting Coins Worksheet 3 - Count the pennies, nickels, and dimes in each piggy bank and circle the correct total. The Fourth Grade Grocer - Subtraction, multiplication, and division word problems. Ask pupils to write the numbers in order in their exercise books.
Decimal Place Value Worksheets Read and Write Decimals to 2dp Here you will find a selection of 4th Grade Math sheets designed to help your child understand place value involving tenths and hundredths.
How to Print or Save these sheets Need help printing or saving? Add a variety of coins and circle the correct total. You could play this game regularly and use the timer function to encourage the class to ask better questions and beat their previous time to find the number you were thinking of.
Sharpen Your Skills Worksheet 7 - Students will multiply decimals by 10, 100 and 1,000. Adding Coins Worksheet 5 - Students will add the pennies and nickels shown on each piggy bank. You can then mark this, or a student can display the answer on the board. They could ask questions like, "Is your number greater than …?" Buying School Supplies - Students will add three numbers to find the total cost of each set of school supplies.
Money Matters - Can your students solve these word problems? They are able to add columns of numbers together accurately, and subtract numbers proficiently.
During Fourth Grade, most children learn to round off numbers to the nearest 10, 100, 1,000, or even million. Place Value Worksheets - Tenths. They are able to solve multi-step problems involving whole numbers, fractions and decimals.
Division Word Problems - Easy money division word problems with no remainders. Nothing from this site may be stored on Google Drive or any other online file storage system.
Plenary Display the 2nd tool from this pack of ordering decimals tools on the board. Afterwards, ask students the key question above. Percent, decimal, and money worksheets for home and classroom use. They can use their multiplication table facts to answer related questions.
Pot of Gold Addition Worksheet 2 - Regrouping required when adding the money and writing the total on each pot of gold. Sharpen Your Skills Worksheet 5 - Students will write fractions as decimals and write the decimal to tell the shaded part of an object.
Children will enjoy completing these Math games and Free 4th Grade Math worksheets whilst learning at the same time.
In the UK, 4th Grade is equivalent to Year 5. Compare two decimals to thousandths based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons; round decimals to any place. Decimals Worksheets and Resources Free worksheets, interactivities and other resources to support teaching and learning about decimals.
Read and Write Decimals: Jessica can read 0.32 as 3 tenths and 2 hundredths. The least place value in the number is thousandths. You read the decimal to tell how many fractional parts of a second? How can you write two and thirty-five hundredths as a decimal? Ordering Decimals Student Probe: Which decimal is larger, ___ or ___?
Answer: Read, write, and compare decimals to thousandths. (b) Compare two decimals to thousandths. Materials: worksheets, markers, Base Ten Blocks (optional), student worksheets.
Place Value, Decimals & Percentages: Teacher's Manual, A Guide to Teaching and Learning. Read, write and order numerals; estimate the number of objects in a set; compare decimals; compare and order fractions, percentages and decimals.
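For teachers preparing answer keys, a short Python sketch (illustrative only; not part of any of the worksheets listed here) shows how comparing and ordering decimals to thousandths can be checked mechanically:

```python
# Sketch: ordering decimals to thousandths and checking comparisons.
values = [0.32, 0.302, 0.4, 0.320]
print(sorted(values))   # [0.302, 0.32, 0.32, 0.4]
print(0.32 == 0.320)    # True -- trailing zeros do not change the value
print(0.302 < 0.32)     # True -- compare digit by digit from the left
```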
Read and write decimals to the thousandths using expanded form (with fractions of 1/10, 1/100, and 1/1000 to denote decimal places). Compare two decimals to the thousandths. Download | http://vulomuqebonijic.mint-body.com/read-write-and-compare-decimals-to-thousandths-worksheets-for-2nd-84788478.html | 18
49 | The Hypertext Transfer Protocol (HTTP), the simple, constrained and ultimately boring application layer protocol, forms the foundation of the World Wide Web. In essence, HTTP enables the retrieval of network-connected resources available across the cyber world and has evolved through the decades to deliver a fast, secure and rich medium for digital communication.
This Guide Highlights The Following Key Aspects of HTTP/2:
HTTP was originally proposed by Tim Berners-Lee, the pioneer of the World Wide Web who designed the application protocol with simplicity in mind to perform high-level data communication functions between Web-servers and clients.
The first documented version of HTTP was released in 1991 as HTTP0.9, which later led to the official introduction and recognition of HTTP1.0 in 1996. HTTP1.1 followed in 1997 and has since received little iterative improvements.
In February 2015, the Internet Engineering Task Force (IETF) HTTP Working Group revised HTTP and developed the second major version of the application protocol in the form of HTTP/2. In May 2015, the HTTP/2 implementation specification was officially standardized in response to Google’s HTTP-compatible SPDY protocol. The HTTP/2 vs SPDY argument continues throughout the guide.
What is a Protocol?
The HTTP/2 vs HTTP1 debate must proceed with a short primer on the term Protocol frequently used in this resource. A protocol is a set of rules that govern the data communication mechanisms between clients (for example web browsers used by internet users to request information) and servers (the machines containing the requested information).
Protocols usually consist of three main parts: Header, Payload and Footer. The Header placed before the Payload contains information such as source and destination addresses as well as other details (such as size and type) regarding the Payload. Payload is the actual information transmitted using the protocol. The Footer follows the Payload and works as a control field to route client-server requests to the intended recipients along with the Header to ensure the Payload data is transmitted free of errors.
The system is similar to the postal mail service. The letter (Payload) is inserted into an envelope (Header) with the destination address written on it and sealed with glue and a postage stamp (Footer) before it is dispatched. Except that transmitting digital information in the form of 0s and 1s isn't as simple, and it necessitates a new dimension of innovation in response to the accelerating technological advancements that emerged with the explosive growth of internet usage.
The HTTP protocol originally comprised two basic commands: GET, to retrieve information from the server, and POST, to deliver the requested information to the client. This simple and apparently boring set of commands to GET data and POST a response essentially formed the foundation for constructing other network protocols as well. HTTP/2 is yet another move to improve internet user experience and effectiveness, and HTTP/2 implementation is a way to enhance online presence.
Since its inception in early 1990s, HTTP has seen only a few major overhauls. The most recent version, HTTP1.1 has served the cyber world for over 15 years. Web pages in the current era of dynamic information updates, resource-intensive multimedia content formats and excessive inclination toward web performance have placed old protocol technologies in the legacy category. These trends necessitate significant HTTP/2 changes to improve the internet experience.
The primary goal with research and development for a new version of HTTP centers around three qualities rarely associated with a single network protocol without necessitating additional networking technologies – simplicity, high performance and robustness. These goals are achieved by introducing capabilities that reduce latency in processing browser requests with techniques such as multiplexing, compression, request prioritization and server push.
Mechanisms such as flow control, upgrade and error handling work as enhancements to the HTTP protocol for developers to ensure high performance and resilience of web-based applications.
The collective system allows servers to respond efficiently with more content than originally requested by clients, eliminating user intervention to continuously request for information until the website is fully loaded onto the web browser. For instance, the Server Push capability with HTTP/2 allows servers to respond with a page’s full contents other than the information already available in the browser cache. Efficient compression of HTTP header files minimizes protocol overhead to improve performance with each browser request and server response.
HTTP/2 changes are designed to maintain interoperability and compatibility with HTTP1.1. HTTP/2 advantages are expected to increase over time based on real-world experiments and its ability to address performance related issues in real-world comparison with HTTP1.1 will greatly impact its evolution over the long term.
“…we are not replacing all of HTTP – the methods, status codes, and most of the headers you use today will be the same. Instead, we’re re-defining how it gets used “on the wire” so it’s more efficient, and so that it is more gentle to the internet itself…” Mark Nottingham, Chair the IETF HTTP Working Group and member of the W3C TAG. Source
It is important to note that the new HTTP version comes as an extension to its predecessor and is not expected to replace HTTP1.1 anytime soon. HTTP/2 implementation will not enable automatic support for all encryption types available with HTTP1.1, but definitely opens the door to better alternatives or additional encryption compatibility updates in the near future. However feature comparisons such as HTTP/2 vs HTTP1 and SPDY vs HTTP/2 present only the latest application protocol as the winner in terms of performance, security and reliability alike.
HTTP1.1 was limited to processing only one outstanding request per TCP connection, forcing browsers to use multiple TCP connections to process multiple requests simultaneously.
However, using too many TCP connections in parallel leads to TCP congestion that causes unfair monopolization of network resources. Web browsers using multiple connections to process additional requests occupy a greater share of the available network resources, hence downgrading network performance for other users.
Issuing multiple requests from the browser also causes data duplication on data transmission wires, which in turn requires additional protocols to extract the desired information free of errors at the end-nodes.
The internet industry was naturally forced to hack these constraints with practices such as domain sharding, concatenation, data inlining and spriting, among others. Ineffective use of the underlying TCP connections with HTTP1.1 also leads to poor resource prioritization, causing exponential performance degradation as web applications grow in terms of complexity, functionality and scope.
The web has evolved well beyond the capacity of legacy HTTP-based networking technologies. The core qualities of HTTP1.1 developed over a decade ago have opened the doors to several embarrassing performance and security loopholes.
The Cookie Hack, for instance, allows cybercriminals to reuse a previous working session to compromise account passwords because HTTP1.1 provides no session endpoint-identity facilities. While similar security concerns will continue to haunt HTTP/2, the new application protocol is designed with better security capabilities, such as the improved implementation of new TLS features.
A bi-directional sequence of frames exchanged between the server and client over the HTTP/2 protocol is known as a "stream". Earlier iterations of the HTTP protocol were capable of transmitting only one stream at a time, with some time delay between each stream transmission.
Receiving tons of media content via individual streams sent one by one is both inefficient and resource-consuming. HTTP/2 changes establish a new binary framing layer to address these concerns.
This layer allows client and server to break the HTTP payload into small, independent and manageable interleaved sequences of frames. This information is then reassembled at the other end.
Binary frame formats enable the exchange of multiple, concurrently open, independent bi-directional sequences without latency between successive streams. This approach presents an array of benefits of HTTP/2 explained below:
- The parallel multiplexed requests and response do not block each other.
- A single TCP connection is used to ensure effective network resource utilization despite transmitting multiple data streams.
- No need to apply unnecessary optimization hacks – such as image sprites, concatenation and domain sharding, among others – that compromise other areas of network performance.
- Reduced latency, faster web performance, better search engine rankings.
- Reduced OpEx and CapEx in running network and IT resources.
With this capability, data packages from multiple streams are essentially mixed and transmitted over a single TCP connection. These packages are then split at the receiving end and presented as individual data streams. Transmitting multiple parallel requests simultaneously using HTTP version 1.1 or earlier required multiple TCP connections, which inherently bottlenecks overall network performance despite transmitting more data streams at faster rates.
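As a client-side illustration of multiplexing, here is a sketch using the third-party httpx package (installed with pip install "httpx[http2]") and a placeholder URL; several concurrent requests share one HTTP/2 connection:

```python
# Sketch: five concurrent requests multiplexed as separate streams over a
# single TCP connection, via the third-party httpx client.
import asyncio
import httpx

async def main():
    async with httpx.AsyncClient(http2=True) as client:
        tasks = [client.get(f"https://example.com/item/{i}") for i in range(5)]
        responses = await asyncio.gather(*tasks)
        # "HTTP/2" for every response if the server negotiated it via ALPN.
        print({r.http_version for r in responses})

asyncio.run(main())
```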
HTTP/2 Server Push
This capability allows the server to send additional cacheable information to the client that isn’t requested but is anticipated in future requests. For example, if the client requests for the resource X and it is understood that the resource Y is referenced with the requested file, the server can choose to push Y along with X instead of waiting for an appropriate client request.
The client places the pushed resource Y into its cache for future use. This mechanism saves a request-response round trip and reduces network latency. Server Push was originally introduced in Google's SPDY protocol. Stream identifiers containing pseudo headers such as :path allow the server to initiate the Push for information that must be cacheable. The client must explicitly allow the server to Push cacheable resources with HTTP/2 or terminate pushed streams with a specific stream identifier.
Other HTTP/2 changes such as Server Push proactively update or invalidate the client's cache; the mechanism is also known as "Cache Push". Long-term consequences center around the ability of servers to identify possible push-able resources that the client actually does not want.
HTTP/2 implementation presents significant performance gains for pushed resources, with other benefits of HTTP/2 explained below:
- The client saves pushed resources in the cache.
- The client can reuse these cached resources across different pages.
- The server can multiplex pushed resources along with originally requested information within the same TCP connection.
- The server can prioritize pushed resources – a key performance differentiator in HTTP/2 vs HTTP1.
- The client can decline pushed resources to maintain an effective repository of cached resources or disable Server Push entirely.
- The client can also limit the number of pushed streams multiplexed concurrently.
Similar Push capabilities are already available with suboptimal techniques such as Inlining to Push server responses, whereas Server Push presents a protocol-level solution to avoid complexities with optimization hacks secondary to the baseline capabilities of the application protocol itself.
The HTTP/2 multiplexes and prioritizes the pushed data stream to ensure better transmission performance as seen with other request-response data streams. As a built-in security mechanism, the server must be authorized to Push the resources beforehand.
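Application code rarely builds PUSH_PROMISE frames by hand; a common pattern is for the origin to emit a preload hint that an HTTP/2-capable proxy or CDN may turn into a push. A minimal sketch using Flask (the app, route and file names are hypothetical):

```python
# Sketch: the origin marks /style.css as push-worthy via a Link/preload header;
# whether a push actually happens is up to the HTTP/2 front end (proxy or CDN).
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("<html><link rel='stylesheet' href='/style.css'>...</html>")
    resp.headers["Link"] = "</style.css>; rel=preload; as=style"
    return resp
```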
The latest HTTP version has evolved significantly in terms of capabilities and attributes, such as the transformation from a text protocol to a binary protocol. HTTP1.x used to process text commands to complete request-response cycles. HTTP/2 uses binary commands (in 1s and 0s) to execute the same tasks. This attribute eases complications with framing and simplifies the implementation of commands that were confusingly intermixed because they contained text and optional spaces.
Although it will probably take more effort to read binary compared with text commands, it is easier for the network to generate and parse frames in binary. The actual semantics remain unchanged.
Browsers using HTTP/2 implementation will convert the same text commands into binary before transmitting them over the network. The binary framing layer is not backward compatible with HTTP1.x clients and servers, and it is a key enabler of the significant performance benefits over SPDY and HTTP1.x. Binary commands enable key business advantages for internet companies and online businesses, as detailed in the benefits of HTTP/2 explained below (a sketch of the 9-byte frame header follows the list):
- Low overhead in parsing data – a critical value proposition in HTTP/2 vs HTTP1.
- Less prone to errors.
- Lighter network footprint.
- Effective network resource utilization.
- Eliminating security concerns associated with the textual nature of HTTP1.x such as response splitting attacks.
- Enables other capabilities of the HTTP/2 including compression, multiplexing, prioritization, flow control and effective handling of TLS.
- Compact representation of commands for easier processing and implementation.
- Efficient and robust in terms of processing of data between client and server.
- Reduced network latency and improved throughput.
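To make the binary framing concrete, here is a small sketch that decodes the fixed 9-byte frame header defined by RFC 7540 (a 24-bit payload length, an 8-bit type, an 8-bit flags field and a 31-bit stream identifier):

```python
import struct

def parse_frame_header(data: bytes):
    """Decode the 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    length_hi, length_lo, frame_type, flags, stream_id = struct.unpack("!BHBBI", data[:9])
    length = (length_hi << 16) | length_lo  # 24-bit payload length
    stream_id &= 0x7FFFFFFF                 # drop the reserved high bit
    return length, frame_type, flags, stream_id

# A 16-byte HEADERS frame (type 0x1) with END_HEADERS (0x4) on stream 1:
print(parse_frame_header(b"\x00\x00\x10\x01\x04\x00\x00\x00\x01"))  # (16, 1, 4, 1)
```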
HTTP/2 implementation allows the client to provide preference to particular data streams. Although the server is not bound to follow these instructions from the client, the mechanism allows the server to optimize network resource allocation based on end-user requirements.
Stream prioritization works with Dependencies and Weight assigned to each stream. Streams can be made dependent on one another, and each dependent stream is assigned a weight between 1 and 256. The details of stream prioritization mechanisms are still debated.
In the real world however, the server rarely has control over resources such as CPU and database connections. Implementation complexity itself prevents servers from accommodating stream priority requests. Research and development in this area is particularly important for long term success of HTTP/2 since the protocol is capable of processing multiple data streams with a single TCP connection.
This capability can lead to the simultaneous arrival of server requests that actually differ in terms of priority from an end-user perspective. Holding off data stream processing requests on a random basis undermines the efficiencies and end-user experience promised by HTTP/2 changes. At the same time, an intelligent and widely adopted stream prioritization mechanism presents benefits of HTTP/2 explained as follows (a toy weight calculation follows the list):
- Effective network resource utilization.
- Reduced time to deliver primary content requests.
- Improved page load speed and end-user experience.
- Optimized data communication between client and server.
- Reduced negative effect of network latency concerns.
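As a toy illustration of the weighting scheme, sibling streams at the same dependency level share resources in proportion to their weights (1-256); the stream names below are hypothetical:

```python
# Sketch: proportional bandwidth split among three sibling streams.
weights = {"page.html": 256, "style.css": 128, "tracker.js": 32}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: {w / total:.0%} of the bandwidth")
# page.html: 62%, style.css: 31%, tracker.js: 8%
```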
Stateful Header Compression
Delivering a high-end web user experience requires websites rich in content and graphics. The HTTP application protocol is stateless, which means each client request must include as much information as the server needs to perform the desired operation. This mechanism causes the data streams to carry multiple repetitive frames of information so that the server itself does not have to store information from previous client requests.
In the case of websites serving media-rich content, clients push multiple near-identical header frames leading to latency and unnecessary consumption of limited network resource. A prioritized mix of data streams cannot achieve the desired performance standards of parallelism without optimizing this mechanism.
HTTP/2 implementation addresses these concerns with the ability to compress large number of redundant header frames. It uses the HPACK specification as a simple and secure approach to header compression. Both client and server maintain a list of headers used in previous client-server requests.
HPACK compresses the individual value of each header before it is transferred to the server, which then looks up the encoded information in a list of previously transferred header values to reconstruct the full header information. HPACK header compression for HTTP/2 implementation presents immense performance advantages, including some benefits of HTTP/2 explained below (a sketch using an HPACK library follows the list):
- Effective stream prioritization.
- Effective utilization of multiplexing mechanisms.
- Reduced resource overhead – one of the earliest areas of concerns in debates on HTTP/2 vs HTTP1 and HTTP/2 vs SPDY.
- Encodes large headers as well as commonly used headers which eliminates the need to send the entire header frame itself. The individual transfer size of each data stream shrinks rapidly.
- Not vulnerable to security attacks such as CRIME exploiting data streams with compressed headers.
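A small sketch with the third-party hpack package (the python-hyper HPACK implementation, pip install hpack) shows the stateful effect: repeating the same headers costs far fewer bytes the second time.

```python
from hpack import Encoder, Decoder

encoder, decoder = Encoder(), Decoder()
headers = [(":method", "GET"), (":path", "/index.html"), ("accept", "text/html")]

first = encoder.encode(headers)
second = encoder.encode(headers)   # same headers, now in the dynamic table
print(len(first), len(second))     # the second encoding is much smaller
print(decoder.decode(first))       # round-trips to the original header list
```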
The underlying application semantics of HTTP, including HTTP status codes, URIs, methods and header fields, remain the same in the latest iteration, HTTP/2. HTTP/2 is based on SPDY, Google's alternative to HTTP1.x. The real differences lie in the mechanisms used to process client-server requests. The following chart identifies a few areas of similarities and improvements among HTTP1.x, SPDY and HTTP/2:
|HTTP1.x|SPDY|HTTP/2|
|---|---|---|
|SSL not required but recommended.|SSL required.|SSL not required but recommended.|
|Slow encryption.|Fast encryption.|Even faster encryption.|
|One client-server request per TCP connection.|Multiple client-server requests per TCP connection. Occurs on a single host at a time.|Multi-host multiplexing. Occurs on multiple hosts at a single instant.|
|No header compression.|Header compression introduced.|Header compression using improved algorithms that improve performance as well as security.|
|No stream prioritization.|Stream prioritization introduced.|Improved stream prioritization mechanisms used.|
HTTPS is used to establish an ultra-secure network connecting computers, machines and servers to process sensitive business and consumer information. Banks processing financial transactions and healthcare institutions maintaining patient records are prime targets for cybercriminal offenses. HTTPS works as an effective layer against persistent cybercrime threats, although not the only security deployment used to ward off sophisticated cyber-attacks infringing high-value corporate networks.
The HTTP/2 browser support includes HTTPS encryption and actually complements the overall security performance of HTTPS deployments. Features such as fewer TLS handshakes, low resource consumption on both client and server sides and improved capabilities in reusing existing web sessions while eliminating vulnerabilities associated with HTTP1.x present HTTP/2 as a key enabler to secure digital communication in sensitive network environments.
HTTPS is not limited to high-profile organizations and cyber security is just as valuable to online business owners, casual bloggers, e-commerce merchants and even social media users. The HTTP/2 inherently requires the latest, most secure TLS version and all online communities, business owners and webmasters must ensure their websites use HTTPS by default.
Usual processes to set up HTTPS include using web hosting plans, purchasing, activating and installing a security certificate and finally updating the website to use HTTPS.
Internet speed is not the same across all networks and geographic locations. The increasingly mobile user-base demands seamless high performance internet across all device form factors even though congested cellular networks can’t compete with high speed broadband internet. A completely revamped and overhauled networking and data communication mechanism in the form of HTTP/2 emerged as a viable solution with the following significant advantages.
Web performance is the term that sums up all the advantages of HTTP/2 changes. HTTP/2 benchmark results (see the chapter: Performance Benchmark Comparison of HTTPS, SPDY and HTTP/2) demonstrate the performance improvements of HTTP/2 over its predecessors and alternatives alike.
The protocol’s ability to send and receive more data per client-server communication cycle is not an optimization hack but a real, realizable and practical HTTP/2 advantage in terms of performance. The analogy is similar to the idea of vacuum tube trains (Vactrain) in comparison with standard railway: eliminating air resistance from Vactrain tunnels allows the vehicle to travel faster and carry more passengers with improved utilization of the available channels without having to focus on installing bigger engines, reducing weight and making the vehicle more aerodynamic.
Technologies such as Multiplexing create additional space to carry and transmit more data simultaneously – like multi-story seating compartments in the Airbus airplane.
And what happens when the data communication mechanism eliminates all hurdles to improve web performance? The byproduct of superior website performance includes increased customer satisfaction, better search engine optimization, high productivity and resource utilization, expanding user-base, better sales figures and a whole lot more.
Fortunately, adopting the HTTP/2 is far more practical than creating vacuum chambers for large multistory locomotives.
Mobile Web Performance
Millions of internet users access the web from their mobile devices as a primary gateway to the cyber world. The Post PC era has fueled smartphone adoption to access Web-based services from the palm of their hand, and perform most of the mundane computing tasks on the go instead of sitting in front of desktop computers for prolonged periods of time.
HTTP/2 is designed in context of present-day web usage trends. Capabilities such as multiplexing and header compression work well to reduce latency in accessing internet services across mobile data networks offering limited bandwidth per user. HTTP/2 optimizes web experience for mobile users with high performance and security previously only attributed to desktop internet usage. HTTP/2 advantages for mobile users promises immediate positive impact in the way online businesses target customers in the cyber world.
The cost of internet has plunged rapidly since the dawn of the World Wide Web. Expanding web access and rising internet speed was always the aim with advancements in internet technologies. Meanwhile, cost improvements appear to have bottlenecked especially considering the allegations surrounding the monopoly of telecom service providers.
HTTP/2, promising increased throughput and enhanced data communication efficiency, will allow telecom providers to shrink operational expenses while maintaining the standards of high speed internet. The reduced OpEx will encourage service providers to slash pricing for the low-end market and introduce high speed service tiers within the existing pricing model.
Densely populated Asian and African markets remain underserved with limited access to affordable internet. Internet service providers focus their investments to yield the highest returns from services offered only to urban and developed locations. HTTP/2 advantages leading to large-scale adoption of the advanced application protocol will naturally reduce network congestion to spare resources and bandwidth for distant underserved geographic locations.
Media Rich Experience
Modern web experience is all about delivering media-rich content at lightning-fast page load speeds. Internet users ostensibly demand media-rich content and services updated on a regular basis. The cost of the underlying infrastructure even delivered via cloud as a subscription-based solution is not always affordable for internet startup firms. HTTP/2 advantages and technology features such as Header Compression may not shrink the actual file size, but do shave a few bytes of size overhead to transmit resource-consuming media rich content between client and servers.
Improved Mobile Experience
Progressive online businesses follow a Mobile-First strategy to effectively target the exploding mobile user-base. Mobile device hardware limitations are perhaps the biggest constraint to mobile web experience impacted by extended time taken to process browser requests. The HTTP/2 cuts load times and mobile network latency to manageable levels.
Improved Technology Utilization
Resource consumption has increased significantly for client and server processing browser requests to deliver media-rich social media content and complex web designs. Although web developers have worked around appropriate optimization hacks, a robust and reliable solution in the form of HTTP/2 was inevitable. Features such as Header Compression, Server Push, Stream Dependencies and Multiplexing all contribute toward improved network utilization as a key HTTP/2 advantage.
HTTP/2 advantages extend beyond performance: the HPACK algorithm allows HTTP/2 to circumvent prevalent security threats targeting text-based application layer protocols. HTTP/2 encodes commands in binary and enables compression of the HTTP header metadata, following a "Security by Obscurity" approach to protecting sensitive data transmitted between clients and servers. The protocol also boasts full support for encryption and requires an improved version of Transport Layer Security (TLS1.2) for better data protection.
HTTP/2 embodies innovation and the concept of the high performance web. HTTP/2 underpins the cyber world as we know it today; HTTP/2 changes are primarily based on Google's SPDY protocol, which took giant leaps ahead of the aging HTTP1.x versions, and HTTP/2 will almost entirely replace SPDY as well as all previous HTTP iterations in the near future. Freedom from complex web optimization hacks presents HTTP/2 browser support as a viable solution for web developers to produce high performance websites and online services.
HTTP/2 SEO Advantage
The discipline of SEO marketing lies somewhere between art and science. Traditional black-hat SEO practices fail to manipulate search engine rankings against the increasingly complex proprietary algorithms used by popular search engines, and online businesses need to evolve their marketing tactics accordingly. Smarter investments take the form of thoroughly well-designed websites, not just optimized for speed but built for superior performance, security and user experience from the ground up. These attributes are preferred as means to answer search queries with the most accurate information and services, conveniently accessible across a global target audience.
Standardized industry processes for search engine optimization go beyond front-end marketing tactics and encompass the entire lifecycle of client-server communication. SEO specialists who were once the staple of internet marketing teams no longer enjoy the same position since the advent of the latest digital communication technologies. Among these, the prevalence of HTTP/2 marks a key tectonic shift, forcing web developers and marketers back to the drawing board.
Implementing and optimizing the infrastructure for HTTP/2, with its promising performance advantages, is now a critical enabler of search engine optimization. Online businesses lacking an adequate organic user base cannot afford to neglect HTTP/2 and the resulting SEO boost while they compete with ever-growing online business empires on grounds of innovation and high-value online services, which rank even higher with the implementation of HTTP/2 on the server side.
The following performance benchmark comparisons between HTTPS, SPDY and HTTP/2 portray a clear picture of web performance improvements with the latest application protocol.
HTTP/2 benchmark results confirm the ideas that header compression, server push and other mechanisms used specifically to enhance page speed and user experience consistently deliver in the real-world:
Test details: This test comparing HTTPS, SPDY3.1 and HTTP/2 presents the following results:
- Size of client request and server response headers: HTTP/2 benchmarks demonstrate how the use of header compression mechanism shrinks the header size significantly, whereas SPDY only shrinks the header used in server response for this particular request. HTTPS does not shrink header size in both the request and response commands.
- Size of server response message: Although HTTP/2 server response was larger in size, it provides stronger encryption for improved security as a key tradeoff.
- Number of TCP connections used: HTTP/2 and SPDY use fewer network resources by processing multiple concurrent requests (multiplexing) and therefore reduce latency.
- Page Load Speed: HTTP/2 was consistently faster than SPDY. HTTPS was significantly slower due to the lack of header compression and server push capabilities.
HTTP/2 is already available with adequate web server, browser and mobile support. Technologies running HTTP1.x are not compromised upon implementing HTTP/2 for your website but require a quick update to support the new protocol. You can consider networking protocols as spoken languages. Communicating with new languages is only possible as long as it is adequately understood. Similarly, the client and server should be updated to support data communication using the HTTP/2 protocol.
Internet consumers don’t need to worry about configuring their desktop and mobile web browsers to support HTTP/2. Google Chrome and Firefox have supported the technology for years and Apple added HTTP/2 browser support to the Safari web browser back in 2014. Internet Explorer requires users to run Windows 8 to support the latest application protocol.
Major mobile web browsers including Android’s aptly named Browser, Chrome for Android and iOS, as well as Safari in iOS 8 and above support HTTP/2 for mobile web access. Internet users are advised to install the latest stable releases of mobile and desktop web browsers to experience the maximum performance and security advantages of the application protocol as seen in HTTP/2 benchmarks.
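For a quick check of which protocol a given server negotiates, here is a sketch using the third-party httpx package (pip install "httpx[http2]"; the URL is a placeholder):

```python
# Sketch: check the negotiated protocol version for one request.
import httpx

with httpx.Client(http2=True) as client:
    response = client.get("https://example.com")
    print(response.http_version)  # "HTTP/2" when server, TLS and ALPN all agree
```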
Web Server Support: Apache and Nginx
Online service providers running servers on-premise or in the cloud will have to update and configure web servers to add support for HTTP/2. At Kinsta we’ve already modified our servers accordingly of course! Considering the spoken language analogy described earlier, internet visitors accessing information delivered from these servers can only use HTTP/2 as long as the web server is updated and configured for this purpose.
Nginx servers, constituting 66 percent of all active web servers, boast native support for HTTP/2, whereas Apache servers historically used the mod_spdy module, developed by Google to support SPDY features such as multiplexing and header compression for Apache 2.2 and later donated to the Apache Software Foundation; current Apache 2.4 releases provide HTTP/2 support through the mod_http2 module. | https://www.egys.net/what-is-http-2/ | 18
22 | [7 min] Do Now
Review from 1st Law, introduce 2nd Law:
In which of these cases do we have balanced forces? Explain why.
- A cat is moving with constant velocity towards his date.
- A car is moving with constant acceleration to pick up more physics homework.
- A cow is at rest, taking a nap.
- An apple is hanging from a tree.
Share out and discuss. Bridge the transition between Newton’s First Law and the idea of net force into Newton’s Second Law.
[1 min] Making Clear the Objective
Objective: You will derive the relationship between force and acceleration from simulated experimental data.
Criteria for Success: Graphs of data will show proof of Newton’s 2nd Law of Motion.
[12 min] Simulation: Newton’s Second Law
We will be using the simulation of Newton’s 2nd Law located at: http://phet.colorado.edu/en/simulation/forces-1d
Set: show horizontal force, show total force.
Turn friction off.
Turn on graphs for acceleration and velocity.
Use students to run simulation and call out the data for their classmates to record.
We will be using a simulation. For each trial, record the following:
- mass of the object
- force applied to the object
- acceleration of the object
Run the simulation for the dog (25 kg) with three forces: 50 N, 100 N, 200 N. Ask the students to make a prediction before the last one. Make sure to reset the simulation and graphs before each trial.
Run the simulation for the textbook (10 kg) with the same three forces.
[15 min] Graphing the Data
Turn and Talk:
What was the independent variable and why?
What was the dependent variable and why?
What was the main control variable and why?
What do we put on the y-axis? What do we put on the x-axis?
The independent variable of our experiment always goes on the x-axis (Force). The dependent variable of our experiment always goes on the y-axis (Acceleration).
Work with your partner:
Draw 2 graphs. Don’t forget units and labels!
- Acceleration vs force variable for the dog
- Acceleration vs force variable for textbook
[15 min] Analyzing the Data
We seem to have found a correlation between two variables, force and acceleration. Let’s see if we can define a relationship between them.
Find the slope of each graph and write it next to the plot.
Find the inverse of the slope for each graph and write it next to the plot.
Do we see any patterns? Does the slope look like a variable we recognize? How would I write the equation of this line?
a = (1/m) F → F = m a
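A short sketch of this analysis (illustrative; the acceleration values here are computed from F = ma rather than read off the simulation graphs):

```python
# Sketch: recover m from the acceleration-vs-force data for the 25 kg dog.
forces = [50, 100, 200]            # N
accels = [f / 25 for f in forces]  # m/s^2: 2.0, 4.0, 8.0

slope = (accels[-1] - accels[0]) / (forces[-1] - forces[0])
print(slope)      # 0.04  -> a = (1/m) F
print(1 / slope)  # 25.0  -> the inverse of the slope is the mass in kg
```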
[2 min] Summarize Findings
Newton’s 2nd Law of Motion:
The acceleration of an object is directly proportional to the net force acting on the object. The acceleration will be in the same direction as the net force. The acceleration is resisted by the mass of the object.
F = m a
Estimated Instructional Time: 52 min
[6 min] Exit Ticket
The catapult on an aircraft carrier can accelerate a fighter jet from rest to 56 m/s in just 2.8 s. If the fighter jet has a mass of 13,000 kg, what is the force required? | http://www.theveryspringandroot.com/blog/tag/motion/ | 18
32 | Large numbers gcse(f) gcse(h) standard form is used to handle very small and very large numbers it is written in two parts: the first part is a decimal number between 1 and 10, and the second part is a power of 10. Number forms there are generally four word forms that help students to understand place value in large numbers those are standard form (the way we usually write numbers with thousand groups), word form, short word form (a combination of numbers and words) and expanded number form. Example: it is easier to write (and read) 13 × 10-9 than 00000000013 it can also make calculations easier, as in this example: example: a tiny space inside a computer chip has been measured to be 000000256m wide, 000000014m long and 0000275m high. Converting forms worksheets want to help support the site and remove the ads each worksheet has 20 problems writing in normal, word and expanded form each worksheet has 20 problems converting from a calculator scientific notation to standard form.
Worksheet involving standard form questions this website and its content is subject to our terms and conditions. A standard form ax + by = c a, b, c are integers (positive or negative whole numbers) no fractions nor decimals in standard form traditionally the ax term is positive b how to write the equation into standard form when given an equation if there are fractions. Write the number three million in standard form 2 write the number 0000045 in standard form 3 write 713 × 103 as an ordinary number 4 6write 25 × 10-as an ordinary number 5 2write the numbers 32, 45.
Write the equation of the parabola x 2 – 6x – y + 4 = 0 in standard form to determine its vertex and in which direction it opens write the equation of the parabola 2 x 2 + 8 x + y + 3 = 0 in standard form to determine its vertex and in which direction it opens. Standard, word & expanded form expanded form game expanded form - ws word form - ws 3rd grade challenge expanded form (millions) - ws word form (millions) - ws poll of the day click on your choice then press the vote button to submit and reveal the current results home. Numbers in standard form appear as a whole number followed by a decimal and two other numbers all multiplied by a power of ten numbers in standard form appear as a whole number followed by a decimal and two. Worksheet on standard form equation (pdf with answer key on this page's topic) overview of different forms of a line's equation there are many different ways that you can express the equation of a line.
The standard form of a polynomial equation has all non-zero terms on the left hand side in descending order and zero on the right hand side of the equation. The standard form for writing down a polynomial is to put the terms with the highest degree first (like the 2 in x 2 if there is one variable) example: put this in standard form: 3x 2 − 7 + 4x 3 + x 6 the highest degree is 6, so that goes first, then 3, 2 and then the constant last. Writing numbers in standard form showing top 8 worksheets in the category - writing numbers in standard form some of the worksheets displayed are writing scientific notation, expanding numbers, writing numbers in standard form work pdf, 3 indices and standard form mep y9 practice book a, number and operations in base ten 2 36nunmber and6oenmee, scientific notation, writing numbers in.
However, in this lesson's case, standard form is really another name for the scientific notation of a number think of the standard form/scientific notation as shorthand writing, but for math. Slope intercept form is the more popular of the two forms for writing equations however, you must be able to rewrite equations in both forms for standard form equations, just remember that the a, b, and c must be integers and a should not be negative. Find the equation of this line in point slope form, slope intercept form, standard form and the way to think about these, these are just three different ways of writing the same equation so if you give me one of them, we can manipulate it to get any of the other ones.
Scientific notation is a smart way of writing huge whole numbers and too small decimal numbers this page contains worksheets based on rewriting whole numbers or decimals in scientific notation and rewriting scientific notation form to standard form. Place value worksheets standard form with integers worksheets this place value worksheet generator is great for testing children on writing numbers in standard form.
Improve your math knowledge with free questions in write equations in standard form and thousands of other math skills. Number level 8: pupils will be able to write any ordinary number in standard formchange a number written in standard form back into an ordinary number. Writing algebra equations finding the equation of a line given two points we have written the equation of a line in slope intercept form and standard form we have also written the equation of a line when given slope and a point. To write decimals in standard form, move the decimal point to the right until it is at the right of the first nonzero digit then, multiply the number by 10 to the power of the negative of the number of spaces the decimal point was moved for example, the decimal 00000005467 can be expressed in. | http://adtermpaperuwia.alisher.info/writing-in-standard-form.html | 18 |
16 | A geometric series is a sequence of numbers created by multiplying each term by a fixed number to get the next term. For example, the series 1, 2, 4, 8, 16, 32 is a geometric series because it involves multiplying each term by 2 to get the next term. In mathematics, you may need to find the sum of the geometric series. You can do this by using a simple formula.
Understand the formula. The formula for determining the sum of a geometric series is as follows: Sn = a1(1 - r^n) / (1 - r). In this equation, "Sn" is the sum of the geometric series, "a1" is the first term in the series, "n" is the number of terms and "r" is the ratio by which the terms increase. In the example series 2, 4, 8, 16, 32, you know that a1 = 2, n = 5 and r = 2.
Plug in the known variables to the equation. To determine the sum, it is necessary to know the exact values of "a1," "n" and "r." Sometimes you will already know these values and other times you will have to determine them by simply counting. For example, you may be given the series 2, 4, 8, 16, 32, or you may be given the series 2, 4, 8 ... and told that "n" = 5. It is therefore not necessary to know every term in the series. When you know the values of the three variables, plug them in. In the example, this would give you: Sn = 2(1 - 2^5) / (1 - 2).
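Before simplifying by hand, the arithmetic can be checked with a short Python sketch (illustrative only):

```python
# Sketch: the closed form Sn = a1(1 - r^n) / (1 - r) against a direct sum.
a1, r, n = 2, 2, 5
closed_form = a1 * (1 - r**n) / (1 - r)
direct = sum(a1 * r**k for k in range(n))  # 2 + 4 + 8 + 16 + 32
print(closed_form, direct)                 # 62.0 62
```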
Simplify the equation. Because you have all the needed information, you can simplify the equation to determine the geometric sum. You don't need to use any of the algebraic methods to move variables around because your "Sn" value is already isolated. Follow the basic order of simplifying an equation: brackets, exponents, multiplying/division and then addition/subtraction. In the example given, you will get: 2(-31) / -1, which further simplifies to 62. If the geometric series is simple -- like the example -- you can double-check your work: 2 + 4 + 8 + 16 + 32 = 62. The geometric sum is correct. | https://sciencing.com/calculate-sum-geometric-series-8166913.html | 18 |
15 | The Structure Of DNA
DNA (deoxyribonucleic acid) and RNA (ribonucleic acid) are genetic materials. They are chemically similar, but their three-dimensional structures are different. DNA is an informational molecule carrying genetic information in the exact sequence of its nucleotides, while RNA is a catalytic molecule. DNA and RNA each have three different conformations, with distinct structures that are variously suited to their functions. (Freifelder et al. 1998) After reading this booklet, one can learn about the different forms of DNA and RNA and how their different structural plans are ideally suited to their functions.
1.2: STRUCTURE OF DNA:
The correct structure of DNA was first obtained by J.D. Watson and F.H.C. Crick of Cambridge University in 1953. Their double-helix model of DNA structure was based on E. Chargaff's base composition rules. According to Chargaff, the concentration of thymine (T) is always equal to that of adenine (A), and the concentration of guanine (G) is always equal to that of cytosine (C). These ratios hold within a species, but the overall base composition was found to differ between organisms of different species. (Hartl et al. 2000)
DNA has no oxygen atom at the 2′ carbon. Nitrogenous bases are attached at the 1′ carbon and the phosphate group at the 5′ carbon of the pentose sugar. The DNA double helix contains two polynucleotide chains coiled around one another in a spiral manner. Each polynucleotide chain consists of a sequence of nucleotides joined together by phosphodiester bonds. The two chains are linked together by H-bonds to give a helical configuration. H-bonds are formed between the purines (adenine and guanine) and the pyrimidines (thymine and cytosine). Bases in DNA are specifically paired: adenine (A) of one strand is linked with thymine (T) of the other strand through two H-bonds, and guanine (G) with cytosine (C) through three H-bonds. Thus, the base sequence of one strand determines that of the other (specific base pairing). Such a condition is called "complementary base pairing". The two strands run antiparallel, with one strand in the 3′-5′ and the other in the 5′-3′ direction (opposite chemical polarity). (Gardner et al. 2005)
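Because the pairing is strict (A with T, G with C), one strand fully determines its antiparallel partner; a small illustrative sketch:

```python
# Sketch: specific base pairing lets one DNA strand determine the other.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    # The partner strand runs antiparallel, hence the reversal.
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("ATGCGT"))  # ACGCAT
```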
In the DNA double helix, base pairs are stacked one upon another like a pile of papers, with a 3.4 Å rise between consecutive base pairs. (Hartl et al. 2000) The bases face the inner side, forming a hydrophobic core, and they are perpendicular to the axis of the helix. The bases wind in a spiral manner around the helical axis, and each turn has ten base pairs. Each base pair is rotated through 36˚ around the helical axis relative to the next base pair; therefore, ten base pairs make a complete turn of 360˚. The twisting of the two complementary strands in the DNA double helix forms a minor groove (12 Å) and a major groove (22 Å). (Nelson et al. 2000)
Fig1.1: (A) & (B): Double Helical Structure of DNA. (n.d)
Fig1.2: (C): Complementary base pairing in DNA. Sugar phosphate backbone is on outside. (n.d)
1.3: Various forms of DNA:
• B-form: The standard model of DNA, which is right-handed. This conformation is shown by DNA in aqueous solution of low salt concentration. It has exactly 10.4 nucleotide pairs per turn. Each base is twisted 36˚ relative to the next, and the helix has a diameter of 2 nm. The base plane is tilted 6˚, and the length of each turn measures 3.4 nm. (Freifelder et al. 1998)
• A-form: The conformation shown by DNA in high salt concentration solution or in the dehydrated state. It has a wider and flatter helix, with minor and major grooves. It has 11 nucleotides per turn, and each turn is twisted 33˚. (Gardner et al. 2005) Each turn is 3.1 nm in length, and the base plane is tilted to the helix at 20˚. (Freifelder et al. 1998)
• Z-form: (Z- zigzagged path of the sugar phosphate backbone of the structure)
Twists in the left-handed direction. It has 12 base pairs per turn, and the length of a turn is 4.5 nm. The diameter of the helix is 1.8 nm, and the base plane is tilted 7˚. (Freifelder et al. 1998) The B-form can change to the Z-form and vice versa with the help of certain regulatory proteins.
1.4: STRUCTURE OF RNA:
RNA (ribonucleic acid) has a structure similar to that of DNA, but not identical: RNA has ribose sugar instead of deoxyribose, it is single-stranded rather than duplex, and uracil (U) is present in place of thymine (T). The backbone in RNA is an alternating polymer of ribose and phosphate, with phosphodiester bonds between the 3′ and 5′ atoms of consecutive riboses.
RNA can form comparatively short double strands on itself, thereby forming hairpin and stem-loop structures. Hairpins are formed by base pairing of nucleotides within 5-10 bases of each other; stem-loops are formed by pairing of bases that are separated by more than ten to several hundred nucleotides. When these simple foldings come together, they make up a more complex structure termed a "pseudoknot". (Lodish et al. 2009) A double helix formed between DNA and RNA, or RNA and RNA, has the same conformation as A-form DNA. Such RNA is called A-RNA or RNA-II. The double helix of A-RNA contains 11 bp per turn, with each turn measuring 3 nm in length. (Lodish et al. 2009)
1.5: Bases of RNA. (Uracil instead of thymine):
Adenine (A). Guanine (G).
Uracil (U). Cytosine (C).
Fig1.3: Bases of RNA. (n.d)
1.6: Structure of RNA:
An RNA molecule consists of three components: a five-carbon sugar (ribose), phosphate, and one of a family of four heterocyclic bases.
Fig1.4: RNA structure. (n.d)
1.7: Various forms of RNA based on their function in the protein synthesis:
• Ribosomal RNA (rRNA),
• Messenger RNA (mRNA) and
• Transfer RNA (tRNA).
rRNA is a single, continuous strand H-bonded back on itself, with a 5′ end at the start and a 3′ end at the finish. It contains a complex pattern of short double-stranded stems interspersed with unpaired single-stranded loops and bubbles.
Fig1.5: Secondary structure of rRNA. (Steven 2009)
mRNA constitutes 3-5% of total cellular RNA. (n.d) It is always single-stranded. The common bases found in mRNA are A, G, C and U. A certain amount of random coiling occurs in it, but stable base pairing does not, as that would destroy its biological properties. Its base sequence is complementary to the segment of DNA from which it is transcribed. Its size is at least 100 × 3 = 300 nts. (n.d)
A cap is formed at the 5′ end by the condensation of a guanylate residue in most eukaryotes and animal viruses. The cap is thus a blocked, methylated structure, m7GpppNmpNp, where m7G = the 7-methylguanosine cap, N = any of the four nucleotides, and Nmp = 2′-O-methylribose. (n.d) Behind its cap, mRNA has a non-coding region 1 (NC1) composed of 10-100 nucleotides. This region is rich in A and G residues and does not translate into protein. Then it has the initiation codon AUG, in both prokaryotes and eukaryotes. It also contains a coding region of about 1500 nucleotides, which is translated into protein. (n.d)
Fig1.6: Structure of mRNA. (n.d)
tRNA is the smallest of all RNA species. It contains a sequence of 60-95 bases, most commonly 76, and has a molecular weight of 18-20 kd. The secondary structure is "cloverleaf" shaped, with four constant arms (the acceptor arm, D-arm, anticodon arm and T-arm); an additional, variable arm is present in larger tRNAs. (n.d) The 5′ terminus of tRNA is always phosphorylated. The seven-base-pair stem can show non-Watson-Crick pairing, such as G pairing with U. The 3-4 bp stem and loop of the D-arm contain the dihydrouridine (D) base. The anticodon triplet (anticodon arm) and the TψC sequence with pseudouridine (T-arm) are present on 5 bp stems. Between the anticodon arm and the T-arm is the "variable arm", measuring 3-21 nucleotides in length.
This 3-D structure is formed in solution. When an ester linkage forms between the 2′- or 3′-OH group of the adenylic acid at the end of the acceptor arm and the COOH group of an amino acid, it gives a charged aminoacyl-tRNA. (Lodish et al. 2009) The L-shaped tertiary structure formed from the cloverleaf has the acceptor arm at one end and the anticodon arm at the other end.
Fig1.7: (A) structure of tRNA and (C) cloverleaf structure of tRNA (n.d).
1.8: Comparative functions of DNA and RNA as per their structural plan:
Functions of DNA:
Due to the formation of minor and major groove in DNA, edge atoms of individual bases inside the grooves are made reachable from outside the helix. Thus, DNA binding proteins can read the base sequences of duplex DNA by coming in contact with atoms either in minor or major groove. (Hartl et al. 2000)
H-bonds are not parallel to the axis of DNA unlike alpha helix in proteins. This property enables DNA to bend in order to form a complex with binding proteins. Protein-DNA complex occurs as nuclear DNA in eukaryotic cells. This bending property of DNA allows it to get densely packed in the chromatin.
In DNA, H-atom at 2’position of deoxyribose sugar (OH in RNA) accounts for comparatively greater stability of the molecule. It allows DNA molecule to store genetic information for the longer duration. Whereas, in RNA 2′-OH group undergoes alkaline hydrolysis of phosphodiester bond at neutral pH catalyzed by OH anion. It does not takes place in DNA.
The presence of Thymine (T) instead of Uracil (U) also enables DNA for the long term stability because of Thymine’s function in DNA repair. (Lodish et al. 2009) Complementary base pairing in DNA (A=T, G=C) form the basis for exact duplication. This allows precise replication process to occur so, that the information stored in them is replicated correctly and successfully inherited by the daughter cells
. Group of three bases in DNA molecule constitute genetic code which specifies amino acid sequence in proteins. All information contained in the genetic code plays a major role in directing the cell organization and cell metabolic functions. Sometimes bases are mispaired in DNA. This leads to the occasional mutations. An occasional mutation allows slow accumulation of favorable mutations, which as a whole leads to the evolution of variety of organisms. (Nelson et al. 2000) Certain bases are methylated in DNA molecule. Adenine & cytosine are more methylated then guanine & thymine. Presence of methylated base (like thymine) suppresses the migration of segment of DNA called transposons. Methylation of cytidine posses’ structural importance as it increases the tendency of that segment of DNA to take the Z-form. (Nelson et al 2000)
Functions of RNA and its different forms:
RNA is a catalytic molecule. It plays a wide range of roles in the living cells.
Functions of rRNA:
It has folded structure like that of α-helices & β strands of proteins but they has catalytic properties. Thus, it catalyses the splicing process during the formation of majority of functional mRNA in multicellular & unicellular (yeast, bacteria etc.) eukaryotes. rRNA has the catalytic role in the formation of peptide bonds during protein synthesis. rRNA serves as the central component of the ribosome protein manufacturing machinery. (Hartl et al. 2000)
Functions of tRNA:
It functions as adaptor molecules that decode the genetic code. (n.d) The anti-codon end of tRNA has nucleotide sequence complementary to the codon representing its amino acid. The anticodon enables tRNA to recognize the codon through complementary base pairing. Amino-acyl tRNA-synthase proteins formed by the reaction between 3’OH group of adenylic acid at acceptor arm & COOH group of amino acid, is the true translator of genetic code into amino acid sequence. If it fails to acetylate RNA properly then it will lead to amino acid mutation. RNA has three different species viz. mRNA, tRNA & rRNA. The mRNA carries coding information from the DNA to the site of protein synthesis (the ribosome), tRNA helps in the recognition of the codons & provides corresponding amino acid. (Nelson et al. 2000)
Functions of mRNA:
It is used as the template for protein synthesis. The presence of cap at 5′ end of mRNA plays vital role in recognition of ribosome and also in protection of RNAses. After polyadenylation, poly-A tail is attached to the 3′ end of mRNA. It is the binding site of proteins. These proteins shield mRNA from degradation by exonucleases. The process of polyadenylation is also vital for termination of transcription, export of mRNA from the nucleus & translation. (n.d)
• Hartl, D.L. & Jones, E.W. (2000) Genetics: Analysis of genes & genomes. (5th Ed.) USA: Jones & Bartlett Publishers.
• Lodish, H., (2008). Molecular Cell Biology. (6th Ed.) New York: W.H.Freeman & Company.
• Malacinski, G.M. & Freifelder, D. (1998). Essentials of Molecular Biology. (3rd Ed.) USA: Jones & Bartlett Publishers.
• Nelson, D.L. & Cox, M.M. (2000). Principle of Biochemistry. (3rd Ed.) U.K: Worth Publisher.
• Gardner, E.J., Simmons, M.J. & Snustad, D.P. (2005) Principles of Genetics. (8th Ed.)
Singapore: John Wiley and Sons, INC.
• Structure of RNA. (n.d). Retrieved on 24th March 2010, from http//en.wikipedia.org/wiki/messenger_RNA.
• Double helical structure of DNA. (n.d). Retrieved on 24th March 2010, from http://www.rnabase.org/primer/ .
• Complementary base pairing in DNA. (n.d). Retrieved on 24th March 2010, from http://images.google.com/imagres?
• Steven, M.C. (2OO9). Structure of rRNA. Retrieved on 27th March 2010, from http://aa.yhs.search.yahoo.com/avg/search?fr=yhs-avg&type=yahoo_avg_hs2-tb-web_aa&p=structure%20of%20rRNA. Structure of mRNA. (n.d). Retrieved on 1st April 2010, from http://www.google.com/search?hl=en&source=hp&q=STRUCTURE+OF+mRNA&aq=o&aqi=g10&aql=&oq=&gs_rfai=).
• Structure of mRNA. (n.d). Retrieved on 1st April 2010, from http://en.wikipedia.org/wiki/File:MRNA_Structure.svg.
• Structure of tRNA. (n.d). Retrieved on 2nd April 2010, from http://images.google.com/imagres?
Structure of DNA.
Various forms of DNA.
Structure of RNA
Various forms of RNA
Comparative functions of DNA and RNA
Functions of DNA
Functions of RNA
Cite This Work
To export a reference to this article please select a referencing stye below: | https://www.ukessays.com/essays/biology/the-structure-of-dna-biology-essay.php | 18 |
11 | Properties of sampling distributions a point estimator is a formula that uses sample data to if the sampling distribution of a statistic has a mean equal to. Statistics, science, and observations a sampling distribution is a theoretical distribution of the values that a specified statistic of a sample. Sampling distributions from last week, we know that hypothesis testing involves: 1 calculating some test statistic 2 comparing that statistic to an underlying distribution to determine how likely it would be to occur by chance. A sampling distribution tells us which outcomes we should expect for some sample statistic (mean, standard deviation, correlation or other.
The distribution of this population is referred to as sampling distribution of the sample mean the population average (or mean). This lesson considers the fundamentals of the sampling distribution of the sample mean, and discusses how to calculate the parameters and probabilities associated with it, using a normal probability table and minitab this lesson also demonstrates the central limit theorem using simulated data. By considering a simple random sample as being derived from a distribution of samples of equal size. Mcq 1166 in case of sampling with replacement is equal to: mcq 1167 the distribution of the mean of sample of size 4, taken from a.
Definition of sampling distribution, from the stat trek dictionary of statistical terms and concepts this statistics glossary includes definitions of. Types of sampling in , you see a rather small population and then a complete derivation and description of the sampling distribution of the sample mean. Sampling distributions and simulation opre 6301 a statistic is a sampling distribution the sampling distribution helps us understand how close is a statistic. The sampling distribution of a statistic is the set of values that we would obtain if we drew an infinite number of random samples from a given population and calculated the statistic on each sample.
Sampling distribution the sampling distribution of a statistic $s$ for samples of size $n$ is defined as follows the experiment consists of choosing a sample of size $n$ from the population and measuring the statistic $s$ the sampling distribution is the resulting probability distribution. 1 stats chapter 7: sampling distributions section 71: sampling distribution terms: parameter: a number that describes an aspect of a population. Sampling distribution of the sample mean sampling distribution of the mean when the population is normal central limit theorem application of sample mean distribution demonstrations of central limit theore before we begin, we will introduce a brief explanation of notation and some new terms that we. The sampling distribution of the mean introductory statistics: concepts, models, and applications 3rd introductory statistics: concepts, models, and. C) use the results of part (b) to construct the sampling distribution of the means of these samples the sampling distribution of the mean.
Sampling distribution definition, the distribution of a statistic based on all possible random samples that can be drawn from a given population see more. Sampling distribution the frequency (probability) distribution of a statistic unlike a distribution of a variable, where the units (individual values) represent values of individual observations, in a sampling distribution, these units are statistics calculated from samples of a particular size drawn from a population (theoretically drawn an. Normal distribution, sometimes called the bell curve, is a common way to describe a continuous distribution in probability theory and statistics in the natural sciences, scientists typically assume that a series of measurements of a population will be normally distributed, even though the actual. Sampling distribution of the mean 0 005 01 015 02 025 03 20 30 40 50 •the primary use of the distribution of sample means is to find the probability.
Sampling and sampling distributions asw, sampling distribution of the sample mean when random sampling if a simple random sample is drawn from a normally. A sampling distribution is a probability distribution of a statistic obtained through a large number of samples drawn from a specific population. The practice of statistics, 4th edition – for ap starnes, yates, moore chapter 7: sampling distributions section 71 what is a sampling distribution . Use the sampling distribution simulationjava applet at the rice virtual lab in statistics to do the following 6/12/2004 unit 5 - stat 571 - ramon v leon 10.
The distribution chamber is used to direct the outflow from a wastewater tank to a percolation pipe network allowing treated water to percolate back into the ground. Sampling distribution of a normal variable given a random variable suppose that the x population is highly skewed, then the sampling distribution of x. 2/10/12 lecture 10 3 sampling distribution of sample proportion • if x ~ b(n, p), the sample proportion is defined as • mean & variance of a sample proportion: µ. | http://dtassignmentriwt.vatsa.info/sampling-distribution.html | 18 |
18 | Differentiated instruction is a term in education that gets thrown around quite a bit, but what does it actually mean?
To determine what differentiated instruction is, we can start by deciding what it is not. According to TeachThought1, differentiated instruction is NOT:
- Grouping students by “ability”
- Incompatible with standards
- Dumbing down
- Just for gifted students/just for students with learning challenges
- Something “extra” on top of teaching
TeachThought, a blog dedicated to teaching and learning, defines differentiated instruction as “adapting content, process, or product” according to a specific student’s “readiness, interest, and learning profile.”1 This means in order to effectively differentiate instruction in a classroom, the educator needs to know extensive information about each student. What are the student’s strengths, and what areas might they be struggling with? What engages this student? How does their academic and personal history factor in? These questions can be answered most accurately by reviewing the data of each student.
Differentiated instruction can be thought of as “optimizing the packaging of academic content for individual students.”1 In other words, instead of changing the content that is to be taught (which could be harmful to the student, as they need to learn what the standards specify), the educator can tailor the content in a way that is ideal for an individual student to learn it (and even enjoy learning it!).
The end goal of differentiated instruction is to achieve maximum success for each student.1 What makes differentiated instruction so important and unique is that instead of just making sure all students are on grade level or have mastered a particular standard, it pushes each student to reach their individual potential.
TeachThought1 offers a list of what differentiated instruction IS, which includes:
- Valuing and planning for diversity
- A student-focused way of thinking about teaching and learning
- Use of whole-group, small-group, and individual tasks based on content and student needs
- Purposeful use of flexible grouping
How can data be used to drive differentiated instruction?
Data can inform us of a particular student’s strengths and areas of challenge, learning style and preferences, and personal and academic history. Data enable educators to deeply know their students right away and understand what a particular student needs.
Since differentiated instruction relies on understanding the diversity of students, accessing and analyzing student data is essential when planning for a differentiated classroom. Educators should use data to determine three important pieces of information that drive differentiated instruction: student readiness (where a student currently stands academically), learning profile (how a student learns), and student interests (what engages and excited the student).2 Using student data to pinpoint these three areas for each student will result in impactful instructional and optimal student growth.
Here are a few ways to gather data in order to plan for differentiated lessons from Scholastic2:
- Administer a variety of surveys throughout the year - these surveys can cover multiple intelligences, reading, writing, math, student interests, and can be given to students, parents, and other instructors.
- Analyze prior student assessments.
- Create and administer assessments that will help to identify student’s strengths and weaknesses.
Make sure that the data are stored in a way where they can easily be viewed and studied. As the school year progresses and more information is gathered from various sources, the student data should be updated and re-evaluated frequently. School data team meetings are especially useful in accomplishing this task, as well establishing methods for saving and reviewing the data.
Differentiating instruction can be complex, and the effective use of data is what will make the planning of differentiated lessons easier and more impactful. If data are stored in a way where educators can easily access, analyze, and understand the information the data is providing, lessons that encompass the diversity of each learning will become easier to create and implement. When educators work together to study student data and share practices, instruction becomes even more powerful.
What are some best practices for differentiated instruction?
Effective differentiated instruction becomes about more that just the content of the lessons. It is also about the learning process, such as the activities incorporated in the lesson, and the products of the lesson, or demonstration of what was learned.2 When planning for differentiated instruction, make sure to keep these three areas in mind. Backwards plan to ensure that each of these areas is covered and the lesson leads to the end goal. The products of the lesson will be a valuable point for gathering the data that will shape further instruction, so make sure that this is included in the lesson plan as well.
Giving students independence and ownership is another best practice for created a differentiated classroom. Activities that might accomplish this are (from Scholastic)2:
- Journal and writing topics
- Graphic organizers
- The option of working alone or with a partner or small group
- Anchor activities
As differentiation instruction becomes more familiar, more challenging activities can be added in. Carol Ann Tomlinson, a leader in the use and techniques of differentiated instruction, expands on the best practices for differentiated instruction.3 Some of her practice suggestions are:
- Providing students with materials that reflect a variety of cultures and home settings
- Developing routines that guide students to work independently and know how to get help when the teacher is engaged with other students
- Helping students understand that we all learn differently and to identify their unique style of learning
Check out Resource 4 for additional tips on how to build a powerful curriculum unit and plan for differentiation in lessons.
By: Mary Conroy Almada
1What Differentiated Instruction Is - And Is Not http://www.teachthought.com/teaching/the-definition-of-differentiated-instruction/
28 Lessons Learned on Differentiating Instruction
3What is Differentiated Instruction?
49 Ways to Plan Transformational Lessons http://www.edutopia.org/blog/9-ways-plan-transformational-lessons-todd-finley
5Using Assessment Results to Guide and Differentiate Instruction | http://blog.ioeducation.com/data-differentiated-instruction | 18 |
31 | In electromagnetism, charge density is a measure of the amount of electric charge per unit length, surface area, or volume. Volume charge density (symbolized by the Greek letter ρ) is the quantity of charge per unit volume, measured in the SI system in coulombs per cubic meter (C•m−3), at any point in a volume. Surface charge density (σ) is the quantity of charge per unit area, measured in coulombs per square meter (C•m−2), at any point on a surface charge distribution on a two dimensional surface. Linear charge density (λ) is the quantity of charge per unit length, measured in coulombs per meter (C•m−1), at any point on a line charge distribution. Charge density can be either positive or negative, since electric charge can be either positive or negative.
Like mass density, charge density can vary with position. In classical electromagnetic theory charge density is idealized as a continuous scalar function of position , like a fluid, and , , and are usually regarded as continuous charge distributions, even though all real charge distributions are made up of discrete charged particles. Due to the conservation of electric charge, the charge density in any volume can only change if an electric current of charge flows into or out of the volume. This is expressed by a continuity equation which links the rate of change of charge density and the current density .
Since all charge is carried by subatomic particles, which can be idealized as points, the concept of a continuous charge distribution is an approximation, which becomes inaccurate at small length scales. A charge distribution is ultimately composed of individual charged particles separated by regions containing no charge. For example the charge in an electrically charged metal object is made up of conduction electrons moving randomly in the metal's crystal lattice. Static electricity is caused by surface charges consisting of ions on the surface of objects, and the space charge in a vacuum tube is composed of a cloud of free electrons moving randomly in space. The charge carrier density in a conductor is equal to the number of mobile charge carriers (electrons, ions, etc.) per unit volume. The charge density at any point is equal to the charge carrier density multiplied by the elementary charge on the particles. However because the elementary charge on an electron is so small (1.6•10−19 C) and there are so many of them in a macroscopic volume (there are about 1022 conduction electrons in a cubic centimeter of copper) the continuous approximation is very accurate when applied to macroscopic volumes, and even microscopic volumes above the nanometer level.
At atomic scales, due to the uncertainty principle of quantum mechanics, a charged particle does not have a precise position but is represented by a probability distribution, so the charge of an individual particle is not concentrated at a point but is 'smeared out' in space and acts like a true continuous charge distribution. This is the meaning of 'charge distribution' and 'charge density' used in chemistry and chemical bonding. An electron is represented by a wavefunction whose square is proportional to the probability of finding the electron at any point in space, so is proportional to the charge density of the electron at any point. In atoms and molecules the charge of the electrons is distributed in clouds called orbitals which surround the atom or molecule, and are responsible for chemical bondings.
similarly the surface charge density uses a surface area element dS
and the volume charge density uses a volume element dV
Integrating the definitions gives the total charge Q of a region according to line integral of the linear charge density λq(r) over a line or 1d curve C,
similarly a surface integral of the surface charge density σq(r) over a surface S,
and a volume integral of the volume charge density ρq(r) over a volume V,
where the subscript q is to clarify that the density is for electric charge, not other densities like mass density, number density, probability density, and prevent conflict with the many other uses of λ, σ, ρ in electromagnetism for wavelength, electrical resistivity and conductivity.
Within the context of electromagnetism, the subscripts are usually dropped for simplicity: λ, σ, ρ. Other notations may include: ρℓ, ρs, ρv, ρL, ρS, ρV etc.
The total charge divided by the length, surface area, or volume will be the average charge densities:
Free, bound and total chargeEdit
In dielectric materials, the total charge of an object can be separated into "free" and "bound" charges.
Bound charges set up electric dipoles in response to an applied electric field E, and polarize other nearby dipoles tending to line them up, the net accumulation of charge from the orientation of the dipoles is the bound charge. They are called bound because they cannot be removed: in the dielectric material the charges are the electrons bound to the nuclei.
Free charges are the excess charges which can move into electrostatic equilibrium, i.e. when the charges are not moving and the resultant electric field is independent of time, or constitute electric currents.
Total charge densitiesEdit
In terms of volume charges densities, the total charge density is:
as for surface charge densities:
where subscripts "f" and "b" denote "free" and "bound" respectively.
and dividing by the differential surface element dS gives the bound surface charge density:
Using the divergence theorem, the bound volume charge density within the material is
The negative sign arises due to the opposite signs on the charges in the dipoles, one end is within the volume of the object, the other at the surface.
A more rigorous derivation is given below.
Derivation of bound surface and volume charge densities from internal dipole moments (bound charges) The electric potential due to a dipole moment d is:
For a continuous distribution, the material can be divided up into infinitely many infinitesimal dipoles
where dV = d3r′ is the volume element, so the potential is the volume integral over the object:
where ∇′ is the gradient in the r′ coordinates,
using the divergence theorem:
which separates into the potential of the surface charge (surface integral) and the potential due to the volume charge (volume integral):
Free charge densityEdit
The free charge density serves as a useful simplification in Gauss's law for electricity; the volume integral of it is the free charge enclosed in a charged object - equal to the net flux of the electric displacement field D emerging from the object:
Homogeneous charge densityEdit
For the special case of a homogeneous charge density ρ0, independent of position i.e. constant throughout the region of the material, the equation simplifies to:
The proof of this is immediate. Start with the definition of the charge of any volume:
Then, by definition of homogeneity, ρq(r) is a constant denoted by ρq, 0 (to differ between the constant and non-constant densities), and so by the properties of an integral can be pulled outside of the integral resulting in:
The equivalent proofs for linear charge density and surface charge density follow the same arguments as above.
where r is the position to calculate the charge.
As always, the integral of the charge density over a region of space is the charge contained in that region. The delta function has the shifting property for any function f:
so the delta function ensures that when the charge density is integrated over R, the total charge in R is q:
This can be extended to N discrete point-like charge carriers. The charge density of the system at a point r is a sum of the charge densities for each charge qi at position ri, where i = 1, 2, ..., N:
The delta function for each charge qi in the sum, δ(r − ri), ensures the integral of charge density over R returns the total charge in R:
If all charge carriers have the same charge q (for electrons q = −e, the electron charge) the charge density can be expressed through the number of charge carriers per unit volume, n(r), by
Similar equations are used for the linear and surface charge densities.
Charge density in special relativityEdit
In special relativity, the length of a segment of wire depends on velocity of observer because of length contraction, so charge density will also depend on velocity. Anthony French has described how the magnetic field force of a current-bearing wire arises from this relative charge density. He used (p 260) a Minkowski diagram to show "how a neutral current-bearing wire appears to carry a net charge density as observed in a moving frame." When a charge density is measured in a moving frame of reference it is called proper charge density.
Charge density in quantum mechanicsEdit
where q is the charge of the particle and |ψ(r)|2 = ψ*(r)ψ(r) is the probability density function i.e. probability per unit volume of a particle located at r.
When the wavefunction is normalized - the average charge in the region r ∈ R is
where d3r is the integration measure over 3d position space.
The charge density appears in the continuity equation for electric current, and also in Maxwell's Equations. It is the principal source term of the electromagnetic field, when the charge distribution moves this corresponds to a current density. The charge density of molecules impacts chemical and separation processes. For example, charge density influences metal-metal bonding and hydrogen bonding. For separation processes such as nanofiltration, the charge density of ions influences their rejection by the membrane.
- P.M. Whelan, M.J. Hodgeson (1978). Essential Principles of Physics (2nd ed.). John Murray. ISBN 0-7195-3382-1.
- "Physics 2: Electricity and Magnetism, Course Notes, Ch. 2, p. 15-16" (PDF). MIT OpenCourseware. Massachusetts Institute of Technology. 2007. Retrieved December 3, 2017.
- Serway, Raymond A.; Jewett, John W. (2013). Physics for Scientists and Engineers, Vol. 2, 9th Ed. Cengage Learning. p. 704.
- Purcell, Edward (2011-09-22). Electricity and Magnetism. Cambridge University Press. ISBN 9781107013605.
- I.S. Grant, W.R. Phillips (2008). Electromagnetism (2nd ed.). Manchester Physics, John Wiley & Sons. ISBN 978-0-471-92712-9.
- D.J. Griffiths (2007). Introduction to Electrodynamics (3rd ed.). Pearson Education, Dorling Kindersley. ISBN 81-7758-293-3.
- A. French (1968) Special Relativity, chapter 8 Relativity and electricity, pp 229–65, W. W. Norton.
- Richard A. Mould (2001) Basic Relativity, §62 Lorentz force, Springer Science & Business Media ISBN 0-387-95210-1
- Derek F. Lawden (2012) An Introduction to Tensor Calculus: Relativity and Cosmology, page 74, Courier Corporation ISBN 0-486-13214-5
- Jack Vanderlinde (2006) Classical Electromagnetic Theory, § 11.1 The Four-potential and Coulomb's Law, page 314, Springer Science & Business Media ISBN 1-4020-2700-1
- R. J. Gillespie & P. L. A. Popelier (2001). "Chemical Bonding and Molecular Geometry". Oxford University Press. Bibcode:2018EnST...52.4108E. doi:10.1021/acs.est.7b06400.
- Razi Epsztein, Evyatar Shaulsky, Nadir Dizge, David M Warsinger, Menachem Elimelech (2018). "Ionic Charge Density-Dependent Donnan Exclusion in Nanofiltration of Monovalent Anions". Environmental Science & Technology. 52 (7): 4108–4116. Bibcode:2018EnST...52.4108E. doi:10.1021/acs.est.7b06400.
- A. Halpern (1988). 3000 Solved Problems in Physics. Schaum Series, Mc Graw Hill. ISBN 978-0-07-025734-4.
- G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. ISBN 978-0-521-57507-2.
- P. A. Tipler, G. Mosca (2008). Physics for Scientists and Engineers - with Modern Physics (6th ed.). Freeman. ISBN 978-0-7167-8964-2.
- R.G. Lerner, G.L. Trigg (1991). Encyclopaedia of Physics (2nd ed.). VHC publishers. ISBN 978-0-89573-752-6.
- C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). VHC publishers. ISBN 978-0-07-051400-3.
- - Spatial charge distributions | https://en.m.wikipedia.org/wiki/Charge_density | 18 |
20 | Mass of the reactants and products
Chemical reactions convert reactants into products, but, typically, there are always some amounts of reactants left in the products of the reaction. In a chemical reaction, substances (elements and/or compounds) called reactants are changed into other substances (compounds and/or elements) called products you can. How do u calculate masses of reactants and products. Which branch of chemistry deals with the mass relationships of elements in compounds and the mass relationships among reactants and products in chemical reactions.
Mass changes in chemical reactions - activity about they are just rearranged from the reactants to form the products these mass changes allow scientists to. In a chemical reaction, only the atoms present in the reactants can end up in the products mass is conserved in a chemical reaction summary. This resource is a collection i have made over the years of 12 gcse chemistry questions for students to calculate the mass of reactants and products use this. Bd chemical reactions can be described by chemical equations the law of conservation of mass states that in a chemical reaction, the total mass of reactants is equal. Reacting masses in all chemical reactions, the total mass of reactants used is equal to the total mass of the products made. 39 stoichiometric calculations: amounts of reactants between reactants and/or products in a mass of products and initial mass of the.
The mass of products equals the mass of the reactants nothing is created or destroyed - the law of conservation of energy- many reactions like. What is the difference between reactants and products • reactants are the substances consumed during a reaction and products are formed • so.
Describe the mass of the reactants and products given the balanced chemical reaction 2h2o → 2h2 + o2 a) the mass of the products is 2/3 that of the. Stoichiometry calculator after entering the balanced equation, you can specify any one of the reactants and its amount products input. A chemical reaction is like an equation it is read from left to right the reactants are on the left side of the equation and the products are on the right side of.
reactants, products and leftovers 1110. The law of conservation of mass is defined and examples of reacting mas calculations using the law are explained total mass reactants = total mass of products. Given the masses of two reactants, determine how many grams of one of the products forms and identify the reactant in excess made by faculty at the.
Mass of the reactants and products
Chapter 4: quantities of reactants and products 123 to take the grams of fecl 3 to moles of fecl 3, use the molar mass of fecl 3: g fecl3 × 1 mol fecl 3. Stoichiometry / ˌ s t ɔɪ k i ˈ ɒ m ɪ t r i / is the calculation of reactants and products in chemical reactions stoichiometry is founded on the law of.
- Reactants into products by rearranging atoms ¥ chemical reactions can be observed and identified that is,the mass of the products appears to be less than the mass.
- Describe the mass of the reactants and products given the balanced chemical reaction athe mass of the products is 2/3 that of the reactants bthe mass of the.
- In a chemical change, all the atoms in the reactants end up in the products chemical changes involve rearranging atoms to form different substances.
- Given mass of reactants example: a reaction combines 6481g of silver nitrate with 9267g of potassium bromide how much silver bromide is formed.
- In a balanced chemical equation, the number of atoms of each element is the same for the reactants and products the term reactant first came into use around 1900-1920.
A chemical reaction is a process that leads initially involved in a chemical reaction are called reactants or here the analysis starts from the products. Conservation of mass in chemical if atoms cannot be destroyed then the mass of reactants must equal the mass of mass reactants = mass products. Quantities of reactants and products 63 in a mixture, convert between mass of the mixture and mass of any one of the reactants or products in the reaction. 39 stoichiometric calcs: amounts of reactants and • mass of products is equal to mass of reactants 2o factor mass o 2 calculating the mass of a. Gas stoichiometry at standard relationships of the reactants and products in chemical meaning that the mass of the reactants must be equal to the mass of the. The mole ratio can be used to calculate the mass of reactants and products mole ratio. | http://akpaperpqbe.jordancatapano.us/mass-of-the-reactants-and-products.html | 18 |
40 | In the context of this wiki, graphics refers to pictures, images, figures, illustrations, and other artwork that can be used in an eBook. A specialized form of graphics are the glyphs used in fonts and symbols. See also editing graphics and video.
Types of Graphics
There are two types of graphics data, bitmapped and vector. It is possible for an image to be built out of both forms. For example a vector representation could be displayed over the top of a bitmap.
Bitmapped image
This format is also called raster image from the way is it usually displayed. In its simplest form a bitmap draws a picture by laying down a series of dots. If these dots are small enough and laid closely together in a rectangle the image can be discerned. This is the way a TV and a computer screen displays a picture or video. It is also the way digital camera pictures are stored. It is sometimes referred to as a scanned image since it can be recreated by displaying the image one line at a time using a scan device. Scanning it done using horizontal scan lines beginning at the top. Scans are easy to transmit serially.
The smallest bit map would have a single bit representing a dot. This image would have only a background, perhaps white, and a contrasting foreground of dots, perhaps black. This would be called a binary image. Most often these dots will have varying brightness which will product a grayscale image. Multiple dots can be used to product a color image as well. One dot is referred to as a pixel when it is represented on an color electronic screen since it may be represented with more than one dot. For example, on a TV a color pixel is built from 3 different colored dots (Red, Green, Blue). RGB is another term used to describe this bitmapped format utilizing the first letters of the three colors. In digital color a "true color" image will devote a full Byte, 8 bits, to each of the three colors thus a single pixel will have 24 bits of data. High Dynamic Range (HDR) will use more than 8 bits to represent each color.
Image sizes are often stated in Mega Pixels (MP). For example an eight Mpixel image in 4:3 ratio could be 3264 × 2448 pixels. A twelve Mpixel might be 4032 × 3024 pixels. (numbers are from Apple products.)
While RGB is a popular format there are also other formats. For example a format described as YCbCr would describe the data information by breaking out the Brightness (monochrome data) Y from the Color component. With this definition a grayscale image could be produced from the Y component alone. The Cb, chroma blue, would contain the data needed to represent the blue and Cr represents the red data. Note that subtracting the B and R data from the Y data would represent the G (Green) data. YCbCr format allows a black and white TV to work with the same data as a color TV.
While a bitmap is typically rectangular in shape, it is possible to make it seem to be an arbitrary shape. Some bitmap formats support a transparent color, that allows you to see through the graphic to the underlying background. This can be used to create odd shaped images and even holes in the image. Some formats can even support a partially transparent color, called an Alpha channel. The number of bits in the Alpha channel determines the level of transparency. An RGB signal with an Alpha channel would be called RGBA.
Of course bitmap images can even represent characters (glyphs) as in Bitmapped fonts.
Vector image
This format is more like a person would draw. It is lines and shapes drawn like you would do with a pen. However, unlike a free hand drawing these lines and shapes are described mathematically like you or a draftsman might do with graph paper. A line could be described by two xy coordinates on the paper. Attributes could be defined to describe the thickness of the line, the color, etc. Curved lines can also be described mathematically. Closed vector objects can be empty (transparent inside) or filled with a color similar to coloring the image in a coloring book. Objects are typically rendered in a defined order such that later objects are on top of earlier objects which may cause them to obscure a portion of the earlier object. In some cases transparency can be defined to permit the earlier object to show through.
Many standard graphic formats do not support the use of vector formats. TIFF can use a vector format. Vector and Bitmaps images can be combined by drawing the vector data over the top of a bitmap image.
The ePUB standard requires support for SVG, Scaleable Vector Graphics, format. This is a full vector format and is intended to provide searchable text and scaleable images, two things that raster images have trouble with. PDF can handle SVG as well.
The concept of zooming means to reduce (zoom out) or enlarge (zoom in) the object. For text this generally means to change the amount of text on a screen by increasing or decreasing the individual fonts that are used for the text and then reflowing the page. The font change can be done with scaleable fonts or by replacing the font with one of a different size. For images zooming does different things depending on the type of image.
A bitmap image starts at 100% when there is a one to one correspondence between the pixels in the image with the pixels on the screen. Bitmapped graphic images zoom out by removing or merging pixels to achieve the desired size. There can be slight distortion of content unless the zoomed scale is an exact multiple of the source file. Zooming in replicates the pixels which can cause distortion due to uneven replication and will cause blockiness if the zoom in is so much that the duplicated pixels start becoming visible as pixels. Zooming an image beyond the edge of the screen will require panning to see the rest of the image. With some eBook formats the original image is intentionally shrunk for initial display so that the user can zoom in without losing display resolution of the image. Sometimes zooming is accomplished by replacing the image with another image with more or less pixels to avoid the distortion effects. Of course this causes the eBook to be larger to accommodate the separate images. Zooming out can also be accomplished by smoother changes. It this case the image pixels are not just duplicated but instead they are analyzed and replaced with a color or brightness half way between the two original pixels. This, of course, increases the time to display the image.
A vector image typically does not change quality or get distorted as it is zoomed in or out. The image itself is stored as points and objects in a database so it is not stored in a form that looks like an image. Instead the image is created on the screen based on the scaling of coordinates. Zooming just recomputes and recreates the image with a new scale factor. While vector images will scale well there can still be distortion. Sometimes an image is created by connecting point with lines to form an outline of a complex shape. Having too few points can cause distortion when zooming in. Using a Bézier curve can sometimes reduce this distortion.
Graphic Formats
The Graphics formats listed here are the predominant ones that are used in eBooks. There are many other formats that are not covered because of their limited use in eBooks. Some formats are further described on their own page as indicated by the link.
BMP is the extension used for a BitMaP image file which is an uncompressed graphics format developed by Microsoft. The same format can be compressed and if so the extension is normally changed to RLE. A program that claims to support BMP files may or may not be able to support an RLE file. A generic term that is applied to BMP files is RGB files since the bitmap is made up of red, green, and blue information. Some Generic bitmap files may not list the colors in this order.
A 24 bit uncompressed BMP file is huge and should not be used on any but the smallest of images. Reducing the number of bits may reduce the file size considerably with a sacrifice in the quality of the color image due to the reduced colors if they were in use.
Most OS's other than Windows do not support BMP files and it is not standard for web browsers although accepted by Internet Explorer. TIF files can contain a generic version of BMP data. Device-independent bitmap, DIB, is the term used to describe a generic bitmap. Windows also uses the BMP format for its icons which often have an ICO extension. See also PPM.
GIF stands for Graphics Interchange Format and developed in 1987 by CompuServe. It is a lossless bitmapped graphics format which means the compression technology does not lose any of the image detail. It was designed for the reproduction of line drawings but can work well with photographs. Widely used on the Internet for logotypes and drawings. The coding scheme uses a LZW lossless compression scheme which is patented but patents ran out in 2004.
GIF files use a palletized color scheme which means that the pixel color is selected from a predefined pallet of colors. There can be only a maximum of 256 different colors (8 bits) in the pallet but different pictures can have their own pallet. Some people consider GIF to by a lossy format since there could be losses due to having to reduce the number of colors down to 256. The pallet of colors itself is defined as needed by up to 24 bits of data (12 bits is a popular option). Using less colors and less bits will reduce the file size of the picture. Large areas of one color will also reduce the file size.
It is possible to define one of the colors as a transparency color. When this is done the rendering code will not draw that color on the screen. This allows the background color of the screen to be seen in the image. This is mostly used to make the image shape look like it is the total shape rather than a picture with a frame around it.
Motion GIF - GIF images can be animated but eBook readers typically do not support animation although many web browsers do. If animation isn't supported only the first image in the animation sequence will be shown. Removing the animation frames will make the image size smaller. Animation is done by combining several GIF images in one file. The later images are simply superimposed on the earlier images one by one. The later image can be smaller in size than the original by providing an offset as to where it should be placed. The frame rate is specified in the file itself.
GIF can represent grayscale images of course and can be lossless in this task. Reducing the grayscale gradient can drastically reduce the size of the file. Line art is often best represented as a GIF image.
Gifsicle - A program to manipulates GIF images and animations.
JPG (or JPEG) stands for the Joint Photographic Experts Group. It uses 24 bits to represent a color pixel (often called True Color). It uses a lossy compressed graphics format that is designed to support photographs rather than line art. It was developed in 1992 and issued as the ISO 10918-1 standard in 1994, the quality depends directly on the amount of compression employed. It is widely used on the Internet and by most digital camera manufacturers.
The JPEG File Interchange Format (JFIF) is a minimal version of the JPEG Interchange Format that was deliberately simplified so that it could be widely implemented and thus has become the de-facto standard. Another standard format is the Exchangeable image file format (Exif). This is the specification used for the image files on digital cameras.
There is also a newer format is called JPEG 2000. It generally has a jp2 or jpg2 extension to distinguish the new improved format from the original format however that is not always the case. Older imaging display programs will not display this newer format. PDF files may have images in this format which can make converting them difficult. The latest JPEG format is JPEG XR. It provides even better compression with less processing and can provide lossless compression as well.
PNG stands for Portable Network Graphics format. It is a bitmapped graphic format that employs a lossless compression system. It was designed to improve upon and replace GIF files back when the GIF patent was in effect. PNG does not require a patent license. Its main drawback is the complexity of its color model. It supports many different color schemes including both palettes and direct with differing number of bits. Some software that support PNG may specify that it only supports 8 bit images or have other restrictions on what is supported.
PNG files with more bits per pixel can use some of those bits for transparency. For example a 32 bit pixel might use 24 bits for color (8 for each of the 3 primary colors) and 8 bits for transparency. This is similar to the transparency of GIF images but an existing color is not used. Instead the amount of transparency can be defined per color so that some amount, but perhaps not all, of a background image can show through.
TIF (or TIFF) stands for Tagged Image File Format. It is a container that can hold images in a wide variety of bitmapped or even vector formats. They can also be compressed or uncompressed. If compressed they can use RLE, JPG, LZW, ZIP or potentially other formats. This standard is owned by Adobe. TIF can even support multiple images and even a mix of bitmapped and vector images in the same file.
DjVu is a digital image format with advanced compression technology and high performance value. DjVu supports very high resolution images of scanned documents, digital documents, and photographs. DjVu viewers are available for web browsers, the desktop, and PDA devices. Its main characteristics is that the compress ratio is about 10x better than in PDF format at the same quality. IW44 is a subset simplified version of DJVU.
SVG, scalable vector graphics, is a vector graphic system and is a required feature of the ePUB publishing standard. It is the only vector graphics system with any standardized use in the eBook field. Scalable Vector Graphics (SVG) is a text-based graphics language that describes images with vector shapes, text, and embedded raster graphics.
SWF, Shockwave Flash, currently functions as the dominant format for displaying "animated" vector graphics on the Web. It can also support static vector images of course. Currently this format is only used on eBooks that have LCD screens.
HEIF (High Efficiency Image Format) is a standard image format created by the MPEG group for Video encoding (HEVC) but is suitable for still pictures as well. It is featured in the latest Apple products.
- Netpbm a Unix collection of formats.
- PCX early bitmap format
- DXF exchange format of vector CAD files.
- EXR OpenEXR is a high dynamic-range (HDR) image file format.
- HDR High Dynamic Range Image extends the color/brightness spectrum.
- WebP's lossy image format is based on the intra-frame coding of the VP8 video format.
- FLIF Free Lossless Image Format. Best compression of all lossless formats.
- The IrfanView wiki page contains a fairly exhaustive list of graphics format.
- Native formats defines some internal proprietary formats for graphics. | https://wiki.mobileread.com/wiki/Graphics | 18 |
Electromagnetic radiation, also known as “light,” is pretty handy for astronomers. They can use it to directly and indirectly observe stars, nebulae, planets and more. But as you probably know, light can act like a wave, creating interference patterns that teach us even more about the Universe.
Forty years ago, Canadian physicist Bill Unruh made a surprising prediction regarding quantum field theory. Known as the Unruh effect, his theory predicted that an accelerating observer would be bathed in blackbody radiation, whereas an inertial observer would be exposed to none. What better way to mark the 40th anniversary of this theory than to consider how it could affect human beings attempting relativistic space travel?
Such was the intent behind a new study by a team of researchers from Sao Paulo, Brazil. In essence, they consider how the Unruh effect could be confirmed using a simple experiment that relies on existing technology. Not only would this experiment prove once and for all if the Unruh effect is real, it could also help us plan for the day when interstellar travel becomes a reality.
To put it in layman’s terms, Einstein’s Theory of Relativity states that measurements of time and space depend upon the inertial reference frame of the observer. Consistent with this is the prediction that an observer traveling at a constant speed through empty vacuum will find that the temperature of said vacuum is absolute zero. But if they were to begin to accelerate, the empty space would appear to take on a temperature above absolute zero, growing hotter the harder they accelerate.
This is what William Unruh – a theorist from the University of British Columbia (UBC), Vancouver – asserted in 1976. According to his theory, an observer accelerating through space would be subject to a “thermal bath” – i.e. photons and other particles – which would intensify the more they accelerated. Unfortunately, no one has ever been able to measure this effect directly, since no spacecraft exists that can sustain the kind of extreme accelerations necessary.
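To get a sense of why no spacecraft can do the job, note that the predicted Unruh temperature scales linearly with proper acceleration: T = ħa/(2πck_B). Here is a minimal back-of-the-envelope sketch – the formula is the standard one from Unruh’s work, but the code and the sample accelerations are purely illustrative:

```python
# Unruh temperature of the thermal bath seen by a uniformly
# accelerating observer: T = hbar * a / (2 * pi * c * k_B).
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def unruh_temperature(a):
    """Temperature (K) of the Unruh bath at proper acceleration a (m/s^2)."""
    return hbar * a / (2 * math.pi * c * k_B)

print(unruh_temperature(9.81))    # ~4e-20 K at 1 g -- hopelessly small
print(unruh_temperature(2.5e20))  # ~1 K needs ~2.5e20 m/s^2
```

At one Earth gravity the bath sits some twenty orders of magnitude below a single kelvin, which is why the proposed test turns to electrons in particle accelerators rather than to spacecraft.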
For the sake of their study – which was recently published in the journal Physical Review Letters under the title “Virtual observation of the Unruh effect” – the research team proposed a simple experiment to test for the Unruh effect. Led by Gabriel Cozzella of the Institute of Theoretical Physics (IFT) at Sao Paulo State University, they claim that this experiment would settle the issue by measuring an already-understood electromagnetic phenomenon.
Essentially, they argue that it would be possible to detect the Unruh effect by measuring what is known as Larmor radiation. This refers to the electromagnetic energy that is radiated away from charged particles (such as electrons, protons or ions) when they accelerate. As they state in their study:
“A more promising strategy consists of seeking for fingerprints of the Unruh effect in the radiation emitted by accelerated charges. Accelerated charges should back react due to radiation emission, quivering accordingly. Such a quivering would be naturally interpreted by Rindler observers as a consequence of the charge interaction with the photons of the Unruh thermal bath.”
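For reference, the classical power radiated by an accelerating charge is given by the Larmor formula, P = q²a²/(6πε₀c³) – standard textbook electrodynamics rather than anything specific to this paper. A minimal sketch, with a hypothetical acceleration value chosen only to show the scale:

```python
# Larmor formula: power radiated by an accelerating point charge,
#   P = q^2 * a^2 / (6 * pi * eps0 * c^3)
import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 2.99792458e8         # speed of light, m/s

def larmor_power(q, a):
    """Radiated power (W) for a charge q (C) with acceleration a (m/s^2)."""
    return q**2 * a**2 / (6 * math.pi * eps0 * c**3)

print(larmor_power(e, 1e20))  # ~5.7e-14 W for an electron at 1e20 m/s^2
```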
As they describe in their paper, this would consist of monitoring the light emitted by electrons within two separate reference frames. In the first, known as the “accelerating frame”, electrons are fired laterally across a magnetic field, which would cause the electrons to move in a circular pattern. In the second, the “laboratory frame”, a vertical field is applied to accelerate the electrons upwards, causing them to follow a corkscrew-like path.
In the accelerating frame, Cozzella and his colleagues assume that the electrons would encounter the “fog of photons”, where they both radiate and emit them. In the laboratory frame, the electrons would heat up once vertical acceleration was applied, causing them to show an excess of long-wavelength photons. However, this would be dependent on the “fog” existing in the accelerated frame to begin with.
In short, this experiment offers a simple test that could determine whether or not the Unruh effect exists – something that has been in dispute ever since it was proposed. One of the beauties of the proposed experiment is that it could be conducted using particle accelerators and electromagnets that are currently available.
On the other side of the debate are those who claim that the Unruh effect is due to a mathematical error made by Unruh and his colleagues. For those individuals, this experiment is just as useful, because a null result would effectively debunk the Unruh effect once and for all.
“We have proposed a simple experiment where the presence of the Unruh thermal bath is codified in the Larmor radiation emitted from an accelerated charge,” they state. “Then, we carried out a straightforward classical-electrodynamics calculation (checked by a quantum-field-theory one) to confirm it by ourselves. Unless one challenges classical electrodynamics, our results must be virtually considered as an observation of the Unruh effect.”
If the experiments should prove successful, and the Unruh effect is proven to exist, it would certainly have consequences for any future deep-space missions that rely on advanced propulsion systems. Between Project Starshot and any proposed mission that would involve sending a crew to another star system, the added effects of a “fog of photons” and a “thermal bath” will need to be factored in.
Each new probe we launch into space follows a finely-tuned, predetermined trajectory that opens up a new avenue of understanding into our solar system and our universe. The results from each probe shape the objectives of the next. Each probe is built with maximum science in mind, and is designed to answer crucial questions and build our understanding of astronomy, cosmology, astrophysics, and planetary studies.
The Juno probe is no different. When it arrives at Jupiter in July 2016, it will begin working on a checklist of scientific questions about Jupiter.
But there’s a problem.
Jupiter is enormous. And at its heart is a chunk of ice and rock, or so we think. Surrounding that is an enormous region of liquid metallic hydrogen. This core region is 10 to 20 times as massive as Earth, and it’s rotating. As it rotates, it generates a powerful magnetic field that draws in particles from the Sun, then whips them into a near-light-speed frenzy. This whirlwind of radiation devastates anything that gets too close.
Enter the tiny Juno spacecraft, about the size of a bus. Juno has to get close to Jupiter to do its work—within 5,000km (3,100 miles) above the cloud tops—and though it’s designed to weave its way carefully past Jupiter’s most dangerous radiation fields, its orbits will still expose it to the paper-shredder effect of those fields. There’s no way around it.
Juno Project Scientist Steve Levin and Dave Stevenson from Caltech explain Juno’s orbit pattern in a short video.
The most vulnerable part of Juno is the sensitive electronics that are the heart and brains of the spacecraft. Jupiter’s extreme radiation would quickly destroy Juno’s sensitive systems, and the Juno designers had to come up with a way to protect those components while Juno does its work. The solution? The titanium vault.
All kinds of materials and methods have been employed to protect spacecraft electronics, but this is the first time that titanium has been tried. Titanium is renowned for its light weight and its strength. It’s used in all kinds of demanding manufacturing applications here on Earth.
The titanium vault won’t protect Juno’s heart forever. In fact, some of the components are not expected to last the length of the mission. The radiation will slowly degrade the titanium, as high velocity particles punch microscopic holes in it. Bit by bit, radiation will perforate the vault, and the electronics within will be exposed. And as the electronic systems stop functioning, one by one, Juno will slowly become brain-dead, before plunging purposefully into Jupiter.
But Juno won’t die in vain. It will answer important questions about Jupiter’s core, atmospheric composition, planetary evolution, magnetosphere, polar auroras, gravitational field, and more. The spacecraft’s onboard camera, the Junocam, also promises to capture stunning images of Jupiter. But beyond all that, Juno—and its titanium vault—will show us how good we are at protecting spacecraft from extreme radiation.
Juno is still over 160 million km (100 million miles) from Jupiter and is fully functional. Once it arrives, it will insert itself into orbit and begin to do its job. How well it can do its job, and for how long, will depend on how effectively the titanium vault shields Juno’s heart.
Radio waves are electromagnetic waves, or electromagnetic radiation, with wavelengths of about a centimeter or longer (the boundary is rather fuzzy; microwaves and terahertz radiation are sometimes considered to be radio waves; these have wavelengths as short as a tenth of a millimeter or so). In other words, radio waves are electromagnetic radiation at the lowest energy end of the electromagnetic spectrum.
Radio waves were predicted two decades or so before they were generated and detected; in fact, the historical story is one of the great triumphs of modern science.
Many years – centuries even – of work on electrical and magnetic phenomena, by many scientists, culminated in the work of James Clerk Maxwell. In 1865 he published a set of equations which describe everything known about electricity and magnetism (electromagnetism) up till that time (the next major advance was the work of Planck and Einstein – among others – some four decades or so later, involving the discovery of photons, or quantized electromagnetic radiation). Maxwell’s equations, as they are now called, predicted that there should be a kind of wave of interacting electrical and magnetic fields, which is self-propagating, and which travels at the speed of light.
In 1887, Heinrich Hertz created radio waves in his lab, and detected them after they’d travelled a short distance … exactly as Maxwell had predicted! It wasn’t long before practical applications of this discovery were developed, leading to satellite TV, cell phones, GPS, radar, wireless home networks, and much, much, more.
For Universe Today readers, the discovery of radio waves led to radio astronomy. Interestingly, theory again preceded observation … several scientists – Planck among them – predicted that the Sun should emit radio waves (be a source of radio waves), but the Sun's radio emission was not detected until 1942 (by Hey, in England), nearly a decade after celestial radio waves were detected and studied by Jansky (and Reber, among others).
“Gamma wave” is not, strictly speaking, a standard scientific term … at least not in physics, and this is rather curious (the standard physics term is “gamma ray”).
The part of the electromagnetic spectrum 'to the left' (high energy/short wavelength/high frequency) is called the gamma ray region; the word 'ray' was in common use at the time of the discovery of this form of radiation ('cathode rays', 'x-rays', and so on); by the time it was discovered that gamma rays (and x-rays) are electromagnetic radiation (and that cathode rays, beta radiation, and alpha radiation are not), the word 'ray' was well-entrenched. On the other hand, radio waves were discovered as a result of a new theory of electromagnetism … Maxwell's equations predict the existence of electromagnetic waves (and that's exactly what Hertz discovered, in 1887).
Paul Villard is credited with having discovered gamma radiation, in 1900, though it was Rutherford who gave them the name "gamma rays", in 1903 (Rutherford had discovered alpha and beta rays in 1899). So when, and how, was it discovered that gamma rays are, in fact, gamma waves (just like radio waves, only with much, much, much shorter wavelengths)? In 1914, when Ernest Rutherford and Edward Andrade used crystal diffraction to measure the wavelength of gamma rays emitted by Radium B (a radioactive isotope of lead, 214Pb) and Radium C (a radioactive isotope of bismuth, 214Bi).
We usually think of electromagnetic radiation in terms of photons, a term which arises from quantum physics; for astronomy (which is almost entirely based on electromagnetic radiation/photons), however, instruments and detectors are nearly always more easily understood in terms of whether they detect waves (e.g. radio receivers) or particles (e.g. scintillators). In gamma ray astronomy, in all instruments used to date, the particle nature of gamma rays is key (for direct detection anyway; Cherenkov telescopes work quite differently!). Can the circle be closed? Is it possible to use crystal diffraction (or something similar) – as Rutherford and Andrade did – and the wave nature of gamma rays, to build gamma ray astronomical instruments? Yes … and the next generation of gamma ray observatories might include just such instruments!
A circle (black), which is measured by its circumference (C), diameter (D) in cyan, and radius (R) in red; its centre (O) is in magenta.
A circle is a simple closed shape. It is the set of all points in a plane that are at a given distance from a given point, the centre; equivalently it is the curve traced out by a point that moves so that its distance from a given point is constant. The distance between any of the points and the centre is called the radius. This article is about circles in Euclidean geometry, and, in particular, the Euclidean plane, except where otherwise noted.
A circle is a simple closed curve that divides the plane into two regions: an interior and an exterior. In everyday use, the term "circle" may be used interchangeably to refer to either the boundary of the figure, or to the whole figure including its interior; in strict technical usage, the circle is only the boundary and the whole figure is called a disc.
A circle may also be defined as a special kind of ellipse in which the two foci are coincident and the eccentricity is 0, or the two-dimensional shape enclosing the most area per unit perimeter squared, using calculus of variations.
- 1 Euclid's definition
- 2 Terminology
- 3 History
- 4 Analytic results
- 5 Properties
- 6 Compass and straightedge constructions
- 7 Circle of Apollonius
- 8 Circles inscribed in or circumscribed about other figures
- 9 Circle as limiting case of other figures
- 10 Circles in other p-norms
- 11 Squaring the circle
- 12 See also
- 13 References
- 14 Further reading
- 15 External links
A circle is a plane figure bounded by one line, and such that all right lines drawn from a certain point within it to the bounding line, are equal. The bounding line is called its circumference and the point, its centre.
- Annulus: the ring-shaped object, the region bounded by two concentric circles.
- Arc: any connected part of the circle.
- Centre: the point equidistant from the points on the circle.
- Chord: a line segment whose endpoints lie on the circle.
- Circumference: the length of one circuit along the circle, or the distance around the circle.
- Diameter: a line segment whose endpoints lie on the circle and which passes through the centre; or the length of such a line segment, which is the largest distance between any two points on the circle. It is a special case of a chord, namely the longest chord, and it is twice the radius.
- Disc: the region of the plane bounded by a circle.
- Lens: the intersection of two discs.
- Passant: a coplanar straight line that does not touch the circle.
- Radius: a line segment joining the centre of the circle to any point on the circle itself; or the length of such a segment, which is half a diameter.
- Sector: a region bounded by two radii and an arc lying between the radii.
- Segment: a region, not containing the centre, bounded by a chord and an arc lying between the chord's endpoints.
- Secant: an extended chord, a coplanar straight line cutting the circle at two points.
- Semicircle: an arc that extends from one of a diameter's endpoints to the other. In non-technical common usage it may mean the diameter, arc, and its interior, a two dimensional region, that is technically called a half-disc. A half-disc is a special case of a segment, namely the largest one.
- Tangent: a coplanar straight line that touches the circle at a single point.
The word circle derives from the Greek κίρκος/κύκλος (kirkos/kuklos), itself a metathesis of the Homeric Greek κρίκος (krikos), meaning "hoop" or "ring". The origins of the words circus and circuit are closely related.
The circle has been known since before the beginning of recorded history. Natural circles would have been observed, such as the Moon, Sun, and a short plant stalk blowing in the wind on sand, which forms a circle shape in the sand. The circle is the basis for the wheel, which, with related inventions such as gears, makes much of modern machinery possible. In mathematics, the study of the circle has helped inspire the development of geometry, astronomy and calculus.
Early science, particularly geometry and astrology and astronomy, was connected to the divine for most medieval scholars, and many believed that there was something intrinsically "divine" or "perfect" that could be found in circles.
Some highlights in the history of the circle are:
- 1700 BCE – The Rhind papyrus gives a method to find the area of a circular field. The result corresponds to 256/81 (3.16049...) as an approximate value of π.
- 300 BCE – Book 3 of Euclid's Elements deals with the properties of circles.
- In Plato's Seventh Letter there is a detailed definition and explanation of the circle. Plato explains the perfect circle, and how it is different from any drawing, words, definition or explanation.
- 1882 CE – Lindemann proves that π is transcendental, effectively settling the millennia-old problem of squaring the circle.
Length of circumference
The ratio of a circle's circumference to its diameter is π (pi), an irrational constant approximately equal to 3.141592654. Thus the length of the circumference C is related to the radius r and diameter d by:

C = 2πr = πd.
Area enclosed

As proved by Archimedes, in his Measurement of a Circle, the area enclosed by a circle is equal to that of a triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, which comes to π multiplied by the radius squared:

A = πr².
Equivalently, denoting diameter by d,

A = πd²/4 ≈ 0.7854 d²,
that is, approximately 79% of the circumscribing square (whose side is of length d).
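Both formulas are easy to check numerically. A minimal Python sketch (the helper names are illustrative, not from any particular library):

```python
import math

def circle_circumference(r: float) -> float:
    """Circumference C = 2*pi*r."""
    return 2 * math.pi * r

def circle_area(r: float) -> float:
    """Area A = pi * r**2."""
    return math.pi * r ** 2

d = 2.0                           # side of the circumscribing square = diameter
r = d / 2
print(circle_circumference(r))    # ~6.283
print(circle_area(r) / d ** 2)    # ~0.785, i.e. about 79% of the square
```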
Cartesian coordinates

In an x–y Cartesian coordinate system, the circle with centre coordinates (a, b) and radius r is the set of all points (x, y) such that

(x − a)² + (y − b)² = r².

This equation, known as the equation of the circle, follows from the Pythagorean theorem applied to any point on the circle: the radius is the hypotenuse of a right-angled triangle whose other sides are of length |x − a| and |y − b|. If the circle is centred at the origin (0, 0), then the equation simplifies to

x² + y² = r².
The equation can be written in parametric form using the trigonometric functions sine and cosine as

x = a + r cos t, y = b + r sin t,

where t is a parametric variable in the range 0 to 2π, interpreted geometrically as the angle that the ray from (a, b) to (x, y) makes with the positive x-axis. An alternative parametrisation of the circle is:

x = a + r · 2t/(1 + t²), y = b + r · (1 − t²)/(1 + t²).
In this parameterisation, the ratio of t to r can be interpreted geometrically as the stereographic projection of the line passing through the centre parallel to the x-axis (see Tangent half-angle substitution). However, this parameterisation works only if t is made to range not only through all reals but also to a point at infinity; otherwise, the bottom-most point of the circle would be omitted.
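A quick numerical check that this rational parametrisation really stays on the circle; the centre, radius, and sample parameter values below are arbitrary:

```python
import math

a, b, r = 1.0, -2.0, 3.0   # centre (a, b) and radius r, chosen arbitrarily

for t in (-100.0, -1.0, 0.0, 0.5, 1.0, 100.0):
    x = a + r * 2 * t / (1 + t**2)
    y = b + r * (1 - t**2) / (1 + t**2)
    # every generated point satisfies the circle equation (x-a)^2 + (y-b)^2 = r^2
    assert math.isclose((x - a)**2 + (y - b)**2, r**2)
print("all sample points lie on the circle")
```

As t grows without bound the point approaches, but never reaches, the omitted bottom-most point (a, b − r), matching the remark about the point at infinity.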
The equation of the circle determined by three points (x1, y1), (x2, y2), (x3, y3) not on a line is obtained by a conversion of the 3-point form of a circle's equation:

((x − x1)(x − x2) + (y − y1)(y − y2)) / ((y − y1)(x − x2) − (y − y2)(x − x1))
= ((x3 − x1)(x3 − x2) + (y3 − y1)(y3 − y2)) / ((y3 − y1)(x3 − x2) − (y3 − y2)(x3 − x1)).
It can be proven that a conic section is a circle exactly when it contains (when extended to the complex projective plane) the points I(1: i: 0) and J(1: −i: 0). These points are called the circular points at infinity.
In polar coordinates, the equation of a circle is:

r² − 2 r r0 cos(θ − φ) + r0² = a²,
where a is the radius of the circle, (r, θ) is the polar coordinate of a generic point on the circle, and (r0, φ) is the polar coordinate of the centre of the circle (i.e., r0 is the distance from the origin to the centre of the circle, and φ is the anticlockwise angle from the positive x-axis to the line connecting the origin to the centre of the circle). For a circle centred on the origin, i.e. r0 = 0, this reduces to simply r = a. When r0 = a, or when the origin lies on the circle, the equation becomes

r = 2a cos(θ − φ).
In the general case, the equation can be solved for r, giving

r = r0 cos(θ − φ) ± √(a² − r0² sin²(θ − φ)).
Note that without the ± sign, the equation would in some cases describe only half a circle.
In the complex plane, a circle with a centre at c and radius r has the equation:

|z − c| = r.
In parametric form, this can be written:

z = r e^(it) + c.
The slightly generalised equation

p z z̄ + g z + ḡ z̄ = q

for real p, q and complex g (where z̄ denotes the complex conjugate of z) is sometimes called a generalised circle. This becomes the above equation for a circle with p = 1, g = −c̄ and q = r² − |c|², since |z − c|² = z z̄ − c̄ z − c z̄ + c c̄. Not all generalised circles are actually circles: a generalised circle is either a (true) circle or a line.
Tangent lines

The tangent line through a point P on the circle is perpendicular to the diameter passing through P. If P = (x1, y1) and the circle has centre (a, b) and radius r, then the tangent line is perpendicular to the line from (a, b) to (x1, y1), so it has the form (x1 − a)x + (y1 − b)y = c. Evaluating at (x1, y1) determines the value of c, and the result is that the equation of the tangent is

(x1 − a)(x − a) + (y1 − b)(y − b) = r².
If y1 ≠ b then the slope of this line is

dy/dx = −(x1 − a)/(y1 − b).
This can also be found using implicit differentiation.
When the centre of the circle is at the origin, then the equation of the tangent line becomes

x1 x + y1 y = r²,

and its slope is

dy/dx = −x1/y1.
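The tangent formulas can be verified in a couple of lines: the sketch below computes the tangent slope at a point on a circle and checks that the tangent is perpendicular to the radius (illustrative values, no external library):

```python
import math

a, b, r = 0.0, 0.0, 5.0
x1, y1 = 3.0, 4.0                      # a point on the circle: 3**2 + 4**2 == 5**2

slope_tangent = -(x1 - a) / (y1 - b)   # slope of the tangent at (x1, y1)
slope_radius = (y1 - b) / (x1 - a)     # slope of the radius to (x1, y1)

# perpendicular lines have slopes whose product is -1
assert math.isclose(slope_tangent * slope_radius, -1.0)
print(slope_tangent)                   # -0.75
```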
Properties

- The circle is the shape with the largest area for a given length of perimeter. (See Isoperimetric inequality.)
- The circle is a highly symmetric shape: every line through the centre forms a line of reflection symmetry and it has rotational symmetry around the centre for every angle. Its symmetry group is the orthogonal group O(2,R). The group of rotations alone is the circle group T.
- All circles are similar.
- The circle that is centred at the origin with radius 1 is called the unit circle.
- Through any three points, not all on the same line, there lies a unique circle. In Cartesian coordinates, it is possible to give explicit formulae for the coordinates of the centre of the circle and the radius in terms of the coordinates of the three given points. See circumcircle.
- Chords are equidistant from the centre of a circle if and only if they are equal in length.
- The perpendicular bisector of a chord passes through the centre of a circle; equivalent statements stemming from the uniqueness of the perpendicular bisector are:
  - A perpendicular line from the centre of a circle bisects the chord.
  - The line segment through the centre bisecting a chord is perpendicular to the chord.
- If a central angle and an inscribed angle of a circle are subtended by the same chord and on the same side of the chord, then the central angle is twice the inscribed angle.
- If two angles are inscribed on the same chord and on the same side of the chord, then they are equal.
- If two angles are inscribed on the same chord and on opposite sides of the chord, then they are supplementary.
- An inscribed angle subtended by a diameter is a right angle (see Thales' theorem).
- The diameter is the longest chord of the circle.
- Among all the circles with a chord AB in common, the circle with minimal radius is the one with diameter AB.
- If the intersection of any two chords divides one chord into lengths a and b and divides the other chord into lengths c and d, then ab = cd.
- If the intersection of any two perpendicular chords divides one chord into lengths a and b and divides the other chord into lengths c and d, then a² + b² + c² + d² equals the square of the diameter.
- The sum of the squared lengths of any two chords intersecting at right angles at a given point is the same as that of any other two perpendicular chords intersecting at the same point, and is given by 8r² − 4p² (where r is the circle's radius and p is the distance from the centre point to the point of intersection).
- The distance from a point on the circle to a given chord times the diameter of the circle equals the product of the distances from the point to the ends of the chord (p. 71).
- A line drawn perpendicular to a radius through the end point of the radius lying on the circle is a tangent to the circle.
- A line drawn perpendicular to a tangent through the point of contact with a circle passes through the centre of the circle.
- Two tangents can always be drawn to a circle from any point outside the circle, and these tangents are equal in length.
- If a tangent at A and a tangent at B intersect at the exterior point P, then denoting the centre as O, the angles ∠BOA and ∠BPA are supplementary.
- If AD is tangent to the circle at A and if AQ is a chord of the circle, then ∠DAQ = 1/2 arc(AQ).
- The chord theorem states that if two chords, CD and EB, intersect at A, then AC × AD = AB × AE.
- If two secants, AE and AD, also cut the circle at B and C respectively, then AC × AD = AB × AE. (Corollary of the chord theorem.)
- A tangent can be considered a limiting case of a secant whose ends are coincident. If a tangent from an external point A meets the circle at F and a secant from the external point A meets the circle at C and D respectively, then AF² = AC × AD. (Tangent-secant theorem.)
- The angle between a chord and the tangent at one of its endpoints is equal to one half the angle subtended at the centre of the circle, on the opposite side of the chord (Tangent Chord Angle).
- If the angle subtended by the chord at the centre is 90 degrees then ℓ = r√2, where ℓ is the length of the chord and r is the radius of the circle.
- If two secants are inscribed in the circle, then the measurement of angle A is equal to one half the difference of the measurements of the enclosed arcs (DE and BC). That is, 2∠CAB = ∠DOE − ∠BOC, where O is the centre of the circle. This is the secant-secant theorem.
An inscribed angle (examples are the blue and green angles in the figure) is exactly half the corresponding central angle (red). Hence, all inscribed angles that subtend the same arc (pink) are equal. Angles inscribed on the arc (brown) are supplementary. In particular, every inscribed angle that subtends a diameter is a right angle (since the central angle is 180 degrees).
- The sagitta (also known as the versine) is a line segment drawn perpendicular to a chord, between the midpoint of that chord and the arc of the circle.
- Given the length y of a chord, and the length x of the sagitta, the Pythagorean theorem can be used to calculate the radius of the unique circle that will fit around the two lines:

r = y²/(8x) + x/2.
Another proof of this result, which relies only on two chord properties given above, is as follows. Given a chord of length y and with sagitta of length x, since the sagitta intersects the midpoint of the chord, we know it is part of a diameter of the circle. Since the diameter is twice the radius, the "missing" part of the diameter is (2r − x) in length. Using the fact that one part of one chord times the other part is equal to the same product taken along a chord intersecting the first chord, we find that (2r − x)x = (y/2)². Solving for r, we find the required result.
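The radius formula r = y²/(8x) + x/2 is easy to sanity-check against a known circle; a short sketch (the function name is made up for illustration):

```python
import math

def radius_from_chord_and_sagitta(y: float, x: float) -> float:
    """y: chord length, x: sagitta (height of the arc above the chord)."""
    return y**2 / (8 * x) + x / 2

# In a circle of radius 5, a chord at distance 3 from the centre has
# half-length 4 (a 3-4-5 right triangle), so y = 8 and x = 5 - 3 = 2.
assert math.isclose(radius_from_chord_and_sagitta(8.0, 2.0), 5.0)
print("sagitta formula checks out")
```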
Compass and straightedge constructions
There are many compass-and-straightedge constructions resulting in circles.
The simplest and most basic is the construction given the centre of the circle and a point on the circle. Place the fixed leg of the compass on the centre point, the movable leg on the point on the circle and rotate the compass.
Construct a circle with a given diameter
- Construct the midpoint M of the diameter.
- Construct the circle with centre M passing through one of the endpoints of the diameter (it will also pass through the other endpoint).
Construct a circle through 3 noncollinear points
- Name the points P, Q and R,
- Construct the perpendicular bisector of the segment PQ.
- Construct the perpendicular bisector of the segment PR.
- Label the point of intersection of these two perpendicular bisectors M. (They meet because the points are not collinear).
- Construct the circle with centre M passing through one of the points P, Q or R (it will also pass through the other two points).
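The same construction can be carried out numerically: intersecting the two perpendicular bisectors amounts to solving a small linear system. A minimal sketch (the function name is illustrative):

```python
def circumcircle(p, q, r):
    """Centre and radius of the circle through three noncollinear points."""
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    # Each condition |M - P|^2 == |M - Q|^2 is linear in the centre M = (ux, uy).
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    radius = ((ax - ux)**2 + (ay - uy)**2) ** 0.5
    return (ux, uy), radius

print(circumcircle((0, 0), (4, 0), (0, 3)))   # ((2.0, 1.5), 2.5)
```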
Circle of Apollonius
Apollonius of Perga showed that a circle may also be defined as the set of points in a plane having a constant ratio (other than 1) of distances to two fixed foci, A and B. (The set of points where the distances are equal is the perpendicular bisector of segment AB, a line.) That circle is sometimes said to be drawn about two points.
The proof is in two parts. First, one must prove that, given two foci A and B and a ratio of distances, any point P satisfying the ratio of distances must fall on a particular circle. Let C be another point, also satisfying the ratio and lying on segment AB. By the angle bisector theorem the line segment PC will bisect the interior angle APB, since the segments are similar:

AP/BP = AC/BC.
Analogously, a line segment PD through some point D on AB extended bisects the corresponding exterior angle BPQ where Q is on AP extended. Since the interior and exterior angles sum to 180 degrees, the angle CPD is exactly 90 degrees, i.e., a right angle. The set of points P such that angle CPD is a right angle forms a circle, of which CD is a diameter.
Second, see the cited reference (p. 15) for a proof that every point on the indicated circle satisfies the given ratio.
A closely related property of circles involves the geometry of the cross-ratio of points in the complex plane. If A, B, and C are as above, then the circle of Apollonius for these three points is the collection of points P for which the absolute value of the cross-ratio is equal to one:

|[A, B; C, P]| = 1.
Stated another way, P is a point on the circle of Apollonius if and only if the cross-ratio [A,B;C,P] is on the unit circle in the complex plane.
If C is the midpoint of the segment AB, then the collection of points P satisfying the Apollonius condition

|[A, B; C, P]| = 1

is not a circle, but rather a line.
Thus, if A, B, and C are given distinct points in the plane, then the locus of points P satisfying the above equation is called a "generalised circle." It may either be a true circle or a line. In this sense a line is a generalised circle of infinite radius.
Circles inscribed in or circumscribed about other figures
A tangential polygon, such as a tangential quadrilateral, is any convex polygon within which a circle can be inscribed that is tangent to each side of the polygon. Every regular polygon and every triangle is a tangential polygon.
A cyclic polygon is any convex polygon about which a circle can be circumscribed, passing through each vertex. A well-studied example is the cyclic quadrilateral. Every regular polygon and every triangle is a cyclic polygon. A polygon that is both cyclic and tangential is called a bicentric polygon.
A hypocycloid is a curve that is inscribed in a given circle by tracing a fixed point on a smaller circle that rolls within and tangent to the given circle.
Circle as limiting case of other figures
The circle can be viewed as a limiting case of each of various other figures:
- A Cartesian oval is a set of points such that a weighted sum of the distances from any of its points to two fixed points (foci) is a constant. An ellipse is the case in which the weights are equal. A circle is an ellipse with an eccentricity of zero, meaning that the two foci coincide with each other as the centre of the circle. A circle is also a different special case of a Cartesian oval in which one of the weights is zero.
- A superellipse has an equation of the form |x/a|ⁿ + |y/b|ⁿ = 1 for positive a, b, and n. A supercircle has b = a. A circle is the special case of a supercircle in which n = 2.
- A Cassini oval is a set of points such that the product of the distances from any of its points to two fixed points is a constant. When the two fixed points coincide, a circle results.
- A curve of constant width is a figure whose width, defined as the perpendicular distance between two distinct parallel lines each intersecting its boundary in a single point, is the same regardless of the direction of those two parallel lines. The circle is the simplest example of this type of figure.
Circles in other p-norms
Defining a circle as the set of points with a fixed distance from a point, different shapes can be considered circles under different definitions of distance. In p-norm, distance is determined by

‖x‖p = (|x1|^p + |x2|^p + ⋯ + |xn|^p)^(1/p).
In Euclidean geometry, p = 2, giving the familiar

‖x‖2 = √(x1² + x2² + ⋯ + xn²).
In taxicab geometry, p = 1. Taxicab circles are squares with sides oriented at a 45° angle to the coordinate axes. While each side would have length √2 r using a Euclidean metric, where r is the circle's radius, its length in taxicab geometry is 2r, so a circle's circumference is 8r and the value of a geometric analog to π is 4 in this geometry. The formula for the unit circle in taxicab geometry is |x| + |y| = 1 in Cartesian coordinates and

r = 1/(|sin θ| + |cos θ|)

in polar coordinates.
A circle of radius 1 (using this distance) is the von Neumann neighborhood of its center.
A circle of radius r for the Chebyshev distance (L∞ metric) on a plane is also a square with side length 2r parallel to the coordinate axes, so planar Chebyshev distance can be viewed as equivalent by rotation and scaling to planar taxicab distance. However, this equivalence between L1 and L∞ metrics does not generalize to higher dimensions.
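The claim that the taxicab analogue of π is 4 can be verified by measuring the perimeter of the unit taxicab circle in the same L1 metric; a short sketch:

```python
# Perimeter of the L1 unit "circle" |x| + |y| = 1, measured with L1 distance.
# One side runs from (1, 0) to (0, 1): x = 1 - s, y = s for s in [0, 1].
n = 1000
side_length = 0.0
for i in range(n):
    s0, s1 = i / n, (i + 1) / n
    dx = abs((1 - s1) - (1 - s0))
    dy = abs(s1 - s0)
    side_length += dx + dy        # L1 length of each small step
perimeter = 4 * side_length       # four congruent sides
print(perimeter / 2)              # C / (2r) with r = 1 gives 4.0, the taxicab "pi"
```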
Squaring the circle
Squaring the circle is the problem, proposed by ancient geometers, of constructing a square with the same area as a given circle by using only a finite number of steps with compass and straightedge.

In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi (π) is a transcendental number, rather than an algebraic irrational number; that is, it is not the root of any polynomial with rational coefficients.
See also

Specially named circles: of a triangle, of certain quadrilaterals, of certain polygons, of a conic section, of a sphere, of a torus.
References
- krikos Archived 2013-11-06 at the Wayback Machine., Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus
- Arthur Koestler, The Sleepwalkers: A History of Man's Changing Vision of the Universe (1959)
- Proclus, The Six Books of Proclus, the Platonic Successor, on the Theology of Plato Archived 2017-01-23 at the Wayback Machine. Tr. Thomas Taylor (1816) Vol. 2, Ch. 2, "Of Plato"
- Chronology for 30000 BC to 500 BC Archived 2008-03-22 at the Wayback Machine.. History.mcs.st-andrews.ac.uk. Retrieved on 2012-05-03.
- Squaring the circle Archived 2008-06-24 at the Wayback Machine.. History.mcs.st-andrews.ac.uk. Retrieved on 2012-05-03.
- Katz, Victor J. (1998), A History of Mathematics / An Introduction (2nd ed.), Addison Wesley Longman, p. 108, ISBN 978-0-321-01618-8
- Posamentier and Salkind, Challenging Problems in Geometry, Dover, 2nd edition, 1996: pp. 104–105, #4–23.
- College Mathematics Journal 29(4), September 1998, p. 331, problem 635.
- Johnson, Roger A., Advanced Euclidean Geometry, Dover Publ., 2007.
- Harkness, James (1898). Introduction to the theory of analytic functions. London, New York: Macmillan and Co. p. 30.
- Ogilvy, C. Stanley, Excursions in Geometry, Dover, 1969, 14–17.
- Altshiller-Court, Nathan, College Geometry, Dover, 2007 (orig. 1952).
- Incircle – from Wolfram MathWorld Archived 2012-01-21 at the Wayback Machine.. Mathworld.wolfram.com (2012-04-26). Retrieved on 2012-05-03.
- Circumcircle – from Wolfram MathWorld Archived 2012-01-20 at the Wayback Machine.. Mathworld.wolfram.com (2012-04-26). Retrieved on 2012-05-03.
- Tangential Polygon – from Wolfram MathWorld Archived 2013-09-03 at the Wayback Machine.. Mathworld.wolfram.com (2012-04-26). Retrieved on 2012-05-03.
- Pedoe, Dan (1988). Geometry: a comprehensive course. Dover.
- "Circle" in The MacTutor History of Mathematics archive
External links
- Hazewinkel, Michiel, ed. (2001) , "Circle", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
- Circle (PlanetMath.org website)
- Weisstein, Eric W. "Circle". MathWorld.
- Interactive Java applets for the properties of and elementary constructions involving circles.
- Interactive Standard Form Equation of Circle Click and drag points to see standard form equation in action
- Munching on Circles at cut-the-knot
Newton's laws of motion
Newton's laws of motion are three physical laws that, together, laid the foundation for classical mechanics. They describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. More precisely, the first law defines the force qualitatively, the second law offers a quantitative measure of the force, and the third asserts that a single isolated force doesn't exist. These three laws have been expressed in several ways, over nearly three centuries, and can be summarised as follows:
|First law:||In an inertial frame of reference, an object either remains at rest or continues to move at a constant velocity, unless acted upon by a force.|
|Second law:||In an inertial reference frame, the vector sum of the forces F on an object is equal to the mass m of that object multiplied by the acceleration a of the object: F = ma. (It is assumed here that the mass m is constant – see below.)|
|Third law:||When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.|
The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687. Newton used them to explain and investigate the motion of many physical objects and systems. For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler's laws of planetary motion.
Newton's laws are applied to objects which are idealised as single point masses, in the sense that the size and shape of the object's body are neglected to focus on its motion more easily. This can be done when the object is small compared to the distances involved in its analysis, or the deformation and rotation of the body are of no importance. In this way, even a planet can be idealised as a particle for analysis of its orbital motion around a star.
In their original form, Newton's laws of motion are not adequate to characterise the motion of rigid bodies and deformable bodies. Leonhard Euler in 1750 introduced a generalisation of Newton's laws of motion for rigid bodies called Euler's laws of motion, later applied as well for deformable bodies assumed as a continuum. If a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's laws can be derived from Newton's laws. Euler's laws can, however, be taken as axioms describing the laws of motion for extended bodies, independently of any particle structure.
Newton's laws hold only with respect to a certain set of frames of reference called Newtonian or inertial reference frames. Some authors interpret the first law as defining what an inertial reference frame is; from this point of view, the second law holds only when the observation is made from an inertial reference frame, and therefore the first law cannot be proved as a special case of the second. Other authors do treat the first law as a corollary of the second. The explicit concept of an inertial frame of reference was not developed until long after Newton's death.
In the given interpretation mass, acceleration, momentum, and (most importantly) force are assumed to be externally defined quantities. This is the most common, but not the only interpretation of the way one can consider the laws to be a definition of these quantities.
Newton's first law
The first law states that if the net force (the vector sum of all forces acting on an object) is zero, then the velocity of the object is constant. Velocity is a vector quantity which expresses both the object's speed and the direction of its motion; therefore, the statement that the object's velocity is constant is a statement that both its speed and the direction of its motion are constant.
The first law can be stated mathematically, when the mass is a non-zero constant, as

∑F = 0 ⇔ dv/dt = 0.

Consequently:
- An object that is at rest will stay at rest unless a force acts upon it.
- An object that is in motion will not change its velocity unless a force acts upon it.
This is known as uniform motion. An object continues to do whatever it happens to be doing unless a force is exerted upon it. If it is at rest, it continues in a state of rest (demonstrated when a tablecloth is skilfully whipped from under dishes on a tabletop and the dishes remain in their initial state of rest). If an object is moving, it continues to move without turning or changing its speed. This is evident in space probes that continuously move in outer space. Changes in motion must be imposed against the tendency of an object to retain its state of motion. In the absence of net forces, a moving object tends to move along a straight line path indefinitely.
Newton placed the first law of motion to establish frames of reference for which the other laws are applicable. The first law of motion postulates the existence of at least one frame of reference called a Newtonian or inertial reference frame, relative to which the motion of a particle not subject to forces is a straight line at a constant speed. Newton's first law is often referred to as the law of inertia. Thus, a condition necessary for the uniform motion of a particle relative to an inertial reference frame is that the total net force acting on it is zero. In this sense, the first law can be restated as:
In every material universe, the motion of a particle in a preferential reference frame Φ is determined by the action of forces whose total vanishes for all times when and only when the velocity of the particle is constant in Φ. That is, a particle initially at rest or in uniform motion in the preferential frame Φ continues in that state unless compelled by forces to change it.
Newton's first and second laws are valid only in an inertial reference frame. Any reference frame that is in uniform motion with respect to an inertial frame is also an inertial frame, i.e. Galilean invariance or the principle of Newtonian relativity.
Newton's second law
The second law states that the rate of change of momentum of a body is directly proportional to the force applied, and this change in momentum takes place in the direction of the applied force.
The second law can also be stated in terms of an object's acceleration. Since Newton's second law is valid only for constant-mass systems, m can be taken outside the differentiation operator by the constant factor rule in differentiation. Thus,

F = d(mv)/dt = m dv/dt = ma,

where F is the net force applied, m is the mass of the body, and a is the body's acceleration. Thus, the net force applied to a body produces a proportional acceleration. In other words, if a body is accelerating, then there is a force on it. An application of this notation is the derivation of the engineering conversion factor g_c.
Consistent with the first law, the time derivative of the momentum is non-zero when the momentum changes direction, even if there is no change in its magnitude; such is the case with uniform circular motion. The relationship also implies the conservation of momentum: when the net force on the body is zero, the momentum of the body is constant. Any net force is equal to the rate of change of the momentum.
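For constant mass, the second law turns directly into an update rule for numerical simulation. A minimal explicit-Euler sketch (the force, step size, and duration are arbitrary illustrative choices):

```python
def simulate(m, force, x0, v0, dt, steps):
    """Integrate F = m*a for a point mass with the Euler method."""
    x, v = x0, v0
    for _ in range(steps):
        a = force(x, v) / m       # second law: a = F / m
        v += a * dt
        x += v * dt
    return x, v

# A constant 2 N force on a 1 kg body starting at rest: after 1 s, v is 2 m/s.
print(simulate(m=1.0, force=lambda x, v: 2.0, x0=0.0, v0=0.0,
               dt=0.001, steps=1000))
```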
Any mass that is gained or lost by the system will cause a change in momentum that is not the result of an external force. A different equation is necessary for variable-mass systems (see below).
Newton's second law is an approximation that is increasingly worse at high speeds because of relativistic effects.
An impulse J occurs when a force F acts over an interval of time Δt, and it is given by the integral of the force over that interval. Since force is the time derivative of momentum, it follows that

J = Δp = m Δv (the last step assuming constant mass).
This relation between impulse and momentum is closer to Newton's wording of the second law.
Impulse is a concept frequently used in the analysis of collisions and impacts.
Variable-mass systems, like a rocket burning fuel and ejecting spent gases, are not closed and cannot be directly treated by making mass a function of time in the second law; that is, the following formula is wrong:

F = d(m(t)v(t))/dt = m(t) dv/dt + v(t) dm/dt.
The falsehood of this formula can be seen by noting that it does not respect Galilean invariance: a variable-mass object with F = 0 in one frame will be seen to have F ≠ 0 in another frame. The correct equation of motion for a body whose mass m varies with time by either ejecting or accreting mass is obtained by applying the second law to the entire, constant-mass system consisting of the body and its ejected/accreted mass; the result is

F + u dm/dt = m dv/dt,
where u is the velocity of the escaping or incoming mass relative to the body. From this equation one can derive the equation of motion for a varying mass system, for example, the Tsiolkovsky rocket equation. Under some conventions, the quantity u dm/dt on the left-hand side, which represents the advection of momentum, is defined as a force (the force exerted on the body by the changing mass, such as rocket exhaust) and is included in the quantity F. Then, by substituting the definition of acceleration, the equation becomes F = ma.
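Integrating the variable-mass equation for a rocket in free space gives the Tsiolkovsky result Δv = u ln(m0/m1), which a short numerical check confirms (all values below are made up):

```python
import math

def delta_v_numeric(m0, m1, u, steps=100_000):
    """Accumulate dv = u * dm / m as the mass drops from m0 to m1."""
    dv = 0.0
    dm = (m0 - m1) / steps        # mass expelled per step (taken positive)
    m = m0
    for _ in range(steps):
        dv += u * dm / m
        m -= dm
    return dv

m0, m1, u = 1000.0, 400.0, 2500.0     # kg, kg, m/s (exhaust speed)
print(delta_v_numeric(m0, m1, u))     # ~2290.7 m/s
print(u * math.log(m0 / m1))          # Tsiolkovsky: 2290.72... m/s
```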
Newton's third law
The third law states that all forces between two objects exist in equal magnitude and opposite direction: if one object A exerts a force FA on a second object B, then B simultaneously exerts a force FB on A, and the two forces are equal in magnitude and opposite in direction: FA = −FB. The third law means that all forces are interactions between different bodies, or different regions within one body, and thus that there is no such thing as a force that is not accompanied by an equal and opposite force. In some situations, the magnitude and direction of the forces are determined entirely by one of the two bodies, say Body A; the force exerted by Body A on Body B is called the "action", and the force exerted by Body B on Body A is called the "reaction". This law is sometimes referred to as the action-reaction law, with FA called the "action" and FB the "reaction". In other situations the magnitude and directions of the forces are determined jointly by both bodies and it isn't necessary to identify one force as the "action" and the other as the "reaction". The action and the reaction are simultaneous, and it does not matter which is called the action and which is called reaction; both forces are part of a single interaction, and neither force exists without the other.
The two forces in Newton's third law are of the same type (e.g., if the road exerts a forward frictional force on an accelerating car's tires, then it is also a frictional force that Newton's third law predicts for the tires pushing backward on the road).
From a conceptual standpoint, Newton's third law is seen when a person walks: they push against the floor, and the floor pushes against the person. Similarly, the tires of a car push against the road while the road pushes back on the tires—the tires and road simultaneously push against each other. In swimming, a person interacts with the water, pushing the water backward, while the water simultaneously pushes the person forward—both the person and the water push against each other. The reaction forces account for the motion in these examples. These forces depend on friction; a person or car on ice, for example, may be unable to exert the action force to produce the needed reaction force.
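The link between the third law and momentum conservation is easy to see numerically: if the two bodies always feel equal and opposite forces, their total momentum never changes. A toy two-body sketch with a spring-like internal force (all constants arbitrary):

```python
m1, m2 = 1.0, 3.0                 # masses
x1, x2 = 0.0, 1.0                 # positions
v1, v2 = 0.5, -0.2                # velocities; total momentum = -0.1
dt, k = 0.001, 10.0               # time step and spring constant

for _ in range(10_000):
    f = k * (x2 - x1)             # force on body 1 from body 2
    v1 += (f / m1) * dt
    v2 += (-f / m2) * dt          # third law: body 2 feels exactly -f
    x1 += v1 * dt
    x2 += v2 * dt

print(m1 * v1 + m2 * v2)          # still -0.1, up to floating-point rounding
```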
Newton's 1st Law
From the original Latin of Newton's Principia:
|“||Lex I: Corpus omne perseverare in statu suo quiescendi vel movendi uniformiter in directum, nisi quatenus a viribus impressis cogitur statum illum mutare.||”|
Translated to English, this reads:
|“||Law I: Every body persists in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by force impressed.||”|
The ancient Greek philosopher Aristotle had the view that all objects have a natural place in the universe: that heavy objects (such as rocks) wanted to be at rest on the Earth and that light objects like smoke wanted to be at rest in the sky and the stars wanted to remain in the heavens. He thought that a body was in its natural state when it was at rest, and for the body to move in a straight line at a constant speed an external agent was needed continually to propel it, otherwise it would stop moving. Galileo Galilei, however, realised that a force is necessary to change the velocity of a body, i.e., acceleration, but no force is needed to maintain its velocity. In other words, Galileo stated that, in the absence of a force, a moving object will continue moving. (The tendency of objects to resist changes in motion was what Johannes Kepler had called inertia.) This insight was refined by Newton, who made it into his first law, also known as the "law of inertia"—no force means no acceleration, and hence the body will maintain its velocity. As Newton's first law is a restatement of the law of inertia which Galileo had already described, Newton appropriately gave credit to Galileo.
The law of inertia apparently occurred to several different natural philosophers and scientists independently, including Thomas Hobbes in his Leviathan. The 17th-century philosopher and mathematician René Descartes also formulated the law, although he did not perform any experiments to confirm it.
Newton's 2nd Law
Newton's original Latin reads:
|“||Lex II: Mutationem motus proportionalem esse vi motrici impressae, et fieri secundum lineam rectam qua vis illa imprimitur.||”|
This was translated quite closely in Motte's 1729 translation as:
|“||Law II: The alteration of motion is ever proportional to the motive force impress'd; and is made in the direction of the right line in which that force is impress'd.||”|
According to modern ideas of how Newton was using his terminology, this is understood, in modern terms, as an equivalent of:
The change of momentum of a body is proportional to the impulse impressed on the body, and happens along the straight line on which that impulse is impressed.
This may be expressed by the formula F = p', where p' is the time derivative of the momentum p. This equation can be seen clearly in the Wren Library of Trinity College, Cambridge, in a glass case in which Newton's manuscript is open to the relevant page.
Motte's 1729 translation of Newton's Latin continued with Newton's commentary on the second law of motion, reading:
If a force generates a motion, a double force will generate double the motion, a triple force triple the motion, whether that force be impressed altogether and at once, or gradually and successively. And this motion (being always directed the same way with the generating force), if the body moved before, is added to or subtracted from the former motion, according as they directly conspire with or are directly contrary to each other; or obliquely joined, when they are oblique, so as to produce a new motion compounded from the determination of both.
The sense or senses in which Newton used his terminology, and how he understood the second law and intended it to be understood, have been extensively discussed by historians of science, along with the relations between Newton's formulation and modern formulations.
Newton's 3rd Law
|“||Lex III: Actioni contrariam semper et æqualem esse reactionem: sive corporum duorum actiones in se mutuo semper esse æquales et in partes contrarias dirigi.||”|
Translated to English, this reads:
|“||Law III: To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.||”|
Newton's Scholium (explanatory comment) to this law:
Whatever draws or presses another is as much drawn or pressed by that other. If you press a stone with your finger, the finger is also pressed by the stone. If a horse draws a stone tied to a rope, the horse (if I may so say) will be equally drawn back towards the stone: for the distended rope, by the same endeavour to relax or unbend itself, will draw the horse as much towards the stone, as it does the stone towards the horse, and will obstruct the progress of the one as much as it advances that of the other. If a body impinges upon another, and by its force changes the motion of the other, that body also (because of the equality of the mutual pressure) will undergo an equal change, in its own motion, toward the contrary part. The changes made by these actions are equal, not in the velocities but in the motions of the bodies; that is to say, if the bodies are not hindered by any other impediments. For, as the motions are equally changed, the changes of the velocities made toward contrary parts are reciprocally proportional to the bodies. This law takes place also in attractions, as will be proved in the next scholium.
In the above, as usual, motion is Newton's name for momentum, hence his careful distinction between motion and velocity.
Newton used the third law to derive the law of conservation of momentum; from a deeper perspective, however, conservation of momentum is the more fundamental idea (derived via Noether's theorem from Galilean invariance), and holds in cases where Newton's third law appears to fail, for instance when force fields as well as particles carry momentum, and in quantum mechanics.
Importance and range of validity
Newton's laws were verified by experiment and observation for over 200 years, and they are excellent approximations at the scales and speeds of everyday life. Newton's laws of motion, together with his law of universal gravitation and the mathematical techniques of calculus, provided for the first time a unified quantitative explanation for a wide range of physical phenomena.
These three laws hold to a good approximation for macroscopic objects under everyday conditions. However, Newton's laws (combined with universal gravitation and classical electrodynamics) are inappropriate for use in certain circumstances, most notably at very small scales, very high speeds (in special relativity, the Lorentz factor must be included in the expression for momentum along with the rest mass and velocity) or very strong gravitational fields. Therefore, the laws cannot be used to explain phenomena such as conduction of electricity in a semiconductor, optical properties of substances, errors in non-relativistically corrected GPS systems and superconductivity. Explanation of these phenomena requires more sophisticated physical theories, including general relativity and quantum field theory.
In quantum mechanics, concepts such as force, momentum, and position are defined by linear operators that operate on the quantum state; at speeds that are much lower than the speed of light, Newton's laws are just as exact for these operators as they are for classical objects. At speeds comparable to the speed of light, the second law holds in the original form F = dp/dt, where F and p are four-vectors.
Relationship to the conservation laws
In modern physics, the laws of conservation of momentum, energy, and angular momentum are of more general validity than Newton's laws, since they apply to both light and matter, and to both classical and non-classical physics.
This can be stated simply, "Momentum, energy and angular momentum cannot be created or destroyed."
Because force is the time derivative of momentum, the concept of force is redundant and subordinate to the conservation of momentum, and is not used in fundamental theories (e.g., quantum mechanics, quantum electrodynamics, general relativity, etc.). The standard model explains in detail how the three fundamental forces known as gauge forces originate out of exchange by virtual particles. Other forces, such as gravity and fermionic degeneracy pressure, also arise from the momentum conservation. Indeed, the conservation of 4-momentum in inertial motion via curved space-time results in what we call gravitational force in general relativity theory. The application of the space derivative (which is a momentum operator in quantum mechanics) to the overlapping wave functions of a pair of fermions (particles with half-integer spin) results in shifts of maxima of compound wavefunction away from each other, which is observable as the "repulsion" of the fermions.
Newton stated the third law within a world-view that assumed instantaneous action at a distance between material particles. However, he was prepared for philosophical criticism of this action at a distance, and it was in this context that he stated the famous phrase "I feign no hypotheses". In modern physics, action at a distance has been completely eliminated, except for subtle effects involving quantum entanglement. (In particular, this refers to Bell's theorem – that no local model can reproduce the predictions of quantum theory.) Despite only being an approximation, in modern engineering and all practical applications involving the motion of vehicles and satellites, the concept of action at a distance is used extensively.
The discovery of the second law of thermodynamics by Carnot in the 19th century showed that not every physical quantity is conserved over time, thus disproving the validity of inducing the opposite metaphysical view from Newton's laws. Hence, a "steady-state" worldview based solely on Newton's laws and the conservation laws does not take entropy into account.
References and notes
- For explanations of Newton's laws of motion by Newton in the early 18th century, by the physicist William Thomson (Lord Kelvin) in the mid-19th century, and by a modern text of the early 21st century, see:-
- Newton's "Axioms or Laws of Motion" starting on page 19 of volume 1 of the 1729 translation Archived 28 September 2015 at the Wayback Machine. of the Principia;
- Section 242, Newton's laws of motion Archived 22 March 2015 at the Wayback Machine. in Thomson, W (Lord Kelvin), and Tait, P G, (1867), Treatise on natural philosophy, volume 1; and
- Benjamin Crowell (2000), Newtonian Physics.
- Browne, Michael E. (July 1999). Schaum's outline of theory and problems of physics for engineering and science (Series: Schaum's Outline Series). McGraw-Hill Companies. p. 58. ISBN 978-0-07-008498-8.
- Holzner, Steven (December 2005). Physics for Dummies. Wiley, John & Sons, Incorporated. p. 64. ISBN 978-0-7645-5433-9.
- See the Principia on line at Andrew Motte Translation
- Andrew Motte translation of Newton's Principia (1687) Axioms or Laws of Motion
- Greiner, Walter (2003). Classical mechanics: point particles and relativity. New York: Springer. ISBN 978-0-387-21851-9.
- Zeidler, E. (1988). Nonlinear Functional Analysis and its Applications IV: Applications to Mathematical Physics. New York, NY: Springer New York. ISBN 978-1-4612-4566-7.
- Wachter, Armin; Hoeber, Henning (2006). Compendium of theoretical physics. New York, NY: Springer. ISBN 0-387-25799-3.
- [...]while Newton had used the word 'body' vaguely and in at least three different meanings, Euler realized that the statements of Newton are generally correct only when applied to masses concentrated at isolated points;Truesdell, Clifford A.; Becchi, Antonio; Benvenuto, Edoardo (2003). Essays on the history of mechanics: in memory of Clifford Ambrose Truesdell and Edoardo Benvenuto. New York: Birkhäuser. p. 207. ISBN 3-7643-1476-1.
- Lubliner, Jacob (2008). Plasticity Theory (Revised Edition) (PDF). Dover Publications. ISBN 0-486-46290-0. Archived from the original (PDF) on 31 March 2010.
- Galili, I.; Tseitlin, M. (2003). "Newton's First Law: Text, Translations, Interpretations and Physics Education". Science & Education. 12 (1): 45–73. Bibcode:2003Sc&Ed..12...45G. doi:10.1023/A:1022632600805.
- Benjamin Crowell. "4. Force and Motion". Newtonian Physics. ISBN 0-9704670-1-X. Archived from the original on 16 February 2007.
- In making a modern adjustment of the second law for (some of) the effects of relativity, m would be treated as the relativistic mass, producing the relativistic expression for momentum, and the third law might be modified if possible to allow for the finite signal propagation speed between distant interacting particles.
- NMJ Woodhouse (2003). Special relativity. London/Berlin: Springer. p. 6. ISBN 1-85233-426-6.
- Beatty, Millard F. (2006). Principles of engineering mechanics Volume 2 of Principles of Engineering Mechanics: Dynamics-The Analysis of Motion,. Springer. p. 24. ISBN 0-387-23704-6.
- Thornton, Marion (2004). Classical dynamics of particles and systems (5th ed.). Brooks/Cole. p. 53. ISBN 0-534-40896-6.
- Plastino, Angel R.; Muzzio, Juan C. (1992). "On the use and abuse of Newton's second law for variable mass problems". Celestial Mechanics and Dynamical Astronomy. Netherlands: Kluwer Academic Publishers. 53 (3): 227–232. Bibcode:1992CeMDA..53..227P. doi:10.1007/BF00052611. ISSN 0923-2958. "We may conclude emphasizing that Newton's second law is valid for constant mass only. When the mass varies due to accretion or ablation, [an alternate equation explicitly accounting for the changing mass] should be used."
- Halliday; Resnick. Physics. 1. p. 199. ISBN 0-471-03710-9.
It is important to note that we cannot derive a general expression for Newton's second law for variable mass systems by treating the mass in F = dP/dt = d(Mv) as a variable. [...] We can use F = dP/dt to analyze variable mass systems only if we apply it to an entire system of constant mass having parts among which there is an interchange of mass.[Emphasis as in the original]
Kleppner, Daniel; Robert Kolenkow (1973). An Introduction to Mechanics. McGraw-Hill. pp. 133–134. ISBN 0-07-035048-5.
Recall that F = dP/dt was established for a system composed of a certain set of particles[. ... I]t is essential to deal with the same set of particles throughout the time interval[. ...] Consequently, the mass of the system can not change during the time of interest.
- Hannah, J, Hillier, M J, Applied Mechanics, p221, Pitman Paperbacks, 1971
- Raymond A. Serway; Jerry S. Faughn (2006). College Physics. Pacific Grove CA: Thompson-Brooks/Cole. p. 161. ISBN 0-534-99724-4.
- I. Bernard Cohen (Peter M. Harman & Alan E. Shapiro, Eds) (2002). The investigation of difficult things: essays on Newton and the history of the exact sciences in honour of D.T. Whiteside. Cambridge UK: Cambridge University Press. p. 353. ISBN 0-521-89266-X.
- WJ Stronge (2004). Impact mechanics. Cambridge UK: Cambridge University Press. p. 12 ff. ISBN 0-521-60289-0.
- Resnick; Halliday; Krane (1992). Physics, Volume 1 (4th ed.). p. 83.
- C Hellingman (1992). "Newton's third law revisited". Phys. Educ. 27 (2): 112–115. Bibcode:1992PhyEd..27..112H. doi:10.1088/0031-9120/27/2/011.
Quoting Newton in the Principia: It is not one action by which the Sun attracts Jupiter, and another by which Jupiter attracts the Sun; but it is one action by which the Sun and Jupiter mutually endeavour to come nearer together.
- Resnick & Halliday (1977). Physics (Third ed.). John Wiley & Sons. pp. 78–79.
Any single force is only one aspect of a mutual interaction between two bodies.
- Hewitt (2006), p. 75
- Isaac Newton, The Principia, A new translation by I.B. Cohen and A. Whitman, University of California press, Berkeley 1999.
- Thomas Hobbes wrote in Leviathan:
That when a thing lies still, unless somewhat else stir it, it will lie still forever, is a truth that no man doubts. But [the proposition] that when a thing is in motion it will eternally be in motion unless somewhat else stay it, though the reason be the same (namely that nothing can change itself), is not so easily assented to. For men measure not only other men but all other things by themselves. And because they find themselves subject after motion to pain and lassitude, [they] think every thing else grows weary of motion and seeks repose of its own accord, little considering whether it be not some other motion wherein that desire of rest they find in themselves, consists.
- Cohen, I. B. (1995). Science and the Founding Fathers: Science in the Political Thought of Jefferson, Franklin, Adams and Madison. New York: W.W. Norton. p. 117. ISBN 978-0393315103. Archived from the original on 22 March 2017.
- Cohen, I. B. (1980). The Newtonian Revolution: With Illustrations of the Transformation of Scientific Ideas. Cambridge, England: Cambridge University Press. pp. 183–4. ISBN 978-0521273800.
- According to Maxwell in Matter and Motion, Newton meant by motion "the quantity of matter moved as well as the rate at which it travels" and by impressed force he meant "the time during which the force acts as well as the intensity of the force". See Harman and Shapiro, cited below.
- See for example (1) I Bernard Cohen, "Newton's Second Law and the Concept of Force in the Principia", in "The Annus Mirabilis of Sir Isaac Newton 1666–1966" (Cambridge, Massachusetts: The MIT Press, 1967), pages 143–185; (2) Stuart Pierson, "'Corpore cadente. . .': Historians Discuss Newton’s Second Law", Perspectives on Science, 1 (1993), pages 627–658; and (3) Bruce Pourciau, "Newton's Interpretation of Newton's Second Law", Archive for History of Exact Sciences, vol.60 (2006), pages 157–207; also an online discussion by G E Smith, in 5. Newton's Laws of Motion, s.5 of "Newton's Philosophiae Naturalis Principia Mathematica" in (online) Stanford Encyclopedia of Philosophy, 2007.
- This translation of the third law and the commentary following it can be found in the Principia on page 20 of volume 1 of the 1729 translation Archived 25 April 2016 at the Wayback Machine..
- Newton, Principia, Corollary III to the laws of motion
Further reading and works cited
- Crowell, Benjamin (2011), Light and Matter (2011, Light and Matter), especially at Section 4.2, Newton's First Law, Section 4.3, Newton's Second Law, and Section 5.1, Newton's Third Law.
- Feynman, R. P.; Leighton, R. B.; Sands, M. (2005). The Feynman Lectures on Physics. Vol. 1 (2nd ed.). Pearson/Addison-Wesley. ISBN 0-8053-9049-9.
- Fowles, G. R.; Cassiday, G. L. (1999). Analytical Mechanics (6th ed.). Saunders College Publishing. ISBN 0-03-022317-2.
- Likins, Peter W. (1973). Elements of Engineering Mechanics. McGraw-Hill Book Company. ISBN 0-07-037852-5.
- Marion, Jerry; Thornton, Stephen (1995). Classical Dynamics of Particles and Systems. Harcourt College Publishers. ISBN 0-03-097302-3.
- NMJ Woodhouse (2003). Special Relativity. London/Berlin: Springer. p. 6. ISBN 1-85233-426-6.
- Newton, Isaac, "Mathematical Principles of Natural Philosophy", 1729 English translation based on 3rd Latin edition (1726), volume 1, containing Book 1, especially at the section Axioms or Laws of Motion, starting page 19.
- Newton, Isaac, "Mathematical Principles of Natural Philosophy", 1729 English translation based on 3rd Latin edition (1726), volume 2, containing Books 2 & 3.
- Thomson, W (Lord Kelvin), and Tait, P G, (1867), Treatise on natural philosophy, volume 1, especially at Section 242, Newton's laws of motion.
- MIT Physics video lecture on Newton's three laws
- Light and Matter – an on-line textbook
- Simulation on Newton's first law of motion
- "Newton's Second Law" by Enrique Zeleny, Wolfram Demonstrations Project.
- on YouTube
- The Laws of Motion, BBC Radio 4 discussion with Simon Schaffer, Raymond Flood & Rob Iliffe (In Our Time, Apr.3, 2008) | https://en.m.wikipedia.org/wiki/Newton%27s_second_law | 18 |
116 | Hands-on manipulatives help students to prove how, why, and when the pythagorean theorem shows relationships within triangles plan your 60 minutes lesson in math or pythagorean theroem with helpful tips from christa lemily. Pythagorean theorem the pythagorean theorem is a2 + b2 = c2 pythagorean theorem n the theorem that the sum of the squares of the lengths of the sides of a right triangle is . How to use the pythagorean theorem the pythagorean theorem describes the lengths of the sides of a right triangle in a way that is so elegant and practical that the theorem is still widely used today. The pythagorean theorem date_____ period____ do the following lengths form a right triangle 1) 6 8 9 no 2) 5 12 13 yes 3) 6 8 10 yes 4) 3 4 5 yes. 65 using the pythagorean theorem how can you use the pythagorean theorem to solve real-life problems work with a partner a explain your reasoning b.
Pythagorean theorem the sum of the areas of the two squares on the legs (a and b) equals the area of the square on the hypotenuse (c) geometry. Euclid's proof of pythagoras' theorem (i47) for the comparison and reference sake we'll have on this page the proof of the pythagorean theorem as it is given in elements i47, see sir thomas heath's translation. The pythagorean theorem has been known for at least 2,500 years you use the pythagorean theorem when you know the lengths of two sides of a right triangle and you want to figure out the length of the third side. Called pythagorean triples pythagoras’ theorem is used in determining the distance between two points in both two and three dimensional space.
It's one of the most important math formulas they'll ever learn, yet many students can't effectively use the pythagorean theorem this lesson will discuss and demonstrate how to use diagrams and models to explain the pythagorean theorem. Pythagorean theorem inquiry based unit plan by: overhead projector, pythagorean squares and sure everyone can use at least one of them and explain why it . These pythagorean theorem worksheets are perfect for providing children a fun way to practice and learn the pythagorean theorem these worksheets are great resources for the 6th grade, 7th grade, and 8th grade.
Students are asked to explain the steps of a proof of the pythagorean theorem that uses similar triangles. The formula of the pythagorean theorem is one of the most basic relations in euclidean two-dimensional geometry in this article, the theorem explained. Pythagorean theorem explained with pictures, examples and cool interactive html5 applet of a right triangle. The pythagorean theorem: after an appropriate period of time, each group can demonstrate their applet to the class and explain why the shapes fit, .
What is the pythagorean theorem when do you use it. A pythagorean triple $(a,b,c) and 5 to produce a right angle uses the converse of the pythagorean theorem explain, in this particular case, . Pythagorean theorem chart these descriptive charts explain the pythagorean theorem with an illustration this emphasizes the relation of the theorem derived as an equation. Pythagorean theorem example explained step by step . In mathematics, the pythagorean theorem or pythagoras's theorem is a statement about the sides of a right triangle one of the angles of a right triangle is always equal to 90 degrees.
Minkowski space elegantly describes special relativity as consequences of the makeup then use the pythagorean theorem where i explained how einstein derived . The pythagorean theorem is a celebrity: if an equation can make it into the simpsons, i'd say it's well-known but most of us think the formula only applies to triangles and geometry think again the pythagorean theorem can be used with any shape and for any formula that squares a number read on . Explain to your students in an understandable way the proof of the pythagorean theorem this lesson is great for eighth grade math students and is part one on a series of lessons designed to teach and assess your student's knowledge on the pythagorean theorem.
Bhaskara's second proof of the pythagorean theorem in this proof, bhaskara began with a right triangle and then he drew an altitude on the hypotenuse. Free practice questions for common core: 8th grade math - explain a proof of the pythagorean theorem and its converse: ccssmathcontent8gb6 includes full. The pythagorean theorem whether pythagoras learned about the 3, 4, 5 right triangle while he studied in egypt or not, he was certainly aware of it.
45 pythagorean trigonometric identity the pythagorean theorem or pythagoras' theorem is a relation in euclidean geometry among the three sides of a right . Of the hundreds of proofs of the pythagorean theorem that you can find in books like eli maor's the pythagorean theorem: a 4,000-year history, several are appropriate for children. Algebra - pythagorean theorem - duration: 13:22 yaymath 747,347 views 13:22 area of a circle, how to get the formula - duration: 2:47. | http://xhassignmenthsnz.njdata.info/pythagorean-theorem-explained.html | 18 |
16 | In your early days of studying Algebra, lessons deal with both algebraic and geometric sequences. Identifying patterns is also a must in Algebra. When working with fractions, these patterns can be algebraic, geometric or something completely different. The key to noticing these patterns is to be vigilant and hyper-aware of potential patterns among your numbers.
Determine whether a given quantity is added to each fraction, to obtain the next fraction. For instance, if you have the sequence 1/8, 1/4, 3/8, 1/2 -- if you make all the denominators equal to 8, you will notice that the fractions increase from 1/8 to 2/8 to 3/8 to 4/8. Therefore, you have an arithmetic sequence, in which the pattern involves adding 1/8 to each fraction to obtain the next.
Determine whether a "factor" pattern, known as a geometric sequence, exists among the fractions. In other words, determine if a number is multiplied by each fraction to obtain the next. If you have the sequence 1/(2^4), 1/(2^3), 1/(2^2), 1/2, which can also be written as 1/16, 1/8, 1/4, 1/2, notice that you must multiply each fraction by 2 to obtain the next one.
Sciencing Video Vault
Determine -- if you see neither an algebraic or geometric sequence -- whether the problem is combining an algebraic and/or geometric sequence with another mathematical operation, such as working with the reciprocals of fractions. For instance, the problem could give you a sequence such as 2/3, 6/4, 8/12, 24/16. You' ll notice that the second and fourth fractions in the sequence are equal to the reciprocals of 2/3 and 8/12, in which both the numerator and denominator is multiplied by 2. | https://sciencing.com/patterns-fractions-8518222.html | 18 |
47 | Activities for critical thinking class
Through emphasis on evidence, teachers can facilitate an environment where deep, critical thinking and meta cognition are the norm below are some activities to help teachers incorporate curiosity, evidence, and critical thinking into their classrooms. Lesson summary critical thinking is the ability to objectively analyze situations and information in order to draw critical thinking: exercises, activities & strategies related study . Great for finding lesson plans, apps, sites, and game-based activities perfect for developing critical and creative thinking cinderella a critical thinking activity: take a look at the traditional story in a new way. The infusion of critical thinking skills into literature and composition classes is a constant concern for faculty members we can begin to infuse these skills on the first. Developing critical thinking through science presents standards-based, hands-on, minds-on activities that help students learn basic physical science principles and the scientific method of investigation.
Find and save ideas about critical thinking activities on pinterest | see more ideas about think education, higher level questioning and higher higher. This activity, as simple as it sounds, involves lots of logic and critical thinking for example, students may decide that a skateboard is probably the slowest form of transportation on the list however, it gets a bit more difficult after that. Home • critical thinking lesson plans pages games and interactives critical thinking resources. Find this pin and more on creative and critical thinking by activities growth mindset classroom growth mindset critical thinking and critical literacy .
How can students own their learning with critical thinking activities they’ll really love allowing our students to take stands on issues that matter to them engages the classroom in a way that fosters great critical thinking. These activities can be used in conjunction with specific media examples on cmp or more generally used to elicit class discussion and critical thinking it may be useful to review the section that overviews key concepts tied to each identity. Critical thinking work sheets a critical-thinking activity invite students to share their poems with the class this activity is a fun one that enables you . Strategies for developing ell critical thinking skills try one of chris’ time-tested critical thinking activities in your classroom: make up words – an .
Whether critical thinking is a stand-alone lesson taught at the beginning of a course followed by various exercises and activities as is the case at the defense financial management and comptroller school or integrated into the curriculum and utilized in case. Effective written communication is an integral part of science education—learn some new ways to amalgamate writing and science lessons in order to strengthen students' writing and thinking skills. 81 fresh & fun critical-thinking activities engaging activities and reproducibles to develop kids’ higher-level thinking skills by laurie rozakis. 10 team-building games that promote collaborative critical thinking games that promote critical thinking critical thinking you can purchase a classroom . Critical thinking exercise: crime and punishment this critical thinking exercise is based on a current news article in which a young woman was arrested for selling $400 worth of heroin to an undercover police officer in 1974.
Activities for critical thinking class
Engage students in critical thinking activities with these great applications the most important gift that educators can give to students is the ability to think critically critical thinking is the ability to take information, then instead of simply memorizing it. 20 great icebreakers for the classroom this stem activity from the growing a stem classroom encourages team building and critical thinking it can also serve as . Critical thinking is a skill that students develop gradually as they progress in school this skill becomes more important in higher grades, but some students find it difficult to understand the concept of critical thinking the concept can be difficult to grasp because it requires students to set . Critical thinking activities for high school read on for several suggestions for implementing critical thinking activities in the classroom critical thinking skills are important.
Classroom activities for encouraging evidence-based critical thinking 85 the journal of effective teaching, vol 13, no 2, 2013, 83-93. Jumpstart has a fun collection of free, printable critical thinking worksheets and free critical thinking activities for kids homeschooling parents as well as teachers can encourage better logical thinking, and deductive reasoning skills in kids by introducing them to these exercises. Here are 12 interesting ways to approach teaching critical thinking skills with any of your students, and in any classroom subject.
Problem solving and critical thinking refers to the ability to use the activities in this section focus on learning how to solve problems in a variety of ways in . Critical readind activities to develop critical thinking in science classes begoña oliveras1 conxita márquez2 and neus sanmartí3 1, 2, 3 department of science and mathematics education, university autonoma of barcelona,. June 12, 2014, volume 1, issue 5, no 8 driving question: what does critical thinking look and sound like in an elementary classroom the other day, i walked into one of our primary multi-aged classroom communities. Fun critical thinking activities - for students in any subject by monica dorcz | this newsletter was created with smore, an online tool for creating beautiful newsletters for for educators, nonprofits, businesses and more. | http://wycourseworkohfz.safeschools.us/activities-for-critical-thinking-class.html | 18 |
16 | The angle of repose is the minimum angle at which any piled-up bulky or loose material will stand without falling downhill. One way to demonstrate this would be to pour sand from a bag to the ground. There is a minimum angle or maximum slope the sand will maintain due to the forces of gravity and the effect of friction between the particles of sand. The angle is calculated between the peak of the pile and the horizontal ground. The angle of repose for dry sand has been calculated to be 35 degrees, whereas cement has an angle of repose of 20 degrees.
Pour the dry sand into a pile on a level surface allowing it to build a pile from the top. This will result in a pile with a relatively circular base, making measurement easier.
Using the ruler and a tape measure, measure the height (h) of the pile of sand from the peak to the ground. Stand the ruler next to the pile so it can be read easily. Extend the tape measure carefully to the top of the pile without disturbing the pile and allow the other end of the tape measure to intersect the ruler. While keeping the tape measure level, observe the intersection of the tape measure with the ruler. Write the value on the paper. (Example: h = 12 inches.)
Sciencing Video Vault
Using the tape measure, measure the horizontal distance (d) from the middle of the pile to the edge. Place the tape measure on the ground beside the pile. Line up one end with one side of the pile and extend the tape measure to the other end of the pile. Write the value on the paper and divide by 2. This will give you the distance from the center of the pile to the edge. (Example: Total distance on tape measure from one end of the pile to the other = 30 inches. Divide by 2 to get 15 inches. Thus, d = 15 inches)
The equation for calculating the angle of repose is: tan-1(2h/d). Using your scientific calculator, multiply height by 2 and divide this value by the distance. Then, hit the inverse tan key (or tan-1) and the answer just calculated. This will give you the angle of repose, α.
Place the protractor on the level surface next to the pile of sand. Using the ruler, create a straight line from the peak of the sand pile down the slope. Read the angle of repose valueand write the value on the paper.
Compare the calculated angle of repose from Step 4 and the measured angle of repose from Step 5. If the values are not within 1 degree of each other, repeat Step 5.
Bags of dry sand can be heavy (30-50 lbs). Use caution when lifting and pouring sand. | https://sciencing.com/calculate-angle-repose-6712029.html | 18 |
13 | Strings and Lists
Let’s talk about some basic Python functions in the context of simple exercises! Say that you had some text that you wanted to cut up and combine … For example, you might want to pick two slices of a text string and connect them. This function takes 5 arguments: an inputted text string, a starting slice for side a, an ending slice for slice a, a starting slice for side b, and an ending slice for side b. The slices of side a and side b are then combined.
def slice_and_dice(text_string, a, b, c, d): text_string_ab_slice = text_string[a:b+1] text_string_cd_slice = text_string[c:d+1] left_space_right_slices = text_string_ab_slice + ' ' + text_string_cd_slice return left_space_right_slices # I tested the code w/ this stuff: # text = "HumptyDumptysatonawallHumptyDumptyhadagreatfallAlltheKingshorsesandalltheKings \ menCouldntputHumptyDumptyinhisplaceagain." # print(slice_and_dice(text, 22, 27, 97, 102)) text = "vQj40bDqlDzbiEMUobkQ01senjgnUhrxilwiuGrOvisxF7KqgJ7TJbzmwXyaHgdtvqRBiLfWUF9CtI7\ kMBWwGyxYVfk0rwdZSM3kSDT0hpzfiber73j3R5WgtTrfoIuowfuFq6Za7JQtg8bvug6NdRXxGi27Yks\ MEe6g1vfqyyxBoxxOTWEPMbbFHYNmprubaSPNL." print(slice_and_dice(text, 39, 42, 107, 111))
First of all, a Python function is a callable set of Python code of the format: def FUNCTION_NAME(FUNCTION, INPUTS): INDENTED CODE. Python functions don’t have curly brackets enclosing the code the same way that R does. In Python, the indentation is the only thing that matters, so it’s important to be very careful with it. Don’t use spaces! Use tabs! Tabs are a quick and repeatable distance.
As you can see in the ‘slice_and_dice’ function definition below, function arguments are enclosed within a set of parenthesis and separated by commas. These are the name placeholders for the functional arguments submitted to the function. Check out the call to this function down at the end of the code block. The call looks a lot like the functional definition except that the call has the actual inputs. Note that ‘text’ is a variable that is defined above the call to the function.
Pound signs (#) are used for in-line comments. If you’re just recently learning to code, you’ll be surprised how much you forgot about why functions were defined it certain ways, so it’s very important to carefully explain yourself. I’m being a hypocrite in the function call below because I’m only using pound signs to talk about how I tested the code, but in my defense, this is an easy program. You’ll see plenty of comments in my more complicated code.
I’m sure you can guess what ‘print’ does, haha. I’m using it here to print an output to the screen. Alternatively, you could print output to a file. That’s something that I’ll be explaining later. Looking at the indented bit of code within the function, you’ll see the use of square brackets; those are used to ‘slice’ the text. ‘text_string’, one of the functional arguments, will be sliced with a starting point of argument ‘a’. The reason that the end slice position, in the bracket, isn’t just ‘b’ is because Python list slice ends are not inclusive; that’s why there’s a +1.
The variable ‘left_space_right_slices’ is the concatenation of the ‘text_string_ab_slice’ and the ‘text_string_cd_slice’. This is done with a plus sign. I’ve used a ‘ ‘ (blank space) to create a gap between the left side and the right side. Finally, the output of the function is returned. It’s a simple function, but I did enjoy reviewing it again. I believe it is important to go back to basics to strengthen conceptual understanding. | http://www.bioinformaticsanalyst.com/python/2018/04/15/basic-python-functions.html | 18 |
48 | Get the lowdown on the breakdown of topics in systems of linear equations here let us make it easier for you by simplifying things. Where a1,a2 ,an and b are constant real or complex numbers the constant ai is called the coefficient of xi and b is called the constant term of the equation a system of linear equations (or linear system) is a finite collection of linear equations in same variables for instance, a linear system of m equations in n variables x1. Mathematics | system of linear equations trace of a matrix : let a=[aij] nxn is a square matrix of order n, then the sum of diagonal elements is called the trace of a matrix which is denoted by tr(a) tr(a) = a11 + a22 + a33++ ann properties of trace of matrix: let a and b be any two square matrix of order n, then. A system of equations is a collection of two or more equations with the same set of variables in this blog post read more high school math solutions – systems of equations calculator, nonlinear in a previous post, we learned about how to solve a system of linear equations in this post, we will learn how read more. A system of equations is when we have two or more linear equations working together.
A summary of solving systems of linear equations by addition/subtraction in 's systems of equations learn exactly what happened in this chapter, scene, or section of systems of equations and what it means perfect for acing essays, tests , and quizzes, as well as for writing lesson plans. Systems of equations with three variables are only slightly more complicated to solve than those with two variables the two most straightforward methods of sol. Systems of linear equations introduction consider the two equations ax+by=c and dx+ey=f since these equations represent two lines in the xy-plane, the simultaneous solution of these two equations (ie those points (x,y) that satisfy both equations) is merely the intersection of the two lines the graphs below illustrate the.
Systems of linear equations and their solution, explained with pictures , examples and a cool interactive applet also, a look at the using substitution, graphing and elimination methods. This value of x can then be used to find y by substituting 1 with x eg in the first equation y = 2 x + 4 y = 2 ⋅ 1 + 4 y = 6 the solution of the linear system is (1, 6) you can use the substitution method even if both equations of the linear system are in standard form just begin by solving one of the equations for one of its. Solve systems of linear equations in matrix or equation form.
A linear system of two equations with two variables is any system that can be written in the form where any of the constants can be zero with the exception that each equation must have at least one variable in it also, the system is called linear if the variables are only to the first power, are only in the numerator and there are. Learn how to interpret solutions to systems of linear equations and solve them. This page will show you how to solve two equations with two unknowns there are many ways of doing this, but this page used the method of substitution note the = signs are already put in for you you just need to fill in the boxes around the equals signs. Two consistent equations a - matrix(c(1, 2, -1, 2), 2, 2) b - c(2,1) showeqn(a, b ) ## 1x1 - 1x2 = 2 ## 2x1 + 2x2 = 1 c( r(a), r(cbind(a,b)) ) # show ranks ## 2 2 allequal( r(a), r(cbind(a,b)) ) # consistent ## true plot the equations: ploteqn(a,b) ## x1 - 1x2 = 2 ## 2x1 + 2x2 = 1 solve() is.
Solve/linear systems of linear equations calling sequence parameters description examples calling sequence solve( eqns , vars ) parameters eqns - set or list of linear equations and inequations vars - list of names (unknowns) description the linear system. Students will deepen their algebraic understanding of equivalent expressions by determining the solution to a system of linear equations they also begin to extend their understanding of solution from a single value to a point or a set of points previously students have solved multi-step linear equations in this unit, they use. Systems of linear equations: learn how to solve systems of linear equations.
Solving systems of linear equations (matrix method, gaussian elimination ), analysis for compatibility. Online equations solver solve linear system of equations with multiple variables, quadratic, cubic and any other equation with one unknown solves your linear systems by gauss-jordan elimination method gaussian elimination. Many calculations involve solving systems of linear equations in many cases, you will find it convenient to write down the equations explicitly, and then solve them using solve in some cases, however, you may prefer to convert the system of linear equations into a matrix equation, and then apply matrix manipulation. A system of linear equations can come in handy when we come across problems where we have more than one quantity to find in this lesson, we'll. | http://ckessaybgik.taxiservicecharleston.us/system-linear-equation.html | 18 |
11 | The Normal distribution and z-scores:. The Normal curve is a mathematical abstraction which conveniently describes ("models") many frequency distributions of scores in real-life. The area under the curve is directly proportional to the relative frequency of observations.
Download Policy: Content on the Website is provided to you AS IS for your information and personal use and may not be sold / licensed / shared on other websites without getting consent from its author.While downloading, if for some reason you are not able to download a presentation, the publisher may have deleted the file from their server.
The Normal curve is a mathematical abstraction which conveniently describes ("models") many frequency distributions of scores in real-life.
The area under the curve is directly proportional to the relative frequency of observations.
e.g. here, 50% of scores fall below the mean, as does 50% of the area under the curve.
z-scores are "standard scores".
A z-score states the position of a raw score in relation to the mean of the distribution, using the standard deviation as the unit of measurement.
IQ (mean = 100, SD =15) as z-scores (mean = 0, SD = 1).
z for 100 = (100-100) / 15 = 0,
z for 115 = (115-100) / 15 = 1,
z for 70 = (70-100) / 15 = -2, etc.
1. z-scores make it easier to compare scores from distributions using different scales.
e.g. two tests:
Test A: Fred scores 78. Mean score = 70, SD = 8.
Test B: Fred scores 78. Mean score = 66, SD = 6.
Did Fred do better or worse on the second test?
Test B: as a z-score , z = (78 - 66) / 6 = 2.00
Conclusion: Fred did much better on Test B.
2. z-scores enable us to determine the relationship between one score and the rest of the scores, using just one table for all normal distributions.
e.g. If we have 480 scores, normally distributed with a mean of 60 and an SD of 8, how many would be 76 or above?
(a) Graph the problem:
z = (X - X) / s = (76 - 60) / 8 = 16 / 8 = 2.00
(c) We need to know the size of the area beyond z (remember - the area under the Normal curve corresponds directly to the proportion of scores).
Many statistics books (and my website!) have z-score tables, giving us this information:
* x 2 = 68% of scores
+ x 2 = 95% of scores
# x 2 = 99.7% of scores
(d) So: as a proportion of 1, 0.0228 of scores are likely to be 76 or more.
As a percentage, = 2.28%
As a number, 0.0228 * 480 = 10.94 scores.
Graph the problem:
z = (X - X) / s = (54 - 60) / 8 = - 6 / 8 = - 0.75
Use table by ignoring the sign of z : “area beyond z” for 0.75 = 0.2266. Thus 22.7% of scores (109 scores) are 54 or less.
Subtract the area above 76, from the total area:
1.000 - 0.0228 = 0.9772 . Thus 97.72% of scores are 76 or less.
Use the “area between the mean and z” column in the table.
For z = 2.00, the area is .4772. Thus 47.72% of scores lie between the mean and 76.
Find the area beyond 69; subtract from this the area beyond 76.
Find z for 76: = 2.00. “Area beyond z” = 0.0228.
0.1314 - 0.0228 = 0.1086 .
Thus 10.86% of scores fall between 69 and 76 (52 out of 480).
Word comprehension test scores:
Normal no. correct: mean = 92, SD = 6 out of 100
Brain-damaged person's no. correct: 89 out of 100.
Is this person's comprehension significantly impaired?
Step 1: graph the problem:
Step 2: convert 89 into a z-score:
z = (89 - 92) / 6 = - 3 / 6 = - 0.5
Step 3: use the table to find the "area beyond z" for our z of - 0.5:
Area beyond z = 0.3085
Conclusion: .31 (31%) of normal people are likely to have a comprehension score this low or lower.
Defining a "cut-off" point on a test (using a known area to find a raw score, instead of vice versa):
We want to define "spider phobics" as those in the top 5% of scorers on our questionnaire.
Mean = 200, SD = 50.
What score cuts off the top 5%?
Step 1: find the z-score that cuts off the top 5% ("Area beyond z = .05").
Step 2: convert to a raw score.
X = mean + (z* SD).
X = 200 + (1.64*50) = 282.
Anyone scoring 282 or more is "phobic".
(b) Any given score can be expressed in terms of how much it differs from the mean of the population of scores to which it belongs (i.e., as a z-score).
sample A: mean = 650g
sample B: mean = 450g
sample C: mean = 500g
sample D: mean = 600g
Brain size in hares: sample means and population means:
sample H mean
sample G mean
sample F mean
sample J mean
sample B mean
sample C mean
sample D mean
sample E mean
sample K mean, etc.....
The Central Limit Theorem in action:
Frequency with which each sample mean occurs:
the population mean, and the mean of the sample means
A particular sample mean
(a) Sample means tend to be normally distributed around the population mean (the "Central Limit Theorem").
(b) Any given sample mean can be expressed in terms of how much it differs from the population mean.
(c) "Deviation from the mean" is the same as "probability of occurrence": a sample mean which is very deviant from the population mean is unlikely to occur.
(a) Differences between the means of two samples from the same population are also normally distributed.
Most samples from the same population should have similar means - hence most differences between sample means should be small.
difference between mean of sample A and mean of sample B:
frequency of raw scores
mean of sample A
mean of sample B
(b) Any observed difference between two sample means could be due to either of two possibilities:
1. They are two samples from the same population, that happen to differ by chance (the "null hypothesis");
2. They are not two samples from the same population, but instead come from two different populations (the "alternative hypothesis").
Convention: if the difference is so large that it will occur by chance only 5% of the time, believe it's "real" and not just due to chance.
The logic of z-scores underlies many statistical tests.
1. Scores are normally distributed around their mean.
2. Sample means are normally distributed around the population mean.
3. Differences between sample means are normally distributed around zero ("no difference").
We can exploit these phenomena in devising tests to help us decide whether or not an observed difference between sample means is due to chance. | https://www.slideserve.com/quintana/the-normal-distribution-and-z-scores | 18 |
16 | Whether studying for a college course or teaching your children how to do math, basic mathematics skills are imperative to daily success. Math is used while balancing a checkbook, determining what to buy at the grocery store as well as in academic setting. Allow these refresher facts to provide you the basic math skills you need in order to remain proficient.
The numbers that are added in math problems are called addends; the answer to the problem is the sum. To set up a addition problem, you write the numbers one under one another in a column (the larger numbers at the top and smaller ones at the bottom). The numbers are added from right to left. Start with the right column. If the sum of that column adds up to 9 or below, write that sum below the line of all the numbers. If the sum is higher than 9, write the sums of that number under the line. For example, 9 + 2 + 3 = 14. Write 4 under the line. The tens are carried to the next column to the left, place that number above the top number. Continue to add each column and carry over as needed until all the numbers are added and you have computed a sum.
The higher number in a subtraction problem, the minuend, is subtracted by the lower number, the subtrahend. When you do a subtraction problem, look for the particular number that must be added to the small number to equal the highest number in the problem. For example, in the problem 25 - 8, you are looking for a number that when added to 8 equals 25.
Sciencing Video Vault
To set up a subtraction problem, write the smaller problem under the largest number, so that units are properly lined up, for example tens by tens, hundreds by hundreds and so on. Start at the right (just like in addition), and subtract the bottom digit from the digit above it. For example, in 25 - 12, subtract 2 from 5, equals 3. Place this number below the line that is placed underneath the subtrahend or the lower number. Continue to do this from right to left. Sometimes a number must be regrouped just like in addition. Follow the same rule as in addition by carrying the additional number over and continuing the same routine.
The top number in this type of problem is the multiplicand and the bottom number, the multiplier. The answer of the problem is the product. Keep numbers that are largest on top and ones smaller on the bottom, draw a line underneath. Multiply from right to left in columns. For example, take 25 x 7. Start with 5 x 7. The product is 35. Place the ones number, the 5, underneath the line and carry the 3 to the tens column (the column to the left of the furthest right column). From there, multiply 7 x 2, which is 14, and add 3, which is 17. Place this number to the left of the 5 in the ones column. The numbers under the column should read 175, the product.
The number that is divided into another number is the divisor, the larger number is the dividend, and the answer to the problem is the quotient. The purpose of division is discovering the number of times that the divisor can go into the dividend.
For example, divide 6 into 27. You can use multiplication to help you in this type of problem. Consider how many times 6 can be multiplied to get closest to 27. The answer is 4. 4 x 6 is equal to 24. Place 4 above the 7 in the problem. Place 24 below 27 and do the subtraction. What remains is 3; this is your remainder, as it is lower then your divisor. Just place an R3 (R stands for remainder) next to the 4 to show your answer.
Another important math skill involves fractions. A fraction includes a numerator, the top number; and a denominator, the bottom number. Fractions can equate to percentages too. For example, 2/5 is equal to 40 percent. Fractions can be greater or lesser than 1. | https://sciencing.com/basic-mathematics-skills-5397215.html | 18 |
11 | A circular reference is a series of references where the last object references the first, resulting in a closed loop.
A circular reference is not to be confused with the logical fallacy of a circular argument. Although a circular reference will often be unhelpful and reveal no information, such as two entries in a book index referring to each other, it is not necessarily so that a circular reference is of no use. Dictionaries, for instance, must always ultimately be a circular reference since all words in a dictionary are defined in terms of other words, but a dictionary nevertheless remains a useful reference. Sentences containing circular references can still be meaningful;
- Her brother gave her a kitten; his sister thanked him for it.
is circular but not without meaning. Indeed, it can be argued that self-reference is a necessary consequence of Aristotle's Law of non-contradiction, a fundamental philosophical axiom. In this view, without self-reference, logic and mathematics become impossible, or at least, lack usefulness.
In computer programmingEdit
Circular references can appear in computer programming when one piece of code requires the result from another, but that code needs the result from the first. For example:
Function A will show the time the sun last set based on the current date, which it can obtain by calling Function B. Function B will calculate the date based on the number of times the moon has orbited the earth since the last time Function B was called. So, Function B asks Function C just how many times that is. Function C doesn't know, but can figure it out by calling Function A to get the time the sun last set.
The entire set of functions is now worthless because none of them can return any useful information whatsoever. This leads to what is technically known as a livelock. It also appears in spreadsheets when two cells require each other's result. For example, if the value in Cell A1 is to be obtained by adding 5 to the value in Cell B1, and the value in Cell B1 is to be obtained by adding 3 to the value in Cell A1, no values can be computed. (Even if the specifications are A1:=B1+5 and B1:=A1-5, there is still a circular reference. It doesn't help that, for instance, A1=3 and B1=-2 would satisfy both formulae, as there are infinitely many other possible values of A1 and B1 that can satisfy both instances.)
A circular reference represents a big problem in computing.
In ISO Standard SQL circular integrity constraints are implicitly supported within a single table. Between multiple tables circular constraints (e.g. foreign keys) are permitted by defining the constraints as deferrable (See CREATE TABLE for PostgreSQL and DEFERRABLE Constraint Examples for Oracle). In that case the constraint is checked at the end of the transaction not at the time the DML statement is executed. To update a circular reference two statements can be issued in a single transaction that will satisfy both references once the transaction is committed.
A distinction should be made with processes containing a circular reference between those that are incomputable and those that are an iterative calculation with a final output. The latter may fail in spreadsheets not equipped to handle them but are nevertheless still logically valid.
Circular reference in worksheets can be a very useful technique for solving implicit equations such as the Colebrook equation and many others, which might otherwise require tedious Newton-Raphson algorithms in VBA or use of macros.
- Terry A. Osborn, The future of foreign language education in the United States, pp.31-33, Greenwood Publishing Group, 2002 ISBN 0-89789-719-6.
- Robert Fiengo, Robert May, Indices and identity, pp.59-62, MIT Press, 1994 ISBN 0-262-56076-3.
- "Solve Implicit Equations Inside Your Worksheet By Anilkumar M, Dr Sreenivasan E and Dr Raghunathan K".. | https://en.m.wikipedia.org/wiki/Circular_reference | 18 |
41 | AP Language and Composition Resources:
Basic terms list
Glossary of terms
Key Assignment words
Levels of language
Analysis aids: PAPA square
Writing Guidelines for essays. Pattern of Attack for M/C
AP basic rubric 1
Synthesis Rhetorical Analysis Argument Analysis
Introduction to Rhetoric PowerPoint Close
Basic A.P. LANGUAGE/LITERARY AND RHETORICAL TERMS
Every A.P. English student should be familiar with the basic terms used to describe style—imagery, diction, syntax, figures of speech, structure and tone. In addition, here are some more relatively useful terms used to describe techniques of language and argument.
Figures of Speech metaphor
simile personification apostrophe allusion hyperbole
irony understatement paradox oxymoron epithet
Figures of rhetoric parallelism periodic sentence
loose or cumulative sentence balanced sentence
rhetorical question antithesis
Methods of development cause and effect classification
process analysis definition comparison/contrast analogy and metaphor
Forms and genres
Modes of discourse:
mock heroic allegory fable
logos pathos ethos
Devices in logic argument syllogism
major/minor premise induction/deduction rebuttal qualify/qualifier essential/operational fallacies in logic:
Miscellaneous point of view audience
voice literal/figurative denotation/connotation theme/motif elegy/elegiac
Sound devices alliteration onomatopoeia
-faulty causality (false cause)
-begging the question
Glossary of Important Grammatical, Literary, and Rhetorical Terms
Abstract: refers to things that are intangible, that is, which are perceived not through the senses but by the mind, such as truth, God, education, vice, transportation, poetry, war, love.
Ad Hominem: / ăd hŏm ə nəm /An argument based on the failings of an adversary rather than on the merits of the case; a logical fallacy that involves a personal attack.; relies on intimidation and ignorance
Ad Misericordiam fallacy Appeal to Pity, attempts to evoke feelings of pity or compassion not relevant
Allegory: Extending a metaphor so that objects, persons, and actions in a text are equated with meanings that lie outside the text. A sustained metaphor continued through whole sentences or eve through a whole discourse.
Alliteration: repetition of the same sound beginning several words in sequence.
*Let us go forth to lead the land we love. J. F. Kennedy, Inaugural
**Veni, vidi, vici. Julius Caesar
Allusion: A brief, usually indirect reference to a person, place, or event--real or fictional.
Ambiguity: The presence of two or more possible meanings in any passage. The result of expressing an idea in words that have two or more possible meanings. Ambiguity is sometimes unintentional, as when a pronoun is used without a clear referent [antecedent].
Anacoluthon: ( n -k -lth n ) lack of grammatical sequence; a change in the grammatical construction within the same sentence.
*Agreements entered into when one state of facts exists -- are they to be maintained regardless of changing conditions? J.
Anadiplosis: ( n -d -pl s s) ("doubling back") the rhetorical repetition of one or several words; specifically, repetition of a word that ends one clause at the beginning of the next.
*Men in great place are thrice servants: servants of the sovereign or state; servants of fame; and servants of business.
Analogy: Reasoning or arguing from parallel cases. A set of point-by point resemblances between members of the same class or between different classes.
Anaphora: (ə năf ər ə) the repetition of a word or phrase at the beginning of successive phrases, clauses or lines.
*We shall not flag or fail. We shall go on to the end. We shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our island, whatever the cost may be, we shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills. We shall never surrender. Churchill.
Anastrophe: transposition of normal word order; most often found in Latin in the case of prepositions and the words they control.
Anastrophe is a form of hyperbaton.
*The helmsman steered; the ship moved on; yet never a breeze up blew. Coleridge, The Rime of the Ancient Mariner
Anathem: ( -n th -m ) an object of intense dislike; a curse or strong denunciation (often used adjectivally without the article)
Antecedent: ( n t -s d nt)The noun or noun phrase referred to by a pronoun.
Anthimeria: the use of one part of speech (or word class) for another
“Hey, my checker reached the other side; king me.”
Anticlimax: see Bathos
Antimetabole (an-tee-meh-TA-boe-lee): Figure of emphasis in which the words in one phrase or clause are replicated, exactly or closely, in reverse grammatical order in the next phrase or clause; an inverted order of repeated words in adjacent phrases or clauses (A-B, B-A). (Related to Chiasmus but exact wording)
"The absence of evidence is not the evidence of absence." -- Carl Sagan
"We do not stop playing because we grow old; we grow old because we stop playing." -- Benjamin Franklin
Antistrophe: [æn ‘tɪs trə fɪ]repetition of the same word or phrase at the end of successive clauses. (Also called Epistrophe)
*In 1931, ten years ago, Japan invaded Manchukuo -- without warning. In 1935, Italy invaded Ethiopia -- without warning. In
1938, Hitler occupied Austria -- without warning. In 1939, Hitler invaded Czechoslovakia -- without warning. Later in
1939, Hitler invaded Poland -- without warning. And now Japan has attacked Malaya and Thailand -- and the United
States --without warning. Franklin D. Roosevelt
Antithesis: [æn ‘tɪ θɪ sɪs]opposition, or contrast of ideas or words in a balanced or parallel construction.
*Extremism in defense of liberty is no vice, moderation in the pursuit of justice is no virtue. Barry Goldwater
*Brutus: Not that I loved Caesar less, but that I loved Rome more. Shakespeare, Julius Caesar
*The vases of the classical period are but the reflection of classical beauty; the vases of the archaic period are beauty itself." Sir John Beazley
Aphorism/epigram [‘æf ə ‘rɪ zəm]A concise statement designed to make a point or a common belief. (1) A tersely phrased statement of a truth or opinion.
(2) A brief statement of a principle.
Example: A penny saved is a penny earned- - Ben Franklin
Aposiopesis: [‘æ pə ˌ’saɪ ə ‘pi sɪs]a form of ellipse by which a speaker comes to an abrupt halt, seemingly overcome by passion (fear, excitement, etc.) or modesty.
Apostrophe: A rhetorical term for breaking off discourse to address some absent person or thing;a sudden turn from the general audience to address a specific group or person or personified abstraction absent or present.
*For Brutus, as you know, was Caesar's angel.
Judge, O you gods, how dearly Caesar loved him. Shakespeare, Julius Caesar
Appeal to Authority: A fallacy in which a speaker or writer seeks to persuade not by giving evidence but by appealing to the respect people have for a famous person or institution.
Appeal to Flattery Sucking Up, (plain folks is a subcategory) Apple Polishing: whenever a person attempts to compliment or flatter another in order to get her to accept the truth of a proposition. In some instances, it may be implied that the person deserves the flattery because they accept the position in question.
Appeal to Ignorance; A fallacy that uses an opponent's inability to disprove a conclusion as proof of the conclusion's correctness.
Appeal to Prejudice fallacy: Arguing from a bias or emotional identification or involvement with an idea (argument, doctrine, institution, etc.).
Appositive: [ə ‘pɒ zɪ tɪv] a noun, noun phrase, or noun clause which follows a noun or pronoun and renames or describes the noun or pronoun. Appositives are often set off by commas.
“Tom, the new student, arrived the second week of class.” Tom= the new student
Jimbo Gold, a professional magician, performed at my sister's birthday party. Jimbo Gold=a professional magician,.
Archaism: [‘ɑrk keɪ ˌɪzəm]use of an older or obsolete form.
*Pipit sate upright in her chair
Some distance from where I was sitting; T. S. Eliot, "A Cooking Egg"
Argument ad ignorantium fallacy: that, because a premise cannot be proven false, the premise must be true; or that, because a premise cannot be proven true, the premise must be false. Arguer offers a conclusion and calls on opponent to disprove the conclusion. If opponent cannot, arguer asserts conclusion is true.
Argument ad populum fallacy: An argument that if many believe it so, it is so
Argument: A course of reasoning aimed at demonstrating truth or falsehood.
Argumentum ad hominem fallacy "to the man":Name Calling and Personal Attack: uses derogatory implications or innuendos to turn people against a rival. Name-calling by itself is not technically an ad hominem fallacy. Rather, the attack on the arguer must occur as an ostensible attack on an argument. If no argument is offered there is no ad hominem (or any other kind of fallacy) at work.
Argumentum ad populum fallacy Bandwagon appeal to popularity, authority of the many: relies on the uncritical acceptance of others' opinions; something must be true because many or all people believe it is.
Argumentum ad traditio fallacy Appeal to Inertia (don't rock the boat ) based on the principle of "letting sleeping dogs lie". We should continue to do things as they have been done in the past. We shouldn't challenge time-honored customs or traditions.
Assonance: repetition of the same sound in words close to each other.
*Thy kingdom come, thy will be done.
Asyndeton: [æ ‘sɪn dɪ tən]lack of conjunctions between coordinate phrases, clauses, or words. (opposite of polysyndeton).
*We shall pay any price, bear any burden, meet any hardships, support any friend, oppose any foe to assure the survival and
the success of liberty. J. F. Kennedy, Inaugural
*But, in a larger sense, we cannot dedicate, we cannot consecrate, we cannot hallow this ground. Lincoln, Gettysburg
Atmosphere: the general feeling or emotion created in the reader at a given point in a literary work (mood)
Audience one's listener or readership; those to whom a speech or piece of writing is addressed
Balanced sentence a type of parallel construction in which two major sentence elements that contrast with one another are balanced between a coordinating conjunction
Bandwagon: An appeal that tries to get its audience to adopt and opinion that “everyone else” is said to hold. Popular with advertisers and political candidates, attempts to get us to jump on a bandwagon rely on our eagerness to be on the winning side.
*The candidate that “everyone is voting for,” and the jeans “everyone will be wearing”
Bathos: An abrupt, unintended transition in style from the exalted to the commonplace, producing a ludicrous effect, an anticlimax.
* He has seen the ravages of war, he has known natural catastrophes, he has been to singles bars: (Woody Allen).
OR Insincere or grossly sentimental pathos: "a richly textured man who . . . can be . . . sentimental to the brink of bathos" (Kenneth L. Woodward).
OR Banality; triteness.
Begging the question: see circular reasoning
Cacophony: [kə ‘kɒ fə nɪ] harsh joining of sounds.
*We want no parlay with you and your grisly gang who work your wicked will. W. Churchill
Catachresis: [‘kæt ə ‘kri: sɪs]a harsh metaphor involving the use of a word beyond its strict sphere. See Synethsesia
*I listen vainly, but with thirsty ear. MacArthur, Farewell Address
Cause and effect A method of development in which a writer analyzes reasons for an action, event, or decision, or analyzes its consequences.
Cherry picking/Card Stacking fallacy the act of pointing at individual cases or data that seem to confirm a particular position, while ignoring a significant portion of related cases or data that may contradict that position.
Chiasmus: [kaɪ ˈæz məs]two corresponding pairs arranged not in parallels (a-b-a-b) but in inverted order (a-b-b-a); from shape of the Greek letter chi (X).
*Those gallant men will remain often in my thoughts and in my prayers always. MacArthur
Circular Argument: ( Begging the question) An argument that commits the logical fallacy of assuming what it is attempting to prove. Asserts an unsupported premise and later restates that premise as a conclusion.
*Here is an example [of begging the question] taken from an article on exclusive men's clubs in San Francisco. In explaining why these clubs have such long waiting lists, Paul B. 'Red' Fay, Jr. (on the roster of three of the clubs) said, 'The reason
there's such a big demand is because everyone wants to get in them.' In other words, there is a big demand because there is a big demand."
(H. Kahane and N. Cavender, Logic and Contemporary Rhetoric: The Use of Reason in Everyday Life, 10th ed. | http://essaydocs.org/resources.html | 18 |
13 | Kinetics: Rate of Reaction, Order of Equation
Edited by Jamie (ScienceAid Editor), Jen Moreau, SmartyPants, vcdanht
The rate of reaction is the change of concentration of a substance in a given time. Whether that be reactants disappearing or products appearing; the rate of reaction is affected by the temperature. However, the chemical equation does not tell us how fast things happen, for this we use a rate equation.
[A] means the concentration of A, k is the rate constant and m and n are the order of the reaction. the values of m and n can only be found by experimentation and have nothing to do with the moles of substance. The addition of m and n gives you overall order.
Orders of Reaction
In a zero order reaction, the rate=k since anything to the power of 0 is 1. Therefore the rate of reaction does not change over time and the [A] (for example) changes linearly.
In a first order reaction, the rate and concentration are proportional. This means that if the concentration is doubled, the rate will double.
And finally, in a second order reaction, if the concentration is doubled, the rate will increase by a factor of 4 (22). The speed at which the [A] changes is much faster in a second order reaction.
Determining the Rate
As we said above, the orders of a reaction can only be found by using experimental data, so now you will learn how to do that.
Here we need to find m and n in the equation: rate = k[A]m[B]n.
In order to do this, you need to compare individual experiments. Look at experiment 1 and then experiment 2. [A] is doubled and [B] is the same, so we can deduce the order with respect to A. The rate increases by a factor of 4 which is 22 so m is 2.
Now we do the same thing for n. If you compare experiments 2 and 3, the initial [B] is doubled, the initial rate stays the same so n is 0. Therefore the overall equation is: rate = k[A]2[B]0.
The overall order is 2, and this can be seen when comparing experiments 1 and 4, both concentrations are trebled, and the rate increases by a factor of 9.
Units of k
The units of k (the rate constant) vary according to the overall order of the equation. Fortunately, it follows an easy to follow pattern, so remembering the below table should be very easy.
|Overall order (n+m)||Units of k|
|2||mol-1 dm3 s-1|
|3||mol-2 dm6 s-1|
|4||mol-3 dm9 s-1|
Referencing this Article
If you need to reference this article in your work, you can copy-paste the following depending on your required format:
APA (American Psychological Association)
Kinetics: Rate of Reaction, Order of Equation. (2017). In ScienceAid. Retrieved Oct 23, 2018, from https://scienceaid.net/chemistry/physical/kinetics.html
MLA (Modern Language Association) "Kinetics: Rate of Reaction, Order of Equation." ScienceAid, scienceaid.net/chemistry/physical/kinetics.html Accessed 23 Oct 2018.
Chicago / Turabian ScienceAid.net. "Kinetics: Rate of Reaction, Order of Equation." Accessed Oct 23, 2018. https://scienceaid.net/chemistry/physical/kinetics.html.
Categories : Physical
Recent edits by: SmartyPants, Jen Moreau, Jamie (ScienceAid Editor) | https://scienceaid.net/chemistry/physical/kinetics.html | 18 |
12 | Now that we know how to use variables and constants, we can begin to use them with operators. Operators are built into the C++ language, and they are mostly made out of signs (some languages use keywords instead).
We used this operator before, and it should already be known to you. For those who didn't read the previous tutorials, we will give a short description.
With an assignment (=) operator you can assign a value to a variable.
For example: A = 5; or B = -10; or A = B;
Let's look at A = B: the value that is stored in B will be stored in A. The initial value of A will be lost.
So if we say:
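(A minimal reconstruction of the missing code, consistent with the sentence below; the value 20 for B is an assumption:)

    B = 20;
    A = B;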
Then A will contain the value twenty.
The following expression is also valid in C++: A = B = C = 10;
The variables A, B and C will now contain the value ten.
Calculations (arithmetic operators)
There are different operators that can be used for calculations which are listed in the following table:
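(A likely reconstruction of the missing table; C++'s five arithmetic operators are standard:)
|+||addition|
|-||subtraction|
|*||multiplication|
|/||division|
|%||modulo (remainder)|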
Now that we know the different operators, let’s calculate something:
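(A minimal reconstruction of the missing example, chosen to match the note below; the starting values are assumptions:)

    A = 5;
    A = A + 3;   // A now contains 8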
Note: The value stored in A at the end of the program will be eight.
Compound assignments can be used when you want to modify the value of a variable by performing an operation on the value currently stored in that variable (for example: A = A + 1); a concrete snippet follows the list below.
- Writing <var> += <expr> is the same as <var> = <var> + <expr>.
- Writing <var> -= <expr> is the same as <var> = <var> - <expr>.
- Writing <var> /= <expr> is the same as <var> = <var> / <expr>.
- Writing <var> *= <expr> is the same as <var> = <var> * <expr>.
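(An illustrative snippet, not from the original tutorial:)

    int A = 6;
    A += 2;   // same as A = A + 2; A now contains 8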
Decrease and increase operators
The increase operator (++) and the decrease operator (--) are used to increase or reduce the value stored in the variable by one.
Example: A++; is the same as A += 1; or A = A + 1;
A characteristic of this operator is that it can be used as a prefix or as a suffix (before or after the variable). Example: A++; and ++A; have exactly the same meaning, but in some expressions they can produce different results.
For instance, in the case that the decrease operator is used as a prefix (--A), the value is decreased before the result of the expression is evaluated. Example:
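(Reconstruction of the missing example, consistent with the note below; the starting value 10 is an assumption:)

    My_var = 10;
    A = --My_var;   // My_var is decreased first: My_var = 9, A = 9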
Note: My_var is decreased before the value is copied to A. So My_var contains 9 and A will contain 9.
In the case that it is used as a suffix (A--), the value stored in A is decreased after being evaluated, and therefore the value stored before the decrease operation is evaluated in the outer expression. Example:
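(Again reconstructed to match the note below; the starting value 10 is an assumption:)

    My_var = 10;
    A = My_var--;   // A receives the old value: A = 10, My_var = 9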
Note: The value of My_var is copied to A and then My_var is decreased. So My_var will contain 9 and A will contain 10.
Relation or equal operators
With the relation and equal operators it is possible to make a comparison between two expressions. The result is a Boolean value that can be true or false. See the table for the operators:
|==||Equal|
|!=||Not equal|
|>||Greater than|
|<||Less than|
|>=||Greater than or equal|
|<=||Less than or equal|
You have to be careful that you don’t use one equal sign (=) instead of two equal signs (==). The first one is an assignment operator, the second one is a compare operator.
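(A short illustration of the pitfall, not from the original tutorial:)

    int A = 4;
    if (A = 5)  { /* assignment: A becomes 5, and the condition is always true */ }
    if (A == 5) { /* comparison: true only when A equals 5 */ }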
Logical operators are mainly used to control program flow. Usually, you will find them as part of an if, while, or some other control statement. The operators are:
- <op1> || <op2> – A logical OR of the two operands
- <op1> && <op2> – A logical AND of the two operands
- ! <op1> – A logical NOT of the operand.
Logical operands allow a program to make decisions based on multiple conditions. Each operand is considered a condition that can be evaluated to a true or false value. Then the value of the conditions is used to determine the overall value of the statement. Take a look at the tables below:
Table: && operator (AND)
|<op1>||<op2>||<op1> && <op2>|
|true||true||true|
|true||false||false|
|false||true||false|
|false||false||false|
Table: || operator (OR)
|<op1>||<op2>||<op1> || <op2>|
|true||true||true|
|true||false||true|
|false||true||true|
|false||false||false|
The bitwise operators are similar to the logical operators, except that they work with bit patterns. Bitwise operators are used to change individual bits in an operand.
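(A brief illustration, not from the original tutorial:)

    unsigned char a = 6;        // binary 0110
    unsigned char b = a | 1;    // 0111: OR sets bit 0
    unsigned char c = a & 3;    // 0010: AND keeps only the low two bits
    unsigned char d = a << 1;   // 1100: shift left by one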
That is all for this tutorial.
Operators
Once introduced to variables and constants, we can begin to operate with them by using operators. What follows is a complete list of operators. At this point, it is likely not necessary to know all of them, but they are all listed here to also serve as reference.
Assignment operator (=)
The assignment operator assigns a value to a variable.
x = 5;
This statement assigns the integer value 5 to the variable x. The assignment operation always takes place from right to left, and never the other way around:
x = y;
This statement assigns to variable x the value contained in variable y. The value of x at the moment this statement is executed is lost and replaced by the value of y.
Consider also that we are only assigning the value of y to x at the moment of the assignment operation. Therefore, if y changes at a later moment, it will not affect the new value taken by x.
For example, let's have a look at the following code - I have included the evolution of the content stored in the variables as comments:
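(The program was stripped in extraction; this reconstruction follows the surrounding notes, which report final values of 4 and 7:)

    // assignment operator
    #include <iostream>
    using namespace std;

    int main ()
    {
      int a, b;   // a:?,  b:?
      a = 10;     // a:10, b:?
      b = 4;      // a:10, b:4
      a = b;      // a:4,  b:4
      b = 7;      // a:4,  b:7
      cout << "a:" << a << " b:" << b;
      return 0;
    }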
This program prints on screen the final values of a and b (4 and 7, respectively). Notice how a was not affected by the final modification of b, even though we declared a = b earlier.
Assignment operations are expressions that can be evaluated. That means that the assignment itself has a value, and -for fundamental types- this value is the one assigned in the operation. For example:
y = 2 + (x = 5);
In this expression, y is assigned the result of adding 2 and the value of another assignment expression (which has itself a value of 5). It is roughly equivalent to:
x = 5;
y = 2 + x;
With the final result of assigning 7 to y.
The following expression is also valid in C++:
x = y = z = 5;
It assigns 5 to all three variables: x, y and z; always from right-to-left.
Arithmetic operators ( +, -, *, /, % )
The five arithmetical operations supported by C++ are:
|+||addition|
|-||subtraction|
|*||multiplication|
|/||division|
|%||modulo|
Operations of addition, subtraction, multiplication and division correspond literally to their respective mathematical operators. The last one, the modulo operator, represented by a percentage sign (%), gives the remainder of a division of two values. For example:
x = 11 % 3;
results in variable x containing the value 2, since dividing 11 by 3 results in 3, with a remainder of 2.
Compound assignment (+=, -=, *=, /=, %=, >>=, <<=, &=, ^=, |=)
Compound assignment operators modify the current value of a variable by performing an operation on it. They are equivalent to assigning the result of an operation to the first operand:
y += x; is equivalent to y = y + x;
x -= 5; is equivalent to x = x - 5;
x /= y; is equivalent to x = x / y;
price *= units + 1; is equivalent to price = price * (units+1);
and the same for all other compound assignment operators. For example:
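(Reconstruction of the missing example program, consistent with the surrounding text; it prints 5:)

    // compound assignment operators
    #include <iostream>
    using namespace std;

    int main ()
    {
      int a, b = 3;
      a = b;    // a: 3
      a += 2;   // equivalent to a = a + 2; a: 5
      cout << a;
      return 0;
    }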
Increment and decrement (++, --)
Some expressions can be shortened even more: the increase operator (++) and the decrease operator (--) increase or reduce by one the value stored in a variable. They are equivalent to +=1 and to -=1, respectively. Thus:
++x;
x += 1;
x = x + 1;
are all equivalent in their functionality; the three of them increase by one the value of x.
In the early C compilers, the three previous expressions may have produced different executable code depending on which one was used. Nowadays, this type of code optimization is generally performed automatically by the compiler, thus the three expressions should produce exactly the same executable code.
A peculiarity of this operator is that it can be used both as a prefix and as a suffix. That means that it can be written either before the variable name (++x) or after it (x++). Although in simple expressions like x++ or ++x, both have exactly the same meaning; in other expressions in which the result of the increment or decrement operation is evaluated, they may have an important difference in their meaning: In the case that the increase operator is used as a prefix (++x) of the value, the expression evaluates to the final value of x, once it is already increased. On the other hand, in case that it is used as a suffix (x++), the value is also increased, but the expression evaluates to the value that x had before being increased. Notice the difference:
|Example 1||Example 2|
|x = 3; y = ++x; // x: 4, y: 4||x = 3; y = x++; // x: 4, y: 3|
In Example 1, the value assigned to y is the value of x after being increased. While in Example 2, it is the value x had before being increased.
Relational and comparison operators ( ==, !=, >, <, >=, <= )
Two expressions can be compared using relational and equality operators. For example, to know if two values are equal or if one is greater than the other.
The result of such an operation is either true or false (i.e., a Boolean value).
The relational operators in C++ are:
|==||Equal to|
|!=||Not equal to|
|<||Less than|
|>||Greater than|
|<=||Less than or equal to|
|>=||Greater than or equal to|
Here are some examples:
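(Reconstructed examples, consistent with the surrounding text:)

    (7 == 5)   // evaluates to false
    (5 > 4)    // evaluates to true
    (3 != 2)   // evaluates to true
    (6 >= 6)   // evaluates to true
    (5 < 5)    // evaluates to false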
Of course, it's not just numeric constants that can be compared, but just any value, including, of course, variables. Suppose that a=2, b=3 and c=6, then:
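(Reconstructed to match the discussion that follows:)

    (a == 5)     // evaluates to false, since a is not equal to 5
    (a*b >= c)   // evaluates to true, since (2*3 >= 6) is true
    (b+4 > a*c)  // evaluates to false, since (3+4 > 2*6) is false
    ((b=2) == a) // evaluates to true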
Be careful! The assignment operator (operator =, with one equal sign) is not the same as the equality comparison operator (operator ==, with two equal signs); the first one (=) assigns the value on the right-hand side to the variable on its left, while the other (==) compares whether the values on both sides of the operator are equal. Therefore, in the last expression ((b=2) == a), we first assigned the value 2 to b and then we compared it to a (which also stores the value 2), yielding true.
Logical operators ( !, &&, || )
The operator ! is the C++ operator for the Boolean operation NOT. It has only one operand, to its right, and inverts it, producing false if its operand is true, and true if its operand is false. Basically, it returns the opposite Boolean value of evaluating its operand. For example:
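(Reconstructed examples:)

    !(5 == 5)   // evaluates to false, because (5 == 5) is true
    !(6 <= 4)   // evaluates to true, because (6 <= 4) is false
    !true       // evaluates to false
    !false      // evaluates to true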
The logical operators && and || are used when evaluating two expressions to obtain a single relational result. The operator && corresponds to the Boolean logical operation AND, which yields true if both its operands are true, and false otherwise. The following panel shows the result of operator && evaluating the expression a && b:
|&& OPERATOR (and)|
|a||b||a && b|
|true||true||true|
|true||false||false|
|false||true||false|
|false||false||false|
The operator || corresponds to the Boolean logical operation OR, which yields true if either of its operands is true, thus being false only when both operands are false. Here are the possible results of a || b:
||| OPERATOR (or)|
|a||b||a || b|
|true||true||true|
|true||false||true|
|false||true||true|
|false||false||false|
When using the logical operators, C++ only evaluates what is necessary from left to right to come up with the combined relational result, ignoring the rest. Therefore, in the last example ((5==5)||(3>6)), C++ evaluates first whether 5==5 is true, and if so, it never checks whether 3>6 is true or not. This is known as short-circuit evaluation, and works like this for these operators:
For operator &&: if the left-hand side expression is false, the combined result is false (the right-hand side expression is never evaluated).
For operator ||: if the left-hand side expression is true, the combined result is true (the right-hand side expression is never evaluated).
This is mostly important when the right-hand expression has side effects, such as altering values:
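(Reconstructed snippet; i and n are the variables discussed in the sentence below:)

    if ( (i < 10) && (++i < n) ) { /*...*/ }   // note that the condition increases i by one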
Here, the combined conditional expression would increase i by one, but only if the condition on the left of && is true, because otherwise, the condition on the right-hand side (++i<n) is never evaluated.
Conditional ternary operator ( ? )
The conditional operator evaluates an expression, returning one value if that expression evaluates to true, and a different one if the expression evaluates as false. Its syntax is:
condition ? result1 : result2
If condition is true, the entire expression evaluates to result1, and otherwise to result2.
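(Reconstruction of the missing example program, consistent with the explanation below; it prints 7:)

    // conditional operator
    #include <iostream>
    using namespace std;

    int main ()
    {
      int a, b, c;
      a = 2;
      b = 7;
      c = (a > b) ? a : b;   // c is assigned the greater of a and b
      cout << c << '\n';
      return 0;
    }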
In this example, a was 2, and b was 7, so the expression being evaluated (a>b) was not true, thus the first value specified after the question mark was discarded in favor of the second value (the one after the colon), which was b (with a value of 7).
Comma operator ( , )
The comma operator (,) is used to separate two or more expressions that are included where only one expression is expected. When the set of expressions has to be evaluated for a value, only the right-most expression is considered.
For example, the following code:
a = (b=3, b+2);
would first assign the value 3 to b, and then assign b+2 to variable a. So, at the end, variable a would contain the value 5 while variable b would contain value 3.
Bitwise operators ( &, |, ^, ~, <<, >> )
Bitwise operators modify variables considering the bit patterns that represent the values they store.
|&||Bitwise AND|
|| (pipe)||Bitwise inclusive OR|
|^||Bitwise exclusive OR|
|~||Unary complement (bit inversion)|
|<<||Shift bits left|
|>>||Shift bits right|
Explicit type casting operator
Type casting operators allow converting a value of a given type to another type. There are several ways to do this in C++. The simplest one, which has been inherited from the C language, is to precede the expression to be converted by the new type enclosed between parentheses (()):
int i;
float f = 3.14;
i = (int) f;
The previous code converts the floating-point number 3.14 to an integer value (3); the remainder is lost. Here, the typecasting operator was (int). Another way to do the same thing in C++ is to use the functional notation: preceding the expression to be converted by the type and enclosing the expression between parentheses:
i = int(f);
Both ways of casting types are valid in C++.
sizeof
This operator accepts one parameter, which can be either a type or a variable, and returns the size in bytes of that type or object:
x = sizeof(char);
Here, x is assigned the value 1, because char is a type with a size of one byte.
The value returned by sizeof is a compile-time constant, so it is always determined before program execution.
Other operators
Later in these tutorials, we will see a few more operators, like the ones referring to pointers or the specifics for object-oriented programming.
Precedence of operators
A single expression may have multiple operators. For example:
x = 5 + 7 % 2;
In C++, the above expression always assigns 6 to variable x, because the % operator has a higher precedence than the + operator, and is always evaluated before. Parts of the expressions can be enclosed in parentheses to override this precedence order, or to make explicitly clear the intended effect. Notice the difference:
x = 5 + (7 % 2);   // x = 6 (same as without parentheses)
x = (5 + 7) % 2;   // x = 0
From greatest to smallest priority, C++ operators are evaluated in the following order:
|2||Postfix (unary)||postfix increment / decrement (++ --)||Left-to-right|
|3||Prefix (unary)||prefix increment / decrement (++ --); bitwise NOT / logical NOT (~ !); reference / dereference (& *); allocation / deallocation (new, delete)||Right-to-left|
|5||Arithmetic: scaling||multiply, divide, modulo (* / %)||Left-to-right|
|6||Arithmetic: addition||addition, subtraction (+ -)||Left-to-right|
|7||Bitwise shift||shift left, shift right (<< >>)||Left-to-right|
|9||Equality||equality / inequality (== !=)||Left-to-right|
|11||Exclusive or||bitwise XOR (^)||Left-to-right|
|12||Inclusive or||bitwise OR (|)||Left-to-right|
|15||Assignment-level expressions||assignment / compound assignment (= += -= *= /= %= >>= <<= &= ^= |=)||Right-to-left|
When an expression has two operators with the same precedence level, grouping determines which one is evaluated first: either left-to-right or right-to-left.
Enclosing all sub-statements in parentheses (even those unnecessary because of their precedence) improves code readability. | http://vssypr.unas.cz/86-compound-assignment-operators-in-c.php | 18 |
24 | Protein synthesis worksheet part A: read the following and take notes on your paper: protein synthesis is the process used by the body to make proteins. Start studying the protein synthesis review worksheet; learn vocabulary, terms, and more with flashcards, games, and other study tools. Say it with DNA: protein synthesis worksheet: practice pays student handout; having studied the process by which DNA directs the synthesis of proteins, you should be. S-B-8-2 protein synthesis worksheet and key; name: protein synthesis; 1. use the molecular template to make DNA, mRNA, tRNA, and amino acids; DNA coding strand DNA. Evaluate: worksheet; this is a simple review of protein synthesis; NGSS standard HS-LS1A: structure and function; published by Mrs Jessica Lupold.
Question: Bio1, name: _____, protein synthesis worksheet. You isolate the following piece of DNA from a unicorn hair found at a crime.
1. DNA & protein synthesis worksheet, name _____, section A: DNA timeline. On the 'websites-genetics' page, click on 'DNA interactive', then click.
Chapter 12: protein synthesis worksheet. Protein synthesis is a complex process made up of the two processes transcription and translation; in this activity. Protein synthesis worksheet.docx: 2. nucleus; 1. RNA; 3. ribosomes read RNA; 4. ribosomes make the protein; this happens in the cell or on the endoplasmic reticulum; 7. protein.
DNA & protein synthesis, section 10-1 DNA: 1. what does DNA stand for; 2. what is DNA's primary function; 3. what is the function of proteins; 4. what are the. Honors biology, ninth grade, Pendleton High School: the details of protein synthesis are integral to much research and discovery; say it with DNA worksheet. | http://pmessayekuk.dosshier.me/protein-synthesis-worksheet.html | 18
17 | Think of a photon as a sphere that spins around an axis as it tumbles through space. The polarization literally is the orientation of this axis. In technical terms, the axis is the B (magnetic) component; the plane orthogonal to it is the E (electric) component, a.k.a. the plane of polarization.
Polarization of visible light can be observed using a polarizing filter (the lenses of Polaroid sunglasses will work). While viewing through the filter, rotate it; if linearly polarized light is present, the degree of illumination will change. An easy first phenomenon to observe is to view, at sunset, the horizon at a 90° angle from the Sun.
Common sources of light, such as the Sun and the electric light bulb emit what is known as unpolarized light. More specialised sources, such as certain kinds of discharge tubes and lasers, produce polarized light. The difference between these two types of light is caused by the behaviour of the electromagnetic fields that make up the light.
As described by Maxwell's equations, light is a transverse wave made up of an interacting electric field E and a magnetic field B. The oscillations of these two interacting fields cause the fields to self-propagate in a certain direction, at the speed of light. In most cases, the directions of the electric field, the magnetic field, and the direction of propagation of the light are all mutually perpendicular. That is, both the E and B fields oscillate in a direction at right angles to the direction that the light is moving, and also at right angles to each other.
(In optics, it is usual to define the polarization in terms of the direction of the electric field, and disregard the magnetic field since it is almost always perpendicular to the electric field.)
If the direction of oscillation of the electric field E is fixed, the light wave is said to be linearly polarized. There are two possible linear polarization states, with their E fields orthogonal to one another. Any other angle of linear polarization can be constructed as a superposition of these two states.
The direction of polarization is arbitrary with respect to the light itself. It is usual to label the two linear polarization states in accordance with some other external reference. For example, the terms horizontally and vertically polarized are generally used when light is propagating in free space. If the light is interacting with a surface, such as a mirror, lens or some other interface between two media, the terms s- and p-polarized are used. For example, consider light reflecting off a mirror at some angle. If the electric field of the light is oscillating perpendicular to the plane of the diagram, the light is termed s-polarized. If it is oscillating in the plane of the diagram, it is termed p-polarized. Other terms used for s-polarization are sigma-polarized and sagittal plane polarized. Similarly, p-polarized light is also referred to as pi-polarized and tangential plane polarized.
If the direction of the electric field E is not fixed, but rotates as the light propagates, the light is said to be circularly polarized. Two possible independent circular polarization states exist, termed left-hand or right-hand circularly polarized depending on whether the electric field is rotating in a counter-clockwise or clockwise sense, respectively, when looking in the direction of the light propagation. Elliptical polarization can be thought of as a combination of circular and linear polarization.
If the light consists of many incoherent waves with randomly varying polarization, the light is said to be unpolarized. It is possible to convert unpolarized light to polarized light by using a polarizer. One such device is Polaroid® sheet. This is a sheet of plastic with molecules that are arranged such that they absorb any light passing through it which has an electric field oscillating in a given direction; this has the effect of linearly polarizing the light. Other devices can split an unpolarized beam into two beams of orthogonal linear polarization; they are generally constructed from certain arrangements of prisms and optical coatings.
The angle of polarization of linearly polarized light can be rotated using a device known as a half-wave plate. Similarly, linear polarization can be converted to circular polarization and vice versa with the use of a quarter-wave plate.
The possible polarization states can be mapped to a sphere, with left circular at +z, right circular at -z, horizontal at +x, vertical at -x, and the diagonals at +y and -y. Passing through a dichroic wave plate is equivalent to a rotation of the sphere. The amplitude of polarization x that passes through a polarizer that passes y is 1/2 the distance between x and the antipode of y; the intensity is (x·y+1)/2.
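(A small numerical sketch of the intensity rule above; it is not part of the original article, and the two unit vectors are arbitrary choices on the sphere just described:)

    #include <iostream>

    int main() {
        // x = horizontal polarization (+x), y = diagonal polarizer axis (+y)
        double x[3] = {1.0, 0.0, 0.0};
        double y[3] = {0.0, 1.0, 0.0};
        double dot = x[0]*y[0] + x[1]*y[1] + x[2]*y[2];
        std::cout << "transmitted intensity fraction: " << (dot + 1.0) / 2.0 << '\n';   // 0.5
        return 0;
    }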
In electrostatics, the polarization is the vector field that results from permanent or induced dipole moments in a dielectric material. The polarization vector P is defined as the dipole moment per unit volume.
In a linear, homogeneous, isotropic dielectric, P = ε0χE, where ε0 is the permittivity of free space and χ is the electric susceptibility of the medium.
If the polarization P is not proportional to the electric field E, the medium is termed nonlinear and is described by the field of nonlinear optics. If the direction of P is not aligned with E, as in many crystals, the medium is anisotropic and is described by crystal optics. | http://infomutt.com/p/po/polarization.html | 18 |
30 | On Monday 2 July, the CryoSat-2 spacecraft was orbiting as usual, just over 700 kilometres above Earth’s surface. But that day, mission controllers at the European Space Agency (ESA) realized they had a problem: a piece of space debris was hurtling uncontrollably towards the €140-million (US$162-million) satellite, which monitors ice on the planet.
As engineers tracked the paths of both objects, the chances of a collision slowly increased — forcing mission controllers to take action. On 9 July, ESA fired the thrusters on CryoSat-2 to boost it into a higher orbit. Just 50 minutes later, the debris rocketed past at 4.1 kilometres a second.
This kind of manoeuvre is becoming much more common each year, as space around Earth grows increasingly congested. In 2017, commercial companies, military and civil departments and amateurs lofted more than 400 satellites into orbit, over 4 times the yearly average for 2000–2010. Numbers could rise even more sharply if companies such as Boeing, OneWeb and SpaceX follow through on plans to deploy hundreds to thousands of communications satellites into space in the next few years. If all these proposed ‘megaconstellations’ go up, they will roughly equal the number of satellites that humanity has launched in the history of spaceflight.
All that traffic can lead to disaster. In 2009, a US commercial Iridium satellite smashed into an inactive Russian communications satellite called Cosmos-2251, creating thousands of new pieces of space shrapnel that now threaten other satellites in low Earth orbit — the zone stretching up to 2,000 kilometres in altitude. Altogether, there are roughly 20,000 human-made objects in orbit, from working satellites to small shards of solar panels and rocket pieces. And satellite operators can’t steer away from all potential collisions, because each move consumes time and fuel that could otherwise be used for the spacecraft’s main job.
Concern about space junk goes back to the beginning of the satellite era, but the number of objects in orbit is rising so rapidly that researchers are investigating new ways of attacking the problem. Several teams are trying to improve methods for assessing what is in orbit, so that satellite operators can work more efficiently in ever-more-crowded space. Some researchers are now starting to compile a massive data set that includes the best possible information on where everything is in orbit. Others are developing taxonomies of space junk — working out how to measure properties such as the shape and size of an object, so that satellite operators know how much to worry about what’s coming their way. And several investigators are identifying special orbits that satellites could be moved into after they finish their missions so they burn up in the atmosphere quickly, helping to clean up space.
The alternative, many say, is unthinkable. Just a few uncontrolled space crashes could generate enough debris to set off a runaway cascade of fragments, rendering near-Earth space unusable. “If we go on like this, we will reach a point of no return,” says Carolin Frueh, an astrodynamical researcher at Purdue University in West Lafayette, Indiana.
Astronomers and others have worried about space junk since the 1960s, when they argued against a US military project that would send millions of small copper needles into orbit. The needles were meant to enable radio communications if high-altitude nuclear testing were to wipe out the ionosphere, the atmospheric layer that reflects radio waves over long distances. The Air Force sent the needles into orbit in 1963, where they successfully formed a reflective belt. Most of the needles fell naturally out of orbit over the next three years, but concern over ‘dirtying’ space nevertheless helped to end the project.
It was one of the first examples of the public viewing space as a landscape that should be kept clean, says Lisa Ruth Rand, a historian of science in Philadelphia, Pennsylvania, and a fellow with the American Historical Association and NASA.
Since the Soviet Union launched the first satellite, Sputnik, in 1957, the number of objects in space has surged, reaching roughly 2,000 in 1970, about 7,500 in 2000 and about 20,000 known items today. The two biggest spikes in orbital debris came in 2007, when the Chinese government blew up one of its satellites in a missile test, and in the 2009 Iridium–Cosmos collision. Both events generated thousands of fresh fragments, and they account for about half of the 20-plus satellite manoeuvres that ESA conducts each year, says Holger Krag, head of ESA’s space-debris office in Darmstadt, Germany.
Each day, the US military issues an average of 21 warnings of potential space collisions. Those numbers are likely to rise dramatically next year, when the Air Force switches on a powerful new radar facility on Kwajalein in the Pacific Ocean. That facility will allow the US military to detect objects smaller than today’s 10-centimetre limit for low Earth orbit, and this could increase the number of tracked objects by a factor of five.
Even as our ability to monitor space objects increases, so too does the total number of items in orbit. That means companies, governments and other players in space are having to collaborate in new ways to avoid a shared threat. Since the 2000s, international groups such as the Inter-Agency Space Debris Coordination Committee have developed guidelines for achieving space sustainability. Those include inactivating satellites at the end of their useful lifetimes by venting leftover fuel or other pressurized materials that could lead to explosions. The intergovernmental groups also recommend lowering satellites deep enough into the atmosphere that they will burn up or disintegrate within 25 years.
But so far, only about half of all missions have abided by this 25-year guideline, says Krag. Operators of the planned megaconstellations say they will be responsible stewards of space, but Krag worries that the problem could increase, despite their best intentions. “What happens to those that fail or go bankrupt?” he asks. “They are probably not going to spend money to remove their satellites from space.”
Traffic cops for space
In theory, satellite operators should have plenty of room for all these missions to fly safely without ever nearing another object. So some scientists are tackling the problem of space junk by trying to understand where all the debris is to a high degree of precision. That would alleviate the need for many unnecessary manoeuvres that today are used to avoid potential collisions. “If you knew exactly where everything was, you would almost never have a problem,” says Marlon Sorge, a space-debris specialist at the Aerospace Corporation in El Segundo, California.
The field is called space-traffic management, because it’s analogous to managing traffic on the roads or in the air. Think about a busy day at an airport, says Moriba Jah, an astrodynamicist at the University of Texas at Austin: planes line up in the sky like a string of pearls, landing and taking off close to one another in a carefully choreographed routine. Air-traffic controllers know the location of the planes down to 1 metre in accuracy.
The same can’t be said for space debris. Not all objects in orbit are known, and even those included in databases are tracked to varying levels of precision. On top of that, there is no authoritative catalogue that accurately lists the orbits of all known space debris.
Jah illustrates this with a web-based database that he developed, called ASTRIAGraph. It draws on several sources, such as catalogues maintained by the US and Russian goverments, to visualize the locations of objects in space. When he types in an identifier for a particular space object, ASTRIAGraph draws a purple line to designate its orbit.
Only this doesn’t quite work for a number of objects, such as a Russian rocket body launched in 2007 and designated in the database as object number 32280. When Jah enters that number, ASTRIAGraph draws two purple lines: the US and Russian sources contain two completely different orbits for the same object. Jah says that it is almost impossible to tell which is correct, unless a third source of information could help to cross-correlate the correct location.
ASTRIAGraph currently contains some, but not all, of the major sources of information about tracking space objects. The US military catalogue — the largest such database publicly available — almost certainly omits information on classified satellites. The Russian government similarly holds many of its data close. Several commercial space-tracking databases have sprung up in the past few years, and most of those do not share openly.
Jah describes himself as a space environmentalist: “I want to make space a place that is safe to operate, that is free and useful for future generations.” Until that happens, he argues, the space community will continue devolving into a tragedy of the commons, in which all spaceflight operators are polluting a common resource.
He and other space environmentalists are starting to make headway, at least when it comes to US space policy. Jah testified on space-traffic management in front of Congress last year, at the invitation of Ted Cruz, a Republican senator from Texas who co-introduced a space-regulations bill this July. In June, President Donald Trump also signed a directive on space policy that, among other things, would shift responsibility for the US public space-debris catalogue from the military to a civilian agency — probably the Department of Commerce, which regulates business.
The space-policy directive is a rare opportunity to discuss space junk at the highest levels of the US government. “This is the first time we’re really having this conversation in a serious fashion,” says Mike Gold, vice-president for regulatory, policy and government contracts at Maxar Technologies of Westminster, Colorado, which owns and operates a number of satellites.
The orbiting dead
The space around Earth is filled with zombies: some 95% of all objects in orbit are dead satellites or pieces of inactive ones. When someone operating an active satellite gets an alert about an object on a collision course, it would be helpful to know how dangerous that incoming debris is. “With more and more objects, and the uncertainties we currently have, you just get collision warnings no end,” says Frueh. (Micrometeorites represent a separate threat and can’t be tracked at all.)
To assess the risk of an impending collision, satellite operators need to know what the object is, but tracking catalogues have little information about many items. In those cases, the military and other space trackers use telescopes to gather clues in the short period before a potential collision.
Working with the Air Force, Frueh and her colleagues are developing methods to rapidly decipher details of orbiting objects even when very little is known about them. By studying how an object reflects sunlight as it passes overhead, for instance, she can deduce whether it is tumbling or stable — a clue to whether or not it is operational. Her team is also experimenting with a machine-learning algorithm that could speed up the process of characterizing items, work she will describe on 14 September at a space-tracking meeting in Maui, Hawaii.
Once researchers know what an orbiting object is made of, they have a number of potential ways to reduce its threat. Some sci-fi-tinged proposals involve using magnets to sweep up space junk, or lasers to obliterate or deflect debris in orbit. In the coming weeks, researchers at the University of Surrey in Guildford, UK, will experiment with a net to ensnare a test satellite. The project, called RemoveDEBRIS, will then redirect the satellite into an orbit that will re-enter the atmosphere.
But such active approaches to cleaning up space junk aren’t likely to be practical over the long term, given the huge number of objects in orbit. So some other experts consider the best way of mitigating space junk to be a passive approach. This takes advantage of the gravitational pulls of the Sun and the Moon, known as resonances, that can put the satellites on a path to destruction. At the University of Arizona in Tucson, astrodynamicist Aaron Rosengren is developing ways to do so.
Rosengren first came across the idea when studying the fates of satellites in medium Earth orbit (MEO). These travel at altitudes anywhere between about 2,000 kilometres up, where low Earth orbit ends, and 35,000 kilometres up, where geostationary orbits begin.
Satellites in low Earth orbit can be disposed of by forcing them to re-enter the atmosphere, and most satellites in the less heavily trafficked geostationary region can be safely placed in ‘graveyard’ orbits that never interact with other objects. But in MEO, satellite trajectories can be unstable over the long term because of gravitational resonances.
An early hint that spacecraft operators could harness this phenomenon came from ESA’s INTEGRAL γ-ray space telescope, which launched in 2002. INTEGRAL travels in a stretched-out orbit that spans all the way from low Earth orbit, through MEO, and into geostationary orbit. It would normally have remained in space for more than a century, but in 2015, ESA decided to tweak its orbit. With a few small thruster burns, mission controllers placed it on a path to interact with gravitational resonances. It will now re-enter the atmosphere in 2029, rather than decades later.
In 2016, Rosengren and his colleagues in France and Italy showed that there is a dense web of orbital resonances that dictates how objects behave in MEO (J. Daquin et al. Celest. Mech. Dyn. Astr. 124, 335–366; 2016). Rosengren thinks this might offer a potential solution. There are paths in this web of resonances that lead not to MEO, but directly into the atmosphere, and operators could take advantage of them to send satellites straight to their doom. “We call it passive disposal through resonances and instabilities,” says Rosengren. “Yeah, we need a new name.”
Other researchers have explored the concept before, but Rosengren is trying to push it into the mainstream. “It’s one of the newer things in space debris,” he says.
These disposal highways in the sky could be easy to access. At a space conference in July in Pasadena, California, Rosengren and his colleagues reported on their analysis of US Orbiting Geophysical Observatory satellites from the 1960s. The scientists found that changing the launch date or time by as little as 15 minutes could lead to huge differences in how long a satellite remains in orbit. Such information could be used to help calculate the best times to depart the launch pad.
Being proactive now could head off a lot of trouble down the road, as operators of satellites such as CryoSat-2 have found. When ESA decided to take evasive action in early July, its engineers had to scramble and work through the weekend to get ready for the manoeuvre. Once the space junk had safely flown by, CryoSat-2 took a few days to get back into its normal orbit, says Vitali Braun, a space-debris engineer with ESA.
But the alerts didn’t stop coming. In the weeks that followed, mission controllers had to shift various satellites at least six times to dodge debris. And on 23 August, they nudged the Sentinel-3B satellite out of the way of space junk for the first time. It had been in orbit for only four months.
Nature 561, 24-26 (2018) | http://www.nature.com/articles/d41586-018-06170-1?curator=MediaREDEF&error=cookies_not_supported&code=f06d7892-7a0a-43a7-a969-286b917744a7 | 18 |
18 | Parallax, Distance and Parsecs
In Astronomy, parallax is a method used to determine the distance to the closest stars. This technique for measuring astronomical distances is very important because it is a geometric method and independent of the object being observed.
Astronomers use an effect called parallax to measure distances to nearby stars. The principle of parallax can easily be demonstrated by holding your finger up at arm's length. Close one eye, then the other, and notice how your finger appears to move in relation to the background. This occurs because each eye sees a slightly different view, since the eyes are separated by a few inches.
If you measure the distance between your eyes and the distance your finger appears to move, then you can calculate the length of your arm.
This same principle can be used on a larger scale to calculate the distance to an object in the sky, only we use different points on the Earth's orbit instead of looking through alternate eyes. This is a fantastic way of measuring distance as it relies solely on geometry. Parallax calculations are based on measuring two angles and the included side of a triangle formed by the star, Earth on one side of its orbit and Earth six months later on the other side of its orbit.
Calculating parallax requires that the object's Right Ascension and Declination be recorded accurately so that we know the object's precise location on the celestial sphere.
We take a measurement of the position of an object relative to the other background stars during the winter months, and then again 6 months later, in the summer, when the Earth has moved 180° around its orbit around the Sun to give maximum separation distance.
In this diagram (not to scale), during the summer the position of the object appears to be at point A in the sky. Six months later, during the winter, it appears to be at point B. The imaginary line between the two opposite positions in the Earth's orbit is called the baseline. Half the baseline is the radius of the Earth's orbit.
We know the radius of the Earth's orbit (r), and we can calculate the angle θ (measured in radians) from the observed apparent motion. Finally, we just need a little trigonometry to calculate the distance, d.
Equation 8 - Pythagoras Triangle Trig: tan θ = r / d
Since the measured value of θ is going to be very small, we can approximate tan θ ≈ θ. Rearranging to solve for d gives us:
Equation 9 - Pythagoras Triangle Trig: d = r / θ
This equation forms the basis for a new unit of length called the parsec (pc). A parsec is defined as the distance at which 1 AU subtends 1 arcsecond. So an object located at 1 pc would, by definition, have a parallax of 1 arcsecond.
The parallax measured for α Centauri is 0.74 arcseconds. Calculate the distance in light years to α Centauri.
Equation 10 - Distance Calculation using Parallax: d (parsecs) = 1 / p (arcseconds)
Equation 11 - Distance Parallax Calculation: d = 1 / 0.74 = 1.351 pc
1 AU is equal to 1.4960×10^11 meters and 1 parsec is equal to 3.26 light years, which makes α Centauri 4.405 light years away.
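(A quick computational check of the worked example; this sketch is not part of the original lesson, and uses the 3.26 ly per parsec figure quoted above:)

    #include <iostream>

    int main() {
        double parallax_arcsec = 0.74;                  // measured parallax of alpha Centauri
        double distance_pc = 1.0 / parallax_arcsec;     // d (parsecs) = 1 / p (arcseconds)
        double distance_ly = distance_pc * 3.26;        // 1 parsec = 3.26 light years
        std::cout << distance_ly << " light years\n";   // about 4.405
        return 0;
    }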
Last updated on: Wednesday 24th January 2018 | https://perfectastronomy.com/parallax-distance-parsecs/ | 18 |
20 | Arithmetic and Geometric Sequences Multiple Choice Questions and Answers 8 PDF Download
Learn arithmetic and geometric sequences MCQs, grade 9 math test 8 for online learning courses and test prep. Geometric sequence multiple choice questions (MCQs), arithmetic and geometric sequences quiz questions and answers include math worksheets for online think through math courses distance learning.
Math multiple choice questions (MCQ): a sequence of numbers, each of which after the first is obtained by multiplying the preceding one by a fixed number, is known as: with options finite sequence, geometric sequence, infinite sequence and arithmetic sequence; geometric sequence quiz for online school teachers' job preparation with common interview questions and answers. Free math study guide to learn the geometric sequence quiz and attempt multiple choice questions based tests.
MCQs on Arithmetic and Geometric Sequences Worksheets 8 Quiz PDF Download
MCQ: A sequence of numbers, each of which after the first is obtained by multiplying the preceding one by a fixed number, is known as
- geometric sequence
- finite sequence
- infinite sequence
- arithmetic sequence | https://www.mcqlearn.com/grade9/math/arithmetic-and-geometric-sequences.php?page=8 | 18 |
11 | Temporal range: Cretaceous–recent
[Image: Welwitschia mirabilis female plant with cones]
[Distribution map, separated by genus: green, Welwitschia; blue, Gnetum; red, Ephedra; purple, Gnetum and Ephedra]
Gnetophyta is a division of plants, grouped within the gymnosperms (which also include conifers, cycads, and ginkgos), that consists of some 70 species across the three relict genera: Gnetum (family Gnetaceae), Welwitschia (family Welwitschiaceae), and Ephedra (family Ephedraceae). Fossilized pollen attributed to a close relative of Ephedra has been dated as far back as the Early Cretaceous. Though diverse and dominant in the Paleogene and the Neogene, only three families, each containing a single genus, are still alive today. The primary difference between gnetophytes and other gymnosperms is the presence of vessel elements, a system of conduits that transport water within the plant, similar to those found in flowering plants. Because of this, gnetophytes were once thought to be the closest gymnosperm relatives to flowering plants, but more recent molecular studies have brought this hypothesis into question.
Though it is clear they are all closely related, the exact evolutionary inter-relationships between gnetophytes are unclear. Some classifications hold that all three genera should be placed in a single order (Gnetales), while other classifications say they should be distributed among three separate orders, each containing a single family and genus. Most morphological and molecular studies confirm that the genera Gnetum and Welwitschia diverged from each other more recently than they did from Ephedra.
Ecology and morphology
Unlike most biological groupings, it is difficult to find many common characteristics between all of the members of the gnetophytes. The two characteristics most commonly used are the presence of enveloping bracts around both the ovules and microsporangia, as well as a micropylar projection of the outer membrane of the ovule that produces a pollination droplet, though these are highly specific compared to the similarities between most other plant divisions. L. M. Bowe refers to the gnetophyte genera as a "bizarre and enigmatic" trio because the gnetophytes' specialization to their respective environments is so complete that they hardly resemble each other at all. Gnetum species are mostly woody vines in tropical forests, though the best-known member of this group, Gnetum gnemon, is a tree native to western Malesia. The one remaining species of Welwitschia, Welwitschia mirabilis, native only to the dry deserts of Namibia and Angola, is a ground-hugging species with only two large strap-like leaves that grow continuously from the base throughout the plant's life. Ephedra species, known as "jointfirs" in the United States, have long slender branches which bear tiny scale-like leaves at their nodes. Infusions from these plants have been traditionally used as a stimulant, but ephedrine is a controlled substance today in many places because of the risk of harmful or even fatal overdosing.
Knowledge of gnetophyte history through fossil discovery has increased greatly since the 1980s. Gnetophyte fossils have been found that date from the Permian and the Triassic. Fossils dating back to the Jurassic have been found, though whether or not they belong to the gnetophytes is uncertain. Overall, the fossil record is richest in the early Cretaceous, when fossils of plants, seeds, and pollen that can clearly be assigned to the gnetophytes were deposited.
With just three well-defined genera within an entire division, there still is understandable difficulty in establishing an unambiguous interrelationship among them; in earlier times matters were even more difficult and we find for example Pearson in the early 20th century speaking of the class Gnetales, rather than the order. G. H. M. Lawrence referred to them as an order, but remarked that the three families were distinct enough to deserve recognition as separate orders. Foster & Gifford accepted this principle, and placed the three orders together in a common class for convenience, which they called Gnetopsida. In general the evolutionary relationships among the seed plants still are unresolved, and the Gnetophyta have played an important role in the formation of phylogenetic hypotheses. Molecular phylogenies of extant gymnosperms have conflicted with morphological characters with regard to whether the gymnosperms as a whole (including gnetophytes) comprise a monophyletic group or a paraphyletic one that gave rise to angiosperms. At issue is whether the Gnetophyta are the sister group of angiosperms, or whether they are sister to, or nested within, other extant gymnosperms. Numerous fossil gymnosperm clades once existed that are morphologically at least as distinctive as the four living gymnosperm groups, such as Bennettitales, Caytonia and the glossopterids. When these gymnosperm fossils are considered, the question of gnetophyte relationships to other seed plants becomes even more complicated. Several hypotheses, illustrated below, have been presented to explain seed plant evolution.
Recent research by Lee EK, Cibrian-Jaramillo A, et al. (2011) suggests that the Gnetophyta are a sister group to the rest of the gymnosperms, contradicting the anthophyte hypothesis, which held that gnetophytes were sister to the flowering plants.
From the early twentieth century, the anthophyte hypothesis was the prevailing explanation for seed plant evolution, based on shared morphological characters between the gnetophytes and angiosperms. In this hypothesis, the gnetophytes, along with the extinct order Bennettitales, are sister to the angiosperms, forming the "anthophytes". Some morphological characters that were suggested to unite the anthophytes include vessels in wood, net-veined leaves (in Gnetum only), lignin chemistry, the layering of cells in the apical meristem, pollen and megaspore features (including thin megaspore wall), short cambial initials, and lignin syringal groups. However, most genetic studies, as well as more recent morphological analyses, have rejected the anthophyte hypothesis. Several of these studies have suggested that the gnetophytes and angiosperms have independently derived characters, including flower-like reproductive structures and tracheid vessel elements, that appear shared but are actually the result of parallel evolution.
In the gnetifer hypothesis, the gnetophytes are sister to the conifers, and the gymnosperms are a monophyletic group, sister to the angiosperms. The gnetifer hypothesis first emerged formally in the mid-twentieth century, when vessel elements in the gnetophytes were interpreted as being derived from tracheids with circular bordered pits, as in conifers. It did not gain strong support, however, until the emergence of molecular data in the late 1990s. Although the most salient morphological evidence still largely supports the anthophyte hypothesis, there are some more obscure morphological commonalities between the gnetophytes and conifers that lend support to the gnetifer hypothesis. These shared traits include: tracheids with scalariform pits with tori interspersed with annular thickenings, absence of scalariform pitting in primary xylem, scale-like and strap-shaped leaves of Ephedra and Welwitschia; and reduced sporophylls.
The gnepine hypothesis is a modification of the gnetifer hypothesis, and suggests that the gnetophytes belong within the conifers as a sister group to the Pinaceae. According to this hypothesis, the conifers as currently defined are not a monophyletic group, in contrast with molecular findings that support its monophyly. All existing evidence for this hypothesis comes from molecular studies since 1999. However, the morphological evidence remains difficult to reconcile with the gnepine hypothesis. If the gnetophytes are nested within conifers, they must have lost several shared derived characters of the conifers (or these characters must have evolved in parallel in the other two conifer lineages): narrowly triangular leaves (gnetophytes have diverse leaf shapes), resin canals, a tiered proembryo, and flat woody ovuliferous cone scales. These kinds of major morphological changes are not without precedent in the Pinaceae, however: the Taxaceae, for example, have lost the classical cone of the conifers in favor of a single-terminal ovule surrounded by a fleshy aril.
Some partitions of the genetic data suggest that the gnetophytes are sister to all of the other extant seed plant groups. However, there is no morphological evidence nor examples from the fossil record to support the gnetophyte-sister hypotheses.
- "Morphology and affinities of an Early Cretaceous Ephedra".
- Arber, E.A.N.; Parkin, J. (1908). "Studies on the evolution of the angiosperms: the relationship of the angiosperms to the Gnetales". Annals of Botany. 22: 489–515.
- Peter R. Crane; Patrick Herendeen; Else Marie Friis (2004). "Fossils and plant phylogeny". American Journal of Botany. 91 (10): 1683–1699. doi:10.3732/ajb.91.10.1683. PMID 21652317.
- Bowe, L.M.; Coat, G.; dePamphilis, C.W. (2000). "Phylogeny of seed plants based on all three genomic compartments: Extant gymnosperms are monophyletic and Gnetales' closest relatives are conifers". Proceedings of the National Academy of Sciences. 97 (8): 4092–4097. doi:10.1073/pnas.97.8.4092. PMC 18159. PMID 10760278.
- Gugerli, F.; Sperisen, C.; Buchler, U.; Brunner, L.; Brodbeck, S.; Palmer, J.D.; Qiu, Y.L. (2001). "The evolutionary split of Pinaceae from other conifers: evidence from an intron loss and a multigene phylogeny". Molecular Phylogenetics and Evolution. 21 (2): 167–175. doi:10.1006/mpev.2001.1004. PMID 11697913.
Introduction to Calculus/Differentiation
This lesson assumes you have a working knowledge of the topics presented in its prerequisite lessons.
- 1 Resources
- 2 Prelude
- 3 Notion of secant and slope
- 4 The Derivative
- 5 Fundamental Rules of Differentiation
- 6 Exponentials and logarithms
- 7 Trigonometric functions
- 8 Hyperbolic functions
Arithmetic is about what you can do with numbers. Algebra is about what you can do with variables. Calculus is about what you can do with functions. Just as in arithmetic there are things you can do to a number to give another number, such as square it or add it to another number, in calculus there are two basic operations that given a function yield new and intimately related functions. The first of these operations is called differentiation, and the new function is called the derivative of the original function.
This set of notes deals with the fundamentals of differentiation. For information about the second functional operator of calculus, visit Integration by Substitution after completing this unit.
Before we dive in, we will warm up with an excursion into the mathematical workings of interest in banking.
Let us suppose that we deposit an amount $A_0$ in the bank on New Year's Day, and furthermore that every year on the year the amount is augmented by a rate $r$ times the present amount. Then the amount in the bank on any given New Year's Day, $t$ years after the first, is given by the expression
$$A_t = A_0 (1 + r)^t$$
Unfortunately, if we withdraw the money three days before the New Year, we don't get any of the interest payment for that year. A fairer system would involve calculating interest $n$ times a year at the rate $r/n$. In fact this gives us a slightly different value even if we take our money out on a New Year's Day, because every time we calculate interest, we receive interest on our previous interest. The amount we receive with this improved system is given by the expression
$$A_t = A_0 \left(1 + \frac{r}{n}\right)^{nt}$$
With this flexible system, we could set $n$ to $12$ to compound every month, or to $365$ to compound every day, or to about $31{,}536{,}000$ to compound every second. But why stop there? Why not compound the interest every moment? What is really meant by that is this: as we increase $n$, does the value of $A_t$ grow ever greater with $n$, or does it approach some reasonable quantity? If the latter is the case, then it is meaningful to ask, "What does $A_t$ approach?" As we can see from the following table of sample values (taking $r = 0.025$ and $t = 1$), this is in fact the case.
| $n$ | $(1 + 0.025/n)^n$ |
| --- | --- |
| 1 | 1.02500 |
| 12 | 1.02529 |
| 365 | 1.02531 |
| 31,536,000 | 1.02532 |
| 100,000,000 | 1.02532 |
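The table values can be reproduced numerically. Here is a minimal sketch in Python (assuming, as the table does, a rate of $r = 0.025$ and $t = 1$; the function and variable names are ours, not the lesson's):

```python
# Reproduce the table: n-times-per-year compounding for r = 0.025, t = 1.

def compounded(principal, r, n, t):
    """Value after t years when interest is compounded n times per year."""
    return principal * (1 + r / n) ** (n * t)

for n in [1, 12, 365, 31_536_000, 100_000_000]:
    print(f"n = {n:>11,}: {compounded(1.0, 0.025, n, 1):.5f}")
```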
As we can see, as $n$ goes off toward infinity, $A_t$ approaches a finite value. Taking this to heart, we may come to our final system in which we define $A_t$ as follows:
$$A_t = A_0 \lim_{n \to \infty} \left(1 + \frac{r}{n}\right)^{nt}$$
Thus we now set $A_t$ not to $A_0 (1 + r/n)^{nt}$ evaluated for some large $n$, but rather to the limit of that value as $n$ approaches infinity. This is the formula for continually compounded interest. To clean up this formula, note that neither $A_0$ nor $t$ "interferes" in any way with the evaluation of the limit, and they may consequently be moved outside of the limit without affecting the value of the expression:
$$A_t = A_0 \left[\lim_{n \to \infty} \left(1 + \frac{r}{n}\right)^{n}\right]^{t}$$
We can see from the form of the expression that $A_t$ increases exponentially with $t$, much as it did in our very first equation. The difference is that the original base $1 + r$ has been replaced with the base $\lim_{n \to \infty} (1 + r/n)^n$, which we have yet to simplify.
Take a moment to step back and do the following exercises:
- Without looking back, see if you can write down the expressions that represent
- yearly interest
- semiannual interest
- monthly interest
- interest times a year
- continually compounded interest
- Think about how much money you have. Figure out how long you would have to leave your money in a bank that compounds interest monthly before you became a millionaire, with a yearly interest rate of
- .02 (common for a savings account)
- .07 (average gain in the US stock market over a reasonably long period).
Finding the Base
In order to shed some light on the expression $\lim_{n \to \infty} (1 + r/n)^n$, we shall make use of the following expansion, known as the Binomial Theorem:
$$(x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^{k}$$
By applying it to our limit, we get
$$\lim_{n \to \infty} \left(1 + \frac{r}{n}\right)^{n} = \lim_{n \to \infty} \sum_{k=0}^{n} \binom{n}{k} \frac{r^k}{n^k} = \lim_{n \to \infty} \sum_{k=0}^{n} \frac{n(n-1)\cdots(n-k+1)}{n^k} \cdot \frac{r^k}{k!} = \sum_{k=0}^{\infty} \frac{r^k}{k!}$$
This last step may seem mystifying at first. What happened to the limit? And where did all of the $n$'s go? In fact it was the evaluation of the limit that allowed us to remove the $n$'s. More exactly, as $n \to \infty$, each of the ratios $\frac{n}{n}, \frac{n-1}{n}, \ldots, \frac{n-k+1}{n}$ tends to $1$, so that the factors of $n$ at the top left and bottom right of each term cancel to produce the last expression.
Take a moment to look over the following exercises. Take the time to follow the trains of thought that are newest to you.
The Birth of $e$
Now comes a real surprise. As it turns out, the infinite polynomial above is in fact exponential in $r$. That is, $\sum_{k=0}^{\infty} \frac{r^k}{k!} = b^{r}$ for some base $b$. In order to show this far-from-obvious fact, I offer the following.
To this last infinite series of numbers, define the quantity $e$ to be its value at $r = 1$:
$$e = \sum_{k=0}^{\infty} \frac{1}{k!} = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots$$
so that $\sum_{k=0}^{\infty} \frac{r^k}{k!} = e^{r}$.
$e$, an irrational (and in fact transcendental) number, has the approximate value 2.71828, which you may easily verify on a standard pocket or graphing calculator.
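You can also check the value by summing the series directly. A small sketch (the cutoff of 15 terms is an arbitrary choice; the partial sums converge very quickly because of the factorial in the denominator):

```python
import math

# e as the sum of reciprocal factorials: 1 + 1 + 1/2! + 1/3! + ...
total = sum(1 / math.factorial(k) for k in range(15))
print(total)   # ~2.718281828459...
print(math.e)  # the library constant, for comparison
```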
There are a few things to think about.
- The first line in the preceding derivation was motivated by my knowledge of the outcome.
- Convince yourself that the two expressions are in fact equal to one another.
- Evaluate the term for and . How does that compare to ? How about with ?
- Now that you have convinced yourself that I may do it, ask yourself why I would do it.
- Using the reverse Binomial Theorem, do you understand how it leads to the next expression?
- Convince yourself that the two expressions are in fact equal to one another.
- Is the equation $e^{r} = \sum_{k=0}^{\infty} \frac{r^k}{k!}$ something that one would predict merely from the rules of exponents or distribution?
- What makes certain seemingly uninteresting numbers, such as $0$, $1$, $\pi$, and $e$, so profoundly central to mathematics?
Back to the Start
From here, everything cascades back to our original goal, namely to find a usable formula for continually compounded interest:
$$A_t = A_0 e^{rt}$$
And there she is.
Take a moment to do the following exercises.
- Think about how much money you have. How long will it take to become millionaire if you leave the money in a bank with yearly interest of .025
- that compounds interest yearly?
- that compounds interest continually?
- Seeing as the values with and without continually compounded interest are very close to one another, what does that tell you about the two equations used?
- Both formulas are of the form _____. Compare the various values that we have put in this blank, especially the bases in the equations for yearly and continually compounded interest.
- How close in value is $1 + r$ to $e^{r}$? Does that surprise you?
- Now look at the infinite series version of the function $e^{r}$. Does it still surprise you that $1 + r$ and $e^{r}$ are so close in value?
The formula itself, however, is quite forgettable. In fact, as you may have guessed, the importance of compounded interest pales in comparison to the importance of the ideas we stumbled upon on the way, namely limits and . It is these two things that beg for us to go further into the heart of the life and being of functions. That wish is called calculus. And it all starts rather innocently with the derivative…
Notion of secant and slope
The slope of a curve is most usefully approached by considering the simplest curve, the straight line. So, imagine a line plotted on square graph paper, of the kind familiar to just about every schoolchild. What can you say about such a line? We suppose for our discussion here that the line goes off your sheet of paper on both sides, and keeps going forever. Take your page and look at it. A line might be flat, that is parallel to the bottom of the page. It might be vertical, parallel to the sides of the page, or it might lie between these two extremes, not as flat as the first, and not as steep as the second.
The first part of the idea of 'slope' is that of steepness. How steep is the line, taking the horizontal line and a vertical line as our two extremes?
Our flat horizontal line has a slope of zero - nothing happens to the y's whatever you do to the x's, think of cycling in parts of the Netherlands for example.
A line at 45 degrees to the horizontal (that is, exactly half way between vertical (90 degrees) and horizontal (0 degrees)) has a slope of 1 (this would be a brutal, nearly impossible hill for a bicycle, and very tough on foot). As it goes across one unit, it also goes up (or down) one unit.
Our vertical line is more interesting, if harder to cycle on. The slope is not defined, and as our line gets closer and closer to vertical, the slope gets bigger and bigger without limit.
The second part of the idea of slope captures something slightly different. It captures the idea of direction. Look at your line again. As it goes to your right does it go up the page, or down the page? If it was a road going up a hill would it be hard to follow it on a bicycle(going up), or very easy (going down)? This is expressed in the slope of a line by saying that a line has a positive slope if, as it goes across, it also goes up, (or as the y's increase the x's increase too). A line is said to have a negative slope if it goes down as it goes across, (or as the y's increase the x's decrease). As a cyclist you want a negative slope, unless you're in training.
Given a function $f$, we define the derivative $f'$ to be
$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$
This definition is motivated by the proportion $\frac{f(x+h) - f(x)}{h}$, which for any $h$ defines the slope of a line when $f$ is linear. Because of the nature of the calculation, the derivative can be figuratively thought of as the ratio between an infinitesimal $dy$ and an infinitesimal $dx$, and is often written $\frac{dy}{dx}$. Both functional notation and infinitesimal or Leibniz notation have their virtues. In operator theory, the derivative of a function $f$ is sometimes written as $Df$.
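To see the limit at work numerically, one can evaluate the difference quotient for ever-smaller values of $h$. The sketch below is our own illustration, not part of the original lesson; the choices $f(x) = x^2$ and $x = 3$ are arbitrary:

```python
def difference_quotient(f, x, h):
    """The slope (f(x+h) - f(x)) / h whose limit as h -> 0 is f'(x)."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2
for h in [0.1, 0.01, 0.001, 1e-6]:
    print(h, difference_quotient(f, 3.0, h))  # approaches f'(3) = 6
```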
- Using the definition above, what is $\frac{d}{dx} x^2$?
- Note that this is a short way of asking, if $f(x) = x^2$, what is $f'(x)$? One may also ask, what is $\frac{dy}{dx}$ when $y = x^2$?
- If you have trouble remembering the definition of the derivative, it's much more important to know what it means, that is, why it's defined how it is. Remember it like this: the slope of the secant line through $(x, f(x))$ and $(x+h, f(x+h))$ is $\frac{f(x+h) - f(x)}{h}$, and the derivative is the limit of this slope as the two points merge.
- From this we get the definition as stated above, $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$.
- What kinds of functions have derivatives? What would a function need to have, for it not to have a derivative at some point?
The derivative satisfies a number of fundamental properties
An operator $D$ is called linear if $D(f + g) = Df + Dg$ and $D(cf) = c\,Df$ for any constant $c$. To show that differentiation is a linear operator, we must show that $(f + g)' = f' + g'$ and $(cf)' = c\,f'$ for any constant $c$.
$$\frac{d}{dx}\big(f(x) + g(x)\big) = \frac{d}{dx}f(x) + \frac{d}{dx}g(x)$$
In other words, the differential operator (e.g., $\frac{d}{dx}$) distributes over addition.
In other words, addition before and addition after differentiation are equivalent.
Fundamental Rules of Differentiation
Along with linearity, which is so simple that one hardly thinks of it as a rule, the following are essential to finding the derivative of arbitrary functions.
The Product Rule
It may be shown that for functions $f$ and $g$, $(fg)' = f'g + fg'$. Like the other two rules, this one is not a new axiom: it is directly provable from the definition of the derivative.
If a function $f(x)$ can be written as a compound function $f(g(x))$, one can obtain its derivative using the chain rule. The chain rule states that the derivative of $f(g(x))$ will equal the derivative of $f(g)$ with respect to $g$, multiplied by the derivative of $g(x)$ with respect to $x$. In mathematical terms:
$$\frac{d}{dx} f(g(x)) = \frac{df}{dg} \cdot \frac{dg}{dx}$$
This is commonly written as $(f \circ g)' = (f' \circ g) \cdot g'$, or more explicitly
$$\frac{d}{dx} f(g(x)) = f'(g(x)) \, g'(x)$$
The proof makes use of an alternate but patently equivalent definition of the derivative: $f'(x) = \lim_{p \to x} \frac{f(p) - f(x)}{p - x}$. The first step is to write the derivative of the compound function in this form; one then manipulates it and obtains the chain rule.
In the third step, the first limit changes from p→x to g(p)→g(x). This is valid because if g is continuous at x, which it must be to have a derivative at x, then of course as p approaches x the value of g(p) approaches that of g(x).
Differentiating a nested function occurs very frequently, which makes this rule very useful.
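A quick numerical sanity check of the chain rule (our own illustration, not the lesson's; $f = \sin$ and $g(x) = x^2$ are arbitrary choices):

```python
import math

def numeric_derivative(fn, x, h=1e-7):
    # Central difference: a more accurate numerical stand-in for the limit.
    return (fn(x + h) - fn(x - h)) / (2 * h)

# d/dx sin(x^2) should equal cos(x^2) * 2x by the chain rule.
x = 1.3
composed = lambda t: math.sin(t ** 2)
print(numeric_derivative(composed, x))  # numerical estimate
print(math.cos(x ** 2) * 2 * x)         # chain-rule prediction; they agree
```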
The Power Rule
We may now readily show the relation $\frac{d}{dx} x^n = n x^{n-1}$ as follows:
$$\frac{d}{dx} x^n = \lim_{h \to 0} \frac{(x+h)^n - x^n}{h} = \lim_{h \to 0} \frac{n x^{n-1} h + \binom{n}{2} x^{n-2} h^2 + \cdots + h^n}{h} = n x^{n-1}$$
where the Binomial Theorem supplies the expansion of $(x+h)^n$, and every term after the first still carries a factor of $h$ after the division, so it vanishes in the limit.
While this derivation assumes that $n$ is a positive integer, it turns out that the same rule holds for all real $n$. For example, $\frac{d}{dx} x^{1/2} = \frac{1}{2} x^{-1/2}$.
Take a moment to do the following exercises.
- Using the rule and linearity, find the derivatives of the following:
- What functions have the following derivatives?
Exponentials and logarithms
Exponentials and logarithms involve a special number denoted e.
Now, recall that
$$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$
Using the three basic rules established above we can differentiate any polynomial, even one of infinite degree:
$$\frac{d}{dx} e^x = \sum_{k=1}^{\infty} \frac{k\,x^{k-1}}{k!} = \sum_{k=1}^{\infty} \frac{x^{k-1}}{(k-1)!} = \sum_{j=0}^{\infty} \frac{x^j}{j!} = e^x$$
$e^x$ is the remarkable function that is its own derivative. In other words, $e^x$ is an eigenfunction of the differential operator, which means that the application of the differential operator to $e^{\lambda x}$ has the same effect as multiplication by a real number: $\frac{d}{dx} e^{\lambda x} = \lambda\,e^{\lambda x}$. Such concepts are useful in quantum mechanics, for example.
The natural logarithm is the function such that if $y = \ln x$ then $e^y = x$; in other words, it is the inverse function of $e^x$. We will make use of the chain rule (marked by the brace) in order to find its derivative:
$$1 = \frac{d}{dx}\,x = \frac{d}{dx}\,e^{\ln x} = \underbrace{e^{\ln x}\cdot\frac{d}{dx}\ln x}_{\text{chain rule}} = x\cdot\frac{d}{dx}\ln x \quad\Longrightarrow\quad \frac{d}{dx}\ln x = \frac{1}{x}$$
This conclusion, that the derivative of $\ln x$ is $\frac{1}{x}$, is remarkable: it ties together two seemingly unrelated functions. Be careful: this derivative has definite values only when $x > 0$! (Examine the domain of $\ln x$ to understand why.)
Suppose we have the function $f(x) = a^x$ for some constant base $a > 0$.
To differentiate this, we rewrite it as $f(x) = e^{x \ln a}$.
Since $\ln a$ is a constant, the chain rule gives $f'(x) = \ln a \; e^{x \ln a} = \ln a \; a^x$.
In other words, for a constant $a$, we have $\frac{d}{dx} a^x = a^x \ln a$.
This reinforces the special place that $e$ has in calculus: it is the unique base for which the constant $\ln a$ is precisely equal to one.
Let us differentiate the function $f(x) = \log_a x$.
We already know how to differentiate $\ln x$, so let's change $\log_a x$ into another form with the base $e$: $\log_a x = \frac{\ln x}{\ln a}$.
Because $\frac{1}{\ln a}$ is a constant, $f'(x) = \frac{1}{\ln a} \cdot \frac{1}{x}$.
In conclusion, for any constant $a$, the derivative of $\log_a x$ is $\frac{1}{x \ln a}$.
Let's suppose that $y = \frac{f(x)}{g(x)}$ for some differentiable functions $f$ and $g$.
One could find $y'$ with the quotient rule, but for more complicated functions, it may be better to use what is called "implicit differentiation".
In this case, we take the logarithm of both sides, to obtain $\ln y = \ln\frac{f(x)}{g(x)}$,
or, in other words, just simply $\ln y = \ln f(x) - \ln g(x)$.
Differentiating the left and right hand sides, we get $\frac{y'}{y} = \frac{f'(x)}{f(x)} - \frac{g'(x)}{g(x)}$.
Now, multiply both sides by $y$, which we know is just $\frac{f(x)}{g(x)}$, to obtain the answer:
$$y' = \frac{f(x)}{g(x)}\left(\frac{f'(x)}{f(x)} - \frac{g'(x)}{g(x)}\right)$$
which of course can be simplified further. You should verify that this result agrees with the quotient rule. Differentials of logarithms of functions occur frequently in places like statistical mechanics.
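As a check, the logarithmic-differentiation formula can be compared against a direct numerical derivative. A sketch with arbitrarily chosen $f$ and $g$ (both positive near the test point, so the logarithms are defined):

```python
import math

f,  g  = (lambda x: x ** 3 + 1), (lambda x: math.cos(x) + 2)
fp, gp = (lambda x: 3 * x ** 2), (lambda x: -math.sin(x))

def y(x):
    return f(x) / g(x)

def y_prime_via_logs(x):
    # y' = y * (f'/f - g'/g), obtained by differentiating ln y = ln f - ln g
    return y(x) * (fp(x) / f(x) - gp(x) / g(x))

x, h = 0.7, 1e-7
numeric = (y(x + h) - y(x - h)) / (2 * h)
print(y_prime_via_logs(x), numeric)  # the two values agree closely
```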
General exponentials and logarithms
Consider the function $f(x) = u(x)^{v(x)}$, where both the base and the exponent are functions of $x$.
Writing $u^v = e^{v \ln u}$ and differentiating, it can be immediately seen that
$$\frac{d}{dx}\,u^v = v\,u^{v-1}\,u' + u^v \ln u \; v'$$
Compare this result to the chain rule and power rule results. The first term results in treating v constant. The second term results in treating u constant.
Consider the function $\sin x$. To find the derivative of $\sin x$, we use the definition of the derivative, as well as some trigonometric identities and the linearity of the limit operator:
$$\frac{d}{dx}\sin x = \lim_{h \to 0}\frac{\sin(x+h) - \sin x}{h} = \lim_{h \to 0}\frac{\sin x\cos h + \cos x\sin h - \sin x}{h} = \sin x\,\lim_{h \to 0}\frac{\cos h - 1}{h} + \cos x\,\lim_{h \to 0}\frac{\sin h}{h}$$
and since $\lim_{h \to 0}\frac{\cos h - 1}{h} = 0$ and $\lim_{h \to 0}\frac{\sin h}{h} = 1$, the above expression simplifies to $\cos x$.
Thus, the derivative of $\sin x$ is $\cos x$.
We perform the same process to find the derivatives of the other trigonometric functions (try to derive them on your own as an exercise). Since these derivatives come up quite often, it would behoove you (that is, be to your advantage) to memorize them.
The rules for differentiation involving hyperbolic functions behave very much like their trigonometric counterparts. Here, $\sinh x = \frac{e^x - e^{-x}}{2}$ and $\cosh x = \frac{e^x + e^{-x}}{2}$,
so it can be seen that $\frac{d}{dx}\sinh x = \cosh x$ and $\frac{d}{dx}\cosh x = \sinh x$.
Critical thinking is thinking that assesses itself (Center for Critical Thinking, 1996b). Critical thinking is the ability to think about one's thinking in such a way as (1) to recognize its strengths and weaknesses and, as a result, (2) to recast the thinking in improved form. Critical thinking logic puzzles: puzzle workbooks for kids, updated each month. On this page, you will find dozens of different logic puzzles in over a dozen different categories, including general logic printables in both two and three dimensions, decimals, and measurement. Distribute this packet of worksheets to give students practice in using charts and graphs to answer word problems.
Novel Thinking: Charlie and the Chocolate Factory - use the vocabulary words to complete the crossword puzzle (grades 6-8). Novel Thinking: Charlotte's Web - draw a line from each important event to the detail that tells more about it (grades 6-8). Novel Thinking: George's Marvelous Medicine - use the clue to help you unscramble each vocabulary word. Critical Thinking C - Level 2: this one-page worksheet is on math terminology. Students use two sets of 0-9 numbers to fill in the empty boxes; they need to use their basic math vocabulary and thinking process to answer the questions correctly. Helpful idea: have students cut out numbers and place them in the empty boxes like pieces of a puzzle.
Jumpstart has a fun collection of free, printable critical thinking worksheets and free critical thinking activities for kids. Homeschooling parents as well as teachers can encourage better logical thinking and deductive reasoning skills in kids by introducing them to these exercises. Critical thinking is intentional thought that is logical, rational, and open-minded. This is a pretty broad definition, but critical thinking is a very open concept; the key to it is the idea of actively analyzing your own thought.
Skills to Pay the Bills (p. 98), Problem Solving and Critical Thinking: everyone experiences problems from time to time; some of our problems are big and complicated, while others are small. Problems arise if you don't possess the necessary knowledge, but that's a different story. In any case, these 4 questions, like the other 3, can hardly evaluate your critical thinking skills; an example of a low-level puzzle to evaluate critical thinking skills would be, e.g., some variety of multiple-choice test. Critical thinking word problems (KS1): we are a full-scale graphic design agency and studio, with a stable, in-house team of talented graphic design professionals, web programmers, and project managers working together, giving a friendly and cost-effective service.
Critical thinking puzzles are designed to stimulate the logical areas of the brain.
The word problems in these books help students conquer the dreaded math word problem by teaching them how and when to apply the math operations they know to real-life situations. The developmentally sequenced problems in each book are arranged so they cannot be solved by rote processes. Critical thinking word problems: when designing lessons for students, teachers are constantly trying to develop ways to increase their students' retention of material while also challenging the way they think; they're also trying to get home in time to cook dinner and watch the latest episode of their favorite TV show.
Critical thinking is more than just a simple thought process. It involves thinking on a much deeper, underlying level rather than just at the surface. There is so much information available to us in this world that we don't know what is true and what is not.
Math word problems help students conquer the dreaded math word problem by teaching them when and how to apply the math operations they already know to real-life situations. Problem solving and critical thinking, according to a 2010 critical skills survey by the American Management Association and others, refer to the ability to use knowledge, facts, and data to effectively solve problems. This doesn't mean you need to have an immediate answer; it means you have to be able to think on your feet, assess problems, and find solutions. Characteristics of critical thinking: Wade (1995) identifies eight characteristics of critical thinking. It involves asking questions, defining a problem, examining evidence, analyzing assumptions and biases, avoiding emotional reasoning, avoiding oversimplification, considering other interpretations, and tolerating ambiguity.
A neuron, also known as a neurone (British spelling) and nerve cell, is an electrically excitable cell that receives, processes, and transmits information through electrical and chemical signals. These signals between neurons occur via specialized connections called synapses. Neurons can connect to each other to form neural circuits. Neurons are the primary components of the central nervous system, which includes the brain and spinal cord, and of the peripheral nervous system, which comprises the autonomic nervous system and the somatic nervous system.
There are many types of specialized neurons. Sensory neurons respond to one particular type of stimulus, such as touch, sound, or light, or to other stimuli affecting the cells of the sensory organs, and convert it into an electrical signal via transduction, which is then sent to the spinal cord or brain. Motor neurons receive signals from the brain and spinal cord to cause everything from muscle contractions to glandular outputs. Interneurons connect neurons to other neurons within the same region of the brain or spinal cord in neural networks.
A typical neuron consists of a cell body (soma), dendrites, and an axon. The term neurite is used to describe either a dendrite or an axon, particularly in its undifferentiated stage. Dendrites are thin structures that arise from the cell body, often extending for hundreds of micrometers and branching multiple times, giving rise to a complex "dendritic tree". An axon (also called a nerve fiber) is a special cellular extension (process) that arises from the cell body at a site called the axon hillock and travels for a distance, as far as 1 meter in humans or even more in other species. Most neurons receive signals via the dendrites and send out signals down the axon. Numerous axons are often bundled into fascicles that make up the nerves in the peripheral nervous system (like strands of wire make up cables). Bundles of axons in the central nervous system are called tracts. The cell body of a neuron frequently gives rise to multiple dendrites, but never to more than one axon, although the axon may branch hundreds of times before it terminates. At the majority of synapses, signals are sent from the axon of one neuron to a dendrite of another. There are, however, many exceptions to these rules: for example, neurons can lack dendrites, or have no axon, and synapses can connect an axon to another axon or a dendrite to another dendrite.
All neurons are electrically excitable, due to maintenance of voltage gradients across their membranes by means of metabolically driven ion pumps, which combine with ion channels embedded in the membrane to generate intracellular-versus-extracellular concentration differences of ions such as sodium, potassium, chloride, and calcium. Changes in the cross-membrane voltage can alter the function of voltage-dependent ion channels. If the voltage changes by a large enough amount, an all-or-none electrochemical pulse called an action potential is generated and this change in cross-membrane potential travels rapidly along the cell's axon, and activates synaptic connections with other cells when it arrives.
In most cases, neurons are generated by special types of stem cells during brain development and childhood. Neurons in the adult brain generally do not undergo cell division. Astrocytes are star-shaped glial cells that have also been observed to turn into neurons by virtue of the stem cell characteristic pluripotency. Neurogenesis largely ceases during adulthood in most areas of the brain. However, there is strong evidence for generation of substantial numbers of new neurons in two brain areas, the hippocampus and olfactory bulb.
A neuron is a specialized type of cell found in the bodies of all eumetazoans. Only sponges and a few other simpler animals lack neurons. The features that define a neuron are electrical excitability and the presence of synapses, which are complex membrane junctions that transmit signals to other cells. The body's neurons, plus the glial cells that give them structural and metabolic support, together constitute the nervous system. In vertebrates, the majority of neurons belong to the central nervous system, but some reside in peripheral ganglia, and many sensory neurons are situated in sensory organs such as the retina and cochlea.
A typical neuron is divided into three parts: the soma or cell body, dendrites, and axon. The soma is usually compact; the axon and dendrites are filaments that extrude from it. Dendrites typically branch profusely, getting thinner with each branching, and extending their farthest branches a few hundred micrometers from the soma. The axon leaves the soma at a swelling called the axon hillock, and can extend for great distances, giving rise to hundreds of branches. Unlike dendrites, an axon usually maintains the same diameter as it extends. The soma may give rise to numerous dendrites, but never to more than one axon. Synaptic signals from other neurons are received by the soma and dendrites; signals to other neurons are transmitted by the axon. A typical synapse, then, is a contact between the axon of one neuron and a dendrite or soma of another. Synaptic signals may be excitatory or inhibitory. If the net excitation received by a neuron over a short period of time is large enough, the neuron generates a brief pulse called an action potential, which originates at the soma and propagates rapidly along the axon, activating synapses onto other neurons as it goes.
Many neurons fit the foregoing schema in every respect, but there are also exceptions to most parts of it. There are no neurons that lack a soma, but there are neurons that lack dendrites, and others that lack an axon. Furthermore, in addition to the typical axodendritic and axosomatic synapses, there are axoaxonic (axon-to-axon) and dendrodendritic (dendrite-to-dendrite) synapses.
The key to neural function is the synaptic signaling process, which is partly electrical and partly chemical. The electrical aspect depends on properties of the neuron's membrane. Like all animal cells, the cell body of every neuron is enclosed by a plasma membrane, a bilayer of lipid molecules with many types of protein structures embedded in it. A lipid bilayer is a powerful electrical insulator, but in neurons, many of the protein structures embedded in the membrane are electrically active. These include ion channels that permit electrically charged ions to flow across the membrane and ion pumps that actively transport ions from one side of the membrane to the other. Most ion channels are permeable only to specific types of ions. Some ion channels are voltage gated, meaning that they can be switched between open and closed states by altering the voltage difference across the membrane. Others are chemically gated, meaning that they can be switched between open and closed states by interactions with chemicals that diffuse through the extracellular fluid. The interactions between ion channels and ion pumps produce a voltage difference across the membrane, typically a bit less than 1/10 of a volt at baseline. This voltage has two functions: first, it provides a power source for an assortment of voltage-dependent protein machinery that is embedded in the membrane; second, it provides a basis for electrical signal transmission between different parts of the membrane.
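The link between these ion concentration differences and the membrane voltage can be made quantitative with the Nernst equation, which gives the equilibrium potential for a single ion species. A minimal sketch; the concentrations below are typical textbook values for a mammalian neuron, not figures taken from this article:

```python
import math

R, F, T = 8.314, 96485.0, 310.0  # gas constant, Faraday constant, body temp (K)

def nernst(z, c_out, c_in):
    """Equilibrium potential in volts: E = (R*T / (z*F)) * ln(c_out / c_in)."""
    return (R * T) / (z * F) * math.log(c_out / c_in)

# Assumed concentrations in mM (extracellular, intracellular):
print(f"E_K  = {nernst(+1, 5.0, 140.0) * 1000:6.1f} mV")   # potassium, ~ -89 mV
print(f"E_Na = {nernst(+1, 145.0, 15.0) * 1000:6.1f} mV")  # sodium,   ~ +61 mV
```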
Neurons communicate by chemical and electrical synapses in a process known as neurotransmission, also called synaptic transmission. The fundamental process that triggers the release of neurotransmitters is the action potential, a propagating electrical signal that is generated by exploiting the electrically excitable membrane of the neuron. This is also known as a wave of depolarization.
Neurons are highly specialized for the processing and transmission of cellular signals. Given their diversity of functions performed in different parts of the nervous system, there is a wide variety in their shape, size, and electrochemical properties. For instance, the soma of a neuron can vary from 4 to 100 micrometers in diameter.
The accepted view of the neuron attributes dedicated functions to its various anatomical components; however, dendrites and axons often act in ways contrary to their so-called main function.
Axons and dendrites in the central nervous system are typically only about one micrometer thick, while some in the peripheral nervous system are much thicker. The soma is usually about 10–25 micrometers in diameter and often is not much larger than the cell nucleus it contains. The longest axon of a human motor neuron can be over a meter long, reaching from the base of the spine to the toes.
Sensory neurons can have axons that run from the toes to the posterior column of the spinal cord, over 1.5 meters in adults. Giraffes have single axons several meters in length running along the entire length of their necks. Much of what is known about axonal function comes from studying the squid giant axon, an ideal experimental preparation because of its relatively immense size (0.5–1 millimeters thick, several centimeters long).
Fully differentiated neurons are permanently postmitotic; however, research starting around 2002 shows that additional neurons throughout the brain can originate from neural stem cells through the process of neurogenesis. These are found throughout the brain, but are particularly concentrated in the subventricular zone and subgranular zone.
Numerous microscopic clumps called Nissl substance (or Nissl bodies) are seen when nerve cell bodies are stained with a basophilic ("base-loving") dye. These structures consist of rough endoplasmic reticulum and associated ribosomal RNA. Named after German psychiatrist and neuropathologist Franz Nissl (1860–1919), they are involved in protein synthesis and their prominence can be explained by the fact that nerve cells are very metabolically active. Basophilic dyes such as aniline or (weakly) haematoxylin highlight negatively charged components, and so bind to the phosphate backbone of the ribosomal RNA.
The cell body of a neuron is supported by a complex mesh of structural proteins called neurofilaments, which are assembled into larger neurofibrils. Some neurons also contain pigment granules, such as neuromelanin (a brownish-black pigment that is a byproduct of the synthesis of catecholamines) and lipofuscin (a yellowish-brown pigment), both of which accumulate with age. Other structural proteins that are important for neuronal function are actin and the tubulin of microtubules. Actin is predominantly found at the tips of axons and dendrites during neuronal development. There the actin dynamics can be modulated via an interplay with microtubules.
There are different internal structural characteristics between axons and dendrites. Typical axons almost never contain ribosomes, except some in the initial segment. Dendrites contain granular endoplasmic reticulum or ribosomes, in diminishing amounts as the distance from the cell body increases.
Neurons exist in a number of different shapes and sizes and can be classified by their morphology and function. The anatomist Camillo Golgi grouped neurons into two types; type I with long axons used to move signals over long distances and type II with short axons, which can often be confused with dendrites. Type I cells can be further divided by where the cell body or soma is located. The basic morphology of type I neurons, represented by spinal motor neurons, consists of a cell body called the soma and a long thin axon covered by the myelin sheath. Around the cell body is a branching dendritic tree that receives signals from other neurons. The end of the axon has branching terminals (axon terminal) that release neurotransmitters into a gap called the synaptic cleft between the terminals and the dendrites of the next neuron.
Most neurons can be anatomically characterized as:
- Unipolar: only 1 process
- Bipolar: 1 axon and 1 dendrite
- Multipolar: 1 axon and 2 or more dendrites
- Golgi I: neurons with long-projecting axonal processes; examples are pyramidal cells, Purkinje cells, and anterior horn cells.
- Golgi II: neurons whose axonal process projects locally; the best example is the granule cell.
- Anaxonic: where the axon cannot be distinguished from the dendrite(s).
- Pseudounipolar: 1 process which then serves as both an axon and a dendrite
Furthermore, some unique neuronal types can be identified according to their location in the nervous system and distinct shape. Some examples are:
Afferent and efferent also refer generally to neurons that, respectively, bring information to or send information from the brain.
A neuron affects other neurons by releasing a neurotransmitter that binds to chemical receptors. The effect upon the postsynaptic neuron is determined not by the presynaptic neuron or by the neurotransmitter, but by the type of receptor that is activated. A neurotransmitter can be thought of as a key, and a receptor as a lock: the same type of key can here be used to open many different types of locks. Receptors can be classified broadly as excitatory (causing an increase in firing rate), inhibitory (causing a decrease in firing rate), or modulatory (causing long-lasting effects not directly related to firing rate).
The two most common neurotransmitters in the brain, glutamate and GABA, have actions that are largely consistent. Glutamate acts on several different types of receptors, and has effects that are excitatory at ionotropic receptors and modulatory at metabotropic receptors. Similarly, GABA acts on several different types of receptors, but all of them have effects (in adult animals, at least) that are inhibitory. Because of this consistency, it is common for neuroscientists to simplify the terminology by referring to cells that release glutamate as "excitatory neurons", and cells that release GABA as "inhibitory neurons". Since over 90% of the neurons in the brain release either glutamate or GABA, these labels encompass the great majority of neurons. There are also other types of neurons that have consistent effects on their targets, for example, "excitatory" motor neurons in the spinal cord that release acetylcholine, and "inhibitory" spinal neurons that release glycine.
The distinction between excitatory and inhibitory neurotransmitters is not absolute, however. Rather, it depends on the class of chemical receptors present on the postsynaptic neuron. In principle, a single neuron, releasing a single neurotransmitter, can have excitatory effects on some targets, inhibitory effects on others, and modulatory effects on others still. For example, photoreceptor cells in the retina constantly release the neurotransmitter glutamate in the absence of light. So-called OFF bipolar cells are, like most neurons, excited by the released glutamate. However, neighboring target neurons called ON bipolar cells are instead inhibited by glutamate, because they lack the typical ionotropic glutamate receptors and instead express a class of inhibitory metabotropic glutamate receptors. When light is present, the photoreceptors cease releasing glutamate, which relieves the ON bipolar cells from inhibition, activating them; this simultaneously removes the excitation from the OFF bipolar cells, silencing them.
It is possible to identify the type of inhibitory effect a presynaptic neuron will have on a postsynaptic neuron, based on the proteins the presynaptic neuron expresses. Parvalbumin-expressing neurons typically dampen the output signal of the postsynaptic neuron in the visual cortex, whereas somatostatin-expressing neurons typically block dendritic inputs to the postsynaptic neuron.
Neurons have intrinsic electroresponsive properties, such as intrinsic transmembrane voltage oscillatory patterns. Neurons can therefore be classified according to their electrophysiological characteristics, for example as tonic (regularly spiking), phasic (bursting), or fast-spiking cells.
Glutamate can cause excitotoxicity when blood flow to the brain is interrupted, resulting in brain damage. When blood flow is suppressed, glutamate is released from presynaptic neurons causing NMDA and AMPA receptor activation more so than would normally be the case outside of stress conditions, leading to elevated Ca2+ and Na+ entering the post synaptic neuron and cell damage. Glutamate is synthesized from the amino acid glutamine by the enzyme glutamate synthase.
See main article: Synapse and Chemical synapse. Neurons communicate with one another via synapses, where the axon terminal or en passant bouton (a type of terminal located along the length of the axon) of one cell contacts another neuron's dendrite, soma or, less commonly, axon. Neurons such as Purkinje cells in the cerebellum can have over 1000 dendritic branches, making connections with tens of thousands of other cells; other neurons, such as the magnocellular neurons of the supraoptic nucleus, have only one or two dendrites, each of which receives thousands of synapses. Synapses can be excitatory or inhibitory and either increase or decrease activity in the target neuron, respectively. Some neurons also communicate via electrical synapses, which are direct, electrically conductive junctions between cells.
In a chemical synapse, the process of synaptic transmission is as follows: when an action potential reaches the axon terminal, it opens voltage-gated calcium channels, allowing calcium ions to enter the terminal. Calcium causes synaptic vesicles filled with neurotransmitter molecules to fuse with the membrane, releasing their contents into the synaptic cleft. The neurotransmitters diffuse across the synaptic cleft and activate receptors on the postsynaptic neuron. High cytosolic calcium in the axon terminal also triggers mitochondrial calcium uptake, which, in turn, activates mitochondrial energy metabolism to produce ATP to support continuous neurotransmission.
An autapse is a synapse in which a neuron's axon connects to its own dendrites.
The human brain has a huge number of synapses. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5 x 10^14 synapses (100 to 500 trillion).
In 1937, John Zachary Young suggested that the squid giant axon could be used to study neuronal electrical properties. Being larger than but similar in nature to human neurons, squid cells were easier to study. By inserting electrodes into the giant squid axons, accurate measurements were made of the membrane potential.
The cell membrane of the axon and soma contain voltage-gated ion channels that allow the neuron to generate and propagate an electrical signal (an action potential). These signals are generated and propagated by charge-carrying ions including sodium (Na+), potassium (K+), chloride (Cl−), and calcium (Ca2+).
There are several stimuli that can activate a neuron leading to electrical activity, including pressure, stretch, chemical transmitters, and changes of the electric potential across the cell membrane. Stimuli cause specific ion-channels within the cell membrane to open, leading to a flow of ions through the cell membrane, changing the membrane potential.
Thin neurons and axons require less metabolic expense to produce and carry action potentials, but thicker axons convey impulses more rapidly. To minimize metabolic expense while maintaining rapid conduction, many neurons have insulating sheaths of myelin around their axons. The sheaths are formed by glial cells: oligodendrocytes in the central nervous system and Schwann cells in the peripheral nervous system. The sheath enables action potentials to travel faster than in unmyelinated axons of the same diameter, whilst using less energy. The myelin sheath in peripheral nerves normally runs along the axon in sections about 1 mm long, punctuated by unsheathed nodes of Ranvier, which contain a high density of voltage-gated ion channels. Multiple sclerosis is a neurological disorder that results from demyelination of axons in the central nervous system.
Some neurons do not generate action potentials, but instead generate a graded electrical signal, which in turn causes graded neurotransmitter release. Such nonspiking neurons tend to be sensory neurons or interneurons, because they cannot carry signals long distances.
Neural coding is concerned with how sensory and other information is represented in the brain by neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses, and the relationships amongst the electrical activities of the neurons within the ensemble. It is thought that neurons can encode both digital and analog information.
The conduction of nerve impulses is an example of an all-or-none response. In other words, if a neuron responds at all, then it must respond completely. Greater intensity of stimulation does not produce a stronger signal but can produce a higher frequency of firing. There are different types of receptor responses to stimuli, slowly adapting or tonic receptors respond to steady stimulus and produce a steady rate of firing. These tonic receptors most often respond to increased intensity of stimulus by increasing their firing frequency, usually as a power function of stimulus plotted against impulses per second. This can be likened to an intrinsic property of light where to get greater intensity of a specific frequency (color) there have to be more photons, as the photons can't become "stronger" for a specific frequency.
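The all-or-none behavior, and the way stimulus intensity maps to firing frequency rather than spike size, can be illustrated with a leaky integrate-and-fire model. This is a standard toy model, not one described in this article, and all parameter values below are assumptions chosen for illustration:

```python
def firing_rate(input_current, tau=0.02, v_rest=-0.070, v_thresh=-0.055,
                v_reset=-0.070, r_m=1e7, dt=1e-4, duration=1.0):
    """Spikes per second of a leaky integrate-and-fire neuron (SI units)."""
    v, spikes = v_rest, 0
    for _ in range(int(duration / dt)):
        v += ((v_rest - v) + r_m * input_current) / tau * dt
        if v >= v_thresh:   # all-or-none: emit a spike, then reset
            spikes += 1
            v = v_reset
    return spikes / duration

# Subthreshold current gives 0 Hz; stronger currents raise the rate, not
# the spike amplitude.
for current in [1.0e-9, 2.0e-9, 3.0e-9]:  # input current in amperes
    print(f"I = {current:.1e} A -> {firing_rate(current):3.0f} Hz")
```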
There are a number of other receptor types that are called quickly adapting or phasic receptors, where firing decreases or stops with steady stimulus; examples include: skin when touched by an object causes the neurons to fire, but if the object maintains even pressure against the skin, the neurons stop firing. The neurons of the skin and muscles that are responsive to pressure and vibration have filtering accessory structures that aid their function.
The pacinian corpuscle is one such structure. It has concentric layers like an onion, which form around the axon terminal. When pressure is applied and the corpuscle is deformed, mechanical stimulus is transferred to the axon, which fires. If the pressure is steady, there is no more stimulus; thus, typically these neurons respond with a transient depolarization during the initial deformation and again when the pressure is removed, which causes the corpuscle to change shape again. Other types of adaptation are important in extending the function of a number of other neurons.
To make the structure of individual neurons visible, Ramón y Cajal improved a silver staining process that had been developed by Camillo Golgi. The improved process involves a technique called "double impregnation" and is still in use today.
In 1888 Ramón y Cajal published a paper about the bird cerebellum. In this paper, he states that he could not find evidence of anastomoses between axons and dendrites, and calls each nervous element "an absolutely autonomous canton." This became known as the neuron doctrine, one of the central tenets of modern neuroscience.
In 1891 the German anatomist Heinrich Wilhelm Waldeyer wrote a highly influential review about the neuron doctrine in which he introduced the term neuron to describe the anatomical and physiological unit of the nervous system.
The silver impregnation stains are an extremely useful method for neuroanatomical investigations because, for reasons unknown, it stains a very small percentage of cells in a tissue, so one is able to see the complete micro structure of individual neurons without much overlap from other cells in the densely packed brain.
The neuron doctrine is the now fundamental idea that neurons are the basic structural and functional units of the nervous system. The theory was put forward by Santiago Ramón y Cajal in the late 19th century. It held that neurons are discrete cells (not connected in a meshwork), acting as metabolically distinct units.
Later discoveries yielded a few refinements to the simplest form of the doctrine. For example, glial cells, which are not considered neurons, play an essential role in information processing. Also, electrical synapses are more common than previously thought, meaning that there are direct, cytoplasmic connections between neurons. In fact, there are examples of neurons forming even tighter coupling: the squid giant axon arises from the fusion of multiple axons.
Ramón y Cajal also postulated the Law of Dynamic Polarization, which states that a neuron receives signals at its dendrites and cell body and transmits them, as action potentials, along the axon in one direction: away from the cell body. The Law of Dynamic Polarization has important exceptions; dendrites can serve as synaptic output sites of neurons and axons can receive synaptic inputs.
The number of neurons in the brain varies dramatically from species to species. The adult human brain contains about 85-86 billion neurons, of which 16.3 billion are in the cerebral cortex and 69 billion in the cerebellum. By contrast, the nematode worm Caenorhabditis elegans has just 302 neurons, making it an ideal experimental subject as scientists have been able to map all of the organism's neurons. The fruit fly Drosophila melanogaster, a common subject in biological experiments, has around 100,000 neurons and exhibits many complex behaviors. Many properties of neurons, from the type of neurotransmitters used to ion channel composition, are maintained across species, allowing scientists to study processes occurring in more complex organisms in much simpler experimental systems.
See main article: Neurology. Charcot–Marie–Tooth disease (CMT) is a heterogeneous inherited disorder of nerves (neuropathy) that is characterized by loss of muscle tissue and touch sensation, predominantly in the feet and legs but also in the hands and arms in the advanced stages of disease. Presently incurable, this disease is one of the most common inherited neurological disorders, with 36 in 100,000 affected.
Alzheimer's disease (AD), also known simply as Alzheimer's, is a neurodegenerative disease characterized by progressive cognitive deterioration, together with declining activities of daily living and neuropsychiatric symptoms or behavioral changes. The most striking early symptom is loss of short-term memory (amnesia), which usually manifests as minor forgetfulness that becomes steadily more pronounced with illness progression, with relative preservation of older memories. As the disorder progresses, cognitive (intellectual) impairment extends to the domains of language (aphasia), skilled movements (apraxia), and recognition (agnosia), and functions such as decision-making and planning become impaired.
Parkinson's disease (PD), also known as Parkinson disease, is a degenerative disorder of the central nervous system that often impairs the sufferer's motor skills and speech. Parkinson's disease belongs to a group of conditions called movement disorders. It is characterized by muscle rigidity, tremor, a slowing of physical movement (bradykinesia), and in extreme cases, a loss of physical movement (akinesia). The primary symptoms are the results of decreased stimulation of the motor cortex by the basal ganglia, normally caused by the insufficient formation and action of dopamine, which is produced in the dopaminergic neurons of the brain. Secondary symptoms may include high level cognitive dysfunction and subtle language problems. PD is both chronic and progressive.
Myasthenia gravis is a neuromuscular disease leading to fluctuating muscle weakness and fatigability during simple activities. Weakness is typically caused by circulating antibodies that block acetylcholine receptors at the post-synaptic neuromuscular junction, inhibiting the stimulative effect of the neurotransmitter acetylcholine. Myasthenia is treated with immunosuppressants, cholinesterase inhibitors and, in selected cases, thymectomy.
Demyelination is the act of demyelinating, or the loss of the myelin sheath insulating the nerves. When myelin degrades, conduction of signals along the nerve can be impaired or lost, and the nerve eventually withers. This leads to certain neurodegenerative disorders like multiple sclerosis and chronic inflammatory demyelinating polyneuropathy.
Although most injury responses include a calcium influx signaling to promote resealing of severed parts, axonal injuries initially lead to acute axonal degeneration, which is rapid separation of the proximal and distal ends within 30 minutes of injury. Degeneration follows with swelling of the axolemma, and eventually leads to bead-like formations. Granular disintegration of the axonal cytoskeleton and inner organelles occurs after axolemma degradation. Early changes include accumulation of mitochondria in the paranodal regions at the site of injury. The endoplasmic reticulum degrades and mitochondria swell up and eventually disintegrate. The disintegration is dependent on ubiquitin and calpain proteases (caused by the influx of calcium ions), suggesting that axonal degeneration is an active process. Thus the axon undergoes complete fragmentation. The process takes roughly 24 hours in the peripheral nervous system (PNS), and longer in the CNS. The signaling pathways leading to axolemma degeneration are currently unknown.
See main article: Neurogenesis. It has been demonstrated that neurogenesis can sometimes occur in the adult vertebrate brain, a finding that led to controversy in 1999. Later studies of the age of human neurons suggest that this process occurs only for a minority of cells, and a vast majority of neurons composing the neocortex were formed before birth and persist without replacement.
The body contains a variety of stem cell types that have the capacity to differentiate into neurons. A report in Nature suggested that researchers had found a way to transform human skin cells into working nerve cells using a process called transdifferentiation in which "cells are forced to adopt new identities".
See main article: Neuroregeneration. It is often possible for peripheral axons to regrow if they are severed, but a neuron cannot be functionally replaced by one of another type (Llinás' law).
Aerodynamics is the way air moves around an object; anything that moves through air reacts to aerodynamics. The word is derived from two Greek words: aerios, meaning air, and dynamis, meaning force. Aerodynamics is the study of forces and the resulting motion of objects through the air.
Aerodynamics is the branch of dynamics concerned with the study of the motion of air, especially when it interacts with a solid object. It is a sub-field of fluid dynamics and gas dynamics, with which it shares many aspects of theory.
The aerodynamics assignment help introduces students to the fundamentals needed to understand various aspects of aerodynamics, including its principles, nature, performance, stability, control, and applications. It also enables students to apply practical methods for calculating aerodynamic forces. A formula-based understanding of this field thus helps students with their assignments and professional careers.
The major application of aerodynamics is in the design of aircraft; however, automobile and train bodies also apply its principles in their designs. Aerodynamic results fall under different categories, observed on the basis of the following elements:
Aerodynamics normally deals with the calculation of forces to understand the motion of air (the flow field) around a solid body. When a body moves through the air, pressure and friction are produced on the body: the pressure acts normal to the surface, and the friction acts tangentially to it. The resultant force is the sum of the pressure and friction forces, and the point at which it acts on the body is known as the center of pressure. The aerodynamic forces are thus due to the pressure and friction stress distributions over the body surface.
There are four basic forces in aerodynamics. These include:
Lift - Lift is a push which lets something move up. It is an aerodynamic force that acts at a right angle to the direction of motion of the object, produced when the moving object and the fluid interact. This interaction leads to a difference in pressure between the upper surface and the lower surface of the object, and the pressure difference produces a force that sustains the object against falling due to gravity. Lift is generally explained with three ideas: Bernoulli's principle, the Coanda effect, and Newton's third law of motion.
Weight - Everything on Earth has weight. This force results from gravity, which pulls objects downward; weight acts in the downward direction, toward the center of the Earth. The weight of an object determines and controls the amount of push that must be imposed on that object to move it.
Drag - Drag is a force that tries to slow an object down, making it more difficult for the object to move. Drag is an aerodynamic force that resists the motion of an object through a fluid; denser fluids such as water cause more drag than air. The level of drag also depends upon the shape and size of the object: round, narrow surfaces usually produce less drag than flat, wide surfaces. Drag is generated by the pressure difference between front and rear, shearing between the fluid and the solid surface, gas compression at high speed, and residual lift components produced by three-dimensional flow rotation.
Thrust - Thrust is a push, opposite to drag, that moves something forward. It is an aerodynamic force that acts in the direction of motion, produced by engines which transfer energy into the flow in the form of increased fluid momentum. An object must have more thrust than drag in order to keep moving forward.
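These forces can be estimated with the standard lift and drag equations, $L = \frac{1}{2}\rho v^2 S C_L$ and $D = \frac{1}{2}\rho v^2 S C_D$. The sketch below is illustrative only: the airspeed, wing area, and coefficients are assumed values, not data from this page:

```python
RHO_SEA_LEVEL = 1.225  # air density at sea level, kg/m^3

def lift(v, area, c_l, rho=RHO_SEA_LEVEL):
    """Lift force in newtons: 0.5 * rho * v^2 * S * C_L."""
    return 0.5 * rho * v ** 2 * area * c_l

def drag(v, area, c_d, rho=RHO_SEA_LEVEL):
    """Drag force in newtons: 0.5 * rho * v^2 * S * C_D."""
    return 0.5 * rho * v ** 2 * area * c_d

v, wing_area = 70.0, 16.2  # m/s and m^2, roughly small-aircraft scale
print(f"lift: {lift(v, wing_area, 0.40):,.0f} N")
print(f"drag: {drag(v, wing_area, 0.03):,.0f} N")
```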
Aerodynamics is a branch of study that informs the design of aircraft, automobiles, and trains so as to minimize the effects of forces opposing their motion. If these bodies have an aerodynamic design or shape, they move faster and consume less fuel because the air passes over them more easily. Aerodynamics thus comes with certain advantages.
Aerodynamics has a number of applications in various fields, some of which are mentioned below:
Aerodynamics in Aircraft- Aerodynamic forces are used to control an aircraft in flight. When an airplane is in level flight at a steady speed, its propeller or jet engine generates just enough thrust to counteract the aerodynamic drag force, while the wings produce enough upward lift to overcome the downward force caused by gravity.
Aerodynamics in Automobiles- Automobiles use aerodynamic body shapes to increase engine efficiency and speed. Automotive engineers use computer simulations and wind-tunnel experiments to fine-tune the aerodynamics of automobiles so that they generate the required amount of downforce at the wheels with the minimum amount of drag.
Aerodynamics in Sailing- In sailing, the study of aerodynamics is applied extensively in the prediction of forces and moments. Aerodynamic analysis is also used in designing mechanical components such as hard drive heads.
Aerodynamics in Structural Engineering- Aerodynamics, and specifically aeroelasticity, is critical for calculating wind loads in the design of buildings and bridges.
Aerodynamics in Urban Planning- Town planners and designers use urban aerodynamics to improve the comfort of outdoor spaces, to create urban microclimates, and to reduce the effects of urban pollution.
Aerodynamics in the Study of Ecosystems- Environmental aerodynamics helps identify the ways atmospheric circulation and flight mechanics affect ecosystems. Additionally, aerodynamics informs wind turbine design and is used as part of numerical weather prediction.
Aerodynamics problems are classified by the flow environment or flow properties, including flow speed, viscosity, and compressibility.
The flow environment-
External Aerodynamics- The study of the flow of air around solid bodies of various shapes; for instance, evaluating the lift and drag on an airplane, or the shock waves that form around it.
Internal Aerodynamics- The study of the flow of air through passages in solid bodies; for instance, examining the airflow through a jet engine or an air conditioning pipe.
The flow properties-
Flow Speed- A problem can be classified by whether the flow speed is below, near, or above the speed of sound. The flow regime is characterized by the Mach number M (flow speed divided by the speed of sound): subsonic (M < 1), transonic (M ≈ 1), supersonic (M > 1), and hypersonic (M ≫ 1).
Viscosity- Problems are also influenced by viscous effects in the flow. Flows in which viscous effects are negligibly small are called inviscid flows, whereas flows for which viscosity cannot be ignored are called viscous flows.
Compressibility- Compressibility is a measure of how much the density changes in the problem. Flows are commonly treated as incompressible at low speeds (roughly below Mach 0.3) and as compressible at higher speeds.
Students pursuing the study of aerodynamics must acquire an excellent hold on the following topics to excel in the field with the best grades and practical skills.
|Airfoils and Wings for Aerodynamics||Fluid Mechanic Concepts|
|Vector Kinematics of a Fluid Element||Parameter Sensitivity|
|Subsonic Airflow||Viscous Fluid|
|Supersonic and Hypersonic Airflow Theory||Newtonian Fluid Method|
|Aircraft Flight and Spacecraft Dynamics||Trim and Static Stability|
|Introduction to Potential Flow Theory||Stability Derivatives|
|Longitudinal and Lateral-directional Motions||Physical Effects of the Wings on Aircraft Motion|
|Control Techniques for Flight Vehicle Stabilization||Time and Frequency Analysis of Control System|
|Human-pilot Models with Applications||Pilot-in-the-loop Controls|
|Momentum Theory||Gas Dynamics|
|Fundamentals of Compressible Flow||Normal and Oblique Shocks|
Choosing an optimum career path according to one's area of interest is of utmost importance for students qualified in aerodynamics. Proper information about the eligibility criteria and the roles and responsibilities of each position gives them a clear vision when opting for a desired profession. With proficiency in the subject and practical experience, students can take up the career positions described below:
Aerodynamic Engineer- An aerodynamics engineer is responsible for designing, constructing, and testing aircraft, missiles, automobiles, and spacecraft. They conduct basic and applied research to assess the suitability of materials and equipment for the design and manufacture of aerospace products and vehicles.
Aerodynamicist- An aerodynamicist is an engineer who performs research and analysis related to aerodynamics, thermodynamics, aerothermodynamics, and aerophysics designs and concepts in order to examine their potential suitability for aerospace products and various vehicles. They perform engineering duties in the design, development, and testing of these vehicles, which helps maximize their performance.
Aeronautical Engineer- An aeronautical engineer is responsible for researching, designing, developing, maintaining, and testing both civil and military aircraft that operate within the Earth's atmosphere, and for determining their performance.
Aero Software Engineer- The core responsibility here is to design, develop, integrate, and test highly scalable aerodynamic software models for real-time simulation training applications. He/she is also responsible for troubleshooting and fixing software inefficiencies in engines, flight controls, motion, and aircraft systems.
In addition to the career paths described above, there are various other career options for aerodynamics scholars, such as Mechanical Engineer, Aerodynamics Research and Technology Engineer, Fluid Mechanics Analyst, Aerospace Engineer, Aerodynamics Product Engineer, Turbomachinery Engineer, and lots more.
Aerodynamics is a broad subject that incorporates various complex theories and calculations. Students working on aerodynamics assignments often need in-depth knowledge of all the related concepts to produce quality, research-based work. This is a tedious process that consumes a lot of students' time. Don't worry! We are here to take all your stress and provide the solutions you desire.
TutorVersal is a leader in providing assignment help and 24/7 assistance to university and college students all around the world. You can count on us for affordable, error-free, and fully original content delivered on time. We raise the level of writing to ensure that it is fully compliant with university standards.
We have a skilled team of assignment writing experts who are proficient in this field and dedicated to providing the best online aerodynamics assignment help, which will definitely raise your scores. The step-by-step approach they follow not only simplifies any piece of writing but also helps students better understand complex terms. Whether you are seeking aerodynamics homework help or looking for online aerodynamics tutors, look no further than TutorVersal. | https://www.tutorversal.com/aerodynamics-assignment-help.html | 18
31 | List of questions for the exam
1. Basic information about the matrices.
Kinds of matrices.
Find the sum of matrices: .
2. Operations on matrices.
Find the product of matrices:
3. What is a determinant of order two?
What is a determinant of order three?
Calculate the determinant:
4. Given a square matrix of order n, what is the minor of an element?
What is the cofactor (algebraic complement) of an element?
State the Laplace's theorem.
Calculate the determinant by Laplace's theorem:
5. Let A be a square matrix of order n. Give the definition of the inverse matrix.
How do you calculate the inverse matrix using algebraic complements?
Find the inverse matrix: .
6. What operations are called elementary transformations of a matrix?
Describe the method of elementary transformations for finding the inverse matrix.
Find the inverse of a matrix by the elementary transformations method:
7. What is the rank of a matrix?
Find rank of the matrix:
8. The system of m linear equations with n unknowns.
The basic concepts.
What is the matrix and the augmented matrix of system?
State the Kronecker–Capelli theorem.
Investigate the system for consistency using the Kronecker–Capelli theorem:
9. The system of n linear equations with n unknowns.
Solve the system of equations by Cramer's rule:
10. Solution of n linear equations with n unknowns by the inverse matrix method.
Solve the system of equations by method of inverse matrix:
11. The system of m linear equations with n unknowns.
The Gauss method.
Solve the system of equations by the Gauss method:
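For checking answers to questions 9–11, a small NumPy sketch can solve a square system directly. The 3×3 system below is a made-up example, since the actual systems from this question sheet were lost in extraction.

```python
import numpy as np

# Hypothetical system (not the exam's own coefficients):
#   2x +  y -  z =   8
#  -3x -  y + 2z = -11
#  -2x +  y + 2z =  -3
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

x = np.linalg.solve(A, b)             # Gaussian elimination under the hood
x_via_inverse = np.linalg.inv(A) @ b  # the inverse-matrix method of question 10
print(x, x_via_inverse)               # both give [ 2.  3. -1.]
```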
12. The system of linear homogeneous equations.
Solve the homogeneous system
13. Let a and b be vectors in space.
Find the length of the vector a.
How do you multiply a vector by a constant?
Find the sum of the vectors a and b.
14. What is the scalar product of the vectors a and b?
Given the vectors a = (1, -2, 0) and b = (5, 0, -1), find their scalar product and the angle between them.
15. What is the vector product of the vectors a and b?
Find the vector product of the vectors a = (1, -2, 0) and b = (5, 0, -1).
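For a quick numerical check of questions 14 and 15, here is a NumPy sketch using the vectors a = (1, -2, 0) and b = (5, 0, -1) given above.

```python
import numpy as np

a = np.array([1.0, -2.0, 0.0])
b = np.array([5.0, 0.0, -1.0])

dot = np.dot(a, b)  # scalar product: 1*5 + (-2)*0 + 0*(-1) = 5
angle = np.degrees(np.arccos(dot / (np.linalg.norm(a) * np.linalg.norm(b))))
cross = np.cross(a, b)  # vector product: (2, 1, 10)

print(dot, round(angle, 2), cross)  # 5.0, ~64.0 degrees, [ 2.  1. 10.]
```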
16. Linear dependence and independence of vectors.
Determine whether the vectors x = (3, 2, -1), y = (2, -1, 3) and z = (1, 3, -4) are linearly dependent or not.
17. What kind of vectors constitute a basis?
Decomposition of the vector in the basis.
Prove that the vectors a, b, c form a basis, and expand the vector d in the basis (a, b, c), where a = (2, -1, 5), b = (3, 2, 1), c = (1, 1, 0), d = (5, 6, -3).
18. The distance between two points A and B.
Division of the segment AB in a given ratio AK : KB.
Given a triangle with vertices O(0; 0), A(8; 0), B(0; 6), find the length of the median OC and the bisector OD.
19. First-order line in the plane.
Given a triangle with vertices A(-2; 0), B(2; 6), C(4; 2), with median BE and height BD, write the equations of the lines AC, BE and BD.
20. Parallelism and perpendicularity of lines.
Write the equations of the lines parallel and perpendicular to the line 3x + 5y = 7 and passing through M(2, -4).
21. The distance from the point to the line.
Given a triangle with vertices A(-2; 0), B(2; 6), C(4; 2), find the distance from the vertex A to the line BC.
22. The general equation of second-order curves.
What is the circle?
Find the coordinates of the center and the radius of the circle
23. What is an ellipse?
The foci, the eccentricity, and the directrices of an ellipse.
Find the foci, the eccentricity, and the directrices of the ellipse
| https://doclecture.net/1-20627.html | 18
65 | High School Mathematics Extensions/Further Modular Arithmetic
Mathematics is the queen of the sciences and number theory is the queen of mathematics. -- Carl Friedrich Gauss 1777 - 1855
In the Primes and Modular Arithmetic section, we discussed the elementary properties of a prime and its connection to modular arithmetic. For the most part our attention has been restricted to arithmetic mod p, where p is prime.
In this chapter, we start by discussing some more elementary results in arithmetic modulo a prime p, and then move on to discuss those results modulo m where m is composite. In particular, we will take a closer look at the Chinese Remainder Theorem (CRT) and how it allows us to break arithmetic modulo m into components. From that point of view, the CRT is an extremely powerful tool that can help us unlock the many secrets of modular arithmetic (with relative ease).
Lastly, we will introduce the idea of an abelian group through multiplication in modular arithmetic and discuss the discrete log problem which underpins one of the most important cryptographic systems known today.
Assumed knowledge: in this chapter we assume the reader can find inverses and solve a system of congruences (Chinese Remainder Theorem) (see: Primes and Modular Arithmetic).
Wilson's theorem is a simple result that leads to a number of interesting observations in elementary number theory. It states that if p is prime, then (p − 1)! ≡ −1 (mod p).
We know the inverse of p − 1 is p − 1, so every other number can be paired up with its inverse and eliminated. For example, let p = 7 and consider
- 1 × 2 × ... × 6 ≡ (2 × 4) × (3 × 5) × 1 × 6 ≡ 6 (mod 7)
What we have done is that we paired up numbers that are inverses of each other, then we are left with numbers whose inverse is itself. In this case, they are 1 and 6.
But there is a technical difficulty. For a general prime number p, how do we know that 1 and p − 1 are the only numbers in mod p which, when squared, give 1? For m not a prime, there can be more than 2 solutions to x² ≡ 1 (mod m); for example, let m = 15, then x = 1, 14, 4, 11 are all solutions to x² ≡ 1 (mod m).
However, we can show that there can only be (at most) two solutions to x² ≡ 1 (mod p) when p is prime. We do that by a simple proof-by-contradiction argument. You may want to skip the following proof and come up with your own justification of why Wilson's theorem is true.
Let p be a prime, and x² ≡ 1 (mod p). We aim to prove that there can only be 2 solutions, namely x = 1, −1.
It is obvious from the above that x = 1, −1 (≡ p − 1) are solutions. Suppose there is another solution x = d, with d not equal to 1 or −1. Since d ≠ 1 and p is prime, d − 1 must have an inverse. From d² ≡ 1 we get (d − 1)(d + 1) ≡ 0 (mod p); multiplying both sides by the inverse of d − 1 gives d + 1 ≡ 0, i.e. d ≡ −1 (mod p),
but our initial assumption was that d ≠ −1. This is a contradiction. Therefore there can only be 2 solutions to x² ≡ 1 (mod p).
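A brute-force check of Wilson's theorem is easy to write. This sketch computes (p − 1)! mod p for a few small numbers: primes give p − 1 (that is, −1), while the composites here give 0.

```python
def wilson_product(m):
    """Return 1 * 2 * ... * (m - 1) mod m."""
    prod = 1
    for k in range(1, m):
        prod = prod * k % m
    return prod

for m in [5, 7, 11, 12, 15]:
    print(m, wilson_product(m))  # primes print m - 1; 12 and 15 print 0
```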
Fermat's Little Theorem
There is a remarkable little theorem named after Fermat, the prince of amateur mathematicians. It states that if p is prime and a ≠ 0, then a^(p−1) ≡ 1 (mod p).
This theorem hinges on the fact that p is prime. Recall that if p is prime then a ≠ 0 has an inverse. So for any b and c we must have
- ab ≡ ac (mod p)
- if and only if
- b ≡ c (mod p)
A simple consequence of the above is that the following numbers must all be different mod p
- a, 2a, 3a, 4a, ..., (p−1)a
and there are p − 1 of these numbers! Therefore the above list is just the numbers 1, 2, ..., p − 1 in a different order. Let's see an example: take p = 5 and a = 2:
- 1, 2, 3, 4
multiply each of the above by 2 in mod 5, we get
- 2, 4, 1, 3
They are just the original numbers in a different order.
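This rearrangement claim is easy to verify computationally: for a prime p and any a not divisible by p, multiplying each of 1, ..., p − 1 by a mod p just permutes the list. A small sketch (p and a are example values):

```python
p, a = 7, 3  # p prime, a not divisible by p

original = set(range(1, p))
scaled = {(a * k) % p for k in range(1, p)}

print(scaled == original)  # True: the scaled list is a permutation of 1..p-1
```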
So for any p, and using Wilson's Theorem (recall: 1 × 2 × ... × (p−1) ≡ −1), we get
a × 2a × ... × (p−1)a ≡ a^(p−1) × (1 × 2 × ... × (p−1)) ≡ −a^(p−1) (mod p)
on the other hand, since the list is just 1, 2, ..., p−1 rearranged, we also get
a × 2a × ... × (p−1)a ≡ 1 × 2 × ... × (p−1) ≡ −1 (mod p)
Equating the two results, we get
−a^(p−1) ≡ −1 (mod p), i.e. a^(p−1) ≡ 1 (mod p)
which is essentially Fermat's little theorem.
Modular Arithmetic with a general m
*Chinese Remainder Theorem revisited*
This section is rather theoretical, and is aimed at justifying the arithmetic we will cover in the next section. Therefore it is not necessary to fully understand the material here, and the reader may safely choose to skip the material below.
Recall the Chinese Remainder Theorem (CRT) we covered in the Modular Arithmetic section. It states that the congruences
x ≡ b (mod n₁)
x ≡ c (mod n₂)
have a solution if and only if gcd(n₁, n₂) divides (b − c).
This deceptively simple theorem holds the key to arithmetic modulo m (not prime)! We shall consider the case where m has only two prime factors, and then the general case shall follow.
Suppose m = p^i q^j, where p and q are distinct primes; then every natural number below m (0, 1, 2, ..., m − 1) corresponds uniquely to a system of congruences mod p^i and mod q^j. This is due to the fact that gcd(p^i, q^j) = 1, so it divides every difference b − c.
Consider a number n; it corresponds to
x ≡ x_n (mod p^i), x ≡ y_n (mod q^j)
for some x_n and y_n. If r ≠ n then r corresponds to
x ≡ x_r (mod p^i), x ≡ y_r (mod q^j)
Now since r and n are different, we must have x_r ≠ x_n and/or y_r ≠ y_n.
For example, take m = 12 = 2² × 3; then we can construct the following table showing the pair (n mod 2², n mod 3) for each n (0, 1, 2, ..., 11).
n | n (mod 2²) | n (mod 3)
0 | 0 | 0
1 | 1 | 1
2 | 2 | 2
3 | 3 | 0
4 | 0 | 1
5 | 1 | 2
6 | 2 | 0
7 | 3 | 1
8 | 0 | 2
9 | 1 | 0
10 | 2 | 1
11 | 3 | 2
Note that, as predicted, each number corresponds uniquely to a different pair of congruences mod 2² and mod 3.
1. Consider m = 45 = 3² × 5. Complete the table below and verify that any two numbers must differ in at least one place in the second and third columns.
n | n (mod 3²) | n (mod 5)
2. Suppose m = p^i q^j, n corresponds to
(x_n, y_n)
and r corresponds to
(x_r, y_r)
Is it true that n × r corresponds to (x_n × x_r mod p^i, y_n × y_r mod q^j)?
Arithmetic with CRT
Exercise 2 above gave the biggest indication yet as to how the CRT can help with arithmetic modulo m. It is not essential for the reader to fully understand the above at this stage. We will proceed to describe how CRT can help with arithmetic modulo m. In simple terms, the CRT helps to break a modulo-m calculation into smaller calculations modulo prime factors of m.
As always, let's consider a simple example first. Let m = 63 = 3² × 7, so m has two distinct prime factors. We shall demonstrate multiplication of 51 and 13 modulo 63 in two ways. Firstly, the standard way: 51 × 13 = 663 ≡ 33 (mod 63).
Alternatively, we notice that 51 ≡ 6 (mod 9) and 51 ≡ 2 (mod 7).
We can represent the two congruences above as a two-tuple (6,2). We abuse the notation a little by writing 51 = (6,2). Similarly, we write 13 = (4,6). When we do multiplication with two-tuples, we multiply component-wise, i.e. (a,b) × (c,d) = (ac,bd), so 51 × 13 = (6,2) × (4,6) = (24,12) = (6,5).
Now let's solve
x ≡ 6 (mod 9)
x ≡ 5 (mod 7)
We write x = 6 + 9a, which takes care of the first congruence, and then substitute into the second: 6 + 9a ≡ 5 (mod 7), i.e. 2a ≡ 6 (mod 7), so a ≡ 3 (mod 7);
therefore we have a = 3 + 7b; substituting back, we get x = 6 + 9(3 + 7b) = 33 + 63b, i.e. x ≡ 33 (mod 63),
which is the same answer we got from multiplying 51 and 13 (mod 63) the standard way!
Let's summarise what we did. By representing the two numbers (51 and 13) as two two-tuples and multiplying component-wise, we ended up with another two-tuple. And this two-tuple corresponds to the product of the two numbers (mod m) via the Chinese Remainder Theorem.
We will do two more examples. Let , and let's multiply 66 and 40 in two ways. Firstly, the standard way
and now the second way, 40 = (0,7) and 66 = (4,0) and
For the second example, we notice that there is no need to stop at just two distinct prime factors. We let m = 975 = 3 × 5² × 13, and multiply 900 and 647 (mod 975): 900 × 647 = 582300 ≡ 225 (mod 975).
For the other way, we note that 900 ≡ 0 (mod 3) ≡ 0 (mod 25) ≡ 3 (mod 13), and 647 ≡ 2 (mod 3) ≡ 22 (mod 25) ≡ 10 (mod 13), so the component-wise product is (0 × 2, 0 × 22, 3 × 10) = (0, 0, 30) = (0, 0, 4);
now if we solve the congruences
x ≡ 0 (mod 3)
x ≡ 0 (mod 25)
x ≡ 4 (mod 13)
then we will get x ≡ 225!
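Here is a self-contained sketch of the whole procedure for the 900 × 647 (mod 975) example: split each factor into components, multiply component-wise, then recombine with a hand-rolled extended-Euclid CRT so the mechanics stay visible.

```python
def egcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    """Combine pairwise-coprime congruences x ≡ r_i (mod m_i)."""
    x, m = 0, 1
    for r, n in zip(residues, moduli):
        g, p, q = egcd(m, n)                    # p*m + q*n = 1 since gcd = 1
        x = (x * q * n + r * p * m) % (m * n)   # matches x mod m and r mod n
        m *= n
    return x

moduli = [3, 25, 13]                  # 975 = 3 * 5^2 * 13
a = [900 % n for n in moduli]         # (0, 0, 3)
b = [647 % n for n in moduli]         # (2, 22, 10)
prod = [(x * y) % n for x, y, n in zip(a, b, moduli)]  # (0, 0, 4)
print(crt(prod, moduli))              # 225, same as 900 * 647 % 975
```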
Why bother? If anything, breaking arithmetic modulo m into smaller components seems like quite a bit of work. Take multiplication: first we need to express each number as an n-tuple (n being the number of distinct prime factors of m), multiply component-wise, and then solve the resulting n congruences. Surely this is more complicated than just multiplying the two numbers and reducing the result modulo m. So why study it at all?
By breaking a number m into prime factors, we gain insight into how the arithmetic really works. More importantly, many problems modulo m can be difficult to solve directly, but become quite easy when broken into components — e.g. Wilson's Theorem for a general m (discussed below).
1. Show that addition can also be done component-wise.
2. Multiply component-wise 32 and 84 (mod 134).
To discuss the more general forms of Wilson's Theorem and Fermat's Little Theorem mod m (m not prime), it helps to know a simple result from the famous mathematician Euler. More specifically, we want to discuss a function called the Euler totient function (or Euler phi), denoted φ.
The φ function does a very simple thing. For any natural number m, φ(m) gives the number of n < m such that gcd(n, m) = 1. In other words, it counts how many numbers mod m have an inverse. We will discuss the value of φ(m) in simple cases first and then derive the formula for a general m from the basic results.
For example, let m = 5, then φ(m) = 4. As 5 is prime, all non-zero natural numbers below 5 (1, 2, 3 and 4) are coprime to it. So there are 4 numbers mod 5 that have inverses. In fact, if m is prime then φ(m) = m − 1.
We can generalise the above to m = p^r where p is prime. In this case we employ a counting argument to calculate φ(m). Note that there are p^r natural numbers below m (0, 1, 2, ..., p^r − 1), and so φ(m) = p^r − (number of n < m such that gcd(n, m) ≠ 1). We count the n's without an inverse mod m because they are easier to count.
An element n mod m does not have an inverse if and only if it shares a common factor with m. But all factors of m (other than 1) are multiples of p. So how many multiples of p are there mod m? We can list them:
0, p, 2p, 3p, ..., p^r − p
where the last element can be written as (p^(r−1) − 1)p, and so there are p^(r−1) multiples of p. Therefore we can conclude
φ(p^r) = p^r − p^(r−1)
We now have all the machinery necessary to derive the formula of φ(m) for any m.
By the Fundamental Theorem of Arithmetic, any natural number m can be uniquely expressed as a product of primes, that is
m = p₁^k₁ × p₂^k₂ × ... × p_r^k_r
where the p_i for i = 1, 2, ..., r are distinct primes and the k_i are positive integers. For example 1225275 = 3 × 5² × 17 × 31². From here, the reader should try to derive the following result (the CRT may help).
Euler totient function φ
Suppose m can be uniquely expressed as above; then
φ(m) = p₁^(k₁−1)(p₁ − 1) × p₂^(k₂−1)(p₂ − 1) × ... × p_r^(k_r−1)(p_r − 1)
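A direct translation of this formula into code: trial-division factorisation, then the product of p^(k−1)(p − 1) over each prime power.

```python
def phi(m):
    """Euler's totient via the prime-power formula."""
    result, n, p = 1, m, 2
    while p * p <= n:
        if n % p == 0:
            k = 0
            while n % p == 0:  # extract the full power of p
                n //= p
                k += 1
            result *= p ** (k - 1) * (p - 1)
        p += 1
    if n > 1:                  # one prime factor > sqrt(m) may remain
        result *= n - 1
    return result

print(phi(5), phi(9), phi(45), phi(1225275))  # 4 6 24 595200
```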
With the Euler totient function we can state a more general version of Fermat's Little Theorem, namely a^φ(m) ≡ 1 (mod m) for all a with gcd(a, m) = 1; it is proved below. First, though, we generalise Wilson's Theorem.
Wilson's Theorem for a general m states that the product of all the invertible elements in mod m
- equals −1 if m = 4, m = p^k, or m = 2p^k for some odd prime p
- equals 1 for all other cases
An invertible element of mod m is a natural number n < m such that gcd(n, m) = 1. A self-invertible element is an element whose inverse is itself.
In the proof of Wilson's Theorem for a prime p, the numbers 1 to p − 1 all have inverses. This is not true for a general m; in fact, (m − 1)! ≡ 0 (mod m) for composite m > 4. For this reason we instead consider the product of all invertible elements mod m.
For the case where m = p is prime we also appealed to the fact that 1 and p − 1 are the only elements which give 1 when squared. In fact, for m = p^k with p an odd prime, 1 and m − 1 (≡ −1) are the only self-invertible elements (see exercise). But for a general m this is not true. Take for example m = 21: in arithmetic modulo 21, each of the following numbers is its own inverse
- 1, 20, 8, 13
so how can we say the product of all invertible elements equals 1?
We use the CRT described above. Let us consider the case where m = 2p^k. By the CRT, each element mod m can be represented as a two-tuple (a, b), where a can take the value 0 or 1 while b can take the values 0, 1, ..., p^k − 1. Each two-tuple corresponds uniquely to a pair of congruence equations, and multiplication can be performed component-wise.
Using the above information, we can easily list all the self-invertible elements, because (a, b)² ≡ 1 means (a², b²) = (1, 1), so a is a self-invertible element mod 2 and b is a self-invertible element mod p^k; hence a ≡ 1 or −1, and b ≡ 1 or −1. But mod 2, 1 ≡ −1, so a = 1. Therefore there are two self-invertible elements mod m = 2p^k: (1, 1) = 1 and (1, −1) = m − 1. So in this case the result is the same as when m has only a single prime factor.
Now consider the case where m has more than one odd prime factor and m ≠ 2p^k. Say m has n distinct prime factors; then each element can be represented as an n-tuple. If m has 3 distinct odd prime factors, the self-invertible elements of m are
(1,1,1), (1,1,−1), (1,−1,1), (−1,1,1), (1,−1,−1), (−1,1,−1), (−1,−1,1), (−1,−1,−1)
and their product is (1,1,1) — each component contains −1 exactly four times — which corresponds to 1 mod m.
1. Let p be an odd prime. Show that in arithmetic modulo p^k, 1 and p^k − 1 are the only self-invertible elements.
...more to come
Fermat's Little Theorem
As mentioned in the previous section, not every element is invertible (i.e. has an inverse) mod m. A generalised version of Fermat's Little Theorem uses Euler's totient function; it states
a^φ(m) ≡ 1 (mod m)
for all a ≠ 0 satisfying gcd(a, m) = 1. This is easy to see from the generalised version of Wilson's Theorem. We use a technique similar to the proof of Fermat's Little Theorem: multiplying every invertible element by a just permutes them, so we have
(a·b₁) × (a·b₂) × ... × (a·b_φ(m)) ≡ b₁ × b₂ × ... × b_φ(m) (mod m)
where the b_i's are all the invertible elements mod m. By Wilson's theorem the product of all the invertible elements equals, say, d (= 1 or −1). So we get
a^φ(m) × d ≡ d (mod m), hence a^φ(m) ≡ 1 (mod m)
which is essentially the statement of Fermat's Little Theorem.
Although the FLT is very neat, it is imprecise in some sense. For example, take m = 15 = 3 × 5: we know that if a has an inverse mod 15 then a^φ(15) = a⁸ ≡ 1 (mod 15). But 8 is larger than necessary — all we need is 4; that is, a⁴ ≡ 1 (mod 15) for every invertible a (the reader can check).
The Carmichael function λ(m) is the smallest number such that a^λ(m) ≡ 1 (mod m) for all invertible a. A question in the Problem Set deals with this function.
...more to come
It is quite clear that factorising a large number can be extremely difficult. For example, given that 76372591715434667 is the product of two primes, can the reader factorise it? Without the help of good computer algebra software, the task is close to impossible. As of today, there is no known efficient general-purpose algorithm for factorising a number into prime factors.
However, under certain special circumstances factorising can be easy. We shall consider the two-torsion factorisation method. A two-torsion element in modulo-m arithmetic is a number a such that a² ≡ 1 (mod m).
Let's consider an example in arithmetic modulo 21. Note that using the CRT we can represent any number in mod 21 as a two-tuple. The two-torsion elements are 1 = (1,1), 13 = (1,−1), 8 = (−1,1) and 20 = (−1,−1). Of interest are the numbers 13 and 8, because 1 and 20 (≡ −1) are obviously two-torsion; we call these trivially two-torsion.
Now, 13 + 1 = (1,−1) + (1,1) = (2,0). Therefore 13 + 1 = 14 is an element sharing a common factor with 21, as the second component of the two-tuple representation of 14 is zero. Therefore gcd(14, 21) = 7 is a factor of 21.
The above example is very silly because anyone can factorise 21. But what about 24131? Factorising it is not so easy. But, if we are given that 12271 is a non-trivial (i.e. ≠ 1 or -1) two-torsion element, then we can conclude that both gcd(12271 + 1,24131) and gcd(12271 - 1,24131) are factors of 24131. Indeed gcd(12272,24131) = 59 and gcd(12270,24131) = 409 are both factors of 24131.
More generally, let m be composite and t a non-trivial two-torsion element mod m, i.e. t ≠ 1, −1. Then
- gcd(t + 1, m) divides m, and
- gcd(t − 1, m) divides m;
this can be explained using the CRT.
We shall explain the case where m = pq with p and q prime. A non-trivial two-torsion element t has representation (1,−1) or (−1,1). Suppose t = (−1,1); then t + 1 = (−1,1) + (1,1) = (0,2), a multiple of p, so gcd(t + 1, m) = p, while t − 1 = (−1,1) − (1,1) = (−2,0), a multiple of q, so gcd(t − 1, m) = q.
So if we are given a non-trivial two-torsion element, then we have effectively found one (and possibly more) prime factors, which goes a long way towards factorising the number. In most modern public-key cryptography applications, breaking the system requires only factorising a number with two prime factors. In that regard the two-torsion factorisation method is frighteningly effective.
Of course, finding a non-trivial two-torsion element is not an easy task either, so internet banking is still safe for the moment. By the way, 76372591715434667 = 224364191 × 340395637.
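The method itself is only a few lines of code. The sketch below uses the 24131 example from the text: given the non-trivial two-torsion element 12271, two gcds recover both prime factors.

```python
from math import gcd

m, t = 24131, 12271
assert t * t % m == 1 and t not in (1, m - 1)  # non-trivial two-torsion

p = gcd(t + 1, m)  # 59
q = gcd(t - 1, m)  # 409
print(p, q, p * q == m)  # 59 409 True
```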
1. Given that 18815 is a two-torsion element mod 26176. Factorise 26176.
...more to come | https://en.m.wikibooks.org/wiki/High_School_Mathematics_Extensions/Further_Modular_Arithmetic | 18
10 | Cross multiplication is a math process used to solve for a missing number when comparing two equal ratios. It involves multiplying the top and bottom numbers of each ratio or fraction with the one opposite to it in the other ratio.
Multiply first numerator and second denominator
The first numerator is the top of the ratio on the left of the equal sign. The second denominator is the bottom number on the right of the equal sign. In the formula 1/2 = n/4, multiply 1 x 4. The result is 4.
Multiply second numerator and first denominator
Multiply the numerator, or top number, in the ratio on the right side of the equal sign by the denominator, or bottom number, in the ratio on the left side of the equal sign. In the example, multiply n x 2. The result is 2n.
Set up the equation and solve for n
To complete the algebraic equation and solve for n, state the formula 2n = 4. Each side of the equal sign has one of the results of the cross multiplication steps. To solve for n, divide each side by 2. Since 2n/2 is n, and 4/2 is 2, n = 2. | https://www.reference.com/article/cross-multiply-acc2149ecb505caf | 18 |
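The whole procedure fits in a few lines of Python. Here Fraction keeps the answer exact even when the numbers don't divide evenly.

```python
from fractions import Fraction

def solve_proportion(a, b, d):
    """Solve a/b = n/d for n by cross multiplication: a*d = n*b."""
    return Fraction(a * d, b)

print(solve_proportion(1, 2, 4))  # 2, matching the worked example above
print(solve_proportion(3, 5, 7))  # 21/5, a case that needs a fraction
```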
33 | Let’s first remember how to construct angle bisectors. First you draw a circle with center in a vertex of your angle and arbitral radius. From the points you got as intersections with your circle and arms of angle, you construct another two circles. Draw a semi line that goes through vertex and intersection of those two circles and you bisected your angle.
Once people knew how to construct an angle bisector, they started wondering what would happen if they drew all three angle bisectors in a triangle. Would they intersect at the same point, and what would that mean? So let's try constructing them.
We find that all three angle bisectors intersect at the same point. Let's mark it 'I'.
First, what are the properties of an angle bisector? Every point on an angle bisector is equidistant from the two arms of the angle.
This holds for every point on that bisector, and likewise for the bisectors of the other angles.
The point where they intersect is therefore equidistant from all three sides of the triangle, and it is called the center of the inscribed circle.
If we put the needle of the compass at I and spread it to one side, we get a circle that touches all sides of the triangle. But be careful: a common mistake is to take the radius as the distance from I to a vertex or to a point used in the construction. The radius must be the true distance from I to a side, which you find by dropping a perpendicular line from I to that side.
Interesting fact: in a right triangle, the center of the circumscribed circle (not the incircle) is located at the midpoint of the hypotenuse.
The bisectors' intersection point is called the incenter — the center of the incircle, or inscribed circle.
The opposite is also possible – inscribing the triangle in a circle (the circumscribed circle):
Medians are line segments that connect the vertices of a triangle with the midpoints of the opposite sides. They also intersect at one point, which is called the centroid. It is called the centroid because it is located at the center of mass. That means that if you cut out a triangle from cardboard, find its centroid, and balance the triangle on a point there, it should sit perfectly horizontal.
Centroid is usually marked with a G.
Medians have one special property. In every triangle, the medians intersect at 2/3 of their length, measured from the vertices.
In other words, the centroid divides every median in a ratio of 2:1, measured from the vertex.
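The 2:1 ratio is easy to confirm numerically. The triangle below is an arbitrary example (not one from the worksheets): the centroid is the average of the vertices, and its distance from a vertex is exactly 2/3 of the corresponding median's length.

```python
import math

# Arbitrary example triangle
A, B, C = (0.0, 0.0), (8.0, 0.0), (2.0, 6.0)

G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)  # centroid

M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)  # midpoint of side BC
median = math.dist(A, M)                    # length of the median from A
to_centroid = math.dist(A, G)               # distance from A to the centroid

print(round(to_centroid / median, 6))  # 0.666667, i.e. exactly 2/3
```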
Construct angle bisectors and medians worksheets
Angle bisectors & angle measure (657.5 KiB, 903 hits)
Line segment of angle bisectors (482.6 KiB, 766 hits)
Medians (877.3 KiB, 833 hits)
Centroid (484.5 KiB, 643 hits)
Triangle Inequality Theorem (121.6 KiB, 577 hits) | http://www.mathemania.com/lesson/angle-bisectors-medians-triangle-inequality-theorem/ | 18 |
What is the difference? For a number x, an absolute value expression such as |x − a| represents the distance between x and a, so two different values of x can satisfy the same equation. Questions eliciting thinking: How many solutions can an absolute value equation have?
Why is it necessary to use absolute value symbols to represent the difference that is described in the second problem?
Instructional implications: model using absolute value to represent differences between two numbers. When you take the absolute value of a number, the result is always positive, even if the number itself is negative; once the sign of the inner expression is known, you can drop the absolute value brackets from the original equation and write an equivalent equation without them.
If needed, clarify the difference between an absolute value equation and the statement of its solutions — the solutions are plain numbers, so ask: should you use absolute value symbols to show the solutions? Have the student plug both candidate values into the two resulting equations.
Guide the student to write an equation that represents the relationship described in the second problem.
Then explain why an incorrectly written equation does not model the relationship described in the problem. What are these two values? What are the solutions of the first equation? Do you think you found all of the solutions?
A difference described between two values is the signal that an absolute value equation is appropriate.
Questions eliciting thinking: Can you reread the first sentence of the second problem? Evaluate the expression x − 12 for a sample of values, some less than 12 and some greater than 12, to demonstrate how the expression represents the difference between a particular value and 12. Writing an equation with a known solution: if you have values for x and y, you can determine which of the two possible relationships between x and y is true, and this tells you whether the expression inside the absolute value brackets is positive or negative.
Ask the student to solve the equation and provide feedback. Plug in known values to determine which solution is correct, then rewrite the equation without absolute value brackets.
If you plot the two resulting equations on a graph, they will both be straight lines. To solve an absolute value equation, you set up two equalities and solve each separately; this means that any equation containing an absolute value has two possible solutions. Provide additional opportunities for the student to write and solve absolute value equations — for example: do you know whether the temperature on the first day of the month is greater or less than 74 degrees?
If you already know the solution, you can tell immediately whether the number inside the absolute value brackets is positive or negative, and you can drop the absolute value brackets.
Emphasize that each expression simply means the difference between x and 12: represent the difference between x and 12 as x − 12 or 12 − x. If a student writes the solutions using absolute value symbols, or finds only one of the two solutions, ask the student to consider both solutions in the context of the problem to see whether each fits the given condition.
Set up two equations: write two separate and unrelated equations for x in terms of y, being careful not to treat them as two equations in two variables, and provide feedback to the student concerning any errors made.
Absolute value discussion: absolute value refers to the measure of distance from zero for any value on the number line.
For example, the absolute value of 3 is 3 (written |3| = 3) because there are three units between 3 and 0. Sometimes, when you look at the graph of an equation, you can see two or more clearly defined "pieces".
For example, perhaps part of the graph looks like a line, and another part looks like half of a parabola. That is what "piecewise" means.
Ask the student to identify and write as many equivalent forms of the equation as possible. Then have the student solve each equation to show that they are equivalent.
Consider implementing MFAS task Writing Absolute Value Inequalities (A. The absolute number of a number a is written as $$\left | a \right |$$ And represents the distance between a and 0 on a number line.
An absolute value equation is an equation that contains an absolute value expression. The equation $$\left | x \right |=a$$ Has two solutions x = a and x = -a because both numbers are at the distance a from 0.
An algebra video tutorial can show how to solve absolute value equations and inequalities, how to plot the solution on a number line, and how to write the answer in interval notation.
Such a tutorial can also show how to identify cases of no solution or infinitely many solutions (all real numbers). | http://pirojyzurosyjikyn.mint-body.com/write-an-absolute-value-equation-represented-by-the-graph-club-6485264852.html | 18
12 | Home / materials for teachers / mathematics / geometry curriculum – math / unit 7 – geometry curriculum – math activity 753 survey of 12th grade students. Survey of mathematics mm150 unit 7 survey of mathematics mm150 unit 7 by: pgmath loading livebinder survey of mathematics mm150 unit 7 search: . Constructed-response test questions: why we use them how we score them essay comparing and contrasting two poems, stories, or plays • mathematics — writing a.
This study sets out to survey current information and opinion relating to the question: what are the important causes at elementary and secondary school levels of incompetence, dropout, and unnecessary failure in the study of mathematics the objectives of this report are to outline the findings . Find 9780134112237 a survey of mathematics with applications 10th edition by angel et al at over 30 bookstores buy, rent or sell. Found in a survey of a random sample math terms it means that we are 95% confi dent that the actual percentage of 20 springboard® mathematics algebra 2, unit 7 .
Survey of mm150 survey of mathematics school: kaplan university essays (1) homework help (15) unit 7 discussion. Grade 6 math practice test 864+7098+109901 a 155281 which statement bestdescribes the distribution of the data from benny’s survey. In this unit, students will review the elements of a narrative essay and practice writing using setting and chronological order unit 2: story and logic in an expository essay in this unit, students will review the elements of an expository essay and practice writing using logical, relevant facts. Measuring student attitude in mathematics classrooms prepared by: the student attitude survey (sas) explores unit in their classrooms that focused on linear.
This classroom-tested unit will take you and your students through the process of planning, drafting, revising, and polishing an argumentative (or persuasive) essay. Which of these is a unit of measure for the mass of a bag of apples 0 brandon wants to conduct a survey as to whether mushrooms should grade 7 mathematics . Mathematics 7 (grade 6/7) unit 2: ratios and proportional relationships (7rp/7g) in this unit, students extend their understanding of ratios and develop understanding of proportionality to solve single- and multi-step problems.
Learn surveys math with free interactive flashcards choose from 232 different sets of surveys math flashcards on quizlet. Learn vocab quiz math unit 7 data analysis with free interactive flashcards choose from 500 different sets of vocab quiz math unit 7 data analysis flashcards on quizlet. Course syllabus mm150 survey of mathematics you turn this project in during unit 7, two letter grades will be deducted from it, giving you see sample essays . Essay on unit 1 business adminstration unit one: principles of personal responsibilities and working in a business environment section 1: know the employment rights and responsibilities of the employee and employer 1.
Writing a research paper in mathematics ashley reiter september 12, 1995 section 1: introduction: why bother good mathematical writing, like good mathematics thinking, is a skill which must be practiced and developed for optimal performance. Common core 3-8 ela and mathematics tests grade 7 ela module 1, unit 2 is the first draft of a literary analysis essay requiring textual support to discuss . Old paperscom mathematics for 10th class (unit # 7) ===== om shahzad iftikhar contact # 0313-5665666 website: www home tutors e-mail: [email protected] Note: a score of 16 or more on this first grade math test is a good indication that most skills taught in first grade were mastered want a solution to this test add to your shopping cart and purchase a detailed 7 pages solution and top-notch explanations with paypal.
This site is my virtual classroom and a home to years worth of social studies materials. Unit 7 discussion math statistics play an important role in predicting what will happen in the future based on collecting, reviewing, and analyzing data from the .
Mathematics teacher questionnaire main survey your school has agreed to participate in the third international mathematics and science study - repeat (timss-r), an . Sample items: mathematics grade 6 each sample assessment item gives an idea of how an assessment item on the msa might be presented the items appropriately measure the content of the state curriculum and may be formatted similarly to those appearing on the msa however, these are sample items only and have not appeared on any msa form. Grade 7 math practice test which group would likely give the best representation for her survey a 50 students at a library b 50 students in her school. | http://rapaperrbwj.landscaperben.us/unit-7-essay-survey-of-mathematics.html | 18 |
After this lesson, students will be able to compare fractions with different numerators and denominators. Students will be able to record the results of fraction comparisons with symbols >, =, or <, and justify the conclusions.
- Call students together as a group and ask them if they have ever played the card game War before. For students who have never seen this game previously, quickly demonstrate the game by dividing a deck of cards in half between two students. Have the two students pull the top card off their half of the decks and declare the winner of the hand to be the student with the higher numbered card.
- Explain that today students are going to be playing this game with fractions. They will be playing Fraction Wars.
- In order to play this game, students will need to make fraction playing cards. Brainstorm as a group some fractions students might want to include in their decks. (Possibilities include: 1/2, 1/3, 1/4, 3/8, etc.) List some common fractions up on the board, such as 1/2, 1/3, 1/4, for students as they create their fraction cards.
- Pass out index cards or squares of scrap paper along with pens/pencils to students, so that they can make their cards.
Explicit instruction/Teacher modeling(20 minutes)
- After students have finished making their fraction cards, call them back together as a group.
- Remind students that when deciding the winner in a fraction war, they must know which fraction is larger. Ask students, “What are some different ways to determine which fraction is larger?”
- Once students have offered various suggestions, point out to students that drawing pictures can be a quick way to visually compare two fractions.
- Another way to quickly compare two fractions is to compare them to a benchmark fraction. For example, if students need to compare 1/4 and 5/8, they know that 1/4 is less than 1/2, and 5/8 is greater than 1/2. Thus, 5/8 must be larger.
- A third possibility might be to create the same denominator. For example, when comparing 1/2 and 3/8, students might realize that 1/2 is greater, because 1/2 is the same as 4/8, and 4/8 is more than 3/8. Encourage students to be creative and keep thinking of other ways to determine which fraction is larger.
- Remind students that a number can be greater than, less than, or equal to another number. The same is true with fractions. They should already very familiar with the equal sign (=), which is used to show two numbers are equal.
- Since students have probably had less exposure to the greater than (>) and less than (<) signs, remind them to think of the symbols as an alligator’s mouth. The mouth wants to eat the bigger number, so it always is open on the side of the bigger number.
- Tell students that they are ready to play the game. Explain to students that they will be recording their games.
- Pass out sheets of lined paper. Have students create 3 columns, and label the top of each: My Fraction, My Partner’s Fraction, and Justification.
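Here is the comparison sketch mentioned above. Python's fractions module compares exact values internally, so it reproduces the common-denominator reasoning from the lesson; the pairs below are the lesson's benchmark example plus two assumed extras.

```python
from fractions import Fraction

pairs = [(Fraction(1, 4), Fraction(5, 8)),   # benchmark example from the lesson
         (Fraction(1, 2), Fraction(3, 8)),
         (Fraction(2, 4), Fraction(1, 2))]   # equivalent fractions

for left, right in pairs:
    symbol = ">" if left > right else "<" if left < right else "="
    print(f"{left} {symbol} {right}")
# 1/4 < 5/8
# 1/2 > 3/8
# 1/2 = 1/2   (Fraction reduces 2/4 to 1/2 automatically)
```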
Guided practice/Interactive modeling(10 minutes)
- Demonstrate a round of the game for students by having two students pull a card from the top of their fraction deck. Have the class record these fractions on the first line of their papers under the columns My Fraction and My Partner’s Fraction.
- As a class, determine which fraction is greater, or if the two are equal, and place the appropriate sign between the two columns. Together, in the Justification column, draw a picture of the two fractions or write a brief explanation proving why one fraction is greater than or equal to the other.
- Play another round of the game using the same process, but this time chose student volunteers to demonstrate how to fill in the chart. Have the volunteers determine which fraction is greater, and show sample justifications for their reasoning.
- If students feel comfortable with the game and how to record their work, students can move on to independent working time. Otherwise, continue playing a few more rounds of the game with student volunteers.
Independent working time(20 minutes)
- At this point, students should be broken up into partners to play the game.
- Remind the class that they must record their fraction wars and justify how they knew a fraction was greater than, less than, or equal to another. They will be sharing this information with the class later.
- Tell students that each pair of partners must agree on which fraction wins the fraction war.
- While students play, float around the classroom to check in with students individually and correct any misunderstandings.
- Enrichment: For students in need of a greater challenge, creating groups of three where three fractions instead of two are being compared can increase the difficulty. Students can also be required to justify their answers in more than one way (e.g., through both words and pictures).
- Support: For students who need a little extra help, being paired with a student who has a greater understanding of fractions can offer the opportunity for peer assistance/scaffolding. Having students draw a visual of the fractions on the back of their cards can help students to more quickly and easily visually compare the fractions, too. Limiting the activity to more common fractions is also an option.
- To assess student understanding during the course of the lesson, float around the room while students are playing the game with partners. This allows for brief check-in assessments with students to determine whether they are actually understanding the concepts.
- At the conclusion of the lesson, collect students’ written game records to determine if there are errors using symbols, or an inability to justify the comparisons. Looking over these sheets can provide insight into whether the class as a whole is struggling with understanding something, if individual students have some misconceptions that should be corrected, or if everyone is on the right path.
- Assign students to play 10-15 hands of Fraction Wars at home for homework, completing a written game record. This provides another opportunity to monitor individual student progress and understanding.
Review and closing(15 minutes)
- Call students together as a group. Tell them that they are going to have the opportunity to share about their experiences playing Fraction Wars.
- Ask the group: Were there any fractions you had a hard time comparing? If so, how did you figure it out? What are some different ways you justified your answers? Did two partners ever disagree? If so, what methods did they use to come to an agreement?
- Before students are done, remind them that there are three different signs used to compare fractions: greater than (>), less than (<), and equal to (=). Remind them that they can keep these signs straight if they think about how the open side of the mouth always wants to eat the bigger number. Encourage students to use pictures and comparisons to benchmark fractions when comparing two fractions. | https://www.education.com/lesson-plan/fraction-wars/ | 18 |
40 | Systems of equations can help solve real-life questions in all kinds of fields, from chemistry to business to sports. Solving them isn't just important for your math grades; it can save you a lot of time whether you're trying to set goals for your business or your sports team.
TL;DR (Too Long; Didn't Read)
To solve a system of equations by graphing, graph each line on the same coordinate plane and see where they intersect.
For example, imagine you and your friend are setting up a lemonade stand. You decide to divide and conquer, so your friend goes to the neighborhood basketball court while you stay on your family's street corner. At the end of the day, you pool your money. Together, you've made $200, but your friend made $50 more than you. How much money did each of you make?
Or think about basketball: Shots made outside the 3-point line are worth 3 points, baskets made inside the 3-point line are worth 2 points and free throws are only worth 1 point. Your opponent is 19 points ahead of you. What combinations of baskets could you make in order to catch up?
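The basketball question can be answered by brute force: count the non-negative combinations of 3-pointers, 2-pointers, and free throws worth exactly 19 points. A short sketch:

```python
combos = [(threes, twos, frees)
          for threes in range(7)   # at most 6 three-pointers fit in 19 points
          for twos in range(10)
          for frees in range(20)
          if 3 * threes + 2 * twos + frees == 19]

print(len(combos))   # 40 ways to score exactly 19 points
print(combos[:3])    # e.g. (0, 0, 19), (0, 1, 17), (0, 2, 15)
```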
Solve Systems of Equations by Graphing
Graphing is one of the simplest ways to solve systems of equations. All you have to do is graph both lines on the same coordinate plane, and then see where they intersect.
First, you need to write the word problem as a system of equations. Assign variables to the unknowns. Call the money you make Y, and the money your friend makes F.
Now you have two kinds of information: information about how much money you made together, and information about how the money you made compared to the money your friend made. Each of these will become an equation.
For the first equation, write:
Y + F = 200
since your money plus your friend's money adds up to $200.
Next, write an equation to describe the comparison between your earnings.
Y = F – 50
because the amount you made is equal to 50 dollars less than what your friend made. You could also write this equation as Y + 50 = F, since what you made plus 50 dollars equals what your friend made. These are different ways of writing the same thing and will not change your final answer.
So the system of equations looks like this:
Y + F = 200
Y = F – 50
Next, you need to graph both equations on the same coordinate plane. Graph your amount, Y, on the y-axis and your friend's amount, F, on the x-axis (it actually doesn't matter which is which as long as you label them correctly). You can use graph paper and a pencil, a handheld graphing calculator or an online graphing calculator.
Right now one equation is in standard form and one is in slope-intercept form. That's not a problem, necessarily, but for the sake of consistency, get both equations into slope-intercept form.
So for the first equation, convert from standard form to slope-intercept form. That means solve for Y; in other words, get Y by itself on the left side of the equals sign. So subtract F from both sides:
Y + F = 200
Y = -F + 200.
Remember that in slope-intercept form, the number in front of the F is the slope and the constant is the y-intercept.
To graph the first equation, Y = -F + 200, draw a point at (0, 200), and then use the slope to find more points. The slope is -1, so go down one unit and over one unit and draw a point. That creates a point at (1, 199), and if you repeat the process starting with that point, you'll get another point at (2, 198). These are tiny movements on a big line, so draw one more point at the x-intercept to make sure you've got things nicely graphed in the long run. If Y = 0, then F will be 200, so draw a point at (200, 0).
To graph the second equation, Y = F – 50, use the y-intercept of -50 to draw the first point at (0, -50). Since the slope is 1, start at (0, -50), and then go up one unit and over one unit. That puts you at (1, -49). Repeat the process starting from (1, -49) and you'll get a third point at (2, -48). Again, to make sure you're doing things neatly over long distances, double-check yourself by also drawing in the x-intercept. When Y = 0, F will be 50, so also draw a point at (50, 0). Draw a neat line connecting these points.
Take a close look at your graph to see where the two lines intersect. This will be the solution, because the solution to a system of equations is the point (or points) that make both equations true. On a graph, this will look like the point (or points) where the two lines intersect.
In this case, the two lines intersect at (125, 75). So the solution is that your friend (the x-coordinate) made $125 and you (the y-coordinate) made $75.
Quick logic check: Does this make sense? Together, the two values add to 200, and 125 is 50 more than 75. Sounds good.
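The same system can be checked algebraically. A minimal NumPy sketch encoding Y + F = 200 and Y − F = −50:

```python
import numpy as np

# Y + F = 200   (total earnings)
# Y - F = -50   (you made $50 less than your friend)
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
b = np.array([200.0, -50.0])

Y, F = np.linalg.solve(A, b)
print(Y, F)  # 75.0 125.0 — the intersection point (125, 75) from the graph
```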
One Solution, Infinite Solutions or No Solutions
In this case, there was exactly one point where the two lines crossed. When you're working with systems of equations, there are three possible outcomes, and each will look different on a graph.
- If the system has one solution, the lines will cross at a single point, as they did in the example.
- If the system has no solutions, the lines will never cross. They will be parallel, which in algebraic terms means they will have the same slope.
- The system can also have infinite solutions, which means your "two" lines are actually the same line. So they'll have every single point in common, which is an infinite number of solutions. | https://sciencing.com/how-to-solve-systems-of-equations-by-graphing-13712250.html | 18 |
Giving examples, breaking tasks into smaller, more manageable steps, giving hints or clues, and providing reminders can all help your students by giving them temporary supports in a new and challenging task. An open-ended question that will challenge your students to think more deeply might look like the following: "How would you balance school and work?"
In the meantime, be patient and give them the assistance they need to reach success. For a picture-based activity, simply form students into a circle and give each a unique picture of an object, animal, or whatever else suits your fancy.
Doing so will help them think analytically, which is part of thinking critically. You may set the parameters, including a time limit, materials, and physical boundaries.
Minefield: another classic team-building game. The entire group must find a way to occupy a space that shrinks over time, until they are packed together creatively like sardines.
Getting your students to think about how they came to the answer that they did will challenge them to think critically, and it gets them using more language and using it in practical ways. The entire group must find a way to occupy a space that shrinks over time, until they are packed creatively like sardines.
Have students give reasons or examples that support their ideas, and they will learn to support their arguments naturally. A Shrinking Vessel This game requires a good deal of strategy in addition to team work.
You begin a story that incorporates whatever happens to be on your assigned photo. What will you have? You might ask them to come up with a list of 10 must-have items that would help them most, or a creative passage to safety.
When they make these predictions, they not only have to think critically, they will also be using the language skills they are learning and using them in practical ways.
Team work; sportsmanship. Try one or more of these techniques with your students and see how well they can express their thoughts in the language they are learning. Keep It Real: this open-ended concept is simple and serves as an excellent segue into problem-based learning. Communication; trust. See also:
Creative collaboration; communication; problem-solving.
In some ways, critical thinking may seem out of place in the language classroom. If you would like to subscribe to the Education Dive: The following are some ways to integrate critical thinking exercises into your ESL lessons while still meeting the language goals you set for your students.
And if you are interested in more, you should follow our Facebook page, where we share more about creative, non-boring ways to teach English. This specific list comes from activities used in the Allied Media… Encourage this type of thinking and expression, and your students will benefit in more ways than one.
Debate: Introduce a statement written in a clearly visible location. Critical thinking means being able to make an argument for your beliefs or opinions. Asking these questions challenges your students to say more.
The Worst-Case Scenario: Fabricate a scenario in which students would need to work together and solve problems to succeed, like being stranded on a deserted island or getting lost at sea.
Connect it to a bigger example. In order to solve the mystery — say, the case of the missing mascot — children must work together to solve the clues in order. You can encourage your students to express logical and reasonable supports for their opinions during discussions and for writing assignments.
Zoom: Zoom is a classic classroom cooperative game that never seems to go out of style. If you are teaching ESL to children, teaching critical thinking is particularly important because it will serve them in their futures no matter what language they are speaking.
At the same time, you provide an opportunity for him to use English to express his ideas. By asking these questions, you challenge your student to think about his thinking. Critical Thinking Skills Chart: great verbs to help explain Bloom's levels and create activities for higher-level thinking skills in the classroom.
If this had a level 7 that is "creating", this would be perfect! 81 Fresh & Fun Critical-Thinking Activities: Engaging Activities and Reproducibles to Develop Kids' Higher-Level Thinking Skills, by Laurie Rozakis.
How can students own their learning with critical thinking activities they’ll really love? Allowing our students to take stands on issues that matter to them engages the classroom in a way that fosters great critical thinking.
Thinking Outside the Blank: 8 Critical Thinking Activities for ESL Students, by Susan Verner. Teaching critical thinking, though, isn't always easy.
Incorporating 'evidence' into lessons allows educators to create classroom cultures seeded in deep critical thinking. Three activities to encourage critical thinking in the classroom: below are some activities to help teachers incorporate curiosity, evidence, and critical thinking into their classrooms.
FUN Critical Thinking Activities - For Students in Any Subject, by Monica Dorcz | This newsletter was created with Smore, an online tool for creating beautiful newsletters for educators, nonprofits, businesses and more. Download | http://xybalenavunusyraq.bsaconcordia.com/classroom-activities-for-critical-thinking-6698766987.html | 18
24 | In linguistics, a clause is the smallest grammatical unit that can express a complete proposition. A typical clause consists of a subject and a predicate, the latter typically a verb phrase — a verb with any objects and other modifiers. However, the subject is sometimes left unexpressed; this is often the case in null-subject languages, where the subject is retrievable from context, but it also occurs in other languages such as English (as in imperative sentences and non-finite clauses).
A simple sentence usually consists of a single finite clause with a finite verb that is independent. More complex sentences may contain multiple clauses. Main clauses (matrix clauses, independent clauses) are those that can stand alone as a sentence. Subordinate clauses (embedded clauses, dependent clauses) are those that would be awkward or incomplete if they were alone.
A primary division for the discussion of clauses is the distinction between main clauses (i.e. matrix clauses, independent clauses) and subordinate clauses (i.e. embedded clauses, dependent clauses). A main clause can stand alone, i.e. it can constitute a complete sentence by itself. A subordinate clause (i.e. embedded clause), in contrast, is reliant on the appearance of a main clause; it depends on the main clause and is therefore a dependent clause, whereas the main clause is an independent clause.
A second major distinction concerns the difference between finite and non-finite clauses. A finite clause contains a structurally central finite verb, whereas the structurally central word of a non-finite clause is often a non-finite verb. Traditional grammar focuses on finite clauses, the awareness of non-finite clauses having arisen much later in connection with the modern study of syntax. The discussion here also focuses on finite clauses, although some aspects of non-finite clauses are considered further below.
Clauses can be classified according to a distinctive trait that is a prominent characteristic of their syntactic form. The position of the finite verb is one major trait used for classification, and the appearance of a specific type of focusing word (e.g. wh-word) is another. These two criteria overlap to an extent, which means that no single aspect of syntactic form is always decisive in determining how the clause functions. There are, however, strong tendencies.
Standard SV-clauses (subject-verb) are the norm in English. They are usually declarative (as opposed to exclamative, imperative, or interrogative); they express information in a neutral manner, e.g. The dog has eaten the bones.
Declarative clauses like these are by far the most frequently occurring type of clause in any language. They can be viewed as basic, other clause types being derived from them. Standard SV-clauses can also be interrogative or exclamative, however, given the appropriate intonation contour and/or the appearance of a question word, e.g. You did what? (interrogative) or You are so kind! (exclamative).
Examples like these demonstrate that how a clause functions cannot be known based entirely on a single distinctive syntactic criterion. SV-clauses are usually declarative, but intonation and/or the appearance of a question word can render them interrogative or exclamative.
Verb first clauses in English usually play one of three roles: 1. They express a yes/no-question via subject-auxiliary inversion, 2. they express a condition as an embedded clause, or 3. they express a command via imperative mood, e.g. Has he finished? (yes/no-question), Had we known, we would have left (condition), and Stop laughing! (command).
Most verb first clauses are main clauses. Verb first conditional clauses, however, must be classified as embedded clauses because they cannot stand alone.
Wh-clauses contain a wh-word. Wh-words often serve to help express a constituent question. They are also prevalent, though, as relative pronouns, in which case they serve to introduce a relative clause and are not part of a question. The wh-word focuses a particular constituent and most of the time, it appears in clause-initial position. The following examples illustrate standard interrogative wh-clauses. The b-sentences are direct questions (main clauses), and the c-sentences contain the corresponding indirect questions (embedded clauses): a. Sam bought the paint. b. What did Sam buy? c. We asked what Sam bought.
One important aspect of matrix wh-clauses is that subject-auxiliary inversion is obligatory when something other than the subject is focused (What did Sam buy?). When it is the subject (or something embedded in the subject) that is focused, however, subject-auxiliary inversion does not occur (Who bought the paint?).
Another important aspect of wh-clauses concerns the absence of subject-auxiliary inversion in embedded clauses, as illustrated in the c-examples just produced. Subject-auxiliary inversion is obligatory in matrix clauses when something other than the subject is focused, but it never occurs in embedded clauses regardless of the constituent that is focused. A systematic distinction in word order emerges across matrix wh-clauses, which can have VS order, and embedded wh-clauses, which always maintain SV order, e.g. Whom did you call? (VS order) versus We wondered whom you called (SV order).
Relative clauses are a mixed group. In English they can be standard SV-clauses if they are introduced by that or lack a relative pronoun entirely, or they can be wh-clauses if they are introduced by a wh-word that serves as a relative pronoun (e.g. the book that we read, the book we read, the book which we read).
As embedded clauses, relative clauses in English cannot display subject-auxiliary inversion.
A particular type of wh-relative-clause is the so-called free relative clause. Free relatives typically function as arguments, e.g. Whatever you decide is fine with us and We will invite whoever volunteers.
These relative clauses are "free" because they can appear in a variety of syntactic positions; they are not limited to appearing as modifiers of nominals. The suffix -ever is often employed to render a standard relative pronoun as a pronoun that can introduce a free relative clause.
Embedded clauses can be categorized according to their syntactic function in terms of predicate-argument structures. They can function as arguments, as adjuncts, or as predicative expressions. That is, embedded clauses can be an argument of a predicate, an adjunct on a predicate, or (part of) the predicate itself. The predicate in question is usually the matrix predicate of a main clause, but embedding of predicates is also frequent.
A clause that functions as the argument of a given predicate is known as an argument clause. Argument clauses can appear as subjects, as objects, and as obliques. They can also modify a noun predicate, in which case they are known as content clauses.
The following examples illustrate argument clauses that provide the content of a noun. Such argument clauses are content clauses: a. the claim that they won the game. b. the claim that they made yesterday.
The content clauses like these in the a-sentences are arguments. Relative clauses introduced by the relative pronoun that as in the b-clauses here have an outward appearance that is closely similar to that of content clauses. The relative clauses are adjuncts, however, not arguments.
Adjunct clauses are embedded clauses that modify an entire predicate-argument structure. All clause types (SV-, verb first, wh-) can function as adjuncts, although the stereotypical adjunct clause is SV and introduced by a subordinator (i.e. subordinate conjunction, e.g. after, because, before, when, etc.), e.g. Fred arrived before you did and Because it was raining, we stayed inside.
These adjunct clauses modify the entire matrix clause. Thus before you did in the first example modifies the matrix clause Fred arrived. Adjunct clauses can also modify a nominal predicate. The typical instance of this type of adjunct is a relative clause, e.g. the speech that she gave, where the relative clause modifies the noun speech.
An embedded clause can also function as a predicative expression. That is, it can form (part of) the predicate of a greater clause, e.g. The point is that we tried and That was what we expected.
These predicative clauses are functioning just like other predicative expressions, e.g. predicative adjectives (That was good) and predicative nominals (That was the truth). They form the matrix predicate together with the copula.
Some of the distinctions presented above are represented in syntax trees. These trees make the difference between main and subordinate clauses very clear, and they also illustrate well the difference between argument and adjunct clauses. The following dependency grammar trees show that embedded clauses are dependent on an element in the main clause, often on a verb:
The main clause encompasses the entire tree each time, whereas the embedded clause is contained within the main clause. These two embedded clauses are arguments. The embedded wh-clause what we want is the object argument of the predicate know. The embedded clause that he is gaining is the subject argument of the predicate is motivating. Both of these argument clauses are directly dependent on the main verb of the matrix clause. The following trees identify adjunct clauses using an arrow dependency edge:
These two embedded clauses are adjunct clauses because they provide circumstantial information that modifies a superordinate expression. The first is a dependent of the main verb of the matrix clause and the second is a dependent of the object noun. The arrow dependency edges identify them as adjuncts. The arrow points away from the adjunct towards its governor to indicate that semantic selection is running counter to the direction of the syntactic dependency; the adjunct is selecting its governor. The next four trees illustrate the distinction mentioned above between matrix wh-clauses and embedded wh-clauses.
The embedded wh-clause is an object argument each time. The position of the wh-word across the matrix clauses (a-trees) and the embedded clauses (b-trees) captures the difference in word order. Matrix wh-clauses have V2 word order, whereas embedded wh-clauses have (what amounts to) V3 word order. In the matrix clauses, the wh-word is a dependent of the finite verb, whereas it is the head over the finite verb in the embedded wh-clauses.
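For readers who want to experiment with dependency structures like the ones just described, the short sketch below uses the spaCy library (an assumption — the article itself is tied to no software, and spaCy's labels follow its own annotation scheme rather than the traditional grammar used here) to print the head of each word in a sentence with an embedded wh-clause:

```python
import spacy

# Assumes the small English model has been installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("We know what we want.")
for token in doc:
    # Each token points to a head, mirroring the dependency trees above;
    # the embedded clause "what we want" hangs off the matrix verb "know".
    print(f"{token.text:>6} --{token.dep_}--> {token.head.text}")
```

Running it shows the verb of the embedded clause depending on the matrix verb, which is exactly the containment relation the trees are meant to display.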
There has been confusion about the distinction between clauses and phrases. This confusion is due in part to how these concepts are employed in the phrase structure grammars of the Chomskyan tradition. In the 1970s, Chomskyan grammars began labeling many clauses as CPs (i.e. complementizer phrases) or as IPs (i.e. inflection phrases), and then later as TPs (i.e. tense phrases), etc. The choice of labels was influenced by the theory-internal desire to use the labels consistently. The X-bar schema acknowledged at least three projection levels for every lexical head: a minimal projection (e.g. N, V, P, etc.), an intermediate projection (e.g. N', V', P', etc.), and a phrase level projection (e.g. NP, VP, PP, etc.). Extending this convention to the clausal categories occurred in the interest of the consistent use of labels.
This use of labels should not, however, be confused with the actual status of the syntactic units to which the labels are attached. A more traditional understanding of clauses and phrases maintains that phrases are not clauses, and clauses are not phrases. There is a progression in the size and status of syntactic units: words < phrases < clauses. The characteristic trait of clauses, i.e. the presence of a subject and a (finite) verb, is absent from phrases. Clauses can be, however, embedded inside phrases.
The central word of a non-finite clause is usually a non-finite verb (as opposed to a finite verb). There are various types of non-finite clauses that can be acknowledged based in part on the type of non-finite verb at hand. Gerunds are widely acknowledged to constitute non-finite clauses, and some modern grammars also judge many to-infinitives to be the structural locus of non-finite clauses. Finally, some modern grammars also acknowledge so-called small clauses, which often lack a verb altogether. It should be apparent that non-finite clauses are (by and large) embedded clauses.
The bracketed strings in the following examples are considered non-finite clauses, e.g. a. [Sam stopping the fight] impressed everyone. b. [Sam's stopping the fight] impressed everyone. a. [Him attempting the climb alone] worried us. b. [His attempting the climb alone] worried us. a. [Them cheating on the test] disappointed the teacher. b. [Their cheating on the test] disappointed the teacher.
Each of the gerunds in the a-sentences (stopping, attempting, and cheating) constitutes a non-finite clause. The subject-predicate relationship that has long been taken as the defining trait of clauses is fully present in the a-sentences. The fact that the b-sentences are also acceptable illustrates the enigmatic behavior of gerunds. They seem to straddle two syntactic categories: they can function as non-finite verbs or as nouns. When they function as nouns as in the b-sentences, it is debatable whether they constitute clauses, since nouns are not generally taken to be constitutive of clauses.
Some modern theories of syntax take many to-infinitives to be constitutive of non-finite clauses. This stance is supported by the clear predicate status of many to-infinitives. It is challenged, however, by the fact that to-infinitives do not take an overt subject, e.g. She refuses to consider the issue and He attempted to explain his position.
The to-infinitives to consider and to explain clearly qualify as predicates (because they can be negated). They do not, however, take overt subjects. The subjects she and he are dependents of the matrix verbs refuses and attempted, respectively, not of the to-infinitives. Data like these are often addressed in terms of control. The matrix predicates refuses and attempted are control verbs; they control the embedded predicates consider and explain, which means they determine which of their arguments serves as the subject argument of the embedded predicate. Some theories of syntax posit the null subject PRO (i.e. pronoun) to help address the facts of control constructions, e.g. She refuses PRO to consider the issue and He attempted PRO to explain his position.
With the presence of PRO as a null subject, to-infinitives can be construed as complete clauses, since both subject and predicate are present.
One must keep in mind, though, that PRO-theory is particular to one tradition in the study of syntax and grammar (Government and Binding Theory, Minimalist Program). Other theories of syntax and grammar (e.g. Head-Driven Phrase Structure Grammar, Construction Grammar, dependency grammar) reject the presence of null elements such as PRO, which means they are likely to reject the stance that to-infinitives constitute clauses.
Another type of construction that some schools of syntax and grammar view as non-finite clauses is the so-called small clause. A typical small clause consists of a noun phrase and a predicative expression, e.g. We consider [him a friend], She found [the book boring], and They want [it done by noon].
The subject-predicate relationship is clearly present in the bracketed strings. The expression on the right is a predication over the noun phrase immediately to its left. While the subject-predicate relationship is indisputably present, the bracketed strings do not behave as single constituents, a fact that undermines their status as clauses. Hence one can debate whether the bracketed strings in these examples should qualify as clauses. The layered structures of the Chomskyan tradition are again likely to view the bracketed strings as clauses, whereas the schools of syntax that posit flatter structures are likely to reject clause status for them. | http://www.like2do.com/learn?s=Clause | 18
13 | The problems in the practice bunch at the end of this chapter are done to one decimal point of the atomic weight. Now, there are two hydrogen molecules on the reactant side and four on the product side.
Concentration times volume is the number of mols of the solute material, or: n = C × V. See laboratory cautions below. Next, we have to look at balancing hydrogen. Then we look at Cl because it is a nonmetal that is not O or H.
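As a minimal sketch of that relationship (Python; the function and variable names are illustrative, not from the chapter), moles of dissolved material follow directly from molar concentration and volume:

```python
def moles_of_solute(concentration_mol_per_l, volume_l):
    # n = C * V: molarity (mol/L) times volume (L) gives moles of solute.
    return concentration_mol_per_l * volume_l

# e.g. 0.50 L of a 2.0 mol/L solution contains 1.0 mol of solute.
print(moles_of_solute(2.0, 0.50))  # 1.0
```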
Other structures were shown in class, but will not be included in the test. As mentioned before, this relies on correcting the ratios between elements.
The flaws lead to crack formation, and crack propagation perpendicular to the applied stress is usually transgranular, along cleavage planes. Our equation right now looks like this: … We need to make the ratio 3:… Many times students have been observed gathering all the numbers in the numerator, gathering all the numbers in the denominator, presenting a new fraction of the collected numbers, and then doing the division to find an answer.
List the information available to you from this formula. Defects will appear if the charge of the impurities is not balanced. The volume of a solution, V, is measured the same way the volume of a pure liquid is measured. Graphite is a good electrical conductor and chemically stable even at high temperatures.
Last, we need to balance Cl.
Calculate the number of atoms in: … There is a reaction. We look at the zinc ratio and see that it is 1:… What is the percentage composition of sulfate in each of the following materials? Sketch out an outline of the math according to the roadmap. What is the percentage composition of oxygen in each of the following materials? There is more on solutions in the chapter devoted to that. 1. Chapter 3: Centrifugation — Biochemistry and Molecular Biology (BMB). Introduction; basic principle of sedimentation; types, care, and safety of centrifuges.
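To make the percentage-composition questions concrete, here is a hedged sketch (Python; sodium sulfate is just an example compound — the problem set's actual materials are not listed above) that computes the mass percent of sulfate in Na2SO4:

```python
# Atomic weights to one decimal place, matching the chapter's convention.
NA, S, O = 23.0, 32.1, 16.0

molar_mass_na2so4 = 2 * NA + S + 4 * O  # 142.1 g/mol
mass_sulfate = S + 4 * O                # 96.1 g/mol for the SO4 group

percent_sulfate = 100 * mass_sulfate / molar_mass_na2so4
print(f"{percent_sulfate:.1f}% sulfate by mass")  # about 67.6%
```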
Moles and Percents: Why do we need moles? A chemical mole, or mol, is a unit of measure, just like a gram or an ounce. It is used internationally so that all chemists speak the same measurement language.
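Because a mole is just a counting unit, converting between grams and moles is a single division by the molar mass. A small sketch (Python; the names and the water example are illustrative):

```python
AVOGADRO = 6.022e23  # particles per mole

def grams_to_moles(mass_g, molar_mass_g_per_mol):
    # n = m / M
    return mass_g / molar_mass_g_per_mol

# e.g. 18.0 g of water (M = 18.0 g/mol) is 1.0 mol,
# i.e. about 6.022e23 molecules.
n = grams_to_moles(18.0, 18.0)
print(n, n * AVOGADRO)
```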
Chapter: Chemical Equilibrium. Key topics: the equilibrium constant; calculating equilibrium concentrations; the concept of equilibrium. Consider the reaction…
Electrochemistry The Stoichiometry of Electrolysis – Total Charge and Theoretical Yield. An electric current that flows through a cell is measured in.
ampere (A), which is the amount of charge, in. Coulomb (C. Butane, CH 3-CH 2-CH 2-CH 3, is a hydrocarbon fuel used in mi-centre.com many moles of molecules are there in a gram sample of butane??
REACTION STOICHIOMETRY. 1. Write the unbalanced equation. H 2 SO 4 + Al Ž Al 2 (SO 4) 3 + H 2. 2. Balance the equation. 3H 2 SO 4 + 2Al Ž Al 2 (SO 4) 3 + 3H 2.
3. Convert the known masses of the molecules to moles using molecular weight form the periodic table.Download | http://golysygycucozijoj.mi-centre.com/chapter-3-stoichiometry-4670746707.html | 18 |