16.17: Review of Chemical Kinetics
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/16%3A_Appendix/16.17%3A_Review_of_Chemical_Kinetics
A reaction’s equilibrium position defines the extent to which the reaction can occur. For example, we expect a reaction with a large equilibrium constant, such as the dissociation of HCl in water\[\ce{HCl}(aq) + \ce{H2O}(l) \ce{->} \ce{H3O+}(aq) + \ce{Cl-}(aq) \nonumber\]to proceed nearly to completion. A large equilibrium constant, however, does not guarantee that a reaction will reach its equilibrium position. Many reactions with large equilibrium constants, such as the reduction of \(\ce{MnO4-}\) by \(\ce{H2O}\)\[\ce{4 MnO4-}(aq) + \ce{2 H2O}(l) \ce{->} \ce{4 MnO2}(s) + \ce{3 O2}(g) + \ce{4 OH-}(aq) \nonumber\]do not occur to an appreciable extent. The study of the rate at which a chemical reaction approaches its equilibrium position is called kinetics.A study of a reaction’s kinetics begins with the measurement of its reaction rate. Consider, for example, the general reaction shown below, involving the aqueous solutes A, B, C, and D, with stoichiometries of a, b, c, and d.\[a \ce{A} + b \ce{B} \ce{<=>} c \ce{C} + d \ce{D} \label{16.1}\]The rate, or velocity, at which this reaction approaches its equilibrium position is determined by following the change in concentration of one reactant or one product as a function of time. For example, if we monitor the concentration of reactant A, we express the rate as\[R = - \frac {d[\ce{A}]} {dt} \label{16.2}\]where R is the measured rate expressed as a change in concentration of A as a function of time. Because a reactant’s concentration decreases with time, we include a negative sign so that the rate has a positive value.We also can determine the rate by following the change in concentration of a product as a function of time, which we express as\[R^{\prime} = + \frac {d[\ce{C}]} {dt} \label{16.3}\] Rates determined by monitoring different species do not necessarily have the same value. The rate R in Equation \ref{16.2} and the rate \(R^{\prime}\) in Equation \ref{16.3} have the same value only if the stoichiometric coefficients of A and C in reaction \ref{16.1} are identical. In general, the relationship between the rates R and \(R^{\prime}\) is\[R = \frac {a} {c} \times R^{\prime} \nonumber\]A rate law describes how a reaction’s rate is affected by the concentration of each species in the reaction mixture. The rate law for Reaction \ref{16.1} takes the general form of\[R = k[\ce{A}]^{\alpha} [\ce{B}]^{\beta} [\ce{C}]^{\gamma} [\ce{D}]^{\delta} [\ce{E}]^{\epsilon} ... \label{16.4}\]where k is the rate constant, and \(\alpha\), \(\beta\), \(\gamma\), \(\delta\), and \(\epsilon\) are the reaction orders of the reaction for each species present in the reaction.There are several important points about the rate law in Equation \ref{16.4}. First, a reaction’s rate may depend on the concentrations of both reactants and products, as well as the concentration of a species that does not appear in the reaction’s overall stoichiometry. Species E in Equation \ref{16.4}, for example, may be a catalyst that does not appear in the reaction’s overall stoichiometry, but which increases the reaction’s rate. Second, the reaction order for a given species is not necessarily the same as its stoichiometry in the chemical reaction. Reaction orders may be positive, negative, or zero, and may take integer or non-integer values. Finally, the reaction’s overall reaction order is the sum of the individual reaction orders for each species. 
Thus, the overall reaction order for Equation \ref{16.4} is \(\alpha + \beta + \gamma + \delta + \epsilon\).

In this section we review the application of kinetics to several simple chemical reactions, focusing on how we can use the integrated form of the rate law to determine reaction orders. In addition, we consider how we can determine the rate law for a more complex system.

The simplest case we can treat is a first-order reaction in which the reaction’s rate depends on the concentration of only one species. The simplest example of a first-order reaction is an irreversible thermal decomposition of a single reactant, which we represent as\[\ce{A} \ce{->} \text{products} \label{16.5}\]with a rate law of\[R = - \frac {d[\ce{A}]} {dt} = k[\ce{A}] \label{16.6}\]The simplest way to demonstrate that a reaction is first-order in A is to double the concentration of A and note the effect on the reaction’s rate. If the observed rate doubles, then the reaction is first-order in A. Alternatively, we can derive a relationship between the concentration of A and time by rearranging Equation \ref{16.6} and integrating.\[\frac {d[\ce{A}]} {[\ce{A}]} = -k \, dt \nonumber\]\[\int_{[\ce{A}]_0}^{[\ce{A}]_t} \frac {d[\ce{A}]} {[\ce{A}]} = - k \int_{0}^{t} dt \label{16.7}\]Evaluating the integrals in Equation \ref{16.7} and rearranging\[\ln \frac {[\ce{A}]_t} {[\ce{A}]_0} = -kt \label{16.8}\]\[\ln [\ce{A}]_t = \ln [\ce{A}]_0 - kt \label{16.9}\]shows that for a first-order reaction, a plot of \(\ln[\ce{A}]_t\) versus time is linear with a slope of \(-k\) and a y-intercept of \(\ln[\ce{A}]_0\). Equation \ref{16.8} and Equation \ref{16.9} are known as integrated forms of the rate law.

Reaction \ref{16.5} is not the only possible form of a first-order reaction. For example, the reaction\[\ce{A} + \ce{B} \ce{->} \text{products} \label{16.10}\]will follow first-order kinetics if the reaction is first-order in A and if the concentration of B does not affect the reaction’s rate, which may happen if the reaction’s mechanism involves at least two steps. Imagine that in the first step, A slowly converts to an intermediate species, C, which reacts rapidly with the remaining reactant, B, in one or more steps, to form the products.\[\ce{A} \ce{->} \ce{C} \quad (\text{slow}) \nonumber\]\[\ce{C} + \ce{B} \ce{->} \text{products} \quad (\text{fast}) \nonumber\]Because a reaction’s rate depends only on those species in the slowest step—usually called the rate-determining step—and any preceding steps, species B will not appear in the rate law.

The simplest reaction demonstrating second-order behavior is\[\ce{2 A} \ce{->} \text{products} \nonumber\]for which the rate law is\[R = - \frac {d[\ce{A}]} {dt} = k[\ce{A}]^2 \nonumber\]Proceeding as we did earlier for a first-order reaction, we can easily derive the integrated form of the rate law.\[\frac {d[\ce{A}]} {[\ce{A}]^2} = -k \, dt \nonumber\]\[\int_{[\ce{A}]_0}^{[\ce{A}]_t} \frac {d[\ce{A}]} {[\ce{A}]^2} = -k \int_0^t dt \nonumber\]\[\frac {1} {[\ce{A}]_t} = kt + \frac {1} {[\ce{A}]_0} \nonumber\]For a second-order reaction, therefore, a plot of \(1/[\ce{A}]_t\) versus time is linear with a slope of k and a y-intercept of \(1/[\ce{A}]_0\).
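These two linear forms make it straightforward to test a data set numerically as well as graphically. The following Python sketch, which assumes only numpy, generates a hypothetical first-order decay (the rate constant and concentrations are made up for illustration, not taken from this appendix) and fits both \(\ln[\ce{A}]_t\) versus t and \(1/[\ce{A}]_t\) versus t; the plot that comes closer to a straight line suggests the reaction order, and its slope estimates k.

```python
import numpy as np

# Hypothetical data set: [A] in M, sampled every 10 s, generated here from a
# first-order decay (k = 0.025 s^-1) purely for illustration.
t = np.arange(0.0, 101.0, 10.0)
A = 0.10 * np.exp(-0.025 * t)

def linear_fit(x, y):
    """Return the slope, intercept, and R^2 of a straight-line fit."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((y - y.mean())**2)
    return slope, intercept, r2

# First-order test: ln[A] versus t should be linear with slope = -k.
m1, _, r2_first = linear_fit(t, np.log(A))
# Second-order test: 1/[A] versus t should be linear with slope = +k.
m2, _, r2_second = linear_fit(t, 1.0 / A)

print(f"first-order plot : slope = {m1:.4f} s^-1, R^2 = {r2_first:.5f}")
print(f"second-order plot: slope = {m2:.4f} M^-1 s^-1, R^2 = {r2_second:.5f}")
# The better straight-line fit (R^2 closer to 1) indicates the likely order;
# for these synthetic data the first-order plot is linear and k = -slope.
```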
Alternatively, we can show that a reaction is second-order in A by observing the effect on the rate when we change the concentration of A. In this case, doubling the concentration of A produces a four-fold increase in the reaction’s rate.

The following data were obtained during a kinetic study of the hydration of p-methoxyphenylacetylene by measuring the relative amounts of reactants and products by NMR [data from Kaufman, D.; Sterner, C.; Masek, B.; Svenningsen, R.; Samuelson, G. J. Chem. Educ. 1982, 59, 885–886].

Solution

To determine the reaction’s order we plot ln(%p-methoxyphenylacetylene) versus time for a first-order reaction, and (%p-methoxyphenylacetylene)\(^{-1}\) versus time for a second-order reaction (see below). Because a straight line fits the first-order plot nicely, we conclude that the reaction is first-order in p-methoxyphenylacetylene. Note that when we plot the data using the equation for a second-order reaction, the data show curvature that does not fit the straight-line model.

Unfortunately, most reactions of importance in analytical chemistry do not follow the simple first-order or second-order rate laws discussed above. We are more likely to encounter the second-order rate law given in Equation \ref{16.11} than the simpler second-order rate law, \(R = k[\ce{A}]^2\), shown above.\[R = k [\ce{A}] [\ce{B}] \label{16.11}\]Demonstrating that a reaction obeys the rate law in Equation \ref{16.11} is complicated by the lack of a simple integrated form of the rate law. Often we can simplify the kinetics by carrying out the analysis under conditions where the concentrations of all species but one are so large that their concentrations effectively remain constant during the reaction. For example, if the concentration of B is selected such that \([\ce{B}] >> [\ce{A}]\), then Equation \ref{16.11} simplifies to\[R = k^{\prime} [\ce{A}] \nonumber\]where the rate constant \(k^{\prime}\) is equal to \(k[\ce{B}]\). Under these conditions, the reaction appears to follow first-order kinetics in A; for this reason we identify the reaction as pseudo-first-order in A. We can verify the reaction order for A using either the integrated rate law or by observing the effect on the reaction’s rate of changing the concentration of A. To find the reaction order for B, we repeat the process under conditions where \([\ce{A}] >> [\ce{B}]\).

A variation on the use of pseudo-ordered reactions is the initial rate method. In this approach we run a series of experiments in which we change one-at-a-time the concentration of each species that might affect the reaction’s rate and measure the resulting initial rate. Comparing the reaction’s initial rate for two experiments in which only the concentration of one species is different allows us to determine the reaction order for that species. The application of this method is outlined in the following example.

The following data were collected during a kinetic study of the iodination of acetone by measuring the concentration of unreacted I2 in solution [data from Birk, J. P.; Walters, D. L. J. Chem. Educ. 1992, 69, 585–587].

Solution

The order of the rate law with respect to the three reactants is determined by comparing the rates of two experiments in which there is a change in concentration for only one of the reactants. For example, in Experiments 1 and 2, only the \([\ce{H3O+}]\) changes; as doubling the \([\ce{H3O+}]\) doubles the rate, we know that the reaction is first-order in \(\ce{H3O+}\). Working in the same manner, Experiments 6 and 7 show that the reaction is also first-order with respect to \([\ce{C3H6O}]\), and Experiments 6 and 8 show that the rate of the reaction is independent of the \([\ce{I2}]\).
Thus, the rate law is\[R = k [\ce{C3H6O}] [\ce{H3O+}] \nonumber\]To determine the value of the rate constant, we substitute the rate, the \([\ce{H3O+}]\), and the \([\ce{C3H6O}]\) for each experiment into the rate law and solve for k. Using the data from Experiment 1, for example, gives a rate constant of \(3.31 \times 10^{-5} \text{ M}^{-1} \text{ s}^{-1}\). The average rate constant for the eight experiments is \(3.49 \times 10^{-5} \text{ M}^{-1} \text{ s}^{-1}\).

This page titled 16.17: Review of Chemical Kinetics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
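Before leaving this review of kinetics, the pairwise comparison used in the initial rate method, and the final division by the product of the concentrations, are simple enough to capture in a few lines of code. The sketch below is an illustration only: the function names are mine and the concentrations and rates are hypothetical placeholders, not values from the Birk and Walters study cited above.

```python
import math

def reaction_order(conc_1, conc_2, rate_1, rate_2):
    """Order in one species from two runs in which only that species'
    concentration changes: n = ln(R2/R1) / ln(C2/C1)."""
    return math.log(rate_2 / rate_1) / math.log(conc_2 / conc_1)

# Hypothetical pair of initial-rate experiments (concentrations in M,
# rates in M/s); the numbers are placeholders for illustration only.
n_acid = reaction_order(0.10, 0.20, 1.0e-6, 2.0e-6)
print(f"order in H3O+ ~ {n_acid:.2f}")   # doubling [H3O+] doubles R -> first order

# With the orders known, each experiment gives an estimate of k for
# R = k [C3H6O][H3O+].
def rate_constant(rate, conc_acetone, conc_acid):
    return rate / (conc_acetone * conc_acid)

print(f"k ~ {rate_constant(1.0e-6, 1.0, 0.10):.2e} M^-1 s^-1")
```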
16.18: Atomic Weights of the Elements
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/16%3A_Appendix/16.18%3A_Atomic_Weights_of_the_Elements
The atomic weight of any isotope of an element is referenced to \(\ce{^{12}C}\), which is assigned an exact atomic weight of 12. The atomic weight of an element, therefore, is calculated using the atomic weights of its isotopes and the known abundance of those isotopes. For some elements the isotopic abundance varies slightly from material-to-material such that the element’s atomic weight in any specific material falls within a range of possible values; this is the case for carbon, for which the range of atomic masses is reported as [12.0096, 12.0116]. For such elements, a conventional, or representative, atomic weight often is reported, chosen such that it falls within the range with an uncertainty of \(\pm 1\) in the last reported digit; in the case of carbon, for example, the representative atomic weight is 12.011. The atomic weights reported here—most to five significant figures, but a few to just three or four significant figures—are taken from the IUPAC technical report (“Atomic Weights of the Elements 2011,” Pure Appl. Chem. 2013, 85, 1047–1078). Values in ( ) are uncertainties in the last significant figure quoted and values in [ ] are the mass number for the longest-lived isotope for elements that have no stable isotopes. The atomic weights for the elements B, Br, C, Cl, H, Li, Mg, N, O, Si, S, Tl are representative values.

This page titled 16.18: Atomic Weights of the Elements is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
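The weighted average that produces these atomic weights is easy to reproduce. As a minimal sketch, the following Python snippet computes chlorine's average atomic weight from the masses and abundances of its two stable isotopes; the numbers are standard reference values, rounded here for illustration.

```python
# Chlorine's two stable isotopes: (mass in u, fractional abundance).
# These are standard reference values, rounded for illustration.
isotopes_Cl = [
    (34.9689, 0.7576),   # 35Cl
    (36.9659, 0.2424),   # 37Cl
]

atomic_weight = sum(mass * abundance for mass, abundance in isotopes_Cl)
print(f"average atomic weight of Cl = {atomic_weight:.2f}")   # 35.45
```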
2.1: Measurements in Analytical Chemistry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/02%3A_Basic_Tools_of_Analytical_Chemistry/2.01%3A_Measurements_in_Analytical_Chemistry
Analytical chemistry is a quantitative science. Whether determining the concentration of a species, evaluating an equilibrium constant, measuring a reaction rate, or drawing a correlation between a compound’s structure and its reactivity, analytical chemists engage in “measuring important chemical things” [Murray, R. W. Anal. Chem. 2007, 79, 1765]. In this section we review briefly the basic units of measurement and the proper use of significant figures.

A measurement usually consists of a unit and a number that expresses the quantity of that unit. We can express the same physical measurement with different units, which creates confusion if we are not careful to specify the unit. For example, the mass of a sample that weighs 1.5 g is equivalent to 0.0033 lb or to 0.053 oz. To ensure consistency, and to avoid problems, scientists use the common set of fundamental base units listed in Table 2.1.1. These units are called SI units after the Système International d’Unités.

It is important for scientists to agree upon a common set of units. In 1999, for example, NASA lost its Mars Climate Orbiter spacecraft because one engineering team used English units in their calculations and another engineering team used metric units. As a result, the spacecraft came too close to the planet’s surface, causing its propulsion system to overheat and fail.

Some measurements, such as absorbance, do not have units. Because the meaning of a unitless number often is unclear, some authors include an artificial unit. It is not unusual to see the abbreviation AU—short for absorbance unit—following an absorbance value, which helps clarify that the measurement is an absorbance value.

Table 2.1.1: Fundamental SI base units (1 unit is...).

- mass: kilogram (kg) — the mass of the international prototype, a Pt–Ir object housed at the Bureau International des Poids et Mesures at Sèvres, France. (Note: The mass of the international prototype changes at a rate of approximately 1 μg per year due to reversible surface contamination. The reference mass, therefore, is determined immediately after its cleaning using a specified procedure. Current plans call for retiring the international prototype and defining the kilogram in terms of Planck’s constant; see this link for more details.)
- distance: meter (m) — the distance light travels in \((299\ 792\ 458)^{-1}\) seconds.
- temperature: kelvin (K) — \((273.16)^{-1}\) of the thermodynamic temperature of the triple point of water (where its solid, liquid, and gaseous forms are in equilibrium).
- time: second (s) — the time it takes for 9 192 631 770 periods of radiation corresponding to a specific transition of the \(\ce{^{133}Cs}\) atom.
- current: ampere (A) — the current producing a force of \(2 \times 10^{-7}\) N/m between two straight parallel conductors of infinite length separated by one meter (in a vacuum).
- amount of substance: mole (mol) — the amount of a substance containing as many particles as there are atoms in exactly 0.012 kilogram of \(\ce{^{12}C}\).
- light: candela (cd) — the luminous intensity of a source with a monochromatic frequency of \(540 \times 10^{12}\) hertz and a radiant power of \((683)^{-1}\) watts per steradian.

There is some disagreement on the use of “amount of substance” to describe the measurement for which the mole is the base SI unit; see “What’s in a Name? Amount of Substance, Chemical Amount, and Stoichiometric Amount,” the full reference for which is Giunta, C. J. J. Chem. Educ. 2016, 93, 583–586.

We define other measurements using these fundamental SI units.
For example, we measure the quantity of heat produced during a chemical reaction in joules (J), where 1 J is equivalent to \(1 \text{ m}^2 \cdot \text{kg/s}^2\). Table 2.1.2 provides a list of some important derived SI units, as well as a few common non-SI units.

Table 2.1.2: Selected derived SI units and common non-SI units.

- pressure: pascal, Pa (SI); atmosphere, atm (non-SI). 1 Pa = 1 N/m\(^2\) = 1 kg/(m\(\cdot\)s\(^2\)); 1 atm = 101 325 Pa.
- energy, work, heat: joule, J (SI); calorie, cal (non-SI); electron volt, eV (non-SI). 1 J = 1 N\(\cdot\)m = 1 m\(^2 \cdot\)kg/s\(^2\); 1 cal = 4.184 J; 1 eV = \(1.602\ 177\ 33 \times 10^{-19}\) J.

Chemists frequently work with measurements that are very large or very small. A mole contains 602 213 670 000 000 000 000 000 particles and some analytical techniques can detect as little as 0.000 000 000 000 001 g of a compound. For simplicity, we express these measurements using scientific notation; thus, a mole contains \(6.022\ 136\ 7 \times 10^{23}\) particles, and the detected mass is \(1 \times 10^{-15}\) g. Sometimes we wish to express a measurement without the exponential term, replacing it with a prefix (Table 2.1.3). A mass of \(1 \times 10^{-15}\) g, for example, is the same as 1 fg, or femtogram.

Writing a lengthy number with spaces instead of commas may strike you as unusual. For a number with more than four digits on either side of the decimal point, however, the recommendation from the International Union of Pure and Applied Chemistry is to use a thin space instead of a comma.

A measurement provides information about both its magnitude and its uncertainty. Consider, for example, the three photos in Figure 2.1.1, taken at intervals of approximately 1 sec after placing a sample on the balance. Assuming the balance is properly calibrated, we are certain that the sample’s mass is more than 0.5729 g and less than 0.5731 g. We are uncertain, however, about the sample’s mass in the last decimal place since the final two decimal places fluctuate between 29, 30, and 31. The best we can do is to report the sample’s mass as 0.5730 g ± 0.0001 g, indicating both its magnitude and its absolute uncertainty.

Figure 2.1.1: When weighing a sample on a balance, the measurement fluctuates in the final decimal place. We record this sample’s mass as 0.5730 g ± 0.0001 g.

A measurement’s significant figures convey information about a measurement’s magnitude and uncertainty. The number of significant figures in a measurement is the number of digits known exactly plus one digit whose value is uncertain. The mass shown in Figure 2.1.1, for example, has four significant figures, three of which we know exactly and one, the last, which is uncertain.

Suppose we weigh a second sample, using the same balance, and obtain a mass of 0.0990 g. Does this measurement have 3, 4, or 5 significant figures? The zero in the last decimal place is the one uncertain digit and is significant. The other two zeros, however, simply indicate the decimal point’s location. Writing the measurement in scientific notation, \(9.90 \times 10^{-2}\), clarifies that there are three significant figures in 0.0990. In the measurement 0.0990 g, the trailing zero is a significant digit, but the two zeros that precede the 9s are not.

How many significant figures are in each of the following measurements?
Convert each measurement to its equivalent scientific notation or decimal form.Solution(a) Three significant figures; \(1.20 \times 10^{-2}\) mol HCl.(b) Four significant figures; \(6.053 \times 10^2\) mg CaCO3.(c) Four significant figures; 0.000 104 3 mol Ag+.(d) Two significant figures; 93 000 mg NaOH.There are two special cases when determining the number of significant figures in a measurement. For a measurement given as a logarithm, such as pH, the number of significant figures is equal to the number of digits to the right of the decimal point. Digits to the left of the decimal point are not significant figures since they indicate only the power of 10. A pH of 2.45, therefore, contains two significant figures.The log of \(2.8 \times 10^2\) is 2.45. The log of 2.8 is 0.45 and the log of 102 is 2. The 2 in 2.45, therefore, only indicates the power of 10 and is not a significant digit.An exact number, such as a stoichiometric coefficient, has an infinite number of significant figures. A mole of CaCl2, for example, contains exactly two moles of chloride ions and one mole of calcium ions. Another example of an exact number is the relationship between some units. There are, for example, exactly 1000 mL in 1 L. Both the 1 and the 1000 have an infinite number of significant figures.Using the correct number of significant figures is important because it tells other scientists about the uncertainty of your measurements. Suppose you weigh a sample on a balance that measures mass to the nearest ±0.1 mg. Reporting the sample’s mass as 1.762 g instead of 1.7623 g is incorrect because it does not convey properly the measurement’s uncertainty. Reporting the sample’s mass as 1.76231 g also is incorrect because it falsely suggests an uncertainty of ±0.01 mg.Significant figures are also important because they guide us when reporting the result of an analysis. When we calculate a result, the answer cannot be more certain than the least certain measurement in the analysis. Rounding an answer to the correct number of significant figures is important.For addition and subtraction, we round the answer to the last decimal place in common for each measurement in the calculation. The exact sum of 135.621, 97.33, and 21.2163 is 254.1673. Since the last decimal place common to all three numbers is the hundredth’s place\[\begin{align*} &135.6{\color{Red} 2}1\\ &\phantom{1}97.3{\color{Red} 3}\\ &\underline{\phantom{1}21.2{\color{Red} 1}63}\\ &254.1673 \end{align*}\]we round the result to 254.17.The last common decimal place shared by 135.621, 97.33, and 21.2163 is shown in red.When working with scientific notation, first convert each measurement to a common exponent before determining the number of significant figures. For example, the sum of \(6.17 \times 10^7\), \(4.3 \times 10^5\), and \(3.23 \times 10^4\) is \(6.22 \times 10^7\).\[\begin{align*} &6.1{\color{Red} 7} \phantom{323} \times 10^7\\ &0.0{\color{Red} 4}3 \phantom{23} \times 10^7\\ &\underline{0.0{\color{Red} 0}323 \times 10^7}\\ &6.21623 \times 10^7 \end{align*}\]The last common decimal place shared by \(6.17 \times 10^7\), \(4.3 \times 10^5\) and \(3.23 \times 10^4\) is shown in red.For multiplication and division, we round the answer to the same number of significant figures as the measurement with the fewest number of significant figures. 
For example, when we divide the product of 22.91 and 0.152 by 16.302, we report the answer as 0.214 (three significant figures) because 0.152 has the fewest number of significant figures.\[\frac {22.91 \times 0.{\color{Red} 152}} {16.302} = 0.2136 = 0.214\nonumber\]There is no need to convert measurements in scientific notation to a common exponent when multiplying or dividing.It is important to recognize that the rules presented here for working with significant figures are generalizations. What actually is conserved is uncertainty, not the number of significant figures. For example, the following calculation101/99 = 1.02is correct even though it violates the general rules outlined earlier. Since the relative uncertainty in each measurement is approximately 1% (101 ± 1 and 99 ± 1), the relative uncertainty in the final answer also is approximately 1%. Reporting the answer as 1.0 (two significant figures), as required by the general rules, implies a relative uncertainty of 10%, which is too large. The correct answer, with three significant figures, yields the expected relative uncertainty. Chapter 4 presents a more thorough treatment of uncertainty and its importance in reporting the result of an analysis.Finally, to avoid “round-off” errors, it is a good idea to retain at least one extra significant figure throughout any calculation. Better yet, invest in a good scientific calculator that allows you to perform lengthy calculations without the need to record intermediate values. When your calculation is complete, round the answer to the correct number of significant figures using the following simple rules.For a problem that involves both addition and/or subtraction, and multiplication and/or division, be sure to account for significant figures at each step of the calculation. With this in mind, report the result of this calculation to the correct number of significant figures.\[\frac {0.250 \times (9.93 \times 10^{-3}) - 0.100 \times (1.927 \times 10^{-2})} {9.93 \times 10^{-3} + 1.927 \times 10^{-2}} = \nonumber\]The correct answer to this exercise is \(1.9 \times 10^{-2}\). To see why this is correct, let’s work through the problem in a series of steps. Here is the original problem\[\frac {0.250 \times (9.93 \times 10^{-3}) - 0.100 \times (1.927 \times 10^{-2})} {9.93 \times 10^{-3} + 1.927 \times 10^{-2}} = \nonumber\]Following the correct order of operations we first complete the two multiplications in the numerator. In each case the answer has three significant figures, although we retain an extra digit, highlight in red, to avoid round-off errors.\[\frac {2.48{\color{Red} 2} \times 10^{-3} - 1.92{\color{Red} 7} \times 10^{-3}} {9.93 \times 10^{-3} + 1.927 \times 10^{-2}} = \nonumber\]Completing the subtraction in the numerator leaves us with two significant figures since the last significant digit for each value is in the hundredths place.\[\frac {0.55{\color{Red} 5} \times 10^{-3}} {9.93 \times 10^{-3} + 1.927 \times 10^{-2}} = \nonumber\]The two values in the denominator have different exponents. 
Because we are adding together these values, we first rewrite them using a common exponent.\[\frac {0.55{\color{Red} 5} \times 10^{-3}} {0.993 \times 10^{-2} + 1.927 \times 10^{-2}} = \nonumber\]The sum in the denominator has four significant figures since each of the addends has three decimal places.\[\frac {0.55{\color{Red} 5} \times 10^{-3}} {2.92{\color{Red} 0} \times 10^{-2}} = \nonumber\]Finally, we complete the division, which leaves us with a result having two significant figures.\[\frac {0.55{\color{Red} 5} \times 10^{-3}} {2.92{\color{Red} 0} \times 10^{-2}} = 1.9 \times 10^{-2} \nonumber\]This page titled 2.1: Measurements in Analytical Chemistry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
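Because the guided exercise above mixes multiplication, subtraction, and division, it makes a convenient test case for a small rounding helper. The following Python sketch is an illustration only (it rounds a final result and does not propagate uncertainty): it evaluates the expression in full precision and then rounds to the two significant figures justified by the subtraction in the numerator.

```python
from math import floor, log10

def round_sig(value, n):
    """Round a value to n significant figures (illustration only)."""
    if value == 0:
        return 0.0
    return round(value, -int(floor(log10(abs(value)))) + (n - 1))

# The guided exercise above, evaluated in full precision and then rounded to
# the two significant figures justified by the subtraction in the numerator.
numerator = 0.250 * 9.93e-3 - 0.100 * 1.927e-2
denominator = 9.93e-3 + 1.927e-2
print(round_sig(numerator / denominator, 2))   # 0.019, i.e. 1.9 x 10^-2
```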
2.2: Concentration
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/02%3A_Basic_Tools_of_Analytical_Chemistry/2.02%3A_Concentration
Concentration is a general measurement unit that reports the amount of solute present in a known amount of solution\[\text{concentration} = \dfrac {\text{amount of solute}} {\text{amount of solution}} \label{2.1}\]Although we associate the terms “solute” and “solution” with liquid samples, we can extend their use to gas-phase and solid-phase samples as well. Table 2.2.1 lists the most common units of concentration.An alternative expression for weight percent is\[\dfrac {\text{grams solute}} {\text{grams solution}} \times 100\ \nonumber\]You can use similar alternative expressions for volume percent and for weight-to-volume percent.Both molarity and formality express concentration as moles of solute per liter of solution; however, there is a subtle difference between them. Molarity is the concentration of a particular chemical species. Formality, on the other hand, is a substance’s total concentration without regard to its specific chemical form. There is no difference between a compound’s molarity and formality if it dissolves without dissociating into ions. The formal concentration of a solution of glucose, for example, is the same as its molarity.For a compound that ionizes in solution, such as CaCl2, molarity and formality are different. When we dissolve 0.1 moles of CaCl2 in 1 L of water, the solution contains 0.1 moles of Ca2+ and 0.2 moles of Cl–. The molarity of CaCl2, therefore, is zero since there is no undissociated CaCl2 in solution; instead, the solution is 0.1 M in Ca2+ and 0.2 M in Cl–. The formality of CaCl2, however, is 0.1 F since it represents the total amount of CaCl2 in solution. This more rigorous definition of molarity, for better or worse, largely is ignored in the current literature, as it is in this textbook. When we state that a solution is 0.1 M CaCl2 we understand it to consist of Ca2+ and Cl– ions. We will reserve the unit of formality to situations where it provides a clearer description of solution chemistry.Molarity is used so frequently that we use a symbolic notation to simplify its expression in equations and in writing. Square brackets around a species indicate that we are referring to that species’ molarity. Thus, [Ca2+] is read as “the molarity of calcium ions.”For a solute that dissolves without undergoing ionization, molarity and formality have the same value. A solution that is 0.0259 M in glucose, for example, is 0.0259 F in glucose as well.Normality is a concentration unit that no longer is in common use; however, because you may encounter normality in older handbooks of analytical methods, it is helpful to understand its meaning. Normality defines concentration in terms of an equivalent, which is the amount of one chemical species that reacts stoichiometrically with another chemical species. Note that this definition makes an equivalent, and thus normality, a function of the chemical reaction in which the species participates. Although a solution of H2SO4 has a fixed molarity, its normality depends on how it reacts. You will find a more detailed treatment of normality in Appendix 1.One handbook that still uses normality is Standard Methods for the Examination of Water and Wastewater, a joint publication of the American Public Health Association, the American Water Works Association, and the Water Environment Federation. This handbook is one of the primary resources for the environmental analysis of water and wastewater.Molality is used in thermodynamic calculations where a temperature independent unit of concentration is needed. 
Molarity is based on the volume of solution that contains the solute. Since density is a temperature-dependent property, a solution’s volume, and thus its molar concentration, changes with temperature. By using the solvent’s mass in place of the solution’s volume, the resulting concentration becomes independent of temperature.

Weight percent (% w/w), volume percent (% v/v) and weight-to-volume percent (% w/v) express concentration as the units of solute present in 100 units of solution. A solution that is 1.5% w/v NH4NO3, for example, contains 1.5 g of NH4NO3 in 100 mL of solution.

Parts per million (ppm) and parts per billion (ppb) are ratios that give the grams of solute in, respectively, one million or one billion grams of sample. For example, a sample of steel that is 450 ppm in Mn contains 450 μg of Mn for every gram of steel. If we approximate the density of an aqueous solution as 1.00 g/mL, then we can express solution concentrations in ppm or ppb using the following relationships.\[\text{ppm} = \dfrac {\mu \text{g}} {\text{g}} = \dfrac {\text{mg}} {\text{L}} = \dfrac {\mu \text{g}} {\text{mL}} \quad \text{ppb} = \dfrac {\text{ng}} {\text{g}} = \dfrac {\mu \text{g}} {\text{L}} = \dfrac {\text{ng}} {\text{mL}} \nonumber\]For gases a part per million usually is expressed as a volume ratio; for example, a helium concentration of 6.3 ppm means that one liter of air contains 6.3 μL of He.

You should be careful when using parts per million and parts per billion to express the concentration of an aqueous solute. The difference between a solute’s concentration in mg/L and μg/g, for example, is significant if the solution’s density is not 1.00 g/mL. For this reason many organizations advise against using the abbreviations ppm and ppb (see section 7.10.3 at www.nist.gov). If in doubt, include the exact units, such as 0.53 μg Pb2+/L for the concentration of lead in a sample of seawater.

The most common ways to express concentration in analytical chemistry are molarity, weight percent, volume percent, weight-to-volume percent, parts per million and parts per billion. The general definition of concentration in Equation \ref{2.1} makes it easy to convert between concentration units.

A concentrated solution of ammonia is 28.0% w/w NH3 and has a density of 0.899 g/mL. What is the molar concentration of NH3 in this solution?

Solution

\[\dfrac {28.0 \text{ g } \ce{NH3}} {100 \text{ g soln}} \times \dfrac {0.899 \text{ g soln}} {\text{mL soln}} \times \dfrac {1 \text{ mol } \ce{NH3}} {17.03 \text{ g } \ce{NH3}} \times \dfrac {1000 \text{ mL}} {\text{L}} = 14.8 \text{ M} \nonumber\]The maximum permissible concentration of chloride ion in a municipal drinking water supply is \(2.50 \times 10^2\) ppm Cl–. When the supply of water exceeds this limit it often has a distinctive salty taste.
What is the equivalent molar concentration of Cl–?Solution\[\dfrac {2.50 \times 10^2 \text{ mg } \ce{Cl-}} {\text{L}} \times \dfrac {1 \text{ g}} {1000 \text{ mg}} \times \dfrac {1 \text{ mol } \ce{Cl-}} {35.453 \text{ g} \ce{Cl-}} = 7.05 \times 10^{-3} \text{ M} \nonumber\]Which solution—0.50 M NaCl or 0.25 M SrCl2—has the larger concentration when expressed in mg/mL?The concentrations of the two solutions are\[\dfrac {0.50 \text{ mol NaCl}} {\text{L}} \times \dfrac {58.44 \text{ g NaCl}} {\text{mol NaCl}} \times \dfrac {10^6 \: \mu \text{g}} {\text{g}} \times \dfrac {1 \text{L}} {1000 \text{ mL}} = 2.9 \times 10^{4} \: \mu \text{g/mL NaCl} \nonumber\]\[\dfrac {0.25 \text{ mol } \ce{SrCl2}} {\text{L}} \times \dfrac {158.5 \text{ g } \ce{SrCl2}} {\text{mol } \ce{SrCl2}} \times \dfrac {10^6 \: \mu \text{g}} {\text{g}} \times \dfrac {1 \text{L}} {1000 \text{ mL}} = 4.0 \times 10^{4} \: \mu \text{g/ml } \ce{SrCl2} \nonumber\]The solution of SrCl2 has the larger concentration when it is expressed in μg/mL instead of in mol/L.Sometimes it is inconvenient to use the concentration units in Table 2.2.1 . For example, during a chemical reaction a species’ concentration may change by many orders of magnitude. If we want to display the reaction’s progress graphically we might wish to plot the reactant’s concentration as a function of the volume of a reagent added to the reaction. Such is the case in Figure 2.2.1 for the titration of HCl with NaOH. The y-axis on the left-side of the figure displays the [H+] as a function of the volume of NaOH. The initial [H+] is 0.10 M and its concentration after adding 80 mL of NaOH is \(4.3 \times 10^{-13}\) M. We easily can follow the change in [H+] for the addition of the first 50 mL of NaOH; however, for the remaining volumes of NaOH the change in [H+] is too small to see.When working with concentrations that span many orders of magnitude, it often is more convenient to express concentration using a p-function. The p-function of X is written as pX and is defined as\[\text{p} X = - \log (X) \nonumber\]The pH of a solution that is 0.10 M H+ for example, is\[\text{pH} = - \log [\ce{H+}] = - \log (0.10) = 1.00 \nonumber\]and the pH of \(4.3 \times 10^{-13}\) M H+ is\[\text{pH} = - \log [\ce{H+}] = - \log (4.3 \times 10^{-13}) = 12.37 \nonumber\]Figure 2.2.1 shows that plotting pH as a function of the volume of NaOH provides more useful information about how the concentration of H+ changes during the titration.A more appropriate equation for pH is \(\text{pH} = - \log (a_{\ce{H+}})\) where \(a_{\ce{H+}}\) is the activity of the hydrogen ion. See Chapter 6.9 for more details. 
For now the approximate equation \(\text{pH} = - \log [\ce{H+}]\) is sufficient.

What is pNa for a solution of \(1.76 \times 10^{-3}\) M Na3PO4?

Solution

Since each mole of Na3PO4 contains three moles of Na+, the concentration of Na+ is\[[\ce{Na+}] = (1.76 \times 10^{-3} \text{ M}) \times \dfrac {3 \text{ mol } \ce{Na+}} {\text{mol } \ce{Na3PO4}} = 5.28 \times 10^{-3} \text{ M} \nonumber\]and pNa is\[\text{pNa} = - \log [\ce{Na+}] = - \log (5.28 \times 10^{-3}) = 2.277 \nonumber\]Remember that a pNa of 2.277 has three, not four, significant figures; the 2 that appears in the one’s place indicates the power of 10 when we write [Na+] as \(0.528 \times 10^{-2}\) M.

What is the [H+] in a solution that has a pH of 5.16?

Solution

The concentration of H+ is\[\text{pH} = - \log [\ce{H+}] = 5.16 \nonumber\]\[\log [\ce{H+}] = -5.16 \nonumber\]\[[\ce{H+}] = 10^{-5.16} = 6.9 \times 10^{-6} \text{ M} \nonumber\]Recall that if \(\log(X) = a\), then \(X = 10^a\).

What are the values for pNa and pSO4 if we dissolve 1.5 g Na2SO4 in a total solution volume of 500.0 mL?

The concentrations of Na+ and \(\ce{SO4^{2-}}\) are\[\dfrac {1.5 \text{ g } \ce{Na2SO4}} {0.500 \text{ L}} \times \dfrac {1 \text{ mol } \ce{Na2SO4}} {142.0 \text{ g } \ce{Na2SO4}} \times \dfrac {2 \text{ mol } \ce{Na+}} {\text{mol } \ce{Na2SO4}} = 4.23 \times 10^{-2} \text{ M } \ce{Na+} \nonumber\]\[\dfrac {1.5 \text{ g } \ce{Na2SO4}} {0.500 \text{ L}} \times \dfrac {1 \text{ mol } \ce{Na2SO4}} {142.0 \text{ g } \ce{Na2SO4}} \times \dfrac {1 \text{ mol } \ce{SO4^{2-}}} {\text{mol } \ce{Na2SO4}} = 2.11 \times 10^{-2} \text{ M } \ce{SO4^{2-}} \nonumber\]The pNa and pSO4 values are\[\text{pNa} = - \log (4.23 \times 10^{-2} \text{ M } \ce{Na+}) = 1.37 \nonumber\]\[\text{pSO}_4 = - \log (2.11 \times 10^{-2} \text{ M } \ce{SO4^{2-}}) = 1.68 \nonumber\]

This page titled 2.2: Concentration is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
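As a numerical companion to this section, here is a short Python sketch (the function names are my own) that reproduces two of the conversions worked above: expressing a weight-percent concentration with a known density as a molarity, and expressing a molar concentration as a p-function.

```python
from math import log10

def percent_ww_to_molarity(percent_ww, density_g_per_mL, molar_mass_g_per_mol):
    """(g solute / 100 g soln) x (g soln / mL soln) x (mol / g) x (1000 mL / L)."""
    return (percent_ww / 100.0) * density_g_per_mL / molar_mass_g_per_mol * 1000.0

def p_function(molar_conc):
    """pX = -log10 of a molar concentration."""
    return -log10(molar_conc)

print(f"[NH3] = {percent_ww_to_molarity(28.0, 0.899, 17.03):.1f} M")   # 14.8 M
print(f"pNa = {p_function(5.28e-3):.3f}")                              # 2.277
```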
2.3: Stoichiometric Calculations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/02%3A_Basic_Tools_of_Analytical_Chemistry/2.03%3A_Stoichiometric_Calculations
A balanced reaction, which defines the stoichiometric relationship between the moles of reactants and the moles of products, provides the basis for many analytical calculations. Consider, for example, an analysis for oxalic acid, H2C2O4, in which Fe3+ oxidizes oxalic acid to CO2\[2\ce{Fe^{3+}}(aq) + \ce{H2C2O4}(aq) + 2\ce{H2O}(l) \ce{->} 2\ce{Fe^{2+}}(aq) + 2\ce{CO2}(g) + 2\ce{H3O+}(aq) \nonumber\]The balanced reaction shows us that one mole of oxalic acid reacts with two moles of Fe3+. As shown in the following example, we can use this balanced reaction to determine the amount of H2C2O4 in a sample of rhubarb if we know the moles of Fe3+ needed to react completely with oxalic acid.

In sufficient amounts, oxalic acid, the structure for which is shown below, is toxic. At lower physiological concentrations it leads to the formation of kidney stones. The leaves of the rhubarb plant contain relatively high concentrations of oxalic acid. The stalk, which many individuals enjoy eating, contains much smaller concentrations of oxalic acid.

In the examples that follow, note that we retain an extra significant figure throughout the calculation, rounding to the correct number of significant figures at the end. We will follow this convention in any calculation that involves more than one step. If we forget that we are retaining an extra significant figure, we might report the final answer with one too many significant figures. Here we mark the extra digit in red for emphasis. Be sure you pick a system for keeping track of significant figures.

The amount of oxalic acid in a sample of rhubarb was determined by reacting with Fe3+. After extracting a 10.62-g sample of rhubarb with a solvent, oxidation of the oxalic acid required 36.44 mL of 0.0130 M Fe3+. What is the weight percent of oxalic acid in the sample of rhubarb?

Solution

We begin by calculating the moles of Fe3+ used in the reaction\[\frac {0.0130 \text{ mol } \ce{Fe^{3+}}} {\text{L}} \times 0.03644 \text{ L} = 4.73{\color{Red} 7} \times 10^{-4} \text{ mol } \ce{Fe^{3+}} \nonumber\]The moles of oxalic acid that react with the Fe3+, therefore, are\[4.73{\color{Red} 7} \times 10^{-4} \text{ mol } \ce{Fe^{3+}} \times \frac {1 \text{ mol } \ce{H2C2O4}} {2 \text{ mol } \ce{Fe^{3+}}} = 2.36{\color{Red} 8} \times 10^{-4} \text{ mol } \ce{H2C2O4} \nonumber\]Converting the moles of oxalic acid to grams of oxalic acid\[2.36{\color{Red} 8} \times 10^{-4} \text{ mol } \ce{H2C2O4} \times \frac {90.03 \text{ g } \ce{H2C2O4}} {\text{mol } \ce{H2C2O4}} = 2.13{\color{Red} 2} \times 10^{-2} \text{ g } \ce{H2C2O4} \nonumber\]and calculating the weight percent gives the concentration of oxalic acid in the sample of rhubarb as\[\frac {2.13{\color{Red} 2} \times 10^{-2} \text{ g } \ce{H2C2O4}} {10.62 \text{ g rhubarb}} \times 100 = 0.201 \text{% w/w } \ce{H2C2O4} \nonumber\]
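The chain of conversions in this example—the volume and molarity of Fe3+ to moles, the 1:2 mole ratio, the molar mass, and the weight percent—translates directly into a few lines of code. The following Python sketch (the function name is my own) repeats the rhubarb calculation.

```python
MW_OXALIC_ACID = 90.03   # g/mol

def percent_oxalic_acid(vol_Fe_mL, conc_Fe_M, sample_mass_g):
    """%w/w H2C2O4 from the volume and molarity of Fe3+ used in the analysis."""
    mol_Fe = conc_Fe_M * vol_Fe_mL / 1000.0      # mol Fe3+
    mol_oxalic = mol_Fe / 2.0                    # 1 mol H2C2O4 per 2 mol Fe3+
    g_oxalic = mol_oxalic * MW_OXALIC_ACID       # g H2C2O4
    return 100.0 * g_oxalic / sample_mass_g

print(f"{percent_oxalic_acid(36.44, 0.0130, 10.62):.3f} %w/w")   # 0.201
```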
You can dissolve a precipitate of AgBr by reacting it with Na2S2O3, as shown here.\[\ce{AgBr}(s) + 2\ce{Na2S2O3}(aq) \ce{->} \ce{Ag(S2O3)_2^{3-}}(aq) + \ce{Br-}(aq) + 4\ce{Na+}(aq) \nonumber\]How many mL of 0.0138 M Na2S2O3 do you need to dissolve 0.250 g of AgBr?

First, we find the moles of AgBr\[0.250 \text{ g AgBr} \times \frac {1 \text{ mol AgBr}} {187.8 \text{ g AgBr}} = 1.331 \times 10^{-3} \text{ mol AgBr} \nonumber\]and then the moles and volume of Na2S2O3\[1.331 \times 10^{-3} \text{ mol AgBr} \times \frac {2 \text{ mol } \ce{Na2S2O3}} {\text{mol AgBr}} = 2.662 \times 10^{-3} \text{ mol } \ce{Na2S2O3} \nonumber\]\[2.662 \times 10^{-3} \text{ mol } \ce{Na2S2O3} \times \frac {1 \text{ L}} {0.0138 \text{ mol } \ce{Na2S2O3}} \times \frac {1000 \text{ mL}} {\text{L}} = 193 \text{ mL} \nonumber\]

The analyte in Example 2.3.1, oxalic acid, is in a chemically useful form because there is a reagent, Fe3+, that reacts with it quantitatively. In many analytical methods, we first must convert the analyte into a more accessible form before we can complete the analysis. For example, one method for the quantitative analysis of disulfiram, C10H20N2S4—the active ingredient in the drug Antabuse, and whose structure is shown below—requires that we first convert the sulfur to SO2 by combustion, and then oxidize the SO2 to H2SO4 by bubbling it through a solution of H2O2. When the conversion is complete, the amount of H2SO4 is determined by titrating with NaOH.

To convert the moles of NaOH used in the titration to the moles of disulfiram in the sample, we need to know the stoichiometry of each reaction. Writing a balanced reaction for H2SO4 and NaOH is straightforward\[\ce{H2SO4}(aq) + 2\ce{NaOH}(aq) \ce{->} 2\ce{H2O}(l) + \ce{Na2SO4}(aq) \nonumber\]but the balanced reactions for the oxidations of C10H20N2S4 to SO2, and of SO2 to H2SO4 are not as immediately obvious. Although we can balance these redox reactions, it is often easier to deduce the overall stoichiometry by using a little chemical logic.

An analysis for disulfiram, C10H20N2S4, in Antabuse is carried out by oxidizing the sulfur to H2SO4 and titrating the H2SO4 with NaOH. If a 0.4613-g sample of Antabuse requires 34.85 mL of 0.02500 M NaOH to titrate the H2SO4, what is the %w/w disulfiram in the sample?

Solution

Calculating the moles of H2SO4 is easy—first, we calculate the moles of NaOH used in the titration\[(0.02500 \text{ M}) \times (0.03485 \text{ L}) = 8.712{\color{Red} 5} \times 10^{-4} \text{ mol NaOH} \nonumber\]and then we use the titration reaction’s stoichiometry to calculate the corresponding moles of H2SO4.\[8.712{\color{Red} 5} \times 10^{-4} \text{ mol NaOH} \times \frac {1 \text{ mol } \ce{H2SO4}} {2 \text{ mol NaOH}} = 4.356{\color{Red} 2} \times 10^{-4} \text{ mol } \ce{H2SO4} \nonumber\]Here is where we use a little chemical logic. Instead of balancing the reactions for the combustion of C10H20N2S4 to SO2 and for the subsequent oxidation of SO2 to H2SO4, we recognize that a conservation of mass requires that all the sulfur in C10H20N2S4 ends up in the H2SO4; thus\[4.356{\color{Red} 2} \times 10^{-4} \text{ mol } \ce{H2SO4} \times \frac {1 \text{ mol S}} {\text{mol } \ce{H2SO4}} \times \frac {1 \text{ mol } \ce{C10H20N2S4}} {4 \text{ mol S}} = 1.089{\color{Red} 0} \times 10^{-4} \text{ mol } \ce{C10H20N2S4} \nonumber\]\[1.089{\color{Red} 0} \times 10^{-4} \text{ mol } \ce{C10H20N2S4} \times \frac {296.54 \text{ g } \ce{C10H20N2S4}} {\text{mol } \ce{C10H20N2S4}} = 0.03229{\color{Red} 3} \text{ g } \ce{C10H20N2S4} \nonumber\]\[\frac {0.03229{\color{Red} 3} \text{ g } \ce{C10H20N2S4}} {0.4613 \text{ g sample}} \times 100 = 7.000 \text{% w/w } \ce{C10H20N2S4} \nonumber\]A conservation of mass is the essence of stoichiometry!

This page titled 2.3: Stoichiometric Calculations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
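The same chemical logic—conserving the sulfur from disulfiram through to the H2SO4—can be written as a short calculation. The sketch below (the function and variable names are mine) repeats the disulfiram arithmetic in full precision and reproduces the result of approximately 7.00% w/w.

```python
MW_DISULFIRAM = 296.54   # g/mol for C10H20N2S4

def percent_disulfiram(vol_NaOH_mL, conc_NaOH_M, sample_mass_g):
    """%w/w disulfiram from an acid-base titration of the H2SO4 produced."""
    mol_NaOH = conc_NaOH_M * vol_NaOH_mL / 1000.0
    mol_H2SO4 = mol_NaOH / 2.0        # H2SO4 + 2 NaOH -> Na2SO4 + 2 H2O
    mol_disulfiram = mol_H2SO4 / 4.0  # all 4 S atoms end up as H2SO4
    return 100.0 * mol_disulfiram * MW_DISULFIRAM / sample_mass_g

print(f"{percent_disulfiram(34.85, 0.02500, 0.4613):.2f} %w/w")   # 7.00
```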
2.4: Basic Equipment
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/02%3A_Basic_Tools_of_Analytical_Chemistry/2.04%3A_Basic_Equipment
The array of equipment available for making analytical measurements and working with analytical samples is impressive, ranging from the simple and inexpensive to the complex and expensive. With three exceptions—the measurement of mass, the measurement of volume, and the drying of materials—we will postpone the discussion of equipment to later chapters where its application to specific analytical methods is relevant.

An object’s mass is measured using a digital electronic analytical balance (Figure 2.4.1). An electromagnet levitates the sample pan above a permanent cylindrical magnet. When we place an object on the sample pan, it displaces the sample pan downward by a force equal to the product of the sample’s mass and its acceleration due to gravity. The balance detects this downward movement and generates a counterbalancing force by increasing the current to the electromagnet. The current needed to return the balance to its original position is proportional to the object’s mass. A typical electronic balance has a capacity of 100–200 g, and can measure mass to the nearest ±0.01 mg to ±1 mg.

Although we tend to use the terms “weight” and “mass” interchangeably, there is an important distinction between them. Mass is the absolute amount of matter in an object, measured in grams. Weight, W, is a measure of the gravitational force, g, acting on that mass, m:\[W = m \times g \nonumber\]An object has a fixed mass but its weight depends upon the acceleration due to gravity, which varies subtly from location-to-location.

A balance measures an object’s weight, not its mass. Because weight and mass are proportional to each other, we can calibrate a balance using a standard weight whose mass is traceable to the standard prototype for the kilogram. A properly calibrated balance gives an accurate value for an object’s mass; see Appendix 9 for more details on calibrating a balance.

If the sample is not moisture sensitive, a clean and dry container is placed on the balance. The container’s mass is called the tare and most balances allow you to set the container’s tare to a mass of zero. The sample is transferred to the container, the new mass is measured, and the sample’s mass is determined by subtracting the tare. A sample that absorbs moisture from the air is treated differently. The sample is placed in a covered weighing bottle and their combined mass is determined. A portion of the sample is removed and the weighing bottle and the remaining sample are reweighed. The difference between the two masses gives the sample’s mass.

Several important precautions help to minimize errors when we determine an object’s mass. To minimize the effect of vibrations, the balance is placed on a stable surface and in a level position. Because the sensitivity of an analytical balance is sufficient to measure the mass of a fingerprint, materials often are handled using tongs or laboratory tissues. Volatile liquid samples must be weighed in a covered container to avoid the loss of sample by evaporation. To minimize fluctuations in mass due to air currents, the balance pan often is housed within a wind shield, as seen in Figure 2.4.1. A sample that is cooler or warmer than the surrounding air will create convective air currents that affect the measurement of its mass. For this reason, bring your samples to room temperature before determining their mass.
Finally, samples dried in an oven are stored in a desiccator to prevent them from reabsorbing moisture from the atmosphere.

Analytical chemists use a variety of glassware to measure volume, including graduated cylinders, volumetric pipets, and volumetric flasks. The choice of what type of glassware to use depends on how accurately and how precisely we need to know the sample’s volume and whether we are interested in containing or delivering the sample.

A graduated cylinder is the simplest device for delivering a known volume of a liquid reagent (Figure 2.4.2). The graduated scale allows you to deliver any volume up to the cylinder’s maximum. Typical accuracy is ±1% of the maximum volume. A 100-mL graduated cylinder, for example, is accurate to ±1 mL.

A volumetric pipet provides a more accurate method for delivering a known volume of solution. Several different styles of pipets are available, two of which are shown in Figure 2.4.3. Transfer pipets provide the most accurate means for delivering a known volume of solution. A transfer pipet delivering less than 100 mL generally is accurate to the hundredth of a mL. Larger transfer pipets are accurate to a tenth of a mL. For example, the 10-mL transfer pipet in Figure 2.4.3 will deliver 10.00 mL with an accuracy of ±0.02 mL.

Scientists at the Brookhaven National Laboratory used a germanium nanowire to make a pipet that delivers a 35 zeptoliter (\(10^{-21}\) L) drop of a liquid gold-germanium alloy. You can read about this work in the April 21, 2007 issue of Science News.

To fill a transfer pipet, use a rubber suction bulb to pull the solution up past the calibration mark (Never use your mouth to suck a solution into a pipet!). After replacing the bulb with your finger, adjust the solution’s level to the calibration mark and dry the outside of the pipet with a laboratory tissue. Allow the pipet’s contents to drain into the receiving container with the pipet’s tip touching the inner wall of the container. A small portion of the liquid remains in the pipet’s tip and should not be blown out. With some measuring pipets any solution remaining in the tip must be blown out.

Delivering microliter volumes of liquids is not possible using transfer or measuring pipets. Digital micropipets (Figure 2.4.4), which come in a variety of volume ranges, provide for the routine measurement of microliter volumes.

Graduated cylinders and pipets deliver a known volume of solution. A volumetric flask, on the other hand, contains a specific volume of solution (Figure 2.4.5). When filled to its calibration mark, a volumetric flask that contains less than 100 mL generally is accurate to the hundredth of a mL, whereas larger volumetric flasks are accurate to the tenth of a mL. For example, a 10-mL volumetric flask contains 10.00 mL ± 0.02 mL and a 250-mL volumetric flask contains 250.0 mL ± 0.12 mL.

Because a volumetric flask contains a solution, it is used to prepare a solution with an accurately known concentration. Transfer the reagent to the volumetric flask and add enough solvent to bring the reagent into solution. Continue adding solvent in several portions, mixing thoroughly after each addition, and then adjust the volume to the flask’s calibration mark using a dropper. Finally, complete the mixing process by inverting and shaking the flask at least 10 times.

If you look closely at a volumetric pipet or a volumetric flask you will see markings similar to those shown in Figure 2.4.6. The text of the markings, which reads 10 mL T. D.
at 20 oC ± 0.02 mLindicates that the pipet is calibrated to deliver (T. D.) 10 mL of solution with an uncertainty of ±0.02 mL at a temperature of 20 oC. The temperature is important because glass expands and contracts with changes in temperatures; thus, the pipet’s accuracy is less than ±0.02 mL at a higher or a lower temperature. For a more accurate result, you can calibrate your volumetric glassware at the temperature you are working by weighing the amount of water contained or delivered and calculating the volume using its temperature dependent density.A volumetric flask has similar markings, but uses the abbreviation T. C. for “to contain” in place of T. D.You should take three additional precautions when you work with pipets and volumetric flasks. First, the volume delivered by a pipet or contained by a volumetric flask assumes that the glassware is clean. Dirt and grease on the inner surface prevent liquids from draining evenly, leaving droplets of liquid on the container’s walls. For a pipet this means the delivered volume is less than the calibrated volume, while drops of liquid above the calibration mark mean that a volumetric flask contains more than its calibrated volume. Commercially available cleaning solutions are available for cleaning pipets and volumetric flasks.Second, when filling a pipet or volumetric flask the liquid’s level must be set exactly at the calibration mark. The liquid’s top surface is curved into a meniscus, the bottom of which should align with the glassware’s calibration mark (Figure 2.4.7 ). When adjusting the meniscus, keep your eye in line with the calibration mark to avoid parallax errors. If your eye level is above the calibration mark you will overfill the pipet or the volumetric flask and you will underfill them if your eye level is below the calibration mark.Finally, before using a pipet or volumetric flask rinse it with several small portions of the solution whose volume you are measuring. This ensures the removal of any residual liquid remaining in the pipet or volumetric flask.Many materials need to be dried prior to their analysis to remove residual moisture. Depending on the material, heating to a temperature between 110 oC and 140 oC usually is sufficient. Other materials need much higher temperatures to initiate thermal decomposition.Conventional drying ovens provide maximum temperatures of 160 oC to 325 oC, depending on the model. Some ovens include the ability to circulate heated air, which allows for a more efficient removal of moisture and shorter drying times. Other ovens provide a tight seal for the door, which allows the oven to be evacuated. In some situations a microwave oven can replace a conventional laboratory oven. Higher temperatures, up to as much as 1700 oC, require a muffle furnace (Figure 2.4.8 ).After drying or decomposing a sample, it is cooled to room temperature in a desiccator to prevent the readsorption of moisture. A desiccator (Figure 2.4.9 ) is a closed container that isolates the sample from the atmosphere. A drying agent, called a desiccant, is placed in the bottom of the container. Typical desiccants include calcium chloride and silica gel. A perforated plate sits above the desiccant, providing a shelf for storing samples. Some desiccators include a stopcock that allows them to be evacuated.This page titled 2.4: Basic Equipment is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
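The calibration-by-weighing procedure mentioned above reduces to a single conversion: the volume delivered equals the mass of water delivered divided by water's density at the working temperature. The Python sketch below illustrates the idea; the density values are standard data for pure water, the example mass is hypothetical, and a rigorous calibration would also correct for the buoyancy of the air, which is not done here.

```python
# Density of pure water (g/mL) at two common working temperatures; these are
# standard reference values, rounded to five figures.
WATER_DENSITY = {20: 0.99821, 25: 0.99705}

def delivered_volume_mL(mass_of_water_g, temperature_C):
    """Volume delivered by a pipet, from the mass of water it delivers."""
    return mass_of_water_g / WATER_DENSITY[temperature_C]

# Hypothetical example: a nominal 10-mL pipet delivers 9.9812 g of water at 20 oC.
print(f"{delivered_volume_mL(9.9812, 20):.4f} mL")   # about 9.999 mL
```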
2.5: Preparing Solutions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/02%3A_Basic_Tools_of_Analytical_Chemistry/2.05%3A_Preparing_Solutions
Preparing a solution of known concentration is perhaps the most common activity in any analytical lab. The method for measuring out the solute and the solvent depends on the desired concentration and on how exactly the solution’s concentration needs to be known. Pipets and volumetric flasks are used when we need to know a solution’s exact concentration; graduated cylinders, beakers, and/or reagent bottles suffice when a concentration need only be approximate. Two methods for preparing solutions are described in this section.

A stock solution is prepared by weighing out an appropriate portion of a pure solid or by measuring out an appropriate volume of a pure liquid, placing it in a suitable flask, and diluting to a known volume. Exactly how one measures the reagent depends on the desired concentration unit. For example, to prepare a solution with a known molarity you weigh out an appropriate mass of the reagent, dissolve it in a portion of solvent, and bring it to the desired volume. To prepare a solution where the solute’s concentration is a volume percent, you measure out an appropriate volume of solute and add sufficient solvent to obtain the desired total volume.

Describe how to prepare the following three solutions: (a) 500 mL of approximately 0.20 M NaOH using solid NaOH; (b) 1 L of 150.0 ppm Cu2+ using Cu metal; and (c) 2 L of 4% v/v acetic acid using concentrated glacial acetic acid (99.8% w/w acetic acid).

Solution

(a) Because the desired concentration is known to two significant figures, we do not need to measure precisely the mass of NaOH or the volume of solution. The desired mass of NaOH is\[\frac {0.20 \text{ mol NaOH}} {\text{L}} \times \frac {40.0 \text{ g NaOH}} {\text{mol NaOH}} \times 0.50 \text{ L} = 4.0 \text{ g NaOH} \nonumber\]To prepare the solution, place 4.0 grams of NaOH, weighed to the nearest tenth of a gram, in a bottle or beaker and add approximately 500 mL of water.

(b) Since the desired concentration of Cu2+ is given to four significant figures, we must measure precisely the mass of Cu metal and the final solution volume. The desired mass of Cu metal is\[\frac {150.0 \text{ mg Cu}} {\text{L}} \times 1.000 \text{ L} \times \frac {1 \text{ g}} {1000 \text{ mg}} = 0.1500 \text{ g Cu} \nonumber\]To prepare the solution, measure out exactly 0.1500 g of Cu into a small beaker, dissolve it using a small portion of concentrated HNO3, and transfer the resulting solution to a 1-L volumetric flask. To ensure a complete transfer of Cu2+ from the beaker to the volumetric flask—what we call a quantitative transfer—rinse the beaker several times with small portions of water, adding each rinse to the volumetric flask. Finally, add additional water to the volumetric flask’s calibration mark.

(c) The concentration of this solution is only approximate so it is not necessary to measure exactly the volumes, nor is it necessary to account for the fact that glacial acetic acid is slightly less than 100% w/w acetic acid (it is approximately 99.8% w/w).
The necessary volume of glacial acetic acid is
\[\frac {4 \text{ mL } \ce{CH3COOH}} {100 \text{ mL}} \times 2000 \text{ mL} = 80 \text{ mL } \ce{CH3COOH} \nonumber\]
To prepare the solution, use a graduated cylinder to transfer 80 mL of glacial acetic acid to a container that holds approximately 2 L and add sufficient water to bring the solution to the desired volume.
Provide instructions for preparing 500 mL of 0.1250 M KBrO3.
Preparing 500 mL of 0.1250 M KBrO3 requires
\[0.5000 \text{ L} \times \frac {0.1250 \text{ mol } \ce{KBrO3}} {\text{L}} \times \frac {167.00 \text{ g } \ce{KBrO3}} {\text{mol } \ce{KBrO3}} = 10.44 \text{ g } \ce{KBrO3} \nonumber\]
Because the concentration has four significant figures, we must prepare the solution using volumetric glassware. Place a 10.44 g sample of KBrO3 in a 500-mL volumetric flask and fill part way with water. Swirl to dissolve the KBrO3 and then dilute with water to the flask's calibration mark.
Solutions are often prepared by diluting a more concentrated stock solution. A known volume of the stock solution is transferred to a new container and brought to a new volume. Since the total amount of solute is the same before and after dilution, we know that
\[C_o \times V_o = C_d \times V_d \label{2.1}\]
where \(C_o\) is the stock solution's concentration, \(V_o\) is the volume of stock solution being diluted, \(C_d\) is the dilute solution's concentration, and \(V_d\) is the volume of the dilute solution. Again, the type of glassware used to measure \(V_o\) and \(V_d\) depends on how precisely we need to know the solution's concentration.
Note that Equation \ref{2.1} applies only to those concentration units that are expressed in terms of the solution's volume, including molarity, formality, normality, volume percent, and weight-to-volume percent. It also applies to weight percent, parts per million, and parts per billion if the solution's density is 1.00 g/mL. We cannot use Equation \ref{2.1} if we express concentration in terms of molality as this is based on the mass of solvent, not the volume of solution. See Rodríguez-López, M.; Carrasquillo, A. J. Chem. Educ. 2005, 82, 1327-1328 for further discussion.
A laboratory procedure calls for 250 mL of an approximately 0.10 M solution of NH3. Describe how you would prepare this solution using a stock solution of concentrated NH3 (14.8 M).
Solution
Substituting known volumes into Equation \ref{2.1}
\[14.8 \text{ M} \times V_o = 0.10 \text{ M} \times 250 \text{ mL} \nonumber\]
and solving for \(V_o\) gives 1.7 mL. Since we are making a solution that is approximately 0.10 M NH3, we can use a graduated cylinder to measure the 1.7 mL of concentrated NH3, transfer the NH3 to a beaker, and add sufficient water to give a total volume of approximately 250 mL.
Although usually we express molarity as mol/L, we can express the volumes in mL if we do so for both \(V_o\) and \(V_d\).
To prepare a standard solution of Zn2+ you dissolve a 1.004 g sample of Zn wire in a minimal amount of HCl and dilute to volume in a 500-mL volumetric flask. If you dilute 2.000 mL of this stock solution to 250.0 mL, what is the concentration of Zn2+, in μg/mL, in your standard solution?
The first solution is a stock solution, which we then dilute to prepare the standard solution.
The concentration of Zn2+ in the stock solution is
\[\frac {1.004 \text{ g } \ce{Zn^{2+}}} {500.0 \text{ mL}} \times \frac {10^6 \: \mu \text{g}} {\text{g}} = 2008 \: \mu \text{g } \ce{Zn^{2+}} \text{/mL} \nonumber\]
To find the concentration of the standard solution we use Equation \ref{2.1}
\[\frac {2008 \: \mu \text{g } \ce{Zn^{2+}}} {\text{mL}} \times 2.000 \text{ mL} = C_d \times 250.0 \text{ mL} \nonumber\]
where Cd is the standard solution's concentration. Solving gives a concentration of 16.06 μg Zn2+/mL.
As shown in the following example, we can use Equation \ref{2.1} to calculate a solution's original concentration using its known concentration after dilution.
A sample of an ore was analyzed for Cu2+ as follows. A 1.25 gram sample of the ore was dissolved in acid and diluted to volume in a 250-mL volumetric flask. A 20 mL portion of the resulting solution was transferred by pipet to a 50-mL volumetric flask and diluted to volume. An analysis of this solution gives the concentration of Cu2+ as 4.62 μg/mL. What is the weight percent of Cu in the original ore?
Solution
Substituting known volumes (with significant figures appropriate for pipets and volumetric flasks) into Equation \ref{2.1}
\[(C_{\ce{Cu}})_o \times 20.00 \text{ mL} = 4.62 \: \mu \text{g/mL } \ce{Cu^{2+}} \times 50.00 \text{ mL} \nonumber\]
and solving for \((C_{\ce{Cu}})_o \) gives the original concentration as 11.55 μg/mL Cu2+. To calculate the grams of Cu2+ we multiply this concentration by the total volume
\[\frac {11.55 \mu \text{g } \ce{Cu^{2+}}} {\text{mL}} \times 250.0 \text{ mL} \times \frac {1 \text{ g}} {10^6 \: \mu \text{g}} = 2.888 \times 10^{-3} \text{ g } \ce{Cu^{2+}} \nonumber\]
The weight percent Cu is
\[\frac {2.888 \times 10^{-3} \text{ g } \ce{Cu^{2+}}} {1.25 \text{ g sample}} \times 100 = 0.231 \text{% w/w } \ce{Cu^{2+}} \nonumber\]
This page titled 2.5: Preparing Solutions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
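Calculations with Equation 2.1 are easy to script. As a brief aside, the following R snippet reproduces the Zn2+ standard solution calculation from the practice exercise above; the object names are arbitrary.

```r
# Minimal sketch of the Zn2+ standard solution calculation, using
# Equation 2.1 in the form Cd = Co x Vo / Vd.

C_stock <- 1.004 / 500.0 * 1e6   # ug Zn2+ per mL in the stock solution (2008)
C_std   <- C_stock * 2.000 / 250.0
C_std                            # about 16.06 ug Zn2+ per mL
```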
2.6: Spreadsheets and Computational Software
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/02%3A_Basic_Tools_of_Analytical_Chemistry/2.06%3A_Spreadsheets_and_Computational_Software
Analytical chemistry is a quantitative discipline. Whether you are completing a statistical analysis, trying to optimize experimental conditions, or exploring how a change in pH affects a compound's solubility, the ability to work with complex mathematical equations is essential. Spreadsheets, such as Microsoft Excel, are an important tool for analyzing your data and for preparing graphs of your results. Scattered throughout this textbook you will find instructions for using spreadsheets.
If you do not have access to Microsoft Excel or another commercial spreadsheet package, you might consider using Calc, a freely available open-source spreadsheet that is part of the OpenOffice.org software package at www.openoffice.org, or Google Sheets.
Although spreadsheets are useful, they are not always well suited for working with scientific data. If you plan to pursue a career in chemistry, you may wish to familiarize yourself with a more sophisticated computational software package, such as the freely available open-source program that goes by the name R, or commercial programs such as Mathematica or Matlab. You will find instructions for using R scattered throughout this textbook.
You can download the current version of R from www.r-project.org. Click on the link for Download: CRAN and find a local mirror site. Click on the link for the mirror site and then use the link for Linux, MacOS X, or Windows under the heading "Download and Install R."
Despite the power of spreadsheets and computational programs, don't forget that the most important software is behind your eyes and between your ears. The ability to think intuitively about chemistry is a critically important skill. In many cases you will find that it is possible to determine if an analytical method is feasible or to approximate the optimum conditions for an analytical method without resorting to complex calculations. Why spend time developing a complex spreadsheet or writing software code when a "back-of-the-envelope" estimate will do the trick? Once you know the general solution to your problem, you can use a spreadsheet or a computational program to work out the specifics. Throughout this textbook we will introduce tools to help develop your ability to think intuitively.
For an interesting take on the importance of intuitive thinking, see Are You Smart Enough to Work at Google? by William Poundstone (Little, Brown and Company, New York, 2012).
This page titled 2.6: Spreadsheets and Computational Software is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
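To give a sense of what working in R looks like, the short example below repeats the NaOH calculation from Section 2.5. It is only a sketch; the object names are arbitrary and nothing here is specific to any one analysis.

```r
# Using R as a calculator: the mass of NaOH needed for 500 mL of 0.20 M NaOH.

0.20 * 0.500 * 40.0     # mol/L x L x g/mol = 4.0 g NaOH

# Values also can be stored in named objects and reused.
molarity <- 0.20        # mol/L
volume   <- 0.500       # L
fw_NaOH  <- 40.0        # g/mol
molarity * volume * fw_NaOH
```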
2.7: The Laboratory Notebook
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/02%3A_Basic_Tools_of_Analytical_Chemistry/2.07%3A_The_Laboratory_Notebook
Finally, we cannot end a chapter on the basic tools of analytical chemistry without mentioning the laboratory notebook. A laboratory notebook is your most important tool when working in the lab. If kept properly, you should be able to look back at your laboratory notebook several years from now and reconstruct the experiments on which you worked.
Your instructor will provide you with detailed instructions on how he or she wants you to maintain your notebook. Of course, you should expect to bring your notebook to the lab. Everything you do, measure, or observe while working in the lab should be recorded in your notebook as it takes place. Preparing data tables to organize your data will help ensure that you record the data you need, and that you can find the data when it is time to calculate and analyze your results. Writing a narrative to accompany your data will help you remember what you did, why you did it, and why you thought it was significant. Reserve space for your calculations, for analyzing your data, and for interpreting your results. Take your notebook with you when you do research in the library.
Maintaining a laboratory notebook may seem like a great deal of effort, but if you do it well you will have a permanent record of your work. Scientists working in academic, industrial, and governmental research labs rely on their notebooks to provide a written record of their work. Questions about research carried out at some time in the past can be answered by finding the appropriate pages in the laboratory notebook. A laboratory notebook is also a legal document that helps establish patent rights and proof of discovery.
This page titled 2.7: The Laboratory Notebook is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
2.9: Additional Resources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/02%3A_Basic_Tools_of_Analytical_Chemistry/2.09%3A_Additional_Resources
The following two web sites contain useful information about the SI system of units.
For a chemist's perspective on the SI units for mass and amount, consult the following papers.
Discussions regarding possible changes in the SI base units are reviewed in these articles.
The following are useful resources for maintaining a laboratory notebook and for preparing laboratory reports.
The following texts provide instructions for using spreadsheets in analytical chemistry.
The following classic textbook emphasizes the application of intuitive thinking to the solving of problems.
This page titled 2.9: Additional Resources is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
2.10: Chapter Summary and Key Terms
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/02%3A_Basic_Tools_of_Analytical_Chemistry/2.10%3A_Chapter_Summary_and_Key_Terms
There are a few basic numerical and experimental tools with which you must be familiar. Fundamental measurements in analytical chemistry, such as mass, use base SI units, such as the kilogram. Other units, such as energy, are defined in terms of these base units. When reporting a measurement, we must be careful to include only those digits that are significant, and to maintain the uncertainty implied by these significant figures when transforming measurements into results.
The relative amount of a constituent in a sample is expressed as a concentration. There are many ways to express concentration, the most common of which are molarity, weight percent, volume percent, weight-to-volume percent, parts per million and parts per billion. Concentrations also can be expressed using p-functions.
Stoichiometric relationships and calculations are important in many quantitative analyses. The stoichiometry between the reactants and the products of a chemical reaction are given by the coefficients of a balanced chemical reaction.
Balances, volumetric flasks, pipets, and ovens are standard pieces of equipment that you will use routinely in the analytical lab. You should be familiar with the proper way to use this equipment. You also should be familiar with how to prepare a stock solution of known concentration, and how to prepare a dilute solution from a stock solution.
Key terms: analytical balance, concentration, desiccant, desiccator, dilution, formality, graduated cylinder, meniscus, molality, molarity, normality, parts per million, parts per billion, p-function, quantitative transfer, scientific notation, significant figures, SI units, stock solution, tare, volume percent, volumetric flask, volumetric pipet, weight percent, weight-to-volume percent.
This page titled 2.10: Chapter Summary and Key Terms is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
3.1: Analysis, Determination, and Measurement
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.01%3A_Analysis_Determination_and_Measurement
The first important distinction we will make is among the terms analysis, determination, and measurement. An analysis provides chemical or physical information about a sample. The component in the sample of interest to us is called the analyte, and the remainder of the sample is the matrix. In an analysis we determine the identity, the concentration, or the properties of an analyte. To make this determination we measure one or more of the analyte's chemical or physical properties.
An example will help clarify the difference between an analysis, a determination, and a measurement. In 1974 the federal government enacted the Safe Drinking Water Act to ensure the safety of the nation's public drinking water supplies. To comply with this act, municipalities monitor their drinking water supply for potentially harmful substances, such as fecal coliform bacteria. Municipal water departments collect and analyze samples from their water supply. To determine the concentration of fecal coliform bacteria an analyst passes a portion of water through a membrane filter, places the filter in a dish that contains a nutrient broth, and incubates the sample for 22–24 hrs at 44.5 oC ± 0.2 oC. At the end of the incubation period the analyst counts the number of bacterial colonies in the dish and reports the result as the number of colonies per 100 mL (Figure 3.1.1). Thus, a municipal water department analyzes samples of water to determine the concentration of fecal coliform bacteria by measuring the number of bacterial colonies that form during a carefully defined incubation period.
A fecal coliform count provides a general measure of the presence of pathogenic organisms in a water supply. For drinking water, the current maximum contaminant level (MCL) for total coliforms, including fecal coliforms, is less than 1 colony/100 mL. Municipal water departments must regularly test the water supply and must take action if more than 5% of the samples in any month test positive for coliform bacteria.
This page titled 3.1: Analysis, Determination, and Measurement is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
3.2: Techniques, Methods, Procedures, and Protocols
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.02%3A_Techniques_Methods_Procedures_and_Protocols
Suppose you are asked to develop an analytical method to determine the concentration of lead in drinking water. How would you approach this problem? To provide a structure for answering this question, it is helpful to consider four levels of analytical methodology: techniques, methods, procedures, and protocols [Taylor, J. K. Anal. Chem. 1983, 55, 600A–608A].
A technique is any chemical or physical principle that we can use to study an analyte. There are many techniques that we can use to determine the concentration of lead in drinking water [Fitch, A.; Wang, Y.; Mellican, S.; Macha, S. Anal. Chem. 1996, 68, 727A–731A]. In graphite furnace atomic absorption spectroscopy (GFAAS), for example, we first convert aqueous lead ions into free atoms—a process we call atomization. We then measure the amount of light absorbed by the free atoms. Thus, GFAAS uses both a chemical principle (atomization) and a physical principle (absorption of light).
See Chapter 10 for a discussion of graphite furnace atomic absorption spectroscopy.
A method is the application of a technique for a specific analyte in a specific matrix. As shown in Figure 3.2.1, the GFAAS method for determining the concentration of lead in water is different from that for lead in soil or blood.
A procedure is a set of written directions that tell us how to apply a method to a particular sample, including information on how to collect the sample, how to handle interferents, and how to validate results. A method may have several procedures as each analyst or agency adapts it to a specific need. As shown in Figure 3.2.1, the American Public Health Association and the American Society for Testing and Materials publish separate procedures for determining the concentration of lead in water.
Finally, a protocol is a set of stringent guidelines that specify a procedure that an analyst must follow if an agency is to accept the results. Protocols are common when the result of an analysis supports or defines public policy. When determining the concentration of lead in water under the Safe Drinking Water Act, for example, the analyst must use a protocol specified by the Environmental Protection Agency.
There is an obvious order to these four levels of analytical methodology. Ideally, a protocol uses a previously validated procedure. Before developing and validating a procedure, a method of analysis must be selected. This requires, in turn, an initial screening of available techniques to determine those that have the potential for monitoring the analyte.
This page titled 3.2: Techniques, Methods, Procedures, and Protocols is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
3.3: Classifying Analytical Techniques
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.03%3A_Classifying_Analytical_Techniques
The analysis of a sample generates a chemical or physical signal that is proportional to the amount of analyte in the sample. This signal may be anything we can measure, such as volume or absorbance. It is convenient to divide analytical techniques into two general classes based on whether the signal is proportional to the mass or moles of analyte, or is proportional to the analyte's concentration.
Consider the two graduated cylinders in Figure 3.3.1, each of which contains a solution of 0.010 M Cu(NO3)2. Cylinder 1 contains 10 mL, or \(1.0 \times 10^{-4}\) moles of Cu2+, and cylinder 2 contains 20 mL, or \(2.0 \times 10^{-4}\) moles of Cu2+. If a technique responds to the absolute amount of analyte in the sample, then the signal due to the analyte, SA, is
\[S_A = k_A n_A \label{3.1}\]
where nA is the moles or grams of analyte in the sample, and kA is a proportionality constant. Because cylinder 2 contains twice as many moles of Cu2+ as cylinder 1, analyzing the contents of cylinder 2 gives a signal twice as large as that for cylinder 1.
A second class of analytical techniques consists of those that respond to the analyte's concentration, CA
\[S_A = k_A C_A \label{3.2}\]
Since the solutions in both cylinders have the same concentration of Cu2+, their analysis yields identical signals.
A technique that responds to the absolute amount of analyte is a total analysis technique. Mass and volume are the most common signals for a total analysis technique, and the corresponding techniques are gravimetry (Chapter 8) and titrimetry (Chapter 9). With a few exceptions, the signal for a total analysis technique is the result of one or more chemical reactions, the stoichiometry of which determines the value of kA in Equation \ref{3.1}.
Historically, most early analytical methods used a total analysis technique. For this reason, total analysis techniques are often called "classical" techniques.
Spectroscopy (Chapter 10) and electrochemistry (Chapter 11), in which an optical or an electrical signal is proportional to the relative amount of analyte in a sample, are examples of concentration techniques. The relationship between the signal and the analyte's concentration is a theoretical function that depends on experimental conditions and the instrumentation used to measure the signal. For this reason the value of kA in Equation \ref{3.2} is determined experimentally.
Since most concentration techniques rely on measuring an optical or electrical signal, they also are known as "instrumental" techniques.
This page titled 3.3: Classifying Analytical Techniques is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
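The distinction between the two classes of techniques is easy to see numerically. The following R snippet is a brief sketch based on the two cylinders described above; the value chosen for kA is arbitrary and serves only to illustrate how the signals scale.

```r
# Illustrating Equations 3.1 and 3.2 with the two cylinders of 0.010 M Cu(NO3)2.
# kA is an arbitrary proportionality constant; only the ratios matter here.

kA <- 100

# cylinder 1: 10 mL of 0.010 M Cu2+; cylinder 2: 20 mL of 0.010 M Cu2+
n1 <- 0.010 * 0.010            # moles of Cu2+ in cylinder 1 (1.0e-4)
n2 <- 0.010 * 0.020            # moles of Cu2+ in cylinder 2 (2.0e-4)

kA * c(n1, n2)                 # total analysis technique: signals differ by 2x
kA * c(0.010, 0.010)           # concentration technique: identical signals
```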
3.4: Selecting an Analytical Method
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.04%3A_Selecting_an_Analytical_Method
A method is the application of a technique to a specific analyte in a specific matrix. We can develop an analytical method to determine the concentration of lead in drinking water using any of the techniques mentioned in the previous section. A gravimetric method, for example, might precipitate the lead as PbSO4 or as PbCrO4, and use the precipitate's mass as the analytical signal. Lead forms several soluble complexes, which we can use to design a complexation titrimetric method, and because Pb2+ is readily reduced at an electrode, a variety of electrochemical methods also are feasible.
Ultimately, the requirements of the analysis determine the best method. In choosing among the available methods, we give consideration to some or all of the following design criteria: accuracy, precision, sensitivity, selectivity, robustness, ruggedness, scale of operation, analysis time, availability of equipment, and cost.
Accuracy is how closely the result of an experiment agrees with the "true" or expected result. We can express accuracy as an absolute error, e
\[e = \text{obtained result} - \text{expected result} \nonumber\]
or as a percentage relative error, %er
\[\% e_r = \frac {\text{obtained result} - \text{expected result}} {\text{expected result}} \times 100 \nonumber\]
A method's accuracy depends on many things, including the signal's source, the value of kA in Equation 3.3.1 or Equation 3.3.2, and the ease of handling samples without loss or contamination. Total analysis techniques, such as gravimetry and titrimetry, often produce more accurate results than does a concentration technique because we can measure mass and volume with high accuracy, and because the value of kA is known exactly through stoichiometry.
Because it is unlikely that we know the true result, we use an expected or accepted result to evaluate accuracy. For example, we might use a standard reference material, which has an accepted value, to establish an analytical method's accuracy. You will find a more detailed treatment of accuracy in Chapter 4, including a discussion of sources of errors.
When a sample is analyzed several times, the individual results vary from trial-to-trial. Precision is a measure of this variability. The closer the agreement between individual analyses, the more precise the results. For example, the results shown in the upper half of Figure 3.4.1 for the concentration of K+ in a sample of serum are more precise than those in the lower half of Figure 3.4.1. It is important to understand that precision does not imply accuracy. That the data in the upper half of Figure 3.4.1 are more precise does not mean that the first set of results is more accurate. In fact, neither set of results may be accurate.
A method's precision depends on several factors, including the uncertainty in measuring the signal and the ease of handling samples reproducibly. In most cases we can measure the signal for a total analysis technique with a higher precision than is the case for a concentration method.
Confusing accuracy and precision is a common mistake. See Ryder, J.; Clark, A. U. Chem. Ed. 2002, 6, 1–3, and Tomlinson, J.; Dyson, P. J.; Garratt, J. U. Chem. Ed. 2001, 5, 16–23 for discussions of this and other common misconceptions about the meaning of error. You will find a more detailed treatment of precision in Chapter 4, including a discussion of sources of errors.
The ability to demonstrate that two samples have different amounts of analyte is an essential part of many analyses.
A method's sensitivity is a measure of its ability to establish that such a difference is significant. Sensitivity is often confused with a method's detection limit, which is the smallest amount of analyte we can determine with confidence.
Confidence, as we will see in Chapter 4, is a statistical concept that builds on the idea of a population of results. For this reason, we will postpone our discussion of detection limits to Chapter 4. For now, the definition of a detection limit given here is sufficient.
Sensitivity is equivalent to the proportionality constant, kA, in Equation 3.3.1 and Equation 3.3.2 [IUPAC Compendium of Chemical Terminology, Electronic version]. If \(\Delta S_A\) is the smallest difference we can measure between two signals, then the smallest detectable difference in the absolute amount or the relative amount of analyte is
\[\Delta n_A = \frac {\Delta S_A} {k_A} \quad \text{ or } \quad \Delta C_A = \frac {\Delta S_A} {k_A} \nonumber\]
Suppose, for example, that our analytical signal is a measurement of mass using a balance whose smallest detectable increment is ±0.0001 g. If our method's sensitivity is 0.200, then our method can conceivably detect a difference in mass of as little as
\[\Delta n_A = \frac {\pm 0.0001 \text{ g}} {0.200} = \pm 0.0005 \text{ g} \nonumber\]
For two methods with the same \(\Delta S_A\), the method with the greater sensitivity—that is, the method with the larger kA—is better able to discriminate between smaller amounts of analyte.
An analytical method is specific if its signal depends only on the analyte [Persson, B-A; Vessman, J. Trends Anal. Chem. 1998, 17, 117–119; Persson, B-A; Vessman, J. Trends Anal. Chem. 2001, 20, 526–532]. Although specificity is the ideal, few analytical methods are free from interferences. When an interferent contributes to the signal, we expand Equation 3.3.1 and Equation 3.3.2 to include its contribution to the sample's signal, Ssamp
\[S_{samp} = S_A + S_I = k_A n_A + k_I n_I \label{3.1}\]
\[S_{samp} = S_A + S_I = k_A C_A + k_I C_I \label{3.2}\]
where SI is the interferent's contribution to the signal, kI is the interferent's sensitivity, and nI and CI are the moles (or grams) and the concentration of interferent in the sample, respectively.
Selectivity is a measure of a method's freedom from interferences [Valcárcel, M.; Gomez-Hens, A.; Rubio, S. Trends Anal. Chem. 2001, 20, 386–393]. A method's selectivity for an interferent relative to the analyte is defined by a selectivity coefficient, KA,I
\[K_{A,I} = \frac {k_I} {k_A} \label{3.3}\]
which may be positive or negative depending on the signs of kI and kA. The selectivity coefficient is greater than +1 or less than –1 when the method is more selective for the interferent than for the analyte.
Although kA and kI usually are positive, they can be negative. For example, some analytical methods work by measuring the concentration of a species that remains after it reacts with the analyte. As the analyte's concentration increases, the concentration of the species that produces the signal decreases, and the signal becomes smaller. If the signal in the absence of analyte is assigned a value of zero, then the subsequent signals are negative.
Determining the selectivity coefficient's value is easy if we already know the values for kA and kI. As shown by Example 3.4.1, we also can determine KA,I by measuring Ssamp in the presence of and in the absence of the interferent.
A method for the analysis of Ca2+ in water suffers from an interference in the presence of Zn2+.
When the concentration of Ca2+ is 100 times greater than that of Zn2+, an analysis for Ca2+ has a relative error of +0.5%. What is the selectivity coefficient for this method?
Solution
Since only relative concentrations are reported, we can arbitrarily assign absolute concentrations. To make the calculations easy, we will let CCa = 100 (arbitrary units) and CZn = 1. A relative error of +0.5% means the signal in the presence of Zn2+ is 0.5% greater than the signal in the absence of Zn2+. Again, we can assign values to make the calculation easier. If the signal for Ca2+ in the absence of Zn2+ is 100 (arbitrary units), then the signal in the presence of Zn2+ is 100.5.
The value of kCa is determined using Equation 3.3.2
\[k_\text{Ca} = \frac {S_\text{Ca}} {C_\text{Ca}} = \frac {100} {100} = 1 \nonumber\]
In the presence of Zn2+ the signal is given by Equation \ref{3.2}; thus
\[S_{samp} = 100.5 = k_\text{Ca} C_\text{Ca} + k_\text{Zn} C_\text{Zn} = (1 \times 100) + k_\text{Zn} \times 1 \nonumber\]
Solving for kZn gives its value as 0.5. The selectivity coefficient is
\[K_\text{Ca,Zn} = \frac {k_\text{Zn}} {k_\text{Ca}} = \frac {0.5} {1} = 0.5 \nonumber\]
If you are unsure why, in the above example, the signal in the presence of zinc is 100.5, note that the percentage relative error for this problem is given by
\[\frac {\text{obtained result} - 100} {100} \times 100 = +0.5 \% \nonumber\]
Solving gives an obtained result of 100.5.
Wang and colleagues describe a fluorescence method for the analysis of Ag+ in water. When analyzing a solution that contains \(1.0 \times 10^{-9}\) M Ag+ and \(1.1 \times 10^{-7}\) M Ni2+, the fluorescence intensity (the signal) was +4.9% greater than that obtained for a sample of \(1.0 \times 10^{-9}\) M Ag+. What is KAg,Ni for this analytical method? The full citation for the data in this exercise is Wang, L.; Liang, A. N.; Chen, H.; Liu, Y.; Qian, B.; Fu, J. Anal. Chim. Acta 2008, 616, 170-176.
Because the signal for Ag+ in the presence of Ni2+ is reported as a relative error, we will assign a value of 100 as the signal for \(1 \times 10^{-9}\) M Ag+. With a relative error of +4.9%, the signal for the solution of \(1 \times 10^{-9}\) M Ag+ and \(1.1 \times 10^{-7}\) M Ni2+ is 104.9. The sensitivity for Ag+ is determined using the solution that does not contain Ni2+; thus
\[k_\text{Ag} = \frac {S_\text{Ag}} {C_\text{Ag}} = \frac {100} {1 \times 10^{-9} \text{ M}} = 1.0 \times 10^{11} \text{ M}^{-1} \nonumber\]
Substituting into Equation \ref{3.2} values for kAg, Ssamp, and the concentrations of Ag+ and Ni2+
\[104.9 = (1.0 \times 10^{11} \text{ M}^{-1}) \times (1 \times 10^{-9} \text{ M}) + k_\text{Ni} \times (1.1 \times 10^{-7} \text{ M}) \nonumber\]
and solving gives kNi as \(4.5 \times 10^7\) M–1. The selectivity coefficient is
\[K_\text{Ag,Ni} = \frac {k_\text{Ni}} {k_\text{Ag}} = \frac {4.5 \times 10^7 \text{ M}^{-1}} {1.0 \times 10^{11} \text{ M}^{-1}} = 4.5 \times 10^{-4} \nonumber\]
A selectivity coefficient provides us with a useful way to evaluate an interferent's potential effect on an analysis.
Solving Equation \ref{3.3} for kI
\[k_I = K_{A,I} \times k_A \label{3.4}\]
and substituting in Equation \ref{3.1} and Equation \ref{3.2}, and simplifying gives
\[S_{samp} = k_A \{ n_A + K_{A,I} \times n_I \} \label{3.5}\]
\[S_{samp} = k_A \{ C_A + K_{A,I} \times C_I \} \label{3.6}\]
An interferent will not pose a problem as long as the term \(K_{A,I} \times n_I\) in Equation \ref{3.5} is significantly smaller than nA, or if \(K_{A,I} \times C_I\) in Equation \ref{3.6} is significantly smaller than CA.
Barnett and colleagues developed a method to determine the concentration of codeine (structure shown below) in poppy plants [Barnett, N. W.; Bowser, T. A.; Geraldi, R. D.; Smith, B. Anal. Chim. Acta 1996, 318, 309–317]. As part of their study they evaluated the effect of several interferents. For example, the authors found that equimolar solutions of codeine and the interferent 6-methoxycodeine gave signals, respectively, of 40 and 6 (arbitrary units).
(a) What is the selectivity coefficient for the interferent, 6-methoxycodeine, relative to that for the analyte, codeine?
(b) If we need to know the concentration of codeine with an accuracy of ±0.50%, what is the maximum relative concentration of 6-methoxycodeine that we can tolerate?
Solution
(a) The signals due to the analyte, SA, and the interferent, SI, are
\[S_A = k_A C_A \quad \quad S_I = k_I C_I \nonumber\]
Solving these equations for kA and for kI, and substituting into Equation \ref{3.4} gives
\[K_{A,I} = \frac {S_I / C_I} {S_A / C_A} \nonumber\]
Because the concentrations of analyte and interferent are equimolar (CA = CI), the selectivity coefficient is
\[K_{A,I} = \frac {S_I} {S_A} = \frac {6} {40} = 0.15 \nonumber\]
(b) To achieve an accuracy of better than ±0.50% the term \(K_{A,I} \times C_I\) in Equation \ref{3.6} must be less than 0.50% of CA; thus
\[K_{A,I} \times C_I \le 0.0050 \times C_A \nonumber\]
Solving this inequality for the ratio CI/CA and substituting in the value for KA,I from part (a) gives
\[\frac {C_I} {C_A} \le \frac {0.0050} {K_{A,I}} = \frac {0.0050} {0.15} = 0.033 \nonumber\]
Therefore, the concentration of 6-methoxycodeine must be less than 3.3% of codeine's concentration.
When a method's signal is the result of a chemical reaction—for example, when the signal is the mass of a precipitate—there is a good chance that the method is not very selective and that it is susceptible to an interference.
Mercury (II) also is an interferent in the fluorescence method for Ag+ developed by Wang and colleagues (see Practice Exercise 3.4.1). The selectivity coefficient, KAg,Hg, has a value of \(-1.0 \times 10^{-3}\).
(a) What is the significance of the selectivity coefficient's negative sign?
(b) Suppose you plan to use this method to analyze solutions with concentrations of Ag+ no smaller than 1.0 nM. What is the maximum concentration of Hg2+ you can tolerate if your percentage relative errors must be less than ±1.0%?
(a) A negative value for KAg,Hg means that the presence of Hg2+ decreases the signal from Ag+.
(b) In this case we need to consider an error of –1%, since the effect of Hg2+ is to decrease the signal from Ag+. To achieve this error, the term \(K_{A,I} \times C_I\) in Equation \ref{3.6} must be less than –1% of CA; thus
\[K_\text{Ag,Hg} \times C_\text{Hg} = -0.01 \times C_\text{Ag} \nonumber\]
Substituting in known values for KAg,Hg and CAg, we find that the maximum concentration of Hg2+ is \(1.0 \times 10^{-8}\) M.
Problems with selectivity also are more likely when the analyte is present at a very low concentration [Rodgers, L. B. J. Chem. Educ. 1986, 63, 3–6].
Look back at Figure 1.1.1, which shows Fresenius' analytical method for the determination of nickel in ores. The reason there are so many steps in this procedure is that precipitation reactions generally are not very selective. The method in Figure 1.1.2 includes fewer steps because dimethylglyoxime is a more selective reagent. Even so, if an ore contains palladium, additional steps are needed to prevent the palladium from interfering.
For a method to be useful it must provide reliable results. Unfortunately, methods are subject to a variety of chemical and physical interferences that contribute uncertainty to the analysis. If a method is relatively free from chemical interferences, we can use it to analyze an analyte in a wide variety of sample matrices. Such methods are considered robust.
Random variations in experimental conditions introduce uncertainty. If a method's sensitivity, k, is too dependent on experimental conditions, such as temperature, acidity, or reaction time, then a slight change in any of these conditions may give a significantly different result. A rugged method is relatively insensitive to changes in experimental conditions.
Another way to narrow the choice of methods is to consider three potential limitations: the amount of sample available for the analysis, the expected concentration of analyte in the samples, and the minimum amount of analyte that will produce a measurable signal. Collectively, these limitations define the analytical method's scale of operations.
We can display the scale of operations visually (Figure 3.4.2) by plotting the sample's size on the x-axis and the analyte's concentration on the y-axis. For convenience, we divide samples into macro (>0.1 g), meso (10 mg–100 mg), micro (0.1 mg–10 mg), and ultramicro (<0.1 mg) sizes, and we divide analytes into major (>1% w/w), minor (0.01% w/w–1% w/w), trace (\(10^{-7}\)% w/w–0.01% w/w), and ultratrace (<\(10^{-7}\)% w/w) components. Together, the analyte's concentration and the sample's size provide a characteristic description for an analysis. For example, in a microtrace analysis the sample weighs between 0.1 mg and 10 mg and contains a concentration of analyte between \(10^{-7}\)% w/w and \(10^{-2}\)% w/w.
The diagonal lines connecting the axes show combinations of sample size and analyte concentration that contain the same absolute mass of analyte. As shown in Figure 3.4.2, for example, a 1-g sample that is 1% w/w analyte has the same amount of analyte (10 mg) as a 100-mg sample that is 10% w/w analyte, or a 10-mg sample that is 100% w/w analyte.
We can use Figure 3.4.2 to establish limits for analytical methods. If a method's minimum detectable signal is equivalent to 10 mg of analyte, then it is best suited to a major analyte in a macro or meso sample. Extending the method to an analyte with a concentration of 0.1% w/w requires a sample of 10 g, which rarely is practical due to the complications of carrying such a large amount of material through the analysis. On the other hand, a small sample that contains a trace amount of analyte places significant restrictions on an analysis. For example, a 1-mg sample that is \(10^{-4}\)% w/w in analyte contains just 1 ng of analyte. If we isolate the analyte in 1 mL of solution, then we need an analytical method that reliably can detect it at a concentration of 1 ng/mL.
It should not surprise you to learn that a total analysis technique typically requires a macro or a meso sample that contains a major analyte.
A concentration technique is particularly useful for a minor, trace, or ultratrace analyte in a macro, meso, or micro sample.
Finally, we can compare analytical methods with respect to their equipment needs, the time needed to complete an analysis, and the cost per sample. Methods that rely on instrumentation are equipment-intensive and may require significant operator training. For example, the graphite furnace atomic absorption spectroscopic method for determining lead in water requires a significant capital investment in the instrument and an experienced operator to obtain reliable results. Other methods, such as titrimetry, require less expensive equipment and less training.
The time to complete an analysis for one sample often is fairly similar from method-to-method. This is somewhat misleading, however, because much of this time is spent preparing samples, preparing reagents, and gathering together equipment. Once the samples, reagents, and equipment are in place, the sampling rate may differ substantially. For example, it takes just a few minutes to analyze a single sample for lead using graphite furnace atomic absorption spectroscopy, but several hours to analyze the same sample using gravimetry. This is a significant factor in selecting a method for a laboratory that handles a high volume of samples.
The cost of an analysis depends on many factors, including the cost of equipment and reagents, the cost of hiring analysts, and the number of samples that can be processed per hour. In general, methods that rely on instruments cost more per sample than other methods.
Unfortunately, the design criteria discussed in this section are not mutually independent [Valcárcel, M.; Ríos, A. Anal. Chem. 1993, 65, 781A–787A]. Working with smaller samples or improving selectivity often comes at the expense of precision. Minimizing cost and analysis time may decrease accuracy. Selecting a method requires carefully balancing the various design criteria. Usually, the most important design criterion is accuracy, and the best method is the one that gives the most accurate result. When the need for a result is urgent, as is often the case in clinical labs, analysis time may become the critical factor.
In some cases it is the sample's properties that determine the best method. A sample with a complex matrix, for example, may require a method with excellent selectivity to avoid interferences. Samples in which the analyte is present at a trace or ultratrace concentration usually require a concentration method. If the quantity of sample is limited, then the method must not require a large amount of sample.
Determining the concentration of lead in drinking water requires a method that can detect lead at the parts per billion concentration level. Selectivity is important because other metal ions are present at significantly higher concentrations. A method that uses graphite furnace atomic absorption spectroscopy is a common choice for determining lead in drinking water because it meets these specifications. The same method is also useful for determining lead in blood where its ability to detect low concentrations of lead using a few microliters of sample is an important consideration.
This page titled 3.4: Selecting an Analytical Method is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
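As a computational footnote to the discussion of selectivity above, the following R snippet sketches how a selectivity coefficient estimated from equimolar standards, as in the codeine example, translates into a tolerance for the interferent. The signal values are the arbitrary units reported in that example.

```r
# Estimating a selectivity coefficient from equimolar standards and using it
# to set a tolerance for an interferent (codeine/6-methoxycodeine example).

S_A <- 40       # signal for codeine (analyte), equimolar solution
S_I <- 6        # signal for 6-methoxycodeine (interferent), equimolar solution

K_AI <- S_I / S_A          # selectivity coefficient when C_A = C_I
K_AI                       # 0.15

# maximum interferent-to-analyte concentration ratio for an error under 0.50%
max_ratio <- 0.0050 / K_AI
max_ratio                  # about 0.033, or 3.3% of the codeine concentration
```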
3.5: Developing the Procedure
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.05%3A_Developing_the_Procedure
After selecting a method, the next step is to develop a procedure that accomplishes our goals for the analysis. In developing a procedure we give attention to compensating for interferences, to selecting and calibrating equipment, to acquiring a representative sample, and to validating the method.
A method's accuracy depends on its selectivity for the analyte. Even the best method, however, may not be free from interferents that contribute to the measured signal. Potential interferents may be present in the sample itself or in the reagents used during the analysis.
When the sample is free of interferents, the total signal, Stotal, is a sum of the signal due to the analyte, SA, and the signal due to interferents in the reagents, Sreag,
\[S_{total} = S_A + S_{reag} = k_A n_A + S_{reag} \label{3.1}\]
\[S_{total} = S_A + S_{reag} = k_A C_A + S_{reag} \label{3.2}\]
Without an independent determination of Sreag we cannot solve Equation \ref{3.1} or \ref{3.2} for the moles or concentration of analyte.
To determine the contribution of Sreag in Equations \ref{3.1} and \ref{3.2} we measure the signal for a method blank, a solution that does not contain the sample. Consider, for example, a procedure in which we dissolve a 0.1-g sample in a portion of solvent, add several reagents, and dilute to 100 mL with additional solvent. To prepare the method blank we omit the sample and dilute the reagents to 100 mL using the solvent. Because the analyte is absent, Stotal for the method blank is equal to Sreag. Knowing the value for Sreag makes it easy to correct Stotal for the reagent's contribution to the total signal; thus
\[(S_{total} - S_{reag}) = S_A = k_A n_A \nonumber\]
\[(S_{total} - S_{reag}) = S_A = k_A C_A \nonumber\]
By itself, a method blank cannot compensate for an interferent that is part of the sample's matrix. If we happen to know the interferent's identity and concentration, then we can add it to the method blank; however, this is not a common circumstance and we must, instead, find a method for separating the analyte and interferent before continuing the analysis.
A method blank also is known as a reagent blank. When the sample is a liquid, or is in solution, we use an equivalent volume of an inert solvent as a substitute for the sample.
A simple definition of a quantitative analytical method is that it is a mechanism for converting a measurement, the signal, into the amount of analyte in a sample. Assuming we can correct for interferents, a quantitative analysis is nothing more than solving Equation 3.3.1 or Equation 3.3.2 for nA or for CA.
To solve these equations we need the value of kA. For a total analysis method usually we know the value of kA because it is defined by the stoichiometry of the chemical reactions responsible for the signal. For a concentration method, however, the value of kA usually is a complex function of experimental conditions. A calibration is the process of experimentally determining the value of kA by measuring the signal for one or more standard samples, each of which contains a known concentration of analyte.
With a single standard we can calculate the value of kA using Equation 3.3.1 or Equation 3.3.2. When using several standards with different concentrations of analyte, the result is best viewed visually by plotting SA versus the concentration of analyte in the standards. Such a plot is known as a calibration curve, an example of which is shown in Figure 3.5.1.
Selecting an appropriate method and executing it properly helps us ensure that our analysis is accurate.
If we analyze the wrong sample, however, then the accuracy of our work is of little consequence.
A proper sampling strategy ensures that our samples are representative of the material from which they are taken. Biased or nonrepresentative sampling, and contaminating samples during or after their collection are two examples of sampling errors that can lead to a significant error in accuracy. It is important to realize that sampling errors are independent of errors in the analytical method. As a result, we cannot correct a sampling error in the laboratory by, for example, evaluating a reagent blank.
Chapter 7 provides a more detailed discussion of sampling, including strategies for obtaining representative samples.
If we are to have confidence in our procedure we must demonstrate that it can provide acceptable results, a process we call validation. Perhaps the most important part of validating a procedure is establishing that its precision and accuracy are appropriate for the problem we are trying to solve. We also ensure that the written procedure has sufficient detail so that different analysts or laboratories will obtain comparable results. Ideally, validation uses a standard sample whose composition closely matches the samples we will analyze. In the absence of appropriate standards, we can evaluate accuracy by comparing results to those obtained using a method of known accuracy.
You will find more details about validating analytical methods in Chapter 14.
This page titled 3.5: Developing the Procedure is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
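To make the ideas of a method blank and a calibration more concrete, the following R snippet sketches a calibration in which the signals are corrected for Sreag and kA is estimated from the slope of a calibration curve. The standard concentrations and signals are invented for illustration and do not come from any particular method.

```r
# A minimal calibration sketch: correct each signal for the method blank
# (S_reag), then estimate kA as the slope of a linear model fit to standards.
# All concentrations and signals below are hypothetical.

C_std <- c(0.0, 1.0, 2.0, 3.0, 4.0)        # analyte concentration in standards
S_tot <- c(0.02, 0.25, 0.48, 0.71, 0.95)   # measured total signals

S_reag <- S_tot[1]                         # the 0.0 standard is the method blank
S_A    <- S_tot - S_reag                   # blank-corrected signals

fit <- lm(S_A ~ C_std)                     # calibration curve: S_A = kA * C_A
coef(fit)["C_std"]                         # the slope is an estimate of kA

# concentration of analyte in a sample whose blank-corrected signal is 0.60
0.60 / coef(fit)["C_std"]
```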
3.6: Protocols
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.06%3A_Protocols
Earlier we defined a protocol as a set of stringent written guidelines that specify an exact procedure that we must follow if an agency is to accept the results of our analysis. In addition to the considerations that went into the procedure's design, a protocol also contains explicit instructions regarding internal and external quality assurance and quality control (QA/QC) procedures [Amore, F. Anal. Chem. 1979, 51, 1105A–1110A; Taylor, J. K. Anal. Chem. 1981, 53, 1588A–1593A]. The goal of internal QA/QC is to ensure that a laboratory's work is both accurate and precise. External QA/QC is a process in which an external agency certifies a laboratory.
As an example, let's outline a portion of the Environmental Protection Agency's protocol for determining trace metals in water by graphite furnace atomic absorption spectroscopy as part of its Contract Laboratory Program (CLP). The CLP protocol (see Figure 3.6.1) calls for an initial calibration using a method blank and three standards, one of which is at the detection limit. The resulting calibration curve is verified by analyzing initial calibration verification (ICV) and initial calibration blank (ICB) samples. The lab's result for the ICV sample must fall within ±10% of its expected concentration. If the result is outside this limit the analysis is stopped and the problem identified and corrected before continuing.
After a successful analysis of the ICV and ICB samples, the lab reverifies the calibration by analyzing a continuing calibration verification (CCV) sample and a continuing calibration blank (CCB). Results for the CCV also must be within ±10% of its expected concentration. Again, if the lab's result for the CCV is outside the established limits, the analysis is stopped, the problem identified and corrected, and the system recalibrated as described above. Additional CCV and CCB samples are analyzed before the first sample and after the last sample, and between every set of ten samples. If the result for any CCV or CCB sample is unacceptable, the results for the last set of samples are discarded, the system is recalibrated, and the samples reanalyzed. By following this protocol, each result is bound by successful checks on the calibration. Although not shown in Figure 3.6.1, the protocol also contains instructions for analyzing duplicate or split samples, and for using spike tests to verify accuracy.
This page titled 3.6: Protocols is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
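A laboratory that processes many samples often automates simple QC checks such as the ±10% limits described above. The following R snippet is a hypothetical sketch of such a check; the function name and the sample values are illustrative and are not part of the CLP protocol itself.

```r
# Hypothetical sketch of a +/-10% check for ICV or CCV samples.

within_limit <- function(found, expected, tolerance = 0.10) {
  abs(found - expected) / expected <= tolerance
}

within_limit(found = 10.7, expected = 10.0)   # TRUE: within +/-10%, continue
within_limit(found = 11.3, expected = 10.0)   # FALSE: stop, identify and correct
                                              # the problem, then recalibrate
```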
3.7: The Importance of Analytical Methodology
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.07%3A_The_Importance_of_Analytical_Methodology
The importance of the issues raised in this chapter is evident if we examine environmental monitoring programs. The purpose of a monitoring program is to determine the present status of an environmental system, and to assess long term trends in the system's health. These are broad and poorly defined goals. In many cases, an environmental monitoring program begins before the essential questions are known. This is not surprising since it is difficult to formulate questions in the absence of results. Without careful planning, however, a poor experimental design may result in data that has little value.
These concerns are illustrated by the Chesapeake Bay Monitoring Program. This research program, designed to study nutrients and toxic pollutants in the Chesapeake Bay, was initiated in 1984 as a cooperative venture between the federal government, the state governments of Maryland, Virginia, and Pennsylvania, and the District of Columbia. A 1989 review of the program highlights the problems common to many monitoring programs [D'Elia, C. F.; Sanders, J. G.; Capone, D. G. Environ. Sci. Technol. 1989, 23, 768–774].
At the beginning of the Chesapeake Bay monitoring program, little attention was given to selecting analytical methods, in large part because the eventual use of the data was not yet specified. The analytical methods initially chosen were standard methods already approved by the Environmental Protection Agency (EPA). In many cases these methods were not useful because they were designed to detect pollutants at their legally mandated maximum allowed concentrations. In unpolluted waters, however, the concentrations of these contaminants often are well below the detection limit of the EPA methods. For example, the detection limit for the EPA approved standard method for phosphate was 7.5 ppb. Since the actual phosphate concentrations in Chesapeake Bay were below the EPA method's detection limit, it provided no useful information. On the other hand, the detection limit for a non-approved variant of the EPA method, a method routinely used by chemical oceanographers, was 0.06 ppb, a more realistic detection limit for their samples. In other cases, such as the elemental analysis for particulate forms of carbon, nitrogen and phosphorous, EPA approved procedures provided poorer reproducibility than nonapproved methods.
This page titled 3.7: The Importance of Analytical Methodology is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
3.8: Problems
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.08%3A_Problems
and in the presence of both glycolic acid and ascorbic acid (AA), the signal is
\[S_{samp,2} = k_\text{GA} C_\text{GA} + k_\text{AA} C_\text{AA} \nonumber\]
When the concentration of glycolic acid is \(1.0 \times 10^{-4} \text{ M}\) and the concentration of ascorbic acid is \(1.0 \times 10^{-5} \text{ M}\), the ratio of their signals is
\[\frac {S_{samp,2}} {S_{samp,1}} = 1.44 \nonumber\]
(a) Using the ratio of the two signals, determine the value of the selectivity ratio KGA,AA.
(b) Is the method more selective toward glycolic acid or ascorbic acid?
(c) If the concentration of ascorbic acid is \(1.0 \times 10^{-5} \text{ M}\), what is the smallest concentration of glycolic acid that can be determined such that the error introduced by failing to account for the signal from ascorbic acid is less than 1%?
(d) What is the largest concentration of ascorbic acid that may be present if a concentration of \(1.12 \times 10^{-6} \text{ M}\) hypoxanthine is to be determined within 1.0%?
This page titled 3.8: Problems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
3.9: Additional Resources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.09%3A_Additional_Resources
The International Union of Pure and Applied Chemistry (IUPAC) maintains a web-based compendium of analytical terminology. You can find it at the following web site.
The following papers provide alternative schemes for classifying analytical methods.
Further details on criteria for evaluating analytical methods are found in the following series of papers.
For a point/counterpoint debate on the meaning of sensitivity consult the following two papers and two letters of response.
Several texts provide analytical procedures for specific analytes in well-defined matrices.
For a review of the importance of analytical methodology in today's regulatory environment, consult the following text.
This page titled 3.9: Additional Resources is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
3.10: Chapter Summary and Key Terms
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.10%3A_Chapter_Summary_and_Key_Terms
Every discipline has its own vocabulary and your success in studying analytical chemistry will improve if you master this vocabulary. Be sure you understand the difference between an analyte and its matrix, between a technique and a method, between a procedure and a protocol, and between a total analysis technique and a concentration technique.
In selecting an analytical method we consider criteria such as accuracy, precision, sensitivity, selectivity, robustness, ruggedness, the amount of available sample, the amount of analyte in the sample, time, cost, and the availability of equipment. These criteria are not mutually independent, and often it is necessary to find an acceptable balance between them.
In developing a procedure or protocol, we give consideration to compensating for interferences, calibrating the method, obtaining an appropriate sample, and validating the analysis. Poorly designed procedures and protocols produce results that are insufficient to meet the needs of the analysis.
Key terms: accuracy, analysis, analyte, calibration, calibration curve, concentration technique, detection limit, determination, interferent, matrix, measurement, method, method blank, precision, procedure, protocol, QA/QC, robust, rugged, selectivity, selectivity coefficient, sensitivity, signal, specificity, technique, total analysis technique, validation.
This page titled 3.10: Chapter Summary and Key Terms is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
4.1: Characterizing Measurements and Results
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.01%3A_Characterizing_Measurements_and_Results
Let’s begin by choosing a simple quantitative problem that requires a single measurement: What is the mass of a penny? You probably recognize that our statement of the problem is too broad. For example, are we interested in the mass of a United States penny or of a Canadian penny, or is the difference relevant? Because a penny’s composition and size may differ from country to country, let’s narrow our problem to pennies from the United States. There are other concerns we might consider. For example, the United States Mint produces pennies at two locations (Figure 4.1.1 ). Because it seems unlikely that a penny’s mass depends on where it is minted, we will ignore this concern. Another concern is whether the mass of a newly minted penny is different from the mass of a circulating penny. Because the answer this time is not obvious, let’s further narrow our question and ask “What is the mass of a circulating United States penny?” A good way to begin our analysis is to gather some preliminary data. Table 4.1.1 shows masses for seven pennies collected from my change jar. In examining this data we see that our question does not have a simple answer. That is, we cannot use the mass of a single penny to draw a specific conclusion about the mass of any other penny (although we might reasonably conclude that all pennies weigh at least 3 g). We can, however, characterize this data by reporting the spread of the individual measurements around a central value. One way to characterize the data in Table 4.1.1 is to assume that the masses of individual pennies are scattered randomly around a central value that is the best estimate of a penny’s expected, or “true” mass. There are two common ways to estimate central tendency: the mean and the median. The mean, \(\overline{X}\), is the numerical average for a data set. We calculate the mean by dividing the sum of the individual values by the size of the data set\[\overline{X} = \frac {\sum_{i = 1}^n X_i} {n} \nonumber\]where \(X_i\) is the ith measurement, and n is the size of the data set. What is the mean for the data in Table 4.1.1? Solution: To calculate the mean we add together the results for all measurements\[3.080 + 3.094 + 3.107 + 3.056 + 3.112 + 3.174 + 3.198 = 21.821 \text{ g} \nonumber\]and divide by the number of measurements\[\overline{X} = \frac {21.821 \text{ g}} {7} = 3.117 \text{ g} \nonumber\]The mean is the most common estimate of central tendency. It is not a robust estimate, however, because a single extreme value—one much larger or much smaller than the remainder of the data—strongly influences the mean’s value [Rousseeuw, P. J. J. Chemom. 1991, 5, 1–20]. For example, if we accidentally record the third penny’s mass as 31.07 g instead of 3.107 g, the mean changes from 3.117 g to 7.112 g! An estimate for a statistical parameter is robust if its value is not affected too much by an unusually large or an unusually small measurement. The median, \(\widetilde{X}\), is the middle value when we order our data from the smallest to the largest value. When the data has an odd number of values, the median is the middle value. 
For an even number of values, the median is the average of the n/2 and the (n/2) + 1 values, where n is the size of the data set. When n = 5, the median is the third value in the ordered data set; for n = 6, the median is the average of the third and fourth members of the ordered data set. What is the median for the data in Table 4.1.1? Solution: To determine the median we order the measurements from the smallest to the largest value\(3.056 \quad 3.080 \quad 3.094 \quad 3.107 \quad 3.112 \quad 3.174 \quad 3.198\)Because there are seven measurements, the median is the fourth value in the ordered data; thus, the median is 3.107 g. As shown by Example 4.1.1 and Example 4.1.2 , the mean and the median provide similar estimates of central tendency when all measurements are comparable in magnitude. The median, however, is a more robust estimate of central tendency because it is less sensitive to measurements with extreme values. For example, if we accidentally record the third penny’s mass as 31.07 g instead of 3.107 g, the median’s value changes from 3.107 g to 3.112 g. If the mean or the median provides an estimate of a penny’s expected mass, then the spread of individual measurements about the mean or median provides an estimate of the difference in mass among pennies or of the uncertainty in measuring mass with a balance. Although we often define the spread relative to a specific measure of central tendency, its magnitude is independent of the central value. Although shifting all measurements in the same direction by adding or subtracting a constant value changes the mean or median, it does not change the spread. There are three common measures of spread: the range, the standard deviation, and the variance. Problem 13 at the end of the chapter asks you to show that this is true. The range, w, is the difference between a data set’s largest and smallest values.\[w = X_\text{largest} - X_\text{smallest} \nonumber\]The range provides information about the total variability in the data set, but does not provide information about the distribution of individual values. The range for the data in Table 4.1.1 is\[w = 3.198 \text{ g} - 3.056 \text{ g} = 0.142 \text{ g} \nonumber\]The standard deviation, s, describes the spread of individual values about their mean, and is given as\[s = \sqrt{\frac {\sum_{i = 1}^{n} (X_i - \overline{X})^{2}} {n - 1}} \label{4.1}\]where \(X_i\) is one of the n individual values in the data set, and \(\overline{X}\) is the data set's mean value. Frequently, we report the relative standard deviation, sr, instead of the absolute standard deviation.\[s_r = \frac {s} {\overline{X}} \nonumber\]The percent relative standard deviation, %sr, is \(s_r \times 100\). The relative standard deviation is important because it allows for a more meaningful comparison between data sets when the individual measurements differ significantly in magnitude. Consider again the data in Table 4.1.1 . 
If we multiply each value by 10, the absolute standard deviation will increase by 10 as well; the relative standard deviation, however, is the same. Report the standard deviation, the relative standard deviation, and the percent relative standard deviation for the data in Table 4.1.1. Solution: To calculate the standard deviation we first calculate the difference between each measurement and the data set’s mean value (3.117), square the resulting differences, and add them together to find the numerator of Equation \ref{4.1}\[\begin{align*} (3.080-3.117)^2 = (-0.037)^2 = 0.001369\\ (3.094-3.117)^2 = (-0.023)^2 = 0.000529\\ (3.107-3.117)^2 = (-0.010)^2 = 0.000100\\ (3.056-3.117)^2 = (-0.061)^2 = 0.003721\\ (3.112-3.117)^2 = (-0.005)^2 = 0.000025\\ (3.174-3.117)^2 = (+0.057)^2 = 0.003249\\ (3.198-3.117)^2 = (+0.081)^2 = \underline{0.006561}\\ 0.015554 \end{align*}\]For obvious reasons, the numerator of Equation \ref{4.1} is called a sum of squares. Next, we divide this sum of squares by n – 1, where n is the number of measurements, and take the square root.\[s = \sqrt{\frac {0.015554} {7 - 1}} = 0.051 \text{ g} \nonumber\]Finally, the relative standard deviation and percent relative standard deviation are\[s_r = \frac {0.051 \text{ g}} {3.117 \text{ g}} = 0.016 \nonumber\]\[\% s_r = (0.016) \times 100 = 1.6 \% \nonumber\]It is much easier to determine the standard deviation using a scientific calculator with built-in statistical functions. Many scientific calculators include two keys for calculating the standard deviation. One key calculates the standard deviation for a data set of n samples drawn from a larger collection of possible samples, which corresponds to Equation \ref{4.1}. The other key calculates the standard deviation for all possible samples. The latter is known as the population’s standard deviation, which we will cover later in this chapter. Your calculator’s manual will help you determine the appropriate key for each. Another common measure of spread is the variance, which is the square of the standard deviation. We usually report a data set’s standard deviation, rather than its variance, because the mean value and the standard deviation share the same unit. As we will see shortly, the variance is a useful measure of spread because its values are additive. What is the variance for the data in Table 4.1.1? Solution: The variance is the square of the absolute standard deviation. Using the standard deviation from Example 4.1.3 gives the variance as\[s^2 = (0.051)^2 = 0.0026 \nonumber\]The following data were collected as part of a quality control study for the analysis of sodium in serum; results are concentrations of Na+ in mmol/L. \(140 \quad 143 \quad 141 \quad 137 \quad 132 \quad 157 \quad 143 \quad 149 \quad 118 \quad 145\)Report the mean, the median, the range, the standard deviation, and the variance for this data. This data is a portion of a larger data set from Andrew, D. F.; Herzberg, A. M. Data: A Collection of Problems for the Student and Research Worker, Springer-Verlag: New York, 1985, pp. 151–155. Mean: To find the mean we add together the individual measurements and divide by the number of measurements. The sum of the 10 concentrations is 1405. 
Dividing the sum by 10 gives the mean as 140.5, or \(1.40 \times 10^2\) mmol/L. Median: To find the median we arrange the 10 measurements from the smallest concentration to the largest concentration; thus\(118 \quad 132 \quad 137 \quad 140 \quad 141 \quad 143 \quad 143 \quad 145 \quad 149 \quad 157\)The median for a data set with 10 members is the average of the fifth and sixth values; thus, the median is (141 + 143)/2, or 142 mmol/L. Range: The range is the difference between the largest value and the smallest value; thus, the range is 157 – 118 = 39 mmol/L. Standard Deviation: To calculate the standard deviation we first calculate the difference between each measurement and the mean value (140.5), square the resulting differences, and add them together. The differences are\(–0.5 \quad 2.5 \quad 0.5 \quad –3.5 \quad –8.5 \quad 16.5 \quad 2.5 \quad 8.5 \quad –22.5 \quad 4.5\)and the squared differences are\(0.25 \quad 6.25 \quad 0.25 \quad 12.25 \quad 72.25 \quad 272.25 \quad 6.25 \quad 72.25 \quad 506.25 \quad 20.25\)The total sum of squares, which is the numerator of Equation \ref{4.1}, is 968.50. The standard deviation is\[s = \sqrt{\frac {968.50} {10 - 1}} = 10.37 \approx 10.4 \nonumber\]Variance: The variance is the square of the standard deviation, or 108. This page titled 4.1: Characterizing Measurements and Results is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
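These summary statistics are easy to verify with a short script. Here is a minimal Python sketch using the standard library's statistics module and the seven penny masses from Table 4.1.1; note that statistics.stdev and statistics.variance divide by n – 1, the same denominator used for the sample standard deviation above.

```python
# Descriptive statistics for the penny masses in Table 4.1.1, using only
# Python's standard library.
import statistics

masses = [3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198]  # mass in g

mean = statistics.mean(masses)
median = statistics.median(masses)
w = max(masses) - min(masses)        # range
s = statistics.stdev(masses)         # sample standard deviation (n - 1 denominator)
s_r = s / mean                       # relative standard deviation
s2 = statistics.variance(masses)     # variance

print(f"mean = {mean:.3f} g, median = {median:.3f} g, range = {w:.3f} g")
print(f"s = {s:.3f} g, %s_r = {100 * s_r:.1f}%, s^2 = {s2:.4f}")
```

Running the sketch reproduces the values worked out above: a mean of 3.117 g, a median of 3.107 g, a range of 0.142 g, a standard deviation of 0.051 g, a percent relative standard deviation of 1.6%, and a variance of 0.0026.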
4.2: Characterizing Experimental Errors
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.02%3A_Characterizing_Experimental_Errors
Characterizing a penny’s mass using the data in Table 4.1.1 suggests two questions. First, does our measure of central tendency agree with the penny’s expected mass? Second, why is there so much variability in the individual results? The first of these questions addresses the accuracy of our measurements and the second addresses the precision of our measurements. In this section we consider the types of experimental errors that affect accuracy and precision. Accuracy is how close a measure of central tendency is to its expected value, \(\mu\). We express accuracy either as an absolute error, e \[e = \overline{X} - \mu \label{4.1}\]or as a percent relative error, %e \[\% e = \frac {\overline{X} - \mu} {\mu} \times 100 \label{4.2}\]Although Equation \ref{4.1} and Equation \ref{4.2} use the mean as the measure of central tendency, we also can use the median. The convention for representing a statistical parameter is to use a Roman letter for a value calculated from experimental data, and a Greek letter for its corresponding expected value. For example, the experimentally determined mean is \(\overline{X}\) and its underlying expected value is \(\mu\). Likewise, the experimental standard deviation is s and the underlying expected value is \(\sigma\). We identify as determinate an error that affects the accuracy of an analysis. Each source of a determinate error has a specific magnitude and sign. Some sources of determinate error are positive and others are negative, and some are larger in magnitude and others are smaller in magnitude. The cumulative effect of these determinate errors is a net positive or negative error in accuracy. It is possible, although unlikely, that the positive and negative determinate errors will offset each other, producing a result with no net error in accuracy. We assign determinate errors into four categories—sampling errors, method errors, measurement errors, and personal errors—each of which we consider in this section. A determinate sampling error occurs when our sampling strategy does not provide us with a representative sample. For example, if we monitor the environmental quality of a lake by sampling from a single site near a point source of pollution, such as an outlet for industrial effluent, then our results will be misleading. To determine the mass of a U. S. penny, our strategy for selecting pennies must ensure that we do not include pennies from other countries. An awareness of potential sampling errors especially is important when we work with heterogeneous materials. Strategies for obtaining representative samples are covered in Chapter 5. In any analysis the relationship between the signal, Stotal, and the absolute amount of analyte, nA, or the analyte’s concentration, CA, is\[S_{total} = k_A n_A + S_{mb} \label{4.3}\]\[S_{total} = k_A C_A + S_{mb} \label{4.4}\]where kA is the method’s sensitivity for the analyte and Smb is the signal from the method blank. A method error exists when our value for kA or for Smb is in error. For example, a method in which Stotal is the mass of a precipitate assumes that k is defined by a pure precipitate of known stoichiometry. If this assumption is not true, then the resulting determination of nA or CA is inaccurate. We can minimize a determinate error in kA by calibrating the method. 
A method error due to an interferent in the reagents is minimized by using a proper method blank. The manufacturers of analytical instruments and equipment, such as glassware and balances, usually provide a statement of the item’s maximum measurement error, or tolerance. For example, a 10-mL volumetric pipet (Figure 4.2.1 ) has a tolerance of ±0.02 mL, which means the pipet delivers an actual volume within the range 9.98–10.02 mL at a temperature of 20 °C. Although we express this tolerance as a range, the error is determinate; that is, the pipet’s expected volume, \(\mu\), is a fixed value within this stated range. Volumetric glassware is categorized into classes based on its relative accuracy. Class A glassware is manufactured to comply with tolerances specified by an agency, such as the National Institute of Standards and Technology or the American Society for Testing and Materials. The tolerance level for Class A glassware is small enough that normally we can use it without calibration. The tolerance levels for Class B glassware usually are twice that for Class A glassware. Other types of volumetric glassware, such as beakers and graduated cylinders, are not used to measure volume accurately. Table 4.2.1 provides a summary of typical measurement errors for Class A volumetric glassware. Tolerances for digital pipets and for balances are provided in Table 4.2.2 and Table 4.2.3. The tolerance values for the volumetric glassware in Table 4.2.1 are from the ASTM E288, E542, and E694 standards. The measurement errors for the digital pipets in Table 4.2.2 are from www.eppendorf.com. We can minimize a determinate measurement error by calibrating our equipment. Balances are calibrated using a reference weight whose mass we can trace back to the SI standard kilogram. Volumetric glassware and digital pipets are calibrated by determining the mass of water delivered or contained and using the density of water to calculate the actual volume. It is never safe to assume that a calibration does not change during an analysis or over time. One study, for example, found that repeatedly exposing volumetric glassware to higher temperatures during machine washing and oven drying led to small but significant changes in the glassware’s calibration [Castanheira, I.; Batista, E.; Valente, A.; Dias, G.; Mora, M.; Pinto, L.; Costa, H. S. Food Control 2006, 17, 719–726]. Many instruments drift out of calibration over time and may require frequent recalibration during an analysis. Finally, analytical work is always subject to personal error, examples of which include the ability to see a change in the color of an indicator that signals the endpoint of a titration, biases, such as consistently overestimating or underestimating the value on an instrument’s readout scale, failing to calibrate instrumentation, and misinterpreting procedural directions. You can minimize personal errors by taking proper care. Determinate errors often are difficult to detect. Without knowing the expected value for an analysis, the usual situation in any analysis that matters, we often have nothing to which we can compare our experimental result. Nevertheless, there are strategies we can use to detect determinate errors. The magnitude of a constant determinate error is the same for all samples and is more significant when we analyze smaller samples. Analyzing samples of different sizes, therefore, allows us to detect a constant determinate error. 
For example, consider a quantitative analysis in which we separate the analyte from its matrix and determine its mass. Let’s assume the sample is 50.0% w/w analyte. As we see in Table 4.2.4 , the expected amount of analyte in a 0.100 g sample is 0.050 g. If the analysis has a positive constant determinate error of 0.010 g, then analyzing the sample gives 0.060 g of analyte, or an apparent concentration of 60.0% w/w. As we increase the size of the sample the experimental results become closer to the expected result. An upward or downward trend in a graph of the analyte’s experimental concentration versus the sample’s mass (Figure 4.2.2 ) is evidence of a constant determinate error. A proportional determinate error, in which the error’s magnitude depends on the amount of sample, is more difficult to detect because the result of the analysis is independent of the amount of sample. Table 4.2.5 outlines an example that shows the effect of a positive proportional error of 1.0% on the analysis of a sample that is 50.0% w/w in analyte. Regardless of the sample’s size, each analysis gives the same result of 50.5% w/w analyte. One approach for detecting a proportional determinate error is to analyze a standard that contains a known amount of analyte in a matrix similar to our samples. Standards are available from a variety of sources, such as the National Institute of Standards and Technology (where they are called Standard Reference Materials) or the American Society for Testing and Materials. Table 4.2.6 , for example, lists certified values for several analytes in a standard sample of Ginkgo biloba leaves. Another approach is to compare our analysis to an analysis carried out using an independent analytical method that is known to give accurate results. If the two methods give significantly different results, then a determinate error is the likely cause. The primary purpose of this Standard Reference Material is to validate analytical methods for determining flavonoids, terpene lactones, and toxic elements in Ginkgo biloba or other materials with a similar matrix. Values are from the official Certificate of Analysis available at www.nist.gov. Constant and proportional determinate errors have distinctly different sources, which we can define in terms of the relationship between the signal and the moles or concentration of analyte (Equation \ref{4.3} and Equation \ref{4.4}). An invalid method blank, Smb, is a constant determinate error as it adds or subtracts the same value to the signal. A poorly calibrated method, which yields an invalid sensitivity for the analyte, kA, results in a proportional determinate error. As we saw in Section 4.1, precision is a measure of the spread of individual measurements or results about a central value, which we express as a range, a standard deviation, or a variance. Here we draw a distinction between two types of precision: repeatability and reproducibility. Repeatability is the precision when a single analyst completes an analysis in a single session using the same solutions, equipment, and instrumentation. Reproducibility, on the other hand, is the precision under any other set of conditions, including between analysts or between laboratory sessions for a single analyst. Since reproducibility includes additional sources of variability, the reproducibility of an analysis cannot be better than its repeatability. The ratio of the standard deviation associated with reproducibility to the standard deviation associated with repeatability is called the Horwitz ratio. 
For a wide variety of analytes in foods, for example, the median Horwitz ratio is 2.0 with larger values for fatty acids and for trace elements; see Thompson, M.; Wood, R. “The ‘Horwitz Ratio’–A Study of the Ratio Between Reproducibility and Repeatability in the Analysis of Foodstuffs,” Anal. Methods, 2015, 7, 375–379. Errors that affect precision are indeterminate and are characterized by random variations in their magnitude and their direction. Because they are random, positive and negative indeterminate errors tend to cancel, provided that we make a sufficient number of measurements. In such situations the mean and the median largely are unaffected by the precision of the analysis. We can assign indeterminate errors to several sources, including collecting samples, manipulating samples during the analysis, and making measurements. When we collect a sample, for instance, only a small portion of the available material is taken, which increases the chance that small-scale inhomogeneities in the sample will affect repeatability. Individual pennies, for example, may show variations in mass from several sources, including the manufacturing process and the loss of small amounts of metal or the addition of dirt during circulation. These variations are sources of indeterminate sampling errors. During an analysis there are many opportunities to introduce indeterminate method errors. If our method for determining the mass of a penny includes directions for cleaning it of dirt, then we must be careful to treat each penny in the same way. Cleaning some pennies more vigorously than others might introduce an indeterminate method error. Finally, all measuring devices are subject to indeterminate measurement errors due to limitations in our ability to read their scales. For example, a buret with scale divisions every 0.1 mL has an inherent indeterminate error of ±0.01–0.03 mL when we estimate the volume to the hundredth of a milliliter (Figure 4.2.3 ). Indeterminate errors associated with our analytical equipment or instrumentation generally are easy to estimate if we measure the standard deviation for several replicate measurements, or if we monitor the signal’s fluctuations over time in the absence of analyte (Figure 4.2.4 ) and calculate the standard deviation. Other sources of indeterminate error, such as treating samples inconsistently, are more difficult to estimate. To evaluate the effect of an indeterminate measurement error on our analysis of the mass of a circulating United States penny, we might make several determinations of the mass for a single penny (Table 4.2.7 ). The standard deviation for our original experiment (see Table 4.1.1) is 0.051 g, and it is 0.0024 g for the data in Table 4.2.7 . The significantly better precision when we determine the mass of a single penny suggests that the precision of our analysis is not limited by the balance. A more likely source of indeterminate error is a variability in the masses of individual pennies. In Section 4.5 we will discuss a statistical method—the F-test—that you can use to show that this difference is significant. Analytical chemists make a distinction between error and uncertainty [Ellison, S.; Wegscheider, W.; Williams, A. Anal. Chem. 1997, 69, 607A–613A]. Error is the difference between a single measurement or result and its expected value. In other words, error is a measure of bias. As discussed earlier, we divide errors into determinate and indeterminate sources. 
Although we can find and correct a source of determinate error, the indeterminate portion of the error remains. Uncertainty expresses the range of possible values for a measurement or result. Note that this definition of uncertainty is not the same as our definition of precision. We calculate precision from our experimental data and use it to estimate the magnitude of indeterminate errors. Uncertainty accounts for all errors—both determinate and indeterminate—that reasonably might affect a measurement or a result. Although we always try to correct determinate errors before we begin an analysis, the correction itself is subject to uncertainty. Here is an example to help illustrate the difference between precision and uncertainty. Suppose you purchase a 10-mL Class A pipet from a laboratory supply company and use it without any additional calibration. The pipet’s tolerance of ±0.02 mL is its uncertainty because your best estimate of its expected volume is 10.00 mL ± 0.02 mL. This uncertainty primarily is determinate. If you use the pipet to dispense several replicate samples of a solution and determine the volume of each sample, the resulting standard deviation is the pipet’s precision. Table 4.2.8 shows results for ten such trials, with a mean of 9.992 mL and a standard deviation of ±0.006 mL. This standard deviation is the precision with which we expect to deliver a solution using a Class A 10-mL pipet. In this case the pipet’s published uncertainty of ±0.02 mL is worse than its experimentally determined precision of ±0.006 mL. Interestingly, the data in Table 4.2.8 allows us to calibrate this specific pipet’s delivery volume as 9.992 mL. If we use this volume as a better estimate of the pipet’s expected volume, then its uncertainty is ±0.006 mL. As expected, calibrating the pipet allows us to decrease its uncertainty [Kadis, R. Talanta 2004, 64, 167–173]. This page titled 4.2: Characterizing Experimental Errors is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
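A minimal Python sketch helps make the difference between a constant and a proportional determinate error concrete. It assumes a sample that is 50.0% w/w analyte, a constant error of +0.010 g, and a proportional error of +1.0%, as in the discussion of Tables 4.2.4 and 4.2.5 above; the sample masses larger than 0.100 g are illustrative choices, not values taken from the tables.

```python
# Apparent %w/w analyte for a sample that is truly 50.0% w/w analyte, analyzed
# with either a constant determinate error (+0.010 g) or a proportional
# determinate error (+1.0% of the analyte found).

def apparent_conc(sample_mass, constant_err=0.0, proportional_err=0.0):
    """Return the apparent concentration of analyte as %w/w for a sample mass in g."""
    analyte = 0.500 * sample_mass                           # expected mass of analyte
    found = analyte * (1 + proportional_err) + constant_err # mass of analyte reported
    return 100 * found / sample_mass

for m in (0.100, 0.200, 0.400, 0.800):
    print(f"{m:.3f} g sample: "
          f"constant error gives {apparent_conc(m, constant_err=0.010):.1f}% w/w, "
          f"proportional error gives {apparent_conc(m, proportional_err=0.010):.1f}% w/w")
```

The output shows the behavior described above: with a constant error the apparent concentration (60.0%, 55.0%, 52.5%, 51.2% w/w) approaches the expected 50.0% w/w as the sample grows larger, while a proportional error gives the same 50.5% w/w result regardless of sample size.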
4.3: Propagation of Uncertainty
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.03%3A_Propagation_of_Uncertainty
Suppose we dispense 20 mL of a reagent using the Class A 10-mL pipet whose calibration information is given in Table 4.2.8. If the volume and uncertainty for one use of the pipet is 9.992 ± 0.006 mL, what is the volume and uncertainty if we use the pipet twice? As a first guess, we might simply add together the volume and the maximum uncertainty for each delivery; thus (9.992 mL + 9.992 mL) ± (0.006 mL + 0.006 mL) = 19.984 ± 0.012 mL. It is easy to appreciate that combining uncertainties in this way overestimates the total uncertainty. Adding the uncertainty for the first delivery to that of the second delivery assumes that with each use the indeterminate error is in the same direction and is as large as possible. At the other extreme, we might assume that the uncertainty for one delivery is positive and the other is negative. If we subtract the maximum uncertainties for each delivery, (9.992 mL + 9.992 mL) ± (0.006 mL – 0.006 mL) = 19.984 ± 0.000 mL, we clearly underestimate the total uncertainty. So what is the total uncertainty? From the discussion above, we reasonably expect that the total uncertainty is greater than ±0.000 mL and that it is less than ±0.012 mL. To estimate the uncertainty we use a mathematical technique known as the propagation of uncertainty. Our treatment of the propagation of uncertainty is based on a few simple rules. A propagation of uncertainty allows us to estimate the uncertainty in a result from the uncertainties in the measurements used to calculate that result. For the equations in this section we represent the result with the symbol R, and we represent the measurements with the symbols A, B, and C. The corresponding uncertainties are uR, uA, uB, and uC. We can define the uncertainties for A, B, and C using standard deviations, ranges, or tolerances (or any other measure of uncertainty), as long as we use the same form for all measurements. The requirement that we express each uncertainty in the same way is a critically important point. Suppose you have a range for one measurement, such as a pipet’s tolerance, and standard deviations for the other measurements. All is not lost. There are ways to convert a range to an estimate of the standard deviation. See Appendix 2 for more details. When we add or subtract measurements we propagate their absolute uncertainties. For example, if the result is given by the equation\[R = A + B - C \nonumber\]then the absolute uncertainty in R is\[u_R = \sqrt{u_A^2 + u_B^2 + u_C^2} \label{4.1}\]If we dispense 20 mL using a 10-mL Class A pipet, what is the total volume dispensed and what is the uncertainty in this volume? First, complete the calculation using the manufacturer’s tolerance of 10.00 mL ± 0.02 mL, and then using the calibration data from Table 4.2.8. Solution: To calculate the total volume we add the volumes for each use of the pipet. 
When using the manufacturer’s values, the total volume is\[V = 10.00 \text{ mL} + 10.00 \text{ mL} = 20.00 \text{ mL} \nonumber\]and when using the calibration data, the total volume is\[V = 9.992 \text{ mL} + 9.992 \text{ mL} = 19.984 \text{ mL} \nonumber\]Using the pipet’s tolerance as an estimate of its uncertainty gives the uncertainty in the total volume as\[u_R = \sqrt{(0.02)^2 + (0.02)^2} = 0.028 \text{ mL} \nonumber\]and using the standard deviation for the data in Table 4.2.8 gives an uncertainty of\[u_R = \sqrt{(0.006)^2 + (0.006)^2} = 0.0085 \text{ mL} \nonumber\]Rounding the volumes to four significant figures gives 20.00 mL ± 0.03 mL when we use the tolerance values, and 19.98 ± 0.01 mL when we use the calibration data. When we multiply or divide measurements we propagate their relative uncertainties. For example, if the result is given by the equation\[R = \frac {A \times B} {C} \nonumber\]then the relative uncertainty in R is\[\frac {u_R} {R}= \sqrt{\left( \frac {u_A} {A} \right)^2 + \left( \frac {u_B} {B} \right)^2 + \left( \frac {u_C} {C} \right)^2} \label{4.2}\]The quantity of charge, Q, in coulombs that passes through an electrical circuit is\[Q = i \times t \nonumber\]where i is the current in amperes and t is the time in seconds. When a current of 0.15 A ± 0.01 A passes through the circuit for 120 s ± 1 s, what is the total charge and its uncertainty? Solution: The total charge is\[Q = (0.15 \text{ A}) \times (120 \text{ s}) = 18 \text{ C} \nonumber\]Since charge is the product of current and time, the relative uncertainty in the charge is\[\frac {u_R} {R} = \sqrt{\left( \frac {0.01} {0.15} \right)^2 + \left( \frac {1} {120} \right)^2} = 0.0672 \nonumber\]and the charge’s absolute uncertainty is\[u_R = R \times 0.0672 = (18 \text{ C}) \times (0.0672) = 1.2 \text{ C} \nonumber\]Thus, we report the total charge as 18 C ± 1 C. Many chemical calculations involve a combination of adding and subtracting, and of multiplying and dividing. As shown in the following example, we can calculate the uncertainty by separately treating each operation using Equation \ref{4.1} and Equation \ref{4.2} as needed. For a concentration technique, the relationship between the signal and the analyte’s concentration is\[S_{total} = k_A C_A + S_{mb} \nonumber\]What is the analyte’s concentration, CA, and its uncertainty if Stotal is 24.37 ± 0.02, Smb is 0.96 ± 0.02, and kA is \(0.186 \pm 0.003 \text{ ppm}^{-1}\)? Solution: Rearranging the equation and solving for CA\[C_A = \frac {S_{total} - S_{mb}} {k_A} = \frac {24.37 - 0.96} {0.186 \text{ ppm}^{-1}} = \frac {23.41} {0.186 \text{ ppm}^{-1}} = 125.9 \text{ ppm} \nonumber\]gives the analyte’s concentration as 126 ppm. To estimate the uncertainty in CA, we first use Equation \ref{4.1} to determine the uncertainty for the numerator.\[u_R = \sqrt{(0.02)^2 + (0.02)^2} = 0.028 \nonumber\]The numerator, therefore, is 23.41 ± 0.028. To complete the calculation we use Equation \ref{4.2} to estimate the relative uncertainty in CA.\[\frac {u_R} {R} = \sqrt{\left( \frac {0.028} {23.41} \right)^2 + \left( \frac {0.003} {0.186} \right)^2} = 0.0162 \nonumber\]The absolute uncertainty in the analyte’s concentration is\[u_R = (125.9 \text{ ppm}) \times (0.0162) = 2.0 \text{ ppm} \nonumber\]Thus, we report the analyte’s concentration as 126 ppm ± 2 ppm. To prepare a standard solution of Cu2+ you obtain a piece of copper from a spool of wire. The spool’s initial weight is 74.2991 g and its final weight is 73.3216 g. 
You place the sample of wire in a 500-mL volumetric flask, dissolve it in 10 mL of HNO3, and dilute to volume. Next, you pipet a 1 mL portion to a 250-mL volumetric flask and dilute to volume. What is the final concentration of Cu2+ in mg/L, and its uncertainty? Assume that the uncertainty in the balance is ±0.1 mg and that you are using Class A glassware. The first step is to determine the concentration of Cu2+ in the final solution. The mass of copper is\[74.2991 \text{ g} - 73.3216 \text{ g} = 0.9775 \text{ g Cu} \nonumber\]The 10 mL of HNO3 used to dissolve the copper does not factor into our calculation. The concentration of Cu2+ is\[\frac {0.9775 \text{ g Cu}} {0.5000 \text{ L}} \times \frac {1.000 \text{ mL}} {250.0 \text{ mL}} \times \frac {1000 \text{ mg}} {\text{g}} = 7.820 \text{ mg } \ce{Cu^{2+}} \text{/L} \nonumber\]Having found the concentration of Cu2+, we continue with the propagation of uncertainty. The absolute uncertainty in the mass of Cu wire is\[u_\text{g Cu} = \sqrt{(0.0001)^2 + (0.0001)^2} = 0.00014 \text{ g} \nonumber\]The relative uncertainty in the concentration of Cu2+ is\[\frac {u_\text{mg/L}} {7.820 \text{ mg/L}} = \sqrt{\left( \frac {0.00014} {0.9775} \right)^2 + \left( \frac {0.20} {500.0} \right)^2 + \left( \frac {0.006} {1.000} \right)^2 + \left( \frac {0.12} {250.0} \right)^2} = 0.00603 \nonumber\]Solving for umg/L gives the uncertainty as 0.0472. The concentration and uncertainty for Cu2+ is 7.820 mg/L ± 0.047 mg/L. Many other mathematical operations are common in analytical chemistry, including the use of powers, roots, and logarithms. Table 4.3.1 provides equations for propagating uncertainty for some of these functions, where A and B are independent measurements and where k is a constant whose value has no uncertainty. If the pH of a solution is 3.72 with an absolute uncertainty of ±0.03, what is the [H+] and its uncertainty? Solution: The concentration of H+ is\[[\ce{H+}] = 10^{-\text{pH}} = 10^{-3.72} = 1.91 \times 10^{-4} \text{ M} \nonumber\]or \(1.9 \times 10^{-4}\) M to two significant figures. From Table 4.3.1 the relative uncertainty in [H+] is\[\frac {u_R} {R} = 2.303 \times u_A = 2.303 \times 0.03 = 0.069 \nonumber\]The uncertainty in the concentration, therefore, is\[(1.91 \times 10^{-4} \text{ M}) \times (0.069) = 1.3 \times 10^{-5} \text{ M} \nonumber\]We report the [H+] as \(1.9 (\pm 0.1) \times 10^{-4}\) M, which is equivalent to \(1.9 \times 10^{-4} \text{ M } \pm 0.1 \times 10^{-4} \text{ M}\). A solution of copper ions is blue because it absorbs yellow and orange light. Absorbance, A, is defined as\[A = - \log T = - \log \left( \frac {P} {P_\text{o}} \right) \nonumber\]where T is the transmittance, Po is the power of radiation as emitted from the light source and P is its power after it passes through the solution. What is the absorbance if Po is \(3.80 \times 10^2\) and P is \(1.50 \times 10^2\)? If the uncertainty in measuring Po and P is 15, what is the uncertainty in the absorbance? The first step is to calculate the absorbance, which is\[A = - \log T = -\log \frac {P} {P_\text{o}} = - \log \frac {1.50 \times 10^2} {3.80 \times 10^2} = 0.4037 \approx 0.404 \nonumber\]Having found the absorbance, we continue with the propagation of uncertainty. 
First, we find the uncertainty for the ratio P/Po, which is the transmittance, T.\[\frac {u_{T}} {T} = \sqrt{\left( \frac {15} {3.80 \times 10^2} \right)^2 + \left( \frac {15} {1.50 \times 10^2} \right)^2 } = 0.1075 \nonumber\]Finally, from Table 4.3.1 the uncertainty in the absorbance is\[u_A = 0.4343 \times \frac {u_{T}} {T} = (0.4343) \times (0.1075) = 4.669 \times 10^{-2} \nonumber\]The absorbance and uncertainty is 0.40 ± 0.05 absorbance units.Given the effort it takes to calculate uncertainty, it is worth asking whether such calculations are useful. The short answer is, yes. Let’s consider three examples of how we can use a propagation of uncertainty to help guide the development of an analytical method.One reason to complete a propagation of uncertainty is that we can compare our estimate of the uncertainty to that obtained experimentally. For example, to determine the mass of a penny we measure its mass twice—once to tare the balance at 0.000 g and once to measure the penny’s mass. If the uncertainty in each measurement of mass is ±0.001 g, then we estimate the total uncertainty in the penny’s mass as\[u_R = \sqrt{(0.001)^2 + (0.001)^2} = 0.0014 \text{ g} \nonumber\]If we measure a single penny’s mass several times and obtain a standard deviation of ±0.050 g, then we have evidence that the measurement process is out of control. Knowing this, we can identify and correct the problem.We also can use a propagation of uncertainty to help us decide how to improve an analytical method’s uncertainty. In Example 4.3.3 , for instance, we calculated an analyte’s concentration as 126 ppm ± 2 ppm, which is a percent uncertainty of 1.6%. Suppose we want to decrease the percent uncertainty to no more than 0.8%. How might we accomplish this? Looking back at the calculation, we see that the concentration’s relative uncertainty is determined by the relative uncertainty in the measured signal (corrected for the reagent blank)\[\frac {0.028} {23.41} = 0.0012 \text{ or } 0.12\% \nonumber\]and the relative uncertainty in the method’s sensitivity, kA,\[\frac {0.003 \text{ ppm}^{-1}} {0.186 \text{ ppm}^{-1}} = 0.016 \text{ or } 1.6\% \nonumber\]Of these two terms, the uncertainty in the method’s sensitivity dominates the overall uncertainty. Improving the signal’s uncertainty will not improve the overall uncertainty of the analysis. To achieve an overall uncertainty of 0.8% we must improve the uncertainty in kA to ±0.0015 ppm–1.Verify that an uncertainty of ±0.0015 ppm–1 for kA is the correct result.An uncertainty of 0.8% is a relative uncertainty in the concentration of 0.008; thus, letting u be the uncertainty in kA\[0.008 = \sqrt{\left( \frac {0.028} {23.41} \right)^2 + \left( \frac {u} {0.186} \right)^2} \nonumber\]Squaring both sides of the equation gives\[6.4 \times 10^{-5} = \left( \frac {0.028} {23.41} \right)^2 + \left( \frac {u} {0.186} \right)^2 \nonumber\]Solving for the uncertainty in kA gives its value as \(1.47 \times 10^{-3}\) or ±0.0015 ppm–1.Finally, we can use a propagation of uncertainty to determine which of several procedures provides the smallest uncertainty. When we dilute a stock solution usually there are several combinations of volumetric glassware that will give the same final concentration. For instance, we can dilute a stock solution by a factor of 10 using a 10-mL pipet and a 100-mL volumetric flask, or using a 25-mL pipet and a 250-mL volumetric flask. 
We also can accomplish the same dilution in two steps using a 50-mL pipet and 100-mL volumetric flask for the first dilution, and a 10-mL pipet and a 50-mL volumetric flask for the second dilution. The overall uncertainty in the final concentration—and, therefore, the best option for the dilution—depends on the uncertainty of the volumetric pipets and volumetric flasks. As shown in the following example, we can use the tolerance values for volumetric glassware to determine the optimum dilution strategy [Lam, R. B.; Isenhour, T. L. Anal. Chem. 1980, 52, 1158–1161].Which of the following methods for preparing a 0.0010 M solution from a 1.0 M stock solution provides the smallest overall uncertainty? (a) A one-step dilution that uses a 1-mL pipet and a 1000-mL volumetric flask. (b) A two-step dilution that uses a 20-mL pipet and a 1000-mL volumetric flask for the first dilution, and a 25-mL pipet and a 500-mL volumetric flask for the second dilution.SolutionThe dilution calculations for case (a) and case (b) are\[\text{case (a): 1.0 M } \times \frac {1.000 \text { mL}} {1000.0 \text { mL}} = 0.0010 \text{ M} \nonumber\]\[\text{case (b): 1.0 M } \times \frac {20.00 \text { mL}} {1000.0 \text { mL}} \times \frac {25.00 \text{ mL}} {500.0 \text{mL}} = 0.0010 \text{ M} \nonumber\]Using tolerance values from Table 4.2.1, the relative uncertainty for case (a) is\[u_R = \sqrt{\left( \frac {0.006} {1.000} \right)^2 + \left( \frac {0.3} {1000.0} \right)^2} = 0.006 \nonumber\]and for case (b) the relative uncertainty is\[u_R = \sqrt{\left( \frac {0.03} {20.00} \right)^2 + \left( \frac {0.3} {1000} \right)^2 + \left( \frac {0.03} {25.00} \right)^2 + \left( \frac {0.2} {500.0} \right)^2} = 0.002 \nonumber\]Since the relative uncertainty for case (b) is less than that for case (a), the two-step dilution provides the smallest overall uncertainty. Of course we must balance the smaller uncertainty for case (b) against the increased opportunity for introducing a determinate error when making two dilutions instead of just one dilution, as in case (a).This page titled 4.3: Propagation of Uncertainty is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
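The two propagation rules in this section translate directly into code. Here is a minimal Python sketch that reworks the concentration example above (Stotal = 24.37 ± 0.02, Smb = 0.96 ± 0.02, kA = 0.186 ± 0.003 ppm–1), propagating absolute uncertainties through the subtraction in the numerator and relative uncertainties through the division by kA.

```python
# Propagation of uncertainty for C_A = (S_total - S_mb) / k_A, using the
# values from the worked example in this section.
import math

S_total, u_S_total = 24.37, 0.02
S_mb, u_S_mb = 0.96, 0.02
k_A, u_k_A = 0.186, 0.003          # ppm^-1

numerator = S_total - S_mb                          # subtraction: propagate
u_numerator = math.sqrt(u_S_total**2 + u_S_mb**2)   # absolute uncertainties

C_A = numerator / k_A                               # division: propagate
u_rel = math.sqrt((u_numerator / numerator)**2 + (u_k_A / k_A)**2)  # relative uncertainties
u_C_A = C_A * u_rel

print(f"C_A = {C_A:.0f} ppm +/- {u_C_A:.0f} ppm")   # 126 ppm +/- 2 ppm
```

Because the two rules are applied one operation at a time, the same pattern extends to longer calculations, such as the Cu2+ standard solution exercise, by chaining additional terms under the square roots.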
4.4: The Distribution of Measurements and Results
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.04%3A_The_Distribution_of_Measurements_and_Results
Earlier we reported results for a determination of the mass of a circulating United States penny, obtaining a mean of 3.117 g and a standard deviation of 0.051 g. Table 4.4.1 shows results for a second, independent determination of a penny’s mass, as well as the data from the first experiment. Although the means and standard deviations for the two experiments are similar, they are not identical. The difference between the two experiments raises some interesting questions. Are the results for one experiment better than the results for the other experiment? Do the two experiments provide equivalent estimates for the mean and the standard deviation? What is our best estimate of a penny’s expected mass? To answer these questions we need to understand how we might predict the properties of all pennies using the results from an analysis of a small sample of pennies. We begin by making a distinction between populations and samples. A population is the set of all objects in the system we are investigating. For the data in Table 4.4.1 , the population is all United States pennies in circulation. This population is so large that we cannot analyze every member of the population. Instead, we select and analyze a limited subset, or sample of the population. The data in Table 4.4.1 , for example, shows the results for two such samples drawn from the larger population of all circulating United States pennies. Table 4.4.1 provides the means and the standard deviations for two samples of circulating United States pennies. What do these samples tell us about the population of pennies? What is the largest possible mass for a penny? What is the smallest possible mass? Are all masses equally probable, or are some masses more common? To answer these questions we need to know how the masses of individual pennies are distributed about the population’s average mass. We represent the distribution of a population by plotting the probability or frequency of obtaining a specific result as a function of the possible results. Such plots are called probability distributions. There are many possible probability distributions; in fact, the probability distribution can take any shape depending on the nature of the population. Fortunately many chemical systems display one of several common probability distributions. Two of these distributions, the binomial distribution and the normal distribution, are discussed in this section. The binomial distribution describes a population in which the result is the number of times a particular event occurs during a fixed number of trials. Mathematically, the binomial distribution is defined as\[P(X, N) = \frac {N!} {X!(N - X)!} \times p^X \times (1 - p)^{N - X} \nonumber\]where P(X, N) is the probability that an event occurs X times during N trials, and p is the event’s probability for a single trial. If you flip a coin five times, for example, P(2, 5) is the probability that the coin will turn up “heads” exactly twice. The term N! reads as N-factorial and is the product \(N \times (N – 1) \times (N – 2) \times \cdots \times 1\). For example, 4! is \(4 \times 3 \times 2 \times 1 = 24\). Your calculator probably has a key for calculating factorials. A binomial distribution has well-defined measures of central tendency and spread. 
The expected mean value is\[\mu = Np \nonumber\]and the expected spread is given by the variance\[\sigma^2 = Np(1 - p) \nonumber\]or the standard deviation.\[\sigma = \sqrt{Np(1 - p)} \nonumber\]The binomial distribution describes a population whose members have only specific, discrete values. When you roll a die, for example, the possible values are 1, 2, 3, 4, 5, or 6. A roll of 3.45 is not possible. As shown in Worked Example 4.4.1 , one example of a chemical system that obeys the binomial distribution is the probability of finding a particular isotope in a molecule. Carbon has two stable, non-radioactive isotopes, 12C and 13C, with relative isotopic abundances of, respectively, 98.89% and 1.11%. (a) What are the mean and the standard deviation for the number of 13C atoms in a molecule of cholesterol (C27H44O)? (b) What is the probability that a molecule of cholesterol has no atoms of 13C? Solution: The probability of finding an atom of 13C in a molecule of cholesterol follows a binomial distribution, where X is the number of 13C atoms, N is the number of carbon atoms in a molecule of cholesterol, and p is the probability that an atom of carbon is 13C. For (a), the mean number of 13C atoms in a molecule of cholesterol is\[\mu = Np = 27 \times 0.0111 = 0.300 \nonumber\]with a standard deviation of\[\sigma = \sqrt{Np(1 - p)} = \sqrt{27 \times 0.0111 \times (1 - 0.0111)} = 0.544 \nonumber\]For (b), the probability of finding a molecule of cholesterol without an atom of 13C is\[P = \frac {27!} {0! \: (27 - 0)!} \times (0.0111)^0 \times (1 - 0.0111)^{27 - 0} = 0.740 \nonumber\]There is a 74.0% probability that a molecule of cholesterol will not have an atom of 13C, a result consistent with the observation that the mean number of 13C atoms per molecule of cholesterol, 0.300, is less than one. A portion of the binomial distribution for atoms of 13C in cholesterol is shown in Figure 4.4.1 . Note in particular that there is little probability of finding more than two atoms of 13C in any molecule of cholesterol. A binomial distribution describes a population whose members have only certain discrete values. This is the case with the number of 13C atoms in cholesterol. A molecule of cholesterol, for example, can have two 13C atoms, but it cannot have 2.5 atoms of 13C. A population is continuous if its members may take on any value. The efficiency of extracting cholesterol from a sample, for example, can take on any value between 0% (no cholesterol is extracted) and 100% (all cholesterol is extracted). The most common continuous distribution is the Gaussian, or normal distribution, the equation for which is\[f(X) = \frac {1} {\sqrt{2 \pi \sigma^2}} e^{- \frac {(X - \mu)^2} {2 \sigma^2}} \nonumber\]where \(\mu\) is the expected mean for a population with n members\[\mu = \frac {\sum_{i = 1}^n X_i} {n} \nonumber\]and \(\sigma^2\) is the population’s variance.\[\sigma^2 = \frac {\sum_{i = 1}^n (X_i - \mu)^2} {n} \label{4.1}\]Examples of three normal distributions, each with an expected mean of 0 and with variances of 25, 100, or 400, respectively, are shown in Figure 4.4.2 . Two features of these normal distribution curves deserve attention. First, note that each normal distribution has a single maximum that corresponds to \(\mu\), and that the distribution is symmetrical about this value. 
Second, increasing the population’s variance increases the distribution’s spread and decreases its height; the area under the curve, however, is the same for all three distributions. The area under a normal distribution curve is an important and useful property as it is equal to the probability of finding a member of the population within a particular range of values. In Figure 4.4.2 , for example, 99.99% of the population shown in curve (a) have values of X between –20 and +20. For curve (c), 68.26% of the population’s members have values of X between –20 and +20. Because a normal distribution depends solely on \(\mu\) and \(\sigma^2\), the probability of finding a member of the population between any two limits is the same for all normally distributed populations. Figure 4.4.3 , for example, shows that 68.26% of the members of a normal distribution have a value within the range \(\mu \pm 1 \sigma\), and that 95.44% of the population’s members have values within the range \(\mu \pm 2 \sigma\). Only 0.27% of a population’s members have values that differ from the expected mean by more than ±3\(\sigma\). Additional ranges and probabilities are gathered together in the probability table included in Appendix 3. As shown in Example 4.4.2 , if we know the mean and the standard deviation for a normally distributed population, then we can determine the percentage of the population between any defined limits. The amount of aspirin in the analgesic tablets from a particular manufacturer is known to follow a normal distribution with \(\mu\) = 250 mg and \(\sigma\) = 5. In a random sample of tablets from the production line, what percentage are expected to contain between 243 and 262 mg of aspirin? Solution: We do not determine directly the percentage of tablets between 243 mg and 262 mg of aspirin. Instead, we first find the percentage of tablets with less than 243 mg of aspirin and the percentage of tablets having more than 262 mg of aspirin. Subtracting these results from 100% gives the percentage of tablets that contain between 243 mg and 262 mg of aspirin. To find the percentage of tablets with less than 243 mg of aspirin or more than 262 mg of aspirin we calculate the deviation, z, of each limit from \(\mu\) in terms of the population’s standard deviation, \(\sigma\)\[z = \frac {X - \mu} {\sigma} \nonumber\]where X is the limit in question. The deviation for the lower limit is\[z_{lower} = \frac {243 - 250} {5} = -1.4 \nonumber\]and the deviation for the upper limit is\[z_{upper} = \frac {262 - 250} {5} = +2.4 \nonumber\]Using the table in Appendix 3, we find that the percentage of tablets with less than 243 mg of aspirin is 8.08%, and that the percentage of tablets with more than 262 mg of aspirin is 0.82%. Therefore, the percentage of tablets containing between 243 and 262 mg of aspirin is\[100.00 \% - 8.08 \% - 0.82 \% = 91.10 \% \nonumber\]Figure 4.4.4 shows the distribution of aspirin in the tablets, with the area in blue showing the percentage of tablets containing between 243 mg and 262 mg of aspirin. What percentage of aspirin tablets will contain between 240 mg and 245 mg of aspirin if the population’s mean is 250 mg and the population’s standard deviation is 5 mg? To find the percentage of tablets that contain less than 245 mg of aspirin we first calculate the deviation, z,\[z = \frac {245 - 250} {5} = -1.00 \nonumber\]and then look up the corresponding probability in Appendix 3, obtaining a value of 15.87%. 
To find the percentage of tablets that contain less than 240 mg of aspirin we find that\[z = \frac {240 - 250} {5} = -2.00 \nonumber\]which corresponds to 2.28%. The percentage of tablets containing between 240 and 245 mg of aspirin is 15.87% – 2.28% = 13.59%. If we select at random a single member from a population, what is its most likely value? This is an important question, and, in one form or another, it is at the heart of any analysis in which we wish to extrapolate from a sample to the sample’s parent population. One of the most important features of a population’s probability distribution is that it provides a way to answer this question. Figure 4.4.3 shows that for a normal distribution, 68.26% of the population’s members have values within the range \(\mu \pm 1\sigma\). Stating this another way, there is a 68.26% probability that the result for a single sample drawn from a normally distributed population is in the interval \(\mu \pm 1\sigma\). In general, if we select a single sample we expect its value, Xi, to be in the range\[X_i = \mu \pm z \sigma \label{4.2}\]where the value of z is how confident we are in assigning this range. Values reported in this fashion are called confidence intervals. Equation \ref{4.2}, for example, is the confidence interval for a single member of a population. Table 4.4.2 gives the confidence intervals for several values of z. For reasons discussed later in the chapter, a 95% confidence level is a common choice in analytical chemistry. When z = 1, we call this the 68.26% confidence interval. What is the 95% confidence interval for the amount of aspirin in a single analgesic tablet drawn from a population for which \(\mu\) is 250 mg and for which \(\sigma\) is 5? Solution: Using Table 4.4.2 , we find that z is 1.96 for a 95% confidence interval. Substituting this into Equation \ref{4.2} gives the confidence interval for a single tablet as\[X_i = \mu \pm 1.96\sigma = 250 \text{ mg} \pm (1.96 \times 5) = 250 \text{ mg} \pm 10 \text{ mg} \nonumber\]A confidence interval of 250 mg ± 10 mg means that 95% of the tablets in the population contain between 240 and 260 mg of aspirin. Alternatively, we can rewrite Equation \ref{4.2} so that it gives the confidence interval for \(\mu\) based on the population’s standard deviation and the value of a single member drawn from the population.\[\mu = X_i \pm z \sigma \label{4.3}\]The population standard deviation for the amount of aspirin in a batch of analgesic tablets is known to be 7 mg of aspirin. If you randomly select and analyze a single tablet and find that it contains 245 mg of aspirin, what is the 95% confidence interval for the population’s mean? Solution: The 95% confidence interval for the population mean is given as\[\mu = X_i \pm z \sigma = 245 \text{ mg} \pm (1.96 \times 7) \text{ mg} = 245 \text{ mg} \pm 14 \text{ mg} \nonumber\]Therefore, based on this one sample, we estimate that there is a 95% probability that the population’s mean, \(\mu\), lies within the range of 231 mg to 259 mg of aspirin. Note the qualification that the prediction for \(\mu\) is based on one sample; a different sample likely will give a different 95% confidence interval. Our result here, therefore, is an estimate for \(\mu\) based on this one sample. It is unusual to predict the population’s expected mean from the analysis of a single sample; instead, we collect n samples drawn from a population of known \(\sigma\), and report the mean, \(\overline{X}\). 
The standard deviation of the mean, \(\sigma_{\overline{X}}\), which also is known as the standard error of the mean, is\[\sigma_{\overline{X}} = \frac {\sigma} {\sqrt{n}} \nonumber\]The confidence interval for the population’s mean, therefore, is\[\mu = \overline{X} \pm \frac {z \sigma} {\sqrt{n}} \nonumber\]What is the 95% confidence interval for the analgesic tablets in Example 4.4.4 , if an analysis of five tablets yields a mean of 245 mg of aspirin?SolutionIn this case the confidence interval is\[\mu = 245 \text{ mg} \pm \frac {1.96 \times 7} {\sqrt{5}} \text{ mg} = 245 \text{ mg} \pm 6 \text{ mg} \nonumber\]We estimate a 95% probability that the population’s mean is between 239 mg and 251 mg of aspirin. As expected, the confidence interval when using the mean of five samples is smaller than that for a single sample.An analysis of seven aspirin tablets from a population known to have a standard deviation of 5, gives the following results in mg aspirin per tablet:\(246 \quad 249 \quad 255 \quad 251 \quad 251 \quad 247 \quad 250\)What is the 95% confidence interval for the population’s expected mean?The mean is 249.9 mg aspirin/tablet for this sample of seven tablets. For a 95% confidence interval the value of z is 1.96, which makes the confidence interval\[249.9 \pm \frac {1.96 \times 5} {\sqrt{7}} = 249.9 \pm 3.7 \approx 250 \text{ mg} \pm 4 \text { mg} \nonumber\]In Examples 4.4.2 –4.4.5 we assumed that the amount of aspirin in analgesic tablets is normally distributed. Without analyzing every member of the population, how can we justify this assumption? In a situation where we cannot study the whole population, or when we cannot predict the mathematical form of a population’s probability distribution, we must deduce the distribution from a limited sampling of its members.Let’s return to the problem of determining a penny’s mass to explore further the relationship between a population’s distribution and the distribution of a sample drawn from that population. The two sets of data in Table 4.4.1 are too small to provide a useful picture of a sample’s distribution, so we will use the larger sample of 100 pennies shown in Table 4.4.3 . The mean and the standard deviation for this sample are 3.095 g and 0.0346 g, respectively.A histogram (Figure 4.4.5 ) is a useful way to examine the data in Table 4.4.3 . To create the histogram, we divide the sample into intervals, by mass, and determine the percentage of pennies within each interval (Table 4.4.4 ). Note that the sample’s mean is the midpoint of the histogram.Figure 4.4.5 also includes a normal distribution curve for the population of pennies, based on the assumption that the mean and the variance for the sample are appropriate estimates for the population’s mean and variance. Although the histogram is not perfectly symmetric in shape, it provides a good approximation of the normal distribution curve, suggesting that the sample of 100 pennies is normally distributed. It is easy to imagine that the histogram will approximate more closely a normal distribution if we include additional pennies in our sample.We will not offer a formal proof that the sample of pennies in Table 4.4.3 and the population of all circulating U. S. pennies are normally distributed; however, the evidence in Figure 4.4.5 strongly suggests this is true. Although we cannot claim that the results of all experiments are normally distributed, in most cases our data are normally distributed. 
According to the central limit theorem, when a measurement is subject to a variety of indeterminate errors, the results for that measurement will approximate a normal distribution [Mark, H.; Workman, J. Spectroscopy 1988, 3, 44–48]. The central limit theorem holds true even if the individual sources of indeterminate error are not normally distributed. The chief limitation to the central limit theorem is that the sources of indeterminate error must be independent and of similar magnitude so that no one source of error dominates the final distribution.An additional feature of the central limit theorem is that a distribution of means for samples drawn from a population with any distribution will approximate closely a normal distribution if the size of each sample is sufficiently large. For example, Figure 4.4.6 shows the distribution for two samples of 10 000 drawn from a uniform distribution in which every value between 0 and 1 occurs with an equal frequency. For samples of size n = 1, the resulting distribution closely approximates the population’s uniform distribution. The distribution of the means for samples of size n = 10, however, closely approximates a normal distribution.You might reasonably ask whether this aspect of the central limit theorem is important as it is unlikely that we will complete 10 000 analyses, each of which is the average of 10 individual trials. This is deceiving. When we acquire a sample of soil, for example, it consists of many individual particles each of which is an individual sample of the soil. Our analysis of this sample, therefore, gives the mean for this large number of individual soil particles. Because of this, the central limit theorem is relevant. For a discussion of circumstances where the central limit theorem may not apply, see “Do You Reckon It’s Normally Distributed?”, the full reference for which is Majewsky, M.; Wagner, M.; Farlin, J. Sci. Total Environ. 2016, 548–549, 408–409.Did you notice the differences between the equation for the variance of a population and the variance of a sample? If not, here are the two equations:\[\sigma^2 = \frac {\sum_{i = 1}^n (X_i - \mu)^2} {n} \nonumber\]\[s^2 = \frac {\sum_{i = 1}^n (X_i - \overline{X})^2} {n - 1} \nonumber\]Both equations measure the variance around the mean, using \(\mu\) for a population and \(\overline{X}\) for a sample. Although the equations use different measures for the mean, the intention is the same for both the sample and the population. A more interesting difference is between the denominators of the two equations. When we calculate the population’s variance we divide the numerator by the population’s size, n; for the sample’s variance, however, we divide by n – 1, where n is the sample’s size. Why do we divide by n – 1 when we calculate the sample’s variance?A variance is the average squared deviation of individual results relative to the mean. When we calculate an average we divide the sum by the number of independent measurements, or degrees of freedom, in the calculation. For the population’s variance, the degrees of freedom is equal to the population’s size, n. When we measure every member of a population we have complete information about the population.When we calculate the sample’s variance, however, we replace \(\mu\) with \(\overline{X}\), which we also calculate using the same data. If there are n members in the sample, we can deduce the value of the nth member from the remaining n – 1 members and the mean. 
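The behavior summarized in Figure 4.4.6 is easy to reproduce by simulation. The following minimal sketch draws 10 000 values of size n = 1 and 10 000 means of size n = 10 from a uniform distribution between 0 and 1.

```python
# A minimal sketch of the central limit theorem: means of samples drawn from a
# uniform distribution approach a normal distribution as n grows.
import numpy as np

rng = np.random.default_rng(1)
single_draws = rng.uniform(0, 1, size=10_000)                       # samples of size n = 1
means_of_ten = rng.uniform(0, 1, size=(10_000, 10)).mean(axis=1)    # means for n = 10

# the uniform distribution on [0, 1] has a standard deviation of 1/sqrt(12), about 0.289;
# the standard deviation of the means should shrink by roughly sqrt(10)
print(f"n = 1 : mean = {single_draws.mean():.3f}, std = {single_draws.std():.3f}")
print(f"n = 10: mean = {means_of_ten.mean():.3f}, std = {means_of_ten.std():.3f}")
# a histogram of means_of_ten (for example, with matplotlib) is bell-shaped, as in Figure 4.4.6
```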
To see how this works, suppose that \(n = 5\), that the first four samples are 1, 2, 3 and 4, and that the mean is 3; the fifth member of the sample then must be\[X_5 = (\overline{X} \times n) - X_1 - X_2 - X_3 - X_4 = (3 \times 5) - 1 - 2 - 3 - 4 = 5 \nonumber\]Because we have just four independent measurements, we have lost one degree of freedom. Using n – 1 in place of n when we calculate the sample’s variance ensures that \(s^2\) is an unbiased estimator of \(\sigma^2\). Here is another way to think about degrees of freedom. We analyze samples to make predictions about the underlying population. When our sample consists of n measurements we cannot make more than n independent predictions about the population. Each time we estimate a parameter, such as the population’s mean, we lose a degree of freedom. If there are n degrees of freedom for calculating the sample’s mean, then n – 1 degrees of freedom remain when we calculate the sample’s variance. Earlier we introduced the confidence interval as a way to report the most probable value for a population’s mean, \(\mu\)\[\mu = \overline{X} \pm \frac {z \sigma} {\sqrt{n}} \label{4.4}\]where \(\overline{X}\) is the mean for a sample of size n, and \(\sigma\) is the population’s standard deviation. For most analyses we do not know the population’s standard deviation. We can still calculate a confidence interval, however, if we make two modifications to Equation \ref{4.4}. The first modification is straightforward—we replace the population’s standard deviation, \(\sigma\), with the sample’s standard deviation, s. The second modification is not as obvious. The values of z in Table 4.4.2 are for a normal distribution, which is a function of \(\sigma^2\), not \(s^2\). Although the sample’s variance, \(s^2\), is an unbiased estimate of the population’s variance, \(\sigma^2\), the value of \(s^2\) will only rarely equal \(\sigma^2\). To account for this uncertainty in estimating \(\sigma^2\), we replace the variable z in Equation \ref{4.4} with the variable t, where t is defined such that \(t \ge z\) at all confidence levels.\[\mu = \overline{X} \pm \frac {t s} {\sqrt{n}} \label{4.5}\]Values for t at the 95% confidence level are shown in Table 4.4.5 . Note that t becomes smaller as the number of degrees of freedom increases, and that it approaches z as n approaches infinity. The larger the sample, the more closely its confidence interval (Equation \ref{4.5}) approaches the confidence interval we calculate when we know the population’s standard deviation (Equation \ref{4.4}). Appendix 4 provides additional values of t for other confidence levels. What are the 95% confidence intervals for the two samples of pennies in Table 4.4.1 ? Solution: The mean and the standard deviation for the first experiment are, respectively, 3.117 g and 0.051 g. Because the sample consists of seven measurements, there are six degrees of freedom. The value of t from Table 4.4.5 is 2.447. Substituting into Equation \ref{4.5} gives\[\mu = 3.117 \text{ g} \pm \frac {2.447 \times 0.051 \text{ g}} {\sqrt{7}} = 3.117 \text{ g} \pm 0.047 \text{ g} \nonumber\]For the second experiment the mean and the standard deviation are 3.081 g and 0.037 g, respectively, with four degrees of freedom. The 95% confidence interval is\[\mu = 3.081 \text{ g} \pm \frac {2.776 \times 0.037 \text{ g}} {\sqrt{5}} = 3.081 \text{ g} \pm 0.046 \text{ g} \nonumber\]Based on the first experiment, the 95% confidence interval for the population’s mean is 3.070–3.164 g. For the second experiment, the 95% confidence interval is 3.035–3.127 g.
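The two intervals in Example 4.4.6 are easy to reproduce with scipy, which returns the value of t directly instead of requiring a table; the following is a minimal sketch.

```python
# A minimal sketch of the t-based confidence interval in Equation 4.5,
# reproducing the two penny intervals from Example 4.4.6.
import numpy as np
from scipy.stats import t

def ci95(mean, s, n):
    """Return the half-width of the 95% confidence interval for the mean."""
    t_crit = t.ppf(1 - 0.05 / 2, n - 1)       # two-tailed t(0.05, n - 1)
    return t_crit * s / np.sqrt(n)

for mean, s, n in [(3.117, 0.051, 7), (3.081, 0.037, 5)]:
    print(f"{mean:.3f} g ± {ci95(mean, s, n):.3f} g")   # 3.117 ± 0.047 g and 3.081 ± 0.046 g
```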
Although the two confidence intervals are not identical—remember, each confidence interval provides a different estimate for \(\mu\)—the mean for each experiment is contained within the other experiment’s confidence interval. There also is an appreciable overlap of the two confidence intervals. Both of these observations are consistent with samples drawn from the same population. Note that our comparison of these two confidence intervals at this point is somewhat vague and unsatisfying. We will return to this point in the next section, when we consider a statistical approach to comparing the results of experiments. What is the 95% confidence interval for the sample of 100 pennies in Table 4.4.3 ? The mean and the standard deviation for this sample are 3.095 g and 0.0346 g, respectively. Compare your result to the confidence intervals for the samples of pennies in Table 4.4.1 . With 100 pennies, we have 99 degrees of freedom for the mean. Although Table 4.4.5 does not include a value for t(0.05, 99), we can approximate its value by using the values for t(0.05, 60) and t(0.05, 100) and by assuming a linear change in its value.\[t(0.05, 99) = t(0.05, 60) - \frac {39} {40} \left\{ t(0.05, 60) - t(0.05, 100) \right\} \nonumber\]\[t(0.05, 99) = 2.000 - \frac {39} {40} \left\{ 2.000 - 1.984 \right\} = 1.9844 \nonumber\]The 95% confidence interval for the pennies is\[3.095 \pm \frac {1.9844 \times 0.0346} {\sqrt{100}} = 3.095 \text{ g} \pm 0.007 \text{ g} \nonumber\]From Example 4.4.6 , the 95% confidence intervals for the two samples in Table 4.4.1 are 3.117 g ± 0.047 g and 3.081 g ± 0.046 g. As expected, the confidence interval for the sample of 100 pennies is much smaller than that for the two smaller samples of pennies. Note, as well, that the confidence interval for the larger sample fits within the confidence intervals for the two smaller samples. There is a temptation when we analyze data simply to plug numbers into an equation, carry out the calculation, and report the result. This is never a good idea, and you should develop the habit of reviewing and evaluating your data. For example, if you analyze five samples and report an analyte’s mean concentration as 0.67 ppm with a standard deviation of 0.64 ppm, then the 95% confidence interval is\[\mu = 0.67 \text{ ppm} \pm \frac {2.776 \times 0.64 \text{ ppm}} {\sqrt{5}} = 0.67 \text{ ppm} \pm 0.79 \text{ ppm} \nonumber\]This confidence interval estimates that the analyte’s true concentration is between –0.12 ppm and 1.46 ppm. Including a negative concentration within the confidence interval should lead you to reevaluate your data or your conclusions. A closer examination of your data may convince you that the standard deviation is larger than expected, making the confidence interval too broad, or you may conclude that the analyte’s concentration is too small to report with confidence. We will return to the topic of detection limits near the end of this chapter. Here is a second example of why you should closely examine your data: results obtained on samples drawn at random from a normally distributed population must be random. If the results for a sequence of samples show a regular pattern or trend, then the underlying population either is not normally distributed or there is a time-dependent determinate error.
For example, if we randomly select 20 pennies and find that the mass of each penny is greater than that for the preceding penny, then we might suspect that our balance is drifting out of calibration. This page titled 4.4: The Distribution of Measurements and Results is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
4.5: Statistical Analysis of Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.05%3A_Statistical_Analysis_of_Data
A confidence interval is a useful way to report the result of an analysis because it sets limits on the expected result. In the absence of determinate error, a confidence interval based on a sample’s mean indicates the range of values in which we expect to find the population’s mean. When we report a 95% confidence interval for the mass of a penny as 3.117 g ± 0.047 g, for example, we are stating that there is only a 5% probability that the penny’s expected mass is less than 3.070 g or more than 3.164 g. Because a confidence interval is a statement of probability, it allows us to consider comparative questions, such as these: “Are the results for a newly developed method to determine cholesterol in blood significantly different from those obtained using a standard method?” or “Is there a significant variation in the composition of rainwater collected at different sites downwind from a coal-burning utility plant?” In this section we introduce a general approach to the statistical analysis of data. Specific statistical tests are presented in Section 4.6. The reliability of significance testing recently has received much attention—see Nuzzo, R. “Scientific Method: Statistical Errors,” Nature, 2014, 506, 150–152 for a general discussion of the issues—so it is appropriate to begin this section by noting the need to ensure that our data and our research question are compatible so that we do not read more into a statistical analysis than our data allows; see Leek, J. T.; Peng, R. D. “What is the Question?” Science, 2015, 347, 1314–1315 for a useful discussion of six common research questions. In the context of analytical chemistry, significance testing often accompanies an exploratory data analysis (Is there a reason to suspect that there is a difference between these two analytical methods when applied to a common sample?) or an inferential data analysis (Is there a reason to suspect that there is a relationship between these two independent measurements?). A statistically significant result for these types of analytical research questions generally leads to the design of additional experiments better suited to making predictions or to explaining an underlying causal relationship. A significance test is the first step toward building a greater understanding of an analytical problem, not the final answer to that problem. Let’s consider the following problem. To determine if a medication is effective in lowering blood glucose concentrations, we collect two sets of blood samples from a patient. We collect one set of samples immediately before we administer the medication, and collect the second set of samples several hours later. After analyzing the samples, we report their respective means and variances. How do we decide if the medication was successful in lowering the patient’s concentration of blood glucose? One way to answer this question is to construct a normal distribution curve for each sample, and to compare the two curves to each other. Three possible outcomes are shown in Figure 4.5.1 . In Figure 4.5.1 a, there is a complete separation of the two normal distribution curves, which suggests the two samples are significantly different from each other. In Figure 4.5.1 b, the normal distribution curves for the two samples almost completely overlap, which suggests that the difference between the samples is insignificant. Figure 4.5.1 c, however, presents us with a dilemma.
Although the means for the two samples seem different, the overlap of their normal distribution curves suggests that a significant number of possible outcomes could belong to either distribution. In this case the best we can do is to make a statement about the probability that the samples are significantly different from each other. The process by which we determine the probability that there is a significant difference between two samples is called significance testing or hypothesis testing. Before we discuss specific examples we will first establish a general approach to conducting and interpreting a significance test. The purpose of a significance test is to determine whether the difference between two or more results is sufficiently large that it cannot be explained by indeterminate errors. The first step in constructing a significance test is to state the problem as a yes or no question, such as “Is this medication effective at lowering a patient’s blood glucose levels?” A null hypothesis and an alternative hypothesis define the two possible answers to our yes or no question. The null hypothesis, H0, is that indeterminate errors are sufficient to explain any differences between our results. The alternative hypothesis, HA, is that the differences in our results are too great to be explained by random error and that they must be determinate in nature. We test the null hypothesis, which we either retain or reject. If we reject the null hypothesis, then we must accept the alternative hypothesis and conclude that the difference is significant. Failing to reject a null hypothesis is not the same as accepting it. We retain a null hypothesis because we have insufficient evidence to prove it incorrect. It is impossible to prove that a null hypothesis is true. This is an important point and one that is easy to forget. To appreciate this point let’s return to our sample of 100 pennies in Table 4.4.3. After looking at the data we might propose the following null and alternative hypotheses.

H0: The mass of a circulating U.S. penny is between 2.900 g and 3.200 g

HA: The mass of a circulating U.S. penny may be less than 2.900 g or more than 3.200 g

To test the null hypothesis we find a penny and determine its mass. If the penny’s mass is 2.512 g then we can reject the null hypothesis and accept the alternative hypothesis. Suppose that the penny’s mass is 3.162 g. Although this result increases our confidence in the null hypothesis, it does not prove that the null hypothesis is correct because the next penny we sample might weigh less than 2.900 g or more than 3.200 g. After we state the null and the alternative hypotheses, the second step is to choose a confidence level for the analysis. The confidence level defines the probability that we will reject the null hypothesis when it is, in fact, true. We can express this as our confidence that we are correct in rejecting the null hypothesis (e.g. 95%), or as the probability that we are incorrect in rejecting the null hypothesis. For the latter, the confidence level is given as \(\alpha\), where\[\alpha = 1 - \frac {\text{confidence level (%)}} {100} \label{4.1}\]For a 95% confidence level, \(\alpha\) is 0.05. In this textbook we use \(\alpha\) to represent the probability that we incorrectly reject the null hypothesis. In other textbooks this probability is given as p (often read as “p-value”). Although the symbols differ, the meaning is the same. The third step is to calculate an appropriate test statistic and to compare it to a critical value.
The test statistic’s critical value defines a breakpoint between values that lead us to reject or to retain the null hypothesis, which is the fourth, and final, step of a significance test. How we calculate the test statistic depends on what we are comparing, a topic we cover in Section 4.6. The last step is to either retain the null hypothesis, or to reject it and accept the alternative hypothesis. The four steps for a statistical analysis of data using a significance test, therefore, are: (1) state the null hypothesis and the alternative hypothesis; (2) choose a confidence level for the analysis; (3) calculate an appropriate test statistic and compare it to its critical value; and (4) either retain the null hypothesis, or reject it and accept the alternative hypothesis. Suppose we want to evaluate the accuracy of a new analytical method. We might use the method to analyze a Standard Reference Material that contains a known concentration of analyte, \(\mu\). We analyze the standard several times, obtaining a mean value, \(\overline{X}\), for the analyte’s concentration. Our null hypothesis is that there is no difference between \(\overline{X}\) and \(\mu\)\[H_0 \text{: } \overline{X} = \mu \nonumber\]If we conduct the significance test at \(\alpha = 0.05\), then we retain the null hypothesis if a 95% confidence interval around \(\overline{X}\) contains \(\mu\). If the alternative hypothesis is\[H_\text{A} \text{: } \overline{X} \neq \mu \nonumber\]then we reject the null hypothesis and accept the alternative hypothesis if \(\mu\) lies in the shaded areas at either end of the sample’s probability distribution curve (Figure 4.5.2 a). Each of the shaded areas accounts for 2.5% of the area under the probability distribution curve, for a total of 5%. This is a two-tailed significance test because we reject the null hypothesis for values of \(\mu\) at either extreme of the sample’s probability distribution curve. We also can write the alternative hypothesis in two additional ways\[H_\text{A} \text{: } \overline{X} > \mu \nonumber\]\[H_\text{A} \text{: } \overline{X} < \mu \nonumber\]rejecting the null hypothesis if \(\mu\) falls within the shaded areas shown in Figure 4.5.2 b or Figure 4.5.2 c, respectively. In each case the shaded area represents 5% of the area under the probability distribution curve. These are examples of a one-tailed significance test. For a fixed confidence level, a two-tailed significance test is the more conservative test because rejecting the null hypothesis requires a larger difference between the parameters we are comparing. In most situations we have no particular reason to expect that one parameter must be larger (or must be smaller) than the other parameter. This is the case, for example, when we evaluate the accuracy of a new analytical method. A two-tailed significance test, therefore, usually is the appropriate choice. We reserve a one-tailed significance test for a situation where we specifically are interested in whether one parameter is larger (or smaller) than the other parameter. For example, a one-tailed significance test is appropriate if we are evaluating a medication’s ability to lower blood glucose levels. In this case we are interested only in whether the glucose levels after we administer the medication are less than the glucose levels before we initiated treatment. If a patient’s blood glucose level is greater after we administer the medication, then we know the answer—the medication did not work—and do not need to conduct a statistical analysis. Because a significance test relies on probability, its interpretation is subject to error. In a significance test, \(\alpha\) defines the probability of rejecting a null hypothesis that is true.
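This definition of \(\alpha\) is easy to demonstrate by simulation. The following minimal sketch repeatedly tests samples drawn from a hypothetical population whose mean really is the hypothesized value, so every rejection is a type 1 error.

```python
# A minimal sketch: when the null hypothesis is true, a significance test at
# alpha = 0.05 rejects it about 5% of the time (a type 1 error).
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(7)
mu, sigma, n, trials = 250.0, 5.0, 5, 10_000     # hypothetical tablet population

rejections = 0
for _ in range(trials):
    sample = rng.normal(mu, sigma, size=n)       # the null hypothesis is true by construction
    result = ttest_1samp(sample, popmean=mu)     # two-tailed one-sample t-test
    if result.pvalue < 0.05:
        rejections += 1

print(f"fraction of type 1 errors: {rejections / trials:.3f}")   # close to 0.05
```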
When we conduct a significance test at \(\alpha = 0.05\), there is a 5% probability that we will incorrectly reject the null hypothesis. This is known as a type 1 error, and its risk is always equivalent to \(\alpha\). A type 1 error in a two-tailed or a one-tailed significance test corresponds to the shaded areas under the probability distribution curves in Figure 4.5.2 . A second type of error occurs when we retain a null hypothesis even though it is false. This is known as a type 2 error, and the probability of its occurrence is \(\beta\). Unfortunately, in most cases we cannot calculate or estimate the value for \(\beta\). The probability of a type 2 error, however, is inversely related to the probability of a type 1 error. Minimizing a type 1 error by decreasing \(\alpha\) increases the likelihood of a type 2 error. When we choose a value for \(\alpha\) we must compromise between these two types of error. Most of the examples in this text use a 95% confidence level (\(\alpha = 0.05\)) because this usually is a reasonable compromise between type 1 and type 2 errors for analytical work. It is not unusual, however, to use a more stringent (e.g. \(\alpha = 0.01\)) or a more lenient (e.g. \(\alpha = 0.10\)) confidence level when the situation calls for it. This page titled 4.5: Statistical Analysis of Data is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
4.6: Statistical Methods for Normal Distributions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.06%3A_Statistical_Methods_for_Normal_Distributions
The most common distribution for our results is a normal distribution. Because the area between any two limits of a normal distribution curve is well defined, constructing and evaluating significance tests is straightforward.One way to validate a new analytical method is to analyze a sample that contains a known amount of analyte, \(\mu\). To judge the method’s accuracy we analyze several portions of the sample, determine the average amount of analyte in the sample, \(\overline{X}\), and use a significance test to compare \(\overline{X}\) to \(\mu\). Our null hypothesis is that the difference between \(\overline{X}\) and \(\mu\) is explained by indeterminate errors that affect the determination of \(\overline{X}\). The alternative hypothesis is that the difference between \(\overline{X}\) and \(\mu\) is too large to be explained by indeterminate error.\[H_0 \text{: } \overline{X} = \mu \nonumber\]\[H_A \text{: } \overline{X} \neq \mu \nonumber\]The test statistic is texp, which we substitute into the confidence interval for \(\mu\) given by Equation 4.4.5\[\mu = \overline{X} \pm \frac {t_\text{exp} s} {\sqrt{n}} \label{4.1}\]Rearranging this equation and solving for \(t_\text{exp}\)\[t_\text{exp} = \frac {|\mu - \overline{X}| \sqrt{n}} {s} \label{4.2}\]gives the value for \(t_\text{exp}\) when \(\mu\) is at either the right edge or the left edge of the sample's confidence interval (Figure 4.6.1 a)To determine if we should retain or reject the null hypothesis, we compare the value of texp to a critical value, \(t(\alpha, \nu)\), where \(\alpha\) is the confidence level and \(\nu\) is the degrees of freedom for the sample. The critical value \(t(\alpha, \nu)\) defines the largest confidence interval explained by indeterminate error. If \(t_\text{exp} > t(\alpha, \nu)\), then our sample’s confidence interval is greater than that explained by indeterminate errors (Figure 4.6.1 b). In this case, we reject the null hypothesis and accept the alternative hypothesis. If \(t_\text{exp} \leq t(\alpha, \nu)\), then our sample’s confidence interval is smaller than that explained by indeterminate error, and we retain the null hypothesis (Figure 4.6.1 c). Example 4.6.1 provides a typical application of this significance test, which is known as a t-test of \(\overline{X}\) to \(\mu\).You will find values for \(t(\alpha, \nu)\) in Appendix 4.Another name for the t-test is Student’s t-test. Student was the pen name for William Gossett who developed the t-test while working as a statistician for the Guiness Brewery in Dublin, Ireland. He published under the name Student because the brewery did not want its competitors to know they were using statistics to help improve the quality of their products.Before determining the amount of Na2CO3 in a sample, you decide to check your procedure by analyzing a standard sample that is 98.76% w/w Na2CO3. Five replicate determinations of the %w/w Na2CO3 in the standard gave the following results\(98.71 \% \quad 98.59 \% \quad 98.62 \% \quad 98.44 \% \quad 98.58 \%\)Using \(\alpha = 0.05\), is there any evidence that the analysis is giving inaccurate results?SolutionThe mean and standard deviation for the five trials are\[\overline{X} = 98.59 \quad \quad \quad s = 0.0973 \nonumber\]Because there is no reason to believe that the results for the standard must be larger or smaller than \(\mu\), a two-tailed t-test is appropriate. 
The null hypothesis and alternative hypothesis are\[H_0 \text{: } \overline{X} = \mu \quad \quad \quad H_\text{A} \text{: } \overline{X} \neq \mu \nonumber\]The test statistic, texp, is\[t_\text{exp} = \frac {|\mu - \overline{X}|\sqrt{n}} {s} = \frac {|98.76 - 98.59| \sqrt{5}} {0.0973} = 3.91 \nonumber\]The critical value for t(0.05, 4) from Appendix 4 is 2.78. Since texp is greater than t(0.05, 4), we reject the null hypothesis and accept the alternative hypothesis. At the 95% confidence level the difference between \(\overline{X}\) and \(\mu\) is too large to be explained by indeterminate sources of error, which suggests there is a determinate source of error that affects the analysis. There is another way to interpret the result of this t-test. Knowing that texp is 3.91 and that there are 4 degrees of freedom, we use Appendix 4 to estimate the \(\alpha\) value corresponding to a t(\(\alpha\), 4) of 3.91. From Appendix 4, t(0.02, 4) is 3.75 and t(0.01, 4) is 4.60. Although we can reject the null hypothesis at the 98% confidence level, we cannot reject it at the 99% confidence level. For a discussion of the advantages of this approach, see J. A. C. Sterne and G. D. Smith “Sifting the evidence—what’s wrong with significance tests?” BMJ 2001, 322, 226–231. To evaluate the accuracy of a new analytical method, an analyst determines the purity of a standard for which \(\mu\) is 100.0%, obtaining the following results.\(99.28 \% \quad 103.93 \% \quad 99.43 \% \quad 99.84 \% \quad 97.60 \% \quad 96.70 \% \quad 98.02 \%\) Is there any evidence at \(\alpha = 0.05\) that there is a determinate error affecting the results? The null hypothesis is \(H_0 \text{: } \overline{X} = \mu\) and the alternative hypothesis is \(H_\text{A} \text{: } \overline{X} \neq \mu\). The mean and the standard deviation for the data are 99.26% and 2.35%, respectively. The value for texp is\[t_\text{exp} = \frac {|100.0 - 99.26| \sqrt{7}} {2.35} = 0.833 \nonumber\]and the critical value for t(0.05, 6) is 2.447. Because texp is less than t(0.05, 6) we retain the null hypothesis and have no evidence for a significant difference between \(\overline{X}\) and \(\mu\). Earlier we made the point that we must exercise caution when we interpret the result of a statistical analysis. We will keep returning to this point because it is an important one. Having determined that a result is inaccurate, as we did in Example 4.6.1 , the next step is to identify and to correct the error. Before we expend time and money on this, however, we first should examine critically our data. For example, the smaller the value of s, the larger the value of texp. If the standard deviation for our analysis is unrealistically small, then the probability that we incorrectly reject the null hypothesis—a type 1 error—increases. Including a few additional replicate analyses of the standard and reevaluating the t-test may strengthen our evidence for a determinate error, or it may show us that there is no evidence for a determinate error. If we analyze regularly a particular sample, we may be able to establish an expected variance, \(\sigma^2\), for the analysis. This often is the case, for example, in a clinical lab that analyzes hundreds of blood samples each day. A few replicate analyses of a single sample give a sample variance, \(s^2\), whose value may or may not differ significantly from \(\sigma^2\). We can use an F-test to evaluate whether a difference between \(s^2\) and \(\sigma^2\) is significant.
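Before taking up the F-test, note that the t-test of \(\overline{X}\) against \(\mu\) in Example 4.6.1 takes only a few lines of code; the following is a minimal sketch.

```python
# A minimal sketch of the t-test of X-bar against mu, using the %w/w Na2CO3
# results from Example 4.6.1.
import numpy as np
from scipy.stats import t

data = np.array([98.71, 98.59, 98.62, 98.44, 98.58])   # %w/w Na2CO3
mu = 98.76                                              # accepted value for the standard

n, xbar, s = len(data), data.mean(), data.std(ddof=1)
t_exp = abs(mu - xbar) * np.sqrt(n) / s                 # Equation 4.2
t_crit = t.ppf(1 - 0.05 / 2, n - 1)                     # t(0.05, 4) for a two-tailed test

# t_exp is about 3.95 here (3.91 in the example, which rounds the mean to 98.59);
# either way it exceeds t(0.05, 4) = 2.78, so we reject the null hypothesis
print(f"t_exp = {t_exp:.2f}, t(0.05, 4) = {t_crit:.2f}")
print("reject H0" if t_exp > t_crit else "retain H0")
```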
The null hypothesis is \(H_0 \text{: } s^2 = \sigma^2\) and the alternative hypothesis is \(H_\text{A} \text{: } s^2 \neq \sigma^2\). The test statistic for evaluating the null hypothesis is Fexp, which is given as either\[F_\text{exp} = \frac {s^2} {\sigma^2} \text{ if } s^2 > \sigma^2 \text{ or } F_\text{exp} = \frac {\sigma^2} {s^2} \text{ if } \sigma^2 > s^2 \label{4.3}\]depending on whether s2 is larger or smaller than \(\sigma^2\). This way of defining Fexp ensures that its value is always greater than or equal to one.If the null hypothesis is true, then Fexp should equal one; however, because of indeterminate errors Fexp usually is greater than one. A critical value, \(F(\alpha, \nu_\text{num}, \nu_\text{den})\), is the largest value of Fexp that we can attribute to indeterminate error given the specified significance level, \(\alpha\), and the degrees of freedom for the variance in the numerator, \(\nu_\text{num}\), and the variance in the denominator, \(\nu_\text{den}\). The degrees of freedom for s2 is n – 1, where n is the number of replicates used to determine the sample’s variance, and the degrees of freedom for \(\sigma^2\) is defined as infinity, \(\infty\). Critical values of F for \(\alpha = 0.05\) are listed in Appendix 5 for both one-tailed and two-tailed F-tests.A manufacturer’s process for analyzing aspirin tablets has a known variance of 25. A sample of 10 aspirin tablets is selected and analyzed for the amount of aspirin, yielding the following results in mg aspirin/tablet.\(254 \quad 249 \quad 252 \quad 252 \quad 249 \quad 249 \quad 250 \quad 247 \quad 251 \quad 252\)Determine whether there is evidence of a significant difference between the sample’s variance and the expected variance at \(\alpha = 0.05\).SolutionThe variance for the sample of 10 tablets is 4.3. The null hypothesis and alternative hypotheses are\[H_0 \text{: } s^2 = \sigma^2 \quad \quad \quad H_\text{A} \text{: } s^2 \neq \sigma^2 \nonumber\]and the value for Fexp is\[F_\text{exp} = \frac {\sigma^2} {s^2} = \frac {25} {4.3} = 5.8 \nonumber\]The critical value for F(0.05, \(\infty\), 9) from Appendix 5 is 3.333. Since Fexp is greater than F(0.05, \(\infty\), 9), we reject the null hypothesis and accept the alternative hypothesis that there is a significant difference between the sample’s variance and the expected variance. One explanation for the difference might be that the aspirin tablets were not selected randomly.We can extend the F-test to compare the variances for two samples, A and B, by rewriting Equation \ref{4.3} as\[F_\text{exp} = \frac {s_A^2} {s_B^2} \nonumber\]defining A and B so that the value of Fexp is greater than or equal to 1.Table 4.4.1 shows results for two experiments to determine the mass of a circulating U.S. penny. Determine whether there is a difference in the variances of these analyses at \(\alpha = 0.05\).SolutionThe standard deviations for the two experiments are 0.051 for the first experiment (A) and 0.037 for the second experiment (B). The null and alternative hypotheses are\[H_0 \text{: } s_A^2 = s_B^2 \quad \quad \quad H_\text{A} \text{: } s_A^2 \neq s_B^2 \nonumber\]and the value of Fexp is\[F_\text{exp} = \frac {s_A^2} {s_B^2} = \frac {(0.051)^2} {(0.037)^2} = \frac {0.00260} {0.00137} = 1.90 \nonumber\]From Appendix 5, the critical value for F(0.05, 6, 4) is 9.197. Because Fexp < F(0.05, 6, 4), we retain the null hypothesis. 
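Both forms of the F-test are straightforward to script. The following minimal sketch reproduces the aspirin-tablet comparison to a known variance and the comparison of the two penny experiments in Example 4.6.3, using scipy for the critical values.

```python
# A minimal sketch of the two F-tests above; the document's two-tailed F(0.05, ...)
# corresponds to the upper 2.5% point of the F distribution.
import numpy as np
from scipy.stats import f

# ten aspirin tablets compared to a known variance of 25
tablets = np.array([254, 249, 252, 252, 249, 249, 250, 247, 251, 252])
s2, sigma2 = tablets.var(ddof=1), 25.0
F_exp = max(s2, sigma2) / min(s2, sigma2)                   # Equation 4.3; here sigma2 / s2
F_crit = f.ppf(1 - 0.05 / 2, 1_000_000, len(tablets) - 1)   # F(0.05, inf, 9); inf approximated
print(f"F_exp = {F_exp:.1f}, F_crit = {F_crit:.3f}")        # 5.8 vs about 3.333

# the two penny experiments from Example 4.6.3: s_A = 0.051 (n = 7), s_B = 0.037 (n = 5)
F_exp = 0.051**2 / 0.037**2
F_crit = f.ppf(1 - 0.05 / 2, 6, 4)
print(f"F_exp = {F_exp:.2f}, F(0.05, 6, 4) = {F_crit:.3f}") # 1.90 vs 9.197
```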
There is no evidence at \(\alpha = 0.05\) to suggest that the difference in variances is significant. To compare two production lots of aspirin tablets, we collect and analyze samples from each, obtaining the following results (in mg aspirin/tablet). Lot 1: \(256 \quad 248 \quad 245 \quad 245 \quad 244 \quad 248 \quad 261\) Lot 2: \(241 \quad 258 \quad 241 \quad 244 \quad 256 \quad 254\) Is there any evidence at \(\alpha = 0.05\) that there is a significant difference in the variances for these two samples? The standard deviations are 6.451 mg for Lot 1 and 7.849 mg for Lot 2. The null and alternative hypotheses are\[H_0 \text{: } s_\text{Lot 1}^2 = s_\text{Lot 2}^2 \quad \quad \quad H_\text{A} \text{: } s_\text{Lot 1}^2 \neq s_\text{Lot 2}^2 \nonumber\]and the value of Fexp is\[F_\text{exp} = \frac {(7.849)^2} {(6.451)^2} = 1.480 \nonumber\]The critical value for F(0.05, 5, 6) is 5.988. Because Fexp < F(0.05, 5, 6), we retain the null hypothesis. There is no evidence at \(\alpha = 0.05\) to suggest that the difference in the variances is significant. Three factors influence the result of an analysis: the method, the sample, and the analyst. We can study the influence of these factors by conducting experiments in which we change one factor while holding constant the other factors. For example, to compare two analytical methods we can have the same analyst apply each method to the same sample and then examine the resulting means. In a similar fashion, we can design experiments to compare two analysts or to compare two samples. It also is possible to design experiments in which we vary more than one of these factors. We will return to this point in Chapter 14. Before we consider the significance tests for comparing the means of two samples, we need to make a distinction between unpaired data and paired data. This is a critical distinction and learning to distinguish between these two types of data is important. Here are two simple examples that highlight the difference between unpaired data and paired data. In each example the goal is to compare two balances by weighing pennies. In the first example, we weigh one sample of 10 pennies on the first balance and a separate sample of 10 pennies on the second balance; because each penny is weighed on only one of the two balances, the data are unpaired. In the second example, we weigh a single sample of 10 pennies on both balances; because each penny provides one mass from each balance, the data are paired. In both examples the samples of 10 pennies were drawn from the same population; the difference is how we sampled that population. We will learn why this distinction is important when we review the significance test for paired data; first, however, we present the significance test for unpaired data. One simple test for determining whether data are paired or unpaired is to look at the size of each sample. If the samples are of different size, then the data must be unpaired. The converse is not true. If two samples are of equal size, they may be paired or unpaired. Consider two analyses, A and B with means of \(\overline{X}_A\) and \(\overline{X}_B\), and standard deviations of sA and sB. The confidence intervals for \(\mu_A\) and for \(\mu_B\) are\[\mu_A = \overline{X}_A \pm \frac {t s_A} {\sqrt{n_A}} \label{4.4}\]\[\mu_B = \overline{X}_B \pm \frac {t s_B} {\sqrt{n_B}} \label{4.5}\]where nA and nB are the sample sizes for A and for B. Our null hypothesis, \(H_0 \text{: } \mu_A = \mu_B\), is that any difference between \(\mu_A\) and \(\mu_B\) is the result of indeterminate errors that affect the analyses.
The alternative hypothesis, \(H_A \text{: } \mu_A \neq \mu_B\), is that the difference between \(\mu_A\) and \(\mu_B\) is too large to be explained by indeterminate error. To derive an equation for texp, we assume that \(\mu_A\) equals \(\mu_B\), and combine Equation \ref{4.4} and Equation \ref{4.5}\[\overline{X}_A \pm \frac {t_\text{exp} s_A} {\sqrt{n_A}} = \overline{X}_B \pm \frac {t_\text{exp} s_B} {\sqrt{n_B}} \nonumber\]Solving for \(|\overline{X}_A - \overline{X}_B|\) and using a propagation of uncertainty gives\[|\overline{X}_A - \overline{X}_B| = t_\text{exp} \times \sqrt{\frac {s_A^2} {n_A} + \frac {s_B^2} {n_B}} \label{4.6}\]Finally, we solve for texp\[t_\text{exp} = \frac {|\overline{X}_A - \overline{X}_B|} {\sqrt{\frac {s_A^2} {n_A} + \frac {s_B^2} {n_B}}} \label{4.7}\]and compare it to a critical value, \(t(\alpha, \nu)\), where \(\alpha\) is the probability of a type 1 error, and \(\nu\) is the degrees of freedom. Problem 9 asks you to use a propagation of uncertainty to show that Equation \ref{4.6} is correct. Thus far our development of this t-test is similar to that for comparing \(\overline{X}\) to \(\mu\), and yet we do not have enough information to evaluate the t-test. Do you see the problem? With two independent sets of data it is unclear how many degrees of freedom we have. Suppose that the variances \(s_A^2\) and \(s_B^2\) provide estimates of the same \(\sigma^2\). In this case we can replace \(s_A^2\) and \(s_B^2\) with a pooled variance, \(s_\text{pool}^2\), that is a better estimate for the variance. Thus, Equation \ref{4.7} becomes\[t_\text{exp} = \frac {|\overline{X}_A - \overline{X}_B|} {s_\text{pool} \times \sqrt{\frac {1} {n_A} + \frac {1} {n_B}}} = \frac {|\overline{X}_A - \overline{X}_B|} {s_\text{pool}} \times \sqrt{\frac {n_A n_B} {n_A + n_B}} \label{4.8}\]where spool, the pooled standard deviation, is\[s_\text{pool} = \sqrt{\frac {(n_A - 1) s_A^2 + (n_B - 1)s_B^2} {n_A + n_B - 2}} \label{4.9}\]The denominator of Equation \ref{4.9} shows us that the degrees of freedom for a pooled standard deviation is \(n_A + n_B - 2\), which also is the degrees of freedom for the t-test. Note that we lose two degrees of freedom because the calculations for \(s_A^2\) and \(s_B^2\) require the prior calculation of \(\overline{X}_A\) and \(\overline{X}_B\). So how do you determine if it is okay to pool the variances? Use an F-test. If \(s_A^2\) and \(s_B^2\) are significantly different, then we calculate texp using Equation \ref{4.7}. In this case, we find the degrees of freedom using the following imposing equation.\[\nu = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {\left( \frac {s_A^2} {n_A} \right)^2} {n_A + 1} + \frac {\left( \frac {s_B^2} {n_B} \right)^2} {n_B + 1}} - 2 \label{4.10}\]Because the degrees of freedom must be an integer, we round to the nearest integer the value of \(\nu\) obtained using Equation \ref{4.10}. Equation \ref{4.10} is from Miller, J. C.; Miller, J. N. Statistics for Analytical Chemistry, 2nd Ed., Ellis Horwood: Chichester, UK, 1988. In the 6th edition, the authors note that several different equations have been suggested for the number of degrees of freedom for t when sA and sB differ, reflecting the fact that the determination of the degrees of freedom is an approximation.
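As an illustration, Equation \ref{4.10} is simple to evaluate in code; the following minimal sketch uses the standard deviations and sample sizes from the two-analyst soda ash comparison later in this section.

```python
# A minimal sketch of Equation 4.10 for the degrees of freedom when the two
# variances cannot be pooled.
def dof_unpooled(sA, nA, sB, nB):
    """Degrees of freedom per Equation 4.10, rounded to the nearest integer."""
    a, b = sA**2 / nA, sB**2 / nB
    return round((a + b)**2 / (a**2 / (nA + 1) + b**2 / (nB + 1)) - 2)

# values from the two-analyst soda ash comparison later in this section
print(dof_unpooled(sA=0.32, nA=6, sB=2.16, nB=6))   # 5
```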
An alternative equation—which is used by statistical software packages, such as R, Minitab, Excel—is\[\nu = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {\left( \frac {s_A^2} {n_A} \right)^2} {n_A - 1} + \frac {\left( \frac {s_B^2} {n_B} \right)^2} {n_B - 1}} = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {s_A^4} {n_A^2(n_A - 1)} + \frac {s_B^4} {n_B^2(n_B - 1)}} \nonumber\]For typical problems in analytical chemistry, the calculated degrees of freedom is reasonably insensitive to the choice of equation.Regardless of whether we calculate texp using Equation \ref{4.7} or Equation \ref{4.8}, we reject the null hypothesis if texp is greater than \(t(\alpha, \nu)\) and retain the null hypothesis if texp is less than or equal to \(t(\alpha, \nu)\).Table 4.4.1 provides results for two experiments to determine the mass of a circulating U.S. penny. Determine whether there is a difference in the means of these analyses at \(\alpha = 0.05\).SolutionFirst we use an F-test to determine whether we can pool the variances. We completed this analysis in Example 4.6.3 , finding no evidence of a significant difference, which means we can pool the standard deviations, obtaining\[s_\text{pool} = \sqrt{\frac {(7 - 1)(0.051)^2 + (5 - 1)(0.037)^2} {7 + 5 - 2}} = 0.0459 \nonumber\]with 10 degrees of freedom. To compare the means we use the following null hypothesis and alternative hypotheses\[H_0 \text{: } \mu_A = \mu_B \quad \quad \quad H_A \text{: } \mu_A \neq \mu_B \nonumber\]Because we are using the pooled standard deviation, we calculate texp using Equation \ref{4.8}.\[t_\text{exp} = \frac {|3.117 - 3.081|} {0.0459} \times \sqrt{\frac {7 \times 5} {7 + 5}} = 1.34 \nonumber\]The critical value for t(0.05, 10), from Appendix 4, is 2.23. Because texp is less than t(0.05, 10) we retain the null hypothesis. For \(\alpha = 0.05\) we do not have evidence that the two sets of pennies are significantly different.One method for determining the %w/w Na2CO3 in soda ash is to use an acid–base titration. 
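Before taking up the soda ash example, here is a minimal sketch of the pooled t-test just completed for the two penny experiments, computed from their summary statistics.

```python
# A minimal sketch of the pooled two-sample t-test for the two penny experiments,
# using Equations 4.8 and 4.9 and their summary statistics.
import numpy as np
from scipy.stats import t

xA, sA, nA = 3.117, 0.051, 7
xB, sB, nB = 3.081, 0.037, 5

s_pool = np.sqrt(((nA - 1) * sA**2 + (nB - 1) * sB**2) / (nA + nB - 2))   # Equation 4.9
t_exp = abs(xA - xB) / s_pool * np.sqrt(nA * nB / (nA + nB))              # Equation 4.8
t_crit = t.ppf(1 - 0.05 / 2, nA + nB - 2)                                 # t(0.05, 10)

print(f"s_pool = {s_pool:.4f} g, t_exp = {t_exp:.2f}, t(0.05, 10) = {t_crit:.2f}")
# 0.0459 g, 1.34, and 2.23: t_exp < t_crit, so we retain the null hypothesis;
# with the raw data, scipy.stats.ttest_ind(A, B, equal_var=True) performs the same test,
# and equal_var=False gives the unpooled version used when the variances differ
```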
When two analysts analyze the same sample of soda ash they obtain the results shown here. Analyst A: \(86.82 \% \quad 87.04 \% \quad 86.93 \% \quad 87.01 \% \quad 86.20 \% \quad 87.00 \%\) Analyst B: \(81.01 \% \quad 86.15 \% \quad 81.73 \% \quad 83.19 \% \quad 80.27 \% \quad 83.93 \%\) Determine whether the difference in the mean values is significant at \(\alpha = 0.05\). Solution: We begin by reporting the mean and standard deviation for each analyst.\[\overline{X}_A = 86.83\% \quad \quad s_A = 0.32\% \nonumber\]\[\overline{X}_B = 82.71\% \quad \quad s_B = 2.16\% \nonumber\]To determine whether we can use a pooled standard deviation, we first complete an F-test using the following null and alternative hypotheses.\[H_0 \text{: } s_A^2 = s_B^2 \quad \quad \quad H_A \text{: } s_A^2 \neq s_B^2 \nonumber\]Calculating Fexp, we obtain a value of\[F_\text{exp} = \frac {(2.16)^2} {(0.32)^2} = 45.6 \nonumber\]Because Fexp is larger than the critical value of 7.15 for F(0.05, 5, 5) from Appendix 5, we reject the null hypothesis and accept the alternative hypothesis that there is a significant difference between the variances; thus, we cannot calculate a pooled standard deviation. To compare the means for the two analysts we use the following null and alternative hypotheses.\[H_0 \text{: } \overline{X}_A = \overline{X}_B \quad \quad \quad H_A \text{: } \overline{X}_A \neq \overline{X}_B \nonumber\]Because we cannot pool the standard deviations, we calculate texp using Equation \ref{4.7} instead of Equation \ref{4.8}\[t_\text{exp} = \frac {|86.83 - 82.71|} {\sqrt{\frac {(0.32)^2} {6} + \frac {(2.16)^2} {6}}} = 4.62 \nonumber\]and calculate the degrees of freedom using Equation \ref{4.10}.\[\nu = \frac {\left( \frac {(0.32)^2} {6} + \frac {(2.16)^2} {6} \right)^2} {\frac {\left( \frac {(0.32)^2} {6} \right)^2} {6 + 1} + \frac {\left( \frac {(2.16)^2} {6} \right)^2} {6 + 1}} - 2 = 5.3 \approx 5 \nonumber\]From Appendix 4, the critical value for t(0.05, 5) is 2.57. Because texp is greater than t(0.05, 5) we reject the null hypothesis and accept the alternative hypothesis that the means for the two analysts are significantly different at \(\alpha = 0.05\). To compare two production lots of aspirin tablets, you collect samples from each and analyze them, obtaining the following results (in mg aspirin/tablet). Lot 1: \(256 \quad 248 \quad 245 \quad 245 \quad 244 \quad 248 \quad 261\) Lot 2: \(241 \quad 258 \quad 241 \quad 244 \quad 256 \quad 254\) Is there any evidence at \(\alpha = 0.05\) that there is a significant difference in the means for these two samples? This is the same data from Exercise 4.6.2 . To compare the means for the two lots, we use an unpaired t-test of the null hypothesis \(H_0 \text{: } \overline{X}_\text{Lot 1} = \overline{X}_\text{Lot 2}\) and the alternative hypothesis \(H_A \text{: } \overline{X}_\text{Lot 1} \neq \overline{X}_\text{Lot 2}\). Because there is no evidence to suggest a difference in the variances (see Exercise 4.6.2 ) we pool the standard deviations, obtaining an spool of\[s_\text{pool} = \sqrt{\frac {(7 - 1) (6.451)^2 + (6 - 1) (7.849)^2} {7 + 6 - 2}} = 7.121 \nonumber\]The means for the two samples are 249.57 mg for Lot 1 and 249.00 mg for Lot 2. The value for texp is\[t_\text{exp} = \frac {|249.57 - 249.00|} {7.121} \times \sqrt{\frac {7 \times 6} {7 + 6}} = 0.1439 \nonumber\]The critical value for t(0.05, 11) is 2.204.
Because texp is less than t(0.05, 11), we retain the null hypothesis and find no evidence at \(\alpha = 0.05\) that there is a significant difference between the means for the two lots of aspirin tablets. Suppose we are evaluating a new method for monitoring blood glucose concentrations in patients. An important part of evaluating a new method is to compare it to an established method. What is the best way to gather data for this study? Because the variation in the blood glucose levels amongst patients is large we may be unable to detect a small, but significant difference between the methods if we use different patients to gather data for each method. Using paired data, in which we analyze each patient’s blood using both methods, prevents a large variance within a population from adversely affecting a t-test of means. Typical blood glucose levels for most non-diabetic individuals range between 80 and 120 mg/dL (4.4–6.7 mM), rising to as high as 140 mg/dL (7.8 mM) shortly after eating. Higher levels are common for individuals who are pre-diabetic or diabetic. When we use paired data we first calculate the difference, di, between the paired values for each sample. Using these difference values, we then calculate the average difference, \(\overline{d}\), and the standard deviation of the differences, sd. The null hypothesis, \(H_0 \text{: } \overline{d} = 0\), is that there is no difference between the two samples, and the alternative hypothesis, \(H_A \text{: } \overline{d} \neq 0\), is that the difference between the two samples is significant. The test statistic, texp, is derived from a confidence interval around \(\overline{d}\)\[t_\text{exp} = \frac {|\overline{d}| \sqrt{n}} {s_d} \nonumber\]where n is the number of paired samples. As is true for other forms of the t-test, we compare texp to \(t(\alpha, \nu)\), where the degrees of freedom, \(\nu\), is n – 1. If texp is greater than \(t(\alpha, \nu)\), then we reject the null hypothesis and accept the alternative hypothesis. We retain the null hypothesis if texp is less than or equal to \(t(\alpha, \nu)\). This is known as a paired t-test. Marecek et al. developed a new electrochemical method for the rapid determination of the concentration of the antibiotic monensin in fermentation vats [Marecek, V.; Janchenova, H.; Brezina, M.; Betti, M. Anal. Chim. Acta 1991, 244, 15–19]. The standard method for the analysis is a test for microbiological activity, which is both difficult to complete and time-consuming. Samples were collected from the fermentation vats at various times during production and analyzed for the concentration of monensin using both methods. The results, in parts per thousand (ppt), are reported in the following table. Is there a significant difference between the methods at \(\alpha = 0.05\)? Solution: Acquiring samples over an extended period of time introduces a substantial time-dependent change in the concentration of monensin. Because the variation in concentration between samples is so large, we use a paired t-test with the following null and alternative hypotheses.\[H_0 \text{: } \overline{d} = 0 \quad \quad \quad H_A \text{: } \overline{d} \neq 0 \nonumber\]Defining the difference between the methods as\[d_i = (X_\text{elect})_i - (X_\text{micro})_i \nonumber\]we calculate the difference for each sample. The mean and the standard deviation for the differences are, respectively, 2.25 ppt and 5.63 ppt.
The value of texp is\[t_\text{exp} = \frac {|2.25| \sqrt{11}} {5.63} = 1.33 \nonumber\]which is smaller than the critical value of 2.23 for t(0.05, 10) from Appendix 4. We retain the null hypothesis and find no evidence for a significant difference in the methods at \(\alpha = 0.05\). Suppose you are studying the distribution of zinc in a lake and want to know if there is a significant difference between the concentration of Zn2+ at the sediment-water interface and its concentration at the air-water interface. You collect samples from six locations—near the lake’s center, near its drainage outlet, etc.—obtaining the results (in mg/L) shown in the table. Using this data, determine if there is a significant difference between the concentration of Zn2+ at the two interfaces at \(\alpha = 0.05\). Complete this analysis treating the data as (a) unpaired and as (b) paired. Briefly comment on your results. Treating as Unpaired Data: The mean and the standard deviation for the concentration of Zn2+ at the air-water interface are 0.5178 mg/L and 0.1732 mg/L, respectively, and the values for the sediment-water interface are 0.4445 mg/L and 0.1418 mg/L, respectively. An F-test of the variances gives an Fexp of 1.493 and an F(0.05, 5, 5) of 7.146. Because Fexp is smaller than F(0.05, 5, 5), we have no evidence at \(\alpha = 0.05\) to suggest that the difference in variances is significant. Pooling the standard deviations gives an spool of 0.1582 mg/L. An unpaired t-test gives texp as 0.8025. Because texp is smaller than t(0.05, 10), which is 2.23, we have no evidence that there is a difference in the concentration of Zn2+ between the two interfaces. Treating as Paired Data: To treat as paired data we need to calculate the difference, di, between the concentration of Zn2+ at the air-water interface and at the sediment-water interface for each location, where\[d_i = \left( \text{[Zn}^{2+} \text{]}_\text{air-water} \right)_i - \left( \text{[Zn}^{2+} \text{]}_\text{sed-water} \right)_i \nonumber\]The mean difference is 0.07333 mg/L with a standard deviation of 0.0441 mg/L. The null hypothesis and the alternative hypothesis are\[H_0 \text{: } \overline{d} = 0 \quad \quad \quad H_A \text{: } \overline{d} \neq 0 \nonumber\]and the value of texp is\[t_\text{exp} = \frac {|0.07333| \sqrt{6}} {0.0441} = 4.073 \nonumber\]Because texp is greater than t(0.05, 5), which is 2.571, we reject the null hypothesis and accept the alternative hypothesis that there is a significant difference in the concentration of Zn2+ between the air-water interface and the sediment-water interface. The difference in the concentration of Zn2+ between locations is much larger than the difference in the concentration of Zn2+ between the interfaces. Because our interest is in studying the difference between the interfaces, the larger standard deviation when treating the data as unpaired increases the probability of incorrectly retaining the null hypothesis, a type 2 error. One important requirement for a paired t-test is that the determinate and the indeterminate errors that affect the analysis must be independent of the analyte’s concentration. If this is not the case, then a sample with an unusually high concentration of analyte will have an unusually large di. Including this sample in the calculation of \(\overline{d}\) and sd gives a biased estimate for the expected mean and standard deviation.
This rarely is a problem for samples that span a limited range of analyte concentrations, such as those in Example 4.6.6 or Exercise 4.6.4 . When paired data span a wide range of concentrations, however, the magnitude of the determinate and indeterminate sources of error may not be independent of the analyte’s concentration; when true, a paired t-test may give misleading results because the paired data with the largest absolute determinate and indeterminate errors will dominate \(\overline{d}\). In this situation a regression analysis, which is the subject of the next chapter, is more appropriate method for comparing the data.Earlier in the chapter we examined several data sets consisting of the mass of a circulating United States penny. Table 4.6.1 provides one more data set. Do you notice anything unusual in this data? Of the 112 pennies included in Table 4.4.1 and Table 4.4.3, no penny weighed less than 3 g. In Table 4.6.1, however, the mass of one penny is less than 3 g. We might ask whether this penny’s mass is so different from the other pennies that it is in error.A measurement that is not consistent with other measurements is called outlier. An outlier might exist for many reasons: the outlier might belong to a different population (Is this a Canadian penny?); the outlier might be a contaminated or otherwise altered sample (Is the penny damaged or unusually dirty?); or the outlier may result from an error in the analysis (Did we forget to tare the balance?). Regardless of its source, the presence of an outlier compromises any meaningful analysis of our data. There are many significance tests that we can use to identify a potential outlier, three of which we present here.One of the most common significance tests for identifying an outlier is Dixon’s Q-test. The null hypothesis is that there are no outliers, and the alternative hypothesis is that there is an outlier. The Q-test compares the gap between the suspected outlier and its nearest numerical neighbor to the range of the entire data set (Figure 4.6.2 ).The test statistic, Qexp, is\[Q_\text{exp} = \frac {\text{gap}} {\text{range}} = \frac {|\text{outlier's value} - \text{nearest value}|} {\text{largest value} - \text{smallest value}} \nonumber\]This equation is appropriate for evaluating a single outlier. Other forms of Dixon’s Q-test allow its extension to detecting multiple outliers [Rorabacher, D. B. Anal. Chem. 1991, 63, 139–146].The value of Qexp is compared to a critical value, \(Q(\alpha, n)\), where \(\alpha\) is the probability that we will reject a valid data point (a type 1 error) and n is the total number of data points. To protect against rejecting a valid data point, usually we apply the more conservative two-tailed Q-test, even though the possible outlier is the smallest or the largest value in the data set. If Qexp is greater than \(Q(\alpha, n)\), then we reject the null hypothesis and may exclude the outlier. We retain the possible outlier when Qexp is less than or equal to \(Q(\alpha, n)\). Table 4.6.2 provides values for \(Q(\alpha, n)\) for a data set that has 3–10 values. A more extensive table is in Appendix 6. Values for \(Q(\alpha, n)\) assume an underlying normal distribution.Although Dixon’s Q-test is a common method for evaluating outliers, it is no longer favored by the International Standards Organization (ISO), which recommends the Grubb’s test. There are several versions of Grubb’s test depending on the number of potential outliers. 
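Before turning to Grubb’s test, note that the Q-test is easy to script. In the following minimal sketch the masses are hypothetical, chosen so that the smallest value, its nearest neighbor, and the largest value mirror the nine-penny data discussed below; the critical value must still come from a table such as Appendix 6.

```python
# A minimal sketch of Dixon's Q-test for a single suspected outlier.
import numpy as np

def dixon_q(data):
    """Q_exp for the end value that is farther from its nearest neighbor."""
    x = np.sort(np.asarray(data, dtype=float))
    gap = max(x[1] - x[0], x[-1] - x[-2])     # use the larger of the two end gaps
    return gap / (x[-1] - x[0])               # divide the gap by the range

# hypothetical masses (g); the endpoints mirror the nine-penny data discussed below
masses = [2.514, 3.039, 3.056, 3.059, 3.065, 3.080, 3.095, 3.101, 3.109]
print(f"Q_exp = {dixon_q(masses):.3f}")       # 0.882; compare to Q(0.05, 9) = 0.493
```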
Here we will consider the case where there is a single suspected outlier. For details on this recommendation, see International Standard ISO 5725-2, “Accuracy (trueness and precision) of measurement methods and results—Part 2: Basic methods for the determination of repeatability and reproducibility of a standard measurement method,” 1994. The test statistic for Grubb’s test, Gexp, is the distance between the sample’s mean, \(\overline{X}\), and the potential outlier, \(X_\text{out}\), in terms of the sample’s standard deviation, s.\[G_\text{exp} = \frac {|X_\text{out} - \overline{X}|} {s} \nonumber\]We compare the value of Gexp to a critical value \(G(\alpha, n)\), where \(\alpha\) is the probability that we will reject a valid data point and n is the number of data points in the sample. If Gexp is greater than \(G(\alpha, n)\), then we may reject the data point as an outlier, otherwise we retain the data point as part of the sample. Table 4.6.3 provides values for G(0.05, n) for a sample containing 3–10 values. A more extensive table is in Appendix 7. Values for \(G(\alpha, n)\) assume an underlying normal distribution. Our final method for identifying an outlier is Chauvenet’s criterion. Unlike Dixon’s Q-test and Grubb’s test, you can apply this method to any distribution as long as you know how to calculate the probability for a particular outcome. Chauvenet’s criterion states that we can reject a data point if the probability of obtaining the data point’s value is less than (2n)–1, where n is the size of the sample. For example, if n = 10, a result with a probability of less than \((2 \times 10)^{-1}\), or 0.05, is considered an outlier. To calculate a potential outlier’s probability we first calculate its standardized deviation, z\[z = \frac {|X_\text{out} - \overline{X}|} {s} \nonumber\]where \(X_\text{out}\) is the potential outlier, \(\overline{X}\) is the sample’s mean and s is the sample’s standard deviation. Note that this equation is identical to the equation for Gexp in the Grubb’s test. For a normal distribution, we can find the probability of obtaining a value of z using the probability table in Appendix 3. Table 4.6.1 contains the masses for nine circulating United States pennies. One entry, 2.514 g, appears to be an outlier. Determine if this penny is an outlier using a Q-test, Grubb’s test, and Chauvenet’s criterion. For the Q-test and Grubb’s test, let \(\alpha = 0.05\). Solution: For the Q-test the value for Qexp is\[Q_\text{exp} = \frac {|2.514 - 3.039|} {3.109 - 2.514} = 0.882 \nonumber\]From Table 4.6.2 , the critical value for Q(0.05, 9) is 0.493. Because Qexp is greater than Q(0.05, 9), we can assume the penny with a mass of 2.514 g likely is an outlier. For Grubb’s test we first need the mean and the standard deviation, which are 3.011 g and 0.188 g, respectively. The value for Gexp is\[G_\text{exp} = \frac {|2.514 - 3.011|} {0.188} = 2.64 \nonumber\]Using Table 4.6.3 , we find that the critical value for G(0.05, 9) is 2.215. Because Gexp is greater than G(0.05, 9), we can assume that the penny with a mass of 2.514 g likely is an outlier. For Chauvenet’s criterion, the critical probability is \((2 \times 9)^{-1}\), or 0.0556. The value of z is the same as Gexp, or 2.64. Using Appendix 3, the probability for z = 2.64 is 0.00415.
You should exercise caution when using a significance test for outliers because there is a chance you will reject a valid result. In addition, you should avoid rejecting an outlier if it leads to a precision that is much better than expected based on a propagation of uncertainty. Given these concerns it is not surprising that some statisticians caution against the removal of outliers [Deming, W. E. Statistical Analysis of Data; Wiley: New York, 1943 (republished by Dover: New York, 1961); p. 171].

You also can adopt a more stringent requirement for rejecting data. When using the Grubb's test, for example, the ISO 5725 guidelines suggest retaining a value if the probability for rejecting it is greater than \(\alpha = 0.05\), and flagging a value as a "straggler" if the probability for rejecting it is between \(\alpha = 0.05\) and \(\alpha = 0.01\). A "straggler" is retained unless there is a compelling reason for its rejection. The guidelines recommend using \(\alpha = 0.01\) as the minimum criterion for rejecting a possible outlier.

On the other hand, testing for outliers can provide useful information if we try to understand the source of the suspected outlier. For example, the outlier in Table 4.6.1 represents a significant change in the mass of a penny (an approximately 17% decrease in mass), which is the result of a change in the composition of the U.S. penny. In 1982 the composition of a U.S. penny changed from a brass alloy that was 95% w/w Cu and 5% w/w Zn (with a nominal mass of 3.1 g), to a pure zinc core covered with copper (with a nominal mass of 2.5 g) [Richardson, T. H. J. Chem. Educ. 1991, 68, 310–311]. The pennies in Table 4.6.1, therefore, were drawn from different populations.

This page titled 4.6: Statistical Methods for Normal Distributions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
4.7: Detection Limits
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.07%3A_Detection_Limits
The International Union of Pure and Applied Chemistry (IUPAC) defines a method's detection limit as the smallest concentration or absolute amount of analyte that has a signal significantly larger than the signal from a suitable blank [IUPAC Compendium of Chemical Terminology, Electronic Version]. Although our interest is in the amount of analyte, in this section we will define the detection limit in terms of the analyte's signal. Knowing the signal you can calculate the analyte's concentration, CA, or the moles of analyte, nA, using the equations

\[S_A = k_A C_A \text{ or } S_A = k_A n_A \nonumber\]

where kA is the method's sensitivity.

See Chapter 3 for a review of these equations.

Let's translate the IUPAC definition of the detection limit into a mathematical form by letting Smb represent the average signal for a method blank, and letting \(\sigma_{mb}\) represent the method blank's standard deviation. The null hypothesis is that the analyte is not present in the sample, and the alternative hypothesis is that the analyte is present in the sample. To detect the analyte, its signal must exceed Smb by a suitable amount; thus,

\[(S_A)_{DL} = S_{mb} \pm z \sigma_{mb} \label{4.1}\]

where \((S_A)_{DL}\) is the analyte's detection limit.

If \(\sigma_{mb}\) is not known, we can replace it with smb; Equation \ref{4.1} then becomes

\[(S_A)_{DL} = S_{mb} \pm t s_{mb} \nonumber\]

You can make similar adjustments to other equations in this section. See, for example, Kirchner, C. J. "Estimation of Detection Limits for Environmental Analytical Procedures," in Currie, L. A. (ed) Detection in Analytical Chemistry: Importance, Theory, and Practice; American Chemical Society: Washington, D. C., 1988.

The value we choose for z depends on our tolerance for reporting the analyte's concentration even if it is absent from the sample (a type 1 error). Typically, z is set to three, which, from Appendix 3, corresponds to a probability, \(\alpha\), of 0.00135. As shown in Figure 4.7.1a, there is only a 0.135% probability of detecting the analyte in a sample that actually is analyte-free.

A detection limit also is subject to a type 2 error in which we fail to find evidence for the analyte even though it is present in the sample. Consider, for example, the situation shown in Figure 4.7.1b where the signal for a sample that contains the analyte is exactly equal to (SA)DL. In this case the probability of a type 2 error is 50% because half of the sample's possible signals are below the detection limit. We correctly detect the analyte at the IUPAC detection limit only half the time. The IUPAC definition for the detection limit is the smallest signal for which we can say, at a significance level of \(\alpha\), that an analyte is present in the sample; however, failing to detect the analyte does not mean it is not present in the sample.

The detection limit often is represented, particularly when discussing public policy issues, as a distinct line that separates detectable concentrations of analytes from concentrations we cannot detect. This use of a detection limit is incorrect [Rogers, L. B. J. Chem. Educ. 1986, 63, 3–6]. As suggested by Figure 4.7.1, for an analyte whose concentration is near the detection limit there is a high probability that we will fail to detect the analyte.
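As a small numerical illustration—a minimal sketch using hypothetical blank signals, not an example from the text—the detection limit defined by Equation \ref{4.1} can be estimated in R from replicate measurements of a method blank, using the sample mean and standard deviation in place of Smb and \(\sigma_{mb}\), taking the case where the signal exceeds the blank and setting z = 3.

blank <- c(0.21, 0.18, 0.24, 0.20, 0.19, 0.22, 0.23)  # hypothetical signals for a method blank
S_mb  <- mean(blank)                                  # estimate of S_mb
s_mb  <- sd(blank)                                    # estimate of sigma_mb
S_DL  <- S_mb + 3 * s_mb                              # detection limit in the signal domain, z = 3
S_DL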
An alternative expression for the detection limit, the limit of identification, minimizes both type 1 and type 2 errors [Long, G. L.; Winefordner, J. D. Anal. Chem. 1983, 55, 712A–724A]. The analyte's signal at the limit of identification, (SA)LOI, includes an additional term, \(z \sigma_A\), to account for the distribution of the analyte's signal.

\[(S_A)_\text{LOI} = (S_A)_\text{DL} + z \sigma_A = S_{mb} + z \sigma_{mb} + z \sigma_A \nonumber\]

As shown in Figure 4.7.2, the limit of identification provides an equal probability of a type 1 and a type 2 error at the detection limit. When the analyte's concentration is at its limit of identification, there is only a 0.135% probability that its signal is indistinguishable from that of the method blank.

The ability to detect the analyte with confidence is not the same as the ability to report with confidence its concentration, or to distinguish between its concentration in two samples. For this reason the American Chemical Society's Committee on Environmental Analytical Chemistry recommends the limit of quantitation, (SA)LOQ ["Guidelines for Data Acquisition and Data Quality Evaluation in Environmental Chemistry," Anal. Chem. 1980, 52, 2242–2249].

\[(S_A)_\text{LOQ} = S_{mb} + 10 \sigma_{mb} \nonumber\]

This page titled 4.7: Detection Limits is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
4.8: Using Excel and R to Analyze Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.08%3A_Using_Excel_and_R_to_Analyze_Data
Although the calculations in this chapter are relatively straightforward, it can be tedious to work problems using nothing more than a calculator. Both Excel and R include functions for many common statistical calculations. In addition, R provides useful functions for visualizing your data.

Excel has built-in functions that we can use to complete many of the statistical calculations covered in this chapter, including reporting descriptive statistics, such as means and variances, predicting the probability of obtaining a given outcome from a binomial distribution or a normal distribution, and carrying out significance tests. Table 4.8.1 provides the syntax for many of these functions; you can find information on functions not included here by using Excel's Help menu.

Let's use Excel to provide a statistical summary of the data in Table 4.1.1. Enter the data into a spreadsheet, as shown in Figure 4.8.1. To calculate the sample's mean, for example, click on any empty cell, enter the formula

= average(b2:b8)

and press Return or Enter to replace the cell's content with Excel's calculation of the mean (3.117285714), which we round to 3.117. Excel does not have a function for the range, but we can use the functions that report the maximum value and the minimum value to calculate the range; thus

= max(b2:b8) - min(b2:b8)

returns 0.142 as an answer.

In Example 4.4.2 we showed that 91.10% of a manufacturer's analgesic tablets contained between 243 and 262 mg of aspirin. We arrived at this result by calculating the deviation, z, of each limit from the population's expected mean, \(\mu\), of 250 mg in terms of the population's expected standard deviation, \(\sigma\), of 5 mg. After we calculated values for z, we used the table in Appendix 3 to find the area under the normal distribution curve between these two limits.

We can complete this calculation in Excel using the norm.dist function. As shown in Figure 4.8.2, the function calculates the probability of obtaining a result less than x from a normal distribution with a mean of \(\mu\) and a standard deviation of \(\sigma\). To solve Example 4.4.2 using Excel enter the following formulas into separate cells

= norm.dist(243, 250, 5, TRUE)

= norm.dist(262, 250, 5, TRUE)

obtaining results of 0.080756659 and 0.991802464. Subtracting the smaller value from the larger value and adjusting to the correct number of significant figures gives the probability as 0.9910, or 99.10%.

Excel also includes a function for working with binomial distributions. The function's syntax is

= binom.dist(X, N, p, TRUE or FALSE)

where X is the number of times a particular outcome occurs in N trials, and p is the probability that X occurs in a single trial. Setting the function's last term to TRUE gives the total probability for any result up to X and setting it to FALSE gives the probability for X. Using Example 4.4.1 to test this function, we use the formula

= binom.dist(0, 27, 0.0111, FALSE)

to find the probability of finding no atoms of 13C in a molecule of cholesterol, C27H44O, which returns a value of 0.740 after adjusting for significant figures. Using the formula

= binom.dist(2, 27, 0.0111, TRUE)

we find that 99.7% of cholesterol molecules contain two or fewer atoms of 13C.

As shown in Table 4.8.1, Excel includes functions for the following significance tests covered in this chapter:

Let's use these functions to complete a t-test on the data in Table 4.4.1, which contains results for two experiments to determine the mass of a circulating U. S. penny.
Enter the data from Table 4.4.1 into a spreadsheet as shown in Figure 4.8.3. Because the data in this case are unpaired, we will use Excel to complete an unpaired t-test. Before we can complete the t-test, we use an F-test to determine whether the variances for the two data sets are equal or unequal.

To complete the F-test, we click on any empty cell, enter the formula

= f.test(b2:b8, c2:c6)

and press Return or Enter, which replaces the cell's content with the value of \(\alpha\) for which we can reject the null hypothesis of equal variances. In this case, Excel returns an \(\alpha\) of 0.56610503; because this value is not less than 0.05, we retain the null hypothesis that the variances are equal. Excel's F-test is two-tailed; for a one-tailed F-test, we use the same function, but divide the result by two; thus

= f.test(b2:b8, c2:c6)/2

Having found no evidence to suggest unequal variances, we next complete an unpaired t-test assuming equal variances, entering into any empty cell the formula

= t.test(b2:b8, c2:c6, 2, 2)

where the first 2 indicates that this is a two-tailed t-test, and the second 2 indicates that this is an unpaired t-test with equal variances. Pressing Return or Enter replaces the cell's content with the value of \(\alpha\) for which we can reject the null hypothesis of equal means. In this case, Excel returns an \(\alpha\) of 0.211627646; because this value is not less than 0.05, we retain the null hypothesis that the means are equal.

See Example 4.6.3 and Example 4.6.4 for our earlier solutions to this problem.

The other significance tests in Excel work in the same format. The following practice exercise provides you with an opportunity to test yourself.

Rework Example 4.6.5 and Example 4.6.6 using Excel.

You will find small differences between the values you obtain using Excel's built-in functions and the worked solutions in the chapter. These differences arise because Excel does not round off the results of intermediate calculations.

R is a programming environment that provides powerful capabilities for analyzing data. There are many functions built into R's standard installation and additional packages of functions are available from the R web site (www.r-project.org). Commands in R are not available from pull down menus. Instead, you interact with R by typing in commands.

You can download the current version of R from www.r-project.org. Click on the link for Download: CRAN and find a local mirror site. Click on the link for the mirror site and then use the link for Linux, MacOS X, or Windows under the heading "Download and Install R."

Let's use R to provide a statistical summary of the data in Table 4.1.1. To do this we first need to create an object that contains the data, which we do by typing in the following command.

> penny1 = c(3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198)

In R, the symbol '>' is a prompt, which indicates that the program is waiting for you to enter a command. When you press 'Return' or 'Enter,' R executes the command, displays the result (if there is a result to return), and returns the > prompt.

Table 4.8.2 lists some of the commands in R for calculating basic descriptive statistics. As is the case for Excel, R does not include stand-alone commands for all descriptive statistics of interest to us, but we can calculate them using other commands.
Using a command is easy—simply enter the appropriate code at the prompt; for example, to find the sample's variance we enter

> var(penny1)
0.002221918

In Example 4.4.2 we showed that 91.10% of a manufacturer's analgesic tablets contained between 243 and 262 mg of aspirin. We arrived at this result by calculating the deviation, z, of each limit from the population's expected mean, \(\mu\), of 250 mg in terms of the population's expected standard deviation, \(\sigma\), of 5 mg. After we calculated values for z, we used the table in Appendix 3 to find the area under the normal distribution curve between these two limits.

We can complete this calculation in R using the function pnorm. The function's general format is

pnorm(\(x, \mu, \sigma\))

where x is the limit of interest, \(\mu\) is the distribution's expected mean, and \(\sigma\) is the distribution's expected standard deviation. The function returns the probability of obtaining a result of less than x (Figure 4.8.4).

Figure 4.8.4: Shown in blue is the area returned by the function pnorm(\(x, \mu, \sigma\)).

Here is the output of an R session for solving Example 4.4.2.

> pnorm(243, 250, 5)
0.08075666
> pnorm(262, 250, 5)
0.9918025

Subtracting the smaller value from the larger value and adjusting to the correct number of significant figures gives the probability as 0.9910, or 99.10%.

R also includes functions for binomial distributions. To find the probability of obtaining a particular outcome, X, in N trials we use the dbinom function.

dbinom(X, N, p)

where X is the number of times a particular outcome occurs in N trials, and p is the probability that X occurs in a single trial. Using Example 4.4.1 to test this function, we find that the probability of finding no atoms of 13C in a molecule of cholesterol, C27H44O, is

> dbinom(0, 27, 0.0111)
0.7397997

or 0.740 after adjusting the significant figures. To find the probability of obtaining any outcome up to a maximum value of X, we use the pbinom function.

pbinom(X, N, p)

To find the percentage of cholesterol molecules that contain 0, 1, or 2 atoms of 13C, we enter

> pbinom(2, 27, 0.0111)
0.9967226

and find that the answer is 99.7% of cholesterol molecules.

R includes commands for the following significance tests covered in this chapter:

Let's use these functions to complete a t-test on the data in Table 4.4.1, which contains results for two experiments to determine the mass of a circulating U. S. penny. First, enter the data from Table 4.4.1 into two objects.

> penny1 = c(3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198)
> penny2 = c(3.052, 3.141, 3.083, 3.083, 3.048)

Because the data in this case are unpaired, we will use R to complete an unpaired t-test. Before we can complete a t-test we use an F-test to determine whether the variances for the two data sets are equal or unequal.

To complete a two-tailed F-test in R we use the command

var.test(X, Y)

where X and Y are the objects that contain the two data sets. Figure 4.8.5 shows the output from an R session to solve this problem.

Note that R does not provide the critical value for F(0.05, 6, 4); instead it reports the 95% confidence interval for Fexp. Because this confidence interval of 0.204 to 11.661 includes the expected value for F of 1.00, we retain the null hypothesis and have no evidence for a difference between the variances.
R also provides the probability of incorrectly rejecting the null hypothesis, which in this case is 0.5561.

For a one-tailed F-test the command is one of the following

var.test(X, Y, alternative = "greater")
var.test(X, Y, alternative = "less")

where "greater" is used when the alternative hypothesis is \(s_X^2 > s_Y^2\), and "less" is used when the alternative hypothesis is \(s_X^2 < s_Y^2\).

Having found no evidence suggesting unequal variances, we now complete an unpaired t-test assuming equal variances. The basic syntax for a two-tailed t-test is

t.test(X, Y, mu = 0, paired = FALSE, var.equal = FALSE)

where X and Y are the objects that contain the data sets. You can change the underlined terms to alter the nature of the t-test. Replacing "var.equal = FALSE" with "var.equal = TRUE" makes this a two-tailed t-test with equal variances, and replacing "paired = FALSE" with "paired = TRUE" makes this a paired t-test. The term "mu = 0" is the expected difference between the means, which for this problem is 0. You can, of course, change this to suit your needs. The underlined terms are default values; if you omit them, then R assumes you intend an unpaired two-tailed t-test of the null hypothesis that X = Y with unequal variances. Figure 4.8.6 shows the output of an R session for this problem.

We can interpret the results of this t-test in two ways. First, the p-value of 0.2116 means there is a 21.16% probability of incorrectly rejecting the null hypothesis. Second, the 95% confidence interval of –0.024 to 0.0958 for the difference between the sample means includes the expected value of zero. Both ways of looking at the results provide no evidence for rejecting the null hypothesis; thus, we retain the null hypothesis and find no evidence for a difference between the two samples.

The other significance tests in R work in the same format. The following practice exercise provides you with an opportunity to test yourself.

Rework Example 4.6.5 and Example 4.6.6 using R.

Shown here are copies of R sessions for each problem. You will find small differences between the values given here for texp and for Fexp and those values shown with the worked solutions in the chapter. These differences arise because R does not round off the results of intermediate calculations.

Example 4.6.5

> AnalystA = c(86.82, 87.04, 86.93, 87.01, 86.20, 87.00)
> AnalystB = c(81.01, 86.15, 81.73, 83.19, 80.27, 83.94)
> var.test(AnalystB, AnalystA)

F test to compare two variances

data: AnalystB and AnalystA
F = 45.6358, num df = 5, denom df = 5, p-value = 0.0007148
alternative hypothesis: true ratio of variances is not equal to 1
95 percent confidence interval:
6.385863 326.130970
sample estimates:
ratio of variances
45.63582

> t.test(AnalystA, AnalystB, var.equal=FALSE)

Welch Two Sample t-test

data: AnalystA and AnalystB
t = 4.6147, df = 5.219, p-value = 0.005177
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
1.852919 6.383748
sample estimates:
mean of x mean of y
86.83333 82.71500

Example 4.6.6

> micro = c(129.5, 89.6, 76.6, 52.2, 110.8, 50.4, 72.4, 141.4, 75.0, 34.1, 60.3)
> elect = c(132.3, 91.0, 73.6, 58.2, 104.2, 49.9, 82.1, 154.1, 73.4, 38.1, 60.1)
> t.test(micro, elect, paired=TRUE)

Paired t-test

data: micro and elect
t = -1.3225, df = 10, p-value = 0.2155
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-6.028684 1.537775
sample estimates:
mean of the differences
-2.245455

Unlike Excel, R also includes functions for evaluating outliers.
These functions are not part of R's standard installation. To install them enter the following command within R (note: you will need an internet connection to download the package of functions).

> install.packages("outliers")

After you install the package, you must load the functions into R by using the following command (note: you need to do this step each time you begin a new R session as the package does not automatically load when you start R).

> library("outliers")

You need to install a package once, but you need to load the package each time you plan to use it. There are ways to configure R so that it automatically loads certain packages; see An Introduction to R for more information.

Let's use this package to find the outlier in Table 4.6.1 using both Dixon's Q-test and Grubb's test. The commands for these tests are

dixon.test(X, type = 10, two.sided = TRUE)
grubbs.test(X, type = 10, two.sided = TRUE)

where X is the object that contains the data, "type = 10" specifies that we are looking for one outlier, and "two.sided = TRUE" indicates that we are using the more conservative two-tailed test. Both tests have other variants that allow for the testing of outliers on both ends of the data set ("type = 11") or for more than one outlier ("type = 20"), but we will not consider these here. Figure 4.8.7 shows the output of a session for this problem. For both tests the very small p-value indicates that we can treat as an outlier the penny with a mass of 2.514 g.

One of R's more useful features is the ability to visualize data. Visualizing data is important because it provides us with an intuitive feel for our data that can help us in applying and evaluating statistical tests. It is tempting to believe that a statistical analysis is foolproof, particularly if the probability for incorrectly rejecting the null hypothesis is small. Looking at a visual display of our data, however, can help us determine whether our data is normally distributed—a requirement for most of the significance tests in this chapter—and can help us identify potential outliers. There are many useful ways to look at data, four of which we consider here.

Visualizing data is important, a point we will return to in Chapter 5 when we consider the mathematical modeling of data.

To plot data in R, we will use the package "lattice," which you will need to load using the following command.

> library("lattice")

To demonstrate the types of plots we can generate, we will use the object "penny," which contains the masses of the 100 pennies in Table 4.4.3.

You do not need to use the command install.packages this time because lattice was automatically installed on your computer when you downloaded R.

Our first visualization is a histogram. To construct the histogram we use mass to divide the pennies into bins and plot the number of pennies or the percent of pennies in each bin on the y-axis as a function of mass on the x-axis. Figure 4.8.8 shows the result of entering the command

> histogram(penny, type = "percent", xlab = "Mass (g)", ylab = "Percent of Pennies", main = "Histogram of Data in Table 4.4.3")

A histogram allows us to visualize the data's distribution. In this example the data appear to follow a normal distribution, although the largest bin does not include the mean of 3.095 g and the distribution is not perfectly symmetric. One limitation of a histogram is that its appearance depends on how we choose to bin the data.
Increasing the number of bins and centering the bins around the data's mean gives a histogram that more closely approximates a normal distribution.

An alternative to the histogram is a kernel density plot, which basically is a smoothed histogram. In this plot each value in the data set is replaced with a normal distribution curve whose width is a function of the data set's standard deviation and size. The resulting curve is a summation of the individual distributions. Figure 4.8.9 shows the result of entering the command

> densityplot(penny, xlab = "Mass of Pennies (g)", main = "Kernel Density Plot of Data in Table 4.4.3")

The circles at the bottom of the plot show the mass of each penny in the data set. This display provides a more convincing picture that the data in Table 4.4.3 are normally distributed, although we see evidence of a small clustering of pennies with a mass of approximately 3.06 g.

We analyze samples to characterize the parent population. To reach a meaningful conclusion about a population, the samples must be representative of the population. One important requirement is that the samples are random. A dot chart provides a simple visual display that allows us to examine the data for non-random trends. Figure 4.8.10 shows the result of entering

> dotchart(penny, xlab = "Mass of Pennies (g)", ylab = "Penny Number", main = "Dotchart of Data in Table 4.4.3")

In this plot the masses of the 100 pennies are arranged along the y-axis in the order in which they were sampled. If we see a pattern in the data along the y-axis, such as a trend toward smaller masses as we move from the first penny to the last penny, then we have clear evidence of non-random sampling. Because our data do not show a pattern, we have more confidence in the quality of our data.

The last plot we will consider is a box plot, which is a useful way to identify potential outliers without making any assumptions about the data's distribution. A box plot contains four pieces of information about a data set: the median, the middle 50% of the data, the smallest value and the largest value within a set distance of the middle 50% of the data, and possible outliers. Figure 4.8.11 shows the result of entering

> bwplot(penny, xlab = "Mass of Pennies (g)", main = "Boxplot of Data in Table 4.4.3")

The black dot (•) is the data set's median. The rectangular box shows the range of masses spanning the middle 50% of the pennies. This also is known as the interquartile range, or IQR. The dashed lines, which are called "whiskers," extend to the smallest value and the largest value that are within \(\pm 1.5 \times \text{IQR}\) of the rectangular box. Potential outliers are shown as open circles (o). For normally distributed data the median is near the center of the box and the whiskers will be equidistant from the box. As is often the case in statistics, the converse is not true—finding that a boxplot is perfectly symmetric does not prove that the data are normally distributed.

To find the interquartile range you first find the median, which divides the data in half. The median of each half provides the limits for the box. The IQR is the median of the upper half of the data minus the median for the lower half of the data. For the data in Table 4.4.3 the median is 3.098. The median for the lower half of the data is 3.068 and the median for the upper half of the data is 3.115. The IQR is 3.115 – 3.068 = 0.047. You can use the command "summary(penny)" in R to obtain these values.
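The quartile arithmetic described above also is easy to reproduce directly. The short sketch below is not part of the original text; it assumes the object penny from Table 4.4.3 already exists and uses the median-of-halves definition of the quartiles given above, which may differ slightly from the default definition used by R's quantile function.

penny_sorted <- sort(penny)                  # the 100 masses in increasing order
lower_half   <- penny_sorted[1:50]           # lower half of the data
upper_half   <- penny_sorted[51:100]         # upper half of the data
q1  <- median(lower_half)                    # 3.068
q3  <- median(upper_half)                    # 3.115
iqr <- q3 - q1                               # 0.047
iqr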
The lower "whisker" extends to the first data point with a mass larger than

3.068 – 1.5 \(\times\) IQR = 3.068 – 1.5 \(\times\) 0.047 = 2.9975

which for this data is 2.998 g. The upper "whisker" extends to the last data point with a mass smaller than

3.115 + 1.5 \(\times\) IQR = 3.115 + 1.5 \(\times\) 0.047 = 3.1855

which for this data is 3.181 g.

The box plot in Figure 4.8.11 is consistent with the histogram (Figure 4.8.8) and the kernel density plot (Figure 4.8.9). Together, the three plots provide evidence that the data in Table 4.4.3 are normally distributed. The potential outlier, whose mass is 3.198 g, is not sufficiently far away from the upper whisker to be of concern, particularly as the size of the data set (n = 100) is so large. A Grubb's test on the potential outlier does not provide evidence for treating it as an outlier.

Use R to create a data set consisting of 100 values from a uniform distribution by entering the command

> data = runif(100, min = 0, max = 100)

A uniform distribution is one in which every value between the minimum and the maximum is equally probable. Examine the data set by creating a histogram, a kernel density plot, a dot chart, and a box plot. Briefly comment on what the plots tell you about your sample and its parent population.

Because we are selecting a random sample of 100 members from a uniform distribution, you will see subtle differences between your plots and the plots shown as part of this answer. Here is a record of my R session and the resulting plots.

> data = runif(100, min = 0, max = 100)
> data
18.928795 80.423589 39.399693 23.757624 30.088554 76.622174 36.487084 62.186771 81.115515 15.726404 85.765317 53.994179 7.919424 10.125832 93.153308 38.079322 70.268597 49.879331 73.115203 99.329723 48.203305 33.093579 73.410984 75.128703 98.682127 11.433861 53.337359 81.705906 95.444703 96.843476 68.251721 40.567993 32.761695 74.635385 70.914957 96.054750 28.448719 88.580214 95.059215 20.316015 9.828515 44.172774 99.648405 85.593858 82.745774 54.963426 65.563743 87.820985 17.791443 26.417481 72.832037 5.518637 58.231329 10.213343 40.581266 6.584000 81.261052 48.534478 51.830513 17.214508 31.232099 60.545307 19.197450 60.485374 50.414960 88.908862 68.939084 92.515781 72.414388 83.195206 74.783176 10.643619 41.775788 20.464247 14.547841 89.887518 56.217573 77.606742 26.956787 29.641171 97.624246 46.406271 15.906540 23.007485 17.715668 84.652814 29.379712 4.093279 46.213753 57.963604 91.160366 34.278918 88.352789 93.004412 31.055807 47.822329 24.052306 95.498610 21.089686 2.629948

> histogram(data, type = "percent")
> densityplot(data)
> dotchart(data)
> bwplot(data)

The histogram (far left) divides the data into eight bins, each of which contains between 10 and 15 members. As we expect for a uniform distribution, the histogram's overall pattern suggests that each outcome is equally probable. In interpreting the kernel density plot (second from left), it is important to remember that it treats each data point as if it is from a normally distributed population (even though, in this case, the underlying population is uniform). Although the plot appears to suggest that there are two normally distributed populations, the individual results shown at the bottom of the plot provide further evidence for a uniform distribution. The dot chart (second from right) shows no trend along the y-axis, which indicates that the individual members of this sample were drawn at random from the population.
The distribution along the x-axis also shows no pattern, as expected for a uniform distribution. Finally, the box plot (far right) shows no evidence of outliers.

This page titled 4.8: Using Excel and R to Analyze Data is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
4.9: Problems
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.09%3A_Problems
1. The following masses were recorded for 12 different U.S. quarters (all given in grams):

Report the mean, median, range, standard deviation and variance for this data.

2. A determination of acetaminophen in 10 separate tablets of Excedrin Extra Strength Pain Reliever gives the following results (in mg).

(a) Report the mean, median, range, standard deviation and variance for this data.

(b) Assuming that \(\overline{X}\) and s2 are good approximations for \(\mu\) and for \(\sigma^2\), and that the population is normally distributed, what percentage of tablets contain more than the standard amount of 250 mg acetaminophen per tablet?

The data in this problem are from Simonian, M. H.; Dinh, S.; Fray, L. A. Spectroscopy 1993, 8, 37–47.

3. Salem and Galan developed a new method to determine the amount of morphine hydrochloride in tablets. An analysis of tablets with different nominal dosages gave the following results (in mg/tablet).

(a) For each dosage, calculate the mean and the standard deviation for the mg of morphine hydrochloride per tablet.

(b) For each dosage level, and assuming that \(\overline{X}\) and s2 are good approximations for \(\mu\) and for \(\sigma^2\), and that the population is normally distributed, what percentage of tablets contain more than the nominal amount of morphine hydrochloride per tablet?

The data in this problem are from Salem, I. I.; Galan, A. C. Anal. Chim. Acta 1993, 283, 334–337.

4. Daskalakis and co-workers evaluated several procedures for digesting oyster and mussel tissue prior to analyzing them for silver. To evaluate the procedures they spiked samples with known amounts of silver and analyzed the samples to determine the amount of silver, reporting results as the percentage of added silver found in the analysis. A procedure was judged acceptable if its spike recoveries fell within the range 100±15%. The spike recoveries for one method are shown here. Assuming a normal distribution for the spike recoveries, what is the probability that any single spike recovery is within the accepted range?

The data in this problem are from Daskalakis, K. D.; O'Connor, T. P.; Crecelius, E. A. Environ. Sci. Technol. 1997, 31, 2303–2306. See Chapter 15 to learn more about using a spike recovery to evaluate an analytical method.

5. The formula weight (FW) of a gas can be determined using the following form of the ideal gas law

\[FW = \frac {g \text{R} T} {P V} \nonumber\]

where g is the mass in grams, R is the gas constant, T is the temperature in Kelvin, P is the pressure in atmospheres, and V is the volume in liters. In a typical analysis the following data are obtained (with estimated uncertainties in parentheses)

g = 0.118 g (± 0.002 g)
R = 0.082056 L atm mol–1 K–1 (± 0.000001 L atm mol–1 K–1)
T = 298.2 K (± 0.1 K)
P = 0.724 atm (± 0.005 atm)
V = 0.250 L (± 0.005 L)

(a) What is the compound's formula weight and its estimated uncertainty?

(b) To which variable(s) should you direct your attention if you wish to improve the uncertainty in the compound's molecular weight?

6. To prepare a standard solution of Mn2+, a 0.250 g sample of Mn is dissolved in 10 mL of concentrated HNO3 (measured with a graduated cylinder). The resulting solution is quantitatively transferred to a 100-mL volumetric flask and diluted to volume with distilled water.
A 10-mL aliquot of the solution is pipeted into a 500-mL volumetric flask and diluted to volume.

(a) Express the concentration of Mn in mg/L, and estimate its uncertainty using a propagation of uncertainty.

(b) Can you improve the concentration's uncertainty by using a pipet to measure the HNO3, instead of a graduated cylinder?

7. The mass of a hygroscopic compound is measured using the technique of weighing by difference. In this technique the compound is placed in a sealed container and weighed. A portion of the compound is removed and the container and the remaining material are reweighed. The difference between the two masses gives the sample's mass. A solution of a hygroscopic compound with a gram formula weight of 121.34 g/mol (±0.01 g/mol) is prepared in the following manner. A sample of the compound and its container has a mass of 23.5811 g. A portion of the compound is transferred to a 100-mL volumetric flask and diluted to volume. The mass of the compound and container after the transfer is 22.1559 g. Calculate the compound's molarity and estimate its uncertainty by a propagation of uncertainty.

8. Use a propagation of uncertainty to show that the standard error of the mean for n determinations is \(\sigma / \sqrt{n}\).

9. Beginning with Equation 4.6.4 and Equation 4.6.5, use a propagation of uncertainty to derive Equation 4.6.6.

10. What is the smallest mass you can measure on an analytical balance that has a tolerance of ±0.1 mg, if the relative error must be less than 0.1%?

11. Which of the following is the best way to dispense 100.0 mL if we wish to minimize the uncertainty: (a) use a 50-mL pipet twice; (b) use a 25-mL pipet four times; or (c) use a 10-mL pipet ten times?

12. You can dilute a solution by a factor of 200 using readily available pipets (1-mL to 100-mL) and volumetric flasks (10-mL to 1000-mL) in either one step, two steps, or three steps. Limiting yourself to the glassware in Table 4.2.1, determine the proper combination of glassware to accomplish each dilution, and rank them in order of their most probable uncertainties.

13. Explain why changing all values in a data set by a constant amount will change \(\overline{X}\) but has no effect on the standard deviation, s.

14. Obtain a sample of a metal, or other material, from your instructor and determine its density by one or both of the following methods:

Method A: Determine the sample's mass with a balance. Calculate the sample's volume using appropriate linear dimensions.

Method B: Determine the sample's mass with a balance. Calculate the sample's volume by measuring the amount of water it displaces by adding water to a graduated cylinder, reading the volume, adding the sample, and reading the new volume. The difference in volumes is equal to the sample's volume.

Determine the density at least five times.

(a) Report the mean, the standard deviation, and the 95% confidence interval for your results.

(b) Find the accepted value for the metal's density and determine the absolute and relative error for your determination of the metal's density.

(c) Use a propagation of uncertainty to determine the uncertainty for your method of analysis. Is the result of this calculation consistent with your experimental results? If not, suggest some possible reasons for this disagreement.

15. How many carbon atoms must a molecule have if the mean number of 13C atoms per molecule is at least one? What percentage of such molecules will have no atoms of 13C?
16. In Example 4.4.1 we determined the probability that a molecule of cholesterol, C27H44O, had no atoms of 13C.

(a) Calculate the probability that a molecule of cholesterol has 1 atom of 13C.

(b) What is the probability that a molecule of cholesterol has two or more atoms of 13C?

17. Berglund and Wichardt investigated the quantitative determination of Cr in high-alloy steels using a potentiometric titration of Cr(VI). Before the titration, samples of the steel were dissolved in acid and the chromium oxidized to Cr(VI) using peroxydisulfate. Shown here are the results (as % w/w Cr) for the analysis of a reference steel. Calculate the mean, the standard deviation, and the 95% confidence interval about the mean. What does this confidence interval mean?

The data in this problem are from Berglund, B.; Wichardt, C. Anal. Chim. Acta 1990, 236, 399–410.

18. Ketkar and co-workers developed an analytical method to determine trace levels of atmospheric gases. An analysis of a sample that is 40.0 parts per thousand (ppt) 2-chloroethylsulfide gave the following results.

(a) Determine whether there is a significant difference between the experimental mean and the expected value at \(\alpha = 0.05\).

(b) As part of this study, a reagent blank was analyzed 12 times giving a mean of 0.16 ppt and a standard deviation of 1.20 ppt. What are the IUPAC detection limit, the limit of identification, and limit of quantitation for this method assuming \(\alpha = 0.05\)?

The data in this problem are from Ketkar, S. N.; Dulak, J. G.; Dheandhanou, S.; Fite, W. L. Anal. Chim. Acta 1991, 245, 267–270.

19. To test a spectrophotometer's accuracy a solution of 60.06 ppm K2Cr2O7 in 5.0 mM H2SO4 is prepared and analyzed. This solution has an expected absorbance of 0.640 at 350.0 nm in a 1.0-cm cell when using 5.0 mM H2SO4 as a reagent blank. Several aliquots of the solution produce the following absorbance values. Determine whether there is a significant difference between the experimental mean and the expected value at \(\alpha = 0.01\).

20. Monna and co-workers used radioactive isotopes to date sediments from lakes and estuaries. To verify this method they analyzed a 208Po standard known to have an activity of 77.5 decays/min, obtaining the following results. Determine whether there is a significant difference between the mean and the expected value at \(\alpha = 0.05\).

The data in this problem are from Monna, F.; Mathieu, D.; Marques, A. N.; Lancelot, J.; Bernat, M. Anal. Chim. Acta 1996, 330, 107–116.

21. A 2.6540-g sample of an iron ore, which is 53.51% w/w Fe, is dissolved in a small portion of concentrated HCl and diluted to volume in a 250-mL volumetric flask. A spectrophotometric determination of the concentration of Fe in this solution yields results of 5840, 5770, 5650, and 5660 ppm. Determine whether there is a significant difference between the experimental mean and the expected value at \(\alpha = 0.05\).

22. Horvat and co-workers used atomic absorption spectroscopy to determine the concentration of Hg in coal fly ash. Of particular interest to the authors was developing an appropriate procedure for digesting samples and releasing the Hg for analysis. As part of their study they tested several reagents for digesting samples. Their results using HNO3 and using a 1 + 3 mixture of HNO3 and HCl are shown here. All concentrations are given as ppb Hg sample. Determine whether there is a significant difference between these methods at \(\alpha = 0.05\).

The data in this problem are from Horvat, M.; Lupsina, V.; Pihlar, B. Anal. Chim.
Acta 1991, 243, 71–79.

23. Lord Rayleigh, John William Strutt, was one of the most well known scientists of the late nineteenth and early twentieth centuries, publishing over 440 papers and receiving the Nobel Prize in 1904 for the discovery of argon. An important turning point in Rayleigh's discovery of Ar was his experimental measurements of the density of N2. Rayleigh approached this experiment in two ways: first by taking atmospheric air and removing O2 and H2; and second, by chemically producing N2 by decomposing nitrogen containing compounds (NO, N2O, and NH4NO3) and again removing O2 and H2. The following table shows his results for the density of N2, as published in Proc. Roy. Soc. 1894, LV, 340 (publication 210); all values are the grams of gas at an equivalent volume, pressure, and temperature. Explain why this data led Rayleigh to look for and to discover Ar. You can read more about this discovery here: Larsen, R. D. J. Chem. Educ. 1990, 67, 925–928.

24. Gács and Ferraroli reported a method for monitoring the concentration of SO2 in air. They compared their method to the standard method by analyzing urban air samples collected from a single location. Samples were collected by drawing air through a collection solution for 6 min. Shown here is a summary of their results with SO2 concentrations reported in \(\mu \text{L/m}^3\). Using an appropriate statistical test, determine whether there is any significant difference between the standard method and the new method at \(\alpha = 0.05\).

The data in this problem are from Gács, I.; Ferraroli, R. Anal. Chim. Acta 1992, 269, 177–185.

25. One way to check the accuracy of a spectrophotometer is to measure absorbances for a series of standard dichromate solutions obtained from the National Institute of Standards and Technology. Absorbances are measured at 257 nm and compared to the accepted values. The results obtained when testing a newly purchased spectrophotometer are shown here. Determine if the tested spectrophotometer is accurate at \(\alpha = 0.05\).

26. Maskarinec and co-workers investigated the stability of volatile organics in environmental water samples. Of particular interest was establishing the proper conditions to maintain the sample's integrity between its collection and its analysis. Two preservatives were investigated—ascorbic acid and sodium bisulfate—and maximum holding times were determined for a number of volatile organics and water matrices. The following table shows results for the holding time (in days) of nine organic compounds in surface water. Determine whether there is a significant difference in the effectiveness of the two preservatives at \(\alpha = 0.10\).

The data in this problem are from Maskarinec, M. P.; Johnson, L. H.; Holladay, S. K.; Moody, R. L.; Bayne, C. K.; Jenkins, R. A. Environ. Sci. Technol. 1990, 24, 1665–1670.

27. Karstang and Kvalhein reported a new method to determine the weight percent of kaolinite in complex clay minerals using X-ray diffraction. To test the method, nine samples containing known amounts of kaolinite were prepared and analyzed. The results (as % w/w kaolinite) are shown here. Evaluate the accuracy of the method at \(\alpha = 0.05\).

The data in this problem are from Karstang, T. V.; Kvalhein, O. M. Anal. Chem. 1991, 63, 767–772.

28. Mizutani, Yabuki and Asai developed an electrochemical method for analyzing l-malate.
As part of their study they analyzed a series of beverages using both their method and a standard spectrophotometric procedure based on a clinical kit purchased from Boehringer Scientific. The following table summarizes their results. All values are in ppm.

The data in this problem are from Mizutani, F.; Yabuki, S.; Asai, M. Anal. Chim. Acta 1991, 245, 145–150.

29. Alexiev and colleagues describe an improved photometric method for determining Fe3+ based on its ability to catalyze the oxidation of sulphanilic acid by KIO4. As part of their study, the concentration of Fe3+ in human serum samples was determined by the improved method and the standard method. The results, with concentrations in \(\mu \text{mol/L}\), are shown in the following table. Determine whether there is a significant difference between the two methods at \(\alpha = 0.05\).

The data in this problem are from Alexiev, A.; Rubino, S.; Deyanova, M.; Stoyanova, A.; Sicilia, D.; Perez Bendito, D. Anal. Chim. Acta, 1994, 295, 211–219.

30. Ten laboratories were asked to determine an analyte's concentration in three standard test samples. Following are the results, in \(\mu \text{g/ml}\). Determine if there are any potential outliers in Sample 1, Sample 2 or Sample 3. Use all three methods—Dixon's Q-test, Grubb's test, and Chauvenet's criterion—and compare the results to each other. For Dixon's Q-test and for the Grubb's test, use a significance level of \(\alpha = 0.05\).

The data in this problem are adapted from Steiner, E. H. "Planning and Analysis of Results of Collaborative Tests," in Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists: Washington, D. C., 1975.

31. When copper metal and powdered sulfur are placed in a crucible and ignited, the product is a sulfide with an empirical formula of CuxS. The value of x is determined by weighing the Cu and the S before ignition and finding the mass of CuxS when the reaction is complete (any excess sulfur leaves as SO2). The following table shows the Cu/S ratios from 62 such experiments (note that the values are organized from smallest-to-largest by rows).

(a) Calculate the mean, the median, and the standard deviation for this data.

(b) Construct a histogram for this data. From a visual inspection of your histogram, do the data appear normally distributed?

(c) In a normally distributed population 68.26% of all members lie within the range \(\mu \pm 1 \sigma\). What percentage of the data lies within the range \(\overline{X} \pm 1 \sigma\)? Does this support your answer to the previous question?

(d) Assuming that \(\overline{X}\) and \(s^2\) are good approximations for \(\mu\) and for \(\sigma^2\), what percentage of all experimentally determined Cu/S ratios should be greater than 2? How does this compare with the experimental data? Does this support your conclusion about whether the data is normally distributed?

(e) It has been reported that this method of preparing copper sulfide results in a non-stoichiometric compound with a Cu/S ratio of less than 2. Determine if the mean value for this data is significantly less than 2 at a significance level of \(\alpha = 0.01\).

See Blanchnik, R.; Müller, A. "The Formation of Cu2S From the Elements I. Copper Used in Form of Powders," Thermochim. Acta, 2000, 361, 31-52 for a discussion of some of the factors affecting the formation of non-stoichiometric copper sulfide. The data in this problem were collected by students at DePauw University.
32. Real-time quantitative PCR is an analytical method for determining trace amounts of DNA. During the analysis, each cycle doubles the amount of DNA. A probe species that fluoresces in the presence of DNA is added to the reaction mixture and the increase in fluorescence is monitored during the cycling. The cycle threshold, \(C_t\), is the cycle when the fluorescence exceeds a threshold value. The data in the following table shows \(C_t\) values for three samples using real-time quantitative PCR. Each sample was analyzed 18 times. Examine this data and write a brief report on your conclusions. Issues you may wish to address include the presence of outliers in the samples, a summary of the descriptive statistics for each sample, and any evidence for a difference between the samples.

The data in this problem are from Burns, M. J.; Nixon, G. J.; Foy, C. A.; Harris, N. BMC Biotechnol. 2005, 5:31 (open access publication).

This page titled 4.9: Problems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
4.10: Additional Resources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.10%3A_Additional_Resources
The following experiments provide useful introductions to the statistical analysis of data in the analytical chemistry laboratory.

A more comprehensive discussion of the analysis of data, which includes all topics considered in this chapter as well as additional material, is found in many textbooks on statistics or data analysis; several such texts are listed here.

The importance of defining statistical terms is covered in the following papers.

The detection of outliers, particularly when working with a small number of samples, is discussed in the following papers.

The following papers provide additional information on error and uncertainty, including the propagation of uncertainty.

Consult the following resources for a further discussion of detection limits.

The following articles provide thoughts on the limitations of statistical analysis based on significance testing.

The following resources provide additional information on using Excel, including reports of errors in its handling of some statistical procedures.

To learn more about using R, consult the following resources.

The following papers provide insight into visualizing data.

This page titled 4.10: Additional Resources is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
4.11: Chapter Summary and Key Terms
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.11%3A_Chapter_Summary_and_Key_Terms
The data we collect are characterized by their central tendency (where the values cluster), and their spread (the variation of individual values around the central value). We report our data's central tendency by stating the mean or median, and our data's spread using the range, standard deviation or variance. Our collection of data is subject to errors, including determinate errors that affect the data's accuracy and indeterminate errors that affect its precision. A propagation of uncertainty allows us to estimate how these determinate and indeterminate errors affect our results.

When we analyze a sample several times the distribution of the results is described by a probability distribution, two examples of which are the binomial distribution and the normal distribution. Knowing the type of distribution allows us to determine the probability of obtaining a particular range of results. For a normal distribution we express this range as a confidence interval.

A statistical analysis allows us to determine whether our results are significantly different from known values, or from values obtained by other analysts, by other methods of analysis, or for other samples. We can use a t-test to compare mean values and an F-test to compare variances. To compare two sets of data you first must determine whether the data is paired or unpaired. For unpaired data you also must decide if you can pool the standard deviations. A decision about whether to retain an outlying value can be made using Dixon's Q-test, Grubb's test, or Chauvenet's criterion. You should be sure to exercise caution if you decide to reject an outlier.

Finally, the detection limit is a statistical statement about the smallest amount of analyte we can detect with confidence. A detection limit is not exact since its value depends on how willing we are to falsely report the analyte's presence or absence in a sample. When reporting a detection limit you should clearly indicate how you arrived at its value.

Key terms: alternative hypothesis, box plot, confidence interval, detection limit, dot chart, Grubb's test, kernel density plot, mean, method error, one-tailed significance test, paired t-test, probability distribution, range, sample, standard deviation, tolerance, type 1 error, unpaired data, bias, central limit theorem, constant determinate error, determinate error, error, histogram, limit of identification, median, normal distribution, outlier, personal error, propagation of uncertainty, repeatability, sampling error, standard error of the mean, t-test, type 2 error, variance, binomial distribution, Chauvenet's criterion, degrees of freedom, Dixon's Q-test, F-test, indeterminate error, limit of quantitation, measurement error, null hypothesis, paired data, population, proportional determinate error, reproducibility, significance test, Standard Reference Material, two-tailed significance test, uncertainty.

This page titled 4.11: Chapter Summary and Key Terms is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
5.1: Analytical Signals
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/05%3A_Standardizing_Analytical_Methods/5.01%3A_Analytical_Signals
To standardize an analytical method we use standards that contain known amounts of analyte. The accuracy of a standardization, therefore, depends on the quality of the reagents and the glassware we use to prepare these standards. For example, in an acid–base titration the stoichiometry of the acid–base reaction defines the relationship between the moles of analyte and the moles of titrant. In turn, the moles of titrant are the product of the titrant's concentration and the volume of titrant used to reach the equivalence point. The accuracy of a titrimetric analysis, therefore, is never better than the accuracy with which we know the titrant's concentration.

See Chapter 9 for a thorough discussion of titrimetric methods of analysis.

There are two categories of analytical standards: primary standards and secondary standards. A primary standard is a reagent that we can use to dispense an accurately known amount of analyte. For example, a 0.1250-g sample of K2Cr2O7 contains \(4.249 \times 10^{-4}\) moles of K2Cr2O7. If we place this sample in a 250-mL volumetric flask and dilute to volume, the concentration of K2Cr2O7 in the resulting solution is \(1.700 \times 10^{-3} \text{ M}\). A primary standard must have a known stoichiometry, a known purity (or assay), and it must be stable during long-term storage. Because it is difficult to establish accurately the degree of hydration, even after drying, a hydrated reagent usually is not a primary standard.

Reagents that do not meet these criteria are secondary standards. The concentration of a secondary standard is determined relative to a primary standard. Lists of acceptable primary standards are available (see, for instance, Smith, B. W.; Parsons, M. L. J. Chem. Educ. 1973, 50, 679–681; or Moody, J. R.; Greenberg, P. R.; Pratt, K. W.; Rains, T. C. Anal. Chem. 1988, 60, 1203A–1218A). Appendix 8 provides examples of some common primary standards.

NaOH is one example of a secondary standard. Commercially available NaOH contains impurities of NaCl, Na2CO3, and Na2SO4, and readily absorbs H2O from the atmosphere. To determine the concentration of NaOH in a solution, we titrate it against a primary standard weak acid, such as potassium hydrogen phthalate, KHC8H4O4.

Preparing a standard often requires additional reagents that are not primary standards or secondary standards, such as a suitable solvent or reagents needed to adjust the standard's matrix. These solvents and reagents are potential sources of additional analyte, which, if not accounted for, produce a determinate error in the standardization. If available, reagent grade chemicals that conform to standards set by the American Chemical Society are used [Committee on Analytical Reagents, Reagent Chemicals, 8th ed., American Chemical Society: Washington, D. C., 1993]. The label on the bottle of a reagent grade chemical (Figure 5.1.1) lists either the limits for specific impurities or provides an assay for the impurities. We can improve the quality of a reagent grade chemical by purifying it, or by conducting a more accurate assay. As discussed later in the chapter, we can correct for contributions to Stotal from reagents used in an analysis by including an appropriate blank determination in the analytical procedure.

It often is necessary to prepare a series of standards, each with a different concentration of analyte. We can prepare these standards in two ways.
If the range of concentrations is limited to one or two orders of magnitude, then each solution is best prepared by transferring a known mass or volume of the pure standard to a volumetric flask and diluting to volume.
When working with a larger range of concentrations, particularly a range that extends over more than three orders of magnitude, standards are best prepared by a serial dilution from a single stock solution. In a serial dilution we prepare the most concentrated standard and then dilute a portion of that solution to prepare the next most concentrated standard. Next, we dilute a portion of the second standard to prepare a third standard, continuing this process until we have prepared all of our standards. Serial dilutions must be prepared with extra care because an error in preparing one standard is passed on to all succeeding standards.
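Although the text does not include a worked calculation here, the arithmetic behind preparing standards is simple enough to sketch in code. The following Python snippet reproduces the K2Cr2O7 example above and then illustrates a serial dilution; the molar mass used for K2Cr2O7 (294.18 g/mol) and the 1:10 dilution scheme are our own illustrative choices, not values taken from the text.

```python
# Minimal sketch: a standard prepared from a primary standard, then a serial dilution.
# The K2Cr2O7 numbers reproduce the example above; the 1:10 dilution factors are
# illustrative assumptions only.

MM_K2CR2O7 = 294.18          # g/mol, molar mass of K2Cr2O7

def molarity_from_mass(mass_g, molar_mass, volume_L):
    """Concentration of a standard prepared by dissolving a weighed mass in a flask."""
    return (mass_g / molar_mass) / volume_L

stock = molarity_from_mass(0.1250, MM_K2CR2O7, 0.250)
print(f"primary standard: {stock:.3e} M")      # ~1.700e-03 M

def serial_dilution(c_stock, aliquot_mL, final_mL, n_standards):
    """Dilute an aliquot of each solution to prepare the next, n_standards times."""
    concs, c = [], c_stock
    for _ in range(n_standards):
        c = c * aliquot_mL / final_mL
        concs.append(c)
    return concs

# dilute 10.00 mL of each solution to 100.0 mL to prepare the next standard
for i, c in enumerate(serial_dilution(stock, 10.00, 100.0, 4), 1):
    print(f"standard {i}: {c:.3e} M")
```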
5.2: Calibrating the Signal
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/05%3A_Standardizing_Analytical_Methods/5.02%3A_Calibrating_the_Signal
The accuracy with which we determine kA and Sreag depends on how accurately we can measure the signal, Stotal. We measure signals using equipment, such as glassware and balances, and instrumentation, such as spectrophotometers and pH meters. To minimize determinate errors that might affect the signal, we first calibrate our equipment and instrumentation by measuring Stotal for a standard with a known response of Sstd, adjusting Stotal until\[S_{total} = S_{std} \nonumber\]Here are two examples of how we calibrate signals; other examples are provided in later chapters that focus on specific analytical methods.
When the signal is a measurement of mass, we determine Stotal using an analytical balance. To calibrate the balance's signal we use a reference weight that meets standards established by a governing agency, such as the National Institute of Standards and Technology or the American Society for Testing and Materials. An electronic balance often includes an internal calibration weight for routine calibrations, as well as programs for calibrating with external weights. In either case, the balance automatically adjusts Stotal to match Sstd.
See Chapter 2.4 to review how an electronic balance works. Calibrating a balance is important, but it does not eliminate all sources of determinate error when measuring mass. See Appendix 9 for a discussion of correcting for the buoyancy of air.
We also must calibrate our instruments. For example, we can evaluate a spectrophotometer's accuracy by measuring the absorbance of a carefully prepared solution of 60.06 mg/L K2Cr2O7 in 0.0050 M H2SO4, using 0.0050 M H2SO4 as a reagent blank [Ebel, S. Fresenius J. Anal. Chem. 1992, 342, 769]. An absorbance of \(0.640 \pm 0.010\) absorbance units at a wavelength of 350.0 nm indicates that the spectrometer's signal is calibrated properly.
Be sure to read and follow carefully the calibration instructions provided with any instrument you use.
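As a minimal sketch of the spectrophotometer check described above, the following Python snippet tests whether a measured absorbance for the 60.06 mg/L K2Cr2O7 standard falls within the expected window of 0.640 ± 0.010 at 350.0 nm; the measured value used here is a hypothetical reading, not one from the text.

```python
# Minimal sketch of the spectrophotometer verification described above.
# The expected absorbance and tolerance are from the text; the measured
# absorbance is a hypothetical reading for illustration.

EXPECTED, TOLERANCE = 0.640, 0.010

def absorbance_check(measured, expected=EXPECTED, tol=TOLERANCE):
    """Return True if the measured absorbance is within the acceptance window."""
    return abs(measured - expected) <= tol

measured_A = 0.644   # hypothetical reading
print("calibration OK" if absorbance_check(measured_A) else "recalibrate or investigate")
```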
5.3: Determining the Sensitivity
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/05%3A_Standardizing_Analytical_Methods/5.03%3A_Determining_the_Sensitivity
To standardize an analytical method we also must determine the analyte's sensitivity, kA, in Equation \ref{5.1} or Equation \ref{5.2}.\[S_{total} = k_A n_A + S_{reag} \label{5.1}\]\[S_{total} = k_A C_A + S_{reag} \label{5.2}\]In principle, it is possible to derive the value of kA for any analytical method if we understand fully all the chemical reactions and physical processes responsible for the signal. Unfortunately, such calculations are not feasible if we lack a sufficiently developed theoretical model of the physical processes or if the chemical reactions evince non-ideal behavior. In such situations we must determine the value of kA by analyzing one or more standard solutions, each of which contains a known amount of analyte. In this section we consider several approaches for determining the value of kA. For simplicity we assume that Sreag is accounted for by a proper reagent blank, allowing us to replace Stotal in Equation \ref{5.1} and Equation \ref{5.2} with the analyte's signal, SA.\[S_A = k_A n_A \label{5.3}\]\[S_A = k_A C_A \label{5.4}\]Equation \ref{5.3} and Equation \ref{5.4} essentially are identical, differing only in whether we choose to express the amount of analyte in moles or as a concentration. For the remainder of this chapter we will limit our treatment to Equation \ref{5.4}. You can extend this treatment to Equation \ref{5.3} by replacing CA with nA.
The simplest way to determine the value of kA in Equation \ref{5.4} is to use a single-point standardization in which we measure the signal for a standard, Sstd, that contains a known concentration of analyte, Cstd. Substituting these values into Equation \ref{5.4}\[k_A = \frac {S_{std}} {C_{std}} \label{5.5}\]gives us the value for kA. Having determined kA, we can calculate the concentration of analyte in a sample by measuring its signal, Ssamp, and calculating CA using Equation \ref{5.6}.\[C_A = \frac {S_{samp}} {k_A} \label{5.6}\]A single-point standardization is the least desirable method for standardizing a method. There are two reasons for this. First, any error in our determination of kA carries over into our calculation of CA. Second, our experimental value for kA is based on a single concentration of analyte. To extend this value of kA to other concentrations of analyte requires that we assume a linear relationship between the signal and the analyte's concentration, an assumption that often is not true [Cardone, M. J.; Palmero, P. J.; Sybrandt, L. B. Anal. Chem. 1980, 52, 1187–1191]. Figure 5.3.1 shows how assuming a constant value of kA leads to a determinate error in CA if kA becomes smaller at higher concentrations of analyte. Despite these limitations, single-point standardizations find routine use when the expected range for the analyte's concentration is small. Under these conditions it often is safe to assume that kA is constant (although you should verify this assumption experimentally). This is the case, for example, in clinical labs where many automated analyzers use only a single standard.
The better way to standardize a method is to prepare a series of standards, each of which contains a different concentration of analyte. Standards are chosen such that they bracket the expected range for the analyte's concentration. A multiple-point standardization should include at least three standards, although more are preferable. A plot of Sstd versus Cstd is called a calibration curve.
The exact standardization, or calibration relationship, is determined by an appropriate curve-fitting algorithm.
Linear regression, which also is known as the method of least squares, is one such algorithm. Its use is covered in Section 5.4.
There are two advantages to a multiple-point standardization. First, although a determinate error in one standard introduces a determinate error into the standardization, its effect is minimized by the remaining standards. Second, because we measure the signal for several concentrations of analyte, we no longer must assume kA is independent of the analyte's concentration. Instead, we can construct a calibration curve similar to the "actual relationship" in Figure 5.3.1 .
The most common method of standardization uses one or more external standards, each of which contains a known concentration of analyte. We call these standards "external" because they are prepared and analyzed separately from the samples.
Appending the adjective "external" to the noun "standard" might strike you as odd at this point, as it seems reasonable to assume that standards and samples are analyzed separately. As we will soon learn, however, we can add standards to our samples and analyze both simultaneously.
With a single external standard we determine kA using Equation \ref{5.5} and then calculate the concentration of analyte, CA, using Equation \ref{5.6}.
A spectrophotometric method for the quantitative analysis of Pb2+ in blood yields an Sstd of 0.474 for a single standard for which the concentration of lead is 1.75 ppb. What is the concentration of Pb2+ in a sample of blood for which Ssamp is 0.361?
Solution
Equation \ref{5.5} allows us to calculate the value of kA using the data for the single external standard.\[k_A = \frac {S_{std}} {C_{std}} = \frac {0.474} {1.75 \text{ ppb}} = 0.2709 \text{ ppb}^{-1} \nonumber\]Having determined the value of kA, we calculate the concentration of Pb2+ in the sample of blood using Equation \ref{5.6}.\[C_A = \frac {S_{samp}} {k_A} = \frac {0.361} {0.2709 \text{ ppb}^{-1}} = 1.33 \text{ ppb} \nonumber\]
Figure 5.3.2 shows a typical multiple-point external standardization. The volumetric flask on the left contains a reagent blank and the remaining volumetric flasks contain increasing concentrations of Cu2+. Shown below the volumetric flasks is the resulting calibration curve. Because this is the most common method of standardization, the resulting relationship is called a normal calibration curve.
When a calibration curve is a straight-line, as it is in Figure 5.3.2 , the slope of the line gives the value of kA. This is the most desirable situation because the method's sensitivity remains constant throughout the analyte's concentration range. When the calibration curve is not a straight-line, the method's sensitivity is a function of the analyte's concentration. In Figure 5.3.1 , for example, the value of kA is greatest when the analyte's concentration is small and it decreases continuously for higher concentrations of analyte. The value of kA at any point along the calibration curve in Figure 5.3.1 is the slope at that point.
In either case, a calibration curve allows us to relate Ssamp to the analyte's concentration.
A second spectrophotometric method for the quantitative analysis of Pb2+ in blood has a normal calibration curve for which\[S_{std} = (0.296 \text{ ppb}^{-1} \times C_{std}) + 0.003 \nonumber\]What is the concentration of Pb2+ in a sample of blood if Ssamp is 0.397?
Solution
To determine the concentration of Pb2+ in the sample of blood, we replace Sstd in the calibration equation with Ssamp and solve for CA.\[C_A = \frac {S_{samp} - 0.003} {0.296 \text{ ppb}^{-1}} = \frac {0.397 - 0.003} {0.296 \text{ ppb}^{-1}} = 1.33 \text{ ppb} \nonumber\]It is worth noting that the calibration equation in this problem includes an extra term that does not appear in Equation \ref{5.6}. Ideally we expect our calibration curve to have a signal of zero when CA is zero. This is the purpose of using a reagent blank to correct the measured signal. The extra term of +0.003 in our calibration equation results from the uncertainty in measuring the signal for the reagent blank and the standards.
Figure 5.3.2 shows a normal calibration curve for the quantitative analysis of Cu2+. The equation for the calibration curve is\[S_{std} = 29.59 \text{ M}^{-1} \times C_{std} + 0.015 \nonumber\]What is the concentration of Cu2+ in a sample whose absorbance, Ssamp, is 0.114? Compare your answer to a one-point standardization where a standard of \(3.16 \times 10^{-3} \text{ M}\) Cu2+ gives a signal of 0.0931.
Substituting the sample's absorbance into the calibration equation and solving for CA gives\[S_{samp} = 0.114 = 29.59 \text{ M}^{-1} \times C_{A} + 0.015 \nonumber\]\[C_A = 3.35 \times 10^{-3} \text{ M} \nonumber\]For the one-point standardization, we first solve for kA\[k_A = \frac {S_{std}} {C_{std}} = \frac {0.0931} {3.16 \times 10^{-3} \text{ M}} = 29.46 \text{ M}^{-1} \nonumber\]and then use this value of kA to solve for CA.\[C_A = \frac {S_{samp}} {k_A} = \frac {0.114} {29.46 \text{ M}^{-1}} = 3.87 \times 10^{-3} \text{ M} \nonumber\]When using multiple standards, the indeterminate errors that affect the signal for one standard are partially compensated for by the indeterminate errors that affect the other standards. The standard selected for the one-point standardization has a signal that is smaller than that predicted by the regression equation, which underestimates kA and overestimates CA.
An external standardization allows us to analyze a series of samples using a single calibration curve. This is an important advantage when we have many samples to analyze. Not surprisingly, many of the most common quantitative analytical methods use an external standardization.
There is a serious limitation, however, to an external standardization. When we determine the value of kA using Equation \ref{5.5}, the analyte is present in the external standard's matrix, which usually is a much simpler matrix than that of our samples. When we use an external standardization we assume the matrix does not affect the value of kA. If this is not true, then we introduce a proportional determinate error into our analysis. This assumption does not hold in Figure 5.3.3 , for instance, which shows calibration curves for an analyte in the sample's matrix and in the standard's matrix. In this case, using the calibration curve prepared with the external standards leads to a negative determinate error in the analyte's reported concentration.
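The following Python snippet is a short numerical check of the Cu2+ exercise above, comparing the concentration obtained from the normal calibration curve with that from a one-point standardization; all of the numbers are taken from the exercise, and the function names are ours.

```python
# Short check of the Cu2+ exercise: calibration curve versus one-point standardization.

def c_from_calibration(s_samp, slope, intercept):
    """Invert the calibration equation S = slope*C + intercept for the concentration."""
    return (s_samp - intercept) / slope

def c_from_one_point(s_samp, s_std, c_std):
    """One-point standardization: k_A from Equation 5.5, C_A from Equation 5.6."""
    k_A = s_std / c_std
    return s_samp / k_A

S_samp = 0.114
print(f"calibration curve: {c_from_calibration(S_samp, 29.59, 0.015):.2e} M")   # ~3.35e-03 M
print(f"one-point:         {c_from_one_point(S_samp, 0.0931, 3.16e-3):.2e} M")  # ~3.87e-03 M
```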
If we expect that matrix effects are important, then we try to match the standard's matrix to that of the sample, a process known as matrix matching. If we are unsure of the sample's matrix, then we must show that matrix effects are negligible or use an alternative method of standardization. Both approaches are discussed in the following section.
The matrix for the external standards in Figure 5.3.2 , for example, is dilute ammonia. Because the \(\ce{Cu(NH3)4^{2+}}\) complex absorbs more strongly than Cu2+, adding ammonia increases the signal's magnitude. If we fail to add the same amount of ammonia to our samples, then we will introduce a proportional determinate error into our analysis.
We can avoid the complication of matching the matrix of the standards to the matrix of the sample if we carry out the standardization in the sample. This is known as the method of standard additions.
The simplest version of a standard addition is shown in Figure 5.3.4 . First we add a portion of the sample, Vo, to a volumetric flask, dilute it to volume, Vf, and measure its signal, Ssamp. Next, we add a second identical portion of sample to an equivalent volumetric flask along with a spike, Vstd, of an external standard whose concentration is Cstd. After we dilute the spiked sample to the same final volume, we measure its signal, Sspike.
The following two equations relate Ssamp and Sspike to the concentration of analyte, CA, in the original sample.\[S_{samp} = k_A C_A \frac {V_o} {V_f} \label{5.7}\]\[S_{spike} = k_A \left( C_A \frac {V_o} {V_f} + C_{std} \frac {V_{std}} {V_f} \right) \label{5.8}\]As long as Vstd is small relative to Vo, the effect of the standard's matrix on the sample's matrix is insignificant. Under these conditions the value of kA is the same in Equation \ref{5.7} and Equation \ref{5.8}. Solving both equations for kA and equating gives\[\frac {S_{samp}} {C_A \frac {V_o} {V_f}} = \frac {S_{spike}} {C_A \frac {V_o} {V_f} + C_{std} \frac {V_{std}} {V_f}} \label{5.9}\]which we can solve for the concentration of analyte, CA, in the original sample.
A third spectrophotometric method for the quantitative analysis of Pb2+ in blood yields an Ssamp of 0.193 when a 1.00 mL sample of blood is diluted to 5.00 mL. A second 1.00 mL sample of blood is spiked with 1.00 μL of a 1560-ppb Pb2+ external standard and diluted to 5.00 mL, yielding an Sspike of 0.419. What is the concentration of Pb2+ in the original sample of blood?
Solution
We begin by making appropriate substitutions into Equation \ref{5.9} and solving for CA. Note that all volumes must be in the same units; thus, we first convert Vstd from 1.00 μL to \(1.00 \times 10^{-3} \text{ mL}\).\[\frac {0.193} {C_A \frac {1.00 \text{ mL}} {5.00 \text{ mL}}} = \frac {0.419} {C_A \frac {1.00 \text{ mL}} {5.00 \text{ mL}} + 1560 \text{ ppb} \frac {1.00 \times 10^{-3} \text{ mL}} {5.00 \text{ mL}}} \nonumber\]\[\frac {0.193} {0.200C_A} = \frac {0.419} {0.200C_A + 0.3120 \text{ ppb}} \nonumber\]\[0.0386C_A + 0.0602 \text{ ppb} = 0.0838 C_A \nonumber\]\[0.0452 C_A = 0.0602 \text{ ppb} \nonumber\]\[C_A = 1.33 \text{ ppb} \nonumber\]The concentration of Pb2+ in the original sample of blood is 1.33 ppb.
It also is possible to add the standard addition directly to the sample, measuring the signal both before and after the spike (Figure 5.3.5 ).
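Before turning to that second version, here is a minimal Python sketch that solves Equation \ref{5.9} for CA and reproduces the Pb2+ example above; the helper function is our own illustration, and all volumes must be entered in the same units.

```python
# Minimal sketch: single standard addition with separate portions of sample,
# each diluted to the same final volume (Equation 5.9).

def std_addition_separate(s_samp, s_spike, c_std, v_o, v_std, v_f):
    """Solve Equation 5.9 for C_A; all volumes in the same units."""
    d_samp = v_o / v_f            # dilution factor for the sample portion
    d_std  = v_std / v_f          # dilution factor for the spike
    # s_samp / (C_A*d_samp) = s_spike / (C_A*d_samp + c_std*d_std)
    return s_samp * c_std * d_std / (d_samp * (s_spike - s_samp))

c_A = std_addition_separate(s_samp=0.193, s_spike=0.419, c_std=1560.0,
                            v_o=1.00, v_std=1.00e-3, v_f=5.00)
print(f"C_A = {c_A:.2f} ppb")     # ~1.33 ppb
```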
In this case the final volume after the standard addition is Vo + Vstd and Equation \ref{5.7}, Equation \ref{5.8}, and Equation \ref{5.9} become\[S_{samp} = k_A C_A \nonumber\]\[S_{spike} = k_A \left( C_A \frac {V_o} {V_o + V_{std}} + C_{std} \frac {V_{std}} {V_o + V_{std}} \right) \label{5.10}\]\[\frac {S_{samp}} {C_A} = \frac {S_{spike}} {C_A \frac {V_o} {V_o + V_{std}} + C_{std} \frac {V_{std}} {V_o + V_{std}}} \label{5.11}\]
A fourth spectrophotometric method for the quantitative analysis of Pb2+ in blood yields an Ssamp of 0.712 for a 5.00 mL sample of blood. After spiking the blood sample with 5.00 μL of a 1560-ppb Pb2+ external standard, an Sspike of 1.546 is measured. What is the concentration of Pb2+ in the original sample of blood?
Solution
\[\frac {0.712} {C_A} = \frac {1.546} {C_A \frac {5.00 \text{ mL}} {5.005 \text{ mL}} + 1560 \text{ ppb} \frac {5.00 \times 10^{-3} \text{ mL}} {5.005 \text{ mL}}} \nonumber\]\[\frac {0.712} {C_A} = \frac {1.546} {0.9990C_A + 1.558 \text{ ppb}} \nonumber\]\[0.7113C_A + 1.109 \text{ ppb} = 1.546C_A \nonumber\]\[0.8347C_A = 1.109 \text{ ppb} \nonumber\]\[C_A = 1.33 \text{ ppb} \nonumber\]The concentration of Pb2+ in the original sample of blood is 1.33 ppb.
We can adapt a single-point standard addition into a multiple-point standard addition by preparing a series of samples that contain increasing amounts of the external standard. Figure 5.3.6 shows two ways to plot a standard addition calibration curve based on Equation \ref{5.8}. In Figure 5.3.6 a we plot Sspike against the volume of the spikes, Vstd. If kA is constant, then the calibration curve is a straight-line. It is easy to show that the x-intercept is equivalent to –CAVo/Cstd.
Beginning with Equation \ref{5.8} show that the equations in Figure 5.3.6 a for the slope, the y-intercept, and the x-intercept are correct.
Solution
We begin by rewriting Equation \ref{5.8} as\[S_{spike} = \frac {k_A C_A V_o} {V_f} + \frac {k_A C_{std}} {V_f} \times V_{std} \nonumber\]which is in the form of the equation for a straight-line\[y = y\text{-intercept} + \text{slope} \times x \nonumber\]where y is Sspike and x is Vstd. The slope of the line, therefore, is kACstd/Vf and the y-intercept is kACAVo/Vf. The x-intercept is the value of x when y is zero, or\[0 = \frac {k_A C_A V_o} {V_f} + \frac {k_A C_{std}} {V_f} \times x\text{-intercept} \nonumber\]\[x\text{-intercept} = - \frac {k_A C_A V_o / V_f} {k_A C_{std} / V_f} = - \frac {C_A V_o} {C_{std}} \nonumber\]
Beginning with Equation \ref{5.8} show that the equations in Figure 5.3.6 b for the slope, the y-intercept, and the x-intercept are correct.
We begin with Equation \ref{5.8}\[S_{spike} = k_A \left( C_A \frac {V_o} {V_f} + C_{std} \frac {V_{std}} {V_f} \right) \nonumber\]rewriting it as\[S_{spike} = \frac {k_A C_A V_o} {V_f} + k_A \left( C_{std} \frac {V_{std}} {V_f} \right) \nonumber\]which is in the form of the linear equation\[y = y\text{-intercept} + \text{slope} \times x \nonumber\]where y is Sspike and x is Cstd \(\times\) Vstd/Vf. The slope of the line, therefore, is kA, and the y-intercept is kACAVo/Vf.
The x-intercept is the value of x when y is zero, or\[x\text{-intercept} = - \frac {k_A C_A V_o/V_f} {k_A} = - \frac {C_A V_o} {V_f} \nonumber\]
Because we know the volume of the original sample, Vo, and the concentration of the external standard, Cstd, we can calculate the analyte's concentration from the x-intercept of a multiple-point standard additions calibration curve.
A fifth spectrophotometric method for the quantitative analysis of Pb2+ in blood uses a multiple-point standard addition based on Equation \ref{5.8}. The original blood sample has a volume of 1.00 mL and the standard used for spiking the sample has a concentration of 1560 ppb Pb2+. All samples were diluted to 5.00 mL before measuring the signal. A calibration curve of Sspike versus Vstd has the following equation\[S_{spike} = 0.266 + 312 \text{ mL}^{-1} \times V_{std} \nonumber\]What is the concentration of Pb2+ in the original sample of blood?
Solution
To find the x-intercept we set Sspike equal to zero.\[0 = 0.266 + 312 \text{ mL}^{-1} \times V_{std} \nonumber\]Solving for Vstd, we obtain a value of \(-8.526 \times 10^{-4} \text{ mL}\) for the x-intercept. Substituting the x-intercept's value into the equation from Figure 5.3.6 a\[-8.526 \times 10^{-4} \text{ mL} = - \frac {C_A V_o} {C_{std}} = - \frac {C_A \times 1.00 \text{ mL}} {1560 \text{ ppb}} \nonumber\]and solving for CA gives the concentration of Pb2+ in the blood sample as 1.33 ppb.
Figure 5.3.6 shows a standard additions calibration curve for the quantitative analysis of Mn2+. Each solution contains 25.00 mL of the original sample and either 0, 1.00, 2.00, 3.00, 4.00, or 5.00 mL of a 100.6 mg/L external standard of Mn2+. All standard addition samples were diluted to 50.00 mL with water before reading the absorbance. The equation for the calibration curve in Figure 5.3.6 a is\[S_{std} = 0.0854 \times V_{std} + 0.1478 \nonumber\]What is the concentration of Mn2+ in this sample? Compare your answer to the data in Figure 5.3.6 b, for which the calibration curve is\[S_{std} = 0.0425 \times C_{std}(V_{std}/V_f) + 0.1478 \nonumber\]
Using the calibration equation from Figure 5.3.6 a, we find that the x-intercept is\[x\text{-intercept} = - \frac {0.1478} {0.0854 \text{ mL}^{-1}} = - 1.731 \text{ mL} \nonumber\]If we plug this result into the equation for the x-intercept and solve for CA, we find that the concentration of Mn2+ is\[C_A = - \frac {x\text{-intercept} \times C_{std}} {V_o} = - \frac {-1.731 \text{ mL} \times 100.6 \text{ mg/L}} {25.00 \text{ mL}} = 6.96 \text{ mg/L} \nonumber\]For Figure 5.3.6 b, the x-intercept is\[x\text{-intercept} = - \frac {0.1478} {0.0425 \text{ L/mg}} = - 3.478 \text{ mg/L} \nonumber\]and the concentration of Mn2+ is\[C_A = - \frac {x\text{-intercept} \times V_f} {V_o} = - \frac {-3.478 \text{ mg/L} \times 50.00 \text{ mL}} {25.00 \text{ mL}} = 6.96 \text{ mg/L} \nonumber\]
Since we construct a standard additions calibration curve in the sample, we cannot use the calibration equation for other samples. Each sample, therefore, requires its own standard additions calibration curve. This is a serious drawback if you have many samples. For example, suppose you need to analyze 10 samples using a five-point calibration curve. For a normal calibration curve you need to analyze only 15 solutions (five standards and ten samples).
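A short Python check of the multiple-point standard additions results above: the x-intercept of the Sspike versus Vstd plot gives CA. The function below is our own illustration; the numbers reproduce the Pb2+ example and the Mn2+ exercise.

```python
# Short check: analyte concentration from the x-intercept of a standard additions
# calibration curve plotted as signal versus V_std (Figure 5.3.6a).

def c_from_vstd_plot(slope, intercept, c_std, v_o):
    """x-intercept = -intercept/slope = -C_A*V_o/C_std, so C_A = -x_int*C_std/V_o."""
    x_int = -intercept / slope
    return -x_int * c_std / v_o

print(f"Pb2+: {c_from_vstd_plot(312, 0.266, 1560, 1.00):.2f} ppb")        # ~1.33 ppb
print(f"Mn2+: {c_from_vstd_plot(0.0854, 0.1478, 100.6, 25.00):.2f} mg/L") # ~6.96 mg/L
```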
If you use the method of standard additions, however, you must analyze 50 solutions (each of the ten samples is analyzed five times, once before spiking and after each of four spikes).
We can use the method of standard additions to validate an external standardization when matrix matching is not feasible. First, we prepare a normal calibration curve of Sstd versus Cstd and determine the value of kA from its slope. Next, we prepare a standard additions calibration curve using Equation \ref{5.8}, plotting the data as shown in Figure 5.3.6 b. The slope of this standard additions calibration curve provides an independent determination of kA. If there is no significant difference between the two values of kA, then we can ignore the difference between the sample's matrix and that of the external standards. When the values of kA are significantly different, then using a normal calibration curve introduces a proportional determinate error.
To use an external standardization or the method of standard additions, we must be able to treat identically all samples and standards. When this is not possible, the accuracy and precision of our standardization may suffer. For example, if our analyte is in a volatile solvent, then its concentration will increase if we lose solvent to evaporation. Suppose we have a sample and a standard with identical concentrations of analyte and identical signals. If both experience the same proportional loss of solvent, then their respective concentrations of analyte and signals remain identical. In effect, we can ignore evaporation if the samples and the standards experience an equivalent loss of solvent. If an identical standard and sample lose different amounts of solvent, however, then their respective concentrations and signals are no longer equal. In this case a simple external standardization or standard addition is not possible.
We can still complete a standardization if we reference the analyte's signal to a signal from another species that we add to all samples and standards. The species, which we call an internal standard, must be different from the analyte.
Because the analyte and the internal standard receive the same treatment, the ratio of their signals is unaffected by any lack of reproducibility in the procedure. If a solution contains an analyte of concentration CA and an internal standard of concentration CIS, then the signals due to the analyte, SA, and the internal standard, SIS, are\[S_A = k_A C_A \nonumber\]\[S_{IS} = k_{IS} C_{IS} \nonumber\]where \(k_A\) and \(k_{IS}\) are the sensitivities for the analyte and the internal standard, respectively.
Taking the ratio of the two signals gives the fundamental equation for an internal standardization.\[\frac {S_A} {S_{IS}} = \frac {k_A C_A} {k_{IS} C_{IS}} = K \times \frac {C_A} {C_{IS}} \label{5.12}\]Because K is a ratio of the analyte's sensitivity and the internal standard's sensitivity, it is not necessary to determine independently values for either kA or kIS.
In a single-point internal standardization, we prepare a single standard that contains the analyte and the internal standard, and use it to determine the value of K in Equation \ref{5.12}.\[K = \left( \frac {C_{IS}} {C_A} \right)_{std} \times \left( \frac {S_A} {S_{IS}} \right)_{std} \label{5.13}\]Having standardized the method, the analyte's concentration is given by\[C_A = \frac {C_{IS}} {K} \times \left( \frac {S_A} {S_{IS}} \right)_{samp} \nonumber\]
A sixth spectrophotometric method for the quantitative analysis of Pb2+ in blood uses Cu2+ as an internal standard. A standard that is 1.75 ppb Pb2+ and 2.25 ppb Cu2+ yields a ratio of (SA/SIS)std of 2.37. A sample of blood spiked with the same concentration of Cu2+ gives a signal ratio, (SA/SIS)samp, of 1.80. What is the concentration of Pb2+ in the sample of blood?
Solution
Equation \ref{5.13} allows us to calculate the value of K using the data for the standard\[K = \left( \frac {C_{IS}} {C_A} \right)_{std} \times \left( \frac {S_A} {S_{IS}} \right)_{std} = \frac {2.25 \text{ ppb } \ce{Cu^{2+}}} {1.75 \text{ ppb } \ce{Pb^{2+}}} \times 2.37 = 3.05 \frac {\text{ppb } \ce{Cu^{2+}}} {\text{ppb } \ce{Pb^{2+}}} \nonumber\]The concentration of Pb2+, therefore, is\[C_A = \frac {C_{IS}} {K} \times \left( \frac {S_A} {S_{IS}} \right)_{samp} = \frac {2.25 \text{ ppb } \ce{Cu^{2+}}} {3.05 \frac {\text{ppb } \ce{Cu^{2+}}} {\text{ppb } \ce{Pb^{2+}}}} \times 1.80 = 1.33 \text{ ppb } \ce{Pb^{2+}} \nonumber\]
A single-point internal standardization has the same limitations as a single-point normal calibration. To construct an internal standard calibration curve we prepare a series of standards, each of which contains the same concentration of internal standard and a different concentration of analyte. Under these conditions a calibration curve of (SA/SIS)std versus CA is linear with a slope of K/CIS.
Although the usual practice is to prepare the standards so that each contains an identical amount of the internal standard, this is not a requirement.
A seventh spectrophotometric method for the quantitative analysis of Pb2+ in blood gives a linear internal standards calibration curve for which\[\left( \frac {S_A} {S_{IS}} \right)_{std} = (2.11 \text{ ppb}^{-1} \times C_A) - 0.006 \nonumber\]What is the concentration of Pb2+, in ppb, in a sample of blood if (SA/SIS)samp is 2.80?
Solution
To determine the concentration of Pb2+ in the sample of blood we replace (SA/SIS)std in the calibration equation with (SA/SIS)samp and solve for CA.\[C_A = \frac {\left( \frac {S_A} {S_{IS}} \right)_{samp} + 0.006} {2.11 \text{ ppb}^{-1}} = \frac {2.80 + 0.006} {2.11 \text{ ppb}^{-1}} = 1.33 \text{ ppb } \ce{Pb^{2+}} \nonumber\]The concentration of Pb2+ in the sample of blood is 1.33 ppb.
In some circumstances it is not possible to prepare the standards so that each contains the same concentration of internal standard. This is the case, for example, when we prepare samples by mass instead of volume.
We can still prepare a calibration curve, however, by plotting \((S_A / S_{IS})_{std}\) versus CA/CIS, giving a linear calibration curve with a slope of K.
You might wonder if it is possible to include an internal standard in the method of standard additions to correct for both matrix effects and uncontrolled variations between samples; the answer is yes, as described in the paper "Standard Dilution Analysis," the full reference for which is Jones, W. B.; Donati, G. L.; Calloway, C. P.; Jones, B. T. Anal. Chem. 2015, 87, 2321–2327.
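A minimal Python sketch of a single-point internal standardization, Equation \ref{5.13}, using the Pb2+/Cu2+ example above; the function names are ours.

```python
# Minimal sketch: single-point internal standardization (Equation 5.13).

def internal_std_K(c_is_std, c_a_std, ratio_std):
    """K from a single standard that contains both analyte and internal standard."""
    return (c_is_std / c_a_std) * ratio_std

def c_analyte(c_is_samp, K, ratio_samp):
    """Analyte concentration in a sample from its signal ratio and the value of K."""
    return (c_is_samp / K) * ratio_samp

K = internal_std_K(c_is_std=2.25, c_a_std=1.75, ratio_std=2.37)
print(f"K   = {K:.2f}")                                   # ~3.05
print(f"C_A = {c_analyte(2.25, K, 1.80):.2f} ppb Pb2+")   # ~1.33 ppb
```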
5.4: Linear Regression and Calibration Curves
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/05%3A_Standardizing_Analytical_Methods/5.04%3A_Linear_Regression_and_Calibration_Curves
In a single-point external standardization we determine the value of kA by measuring the signal for a single standard that contains a known concentration of analyte. Using this value of kA and our sample's signal, we then calculate the concentration of analyte in our sample (see Example 5.3.1). With only a single determination of kA, a quantitative analysis using a single-point external standardization is straightforward.
A multiple-point standardization presents a more difficult problem. Consider the data in Table 5.4.1 for a multiple-point external standardization. What is our best estimate of the relationship between Sstd and Cstd? It is tempting to treat this data as five separate single-point standardizations, determining kA for each standard, and reporting the mean value for the five trials. Despite its simplicity, this is not an appropriate way to treat a multiple-point standardization.
So why is it inappropriate to calculate an average value for kA using the data in Table 5.4.1 ? In a single-point standardization we assume that the reagent blank (the first row in Table 5.4.1 ) corrects for all constant sources of determinate error. If this is not the case, then the value of kA from a single-point standardization has a constant determinate error. Table 5.4.2 demonstrates how an uncorrected constant error affects our determination of kA. The first three columns show the concentration of analyte in a set of standards, Cstd, the signal without any source of constant error, Sstd, and the actual value of kA for five standards. As we expect, the value of kA is the same for each standard. In the fourth column we add a constant determinate error of +0.50 to the signals, (Sstd)e. The last column contains the corresponding apparent values of kA. Note that we obtain a different value of kA for each standard and that each apparent kA is greater than the true value.
[Table 5.4.2 columns: \(C_{std}\); \(S_{std}\) (without constant error); \(k_A = S_{std}/C_{std}\) (actual); \((S_{std})_e\) (with constant error); \(k_A = (S_{std})_e/C_{std}\) (apparent).]
How do we find the best estimate for the relationship between the signal and the concentration of analyte in a multiple-point standardization? Figure 5.4.1 shows the data in Table 5.4.1 plotted as a normal calibration curve. Although the data certainly appear to fall along a straight line, the actual calibration curve is not intuitively obvious. The process of determining the best equation for the calibration curve is called linear regression.
When a calibration curve is a straight-line, we represent it using the following mathematical equation\[y = \beta_0 + \beta_1 x \label{5.1}\]where y is the analyte's signal, Sstd, and x is the analyte's concentration, Cstd. The constants \(\beta_0\) and \(\beta_1\) are, respectively, the calibration curve's expected y-intercept and its expected slope. Because of uncertainty in our measurements, the best we can do is to estimate values for \(\beta_0\) and \(\beta_1\), which we represent as b0 and b1. The goal of a linear regression analysis is to determine the best estimates for b0 and b1. How we do this depends on the uncertainty in our measurements.
The most common method for completing the linear regression for Equation \ref{5.1} makes three assumptions: (1) that any difference between our experimental data and the calculated regression line is the result of indeterminate errors that affect y, and that there is no indeterminate error in our measurement of x; (2) that the indeterminate errors that affect y are normally distributed; and (3) that the indeterminate errors in y are independent of the value of x. Because we assume that the indeterminate errors are the same for all standards, each standard contributes equally in our estimate of the slope and the y-intercept.
For this reason the result is considered an unweighted linear regression.
The second assumption generally is true because of the central limit theorem, which we considered in Chapter 4. The validity of the two remaining assumptions is less obvious and you should evaluate them before you accept the results of a linear regression. In particular the first assumption always is suspect because there certainly is some indeterminate error in the measurement of x. When we prepare a calibration curve, however, it is not unusual to find that the uncertainty in the signal, Sstd, is significantly larger than the uncertainty in the analyte's concentration, Cstd. In such circumstances the first assumption is usually reasonable.
To understand the logic of a linear regression consider the example shown in Figure 5.4.2 , which shows three data points and two possible straight-lines that might reasonably explain the data. How do we decide how well these straight-lines fit the data, and how do we determine the best straight-line?
Let's focus on the solid line in Figure 5.4.2 . The equation for this line is\[\hat{y} = b_0 + b_1 x \label{5.2}\]where b0 and b1 are estimates for the y-intercept and the slope, and \(\hat{y}\) is the predicted value of y for any value of x. Because we assume that all uncertainty is the result of indeterminate errors in y, the difference between y and \(\hat{y}\) for each value of x is the residual error, r, in our mathematical model.\[r_i = (y_i - \hat{y}_i) \nonumber\]Figure 5.4.3 shows the residual errors for the three data points. The smaller the total residual error, R, which we define as\[R = \sum_{i = 1}^{n} (y_i - \hat{y}_i)^2 \label{5.3}\]the better the fit between the straight-line and the data. In a linear regression analysis, we seek values of b0 and b1 that give the smallest total residual error.
The reason for squaring the individual residual errors is to prevent a positive residual error from canceling out a negative residual error. You have seen this before in the equations for the sample and population standard deviations. You also can see from this equation why a linear regression is sometimes called the method of least squares.
Although we will not formally develop the mathematical equations for a linear regression analysis, you can find the derivations in many standard statistical texts [See, for example, Draper, N. R.; Smith, H. Applied Regression Analysis, 3rd ed.; Wiley: New York, 1998]. The resulting equation for the slope, b1, is\[b_1 = \frac {n \sum_{i = 1}^{n} x_i y_i - \sum_{i = 1}^{n} x_i \sum_{i = 1}^{n} y_i} {n \sum_{i = 1}^{n} x_i^2 - \left( \sum_{i = 1}^{n} x_i \right)^2} \label{5.4}\]and the equation for the y-intercept, b0, is\[b_0 = \frac {\sum_{i = 1}^{n} y_i - b_1 \sum_{i = 1}^{n} x_i} {n} \label{5.5}\]Although Equation \ref{5.4} and Equation \ref{5.5} appear formidable, it is necessary only to evaluate the following four summations\[\sum_{i = 1}^{n} x_i \quad \sum_{i = 1}^{n} y_i \quad \sum_{i = 1}^{n} x_i y_i \quad \sum_{i = 1}^{n} x_i^2 \nonumber\]Many calculators, spreadsheets, and other statistical software packages are capable of performing a linear regression analysis based on this model. To save time and to avoid tedious calculations, learn how to use one of these tools (and see Section 5.6 for details on completing a linear regression analysis using Excel and R).
For illustrative purposes the necessary calculations are shown in detail in the following example.
Equation \ref{5.4} and Equation \ref{5.5} are written in terms of the general variables x and y. As you work through this example, remember that x corresponds to Cstd, and that y corresponds to Sstd.
Using the data from Table 5.4.1 , determine the relationship between Sstd and Cstd using an unweighted linear regression.
Solution
We begin by setting up a table to help us organize the calculation. Adding the values in each column gives\[\sum_{i = 1}^{n} x_i = 1.500 \quad \sum_{i = 1}^{n} y_i = 182.31 \quad \sum_{i = 1}^{n} x_i y_i = 66.701 \quad \sum_{i = 1}^{n} x_i^2 = 0.550 \nonumber\]Substituting these values into Equation \ref{5.4} and Equation \ref{5.5}, we find that the slope and the y-intercept are\[b_1 = \frac {(6 \times 66.701) - (1.500 \times 182.31)} {(6 \times 0.550) - (1.500)^2} = 120.706 \approx 120.71 \nonumber\]\[b_0 = \frac {182.31 - (120.706 \times 1.500)} {6} = 0.209 \approx 0.21 \nonumber\]The relationship between the signal and the analyte, therefore, is\[S_{std} = 120.71 \times C_{std} + 0.21 \nonumber\]For now we keep two decimal places to match the number of decimal places in the signal. The resulting calibration curve is shown in Figure 5.4.4 .
As shown in Figure 5.4.4 , because of indeterminate errors in the signal, the regression line may not pass through the exact center of each data point. The cumulative deviation of our data from the regression line—that is, the total residual error—is proportional to the uncertainty in the regression. We call this uncertainty the standard deviation about the regression, sr, which is equal to\[s_r = \sqrt{\frac {\sum_{i = 1}^{n} \left( y_i - \hat{y}_i \right)^2} {n - 2}} \label{5.6}\]where yi is the ith experimental value, and \(\hat{y}_i\) is the corresponding value predicted by the regression line in Equation \ref{5.2}. Note that the denominator of Equation \ref{5.6} indicates that our regression analysis has n – 2 degrees of freedom—we lose two degrees of freedom because we use two parameters, the slope and the y-intercept, to calculate \(\hat{y}_i\).
Did you notice the similarity between the standard deviation about the regression (Equation \ref{5.6}) and the standard deviation for a sample (Equation 4.1.1)?
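A short Python check of Example 5.4.1 using Equation \ref{5.4} and Equation \ref{5.5}; the four summations are the values quoted in the example (n = 6 standards).

```python
# Short check of Example 5.4.1: slope and y-intercept from the four summations.

n = 6
sum_x, sum_y = 1.500, 182.31
sum_xy, sum_x2 = 66.701, 0.550

b1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)   # Equation 5.4
b0 = (sum_y - b1 * sum_x) / n                                   # Equation 5.5

print(f"slope b1     = {b1:.3f}")   # ~120.706
print(f"intercept b0 = {b0:.3f}")   # ~0.209
```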
Note that Equation \ref{5.9} and Equation \ref{5.10} do not contain a factor of \((\sqrt{n})^{-1}\) because the confidence interval is based on a single regression line.
Calculate the 95% confidence intervals for the slope and y-intercept from Example 5.4.1 .
Solution
We begin by calculating the standard deviation about the regression. To do this we must calculate the predicted signals, \(\hat{y}_i\) , using the slope and y-intercept from Example 5.4.1 , and the squares of the residual error, \((y_i - \hat{y}_i)^2\). Using the last standard as an example, we find that the predicted signal is\[\hat{y}_6 = b_0 + b_1 x_6 = 0.209 + (120.706 \times 0.500) = 60.562 \nonumber\]and that the square of the residual error is\[(y_i - \hat{y}_i)^2 = (60.42 - 60.562)^2 = 0.02016 \approx 0.0202 \nonumber\]The following table displays the values of \(\left( y_i - \hat{y}_i \right)^2\) for all six solutions. Adding together the data in the last column gives the numerator of Equation \ref{5.6} as 0.6512; thus, the standard deviation about the regression is\[s_r = \sqrt{\frac {0.6512} {6 - 2}} = 0.4035 \nonumber\]Next we calculate the standard deviations for the slope and the y-intercept using Equation \ref{5.7} and Equation \ref{5.8}. The values for the summation terms are from Example 5.4.1 .\[s_{b_1} = \sqrt{\frac {6 \times (0.4035)^2} {(6 \times 0.550) - (1.500)^2}} = 0.965 \nonumber\]\[s_{b_0} = \sqrt{\frac {(0.4035)^2 \times 0.550} {(6 \times 0.550) - (1.500)^2}} = 0.292 \nonumber\]Finally, the 95% confidence intervals (\(\alpha = 0.05\), 4 degrees of freedom) for the slope and y-intercept are\[\beta_1 = b_1 \pm ts_{b_1} = 120.706 \pm (2.78 \times 0.965) = 120.7 \pm 2.7 \nonumber\]\[\beta_0 = b_0 \pm ts_{b_0} = 0.209 \pm (2.78 \times 0.292) = 0.2 \pm 0.8 \nonumber\]where t(0.05, 4) from Appendix 4 is 2.78. The standard deviation about the regression, sr, suggests that the signal, Sstd, is precise to one decimal place. For this reason we report the slope and the y-intercept to a single decimal place.
To minimize the uncertainty in a calibration curve's slope and y-intercept, we evenly space our standards over a wide range of analyte concentrations. A close examination of Equation \ref{5.7} and Equation \ref{5.8} helps us appreciate why this is true. The denominators of both equations include the term \(\sum_{i = 1}^{n} (x_i - \overline{x})^2\). The larger the value of this term—which we accomplish by increasing the range of x around its mean value—the smaller the standard deviations in the slope and the y-intercept. Furthermore, to minimize the uncertainty in the y-intercept, it helps to decrease the value of the term \(\sum_{i = 1}^{n} x_i\) in Equation \ref{5.8}, which we accomplish by including standards for lower concentrations of the analyte.
Once we have our regression equation, it is easy to determine the concentration of analyte in a sample. When we use a normal calibration curve, for example, we measure the signal for our sample, Ssamp, and calculate the analyte's concentration, CA, using the regression equation.\[C_A = \frac {S_{samp} - b_0} {b_1} \label{5.11}\]What is less obvious is how to report a confidence interval for CA that expresses the uncertainty in our analysis.
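A short Python check of Example 5.4.2, reproducing the standard deviation about the regression and the 95% confidence intervals for the slope and the y-intercept from the values quoted above (the residual sum of squares of 0.6512 and t(0.05, 4) = 2.78).

```python
# Short check of Example 5.4.2: s_r, s_b1, s_b0, and the 95% confidence intervals.

from math import sqrt

n, b1, b0 = 6, 120.706, 0.209
sum_x, sum_x2 = 1.500, 0.550
ss_resid, t = 0.6512, 2.78

s_r  = sqrt(ss_resid / (n - 2))                          # Equation 5.6
s_b1 = sqrt(n * s_r**2 / (n * sum_x2 - sum_x**2))        # Equation 5.7
s_b0 = sqrt(s_r**2 * sum_x2 / (n * sum_x2 - sum_x**2))   # Equation 5.8

print(f"s_r = {s_r:.4f}")                                # ~0.4035
print(f"beta_1 = {b1:.1f} +/- {t * s_b1:.1f}")           # ~120.7 +/- 2.7
print(f"beta_0 = {b0:.1f} +/- {t * s_b0:.1f}")           # ~0.2 +/- 0.8
```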
To calculate a confidence interval we need to know the standard deviation in the analyte's concentration, \(s_{C_A}\), which is given by the following equation\[s_{C_A} = \frac {s_r} {b_1} \sqrt{\frac {1} {m} + \frac {1} {n} + \frac {\left( \overline{S}_{samp} - \overline{S}_{std} \right)^2} {(b_1)^2 \sum_{i = 1}^{n} \left( C_{std_i} - \overline{C}_{std} \right)^2}} \label{5.12}\]where m is the number of replicates we use to establish the sample's average signal, Ssamp, n is the number of calibration standards, \(\overline{S}_{std}\) is the average signal for the calibration standards, and \(C_{std_i}\) and \(\overline{C}_{std}\) are the individual and the mean concentrations for the calibration standards. Knowing the value of \(s_{C_A}\), the confidence interval for the analyte's concentration is\[\mu_{C_A} = C_A \pm t s_{C_A} \nonumber\]where \(\mu_{C_A}\) is the expected value of CA in the absence of determinate errors, and the value of t is based on the desired level of confidence and n – 2 degrees of freedom.
Equation \ref{5.12} is written in terms of a calibration experiment. A more general form of the equation, written in terms of x and y, is given here.\[s_{x} = \frac {s_r} {b_1} \sqrt{\frac {1} {m} + \frac {1} {n} + \frac {\left( \overline{Y} - \overline{y} \right)^2} {(b_1)^2 \sum_{i = 1}^{n} \left( x_i - \overline{x} \right)^2}} \nonumber\]
A close examination of Equation \ref{5.12} should convince you that the uncertainty in CA is smallest when the sample's average signal, \(\overline{S}_{samp}\), is equal to the average signal for the standards, \(\overline{S}_{std}\). When practical, you should plan your calibration curve so that Ssamp falls in the middle of the calibration curve. For more information about these regression equations see (a) Miller, J. N. Analyst 1991, 116, 3–14; (b) Sharaf, M. A.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York, 1986, pp. 126-127; (c) Analytical Methods Committee "Uncertainties in concentrations estimated from calibration experiments," AMC Technical Brief, March 2006.
Three replicate analyses for a sample that contains an unknown concentration of analyte yield values for Ssamp of 29.32, 29.16 and 29.51 (arbitrary units). Using the results from Example 5.4.1 and Example 5.4.2 , determine the analyte's concentration, CA, and its 95% confidence interval.
Solution
The average signal, \(\overline{S}_{samp}\), is 29.33, which, using Equation \ref{5.11} and the slope and the y-intercept from Example 5.4.1 , gives the analyte's concentration as\[C_A = \frac {\overline{S}_{samp} - b_0} {b_1} = \frac {29.33 - 0.209} {120.706} = 0.241 \nonumber\]To calculate the standard deviation for the analyte's concentration we must determine the values for \(\overline{S}_{std}\) and for \(\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2\). The former is just the average signal for the calibration standards, which, using the data in Table 5.4.1 , is 30.385. Calculating \(\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2\) looks formidable, but we can simplify its calculation by recognizing that this sum-of-squares is the numerator in a standard deviation equation; thus,\[\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2 = (s_{C_{std}})^2 \times (n - 1) \nonumber\]where \(s_{C_{std}}\) is the standard deviation for the concentration of analyte in the calibration standards.
Using the data in Table 5.4.1 we find that \(s_{C_{std}}\) is 0.1871 and\[\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2 = (0.1871)^2 \times (6 - 1) = 0.175 \nonumber\]Substituting known values into Equation \ref{5.12} gives\[s_{C_A} = \frac {0.4035} {120.706} \sqrt{\frac {1} {3} + \frac {1} {6} + \frac {(29.33 - 30.385)^2} {(120.706)^2 \times 0.175}} = 0.0024 \nonumber\]Finally, the 95% confidence interval for 4 degrees of freedom is\[\mu_{C_A} = C_A \pm ts_{C_A} = 0.241 \pm (2.78 \times 0.0024) = 0.241 \pm 0.007 \nonumber\]Figure 5.4.5 shows the calibration curve with curves showing the 95% confidence interval for CA.
In a standard addition we determine the analyte's concentration by extrapolating the calibration curve to the x-intercept. In this case the value of CA is\[C_A = x\text{-intercept} = \frac {-b_0} {b_1} \nonumber\]and the standard deviation in CA is\[s_{C_A} = \frac {s_r} {b_1} \sqrt{\frac {1} {n} + \frac {(\overline{S}_{std})^2} {(b_1)^2 \sum_{i = 1}^{n}(C_{std_i} - \overline{C}_{std})^2}} \nonumber\]where n is the number of standard additions (including the sample with no added standard), and \(\overline{S}_{std}\) is the average signal for the n standards. Because we determine the analyte's concentration by extrapolation, rather than by interpolation, \(s_{C_A}\) for the method of standard additions generally is larger than for a normal calibration curve.
Figure 5.4.2 shows a normal calibration curve for the quantitative analysis of Cu2+. The data for the calibration curve are shown here. Complete a linear regression analysis for this calibration data, reporting the calibration equation and the 95% confidence interval for the slope and the y-intercept. If three replicate samples give an Ssamp of 0.114, what is the concentration of analyte in the sample and its 95% confidence interval?
We begin by setting up a table to help us organize the calculation. Adding the values in each column gives\[\sum_{i = 1}^{n} x_i = 2.371 \times 10^{-2} \quad \sum_{i = 1}^{n} y_i = 0.710 \quad \sum_{i = 1}^{n} x_i y_i = 4.110 \times 10^{-3} \quad \sum_{i = 1}^{n} x_i^2 = 1.378 \times 10^{-4} \nonumber\]When we substitute these values into Equation \ref{5.4} and Equation \ref{5.5}, we find that the slope and the y-intercept are\[b_1 = \frac {6 \times (4.110 \times 10^{-3}) - (2.371 \times 10^{-2}) \times 0.710} {6 \times (1.378 \times 10^{-4}) - (2.371 \times 10^{-2})^2} = 29.57 \nonumber\]\[b_0 = \frac {0.710 - 29.57 \times (2.371 \times 10^{-2})} {6} = 0.0015 \nonumber\]and that the regression equation is\[S_{std} = 29.57 \times C_{std} + 0.0015 \nonumber\]To calculate the 95% confidence intervals, we first need to determine the standard deviation about the regression. The following table helps us organize the calculation. Adding together the data in the last column gives the numerator of Equation \ref{5.6} as \(1.596 \times 10^{-5}\).
The standard deviation about the regression, therefore, is\[s_r = \sqrt{\frac {1.596 \times 10^{-5}} {6 - 2}} = 1.997 \times 10^{-3} \nonumber\]Next, we need to calculate the standard deviations for the slope and the y-intercept using Equation \ref{5.7} and Equation \ref{5.8}.\[s_{b_1} = \sqrt{\frac {6 \times (1.997 \times 10^{-3})^2} {6 \times (1.378 \times 10^{-4}) - (2.371 \times 10^{-2})^2}} = 0.3007 \nonumber\]\[s_{b_0} = \sqrt{\frac {(1.997 \times 10^{-3})^2 \times (1.378 \times 10^{-4})} {6 \times (1.378 \times 10^{-4}) - (2.371 \times 10^{-2})^2}} = 1.441 \times 10^{-3} \nonumber\]and use them to calculate the 95% confidence intervals for the slope and the y-intercept\[\beta_1 = b_1 \pm ts_{b_1} = 29.57 \pm (2.78 \times 0.3007) = 29.57 \text{ M}^{-1} \pm 0.84 \text{ M}^{-1} \nonumber\]\[\beta_0 = b_0 \pm ts_{b_0} = 0.0015 \pm (2.78 \times 1.441 \times 10^{-3}) = 0.0015 \pm 0.0040 \nonumber\]With an average Ssamp of 0.114, the concentration of analyte, CA, is\[C_A = \frac {S_{samp} - b_0} {b_1} = \frac {0.114 - 0.0015} {29.57 \text{ M}^{-1}} = 3.80 \times 10^{-3} \text{ M} \nonumber\]The standard deviation in CA is\[s_{C_A} = \frac {1.997 \times 10^{-3}} {29.57} \sqrt{\frac {1} {3} + \frac {1} {6} + \frac {(0.114 - 0.1183)^2} {(29.57)^2 \times (4.408 \times 10^{-5})}} = 4.778 \times 10^{-5} \nonumber\]and the 95% confidence interval is\[\mu = C_A \pm t s_{C_A} = 3.80 \times 10^{-3} \pm \{2.78 \times (4.778 \times 10^{-5})\} \nonumber\]\[\mu = 3.80 \times 10^{-3} \text{ M} \pm 0.13 \times 10^{-3} \text{ M} \nonumber\]
You should never accept the result of a linear regression analysis without evaluating the validity of the model. Perhaps the simplest way to evaluate a regression analysis is to examine the residual errors. As we saw earlier, the residual error for a single calibration standard, ri, is\[r_i = (y_i - \hat{y}_i) \nonumber\]If the regression model is valid, then the residual errors should be distributed randomly about an average residual error of zero, with no apparent trend toward either smaller or larger residual errors (Figure 5.4.6 a). Trends such as those in Figure 5.4.6 b and Figure 5.4.6 c provide evidence that at least one of the model's assumptions is incorrect. For example, a trend toward larger residual errors at higher concentrations, Figure 5.4.6 b, suggests that the indeterminate errors affecting the signal are not independent of the analyte's concentration. In Figure 5.4.6 c, the residual errors are not random, which suggests we cannot model the data using a straight-line relationship. Regression methods for the latter two cases are discussed in the following sections.
Using your results from Exercise 5.4.1 , construct a residual plot and explain its significance.
To create a residual plot, we need to calculate the residual error for each standard. The following table contains the relevant information. The figure below shows a plot of the resulting residual errors. The residual errors appear random, although they do alternate in sign, and they do not show any significant dependence on the analyte's concentration. Taken together, these observations suggest that our regression model is appropriate.
Our treatment of linear regression to this point assumes that indeterminate errors affecting y are independent of the value of x.
If this assumption is false, as is the case for the data in Figure 5.4.6 b, then we must include the variance for each value of y in our determination of the y-intercept, b0, and the slope, b1; thus\[b_0 = \frac {\sum_{i = 1}^{n} w_i y_i - b_1 \sum_{i = 1}^{n} w_i x_i} {n} \label{5.13}\]\[b_1 = \frac {n \sum_{i = 1}^{n} w_i x_i y_i - \sum_{i = 1}^{n} w_i x_i \sum_{i = 1}^{n} w_i y_i} {n \sum_{i = 1}^{n} w_i x_i^2 - \left( \sum_{i = 1}^{n} w_i x_i \right)^2} \label{5.14}\]where wi is a weighting factor that accounts for the variance in yi\[w_i = \frac {n (s_{y_i})^{-2}} {\sum_{i = 1}^{n} (s_{y_i})^{-2}} \label{5.15}\]and \(s_{y_i}\) is the standard deviation for yi. In a weighted linear regression, each xy-pair's contribution to the regression line is inversely proportional to the precision of yi; that is, the more precise the value of y, the greater its contribution to the regression.
Shown here are data for an external standardization in which sstd is the standard deviation for three replicate determinations of the signal. This is the same data used in Example 5.4.1 with additional information about the standard deviations in the signal. Determine the calibration curve's equation using a weighted linear regression. As you work through this example, remember that x corresponds to Cstd, and that y corresponds to Sstd.
Solution
We begin by setting up a table to aid in calculating the weighting factors. Adding together the values in the fourth column gives \(\sum_{i = 1}^{n} (s_{y_i})^{-2}\), which we use to calculate the individual weights in the last column. As a check on your calculations, the sum of the individual weights must equal the number of calibration standards, n. The sum of the entries in the last column is 6.0000, so all is well. After we calculate the individual weights, we use a second table to aid in calculating the four summation terms in Equation \ref{5.13} and Equation \ref{5.14}. Adding the values in the last four columns gives\[\sum_{i = 1}^{n} w_i x_i = 0.3644 \quad \sum_{i = 1}^{n} w_i y_i = 44.9499 \quad \sum_{i = 1}^{n} w_i x_i^2 = 0.0499 \quad \sum_{i = 1}^{n} w_i x_i y_i = 6.1451 \nonumber\]Substituting these values into Equation \ref{5.13} and Equation \ref{5.14} gives the estimated slope and estimated y-intercept as\[b_1 = \frac {(6 \times 6.1451) - (0.3644 \times 44.9499)} {(6 \times 0.0499) - (0.3644)^2} = 122.985 \nonumber\]\[b_0 = \frac{44.9499 - (122.985 \times 0.3644)} {6} = 0.0224 \nonumber\]The calibration equation is\[S_{std} = 122.98 \times C_{std} + 0.02 \nonumber\]
Figure 5.4.7 shows the calibration curve for the weighted regression and the calibration curve for the unweighted regression in Example 5.4.1 . Although the two calibration curves are very similar, there are slight differences in the slope and in the y-intercept. Most notably, the y-intercept for the weighted linear regression is closer to the expected value of zero. Because the standard deviation for the signal, Sstd, is smaller for smaller concentrations of analyte, Cstd, a weighted linear regression gives more emphasis to these standards, allowing for a better estimate of the y-intercept.
Equations for calculating confidence intervals for the slope, the y-intercept, and the concentration of analyte when using a weighted linear regression are not as easy to define as for an unweighted linear regression [Bonate, P. J. Anal. Chem. 1993, 65, 1367–1372].
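A short Python check of the weighted linear regression example, using Equation \ref{5.13} and Equation \ref{5.14} with the weighted summations quoted above (n = 6).

```python
# Short check of the weighted regression example: slope and y-intercept from the
# weighted summations (weights already normalized so that they sum to n).

n = 6
sum_wx, sum_wy = 0.3644, 44.9499
sum_wx2, sum_wxy = 0.0499, 6.1451

b1 = (n * sum_wxy - sum_wx * sum_wy) / (n * sum_wx2 - sum_wx ** 2)   # Equation 5.14
b0 = (sum_wy - b1 * sum_wx) / n                                      # Equation 5.13

print(f"weighted slope b1     = {b1:.3f}")    # ~122.98
print(f"weighted intercept b0 = {b0:.4f}")    # ~0.0224
```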
The confidence interval for the analyte's concentration, however, is at its optimum value when the analyte's signal is near the weighted centroid, yc , of the calibration curve.\[y_c = \frac {1} {n} \sum_{i = 1}^{n} w_i x_i \nonumber\]
If we remove our assumption that indeterminate errors affecting a calibration curve are present only in the signal (y), then we also must factor into the regression model the indeterminate errors that affect the analyte's concentration in the calibration standards (x). The solution for the resulting regression line is computationally more involved than that for either the unweighted or weighted regression lines. Although we will not consider the details in this textbook, you should be aware that neglecting the presence of indeterminate errors in x can bias the results of a linear regression.
See, for example, Analytical Methods Committee, "Fitting a linear functional relationship to data with error on both variables," AMC Technical Brief, March 2002, as well as this chapter's Additional Resources.
A straight-line regression model, despite its apparent complexity, is the simplest functional relationship between two variables. What do we do if our calibration curve is curvilinear—that is, if it is a curved-line instead of a straight-line? One approach is to try transforming the data into a straight-line. Logarithms, exponentials, reciprocals, square roots, and trigonometric functions have been used in this way. A plot of log(y) versus x is a typical example. Such transformations are not without complications, of which the most obvious is that data with a uniform variance in y will not maintain that uniform variance after it is transformed.
It is worth noting that the term "linear" does not mean a straight-line. A linear function may contain more than one additive term, but each such term has one and only one adjustable multiplicative parameter. The function\[y = ax + bx^2 \nonumber\]is an example of a linear function because the terms x and x2 each include a single multiplicative parameter, a and b, respectively. This is why you can use linear regression to fit a polynomial equation to your data. The function\[y = x^b \nonumber\]is nonlinear because b is not a multiplicative parameter; it is, instead, a power. Sometimes it is possible to transform a nonlinear function into a linear function. For example, taking the log of both sides of the nonlinear function above gives a linear function.\[\log(y) = b \log(x) \nonumber\]
Another approach to developing a linear regression model is to fit a polynomial equation to the data, such as \(y = a + b x + c x^2\). You can use linear regression to calculate the parameters a, b, and c, although the equations are different from those for the linear regression of a straight-line. If you cannot fit your data using a single polynomial equation, it may be possible to fit separate polynomial equations to short segments of the calibration curve. The result is a single continuous calibration curve known as a spline function.
For details about curvilinear regression, see (a) Sharaf, M. A.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York, 1986; (b) Deming, S. N.; Morgan, S. L. Experimental Design: A Chemometric Approach, Elsevier: Amsterdam, 1987.
The regression models in this chapter apply only to functions that contain a single independent variable, such as a signal that depends upon the analyte's concentration.
The regression models in this chapter apply only to functions that contain a single independent variable, such as a signal that depends upon the analyte’s concentration. In the presence of an interferent, however, the signal may depend on the concentrations of both the analyte and the interferent\[S = k_A C_A + k_I C_I + S_{reag} \nonumber\]where kI is the interferent’s sensitivity and CI is the interferent’s concentration. Multivariate calibration curves are prepared using standards that contain known amounts of both the analyte and the interferent, and modeled using multivariate regression. See Beebe, K. R.; Kowalski, B. R. Anal. Chem. 1987, 59, 1007A–1017A for additional details, and check out this chapter’s Additional Resources for more information about linear regression with errors in both variables, curvilinear regression, and multivariate regression.This page titled 5.4: Linear Regression and Calibration Curves is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
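A minimal sketch of such a multivariate calibration in R, assuming a set of hypothetical mixed standards that contain known concentrations of both the analyte and the interferent; the intercept from lm estimates Sreag and the two slopes estimate kA and kI.

# hypothetical mixed standards (concentrations and signals for illustration only)
CA = c(0.0, 0.1, 0.2, 0.1, 0.3, 0.2)
CI = c(0.0, 0.0, 0.1, 0.2, 0.1, 0.3)
S  = c(0.01, 1.02, 2.52, 2.04, 3.53, 3.55)
# fit S = kA*CA + kI*CI + Sreag
mvfit = lm(S ~ CA + CI)
coef(mvfit)    # (Intercept) estimates Sreag; CA and CI estimate kA and kI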
5.5: Compensating for the Reagent Blank
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/05%3A_Standardizing_Analytical_Methods/5.05%3A_Compensating_for_the_Reagent_Blank
Thus far in our discussion of strategies for standardizing analytical methods, we have assumed that a suitable reagent blank is available to correct for signals arising from sources other than the analyte. We did not, however, ask an important question: “What constitutes an appropriate reagent blank?” Surprisingly, the answer is not immediately obvious. In one study, approximately 200 analytical chemists were asked to evaluate a data set consisting of a normal calibration curve, a separate analyte-free blank, and three samples of different size, but drawn from the same source [Cardone, M. J. Anal. Chem. 1986, 58, 433–438]. The first two columns in Table 5.5.1 show a series of external standards and their corresponding signals. The normal calibration curve for the data is\[S_{std} = 0.0750 \times W_{std} + 0.1250 \nonumber\]where the y-intercept of 0.1250 is the calibration blank. A separate reagent blank gives the signal for an analyte-free sample. In working up this data, the analytical chemists used at least four different approaches to correct the signals: (a) ignoring both the calibration blank, CB, and the reagent blank, RB, which clearly is incorrect; (b) using the calibration blank only; (c) using the reagent blank only; and (d) using both the calibration blank and the reagent blank. The first four rows of Table 5.5.2 show the equations for calculating the analyte’s concentration using each approach, along with the reported concentrations for the analyte in each sample. \(C_A = \text{ concentration of analyte; } W_A = \text{ weight of analyte; } W_{samp} = \text{ weight of sample; }\)\(k_A = \text{ slope of calibration curve (0.0750; slope of calibration equation); } CB = \text{ calibration blank (0.125; intercept of calibration equation); }\)\(RB = \text{ reagent blank (0.100); } TYB = \text{ total Youden blank (0.185; see text)}\) That all four methods give a different result for the analyte’s concentration underscores the importance of choosing a proper blank, but does not tell us which blank is correct. Because all four methods fail to predict the same concentration of analyte for each sample, none of these blank corrections properly accounts for an underlying constant source of determinate error. To correct for a constant method error, a blank must account for signals from any reagents and solvents used in the analysis and any bias that results from interactions between the analyte and the sample’s matrix. Both the calibration blank and the reagent blank compensate for signals from reagents and solvents. Any difference in their values is due to indeterminate errors in preparing and analyzing the standards. Because we are considering a matrix effect of sorts, you might think that the method of standard additions is one way to overcome this problem. Although the method of standard additions can compensate for proportional determinate errors, it cannot correct for a constant determinate error; see Ellison, S. L. R.; Thompson, M. T. “Standard additions: myth and reality,” Analyst, 2008, 133, 992–997. Unfortunately, neither a calibration blank nor a reagent blank can correct for a bias that results from an interaction between the analyte and the sample’s matrix. To be effective, the blank must include both the sample’s matrix and the analyte and, consequently, it must be determined using the sample itself. One approach is to measure the signal for samples of different size, and to determine the regression line for a plot of Ssamp versus the amount of sample.
The resulting y-intercept gives the signal in the absence of sample, and is known as the total Youden blank [Cardone, M. J. Anal. Chem. 1986, 58, 438–445]. This is the true blank correction. The regression line for the three samples in Table 5.5.1 is\[S_{samp} = 0.009844 \times W_{samp} + 0.185 \nonumber\]giving a true blank correction of 0.185. As shown in Table 5.5.2 , using this value to correct Ssamp gives identical values for the concentration of analyte in all three samples. The use of the total Youden blank is not common in analytical work, with most chemists relying on a calibration blank when using a calibration curve and a reagent blank when using a single-point standardization. As long as we can ignore any constant bias due to interactions between the analyte and the sample’s matrix, which is often the case, the accuracy of an analytical method will not suffer. It is a good idea, however, to check for constant sources of error before relying on either a calibration blank or a reagent blank.This page titled 5.5: Compensating for the Reagent Blank is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
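Because the total Youden blank is just the y-intercept of a regression of Ssamp versus the amount of sample, it is easy to extract with the R tools described in Section 5.6. The sketch below uses hypothetical sample weights and signals chosen only to be consistent with the regression line reported above; the actual data in Table 5.5.1 are not reproduced here.

# hypothetical sample weights and signals, consistent with the line above
Wsamp = c(10, 20, 30)
Ssamp = c(0.283, 0.382, 0.480)
youden = lm(Ssamp ~ Wsamp)
coef(youden)    # the intercept is the total Youden blank (about 0.185 here)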
5.6: Using Excel and R for a Linear Regression
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/05%3A_Standardizing_Analytical_Methods/5.06%3A_Using_Excel_and_R_for_a_Linear_Regression
Although the calculations in this chapter are relatively straightforward—consisting, as they do, mostly of summations—it is tedious to work through problems using nothing more than a calculator. Both Excel and R include functions for completing a linear regression analysis and for visually evaluating the resulting model. Let’s use Excel to fit the following straight-line model to the data in Example 5.4.1.\[y = \beta_0 + \beta_1 x \nonumber\]Enter the data into a spreadsheet, as shown in Figure 5.6.1 . Depending upon your needs, there are many ways that you can use Excel to complete a linear regression analysis. We will consider three approaches here. If all you need are values for the slope, \(\beta_1\), and the y-intercept, \(\beta_0\), you can use the following functions:
= intercept(known_y's, known_x's)
= slope(known_y's, known_x's)
where known_y's is the range of cells that contain the signals (y), and known_x's is the range of cells that contain the concentrations (x). For example, if you click on an empty cell and enter
= slope(B2:B7, A2:A7)
Excel returns the value of the slope (120.7057143). To obtain the slope and the y-intercept, along with additional statistical details, you can use the data analysis tools in the Data Analysis ToolPak. The ToolPak is not a standard part of Excel’s installation. To see if you have access to the Analysis ToolPak on your computer, select Tools from the menu bar and look for the Data Analysis... option. If you do not see Data Analysis..., select Add-ins... from the Tools menu. Check the box for the Analysis ToolPak and click on OK to install it. Select Data Analysis... from the Tools menu, which opens the Data Analysis window. Scroll through the window, select Regression from the available options, and press OK. Place the cursor in the box for Input Y range and then click and drag over cells B1:B7. Place the cursor in the box for Input X range and click and drag over cells A1:A7. Because cells A1 and B1 contain labels, check the box for Labels. Including labels is a good idea. Excel’s summary output uses the x-axis label to identify the slope. Select the radio button for Output range and click on any empty cell; this is where Excel will place the results. Clicking OK generates the information shown in Figure 5.6.2 . There are three parts to Excel’s summary of a regression analysis. At the top of Figure 5.6.2 is a table of Regression Statistics. The standard error is the standard deviation about the regression, sr. Also of interest is the value for Multiple R, which is the model’s correlation coefficient, r, a term with which you may already be familiar. The correlation coefficient is a measure of the extent to which the regression model explains the variation in y. Values of r range from –1 to +1. The closer the correlation coefficient is to ±1, the better the model is at explaining the data. A correlation coefficient of 0 means there is no linear relationship between x and y. In developing the calculations for linear regression, we did not consider the correlation coefficient. There is a reason for this. For most straight-line calibration curves the correlation coefficient is very close to +1, typically 0.99 or better. There is a tendency, however, to put too much faith in the correlation coefficient’s significance, and to assume that an r greater than 0.99 means the linear regression model is appropriate. Figure 5.6.3 provides a useful counterexample. Although the regression line has a correlation coefficient of 0.993, the data clearly is curvilinear.
The take-home lesson here is simple: do not fall in love with the correlation coefficient! The second table in Figure 5.6.2 is entitled ANOVA, which stands for analysis of variance. We will take a closer look at ANOVA in Chapter 14. For now, it is sufficient to understand that this part of Excel’s summary provides information on whether the linear regression model explains a significant portion of the variation in the values of y. The value for F is the result of an F-test of the following null and alternative hypotheses.
H0: the regression model does not explain the variation in y
HA: the regression model does explain the variation in y
The value in the column for Significance F is the probability for retaining the null hypothesis. In this example, the probability is \(2.5 \times 10^{-6}\%\), which is strong evidence for accepting the regression model. As is the case with the correlation coefficient, a small value for the probability is a likely outcome for any calibration curve, even when the model is inappropriate. The probability for retaining the null hypothesis for the data in Figure 5.6.3 , for example, is \(9.0 \times 10^{-7}\%\). See Chapter 4.6 for a review of the F-test. The third table in Figure 5.6.2 provides a summary of the model itself. The values for the model’s coefficients—the slope, \(\beta_1\), and the y-intercept, \(\beta_0\)—are identified as intercept and with your label for the x-axis data, which in this example is Cstd. The standard deviations for the coefficients, \(s_{b_0}\) and \(s_{b_1}\), are in the column labeled Standard error. The column t Stat and the column P-value are for the following t-tests.
slope: \(H_0 \text{: } \beta_1 = 0 \quad H_A \text{: } \beta_1 \neq 0\)
y-intercept: \(H_0 \text{: } \beta_0 = 0 \quad H_A \text{: } \beta_0 \neq 0\)
The results of these t-tests provide convincing evidence that the slope is not zero, but there is no evidence that the y-intercept differs significantly from zero. Also shown are the 95% confidence intervals for the slope and the y-intercept (lower 95% and upper 95%). See Chapter 4.6 for a review of the t-test. A third approach to completing a regression analysis is to program a spreadsheet using Excel’s built-in formula for a summation
= sum(first cell:last cell)
and its ability to parse mathematical equations. The resulting spreadsheet is shown in Figure 5.6.4 . You can use Excel to examine your data and the regression line. Begin by plotting the data. Organize your data in two columns, placing the x values in the left-most column. Click and drag over the data and select Charts from the ribbon. Select Scatter, choosing the option without lines that connect the points. To add a regression line to the chart, click on the chart’s data and select Chart: Add Trendline... from the main menu. Pick the straight-line model and click OK to add the line to your chart. By default, Excel displays the regression line from your first point to your last point. Figure 5.6.5 shows the result for the data in Figure 5.6.1 . Excel also will create a plot of the regression model’s residual errors. To create the plot, build the regression model using the Analysis ToolPak, as described earlier. Clicking on the option for Residual plots creates the plot shown in Figure 5.6.6 . Excel’s biggest limitation for a regression analysis is that it does not provide a function to calculate the uncertainty when predicting values of x.
In terms of this chapter, Excel cannot calculate the uncertainty for the analyte’s concentration, CA, given the signal for a sample, Ssamp. Another limitation is that Excel does not have a built-in function for a weighted linear regression. You can, however, program a spreadsheet to handle these calculations. Use Excel to complete the regression analysis in Exercise 5.4.1. Begin by entering the data into an Excel spreadsheet, following the format shown in Figure 5.6.1 . Because Excel’s Data Analysis tools provide most of the information we need, we will use it here. The resulting output, which is shown below, provides the slope and the y-intercept, along with their respective 95% confidence intervals. Excel does not provide a function for calculating the uncertainty in the analyte’s concentration, CA, given the signal for a sample, Ssamp. You must complete these calculations by hand. With an Ssamp of 0.114, we find that CA is\[C_A = \frac {S_{samp} - b_0} {b_1} = \frac {0.114 - 0.0014} {29.59 \text{ M}^{-1}} = 3.80 \times 10^{-3} \text{ M} \nonumber\]The standard deviation in CA is\[s_{C_A} = \frac {1.996 \times 10^{-3}} {29.59} \sqrt{\frac {1} {3} + \frac {1} {6} + \frac {(0.114 - 0.1183)^2} {(29.59)^2 \times 4.408 \times 10^{-5}}} = 4.772 \times 10^{-5} \nonumber\]and the 95% confidence interval is\[\mu = C_A \pm ts_{C_A} = 3.80 \times 10^{-3} \pm \{2.78 \times (4.772 \times 10^{-5}) \} \nonumber\]\[\mu = 3.80 \times 10^{-3} \text{ M} \pm 0.13 \times 10^{-3} \text{ M} \nonumber\]Let’s use R to fit the following straight-line model to the data in Example 5.4.1.\[y = \beta_0 + \beta_1 x \nonumber\]To begin, create objects that contain the concentration of the standards and their corresponding signals.
> conc = c(0, 0.1, 0.2, 0.3, 0.4, 0.5)
> signal = c(0, 12.36, 24.83, 35.91, 48.79, 60.42)
The command for a straight-line linear regression model is
lm(y ~ x)
where y and x are the objects that contain our data. To access the results of the regression analysis, we assign them to an object using the following command
> model = lm(signal ~ conc)
where model is the name we assign to the object. As you might guess, lm is short for linear model. You can choose any name for the object that contains the results of the regression analysis. To evaluate the results of a linear regression we need to examine the data and the regression line, and to review a statistical summary of the model. To examine our data and the regression line, we use the plot command, which takes the following general form
plot(x, y, optional arguments to control style)
where x and y are the objects that contain our data, and the abline command
abline(object, optional arguments to control style)
where object is the object that contains the results of the linear regression. Entering the commands
> plot(conc, signal, pch = 19, col = "blue", cex = 2)
> abline(model, col = "red")
creates the plot shown in Figure 5.6.7 . To review a statistical summary of the regression model, we use the summary command.
> summary(model)
The resulting output, shown in Figure 5.6.8 , contains three sections. The first section of R’s summary of the regression model lists the residual errors. To examine a plot of the residual errors, use the command
> plot(model, which = 1)
which produces the result shown in Figure 5.6.9 . Note that R plots the residuals against the predicted (fitted) values of y instead of against the known values of x. The choice of how to plot the residuals is not critical, as you can see by comparing Figure 5.6.9 to Figure 5.6.6 .
The line in Figure 5.6.9 is a smoothed fit of the residuals. The reason for including the argument which = 1 is not immediately obvious. When you use R’s plot command on an object created by the lm command, the default is to create four charts summarizing the model’s suitability. The first of these charts is the residual plot; thus, which = 1 limits the output to this plot. The second section of Figure 5.6.8 provides the model’s coefficients—the slope, \(\beta_1\), and the y-intercept, \(\beta_0\)—along with their respective standard deviations (Std. Error). The column t value and the column Pr(>|t|) are for the following t-tests.
slope: \(H_0 \text{: } \beta_1 = 0 \quad H_A \text{: } \beta_1 \neq 0\)
y-intercept: \(H_0 \text{: } \beta_0 = 0 \quad H_A \text{: } \beta_0 \neq 0\)
The results of these t-tests provide convincing evidence that the slope is not zero, but no evidence that the y-intercept differs significantly from zero. The last section of the regression summary provides the standard deviation about the regression (residual standard error), the square of the correlation coefficient (multiple R-squared), and the result of an F-test on the model’s ability to explain the variation in the y values. For a discussion of the correlation coefficient and the F-test of a regression model, as well as their limitations, refer to the section on using Excel’s data analysis tools. Unlike Excel, R includes a command for predicting the uncertainty in an analyte’s concentration, CA, given the signal for a sample, Ssamp. This command is not part of R’s standard installation. To use the command you need to install the chemCal package by entering the following command (note: you will need an internet connection to download the package).
> install.packages("chemCal")
After installing the package, you need to load the functions into R using the following command (note: you will need to do this step each time you begin a new R session as the package does not automatically load when you start R).
> library("chemCal")
You need to install a package once, but you need to load the package each time you plan to use it. There are ways to configure R so that it automatically loads certain packages; see An Introduction to R for more information. The command for predicting the uncertainty in CA is inverse.predict, which takes the following form for an unweighted linear regression
inverse.predict(object, newdata, alpha = value)
where object is the object that contains the regression model’s results, newdata is an object that contains the values for Ssamp, and value is the numerical value for the significance level. Let’s use this command to complete Example 5.4.3. First, we create an object that contains the values of Ssamp
> sample = c(29.32, 29.16, 29.51)
and then we complete the computation using the following command
> inverse.predict(model, sample, alpha = 0.05)
producing the result shown in Figure 5.6.10 . The analyte’s concentration, CA, is given by the value $Prediction, and its standard deviation, \(s_{C_A}\), is shown as $`Standard Error`.
The value for $Confidence is the confidence interval, \(\pm t s_{C_A}\), for the analyte’s concentration, and $`Confidence Limits` provides the lower limit and upper limit for the confidence interval for CA. R’s command for an unweighted linear regression also allows for a weighted linear regression if we include an additional argument, weights, whose value is an object that contains the weights.
lm(y ~ x, weights = object)
Let’s use this command to complete Example 5.4.4. First, we need to create an object that contains the weights, which in R are the reciprocals of the squared standard deviations in y, \((s_{y_i})^{-2}\). Using the data from Example 5.4.4, we enter
> syi = c(0.02, 0.02, 0.07, 0.13, 0.22, 0.33)
> w = 1/syi^2
to create the object that contains the weights. The commands
> modelw = lm(signal ~ conc, weights = w)
> summary(modelw)
generate the output shown in Figure 5.6.11 . Any difference between the results shown here and the results shown in Example 5.4.4 is the result of round-off errors in our earlier calculations. You may have noticed that this way of defining weights is different than that shown in Equation 5.4.15. In deriving equations for a weighted linear regression, you can choose to normalize the sum of the weights to equal the number of points, or you can choose not to—the algorithm in R does not normalize the weights. Use R to complete the regression analysis in Exercise 5.4.1. The figure below shows the R session for this problem, including loading the chemCal package, creating objects to hold the values for Cstd, Sstd, and Ssamp. Note that for Ssamp, we do not have the actual values for the three replicate measurements. In place of the actual measurements, we just enter the average signal three times. This is okay because the calculation depends on the average signal and the number of replicates, and not on the individual measurements.This page titled 5.6: Using Excel and R for a Linear Regression is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
5.7: Problems
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/05%3A_Standardizing_Analytical_Methods/5.07%3A_Problems
1. Suppose you use a serial dilution to prepare 100 mL each of a series of standards with concentrations of \(1.00 \times10^{-5}\), \(1.00 \times10^{-4}\), \(1.00 \times10^{-3}\), and \(1.00 \times10^{-2}\) M from a 0.100 M stock solution. Calculate the uncertainty for each solution using a propagation of uncertainty, and compare to the uncertainty if you prepare each solution as a single dilution of the stock solution. You will find tolerances for different types of volumetric glassware and digital pipets in Table 4.2.1 and Table 4.2.2. Assume that the uncertainty in the stock solution’s molarity is ±0.0002.2. Three replicate determinations of Stotal for a standard solution that is 10.0 ppm in analyte give values of 0.163, 0.157, and 0.161 (arbitrary units). The signal for the reagent blank is 0.002. Calculate the concentration of analyte in a sample with a signal of 0.118.3. A 10.00-g sample that contains an analyte is transferred to a 250-mL volumetric flask and diluted to volume. When a 10.00 mL aliquot of the resulting solution is diluted to 25.00 mL it gives a signal of 0.235 (arbitrary units). A second 10.00-mL portion of the solution is spiked with 10.00 mL of a 1.00-ppm standard solution of the analyte and diluted to 25.00 mL. The signal for the spiked sample is 0.502. Calculate the weight percent of analyte in the original sample.4. A 50.00 mL sample that contains an analyte gives a signal of 11.5 (arbitrary units). A second 50 mL aliquot of the sample, which is spiked with 1.00 mL of a 10.0-ppm standard solution of the analyte, gives a signal of 23.1. What is the analyte’s concentration in the original sample?5. A standard additions calibration curve based on Equation 5.3.10 places \(S_{spike} \times (V_o + V_{std})\) on the y-axis and \(C_{std} \times V_{std}\) on the x-axis. Derive equations for the slope and the y-intercept and explain how you can determine the amount of analyte in a sample from the calibration curve. In addition, clearly explain why you cannot plot Sspike on the y-axis and \(C_{std} \times \{V_{std}/(V_o + V_{std})\}\) on the x-axis.6. A standard sample contains 10.0 mg/L of analyte and 15.0 mg/L of internal standard. Analysis of the sample gives signals for the analyte and the internal standard of 0.155 and 0.233 (arbitrary units), respectively. Sufficient internal standard is added to a sample to make its concentration 15.0 mg/L. Analysis of the sample yields signals for the analyte and the internal standard of 0.274 and 0.198, respectively. Report the analyte’s concentration in the sample.7. For each of the pair of calibration curves shown ibelow, select the calibration curve that uses the more appropriate set of standards. Briefly explain the reasons for your selections. The scales for the x-axis and the y-axis are the same for each pair.8. The following data are for a series of external standards of Cd2+ buffered to a pH of 4.6.(a) Use a linear regression analysis to determine the equation for the calibration curve and report confidence intervals for the slope and the y-intercept.(b) Construct a plot of the residuals and comment on their significance.At a pH of 3.7 the following data were recorded for the same set of external standards.(c) How much more or less sensitive is this method at the lower pH?(d) A single sample is buffered to a pH of 3.7 and analyzed for cadmium, yielding a signal of 66.3 nA. Report the concentration of Cd2+ in the sample and its 95% confidence interval.The data in this problem are from Wojciechowski, M.; Balcerzak, J. Anal. 
Chim. Acta 1991, 249, 433–445. 9. To determine the concentration of analyte in a sample, a standard addition is performed. A 5.00-mL portion of sample is analyzed and then successive 0.10-mL spikes of a 600.0 ppb standard of the analyte are added, analyzing after each spike. The following table shows the results of this analysis. Construct an appropriate standard additions calibration curve and use a linear regression analysis to determine the concentration of analyte in the original sample and its 95% confidence interval. 10. Troost and Olavesen investigated the application of an internal standardization to the quantitative analysis of polynuclear aromatic hydrocarbons. The following results were obtained for the analysis of phenanthrene using isotopically labeled phenanthrene as an internal standard. Each solution was analyzed twice. \(C_A/C_{IS}\) 0.514, 0.522, 0.993, 1.024, 1.486, 1.471, 2.044, 2.080, 2.342, 2.550 (a) Determine the equation for the calibration curve using a linear regression, and report confidence intervals for the slope and the y-intercept. Average the replicate signals for each standard before you complete the linear regression analysis. (b) Based on your results explain why the authors concluded that the internal standardization was inappropriate. The data in this problem are from Troost, J. R.; Olavesen, E. Y. Anal. Chem. 1996, 68, 708–711. 11. In Chapter 4.6 we used a paired t-test to compare two analytical methods that were used to analyze independently a series of samples of variable composition. An alternative approach is to plot the results for one method versus the results for the other method. If the two methods yield identical results, then the plot should have an expected slope, \(\beta_1\), of 1.00 and an expected y-intercept, \(\beta_0\), of 0.0. We can use a t-test to compare the slope and the y-intercept from a linear regression to the expected values. The appropriate test statistic for the y-intercept is found by rearranging Equation 5.4.10.\[t_{exp} = \frac {|\beta_0 - b_0|} {s_{b_0}} = \frac {|b_0|} {s_{b_0}} \nonumber\]Rearranging Equation 5.4.9 gives the test statistic for the slope.\[t_{exp} = \frac {|\beta_1 - b_1|} {s_{b_1}} = \frac {|b_1|} {s_{b_1}} \nonumber\]Reevaluate the data in Problem 25 from Chapter 4 using the same significance level as in the original problem. Although this is a common approach for comparing two analytical methods, it does violate one of the requirements for an unweighted linear regression—that indeterminate errors affect y only. Because indeterminate errors affect both analytical methods, the result of an unweighted linear regression is biased. More specifically, the regression underestimates the slope, b1, and overestimates the y-intercept, b0. We can minimize the effect of this bias by placing the more precise analytical method on the x-axis, by using more samples to increase the degrees of freedom, and by using samples that uniformly cover the range of concentrations. For more information, see Miller, J. C.; Miller, J. N. Statistics for Analytical Chemistry, 3rd ed. Ellis Horwood PTR Prentice-Hall: New York, 1993. Alternative approaches are found in Hartman, C.; Smeyers-Verbeke, J.; Penninckx, W.; Massart, D. L. Anal. Chim. Acta 1997, 338, 19–40, and Zwanziger, H. W.; Sârbu, C. Anal. Chem. 1998, 70, 1277–1280. 12. Consider the following three data sets, each of which gives values of y for the same values of x. (a) An unweighted linear regression analysis for the three data sets gives nearly identical results.
To three significant figures, each data set has a slope of 0.500 and a y-intercept of 3.00. The standard deviations in the slope and the y-intercept are 0.118 and 1.125 for each data set. All three standard deviations about the regression are 1.24. Based on these results for a linear regression analysis, comment on the similarity of the data sets. (b) Complete a linear regression analysis for each data set and verify that the results from part (a) are correct. Construct a residual plot for each data set. Do these plots change your conclusion from part (a)? Explain. (c) Plot each data set along with the regression line and comment on your results. (d) Data set 3 appears to contain an outlier. Remove the apparent outlier and reanalyze the data using a linear regression. Comment on your result. (e) Briefly comment on the importance of visually examining your data. These three data sets are taken from Anscombe, F. J. “Graphs in Statistical Analysis,” Amer. Statis. 1973, 27, 17–21. 13. Franke and co-workers evaluated a standard additions method for a voltammetric determination of Tl. A summary of their results is tabulated in the following table. Use a weighted linear regression to determine the standardization relationship for this data. The data in this problem are from Franke, J. P.; de Zeeuw, R. A.; Hakkert, R. Anal. Chem. 1978, 50, 1374–1380.This page titled 5.7: Problems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
5.8: Additional Resources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/05%3A_Standardizing_Analytical_Methods/5.08%3A_Additional_Resources
Although there are many experiments in the literature that incorporate external standards, the method of standard additions, or internal standards, the issue of choosing a method of standardization is not the experiment’s focus. One experiment designed to consider the issue of selecting a method of standardization is given here. In addition to the texts listed as suggested readings in Chapter 4, the following texts provide additional details on linear regression. The following articles provide more details about linear regression. Useful papers providing additional details on the method of standard additions are gathered here. Approaches that combine a standard addition with an internal standard are described in the following paper. The following papers discuss the importance of weighting experimental data when using linear regression. Algorithms for performing a linear regression with errors in both X and Y are discussed in the following papers. Also included here are papers that address the difficulty of using linear regression to compare two analytical methods. Outliers present a problem for a linear regression analysis. The following papers discuss the use of robust linear regression techniques. The following papers discuss some of the problems with using linear regression to analyze data that has been mathematically transformed into a linear form, as well as alternative methods of evaluating curvilinear data. More information on multivariate and multiple regression can be found in the following papers. An additional discussion on method blanks, including the use of the total Youden blank, is found in the following papers. There are a variety of computational packages for completing linear regression analyses. These papers provide details on their use in a variety of contexts.This page titled 5.8: Additional Resources is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
5.9: Chapter Summary and Key Terms
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/05%3A_Standardizing_Analytical_Methods/5.09%3A_Chapter_Summary_and_Key_Terms
In a quantitative analysis we measure a signal, Stotal, and calculate the amount of analyte, nA or CA, using one of the following equations.\[S_{total} = k_A n_A + S_{reag} \nonumber\]\[S_{total} = k_A C_A + S_{reag} \nonumber\]To obtain an accurate result we must eliminate determinate errors that affect the signal, Stotal, the method’s sensitivity, kA, and the signal due to the reagents, Sreag. To ensure that we accurately measure Stotal, we calibrate our equipment and instruments. To calibrate a balance, for example, we use a standard weight of known mass. The manufacturer of an instrument usually suggests appropriate calibration standards and calibration methods. To standardize an analytical method we determine its sensitivity. There are several standardization strategies available to us, including external standards, the method of standard additions, and internal standards. The most common strategy is a multiple-point external standardization and a normal calibration curve. We use the method of standard additions, in which we add known amounts of analyte to the sample, when the sample’s matrix complicates the analysis. When it is difficult to reproducibly handle samples and standards, we may choose to add an internal standard. Single-point standardizations are common, but are subject to greater uncertainty. Whenever possible, a multiple-point standardization is preferred, with results displayed as a calibration curve. A linear regression analysis provides an equation for the standardization. A reagent blank corrects for any contribution to the signal from the reagents used in the analysis. The most common reagent blank is one in which an analyte-free sample is taken through the analysis. When a simple reagent blank does not compensate for all constant sources of determinate error, other types of blanks, such as the total Youden blank, are used. Key terms: calibration curve, linear regression, multiple-point standardization, reagent grade, serial dilution, total Youden blank, external standard, matrix matching, normal calibration curve, residual error, single-point standardization, unweighted linear regression, internal standard, method of standard additions, primary standard, secondary standard, standard deviation about the regression, weighted linear regression.This page titled 5.9: Chapter Summary and Key Terms is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
6.1: Reversible Reactions and Chemical Equilibria
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.01%3A_Reversible_Reactions_and_Chemical_Equilibria
In 1798, the chemist Claude Berthollet accompanied Napoleon’s military expedition to Egypt. While visiting the Natron Lakes, a series of salt water lakes carved from limestone, Berthollet made an observation that led him to an important discovery. When exploring the lake’s shore, Berthollet found deposits of Na2CO3, a result he found surprising. Why did Berthollet find this result surprising and how did it contribute to an important discovery? Answering these questions provides us with an example of chemical reasoning and introduces us to the topic of this chapter.Napoleon’s expedition to Egypt was the first to include a significant scientific presence. The Commission of Sciences and Arts, which included Claude Berthollet, began with 151 members, and operated in Egypt for three years. In addition to Berthollet’s work, other results included a publication on mirages and a detailed catalogs of plant and animal life, mineralogy, and archeology. For a review of the Commission’s contributions, see Gillispie, C. G. “Scientific Aspects of the French Egyptian Expedition, 1798‐1801,” Proc. Am. Phil. Soc. 1989, 133, 447–474.At the end of the 18th century, chemical reactivity was explained in terms of elective affinities [Quilez, J. Chem. Educ. Res. Pract. 2004, 5, 69–87]. If, for example, substance A reacts with substance BC to form AB\[\text{A}+\text{BC} \rightarrow \text{AB}+\text{C} \nonumber\]then A and B were said to have an elective affinity for each other. With elective affinity as the driving force for chemical reactivity, reactions were understood to proceed to completion and to proceed in one direction. Once formed, the compound AB could not revert to A and BC.\[\text{A}+\text{BC} \nrightarrow \text{AB}+\text{C} \nonumber\]From his experience in the laboratory, Berthollet knew that adding solid Na2CO3 to a solution of CaCl2 produces a precipitate of CaCO3.\[\mathrm{Na}_{2} \mathrm{CO}_{3}(s)+\mathrm{CaCl}_{2}(a q) \rightarrow 2 \mathrm{NaCl}(a q)+\mathrm{CaCO}_{3}(s) \nonumber\]Understanding this, Berthollet was surprised to find solid Na2CO3 forming on the edges of the lake, particularly since the deposits formed only when the lake’s salt water, NaCl(aq), was in contact with solid limestone, CaCO3(s). Where the lake was in contact with clay soils, there was little or no Na2CO3.Natron is another name for the mineral sodium carbonate, Na2CO3•10H2O. In nature, it usually contains impurities of NaHCO3 and NaCl. In ancient Egypt, natron was mined and used for a variety of purposes, including as a cleaning agent and in mummification.Berthollet’s important insight was recognizing that the chemistry leading to the formation of Na2CO3 is the reverse of that seen in the laboratory.\[2 \mathrm{NaCl}(a q)+\mathrm{CaCO}_{3}(s) \rightarrow \mathrm{Na}_{2} \mathrm{CO}_{3}(s)+\mathrm{CaCl}_{2}(a q) \nonumber\]Using this insight Berthollet reasoned that the reaction is reversible, and that the relative amounts of NaCl, CaCO3, Na2CO3, and CaCl2 determine the direction in which the reaction occurs and the final composition of the reaction mixture. We recognize a reaction’s ability to move in both directions by using a double arrow when we write the reaction.\[\mathrm{Na}_{2} \mathrm{CO}_{3}(s)+\mathrm{CaCl}_{2}(a q) \rightleftharpoons 2 \mathrm{NaCl}(a q)+\mathrm{CaCO}_{3}(s) \nonumber\]For obvious reasons, we call the double arrow, \(\rightleftharpoons\), an equilibrium arrow.Berthollet’s reasoning that reactions are reversible was an important step in understanding chemical reactivity. 
When we mix together solutions of Na2CO3 and CaCl2 they react to produce NaCl and CaCO3. As the reaction takes place, if we monitor the mass of Ca2+ that remains in solution and the mass of CaCO3 that precipitates, the result looks something like Figure 6.1.1 . At the start of the reaction the mass of Ca2+ decreases and the mass of CaCO3 increases. Eventually the reaction reaches a point after which there is no further change in the amounts of these species. Such a condition is called a state of equilibrium.Although a system at equilibrium appears static on a macroscopic level, it is important to remember that the forward and the reverse reactions continue to occur. A reaction at equilibrium exists in a steady‐state, in which the rate at which a species forms equals the rate at which it is consumed.This page titled 6.1: Reversible Reactions and Chemical Equilibria is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
6.2: Thermodynamics and Equilibrium Chemistry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.02%3A_Thermodynamics_and_Equilibrium_Chemistry
Thermodynamics is the study of thermal, electrical, chemical, and mechanical forms of energy. The study of thermodynamics crosses many disciplines, including physics, engineering, and chemistry. Of the various branches of thermodynamics, the most important to chemistry is the study of how energy changes during a chemical reaction.Consider, for example, the general equilibrium reaction shown in Equation \ref{6.1}, which involves the species A, B, C, and D, with stoichiometric coefficients of a, b, c, and d.\[a A+b B \rightleftharpoons c C+d D \label{6.1}\]By convention, we identify the species on the left side of the equilibrium arrow as reactants and those on the right side of the equilibrium arrow as products. As Berthollet discovered, writing a reaction in this fashion does not guarantee that the reaction of A and B to produce C and D is favorable. Depending on initial conditions the reaction may move to the left, it may move to the right, or it may exist in a state of equilibrium. Understanding the factors that determine the reaction’s final equilibrium position is one of the goals of chemical thermodynamics.The direction of a reaction is that which lowers the overall free energy. At a constant temperature and pressure, which is typical of many benchtop chemical reactions, a reaction’s free energy is given by the Gibb’s free energy function\[\Delta G=\Delta H-T \Delta S \label{6.2}\]where T is the temperature in kelvin, and ∆G, ∆H, and ∆S are the differences in the Gibb's free energy, the enthalpy, and the entropy between the products and the reactants.Enthalpy is a measure of the flow of energy, as heat, during a chemical reaction. A reaction that releases heat has a negative ∆H and is called exothermic. An endothermic reaction absorbs heat from its surroundings and has a positive ∆H. Entropy is a measure of energy that is unavailable for useful, chemical work. The entropy of an individual species is always positive and generally is larger for gases than for solids, and for more complex molecules than for simpler molecules. Reactions that produce a large number of simple, gaseous products usually have a positive ∆S.For many students, entropy is the most difficult topic in thermodynamics to understand. For a rich resource on entropy, visit the following web site: http://entropysite.oxy.edu/.The sign of ∆G indicates the direction in which a reaction moves to reach its equilibrium position. A reaction is thermodynamically favorable when its enthalpy, ∆H, decreases and its entropy, ∆S, increases. Substituting the inequalities ∆H < 0 and ∆S > 0 into Equation \ref{6.2} shows that a reaction is thermodynamically favorable when ∆G is negative. When ∆G is positive the reaction is unfavorable as written (although the reverse reaction is favorable). A reaction at equilibrium has a ∆G of zero.Equation \ref{6.2} shows that the sign of ∆G depends on the signs of ∆H and of ∆S, and the temperature, T. The following table summarizes the possibilities.\(\Delta G > 0\) at all temperaturesNote that the what constitutes "low temperatures" or "high temperatures" depends on the reaction.As a reaction moves from its initial, non‐equilibrium condition to its equilibrium position, its value of ∆G approaches zero. At the same time, the chemical species in the reaction experience a change in their concentrations. 
The Gibb's free energy, therefore, must be a function of the concentrations of reactants and products.As shown in Equation \ref{6.3}, we can divide the Gibb’s free energy, ∆G, into two terms.\[\triangle G=\Delta G^{\circ}+R T \ln Q_r \label{6.3}\]The first term, ∆Go, is the change in the Gibb’s free energy when each species in the reaction is in its standard state, which we define as follows: gases with unit partial pressures, solutes with unit concentrations, and pure solids and pure liquids. The second term includes the reaction quotient, \(Q_r\), which accounts for non‐standard state pressures and concentrations. For reaction \ref{6.1} the reaction quotient is\[Q_r = \frac{[\mathrm{C}]^{c}[\mathrm{D}]^{d}}{[\mathrm{A}]^{a}[\mathrm{B}]^{b}} \label{6.4}\]where the terms in brackets are the concentrations of the reactants and products. Note that we define the reaction quotient with the products in the numerator and the reactants in the denominator. In addition, we raise the concentration of each species to a power equivalent to its stoichiometry in the balanced chemical reaction. For a gas, we use partial pressure in place of concentration. Pure solids and pure liquids do not appear in the reaction quotient.Although not shown here, each concentration term in Equation \ref{6.4} is divided by the corresponding standard state concentration; thus, the term [C]c really means\[\left\{\frac{[\mathrm{C}]}{[\mathrm{C}]^{\circ}}\right\} \nonumber\]where [C]o is the standard state concentration for C. There are two important consequences of this: the value of Q is unitless; and the ratio has a value of 1 for a pure solid or a pure liquid. This is the reason that pure solids and pure liquids do not appear in the reaction quotient.At equilibrium the Gibb’s free energy is zero, and Equation \ref{6.3} simplifies to\[\triangle G^{\circ}=-R T \ln K \nonumber\]where K is an equilibrium constant that defines the reaction’s equilibrium position. The equilibrium constant is just the reaction quotient’s numerical value when we substitute equilibrium concentrations into Equation \ref{6.4}.\[K = \frac{[\mathrm{C}]_{\mathrm{eq}}^{c}[\mathrm{D}]_{\mathrm{eq}}^{d}}{[\mathrm{A}]_{\mathrm{eq}}^{a}[\mathrm{B}]_{\mathrm{eq}}^{b}} \label{6.5}\]Here we include the subscript “eq” to indicate a concentration at equilibrium. Although generally we will omit the “eq” when we write an equilibrium constant expressions, it is important to remember that the value of K is determined by equilibrium concentrations.As written, Equation \ref{6.5} is a limiting law that applies only to infinitely dilute solutions where the chemical behavior of one species is unaffected by the presence of other species. Strictly speaking, Equation \ref{6.5} is written in terms of activities instead of concentrations. We will return to this point in Chapter 6.9. For now, we will stick with concentrations as this convention already is familiar to you.This page titled 6.2: Thermodynamics and Equilibrium Chemistry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
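A short numerical sketch of the relationship \(\triangle G^{\circ}=-R T \ln K\), written in R (the same software used in Chapter 5); the value chosen here for the standard-state free energy change is hypothetical and serves only to illustrate the calculation.

R_gas = 8.314      # gas constant, J/(mol K)
Temp  = 298.15     # temperature, K
dG0   = -20000     # hypothetical standard-state free energy change, J/mol
# delta G-standard = -RT ln K, so K = exp(-dG0/(R*T))
K = exp(-dG0 / (R_gas * Temp))
K                          # about 3.2e3 for this choice of dG0
-R_gas * Temp * log(K)     # recovers dG0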
6.3: Manipulating Equilibrium Constants
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.03%3A_Manipulating_Equilibrium_Constants
We will take advantage of two useful relationships when we work with equilibrium constants. First, if we reverse a reaction’s direction, the equilibrium constant for the new reaction is the inverse of that for the original reaction. For example, the equilibrium constant for the reaction\[\mathrm{A}+2 \mathrm{B}\rightleftharpoons \mathrm{AB}_{2} \quad \quad K_{1}=\frac{\left[\mathrm{AB}_{2}\right]}{[\mathrm{A}][\mathrm{B}]^{2}} \nonumber\]is the inverse of that for the reaction\[\mathrm{AB}_{2}\rightleftharpoons \mathrm{A}+2 \mathrm{B} \quad \quad K_{2}=\left(K_{1}\right)^{-1}=\frac{[\mathrm{A}][\mathrm{B}]^{2}}{\left[\mathrm{AB}_{2}\right]} \nonumber\]Second, if we add together two reactions to form a new reaction, the equilibrium constant for the new reaction is the product of the equilibrium constants for the original reactions.\[A+C\rightleftharpoons A C \quad \quad K_{3}=\frac{[A C]}{[A][C]} \nonumber\]\[\mathrm{AC}+\mathrm{C}\rightleftharpoons\mathrm{AC}_{2} \quad \quad K_{4}=\frac{\left[\mathrm{AC}_{2}\right]}{[\mathrm{AC}][\mathrm{C}]} \nonumber\]\[\mathrm{A}+2 \mathrm{C}\rightleftharpoons \mathrm{AC}_{2} \quad \quad K_{5}=K_{3} \times K_{4}=\frac{[\mathrm{AC}]}{[\mathrm{A}][\mathrm{C}]} \times \frac{\left[\mathrm{AC}_{2}\right]}{[\mathrm{AC}][\mathrm{C}]}=\frac{\left[\mathrm{AC}_{2}\right]}{[\mathrm{A}][\mathrm{C}]^{2}} \nonumber\]Calculate the equilibrium constant for the reaction\[2 \mathrm{A}+\mathrm{B}\rightleftharpoons \mathrm{C}+3 \mathrm{D} \nonumber\]given the following information\[\begin{array}{ll}{\text{Rxn} \ 1 : A+B\rightleftharpoons D} & {K_{1}=0.40} \\ {\text{Rxn} \ 2 : A+E\rightleftharpoons C+D+F} & {K_{2}=0.10} \\ {\text{Rxn} \ 3 : C+E\rightleftharpoons B} & {K_{3}=2.0} \\ {\text{Rxn} \ 4 : F+C\rightleftharpoons D+B} & {K_{4}=5.0}\end{array} \nonumber\]SolutionThe overall reaction is equivalent to\[\text{Rxn} \ 1+\text{Rxn} \ 2-\text{Rxn} \ 3+\text{Rxn} \ 4 \nonumber\]Subtracting a reaction is equivalent to adding the reverse reaction; thus, the overall equilibrium constant is\[K=\frac{K_{1} \times K_{2} \times K_{4}}{K_{3}}=\frac{0.40 \times 0.10 \times 5.0}{2.0}=0.10 \nonumber\]Calculate the equilibrium constant for the reaction\[C+D+F \rightleftharpoons 2 A+3 B \nonumber\]using the equilibrium constants from Example 6.3.1 .The overall reaction is equivalent to\[\operatorname{Rxn} 4-2 \times \operatorname{Rxn} 1 \nonumber\]Subtracting a reaction is equivalent to adding the reverse reaction; thus, the overall equilibrium constant is\[K=\frac{K_{4}}{\left(K_{1}\right)^{2}}=\frac{(5.0)}{(0.40)^{2}}=31.25 \approx 31 \nonumber\]This page titled 6.3: Manipulating Equilibrium Constants is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
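The arithmetic in Example 6.3.1 and Exercise 6.3.1 is easy to check numerically; in the short R sketch below, reversing a reaction corresponds to taking the reciprocal of its equilibrium constant and adding reactions corresponds to multiplying their equilibrium constants.

K1 = 0.40; K2 = 0.10; K3 = 2.0; K4 = 5.0
# Example 6.3.1: Rxn 1 + Rxn 2 - Rxn 3 + Rxn 4
K1 * K2 * K4 / K3    # 0.10
# Exercise 6.3.1: Rxn 4 - 2 x Rxn 1
K4 / K1^2            # 31.25, or 31 to two significant figures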
6.4: Equilibrium Constants for Chemical Reactions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.04%3A_Equilibrium_Constants_for_Chemical_Reactions
Several types of chemical reactions are important in analytical chemistry, either in preparing a sample for analysis or during the analysis. The most significant of these are precipitation reactions, acid–base reactions, complexation reactions, and oxidation–reduction reactions. In this section we review these reactions and their equilibrium constant expressions.Another common name for an oxidation–reduction reaction is a redox reaction, where “red” is short for reduction and “ox” is short for oxidation.In a precipitation reaction, two or more soluble species combine to form an insoluble precipitate. The most common precipitation reaction is a metathesis reaction in which two soluble ionic compounds exchange parts. For example, if we add a solution of lead nitrate, Pb(NO3)2, to a solution of potassium chloride, KCl, a precipitate of lead chloride, PbCl2, forms. We usually write a precipitation reaction as a net ionic equation, which shows only the precipitate and those ions that form the precipitate; thus, the precipitation reaction for PbCl2 is\[\mathrm{Pb}^{2+}(a q)+2 \mathrm{Cl}^{-}(a q) \rightleftharpoons \mathrm{PbCl}_{2}(s) \nonumber\]When we write the equilibrium constant for a precipitation reaction, we focus on the precipitate’s solubility; thus, for PbCl2, the solubility reaction is\[\mathrm{PbCl}_{2}(s)\rightleftharpoons \mathrm{Pb}^{2+}(a q)+2 \mathrm{Cl}^{-}(a q) \nonumber\]and its equilibrium constant, or solubility product, Ksp, is\[K_{\mathrm{sp}}=\left[\mathrm{Pb}^{2+}\right]\left[\mathrm{Cl}^{-}\right]^{2} \label{6.1}\]Even though it does not appear in the Ksp expression, it is important to remember that Equation \ref{6.1} is valid only if PbCl2(s) is present and in equilibrium with Pb2+ and Cl–. You will find values for selected solubility products in Appendix 10.A useful definition of acids and bases is that independently introduced in 1923 by Johannes Brønsted and Thomas Lowry. In the Brønsted‐Lowry definition, an acid is a proton donor and a base is a proton acceptor. Note the connection between these definitions—defining a base as a proton acceptor implies there is an acid available to donate the proton. For example, in reaction \ref{6.2} acetic acid, CH3COOH, donates a proton to ammonia, NH3, which serves as the base.\[\mathrm{CH}_{3} \mathrm{COOH}(aq)+\mathrm{NH}_{3}(aq) \rightleftharpoons \mathrm{NH}_{4}^{+}(aq)+\mathrm{CH}_{3} \mathrm{COO}^{-}(aq) \label{6.2}\]When an acid and a base react, the products are a new acid and a new base. For example, the acetate ion, CH3COO–, in reaction \ref{6.2} is a base that can accept a proton from the acidic ammonium ion, \(\text{NH}_4^+\), forming acetic acid and ammonia. We call the acetate ion the conjugate base of acetic acid, and we call the ammonium ion the conjugate acid of ammonia.The reaction of an acid with its solvent (typically water) is an acid dissociation reaction. We divide acids into two categories—strong and weak—based on their ability to donate a proton to the solvent. A strong acid, such as HCl, almost completely transfers its proton to the solvent, which acts as the base.\[\mathrm{HCl}(a q)+\mathrm{H}_{2} \mathrm{O}(l) \rightarrow \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{Cl}^{-}(a q) \nonumber\]We use a single arrow (\(\rightarrow\)) in place of the equilibrium arrow (\(\rightleftharpoons\)) because we treat HCl as if it dissociates completely in an aqueous solution. 
In water, the common strong acids are hydrochloric acid (HCl), hydroiodic acid (HI), hydrobromic acid (HBr), nitric acid (HNO3), perchloric acid (HClO4), and the first proton of sulfuric acid (H2SO4).The strength of an acid is a function of the acid and the solvent. For example, HCl does not act as a strong acid in methanol. In this case we use the equilibrium arrow when writing the acid–base reaction.\[\mathrm{HCl}+\mathrm{CH}_{3} \mathrm{OH}\rightleftharpoons \mathrm{CH}_{3} \mathrm{OH}_{2}^{+}+\mathrm{Cl}^{-} \nonumber\]A weak acid, of which aqueous acetic acid is one example, does not completely donate its acidic proton to the solvent. Instead, most of the acid remains undissociated with only a small fraction present as the conjugate base.\[\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \nonumber\]The equilibrium constant for this reaction is an acid dissociation constant, Ka, which we write as\[K_{a}=\frac{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]}=1.75 \times 10^{-5} \nonumber\]The magnitude of K provides information about a weak acid's relative strength, with a smaller Ka corresponding to a weaker acid. The ammonium ion, \(\text{NH}_4^+\), for example, has a Ka of \(5.702 \times 10^{-10}\) and is a weaker acid than acetic acid.Earlier we noted that we omit pure solids and pure liquids from equilibrium constant expressions. Because the solvent, H2O, is not pure, you might wonder why we have not included it in acetic acid’s Ka expression. Recall that we divide each term in an equilibrium constant expression by its standard state value. Because the concentration of H2O is so large—it is approximately 55.5 mol/L—its concentration as a pure liquid and as a solvent are virtually identical. The ratio\[\frac{\left[\mathrm{H}_{2} \mathrm{O}\right]}{\left[\mathrm{H}_{2} \mathrm{O}\right]^{\circ}} \nonumber\]is essentially 1.00.A monoprotic weak acid, such as acetic acid, has only a single acidic proton and a single acid dissociation constant. Other acids, such as phosphoric acid, have multiple acidic protons, each characterized by an acid dissociation constant. We call such acids polyprotic. 
Phosphoric acid, for example, has three acid dissociation reactions and three acid dissociation constants.\[\mathrm{H}_{3} \mathrm{PO}_{4}(a q)+\mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{H}_{2} \mathrm{PO}_{4}^{-}(a q) \nonumber\]\[K_{\mathrm{al}}=\frac{\left[\mathrm{H}_{2} \mathrm{PO}_{4}^{-}\right]\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{\left[\mathrm{H}_{3} \mathrm{PO}_{4}\right]}=7.11 \times 10^{-3} \nonumber\]\[\mathrm{H}_{2} \mathrm{PO}_{4}^-(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{HPO}_{4}^{2-}(a q) \nonumber\]\[K_{a 2}=\frac{\left[\mathrm{HPO}_{4}^{2-}\right]\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{\left[\mathrm{H}_{2} \mathrm{PO}_{4}^-\right]}=6.32 \times 10^{-8} \nonumber\]\[\mathrm{HPO}_{4}^{2-}(a q)+\mathrm{H}_{2} \mathrm{O}({l})\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{PO}_{4}^{3-}(a q) \nonumber\]\[K_{\mathrm{a} 3}=\frac{\left[\mathrm{PO}_{4}^{3-}\right]\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{\left[\mathrm{HPO}_{4}^{2-}\right]}=4.5 \times 10^{-13} \nonumber\]The decrease in the acid dissociation constants from Ka1 to Ka3 tells us that each successive proton is harder to remove. Consequently, H3PO4 is a stronger acid than \(\text{H}_2\text{PO}_4^-\), and \(\text{H}_2\text{PO}_4^-\) is a stronger acid than \(\text{HPO}_4^{2-}\).The most common example of a strong base is an alkali metal hydroxide, such as sodium hydroxide, NaOH, which completely dissociates to produce hydroxide ion.\[\mathrm{NaOH}(s) \rightarrow \mathrm{Na}^{+}(a q)+\mathrm{OH}^{-}(a q) \nonumber\]A weak base, such as the acetate ion, CH3COO–, only partially accepts a proton from the solvent, and is characterized by a base dissociation constant, Kb. For example, the base dissociation reaction and the base dissociation constant for the acetate ion are\[\mathrm{CH}_{3} \mathrm{COO}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{CH}_{3} \mathrm{COOH}(a q) \nonumber\]\[K_{\mathrm{b}}=\frac{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]\left[\mathrm{OH}^{-}\right]}{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}=5.71 \times 10^{-10} \nonumber\]A polyprotic weak base, like a polyprotic acid, has more than one base dissociation reaction and more than one base dissociation constant.Some species can behave as either a weak acid or as a weak base. For example, the following two reactions show the chemical reactivity of the bicarbonate ion, \(\text{HCO}_3^-\), in water.\[\mathrm{HCO}_{3}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CO}_{3}^{2-}(a q) \label{6.3}\]\[\mathrm{HCO}_{3}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{H}_{2} \mathrm{CO}_{3}(a q) \label{6.4}\]A species that is both a proton donor and a proton acceptor is called amphiprotic. Whether an amphiprotic species behaves as an acid or as a base depends on the equilibrium constants for the competing reactions. 
For bicarbonate, the acid dissociation constant for reaction \ref{6.3}\[K_{a 2}=\frac{\left[\mathrm{CO}_{3}^{2-}\right]\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{\left[\mathrm{HCO}_{3}^{-}\right]}=4.69 \times 10^{-11} \nonumber\]is smaller than the base dissociation constant for reaction \ref{6.4}.\[K_{\mathrm{b} 2}=\frac{\left[\mathrm{H}_{2} \mathrm{CO}_{3}\right]\left[\mathrm{OH}^{-}\right]}{\left[\mathrm{HCO}_{3}^{-}\right]}=2.25 \times 10^{-8} \nonumber\]Because bicarbonate is a stronger base than it is an acid, we expect that an aqueous solution of \(\text{HCO}_3^-\) is basic.Water is an amphiprotic solvent because it can serve as an acid or as a base. An interesting feature of an amphiprotic solvent is that it is capable of reacting with itself in an acid–base reaction.\[2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q) \label{6.5}\]We identify the equilibrium constant for this reaction as water’s dissociation constant, Kw,\[K_{w}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{OH}^{-}\right]=1.00 \times 10^{-14} \label{6.6}\]at a temperature of 24oC. The value of Kw varies substantially with temperature. For example, at 20oC Kw is \(6.809 \times 10^{-15}\), while at 30oC Kw is \(1.469 \times 10^{-14}\). At 25oC, Kw is \(1.008 \times 10^{-14}\), which is sufficiently close to \(1.00 \times 10^{-14}\) that we can use the latter value with negligible error.An important consequence of Equation \ref{6.6} is that the concentration of H3O+ and the concentration of OH– are related. If we know [H3O+] for a solution, then we can calculate [OH–] using Equation \ref{6.6}.What is the [OH–] if the [H3O+] is \(6.12 \times 10^{-5}\) M?Solution\[\left[\mathrm{OH}^{-}\right]=\frac{K_{w}}{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}=\frac{1.00 \times 10^{-14}}{6.12 \times 10^{-5}}=1.63 \times 10^{-10} \nonumber\]Equation \ref{6.6} allows us to develop a pH scale (\(\text{pH} = - \log [\text{H}_3\text{O}^+]\)) that indicates a solution’s acidity. When the concentrations of H3O+ and OH– are equal a solution is neither acidic nor basic; that is, the solution is neutral. Letting\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\left[\mathrm{OH}^{-}\right] \nonumber\]substituting into Equation \ref{6.6}\[K_{w}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}=1.00 \times 10^{-14} \nonumber\]and solving for [H3O+] gives\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\sqrt{1.00 \times 10^{-14}}=1.00 \times 10^{-7} \nonumber\]A neutral solution of water at 25oC has a hydronium ion concentration of \(1.00 \times 10^{-7}\) M and a pH of 7.00. In an acidic solution the concentration of H3O+ is greater than that for OH–, which means that\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]>1.00 \times 10^{-7} \mathrm{M} \nonumber\]The pH of an acidic solution, therefore, is less than 7.00. A basic solution, on the other hand, has a pH greater than 7.00. Figure 6.4.1 shows the pH scale and pH values for some representative solutions.A useful observation about weak acids and weak bases is that the strength of a weak base is inversely proportional to the strength of its conjugate weak acid. 
Consider, for example, the dissociation reactions of acetic acid and acetate.\[\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \ \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \label{6.7}\]\[\mathrm{CH}_{3} \mathrm{COO}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{CH}_{3} \mathrm{COOH}(a q) \label{6.8}\]Adding together these two reactions gives the reaction\[2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q) \nonumber\]for which the equilibrium constant is Kw. Because adding together two reactions is equivalent to multiplying their respective equilibrium constants, we may express Kw as the product of Ka for CH3COOH and Kb for CH3COO–.\[K_{\mathrm{w}}=K_{\mathrm{a}, \mathrm{CH}_{3} \mathrm{COOH}} \times K_{\mathrm{b}, \mathrm{CH}_{3} \mathrm{COO}^{-}} \nonumber\]For any weak acid, HA, and its conjugate weak base, A–, we can generalize this to the following equation\[K_{\mathrm{w}}=K_{\mathrm{a}, \mathrm{HA}} \times K_{\mathrm{b}, \mathrm{A}^{-}} \label{6.9}\]where HA and A– are a conjugate acid–base pair. The relationship between Ka and Kb for a conjugate acid–base pair simplifies our tabulation of acid and base dissociation constants. Appendix 11 includes acid dissociation constants for a variety of weak acids. To find the value of Kb for a weak base, use Equation \ref{6.9} and the Ka value for its corresponding weak acid.A common mistake when using Equation \ref{6.9} is to forget that it applies to a conjugate acid–base pair only.Using Appendix 11, calculate values for the following equilibrium constants.Solution\[\text { (a) } K_{\mathrm{b}, \mathrm{C}_5 \mathrm{H}_{5} \mathrm{N}}=\frac{K_{\mathrm{w}}}{K_{\mathrm{a}, \mathrm{C}_{\mathrm{5}} \mathrm{H}_{5} \mathrm{NH}^{+}}}=\frac{1.00 \times 10^{-14}}{5.90 \times 10^{-6}}=1.69 \times 10^{-9} \nonumber\]\[\text { (b) } K_{\mathrm{b}, \mathrm{H}_2 \mathrm{PO}_{4}^- }=\frac{K_{\mathrm{w}}}{K_{\mathrm{a}, \mathrm{H}_{\mathrm{3}} \mathrm{PO}_{4} }}=\frac{1.00 \times 10^{-14}}{7.11 \times 10^{-3}}=1.41 \times 10^{-12} \nonumber\]When finding the Kb value for a polyprotic weak base, be careful to choose the correct Ka value. Remember that Equation \ref{6.9} applies to a conjugate acid–base pair only. The conjugate acid of \(\text{H}_2\text{PO}_4^-\) is H3PO4, not \(\text{HPO}_4^{2-}\).Using Appendix 11, calculate Kb values for hydrogen oxalate, \(\text{HC}_2\text{O}_4^-\), and for oxalate, \(\text{C}_2\text{O}_4^{2-}\).The Kb for hydrogen oxalate is\[K_{\mathrm{b}, \mathrm{HC}_{2} \mathrm{O}_{4}^-}=\frac{K_{\mathrm{w}}}{K_{\mathrm{a}, \mathrm{H}_{2} \mathrm{C}_{2} \mathrm{O}_{4}}}=\frac{1.00 \times 10^{-14}}{5.60 \times 10^{-2}}=1.79 \times 10^{-13} \nonumber\]and the Kb for oxalate is\[K_{\mathrm{b}, \mathrm{C}_{2} \mathrm{O}_{4}^{2-}}=\frac{K_{\mathrm{w}}}{K_{\mathrm{a}, \mathrm{HC}_{2} \mathrm{O}_{\mathrm{4}}^-}}=\frac{1.00 \times 10^{-14}}{5.42 \times 10^{-5}}=1.85 \times 10^{-10} \nonumber\]As we expect, the Kb value for \(\text{C}_2\text{O}_4^{2-}\) is larger than that for \(\text{HC}_2\text{O}_4^-\).A more general definition of acids and bases was proposed in 1923 by G. N. Lewis. The Brønsted‐Lowry definition of acids and bases focuses on an acid’s proton‐donating ability and a base’s proton‐accepting ability. Lewis theory, on the other hand, uses the breaking and the forming of covalent bonds to describe acids and bases. 
In this treatment, an acid is an electron pair acceptor and a base is an electron pair donor. Although we can apply Lewis theory to the treatment of acid–base reactions, it is more useful for treating complexation reactions between metal ions and ligands. The following reaction between the metal ion Cd2+ and the ligand NH3 is typical of a complexation reaction.\[\mathrm{Cd}^{2+}(a q)+4: \mathrm{NH}_{3}(a q)\rightleftharpoons \mathrm{Cd}\left( : \mathrm{NH}_{3}\right)_{4}^{2+}(a q) \label{6.10}\]The product of this reaction is a metal–ligand complex. In writing this reaction we show ammonia as :NH3, using a pair of dots to emphasize the pair of electrons that it donates to Cd2+. In subsequent reactions we will omit this notation. We characterize the formation of a metal–ligand complex by a formation constant, Kf. For example, the complexation reaction between Cd2+ and NH3, reaction \ref{6.10}, has the following equilibrium constant.\[K_{f}=\frac{\left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{4}^{2+}\right]}{\left[\mathrm{Cd}^{2+}\right]\left[\mathrm{NH}_{3}\right]^{4}}=5.5 \times 10^{7} \label{6.11}\]The reverse of reaction \ref{6.10} is a dissociation reaction, which we characterize by a dissociation constant, Kd, that is the reciprocal of Kf. Many complexation reactions occur in a stepwise fashion. For example, the reaction between Cd2+ and NH3 involves four successive reactions.\[\mathrm{Cd}^{2+}(a q)+\mathrm{NH}_{3}(a q) \rightleftharpoons \mathrm{Cd}\left(\mathrm{NH}_{3}\right)^{2+}(a q) \label{6.12}\]\[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)^{2+}(a q)+\mathrm{NH}_{3}(a q)\rightleftharpoons \mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{2}^{2+}(a q) \label{6.13}\]\[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{2}^{2+}(a q)+\mathrm{NH}_{3}(a q)\rightleftharpoons \mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{3}^{2+}(a q) \label{6.14}\]\[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{3}^{2+}(a q)+\mathrm{NH}_{3}(a q)\rightleftharpoons \mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{4}^{2+}(a q) \label{6.15}\]To avoid ambiguity, we divide formation constants into two categories. A stepwise formation constant, which we designate as Ki for the ith step, describes the successive addition of one ligand to the metal–ligand complex from the previous step. Thus, the equilibrium constants for reactions \ref{6.12}–\ref{6.15} are, respectively, K1, K2, K3, and K4. An overall, or cumulative formation constant, which we designate as \(\beta_i\), describes the addition of i ligands to the free metal ion. The equilibrium constant in Equation \ref{6.11} is correctly identified as \(\beta_4\), where\[\beta_{4}=K_{1} \times K_{2} \times K_{3} \times K_{4} \nonumber\]In general\[\beta_{n}=K_{1} \times K_{2} \times \cdots \times K_{n}=\prod_{i=1}^{n} K_{i} \nonumber\]Stepwise and overall formation constants for selected metal–ligand complexes are in Appendix 12. A formation constant describes the addition of one or more ligands to a free metal ion. To find the equilibrium constant for a complexation reaction that includes a solid, we combine appropriate Ksp and Kf expressions.
For example, the solubility of AgCl increases in the presence of excess chloride ions as the result of the following complexation reaction.\[\operatorname{AgCl}(s)+\mathrm{Cl}^{-}(a q)\rightleftharpoons\operatorname{Ag}(\mathrm{Cl})_{2}^{-}(a q) \label{6.16}\]We can write this reaction as the sum of three other equilibrium reactions with known equilibrium constants—the solubility of AgCl, which is described by its Ksp reaction\[\mathrm{AgCl}(s) \rightleftharpoons \mathrm{Ag}^{+}(a q)+\mathrm{Cl}^{-}(a q) \nonumber\]and the stepwise formation of \(\text{AgCl}_2^-\), which is described by K1and K 2 reactions.\[\mathrm{Ag}^{+}(a q)+\mathrm{Cl}^{-}(a q) \rightleftharpoons \operatorname{Ag} \mathrm{Cl}(a q) \nonumber\]\[\operatorname{AgCl}(a q)+\mathrm{Cl}^{-}(a q) \rightleftharpoons \operatorname{AgCl}_{2}^{-}(a q) \nonumber\]The equilibrium constant for reaction \ref{6.16}, therefore, is \(K_\text{sp} \times K_1 \times K_2\).Determine the value of the equilibrium constant for the reaction\[\mathrm{PbCl}_{2}(s)\rightleftharpoons \mathrm{PbCl}_{2}(a q) \nonumber\]SolutionWe can write this reaction as the sum of three other reactions. The first of these reactions is the solubility of PbCl2(s), which is described by its Ksp reaction.\[\mathrm{PbCl}_{2}(s)\rightleftharpoons \mathrm{Pb}^{2+}(a q)+2 \mathrm{Cl}^{-}(a q) \nonumber\]The remaining two reactions are the stepwise formation of PbCl2(aq), which are described by K1 and K2.\[\mathrm{Pb}^{2+}(a q)+\mathrm{Cl}^{-}(a q)\rightleftharpoons \mathrm{PbCl}^{+}(a q) \nonumber\]\[\mathrm{PbCl}^{+}(a q)+\mathrm{Cl}^{-}(a q)\rightleftharpoons \mathrm{PbCl}_{2}(a q) \nonumber\]Using values for Ksp, K1, and K2 from Appendix 10 and Appendix 12, we find that the equilibrium constant is\[K=K_{\mathrm{sp}} \times K_{1} \times K_{2}=\left(1.7 \times 10^{-5}\right) \times 38.9 \times 1.62=1.1 \times 10^{-3} \nonumber\]What is the equilibrium constant for the following reaction? You will find appropriate equilibrium constants in Appendix 10 and Appendix 12.\[\operatorname{Ag} \mathrm{Br}(s)+2 \mathrm{S}_{2} \mathrm{O}_{3}^{2-}(a q)\rightleftharpoons\operatorname{Ag}\left(\mathrm{S}_{2} \mathrm{O}_{3}\right)_2^{3-}(a q)+\mathrm{Br}^{-}(a q) \nonumber\]We can write the reaction as a sum of three other reactions. The first reaction is the solubility of AgBr(s), which we characterize by its Ksp.\[\operatorname{AgBr}(s)\rightleftharpoons\operatorname{Ag}^{+}(a q)+\mathrm{Br}^{-}(a q) \nonumber\]The remaining two reactions are the stepwise formation of \(\text{Ag(S}_2\text{O}_3)_2^{3-}\), which we characterize by K1 and K2.\[\mathrm{Ag}^{+}(a q)+\mathrm{S}_{2} \mathrm{O}_{3}^{2-}(a q)\rightleftharpoons\operatorname{Ag}\left(\mathrm{S}_{2} \mathrm{O}_{3}\right)^{-}(a q) \nonumber\]\[\operatorname{Ag}\left(\mathrm{S}_{2} \mathrm{O}_{3}\right)^{-}(a q)+\mathrm{S}_{2} \mathrm{O}_{3}^{2-}(a q)\rightleftharpoons\operatorname{Ag}\left(\mathrm{S}_{2} \mathrm{O}_{3}\right)_{2}^{3-}(a q) \nonumber\]Using values for Ksp, K1, and K2 from Appendix 10 and Appendix 12, we find that the equilibrium constant for our reaction is\[K=K_{sp} \times K_{1} \times K_{2}=\left(5.0 \times 10^{-13}\right)\left(6.6 \times 10^{8}\right)\left(7.1 \times 10^{4}\right)=23 \nonumber\]An oxidation–reduction reaction occurs when electrons move from one reactant to another reactant. As a result of this transfer of electrons, the reactants undergo a change in oxidation state. 
The reactant that increases its oxidation state undergoes oxidation, and the reactant that decreases its oxidation state undergoes reduction. For example, in the following redox reaction between Fe3+ and oxalic acid, H2C2O4, iron is reduced because its oxidation state changes from +3 to +2.\[2 \mathrm{Fe}^{3+}(a q)+\mathrm{H}_{2} \mathrm{C}_{2} \mathrm{O}_{4}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \\ {2 \mathrm{Fe}^{2+}(a q)+2 \mathrm{CO}_{2}(g)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q)} \label{6.17}\]Oxalic acid, on the other hand, is oxidized because the oxidation state for carbon increases from +3 in H2C2O4 to +4 in CO2. We can divide a redox reaction, such as reaction \ref{6.17}, into separate half‐reactions that show the oxidation and the reduction processes.\[\mathrm{H}_{2} \mathrm{C}_{2} \mathrm{O}_{4}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons 2 \mathrm{CO}_{2}(g)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q)+2 e^{-} \nonumber\]\[\mathrm{Fe}^{3+}(a q)+e^{-} \rightleftharpoons \mathrm{Fe}^{2+}(a q) \nonumber\]It is important to remember, however, that an oxidation reaction and a reduction reaction always occur as a pair. We formalize this relationship by identifying as a reducing agent the reactant that is oxidized, because it provides the electrons for the reduction half‐reaction. Conversely, the reactant that is reduced is an oxidizing agent. In reaction \ref{6.17}, Fe3+ is the oxidizing agent and H2C2O4 is the reducing agent. The products of a redox reaction also have redox properties. For example, the Fe2+ in reaction \ref{6.17} is oxidized to Fe3+ when CO2 is reduced to H2C2O4. Borrowing some terminology from acid–base chemistry, Fe2+ is the conjugate reducing agent of the oxidizing agent Fe3+, and CO2 is the conjugate oxidizing agent of the reducing agent H2C2O4. Unlike precipitation reactions, acid–base reactions, and complexation reactions, we rarely express the equilibrium position of a redox reaction with an equilibrium constant. Because a redox reaction involves a transfer of electrons from a reducing agent to an oxidizing agent, it is convenient to consider the reaction’s thermodynamics in terms of the electron. For a reaction in which one mole of a reactant undergoes oxidation or reduction, the net transfer of charge, Q, in coulombs is\[Q=n F \nonumber\]where n is the moles of electrons per mole of reactant, and F is Faraday’s constant (96485 C/mol). The free energy, ∆G, to move this charge, Q, over a change in potential, E, is\[\Delta G=E Q \nonumber \]The change in free energy (in kJ/mole) for a redox reaction, therefore, is\[\Delta G=-n F E \label{6.18}\]where ∆G has units of kJ/mol. The minus sign in Equation \ref{6.18} is the result of a different convention for assigning a reaction’s favorable direction. In thermodynamics, a reaction is favored when ∆G is negative, but an oxidation‐reduction reaction is favored when E is positive. Substituting Equation \ref{6.18} into equation 6.2.3\[-n F E=-n F E^{\circ}+R T \ln Q_r \nonumber\]and dividing by –nF, leads to the well‐known Nernst equation\[E=E^{\circ}-\frac{R T}{n F} \ln Q_r \nonumber\]where Eo is the potential under standard‐state conditions. Substituting appropriate values for R and F, assuming a temperature of 25 oC (298 K), and switching from ln to log gives the potential in volts as\[E=E^{\mathrm{o}}-\frac{0.05916}{n} \log Q_r \label{6.19}\]A redox reaction’s standard potential, Eo, provides an alternative way of expressing its equilibrium constant and, therefore, its equilibrium position.
Because a reaction at equilibrium has a ∆G of zero, the potential, E, also is zero at equilibrium. Substituting these values into Equation \ref{6.19} and rearranging provides a relationship between E o and K\[E^{\circ}=\frac{0.05916}{n} \log K \label{6.20}\]A standard potential is the potential when all species are in their standard states. You may recall that we define standard state conditions as follows: all gases have unit partial pressures, all solutes have unit concentrations, and all solids and liquids are pure.We generally do not tabulate standard potentials for redox reactions. Instead, we calculate Eo using the standard potentials for the corresponding oxidation half‐reaction and reduction half‐reaction. By convention, standard potentials are provided for reduction half‐reactions. The standard potential for a redox reaction, Eo, is\[E^{\circ}=E_{red}^{\circ}-E_{ox}^{\circ} \nonumber\]where \(E_{red}^{\circ}\) and \(E_{ox}^{\circ}\) are the standard reduction potentials for the reduction half‐reaction and the oxidation half‐reaction.Because we cannot measure the potential for a single half‐reaction, we arbitrarily assign a standard reduction potential of zero to a reference half‐reaction\[2 \mathrm{H}_{3} \mathrm{O}^{+}(a q)+2 e^{-}\rightleftharpoons 2 \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{H}_{2}(g) \nonumber\]and report all other reduction potentials relative to this reference. Appendix 13 contains a list of selected standard reduction potentials. The more positive the standard reduction potential, the more favorable the reduction reaction is under standard state conditions. For example, under standard state conditions the reduction of Cu2+ to Cu (Eo = +0.3419 V) is more favorable than the reduction of Zn2+ to Zn (Eo = –0.7618 V).Calculate (a) the standard potential, (b) the equilibrium constant, and (c) the potential when [Ag+] = 0.020 M and [Cd2+] = 0.050 M, for the following reaction at 25oC.\[\mathrm{Cd}(s)+2 \mathrm{Ag}^{+}(a q)\rightleftharpoons2 \mathrm{Ag}(s)+\mathrm{Cd}^{2+}(a q) \nonumber\]Solution(a) In this reaction Cd is oxidized and Ag+ is reduced. The standard cell potential, therefore, is\[E^{\circ} = E^{\circ}_{\text{Ag}^+/ \text{Ag}} - E^{\circ}_{\text{Cd}^{2+}/ \text{Cd}} = 0.7996 - (-0.4030) = 1.2026 \ \text{V} \nonumber\](b) To calculate the equilibrium constant we substitute appropriate values into Equation \ref{6.20}.\[E^{\circ}=1.2026 \ \mathrm{V}=\frac{0.05916 \ \mathrm{V}}{2} \log K \nonumber\]Solving for K gives the equilibrium constant as\[\begin{array}{l}{\log K=40.6558} \\ {K=4.527 \times 10^{40}}\end{array} \nonumber\](c) To calculate the potential when [Ag+] is 0.020 M and [Cd2+] is 0.050M, we use the appropriate relationship for the reaction quotient, Qr, in Equation \ref{6.19}.\[\begin{array}{c}{E=E^{\circ}-\frac{0.05916 \ \mathrm{V}}{n} \log \frac{\left[\mathrm{Cd}^{2+}\right]}{\left[\mathrm{Ag}^{+}\right]^{2}}} \\ {E=1.2026 \ \mathrm{V}-\frac{0.05916 \ \mathrm{V}}{2} \log \frac{0.050}{(0.020)^{2}}=1.14 \ \mathrm{V}}\end{array} \nonumber\]For the following reaction at 25oC\[5 \mathrm{Fe}^{2+}(a q)+\mathrm{MnO}_{4}^{-}(a q)+8 \mathrm{H}^{+}(a q) \rightleftharpoons 5 \mathrm{Fe}^{3+}(a q)+\mathrm{Mn}^{2+}(a q)+4 \mathrm{H}_{2} \mathrm{O}(l) \nonumber\]calculate (a) the standard potential, (b) the equilibrium constant, and (c) the potential under these conditions: [Fe2+] = 0.50 M, [Fe3+] = 0.10 M, [\(\text{MnO}_4^{-}\)] = 0.025 M, [Mn2+] = 0.015 M, and a pH of 7.00. 
See Appendix 13 for standard state reduction potentials.The two half‐reactions are the oxidation of Fe2+ and the reduction of \(\text{MnO}_4^-\).\[\mathrm{Fe}^{2+}(a q) \rightleftharpoons \mathrm{Fe}^{3+}(a q)+e^{-} \nonumber\]\[\mathrm{MnO}_{4}^{-}(a q)+8 \mathrm{H}^{+}(a q)+5 e^{-} \rightleftharpoons \mathrm{Mn}^{2+}(a q)+4 \mathrm{H}_{2} \mathrm{O}(l) \nonumber\]From Appendix 13, the standard state reduction potentials for these half‐reactions are\[E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} = 0.771 \ \text{V and } E_{\text{MnO}_4^-/\text{Mn}^{2+}}^{\circ} = 1.51 \ \text{V} \nonumber\](a) The standard state potential for the reaction is\[E^{\circ} = E_{\text{MnO}_4^-/\text{Mn}^{2+}}^{\circ} - E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} = 1.51 \ \text{V} - 0.771 \ \text{V } = 0.74 \ \text{V} \nonumber\](b) To calculate the equilibrium constant we substitute appropriate values into Equation \ref{6.20}.\[E^{\circ}=0.74 \ \mathrm{V}=\frac{0.05916}{5} \log K \nonumber\]Solving for K gives its value as \(3.5 \times 10^{62}\).(c) To calculate the potential under these non‐standard state conditions, we make appropriate substitutions into the Nernst equation.\[E=E^{\circ}-\frac{R T}{n F} \ln \frac{\left[\mathrm{Mn}^{2+}\right]\left[\mathrm{Fe}^{3+}\right]^{5}}{\left[\mathrm{MnO}_{4}^{-}\right]\left[\mathrm{Fe}^{2+}\right]^{5}\left[\mathrm{H}^{+}\right]^{8}} \nonumber\]\[E=0.74-\frac{0.05916}{5} \log \frac{(0.015)(0.10)^{5}}{(0.025)(0.50)^{5}\left(1 \times 10^{-7}\right)^{8}}=0.12 \ \mathrm{V} \nonumber\]When writing precipitation, acid–base, and metal–ligand complexation reactions, we represent acidity as H3O+. Redox reactions more commonly are written using H+ instead of H3O+. For the reaction in Exercise 6.4.3 , we could replace H+ with H3O+ and increase the stoichiometric coefficient for H2O from 4 to 12.This page titled 6.4: Equilibrium Constants for Chemical Reactions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
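Calculations like those in the example and exercise above are easy to check numerically. The short Python sketch below is one way to do so; it is not part of the original text, the function names are ours, and it assumes a temperature of 25 oC so that the Nernst slope is 0.05916 V, as in equation 6.19 and equation 6.20.

```python
import math

def standard_potential(e_red, e_ox):
    # E(standard) for the overall reaction from the two standard reduction potentials
    return e_red - e_ox

def equilibrium_constant(e_std, n):
    # K from the standard potential (equation 6.20, 25 degrees C)
    return 10 ** (n * e_std / 0.05916)

def nernst(e_std, n, q):
    # potential from the Nernst equation (equation 6.19, 25 degrees C)
    return e_std - (0.05916 / n) * math.log10(q)

# Cd(s) + 2Ag+ <=> 2Ag(s) + Cd2+, using the values from the example above
e_std = standard_potential(0.7996, -0.4030)   # 1.2026 V
K = equilibrium_constant(e_std, n=2)          # ~4.5e40
Qr = 0.050 / 0.020 ** 2                       # [Cd2+]/[Ag+]^2
print(e_std, K, nernst(e_std, n=2, q=Qr))     # last value ~1.14 V
```

The same three functions, with n = 5 and the reaction quotient written for the Fe2+/MnO4– reaction, reproduce the results of the exercise as well.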
6.5: Le Châtelier’s Principle
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.05%3A_Le_Chateliers_Principle
At a temperature of 25oC, acetic acid’s dissociation reaction\[\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \nonumber\]has an equilibrium constant of\[K_{\mathrm{a}}=\frac{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]}=1.75 \times 10^{-5} \label{6.1}\]Because Equation \ref{6.1} has three variables—[CH3COOH], [CH3COO–], and [H3O+]—it does not have a unique mathematical solution. Nevertheless, although two solutions of acetic acid may have different values for [CH3COOH], [CH3COO–], and [H3O+], each solution has the same value of Ka.If we add sodium acetate to a solution of acetic acid, the concentration of CH3COO– increases, which suggests there is an increase in the value of Ka; however, because Ka must remain constant, the concentration of all three species in Equation \ref{6.1} must change to restore Ka to its original value. In this case, a partial reaction of CH3COO– and H3O+ decreases their concentrations, increases the concentration of CH3COOH, and reestablishes the equilibrium.The observation that a system at equilibrium responds to an external action by reequilibrating itself in a manner that diminishes that action, is formalized as Le Châtelier’s principle. One common action is to change the concentration of a reactant or product for a system at equilibrium. As noted above for a solution of acetic acid, if we add a product to a reaction at equilibrium the system responds by converting some of the products into reactants. Adding a reactant has the opposite effect, resulting in the conversion of reactants to products.When we add sodium acetate to a solution of acetic acid, we directly apply the action to the system. It is also possible to apply a change concentration indirectly. Consider, for example, the solubility of AgCl.\[\mathrm{AgCl}(s) \rightleftharpoons \mathrm{Ag}^{+}(a q)+\mathrm{Cl}^{-}(a q) \label{6.2}\]The effect on the solubility of AgCl of adding AgNO3 is obvious, but what is the effect if we add a ligand that forms a stable, soluble complex with Ag+? Ammonia, for example, reacts with Ag+ as shown here\[\mathrm{Ag}^{+}(a q)+2 \mathrm{NH}_{3}(a q) \rightleftharpoons \mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}(a q) \label{6.3}\]Adding ammonia decreases the concentration of Ag+ as the \(\text{Ag(NH}_3)_2^+\) complex forms. In turn, a decrease in the concentration of Ag+ increases the solubility of AgCl as reaction \ref{6.2} reestablishes its equilibrium position. Adding together reaction \ref{6.2} and reaction \ref{6.3} clarifies the effect of ammonia on the solubility of AgCl, by showing ammonia as a reactant.\[\mathrm{AgCl}(s)+2 \mathrm{NH}_{3}(a q) \rightleftharpoons \mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}(a q)+\mathrm{Cl}^{-}(a q) \label{6.4}\]So what is the effect on the solubility of AgCl of adding AgNO3? Adding AgNO3 increases the concentration of Ag+ in solution. To reestablish equilibrium, some of the Ag+ and Cl– react to form additional AgCl; thus, the solubility of AgCl decreases. 
The solubility product, Ksp, of course, remains unchanged. What happens to the solubility of AgCl if we add HNO3 to the equilibrium solution defined by reaction \ref{6.4}?SolutionNitric acid is a strong acid, which reacts with ammonia as shown here\[\mathrm{HNO}_{3}(a q)+\mathrm{NH}_{3}(a q)\rightleftharpoons \mathrm{NH}_{4}^{+}(a q)+\mathrm{NO}_{3}^{-}(a q) \nonumber\]Adding nitric acid lowers the concentration of ammonia. Decreasing ammonia’s concentration causes reaction \ref{6.4} to move from products to reactants, decreasing the solubility of AgCl. Increasing or decreasing the partial pressure of a gas is the same as increasing or decreasing its concentration. Because the concentration of a gas depends on its partial pressure, and not on the total pressure of the system, adding or removing an inert gas has no effect on a reaction’s equilibrium position. We can use the ideal gas law to deduce the relationship between pressure and concentration. Starting with PV = nRT, we solve for the molar concentration\[M=\frac{n}{V}=\frac{P}{R T} \nonumber\]Of course, this assumes that the gas is behaving ideally, which usually is a reasonable assumption under normal laboratory conditions. Most reactions involve reactants and products dispersed in a solvent. If we change the amount of solvent by diluting the solution, then the concentrations of all reactants and products must decrease; conversely, if we allow the solvent to evaporate partially, then the concentrations of the solutes must increase. The effect of simultaneously changing the concentrations of all reactants and products is not intuitively as obvious as when we change the concentration of a single reactant or product. As an example, let’s consider how diluting a solution affects the equilibrium position for the formation of the aqueous silver‐amine complex (reaction \ref{6.3}). The equilibrium constant for this reaction is\[\beta_{2}=\frac{\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right]_{\mathrm{eq}}}{\left[\mathrm{Ag}^{+}\right]_{\mathrm{eq}}\left[\mathrm{NH}_{3}\right]_{\mathrm{eq}}^{2}} \label{6.5}\]where we include the subscript “eq” for clarification. If we dilute a portion of this solution with an equal volume of water, each of the concentration terms in Equation \ref{6.5} is cut in half. The reaction quotient, Qr, becomes\[Q_r=\frac{0.5\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right]_{\mathrm{eq}}}{0.5\left[\mathrm{Ag}^{+}\right]_{\mathrm{eq}}(0.5)^{2}\left[\mathrm{NH}_{3}\right]_{\mathrm{eq}}^{2}}=\frac{0.5}{(0.5)^{3}} \times \frac{\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right]_{\mathrm{eq}}}{\left[\mathrm{Ag}^{+}\right]_{\mathrm{eq}}\left[\mathrm{NH}_{3}\right]_{\mathrm{eq}}^{2}}=4 \beta_{2} \label{6.6}\]Because Qr is greater than \(\beta_2\), equilibrium is reestablished by shifting the reaction to the left, decreasing the concentration of \(\text{Ag(NH}_3)_2^+\). Note that the new equilibrium position lies toward the side of the equilibrium reaction that has the greatest number of solute particles (one Ag+ ion and two molecules of NH3 versus a single metal‐ligand complex). If we concentrate the solution of \(\text{Ag(NH}_3)_2^+\) by evaporating some of the solvent, equilibrium is reestablished in the opposite direction. This is a general conclusion that we can apply to any reaction. Increasing volume always favors the direction that produces the greatest number of particles, and decreasing volume always favors the direction that produces the fewest particles.
If the number of particles is the same on both sides of the reaction, then the equilibrium position is unaffected by a change in volume.This page titled 6.5: Le Châtelier’s Principle is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
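The dilution argument behind equation 6.6 is easy to verify with a few lines of code. The Python sketch below is ours, not part of the original text; the value of beta2 and the equilibrium concentrations are made-up numbers chosen only so that the reaction quotient equals beta2 before dilution (see Appendix 12 for tabulated formation constants).

```python
def q_r(complex_conc, ag, nh3):
    # reaction quotient for Ag+ + 2 NH3 <=> Ag(NH3)2+ (equation 6.5)
    return complex_conc / (ag * nh3 ** 2)

beta2 = 1.7e7                           # illustrative value only
ag, nh3 = 1.0e-4, 1.0e-2                # made-up equilibrium concentrations
complex_conc = beta2 * ag * nh3 ** 2    # chosen so that Qr = beta2 at equilibrium

q_before = q_r(complex_conc, ag, nh3)
q_after = q_r(0.5 * complex_conc, 0.5 * ag, 0.5 * nh3)  # a two-fold dilution halves every term
print(q_after / q_before)               # 4.0, so Qr = 4*beta2 and the reaction shifts to the left
```

Whatever starting concentrations we pick, the ratio is always 4, which is the point of equation 6.6: dilution changes Qr even though every concentration changes by the same factor.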
6.6: Ladder Diagrams
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.06%3A_Ladder_Diagrams
When we develop or evaluate an analytical method, we often need to understand how the chemistry that takes place affects our results. Suppose we wish to isolate Ag+ by precipitating it as AgCl. If we also need to control pH, then we must use a reagent that does not adversely affect the solubility of AgCl. It is a mistake to use NH3 to adjust the pH, for example, because it increases the solubility of AgCl (see reaction 6.5.4). One of the primary sources of determinate errors in many analytical methods is failing to account for potential chemical interferences.In this section we introduce the ladder diagram as a simple graphical tool for visualizing equilibrium chemistry. We will use ladder diagrams to determine what reactions occur when we combine several reagents, to estimate the approximate composition of a system at equilibrium, and to evaluate how a change to solution conditions might affect an analytical method.Although not specifically on the topic of ladder diagrams as developed in this section, the following papers provide appropriate background information: (a) Runo, J. R.; Peters, D. G. J. Chem. Educ. 1993, 70, 708–713; (b) Vale, J.; Fernández‐Pereira, C.; Alcalde, M. J. Chem. Educ. 1993, 70, 790–795; (c) Fernández‐Pereira, C.; Vale, J. Chem. Educator 1996, 6, 1–18; (d) Fernández‐ Pereira, C.; Vale, J.; Alcalde, M. Chem. Educator 2003, 8, 15–21; (e) Fernández‐Pereira, C.; Alcalde, M.; Villegas, R.; Vale, J. J. Chem. Educ. 2007, 84, 520–525. Ladder diagrams are a great tool for helping you to think intuitively about analytical chemistry. We will make frequent use of them in the chapters to follow.Let’s use acetic acid, CH3COOH, to illustrate the process we will use to draw and to interpret an acid–base ladder diagram. Before we draw the diagram, however, let’s consider the equilibrium reaction in more detail. Acetic acid's acid dissociation reaction and equilibrium constant expression are\[\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \nonumber\]\[K_{\mathrm{a}}=\frac{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]}=1.75 \times 10^{-5} \nonumber\]First, let’s take the logarithm of each term in this equation and multiply through by –1\[-\log K_{a}=4.76=-\log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right]-\log \frac{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]} \nonumber\]Now, let’s replace –log[H3O+] with pH and rearrange the equation to obtain the result shown here.\[\mathrm{pH}=4.76+\log \frac{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]} \label{6.1}\]Equation \ref{6.1} tells us a great deal about the relationship between pH and the relative amounts of acetic acid and acetate at equilibrium. If the concentrations of CH3COOH and CH3COO– are equal, then Equation \ref{6.1} reduces to\[\mathrm{pH}=4.76+\log=4.76+0=4.76 \nonumber\]If the concentration of CH3COO– is greater than that of CH3COOH, then the log term in Equation \ref{6.1} is positive and the pH is greater than 4.76. This is a reasonable result because we expect the concentration of the conjugate base, CH3COO–, to increase as the pH increases. Similar reasoning will convince you that the pH is less than 4.76 when the concentration of CH3COOH exceeds that of CH3COO–.Now we are ready to construct acetic acid’s ladder diagram (Figure 6.6.1 ). 
First, we draw a vertical arrow that represents the solution’s pH, with smaller (more acidic) pH levels at the bottom and larger (more basic) pH levels at the top. Second, we draw a horizontal line at a pH equal to acetic acid’s pKa value. This line, or step on the ladder, divides the pH axis into regions where either CH3COOH or CH3COO– is the predominate species. This completes the ladder diagram. Using the ladder diagram, it is easy to identify the predominate form of acetic acid at any pH. At a pH of 3.5, for example, acetic acid exists primarily as CH3COOH. If we add sufficient base to the solution such that the pH increases to 6.5, the predominate form of acetic acid is CH3COO–. Draw a ladder diagram for the weak base p‐nitrophenolate and identify its predominate form at a pH of 6.00.SolutionTo draw a ladder diagram for a weak base, we simply draw the ladder diagram for its conjugate weak acid. From Appendix 11, the pKa for p‐nitrophenol is 7.15. The resulting ladder diagram is shown in Figure 6.6.2 . At a pH of 6.00, p‐nitrophenolate is present primarily in its weak acid form. Draw a ladder diagram for carbonic acid, H2CO3. Because H2CO3 is a diprotic weak acid, your ladder diagram will have two steps. What is the predominate form of carbonic acid when the pH is 7.00? Relevant equilibrium constants are in Appendix 11. From Appendix 11, the pKa values for H2CO3 are 6.352 and 10.329. The ladder diagram for H2CO3 is shown below. The predominate form at a pH of 7.00 is \(\text{HCO}_3^-\). A ladder diagram is particularly useful for evaluating the reactivity between a weak acid and a weak base. Figure 6.6.3 , for example, shows a single ladder diagram for acetic acid/acetate and for p‐nitrophenol/p‐nitrophenolate. An acid and a base can not co‐exist if their respective areas of predominance do not overlap. If we mix together solutions of acetic acid and sodium p‐nitrophenolate, the reaction\[\mathrm{C}_{6} \mathrm{H}_{4} \mathrm{NO}_{2}^{-}(a q)+\mathrm{CH}_{3} \mathrm{COOH}(a q)\rightleftharpoons \text{CH}_3\text{COO}^-(aq) + \text{C}_6\text{H}_4\text{NO}_2\text{H}(aq) \label{6.2}\]occurs because the areas of predominance for acetic acid and p‐nitrophenolate do not overlap. The solution’s final composition depends on which species is the limiting reagent. The following example shows how we can use the ladder diagram in Figure 6.6.3 to evaluate the result of mixing together solutions of acetic acid and p‐nitrophenolate. Predict the approximate pH and the final composition after mixing together 0.090 moles of acetic acid and 0.040 moles of p‐nitrophenolate.SolutionThe ladder diagram in Figure 6.6.3 indicates that the reaction between acetic acid and p‐nitrophenolate is favorable. Because acetic acid is in excess, we assume the reaction of p‐nitrophenolate to p‐nitrophenol is complete. At equilibrium essentially no p‐nitrophenolate remains and there are 0.040 mol of p‐nitrophenol. Converting p‐nitrophenolate to p‐nitrophenol consumes 0.040 moles of acetic acid; thus\[\begin{array}{c}{\text { moles } \mathrm{CH}_{3} \mathrm{COOH}=0.090-0.040=0.050 \ \mathrm{mol}} \\ {\text { moles } \mathrm{CH}_{3} \mathrm{COO}^{-}=0.040 \ \mathrm{mol}}\end{array} \nonumber\]According to the ladder diagram, the pH is 4.76 when there are equal amounts of CH3COOH and CH3COO–.
Because we have slightly more CH3COOH than CH3COO–, the pH is slightly less than 4.76. Using Figure 6.6.3 , predict the approximate pH and the composition of the solution formed by mixing together 0.090 moles of p‐nitrophenolate and 0.040 moles of acetic acid. The ladder diagram in Figure 6.6.3 indicates that the reaction between acetic acid and p‐nitrophenolate is favorable. Because p‐nitrophenolate is in excess, we assume the reaction of acetic acid to acetate is complete. At equilibrium essentially no acetic acid remains and there are 0.040 moles of acetate. Converting acetic acid to acetate consumes 0.040 moles of p‐nitrophenolate; thus\[\text { moles } p \text {-nitrophenolate }=0.090-0.040=0.050 \text { mol } \nonumber\]\[\text { moles } p\text{-nitrophenol }=0.040 \ \mathrm{mol} \nonumber\]According to the ladder diagram for this system, the pH is 7.15 when there are equal concentrations of p‐nitrophenol and p‐nitrophenolate. Because we have slightly more p‐nitrophenolate than we have p‐nitrophenol, the pH is slightly greater than 7.15. If the areas of predominance for an acid and a base overlap, then we do not expect that much of a reaction will occur. For example, if we mix together solutions of CH3COO– and p‐nitrophenol, we do not expect a significant change in the moles of either reagent. Furthermore, the pH of the mixture must be between 4.76 and 7.15, with the exact pH depending upon the relative amounts of CH3COO– and p‐nitrophenol. We also can use an acid–base ladder diagram to evaluate the effect of pH on other equilibria. For example, the solubility of CaF2\[\mathrm{CaF}_{2}(s) \rightleftharpoons \mathrm{Ca}^{2+}(a q)+2 \mathrm{F}^{-}(a q) \nonumber\]is affected by pH because F– is a weak base. From Le Châtelier’s principle, we know that converting F– to HF will increase the solubility of CaF2. To minimize the solubility of CaF2 we need to maintain the solution’s pH so that F– is the predominate species. The ladder diagram for HF (Figure 6.6.4 ) shows us that maintaining a pH of more than 3.17 will minimize solubility losses. We can apply the same principles for constructing and interpreting an acid–base ladder diagram to equilibria that involve metal–ligand complexes. For a complexation reaction we define the ladder diagram’s scale using the concentration of uncomplexed, or free ligand, pL.
Using the formation of \(\text{Cd(NH}_3)^{2+}\) as an example\[\mathrm{Cd}^{2+}(a q)+\mathrm{NH}_{3}(a q) \rightleftharpoons \mathrm{Cd}\left(\mathrm{NH}_{3}\right)^{2+}(a q) \nonumber\]we can show that log K1 is the dividing line between the areas of predominance for Cd2+ and for \(\text{Cd(NH}_3)^{2+}\).\[K_{1}=3.55 \times 10^{2}=\frac{\left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)^{2+}\right]}{\left[\mathrm{Cd}^{2+}\right]\left[\mathrm{NH}_{3}\right]} \nonumber\]\[\log K_{1}=\log \left(3.55 \times 10^{2}\right)=\log \frac{\left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)^{2+}\right]}{\left[\mathrm{Cd}^{2+}\right]}-\log \left[\mathrm{NH}_{3}\right] \nonumber\]\[\log K_{1}=2.55=\log \frac{\left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)^{2+}\right]}{\left[\mathrm{Cd}^{2+}\right]}+\mathrm{p} \mathrm{NH}_{3} \nonumber\]\[\mathrm{p} \mathrm{NH}_{3}=\log K_{1}+\log \frac{\left[\mathrm{Cd}^{2+}\right]}{\left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)^{2+}\right]}=2.55+\log \frac{\left[\mathrm{Cd}^{2+}\right]}{\left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)^{2+}\right]} \nonumber\]Thus, Cd2+ is the predominate species when pNH3 is greater than 2.55 (a concentration of NH3 smaller than \(2.82 \times 10^{-3}\) M) and for a pNH3 value less than 2.55, \(\text{Cd(NH}_3)^{2+}\) is the predominate species. Figure 6.6.5 shows a complete metal–ligand ladder diagram for Cd2+ and NH3 that includes additional Cd–NH3 complexes.Draw a single ladder diagram for the Ca(EDTA)2– and the Mg(EDTA)2– metal–ligand complexes. Use your ladder diagram to predict the result of adding 0.080 moles of Ca2+ to 0.060 moles of Mg(EDTA)2–. EDTA is an abbreviation for the ligand ethylenediaminetetraacetic acid.SolutionFigure 6.6.6 shows the ladder diagram for this system of metal–ligand complexes. Because the predominance regions for Ca2+ and Mg(EDTA)2‐ do not overlap, the reaction\[\mathrm{Ca}^{2+}(a q)+\mathrm{Mg}(\mathrm{EDTA})^{2-}(a q) \rightleftharpoons \mathrm{Ca}(\mathrm{EDTA})^{2-}(a q)+\mathrm{Mg}^{2+}(a q) \nonumber\]proceeds essentially to completion. Because Ca2+ is the excess reagent, the composition of the final solution is approximately\[\text { moles } \mathrm{Ca}^{2+}=0.080-0.060=0.020 \ \mathrm{mol} \nonumber\]\[\text { moles } \mathrm{Ca}(\mathrm{EDTA})^{2-}=0.060 \ \mathrm{mol} \nonumber\]\[\text { moles } \mathrm{Mg}^{2+}=0.060 \ \mathrm{mol} \nonumber\]\[\text { moles } \mathrm{Mg}(\mathrm{EDTA})^{2-}=0 \ \mathrm{mol} \nonumber\]The metal–ligand ladder diagram in Figure 6.6.5 uses stepwise formation constants. We also can construct a ladder diagram using cumulative formation constants. For example, the first three stepwise formation constants for the reaction of Zn2+ with NH3\[\mathrm{Zn}^{2+}(a q)+\mathrm{NH}_{3}(a q) \rightleftharpoons \mathrm{Zn}\left(\mathrm{NH}_{3}\right)^{2+}(a q) \quad K_{1}=1.6 \times 10^{2} \nonumber\]\[\mathrm{Zn}\left(\mathrm{NH}_{3}\right)^{2+}(a q)+\mathrm{NH}_{3}(a q)\rightleftharpoons\mathrm{Zn}\left(\mathrm{NH}_{3}\right)_{2}^{2+}(a q) \quad K_{2}=1.95 \times 10^{2} \nonumber\]\[\mathrm{Zn}\left(\mathrm{NH}_{3}\right)_{2}^{2+}(a q)+\mathrm{NH}_{3}(a q)=\mathrm{Zn}\left(\mathrm{NH}_{3}\right)_{3}^{2+}(a q) \quad K_{3}=2.3 \times 10^{2} \nonumber\]suggests that the formation of \(\text{Zn(NH}_3)_3^{2+}\) is more favorable than the formation of \(\text{Zn(NH}_3)^{2+}\) or \(\text{Zn(NH}_3)_2^{2+}\). 
For this reason, the equilibrium is best represented by the cumulative formation reaction shown here.\[\mathrm{Zn}^{2+}(a q)+3 \mathrm{NH}_{3}(a q)\rightleftharpoons \mathrm{Zn}\left(\mathrm{NH}_{3}\right)_{3}^{2+}(a q) \quad \beta_{3}=7.2 \times 10^{6} \nonumber\]Because K3 is greater than K2, which is greater than K1, the formation of the metal‐ligand complex \(\text{Zn(NH}_3)_3^{2+}\) is more favorable than the formation of the other metal ligand complexes. For this reason, at lower values of pNH3 the concentration of \(\text{Zn(NH}_3)_3^{2+}\) is larger than the concentrations of \(\text{Zn(NH}_3)^{2+}\) or \(\text{Zn(NH}_3)_2^{2+}\). The value of \(\beta_3\) is\[\beta_{3}=K_{1} \times K_{2} \times K_{3} \nonumber\]To see how we incorporate this cumulative formation constant into a ladder diagram, we begin with the reaction’s equilibrium constant expression.\[\beta_{3}=\frac{\left[\mathrm{Zn}\left(\mathrm{NH}_{3}\right)_{3}^{2+}\right]}{\left[\mathrm{Zn}^{2+}\right]\left[\mathrm{NH}_{3}\right]^{3}} \nonumber\]Taking the log of each side\[\log \beta_{3}=\log \frac{\left[\mathrm{Zn}\left(\mathrm{NH}_{3}\right)_{3}^{2+}\right]}{\left[\mathrm{Zn}^{2+}\right]}-3 \log \left[\mathrm{NH}_{3}\right] \nonumber\]and rearranging gives\[\mathrm{pNH}_{3}=\frac{1}{3} \log \beta_{3}+\frac{1}{3} \log \frac{\left[\mathrm{Zn}^{2+}\right]}{\left[\mathrm{Zn}\left(\mathrm{NH}_{3}\right)_{3}^{2+}\right]} \nonumber\]When the concentrations of Zn and \(\text{Zn(NH}_3)_3^{2+}\) are equal, then\[\mathrm{p} \mathrm{NH}_{3}=\frac{1}{3} \log \beta_{3}=2.29 \nonumber\]In general, for the metal–ligand complex MLn, the step for a cumulative formation constant is\[\mathrm{pL}=\frac{1}{n} \log \beta_{n} \nonumber\]Figure 6.6.7 shows the complete ladder diagram for the Zn2+–NH3 system.We also can construct ladder diagrams to help us evaluate redox equilibria. Figure 6.6.8 shows a typical ladder diagram for two half‐reactions in which the scale is the potential, E.The Nernst equation defines the areas of predominance. Using the Fe3+/Fe2+ half‐reaction as an example, we write\[E=E^{\circ}-\frac{R T}{n F} \ln \frac{\left[\mathrm{Fe}^{2+}\right]}{\left[\mathrm{Fe}^{3+}\right]}=0.771-0.05916 \log \frac{\left[\mathrm{Fe}^{2+}\right]}{\left[\mathrm{Fe}^{3+}\right]} \nonumber\]At a potential more positive than the standard state potential, the predominate species is Fe3+, whereas Fe2+ predominates at potentials more negative than Eo. When coupled with the step for the Sn4+/Sn2+ half‐reaction we see that Sn2+ is a useful reducing agent for Fe3+. If Sn2+ is in excess, the potential of the resulting solution is near +0.154 V.Because the steps on a redox ladder diagram are standard state potentials, a complication arises if solutes other than the oxidizing agent and reducing agent are present at non‐standard state concentrations. For example, the potential for the half‐reaction\[\mathrm{UO}_{2}^{2+}(a q)+4 \mathrm{H}_{3} \mathrm{O}^{+}(a q)+2 e^{-} \rightleftharpoons \mathrm{U}^{4+}(a q)+6 \mathrm{H}_{2} \mathrm{O}(l) \nonumber\]depends on the solution’s pH. 
To define areas of predominance in this case we begin with the Nernst equation\[E=+0.327-\frac{0.05916}{2} \log \frac{\left[\mathrm{U}^{4+}\right]}{\left[\mathrm{UO}_{2}^{2+}\right]\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{4}} \nonumber\]and factor out the concentration of H3O+.\[E=+0.327+\frac{0.05916}{2} \log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{4}-\frac{0.05916}{2} \log \frac{\left[\mathrm{U}^{4+}\right]}{\left[\mathrm{UO}_{2}^{2+}\right]}\nonumber\]From this equation we see that the area of predominance for \(\text{UO}_2^{2+}\) and U4+ is defined by a step at a potential where [U4+] = [\(\text{UO}_2^{2+}\)].\[E=+0.327+\frac{0.05916}{2} \log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{4}=+0.327-0.1183 \mathrm{pH} \nonumber\]Figure 6.6.9 shows how pH affects the step for the \(\text{UO}_2^{2+}\) /U4+ half‐reaction.This page titled 6.6: Ladder Diagrams is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
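Each type of ladder diagram in this section places its steps with a simple formula: pH = pKa for a conjugate acid–base pair, pL = (1/n) log βn for a metal–ligand complex described by a cumulative formation constant, and the standard potential (adjusted for pH where necessary) for a redox couple. The short Python sketch below, which is not part of the original text, computes these step positions; the function names are ours and the numerical values come from the examples above.

```python
from math import log10

def acid_base_step(ka):
    # the step on an acid-base ladder diagram falls at pH = pKa
    return -log10(ka)

def complex_step(beta_n, n):
    # the step for a cumulative formation constant falls at pL = (1/n) log(beta_n)
    return log10(beta_n) / n

def uranyl_step(ph):
    # pH-dependent step for the UO2^2+/U4+ half-reaction, E = 0.327 - 0.1183*pH (from the text)
    return 0.327 - 0.1183 * ph

print(acid_base_step(1.75e-5))   # 4.76, the acetic acid/acetate step
print(complex_step(7.2e6, 3))    # 2.29, the Zn2+/Zn(NH3)3^2+ step in pNH3
print(uranyl_step(1.0))          # ~0.21 V, the UO2^2+/U4+ step at a pH of 1
```

Once the step positions are in hand, drawing the diagram is simply a matter of stacking them along the pH, pL, or E axis, as in Figures 6.6.1, 6.6.5, and 6.6.9.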
6.7: Solving Equilibrium Problems
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.07%3A_Solving_Equilibrium_Problems
Ladder diagrams are a useful tool for evaluating chemical reactivity and for providing a reasonable estimate of a chemical system’s composition at equilibrium. If we need a more exact quantitative description of the equilibrium condition, then a ladder diagram is insufficient; instead, we need to find an algebraic solution. In this section we will learn how to set‐up and solve equilibrium problems. We will start with a simple problem and work toward more complex problems. If we place an insoluble compound such as Pb(IO3)2 in deionized water, the solid dissolves until the concentrations of Pb2+ and \(\text{IO}_3^-\) satisfy the solubility product for Pb(IO3)2. At equilibrium the solution is saturated with Pb(IO3)2, which means simply that no more solid can dissolve. How do we determine the equilibrium concentrations of Pb2+ and \(\text{IO}_3^-\), and what is the molar solubility of Pb(IO3)2 in this saturated solution? When we first add solid Pb(IO3)2 to water, the concentrations of Pb2+ and \(\text{IO}_3^-\) are zero and the reaction quotient, Qr, is\[Q_r = \left[\mathrm{Pb}^{2+}\right]\left[\mathrm{IO}_{3}^{-}\right]^{2}=0 \nonumber\]As the solid dissolves, the concentrations of these ions increase, but Qr remains smaller than Ksp. We reach equilibrium and “satisfy the solubility product” when Qr = Ksp. We begin by writing the equilibrium reaction and the solubility product expression for Pb(IO3)2.\[\mathrm{Pb}\left(\mathrm{IO}_{3}\right)_{2}(s)\rightleftharpoons \mathrm{Pb}^{2+}(a q)+2 \mathrm{IO}_{3}^{-}(a q) \nonumber\]\[K_{\mathrm{sp}}=\left[\mathrm{Pb}^{2+}\right]\left[\mathrm{IO}_{3}^{-}\right]^{2}=2.5 \times 10^{-13} \label{6.1}\]As Pb(IO3)2 dissolves, two \(\text{IO}_3^-\) ions form for each ion of Pb2+. If we assume that the change in the molar concentration of Pb2+ at equilibrium is x, then the change in the molar concentration of \(\text{IO}_3^-\) is 2x. The following table helps us keep track of the initial concentrations, the change in concentrations, and the equilibrium concentrations of Pb2+ and \(\text{IO}_3^-\). Because a solid, such as Pb(IO3)2, does not appear in the solubility product expression, we do not need to keep track of its concentration. Remember, however, that the Ksp value applies only if there is some solid Pb(IO3)2 present at equilibrium. Substituting the equilibrium concentrations into Equation \ref{6.1} and solving gives\[(x)(2 x)^{2}=4 x^{3}=2.5 \times 10^{-13} \nonumber\]\[x=3.97 \times 10^{-5} \nonumber\]Substituting this value of x back into the equilibrium concentration expressions for Pb2+ and \(\text{IO}_3^-\) gives their concentrations as\[\left[\mathrm{Pb}^{2+}\right]=x=4.0 \times 10^{-5} \ \mathrm{M} \text { and }\left[\mathrm{IO}_{3}^{-}\right]=2 x=7.9 \times 10^{-5} \ \mathrm{M} \nonumber\]Because one mole of Pb(IO3)2 contains one mole of Pb2+, the molar solubility of Pb(IO3)2 is equal to the concentration of Pb2+, or \(4.0 \times 10^{-5}\) M. We can express a compound’s solubility in two ways: as its molar solubility (mol/L) or as its mass solubility (g/L). Be sure to express your answer clearly. Calculate the molar solubility and the mass solubility for Hg2Cl2, given the following solubility reaction and Ksp value.\[\mathrm{Hg}_{2} \mathrm{Cl}_{2}(s)\rightleftharpoons \mathrm{Hg}_{2}^{2+}(a q)+2 \mathrm{Cl}^{-}(a q) \quad K_{\mathrm{sp}}=1.2 \times 10^{-18} \nonumber\]When Hg2Cl2 dissolves, two Cl– are produced for each ion of \(\text{Hg}_2^{2+}\).
If we assume x is the change in the molar concentration of \(\text{Hg}_2^{2+}\), then the change in the molar concentration of Cl– is 2x. The following table helps us keep track of our solution to this problem.Substituting the equilibrium concentrations into the Ksp expression forHg2Cl2 gives\[K_{\mathrm{sp}}=\left[\mathrm{Hg}_{2}^{2+}\right]\left[\mathrm{Cl}^{-}\right]^{2}=(x)(2 \mathrm{x})^{2}=4 x^{3}=1.2 \times 10^{-18} \nonumber\]\[x=6.69 \times 10^{-7} \nonumber\]Substituting x back into the equilibrium expressions for \(\text{Hg}_2^{2+}\) and Cl– gives their concentrations as\[\left[\mathrm{Hg}_{2}^{2+}\right]=x=6.7 \times 10^{-7} \ \mathrm{M} \quad\left[\mathrm{Cl}^{-}\right]=2 x=1.3 \times 10^{-6} \ \mathrm{M} \nonumber\]The molar solubility is equal to [\(\text{Hg}_2^{2+}\)], or \(6.7 \times 10^{-7}\) mol/L.Calculating the solubility of Pb(IO3)2 in deionized water is a straightforward problem because the solid’s dissolution is the only source of Pb2+ and \(\text{IO}_3^-\). But what if we add Pb(IO3)2 to a solution of 0.10 M Pb(NO3)2? Before we set‐up and solve this problem algebraically, think about the system’s chemistry and decide whether the solubility of Pb(IO3)2 will increase, decrease, or remain the same. Beginning a problem by thinking about the likely answer is a good habit to develop. Knowing what answers are reasonable will help you spot errors in your calculations and give you more confidence that your solution to a problem is correct. Because the solution already contains a source of Pb2+, we can use Le Châtelier’s principle to predict that the solubility of Pb(IO3)2 is smaller than that in our previous problem.We begin by setting up a table to help us keep track of the concentrations of Pb2+ and \(\text{IO}_3^-\) as this system moves toward and reaches equilibrium.Substituting the equilibrium concentrations into Equation \ref{6.1}\[(0.10+x)(2 x)^{2}=2.5 \times 10^{-13} \nonumber\]and multiplying out the terms on the equation’s left side leaves us with\[4 x^{3}+0.40 x^{2}=2.5 \times 10^{-13} \label{6.2}\]This is a more difficult equation to solve than that for the solubility of Pb(IO3)2 in deionized water, and its solution is not immediately obvious. We can find a rigorous solution to Equation \ref{6.2} using computational software packages and spreadsheets, some of which are described in Chapter 6.10.There are several approaches to solving cubic equations, but none are computationally easy using just paper and pencil.How might we solve Equation \ref{6.2} if we do not have access to a computer? One approach is to use our understanding of chemistry to simplify the problem. From Le Châtelier’s principle we know that a large initial concentration of Pb2+ will decrease significantly the solubility of Pb(IO3)2. One reasonable assumption is that the initial concentration of Pb2+ is very close to its equilibrium concentration. If this assumption is correct, then the following approximation is reasonable\[\left[\mathrm{Pb}^{2+}\right]=0.10+x \approx 0.10 \nonumber\]Substituting this approximation into Equation \ref{6.1} and solving for x gives\[(0.10)(2 x)^{2}=0.4 x^{2}=2.5 \times 10^{-13} \nonumber\]\[x=7.91 \times 10^{-7} \nonumber\]Before we accept this answer, we must verify that our approximation is reasonable. The difference between the actual concentration of Pb2+, which is 0.10 + x M, and our assumption that the concentration of Pb2+ is 0.10 M is \(7.9 \times 10^{-7}\), or \(7.9 \times 10^{-4}\) % of the assumed concentration. This is a negligible error. 
If we accept the result of our calculation, we find that the equilibrium concentrations of Pb2+ and \(\text{IO}_3^-\) are\[\left[\mathrm{Pb}^{2+}\right]=0.10+x \approx 0.10 \ \mathrm{M} \text { and }\left[\mathrm{IO}_{3}^{-}\right]=2 x=1.6 \times 10^{-6} \ \mathrm{M} \nonumber\]\[\begin{aligned} \% \text { error } &=\frac{\text { actual }-\text { assumed }}{\text { assumed }} \times 100 \\ &=\frac{(0.10+x)-0.10}{0.10} \times 100 \\ &=\frac{7.91 \times 10^{-7}}{0.10} \times 100 \\ &=7.91 \times 10^{-4} \% \end{aligned} \nonumber\]The molar solubility of Pb(IO3)2 is equal to the additional concentration of Pb2+ in solution, or \(7.9 \times 10^{-7}\) mol/L. As expected, we find that Pb(IO3)2 is less soluble in the presence of a solution that already contains one of its ions. This is known as the common ion effect. As outlined in the following example, if an approximation leads to an error that is unacceptably large, then we can extend the process of making and evaluating approximations. One “rule of thumb” when making an approximation is that it should not introduce an error of more than ±5%. Although this is not an unreasonable choice, what matters is that the error makes sense within the context of the problem you are solving. Calculate the solubility of Pb(IO3)2 in \(1.0 \times 10^{-4}\) M Pb(NO3)2.SolutionIf we let x equal the change in the concentration of Pb2+, then the equilibrium concentrations of Pb2+ and \(\text{IO}_3^-\) are\[\left[\mathrm{Pb}^{2+}\right]=1.0 \times 10^{-4}+ \ x \text { and }\left[\mathrm{IO}_{3}^-\right]=2 x \nonumber\]Substituting these concentrations into Equation \ref{6.1} leaves us with\[\left(1.0 \times 10^{-4}+ \ x\right)(2 x)^{2}=2.5 \times 10^{-13} \nonumber\]To solve this equation for x, let’s make the following assumption\[\left[\mathrm{Pb}^{2+}\right]=1.0 \times 10^{-4}+ \ x \approx 1.0 \times 10^{-4} \ \mathrm{M} \nonumber\]Solving for x gives its value as \(2.50 \times 10^{-5}\); however, when we substitute this value for x back, we find that the calculated concentration of Pb2+ at equilibrium\[\left[\mathrm{Pb}^{2+}\right]=1.0 \times 10^{-4}+ \ x=1.0 \times 10^{-4}+ \ 2.50 \times 10^{-5}=1.25 \times 10^{-4} \ \mathrm{M} \nonumber\]is 25% greater than our assumption of \(1.0 \times 10^{-4}\) M. This error is unreasonably large. Rather than shouting in frustration, let’s make a new assumption. Our first assumption—that the concentration of Pb2+ is \(1.0 \times 10^{-4}\) M—was too small. The calculated concentration of \(1.25 \times 10^{-4}\) M, therefore, probably is too large, but closer to the correct concentration than was our first assumption. For our second approximation, let’s assume that\[\left[\mathrm{Pb}^{2+}\right]=1.0 \times 10^{-4}+ \ x=1.25 \times 10^{-4} \mathrm{M} \nonumber\]Substituting into Equation \ref{6.1} and solving for x gives its value as \(2.24 \times 10^{-5}\). The resulting concentration of Pb2+ is\[\left[\mathrm{Pb}^{2+}\right]=1.0 \times 10^{-4}+ \ 2.24 \times 10^{-5}=1.22 \times 10^{-4} \ \mathrm{M} \nonumber\]which differs from our assumption of \(1.25 \times 10^{-4}\) M by 2.4%. Because the original concentration of Pb2+ is given to two significant figures, this is a more reasonable error. Our final solution, to two significant figures, is\[\left[\mathrm{Pb}^{2+}\right]=1.2 \times 10^{-4} \ \mathrm{M} \text { and }\left[\mathrm{IO}_{3}^{-}\right]=4.5 \times 10^{-5} \ \mathrm{M} \nonumber\]and the molar solubility of Pb(IO3)2 is \(2.2 \times 10^{-5}\) mol/L.
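The iterative refinement in this example is easy to express as a short loop. The following Python sketch is ours, not part of the original text; it uses the Ksp for Pb(IO3)2 from Equation 6.1 and the ±5% rule of thumb mentioned above, and the function name is hypothetical.

```python
def pb_io3_2_solubility(c_pb, ksp=2.5e-13, rel_tol=0.05, max_iter=50):
    # molar solubility of Pb(IO3)2 when the solution initially contains [Pb2+] = c_pb,
    # found by refining the assumed equilibrium [Pb2+] until it changes by less than rel_tol
    pb_assumed = c_pb                        # first assumption: [Pb2+] is close to its initial value
    for _ in range(max_iter):
        x = (ksp / (4 * pb_assumed)) ** 0.5  # solve Ksp = [Pb2+](2x)^2 for x
        pb_calculated = c_pb + x
        if abs(pb_calculated - pb_assumed) / pb_assumed <= rel_tol:
            return x                         # molar solubility, in mol/L
        pb_assumed = pb_calculated           # use the new value and try again
    raise RuntimeError("did not converge")

print(pb_io3_2_solubility(1.0e-4))   # ~2.2e-5 mol/L, matching the example above
```

Calling the same function with c_pb = 0.10 returns about 7.9 × 10-7 mol/L after a single pass, which is the result of the earlier calculation for 0.10 M Pb(NO3)2.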
This iterative approach to solving the problems is known as the method of successive approximations.Calculate the molar solubility for Hg2Cl2 in 0.10 M NaCl and compare your answer to its molar solubility in deionized water (see Exercise 6.7.1 ).We begin by setting up a table to help us keep track of the concentrations \(\text{Hg}_2^{2+}\) and Cl– as this system moves toward and reaches equilibrium.Substituting the equilibrium concentrations into the Ksp expression for Hg2Cl2 leaves us with a difficult to solve cubic equation.\[K_{\mathrm{sp}}=\left[\mathrm{Hg}_{2}^{2+}\right]\left[\mathrm{Cl}^{-}\right]^{2}=(x)(0.10+2 x)^{2}=4 x^{3}+0.40 x^{2}+0.010 x \nonumber\]Let’s make an assumption to simplify this problem. Because we expect the value of x to be small, let’s assume that\[\left[\mathrm{Cl}^{-}\right]=0.10+2 x \approx 0.10 \nonumber\]This simplifies our problem to\[K_{\mathrm{sp}}=\left[\mathrm{Hg}_{2}^{2+}\right]\left[\mathrm{Cl}^{-}\right]^{2}=(x)(0.10)^{2}=0.010 x=1.2 \times 10^{-18} \nonumber\]which gives the value of x as \(1.2 \times 10^{-16}\) M. The difference between the actual concentration of Cl–, which is (0.10 + 2x) M, and our assumption that it is 0.10 M introduces an error of \(2.4 \times 10^{-13}\) %. This is a negligible error. The molar solubility of Hg2Cl2 is the same as the concentration of \(\text{Hg}_2^{2+}\), or \(1.2 \times 10^{-16}\) M. As expected, the molar solubility in 0.10 M NaCl is less than \(6.7 \times 10^{-7}\) mol/L, which is its solubility in water (see solution to Exercise 6.7.1 ).Calculating the solubility of Pb(IO3)2 in a solution of Pb(NO3)2 is more complicated than calculating its solubility in deionized water. The calculation, however, is still relatively easy to organize and the simplifying assumptions are fairly obvious. This problem is reasonably straightforward because it involves only one equilibrium reaction and one equilibrium constant.Determining the equilibrium composition of a system with multiple equilibrium reactions is more complicated. In this section we introduce a systematic approach to setting‐up and solving equilibrium problems. As shown in Table 6.7.1 , this approach involves four steps.Step 1Write all relevant equilibrium reactions and equilibrium constant expressions.Step 2Count the unique species that appear in the equilibrium constant expressions; these are your unknowns. You have enough information to solve the problem if the number of unknowns equals the number of equilibrium constant expressions. If not, add a mass balance equation and/or a charge balance equation. Continue adding equations until the number of equations equals the number of unknowns.Step 3Combine your equations and solve for one unknown. Whenever possible, simplify the algebra by making appropriate assumptions. If you make an assumption, set a limit for its error. This decision influences your evaluation of the assumption.Step 4Check your assumptions. If any assumption proves invalid, return to the previous step and continue solving. The problem is complete when you have an answer that does not violate any of your assumptions.In addition to equilibrium constant expressions, two other equations are important to this systematic approach to solving an equilibrium problem. The first of these equations is a mass balance equation, which simply is a statement that matter is conserved during a chemical reaction. 
In a solution of acetic acid, for example, the combined concentrations of the conjugate weak acid, CH3COOH, and the conjugate weak base, CH3COO–, must equal acetic acid’s initial concentration, \(C_{\text{CH}_3\text{COOH}}\).\[C_{\mathrm{CH}_{\mathrm{3}} \mathrm{COOH}}=\left[\mathrm{CH}_{3} \mathrm{COOH}\right]+\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right] \nonumber\]You may recall from Chapter 2 that this is the difference between a formal concentration and a molar concentration. The variable C represents a formal concentration.The second equation is a charge balance equation, which requires that the total positive charge from the cations equal the total negative charge from the anions. Mathematically, the charge balance equation is\[\sum_{i=1}^{n}\left(z^{+}\right)_{i}\left[\mathrm{C}^{z+}\right]_{i} = -\sum_{j=1}^{m}\left(z^{-}\right)_{j}\left[\mathrm{A}^{z-}\right]_{j} \nonumber\]where [Cz+]i and [Az-]j are, respectively, the concentrations of the ith cation and the jth anion, and (z+)i and (z–)j are the charges for the ith cation and the jth anion. Every ion in solution, even if it does not appear in an equilibrium reaction, must appear in the charge balance equation. For example, the charge balance equation for an aqueous solution of Ca(NO3)2 is\[2 \times\left[\mathrm{Ca}^{2+}\right]+\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\left[\mathrm{OH}^{-}\right]+\left[\mathrm{NO}_{3}^-\right] \nonumber\]Note that we multiply the concentration of Ca2+ by two and that we include the concentrations of H3O+ and OH–.A charge balance equation is a statement of the conservation of charge. The minus sign in front of the summation term on the right side of the charge balance equation ensures that both summations are positive. There are situations where it is impossible to write a charge balance equation because we do not have enough information about the solution’s composition. For example, suppose we fix a solution’s pH using a buffer. If the buffer’s composition is not specified, then we cannot write a charge balance equation.
Write mass balance equations and a charge balance equation for a 0.10 M solution of NaHCO3.
Solution
It is easier to keep track of the species in solution if we write down the reactions that define the solution’s composition.
These reactions are the dissolution of a soluble salt\[\mathrm{NaHCO}_{3}(s) \rightarrow \mathrm{Na}^{+}(a q)+\mathrm{HCO}_{3}^{-}(a q) \nonumber\]and the acid–base dissociation reactions of \(\text{HCO}_3^-\) and H2O\[\mathrm{HCO}_{3}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CO}_{3}^{2-}(a q) \nonumber\]\[\mathrm{HCO}_{3}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{H}_{2} \mathrm{CO}_{3}(a q) \nonumber\]\[2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q) \nonumber\]The mass balance equations are\[0.10 \mathrm{M}=\left[\mathrm{H}_{2} \mathrm{CO}_{3}\right]+\left[\mathrm{HCO}_{3}^{-}\right]+\left[\mathrm{CO}_{3}^{2-}\right] \nonumber\]\[0.10 \ \mathrm{M}=\left[\mathrm{Na}^{+}\right] \nonumber\]and the charge balance equation is\[\left[\mathrm{Na}^{+}\right]+\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\left[\mathrm{OH}^{-}\right]+\left[\mathrm{HCO}_{3}^-\right]+2 \times\left[\mathrm{CO}_{3}^{2-}\right] \nonumber\]
Write appropriate mass balance and charge balance equations for a solution containing 0.10 M KH2PO4 and 0.050 M Na2HPO4.
To help us determine what ions are in solution, let’s write down all the reactions needed to prepare the solutions and the equilibrium reactions that take place within these solutions. These reactions are the dissolution of two soluble salts\[\mathrm{KH}_{2} \mathrm{PO}_{4}(s) \longrightarrow \mathrm{K}^{+}(a q)+\mathrm{H}_{2} \mathrm{PO}_{4}^{-}(a q) \nonumber\]\[\mathrm{Na}_{2} \mathrm{HPO}_{4}(s) \longrightarrow 2 \mathrm{Na}^{+}(a q)+\mathrm{HPO}_{4}^{2-}(a q) \nonumber\]and the acid–base dissociation reactions for \(\text{H}_2\text{PO}_4^-\), \(\text{HPO}_4^{2-}\), and H2O.\[\mathrm{H}_{2} \mathrm{PO}_{4}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{HPO}_{4}^{2-}(a q) \nonumber\]\[\mathrm{H}_{2} \mathrm{PO}_{4}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{H}_{3} \mathrm{PO}_{4}(a q) \nonumber\]\[\mathrm{HPO}_{4}^{2-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{PO}_{4}^{3-}(a q) \nonumber\]\[2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q) \nonumber\]Note that we did not include the base dissociation reaction for \(\text{HPO}_4^{2-}\) because we already accounted for its product, \(\text{H}_2\text{PO}_4^-\), in another reaction. The mass balance equations for K+ and Na+ are straightforward\[\left[\mathrm{K}^{+}\right]=0.10 \ \mathrm{M} \text { and }\left[\mathrm{Na}^{+}\right]=0.10 \ \mathrm{M} \nonumber\]but the mass balance equation for phosphate takes a bit more thought. Both \(\text{H}_2\text{PO}_4^-\) and \(\text{HPO}_4^{2-}\) produce the same ions in solution.
We can, therefore, imagine that the solution initially contains 0.15 M KH2PO4, which gives the following mass balance equation.\[\left[\mathrm{H}_{3} \mathrm{PO}_{4}\right]+\left[\mathrm{H}_{2} \mathrm{PO}_{4}^-\right]+\left[\mathrm{HPO}_{4}^{2-}\right]+\left[\mathrm{PO}_{4}^{3-}\right]=0.15 \ \mathrm{M} \nonumber\]The charge balance equation is\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]+\left[\mathrm{K}^{+}\right]+\left[\mathrm{Na}^{+}\right] =\left[\mathrm{H}_{2} \mathrm{PO}_{4}^{-}\right]+2 \times\left[\mathrm{HPO}_{4}^{2-}\right] +3 \times\left[\mathrm{PO}_{4}^{3-}\right]+\left[\mathrm{OH}^{-}\right] \nonumber\]To illustrate the systematic approach to solving equilibrium problems, let’s calculate the pH of 1.0 M HF. Two equilibrium reactions affect the pH. The first, and most obvious, is the acid dissociation reaction for HF\[\mathrm{HF}(a q)+\mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{F}^{-}(a q) \nonumber\]for which the equilibrium constant expression is\[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{F}^{-}\right]}{[\mathrm{HF}]}=6.8 \times 10^{-4} \label{6.3}\]The second equilibrium reaction is the dissociation of water, which is an obvious yet easily neglected reaction\[2 \mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q) \nonumber\]\[K_{w}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{OH}^{-}\right]=1.00 \times 10^{-14} \label{6.4}\]Counting unknowns, we find four: [HF], [F–], [H3O+], and [OH–]. To solve this problem we need two additional equations. These equations are a mass balance equation on hydrofluoric acid\[C_{\mathrm{HF}}=[\mathrm{HF}]+\left[\mathrm{F}^{-}\right]=1.0 \mathrm{M} \label{6.5}\]and a charge balance equation\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\left[\mathrm{OH}^{-}\right]+\left[\mathrm{F}^{-}\right] \label{6.6}\]With four equations and four unknowns, we are ready to solve the problem. Before doing so, let’s simplify the algebra by making two assumptions.Assumption One. Because HF is a weak acid, we know that the solution is acidic. For an acidic solution it is reasonable to assume that\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]>>\left[\mathrm{OH}^{-}\right] \nonumber\]which simplifies the charge balance equation to\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\left[\mathrm{F}^{-}\right] \label{6.7}\]Assumption Two. Because HF is a weak acid, very little of it dissociates to form F–. Most of the HF remains in its conjugate weak acid form and it is reasonable to assume that\[[\mathrm{HF}]>>\left[\mathrm{F}^{-}\right] \nonumber\]which simplifies the mass balance equation to\[C_{\mathrm{HF}}=[\mathrm{HF}]=1.0 \ \mathrm{M} \label{6.8}\]For this exercise let’s accept an assumption if it introduces an error of less than ±5%.Substituting Equation \ref{6.7} and Equation \ref{6.8} into Equation \ref{6.3}, and solving for the concentration of H3O+ gives us\[\mathrm{K}_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{F}^{-}\right]}{[\mathrm{HF}]}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{\mathrm{C}_{\mathrm{HF}}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}}{\mathrm{C}_{\mathrm{HF}}}=6.8 \times 10^{-4} \nonumber\]\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\sqrt{K_{\mathrm{a}} C_{\mathrm{HF}}}=\sqrt{\left(6.8 \times 10^{-4}\right)(1.0)}=2.6 \times 10^{-2} \nonumber\]Before accepting this answer, we must verify our assumptions. 
The first assumption is that [OH–] is significantly smaller than [H3O+]. Using Equation \ref{6.4}, we find that\[\left[\mathrm{OH}^{-}\right]=\frac{K_{\mathrm{w}}}{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}=\frac{1.00 \times 10^{-14}}{2.6 \times 10^{-2}}=3.8 \times 10^{-13} \nonumber\]Clearly this assumption is acceptable. The second assumption is that [F–] is significantly smaller than [HF]. From Equation \ref{6.7} we have\[\left[\mathrm{F}^{-}\right]=2.6 \times 10^{-2} \ \mathrm{M} \nonumber\]Because [F–] is 2.6% of CHF, this assumption also is acceptable. Given that [H3O+] is \(2.6 \times 10^{-2}\) M, the pH of 1.0 M HF is 1.59.How does the calculation change if we require that the error introduced in our assumptions be less than ±1%? In this case we no longer can assume that [HF] >> [F–] and we cannot simplify the mass balance equation. Solving the mass balance equation for [HF]\[[\mathrm{HF}]=C_{\mathrm{HF}}-\left[\mathrm{F}^{-}\right]=C_{\mathrm{HF}}-\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \nonumber\]and substituting into the Ka expression along with Equation \ref{6.7} gives\[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}}{C_{\mathrm{HF}}-\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]} \nonumber\]Rearranging this equation leaves us with a quadratic equation\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}+K_{\mathrm{a}}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]-K_{\mathrm{a}} C_{\mathrm{HF}}=0 \nonumber\]which we solve using the quadratic formula\[x=\frac{-b \pm \sqrt{b^{2}-4 a c}}{2 a} \nonumber\]where a, b, and c are the coefficients in the quadratic equation\[a x^{2}+b x+c=0 \nonumber\]Solving a quadratic equation gives two roots, only one of which has chemical significance. For our problem, the equation’s roots are\[x=\frac{-6.8 \times 10^{-4} \pm \sqrt{\left(6.8 \times 10^{-4}\right)^{2}-(4)(1)\left(-6.8 \times 10^{-4}\right)}}{(2)(1)} \nonumber\]\[x=\frac{-6.8 \times 10^{-4} \pm 5.22 \times 10^{-2}}{2} \nonumber\]\[x=2.57 \times 10^{-2} \text { or }-2.64 \times 10^{-2} \nonumber\]Only the positive root is chemically significant because the negative root gives a negative concentration for H3O+. Thus, [H3O+] is \(2.57 \times 10^{-2}\) M and the pH is 1.59.You can extend this approach to calculating the pH of a monoprotic weak base by replacing Ka with Kb, replacing CHF with the weak base’s concentration, and solving for [OH–] in place of [H3O+].
Calculate the pH of 0.050 M NH3. State any assumptions you make in solving the problem, limiting the error for any assumption to ±5%. The Kb value for NH3 is \(1.75 \times 10^{-5}\).
To determine the pH of 0.050 M NH3, we need to consider two equilibrium reactions: the base dissociation reaction for NH3\[\mathrm{NH}_{3}(a q)+\mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{NH}_{4}^{+}(a q) \nonumber\]and water’s dissociation reaction.\[2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q) \nonumber\]These two reactions contain four species whose concentrations we need to consider: NH3, \(\text{NH}_4^+\), H3O+, and OH–.
We need four equations to solve the problem—these equations are the Kb equation for NH3\[K_{\mathrm{b}}=\frac{\left[\mathrm{NH}_{4}^{+}\right]\left[\mathrm{OH}^{-}\right]}{\left[\mathrm{NH}_{3}\right]}=1.75 \times 10^{-5} \nonumber\]the Kw equation for H2O\[K_{w}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{OH}^{-}\right] \nonumber\]a mass balance equation on ammonia\[C_{\mathrm{NH}_{3}}=0.050 \ \mathrm{M}=\left[\mathrm{NH}_{3}\right]+\left[\mathrm{NH}_{4}^{+}\right] \nonumber\]and a charge balance equation\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]+\left[\mathrm{NH}_{4}^{+}\right]=\left[\mathrm{OH}^{-}\right] \nonumber\]To solve this problem, we will make two assumptions. Because NH3 is a base, our first assumption is\[\left[\mathrm{OH}^{-}\right]>>\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \nonumber\]which simplifies the charge balance equation to\[\left[\mathrm{NH}_{4}^{+}\right]=\left[\mathrm{OH}^{-}\right] \nonumber\]Because NH3 is a weak base, our second assumption is\[\left[\mathrm{NH}_{3}\right]>>\left[\mathrm{NH}_{4}^{+}\right] \nonumber\]which simplifies the mass balance equation to\[C_{\mathrm{NH}_{3}}=0.050 \ \mathrm{M}=\left[\mathrm{NH}_{3}\right] \nonumber\]Substituting the simplified charge balance equation and mass balance equation into the Kb equation leave us with\[K_{\mathrm{b}}=\frac{\left[\mathrm{NH}_{4}^{+}\right]\left[\mathrm{OH}^{-}\right]}{\left[\mathrm{NH}_{3}\right]}=\frac{\left[\mathrm{OH}^{-}\right]\left[\mathrm{OH}^{-}\right]}{C_{\mathrm{NH}_3}}=\frac{\left[\mathrm{OH}^{-}\right]^{2}}{C_{\mathrm{NH_3}}}=1.75 \times 10^{-5} \nonumber\]\[\left[\mathrm{OH}^{-}\right]=\sqrt{K_{\mathrm{b}} C_{\mathrm{NH_3}}}=\sqrt{\left(1.75 \times 10^{-5}\right)(0.050)}=9.35 \times 10^{-4} \nonumber\]Before we accept this answer, we must verify our two assumptions. The first assumption is that the concentration of OH– is significantly greater than the concentration of H3O+. Using Kw, we find that\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{\mathrm{w}}}{\left[\mathrm{OH}^{-}\right]}=\frac{1.00 \times 10^{-14}}{9.35 \times 10^{-4}}=1.07 \times 10^{-11} \nonumber\]Clearly this assumption is acceptable. Our second assumption is that the concentration of NH3 is significantly greater than the concentration of \(\text{NH}_4^+\). Using our simplified charge balance equation, we find that\[\left[\mathrm{NH}_{4}^{+}\right]=\left[\mathrm{OH}^{-}\right]=9.35 \times 10^{-4} \nonumber\]Because the concentration of \(\text{NH}_4^+\) is 1.9% of \(C_{\text{NH}_3}\), our second assumption also is reasonable. Given that [H3O+] is \(1.07 \times 10^{-11}\), the pH is 10.97.A more challenging problem is to find the pH of a solution that contains a polyprotic weak acid or one of its conjugate species. As an example, consider the amino acid alanine, whose structure is shown in Figure 6.7.1 . The ladder diagram in Figure 6.7.2 shows alanine’s three acid–base forms and their respective areas of predominance. For simplicity, we identify these species as H2L+, HL, and L–. Figure 6.7.2 . Ladder diagram for alanine.Alanine hydrochloride is the salt of the diprotic weak acid H2L+ and Cl–. Because H2L+ has two acid dissociation reactions, a complete systematic solution to this problem is more complicated than that for a monoprotic weak acid. The ladder diagram in Figure 6.7.2 helps us simplify the problem. Because the areas of predominance for H2L+ and L– are so far apart, we can assume that a solution of H2L+ will not contain a significant amount of L–. 
As a result, we can treat H2L+ as though it is a monoprotic weak acid. Calculating the pH of 0.10 M alanine hydrochloride, which is 1.72, is left to the reader as an exercise.The alaninate ion is a diprotic weak base. Because L– has two base dissociation reactions, a complete systematic solution to this problem is more complicated than that for a monoprotic weak base. Once again, the ladder diagram in Figure 6.7.2 helps us simplify the problem. Because the areas of predominance for H2L+ and L– are so far apart, we can assume that a solution of L– will not contain a significant amount of H2L+. As a result, we can treat L– as though it is a monoprotic weak base. Calculating the pH of 0.10 M sodium alaninate, which is 11.42, is left to the reader as an exercise.Finding the pH of a solution of alanine is more complicated than our previous two examples because we cannot ignore the presence of either H2L+ or L–. To calculate the solution’s pH we must consider alanine’s acid dissociation reaction\[\mathrm{HL}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{L}^{-}(a q) \nonumber\]and its base dissociation reaction\[\mathrm{HL}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{H}_{2} \mathrm{L}^{+}(a q) \nonumber\]and, as always, we must also consider the dissociation of water\[2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q) \nonumber\]This leaves us with five unknowns—[H2L+], [HL], [L–], [H3O+], and [OH–]—for which we need five equations. These equations are Ka2 and Kb2 for alanine\[K_{\mathrm{a} 2}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{L}^{-}\right]}{[\mathrm{HL}]} \nonumber\]\[K_{\mathrm{b} 2}=\frac{K_{\mathrm{w}}}{K_{\mathrm{a1}}}=\frac{\left[\mathrm{OH}^{-}\right]\left[\mathrm{H}_{2} \mathrm{L}^{+}\right]}{[\mathrm{HL}]} \nonumber\]the Kw equation\[K_{\mathrm{w}}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{OH}^{-}\right] \nonumber\]a mass balance equation for alanine\[C_{\mathrm{HL}}=\left[\mathrm{H}_{2} \mathrm{L}^{+}\right]+[\mathrm{HL}]+[\mathrm{L}^{-}] \nonumber\]and a charge balance equation\[\left[\mathrm{H}_{2} \mathrm{L}^{+}\right]+\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=[\mathrm{OH}^-]+[\mathrm{L^-}] \nonumber\]Because HL is a weak acid and a weak base, it seems reasonable to assume that little of it will dissociate and that\[[\mathrm{HL}]>>\left[\mathrm{H}_{2} \mathrm{L}^{+}\right]+[\mathrm{L}^-] \nonumber\]which allows us to simplify the mass balance equation to\[C_{\mathrm{HL}}=[\mathrm{HL}] \nonumber\]Next we solve Kb2 for [H2L+]\[\left[\mathrm{H}_{2} \mathrm{L}^{+}\right]=\frac{K_{\mathrm{w}}[\mathrm{HL}]}{K_{\mathrm{a1}}\left[\mathrm{OH}^{-}\right]}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right][\mathrm{HL}]}{K_{\mathrm{a1}}}=\frac{C_{\mathrm{HL}}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{K_{\mathrm{a1}}} \nonumber\]and solve Ka2 for [L–]\[[\mathrm{L^-}]=\frac{K_{a2}[\mathrm{HL}]}{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}=\frac{K_{a2} C_{\mathrm{HL}}}{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]} \nonumber\]Substituting these equations for [H2L+] and [L–], and the equation for Kw, into the charge balance equation give us\[\frac{C_{\mathrm{HL}}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{K_{\mathrm{a1}}}+\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{\mathrm{w}}}{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}+\frac{K_{a2} C_{\mathrm{HL}}}{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]} \nonumber\]which we 
simplify to\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left(\frac{C_{\mathrm{HL}}}{K_{\mathrm{a1}}}+1\right)=\frac{1}{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}\left(K_{\mathrm{w}}+K_{a2} C_{\mathrm{HL}}\right) \nonumber\]\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}=\frac{\left(K_{\mathrm{a} 2} C_{\mathrm{HL}}+K_{\mathrm{w}}\right)}{\frac{C_{\mathrm{HL}}}{K_{\mathrm{a1}}}+1}=\frac{K_{\mathrm{a1}}\left(K_{\mathrm{a2}} C_{\mathrm{HL}}+K_{\mathrm{w}}\right)}{C_{\mathrm{HL}}+K_{\mathrm{a1}}} \nonumber\]\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\sqrt{\frac{\left(K_{\mathrm{a1}} K_{a2} C_{\mathrm{HL}}+K_{\mathrm{a1}} K_{\mathrm{w}}\right)}{C_{\mathrm{HL}}+K_{\mathrm{a1}}}} \nonumber\]We can further simplify this equation if Ka1Kw << Ka1Ka2CHL, and if Ka1 << CHL, leaving us with\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\sqrt{K_{\mathrm{a1}} K_{\mathrm{a} 2}} \nonumber\]For a solution of 0.10 M alanine the [H3O+] is\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\sqrt{\left(4.487 \times 10^{-3}\right)\left(1.358 \times 10^{-10}\right)}=7.806 \times 10^{-7} \ \mathrm{M} \nonumber\]or a pH of 6.11.Verify that each assumption in our solution for the pH of 0.10 M alanine is reasonable, using ±5% as the limit for the acceptable error.In solving for the pH of 0.10 M alanine, we made the following three assumptions: (a) [HL] >> [H2L+] + [L–]; (b) Ka1Kw << Ka1Ka2CHL; and (c) Ka1 << CHL. Assumptions (b) and (c) are easy to check. The value of Ka1 (\(4.487 \times 10^{-3}\)) is 4.5% of CHL (0.10), and Ka1Kw (\(4.487 \times 10^{-17}\)) is 0.074% of Ka1Ka2CHL (\(6.093 \times 10^{-14}\)). Each of these assumptions introduces an error of less than ±5%.To test assumption (a) we need to calculate the concentrations of H2L+ and L–, which we accomplish using the equations for Ka1 and Ka2.\[\left[\mathrm{H}_{2} \mathrm{L}^{+}\right]=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right][\mathrm{HL}]}{K_{\mathrm{a1}}}=\frac{\left(7.807 \times 10^{-7}\right)(0.10)}{4.487 \times 10^{-3}}=1.74 \times 10^{-5} \nonumber\]\[\left[\mathrm{L}^{-}\right]=\frac{K_{a 2}[\mathrm{HL}]}{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}=\frac{\left(1.358 \times 10^{-10}\right)(0.10)}{7.807 \times 10^{-7}}=1.74 \times 10^{-5} \nonumber\]Because these concentrations are less than ±5% of CHL, the first assumption also is acceptable.One method for increasing a precipitate’s solubility is to add a ligand that forms soluble complexes with one of the precipitate’s ions. For example, the solubility of AgI increases in the presence of NH3 due to the formation of the soluble \(\text{Ag(NH}_3)_2^+\) complex. 
As a final illustration of the systematic approach to solving equilibrium problems, let’s calculate the molar solubility of AgI in 0.10 M NH3.We begin by writing the relevant equilibrium reactions, which includes the solubility of AgI, the acid–base chemistry of NH3 and H2O, and the metal‐ligand complexation chemistry between Ag+ and NH3.\[\begin{array}{c}{\operatorname{AgI}(s)\rightleftharpoons\operatorname{Ag}^{+}(a q)+\mathrm{I}^{-}(a q)} \\ {\mathrm{NH}_{3}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{NH}_{4}^{+}(a q)} \\ {2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q)} \\ {\mathrm{Ag}^{+}(a q)+2 \mathrm{NH}_{3}(a q) \rightleftharpoons \mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}(a q)}\end{array} \nonumber\]This leaves us with seven unknowns—[Ag+], [I–], [NH3], [\(\text{NH}_4^+\) ], [OH–], [H3O+], and [\(\text{Ag(NH}_3)_2^+\)]—and a need for seven equations. Four of the equations we need to solve this problem are the equilibrium constant expressions\[K_{\mathrm{sp}}=\left[\mathrm{Ag}^{+}\right]\left[\mathrm{I}^{-}\right]=8.3 \times 10^{-17} \label{6.9}\]\[K_{\mathrm{b}}=\frac{\left[\mathrm{NH}_{4}^{+}\right]\left[\mathrm{OH}^{-}\right]}{\left[\mathrm{NH}_{3}\right]}=1.75 \times 10^{-5} \label{6.10}\]\[K_{\mathrm{w}}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{OH}^{-}\right]=1.00 \times 10^{-14} \label{6.11}\]\[\beta_{2}=\frac{\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right]}{\left[\mathrm{Ag}^{+}\right]\left[\mathrm{NH}_{3}\right]^{2}}=1.7 \times 10^{7} \label{6.12}\]We still need three additional equations. The first of these equations is a mass balance for NH3.\[C_{\mathrm{NH}_{3}}=\left[\mathrm{NH}_{3}\right]+\left[\mathrm{NH}_{4}^{+}\right]+2 \times\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right] \label{6.13}\]In writing this mass balance equation we multiply the concentration of \(\text{Ag(NH}_3)_2^+\) by two since there are two moles of NH3 per mole of \(\text{Ag(NH}_3)_2^+\). The second additional equation is a mass balance between iodide and silver. Because AgI is the only source of I– and Ag+, each iodide in solution must have an associated silver ion, which may be Ag+ or \(\text{Ag(NH}_3)_2^+\) ; thus\[\left[\mathrm{I}^{-}\right]=\left[\mathrm{Ag}^{+}\right]+\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right] \label{6.14}\]Finally, we include a charge balance equation.\[\left[\mathrm{Ag}^{+}\right]+\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right]+\left[\mathrm{NH}_{4}^{+}\right]+\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=[\mathrm{OH}^-]+[\mathrm{I}^-] \label{6.15}\]Although the problem looks challenging, three assumptions greatly simplify the algebra.Assumption One. Because the formation of the \(\text{Ag(NH}_3)_2^+\) complex is so favorable (\(\beta_2\) is \(1.7 \times 10^7\)), there is very little free Ag+ in solution and it is reasonable to assume that\[\left[\mathrm{Ag}^{+}\right]<<\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right] \nonumber\]Assumption Two. Because NH3 is a weak base we may reasonably assume that most uncomplexed ammonia remains as NH3; thus\[\left[\mathrm{NH}_{4}^{+}\right]<<\left[\mathrm{NH}_{3}\right] \nonumber\]Assumption Three. 
Because Ksp for AgI is significantly smaller than \(\beta_2\) for \(\text{Ag(NH}_3)_2^+\), the solubility of AgI probably is small enough that very little ammonia is needed to form the metal–ligand complex; thus\[\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right]<<\left[\mathrm{NH}_{3}\right] \nonumber\]As we use these assumptions to simplify the algebra, let’s set ±5% as the limit for error.Assumption two and assumption three suggest that the concentration of NH3 is much larger than the concentrations of either \(\text{NH}_4^+\) or \(\text{Ag(NH}_3)_2^+\), which allows us to simplify the mass balance equation for NH3 to\[C_{\mathrm{NH}_{3}}=\left[\mathrm{NH}_{3}\right] \label{6.16}\]Finally, using assumption one, which suggests that the concentration of \(\text{Ag(NH}_3)_2^+\) is much larger than the concentration of Ag+, we simplify the mass balance equation for I– to\[\left[\mathrm{I}^{-}\right]=\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right] \label{6.17}\]Now we are ready to combine equations and to solve the problem. We begin by solving Equation \ref{6.9} for [Ag+] and substitute it into \(\beta_2\) (Equation \ref{6.12}), which leaves us with\[\beta_{2}=\frac{\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right][\mathrm{I}^-]}{K_{\mathrm{sp}}\left[\mathrm{NH}_{3}\right]^{2}} \label{6.18}\]Next we substitute Equation \ref{6.16} and Equation \ref{6.17} into Equation \ref{6.18}, obtaining\[\beta_{2}=\frac{\left[\mathrm{I}^{-}\right]^{2}}{K_{\mathrm{sp}}\left(C_{\mathrm{NH}_3}\right)^{2}} \label{6.19}\]Solving Equation \ref{6.19} for [I–] gives\[\left[\mathrm{I}^{-}\right]=C_{\mathrm{NH}_3} \sqrt{\beta_{2} K_{\mathrm{sp}}} = (0.10) \sqrt{\left(1.7 \times 10^{7}\right)\left(8.3 \times 10^{-17}\right)}=3.76 \times 10^{-6} \ \mathrm{M} \nonumber\]Because one mole of AgI produces one mole of I–, the molar solubility of AgI is the same as the [I–], or \(3.8 \times 10^{-6}\) mol/L.Before we accept this answer we need to check our assumptions. Substituting [I–] into Equation \ref{6.9}, we find that the concentration of Ag+ is\[\left[\mathrm{Ag}^{+}\right]=\frac{K_{\mathrm{sp}}}{[\mathrm{I}^-]}=\frac{8.3 \times 10^{-17}}{3.76 \times 10^{-6}}=2.2 \times 10^{-11} \ \mathrm{M} \nonumber\]Substituting the concentrations of I– and Ag+ into the mass balance equation for iodide (Equation \ref{6.14}), gives the concentration of \(\text{Ag(NH}_3)_2^+\) as\[\left[\operatorname{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right]=[\mathrm{I}^-]-\left[\mathrm{Ag}^{+}\right]=3.76 \times 10^{-6}-2.2 \times 10^{-11}=3.76 \times 10^{-6} \ \mathrm{M} \nonumber\]Our first assumption that [Ag+] is significantly smaller than [\(\text{Ag(NH}_3)_2^+\)] is reasonable.Substituting the concentrations of Ag+ and \(\text{Ag(NH}_3)_2^+\) into Equation \ref{6.12} and solving for [NH3], gives\[\left[\mathrm{NH}_{3}\right]=\sqrt{\frac{\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right]}{\left[\mathrm{Ag}^{+}\right] \beta_{2}}}=\sqrt{\frac{3.76 \times 10^{-6}}{\left(2.2 \times 10^{-11}\right)\left(1.7 \times 10^{7}\right)}}=0.10 \ \mathrm{M} \nonumber\]From the mass balance equation for NH3 (Equation \ref{6.13}) we see that [\(\text{NH}_4^+\)] is negligible, verifying our second assumption that \([\text{NH}_4^+]\) is significantly smaller than [NH3]. Our third assumption that [\(\text{Ag(NH}_3)_2^+\)] is significantly smaller than [NH3] also is reasonable.Did you notice that our solution to this problem did not make use of Equation \ref{6.15}, the charge balance equation?
The reason for this is that we did not try to solve for the concentration of all seven species. If we need to know the reaction mixture’s complete composition at equilibrium, then we will need to incorporate the charge balance equation into our solution.This page titled 6.7: Solving Equilibrium Problems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
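To close out this section, here is a brief R sketch (anticipating the R-based approach of Chapter 6.10) that repeats the key arithmetic of the AgI/NH3 example and tests the first assumption numerically; the variable names and the choice of what to print are ours, not part of the solution above.

```r
# Check of the AgI in 0.10 M NH3 calculation: solve the simplified Equation 6.19
# for [I-], then recover [Ag+] and [Ag(NH3)2+] to test assumption one.
ksp   <- 8.3e-17   # Ksp for AgI
beta2 <- 1.7e7     # overall formation constant for Ag(NH3)2+
c_nh3 <- 0.10      # formal concentration of NH3

iodide  <- c_nh3 * sqrt(beta2 * ksp)   # [I-], equal to the molar solubility of AgI
silver  <- ksp / iodide                # free [Ag+] from the Ksp expression
complex <- iodide - silver             # [Ag(NH3)2+] from the iodide mass balance

cat("molar solubility of AgI:", signif(iodide, 2), "mol/L\n")                 # about 3.8e-06
cat("[Ag+] as a fraction of [Ag(NH3)2+]:", signif(silver / complex, 2), "\n") # about 6e-06
```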
6.8: Buffer Solutions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.08%3A_Buffer_Solutions
Adding as little as 0.1 mL of concentrated HCl to a liter of H2O shifts the pH from 7.0 to 3.0. Adding the same amount of HCl to a liter of a solution that is 0.1 M in acetic acid and 0.1 M in sodium acetate, however, results in a negligible change in pH. Why do these two solutions respond so differently to the addition of HCl?A mixture of acetic acid and sodium acetate is one example of an acid–base buffer. To understand how this buffer works to limit the change in pH, we need to consider its acid dissociation reaction\[\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \nonumber\]and its corresponding acid dissociation constant\[K_{a}=\frac{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]}=1.75 \times 10^{-5} \label{6.1}\]Taking the negative log of the terms in Equation \ref{6.1} and solving for pH leaves us with the result shown here.\[\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]} \nonumber\]\[\mathrm{pH}=4.76+\log \frac{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]} \label{6.2}\]You may recall that we developed these same equations in Chapter 6.6 when we introduced ladder diagrams.Buffering occurs because of the logarithmic relationship between pH and the concentration ratio of acetate and acetic acid. Here is an example to illustrate this point. If the concentrations of acetic acid and acetate are equal, the buffer’s pH is 4.76. If we convert 10% of the acetate to acetic acid, by adding a strong acid, the ratio [CH3COO–]/[CH3COOH] changes from 1.00 to 0.818, and the pH decreases from 4.76 to 4.67—a decrease of only 0.09 pH units.The ratio [CH3COO–]/[CH3COOH] becomes 0.9/1.1 = 0.818 and the pH becomes\[\mathrm{pH}=4.76+\log (0.818)=4.67 \nonumber\]Equation \ref{6.2} is written in terms of the equilibrium concentrations of CH3COOH and of CH3COO–. A more useful relationship relates a buffer’s pH to the initial concentrations of the weak acid and the weak base. We can derive a general buffer equation by considering the following reactions for a weak acid, HA, and the soluble salt of its conjugate weak base, NaA.\[\begin{array}{c}{\mathrm{NaA}(s) \rightarrow \mathrm{Na}^{+}(a q)+\mathrm{A}^{-}(a q)} \\ {\mathrm{HA}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{A}^{-}(a q)} \\ {2 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q)}\end{array} \nonumber\]Because the concentrations of Na+, A–, HA, H3O+, and OH– are unknown, we need five equations to define the solution’s composition.
Two of these equations are the equilibrium constant expressions for HA and H2O.\[K_{a}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{A}^{-}\right]}{[\mathrm{HA}]} \label{6.3}\]\[K_{w}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{OH}^{-}\right] \nonumber\]The remaining three equations are mass balance equations for HA and Na+\[C_{\mathrm{HA}}+C_{\mathrm{NaA}}=[\mathrm{HA}]+\left[\mathrm{A}^{-}\right] \label{6.4}\]\[C_{\mathrm{NaA}}=\left[\mathrm{Na}^{+}\right] \label{6.5}\]and a charge balance equation\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]+\left[\mathrm{Na}^{+}\right]=\left[\mathrm{OH}^{-}\right]+\left[\mathrm{A}^{-}\right] \label{6.6}\]Substituting Equation \ref{6.5} into Equation \ref{6.6} and solving for [A–] gives\[\left[\mathrm{A}^{-}\right]=C_{\mathrm{NaA} }-\left[\mathrm{OH}^{-}\right]+\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \label{6.7}\]Next, we substitute Equation \ref{6.7} into Equation \ref{6.4}, which gives the concentration of HA as\[[\mathrm{HA}]=C_{\mathrm{HA}}+\left[\mathrm{OH}^{-}\right]-\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \label{6.8}\]Finally, we substitute Equation \ref{6.7} and Equation \ref{6.8} into Equation \ref{6.3} and solve for pH to arrive at a general equation for a buffer’s pH.\[\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{C_{\mathrm{NaA} }-\left[\mathrm{OH}^{-}\right]+\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{C_{\mathrm{HA}}+\left[\mathrm{OH}^{-}\right]-\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]} \nonumber\]If the initial concentrations of the weak acid, CHA, and the weak base, CNaA, are significantly greater than [H3O+] and [OH–], then we can simplify the general equation to the Henderson–Hasselbalch equation.\[\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{C_{\mathrm{NaA}}}{C_{\mathrm{HA}}} \label{6.9}\]As outlined below, the Henderson–Hasselbalch equation provides a simple way to calculate the pH of a buffer, and to determine the change in pH upon adding a strong acid or strong base.Lawrence Henderson (1878‐1942) first developed a relationship between [H3O+], [HA], and [A–] while studying the buffering of blood. Kurt Hasselbalch (1874‐1962) modified Henderson’s equation by transforming it to the logarithmic form shown in Equation \ref{6.9}. The assumptions that lead to Equation \ref{6.9} result in a minimal error in pH (<±5%) for larger concentrations of HA and A–, for concentrations of HA and A– that are similar in magnitude, and for weak acids with pKa values closer to 7. For most problems in this textbook, Equation \ref{6.9} provides acceptable results. Be sure, however, to test your assumptions. For a discussion of the Henderson–Hasselbalch equation, including the error inherent in Equation \ref{6.9}, see Po, H. N.; Senozan, N. M. “The Henderson–Hasselbalch Equation: Its History and Limitations,” J. Chem. Educ. 2001, 78, 1499–1503.
Calculate the pH of a buffer that is 0.020 M in NH3 and 0.030 M in NH4Cl. What is the pH after we add 1.0 mL of 0.10 M NaOH to 0.10 L of this buffer?
Solution
The acid dissociation constant for \(\text{NH}_4^+\) is \(5.70 \times 10^{-10}\), which is a pKa of 9.24. Substituting the initial concentrations of NH3 and NH4Cl into Equation \ref{6.9} and solving, we find that the buffer’s pH is\[\mathrm{pH}=9.24+\log \frac{0.020}{0.030}=9.06 \nonumber\]With a pH of 9.06, the concentration of H3O+ is \(8.71 \times 10^{-10}\) and the concentration of OH– is \(1.15 \times 10^{-5}\).
Because both of these concentrations are much smaller than either \(C_{\text{NH}_3}\) or \(C_{\text{NH}_4\text{Cl}}\), the approximations used to derive Equation \ref{6.9} are reasonable.Adding NaOH converts a portion of the \(\text{NH}_4^+\) to NH3 as a result of the following reaction\[\mathrm{NH}_{4}^{+}(a q)+\mathrm{OH}^{-}(a q) \rightleftharpoons \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{NH}_{3}(a q) \nonumber\]Because this reaction’s equilibrium constant is so large (it is equal to \((K_\mathrm{b})^{-1}\), or \(5.7 \times 10^4\)), we may treat the reaction as if it goes to completion. The new concentrations of \(\text{NH}_4^+\) and NH3 are\[C_{\mathrm{NH}_{4}^{+}}=\frac{\operatorname{mol} \ \mathrm{NH}_{4}^{+}- \ \mathrm{mol} \mathrm{OH}^{-}}{V_{\mathrm{total}}} \nonumber\]\[C_{\mathrm{NH}_4^+}=\frac{(0.030 \ \mathrm{M})(0.10 \ \mathrm{L})-(0.10 \ \mathrm{M})\left(1.0 \times 10^{-3} \ \mathrm{L}\right)}{0.10 \ \mathrm{L}+1.0 \times 10^{-3} \ \mathrm{L}}=0.029 \ \mathrm{M} \nonumber\]\[C_{\mathrm{NH}_{3}}=\frac{\mathrm{mol} \ \mathrm{NH}_{3}+\mathrm{mol} \ \mathrm{OH}^{-}}{V_{\mathrm{total}}} \nonumber\]\[C_{\mathrm{NH}_3}=\frac{(0.020 \ \mathrm{M})(0.10 \ \mathrm{L})+(0.10 \ \mathrm{M})\left(1.0 \times 10^{-3} \ \mathrm{L}\right)}{0.10 \ \mathrm{L}+1.0 \times 10^{-3} \ \mathrm{L}}=0.021 \ \mathrm{M} \nonumber\]Substituting these concentrations into Equation \ref{6.9} gives a pH of\[\mathrm{pH}=9.24+\log \frac{0.021}{0.029}=9.10 \nonumber\]Note that adding NaOH increases the pH from 9.06 to 9.10. As we expect, adding a base makes the pH more basic. Checking to see that the pH changes in the right direction is one way to catch a calculation error.
Calculate the pH of a buffer that is 0.10 M in KH2PO4 and 0.050 M in Na2HPO4. What is the pH after we add 5.0 mL of 0.20 M HCl to 0.10 L of this buffer? Use Appendix 11 to find the appropriate Ka value.
The acid dissociation constant for \(\text{H}_2\text{PO}_4^-\) is \(6.32 \times 10^{-8}\), or a pKa of 7.199. Substituting the initial concentrations of \(\text{H}_2\text{PO}_4^-\) and \(\text{HPO}_4^{2-}\) into Equation \ref{6.9} and solving gives the buffer’s pH as\[\mathrm{pH}=7.199+\log \frac{\left[\mathrm{HPO}_{4}^{2-}\right]}{\left[\mathrm{H}_{2} \mathrm{PO}_{4}^{-}\right]}=7.199+\log \frac{0.050}{0.10}=6.898 \approx 6.90\nonumber\]Adding HCl converts a portion of \(\text{HPO}_4^{2-}\) to \(\text{H}_2\text{PO}_4^-\) as a result of the following reaction\[\mathrm{HPO}_{4}^{2-}(a q)+\mathrm{H}_{3} \mathrm{O}^{+}(a q)\rightleftharpoons \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{H}_{2} \mathrm{PO}_{4}^{-}(a q) \nonumber\]Because this reaction’s equilibrium constant is so large (it is \(1.59 \times 10^7\)), we may treat the reaction as if it goes to completion.
The new concentrations of \(\text{H}_2\text{PO}_4^-\) and \(\text{HPO}_4^{2-}\) are\[C_{\mathrm{H}_{2} \mathrm{PO}_{4}^{-}}=\frac{\mathrm{mol} \ \mathrm{H}_{2} \mathrm{PO}_{4}^{-}+\mathrm{mol} \ \mathrm{HCl}}{V_{\mathrm{total}}} \nonumber\]\[C_{\mathrm{H}_{2} \mathrm{PO}_{4}^{-}}=\frac{(0.10 \ \mathrm{M})(0.10 \ \mathrm{L})+(0.20 \ \mathrm{M})\left(5.0 \times 10^{-3} \ \mathrm{L}\right)}{0.10 \ \mathrm{L}+5.0 \times 10^{-3} \ \mathrm{L}}=0.105 \ \mathrm{M} \nonumber\]\[C_{\mathrm{HPO}_{4}^{2-}}=\frac{\mathrm{mol} \ \mathrm{HPO}_{4}^{2-}-\mathrm{mol} \ \mathrm{HCl}}{V_{\mathrm{total}}} \nonumber\]\[C_{\mathrm{HPO}_{4}^{2-}}=\frac{(0.05 \ \mathrm{M})(0.10 \ \mathrm{L})-(0.20 \ \mathrm{M})\left(5.0 \times 10^{-3} \ \mathrm{L}\right)}{0.10 \ \mathrm{L}+5.0 \times 10^{-3} \ \mathrm{L}}=0.0381 \ \mathrm{M} \nonumber\]Substituting these concentrations into Equation \ref{6.9} gives a pH of\[\mathrm{pH}=7.199+\log \frac{\left[\mathrm{HPO}_{4}^{2-}\right]}{\left[\mathrm{H}_{2} \mathrm{PO}_{4}^-\right]}=7.199+\log \frac{0.0381}{0.105}=6.759 \approx 6.76 \nonumber\]As we expect, adding HCl decreases the buffer’s pH by a small amount, dropping from 6.90 to 6.76.We can use a multiprotic weak acid to prepare buffers at as many different pH values as there are acidic protons, with the Henderson–Hasselbalch equation applying in each case. For example, for malonic acid (pKa1 = 2.85 and pKa2 = 5.70) we can prepare buffers with pH values of\[\begin{array}{l}{\mathrm{pH}=2.85+\log \frac{C_{\mathrm{HM}^{-}}}{C_{\mathrm{H}_{2} \mathrm{M}}}} \\ {\mathrm{pH}=5.70+\log \frac{C_{\mathrm{M}^{2-}}}{C_{\mathrm{HM}^-}}}\end{array} \nonumber\]where H2M, HM–, and M2– are malonic acid’s different acid–base forms.Although our treatment of buffers is based on acid–base chemistry, we can extend buffers to equilibria that involve complexation or redox reactions. For example, the Nernst equation for a solution that contains Fe2+ and Fe3+ is similar in form to the Henderson‐Hasselbalch equation.\[E=E_{\mathrm{Fe}^{3+} / \mathrm{Fe}^{2+}}^{\circ}-0.05916 \log \frac{\left[\mathrm{Fe}^{2+}\right]}{\left[\mathrm{Fe}^{3+}\right]} \nonumber\]A solution that contains similar concentrations of Fe2+ and Fe3+ is buffered to a potential near the standard state reduction potential for Fe3+. We call such solutions redox buffers. Adding a strong oxidizing agent or a strong reducing agent to a redox buffer results in a small change in potential.A ladder diagram provides a simple way to visualize a solution’s predominate species as a function of solution conditions. It also provides a convenient way to show the range of solution conditions over which a buffer is effective. For example, an acid–base buffer exists when the concentrations of the weak acid and its conjugate weak base are similar.
For convenience, let’s assume that an acid–base buffer exists when\[\frac{1}{10} \leq \frac{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]} \leq \frac{10}{1} \nonumber\]Substituting these ratios into the Henderson–Hasselbalch equation\[\begin{aligned} \mathrm{pH} &=\mathrm{p} K_{\mathrm{a}}+\log \frac{1}{10}=\mathrm{p} K_{\mathrm{a}}-1 \\ \mathrm{pH} &=\mathrm{p} K_{\mathrm{a}}+\log \frac{10}{1}=\mathrm{p} K_{\mathrm{a}}+1 \end{aligned} \nonumber\]shows that an acid–base buffer works over a pH range of pKa ± 1.Using the same approach, it is easy to show that a metal‐ligand complexation buffer for MLn exists when\[\mathrm{pL}=\log K_{n} \pm 1 \text { or } \mathrm{pL}=\frac{\log \beta_{n}}{n} \pm \frac{1}{n} \nonumber\]where Kn or \(\beta_n\) is the relevant stepwise or overall formation constant. For an oxidizing agent and its conjugate reducing agent, a redox buffer exists when\[E=E^{\circ} \pm \frac{1}{n} \times \frac{2.303 R T}{F}=E^{\circ} \pm \frac{0.05916}{n}\left(\text { at } 25^{\circ} \mathrm{C}\right) \nonumber\]Figure 6.8.1 shows ladder diagrams with buffer regions for several equilibrium systems.Buffer capacity is the ability of a buffer to resist a change in pH when we add to it a strong acid or a strong base. A buffer’s capacity to resist a change in pH is a function of the concentrations of the weak acid and the weak base, as well as their relative proportions. The importance of the weak acid’s concentration and the weak base’s concentration is obvious. The more moles of weak acid and weak base a buffer has, the more strong base or strong acid it can neutralize without a significant change in its pH.Although a higher concentration of buffering agents provides greater buffer capacity, there are reasons for using smaller concentrations, including the formation of unwanted precipitates and the tolerance of biological systems for high concentrations of dissolved salts.The relative proportions of a weak acid and a weak base also affects how much the pH changes when we add a strong acid or a strong base. A buffer that is equimolar in weak acid and weak base requires a greater amount of strong acid or strong base to bring about a one unit change in pH. Consequently, a buffer is most effective against the addition of strong acids or strong bases when its pH is near the weak acid’s pKa value.Buffer solutions are often prepared using standard “recipes” found in the chemical literature [see, for example, (a) Bower, V. E.; Bates, R. G. J. Res. Natl. Bur. Stand. (U. S.) 1955, 55, 197– 200; (b) Bates, R. G. Ann. N. Y. Acad. Sci. 1961, 92, 341–356; (c) Bates, R. G. Determination of pH, 2nd ed.; Wiley‐Interscience: New York, 1973]. In addition, there are computer programs and on‐line calculators to aid in preparing buffers [(a) Lambert, W. J. J. Chem. Educ. 1990, 67, 150–153; (b) http://www.bioinformatics.org/JaMBW/5/4/index.html.]. Perhaps the simplest way to make a buffer, however, is to prepare a solution that contains an appropriate conjugate weak acid and weak base, measure its pH, and then adjust the pH to the desired value by adding small portions of either a strong acid or a strong base.A good “rule of thumb” when choosing a buffer is to select one whose reagents have a pKa value close to your desired pH.This page titled 6.8: Buffer Solutions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
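As a small illustration of these ideas, the R sketch below uses the Henderson–Hasselbalch equation to reproduce the NH3/NH4Cl example from earlier in this section, first the initial pH of 9.06 and then the pH of 9.10 after adding NaOH; the function name buffer_pH and the variable names are invented for this sketch and do not come from the text.

```r
# Henderson-Hasselbalch estimate of a buffer's pH, and of the new pH after a
# strong base converts some of the weak acid to its conjugate weak base.
buffer_pH <- function(pKa, c_base, c_acid) pKa + log10(c_base / c_acid)

pKa <- 9.24    # pKa for NH4+
c_b <- 0.020   # M NH3 (conjugate weak base)
c_a <- 0.030   # M NH4Cl (conjugate weak acid)
cat("initial pH:", round(buffer_pH(pKa, c_b, c_a), 2), "\n")                     # 9.06

mol_oh  <- 0.10 * 1.0e-3                       # mol OH- in 1.0 mL of 0.10 M NaOH
v_total <- 0.10 + 1.0e-3                       # total volume in L
c_a_new <- (c_a * 0.10 - mol_oh) / v_total     # NH4+ consumed by the added OH-
c_b_new <- (c_b * 0.10 + mol_oh) / v_total     # NH3 produced by the added OH-
cat("pH after adding NaOH:", round(buffer_pH(pKa, c_b_new, c_a_new), 2), "\n")   # 9.10
```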
6.9: Activity Effects
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.09%3A_Activity_Effects
Careful measurements on the metal–ligand complex Fe(SCN)2+ suggest its stability decreases in the presence of inert ions [Lister, M. W.; Rivington, D. E. Can. J. Chem. 1955, 33, 1572–1590]. We can demonstrate this by adding an inert salt to an equilibrium mixture of Fe3+ and SCN–. Figure 6.9.1 a shows the result of mixing together equal volumes of 1.0 mM FeCl3 and 1.5 mM KSCN, both of which are colorless. The solution’s reddish–orange color is due to the formation of Fe(SCN)2+.\[\mathrm{Fe}^{3+}(a q)+\mathrm{SCN}^{-}(a q) \rightleftharpoons \mathrm{Fe}(\mathrm{SCN})^{2+}(a q) \label{6.1}\]Adding 10 g of KNO3 to the solution and stirring to dissolve the solid produces the result shown in Figure 6.9.1 b. The solution’s lighter color suggests that adding KNO3 shifts reaction \ref{6.1} to the left, decreasing the concentration of Fe(SCN)2+ and increasing the concentrations of Fe3+ and SCN–. The result is a decrease in the complex’s formation constant, K1.\[K_{1}=\frac{\left[\mathrm{Fe}(\mathrm{SCN})^{2+}\right]}{\left[\mathrm{Fe}^{3+}\right]\left[\mathrm{SCN}^{-}\right]} \label{6.2}\]Why should adding an inert electrolyte affect a reaction’s equilibrium position? We can explain the effect of KNO3 on the formation of Fe(SCN)2+ if we consider the reaction on a microscopic scale. The solution in Figure 6.9.1 b contains a variety of cations and anions: Fe3+, SCN–, K+, \(\text{NO}_3^-\), H3O+, and OH–. Although the solution is homogeneous, on average, there are slightly more anions in regions near the Fe3+ ions, and slightly more cations in regions near the SCN– ions. As shown in Figure 6.9.2 , each Fe3+ ion and each SCN– ion is surrounded by an ionic atmosphere of opposite charge (\(\delta^–\) and \(\delta^+\)) that partially screens the ions from each other. Because each ion’s apparent charge at the edge of its ionic atmosphere is less than its actual charge, the force of attraction between the two ions is smaller. As a result, the formation of Fe(SCN)2+ is slightly less favorable and the formation constant in Equation \ref{6.2} is slightly smaller. Higher concentrations of KNO3 increase \(\delta^–\) and \(\delta^+\), resulting in even smaller values for the formation constant.To factor the concentration of ions into the formation constant for Fe(SCN)2+, we need a way to express that concentration in a meaningful way. Because both an ion’s concentration and its charge are important, we define the solution’s ionic strength, \(\mu\) as\[\mu=\frac{1}{2} \sum_{i=1}^{n} c_{i} z_{i}^{2} \nonumber\]where ci and zi are the concentration and charge of the ith ion.
Calculate the ionic strength of a solution of 0.10 M NaCl. Repeat the calculation for a solution of 0.10 M Na2SO4.
Solution
The ionic strength for 0.10 M NaCl is\[\begin{array}{c}{\mu=\frac{1}{2}\left\{\left[\mathrm{Na}^{+}\right] \times(+1)^{2}+\left[\mathrm{Cl}^{-}\right] \times(-1)^{2}\right\}} \\ {\mu=\frac{1}{2}\left\{(0.10) \times(+1)^{2}+(0.10) \times(-1)^{2}\right\}=0.10 \ \mathrm{M}}\end{array} \nonumber\]For 0.10 M Na2SO4 the ionic strength is\[\begin{array}{c}{\mu=\frac{1}{2}\left\{\left[\mathrm{Na}^{+}\right] \times(+1)^{2}+\left[\mathrm{SO}_{4}^{2-}\right] \times(-2)^{2}\right\}} \\ {\mu=\frac{1}{2}\left\{(0.20) \times(+1)^{2}+(0.10) \times(-2)^{2}\right\}=0.30 \ \mathrm{M}}\end{array} \nonumber\]In calculating the ionic strengths of these solutions we are ignoring the presence of H3O+ and OH–, and, in the case of Na2SO4, the presence of \(\text{HSO}_4^-\) from the base dissociation reaction of \(\text{SO}_4^{2-}\).
In the case of 0.10 M NaCl, the concentrations of H3O+ and OH– are \(1.0 \times 10^{-7}\), which is significantly smaller than the concentrations of Na+ and Cl–. Because \(\text{SO}_4^{2-}\) is a very weak base (Kb = \(1.0 \times 10^{-12}\)), the solution is only slightly basic (pH = 7.5), and the concentrations of H3O+, OH–, and \(\text{HSO}_4^-\) are negligible. Although we can ignore the presence of H3O+, OH–, and \(\text{HSO}_4^-\) when we calculate the ionic strength of these two solutions, be aware that an equilibrium reaction can generate ions that might affect the solution’s ionic strength.Note that the unit for ionic strength is molarity, but that a salt’s ionic strength need not match its molar concentration. For a 1:1 salt, such as NaCl, ionic strength and molar concentration are identical. The ionic strength of a 2:1 electrolyte, such as Na2SO4, is three times larger than the electrolyte’s molar concentration.Figure 6.9.1 shows that adding KNO3 to a mixture of Fe3+ and SCN– decreases the formation constant for Fe(SCN)2+. This creates a contradiction. Earlier in this chapter we showed that there is a relationship between a reaction’s standard‐state free energy, ∆Go, and its equilibrium constant, K.\[\triangle G^{\circ}=-R T \ln K \nonumber\]Because a reaction has only one standard‐state, its equilibrium constant must be independent of solution conditions. Although ionic strength affects the apparent formation constant for Fe(SCN)2+, reaction \ref{6.1} must have an underlying thermodynamic formation constant that is independent of ionic strength.The apparent formation constant for Fe(SCN)2+, as shown in Equation \ref{6.2}, is a function of concentrations. In place of concentrations, we define the true thermodynamic equilibrium constant using activities. The activity of species A, aA, is the product of its concentration, [A], and a solution‐dependent activity coefficient, \(\gamma_A\)\[a_{A}=[A] \gamma_{A} \nonumber\]The true thermodynamic formation constant for Fe(SCN)2+, therefore, is\[K_{1}=\frac{a_{\mathrm{Fe}(S \mathrm{CN})^{2+}}}{a_{\mathrm{Fe}^{3+}} \times a_{\mathrm{SCN}^-}}=\frac{\left[\mathrm{Fe}(\mathrm{SCN})^{2+}\right] \gamma_{\mathrm{Fe}(\mathrm{SCN})^{2+}}}{\left[\mathrm{Fe}^{3+}\right] \gamma_{\mathrm{Fe}^{3+}}\left[\mathrm{SCN}^{-}\right] \gamma_{\mathrm{SCN}^{-}}} \nonumber\]Unless otherwise specified, the equilibrium constants in the appendices are thermodynamic equilibrium constants.A species’ activity coefficient corrects for any deviation between its physical concentration and its ideal value. For a gas, a pure solid, a pure liquid, or a non‐ionic solute, the activity coefficient is approximately one under most reasonable experimental conditions.For a gas the proper terms are fugacity and fugacity coefficient, instead of activity and activity coefficient.For a reaction that involves only these species, the difference between activity and concentration is negligible. The activity coefficient for an ion, however, depends on the solution’s ionic strength, the ion’s charge, and the ion’s size. It is possible to estimate activity coefficients using the extended Debye‐Hückel equation\[\log \gamma_{A}=\frac{-0.51 \times z_{A}^{2} \times \sqrt{\mu}}{1+3.3 \times \alpha_{A} \times \sqrt{\mu}} \label{6.3}\]where zA is the ion’s charge, \(\alpha_A\) is the hydrated ion’s effective diameter in nanometers (Table 6.2), \(\mu\) is the solution’s ionic strength, and 0.51 and 3.3 are constants appropriate for an aqueous solution at 25oC. 
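Because Equation \ref{6.3} appears repeatedly in what follows, a small helper function is convenient. The R sketch below is a minimal implementation of the extended Debye‐Hückel equation; the function name and the test values, which use the effective diameters of 0.45 nm applied to Pb2+ and \(\text{IO}_3^-\) in the example later in this section, are choices made for this sketch.

```r
# Extended Debye-Huckel estimate of an activity coefficient (Equation 6.3);
# reasonable for ionic strengths below about 0.1 M in water at 25 degrees C.
activity_coeff <- function(z, alpha, mu) {
  log_gamma <- (-0.51 * z^2 * sqrt(mu)) / (1 + 3.3 * alpha * sqrt(mu))
  10^log_gamma
}

mu <- 0.5 * (0.020 * (+2)^2 + 0.040 * (-1)^2)   # ionic strength of 0.020 M Mg(NO3)2
cat("gamma for Pb2+ :", activity_coeff(+2, 0.45, mu), "\n")   # about 0.43
cat("gamma for IO3- :", activity_coeff(-1, 0.45, mu), "\n")   # about 0.81
```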
A hydrated ion’s effective radius is the radius of the ion plus those water molecules closely bound to the ion. The effective radius is greater for smaller, more highly charged ions than it is for larger, less highly charged ions. The effective diameters, \(\alpha\), for a selection of ions are gathered below, with ions that share the same value grouped together.
H3O+: 0.9 nm
Li+: 0.6 nm
Na+, \(\text{IO}_3^-\), \(\text{HSO}_3^-\), \(\text{HCO}_3^-\), \(\text{H}_2\text{PO}_4^-\): 0.45 nm
OH–, F–, SCN–, HS–, \(\text{ClO}_3^-\), \(\text{ClO}_4^-\), \(\text{MnO}_4^-\): 0.35 nm
K+, Cl–, Br–, I–, CN–, \(\text{NO}_2^-\), \(\text{NO}_3^-\): 0.3 nm
Cs+, Tl+, Ag+, \(\text{NH}_4^+\): 0.25 nm
Mg2+, Be2+: 0.8 nm
Ca2+, Cu2+, Zn2+, Sn2+, Mn2+, Fe2+, Ni2+, Co2+: 0.6 nm
Sr2+, Ba2+, Cd2+, Hg2+, S2–: 0.5 nm
Pb2+, \(\text{CO}_3^{2-}\), \(\text{SO}_3^{2-}\): 0.45 nm
\(\text{Hg}_2^{2+}\), \(\text{SO}_4^{2-}\), \(\text{S}_2\text{O}_3^{2-}\), \(\text{CrO}_4^{2-}\), \(\text{HPO}_4^{2-}\): 0.40 nm
Al3+, Fe3+, Cr3+: 0.9 nm
\(\text{PO}_4^{3-}\), \(\text{Fe(CN)}_6^{3-}\): 0.4 nm
Zr4+, Ce4+, Sn4+: 1.1 nm
\(\text{Fe(CN)}_6^{4-}\): 0.5 nm
Source: Kielland, J. J. Am. Chem. Soc. 1937, 59, 1675–1678.
Several features of Equation \ref{6.3} deserve our attention. First, as the ionic strength approaches zero an ion’s activity coefficient approaches a value of one. In a solution where \(\mu = 0\), an ion’s activity and its concentration are identical. We can take advantage of this fact to determine a reaction’s thermodynamic equilibrium constant by measuring the apparent equilibrium constant for several increasingly smaller ionic strengths and extrapolating back to an ionic strength of zero. Second, an activity coefficient is smaller, and the effect of activity is more important, for an ion with a higher charge and a smaller effective radius. Finally, the extended Debye‐Hückel equation provides a reasonable estimate of an ion’s activity coefficient when the ionic strength is less than 0.1. Modifications to Equation \ref{6.3} extend the calculation of activity coefficients to higher ionic strengths [Davies, C. W. Ion Association, Butterworth: London, 1962].Earlier in this chapter we calculated the solubility of Pb(IO3)2 in deionized water, obtaining a result of \(4.0 \times 10^{-5}\) mol/L. Because the only significant source of ions is from the solubility reaction, the ionic strength is very low and we can assume that \(\gamma \approx 1\) for both Pb2+ and \(\text{IO}_3^-\). In calculating the solubility of Pb(IO3)2 in deionized water, we do not need to account for ionic strength. But what if we need to know the solubility of Pb(IO3)2 in a solution that contains other, inert ions? In this case we need to include activity coefficients in our calculation.
Calculate the solubility of Pb(IO3)2 in a matrix of 0.020 M Mg(NO3)2.
Solution
We begin by calculating the solution’s ionic strength.
Since Pb(IO3)2 is only sparingly soluble, we will assume we can ignore its contribution to the ionic strength; thus\[\mu=\frac{1}{2}\left\{(0.020)(+2)^{2}+(0.040)(-1)^{2}\right\}=0.060 \ \mathrm{M} \nonumber\]Next, we use Equation \ref{6.3} to calculate the activity coefficients for Pb2+ and \(\text{IO}_3^-\).\[\log \gamma_{\mathrm{Pb}^{2+}}=\frac{-0.51 \times(+2)^{2} \times \sqrt{0.060}}{1+3.3 \times 0.45 \times \sqrt{0.060}}=-0.366 \nonumber\]\[\gamma_{\mathrm{Pb}^{2+}}=0.431 \nonumber\]\[\log \gamma_{\mathrm{IO}_{3}^{-}}=\frac{-0.51 \times(-1)^{2} \times \sqrt{0.060}}{1+3.3 \times 0.45 \times \sqrt{0.060}}=-0.0916 \nonumber\]\[\gamma_{\mathrm{IO}_{3}^-}=0.810 \nonumber\]Defining the equilibrium concentrations of Pb2+ and \(\text{IO}_3^-\) in terms of the variable x, where \([\mathrm{Pb}^{2+}] = x\) and \([\mathrm{IO}_3^-] = 2x\), and substituting into the thermodynamic solubility product for Pb(IO3)2 leaves us with\[K_{\mathrm{sp}}=a_{\mathrm{Pb}^{2+}} \times a_{\mathrm{IO}_{3}^-}^{2}=\gamma_{\mathrm{Pb}^{2+}}\left[\mathrm{Pb}^{2+}\right] \times \gamma_{\mathrm{IO}_3^-}^{2}\left[\mathrm{IO}_{3}^{-}\right]^{2}=2.5 \times 10^{-13} \nonumber\]\[K_{\mathrm{sp}}=(0.431)(x)(0.810)^{2}(2 x)^{2}=2.5 \times 10^{-13} \nonumber\]\[K_{\mathrm{sp}}=1.131 x^{3}=2.5 \times 10^{-13} \nonumber\]Solving for x gives \(6.0 \times 10^{-5}\) and a molar solubility of \(6.0 \times 10^{-5}\) mol/L for Pb(IO3)2. If we ignore activity, as we did in our earlier calculation, we report the molar solubility as \(4.0 \times 10^{-5}\) mol/L. Failing to account for activity in this case underestimates the molar solubility of Pb(IO3)2 by 33%.The solution’s equilibrium composition is\[\begin{array}{c}{\left[\mathrm{Pb}^{2+}\right]=6.0 \times 10^{-5} \ \mathrm{M}} \\ {\left[\mathrm{IO}_{3}^{-}\right]=1.2 \times 10^{-4} \ \mathrm{M}} \\ {\left[\mathrm{Mg}^{2+}\right]=0.020 \ \mathrm{M}} \\ {\left[\mathrm{NO}_{3}^{-}\right]=0.040 \ \mathrm{M}}\end{array} \nonumber\]Because the concentrations of both Pb2+ and \(\text{IO}_3^-\) are much smaller than the concentrations of Mg2+ and \(\text{NO}_3^-\), our decision to ignore the contribution of Pb2+ and \(\text{IO}_3^-\) to the ionic strength is reasonable.How do we handle the calculation if we cannot ignore the concentrations of Pb2+ and \(\text{IO}_3^-\) when calculating the ionic strength? One approach is to use the method of successive approximations. First, we recalculate the ionic strength using the concentrations of all ions, including Pb2+ and \(\text{IO}_3^-\). Next, we recalculate the activity coefficients for Pb2+ and \(\text{IO}_3^-\) using this new ionic strength and then recalculate the molar solubility. We continue this cycle until two successive calculations yield the same molar solubility within an acceptable margin of error.
Calculate the molar solubility of Hg2Cl2 in 0.10 M NaCl, taking into account the effect of ionic strength. Compare your answer to that from Exercise 6.7.2 in which you ignored the effect of ionic strength.
We begin by calculating the solution’s ionic strength. Because NaCl is a 1:1 ionic salt, the ionic strength is the same as the concentration of NaCl; thus \(\mu\) = 0.10 M.
This assumes, of course, that we can ignore the contributions of \(\text{Hg}_2^{2+}\) and Cl– from the solubility of Hg2Cl2. Next we use Equation \ref{6.3} to calculate the activity coefficients for \(\text{Hg}_2^{2+}\) and Cl–.\[\log \gamma_{\mathrm{Hg}_{2}^{2+}}=\frac{-0.51 \times(+2)^{2} \times \sqrt{0.10}}{1+3.3 \times 0.40 \times \sqrt{0.10}}=-0.455 \nonumber\]\[\gamma_{\mathrm{Hg}_{2}^{2+}}=0.351 \nonumber\]\[\log \gamma_{\mathrm{Cl}^{-}}=\frac{-0.51 \times(-1)^{2} \times \sqrt{0.10}}{1+3.3 \times 0.3 \times \sqrt{0.10}}=-0.12 \nonumber\]\[\gamma_{\mathrm{Cl}^-}=0.75 \nonumber\]Defining the equilibrium concentrations of \(\text{Hg}_2^{2+}\) and Cl– in terms of the variable x, and substituting into the thermodynamic solubility product for Hg2Cl2, leaves us with\[K_{\mathrm{sp}}=a_{\mathrm{Hg}_{2}^{2+}}\left(a_{\mathrm{Cl}^-}\right)^{2} = \gamma_{\mathrm{Hg}_{2}^{2+}}\left[\mathrm{Hg}_{2}^{2+}\right]\left(\gamma_{\mathrm{Cl}^{-}}\right)^{2}\left[\mathrm{Cl}^{-}\right]^{2}=1.2 \times 10^{-18} \nonumber\]Because the value of x likely is small, let’s simplify this equation to\[(0.351)(x)(0.75)^{2}(0.1)^{2}=1.2 \times 10^{-18} \nonumber\]Solving for x gives its value as \(6.1 \times 10^{-16}\). Because x is the concentration of \(\text{Hg}_2^{2+}\) and 2x is the concentration of Cl–, our decision to ignore their contributions to the ionic strength is reasonable. The molar solubility of Hg2Cl2 in 0.10 M NaCl is \(6.1 \times 10^{-16}\) mol/L. In Exercise 6.7.2, where we ignored ionic strength, we determined that the molar solubility of Hg2Cl2 is \(1.2 \times 10^{-16}\) mol/L, a result that is \(5 \times\) smaller than its actual value.
As Example 6.9.2 and Exercise 6.9.1 show, failing to correct for the effect of ionic strength can lead to a significant error in an equilibrium calculation. Nevertheless, it is not unusual to ignore activities and to assume that the equilibrium constant is expressed in terms of concentrations. There is a practical reason for this—in an analysis we rarely know the exact composition, much less the ionic strength of aqueous samples or of solid samples brought into solution. Equilibrium calculations are a useful guide when we develop an analytical method; however, it is only when we complete an analysis and evaluate the results that we can judge whether our theory matches reality. In the end, work in the laboratory is the most critical step in developing a reliable analytical method.
This is a good place to revisit the meaning of pH. In Chapter 2 we defined pH as\[\mathrm{pH}=-\log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \nonumber\]Now we see that the correct definition is\[\begin{array}{c}{\mathrm{pH}=-\log a_{\mathrm{H}_{3} \mathrm{O}^{+}}} \\ {\mathrm{pH}=-\log \gamma_{\mathrm{H}_{3} \mathrm{O}^{+}}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}\end{array} \nonumber\]Failing to account for the effect of ionic strength can lead to a significant error in the reported concentration of H3O+. For example, if the pH of a solution is 7.00 and the activity coefficient for H3O+ is 0.90, then the concentration of H3O+ is \(1.11 \times 10^{-7}\) M, not \(1.00 \times 10^{-7}\) M, an error of +11%. Fortunately, when we develop and carry out an analytical method, we are more interested in controlling pH than in calculating [H3O+]. As a result, the difference between the two definitions of pH rarely is of significant concern.
This page titled 6.9: Activity Effects is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
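The activity-coefficient calculations in this section are easy to script. The lines below are a minimal sketch in R, not part of the original example; the function name act_coef and the other variable names are ours, while the parameter values (0.51, 3.3, the effective radius of 0.45, and the Ksp of \(2.5 \times 10^{-13}\)) are those used in Example 6.9.2 above.
> act_coef = function(z, alpha, mu) {10^((-0.51 * z^2 * sqrt(mu))/(1 + 3.3 * alpha * sqrt(mu)))}
> mu = 0.5 * (0.020 * (+2)^2 + 0.040 * (-1)^2)   # ionic strength of the 0.020 M Mg(NO3)2 matrix
> g_pb = act_coef(+2, 0.45, mu)                  # activity coefficient for Pb2+, approximately 0.43
> g_io3 = act_coef(-1, 0.45, mu)                 # activity coefficient for IO3-, approximately 0.81
> x = (2.5e-13/(4 * g_pb * g_io3^2))^(1/3)       # molar solubility of Pb(IO3)2, approximately 6.0e-5 M
Repeating the last three lines with an ionic strength recalculated from the new concentrations is one way to carry out the method of successive approximations described above.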
6.10: Using Excel and R to Solve Equilibrium Problems
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.10%3A_Using_Excel_and_R_to_Solve_Equilibrium_Problems
In solving equilibrium problems we typically make one or more assumptions to simplify the algebra. These assumptions are important because they allow us to reduce the problem to an equation in x that we can solve by simply taking a square‐root, a cube‐root, or by using the quadratic equation. Without these assumptions, most equilibrium problems result in a cubic equation (or a higher‐order equation) that is more challenging to solve. Both Excel and R are useful tools for solving such equations.Although we focus here on the use of Excel and R to solve equilibrium problems, you also can use WolframAlpha; for details, see Cleary, D. A. “Use of WolframAlpha in Equilibrium Calculations,” Chem. Educator, 2014, 19, 182–186.Excel offers a useful tool—the Solver function—for finding the chemically significant root of a polynomial equation. In addition, it is easy to solve a system of simultaneous equations by constructing a spreadsheet that allows you to test and evaluate multiple solutions. Let’s work through two examples.In our earlier treatment of this problem we arrived at the following cubic equation\[4 x^{3}+0.40 x^{2}=2.5 \times 10^{-13} \nonumber\]where x is the equilibrium concentration of Pb2+. Although there are several approaches for solving cubic equations with paper and pencil, none are computationally easy. One approach is to iterate in on the answer by finding two values of x, one that leads to a result larger than \(2.5 \times 10^{-13}\) and one that gives a result smaller than \(2.5 \times 10^{-13}\). With boundaries established for the value of x, we shift the upper limit and the lower limit until the precision of our answer is satisfactory. Without going into details, this is how Excel’s Solver function works.To solve this problem, we first rewrite the cubic equation so that its right‐side equals zero.\[4 x^{3}+0.40 x^{2}-2.5 \times 10^{-13}=0 \nonumber\]Next, we set up the spreadsheet shown in Figure 6.10.1 a, placing the formula for the cubic equation in cell B2, and entering our initial guess for x in cell B1. Because Pb(IO3)2 is not very soluble, we expect that x is small and set our initial guess to 0. Finally, we access the Solver function by selecting Solver... from the Tools menu, which opens the Solver Parameters window.To define the problem, place the cursor in the box for Set Target Cell and then click on cell B2. Select the Value of: radio button and enter 0 in the box. Place the cursor in the box for By Changing Cells: and click on cell B1. Together, these actions instruct the Solver function to change the value of x, which is in cell B1, until the cubic equation in cell B2 equals zero. Before we actually solve the function, we need to consider whether there are any limitations for an acceptable result. For example, we know that x cannot be smaller than 0 because a negative concentration is not possible. We also want to ensure that the solution’s precision is acceptable. Click on the button labeled Options... to open the Solver Options window. Checking the option for Assume Non-Negative forces the Solver to maintain a positive value for the contents of cell B1, meeting one of our criteria. Setting the precision requires a bit more thought. 
The Solver function uses the precision to decide when to stop its search, doing so when\[|\text { expected value }-\text { calculated value } | \times 100=\text { precision }(\%) \nonumber\]where expected value is the target cell’s desired value (0 in this case), calculated value is the function’s current value (cell B2 in this case), and precision is the value we enter in the box for Precision. Because our initial guess of x = 0 gives a calculated result of \(2.5 \times 10^{-13}\), accepting the Solver’s default precision of \(1 \times 10^{-6}\) will stop the search after one cycle. To be safe, let’s set the precision to \(1 \times 10^{-18}\). Click OK and then Solve. When the Solver function finds a solution, the results appear in your spreadsheet (see Figure 6.10.1 b). Click OK to keep the result, or Cancel to return to the original values. Note that the answer here agrees with our earlier result of \(7.91 \times 10^{-7}\) M for the solubility of Pb(IO3)2. Be sure to evaluate the reasonableness of Solver’s answer. If necessary, repeat the process using a smaller value for the precision.
In developing our earlier solution to this problem we began by identifying four unknowns and writing out the following four equations.\[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{F}^{-}\right]}{[\mathrm{HF}]}=6.8 \times 10^{-4} \nonumber\]\[K_{w}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{OH}^{-}\right]=1.00 \times 10^{-14} \nonumber\]\[C_{\mathrm{HF}}=[\mathrm{HF}]+\left[\mathrm{F}^{-}\right] \nonumber\]\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\left[\mathrm{OH}^{-}\right]+\left[\mathrm{F}^{-}\right] \nonumber\]Next, we made two assumptions that allowed us to simplify the problem to an equation that is easy to solve.\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\sqrt{K_{\mathrm{a}} C_{\mathrm{HF}}}=\sqrt{\left(6.8 \times 10^{-4}\right)(1.0)}=2.6 \times 10^{-2} \nonumber\]Although we did not note this at the time, without making assumptions the solution to our problem is a cubic equation\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{3}+K_{\mathrm{a}}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}- \left(K_{a} C_{\mathrm{HF}}+K_{\mathrm{w}}\right)\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]-K_{\mathrm{a}} K_{\mathrm{w}}=0 \label{6.1}\]that we can solve using Excel’s Solver function. Of course, this assumes that we successfully complete the derivation!
Another option is to use Excel to solve the four equations simultaneously by iterating in on values for [HF], [F–], [H3O+], and [OH–]. Figure 6.10.2 a shows a spreadsheet for this purpose. The cells in the first row contain initial guesses for the equilibrium pH. Using the ladder diagram in Figure 6.8.1, pH values between 1 and 3 seem reasonable. You can add additional columns if you wish to include more pH values. The formulas in rows 2–5 use the definition of pH to calculate [H3O+], Kw to calculate [OH–], the charge balance equation to calculate [F–], and Ka to calculate [HF]. To evaluate the initial guesses, we use the mass balance expression for HF, rewriting it as\[[\mathrm{HF}]+\left[\mathrm{F}^{-}\right]-C_{\mathrm{HF}}=[\mathrm{HF}]+\left[\mathrm{F}^{-}\right]-1.0=0 \nonumber\]and entering it in the last row; the values in these cells give the calculation’s error for each pH. Figure 6.10.2 b shows the actual values for the spreadsheet in Figure 6.10.2 a.
The negative values in cells B6 and C6 mean that the combined concentrations of HF and F– are too small, and the positive value in cell D6 means that their combined concentrations are too large. The actual pH, therefore, is between 1.00 and 2.00. Using these pH values as new limits for the spreadsheet’s first row, we continue to narrow the range for the actual pH. Figure 6.10.2 c shows a final set of guesses, with the actual pH falling between 1.59 and 1.58. Because the error for 1.59 is smaller than that for 1.58, we accept a pH of 1.59 as the answer. Note that this is in agreement with our earlier result.
You also can solve this set of simultaneous equations using Excel’s Solver function. To do so, create the spreadsheet in Figure 6.10.2 a, but omit all columns other than A and B. Select Solver... from the Tools menu and define the problem by using B6 for Set Target Cell, setting its desired value to 0, and selecting B1 for By Changing Cells:. You may need to play with the Solver’s options to find a suitable solution to the problem, and it is wise to try several different initial guesses. The Solver function works well for relatively simple problems, such as finding the pH of 1.0 M HF. As problems become more complex and include more unknowns, the Solver function becomes a less reliable tool for solving equilibrium problems.
Using Excel, calculate the solubility of AgI in 0.10 M NH3 without making any assumptions. See our earlier treatment of this problem for the relevant equilibrium reactions and constants.
For a list of the relevant equilibrium reactions and equilibrium constants, see our earlier treatment of this problem. To solve this problem using Excel, let’s set up the following spreadsheet, copying the contents of cells B1–B9 into several additional columns. The initial guess for pI in cell B1 gives the concentration of I– in cell B2. Cells B3–B8 calculate the remaining concentrations, using the Ksp to obtain [Ag+], using the mass balance on iodide and silver to obtain [\(\text{Ag(NH}_3)_2^+\)], using \(\beta_2\) to calculate [NH3], using the mass balance on ammonia to find [\(\text{NH}_4^+\)], using Kb to calculate [OH–], and using Kw to calculate [H3O+]. The system’s charge balance equation provides a means for determining the calculation’s error.\[\left[\mathrm{Ag}^{+}\right]+\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right]+\left[\mathrm{NH}_{4}^{+}\right]+\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]-\left[\mathrm{I}^{-}\right]-\left[\mathrm{OH}^{-}\right]=0 \nonumber\]The largest possible value for pI, which corresponds to the smallest concentration of I– and the lowest possible solubility, occurs for a simple, saturated solution of AgI. When [Ag+] = [I–], the concentration of iodide is\[\left[\mathrm{I}^{-}\right]=\sqrt{K_{\mathrm{sp}}}=\sqrt{8.3 \times 10^{-17}}=9.1 \times 10^{-9} \nonumber\]which corresponds to a pI of 8.04. Entering initial guesses for pI of 4, 5, 6, 7, and 8 shows that the error changes sign between a pI of 5 and 6. Continuing in this way to narrow down the range for pI, we find that the error function is closest to zero at a pI of 5.42. The concentration of I– at equilibrium, and the molar solubility of AgI, is \(3.8 \times 10^{-6}\) mol/L, which agrees with our earlier solution to this problem.
R has a simple command—uniroot—for finding the chemically significant root of a polynomial equation. In addition, it is easy to write a function to solve a set of simultaneous equations by iterating in on a solution.
Let’s work through two examples.
In our earlier treatment of this problem we arrived at the following cubic equation\[4 x^{3}+0.40 x^{2}=2.5 \times 10^{-13} \nonumber\]where x is the equilibrium concentration of Pb2+. Although there are several approaches for solving cubic equations with paper and pencil, none are computationally easy. One approach to solving the problem is to iterate in on the answer by finding two values of x, one that leads to a result larger than \(2.5 \times 10^{-13}\) and one that gives a result smaller than \(2.5 \times 10^{-13}\). Having established boundaries for the value of x, we then shift the upper limit and the lower limit until the precision of our answer is satisfactory. Without going into details, this is how the uniroot command works. The general form of the uniroot command is
uniroot(function, lower, upper, tol)
where function is an object that contains the equation whose root we seek, lower and upper are boundaries for the root, and tol is the desired precision for the root. To create an object that contains the equation, we rewrite it so that its right‐side equals zero.\[4 x^{3}+0.40 x^{2}-2.5 \times 10^{-13} = 0 \nonumber\]Next, we enter the following code, which defines our cubic equation as a function with the name eqn.
> eqn = function(x) {4*x^3 + 0.4*x^2 - 2.5e-13}
Because our equation is a function, the uniroot command can send a value of x to eqn and receive back the equation’s corresponding value. For example, entering
> eqn(2)
passes the value x = 2 to the function and returns an answer of 33.6. Finally, we use the uniroot command to find the root.
> uniroot(eqn, lower = 0, upper = 0.1, tol = 1e-18)
Because Pb(IO3)2 is not very soluble, we expect that x is small and set the lower limit to 0. The choice for the upper limit is less critical. To ensure that the solution has sufficient precision, we set the tolerance to a value that is smaller than the expected root. Figure 6.10.3 shows the resulting output. The value $root is the equation’s root, which is in good agreement with our earlier result of \(7.91 \times 10^{-7}\) for the molar solubility of Pb(IO3)2. The other results are the equation’s value for the root, the number of iterations needed to find the root, and the root’s estimated precision.
In developing our earlier solution to this problem we began by identifying four unknowns and writing out the following four equations.\[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{F}^{-}\right]}{[\mathrm{HF}]}=6.8 \times 10^{-4} \nonumber\]\[K_{w}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{OH}^{-}\right]=1.00 \times 10^{-14} \nonumber\]\[C_{\mathrm{HF}}=[\mathrm{HF}]+\left[\mathrm{F}^{-}\right] \nonumber\]\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\left[\mathrm{OH}^{-}\right]+\left[\mathrm{F}^{-}\right] \nonumber\]Next, we made two assumptions that allowed us to simplify the problem to an equation that is easy to solve.\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\sqrt{K_{\mathrm{a}} C_{\mathrm{HF}}}=\sqrt{\left(6.8 \times 10^{-4}\right)(1.0)}=2.6 \times 10^{-2} \nonumber\]Although we did not note this at the time, without making assumptions the solution to our problem is a cubic equation\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{3}+K_{\mathrm{a}}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}- \left(K_{a} C_{\mathrm{HF}}+K_{\mathrm{w}}\right)\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]-K_{\mathrm{a}} K_{\mathrm{w}}=0 \nonumber\]that we can solve using the uniroot command.
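Here is a minimal sketch, not part of the original text, of what that uniroot call might look like for this cubic equation; the object names ka, kw, c_hf, and eqn_hf, and the choice of boundaries and tolerance, are ours.
> ka = 6.8e-4
> kw = 1.00e-14
> c_hf = 1.0
> eqn_hf = function(x) {x^3 + ka*x^2 - (ka*c_hf + kw)*x - ka*kw}
> uniroot(eqn_hf, lower = 0, upper = 1, tol = 1e-12)
The root, approximately \(2.6 \times 10^{-2}\), is the equilibrium concentration of H3O+ and corresponds to a pH of 1.59, in agreement with the approximate solution shown above.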
Of course, this assumes that we successfully complete the derivation!
Another option is to write a function to solve the four equations simultaneously. Here is the code for this function, which we will call eval.
> eval = function(pH){
+ h3o = 10^-pH
+ oh = 1e-14/h3o
+ f = h3o - oh
+ hf = (h3o * f)/6.8e-4
+ error = hf + f - 1
+ output = data.frame(pH, error)
+ print(output)
+ }
The opening curly brace, {, tells R that we intend to enter our function over several lines. When we press enter at the end of a line, R changes its prompt from > to +, indicating that we are continuing to enter the same command. The closing curly brace, }, on the last line indicates that we have completed the function. The command data.frame combines two or more objects into a table, which we then print out so that we can view the results of the calculations. You can adapt this function to other problems by changing the variable you pass to the function and the equations you include within the function.
Let’s examine more closely how this function works. The function accepts a guess for the pH and uses the definition of pH to calculate [H3O+], Kw to calculate [OH–], the charge balance equation to calculate [F–], and Ka to calculate [HF]. The function then evaluates the solution using the mass balance expression for HF, rewriting it as\[[\mathrm{HF}]+\left[\mathrm{F}^{-}\right]-C_{\mathrm{HF}}=[\mathrm{HF}]+\left[\mathrm{F}^{-}\right]-1.0=0 \nonumber\]The function then gathers together the initial guess for the pH and the error and prints them as a table.
The beauty of this function is that the object we pass to it, pH, can contain many values, which makes it easy to search for a solution. Because HF is an acid, we know that the solution is acidic. This sets an upper limit of 7 for the pH. We also know that the pH of 1.0 M HF is no smaller than 0 as this is the pH if HF were a strong acid. For our first pass, let’s enter the following code
> pH = c(7, 6, 5, 4, 3, 2, 1, 0)
> eval(pH)
which varies the pH within these limits. The result, which is shown in Figure 6.10.4 a, indicates that the pH is less than 2 and greater than 1 because it is in this interval that the error changes sign. For our second pass, let’s explore pH values between 2.0 and 1.0 to further narrow down the problem’s solution.
> pH = c(2.0, 1.9, 1.8, 1.7, 1.6, 1.5, 1.4, 1.3, 1.2, 1.1, 1.0)
> eval(pH)
The result in Figure 6.10.4 b shows that the pH must be less than 1.6 and greater than 1.5. A third pass between these limits gives the result shown in Figure 6.10.4 c, which is consistent with our earlier result of a pH of 1.59.
Using R, calculate the solubility of AgI in 0.10 M NH3 without making any assumptions. See our earlier treatment of this problem for the relevant equilibrium reactions and constants.
To solve this problem, let’s use the following function
> eval = function(pI){
+ I = 10^-pI
+ Ag = 8.3e-17/I
+ AgNH3 = I - Ag
+ NH3 = (AgNH3/(1.7e7*Ag))^0.5
+ NH4 = 0.10 - NH3 - 2*AgNH3
+ OH = 1.75e-5*NH3/NH4
+ H3O = 1e-14/OH
+ error = Ag + AgNH3 + NH4 + H3O - OH - I
+ output = data.frame(pI, error)
+ print(output)
+ }
The function accepts an initial guess for pI and calculates the concentrations of each species in solution using the definition of pI to calculate [I–], using the Ksp to obtain [Ag+], using the mass balance on iodide and silver to obtain [\(\text{Ag(NH}_3)_2^+\)], using \(\beta_2\) to calculate [NH3], using the mass balance on ammonia to find [\(\text{NH}_4^+\)], using Kb to calculate [OH–], and using Kw to calculate [H3O+].
The system’s charge balance equation provides a means for determining the calculation’s error.\[\left[\mathrm{Ag}^{+}\right]+\left[\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}\right]+\left[\mathrm{NH}_{4}^{+}\right]+\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]-\left[\mathrm{I}^{-}\right]-\left[\mathrm{OH}^{-}\right]=0 \nonumber\]The largest possible value for pI—corresponding to the smallest concentration of I– and the lowest possible solubility—occurs for a simple, saturated solution of AgI. When [Ag+] = [I–], the concentration of iodide is\[\left[\mathrm{I}^{-}\right]=\sqrt{K_{\mathrm{sp}}}=\sqrt{8.3 \times 10^{-17}}=9.1 \times 10^{-9} \nonumber\]corresponding to a pI of 8.04. The following session shows the function in action.
> pI = c(4, 5, 6, 7, 8)
> eval(pI)
  pI       error
1  4 -2.56235615
2  5 -0.16620930
3  6  0.07337101
4  7  0.09734824
5  8  0.09989073
> pI = c(5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0)
> eval(pI)
    pI       error
1  5.1 -0.11144658
2  5.2 -0.06794105
3  5.3 -0.03336475
4  5.4 -0.00568116
5  5.5  0.01571549
6  5.6  0.03308929
7  5.7  0.04685937
8  5.8  0.05779214
9  5.9  0.06647475
10 6.0  0.07337101
> pI = c(5.40, 5.41, 5.42, 5.43, 5.44, 5.45, 5.46, 5.47, 5.48, 5.49, 5.50)
> eval(pI)
     pI         error
1  5.40 -0.0056811605
2  5.41 -0.0030715484
3  5.42  0.0002310369
4  5.43 -0.0005134898
5  5.44  0.0028281878
6  5.45  0.0052370980
7  5.46  0.0074758181
8  5.47  0.0096260370
9  5.48  0.0117105498
10 5.49  0.0137387291
11 5.50  0.0157154889
The error function is closest to zero at a pI of 5.42. The concentration of I– at equilibrium, and the molar solubility of AgI, is \(3.8 \times 10^{-6}\) mol/L, which agrees with our earlier solution to this problem.
This page titled 6.10: Using Excel and R to Solve Equilibrium Problems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
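One further note on this section’s approach: if we write the calculation’s error as a function of pH, the uniroot command can take over the manual search entirely, and we never need to derive the cubic equation. The lines below are a minimal sketch, not part of the original text, for 1.0 M HF; the function name err_hf is ours and the limits of 1 and 3 come from the ladder diagram argument above.
> err_hf = function(pH) {
+ h3o = 10^-pH
+ oh = 1e-14/h3o
+ f = h3o - oh
+ hf = (h3o * f)/6.8e-4
+ hf + f - 1
+ }
> uniroot(err_hf, lower = 1, upper = 3, tol = 1e-6)$root
The root is a pH of approximately 1.59, in agreement with the searches above.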
6.11: Some Final Thoughts on Equilibrium Calculations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.11%3A_Some_Final_Thoughts_on_Equilibrium_Calculations
In this chapter we developed several tools to evaluate the composition of a system at equilibrium. These tools differ in how precisely they allow us to answer questions involving equilibrium chemistry. They also differ in how easy they are to use. An important part of having several tools to choose from is knowing when each is most useful. If you need to know whether a reaction is favorable or you need to estimate a solution’s pH, then a ladder diagram usually will meet your needs. On the other hand, if you require a more accurate or more precise estimate of a compound’s solubility, then a rigorous calculation that includes activity coefficients is necessary.
A critical part of solving an equilibrium problem is to know what equilibrium reactions to include. The need to include all relevant reactions is obvious, and at first glance this does not appear to be a significant problem—it is, however, a potential source of significant errors. The tables of equilibrium constants in this textbook, although extensive, are a small subset of all known equilibrium constants, which makes it easy to overlook an important equilibrium reaction. Commercial and freeware computational programs with extensive databases are available for equilibrium modeling, two examples of which are Visual Minteq (Windows only) and CurTiPot (for Excel); Visual Minteq can model acid–base, solubility, complexation, and redox equilibria; CurTiPot is limited to acid–base equilibria. Both programs account for the effect of activity. The R package CHNOSZ is used to model aqueous geochemistry systems and the properties of proteins.
An integrated set of tools for thermodynamic calculations in aqueous geochemistry and geobiochemistry. Functions are provided for writing balanced reactions to form species from user-selected basis species and for calculating the standard molal properties of species and reactions, including the standard Gibbs energy and equilibrium constant. Calculations of the non-equilibrium chemical affinity and equilibrium chemical activity of species can be portrayed on diagrams as a function of temperature, pressure, or activity of basis species; in two dimensions, this gives a maximum affinity or predominance diagram. The diagrams have formatted chemical formulas and axis labels, and water stability limits can be added to Eh-pH, oxygen fugacity- temperature, and other diagrams with a redox variable. The package has been developed to handle common calculations in aqueous geochemistry, such as solubility due to complexation of metal ions, mineral buffers of redox or pH, and changing the basis species across a diagram ("mosaic diagrams"). CHNOSZ also has unique capabilities for comparing the compositional and thermodynamic properties of different proteins.
Finally, a consideration of equilibrium chemistry can only help us decide if a reaction is favorable; however, it does not guarantee that the reaction occurs. How fast a reaction approaches its equilibrium position does not depend on the reaction’s equilibrium constant because the rate of a chemical reaction is a kinetic, not a thermodynamic, phenomenon. We will consider kinetic effects and their application in analytical chemistry in Chapter 13.
This page titled 6.11: Some Final Thoughts on Equilibrium Calculations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
6.12: Problems
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.12%3A_Problems
1. Write equilibrium constant expressions for the following reactions. What is the value for each reaction’s equilibrium constant?(a) \(\mathrm{NH}_{3}(a q)+\mathrm{H}_{3} \mathrm{O}^{+}(a q) \rightleftharpoons \mathrm{NH}_{4}^{+}(a q)\)(b) \(\operatorname{PbI}_{2}(s)+\mathrm{S}^{2-}(a q) \rightleftharpoons \operatorname{PbS}(s)+2 \mathrm{I}^{-}(a q)\)(c) \(\operatorname{CdY}^{2-}(a q)+4 \mathrm{CN}^{-}(a q) \rightleftharpoons \mathrm{Cd}(\mathrm{CN})_{4}^{2-}(a q)+\mathrm{Y}^{4-}(a q)\); note: Y is the shorthand symbol for EDTA(d) \(\mathrm{AgCl}(s)+2 \mathrm{NH}_{3}(a q)\rightleftharpoons\mathrm{Ag}\left(\mathrm{NH}_{3}\right)_{2}^{+}(a q)+\mathrm{Cl}^{-}(a q)\)(e) \(\mathrm{BaCO}_{3}(s)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q)\rightleftharpoons \mathrm{Ba}^{2+}(a q)+\mathrm{H}_{2} \mathrm{CO}_{3}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l)\)
2. Use a ladder diagram to explain why the first reaction is favorable and why the second reaction is unfavorable.\[\mathrm{H}_{3} \mathrm{PO}_{4}(a q)+\mathrm{F}^{-}(a q)\rightleftharpoons\mathrm{HF}(a q)+\mathrm{H}_{2} \mathrm{PO}_{4}^{-}(a q) \nonumber\]\[\mathrm{H}_{3} \mathrm{PO}_{4}(a q)+2 \mathrm{F}^{-}(a q)\rightleftharpoons2 \mathrm{HF}(a q)+\mathrm{HPO}_{4}^{2-}(a q) \nonumber\]Determine the equilibrium constant for these reactions and verify that they are consistent with your ladder diagram.
3. Calculate the potential for the following redox reaction for a solution in which [Fe3+] = 0.050 M, [Fe2+] = 0.030 M, [Sn2+] = 0.015 M and [Sn4+] = 0.020 M.\[2 \mathrm{Fe}^{3+}(a q)+\mathrm{Sn}^{2+}(a q)\rightleftharpoons\mathrm{Sn}^{4+}(a q)+2 \mathrm{Fe}^{2+}(a q) \nonumber\]
4. Calculate the standard state potential and the equilibrium constant for each of the following redox reactions. Assume that [H3O+] is 1.0 M for an acidic solution and that [OH–] is 1.0 M for a basic solution. Note that these reactions are not balanced. Reactions (a) and (b) are in acidic solution; reaction (c) is in a basic solution.(a) \(\mathrm{MnO}_{4}^{-}(a q)+\mathrm{H}_{2} \mathrm{SO}_{3}(a q)\rightleftharpoons \mathrm{Mn}^{2+}(a q)+\mathrm{SO}_{4}^{2-}(a q)\)(b) \(\mathrm{IO}_{3}^{-}(a q)+\mathrm{I}^{-}(a q) \rightleftharpoons \mathrm{I}_{2}(a q)\)(c) \(\mathrm{ClO}^{-}(a q)+\mathrm{I}^{-}(a q) \rightleftharpoons \mathrm{IO}_{3}^{-}(a q)+\mathrm{Cl}^{-}(a q)\)
5. One analytical method for determining the concentration of sulfur is to oxidize it to \(\text{SO}_4^{2-}\) and then precipitate it as BaSO4 by adding BaCl2. The mass of the resulting precipitate is proportional to the amount of sulfur in the original sample. The accuracy of this method depends on the solubility of BaSO4, the reaction for which is shown here.\[\mathrm{BaSO}_{4}(s) \rightleftharpoons \mathrm{Ba}^{2+}(a q)+\mathrm{SO}_{4}^{2-}(a q) \nonumber\]For each of the following, predict the effect on the solubility of BaSO4: (a) decreasing the solution’s pH; (b) adding more BaCl2; and (c) increasing the solution’s volume by adding H2O.
6. Write a charge balance equation and one or more mass balance equations for the following solutions.(a) 0.10 M NaCl(b) 0.10 M HCl(c) 0.10 M HF(d) 0.10 M NaH2PO4(e) MgCO3 (saturated solution)(f) 0.10 M \(\text{Ag(CN)}_2^-\) (prepared using AgNO3 and KCN)(g) 0.10 M HCl and 0.050 M NaNO2
7. Use the systematic approach to equilibrium problems to calculate the pH of the following solutions.
Be sure to state and justify any assumptions you make in solving the problems.(a) 0.050 M HClO4(b) \(1.00 \times 10^{-7}\) M HCl(c) 0.025 M HClO(d) 0.010 M HCOOH(e) 0.050 M Ba(OH)2(f) 0.010 M C5H5N
8. Construct ladder diagrams for the following diprotic weak acids (H2A) and estimate the pH of 0.10 M solutions of H2A, NaHA, and Na2A.(a) maleic acid(b) malonic acid(c) succinic acid
9. Use the systematic approach to solving equilibrium problems to calculate the pH of (a) malonic acid, H2A; (b) sodium hydrogen malonate, NaHA; and (c) sodium malonate, Na2A. Be sure to state and justify any assumptions you make in solving the problems.
10. Ignoring activity effects, calculate the molar solubility of Hg2Br2 in the following solutions. Be sure to state and justify any assumption you make in solving the problems.(a) a saturated solution of Hg2Br2(b) 0.025 M Hg2(NO3)2 saturated with Hg2Br2(c) 0.050 M NaBr saturated with Hg2Br2
11. The solubility of CaF2 is controlled by the following two reactions\[\mathrm{CaF}_{2}(s) \rightleftharpoons \mathrm{Ca}^{2+}(a q)+2 \mathrm{F}^{-}(a q) \nonumber\]\[\mathrm{HF}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{F}^{-}(a q) \nonumber\]Calculate the molar solubility of CaF2 in a solution that is buffered to a pH of 7.00. Use a ladder diagram to help simplify the calculations. How would your approach to this problem change if the pH is buffered to 2.00? What is the solubility of CaF2 at this pH? Be sure to state and justify any assumptions you make in solving the problems.
12. Calculate the molar solubility of Mg(OH)2 in a solution buffered to a pH of 7.00. How does this compare to its solubility in unbuffered deionized water with an initial pH of 7.00? Be sure to state and justify any assumptions you make in solving the problem.
13. Calculate the solubility of Ag3PO4 in a solution buffered to a pH of 9.00. Be sure to state and justify any assumptions you make in solving the problem.
14. Determine the equilibrium composition of a saturated solution of AgCl. Assume that the solubility of AgCl is influenced by the following reactions\[\mathrm{AgCl}(s) \rightleftharpoons \mathrm{Ag}^{+}(a q)+\mathrm{Cl}^{-}(a q) \nonumber\]\[\operatorname{Ag}^{+}(a q)+\mathrm{Cl}^{-}(a q) \rightleftharpoons \operatorname{AgCl}(a q) \nonumber\]\[\operatorname{AgCl}(a q)+\mathrm{Cl}^{-}(a q) \rightleftharpoons \operatorname{AgCl}_{2}^-(a q) \nonumber\]Be sure to state and justify any assumptions you make in solving the problem.
15. Calculate the ionic strength of the following solutions(a) 0.050 M NaCl(b) 0.025 M CuCl2(c) 0.10 M Na2SO4
16. Repeat the calculations in Problem 10, this time correcting for the effect of ionic strength. Be sure to state and justify any assumptions you make in solving the problems.
17. Over what pH range do you expect Ca3(PO4)2 to have its minimum solubility?
18. Construct ladder diagrams for the following systems, each of which consists of two or three equilibrium reactions. Using your ladder diagrams, identify all reactions that are likely to occur in each system.(a) HF and H3PO4(b) \(\text{Ag(CN)}_2^-\), \(\text{Ni(CN)}_4^{2-}\), and \(\text{Fe(CN)}_6^{3-}\)(c) \(\text{Cr}_2\text{O}_7^{2-}/\text{Cr}^{3+}\) and Fe3+/Fe2+
19. Calculate the pH of the following acid–base buffers.
Be sure to state and justify any assumptions you make in solving the problems.(a) 100.0 mL of 0.025 M formic acid and 0.015 M sodium formate(b) 50.00 mL of 0.12 M NH3 and 3.50 mL of 1.0 M HCl(c) 5.00 g of Na2CO3 and 5.00 g of NaHCO3 diluted to 0.100 L
20. Calculate the pH of the buffers in Problem 19 after adding 5.0 mL of 0.10 M HCl. Be sure to state and justify any assumptions you make in solving the problems.
21. Calculate the pH of the buffers in Problem 19 after adding 5.0 mL of 0.10 M NaOH. Be sure to state and justify any assumptions you make in solving the problems.
22. Consider the following hypothetical complexation reaction between a metal, M, and a ligand, L\[\mathrm{M}(a q)+\mathrm{L}(a q) \rightleftharpoons \mathrm{ML}(a q) \nonumber\]for which the formation constant is \(1.5 \times 10^8\). (a) Derive an equation similar to the Henderson–Hasselbalch equation that relates pM to the concentrations of L and ML. (b) What is the pM for a solution that contains 0.010 mol of M and 0.020 mol of L? (c) What is pM if you add 0.002 mol of M to this solution? Be sure to state and justify any assumptions you make in solving the problem.
23. A redox buffer contains an oxidizing agent and its conjugate reducing agent. Calculate the potential of a solution that contains 0.010 mol of Fe3+ and 0.015 mol of Fe2+. What is the potential if you add sufficient oxidizing agent to convert 0.002 mol of Fe2+ to Fe3+? Be sure to state and justify any assumptions you make in solving the problem.
24. Use either Excel or R to solve the following problems. For these problems, make no simplifying assumptions.(a) the solubility of CaF2 in deionized water(b) the solubility of AgCl in deionized water(c) the pH of 0.10 M fumaric acid
25. Derive equation 6.10.1 for the rigorous solution to the pH of 0.1 M HF.
This page titled 6.12: Problems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
6.13: Additional Resources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.13%3A_Additional_Resources
The following experiments involve the experimental determination of equilibrium constants, the characterization of buffers, and, in some cases, demonstrations of the importance of activity effects.
A nice discussion of Berthollet’s discovery of the reversibility of reactions is found in
The following texts provide additional coverage of equilibrium chemistry.
The following papers discuss a variety of general aspects of equilibrium chemistry.
Collected here are papers that discuss a variety of approaches to solving equilibrium problems.
Additional historical background on the development of the Henderson-Hasselbalch equation is provided by the following papers.
A simulation is a useful tool for helping students gain an intuitive understanding of a topic. Gathered here are some simulations for teaching equilibrium chemistry.
The following papers provide additional resources on ionic strength, activity, and the effect of ionic strength and activity on equilibrium reactions and pH.
For a contrarian’s view of equilibrium chemistry, please see the following papers.
This page titled 6.13: Additional Resources is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
6.14: Chapter Summary and Key Terms
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/06%3A_Equilibrium_Chemistry/6.14%3A_Chapter_Summary_and_Key_Terms
Analytical chemistry is more than a collection of techniques; it is the application of chemistry to the analysis of samples. As we will see in later chapters, almost all analytical methods use chemical reactivity to accomplish one or more of the following: dissolve a sample, separate analytes from interferents, transform an analyte into a more useful form, or provide a signal. Equilibrium chemistry and thermodynamics provide us with a means for predicting which reactions are likely to be favorable.
The most important types of reactions are precipitation reactions, acid–base reactions, metal‐ligand complexation reactions, and oxidation–reduction reactions. In a precipitation reaction two or more soluble species combine to produce an insoluble precipitate, which we characterize using a solubility product.
An acid–base reaction occurs when an acid donates a proton to a base. The reaction’s equilibrium position is described using either an acid dissociation constant, Ka, or a base dissociation constant, Kb. The product of Ka and Kb for an acid and its conjugate base is the dissociation constant for water, Kw.
When a ligand donates one or more pairs of electrons to a metal ion, the result is a metal–ligand complex. Two types of equilibrium constants are used to describe metal–ligand complexation: stepwise formation constants and overall formation constants. There are two stepwise formation constants for the metal–ligand complex ML2, each of which describes the addition of one ligand; thus, K1 represents the addition of the first ligand to M, and K2 represents the addition of the second ligand to ML. Alternatively, we can use a cumulative, or overall formation constant, \(\beta_2\), for the metal–ligand complex ML2, in which both ligands are added to M.
In an oxidation–reduction reaction, one of the reactants is oxidized and another reactant is reduced. Instead of using an equilibrium constant to characterize an oxidation–reduction reaction, we use the potential, positive values of which indicate a favorable reaction. The Nernst equation relates this potential to the concentrations of reactants and products.
Le Châtelier’s principle provides a means for predicting how a system at equilibrium responds to a change in conditions. If we apply a stress to a system at equilibrium—by adding a reactant or product, by adding a reagent that reacts with a reactant or product, or by changing the volume—the system will respond by moving in the direction that relieves the stress.
You should be able to describe a system at equilibrium both qualitatively and quantitatively. You can develop a rigorous solution to an equilibrium problem by combining equilibrium constant expressions with appropriate mass balance and charge balance equations. Using this systematic approach, you can solve some quite complicated equilibrium problems. If a less rigorous answer is acceptable, then a ladder diagram may help you estimate the equilibrium system’s composition.
Solutions that contain relatively similar amounts of a weak acid and its conjugate base experience only a small change in pH upon the addition of a small amount of strong acid or of strong base. We call these solutions buffers. A buffer can also be formed using a metal and its metal–ligand complex, or an oxidizing agent and its conjugate reducing agent.
Both the systematic approach to solving equilibrium problems and ladder diagrams are useful tools for characterizing buffers.
A quantitative solution to an equilibrium problem may give an answer that does not agree with experimental results if we do not consider the effect of ionic strength. The true, thermodynamic equilibrium constant is a function of activities, a, not concentrations. A species’ activity is related to its molar concentration by an activity coefficient, \(\gamma\). Activity coefficients are estimated using the extended Debye‐Hückel equation, making possible a more rigorous treatment of equilibria.
Key terms: acid, acid dissociation constant, activity, activity coefficient, amphiprotic, base, base dissociation constant, buffer, buffer capacity, charge balance equation, common ion effect, cumulative formation constant, dissociation constant, enthalpy, entropy, equilibrium, equilibrium constant, extended Debye‐Hückel equation, formation constant, Gibbs free energy, half‐reaction, Henderson–Hasselbalch equation, ionic strength, ladder diagram, Le Châtelier’s principle, ligand, mass balance equation, metal–ligand complex, method of successive approximations, monoprotic, Nernst equation, oxidation, oxidizing agent, pH scale, polyprotic, potential, precipitate, redox reaction, reducing agent, reduction, solubility product, standard potential, standard‐state, steady state, stepwise formation constant.
This page titled 6.14: Chapter Summary and Key Terms is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.1: The Importance of Sampling
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.01%3A_The_Importance_of_Sampling
When a manufacturer lists a chemical as ACS Reagent Grade, they must demonstrate that it conforms to specifications set by the American Chemical Society (ACS). For example, the ACS specifications for commercial NaBr require that the concentration of iron is less than 5 ppm. To verify that a production lot meets this standard, the manufacturer collects and analyzes several samples, reporting the average result on the product’s label (Figure 7.1.1 ).If the individual samples do not represent accurately the population from which they are drawn—a population that we call the target population—then even a careful analysis will yield an inaccurate result. Extrapolating a result from a sample to its target population always introduces a determinate sampling error. To minimize this determinate sampling error, we must collect the right sample.Even if we collect the right sample, indeterminate sampling errors may limit the usefulness of our analysis. Equation \ref{7.1} shows that a confidence interval about the mean, \(\overline{X}\) , is proportional to the standard deviation, s, of the analysis\[\mu=\overline{X} \pm \frac{t s}{\sqrt{n}} \label{7.1}\]where n is the number of samples and t is a statistical factor that accounts for the probability that the confidence interval contains the true value, \(\mu\).Equation \ref{7.1} should be familiar to you. See Chapter 4 to review confidence intervals and see Appendix 4 for values of t.Each step of an analysis contributes random error that affects the overall standard deviation. For convenience, let’s divide an analysis into two steps—collecting the samples and analyzing the samples—each of which is characterized by a variance. Using a propagation of uncertainty, the relationship between the overall variance, s2, and the variances due to sampling, \(s_{samp}^2\), and the variance due to the analytical method, \(s_{meth}^2\), is\[s^{2}=s_{samp}^{2}+s_{meth}^{2} \label{7.2}\]Although Equation \ref{7.1} is written in terms of a standard deviation, s, a propagation of uncertainty is written in terms of variances, s2. In this section, and those that follow, we will use both standard deviations and variances to discuss sampling uncertainty. For a review of the propagation of uncertainty, see Chapter 4.3 and Appendix 2.Equation \ref{7.2} shows that the overall variance for an analysis is limited by either the analytical method or sampling, or by both. Unfortunately, analysts often try to minimize the overall variance by improving only the method’s precision. This is a futile effort, however, if the standard deviation for sampling is more than three times greater than that for the method [Youden, Y. J. J. Assoc. Off. Anal. Chem. 1981, 50, 1007–1013]. Figure 7.1.2 shows how the ratio ssamp/smeth affects the method’s contribution to the overall variance. As shown by the dashed line, if the sample’s standard deviation is \(3 \times\) the method’s standard deviation, then indeterminate method errors explain only 10% of the overall variance. If indeterminate sampling errors are significant, decreasing smeth provides only limited improvement in the overall precision.A quantitative analysis gives a mean concentration of 12.6 ppm for an analyte. The method’s standard deviation is 1.1 ppm and the standard deviation for sampling is 2.1 ppm. (a) What is the overall variance for the analysis? (b) By how much does the overall variance change if we improve smeth by 10% to 0.99 ppm? 
(c) By how much does the overall variance change if we improve ssamp by 10% to 1.9 ppm?Solution(a) The overall variance is\[s^{2}=s_{samp}^{2}+s_{meth}^{2}=(2.1 \ \mathrm{ppm})^{2}+(1.1 \ \mathrm{ppm})^{2}=5.6 \ \mathrm{ppm}^{2} \nonumber\](b) Improving the method’s standard deviation changes the overall variance to\[s^{2}=(2.1 \ \mathrm{ppm})^{2}+(0.99 \ \mathrm{ppm})^{2}=5.4 \ \mathrm{ppm}^{2} \nonumber\]Improving the method’s standard deviation by 10% improves the overall variance by approximately 4%.(c) Changing the standard deviation for sampling\[s^{2}=(1.9 \ \mathrm{ppm})^{2}+(1.1 \ \mathrm{ppm})^{2}=4.8 \ \mathrm{ppm}^{2} \nonumber\]improves the overall variance by almost 15%. As expected, because ssamp is larger than smeth, we achieve a bigger improvement in the overall variance when we focus our attention on sampling problems.Suppose you wish to reduce the overall variance in Example 7.1.1 to 5.0 ppm2. If you focus on the method, by what percentage do you need to reduce smeth? If you focus on the sampling, by what percentage do you need to reduce ssamp?To reduce the overall variance by improving the method’s standard deviation requires that\[s^{2}=5.00 \ \mathrm{ppm}^{2} = s_{samp}^{2}+s_{m e t h}^{2} = (2.1 \mathrm{ppm})^{2}+s_{m e t h}^{2} \nonumber\]Solving for smeth gives its value as 0.768 ppm. Relative to its original value of 1.1 ppm, this is a reduction of \(3.0 \times 10^1\)%. To reduce the overall variance by improving the standard deviation for sampling requires that\[s^{2}=5.00 \ \mathrm{ppm}^{2} = s_{samp}^{2}+s_{meth}^{2} = s_{samp}^{2}+(1.1 \ \mathrm{ppm})^{2} \nonumber\]Solving for ssamp gives its value as 1.95 ppm. Relative to its original value of 2.1 ppm, this is reduction of 7.1%.To determine which step has the greatest effect on the overall variance, we need to measure both ssamp and smeth. The analysis of replicate samples provides an estimate of the overall variance. To determine the method’s variance we must analyze samples under conditions where we can assume that the sampling variance is negligible; the sampling variance is then determined by difference.There are several ways to minimize the standard deviation for sampling. Here are two examples. One approach is to use a standard reference material (SRM) that has been carefully prepared to minimize indeterminate sampling errors. When the sample is homogeneous—as is the case, for example, with an aqueous sample—then another useful approach is to conduct replicate analyses on a single sample.The following data were collected as part of a study to determine the effect of sampling variance on the analysis of drug-animal feed formulations [Fricke, G. H.; Mischler, P. G.; Staffieri, F. P.; Houmyer, C. L. Anal. Chem. 1987, 59, 1213– 1217].The data on the left were obtained under conditions where both ssamp and smeth contribute to the overall variance. The data on the right were obtained under conditions where ssamp is insignificant. Determine the overall variance, and the standard deviations due to sampling and the analytical method. To which source of indeterminate error—sampling or the method—should we turn our attention if we want to improve the precision of the analysis?SolutionUsing the data on the left, the overall variance, s2, is \(4.71 \times 10^{-7}\). To find the method’s contribution to the overall variance, \(s_{meth}^2\), we use the data on the right, obtaining a value of \(7.00 \times 10^{-8}\). 
The variance due to sampling, \(s_{samp}^2\), is\[s_{samp}^{2}=s^{2}-s_{meth}^{2} = 4.71 \times 10^{-7}-7.00 \times 10^{-8}=4.01 \times 10^{-7} \nonumber\]Converting variances to standard deviations gives ssamp as \(6.33 \times 10^{-4}\) and smeth as \(2.65 \times 10^{-4}\). Because ssamp is more than twice as large as smeth, improving the precision of the sampling process will have the greatest impact on the overall precision.
A polymer’s density provides a measure of its crystallinity. The standard deviation for the determination of density using a single sample of a polymer is \(1.96 \times 10^{-3}\) g/cm3. The standard deviation when using different samples of the polymer is \(3.65 \times 10^{-2}\) g/cm3. Determine the standard deviations due to sampling and to the analytical method.
The analytical method’s standard deviation is \(1.96 \times 10^{-3}\) g/cm3 as this is the standard deviation for the analysis of a single sample of the polymer. The sampling variance is\[s_{samp}^{2}=s^{2}-s_{meth}^{2}= \left(3.65 \times 10^{-2}\right)^{2}-\left(1.96 \times 10^{-3}\right)^{2}=1.33 \times 10^{-3} \nonumber\]Converting the variance to a standard deviation gives ssamp as \(3.64 \times 10^{-2}\) g/cm3.
This page titled 7.1: The Importance of Sampling is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
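The arithmetic in Example 7.1.1 and Exercise 7.1.2 takes only a few lines of R. The following is a small sketch, not part of the original examples; the variable names are ours and the standard deviations are the values given above.
> s_samp = 2.1                             # standard deviation for sampling, in ppm
> s_meth = 1.1                             # standard deviation for the method, in ppm
> s_samp^2 + s_meth^2                      # overall variance, approximately 5.6 ppm^2
> s_samp^2 + 0.99^2                        # after a 10% improvement in s_meth, approximately 5.4 ppm^2
> 1.9^2 + s_meth^2                         # after a 10% improvement in s_samp, approximately 4.8 ppm^2
> sqrt((3.65e-2)^2 - (1.96e-3)^2)          # sampling standard deviation in Exercise 7.1.2, approximately 3.64e-2 g/cm^3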
7.2: Designing a Sampling Plan
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.02%3A_Designing_a_Sampling_Plan
A sampling plan must support the goals of an analysis. For example, a material scientist interested in characterizing a metal’s surface chemistry is more likely to choose a freshly exposed surface, created by cleaving the sample under vacuum, than a surface previously exposed to the atmosphere. In a qualitative analysis, a sample need not be identical to the original substance provided there is sufficient analyte present to ensure its detection. In fact, if the goal of an analysis is to identify a trace-level component, it may be desirable to discriminate against major components when collecting samples.For an interesting discussion of the importance of a sampling plan, see Buger, J. et al. “Do Scientists and Fishermen Collect the Same Size Fish? Possible Implications for Exposure Assessment,” Environ. Res. 2006, 101, 34–41.For a quantitative analysis, the sample’s composition must represent accurately the target population, a requirement that necessitates a careful sampling plan. Among the issues we need to consider are these five questions.A sampling error occurs whenever a sample’s composition is not identical to its target population. If the target population is homogeneous, then we can collect individual samples without giving consideration to where we collect sample. Unfortunately, in most situations the target population is heterogeneous and attention to where we collect samples is important. For example, due to settling a medication available as an oral suspension may have a higher concentration of its active ingredients at the bottom of the container. The composition of a clinical sample, such as blood or urine, may depend on when it is collected. A patient’s blood glucose level, for instance, will change in response to eating and exercise. Other target populations show both a spatial and a temporal heterogeneity. The concentration of dissolved O2 in a lake is heterogeneous due both to a change in seasons and to point sources of pollution.The composition of a homogeneous target population is the same regardless of where we sample, when we sample, or the size of our sample. For a heterogeneous target population, the composition is not the same at different locations, at different times, or for different sample sizes.If the analyte’s distribution within the target population is a concern, then our sampling plan must take this into account. When feasible, homogenizing the target population is a simple solution, although this often is impracticable. In addition, homogenizing a sample destroys information about the analyte’s spatial or temporal distribution within the target population, information that may be of importance.The ideal sampling plan provides an unbiased estimate of the target population’s properties. A random sampling is the easiest way to satisfy this requirement [Cohen, R. D. J. Chem. Educ. 1991, 68, 902–903]. Despite its apparent simplicity, a truly random sample is difficult to collect. Haphazard sampling, in which samples are collected without a sampling plan, is not random and may reflect an analyst’s unintentional biases.Here is a simple method to ensure that we collect random samples. First, we divide the target population into equal units and assign to each unit a unique number. Then, we use a random number table to select the units to sample. Example 7.2.1 provides an illustrative example. 
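A computer’s random number generator can take the place of a printed random number table. The lines below are a minimal sketch in R, not part of the original text, of how we might make the kind of selection needed in Example 7.2.1, that is, ten 1 cm \(\times\) 1 cm squares out of 10 000; labeling each square with a four-digit number, the choice of seed, and the variable names are our own.
> set.seed(271)                        # any value works; setting a seed makes the selection reproducible
> squares = sample(0:9999, size = 10)  # ten of the 10 000 squares, selected without replacement
> rows = squares %/% 100               # first two digits give the row (0-99)
> cols = squares %% 100                # last two digits give the column (0-99)
> data.frame(rows, cols)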
Appendix 14 provides a random number table that you can use to design a sampling plan.
To analyze a polymer’s tensile strength, individual samples of the polymer are held between two clamps and stretched. To evaluate a production lot, the manufacturer’s sampling plan calls for collecting ten 1 cm \(\times\) 1 cm samples from a 100 cm \(\times\) 100 cm polymer sheet. Explain how we can use a random number table to ensure that we collect these samples at random.
Solution
As shown by the grid below, we divide the polymer sheet into 10 000 1 cm \(\times\) 1 cm squares, each identified by its row number and its column number, with numbers running from 0 to 99. For example, the blue square is in row 98 and in column 1. To select ten squares at random, we enter the random number table in Appendix 14 at an arbitrary point and let the entry’s last four digits represent the row number and the column number for the first sample. We then move through the table in a predetermined fashion, selecting random numbers until we have 10 samples. For our first sample, let’s use the second entry in the third column of Appendix 14, which is 76831. The first sample, therefore, is row 68 and column 31. If we proceed by moving down the third column, we select the remaining nine samples in the same way.
When we collect a random sample we make no assumptions about the target population, which makes this the least biased approach to sampling. On the other hand, a random sample often requires more time and expense than other sampling strategies because we need to collect a greater number of samples to ensure that we adequately sample the target population, particularly when that population is heterogeneous [Borgman, L. E.; Quimby, W. F. in Keith, L. H., ed. Principles of Environmental Sampling, American Chemical Society: Washington, D. C., 1988, 25–43].
The opposite of random sampling is selective, or judgmental sampling, in which we use prior information about the target population to help guide our selection of samples. Judgmental sampling is more biased than random sampling, but requires fewer samples. Judgmental sampling is useful if we wish to limit the number of independent variables that might affect our results. For example, if we are studying the bioaccumulation of PCB’s in fish, we may choose to exclude fish that are too small, too young, or that appear diseased.
Random sampling and judgmental sampling represent extremes in bias and in the number of samples needed to characterize the target population. Systematic sampling falls in between these extremes. In systematic sampling we sample the target population at regular intervals in space or time. Figure 7.2.1 shows an aerial photo of the Great Salt Lake in Utah. A railroad line divides the lake into two sections that have different chemical compositions. To compare the lake’s two sections—and to evaluate spatial variations within each section—we use a two-dimensional grid to define sampling locations, collecting samples at the center of each location. When a population is heterogeneous in time, as is common in clinical and environmental studies, then we might choose to collect samples at regular intervals in time.
If a target population’s properties have a periodic trend, a systematic sampling will lead to a significant bias if our sampling frequency is too small. This is a common problem when sampling electronic signals where the problem is known as aliasing. Consider, for example, a signal that is a simple sine wave.
Figure 7.2.2a shows how an insufficient sampling frequency underestimates the signal's true frequency. The apparent signal, shown by the dashed red line that passes through the five data points, is significantly different from the true signal shown by the solid blue line.

According to the Nyquist theorem, to determine accurately the frequency of a periodic signal, we must sample the signal at least twice during each cycle or period. If we collect samples at an interval of \(\Delta t\), then the highest frequency we can monitor accurately is \((2 \Delta t)^{-1}\). For example, if we collect one sample each hour, then the highest frequency we can monitor is (2 \(\times\) 1 hr)\(^{-1}\), or 0.5 hr\(^{-1}\), which corresponds to a period of 2 hr. If our signal's period is less than 2 hours (a frequency of more than 0.5 hr\(^{-1}\)), then we must use a faster sampling rate. Ideally, we use a sampling rate that is at least 3–4 times greater than the highest frequency signal of interest. If our signal has a period of one hour, then we should collect a new sample every 15–20 minutes.

Combinations of the three primary approaches to sampling also are possible [Keith, L. H. Environ. Sci. Technol. 1990, 24, 610–617]. One such combination is systematic–judgmental sampling, in which we use prior knowledge about a system to guide a systematic sampling plan. For example, when monitoring waste leaching from a landfill, we expect the plume to move in the same direction as the flow of groundwater—this helps focus our sampling, saving money and time. The systematic–judgmental sampling plan in Figure 7.2.3 includes a rectangular grid for most of the samples and linear transects to explore the plume's limits [Flatman, G. T.; Englund, E. J.; Yfantis, A. A. in Keith, L. H., ed. Principles of Environmental Sampling, American Chemical Society: Washington, D. C., 1988, 73–84].

Another combination of the three primary approaches to sampling is judgmental–random, or stratified, sampling. Many target populations consist of distinct units, or strata. For example, suppose we are studying particulate Pb in urban air. Because particulates come in a range of sizes—some visible and some microscopic—and come from many sources—such as road dust, diesel soot, and fly ash, to name a few—we can subdivide the target population by size or by source. If we choose a random sampling plan, then we collect samples without considering the different strata, which may bias the sample toward larger particulates. In a stratified sampling we divide the target population into strata and collect random samples from within each stratum. After we analyze the samples from each stratum, we pool their respective means to give an overall mean for the target population. The advantage of stratified sampling is that individual strata usually are more homogeneous than the target population. The overall sampling variance for stratified sampling always is at least as good as, and often is better than, that obtained by simple random sampling. Because a stratified sampling requires that we collect and analyze samples from several strata, it often requires more time and money.

One additional method of sampling deserves mention. In convenience sampling we select sample sites using criteria other than minimizing sampling error and sampling variance. In a survey of rural groundwater quality, for example, we can choose to drill wells at sites selected at random or we can choose to take advantage of existing wells; the latter usually is the preferred choice.
In this case cost, expedience, and accessibility are more important than ensuring a random sample.

Having determined from where to collect samples, the next step in designing a sampling plan is to decide on the type of sample to collect. There are three common methods for obtaining samples: grab sampling, composite sampling, and in situ sampling.

The most common type of sample is a grab sample in which we collect a portion of the target population at a specific time or location, providing a "snapshot" of the target population. If our target population is homogeneous, a series of random grab samples allows us to establish its properties. For a heterogeneous target population, systematic grab sampling allows us to characterize how its properties change over time and/or space.

A composite sample is a set of grab samples that we combine into a single sample before analysis. Because information is lost when we combine individual samples, normally we analyze each grab sample separately. In some situations, however, there are advantages to working with a composite sample.

One situation where composite sampling is appropriate is when our interest is in the target population's average composition over time or space. For example, wastewater treatment plants must monitor and report the average daily composition of the treated water they release to the environment. The analyst can collect and analyze a set of individual grab samples and report the average result, or she can save time and money by combining the grab samples into a single composite sample and report the result of her analysis of the composite sample.

Composite sampling also is useful when a single sample does not supply sufficient material for the analysis. For example, analytical methods for the quantitative analysis of PCBs in fish often require as much as 50 g of tissue, an amount that may be difficult to obtain from a single fish. Combining and homogenizing tissue samples from several fish makes it easy to obtain the necessary 50-g sample.

A significant disadvantage of grab samples and composite samples is that we cannot use them to monitor continuously a time-dependent change in the target population. In situ sampling, in which we insert an analytical sensor into the target population, allows us to monitor the target population without removing individual grab samples. For example, we can monitor the pH of a solution in an industrial production line by immersing a pH electrode in the solution's flow.

A study of the relationship between traffic density and the concentrations of Pb, Cd, and Zn in roadside soils uses the following sampling plan [Nabulo, G.; Oryem-Origa, H.; Diamond, M. Environ. Res. 2006, 101, 42–52]. Samples of surface soil (0–10 cm) are collected at distances of 1, 5, 10, 20, and 30 m from the road. At each distance, 10 samples are taken from different locations and mixed to form a single sample. What type of sampling plan is this? Explain why this is an appropriate sampling plan.

Solution

This is a systematic–judgmental sampling plan using composite samples. These are good choices given the goals of the study. Automobile emissions release particulates that contain elevated concentrations of Pb, Cd, and Zn—this study was conducted in Uganda where leaded gasoline was still in use—which settle out on the surrounding roadside soils as "dry rain." Samples collected near the road and samples collected at fixed distances from the road provide sufficient data for the study, while minimizing the total number of samples.
Combining samples from the same distance into a single, composite sample has the advantage of decreasing sampling uncertainty.

To minimize sampling errors, samples must be of an appropriate size. If a sample is too small its composition may differ substantially from that of the target population, which introduces a sampling error. Samples that are too large, however, require more time and money to collect and analyze, without providing a significant improvement in the sampling error.

Let's assume our target population is a homogeneous mixture of two types of particles. Particles of type A contain a fixed concentration of analyte, and particles of type B are analyte-free. Samples from this target population follow a binomial distribution. If we collect a sample of n particles, then the expected number of particles that contains analyte, nA, is
\[n_{A}=n p \nonumber\]
where p is the probability of selecting a particle of type A. The standard deviation for sampling is
\[s_{samp}=\sqrt{n p(1-p)} \label{7.1}\]
To calculate the relative standard deviation for sampling, \(\left( s_{samp} \right)_{rel}\), we divide Equation \ref{7.1} by nA, obtaining
\[\left(s_{samp}\right)_{rel}=\frac{\sqrt{n p(1-p)}}{n p} \nonumber\]
Solving for n allows us to calculate the number of particles we need to provide a desired relative standard deviation for sampling.
\[n=\frac{1-p}{p} \times \frac{1}{\left(s_{samp}\right)_{rel}^{2}} \label{7.2}\]

Suppose we are analyzing a soil where the particles that contain analyte represent only \(1 \times 10^{-7}\)% of the population. How many particles must we collect to give a percent relative standard deviation for sampling of 1%?

Solution

Since the particles of interest account for \(1 \times 10^{-7}\)% of all particles, the probability, p, of selecting one of these particles is \(1 \times 10^{-9}\). Substituting into Equation \ref{7.2} gives
\[n=\frac{1-\left(1 \times 10^{-9}\right)}{1 \times 10^{-9}} \times \frac{1}{(0.01)^{2}}=1 \times 10^{13} \nonumber\]
To obtain a relative standard deviation for sampling of 1%, we need to collect \(1 \times 10^{13}\) particles.

Depending on the particle size, a sample of \(10^{13}\) particles may be fairly large. Suppose this is equivalent to a mass of 80 g. Working with a sample this large clearly is not practical. Does this mean we must work with a smaller sample and accept a larger relative standard deviation for sampling? Fortunately the answer is no. An important feature of Equation \ref{7.2} is that the relative standard deviation for sampling is a function of the number of particles instead of their combined mass. If we crush and grind the particles to make them smaller, then a sample of \(10^{13}\) particles will have a smaller mass. If we assume that a particle is spherical, then its mass is proportional to the cube of its radius.
\[\operatorname{mass} \propto r^{3} \nonumber\]
If we decrease a particle's radius by a factor of 2, for example, then we decrease its mass by a factor of \(2^3\), or 8. This assumes, of course, that the process of crushing and grinding particles does not change the composition of the particles.

Assume that a sample of \(10^{13}\) particles from Example 7.2.3 weighs 80 g and that the particles are spherical.
By how much must we reduce a particle's radius if we wish to work with 0.6-g samples?

Solution

To reduce the sample's mass from 80 g to 0.6 g, we must change its mass by a factor of
\[\frac{80}{0.6}=133 \times \nonumber\]
To accomplish this we must decrease a particle's radius by a factor of
\[\begin{aligned} r^{3} &=133 \times \\ r &=5.1 \times \end{aligned} \nonumber\]
Decreasing the radius by a factor of approximately 5 allows us to decrease the sample's mass from 80 g to 0.6 g.

Treating a population as though it contains only two types of particles is a useful exercise because it shows us that we can improve the relative standard deviation for sampling by collecting more particles. Of course, a real population likely contains more than two types of particles, with the analyte present at several levels of concentration. Nevertheless, the sampling of many well-mixed populations approximates binomial sampling statistics because they are homogeneous on the scale at which they are sampled. Under these conditions the following relationship between the mass of a random grab sample, m, and the percent relative standard deviation for sampling, R, often is valid
\[m R^{2}=K_{s} \label{7.3}\]
where Ks is a sampling constant equal to the mass of a sample that produces a percent relative standard deviation for sampling of ±1% [Ingamells, C. O.; Switzer, P. Talanta 1973, 20, 547–568].

The following data were obtained in a preliminary determination of the amount of inorganic ash in a breakfast cereal. What is the value of Ks, and what size sample is needed to give a percent relative standard deviation for sampling of ±2.0%? Predict the percent relative standard deviation and the absolute standard deviation if we collect 5.00-g samples.

Solution

To determine the sampling constant, Ks, we need to know the average mass of the cereal samples and the relative standard deviation for the amount of ash in those samples. The average mass of the cereal samples is 1.0007 g. The average %w/w ash and its absolute standard deviation are, respectively, 1.298 %w/w and 0.03194 %w/w. The percent relative standard deviation, R, therefore, is
\[R=\frac{s_{\text {samp}}}{\overline{X}} \times 100=\frac{0.03194 \% \ \mathrm{w} / \mathrm{w}}{1.298 \% \ \mathrm{w} / \mathrm{w}} \times 100=2.46 \% \nonumber\]
Solving for Ks gives its value as
\[K_{s}=m R^{2}=(1.0007 \mathrm{g})(2.46)^{2}=6.06 \ \mathrm{g} \nonumber\]
To obtain a percent relative standard deviation of ±2%, samples must have a mass of at least
\[m=\frac{K_{s}}{R^{2}}=\frac{6.06 \mathrm{g}}{(2.0)^{2}}=1.5 \ \mathrm{g} \nonumber\]
If we use 5.00-g samples, then the expected percent relative standard deviation is
\[R=\sqrt{\frac{K_{s}}{m}}=\sqrt{\frac{6.06 \mathrm{g}}{5.00 \mathrm{g}}}=1.10 \% \nonumber\]
and the expected absolute standard deviation is
\[s_{\text {samp}}=\frac{R \overline{X}}{100}=\frac{(1.10)(1.298 \% \mathrm{w} / \mathrm{w})}{100}=0.0143 \% \mathrm{w} / \mathrm{w} \nonumber\]

Olaquindox is a synthetic growth promoter in medicated feeds for pigs. In an analysis of a production lot of feed, five samples with nominal masses of 0.95 g were collected and analyzed, with the results shown in the following table. What is the value of Ks, and what size samples are needed to obtain a percent relative standard deviation for sampling of 5.0%?
By how much do you need to reduce the average particle size if samples must weigh no more than 1 g?

To determine the sampling constant, Ks, we need to know the average mass of the samples and the percent relative standard deviation for the concentration of olaquindox in the feed. The average mass for the five samples is 0.95792 g. The average concentration of olaquindox in the samples is 23.14 mg/kg with a standard deviation of 2.200 mg/kg. The percent relative standard deviation, R, is
\[R=\frac{s_{\text {samp}}}{\overline{X}} \times 100=\frac{2.200 \ \mathrm{mg} / \mathrm{kg}}{23.14 \ \mathrm{mg} / \mathrm{kg}} \times 100=9.507 \approx 9.51 \nonumber\]
Solving for Ks gives its value as
\[K_{s}=m R^{2}=(0.95792 \mathrm{g})(9.507)^{2}=86.58 \ \mathrm{g} \approx 86.6 \ \mathrm{g} \nonumber\]
To obtain a percent relative standard deviation of 5.0%, individual samples need to have a mass of at least
\[m=\frac{K_{s}}{R^{2}}=\frac{86.58 \ \mathrm{g}}{(5.0)^{2}}=3.5 \ \mathrm{g} \nonumber\]
To reduce the sample's mass from 3.5 g to 1 g, we must change the mass by a factor of
\[\frac{3.5 \ \mathrm{g}}{1 \ \mathrm{g}}=3.5 \times \nonumber\]
If we assume that the sample's particles are spherical, then we must reduce a particle's radius by a factor of
\[\begin{aligned} r^{3} &=3.5 \times \\ r &=1.5 \times \end{aligned} \nonumber\]

In the previous section we considered how much sample we need to minimize the standard deviation due to sampling. Another important consideration is the number of samples to collect. If the results from our analysis of the samples are normally distributed, then the confidence interval for the sampling error is
\[\mu=\overline{X} \pm \frac{t s_{samp}}{\sqrt{n_{samp}}} \label{7.4}\]
where nsamp is the number of samples and ssamp is the standard deviation for sampling. Rearranging Equation \ref{7.4} and substituting e for the quantity \(\overline{X} - \mu\) gives the number of samples as
\[n_{samp}=\frac{t^{2} s_{samp}^{2}}{e^{2}} \label{7.5}\]
Because the value of t depends on nsamp, the solution to Equation \ref{7.5} is found iteratively.

When we use Equation \ref{7.5}, we must express the standard deviation for sampling, ssamp, and the error, e, in the same way. If ssamp is reported as a percent relative standard deviation, then the error, e, is reported as a percent relative error. When you use Equation \ref{7.5}, be sure to check that you are expressing ssamp and e in the same way.

In Example 7.2.5 we determined that we need 1.5-g samples to establish an ssamp of ±2.0% for the amount of inorganic ash in cereal. How many 1.5-g samples do we need to collect to obtain a percent relative sampling error of ±0.80% at the 95% confidence level?

Solution

Because the value of t depends on the number of samples—a result we have yet to calculate—we begin by letting nsamp = \(\infty\) and using t(0.05, \(\infty\)) for t. From Appendix 4, the value for t(0.05, \(\infty\)) is 1.960. Substituting known values into Equation \ref{7.5} gives the number of samples as
\[n_{samp}=\frac{(1.960)^{2}(2.0)^{2}}{(0.80)^{2}}=24.0 \approx 24 \nonumber\]
Letting nsamp = 24, the value of t(0.05, 23) from Appendix 4 is 2.073. Recalculating nsamp gives
\[n_{samp}=\frac{(2.073)^{2}(2.0)^{2}}{(0.80)^{2}}=26.9 \approx 27 \nonumber\]
When nsamp = 27, the value of t(0.05, 26) from Appendix 4 is 2.060. Recalculating nsamp gives
\[n_{samp}=\frac{(2.060)^{2}(2.0)^{2}}{(0.80)^{2}}=26.52 \approx 27 \nonumber\]
Because two successive calculations give the same value for nsamp, we have an iterative solution to the problem.
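This iteration is easy to automate. The following is a minimal sketch, assuming scipy is available for the two-tailed t values (a convenience assumption; Appendix 4 serves the same purpose in the text). Because the sketch uses unrounded t values and rounds n up at each step, its intermediate values differ slightly from those in the example, but it converges to the same answer.

```python
import math
from scipy import stats

def samples_needed(s_samp, e, alpha=0.05, max_iter=100):
    """Iteratively solve n = t^2 * s_samp^2 / e^2 (Equation 7.5); s_samp and e
    must be expressed in the same way, e.g. both as percent relative values."""
    n = None
    t = stats.t.ppf(1 - alpha / 2, 1_000_000)        # start with t(0.05, infinity), ~1.960
    for _ in range(max_iter):
        n_new = math.ceil(t**2 * s_samp**2 / e**2)    # round up to the next whole sample
        if n_new == n:
            return n
        n = n_new
        t = stats.t.ppf(1 - alpha / 2, n - 1)
    return n

# Inorganic ash in cereal: s_samp = 2.0% and a desired sampling error of 0.80%.
print(samples_needed(2.0, 0.80))   # converges to 27, in agreement with the example
```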
We need 27 samples to achieve a percent relative sampling error of ±0.80% at the 95% confidence level.

Assuming that the percent relative standard deviation for sampling in the determination of olaquindox in medicated feed is 5.0% (see Exercise 7.2.1), how many samples do we need to analyze to obtain a percent relative sampling error of ±2.5% at \(\alpha\) = 0.05?

Because the value of t depends on the number of samples—a result we have yet to calculate—we begin by letting nsamp = \(\infty\) and using t(0.05, \(\infty\)) for the value of t. From Appendix 4, the value for t(0.05, \(\infty\)) is 1.960. Our first estimate for nsamp is
\[n_{samp}=\frac{t^{2} s_{samp}^{2}}{e^{2}} = \frac{(1.96)^{2}(5.0)^{2}}{(2.5)^{2}}=15.4 \approx 15 \nonumber\]
Letting nsamp = 15, the value of t(0.05, 14) from Appendix 4 is 2.145. Recalculating nsamp gives
\[n_{samp}=\frac{t^{2} s_{samp}^{2}}{e^{2}}=\frac{(2.145)^{2}(5.0)^{2}}{(2.5)^{2}}=18.4 \approx 18 \nonumber\]
Letting nsamp = 18, the value of t(0.05, 17) from Appendix 4 is 2.103. Recalculating nsamp gives
\[n_{samp}=\frac{t^{2} s_{samp}^{2}}{e^{2}}=\frac{(2.103)^{2}(5.0)^{2}}{(2.5)^{2}}=17.7 \approx 18 \nonumber\]
Because two successive calculations give the same value for nsamp, we need 18 samples to achieve a sampling error of ±2.5% at the 95% confidence level.

Equation \ref{7.5} provides an estimate for the smallest number of samples that will produce the desired sampling error. The actual sampling error may be substantially larger if ssamp for the samples we collect during the subsequent analysis is greater than the ssamp used to calculate nsamp. This is not an uncommon problem. For a target population with a relative sampling variance of 50 and a desired relative sampling error of ±5%, Equation \ref{7.5} predicts that 10 samples are sufficient. In a simulation using 1000 samples of size 10, however, only 57% of the trials resulted in a sampling error of less than ±5% [Blackwood, L. G. Environ. Sci. Technol. 1991, 25, 1366–1367]. Increasing the number of samples to 17 was sufficient to ensure that the desired sampling error was achieved 95% of the time.

For an interesting discussion of why the number of samples is important, see Kaplan, D.; Lacetera, N.; Kaplan, C. "Sample Size and Precision in NIH Peer Review," PLoS One, 2008, 3, 1–3. When reviewing grants, individual reviewers report a score between 1.0 and 5.0 (two significant figures). NIH reports the average score to three significant figures, implying that a difference of 0.01 is significant. If the individual scores have a standard deviation of 0.1, then a difference of 0.01 is significant at \(\alpha = 0.05\) only if there are 384 reviews. The authors conclude that NIH review panels are too small to provide a statistically meaningful separation between proposals receiving similar scores.

A final consideration when we develop a sampling plan is how we can minimize the overall variance for the analysis. Equation 7.1.2 shows that the overall variance is a function of the variance due to the method, \(s_{meth}^2\), and the variance due to sampling, \(s_{samp}^2\). As we learned earlier, we can improve the sampling variance by collecting more samples of the proper size. Increasing the number of times we analyze each sample improves the method's variance. If \(s_{samp}^2\) is significantly greater than \(s_{meth}^2\), we can ignore the method's contribution to the overall variance and use Equation \ref{7.5} to estimate the number of samples to analyze.
Analyzing any sample more than once will not improve the overall variance, because the method's variance is insignificant.

If \(s_{meth}^2\) is significantly greater than \(s_{samp}^2\), then we need to collect and analyze only one sample. The number of replicate analyses, nrep, we need to minimize the error due to the method is given by an equation similar to Equation \ref{7.5}.
\[n_{rep}=\frac{t^{2} s_{meth}^{2}}{e^{2}} \nonumber\]
Unfortunately, the simple situations described above often are the exception. For many analyses, both the sampling variance and the method variance are significant, and both multiple samples and replicate analyses of each sample are necessary. The overall error in this case is
\[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{samp} n_{rep}}} \label{7.6}\]
Equation \ref{7.6} does not have a unique solution because different combinations of nsamp and nrep give the same overall error. How many samples we collect and how many times we analyze each sample is determined by other concerns, such as the cost of collecting and analyzing samples, and the amount of available sample.

An analytical method has a relative sampling variance of 0.40% and a relative method variance of 0.070%. Evaluate the percent relative error (\(\alpha = 0.05\)) if you collect 5 samples and analyze each twice, and if you collect 2 samples and analyze each 5 times.

Solution

Both sampling strategies require a total of 10 analyses. From Appendix 4 we find that the value of t(0.05, 9) is 2.262. Using Equation \ref{7.6}, the relative error for the first sampling strategy is
\[e=2.262 \sqrt{\frac{0.40}{5}+\frac{0.070}{5 \times 2}}=0.67 \% \nonumber\]
and that for the second sampling strategy is
\[e=2.262 \sqrt{\frac{0.40}{2}+\frac{0.070}{2 \times 5}}=1.0 \% \nonumber\]
Because the method variance is smaller than the sampling variance, we obtain a smaller relative error if we collect more samples and analyze each sample fewer times.

An analytical method has a relative sampling variance of 0.10% and a relative method variance of 0.20%. The cost of collecting a sample is $20 and the cost of analyzing a sample is $50. Propose a sampling strategy that provides a maximum relative error of ±0.50% (\(\alpha = 0.05\)) and a maximum cost of $700.

If we collect a single sample (cost $20), then we can analyze that sample 13 times (cost $650) and stay within our budget. For this scenario, the percent relative error is
\[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{samp} n_{rep}}} = 2.179 \sqrt{\frac{0.10}{1}+\frac{0.20}{1 \times 13}}=0.74 \% \nonumber\]
where t(0.05, 12) is 2.179. Because this percent relative error is larger than ±0.50%, this is not a suitable sampling strategy.

Next, we try two samples (cost $40), analyzing each six times (cost $600). For this scenario, the percent relative error is
\[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{samp} n_{rep}}} = 2.2035 \sqrt{\frac{0.10}{2}+\frac{0.20}{2 \times 6}}=0.57 \% \nonumber\]
where t(0.05, 11) is 2.2035. Because this percent relative error is larger than ±0.50%, this also is not a suitable sampling strategy.

Next we try three samples (cost $60), analyzing each four times (cost $600). For this scenario, the percent relative error is
\[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{samp} n_{rep}}} = 2.2035 \sqrt{\frac{0.10}{3}+\frac{0.20}{3 \times 4}}=0.49 \% \nonumber\]
where t(0.05, 11) is 2.2035.
Because both the total cost ($660) and the percent relative error meet our requirements, this is a suitable sampling strategy.

There are other suitable sampling strategies that meet both goals. The strategy that requires the least expense is to collect eight samples, analyzing each once, for a total cost of $560 and a percent relative error of ±0.46%. Collecting 10 samples and analyzing each one time gives a percent relative error of ±0.39% at a cost of $700.
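A brute-force search over combinations of nsamp and nrep is a convenient way to find strategies like these. The sketch below is a minimal illustration, assuming scipy is available for the t values; the costs, variances, budget, and error limit are those from the exercise above, and the function name is an arbitrary choice.

```python
from math import sqrt
from scipy import stats

def overall_error(s2_samp, s2_meth, n_samp, n_rep, alpha=0.05):
    """Percent relative error from Equation 7.6, with t based on the total
    number of analyses (n_samp * n_rep) minus one degree of freedom."""
    t = stats.t.ppf(1 - alpha / 2, n_samp * n_rep - 1)
    return t * sqrt(s2_samp / n_samp + s2_meth / (n_samp * n_rep))

# Relative variances of 0.10% (sampling) and 0.20% (method); $20 to collect a
# sample and $50 to analyze it; budget of $700; target error of no more than 0.50%.
for n_samp in range(1, 15):
    for n_rep in range(1, 15):
        cost = 20 * n_samp + 50 * n_samp * n_rep
        error = overall_error(0.10, 0.20, n_samp, n_rep)
        if cost <= 700 and error <= 0.50:
            print(f"{n_samp} samples x {n_rep} replicates: "
                  f"error = {error:.2f}%, cost = ${cost}")
```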
7.3: Implementing the Sampling Plan
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.03%3A_Implementing_the_Sampling_Plan
Implementing a sampling plan usually involves three steps: physically removing the sample from its target population, preserving the sample, and preparing the sample for analysis. Except for in situ sampling, we analyze a sample after we have removed it from its target population. Because sampling exposes the target population to potential contamination, our sampling device must be inert and clean.

Once we remove a sample from its target population, there is a danger that it will undergo a chemical or physical change before we can complete its analysis. This is a serious problem because the sample's properties will no longer be representative of the target population. To prevent this problem, we often preserve samples before we transport them to the laboratory for analysis. Even when we analyze a sample in the field, preservation may still be necessary.

The initial sample is called the primary or gross sample, and it may be a single increment drawn from the target population or a composite of several increments. In many cases we cannot analyze the gross sample without first preparing the sample for analysis by reducing the sample's particle size, by converting the sample into a more readily analyzable form, or by improving its homogeneity.

Although you may never work with the specific samples highlighted in this section, the case studies presented here may help you in envisioning potential problems associated with your samples.

There are many good examples of solution samples: commercial solvents; beverages, such as milk or fruit juice; natural waters, including lakes, streams, seawater, and rain; bodily fluids, such as blood and urine; and suspensions, such as those found in many oral medications. Let's use the sampling of natural waters and wastewaters as a case study in how to sample a solution.

The chemical composition of a surface water—such as a stream, river, lake, estuary, or ocean—is influenced by flow rate and depth. Rapidly flowing shallow streams and rivers, and shallow (<5 m) lakes, usually are well mixed and show little stratification with depth. To collect a grab sample we submerge a capped bottle below the surface, remove the cap and allow the bottle to fill completely, and replace the cap. Collecting a sample this way avoids the air–water interface, which may be enriched with heavy metals or contaminated with oil [Duce, R. A.; Quinn, J. G.; Olney, C. E.; Piotrowicz, S. R.; Ray, S. J.; Wade, T. L. Science 1972, 176, 161–163].

Slowly moving streams and rivers, lakes deeper than five meters, estuaries, and oceans may show substantial stratification with depth. Grab samples from near the surface are collected as described above, and samples at greater depths are collected using a sample bottle lowered to the desired depth (Figure 7.3.1).

Wells for sampling groundwater are purged before we collect samples because the chemical composition of water in a well-casing may differ significantly from that of the groundwater. These differences may result from contaminants introduced while drilling the well or by a change in the groundwater's redox potential following its exposure to atmospheric oxygen. In general, a well is purged by pumping out a volume of water equivalent to several well-casing volumes or by pumping until the water's temperature, pH, or specific conductance is constant. A municipal water supply, such as a residence or a business, is purged before sampling because the chemical composition of water standing in a pipe may differ significantly from the treated water supply.
Samples are collected at faucets after flushing the pipes for 2–3 minutes.

Samples from municipal wastewater treatment plants and industrial discharges often are collected as a 24-hour composite. An automatic sampler periodically removes an individual grab sample, adding it to those collected previously. The volume of each sample and the frequency of sampling may be constant, or may vary in response to changes in flow rate.

Sample containers for collecting natural waters and wastewaters are made from glass or plastic. Kimax and Pyrex brand borosilicate glass have the advantage of being easy to sterilize, easy to clean, and inert to all solutions except those that are strongly alkaline. The disadvantages of glass containers are cost, weight, and the ease of breakage. Plastic containers are made from a variety of polymers, including polyethylene, polypropylene, polycarbonate, polyvinyl chloride, and Teflon. Plastic containers are lightweight, durable, and, except for those manufactured from Teflon, inexpensive. In most cases glass or plastic bottles are used interchangeably, although polyethylene bottles generally are preferred because of their lower cost. Glass containers are always used when collecting samples for the analysis of pesticides, oil and grease, and organics because these species often interact with plastic surfaces. Because glass surfaces easily adsorb metal ions, plastic bottles are preferred when collecting samples for the analysis of trace metals.

In most cases the sample bottle has a wide mouth, which makes it easy to fill and to remove the sample. A narrow-mouth sample bottle is used if exposing the sample to the container's cap or to the outside environment is a problem. Unless exposure to plastic is a problem, caps for sample bottles are manufactured from polyethylene. When polyethylene must be avoided, the container's cap includes an inert interior liner of neoprene or Teflon.

Here our concern is only with the need to prepare the gross sample by converting it into a form suitable for analysis. Some analytical methods require additional sample preparation steps, such as concentrating or diluting the analyte, or adjusting the analyte's chemical form. We will consider these forms of sample preparation in later chapters that focus on specific analytical methods.

After removing a sample from its target population, its chemical composition may change as a result of chemical, biological, or physical processes. To prevent a change in composition, samples are preserved by controlling the sample's pH and temperature, by limiting its exposure to light or to the atmosphere, or by adding a chemical preservative. After preserving a sample, it is safely stored for later analysis. The maximum holding time between preservation and analysis depends on the analyte's stability and the effectiveness of sample preservation. Table 7.3.1 summarizes preservation methods and maximum holding times for several analytes of importance in the analysis of natural waters and wastewaters; typical methods include cooling to 4 °C, adding H2SO4 or HNO3 to a pH of less than 2, and adding HgCl2 or extracting immediately with a suitable non-aqueous solvent.

Other than adding a preservative, solution samples generally do not need additional preparation before analysis. This is the case for samples of natural waters and wastewaters.
Solution samples with particularly complex matrices—blood and milk are two common examples—may need additional processing to separate analytes from interferents, a topic covered later in this chapter.

Typical examples of gaseous samples include automobile exhaust, emissions from industrial smokestacks, atmospheric gases, and compressed gases. Also included in this category are aerosol particulates—the fine solid particles and liquid droplets that form smoke and smog. Let's use the sampling of urban air as a case study in how to sample a gas.

One approach for collecting a sample of urban air is to fill a stainless steel canister or a Tedlar/Teflon bag. A pump pulls the air into the container and, after purging, the container is sealed. This method has the advantage of being simple and of collecting a representative sample. Disadvantages include the tendency for some analytes to adsorb to the container's walls, the presence of analytes at concentrations too low to detect with suitable accuracy and precision, and the presence of reactive analytes, such as ozone and nitrogen oxides, that may react with the container or that may otherwise alter the sample's chemical composition during storage. When using a stainless steel canister, cryogenic cooling, which changes the sample from a gaseous state to a liquid state, may limit some of these disadvantages.

Most urban air samples are collected by filtration or by using a trap that contains a solid sorbent. Solid sorbents are used for volatile gases (a vapor pressure of more than \(10^{-6}\) atm) and for semi-volatile gases (a vapor pressure between \(10^{-6}\) atm and \(10^{-12}\) atm). Filtration is used to collect aerosol particulates. Trapping and filtering allow for sampling larger volumes of gas—an important concern for an analyte with a small concentration—and stabilize the sample between its collection and its analysis.

In solid sorbent sampling, a pump pulls the urban air through a canister packed with sorbent particles. Typically 2–100 L of air are sampled when collecting a volatile compound and 2–500 m³ when collecting a semi-volatile gas. A variety of inorganic, organic polymer, and carbon sorbents have been used. Inorganic sorbents, such as silica gel, alumina, magnesium aluminum silicate, and molecular sieves, are efficient collectors for polar compounds. Their efficiency at adsorbing water, however, limits their capacity for many organic analytes. (1 m³ is equivalent to 10³ L.)

Organic polymeric sorbents include polymeric resins of 2,6-diphenyl-p-phenylene oxide or styrene-divinylbenzene for volatile compounds, and polyurethane foam for semi-volatile compounds. These materials have a low affinity for water and are efficient for sampling all but the most highly volatile organic compounds and some lower molecular weight alcohols and ketones. Carbon sorbents are superior to organic polymer resins, which makes them useful for highly volatile organic compounds that will not adsorb onto polymeric resins, although removing the compounds may be difficult.

Non-volatile compounds normally are present either as solid particulates or are bound to solid particulates. Samples are collected by pulling a large volume of urban air through a filtering unit and collecting the particulates on glass fiber filters.

The short-term exposure of humans, animals, and plants to atmospheric pollutants is more severe than that for pollutants in other matrices.
Because the composition of atmospheric gases can vary significantly over time, the continuous monitoring of atmospheric gases such as O3, CO, SO2, NH3, H2O2, and NO2 by in situ sampling is important [Tanner, R. L. in Keith, L. H., ed. Principles of Environmental Sampling, American Chemical Society: Washington, D. C., 1988, 275–286].

After collecting a gross sample of urban air, generally there is little need for sample preservation or preparation. The chemical composition of a gas sample usually is stable when it is collected using a solid sorbent, a filter, or by cryogenic cooling. When using a solid sorbent, gaseous compounds are released for analysis by thermal desorption or by extracting with a suitable solvent. If the sorbent is selective for a single analyte, the increase in the sorbent's mass is used to determine the amount of analyte in the sample.

Typical examples of solid samples include large particulates, such as those found in ores; smaller particulates, such as soils and sediments; tablets, pellets, and capsules used for dispensing pharmaceutical products and animal feeds; sheet materials, such as polymers and rolled metals; and tissue samples from biological specimens. Solids usually are heterogeneous and we must collect samples carefully if they are to be representative of the target population. Let's use the sampling of sediments, soils, and ores as a case study in how to sample solids.

Sediments from the bottom of streams, rivers, lakes, estuaries, and oceans are collected with a bottom grab sampler or with a corer. A bottom grab sampler (Figure 7.3.2) is equipped with a pair of jaws that close when they contact the sediment, scooping up sediment in the process. Its principal advantages are ease of use and the ability to collect a large sample. Disadvantages include the tendency to lose finer grain sediment particles as water flows out of the sampler, and the loss of spatial information—both laterally and with depth—due to mixing of the sample.

An alternative method for collecting sediments is the cylindrical coring device shown in Figure 7.3.3. The corer is dropped into the sediment, collecting a column of sediment and the water in contact with the sediment. With the possible exception of sediment at the surface, which may experience mixing, samples collected with a corer maintain their vertical profile, which preserves information about how the sediment's composition changes with depth.

Collecting soil samples at depths of up to 30 cm is accomplished with a scoop or a shovel, although the sampling variance generally is high. A better tool for collecting soil samples near the surface is a soil punch, which is a thin-walled steel tube that retains a core sample after it is pushed into the soil and removed. Soil samples from depths greater than 30 cm are collected by digging a trench and collecting lateral samples with a soil punch. Alternatively, an auger is used to drill a hole to the desired depth and the sample collected with a soil punch.

For particulate materials, particle size often determines the sampling method. Larger particulate solids, such as ores, are sampled using a riffle (Figure 7.3.4), which is a trough with an even number of compartments. Because adjoining compartments empty onto opposite sides of the riffle, dumping a gross sample into the riffle divides it in half.
By repeatedly passing half of the separated material back through the riffle, a sample of the desired size is collected.

A sample thief (Figure 7.3.5) is used for sampling smaller particulate materials, such as powders. A typical sample thief consists of two tubes that are nestled together. Each tube has one or more slots aligned down the length of the sample thief. Before inserting the sample thief into the material being sampled, the slots are closed by rotating the inner tube. When the sample thief is in place, rotating the inner tube opens the slots, which fill with individual samples. The inner tube is then rotated to the closed position and the sample thief withdrawn.

Without preservation, a solid sample may undergo a change in composition due to the loss of volatile material, biodegradation, or chemical reactivity (particularly redox reactions). Storing samples at lower temperatures makes them less prone to biodegradation and to the loss of volatile material, but fracturing of solids and phase separations may present problems. To minimize the loss of volatile compounds, the sample container is filled completely, eliminating a headspace where gases collect. Samples that have not been exposed to O2 are particularly susceptible to oxidation reactions. For example, samples of anaerobic sediments must be prevented from coming into contact with air.

Unlike gases and liquids, which generally require little sample preparation, a solid sample usually needs some processing before analysis. There are two reasons for this. First, as discussed in Chapter 7.2, the standard deviation for sampling, ssamp, is a function of the number of particles in the sample, not the combined mass of the particles. For a heterogeneous material that consists of large particulates, the gross sample may be too large to analyze. For example, a Ni-bearing ore with an average particle size of 5 mm may require a sample that weighs one ton to obtain a reasonable ssamp. Reducing the sample's average particle size allows us to collect the same number of particles with a smaller, more manageable mass. Second, many analytical techniques require that the analyte be in solution.

A reduction in particle size is accomplished by crushing and grinding the gross sample. The resulting particulates are then thoroughly mixed and divided into subsamples of smaller mass. This process seldom occurs in a single step. Instead, subsamples are cycled through the process several times until a final laboratory sample is obtained.

Crushing and grinding uses mechanical force to break larger particles into smaller particles. A variety of tools are used depending on the particle's size and hardness. Large particles are crushed using jaw crushers that can reduce particles to diameters of a few millimeters. Ball mills, disk mills, and mortars and pestles are used to further reduce particle size.

A significant change in the gross sample's composition may occur during crushing and grinding. Decreasing particle size increases the available surface area, which increases the risk of losing volatile components. This problem is made worse by the frictional heat that accompanies crushing and grinding. Increasing the surface area also exposes interior portions of the sample to the atmosphere where oxidation may alter the gross sample's composition. Other problems include contamination from the materials used to crush and grind the sample, and differences in the ease with which particles are reduced in size.
For example, softer particles are easier to reduce in size and may be lost as dust before the remaining sample is processed. This is a particular problem if the analyte's distribution between different types of particles is not uniform.

The gross sample is reduced to a uniform particle size by intermittently passing it through a sieve. Those particles not passing through the sieve receive additional processing until the entire sample is of uniform size. The resulting material is mixed thoroughly to ensure homogeneity and a subsample obtained with a riffle, or by coning and quartering. As shown in Figure 7.3.6, the gross sample is piled into a cone, flattened, and divided into four quarters. After discarding two diagonally opposed quarters, the remaining material is cycled through the process of coning and quartering until a suitable laboratory sample remains.

If you are fortunate, your sample will dissolve easily in a suitable solvent, requiring no more effort than gently swirling and heating. Distilled water usually is the solvent of choice for inorganic salts, but organic solvents, such as methanol, chloroform, and toluene, are useful for organic materials.

When a sample is difficult to dissolve, the next step is to try digesting it with an acid or a base. Table 7.3.2 lists several common acids and bases—HCl (37% w/w), HNO3 (70% w/w), H2SO4 (98% w/w), HClO4 (70% w/w), HCl:HNO3 (3:1 v/v), and NaOH—and summarizes their use. Digestions are carried out in an open container, usually a beaker, using a hot-plate as a source of heat. The main advantage of an open-vessel digestion is cost because it requires no special equipment. Volatile reaction products, however, are lost, which results in a determinate error if they include the analyte.

Many digestions now are carried out in a closed container using microwave radiation as the source of energy. Vessels for microwave digestion are manufactured using Teflon (or some other fluoropolymer) or fused silica. Both materials are thermally stable, chemically resistant, transparent to microwave radiation, and capable of withstanding elevated pressures. A typical microwave digestion vessel, as shown in Figure 7.3.7, consists of an insulated vessel body and a cap with a pressure relief valve. The vessels are placed in a microwave oven (a typical oven can accommodate 6–14 vessels) and microwave energy is controlled by monitoring the temperature or pressure within one of the vessels.

Figure 7.3.7. Microwave digestion unit: on the left is a view of the unit's interior showing the carousel that holds the digestion vessels; on the right is a close-up of a Teflon digestion vessel, which is encased in a thermal sleeve. The pressure relief valve, which is part of the vessel's blue cap, contains a membrane that ruptures if the internal pressure becomes too high.

Inorganic samples that resist decomposition by digesting with acids or bases often are brought into solution by fusing with a large excess of an alkali metal salt, called a flux. After mixing the sample and the flux in a crucible, they are heated to a molten state and allowed to cool slowly to room temperature. The resulting melt usually dissolves readily in distilled water or dilute acid. Table 7.3.3 summarizes several common fluxes and their uses. Fusion works when other methods of decomposition do not because of the high temperature and the flux's high concentration in the molten liquid.
Disadvantages include contamination from the flux and the crucible, and the loss of volatile materials.

Finally, we can decompose organic materials by dry ashing. In this method the sample is placed in a suitable crucible and heated over a flame or in a furnace. The carbon present in the sample oxidizes to CO2, and hydrogen, sulfur, and nitrogen are volatilized as H2O, SO2, and N2. These gases can be trapped and weighed to determine their concentration in the organic material. Often the goal of dry ashing is to remove the organic material, leaving behind an inorganic residue, or ash, that can be further analyzed.
7.4: Separating the Analyte From Interferents
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.04%3A_Separating_the_Analyte_From_Interferents
When an analytical method is selective for the analyte, analyzing a sample is a relatively simple task. For example, a quantitative analysis for glucose in honey is relatively easy to accomplish if the method is selective for glucose, even in the presence of other reducing sugars, such as fructose. Unfortunately, few analytical methods are selective toward a single species.

In the absence of an interferent, the relationship between the sample's signal, Ssamp, and the analyte's concentration, CA, is
\[S_{samp}=k_{A} C_{A} \label{7.1}\]
where kA is the analyte's sensitivity.

In Equation \ref{7.1}, and the equations that follow, you can replace the analyte's concentration, CA, with the moles of analyte, nA, when working with methods, such as gravimetry, that respond to the absolute amount of analyte in a sample. In this case the interferent also is expressed in terms of moles.

If an interferent is present, then Equation \ref{7.1} becomes
\[S_{samp}=k_{A} C_{A}+k_{I} C_{I} \label{7.2}\]
where kI and CI are, respectively, the interferent's sensitivity and concentration. A method's selectivity for the analyte is determined by the relative difference in its sensitivity toward the analyte and the interferent. If kA is greater than kI, then the method is more selective for the analyte. The method is more selective for the interferent if kI is greater than kA.

Even if a method is more selective for an interferent, we can use it to determine CA if the interferent's contribution to Ssamp is insignificant. The selectivity coefficient, KA,I, which we introduced in Chapter 3, provides a way to characterize a method's selectivity.
\[K_{A, I}=\frac{k_{I}}{k_{A}} \label{7.3}\]
Solving Equation \ref{7.3} for kI, substituting into Equation \ref{7.2}, and simplifying, gives
\[S_{samp}=k_{A}\left(C_{A}+K_{A, I} \times C_{I}\right) \label{7.4}\]
An interferent, therefore, does not pose a problem as long as the product of its concentration and its selectivity coefficient is significantly smaller than the analyte's concentration.
\[K_{A, I} \times C_{I} \ll C_{A} \nonumber\]
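As a rough numerical check of this criterion, we can compare the interferent's contribution, \(K_{A,I} \times C_I\), against the analyte's concentration. The sketch below uses made-up sensitivities and concentrations, and the 1% cutoff is an arbitrary illustrative choice; the text requires only that the product be significantly smaller than \(C_A\).

```python
def interferent_is_negligible(k_A, k_I, C_A, C_I, cutoff=0.01):
    """Return True if K_AI * C_I is less than the chosen fraction of C_A."""
    K_AI = k_I / k_A                  # selectivity coefficient (Equation 7.3)
    return K_AI * C_I < cutoff * C_A

# Hypothetical method: k_A = 50 ppm^-1, k_I = 0.5 ppm^-1,
# 10 ppm analyte in the presence of 2 ppm interferent.
print(interferent_is_negligible(k_A=50, k_I=0.5, C_A=10, C_I=2))   # True: 0.02 ppm << 10 ppm
```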
7.5: General Theory of Separation Efficiency
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.05%3A_General_Theory_of_Separation_Effiiciency
The goal of an analytical separation is to remove either the analyte or the interferent from the sample's matrix. To achieve this separation we must identify at least one significant difference between the analyte's and the interferent's chemical or physical properties. A significant difference in properties, however, is not sufficient to effect a separation if the conditions that favor the extraction of the interferent from the sample also remove a small amount of analyte.

Two factors limit a separation's efficiency: failing to recover all the analyte and failing to remove all the interferent. We define the analyte's recovery, RA, as
\[R_{A}=\frac{C_{A}}{\left(C_{A}\right)_{\mathrm{o}}} \label{7.1}\]
where CA is the concentration of analyte that remains after the separation, and (CA)o is the analyte's initial concentration. A recovery of 1.00 means that no analyte is lost during the separation. The interferent's recovery, RI, is defined in the same manner
\[R_{I}=\frac{C_{I}}{\left(C_{I}\right)_{o}} \label{7.2}\]
where CI is the concentration of interferent that remains after the separation, and (CI)o is the interferent's initial concentration. We define the extent of the separation using a separation factor, SI,A [(a) Sandell, E. B. Colorimetric Determination of Trace Metals, Interscience Publishers: New York, 1950, pp. 19–20; (b) Sandell, E. B. Anal. Chem. 1968, 40, 834–835].
\[S_{I, A}=\frac{R_{I}}{R_{A}} \label{7.3}\]
In general, an SI,A of approximately \(10^{-7}\) is needed for the quantitative analysis of a trace analyte in the presence of a macro interferent, and \(10^{-3}\) when the analyte and interferent are present in approximately equal amounts.

The meaning of trace and macro, as well as other terms for describing the concentrations of analytes and interferents, is presented in Chapter 2.

An analytical method for determining Cu in an industrial plating bath gives poor results in the presence of Zn. To evaluate a method for separating the analyte from the interferent, samples with known concentrations of Cu or Zn were prepared and analyzed. When a sample of 128.6 ppm Cu was taken through the separation, the concentration of Cu that remained was 127.2 ppm. Taking a 134.9 ppm solution of Zn through the separation left behind a concentration of 4.3 ppm Zn. Calculate the recoveries for Cu and Zn, and the separation factor.

Solution

Using Equation \ref{7.1} and Equation \ref{7.2}, the recoveries for the analyte and interferent are
\[R_{\mathrm{Cu}}=\frac{127.2 \ \mathrm{ppm}}{128.6 \ \mathrm{ppm}}=0.9891 \text { or } 98.91 \% \nonumber\]
\[R_{\mathrm{Zn}}=\frac{4.3 \ \mathrm{ppm}}{134.9 \ \mathrm{ppm}}=0.032 \text { or } 3.2 \% \nonumber\]
and the separation factor is
\[S_{\mathrm{Zn}, \mathrm{Cu}}=\frac{R_{\mathrm{Zn}}}{R_{\mathrm{Cu}}}=\frac{0.032}{0.9891}=0.032 \nonumber\]
Recoveries and separation factors are useful tools for evaluating a separation's potential effectiveness; they do not, however, give a direct indication of the error that results from failing to remove all the interferent or from failing to completely recover the analyte.
The relative error due to the separation, E, is
\[E=\frac{S_{samp}-S_{samp}^{*}}{S_{samp}^{*}} \label{7.4}\]
where \(S_{samp}^*\) is the sample's signal for an ideal separation in which we completely recover the analyte.
\[S_{samp}^{*}=k_{A}\left(C_{A}\right)_{\mathrm{o}} \label{7.5}\]
Substituting Equation 7.4.4 and Equation \ref{7.5} into Equation \ref{7.4}, and rearranging
\[E=\frac{k_{A}\left(C_{A}+K_{A, I} \times C_{I}\right)-k_{A}\left(C_{A}\right)_{o}}{k_{A}\left(C_{A}\right)_{o}} \nonumber\]
\[E=\frac{C_{A}+K_{A, I} \times C_{I}-\left(C_{A}\right)_{o}}{\left(C_{A}\right)_{o}} \nonumber\]
\[E=\frac{C_{A}}{\left(C_{A}\right)_{o}}-\frac{\left(C_{A}\right)_{o}}{\left(C_{A}\right)_{o}}+\frac{K_{A, I} \times C_{I}}{\left(C_{A}\right)_{o}} \nonumber\]
leaves us with
\[E=\left(R_{A}-1\right)+\frac{K_{A, I} \times C_{I}}{\left(C_{A}\right)_{o}} \label{7.6}\]
A more useful equation is obtained by solving Equation \ref{7.2} for CI and substituting into Equation \ref{7.6}.
\[E=\left(R_{A}-1\right)+\frac{K_{A, I} \times\left(C_{I}\right)_{o}}{\left(C_{A}\right)_{o}} \times R_{I} \label{7.7}\]
The first term of Equation \ref{7.7} accounts for the analyte's incomplete recovery and the second term accounts for a failure to remove all the interferent.

Following the separation outlined in Example 7.5.1, an analysis is carried out to determine the concentration of Cu in an industrial plating bath. Analysis of standard solutions that contain either Cu or Zn give the following linear calibrations.
\[S_{\mathrm{Cu}}=1250 \ \mathrm{ppm}^{-1} \times C_{\mathrm{Cu}} \text { and } S_{\mathrm{Zn}}=2310 \ \mathrm{ppm}^{-1} \times C_{\mathrm{Zn}} \nonumber\]
(a) What is the relative error if we analyze a sample without removing the Zn? Assume the initial concentration ratio, Cu:Zn, is 7:1. (b) What is the relative error if we first complete the separation with the recoveries determined in Example 7.5.1? (c) What is the maximum acceptable recovery for Zn if the recovery for Cu is 1.00 and if the error due to the separation must be no greater than 0.10%?

Solution

(a) If we complete the analysis without separating Cu and Zn, then RCu and RZn are exactly 1 and Equation \ref{7.7} simplifies to
\[E=\frac{K_{\mathrm{Cu}, \mathrm{Zn}} \times\left(C_{\mathrm{Zn}}\right)_{\mathrm{o}}}{\left(C_{\mathrm{Cu}}\right)_{\mathrm{o}}} \nonumber\]
Using Equation 7.4.3, we find that the selectivity coefficient is
\[K_{\mathrm{Cu}, \mathrm{Zn}}=\frac{k_{\mathrm{Zn}}}{k_{\mathrm{Cu}}}=\frac{2310 \ \mathrm{ppm}^{-1}}{1250 \ \mathrm{ppm}^{-1}}=1.85 \nonumber\]
Given the initial concentration ratio of 7:1 for Cu and Zn, the relative error without the separation is
\[E=\frac{1.85 \times 1}{7}=0.264 \text { or } 26.4 \% \nonumber\]
(b) To calculate the relative error we substitute the recoveries from Example 7.5.1 into Equation \ref{7.7}, obtaining
\[E=(0.9891-1)+\frac{1.85 \times 1}{7} \times 0.032= -0.0109+0.0085=-0.0024 \nonumber\]
or –0.24%. Note that the negative determinate error from failing to recover all the analyte is offset partially by the positive determinate error from failing to remove all the interferent.

(c) To determine the maximum recovery for Zn, we make appropriate substitutions into Equation \ref{7.7}
\[E=0.0010=(1.00-1)+\frac{1.85 \times 1}{7} \times R_{\mathrm{Zn}} \nonumber\]
and solve for RZn, obtaining a recovery of 0.0038, or 0.38%.
Thus, we must remove at least
\[100.00 \%-0.38 \%=99.62 \% \nonumber\]
of the Zn to obtain an error of 0.10% when RCu is exactly 1.
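The recovery, separation factor, and relative-error calculations in this section are easy to script. Below is a minimal sketch in plain Python using the Cu/Zn values from the examples above; the function names are illustrative only.

```python
def recovery(C_after, C_initial):
    """R = C / C_o (Equations 7.1 and 7.2)."""
    return C_after / C_initial

def separation_error(R_A, R_I, K_AI, conc_ratio_I_to_A):
    """Relative error from Equation 7.7: E = (R_A - 1) + K_AI * (C_I)o/(C_A)o * R_I."""
    return (R_A - 1) + K_AI * conc_ratio_I_to_A * R_I

R_Cu = recovery(127.2, 128.6)       # ~0.9891
R_Zn = recovery(4.3, 134.9)         # ~0.032
S_Zn_Cu = R_Zn / R_Cu               # separation factor, ~0.032

K_Cu_Zn = 2310 / 1250               # selectivity coefficient, ~1.85
E = separation_error(R_Cu, R_Zn, K_Cu_Zn, 1 / 7)   # initial Cu:Zn ratio of 7:1
print(f"R_Cu = {R_Cu:.4f}, R_Zn = {R_Zn:.4f}, S = {S_Zn_Cu:.4f}, E = {E:.4f}")
# E is about -0.0025; the example reports -0.24% using rounded intermediate values.
```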
7.6: Classifying Separation Techniques
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.06%3A_Classifying_Separation_Techniques
We can separate an analyte and an interferent if there is a significant difference in at least one of their chemical or physical properties. Table 7.6.1 provides a partial list of separation techniques, organized by the chemical or physical property affecting the separation.

Size is the simplest physical property we can exploit in a separation. To accomplish the separation we use a porous medium through which only the analyte or the interferent can pass. Examples of size-based separations include filtration, dialysis, and size-exclusion.

In a filtration we separate a particulate interferent from soluble analytes using a filter with a pore size that will retain the interferent. The solution that passes through the filter is called the filtrate, and the material retained by the filter is the retentate. Gravity filtration and suction filtration using filter paper are techniques with which you should already be familiar. A membrane filter is the method of choice for particulates that are too small to be retained by filter paper. Figure 7.6.1 provides information about three types of membrane filters. For applications of gravity filtration and suction filtration in gravimetric methods of analysis, see Chapter 8.

Dialysis is another example of a separation technique in which size is used to separate the analyte and the interferent. A dialysis membrane usually is made using cellulose and fashioned into tubing, bags, or cassettes. Figure 7.6.2 shows an example of a commercially available dialysis cassette. The sample is injected into the dialysis membrane, which is sealed tightly by a gasket, and the unit is placed in a container filled with a solution with a composition different from the sample. If there is a difference in a species' concentration on the membrane's two sides, the resulting concentration gradient provides a driving force for its diffusion across the membrane. While small species freely pass through the membrane, larger species are unable to pass. Dialysis frequently is used to purify proteins, hormones, and enzymes. During kidney dialysis, metabolic waste products, such as urea, uric acid, and creatinine, are removed from blood by passing it over a dialysis membrane.

Size-exclusion chromatography is a third example of a separation technique that uses size as a means to effect a separation. In this technique a column is packed with small, approximately 10-μm, porous polymer beads of cross-linked dextran or polyacrylamide. The pore size of the particles is controlled by the degree of cross-linking, with more cross-linking producing smaller pore sizes. The sample is placed into a stream of solvent that is pumped through the column at a fixed flow rate. Those species too large to enter the pores pass through the column at the same rate as the solvent. Species that enter into the pores take longer to pass through the column, with smaller species requiring more time to pass through the column. Size-exclusion chromatography is widely used in the analysis of polymers, and in biochemistry, where it is used for the separation of proteins. A more detailed treatment of size-exclusion chromatography, which also is called gel permeation chromatography, is in Chapter 12.

If the analyte and the interferent have different masses or densities, then a separation using centrifugation may be possible. The sample is placed in a centrifuge tube and spun at a high angular velocity, measured in revolutions per minute (rpm).
The sample's constituents experience a centrifugal force that pulls them toward the bottom of the centrifuge tube. Those species that experience the greatest centrifugal force have the fastest sedimentation rate and are the first to reach the bottom of the centrifuge tube. If two species have the same density, their separation is based on a difference in mass, with the heavier species having the greater sedimentation rate. If the species are of equal mass, then the species with the larger density has the greater sedimentation rate.

Centrifugation is an important separation technique in biochemistry. Table 7.6.2, for example, lists conditions for separating selected cellular components; the conditions are adapted from Zubay, G. Biochemistry, 2nd ed. Macmillan: New York, 1988, p. 120. We can separate lysosomes from other cellular components by several differential centrifugations, in which we divide the sample into a solid residue and a supernatant solution. After destroying the cells, the solution is centrifuged for 20 minutes at \(15000 \times g\) (a centrifugal force that is 15 000 times the earth's gravitational force), leaving a solid residue of cell membranes and mitochondria. The supernatant, which contains the lysosomes, is isolated by decanting it from the residue and then centrifuged for 30 minutes at \(30000 \times g\), leaving a solid residue of lysosomes. Figure 7.6.3 shows a typical centrifuge capable of producing the centrifugal forces needed for biochemical separations.

An alternative approach to differential centrifugation is a density gradient centrifugation. To prepare a sucrose density gradient, for example, a solution with a smaller concentration of sucrose, and thus a lower density, is gently layered upon a solution with a higher concentration of sucrose. Repeating this process several times fills the centrifuge tube with a multi-layer density gradient. The sample is placed on top of the density gradient and centrifuged using a force greater than \(150000 \times g\). During centrifugation, each of the sample's components moves through the gradient until it reaches a position where its density matches that of the surrounding sucrose solution, where it is isolated as a separate band. Figure 7.6.4 provides an example of a typical sucrose density centrifugation for separating plant thylakoid membranes.

One widely used technique for preventing an interference is to bind the interferent in a strong, soluble complex that prevents it from interfering in the analyte's determination. This process is known as masking. As shown in Table 7.6.3, a wide variety of ions and molecules are useful masking agents, and, as a result, selectivity is usually not a problem. (Table 7.6.3, which is not reproduced here, pairs each masking agent with the metal ions it masks; its entries are drawn from Meites, L. Handbook of Analytical Chemistry, McGraw-Hill: New York, 1963.)

Technically, masking is not a separation technique because we do not physically separate the analyte and the interferent. We do, however, chemically isolate the interferent from the analyte, resulting in a pseudo-separation.

Using Table 7.6.3, suggest a masking agent for the analysis of aluminum in the presence of iron.

Solution: A suitable masking agent must form a complex with the interferent, but not with the analyte. Oxalate, for example, is not a suitable masking agent because it binds both Al and Fe.
Thioglycolic acid, on the other hand, is a selective masking agent for Fe in the presence of Al. Other acceptable masking agents are cyanide (CN–), thiocyanate (SCN–), and thiosulfate (\(\text{S}_2\text{O}_3^{2-}\)).

Using Table 7.6.3, suggest a masking agent for the analysis of Fe in the presence of Al.

The fluoride ion, F–, is a suitable masking agent as it binds with Al3+ to form the stable \(\text{AlF}_6^{3-}\) complex, leaving iron in solution.

As shown in Example 7.6.2, we can judge a masking agent's effectiveness by considering the relevant equilibrium constants.

Show that CN– is an appropriate masking agent for Ni2+ in a method where nickel's complexation with EDTA is an interference.

Solution: The relevant reactions and formation constants are\[\mathrm{Ni}^{2+}(a q)+\mathrm{Y}^{4-}(a q)\rightleftharpoons \mathrm{NiY}^{2-}(a q) \quad K_{1}=4.2 \times 10^{18} \nonumber\]\[\mathrm{Ni}^{2+}(a q)+4 \mathrm{CN}^{-}(a q)\rightleftharpoons \mathrm{Ni}(\mathrm{CN})_{4}^{2-}(a q) \quad \beta_{4}=1.7 \times 10^{30} \nonumber\]where Y4– is an abbreviation for EDTA. Cyanide is an appropriate masking agent because the formation constant for \(\text{Ni(CN)}_4^{2-}\) is greater than that for the Ni–EDTA complex. In fact, the equilibrium constant for the reaction in which EDTA displaces the masking agent\[\mathrm{Ni}(\mathrm{CN})_{4}^{2-}(a q)+\mathrm{Y}^{4-}(a q) \rightleftharpoons \mathrm{NiY}^{2-}(a q)+4 \mathrm{CN}^{-}(a q) \nonumber\]\[K=\frac{K_{1}}{\beta_{4}}=\frac{4.2 \times 10^{18}}{1.7 \times 10^{30}}=2.5 \times 10^{-12} \nonumber\]is sufficiently small that \(\text{Ni(CN)}_4^{2-}\) is relatively inert in the presence of EDTA.

Use the formation constants in Appendix 12 to show that 1,10-phenanthroline is a suitable masking agent for Fe2+ in the presence of Fe3+. Use a ladder diagram to define any limitations on using 1,10-phenanthroline as a masking agent. See Chapter 6 for a review of ladder diagrams.

The relevant reactions and equilibrium constants are\[\begin{array}{ll}{\mathrm{Fe}^{2+}(a q)+3 \mathrm{phen}(a q)} & {\rightleftharpoons\mathrm{Fe}(\mathrm{phen})_{3}^{2+}(a q) \quad \beta_{3}=5 \times 10^{20}} \\ {\mathrm{Fe}^{3+}(a q)+3 \mathrm{phen}(a q)} & {\rightleftharpoons \mathrm{Fe}(\mathrm{phen})_{3}^{3+}(a q) \quad \beta_{3}=6 \times 10^{13}}\end{array} \nonumber\]where phen is an abbreviation for 1,10-phenanthroline. Because \(\beta_3\) is larger for the complex with Fe2+ than it is for the complex with Fe3+, 1,10-phenanthroline will bind Fe2+ before it binds Fe3+. A ladder diagram for this system (as shown below) suggests that an equilibrium p(phen) between 5.6 and 5.9 will fully complex Fe2+ without any significant formation of the \(\text{Fe(phen)}_3^{3+}\) complex. Adding a stoichiometrically equivalent amount of 1,10-phenanthroline to a solution of Fe2+ is sufficient to mask Fe2+ in the presence of Fe3+. A large excess of 1,10-phenanthroline, however, decreases p(phen) and allows for the formation of both metal–ligand complexes.

Because an analyte and its interferent are usually in the same phase, we can achieve a separation if one of them undergoes a change in its physical state or its chemical state.

When the analyte and the interferent are miscible liquids, separation by distillation is possible if their boiling points are significantly different. Figure 7.6.5 shows the progress of a distillation as a plot of temperature versus the composition of the mixture's vapor phase and liquid phase. The initial liquid mixture (point A) contains more interferent than analyte.
When this solution is brought to its boiling point, the vapor phase in equilibrium with the liquid phase is enriched in analyte (point B). The horizontal line that connects points A and B represents this vaporization equilibrium. Condensing the vapor phase at point B, by lowering the temperature, creates a new liquid phase with a composition identical to that in the vapor phase (point C). The vertical line that connects points B and C represents this condensation equilibrium. The liquid phase at point C has a lower boiling point than the original mixture, and is in equilibrium with the vapor phase at point D. This process of repeated vaporization and condensation gradually separates the analyte and the interferent.Two experimental set-ups for distillations are shown in Figure 7.6.6 . The simple distillation apparatus shown in Figure 7.6.6 a is useful only for separating a volatile analyte (or interferent) from a non-volatile interferent (or analyte), or for separating an analyte and an interferent whose boiling points differ by more than 150oC. A more efficient separation is achieved using the fractional distillation apparatus in Figure 7.6.6 b. Packing the fractionating column with a high surface area material, such as a steel sponge or glass beads, provides more opportunity for the repeated process of vaporization and condensation necessary to effect a complete separation.When the sample is a solid, sublimation may provide a useful separation of the analyte and the interferent. The sample is heated at a temperature and pressure below the analyte’s triple point, allowing it to vaporize without passing through a liquid state. Condensing the vapor recovers the purified analyte (Figure 7.6.7 ). A useful analytical example of sublimation is the isolation of amino acids from fossil mollusk shells and deep-sea sediments [Glavin, D. P.; Bada, J. L. Anal. Chem. 1998, 70, 3119–3122].Recrystallization is another method for purifying a solid. A solvent is chosen in which the analyte’s solubility is significant when the solvent is hot and minimal when the solvent is cold. The interferents must be less soluble in the hot solvent than the analyte or present in much smaller amounts. After heating a portion of the solvent in an Erlenmeyer flask, small amounts of sample are added until undissolved sample is visible. Additional hot solvent is added until the sample redissolves, or until only insoluble impurities remain. This process of adding sample and solvent is repeated until the entire sample is added to the Erlenmeyer flask. Any insoluble impurities are removed by filtering the hot solution. The solution is allowed to cool slowly, which promotes the growth of large, pure crystals, and then cooled in an ice bath to minimize solubility losses. The purified sample is isolated by filtration and rinsed to remove any soluble impurities. Finally, the sample is dried to remove any remaining traces of the solvent. Further purification, if necessary, is accomplished by additional recrystallizations.Distillation, sublimation, and recrystallization use a change in physical state to effect a separation. Chemical reactivity also is a useful tool for separating analytes and interferents. For example, we can separate SiO2 from a sample by reacting it with HF to form SiF4. Because SiF4 is volatile, it is easy to remove by evaporation. If we wish to collect the reaction’s volatile product, then a distillation is possible. 
For example, we can isolate the \(\ce{NH}_4^+\) in a sample by making the solution basic and converting it to NH3. The ammonia is then removed by distillation. Table 7.6.4 provides additional examples of this approach for isolating inorganic ions.

Another reaction for separating analytes and interferents is precipitation. Two important examples of using a precipitation reaction in a separation are the pH-dependent solubility of metal oxides and hydroxides, and the pH-dependent solubility of metal sulfides.

Separations based on the pH-dependent solubility of oxides and hydroxides usually use a strong acid, a strong base, or an NH3/NH4Cl buffer to adjust the pH. Most metal oxides and hydroxides are soluble in hot concentrated HNO3, although a few oxides, such as WO3, SiO2, and SnO2, remain insoluble even under these harsh conditions. To determine the amount of Cu in brass, for example, we can avoid an interference from Sn by dissolving the sample with a strong acid and filtering to remove the solid residue of SnO2.

Most metals form a hydroxide precipitate in the presence of concentrated NaOH. Those metals that form amphoteric hydroxides, however, do not precipitate because they react to form higher-order hydroxo-complexes. For example, Zn2+ and Al3+ do not precipitate in concentrated NaOH because they form the soluble complexes \(\text{Zn(OH)}_3^-\) and \(\text{Al(OH)}_4^-\). The solubility of Al3+ in concentrated NaOH allows us to isolate aluminum from impure samples of bauxite, an ore of Al2O3. After crushing the ore, we place it in a solution of concentrated NaOH, dissolving the Al2O3 and forming \(\text{Al(OH)}_4^-\). Other oxides in the ore, such as Fe2O3 and SiO2, remain insoluble. After filtering, we recover the aluminum as a precipitate of Al(OH)3 by neutralizing some of the OH– with acid.

The pH of an NH3/NH4Cl buffer (pKa = 9.26) is sufficient to precipitate most metals as the hydroxide. The alkali metals and the alkaline earths, however, do not precipitate at this pH. In addition, metal ions that form soluble complexes with NH3, such as Cu2+, Zn2+, Ni2+, and Co2+, also do not precipitate under these conditions.

The use of S2– as a precipitating reagent is one of the earliest examples of a separation technique. In Fresenius's 1881 text A System of Instruction in Quantitative Chemical Analysis, sulfide frequently is used to separate metal ions from the remainder of the sample's matrix [Fresenius, C. R. A System of Instruction in Quantitative Chemical Analysis, John Wiley and Sons: New York, 1881]. Sulfide is a useful reagent for separating metal ions for two reasons: most metal ions, except for the alkali metals and the alkaline earths, form insoluble sulfides; and these metal sulfides show a substantial variation in solubility. Because the concentration of S2– is pH-dependent, we can control which metal ions precipitate by adjusting the pH. For example, in Fresenius's gravimetric procedure for the determination of Ni in ore samples, sulfide is used three times to separate Co2+ and Ni2+ from Cu2+ and, to a lesser extent, from Pb2+.

The most important group of separation techniques uses a selective partitioning of the analyte or interferent between two immiscible phases.
If we bring a phase that contains the solute, S, into contact with a second phase, the solute will partition itself between the two phases, as shown by the following equilibrium reaction.\[S_{\text { phase } 1} \rightleftharpoons S_{\text { phase } 2} \label{7.1}\]The equilibrium constant for reaction \ref{7.1}\[K_{\mathrm{D}}=\frac{\left[S_{\mathrm{phase} \ 2}\right]}{\left[S_{\mathrm{phase} \ 1}\right]} \nonumber\]is called the distribution constant or the partition coefficient. If KD is sufficiently large, then the solute moves from phase 1 to phase 2. The solute will remain in phase 1 if the partition coefficient is sufficiently small. When we bring a phase that contains two solutes into contact with a second phase, a separation of the solutes is possible if KD is favorable for only one of the solutes. The physical states of the phases are identified when we describe the separation process, with the phase that contains the sample listed first. For example, if the sample is in a liquid phase and the second phase is a solid, then the separation involves liquid–solid partitioning.We call the process of moving a species from one phase to another phase an extraction. Simple extractions are particularly useful for separations where only one component has a favorable partition coefficient. Several important separation techniques are based on a simple extraction, including liquid–liquid, liquid–solid, solid–liquid, and gas–solid extractions.A liquid–liquid extraction usually is accomplished using a separatory funnel (Figure 7.6.8 ). After placing the two liquids in the separatory funnel, we shake the funnel to increase the surface area between the phases. When the extraction is complete, we allow the liquids to separate. The stopcock at the bottom of the separatory funnel allows us to remove the two phases.We also can carry out a liquid–liquid extraction without a separatory funnel by adding the extracting solvent to the sample’s container. Pesticides in water, for example, are preserved in the field by extracting them into a small volume of hexane. A liquid–liquid microextraction, in which the extracting phase is a 1-µL drop suspended from a microsyringe (Figure 7.6.9 ), also has been described [Jeannot, M. A.; Cantwell, F. F. Anal. Chem. 1997, 69, 235–239]. Because of its importance, a more thorough discussion of liquid–liquid extractions is in Chapter7.7.In a solid phase extraction of a liquid sample, we pass the sample through a cartridge that contains a solid adsorbent, several examples of which are shown in Figure 7.6.10 . The choice of adsorbent is determined by the species we wish to separate. Table 7.6.5 provides several representative examples of solid adsorbents and their applications.As an example, let’s examine a procedure for isolating the sedatives secobarbital and phenobarbital from serum samples using a C-18 solid adsorbent [Alltech Associates Extract-Clean SPE Sample Preparation Guide, Bulletin 83]. Before adding the sample, the solid phase cartridge is rinsed with 6 mL each of methanol and water. Next, a 500-μL sample of serum is pulled through the cartridge, with the sedatives and matrix interferents retained following a liquid–solid extraction (Figure 7.6.11 a). Washing the cartridge with distilled water removes any interferents (Figure 7.6.11 b). Finally, we elute the sedatives using 500 μL of acetone (Figure 7.6.11 c). 
In comparison to a liquid–liquid extraction, a solid phase extraction has the advantage of being easier and faster, and of requiring less solvent.

An extraction is possible even if the analyte has an unfavorable partition coefficient, provided that the sample's other components have significantly smaller partition coefficients. Because the analyte's partition coefficient is unfavorable, a single extraction will not recover all the analyte. Instead we continuously pass the extracting phase through the sample until we achieve a quantitative extraction.

A continuous extraction of a solid sample is carried out using a Soxhlet extractor (Figure 7.6.12 ). The extracting solvent is placed in the lower reservoir and heated to its boiling point. Solvent in the vapor phase moves upward through the tube on the far right side of the apparatus, reaching the condenser where it condenses back to the liquid state. The solvent then passes through the sample, which is held in a porous cellulose filter thimble, collecting in the upper reservoir. When the solvent in the upper reservoir reaches the return tube's upper bend, the solvent and extracted analyte are siphoned back to the lower reservoir. Over time the analyte's concentration in the lower reservoir increases.

Microwave-assisted extractions have replaced Soxhlet extractions in some applications [Renoe, B. W. Am. Lab August 1994, 34–40]. The process is the same as that described earlier for a microwave digestion. After placing the sample and the solvent in a sealed digestion vessel, a microwave oven is used to heat the mixture. Using a sealed digestion vessel allows the extraction to take place at a higher temperature and pressure, reducing the amount of time needed for a quantitative extraction. In a Soxhlet extraction the temperature is limited by the solvent's boiling point at atmospheric pressure. When acetone is the solvent, for example, a Soxhlet extraction is limited to 56oC, but a microwave extraction can reach 150oC.

Two other continuous extractions deserve mention. Volatile organic compounds (VOCs) can be removed quantitatively from a liquid sample by a liquid–gas extraction. As shown in Figure 7.6.13 , an inert purging gas, such as He, is passed through the sample. The purge gas removes the VOCs, which are swept to a primary trap where they collect on a solid adsorbent. When the extraction is complete, the VOCs are removed from the primary trap by rapidly heating the tube while flushing with He. This technique is known as a purge-and-trap. Because the analyte's recovery may not be reproducible, an internal standard is required for quantitative work.

Continuous extractions also can be accomplished using supercritical fluids [McNally, M. E. Anal. Chem. 1995, 67, 308A–315A]. If we heat a substance above its critical temperature and pressure, it forms a supercritical fluid whose properties are between those of a gas and a liquid. A supercritical fluid is a better solvent than a gas, which makes it a better reagent for extractions. In addition, a supercritical fluid's viscosity is significantly less than that of a liquid, which makes it easier to push it through a particulate sample. One example of a supercritical fluid extraction is the determination of total petroleum hydrocarbons (TPHs) in soils, sediments, and sludges using supercritical CO2 [“TPH Extraction by SFE,” ISCO, Inc. Lincoln, NE, Revised Nov. 1992].
An approximately 3-g sample is placed in a 10-mL stainless steel cartridge and supercritical CO2 at a pressure of 340 atm and a temperature of 80oC is passed through the cartridge for 30 minutes at a flow rate of 1–2 mL/min. To collect the TPHs, the effluent from the cartridge is passed through 3 mL of tetrachloroethylene at room temperature. At this temperature the CO2 reverts to the gas phase and is released to the atmosphere.

In an extraction, the sample originally is in one phase and we extract the analyte or the interferent into a second phase. We also can separate the analyte and interferents by continuously passing one sample-free phase, called the mobile phase, over a second sample-free phase that remains fixed or stationary. The sample is injected into the mobile phase and the sample's components partition themselves between the mobile phase and the stationary phase. Those components with larger partition coefficients are more likely to move into the stationary phase and take longer to pass through the system. This is the basis of all chromatographic separations. Chromatography provides both a separation of analytes and interferents, and a means for performing a qualitative or quantitative analysis for the analyte. For this reason a more thorough treatment of chromatography is found in Chapter 12.

This page titled 7.6: Classifying Separation Techniques is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.7: Liquid-Liquid Extractions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.07%3A_Liquid-Liquid_Extractions
A liquid–liquid extraction is an important separation technique for environmental, clinical, and industrial laboratories. A standard environmental analytical method illustrates the importance of liquid–liquid extractions. Municipal water departments routinely monitor public water supplies for trihalomethanes (CHCl3, CHBrCl2, CHBr2Cl, and CHBr3) because they are known or suspected carcinogens. Before their analysis by gas chromatography, trihalomethanes are separated from their aqueous matrix using a liquid–liquid extraction with pentane [“The Analysis of Trihalomethanes in Drinking Water by Liquid Extraction,” EPA Method 501.2 (EPA 500-Series, November 1979)].

The Environmental Protection Agency (EPA) also publishes two additional methods for trihalomethanes. Method 501.1 and Method 501.3 use a purge-and-trap to collect the trihalomethanes prior to a gas chromatographic analysis with a halide-specific detector (Method 501.1) or a mass spectrometer as the detector (Method 501.3). You will find more details about gas chromatography, including detectors, in Chapter 12.

In a simple liquid–liquid extraction the solute partitions itself between two immiscible phases. One phase usually is an aqueous solvent and the other phase is an organic solvent, such as the pentane used to extract trihalomethanes from water. Because the phases are immiscible they form two layers, with the denser phase on the bottom. The solute initially is present in one of the two phases; after the extraction it is present in both phases. Extraction efficiency, that is, the percentage of solute that moves from one phase to the other, is determined by the equilibrium constant for the solute's partitioning between the phases and any other side reactions that involve the solute. Examples of other reactions that affect extraction efficiency include acid–base reactions and complexation reactions.

As we learned earlier in this chapter, a solute's partitioning between two phases is described by a partition coefficient, KD. If we extract a solute from an aqueous phase into an organic phase\[S_{a q} \rightleftharpoons S_{o r g} \nonumber\]then the partition coefficient is\[K_{\mathrm{D}}=\frac{\left[S_{org}\right]}{\left[S_{a q}\right]} \nonumber\]A large value for KD indicates that extraction of solute into the organic phase is favorable.

To evaluate an extraction's efficiency we must consider the solute's total concentration in each phase, which we define as a distribution ratio, D.\[D=\frac{\left[S_{o r g}\right]_{\text { total }}}{\left[S_{a q}\right]_{\text { total }}} \nonumber\]The partition coefficient and the distribution ratio are identical if the solute has only one chemical form in each phase; however, if the solute exists in more than one chemical form in either phase, then KD and D usually have different values. For example, if the solute exists in two forms in the aqueous phase, A and B, only one of which, A, partitions between the two phases, then\[D=\frac{\left[S_{o r g}\right]_{A}}{\left[S_{a q}\right]_{A}+\left[S_{a q}\right]_{B}} \leq K_{\mathrm{D}}=\frac{\left[S_{o r g}\right]_{A}}{\left[S_{a q}\right]_{A}} \nonumber\]This distinction between KD and D is important. The partition coefficient is a thermodynamic equilibrium constant and has a fixed value for the solute's partitioning between the two phases. The distribution ratio's value, however, changes with solution conditions if the relative amounts of A and B change.
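To make the distinction concrete, consider a brief hypothetical case; the numbers here are chosen only for illustration. Suppose that KD is 4.0 and that, at the solution's pH, the solute divides itself equally between its extractable form A and its non-extractable form B in the aqueous phase. Because \([S_{aq}]_B = [S_{aq}]_A\),\[D=\frac{\left[S_{o r g}\right]_{A}}{\left[S_{a q}\right]_{A}+\left[S_{a q}\right]_{B}}=\frac{K_{\mathrm{D}}\left[S_{a q}\right]_{A}}{2\left[S_{a q}\right]_{A}}=\frac{K_{\mathrm{D}}}{2}=2.0 \nonumber\]and the extraction behaves as if the partition coefficient were only half as large.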
If we know the solute’s equilibrium reactions within each phase and between the two phases, we can derive an algebraic relationship between KD and D.In a simple liquid–liquid extraction, the only reaction that affects the extraction efficiency is the solute’s partitioning between the two phases (Figure 7.7.1 ).In this case the distribution ratio and the partition coefficient are equal.\[D=\frac{\left[S_{o r g}\right]_{\text { total }}}{\left[S_{aq}\right]_{\text { total }}} = K_\text{D} = \frac {[S_{org}]} {[S_{aq}]} \label{7.1}\]Let’s assume the solute initially is present in the aqueous phase and that we wish to extract it into the organic phase. A conservation of mass requires that the moles of solute initially present in the aqueous phase equal the combined moles of solute in the aqueous phase and the organic phase after the extraction.\[\left(\operatorname{mol} \ S_{a q}\right)_{0}=\left(\operatorname{mol} \ S_{a q}\right)_{1}+\left(\operatorname{mol} \ S_{org}\right)_{1} \label{7.2}\]where the subscripts indicate the extraction number with 0 representing the system before the extraction and 1 the system following the first extraction. After the extraction, the solute’s concentration in the aqueous phase is\[\left[S_{a q}\right]_{1}=\frac{\left(\operatorname{mol} \ S_{a q}\right)_{1}}{V_{a q}} \label{7.3}\]and its concentration in the organic phase is\[\left[S_{o r g}\right]_{1}=\frac{\left(\operatorname{mol} \ S_{o r g}\right)_{1}}{V_{o r g}} \label{7.4}\]where Vaq and Vorg are the volumes of the aqueous phase and the organic phase. Solving Equation \ref{7.2} for (mol Sorg)1 and substituting into Equation \ref{7.4} leave us with\[\left[S_{o r g}\right]_{1} = \frac{\left(\operatorname{mol} \ S_{a q}\right)_{0}-\left(\operatorname{mol} \ S_{a q}\right)_{1}}{V_{o r g}} \label{7.5}\]Substituting Equation \ref{7.3} and Equation \ref{7.5} into Equation \ref{7.1} gives\[D = \frac {\frac {(\text{mol }S_{aq})_0-(\text{mol }S_{aq})_1} {V_{org}}} {\frac {(\text{mol }S_{aq})_1} {V_{aq}}} = \frac{\left(\operatorname{mol} \ S_{a q}\right)_{0} \times V_{a q}-\left(\operatorname{mol} \ S_{a q}\right)_{1} \times V_{a q}}{\left(\operatorname{mol} \ S_{a q}\right)_{1} \times V_{o r g}} \nonumber\]Rearranging and solving for the fraction of solute that remains in the aqueous phase after one extraction, (qaq)1, gives\[\left(q_{aq}\right)_{1} = \frac{\left(\operatorname{mol} \ S_{aq}\right)_{1}}{\left(\operatorname{mol} \ S_{a q}\right)_{0}} = \frac{V_{aq}}{D V_{o r g}+V_{a q}} \label{7.6}\]The fraction present in the organic phase after one extraction, (qorg)1, is\[\left(q_{o r g}\right)_{1}=\frac{\left(\operatorname{mol} S_{o r g}\right)_{1}}{\left(\operatorname{mol} S_{a q}\right)_{0}}=1-\left(q_{a q}\right)_{1}=\frac{D V_{o r g}}{D V_{o r g}+V_{a q}} \nonumber\]Example 7.7.1 shows how we can use Equation \ref{7.6} to calculate the efficiency of a simple liquid-liquid extraction.A solute has a KD between water and chloroform of 5.00. Suppose we extract a 50.00-mL sample of a 0.050 M aqueous solution of the solute using 15.00 mL of chloroform. (a) What is the separation’s extraction efficiency? 
(b) What volume of chloroform do we need if we wish to extract 99.9% of the solute?

Solution: For a simple liquid–liquid extraction the distribution ratio, D, and the partition coefficient, KD, are identical.

(a) The fraction of solute that remains in the aqueous phase after the extraction is given by Equation \ref{7.6}.\[\left(q_{aq}\right)_{1}=\frac{V_{a q}}{D V_{org}+V_{a q}}=\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}=0.400 \nonumber\]The fraction of solute in the organic phase is 1–0.400, or 0.600. Extraction efficiency is the percentage of solute that moves into the extracting phase; thus, the extraction efficiency is 60.0%.

(b) To extract 99.9% of the solute (qaq)1 must be 0.001. Solving Equation \ref{7.6} for Vorg, and making appropriate substitutions for (qaq)1 and Vaq gives\[V_{o r g}=\frac{V_{a q}-\left(q_{a q}\right)_{1} V_{a q}}{\left(q_{a q}\right)_{1} D}=\frac{50.00 \ \mathrm{mL}-(0.001)(50.00 \ \mathrm{mL})}{(0.001)(5.00)}=9990 \ \mathrm{mL} \nonumber\]This is a large volume of chloroform. Clearly, a single extraction is not reasonable under these conditions.

In Example 7.7.1 , a single extraction provides an extraction efficiency of only 60%. If we carry out a second extraction, the fraction of solute remaining in the aqueous phase, (qaq)2, is\[\left(q_{a q}\right)_{2}=\frac{\left(\operatorname{mol} \ S_{a q}\right)_{2}}{\left(\operatorname{mol} \ S_{a q}\right)_{1}}=\frac{V_{a q}}{D V_{org}+V_{a q}} \nonumber\]If Vaq and Vorg are the same for both extractions, then the cumulative fraction of solute that remains in the aqueous layer after two extractions, (Qaq)2, is the product of (qaq)1 and (qaq)2, or\[\left(Q_{aq}\right)_{2}=\frac{\left(\operatorname{mol} \ S_{aq}\right)_{2}}{\left(\operatorname{mol} \ S_{aq}\right)_{0}}=\left(q_{a q}\right)_{1} \times\left(q_{a q}\right)_{2}=\left(\frac{V_{a q}}{D V_{o r g}+V_{a q}}\right)^{2} \nonumber\]In general, for a series of n identical extractions, the fraction of analyte that remains in the aqueous phase after the last extraction is\[\left(Q_{a q}\right)_{n}=\left(\frac{V_{a q}}{D V_{o r g}+V_{a q}}\right)^{n} \label{7.7}\]

For the extraction described in Example 7.7.1 , determine (a) the extraction efficiency for two identical extractions and for three identical extractions; and (b) the number of extractions required to ensure that we extract 99.9% of the solute.

Solution: (a) The fraction of solute remaining in the aqueous phase after two extractions and three extractions is\[\left(Q_{aq}\right)_{2}=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{2}=0.160 \nonumber\]\[\left(Q_{a q}\right)_{3}=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{3}=0.0640 \nonumber\]The extraction efficiencies are 84.0% for two extractions and 93.6% for three extractions.

(b) To determine the minimum number of extractions for an efficiency of 99.9%, we set (Qaq)n to 0.001 and solve for n using Equation \ref{7.7}.\[0.001=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{n}=(0.400)^{n} \nonumber\]Taking the log of both sides and solving for n \[\begin{aligned} \log (0.001) &=n \log (0.400) \\ n &=7.54 \end{aligned} \nonumber\]we find that a minimum of eight extractions is necessary.

The last two examples provide us with an important observation: for any extraction efficiency, we need less solvent if we complete several extractions using smaller portions of solvent instead of one extraction using a larger volume of solvent. For the conditions in Example 7.7.1 and Example 7.7.2 , an extraction efficiency of 99.9% requires one extraction with 9990 mL of chloroform, or 120 mL when using eight 15-mL portions of chloroform. Although extraction efficiency increases dramatically with the first few extractions, the effect diminishes quickly as we increase the number of extractions (Figure 7.7.2 ). In most cases there is little improvement in extraction efficiency after five or six extractions. For the conditions in Example 7.7.2 , we reach an extraction efficiency of 99% after five extractions and need three additional extractions to obtain the extra 0.9% increase in extraction efficiency.

To plan a liquid–liquid extraction we need to know the solute's distribution ratio between the two phases. One approach is to carry out the extraction on a solution that contains a known amount of solute. After the extraction, we isolate the organic phase and allow it to evaporate, leaving behind the solute. In one such experiment, 1.235 g of a solute with a molar mass of 117.3 g/mol is dissolved in 10.00 mL of water. After extracting with 5.00 mL of toluene, 0.889 g of the solute is recovered in the organic phase. (a) What is the solute's distribution ratio between water and toluene? (b) If we extract 20.00 mL of an aqueous solution that contains the solute using 10.00 mL of toluene, what is the extraction efficiency? (c) How many extractions will we need to recover 99.9% of the solute?

(a) The solute's distribution ratio between water and toluene is\[D=\frac{\left[S_{o r g}\right]}{\left[S_{a q}\right]}=\frac{0.889 \ \mathrm{g} \times \frac{1 \ \mathrm{mol}}{117.3 \ \mathrm{g}} \times \frac{1}{0.00500 \ \mathrm{L}}}{(1.235 \ \mathrm{g}-0.889 \ \mathrm{g}) \times \frac{1 \ \mathrm{mol}}{117.3 \ \mathrm{g}} \times \frac{1}{0.01000 \ \mathrm{L}}}=5.14 \nonumber\]

(b) The fraction of solute remaining in the aqueous phase after one extraction is\[\left(q_{a q}\right)_{1}=\frac{V_{a q}}{D V_{org}+V_{a q}}=\frac{20.00 \ \mathrm{mL}}{(5.14)(10.00 \ \mathrm{mL})+20.00 \ \mathrm{mL}}=0.280 \nonumber\]The extraction efficiency, therefore, is 72.0%.

(c) To extract 99.9% of the solute requires\[\left(Q_{aq}\right)_{n}=0.001=\left(\frac{20.00 \ \mathrm{mL}}{(5.14)(10.00 \ \mathrm{mL})+20.00 \ \mathrm{mL}}\right)^{n}=(0.280)^{n} \nonumber\]\[\begin{aligned} \log (0.001) &=n \log (0.280) \\ n &=5.4 \end{aligned} \nonumber\]a minimum of six extractions.

As we see in Equation \ref{7.1}, in a simple liquid–liquid extraction the distribution ratio and the partition coefficient are identical. As a result, the distribution ratio does not depend on the composition of the aqueous phase or the organic phase. A change in the pH of the aqueous phase, for example, will not affect the solute's extraction efficiency when KD and D have the same value.
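The relationships in Equation \ref{7.6} and Equation \ref{7.7} are easy to evaluate numerically. The short sketch below assumes a simple extraction, for which D = KD, and reproduces the results of Example 7.7.1 and Example 7.7.2; it is offered only as an illustration.

```python
import math

def fraction_remaining(D, V_aq, V_org, n=1):
    """Fraction of solute left in the aqueous phase after n identical
    extractions (Equation 7.7); n = 1 is the single-extraction case."""
    return (V_aq / (D * V_org + V_aq)) ** n

def extractions_needed(D, V_aq, V_org, max_remaining):
    """Smallest number of identical extractions that leaves no more than
    max_remaining of the solute in the aqueous phase."""
    q = V_aq / (D * V_org + V_aq)
    return math.ceil(math.log(max_remaining) / math.log(q))

# Conditions from Example 7.7.1: D = 5.00, 50.00 mL aqueous phase, 15.00-mL portions
print(fraction_remaining(5.00, 50.00, 15.00))         # 0.400, i.e., 60.0% extracted
print(fraction_remaining(5.00, 50.00, 15.00, n=3))    # 0.064, i.e., 93.6% extracted
print(extractions_needed(5.00, 50.00, 15.00, 0.001))  # 8 extractions for 99.9%
```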
If the solute participates in one or more additional equilibrium reactions within a phase, then the distribution ratio and the partition coefficient may not be the same. For example, Figure 7.7.3 shows the equilibrium reactions that affect the extraction of the weak acid, HA, by an organic phase in which ionic species are not soluble.

In this case the partition coefficient and the distribution ratio are\[K_{\mathrm{D}}=\frac{\left[\mathrm{HA}_{org}\right]}{\left[\mathrm{HA}_{a q}\right]} \label{7.8}\]\[D=\frac{\left[\mathrm{HA}_{org}\right]_{\text { total }}}{\left[\mathrm{HA}_{a q}\right]_{\text { total }}} =\frac{\left[\mathrm{HA}_{org}\right]}{\left[\mathrm{HA}_{a q}\right]+\left[\mathrm{A}_{a q}^{-}\right]} \label{7.9}\]Because the position of an acid–base equilibrium depends on pH, the distribution ratio, D, is pH-dependent. To derive an equation for D that shows this dependence, we begin with the acid dissociation constant for HA.\[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}_{\mathrm{aq}}^{+}\right]\left[\mathrm{A}_{\mathrm{aq}}^{-}\right]}{\left[\mathrm{HA}_{\mathrm{aq}}\right]} \label{7.10}\]Solving Equation \ref{7.10} for the concentration of A– in the aqueous phase\[\left[\mathrm{A}_{a q}^{-}\right]=\frac{K_{\mathrm{a}} \times\left[\mathrm{HA}_{a q}\right]}{\left[\mathrm{H}_{3} \mathrm{O}_{a q}^{+}\right]} \nonumber\]and substituting into Equation \ref{7.9} gives\[D = \frac {[\text{HA}_{org}]} {[\text{HA}_{aq}] + \frac {K_a \times [\text{HA}_{aq}]}{[\text{H}_3\text{O}_{aq}^+]}} \nonumber\]Factoring [HAaq] from the denominator, replacing [HAorg]/[HAaq] with KD (Equation \ref{7.8}), and simplifying leaves us with the following relationship between the distribution ratio, D, and the pH of the aqueous solution.\[D=\frac{K_{\mathrm{D}}\left[\mathrm{H}_{3} \mathrm{O}_{aq}^{+}\right]}{\left[\mathrm{H}_{3} \mathrm{O}_{aq}^{+}\right]+K_{a}} \label{7.11}\]

An acidic solute, HA, has a Ka of \(1.00 \times 10^{-5}\) and a KD between water and hexane of 3.00. Calculate the extraction efficiency if we extract a 50.00 mL sample of a 0.025 M aqueous solution of HA, buffered to a pH of 3.00, with 50.00 mL of hexane. Repeat for pH levels of 5.00 and 7.00.

Solution: When the pH is 3.00, [\(\text{H}_3\text{O}_{aq}^+\)] is \(1.0 \times 10^{-3}\) and the distribution ratio is\[D=\frac{(3.00)\left(1.0 \times 10^{-3}\right)}{1.0 \times 10^{-3}+1.00 \times 10^{-5}}=2.97 \nonumber\]The fraction of solute that remains in the aqueous phase is\[\left(q_{aq}\right)_{1}=\frac{50.00 \ \mathrm{mL}}{(2.97)(50.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}=0.252 \nonumber\]The extraction efficiency, therefore, is almost 75%. The same calculation at a pH of 5.00 gives the extraction efficiency as 60%. At a pH of 7.00 the extraction efficiency is just 3%.

The extraction efficiency in Example 7.7.3 is greater at more acidic pH levels because HA is the solute's predominant form in the aqueous phase. At a more basic pH, where A– is the solute's predominant form, the extraction efficiency is smaller. A graph of extraction efficiency versus pH is shown in Figure 7.7.4 . Note that extraction efficiency essentially is independent of pH for pH levels more acidic than HA's pKa, and that it is essentially zero for pH levels more basic than HA's pKa. The greatest change in extraction efficiency occurs at pH levels where both HA and A– are predominant species. The ladder diagram for HA along the graph's x-axis helps illustrate this effect.
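Equation \ref{7.11} makes it straightforward to trace this pH dependence numerically. The following sketch assumes the equal-volume extraction described in Example 7.7.3 and simply reproduces its results; it is an illustration, not part of the original treatment.

```python
def distribution_ratio_weak_acid(K_D, K_a, pH):
    """Distribution ratio, D, for a weak acid HA (Equation 7.11)."""
    h3o = 10 ** (-pH)
    return K_D * h3o / (h3o + K_a)

def single_extraction_efficiency(D, V_aq, V_org):
    """Percent of solute extracted in one extraction (from Equation 7.6)."""
    q_aq = V_aq / (D * V_org + V_aq)
    return 100 * (1 - q_aq)

# Conditions from Example 7.7.3: K_D = 3.00, K_a = 1.00e-5, equal 50.00-mL volumes
for pH in (3.00, 5.00, 7.00):
    D = distribution_ratio_weak_acid(3.00, 1.00e-5, pH)
    eff = single_extraction_efficiency(D, 50.00, 50.00)
    print(f"pH {pH:.2f}: D = {D:.3f}, extraction efficiency = {eff:.1f}%")
# prints efficiencies of roughly 75%, 60%, and 3% at pH 3.00, 5.00, and 7.00
```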
The liquid–liquid extraction of the weak base B is governed by the following equilibrium reactions:\[\begin{array}{c}{\mathrm{B}(a q) \rightleftharpoons \mathrm{B}(org) \quad K_{D}=5.00} \\ {\mathrm{B}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{HB}^{+}(a q) \quad K_{b}=1.0 \times 10^{-4}}\end{array} \nonumber\]Derive an equation for the distribution ratio, D, and calculate the extraction efficiency if 25.0 mL of a 0.025 M solution of B, buffered to a pH of 9.00, is extracted with 50.0 mL of the organic solvent.

Because the weak base exists in two forms, only one of which extracts into the organic phase, the partition coefficient, KD, and the distribution ratio, D, are not identical.\[K_{\mathrm{D}}=\frac{\left[\mathrm{B}_{org}\right]}{\left[\mathrm{B}_{aq}\right]} \nonumber\]\[D = \frac {[\text{B}_{org}]_\text{total}} {[\text{B}_{aq}]_\text{total}} = \frac {[\text{B}_{org}]} {[\text{B}_{aq}] + [\text{HB}_{aq}^+]} \nonumber\]Using the Kb expression for the weak base\[K_{\mathrm{b}}=\frac{\left[\mathrm{OH}_{a q}^{-}\right]\left[\mathrm{HB}_{a q}^{+}\right]}{\left[\mathrm{B}_{a q}\right]} \nonumber\]we solve for the concentration of HB+ and substitute back into the equation for D, obtaining\[D = \frac {[\text{B}_{org}]} {[\text{B}_{aq}] + \frac {K_b \times [\text{B}_{aq}]} {[\text{OH}_{aq}^-]}} = \frac {[\text{B}_{org}]} {[\text{B}_{aq}]\left(1+\frac {K_b} {[\text{OH}_{aq}^-]} \right)} =\frac{K_{D}\left[\mathrm{OH}_{a q}^{-}\right]}{\left[\mathrm{OH}_{a q}^{-}\right]+K_{\mathrm{b}}} \nonumber\]At a pH of 9.0, the [OH–] is \(1 \times 10^{-5}\) M and the distribution ratio has a value of\[D=\frac{K_{D}\left[\mathrm{OH}_{a q}^{-}\right]}{\left[\mathrm{OH}_{aq}^{-}\right]+K_{\mathrm{b}}}=\frac{(5.00)\left(1.0 \times 10^{-5}\right)}{1.0 \times 10^{-5}+1.0 \times 10^{-4}}=0.455 \nonumber\]After one extraction, the fraction of B remaining in the aqueous phase is\[\left(q_{aq}\right)_{1}=\frac{25.00 \ \mathrm{mL}}{(0.455)(50.00 \ \mathrm{mL})+25.00 \ \mathrm{mL}}=0.524 \nonumber\]The extraction efficiency, therefore, is 47.6%. At a pH of 9, most of the weak base is present as HB+, which explains why the overall extraction efficiency is so poor.

One important application of a liquid–liquid extraction is the selective extraction of metal ions using an organic ligand. Unfortunately, many organic ligands are not very soluble in water or undergo hydrolysis or oxidation reactions in aqueous solutions. For these reasons the ligand is added to the organic solvent instead of the aqueous phase. Figure 7.7.5 shows the relevant equilibrium reactions (and equilibrium constants) for the extraction of Mn+ by the ligand HL, including the ligand's extraction into the aqueous phase (KD,HL), the ligand's acid dissociation reaction (Ka), the formation of the metal–ligand complex (\(\beta_n\)), and the complex's extraction into the organic phase (KD,c).

If the ligand's concentration is much greater than the metal ion's concentration, then the distribution ratio is\[D=\frac{\beta_{n} K_{\mathrm{D}, c}\left(K_{a}\right)^{n}\left(C_{\mathrm{HL}}\right)^{n}}{\left(K_{\mathrm{D}, \mathrm{HL}}\right)^{n}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{n}+\beta_{n}\left(K_{\mathrm{a}}\right)^{n}\left(C_{\mathrm{HL}}\right)^{n}} \label{7.12}\]where CHL is the ligand's initial concentration in the organic phase.
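Equation \ref{7.12} is straightforward to evaluate as a function of pH. The sketch below implements it as written; the constants used in the demonstration are those given in Example 7.7.4, which follows, and the sketch is meant only as an illustration.

```python
def metal_distribution_ratio(beta_n, K_Dc, K_a, K_DHL, C_HL, pH, n):
    """Distribution ratio, D, for the extraction of a metal ion by the ligand HL
    (Equation 7.12); C_HL is the ligand's initial concentration in the organic
    phase and n is the stoichiometric coefficient of the ligand in the complex."""
    h3o = 10 ** (-pH)
    numerator = beta_n * K_Dc * (K_a ** n) * (C_HL ** n)
    denominator = (K_DHL ** n) * (h3o ** n) + beta_n * (K_a ** n) * (C_HL ** n)
    return numerator / denominator

# Constants from Example 7.7.4 for a divalent metal ion (n = 2)
for pH in (1.00, 3.00):
    D = metal_distribution_ratio(2.5e16, 7.0e4, 5.0e-5, 1.0e4, 1.0e-4, pH, 2)
    print(f"pH {pH:.2f}: D = {D:.3g}")
# D is about 0.044 at a pH of 1.00 and about 435 at a pH of 3.00
```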
As shown in Example 7.7.4 , the extraction efficiency for metal ions shows a marked pH dependency.A liquid–liquid extraction of the divalent metal ion, M2+, uses the scheme outlined in Figure 7.7.5 . The partition coefficients for the ligand, KD,HL, and for the metal–ligand complex, KD,c, are \(1.0 \times 10^4\) and \(7.0 \times 10^4\), respectively. The ligand’s acid dissociation constant, Ka, is \(5.0 \times 10^{-5}\), and the formation constant for the metal–ligand complex, \(\beta_2\), is \(2.5 \times 10^{16}\). What is the extraction efficiency if we extract 100.0 mL of a \(1.0 \times 10^{-6}\) M aqueous solution of M2+, buffered to a pH of 1.00, with 10.00 mL of an organic solvent that is 0.1 mM in the chelating agent? Repeat the calculation at a pH of 3.00.SolutionWhen the pH is 1.00 the distribution ratio is\[D=\frac{\left(2.5 \times 10^{16}\right)\left(7.0 \times 10^{4}\right)\left(5.0 \times 10^{-5}\right)^{2}\left(1.0 \times 10^{-4}\right)^{2}}{\left(1.0 \times 10^{4}\right)^{2}(0.10)^{2}+\left(2.5 \times 10^{16}\right)\left(5.0 \times 10^{-5}\right)^{2}\left(1.0 \times 10^{-4}\right)^{2}} \nonumber\]or a D of 0.0438. The fraction of metal ion that remains in the aqueous phase is\[\left(Q_{aq}\right)_{1}=\frac{100.0 \ \mathrm{mL}}{(0.0438)(10.00 \ \mathrm{mL})+100.0 \ \mathrm{mL}}=0.996 \nonumber\]At a pH of 1.00, we extract only 0.40% of the metal into the organic phase. Changing the pH to 3.00, however, increases the extraction efficiency to 97.8%. Figure 7.7.6 shows how the pH of the aqueous phase affects the extraction efficiency for M2+.One advantage of using a ligand to extract a metal ion is the high degree of selectivity that it brings to a liquid–liquid extraction. As seen in Figure 7.7.6 , a divalent metal ion’s extraction efficiency increases from approximately 0% to 100% over a range of 2 pH units. Because a ligand’s ability to form a metal–ligand complex varies substantially from metal ion to metal ion, significant selectivity is possible if we carefully control the pH. Table 7.7.1 shows the minimum pH for extracting 99% of a metal ion from an aqueous solution using an equal volume of 4 mM dithizone in CCl4.Using Table 7.7.1 , explain how we can separate the metal ions in an aqueous mixture of Cu2+, Cd2+, and Ni2+ by extracting with an equal volume of dithizone in CCl4.SolutionFrom Table 7.7.1 , a quantitative separation of Cu2+ from Cd2+ and from Ni2+ is possible if we acidify the aqueous phase to a pH of less than 1. This pH is greater than the minimum pH for extracting Cu2+ and significantly less than the minimum pH for extracting either Cd2+ or Ni2+. After the extraction of Cu2+ is complete, we shift the pH of the aqueous phase to 4.0, which allows us to extract Cd2+ while leaving Ni2+ in the aqueous phase.This page titled 7.7: Liquid-Liquid Extractions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.8: Separation Versus Preconcentration
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.08%3A_Separation_Versus_Preconcentration
Two common analytical problems are matrix components that interfere with an analyte’s analysis and an analyte with a concentration that is too small to analyze accurately. As we have learned in this chapter, we can use a separation to solve the first problem. Interestingly, we often can use a separation to solve the second problem as well. For a separation in which we recover the analyte in a new phase, it may be possible to increase the analyte’s concentration if we can extract the analyte from a larger volume into a smaller volume. This step in an analytical procedure is known as a preconcentration.An example from the analysis of water samples illustrates how we can simultaneously accomplish a separation and a preconcentration. In the gas chromatographic analysis for organophosphorous pesticides in environmental waters, the analytes in a 1000-mL sample are separated from their aqueous matrix by a solid-phase extraction that uses 15 mL of ethyl acetate [Aguilar, C.; Borrul, F.; Marcé, R. M. LC•GC 1996, 14, 1048–1054]. After the extraction, the analytes in the ethyl acetate have a concentration that is 67 times greater than that in the original sample (assuming the extraction is 100% efficient).This page titled 7.8: Separation Versus Preconcentration is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.9: Problems
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.09%3A_Problems
1. Because of the risk of lead poisoning, the exposure of children to lead-based paint is a significant public health concern. The first step in the quantitative analysis of lead in dried paint chips is to dissolve the sample. Corl evaluated several dissolution techniques [Corl, W. E. Spectroscopy 1991, 6, 40–43]. Samples of paint were collected and then pulverized using a Pyrex mortar and pestle. Replicate portions of the powdered paint were taken for analysis. The following table shows results for a paint sample and for a standard reference material. Both samples and standards were digested with HNO3 on a hot plate.

Replicate    % w/w Pb in Sample    % w/w Pb in Standard
1            5.09                  11.48
2            6.29                  11.62
3            6.64                  11.47
4            4.63                  11.86

(a) Determine the overall variance, the variance due to the method, and the variance due to sampling. (b) What percentage of the overall variance is due to sampling? (c) How might you decrease the variance due to sampling?

2. To analyze a shipment of 100 barrels of an organic solvent, you plan to collect a single sample from each of 10 barrels selected at random. From which barrels should you collect samples if the first barrel is given by the twelfth entry in the random number table in Appendix 14, with subsequent barrels given by every third entry? Assume that entries in the random number table are arranged by rows.

3. The concentration of dissolved O2 in a lake shows a daily cycle from the effect of photosynthesis, and a yearly cycle due to seasonal changes in temperature. Suggest an appropriate systematic sampling plan to monitor the daily change in dissolved O2. Suggest an appropriate systematic sampling plan for monitoring the yearly change in dissolved O2.

4. The data in the following table were collected during a preliminary study of the pH of an industrial wastewater stream. Prepare a figure showing how the pH changes as a function of time and suggest an appropriate sampling frequency for a long-term monitoring program.

5. You have been asked to monitor the daily fluctuations in atmospheric ozone in the downtown area of a city to determine if there is a relationship between daily traffic patterns and ozone levels. (a) Which of the following sampling plans will you use and why: random, systematic, judgmental, systematic–judgmental, or stratified? (b) Do you plan to collect and analyze a series of grab samples, or will you form a single composite sample? (c) Will your answers to these questions change if your goal is to determine if the average daily ozone level exceeds a threshold value? If yes, then what is your new sampling strategy?

6. The distinction between a homogeneous population and a heterogeneous population is important when we develop a sampling plan. (a) Define homogeneous and heterogeneous. (b) If you collect and analyze a single sample, can you determine if the population is homogeneous or is heterogeneous?

7. Beginning with equation 7.2.2, derive equation 7.2.3. Assume that the particles are spherical with a radius of r and a density of d.

8. The sampling constant for the radioisotope 24Na in homogenized human liver is approximately 35 g [Kratochvil, B.; Taylor, J. K. Anal. Chem. 1981, 53, 924A–938A]. (a) What is the expected relative standard deviation for sampling if we analyze 1.0-g samples? (b) How many 1.0-g samples must we analyze to obtain a maximum sampling error of ±5% at the 95% confidence level?

9. Engels and Ingamells reported the following results for the % w/w K2O in a mixture of amphibolite and orthoclase [Engels, J. C.; Ingamells, C. O. Geochim. Cosmochim.
Acta 1970, 34, 1007–1017].

0.247    0.300    0.247    0.275
0.258    0.311    0.258    0.330

Each of the 12 samples had a nominal mass of 0.1 g. Using these data, calculate the approximate value for Ks, and then, using this value for Ks, determine the nominal mass of sample needed to achieve a percent relative standard deviation of 2%.

10. The following data were reported for the determination of KH2PO4 in a mixture of KH2PO4 and NaCl [Guy, R. D.; Ramaley, L.; Wentzell, P. D. J. Chem. Educ. 1998, 75, 1028–1033].

actual mass (g)    % w/w KH2PO4
0.2515             0.847
0.2465             0.598
0.2770             0.431
0.2460             0.842
0.2485             0.964
0.2590             1.178
0.5084             1.009
0.4954             0.947
0.5286             0.618
0.5232             0.744
0.4965             0.572
0.4995             0.709
1.027              0.987
0.991              0.998
0.997

(a) Prepare a graph of % w/w KH2PO4 vs. the actual sample mass. Is this graph consistent with your understanding of the factors that affect sampling variance? (b) For each nominal mass, calculate the percent relative standard deviation, Rexp, based on the data. The value of Ks for this analysis is estimated as 350. Use this value of Ks to determine the theoretical percent relative standard deviation, Rtheo, due to sampling. Considering these calculations, what is your conclusion about the importance of indeterminate sampling errors for this analysis? (c) For each nominal mass, convert Rtheo to an absolute standard deviation. Plot points on your graph that correspond to ±1 absolute standard deviations about the overall average % w/w KH2PO4 for all samples. Draw smooth curves through these two sets of points. Does the sample appear homogeneous on the scale at which it is sampled?

11. In this problem you will collect and analyze data to simulate the sampling process. Obtain a pack of M&M's (or other similar candy). Collect a sample of five candies and count the number that are red (or any other color of your choice). Report the result of your analysis as % red. Return the candies to the bag, mix thoroughly, and repeat the analysis for a total of 20 determinations. Calculate the mean and the standard deviation for your data. Remove all candies from the bag and determine the true % red for the population. Sampling in this exercise should follow binomial statistics. Calculate the expected mean value and the expected standard deviation, and compare to your experimental results.

12. Determine the error (\(\alpha = 0.05\)) for the following situations. In each case assume that the variance for a single determination is 0.0025 and that the variance for collecting a single sample is 0.050. (a) Nine samples are collected, each analyzed once. (b) One sample is collected and analyzed nine times. (c) Five samples are collected, each analyzed twice.

13. Which of the sampling schemes in problem 12 is best if you wish to limit the overall error to less than ±0.30 and the cost to collect a single sample is $1 and the cost to analyze a single sample is $10? Which is the best sampling scheme if the cost to collect a single sample is $7 and the cost to analyze a single sample is $3?

14. Maw, Witry, and Emond evaluated a microwave digestion method for Hg against the standard open-vessel digestion method [Maw, R.; Witry, L.; Emond, T. Spectroscopy 1994, 9, 39–41]. The standard method requires a 2-hr digestion and is operator-intensive. The microwave digestion is complete in approximately 0.5 hr and requires little monitoring by the operator. Samples of baghouse dust from air-pollution-control equipment were collected from a hazardous waste incinerator and digested in triplicate before determining the concentration of Hg in ppm.
Results are summarized in the following two tables. Does the microwave digestion method yield acceptable results when compared to the standard digestion method?

15. Simpson, Apte, and Batley investigated methods for preserving water samples collected from anoxic (O2-poor) environments that have high concentrations of dissolved sulfide [Simpson, S. L.; Apte, S. C.; Batley, G. E. Anal. Chem. 1998, 70, 4202–4205]. They found that preserving water samples with HNO3 (a common method for preserving aerobic samples) gave significant negative determinate errors when analyzing for Cu2+. Preserving samples by first adding H2O2 and then adding HNO3 eliminated the determinate error. Explain their observations.

16. In a particular analysis the selectivity coefficient, KA,I, is 0.816. When a standard sample with an analyte-to-interferent ratio of 5:1 is carried through the analysis, the error when determining the analyte is +6.3%. (a) Determine the apparent recovery for the analyte if RI = 0. (b) Determine the apparent recovery for the interferent if RA = 0.

17. The amount of Co in an ore is determined using a procedure for which Fe is an interferent. To evaluate the procedure's accuracy, a standard sample of ore known to have a Co/Fe ratio of 10.2 is analyzed. When pure samples of Co and Fe are taken through the procedure the following calibration relationships are obtained\[S_{\mathrm{Co}}=0.786 \times m_{\mathrm{Co}} \text { and } S_{\mathrm{Fe}}=0.699 \times m_{\mathrm{Fe}} \nonumber\]where S is the signal and m is the mass of Co or Fe. When 278.3 mg of Co are taken through the separation step, 275.9 mg are recovered. Only 3.6 mg of Fe are recovered when a 184.9 mg sample of Fe is carried through the separation step. Calculate (a) the recoveries for Co and Fe; (b) the separation factor; (c) the selectivity ratio; (d) the error if no attempt is made to separate the Co and Fe; (e) the error if the separation step is carried out; and (f) the maximum possible recovery for Fe if the recovery for Co is 1.00 and the maximum allowed error is 0.05%.

18. The amount of calcium in a sample of urine is determined by a method for which magnesium is an interferent. The selectivity coefficient, KCa,Mg, for the method is 0.843. When a sample with a Mg/Ca ratio of 0.50 is carried through the procedure, an error of \(-3.7 \%\) is obtained. The error is +5.5% when using a sample with a Mg/Ca ratio of 2.0. (a) Determine the recoveries for Ca and Mg. (b) What is the expected error for a urine sample in which the Mg/Ca ratio is 10.0?

19. Using the formation constants in Appendix 12, show that F– is an effective masking agent for preventing a reaction between Al3+ and EDTA. Assume that the only significant forms of fluoride and EDTA are F– and Y4–.

20. Cyanide is frequently used as a masking agent for metal ions. Its effectiveness as a masking agent is better in more basic solutions. Explain the reason for this dependence on pH.

21. Explain how we can separate an aqueous sample that contains Cu2+, Sn4+, Pb2+, and Zn2+ into its component parts by adjusting the pH of the solution.

22. A solute, S, has a distribution ratio between water and ether of 7.5. Calculate the extraction efficiency if we extract a 50.0-mL aqueous sample of S using 50.0 mL of ether as (a) a single portion of 50.0 mL; (b) two portions, each of 25.0 mL; (c) four portions, each of 12.5 mL; and (d) five portions, each of 10.0 mL. Assume the solute is not involved in any secondary equilibria.
23. What volume of ether is needed to extract 99.9% of the solute in problem 22 when using (a) 1 extraction; (b) 2 extractions; (c) four extractions; and (d) five extractions?

24. What is the minimum distribution ratio if 99% of the solute in a 50.0-mL sample is extracted using a single 50.0-mL portion of an organic solvent? Repeat for the case where two 25.0-mL portions of the organic solvent are used.

25. A weak acid, HA, with a Ka of \(1.0 \times 10^{-5}\) has a partition coefficient, KD, of \(1.2 \times 10^3\) between water and an organic solvent. What restriction on the sample's pH is necessary to ensure that 99.9% of the weak acid in a 50.0-mL sample is extracted using a single 50.0-mL portion of the organic solvent?

26. For problem 25, how many extractions are needed if the sample's pH cannot be decreased below 7.0?

27. A weak base, B, with a Kb of \(1.0 \times 10^{-3}\) has a partition coefficient, KD, of \(5.0 \times 10^2\) between water and an organic solvent. What restriction on the sample's pH is necessary to ensure that 99.9% of the weak base in a 50.0-mL sample is extracted when using two 25.0-mL portions of the organic solvent?

28. A sample contains a weak acid analyte, HA, and a weak acid interferent, HB. The acid dissociation constants and the partition coefficients for the weak acids are Ka,HA = \(1.0 \times 10^{-3}\), Ka,HB = \(1.0 \times 10^{-7}\), KD,HA = KD,HB = \(5.0 \times 10^2\). (a) Calculate the extraction efficiency for HA and HB when a 50.0-mL sample, buffered to a pH of 7.0, is extracted using 50.0 mL of the organic solvent. (b) Which phase is enriched in the analyte? (c) What are the recoveries for the analyte and the interferent in this phase? (d) What is the separation factor? (e) A quantitative analysis is conducted on the phase enriched in analyte. What is the expected relative error if the selectivity coefficient, KHA,HB, is 0.500 and the initial ratio of HB/HA is 10.0?

29. The relevant equilibria for the extraction of I2 from an aqueous solution of KI into an organic phase are shown below. (a) Is the extraction efficiency for I2 better at higher or at lower concentrations of I–? (b) Derive an expression for the distribution ratio for this extraction.

30. The relevant equilibria for the extraction of the metal-ligand complex ML2 from an aqueous solution into an organic phase are shown below. (a) Derive an expression for the distribution ratio for this extraction. (b) Calculate the extraction efficiency when a 50.0-mL aqueous sample that is 0.15 mM in M2+ and 0.12 M in L– is extracted using 25.0 mL of the organic phase. Assume that KD is 10.3 and that \(\beta_2\) is 560.

31. Derive equation 7.7.12 for the extraction scheme outlined in figure 7.7.5.

32. The following information is available for the extraction of Cu2+ by CCl4 and dithizone: KD,c = \(7 \times 10^4\); \(\beta_2 = 5 \times 10^{22}\); Ka,HL = \(3 \times 10^{-5}\); KD,HL = \(1.1 \times 10^4\); and n = 2. What is the extraction efficiency if a 100.0-mL sample of an aqueous solution that is \(1.0 \times 10^{-7}\) M Cu2+ and 1 M in HCl is extracted using 10.0 mL of CCl4 containing \(4.0 \times 10^{-4}\) M dithizone (HL)?
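For the weak acid extractions in problems 25–28, the distribution ratio depends on the sample's pH. The sketch below is a minimal illustration, assuming that only the neutral weak acid partitions into the organic phase, so that the conditional distribution ratio is \(D = K_D[\mathrm{H_3O^+}]/([\mathrm{H_3O^+}] + K_a)\); the pH values scanned are arbitrary choices.

```python
def D_weak_acid(K_D, K_a, pH):
    # conditional distribution ratio when only the neutral acid HA partitions
    h = 10 ** (-pH)
    return K_D * h / (h + K_a)

def efficiency(D, V_aq, V_org, n=1):
    # percent of solute extracted after n identical extractions
    q = (V_aq / (D * V_org + V_aq)) ** n
    return 100 * (1 - q)

# problem 25: Ka = 1.0e-5 and KD = 1.2e3; one 50.0-mL extraction of a 50.0-mL sample
for pH in (3.0, 4.0, 5.0, 6.0, 7.0):
    D = D_weak_acid(1.2e3, 1.0e-5, pH)
    print(f"pH {pH}: D = {D:.1f}, efficiency = {efficiency(D, 50.0, 50.0):.2f}%")
```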
33. Cupferron is a ligand whose strong affinity for metal ions makes it useful as a chelating agent in liquid–liquid extractions. The following table provides pH-dependent distribution ratios for the extraction of Hg2+, Pb2+, and Zn2+ from an aqueous solution to an organic solvent. (a) Suppose you have a 50.0-mL sample of an aqueous solution that contains Hg2+, Pb2+, and Zn2+. Describe how you can separate these metal ions. (b) Under the conditions for your extraction of Hg2+, what percent of the Hg2+ remains in the aqueous phase after three 50.0-mL extractions with the organic solvent? (c) Under the conditions for your extraction of Pb2+, what is the minimum volume of organic solvent needed to extract 99.5% of the Pb2+ in a single extraction? (d) Under the conditions for your extraction of Zn2+, how many extractions are needed to remove 99.5% of the Zn2+ if each extraction uses 25.0 mL of organic solvent?

This page titled 7.9: Problems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.10: Additional Resources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.10%3A_Additional_Resources
The following set of experiments and class exercises introduce students to the importance of sampling on the quality of analytical results.

The following experiments describe homemade sampling devices for collecting samples in the field.

The following experiments introduce students to methods for extracting analytes from their matrix.

The following papers provide a general introduction to the terminology used in describing sampling.

Further information on the statistics of sampling is covered in the following papers and textbooks.

The process of collecting a sample presents a variety of difficulties, particularly with respect to the analyte's integrity. The following papers provide representative examples of sampling problems.

The following texts and articles provide additional information on methods for separating analytes and interferents.

This page titled 7.10: Additional Resources is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.11: Chapter Summary and Key Terms
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.11%3A_Chapter_Summary_and_Key_Terms
An analysis requires a sample and how we acquire that sample is critical. The samples we collect must accurately represent their target population, and our sampling plan must provide a sufficient number of samples of appropriate size so that uncertainty in sampling does not limit the precision of our analysis.

A complete sampling plan requires several considerations, including the type of sample to collect (random, judgmental, systematic, systematic–judgmental, stratified, or convenience); whether to collect grab samples, composite samples, or in situ samples; whether the population is homogeneous or heterogeneous; the appropriate size for each sample; and the number of samples to collect.

Removing a sample from its population may induce a change in its composition due to a chemical or physical process. For this reason, we collect samples in inert containers and we often preserve them at the time of collection.

When an analytical method's selectivity is insufficient, we may need to separate the analyte from potential interferents. Such separations take advantage of physical properties—such as size, mass or density—or chemical properties. Important examples of chemical separations include masking, distillation, and extractions.

Key terms: centrifugation, convenience sampling, distillation, extraction efficiency, grab sample, homogeneous, laboratory sample, Nyquist theorem, purge-and-trap, recrystallization, secondary equilibrium reaction, size exclusion chromatography, sublimation, systematic–judgmental sampling, composite sample, density gradient centrifugation, distribution ratio, filtrate, gross sample, in situ sampling, masking, partition coefficient, random sampling, retentate, selectivity coefficient, Soxhlet extractor, subsamples, systematic sampling, coning and quartering, dialysis, extraction, filtration, heterogeneous, judgmental sampling, masking agents, preconcentration, recovery, sampling plan, separation factor, stratified sampling, supercritical fluid, target population.

This page titled 7.11: Chapter Summary and Key Terms is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
8.1: Overview of Gravimetric Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/08%3A_Gravimetric_Methods/8.01%3A_Overview_of_Gravimetric_Methods
Before we consider specific gravimetric methods, let’s take a moment to develop a broad survey of gravimetry. Later, as you read through the descriptions of specific gravimetric methods, this survey will help you focus on their similarities instead of their differences. It is easier to understand a new analytical method when you can see its relationship to other similar methods.Suppose we are to determine the total suspended solids in the water released by a sewage-treatment facility. Suspended solids are just that: solid matter that has yet to settle out of its solution matrix. The analysis is easy. After collecting a sample, we pass it through a preweighed filter that retains the suspended solids, and then dry the filter and solids to remove any residual moisture. The mass of suspended solids is the difference between the filter’s final mass and its original mass. We call this a direct analysis because the analyte—the suspended solids in this example—is the species that is weighed.Method 2540D in Standard Methods for the Examination of Waters and Wastewaters, 20th Edition (American Public Health Association, 1998) provides an approved method for determining total suspended solids. The method uses a glass-fiber filter to retain the suspended solids. After filtering the sample, the filter is dried to a constant weight at 103–105oC.What if our analyte is an aqueous ion, such as Pb2+? Because the analyte is not a solid, we cannot isolate it by filtration. We can still measure the analyte’s mass directly if we first convert it into a solid form. If we suspend a pair of Pt electrodes in the sample and apply a sufficiently positive potential between them for a long enough time, we can convert the Pb2+ to PbO2, which deposits on the Pt anode.\[\mathrm{Pb}^{2+}(a q)+4 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{PbO}_{2}(s)+\mathrm{H}_{2}(g)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q) \nonumber\]If we weigh the anode before and after we apply the potential, its change in mass gives the mass of PbO2 and, from the reaction’s stoichiometry, the amount of Pb2+ in the sample. This is a direct analysis because PbO2 contains the analyte.Sometimes it is easier to remove the analyte and let a change in mass serve as the analytical signal. Suppose we need to determine a food’s moisture content. One approach is to heat a sample of the food to a temperature that will vaporize water and capture the water vapor using a preweighed absorbent trap. The change in the absorbent’s mass provides a direct determination of the amount of water in the sample. An easier approach is to weigh the sample of food before and after we heat it and use the change in its mass to determine the amount of water originally present. We call this an indirect analysis because we determine the analyte, H2O in this case, using a signal that is proportional its disappearance.Method 925.10 in Official Methods of Analysis, 18th Edition (AOAC International, 2007) provides an approved method for determining the moisture content of flour. A preweighed sample is heated for one hour in a 130oC oven and transferred to a desiccator while it cools to room temperature. The loss in mass gives the amount of water in the sample.The indirect determination of a sample’s moisture content is made by measuring a change in mass. The sample’s initial mass includes the water, but its final mass does not. We can also determine an analyte indirectly without its being weighed. 
For example, phosphite, \(\text{PO}_3^{3-}\), reduces Hg2+ to \(\text{Hg}_2^{2+}\), which in the presence of Cl– precipitates as Hg2Cl2.\[2 \mathrm{HgCl}_{2}(a q)+\mathrm{PO}_{3}^{3-}(a q) +3 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{Hg}_{2} \mathrm{Cl}_{2}(s)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q)+2 \mathrm{Cl}^{-}(a q)+\mathrm{PO}_{4}^{3-}(a q) \nonumber\]If we add HgCl2 in excess to a sample that contains phosphite, each mole of \(\text{PO}_3^{3-}\) will produce one mole of Hg2Cl2. The precipitate’s mass, therefore, provides an indirect measurement of the amount of \(\text{PO}_3^{3-}\) in the original sample.The examples in the previous section illustrate four different ways in which a measurement of mass may serve as an analytical signal. When the signal is the mass of a precipitate, we call the method precipitation gravimetry. The indirect determination of \(\text{PO}_3^{3-}\) by precipitating Hg2Cl2 is an example, as is the direct determination of Cl– by precipitating AgCl.In electrogravimetry, we deposit the analyte as a solid film on an electrode in an electrochemical cell. The deposition as PbO2 at a Pt anode is one example of electrogravimetry. The reduction of Cu2+ to Cu at a Pt cathode is another example of electrogravimetry.We will not consider electrogravimetry in this chapter. See Chapter 11 on electrochemical methods of analysis for a further discussion of electrogravimetry.When we use thermal or chemical energy to remove a volatile species, we call the method volatilization gravimetry. In determining the moisture content of bread, for example, we use thermal energy to vaporize the water in the sample. To determine the amount of carbon in an organic compound, we use the chemical energy of combustion to convert it to CO2.Finally, in particulate gravimetry we determine the analyte by separating it from the sample’s matrix using a filtration or an extraction. The determination of total suspended solids is one example of particulate gravimetry.An accurate gravimetric analysis requires that the analytical signal—whether it is a mass or a change in mass—is proportional to the amount of analyte in our sample. For all gravimetric methods this proportionality involves a conservation of mass. If the method relies on one or more chemical reactions, then we must know the stoichiometry of the reactions. In the analysis of \(\text{PO}_3^{3-}\) described earlier, for example, we know that each mole of Hg2Cl2 corresponds to a mole of \(\text{PO}_3^{3-}\). If we remove the analyte from its matrix, then the separation must be selective for the analyte. When determining the moisture content in bread, for example, we know that the mass of H2O in the bread is the difference between the sample’s final mass and its initial mass.We will return to this concept of applying a conservation of mass later in the chapter when we consider specific examples of gravimetric methods.Except for particulate gravimetry, which is the most trivial form of gravimetry, you probably will not use gravimetry after you complete this course. Why, then, is familiarity with gravimetry still important? The answer is that gravimetry is one of only a small number of definitive techniques whose measurements require only base SI units, such as mass or the mole, and defined constants, such as Avogadro’s number and the mass of 12C. Ultimately, we must be able to trace the result of any analysis to a definitive technique, such as gravimetry, that we can relate to fundamental physical properties [Valacárcel, M.; Ríos, A. 
Analyst 1995, 120, 2291–2297]. Although most analysts never use gravimetry to validate their results, they often verify an analytical method by analyzing a standard reference material whose composition is traceable to a definitive technique [(a) Moody, J. R.; Epstein, M. S. Spectrochim. Acta 1991, 46B, 1571–1575; (b) Epstein, M. S. Spectrochim. Acta 1991, 46B, 1583–1591].

Other examples of definitive techniques are coulometry and isotope-dilution mass spectrometry. Coulometry is discussed in Chapter 11. Isotope-dilution mass spectrometry is beyond the scope of this textbook; however, you will find some suggested readings in this chapter's Additional Resources.

This page titled 8.1: Overview of Gravimetric Methods is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
8.2: Precipitation Gravimetry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/08%3A_Gravimetric_Methods/8.02%3A_Precipitation_Gravimetry
In precipitation gravimetry an insoluble compound forms when we add a precipitating reagent, or precipitant, to a solution that contains our analyte. In most cases the precipitate is the product of a simple metathesis reaction between the analyte and the precipitant; however, any reaction that generates a precipitate potentially can serve as a gravimetric method.

Most precipitation gravimetric methods were developed in the nineteenth century, or earlier, often for the analysis of ores. To minimize solubility losses we must control the conditions under which the precipitate forms, which requires that we account for every equilibrium that affects the precipitate's solubility. For example, we can determine Ag+ gravimetrically by using Cl– as a precipitant, which forms a precipitate of AgCl.

\[\mathrm{Ag}^{+}(a q)+\mathrm{Cl}^{-}(a q)\rightleftharpoons\mathrm{AgCl}(s) \label{8.1}\]

If this is the only reaction we consider, then we predict that the precipitate's solubility, SAgCl, is given by the following equation.

\[S_{\mathrm{AgCl}}=\left[\mathrm{Ag}^{+}\right]=\frac{K_{\mathrm{sp}}}{\left[\mathrm{Cl}^{-}\right]} \label{8.2}\]

Equation \ref{8.2} suggests that we can minimize solubility losses by adding a large excess of Cl–. In fact, as shown in Figure 8.2.1, adding a large excess of Cl– increases the precipitate's solubility.

To understand why the solubility of AgCl is more complicated than the relationship suggested by Equation \ref{8.2}, we must recall that Ag+ also forms a series of soluble silver-chloro metal–ligand complexes.

\[\operatorname{Ag}^{+}(a q)+\mathrm{Cl}^{-}(a q)\rightleftharpoons\operatorname{AgCl}(a q) \quad \log K_{1}=3.70 \label{8.3}\]

\[\operatorname{AgCl}(a q)+\mathrm{Cl}^{-}(a q)\rightleftharpoons\operatorname{AgCl}_{2}^{-}(a q) \quad \log K_{2}=1.92 \label{8.4}\]

\[\mathrm{AgCl}_{2}^{-}(a q)+\mathrm{Cl}^{-}(a q)\rightleftharpoons\mathrm{AgCl}_{3}^{2-}(a q) \quad \log K_{3}=0.78 \label{8.5}\]

Note the difference between reaction \ref{8.3}, in which we form AgCl(aq) as a product, and reaction \ref{8.1}, in which we form AgCl(s) as a product. The formation of AgCl(aq) from AgCl(s)

\[\operatorname{AgCl}(s)\rightleftharpoons\operatorname{AgCl}(a q) \nonumber\]

is called AgCl's intrinsic solubility.

The actual solubility of AgCl is the sum of the equilibrium concentrations for all soluble forms of Ag+.

\[S_{\mathrm{AgCl}}=\left[\mathrm{Ag}^{+}\right]+[\mathrm{AgCl}(a q)]+\left[\mathrm{AgCl}_{2}^-\right]+\left[\mathrm{AgCl}_{3}^{2-}\right] \label{8.6}\]

By substituting into Equation \ref{8.6} the equilibrium constant expressions for reaction \ref{8.1} and reactions \ref{8.3}–\ref{8.5}, we can define the solubility of AgCl as

\[S_\text{AgCl} = \frac {K_\text{sp}} {[\text{Cl}^-]} + K_1K_\text{sp} + K_1K_2K_\text{sp}[\text{Cl}^-]+K_1K_2K_3K_\text{sp}[\text{Cl}^-]^2 \label{8.7}\]

Equation \ref{8.7} explains the solubility curve for AgCl shown in Figure 8.2.1. As we add NaCl to a solution of Ag+, the solubility of AgCl initially decreases because of reaction \ref{8.1}. Under these conditions, the final three terms in Equation \ref{8.7} are small and Equation \ref{8.2} is sufficient to describe AgCl's solubility. For higher concentrations of Cl–, reaction \ref{8.4} and reaction \ref{8.5} increase the solubility of AgCl. Clearly the equilibrium concentration of chloride is important if we wish to determine the concentration of silver by precipitating AgCl. In particular, we must avoid a large excess of chloride.

The predominate silver-chloro complexes for different values of pCl are shown by the ladder diagram along the x-axis in Figure 8.2.1. Note that the increase in solubility begins when the higher-order soluble complexes of \(\text{AgCl}_2^-\) and \(\text{AgCl}_3^{2-}\) are the predominate species.
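A short calculation makes the shape of this solubility curve concrete. The sketch below is a minimal example that evaluates Equation \ref{8.7} over a range of pCl values; because this section does not give a numerical Ksp for AgCl, the commonly tabulated value of \(1.8 \times 10^{-10}\) is assumed here for illustration.

```python
# formation constants from reactions 8.3-8.5; Ksp for AgCl is an assumed value
K1, K2, K3 = 10 ** 3.70, 10 ** 1.92, 10 ** 0.78
Ksp = 1.8e-10

def S_AgCl(pCl):
    # solubility of AgCl from equation 8.7 at a given pCl = -log[Cl-]
    Cl = 10 ** (-pCl)
    return Ksp / Cl + K1 * Ksp + K1 * K2 * Ksp * Cl + K1 * K2 * K3 * Ksp * Cl ** 2

for pCl in (7, 5, 3, 1, 0):
    print(f"pCl = {pCl}: S(AgCl) = {S_AgCl(pCl):.2e} M")
```

The output shows the behavior described above: the solubility passes through a minimum at intermediate pCl and rises again when the concentration of Cl– is large enough to favor the higher-order complexes.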
Another important parameter that may affect a precipitate's solubility is pH. For example, a hydroxide precipitate, such as Fe(OH)3, is more soluble at lower pH levels where the concentration of OH– is small. Because fluoride is a weak base, the solubility of calcium fluoride, \(S_{\text{CaF}_2}\), also is pH-dependent. We can derive an equation for \(S_{\text{CaF}_2}\) by considering the following equilibrium reactions

\[\mathrm{CaF}_{2}(s)\rightleftharpoons \mathrm{Ca}^{2+}(a q)+2 \mathrm{F}^{-}(a q) \quad K_{\mathrm{sp}}=3.9 \times 10^{-11} \label{8.8}\]

\[\mathrm{HF}(a q)+\mathrm{H}_{2} \mathrm{O}(l )\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{F}^{-}(a q) \quad K_{\mathrm{a}}=6.8 \times 10^{-4} \label{8.9}\]

and the following equation for the solubility of CaF2.

\[S_{\mathrm{CaF}_{2}}=\left[\mathrm{Ca}^{2+}\right]=\frac{1}{2}\left\{\left[\mathrm{F}^{-}\right]+[\mathrm{HF}]\right\} \label{8.10}\]

Be sure that Equation \ref{8.10} makes sense to you. Reaction \ref{8.8} tells us that the dissolution of CaF2 produces one mole of Ca2+ for every two moles of F–, which explains the term of 1/2 in Equation \ref{8.10}. Because F– is a weak base, we must account for both chemical forms in solution, which explains why we include HF.

Substituting the equilibrium constant expressions for reaction \ref{8.8} and reaction \ref{8.9} into Equation \ref{8.10} allows us to define the solubility of CaF2 in terms of the equilibrium concentration of H3O+.

\[S_{\mathrm{CaF}_{2}}=\left[\mathrm{Ca}^{2+}\right]=\left\{\frac{K_{\mathrm{sp}}}{4}\left(1+\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]}{K_{\mathrm{a}}}\right)^{2}\right\}^{1 / 3} \label{8.11}\]

Figure 8.2.2 shows how pH affects the solubility of CaF2. Depending on the solution's pH, the predominate form of fluoride is either HF or F–. When the pH is greater than 4.17, the predominate species is F– and the solubility of CaF2 is independent of pH because only reaction \ref{8.8} occurs to an appreciable extent. At more acidic pH levels, the solubility of CaF2 increases because of the contribution of reaction \ref{8.9}.

You can use a ladder diagram to predict the conditions that will minimize a precipitate's solubility. Draw a ladder diagram for oxalic acid, H2C2O4, and use it to predict the range of pH values that will minimize the solubility of CaC2O4. Relevant equilibrium constants are in the appendices.

The solubility reaction for CaC2O4 is

\[\mathrm{CaC}_{2} \mathrm{O}_{4}(s)\rightleftharpoons \mathrm{Ca}^{2+}(a q)+\mathrm{C}_{2} \mathrm{O}_{4}^{2-}(a q) \nonumber\]

To minimize solubility, the pH must be sufficiently basic that oxalate, \(\text{C}_2\text{O}_4^{2-}\), does not react to form \(\text{HC}_2\text{O}_4^{-}\) or H2C2O4. The ladder diagram for oxalic acid, including approximate buffer ranges, is shown below. Maintaining a pH greater than 5.3 ensures that \(\text{C}_2\text{O}_4^{2-}\) is the only important form of oxalic acid in solution, minimizing the solubility of CaC2O4.

When solubility is a concern, it may be possible to decrease solubility by using a non-aqueous solvent. A precipitate's solubility generally is greater in an aqueous solution because of water's ability to stabilize ions through solvation. The poorer solvating ability of a non-aqueous solvent, even those that are polar, leads to a smaller solubility product. For example, the Ksp of PbSO4 is \(2 \times 10^{-8}\) in H2O and \(2.6 \times 10^{-12}\) in a 50:50 mixture of H2O and ethanol.
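The pH dependence that Equation \ref{8.11} predicts is easy to confirm numerically. The following sketch is a minimal example using the Ksp and Ka values given above for CaF2; the pH values chosen are arbitrary.

```python
def S_CaF2(pH, Ksp=3.9e-11, Ka=6.8e-4):
    # molar solubility of CaF2 from equation 8.11 at a given pH
    h = 10 ** (-pH)
    return ((Ksp / 4) * (1 + h / Ka) ** 2) ** (1 / 3)

for pH in (1, 2, 3, 4, 5, 7):
    print(f"pH = {pH}: S(CaF2) = {S_CaF2(pH):.2e} M")
```

As expected, the computed solubility is essentially constant above a pH of about 4 and rises sharply in more acidic solutions.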
In addition to having a low solubility, a precipitate must be free from impurities. Because precipitation usually occurs in a solution that is rich in dissolved solids, the initial precipitate often is impure. To avoid a determinate error, we must remove these impurities before we determine the precipitate's mass.

The greatest source of impurities is chemical and physical interactions that take place at the precipitate's surface. A precipitate generally is crystalline—even if only on a microscopic scale—with a well-defined lattice of cations and anions. Those cations and anions at the precipitate's surface carry, respectively, a positive or a negative charge because they have incomplete coordination spheres. In a precipitate of AgCl, for example, each silver ion in the precipitate's interior is bound to six chloride ions. A silver ion at the surface, however, is bound to no more than five chloride ions and carries a partial positive charge (Figure 8.2.3). The presence of these partial charges makes the precipitate's surface an active site for the chemical and physical interactions that produce impurities.

One common impurity is an inclusion, in which a potential interferent, whose size and charge are similar to a lattice ion, can substitute into the lattice structure if the interferent precipitates with the same crystal structure (Figure 8.2.4a). The probability of forming an inclusion is greatest when the interfering ion's concentration is substantially greater than the lattice ion's concentration. An inclusion does not decrease the amount of analyte that precipitates, provided that the precipitant is present in sufficient excess. Thus, the precipitate's mass always is larger than expected.

An inclusion is difficult to remove since it is chemically part of the precipitate's lattice. The only way to remove an inclusion is through reprecipitation, in which we isolate the precipitate from its supernatant solution, dissolve the precipitate by heating in a small portion of a suitable solvent, and then reform the precipitate by allowing the solution to cool. Because the interferent's concentration after dissolving the precipitate is less than that in the original solution, the amount of included material decreases upon reprecipitation. We can repeat the process of reprecipitation until the inclusion's mass is insignificant. The loss of analyte during reprecipitation, however, is a potential source of determinate error.

Suppose that 10% of an interferent forms an inclusion during each precipitation. When we initially form the precipitate, 10% of the original interferent is present as an inclusion. After the first reprecipitation, 10% of the included interferent remains, which is 1% of the original interferent. A second reprecipitation decreases the interferent to 0.1% of the original amount.

An occlusion forms when an interfering ion is trapped within the growing precipitate. Unlike an inclusion, which is randomly dispersed within the precipitate, an occlusion is localized, either along flaws within the precipitate's lattice structure or within aggregates of individual precipitate particles (Figure 8.2.4b). An occlusion usually increases a precipitate's mass; however, the precipitate's mass is smaller if the occlusion includes the analyte in a lower molecular weight form than that of the precipitate.

We can minimize an occlusion by maintaining the precipitate in equilibrium with its supernatant solution for an extended time, a process called digestion.
During a digestion, the dynamic nature of the solubility–precipitation equilibria, in which the precipitate dissolves and reforms, ensures that the occlusion eventually is reexposed to the supernatant solution. Because the rates of dissolution and reprecipitation are slow, there is less opportunity for forming new occlusions.

After precipitation is complete the surface continues to attract ions from solution (Figure 8.2.4c). These surface adsorbates comprise a third type of impurity. We can minimize surface adsorption by decreasing the precipitate's available surface area. One benefit of digestion is that it increases a precipitate's average particle size. Because the probability that a particle will dissolve completely is inversely proportional to its size, during digestion larger particles increase in size at the expense of smaller particles. One consequence of forming a smaller number of larger particles is an overall decrease in the precipitate's surface area. We also can remove surface adsorbates by washing the precipitate, although we cannot ignore the potential loss of analyte.

Inclusions, occlusions, and surface adsorbates are examples of coprecipitates—otherwise soluble species that form along with the precipitate that contains the analyte. Another type of impurity is an interferent that forms an independent precipitate under the conditions of the analysis. For example, the precipitation of nickel dimethylglyoxime requires a slightly basic pH. Under these conditions any Fe3+ in the sample will precipitate as Fe(OH)3. In addition, because most precipitants rarely are selective toward a single analyte, there is a risk that the precipitant will react with both the analyte and an interferent.

In addition to forming a precipitate with Ni2+, dimethylglyoxime also forms precipitates with Pd2+ and Pt2+. These cations are potential interferents in an analysis for nickel.

We can minimize the formation of additional precipitates by controlling solution conditions. If an interferent forms a precipitate that is less soluble than the analyte's precipitate, we can precipitate the interferent and remove it by filtration, leaving the analyte behind in solution. Alternatively, we can mask the analyte or the interferent to prevent its precipitation. Both of the approaches outlined above are illustrated in Fresenius' analytical method for the determination of Ni in ores that contain Pb2+, Cu2+, and Fe3+. Dissolving the ore in the presence of H2SO4 selectively precipitates Pb2+ as PbSO4. Treating the resulting supernatant with H2S precipitates Cu2+ as CuS. After removing the CuS by filtration, ammonia is added to precipitate Fe3+ as Fe(OH)3. Nickel, which forms a soluble amine complex, remains in solution.

Masking was introduced in Chapter 7.

Size matters when it comes to forming a precipitate. Larger particles are easier to filter and, as noted earlier, a smaller surface area means there is less opportunity for surface adsorbates to form. By controlling the reaction conditions we can significantly increase a precipitate's average particle size.

The formation of a precipitate consists of two distinct events: nucleation, the initial formation of smaller, stable particles of the precipitate, and particle growth. Larger particles form when the rate of particle growth exceeds the rate of nucleation.
Understanding the conditions that favor particle growth is important when we design a gravimetric method of analysis.We define a solute’s relative supersaturation, RSS, as\[R S S=\frac{Q-S}{S} \label{8.12}\]where Q is the solute’s actual concentration and S is the solute’s concentration at equilibrium [Von Weimarn, P. P. Chem. Revs. 1925, 2, 217–242]. The numerator of Equation \ref{8.12}, Q – S, is a measure of the solute’s supersaturation. A solution with a large, positive value of RSS has a high rate of nucleation and produces a precipitate with many small particles. When the RSS is small, precipitation is more likely to occur by particle growth than by nucleation.A supersaturated solution is one that contains more dissolved solute than that predicted by equilibrium chemistry. A supersaturated solution is inherently unstable and precipitates solute to reach its equilibrium position. How quickly precipitation occurs depends, in part, on the value of RSS.Equation \ref{8.12} suggests that we can minimize RSS if we decrease the solute’s concentration, Q, or if we increase the precipitate’s solubility, S. A precipitate’s solubility usually increases at higher temperatures and adjusting pH may affect a precipitate’s solubility if it contains an acidic or a basic ion. Temperature and pH, therefore, are useful ways to increase the value of S. Forming the precipitate in a dilute solution of analyte or adding the precipitant slowly and with vigorous stirring are ways to decrease the value of Q. There are practical limits to minimizing RSS. Some precipitates, such as Fe(OH)3 and PbS, are so insoluble that S is very small and a large RSS is unavoidable. Such solutes inevitably form small particles. In addition, conditions that favor a small RSS may lead to a relatively stable supersaturated solution that requires a long time to precipitate fully. For example, almost a month is required to form a visible precipitate of BaSO4 under conditions in which the initial RSS is 5 [Bassett, J.; Denney, R. C.; Jeffery, G. H. Mendham. J. Vogel’s Textbook of Quantitative Inorganic Analysis, Longman: London, 4th Ed., 1981, p. 408].A visible precipitate takes longer to form when RSS is small both because there is a slow rate of nucleation and because there is a steady decrease in RSS as the precipitate forms. One solution to the latter problem is to generate the precipitant in situ as the product of a slow chemical reaction, which effectively maintains a constant RSS. Because the precipitate forms under conditions of low RSS, initial nucleation produces a small number of particles. As additional precipitant forms, particle growth supersedes nucleation, which results in larger particles of precipitate. This process is called a homogeneous precipitation [Gordon, L.; Salutsky, M. L.; Willard, H. H. Precipitation from Homogeneous Solution, Wiley: NY, 1959].Two general methods are used for homogeneous precipitation. If the precipitate’s solubility is pH-dependent, then we can mix the analyte and the precipitant under conditions where precipitation does not occur, and then increase or decrease the pH by chemically generating OH– or H3O+. 
For example, the hydrolysis of urea, CO(NH2)2, is a source of OH– because of the following two reactions.\[\mathrm{CO}\left(\mathrm{NH}_{2}\right)_{2}(a q)+\mathrm{H}_{2} \mathrm{O}( l)\rightleftharpoons2 \mathrm{NH}_{3}(a q)+\mathrm{CO}_{2}(g) \nonumber\]\[\mathrm{NH}_{3}(a q)+\mathrm{H}_{2} \mathrm{O}( l)\rightleftharpoons\mathrm{OH}^{-}(a q)+\mathrm{NH}_{4}^{+}(a q) \nonumber\]Because the hydrolysis of urea is temperature-dependent—the rate is negligible at room temperature—we can use temperature to control the rate of hydrolysis and the rate of precipitate formation. Precipitates of CaC2O4, for example, have been produced by this method. After dissolving a sample that contains Ca2+, the solution is made acidic with HCl before adding a solution of 5% w/v (NH4)2C2O4. Because the solution is acidic, a precipitate of CaC2O4 does not form. The solution is heated to approximately 50oC and urea is added. After several minutes, a precipitate of CaC2O4 begins to form, with precipitation reaching completion in about 30 min.In the second method of homogeneous precipitation, the precipitant is generated by a chemical reaction. For example, Pb2+ is precipitated homogeneously as PbCrO4 by using bromate, \(\text{BrO}_3^-\), to oxidize Cr3+ to \(\text{CrO}_4^{2-}\).\[6 \mathrm{BrO}_{3}^{-}(a q)+10 \mathrm{Cr}^{3+}(a q)+22 \mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons 3 \mathrm{Br}_{2}(a q)+10 \mathrm{CrO}_{4}^{2-}(a q)+44 \mathrm{H}^{+}(a q) \nonumber\]Figure 8.2.5 shows the result of preparing PbCrO4 by direct addition of K2CrO4 (Beaker A) and by homogenous precipitation (Beaker B). Both beakers contain the same amount of PbCrO4. Because the direct addition of K2CrO4 leads to rapid precipitation and the formation of smaller particles, the precipitate remains less settled than the precipitate prepared homogeneously. Note, as well, the difference in the color of the two precipitates.The effect of particle size on color is well-known to geologists, who use a streak test to help identify minerals. The color of a bulk mineral and its color when powdered often are different. Rubbing a mineral across an unglazed porcelain plate leaves behind a small streak of the powdered mineral. Bulk samples of hematite, Fe2O3, are black in color, but its streak is a familiar rust-red. Crocite, the mineral PbCrO4, is red-orange in color; its streak is orange-yellow.A homogeneous precipitation produces large particles of precipitate that are relatively free from impurities. These advantages, however, are offset by the increased time needed to produce the precipitate and by a tendency for the precipitate to deposit as a thin film on the container’s walls. The latter problem is particularly severe for hydroxide precipitates generated using urea.An additional method for increasing particle size deserves mention. When a precipitate’s particles are electrically neutral they tend to coagulate into larger particles that are easier to filter. Surface adsorption of excess lattice ions, however, provides the precipitate’s particles with a net positive or a net negative surface charge. Electrostatic repulsion between particles of similar charge prevents them from coagulating into larger particles.Let’s use the precipitation of AgCl from a solution of AgNO3 using NaCl as a precipitant to illustrate this effect. Early in the precipitation, when NaCl is the limiting reagent, excess Ag+ ions chemically adsorb to the AgCl particles, forming a positively charged primary adsorption layer (Figure 8.2.6 a). 
The solution in contact with this layer contains more inert anions, \(\text{NO}_3^-\) in this case, than inert cations, Na+, giving a secondary adsorption layer with a negative charge that balances the primary adsorption layer’s positive charge. The solution outside the secondary adsorption layer remains electrically neutral. Coagulation cannot occur if the secondary adsorption layer is too thick because the individual particles of AgCl are unable to approach each other closely enough.We can induce coagulation in three ways: by decreasing the number of chemically adsorbed Ag+ ions, by increasing the concentration of inert ions, or by heating the solution. As we add additional NaCl, precipitating more of the excess Ag+, the number of chemically adsorbed silver ions decreases and coagulation occurs (Figure 8.2.6 b). Adding too much NaCl, however, creates a primary adsorption layer of excess Cl– with a loss of coagulation.The coagulation and decoagulation of AgCl as we add NaCl to a solution of AgNO3 can serve as an endpoint for a titration. See Chapter 9 for additional details.A second way to induce coagulation is to add an inert electrolyte, which increases the concentration of ions in the secondary adsorption layer (Figure 8.2.6 c). With more ions available, the thickness of the secondary absorption layer decreases. Particles of precipitate may now approach each other more closely, which allows the precipitate to coagulate. The amount of electrolyte needed to cause spontaneous coagulation is called the critical coagulation concentration.Heating the solution and the precipitate provides a third way to induce coagulation. As the temperature increases, the number of ions in the primary adsorption layer decreases, which lowers the precipitate’s surface charge. In addition, heating increases the particles’ kinetic energy, allowing them to overcome the electrostatic repulsion that prevents coagulation at lower temperatures.After precipitating and digesting a precipitate, we separate it from solution by filtering. The most common filtration method uses filter paper, which is classified according to its speed, its size, and its ash content on ignition. Speed, or how quickly the supernatant passes through the filter paper, is a function of the paper’s pore size. A larger pore size allows the supernatant to pass more quickly through the filter paper, but does not retain small particles of precipitate. Filter paper is rated as fast (retains particles larger than 20–25 μm), medium–fast (retains particles larger than 16 μm), medium (retains particles larger than 8 μm), and slow (retains particles larger than 2–3 μm). The proper choice of filtering speed is important. If the filtering speed is too fast, we may fail to retain some of the precipitate, which causes a negative determinate error. On the other hand, the precipitate may clog the pores if we use a filter paper that is too slow.A filter paper’s size is just its diameter. Filter paper comes in many sizes, including 4.25 cm, 7.0 cm, 11.0 cm, 12.5 cm, 15.0 cm, and 27.0 cm. Choose a size that fits comfortably into your funnel. For a typical 65-mm long-stem funnel, 11.0 cm and 12.5 cm filter paper are good choices.Because filter paper is hygroscopic, it is not easy to dry it to a constant weight. When accuracy is important, the filter paper is removed before we determine the precipitate’s mass. 
After transferring the precipitate and filter paper to a covered crucible, we heat the crucible to a temperature that coverts the paper to CO2(g) and H2O(g), a process called ignition.Igniting a poor quality filter paper leaves behind a residue of inorganic ash. For quantitative work, use a low-ash filter paper. This grade of filter paper is pretreated with a mixture of HCl and HF to remove inorganic materials. Quantitative filter paper typically has an ash content of less than 0.010% w/w.Gravity filtration is accomplished by folding the filter paper into a cone and placing it in a long-stem funnel (Figure 8.2.7 ). To form a tight seal between the filter cone and the funnel, we dampen the paper with water or supernatant and press the paper to the wall of the funnel. When prepared properly, the funnel’s stem fills with the supernatant, increasing the rate of filtration.The precipitate is transferred to the filter in several steps. The first step is to decant the majority of the supernatant through the filter paper without transferring the precipitate (Figure 8.2.8 ). This prevents the filter paper from clogging at the beginning of the filtration process. The precipitate is rinsed while it remains in its beaker, with the rinsings decanted through the filter paper. Finally, the precipitate is transferred onto the filter paper using a stream of rinse solution. Any precipitate that clings to the walls of the beaker is transferred using a rubber policeman (a flexible rubber spatula attached to the end of a glass stirring rod).An alternative method for filtering a precipitate is to use a filtering crucible. The most common option is a fritted-glass crucible that contains a porous glass disk filter. Fritted-glass crucibles are classified by their porosity: coarse (retaining particles larger than 40–60 μm), medium (retaining particles greater than 10–15 μm), and fine (retaining particles greater than 4–5.5 μm). Another type of filtering crucible is the Gooch crucible, which is a porcelain crucible with a perforated bottom. A glass fiber mat is placed in the crucible to retain the precipitate. For both types of crucibles, the pre- cipitate is transferred in the same manner described earlier for filter paper. Instead of using gravity, the supernatant is drawn through the crucible with the assistance of suction from a vacuum aspirator or pump (Figure 8.2.9 ).Because the supernatant is rich with dissolved inert ions, we must remove residual traces of supernatant without incurring loss of analyte due to solubility. In many cases this simply involves the use of cold solvents or rinse solutions that contain organic solvents such as ethanol. The pH of the rinse solution is critical if the precipitate contains an acidic or a basic ion. When coagulation plays an important role in determining particle size, adding a volatile inert electrolyte to the rinse solution prevents the precipitate from reverting into smaller particles that might pass through the filter. This process of reverting to smaller particles is called peptization. The volatile electrolyte is removed when drying the precipitate.In general, we can minimize the loss of analyte if we use several small portions of rinse solution instead of a single large volume. Testing the used rinse solution for the presence of an impurity is another way to guard against over-rinsing the precipitate. For example, if Cl– is a residual ion in the supernatant, we can test for its presence using AgNO3. 
After we collect a small portion of the rinse solution, we add a few drops of AgNO3 and look for the presence or absence of a precipitate of AgCl. If a precipitate forms, then we know Cl– is present and continue to rinse the precipitate. Additional rinsing is not needed if the AgNO3 does not produce a precipitate.After separating the precipitate from its supernatant solution, we dry the precipitate to remove residual traces of rinse solution and to remove any volatile impurities. The temperature and method of drying depend on the method of filtration and the precipitate’s desired chemical form. Placing the precipitate in a laboratory oven and heating to a temperature of 110oC is sufficient to remove water and other easily volatilized impurities. Higher temperatures require a muffle furnace, a Bunsen burner, or a Meker burner, and are necessary if we need to decompose the precipitate before its weight is determined.Because filter paper absorbs moisture, we must remove it before we weigh the precipitate. This is accomplished by folding the filter paper over the precipitate and transferring both the filter paper and the precipitate to a porcelain or platinum crucible. Gentle heating first dries and then chars the filter paper. Once the paper begins to char, we slowly increase the temperature until there is no trace of the filter paper and any remaining carbon is oxidized to CO2.Fritted-glass crucibles can not withstand high temperatures and are dried in an oven at a temperature below 200oC. The glass fiber mats used in Gooch crucibles can be heated to a maximum temperature of approximately 500oC.For a quantitative application, the final precipitate must have a well-defined composition. A precipitate that contains volatile ions or substantial amounts of hydrated water, usually is dried at a temperature that completely removes these volatile species. For example, one standard gravimetric method for the determination of magnesium involves its precipitation as MgNH4PO4•6H2O. Unfortunately, this precipitate is difficult to dry at lower temperatures without losing an inconsistent amount of hydrated water and ammonia. Instead, the precipitate is dried at a temperature greater than 1000oC where it decomposes to magnesium pyrophosphate, Mg2P2O7.An additional problem is encountered if the isolated solid is nonstoichiometric. For example, precipitating Mn2+ as Mn(OH)2 and heating frequently produces a nonstoichiometric manganese oxide, MnOx, where x varies between one and two. In this case the nonstoichiometric product is the result of forming a mixture of oxides with different oxidation state of manganese. Other nonstoichiometric compounds form as a result of lattice defects in the crystal structure [Ward, R., ed., Non-Stoichiometric Compounds (Ad. Chem. Ser. 39), American Chemical Society: Washington, D. C., 1963].The best way to appreciate the theoretical and practical details discussed in this section is to carefully examine a typical precipitation gravimetric method. Although each method is unique, the determination of Mg2+ in water and wastewater by precipitating MgNH4PO4• 6H2O and isolating Mg2P2O7 provides an instructive example of a typical procedure. The description here is based on Method 3500-Mg D in Standard Methods for the Examination of Water and Wastewater, 19th Ed., American Public Health Asso- ciation: Washington, D. C., 1995. 
With the publication of the 20th Edition in 1998, this method is no longer listed as an approved method.Description of MethodMagnesium is precipitated as MgNH4PO4•6H2O using (NH4)2HPO4 as the precipitant. The precipitate’s solubility in a neutral solution is relatively high (0.0065 g/100 mL in pure water at 10oC), but it is much less soluble in the presence of dilute ammonia (0.0003 g/100 mL in 0.6 M NH3). Because the precipitant is not selective, a preliminary separation of Mg2+ from potential interferents is necessary. Calcium, which is the most significant interferent, is removed by precipitating it as CaC2O4. The presence of excess ammonium salts from the precipitant, or from the addition of too much ammonia, leads to the formation of Mg(NH4)4(PO4)2, which forms Mg(PO3)2 after drying. The precipitate is isolated by gravity filtration, using a rinse solution of dilute ammonia. After filtering, the precipitate is converted to Mg2P2O7 and weighed.ProcedureTransfer a sample that contains no more than 60 mg of Mg2+ into a 600-mL beaker. Add 2–3 drops of methyl red indicator, and, if necessary, adjust the volume to 150 mL. Acidify the solution with 6 M HCl and add 10 mL of 30% w/v (NH4)2HPO4. After cooling and with constant stirring, add concentrated NH3 dropwise until the methyl red indicator turns yellow (pH > 6.3). After stirring for 5 min, add 5 mL of concentrated NH3 and continue to stir for an additional 10 min. Allow the resulting solution and precipitate to stand overnight. Isolate the precipitate by filtering through filter paper, rinsing with 5% v/v NH3. Dissolve the precipitate in 50 mL of 10% v/v HCl and precipitate a second time following the same procedure. After filtering, carefully remove the filter paper by charring. Heat the precipitate at 500oC until the residue is white, and then bring the precipitate to constant weight at 1100oC.Questions1. Why does the procedure call for a sample that contains no more than 60 mg of Mg2+?A 60-mg portion of Mg2+ generates approximately 600 mg of MgNH4PO4•6H2O, which is a substantial amount of precipitate. A larger quantity of precipitate is difficult to filter and difficult to rinse free of impurities.2. Why is the solution acidified with HCl before we add the precipitant?The HCl ensures that MgNH4PO4 • 6H2O does not precipitate immediately upon adding the precipitant. Because \(\text{PO}_4^{3-}\) is a weak base, the precipitate is soluble in a strongly acidic solution. If we add the precipitant under neutral or basic conditions (that is, a high RSS), then the resulting precipitate will consist of smaller, less pure particles. Increasing the pH by adding base allows the precipitate to form under more favorable (that is, a low RSS) conditions.3. Why is the acid–base indicator methyl red added to the solution?The indicator changes color at a pH of approximately 6.3, which indicates that there is sufficient NH3 to neutralize the HCl added at the beginning of the procedure. The amount of NH3 is crucial to this procedure. If we add insufficient NH3, then the solution is too acidic, which increases the precipitate’s solubility and leads to a negative determinate error. If we add too much NH3, the precipitate may contain traces of Mg(NH4)4(PO4)2, which, on drying, forms Mg(PO3)2 instead of Mg2P2O7. This increases the mass of the ignited precipitate, and gives a positive determinate error. After adding enough NH3 to neutralize the HCl, we add an additional 5 mL of NH3 to complete the quantitative precipitation of MgNH4PO4 • 6H2O.4. 
Explain why forming Mg(PO3)2 instead of Mg2P2O7 increases the precipitate's mass.

Each mole of Mg2P2O7 contains two moles of magnesium and each mole of Mg(PO3)2 contains only one mole of magnesium. A conservation of mass, therefore, requires that two moles of Mg(PO3)2 form in place of each mole of Mg2P2O7. One mole of Mg2P2O7 weighs 222.6 g. Two moles of Mg(PO3)2 weigh 364.5 g. Any replacement of Mg2P2O7 with Mg(PO3)2 must increase the precipitate's mass.

5. What additional steps, beyond those discussed in questions 2 and 3, help improve the precipitate's purity?

Two additional steps in the procedure help to form a precipitate that is free of impurities: digestion and reprecipitation.

6. Why is the precipitate rinsed with a solution of 5% v/v NH3?

This is done for the same reason that the precipitation is carried out in an ammoniacal solution; using dilute ammonia minimizes solubility losses when we rinse the precipitate.

Although no longer a common analytical technique, precipitation gravimetry still provides a reliable approach for assessing the accuracy of other methods of analysis, or for verifying the composition of standard reference materials. In this section we review the general application of precipitation gravimetry to the analysis of inorganic and organic compounds.

Table 8.2.1 provides a summary of precipitation gravimetric methods for inorganic cations and anions. Several methods for the homogeneous generation of precipitants are shown in Table 8.2.2. The majority of inorganic precipitants show poor selectivity for the analyte. Many organic precipitants, however, are selective for one or two inorganic ions. Table 8.2.3 lists examples of several common organic precipitants.

Precipitation gravimetry continues to be listed as a standard method for the determination of \(\text{SO}_4^{2-}\) in water and wastewater analysis [Method 4500-SO42– C and Method 4500-SO42– D as published in Standard Methods for the Examination of Waters and Wastewaters, 20th Ed., American Public Health Association: Washington, D. C., 1998]. Precipitation is carried out using BaCl2 in an acidic solution (adjusted with HCl to a pH of 4.5–5.0) to prevent the precipitation of BaCO3 or Ba3(PO4)2, and at a temperature near the solution's boiling point. The precipitate is digested at 80–90oC for at least two hours. Ashless filter paper pulp is added to the precipitate to aid in its filtration. After filtering, the precipitate is ignited to constant weight at 800oC. Alternatively, the precipitate is filtered through a fine porosity fritted glass crucible (without adding filter paper pulp), and dried to constant weight at 105oC. This procedure is subject to a variety of errors, including occlusions of Ba(NO3)2, BaCl2, and alkali sulfates.

Other standard methods for the determination of sulfate in water and wastewater include ion chromatography (see Chapter 12), capillary ion electrophoresis (see Chapter 12), turbidimetry (see Chapter 10), and flow injection analysis (see Chapter 13).

Several organic functional groups or heteroatoms can be determined using precipitation gravimetric methods. Table 8.2.4 provides a summary of several representative examples. Note that the determination of alkoxy functional groups is an indirect analysis in which the functional group reacts with an excess of HI and the unreacted I– is determined by precipitating it as AgCl.

The stoichiometry of a precipitation reaction provides a mathematical relationship between the analyte and the precipitate.
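This stoichiometric relationship is the basis for every calculation in a precipitation gravimetric analysis. As a simple, hypothetical illustration, the sketch below converts the mass of a BaSO4 precipitate into the %w/w sulfate in a sample; the sample and precipitate masses are invented for the example, and the formula weights are typical tabulated values.

```python
def wt_percent_analyte(g_precip, fw_precip, fw_analyte, n_analyte, g_sample):
    # %w/w analyte, assuming each mole of precipitate contains n_analyte moles of analyte
    g_analyte = g_precip * (n_analyte * fw_analyte) / fw_precip
    return 100 * g_analyte / g_sample

# hypothetical example: 0.4105 g of BaSO4 isolated from a 0.5000-g sample
# FW BaSO4 = 233.39 g/mol, FW SO4(2-) = 96.06 g/mol, 1 mol SO4(2-) per mol BaSO4
print(f"{wt_percent_analyte(0.4105, 233.39, 96.06, 1, 0.5000):.2f}% w/w sulfate")
```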
Because a precipitation gravimetric method may involve additional chemical reactions to bring the analyte into a different chemical form, knowing the stoichiometry of the precipitation reaction is not always sufficient. Even if you do not have a complete set of balanced chemical reactions, you can use a conservation of mass to deduce the mathematical relationship between the analyte and the precipitate. The following example demonstrates this approach for the direct analysis of a single analyte.

To determine the amount of magnetite, Fe3O4, in an impure ore, a 1.5419-g sample is dissolved in concentrated HCl, resulting in a mixture of Fe2+ and Fe3+. After adding HNO3 to oxidize Fe2+ to Fe3+ and diluting with water, Fe3+ is precipitated as Fe(OH)3 using NH3. Filtering, rinsing, and igniting the precipitate provides 0.8525 g of pure Fe2O3. Calculate the %w/w Fe3O4 in the sample.

Solution

A conservation of mass requires that the precipitate of Fe2O3 contain all iron originally in the sample of ore. We know there are 2 moles of Fe per mole of Fe2O3 (FW = 159.69 g/mol) and 3 moles of Fe per mole of Fe3O4 (FW = 231.54 g/mol); thus

\[0.8525 \ \mathrm{g} \ \mathrm{Fe}_{2} \mathrm{O}_{3} \times \frac{2 \ \mathrm{mol} \ \mathrm{Fe}}{159.69 \ \mathrm{g} \ \mathrm{Fe}_{2} \mathrm{O}_{3}} \times \frac{231.54 \ \mathrm{g} \ \mathrm{Fe}_{3} \mathrm{O}_{4}}{3 \ \mathrm{mol} \ \mathrm{Fe}}=0.82405 \ \mathrm{g} \ \mathrm{Fe}_{3} \mathrm{O}_{4} \nonumber\]

The % w/w Fe3O4 in the sample, therefore, is

\[\frac{0.82405 \ \mathrm{g} \ \mathrm{Fe}_{3} \mathrm{O}_{4}}{1.5419 \ \mathrm{g} \ \text { sample }} \times 100=53.44 \% \nonumber\]

A 0.7336-g sample of an alloy that contains copper and zinc is dissolved in 8 M HCl and diluted to 100 mL in a volumetric flask. In one analysis, the zinc in a 25.00-mL portion of the solution is precipitated as ZnNH4PO4, and isolated as Zn2P2O7, yielding 0.1163 g. The copper in a separate 25.00-mL portion of the solution is treated to precipitate CuSCN, yielding 0.2383 g. Calculate the %w/w Zn and the %w/w Cu in the sample.

A conservation of mass requires that all zinc in the alloy is found in the final product, Zn2P2O7. We know there are 2 moles of Zn per mole of Zn2P2O7; thus

\[0.1163 \ \mathrm{g} \ \mathrm{Zn}_{2} \mathrm{P}_{2} \mathrm{O}_{7} \times \frac{2 \ \mathrm{mol} \ \mathrm{Zn}}{304.70 \ \mathrm{g}\ \mathrm{Zn}_{2} \mathrm{P}_{2} \mathrm{O}_{7}} \times \frac{65.38 \ \mathrm{g} \ \mathrm{Zn}}{\mathrm{mol} \ \mathrm{Zn}}=0.04991 \ \mathrm{g} \ \mathrm{Zn}\nonumber\]

This is the mass of Zn in 25% of the sample (a 25.00 mL portion of the 100.0 mL total volume). The %w/w Zn, therefore, is

\[\frac{0.04991 \ \mathrm{g} \ \mathrm{Zn} \times 4}{0.7336 \ \mathrm{g} \text { sample }} \times 100=27.21 \% \ \mathrm{w} / \mathrm{w} \mathrm{Zn} \nonumber\]

For copper, we find that

\[\begin{array}{c}{0.2383 \ \mathrm{g} \ \mathrm{CuSCN} \times \frac{1 \ \mathrm{mol} \ \mathrm{Cu}}{121.63 \ \mathrm{g} \ \mathrm{CuSCN}} \times \frac{63.55 \ \mathrm{g} \ \mathrm{Cu}}{\mathrm{mol} \ \mathrm{Cu}}=0.1245 \ \mathrm{g} \ \mathrm{Cu}} \\ {\frac{0.1245 \ \mathrm{g} \ \mathrm{Cu} \times 4}{0.7336 \ \mathrm{g} \text { sample }} \times 100=67.88 \% \ \mathrm{w} / \mathrm{w} \mathrm{Cu}}\end{array} \nonumber\]
In Practice Exercise 8.2.2 the sample contains two analytes. Because we can precipitate each analyte selectively, finding their respective concentrations is a straightforward stoichiometric calculation. But what if we cannot separately precipitate the two analytes? To find the concentrations of both analytes, we still need to generate two precipitates, at least one of which must contain both analytes. Although this complicates the calculations, we can still use a conservation of mass to solve the problem.

A 0.611-g sample of an alloy that contains Al and Mg is dissolved and treated to prevent interferences by the alloy’s other constituents. Aluminum and magnesium are precipitated using 8-hydroxyquinoline, which yields a mixed precipitate of Al(C9H6NO)3 and Mg(C9H6NO)2 that weighs 7.815 g. Igniting the precipitate converts it to a mixture of Al2O3 and MgO that weighs 1.002 g. Calculate the %w/w Al and %w/w Mg in the alloy. Solution: The masses of the solids provide us with the following two equations.
\[\mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}+ \ \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=7.815 \ \mathrm{g} \nonumber\]
\[\mathrm{g} \ \mathrm{Al}_{2} \mathrm{O}_{3}+\mathrm{g} \ \mathrm{MgO}=1.002 \ \mathrm{g} \nonumber\]
With two equations and four unknowns, we need two additional equations to solve the problem. A conservation of mass requires that all the aluminum in Al(C9H6NO)3 also is in Al2O3; thus
\[\mathrm{g} \ \mathrm{Al}_{2} \mathrm{O}_{3}=\mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3} \times \frac{1 \ \mathrm{mol} \ \mathrm{Al}}{459.43 \ \mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}} \times \frac{101.96 \ \mathrm{g} \ \mathrm{Al}_{2} \mathrm{O}_{3}}{2 \ \mathrm{mol} \ \mathrm{Al}} \nonumber\]
\[\mathrm{g} \ \mathrm{Al}_{2} \mathrm{O}_{3}=0.11096 \times \mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3} \nonumber\]
Using the same approach, a conservation of mass for magnesium gives
\[\mathrm{g} \ \mathrm{MgO}=\mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2} \times \frac{1 \ \mathrm{mol} \ \mathrm{Mg}}{312.61 \ \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}} \times \frac{40.304 \ \mathrm{g} \ \mathrm{MgO}}{\mathrm{mol} \ \mathrm{MgO}} \nonumber\]
\[\mathrm{g} \ \mathrm{MgO}=0.12893 \times \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2} \nonumber\]
Substituting the equations for g MgO and g Al2O3 into the equation for the combined weights of MgO and Al2O3 leaves us with two equations and two unknowns.
\[\mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}+\mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=7.815 \ \mathrm{g} \nonumber\]
\[0.11096 \times \mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}+ 0.12893 \times \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=1.002 \ \mathrm{g} \nonumber\]
Multiplying the first equation by 0.11096 and subtracting the second equation gives
\[-0.01797 \times \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=-0.1348 \ \mathrm{g} \nonumber\]
\[\mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=7.504 \ \mathrm{g} \nonumber\]
\[\mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}=7.815 \ \mathrm{g}-7.504 \ \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{2}=0.311 \ \mathrm{g} \nonumber\]
Now we can finish the problem using the approach from Example 8.2.1 .
A conservation of mass requires that all the aluminum and magnesium in the original sample of the alloy is in the precipitates of Al(C9H6NO)3 and Mg(C9H6NO)2. For aluminum, we find that
\[0.311 \ \mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3} \times \frac{1 \ \mathrm{mol} \ \mathrm{Al}}{459.45 \ \mathrm{g} \ \mathrm{Al}\left(\mathrm{C}_{9} \mathrm{H}_{6} \mathrm{NO}\right)_{3}} \times \frac{26.982 \ \mathrm{g} \ \mathrm{Al}}{\mathrm{mol} \ \mathrm{Al}}=0.01826 \ \mathrm{g} \ \mathrm{Al} \nonumber\]
\[\frac{0.01826 \ \mathrm{g} \ \mathrm{Al}}{0.611 \ \mathrm{g} \text { sample }} \times 100=2.99 \% \mathrm{w} / \mathrm{w} \mathrm{Al} \nonumber\]
and for magnesium we have
\[7.504 \ \text{g Mg}\left(\mathrm{C}_9 \mathrm{H}_{6} \mathrm{NO}\right)_{2} \times \frac{1 \ \mathrm{mol} \ \mathrm{Mg}}{312.61 \ \mathrm{g} \ \mathrm{Mg}\left(\mathrm{C}_9 \mathrm{H}_{6} \mathrm{NO}\right)_{2}} \times \frac{24.305 \ \mathrm{g} \ \mathrm{Mg}}{\mathrm{mol} \ \mathrm{Mg}}=0.5834 \ \mathrm{g} \ \mathrm{Mg} \nonumber\]
\[\frac{0.5834 \ \mathrm{g} \ \mathrm{Mg}}{0.611 \ \mathrm{g} \text { sample }} \times 100=95.5 \% \mathrm{w} / \mathrm{w} \mathrm{Mg} \nonumber\]

A sample of a silicate rock that weighs 0.8143 g is brought into solution and treated to yield a 0.2692-g mixture of NaCl and KCl. The mixture of chloride salts is dissolved in a mixture of ethanol and water, and treated with HClO4, precipitating 0.3314 g of KClO4. What is the %w/w Na2O in the silicate rock? The masses of the solids provide us with the following equations
\[\mathrm{g} \ \mathrm{NaCl}+\mathrm{g} \ \mathrm{KCl}=0.2692 \ \mathrm{g} \nonumber\]
\[\mathrm{g} \ \mathrm{KClO}_{4} = 0.3314 \ \mathrm{g} \nonumber\]
With two equations and three unknowns—g NaCl, g KCl, and g KClO4—we need one additional equation to solve the problem. A conservation of mass requires that all the potassium originally in the KCl ends up in the KClO4; thus
\[\text{g KClO}_4 = \text{g KCl} \times \frac{1 \text{ mol K}}{74.55 \text{ g KCl}} \times \frac {138.55 \text{ g KClO}_4}{\text{mol K}} = 1.8585 \times \text{ g KCl} \nonumber\]
Given the mass of KClO4, we use the third equation to solve for the mass of KCl in the mixture of chloride salts
\[\text{ g KCl} = \frac{\text{g KClO}_4}{1.8585} = \frac{0.3314 \text{ g}}{1.8585} = 0.1783 \text{ g KCl} \nonumber\]
The mass of NaCl in the mixture of chloride salts, therefore, is
\[\text{ g NaCl} = 0.2692 \text{ g} - \text{g KCl} = 0.2692 \text{ g} - 0.1783 \text{ g KCl} = 0.0909 \text{ g NaCl} \nonumber\]
Finally, to report the %w/w Na2O in the sample, we use a conservation of mass on sodium to determine the mass of Na2O
\[0.0909 \text{ g NaCl} \times \frac{1 \text{ mol Na}}{58.44 \text{ g NaCl}} \times \frac{61.98 \text{ g Na}_2\text{O}}{2 \text{ mol Na}} = 0.0482 \text{ g Na}_2\text{O} \nonumber\]
giving the %w/w Na2O as
\[\frac{0.0482 \text{ g Na}_2\text{O}}{0.8143 \text{ g sample}} \times 100 = 5.92\% \text{ w/w Na}_2\text{O} \nonumber\]
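When two analytes end up in the same pair of precipitates, as in the aluminum and magnesium example above, the conservation-of-mass factors reduce the problem to two linear equations in two unknowns. The Python sketch below is only an illustration of that bookkeeping, using the masses and rounded formula weights from the example; small differences in the last digit relative to the worked solution come from rounding.

```python
# A minimal sketch of the two-equation, two-unknown bookkeeping in the Al/Mg
# example; formula weights (g/mol) are the rounded values used above.
FW_AlQ3, FW_MgQ2 = 459.43, 312.61          # Al(C9H6NO)3 and Mg(C9H6NO)2
FW_Al2O3, FW_MgO = 101.96, 40.304
AW_Al, AW_Mg = 26.982, 24.305

a = FW_Al2O3 / (2 * FW_AlQ3)               # g Al2O3 per g Al(C9H6NO)3 (~0.11096)
b = FW_MgO / FW_MgQ2                       # g MgO per g Mg(C9H6NO)2 (~0.12893)

mixed_oxinate, mixed_oxide, g_sample = 7.815, 1.002, 0.611
g_MgQ2 = (mixed_oxide - a * mixed_oxinate) / (b - a)   # solve the 2x2 system
g_AlQ3 = mixed_oxinate - g_MgQ2

pct_Al = 100 * g_AlQ3 * AW_Al / FW_AlQ3 / g_sample
pct_Mg = 100 * g_MgQ2 * AW_Mg / FW_MgQ2 / g_sample
print(round(g_AlQ3, 3), round(g_MgQ2, 3))   # ~0.31 g and ~7.50 g of the two oxinates
print(round(pct_Al, 2), round(pct_Mg, 1))   # ~3.0 %w/w Al and ~95.5 %w/w Mg
```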
The previous problems are examples of direct methods of analysis because the precipitate contains the analyte. In an indirect analysis the precipitate forms as a result of a reaction with the analyte, but the analyte is not part of the precipitate. As shown by the following example, despite the additional complexity, we still can use conservation principles to organize our calculations.

An impure sample of Na3PO3 that weighs 0.1392 g is dissolved in 25 mL of water. A second solution that contains 50 mL of 3% w/v HgCl2, 20 mL of 10% w/v sodium acetate, and 5 mL of glacial acetic acid is prepared. Adding the solution that contains the sample to the second solution oxidizes \(\text{PO}_3^{3-}\) to \(\text{PO}_4^{3-}\) and precipitates Hg2Cl2. After digesting, filtering, and rinsing the precipitate, 0.4320 g of Hg2Cl2 is obtained. Report the purity of the original sample as %w/w Na3PO3. Solution: This is an example of an indirect analysis because the precipitate, Hg2Cl2, does not contain the analyte, Na3PO3. Although the stoichiometry of the reaction between Na3PO3 and HgCl2 is given earlier in the chapter, let’s see how we can solve the problem using conservation principles. (Although you can write the balanced reactions for any analysis, applying conservation principles can save you a significant amount of time!) The reaction between Na3PO3 and HgCl2 is an oxidation-reduction reaction in which phosphorus increases its oxidation state from +3 in Na3PO3 to +5 in Na3PO4, and in which mercury decreases its oxidation state from +2 in HgCl2 to +1 in Hg2Cl2. A redox reaction must obey a conservation of electrons because all the electrons released by the reducing agent, Na3PO3, must be accepted by the oxidizing agent, HgCl2. Knowing this, we write the following stoichiometric conversion factors:
\[\frac{2 \ \mathrm{mol} \ e^{-}}{\mathrm{mol} \ \mathrm{Na}_{3} \mathrm{PO}_{3}} \text { and } \frac{1 \mathrm{mol} \ e^{-}}{\mathrm{mol} \ \mathrm{HgCl}_{2}} \nonumber\]
Now we are ready to solve the problem. First, we use a conservation of mass for mercury to convert the precipitate’s mass to the moles of HgCl2.
\[0.4320 \ \mathrm{g} \ \mathrm{Hg}_{2} \mathrm{Cl}_{2} \times \frac{2 \ \mathrm{mol} \ \mathrm{Hg}}{472.09 \ \mathrm{g} \ \mathrm{Hg}_{2} \mathrm{Cl}_{2}} \times \frac{1 \ \mathrm{mol} \ \mathrm{HgCl}_{2}}{\mathrm{mol} \ \mathrm{Hg}}=1.8302 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HgCl}_{2} \nonumber\]
Next, we use the conservation of electrons to find the mass of Na3PO3.
\[1.8302 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HgCl}_{2} \times \frac{1 \ \mathrm{mol} \ e^{-}}{\mathrm{mol} \ \mathrm{HgCl}_{2}} \times \frac{1 \ \mathrm{mol} \ \mathrm{Na}_{3} \mathrm{PO}_{3}}{2 \ \mathrm{mol} \ e^{-}} \times \frac{147.94 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{3}}{\mathrm{mol} \ \mathrm{Na}_{3} \mathrm{PO}_{3}}=0.13538 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{3} \nonumber\]
Finally, we calculate the %w/w Na3PO3 in the sample.
\[\frac{0.13538 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{3}}{0.1392 \ \mathrm{g} \text { sample }} \times 100=97.26 \% \mathrm{w} / \mathrm{w} \mathrm{Na}_{3} \mathrm{PO}_{3} \nonumber\]
As you become comfortable using conservation principles, you will see ways to further simplify problems. For example, a conservation of electrons requires that the electrons released by Na3PO3 end up in the product, Hg2Cl2, yielding the following stoichiometric conversion factor:
\[\frac{1 \ \operatorname{mol} \ \mathrm{Na}_{3} \mathrm{PO}_{3}}{\mathrm{mol} \ \mathrm{Hg}_{2} \mathrm{Cl}_{2}} \nonumber\]
This conversion factor provides a direct link between the mass of Hg2Cl2 and the mass of Na3PO3.
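The electron bookkeeping in this indirect analysis also translates directly into a few lines of code. The following Python sketch is an illustration, not part of the original procedure; it uses the formula weights quoted above and the electron ratios developed in the example.

```python
# A minimal sketch of the electron-conservation calculation above; formula
# weights (g/mol) are the rounded values quoted in the example.
FW_Hg2Cl2, FW_Na3PO3 = 472.09, 147.94

mol_Hg2Cl2 = 0.4320 / FW_Hg2Cl2
mol_electrons = 2 * mol_Hg2Cl2       # each Hg2Cl2 holds 2 Hg, and each Hg accepts 1 e-
mol_Na3PO3 = mol_electrons / 2       # each Na3PO3 releases 2 e- (P goes from +3 to +5)
g_Na3PO3 = mol_Na3PO3 * FW_Na3PO3

print(round(100 * g_Na3PO3 / 0.1392, 2))   # ~97.3 %w/w Na3PO3 (97.26% in the worked answer)
```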
One approach for determining phosphate, \(\text{PO}_4^{3-}\), is to precipitate it as ammonium phosphomolybdate, (NH4)3PO4•12MoO3. After we isolate the precipitate by filtration, we dissolve it in acid and precipitate and weigh the molybdate as PbMoO3. Suppose we know that our sample is at least 12.5% Na3PO4 and that we need to recover a minimum of 0.600 g of PbMoO3. What is the minimum amount of sample that we need for each analysis? To find the mass of (NH4)3PO4•12MoO3 that will produce 0.600 g of PbMoO3, we first use a conservation of mass for molybdenum; thus
\[0.600 \ \mathrm{g} \ \mathrm{PbMoO}_{3} \times \frac{1 \ \mathrm{mol} \ \mathrm{Mo}}{351.2 \ \mathrm{g} \ \mathrm{PbMoO}_{3}} \times \frac{1876.59 \ \mathrm{g} \ \left(\mathrm{NH}_{4}\right)_{3} \mathrm{PO}_{4} \cdot 12 \mathrm{MoO}_{3}}{12 \ \mathrm{mol} \ \mathrm{Mo}}= 0.2672 \ \mathrm{g} \ \left(\mathrm{NH}_{4}\right)_{3} \mathrm{PO}_{4} \cdot 12 \mathrm{MoO}_{3} \nonumber\]
Next, to convert this mass of (NH4)3PO4•12MoO3 to a mass of Na3PO4, we use a conservation of mass on \(\text{PO}_4^{3-}\).
\[0.2672 \ \mathrm{g} \ \left(\mathrm{NH}_{4}\right)_{3} \mathrm{PO}_{4} \cdot 12 \mathrm{MoO}_{3} \times \frac{1 \ \mathrm{mol} \ \mathrm{PO}_{4}^{3-}}{1876.59 \ \mathrm{g \ }\left(\mathrm{NH}_{4}\right)_{3} \mathrm{PO}_{4} \cdot 12 \mathrm{MoO}_{3}} \times \frac{163.94 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{4}}{\mathrm{mol} \ \mathrm{PO}_{4}^{3-}}=0.02334 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{4} \nonumber\]
Finally, we convert this mass of Na3PO4 to the corresponding mass of sample.
\[0.02334 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{4} \times \frac{100 \ \mathrm{g} \text { sample }}{12.5 \ \mathrm{g} \ \mathrm{Na}_{3} \mathrm{PO}_{4}}=0.187 \ \mathrm{g} \text { sample } \nonumber\]
A sample of 0.187 g is sufficient to guarantee that we recover a minimum of 0.600 g PbMoO3. If a sample contains more than 12.5% Na3PO4, then a 0.187-g sample will produce more than 0.600 g of PbMoO3.

A precipitation reaction is a useful method for identifying inorganic and organic analytes. Because a qualitative analysis does not require quantitative measurements, the analytical signal is simply the observation that a precipitate forms. Although qualitative applications of precipitation gravimetry have been replaced by spectroscopic methods of analysis, they continue to find application in spot testing for the presence of specific analytes [Jungreis, E. Spot Test Analysis; 2nd Ed., Wiley: New York, 1997]. Any of the precipitants listed in Table 8.2.1, Table 8.2.3, and Table 8.2.4 can be used for a qualitative analysis. The scale of operation for precipitation gravimetry is limited by the sensitivity of the balance and the availability of sample. To achieve an accuracy of ±0.1% using an analytical balance with a sensitivity of ±0.1 mg, we must isolate at least 100 mg of precipitate. As a consequence, precipitation gravimetry usually is limited to major or minor analytes, in macro or meso samples. The analysis of a trace level analyte or a micro sample requires a microanalytical balance. For a macro sample that contains a major analyte, a relative error of 0.1–0.2% is achieved routinely. The principal limitations are solubility losses, impurities in the precipitate, and the loss of precipitate during handling. When it is difficult to obtain a precipitate that is free from impurities, it often is possible to determine an empirical relationship between the precipitate’s mass and the mass of the analyte by an appropriate calibration. The relative precision of precipitation gravimetry depends on the sample’s size and the precipitate’s mass. For a smaller amount of sample or precipitate, a relative precision of 1–2 ppt is obtained routinely. When working with larger amounts of sample or precipitate, the relative precision extends to several ppm.
Few quantitative techniques can achieve this level of precision.For any precipitation gravimetric method we can write the following general equation to relate the signal (grams of precipitate) to the absolute amount of analyte in the sample\[\text { g precipitate }=k \times \mathrm{g} \text { analyte } \label{8.13}\]where k, the method’s sensitivity, is determined by the stoichiometry between the precipitate and the analyte.Equation \ref{8.13} assumes we used a suitable blank to correct the signal for any contributions of the reagent to the precipitate’s mass.Consider, for example, the determination of Fe as Fe2O3. Using a conservation of mass for iron, the precipitate’s mass is\[\mathrm{g} \ \mathrm{Fe}_{2} \mathrm{O}_{3}=\mathrm{g} \ \mathrm{Fe} \times \frac{1 \ \mathrm{mol} \ \mathrm{Fe}}{\text{AW Fe}} \times \frac{\text{FW Fe}_{2} \mathrm{O}_{3}}{2 \ \mathrm{mol} \ \mathrm{Fe}} \nonumber\]and the value of k is\[k=\frac{1}{2} \times \frac{\mathrm{FW} \ \mathrm{Fe}_{2} \mathrm{O}_{3}}{\mathrm{AW} \ \mathrm{Fe}} \label{8.14}\]As we can see from Equation \ref{8.14}, there are two ways to improve a method’s sensitivity. The most obvious way to improve sensitivity is to increase the ratio of the precipitate’s molar mass to that of the analyte. In other words, it helps to form a precipitate with the largest possible formula weight. A less obvious way to improve a method’s sensitivity is indicated by the term of 1/2 in Equation \ref{8.14}, which accounts for the stoichiometry between the analyte and precipitate. We can also improve sensitivity by forming a precipitate that contains fewer units of the analyte.Suppose you wish to determine the amount of iron in a sample. Which of the following compounds—FeO, Fe2O3, or Fe3O4—provides the greatest sensitivity?To determine which form has the greatest sensitivity, we use a conservation of mass for iron to find the relationship between the precipitate’s mass and the mass of iron.\[\begin{aligned} \mathrm{g} \ \mathrm{FeO} &=\mathrm{g} \ \mathrm{Fe} \times \frac{1 \ \mathrm{mol} \ \mathrm{Fe}}{55.85 \ \mathrm{g} \ \mathrm{Fe}} \times \frac{71.84 \ \mathrm{g} \ \mathrm{FeO}}{\mathrm{mol} \ \mathrm{Fe}}=1.286 \times \mathrm{g} \ \mathrm{Fe} \\ \mathrm{g} \ \mathrm{Fe}_{2} \mathrm{O}_{3} &=\mathrm{g} \ \mathrm{Fe} \times \frac{1 \ \mathrm{mol} \ \mathrm{Fe}}{55.85 \ \mathrm{g} \ \mathrm{Fe}} \times \frac{159.69 \ \mathrm{g} \ \mathrm{Fe}_{2} \mathrm{O}_{3}}{2 \ \mathrm{mol} \ \mathrm{Fe}}=1.430 \times \mathrm{g} \ \mathrm{Fe} \\ \mathrm{g} \ \mathrm{Fe}_{3} \mathrm{O}_{4} &=\mathrm{g} \ \mathrm{Fe} \times \frac{1 \ \mathrm{mol} \ \mathrm{Fe}}{55.85 \ \mathrm{g} \ \mathrm{Fe}} \times \frac{231.53 \ \mathrm{g} \ \mathrm{Fe}_{3} \mathrm{O}_{4}}{3 \ \mathrm{mol} \ \mathrm{Fe}}=1.382 \times \mathrm{g} \ \mathrm{Fe} \end{aligned} \nonumber\]Of the three choices, the greatest sensitivity is obtained with Fe2O3 because it provides the largest value for k.Due to the chemical nature of the precipitation process, precipitants usually are not selective for a single analyte. For example, silver is not a selective precipitant for chloride because it also forms precipitates with bromide and with iodide. 
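Returning to the sensitivity example above, Equation 8.14 generalizes to k = FW(precipitate)/(n × AW(analyte)), where n is the number of moles of analyte per mole of precipitate. The short Python sketch below simply tabulates k for the three iron oxides; the formula weights are the rounded values used in the example.

```python
# A minimal sketch comparing k = FW(precipitate) / (n x AW(Fe)) for the three
# iron oxides in the example; formula weights are the rounded values used above.
AW_Fe = 55.85
forms = {"FeO": (71.84, 1), "Fe2O3": (159.69, 2), "Fe3O4": (231.53, 3)}

for name, (fw, n_Fe) in forms.items():
    print(name, round(fw / (n_Fe * AW_Fe), 3))   # FeO 1.286, Fe2O3 1.430, Fe3O4 1.382

# Fe2O3 gives the largest k and therefore the greatest sensitivity.
```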
Interferents often are a serious problem and must be considered if accurate results are to be obtained.Precipitation gravimetry is time intensive and rarely practical if you have a large number of samples to analyze; however, because much of the time invested in precipitation gravimetry does not require an analyst’s immediate supervision, it is a practical alternative when working with only a few samples. Equipment needs are few—beakers, filtering devices, ovens or burners, and balances—inexpensive, routinely available in most laboratories, and easy to maintain.This page titled 8.2: Precipitation Gravimetry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
8.3: Volatilization Gravimetry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/08%3A_Gravimetric_Methods/8.03%3A_Volatilization_Gravimetry
A second approach to gravimetry is to thermally or chemically decompose the sample and measure the resulting change in its mass. Alternatively, we can trap and weigh a volatile decomposition product. Because the release of a volatile species is an essential part of these methods, we classify them collectively as volatilization gravimetric methods of analysis. Whether an analysis is direct or indirect, volatilization gravimetry usually requires that we know the products of the decomposition reaction. This rarely is a problem for organic compounds, which typically decompose to form simple gases such as CO2, H2O, and N2. For an inorganic compound, however, the products often depend on the decomposition temperature. One method for determining the products of a thermal decomposition is to monitor the sample’s mass as a function of temperature, a process called thermogravimetry. Figure 8.3.1 shows a typical thermogram in which each change in mass—each “step” in the thermogram—represents the loss of a volatile product. As the following example illustrates, we can use a thermogram to identify a compound’s decomposition reactions.

The thermogram in Figure 8.3.1 shows the mass of a sample of calcium oxalate monohydrate, CaC2O4•H2O, as a function of temperature. The original sample of 17.61 mg was heated from room temperature to 1000oC at a rate of 20oC per minute. For each step in the thermogram, identify the volatilization product and the solid residue that remains. Solution: From 100–250oC the sample loses 17.61 mg – 15.44 mg, or 2.17 mg, which is
\[\frac{2.17 \ \mathrm{mg}}{17.61 \ \mathrm{mg}} \times 100=12.3 \% \nonumber\]
of the sample’s original mass. In terms of CaC2O4•H2O, this corresponds to a decrease in the molar mass of
\[0.123 \times 146.11 \ \mathrm{g} / \mathrm{mol}=18.0 \ \mathrm{g} / \mathrm{mol} \nonumber\]
The product’s molar mass and the temperature range for the decomposition suggest that this is a loss of H2O(g), leaving a residue of CaC2O4. The loss of 3.38 mg from 350–550oC is a 19.2% decrease in the sample’s original mass, or a decrease in the molar mass of
\[0.192 \times 146.11 \ \mathrm{g} / \mathrm{mol}=28.1 \ \mathrm{g} / \mathrm{mol} \nonumber\]
which is consistent with the loss of CO(g) and a residue of CaCO3. Finally, the loss of 5.30 mg from 600-800oC is a 30.1% decrease in the sample’s original mass, or a decrease in molar mass of
\[0.301 \times 146.11 \ \mathrm{g} / \mathrm{mol}=44.0 \ \mathrm{g} / \mathrm{mol} \nonumber\]
This loss in molar mass is consistent with the release of CO2(g), leaving a final residue of CaO. The three decomposition reactions are
\[\begin{array}{c}{\mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}(s) \rightarrow \ \mathrm{CaC}_{2} \mathrm{O}_{4}(s)+\mathrm{H}_{2} \mathrm{O}(g)} \\ {\mathrm{CaC}_{2} \mathrm{O}_{4}(s) \rightarrow \ \mathrm{CaCO}_{3}(s)+\mathrm{CO}(g)} \\ {\mathrm{CaCO}_{3}(s) \rightarrow \ \mathrm{CaO}(s)+\mathrm{CO}_{2}(g)}\end{array} \nonumber\]
Identifying the products of a thermal decomposition provides information that we can use to develop an analytical procedure. For example, the thermogram in Figure 8.3.1 shows that we must heat a precipitate of CaC2O4•H2O to a temperature between 250 and 400oC if we wish to isolate and weigh CaC2O4. Alternatively, heating the sample to 1000oC allows us to isolate and weigh CaO.
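The reasoning in Example 8.3.1 amounts to converting each step of the thermogram into a fractional mass loss and an apparent loss in molar mass, and then matching that loss to a plausible volatile product. The Python sketch below is a hedged illustration of that bookkeeping using the masses from the example; the list of candidate gases is an assumption made here for the comparison.

```python
# A minimal sketch of the thermogram bookkeeping in Example 8.3.1. The masses
# (mg) and the 146.11 g/mol formula weight come from the example; the set of
# candidate gases is an assumption made here for the matching step.
FW_hydrate = 146.11                                   # CaC2O4.H2O, g/mol
initial_mg = 17.61
steps = [("100-250 oC", 17.61 - 15.44), ("350-550 oC", 3.38), ("600-800 oC", 5.30)]
candidates = {"H2O": 18.02, "CO": 28.01, "CO2": 44.01}

for label, loss_mg in steps:
    fraction = loss_mg / initial_mg                   # fraction of the original mass lost
    mm_lost = fraction * FW_hydrate                   # apparent loss in molar mass, g/mol
    best = min(candidates, key=lambda gas: abs(candidates[gas] - mm_lost))
    print(f"{label}: {100 * fraction:.1f}% lost, {mm_lost:.1f} g/mol, consistent with {best}")
```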
Under the same conditions as Figure 8.3.1 , the thermogram for a 22.16 mg sample of MgC2O4•H2O shows two steps: a loss of 3.06 mg from 100–250oC and a loss of 12.24 mg from 350–550oC. For each step, identify the volatilization product and the solid residue that remains. Using your results from this exercise and the results from Example 8.3.1 , explain how you can use thermogravimetry to analyze a mixture that contains CaC2O4•H2O and MgC2O4•H2O. You may assume that other components in the sample are inert and thermally stable below 1000oC. From 100–250oC the sample loses 13.8% of its mass, or a loss of
\[0.138 \times 130.34 \ \mathrm{g} / \mathrm{mol}=18.0 \ \mathrm{g} / \mathrm{mol} \nonumber\]
which is consistent with the loss of H2O(g) and a residue of MgC2O4. From 350–550oC the sample loses 55.23% of its original mass, or a loss of
\[0.5523 \times 130.34 \ \mathrm{g} / \mathrm{mol}=71.99 \ \mathrm{g} / \mathrm{mol} \nonumber\]
This weight loss is consistent with the simultaneous loss of CO(g) and CO2(g), leaving a residue of MgO. We can analyze the mixture by heating a portion of the sample to 300oC, 600oC, and 1000oC, recording the mass at each temperature. The loss of mass between 600oC and 1000oC, \(\Delta m_2\), is due to the loss of CO2(g) from the decomposition of CaCO3 to CaO, and is proportional to the mass of CaC2O4•H2O in the sample.
\[\mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}=\Delta m_{2} \times \frac{1 \ \mathrm{mol} \ \mathrm{CO}_{2}}{44.01 \ \mathrm{g} \ \mathrm{CO}_{2}} \times \frac{146.11 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}}{\mathrm{mol} \ \mathrm{CO}_{2}} \nonumber\]
The change in mass between 300oC and 600oC, \(\Delta m_1\), is due to the loss of CO(g) from CaC2O4•H2O and the loss of CO(g) and CO2(g) from MgC2O4•H2O. Because we already know the amount of CaC2O4•H2O in the sample, we can calculate its contribution to \(\Delta m_1\).
\[\left(\Delta m_{1}\right)_{\mathrm{Ca}}=\mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O} \times \frac{1 \ \mathrm{mol} \ \mathrm{CO}}{146.11 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}} \times \frac{28.01 \ \mathrm{g} \ \mathrm{CO}}{\mathrm{mol} \ \mathrm{CO}} \nonumber\]
The change in mass between 300oC and 600oC due to the decomposition of MgC2O4•H2O
\[\left(\Delta m_{1}\right)_{\mathrm{Mg}}=\Delta m_{1}-\left(\Delta m_{1}\right)_{\mathrm{Ca}} \nonumber\]
provides the mass of MgC2O4•H2O in the sample.
\[\mathrm{g} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}=\left(\Delta m_{1}\right)_{\mathrm{Mg}} \times \frac{1 \ \mathrm{mol}\left(\mathrm{CO} \ + \ \mathrm{CO}_{2}\right)}{72.02 \ \mathrm{g} \ \left(\mathrm{CO} \ + \ \mathrm{CO}_{2}\right)} \times \frac{130.35 \ \mathrm{g} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}}{\mathrm{mol}\ \left(\mathrm{CO} \ + \ \mathrm{CO}_{2}\right)} \nonumber\]
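The mixture analysis outlined in this exercise is essentially a two-step algorithm: use Δm2 to find the calcium salt, subtract its contribution to Δm1, and convert what remains to the magnesium salt. The Python sketch below illustrates that sequence; the function name and the numeric masses passed to it are hypothetical values chosen only to show the bookkeeping.

```python
# A minimal sketch of the two-step mixture calculation described above. The
# function name and the example masses passed to it are hypothetical; the
# formula weights and conversion factors follow the exercise.
def oxalate_mixture(dm1, dm2):
    """Return (g CaC2O4.H2O, g MgC2O4.H2O) from the two thermogravimetric mass losses."""
    FW_CaOx, FW_MgOx = 146.11, 130.35        # CaC2O4.H2O and MgC2O4.H2O, g/mol
    g_CaOx = dm2 * FW_CaOx / 44.01           # dm2 (600-1000 oC) is CO2 lost as CaCO3 -> CaO
    dm1_Ca = g_CaOx * 28.01 / FW_CaOx        # CO lost by the calcium salt between 300 and 600 oC
    dm1_Mg = dm1 - dm1_Ca                    # the rest of dm1 is CO + CO2 from the magnesium salt
    g_MgOx = dm1_Mg * FW_MgOx / (28.01 + 44.01)
    return g_CaOx, g_MgOx

print(oxalate_mixture(dm1=0.0500, dm2=0.0150))   # illustrative masses in grams
```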
Depending on the method of analysis, the equipment for volatilization gravimetry may be simple or complex. In the simplest experimental design, we place the sample in a crucible and decompose it at a fixed temperature using a Bunsen burner, a Meker burner, a laboratory oven, or a muffle furnace. The sample’s mass and the mass of the residue are measured using an analytical balance. Trapping and weighing the volatile products of a thermal decomposition requires specialized equipment. The sample is placed in a closed container and heated. As decomposition occurs, a stream of an inert purge-gas sweeps the volatile products through one or more selective absorbent traps. In a thermogravimetric analysis, the sample is placed on a small balance pan attached to one arm of an electromagnetic balance (Figure 8.3.2 ). The sample is lowered into an electric furnace and the furnace’s temperature is increased at a fixed rate of a few degrees per minute while continuously monitoring the sample’s weight. The instrument usually includes a gas line for purging the volatile decomposition products out of the furnace, and a heat exchanger to dissipate the heat emitted by the furnace. Figure 8.3.2 . (a) Instrumentation for conducting a thermogravimetric analysis. The balance sits on the top of the instrument with the sample suspended below. A gas line supplies an inert gas that sweeps the volatile decomposition products out of the furnace. The heat exchanger dissipates the heat from the furnace to a reservoir of water. (b) Close-up showing the balance pan, which sits on a moving platform, the thermocouple for monitoring temperature, a hook for lowering the sample pan into the furnace, and the opening to the furnace. After placing a small portion of the sample on the balance pan, the platform rotates over the furnace and transfers the balance pan to a hook that is suspended from the balance. Once the balance pan is in place, the platform rotates back to its initial position. The balance pan and the thermocouple are then lowered into the furnace. The best way to appreciate the theoretical and practical details discussed in this section is to carefully examine a typical volatilization gravimetric method. Although each method is unique, the determination of Si in ores and alloys by forming volatile SiF4 provides an instructive example of a typical procedure. The description here is based on a procedure from Young, R. S. Chemical Analysis in Extractive Metallurgy, Griffin: London, 1971, pp. 302–304. Description of Method: Silicon is determined by dissolving the sample in acid and dehydrating to precipitate SiO2. Because a variety of other insoluble oxides also form, the precipitate’s mass is not a direct measure of the amount of silicon in the sample. Treating the solid residue with HF forms volatile SiF4. The decrease in mass following the loss of SiF4 provides an indirect measure of the amount of silicon in the original sample. Procedure: Transfer a sample of between 0.5 g and 5.0 g to a platinum crucible along with an excess of Na2CO3, and heat until a melt forms. After cooling, dissolve the residue in dilute HCl. Evaporate the solution to dryness on a steam bath and heat the residue, which contains SiO2 and other solids, for one hour at 110oC. Moisten the residue with HCl and repeat the dehydration. Remove any acid soluble materials from the residue by adding 50 mL of water and 5 mL of concentrated HCl. Bring the solution to a boil and filter through #40 filter paper (note: #40 filter paper is a medium speed, ashless filter paper for filtering crystalline solids). Wash the residue with hot 2% v/v HCl followed by hot water. Evaporate the filtrate to dryness twice and, following the same procedure, treat to remove any acid-soluble materials. Combine the two precipitates and dry and ignite to a constant weight at 1200oC. After cooling, add 2 drops of 50% v/v H2SO4 and 10 mL of HF. Remove the volatile SiF4 by evaporating to dryness on a hot plate. Finally, bring the residue to constant weight by igniting at 1200oC. Questions 1.
According to the procedure the sample should weigh between 0.5 g and 5.0 g. How should you decide upon the amount of sample to use?In this procedure the critical measurement is the decrease in mass following the volatilization of SiF4. The reaction responsible for the loss of mass is\[\mathrm{SiO}_{2}(s)+4 \mathrm{HF}(a q) \rightarrow \mathrm{SiF}_{4}(g)+2 \mathrm{H}_{2} \mathrm{O}(l ) \nonumber\]Water and excess HF are removed during the final ignition, and do not contribute to the change in mass. The loss in mass, therefore, is equivalent to the mass of SiO2 present after the dehydration step. Every 0.1 g of Si in the original sample results in the loss of 0.21 g of SiO2. How much sample we use depends on what is an acceptable uncertainty when we measure its mass. A 0.5-g sample that is 50% w/w in Si, for example, will lose 0.53 g. If we are using a balance that measures mass to the nearest ±0.1 mg, then the relative uncertainty in mass is approximately ±0.02%; this is a reasonable level of uncertainty for a gravimetric analysis. A 0.5-g sample that is only 5% w/w Si experiences a weight loss of only 0.053 g and has a relative uncertainty of ±0.2%. In this case a larger sample is needed.2. Why are acid-soluble materials removed before we treat the dehydrated residue with HF?Any acid-soluble materials in the sample will react with HF or H2SO4. If the products of these reactions are volatile, or if they decompose at 1200oC, then the change in mass is not due solely to the volatilization of SiF4. As a result, we will overestimate the amount of Si in our sample.3. Why is H2SO4 added with the HF?Many samples that contain silicon also contain aluminum and iron, which form Al2O3 and Fe2O3 when we dehydrate the sample. These oxides are potential interferents because they also form volatile fluorides. In the presence of H2SO4, however, aluminum and iron preferentially form non-volatile sulfates, which eventually decompose back to their respective oxides when we heat the residue to 1200oC. As a result, the change in weight after treating with HF and H2SO4 is due only to the loss of SiF4.Unlike precipitation gravimetry, which rarely is used as a standard method of analysis, volatilization gravimetric methods continue to play an important role in chemical analysis. Several important examples are discussed below.Determining the inorganic ash content of an organic material, such as a polymer, is an example of a direct volatilization gravimetric analysis. After weighing the sample, it is placed in an appropriate crucible and the organic material carefully removed by combustion, leaving behind the inorganic ash. The crucible that contains the residue is heated to a constant weight using either a burner or an oven before the mass of the inorganic ash is determined.Another example of volatilization gravimetry is the determination of dissolved solids in natural waters and wastewaters. In this method, a sample of water is transferred to a weighing dish and dried to a constant weight at either 103–105oC or at 180oC. Samples dried at the lower temperature retain some occluded water and lose some carbonate as CO2; the loss of organic material, however, is minimal at this temperature. At the higher temperature, the residue is free from occluded water, but the loss of carbonate is greater. In addition, some chloride, nitrate, and organic material is lost through thermal decomposition. 
In either case, the residue that remains after drying to a constant weight at 500oC is the amount of fixed solids in the sample, and the loss in mass provides an indirect measure of the sample’s volatile solids.Indirect analyses based on the weight of a residue that remains after volatilization are used to determine moisture in a variety of products and to determine silica in waters, wastewaters, and rocks. Moisture is determined by drying a preweighed sample with an infrared lamp or a low temperature oven. The difference between the original weight and the weight after drying equals the mass of water lost.The most important application of volatilization gravimetry is for the elemental analysis of organic materials. During combustion with pure O2, many elements, such as carbon and hydrogen, are released as gaseous combustion products, such as CO2(g) and H2O(g). Passing the combustion products through preweighed tubes that contain selective absorbents and measuring the increase in each tube’s mass provides a direct analysis for the mass of carbon and hydrogen in the sample.Instead of measuring mass, modern instruments for completing an elemental analysis use gas chromatography (Chapter 12) or infrared spectroscopy (Chapter 10) to monitor the gaseous decomposition products.Alkaline metals and earths in organic materials are determined by adding H2SO4 to the sample before combustion. After combustion is complete, the metal remains behind as a solid residue of metal sulfate. Silver, gold, and platinum are determined by burning the organic sample, leaving a metallic residue of Ag, Au, or Pt. Other metals are determined by adding HNO3 before combustion, which leaves a residue of the metal oxide.Volatilization gravimetry also is used to determine biomass in waters and wastewaters. Biomass is a water quality index that provides an indication of the total mass of organisms contained within a sample of water. A known volume of the sample is passed through a preweighed 0.45-μm membrane filter or a glass-fiber filter and dried at 105oC for 24 h. The residue’s mass provides a direct measure of biomass. If samples are known to contain a substantial amount of dissolved inorganic solids, the residue is ignited at 500oC for one hour, which volatilizes the biomass. The resulting inorganic residue is wetted with distilled water to rehydrate any clay minerals and dried to a constant weight at 105oC. The difference in mass before and after ignition provides an indirect measure of biomass.For some applications, such as determining the amount of inorganic ash in a polymer, a quantitative calculation is straightforward and does not require a balanced chemical reaction. For other applications, however, the relationship between the analyte and the analytical signal depends upon the stoichiometry of any relevant reactions. Once again, a conservation of mass is useful when solving problems.A 101.3-mg sample of an organic compound that contains chlorine is combusted in pure O2. The volatile gases are collected in absorbent traps with the trap for CO2 increasing in mass by 167.6 mg and the trap for H2O increasing in mass by 13.7-mg. A second sample of 121.8 mg is treated with concentrated HNO3, producing Cl2 that reacts with Ag+ to form 262.7 mg of AgCl. 
Determine the compound’s composition, as well as its empirical formula. Solution: A conservation of mass requires that all the carbon in the organic compound is in the CO2 produced during combustion; thus
\[0.1676 \ \mathrm{g} \ \mathrm{CO}_{2} \times \frac{1 \ \mathrm{mol} \ \mathrm{C}}{44.010 \ \mathrm{g} \ \mathrm{CO}_{2}} \times \frac{12.011 \ \mathrm{g} \ \mathrm{C}}{\mathrm{mol} \ \mathrm{C}}=0.04574 \ \text{g C} \nonumber\]
\[\frac{0.04574 \ \mathrm{g} \ \mathrm{C}}{0.1013 \ \mathrm{g} \text { sample }} \times 100=45.15 \% \mathrm{w} / \mathrm{w} \ \mathrm{C} \nonumber\]
Using the same approach for hydrogen and chlorine, we find that
\[0.0137 \ \mathrm{g} \ \mathrm{H}_{2} \mathrm{O} \times \frac{2 \ \mathrm{mol} \ \mathrm{H}}{18.015 \ \mathrm{g} \ \mathrm{H}_{2} \mathrm{O}} \times \frac{1.008 \ \mathrm{g} \ \mathrm{H}}{\mathrm{mol} \ \mathrm{H}}=1.533 \times 10^{-3} \mathrm{g} \ \mathrm{H} \nonumber\]
\[\frac{1.533 \ \times 10^{-3} \mathrm{g} \ \mathrm{H}}{0.1013 \ \mathrm{g} \ \text { sample }} \times 100=1.51 \% \mathrm{w} / \mathrm{w} \ \mathrm{H} \nonumber\]
\[0.2627 \ \mathrm{g} \ \mathrm{AgCl} \times \frac{1 \ \mathrm{mol} \ \mathrm{Cl}}{143.32 \ \mathrm{g} \ \mathrm{AgCl}} \times \frac{35.455 \ \text{g Cl}}{\mathrm{mol} \ \mathrm{Cl}}=0.06498 \ \mathrm{g} \ \mathrm{Cl} \nonumber\]
\[\frac{0.06498 \ \mathrm{g} \ \mathrm{Cl}}{0.1218 \ \mathrm{g} \text { sample }} \times 100=53.35 \% \mathrm{w} / \mathrm{w} \ \mathrm{Cl} \nonumber\]
Adding together the weight percents for C, H, and Cl gives a total of 100.01%; thus, the compound contains only these three elements. To determine the compound’s empirical formula we note that a gram of sample contains 0.4515 g of C, 0.0151 g of H and 0.5335 g of Cl. Expressing each element in moles gives 0.0376 moles C, 0.0150 moles H and 0.0150 moles Cl. Hydrogen and chlorine are present in a 1:1 molar ratio. The molar ratio of C to moles of H or Cl is
\[\frac{\mathrm{mol} \ \mathrm{C}}{\mathrm{mol} \text{ H}} =\frac{\mathrm{mol} \ \mathrm{C}}{\mathrm{mol} \ \mathrm{Cl}}=\frac{0.0376}{0.0150}=2.51 \approx 2.5 \nonumber\]
Thus, the simplest, or empirical, formula for the compound is C5H2Cl2.

In an indirect volatilization gravimetric analysis, the change in the sample’s weight is proportional to the amount of analyte in the sample. Note that in the following example it is not necessary to apply a conservation of mass to relate the analytical signal to the analyte. A sample of slag from a blast furnace is analyzed for SiO2 by decomposing a 0.5003-g sample with HCl, leaving a residue with a mass of 0.1414 g. After treating with HF and H2SO4, and evaporating the volatile SiF4, a residue with a mass of 0.0183 g remains. Determine the %w/w SiO2 in the sample. Solution: The difference in the residue’s mass before and after volatilizing SiF4 gives the mass of SiO2 in the sample; thus the sample contains
\[0.1414 \ \mathrm{g}-0.0183 \ \mathrm{g}=0.1231 \ \mathrm{g} \ \mathrm{SiO}_{2} \nonumber\]
and the %w/w SiO2 is
\[\frac{0.1231 \ \mathrm{g} \ \mathrm{Si} \mathrm{O}_{2}}{0.5003 \ \mathrm{g} \text { sample }} \times 100=24.61 \% \mathrm{w} / \mathrm{w} \ \mathrm{SiO}_{2} \nonumber\]
Heating a 0.3317-g mixture of CaC2O4 and MgC2O4 yields a residue of 0.1794 g at 600oC and a residue of 0.1294 g at 1000oC. Calculate the %w/w CaC2O4 in the sample. You may wish to review your answer to Exercise 8.3.1 as you consider this problem. In Exercise 8.3.1 we developed an equation for the mass of CaC2O4•H2O in a mixture of CaC2O4•H2O, MgC2O4•H2O, and inert materials.
Adapting this equation to a sample that contains CaC2O4, MgC2O4, and inert materials is easy; thus\[\mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4}=(0.1794 \ \mathrm{g}-0.1294 \ \mathrm{g}) \times \frac{1 \ \mathrm{mol} \ \mathrm{CO}_{2}}{44.01 \ \mathrm{g} \ \mathrm{CO}_{2}} \times \frac{128.10 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4}}{\mathrm{mol} \ \mathrm{CO}_{2}}=0.1455 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4} \nonumber\]The %w/w CaC2O4 in the sample is\[\frac{0.1455 \ \mathrm{g} \ \mathrm{CaC}_{2} \mathrm{O}_{4}}{0.3317 \ \mathrm{g} \text { sample }} \times 100=43.86 \% \mathrm{w} / \mathrm{w} \mathrm{CaC}_{2} \mathrm{O}_{4} \nonumber\]Finally, for some quantitative applications we can compare the result for a sample to a similar result obtained using a standard.A 26.23-mg sample of MgC2O4•H2O and inert materials is heated to constant weight at 1200oC, leaving a residue that weighs 20.98 mg. A sample of pure MgC2O4•H2O, when treated in the same fashion, undergoes a 69.08% change in its mass. Determine the %w/w MgC2O4•H2O in the sample.SolutionThe change in the sample’s mass is 5.25 mg, which corresponds to\[5.25 \ \mathrm{mg} \operatorname{lost} \times \frac{100.0 \ \mathrm{mg} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}}{69.08 \ \mathrm{mg} \text { lost }}=7.60 \ \mathrm{mg} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O} \nonumber\]The %w/w MgC2O4•H2O in the sample is\[\frac{7.60 \ \mathrm{mg} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O}}{26.23 \ \mathrm{mg} \text { sample }} \times 100=29.0 \% \mathrm{w} / \mathrm{w} \ \mathrm{MgC}_{2} \mathrm{O}_{4} \cdot \mathrm{H}_{2} \mathrm{O} \nonumber\]The scale of operation, accuracy, and precision of a gravimetric volatilization method is similar to that described in the last section for precipitation gravimetry. The sensitivity of a direct analysis is fixed by the analyte’s chemical form following combustion or volatilization. We can improve the sensitivity of an indirect analysis by choosing conditions that give the largest possible change in mass. For example, the thermogram in Figure 8.3.1 shows us that an indirect analysis for CaC2O4•H2O is more sensitive if we measure the change in mass following ignition at 1000oC than if we ignite the sample at 300oC.Selectivity is not a problem for a direct analysis if we trap the analyte using a selective absorbent trap. A direct analysis based on the residue’s weight following combustion or volatilization is possible if the residue contains only the analyte of interest. As noted earlier, an indirect analysis only is feasible when the change in mass results from the loss of a single volatile product that contains the analyte.Volatilization gravimetric methods are time and labor intensive. Equipment needs are few, except when combustion gases must be trapped, or for a thermogravimetric analysis, when specialized instrumentation is needed.This page titled 8.3: Volatilization Gravimetry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
8.4: Particulate Gravimetry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/08%3A_Gravimetric_Methods/8.04%3A_Particulate_Gravimetry
Precipitation and volatilization gravimetric methods require that the analyte, or some other species in the sample, participates in a chemical reaction. In a direct precipitation gravimetric analysis, for example, we convert a soluble analyte into an insoluble form that precipitates from solution. In some situations, however, the analyte already is present in a particulate form that is easy to separate from its liquid, gas, or solid matrix. When such a separation is possible, we can determine the analyte’s mass without relying on a chemical reaction.A particulate is any tiny portion of matter, whether it is a speck of dust, a globule of fat, or a molecule of ammonia. For particulate gravimetry we simply need a method to collect the particles and a balance to measure their mass.There are two methods for separating a particulate analyte from its matrix. The most common method is filtration, in which we separate solid particulates from their gas, liquid, or solid matrix. A second method, which is useful for gas particles, solutes, and solids, is an extraction.To separate solid particulates from their matrix we use gravity or apply suction from a vacuum pump or an aspirator to pull the sample through a filter. The type of filter we use depends upon the size of the solid particles and the sample’s matrix. Filters for liquid samples are constructed from a variety of materials, including cellulose fibers, glass fibers, cellulose nitrate, and polytetrafluoroethylene (PTFE). Particle retention depends on the size of the filter’s pores. Cellulose fiber filter papers range in pore size from 30 μm to 2–3 μm. Glass fiber filters, manufactured using chemically inert borosilicate glass, are available with pore sizes between 2.5 μm and 0.3 μm. Membrane filters, which are made from a variety of materials, including cellulose nitrate and PTFE, are available with pore sizes from 5.0 μm to 0.1 μm.For additional information, see our earlier discussion in this chapter on filtering precipitates, and the discussion in Chapter 7 of separations based on size.Solid aerosol particulates are collected using either a single-stage or a multiple-stage filter. In a single-stage system, we pull the gas through a single filter, which retains particles larger than the filter’s pore size. To collect samples from a gas line, we place the filter directly in the line. Atmospheric gases are sampled with a high volume sampler that uses a vacuum pump to pull air through the filter at a rate of approximately 75 m3/h. In either case, we can use the same filtering media for liquid samples to collect aerosol particulates. In a multiple-stage system, a series of filtering units separates the particles into two or more size ranges.The particulates in a solid matrix are separated by size using one or more sieves (Figure 8.4.1 ). Sieves are available in a variety of mesh sizes, ranging from approximately 25 mm to 40 μm. By stacking together sieves of different mesh size, we can isolate particulates into several narrow size ranges. Using the sieves in Figure 8.4.1 , for example, we can separate a solid into particles with diameters >1700 μm, with diameters between 1700 μm and 500 μm, with diameters between 500 μm and 250 μm, and those with a diameter <250 μm.Filtering limits particulate gravimetry to solid analytes that are easy to separate from their matrix. We can extend particulate gravimetry to the analysis of gas phase analytes, solutes, and solids that are difficult to filter if we extract them with a suitable solvent. 
After the extraction, we evaporate the solvent and determine the analyte’s mass. Alternatively, we can determine the analyte indirectly by measuring the change in the sample’s mass after we extract the analyte. For a more detailed review of extractions, particularly solid-phase extractions, see Chapter 7. Another method for extracting an analyte from its matrix is by adsorption onto a solid substrate, by absorption into a thin polymer film or chemical film coated on a solid substrate, or by chemically binding to a suitable receptor that is covalently bound to a solid substrate (Figure 8.4.2 ). Adsorption, absorption, and binding occur at the interface between the solution that contains the analyte and the substrate’s surface, the thin film, or the receptor. Although the amount of extracted analyte is too small to measure using a conventional balance, it can be measured using a quartz crystal microbalance. The measurement of mass using a quartz crystal microbalance takes advantage of the piezoelectric effect [(a) Ward, M. D.; Buttry, D. A. Science 1990, 249, 1000–1007; (b) Grate, J. W.; Martin, S. J.; White, R. M. Anal. Chem. 1993, 65, 940A–948A; (c) Grate, J. W.; Martin, S. J.; White, R. M. Anal. Chem. 1993, 65, 987A–996A]. The application of an alternating electrical field across a quartz crystal induces an oscillatory vibrational motion in the crystal. Every quartz crystal vibrates at a characteristic resonant frequency that depends on the crystal’s properties, including the mass per unit area of any material coated on the crystal’s surface. The change in mass following adsorption, absorption, or binding of the analyte is determined by monitoring the change in the quartz crystal’s characteristic resonant frequency. The exact relationship between the change in frequency and mass is determined by a calibration curve. If you own a wristwatch, there is a good chance that its operation relies on a quartz crystal. The piezoelectric properties of quartz were discovered in 1880 by Paul-Jacques Curie and Pierre Curie. Because the oscillation frequency of a quartz crystal is so precise, it quickly found use in the keeping of time. The first quartz clock was built in 1927 at the Bell Telephone labs, and Seiko introduced the first quartz wristwatches in 1969. Particulate gravimetry is important in the environmental analysis of water, air, and soil samples. The analysis for suspended solids in water samples, for example, is accomplished by filtering an appropriate volume of a well-mixed sample through a glass fiber filter and drying the filter to constant weight at 103–105oC. The microbiological testing of water also uses particulate gravimetry. One example is the analysis for coliform bacteria in which an appropriate volume of sample is passed through a sterilized 0.45-μm membrane filter. The filter is placed on a sterilized absorbent pad that is saturated with a culturing medium and incubated for 22–24 hours at 35 ± 0.5oC. Coliform bacteria are identified by the presence of individual bacterial colonies that form during the incubation period (Figure 8.4.3 ). As with qualitative applications of precipitation gravimetry, the signal in this case is a visual observation of the number of colonies rather than a measurement of mass. Total airborne particulates are determined using a high-volume air sampler equipped with either a cellulose fiber or a glass fiber filter.
Samples from urban environments require approximately 1 h of sampling time, but samples from rural environments require substantially longer times.Grain size distributions for sediments and soils are used to determine the amount of sand, silt, and clay in a sample. For example, a grain size of 2 mm serves as the boundary between gravel and sand. The grain size for the sand–silt and the silt–clay boundaries are 1/16 mm and 1/256 mm, respectively.Several standard quantitative analytical methods for agricultural products are based on measuring the sample’s mass following a selective solvent extraction. For example, the crude fat content in chocolate is determined by extracting with ether for 16 hours in a Soxhlet extractor. After the extraction is complete, the ether is allowed to evaporate and the residue is weighed after drying at 100oC. This analysis also can be accomplished indirectly by weighing a sample before and after extracting with supercritical CO2.Quartz crystal microbalances equipped with thin film polymer films or chemical coatings have found numerous quantitative applications in environmental analysis. Methods are reported for the analysis of a variety of gaseous pollutants, including ammonia, hydrogen sulfide, ozone, sulfur dioxide, and mercury. Biochemical particulate gravimetric sensors also have been developed. For example, a piezoelectric immunosensor has been developed that shows a high selectivity for human serum albumin, and is capable of detecting microgram quantities [Muratsugu, M.; Ohta, F.; Miya, Y.; Hosokawa, T.; Kurosawa, S.; Kamo, N.; Ikeda, H. Anal. Chem. 1993, 65, 2933–2937].The result of a quantitative analysis by particulate gravimetry is just the ratio, using appropriate units, of the amount of analyte relative to the amount of sample.A 200.0-mL sample of water is filtered through a pre-weighed glass fiber filter. After drying to constant weight at 105oC, the filter is found to have increased in mass by 48.2 mg. Determine the sample’s total suspended solids.SolutionOne ppm is equivalent to one mg of analyte per liter of solution; thus, the total suspended solids for the sample is\[\frac{48.2 \ \mathrm{mg} \text { solids }}{0.2000 \ \mathrm{L} \text { sample }}=241 \ \mathrm{ppm} \text { solids } \nonumber\]The scale of operation and the detection limit for particulate gravimetry can be extended beyond that of other gravimetric methods by increasing the size of the sample taken for analysis. This usually is impracticable for other gravimetric methods because it is difficult to manipulate a larger sample through the individual steps of the analysis. With particulate gravimetry, however, the part of the sample that is not analyte is removed when filtering or extracting. Consequently, particulate gravimetry easily is extended to the analysis of trace-level analytes.Except for methods that rely on a quartz crystal microbalance, particulate gravimetry uses the same balances as other gravimetric methods, and is capable of achieving similar levels of accuracy and precision. Because particulate gravimetry is defined in terms of the mass of the particle themselves, the sensitivity of the analysis is given by the balance’s sensitivity. Selectivity, on the other hand, is determined either by the filter’s pore size or by the properties of the extracting phase. 
Because they require only a single step, particulate gravimetric methods based on filtration generally require less time, labor, and capital than other gravimetric methods. This page titled 8.4: Particulate Gravimetry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
8.5: Problems
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/08%3A_Gravimetric_Methods/8.05%3A_Problems
1. Starting with the equilibrium constant expressions for reaction 8.2.1, and for reaction 8.2.3, reaction 8.2.4, and reaction 8.2.5, verify that equation 8.2.7 is correct. 2. Equation 8.2.7 explains how the solubility of AgCl varies as a function of the equilibrium concentration of Cl–. Derive a similar equation that describes the solubility of AgCl as a function of the equilibrium concentration of Ag+. Graph the resulting solubility function and compare it to that shown in figure 8.2.1. 3. Construct a solubility diagram for Zn(OH)2 that takes into account the following soluble zinc-hydroxide complexes: Zn(OH)+, \(\text{Zn(OH)}_3^-\), and \(\text{Zn(OH)}_4^{2-}\). What is the optimum pH for the quantitative precipitation of Zn(OH)2? For your solubility diagram, plot log(S) on the y-axis and pH on the x-axis. See the appendices for relevant equilibrium constants. 4. Starting with equation 8.2.10, verify that equation 8.2.11 is correct. 5. For each of the following precipitates, use a ladder diagram to identify the pH range where the precipitate has its lowest solubility. See the appendices for relevant equilibrium constants. (a) CaC2O4; (b) PbCrO4; (c) BaSO4; (d) SrCO3; (e) ZnS 6. Mixing solutions of 1.5 M KNO3 and 1.5 M HClO4 produces a precipitate of KClO4. If permanganate ions are present, an inclusion of KMnO4 is possible. Shown below are descriptions of two experiments in which KClO4 is precipitated in the presence of \(\text{MnO}_4^-\). Explain why the experiments lead to the different results shown in the figure below. Experiment (a). Place 1 mL of 1.5 M KNO3 in a test tube, add 3 drops of 0.1 M KMnO4, and swirl to mix. Add 1 mL of 1.5 M HClO4 dropwise, agitating the solution between drops. Destroy the excess KMnO4 by adding 0.1 M NaHSO3 dropwise. The resulting precipitate of KClO4 has an intense purple color. Experiment (b). Place 1 mL of 1.5 M HClO4 in a test tube, add 3 drops of 0.1 M KMnO4, and swirl to mix. Add 1 mL of 1.5 M KNO3 dropwise, agitating the solution between drops. Destroy the excess KMnO4 by adding 0.1 M NaHSO3 dropwise. The resulting precipitate of KClO4 has a pale purple color. 7. Mixing solutions of Ba(SCN)2 and MgSO4 produces a precipitate of BaSO4. Shown below are the descriptions and results for three experiments using different concentrations of Ba(SCN)2 and MgSO4. Explain why these experiments produce different results. Experiment 1. When equal volumes of 3.5 M Ba(SCN)2 and 3.5 M MgSO4 are mixed, a gelatinous precipitate forms immediately. Experiment 2. When equal volumes of 1.5 M Ba(SCN)2 and 1.5 M MgSO4 are mixed, a curdy precipitate forms immediately. Individual particles of BaSO4 are seen as points under a magnification of \(1500 \times\) (a particle size less than 0.2 μm). Experiment 3. When equal volumes of 0.5 mM Ba(SCN)2 and 0.5 mM MgSO4 are mixed, the complete precipitation of BaSO4 requires 2–3 h. Individual crystals of BaSO4 attain lengths of approximately 5 μm. 8. Aluminum is determined gravimetrically by precipitating Al(OH)3 and isolating Al2O3. A sample that contains approximately 0.1 g of Al is dissolved in 200 mL of H2O, and 5 g of NH4Cl and a few drops of methyl red indicator are added (methyl red is red at pH levels below 4 and yellow at pH levels above 6). The solution is heated to boiling and 1:1 NH3 is added dropwise until the indicator turns yellow, precipitating Al(OH)3. The precipitate is held at the solution’s boiling point for several minutes before filtering and rinsing with a hot solution of 2% w/v NH4NO3.
The precipitate is then ignited at 1000–1100oC, forming Al2O3.(a) Cite at least two ways in which this procedure encourages the formation of larger particles of precipitate.(b) The ignition step is carried out carefully to ensure the quantitative conversion of Al(OH)3 to Al2O3. What is the effect of an incomplete conversion on the %w/w Al?(c) What is the purpose of adding NH4Cl and methyl red indicator?(d) An alternative procedure for aluminum involves isolating and weighing the precipitate as the 8-hydroxyquinolate, Al(C9H6NO)3. Why might this be a more advantageous form of Al for a gravimetric analysis? Are there any disadvantages?9. Calcium is determined gravimetrically by precipitating CaC2O4•H2O and isolating CaCO3. After dissolving a sample in 10 mL of water and 15 mL of 6 M HCl, the resulting solution is heated to boiling and a warm solution of excess ammonium oxalate is added. The solution is maintained at 80oC and 6 M NH3 is added dropwise, with stirring, until the solution is faintly alkaline. The resulting precipitate and solution are removed from the heat and allowed to stand for at least one hour. After testing the solution for completeness of precipitation, the sample is filtered, rinsed with 0.1% w/v ammonium oxalate, and dried for one hour at 100–120oC. The precipitate is transferred to a muffle furnace where it is converted to CaCO3 by drying at 500 ± 25oC until constant weight.(a) Why is the precipitate of CaC2O4•H2O converted to CaCO3?(b) In the final step, if the sample is heated at too high of a temperature some CaCO3 is converted to CaO. What effect would this have on the reported %w/w Ca?(c) Why is the precipitant, (NH4)2C2O4, added to a hot, acidic solution instead of a cold, alkaline solution?10. Iron is determined gravimetrically by precipitating as Fe(OH)3 and igniting to Fe2O3. After dissolving a sample in 50 mL of H2O and 10 mL of 6 M HCl, any Fe2+ is converted Fe3+ by oxidizing with 1–2 mL of concentrated HNO3. The sample is heated to remove the oxides of nitrogen and the solution is diluted to 200 mL. After bringing the solution to a boil, Fe(OH)3 is precipitated by slowly adding 1:1 NH3 until an odor of NH3 is detected. The solution is boiled for an additional minute and the precipitate allowed to settle. The precipitate is then filtered and rinsed with several portions of hot 1% w/v NH4NO3 until no Cl– is found in the wash water. Finally, the precipitate is ignited to constant weight at 500–550oC and weighed as Fe2O3.(a) If ignition is not carried out under oxidizing conditions (plenty of O2 present), the final product may contain Fe3O4. What effect will this have on the reported %w/w Fe?(b) The precipitate is washed with a dilute solution of NH4NO3. Why is NH4NO3 added to the wash water?(c) Why does the procedure call for adding NH3 until the odor of ammonia is detected?(d) Describe how you might test the filtrate for Cl–.11. Sinha and Shome described a gravimetric method for molybdenum in which it is precipitated as MoO2(C13H10NO2)2 using n-benzoyl-phenylhydroxylamine, C13H11NO2, as the precipitant [Sinha, S. K.; Shome, S. C. Anal. Chim. Acta 1960, 24, 33–36]. The precipitate is weighed after igniting to MoO3. As part of their study, the authors determined the optimum conditions for the analysis. Samples that contained 0.0770 g of Mo each were taken through the procedure while varying the temperature, the amount of precipitant added, and the pH of the solution. The solution volume was held constant at 300 mL for all experiments. 
A summary of their results is shown in the following table. Based on these results, discuss the optimum conditions for determining Mo by this method. Express your results for the precipitant as the minimum %w/v in excess needed to ensure a quantitative precipitation.

12. A sample of an impure iron ore is approximately 55% w/w Fe. If the amount of Fe in the sample is determined gravimetrically by isolating it as Fe2O3, what mass of sample is needed to ensure that we isolate at least 1.0 g of Fe2O3?

13. The concentration of arsenic in an insecticide is determined gravimetrically by precipitating it as MgNH4AsO4 and isolating it as Mg2As2O7. Determine the %w/w As2O3 in a 1.627-g sample of insecticide if it yields 106.5 mg of Mg2As2O7.

14. After preparing a sample of alum, K2SO4•Al2(SO4)3•24H2O, an analyst determines its purity by dissolving a 1.2931-g sample and precipitating the aluminum as Al(OH)3. After filtering, rinsing, and igniting, 0.1357 g of Al2O3 is obtained. What is the purity of the alum preparation?

15. To determine the amount of iron in a dietary supplement, a random sample of 15 tablets with a total weight of 20.505 g is ground into a fine powder. A 3.116-g sample is dissolved and treated to precipitate the iron as Fe(OH)3. The precipitate is collected, rinsed, and ignited to a constant weight as Fe2O3, yielding 0.355 g. Report the iron content of the dietary supplement as g FeSO4•7H2O per tablet.

16. A 1.4639-g sample of limestone is analyzed for Fe, Ca, and Mg. The iron is determined as Fe2O3 yielding 0.0357 g. Calcium is isolated as CaSO4, yielding a precipitate of 1.4058 g, and Mg is isolated as 0.0672 g of Mg2P2O7. Report the amount of Fe, Ca, and Mg in the limestone sample as %w/w Fe2O3, %w/w CaO, and %w/w MgO.

17. The number of ethoxy groups (CH3CH2O–) in an organic compound is determined by the following two reactions.

\[\mathrm{R}\left(\mathrm{OCH}_{2} \mathrm{CH}_{3}\right)_{x}+x \mathrm{HI} \rightarrow \mathrm{R}(\mathrm{OH})_{x}+x \mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{I} \nonumber\]

\[\mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{I}+\mathrm{Ag}^{+}+\mathrm{H}_{2} \mathrm{O} \rightarrow \operatorname{AgI}(s)+\mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{OH}\nonumber\]

A 36.92-mg sample of an organic compound with an approximate molecular weight of 176 is treated in this fashion, yielding 0.1478 g of AgI. How many ethoxy groups are there in each molecule of the compound?

18. A 516.7-mg sample that contains a mixture of K2SO4 and (NH4)2SO4 is dissolved in water and treated with BaCl2, precipitating the \(\text{SO}_4^{2-}\) as BaSO4. The resulting precipitate is isolated by filtration, rinsed free of impurities, and dried to a constant weight, yielding 863.5 mg of BaSO4. What is the %w/w K2SO4 in the sample?

19. The amount of iron and manganese in an alloy is determined by precipitating the metals with 8-hydroxyquinoline, C9H7NO. After weighing the mixed precipitate, the precipitate is dissolved and the amount of 8-hydroxyquinoline determined by another method. In a typical analysis a 127.3-mg sample of an alloy containing iron, manganese, and other metals is dissolved in acid and treated with appropriate masking agents to prevent an interference from other metals. The iron and manganese are precipitated and isolated as Fe(C9H6NO)3 and Mn(C9H6NO)2, yielding a total mass of 867.8 mg. The amount of 8-hydroxyquinolate in the mixed precipitate is determined to be 5.276 mmol. Calculate the %w/w Fe and %w/w Mn in the alloy.
20. A 0.8612-g sample of a mixture of NaBr, NaI, and NaNO3 is analyzed by adding AgNO3 and precipitating a 1.0186-g mixture of AgBr and AgI. The precipitate is then heated in a stream of Cl2, which converts it to 0.7125 g of AgCl. Calculate the %w/w NaNO3 in the sample.

21. The earliest determinations of elemental atomic weights were accomplished gravimetrically. To determine the atomic weight of manganese, a carefully purified sample of MnBr2 weighing 7.16539 g is dissolved and the Br– precipitated as AgBr, yielding 12.53112 g. What is the atomic weight for Mn if the atomic weights for Ag and Br are taken to be 107.868 and 79.904, respectively?

22. While working as a laboratory assistant you prepared 0.4 M solutions of AgNO3, Pb(NO3)2, BaCl2, KI and Na2SO4. Unfortunately, you became distracted and forgot to label the solutions before leaving the laboratory. Realizing your error, you label the solutions A–E and perform all possible binary mixtures of the five solutions, obtaining the results shown in the figure below (key: NP means no precipitate formed, W means a white precipitate formed, and Y means a yellow precipitate formed). Identify solutions A–E.

23. A solid sample has approximately equal amounts of two or more of the following soluble salts: AgNO3, ZnCl2, K2CO3, MgSO4, Ba(C2H3O2)2, and NH4NO3. A sample of the solid, sufficient to give at least 0.04 moles of any single salt, is added to 100 mL of water, yielding a white precipitate and a clear solution. The precipitate is collected and rinsed with water. When a portion of the precipitate is placed in dilute HNO3 it completely dissolves, leaving a colorless solution. A second portion of the precipitate is placed in dilute HCl, yielding a solid and a clear solution; when its filtrate is treated with excess NH3, a white precipitate forms. Identify the salts that must be present in the sample, the salts that must be absent, and the salts for which there is insufficient information to make this determination [Adapted from Sorum, C. H.; Lagowski, J. J. Introduction to Semimicro Qualitative Analysis, Prentice-Hall: Englewood Cliffs, N. J., 5th Ed., 1977, p. 285].

24. Two methods have been proposed for the analysis of pyrite, FeS2, in impure samples of the ore. In the first method, the sulfur in FeS2 is determined by oxidizing it to \(\text{SO}_4^{2-}\) and precipitating it as BaSO4. In the second method, the iron in FeS2 is determined by precipitating the iron as Fe(OH)3 and isolating it as Fe2O3. Which of these methods provides the more sensitive determination for pyrite? What other factors should you consider in choosing between these methods?

25. A sample of impure pyrite that is approximately 90–95% w/w FeS2 is analyzed by oxidizing the sulfur to \(\text{SO}_4^{2-}\) and precipitating it as BaSO4. How many grams of the sample should you take to ensure that you obtain at least 1.0 g of BaSO4?

26. A series of samples that contain any possible combination of KCl, NaCl, and NH4Cl is to be analyzed by adding AgNO3 and precipitating AgCl. What is the minimum volume of 5% w/v AgNO3 necessary to precipitate completely the chloride in any 0.5-g sample?

27. If a precipitate of known stoichiometry does not form, a gravimetric analysis is still feasible if we can establish experimentally the mole ratio between the analyte and the precipitate. Consider, for example, the precipitation gravimetric analysis of Pb as PbCrO4 [Grote, F. Z. Anal. Chem. 1941, 122, 395–398].
(a) For each gram of Pb, how many grams of PbCrO4 will form, assuming the reaction is stoichiometric?

(b) In a study of this procedure, Grote found that 1.568 g of PbCrO4 formed for each gram of Pb. What is the apparent stoichiometry between Pb and PbCrO4?

(c) Does failing to account for the actual stoichiometry lead to a positive determinate error or a negative determinate error?

28. Determine the uncertainty for the gravimetric analysis described in example 8.2.1. The expected accuracy for a gravimetric method is 0.1–0.2%. What additional sources of error might account for the difference between your estimated uncertainty and the expected accuracy?

29. A 38.63-mg sample of potassium ozonide, KO3, is heated to 70oC for 1 h, undergoing a weight loss of 7.10 mg. A 29.6-mg sample of impure KO3 experiences a 4.86-mg weight loss when treated under similar conditions. What is the %w/w KO3 in the sample?

30. The water content of an 875.4-mg sample of cheese is determined with a moisture analyzer. What is the %w/w H2O in the cheese if the final mass was found to be 545.8 mg?

31. Representative Method 8.3.1 describes a procedure for determining Si in ores and alloys. In this analysis a weight loss of 0.21 g corresponds to 0.1 g of Si. Show that this relationship is correct.

32. The iron in an organometallic compound is determined by treating a 0.4873-g sample with HNO3 and heating to volatilize the organic material. After ignition, the residue of Fe2O3 weighs 0.2091 g.

(a) What is the %w/w Fe in this compound?

(b) The carbon and hydrogen in a second sample of the compound are determined by a combustion analysis. When a 0.5123-g sample is carried through the analysis, 1.2119 g of CO2 and 0.2482 g of H2O are collected. What are the %w/w C and %w/w H in this compound and what is the compound’s empirical formula?

33. A polymer’s ash content is determined by placing a weighed sample in a Pt crucible previously brought to a constant weight. The polymer is melted using a Bunsen burner until the volatile vapor ignites and then allowed to burn until a non-combustible residue remains. The residue then is brought to constant weight at 800oC in a muffle furnace. The following data were collected for two samples of a polymer resin.

(a) For each polymer, determine the mean and the standard deviation for the %w/w ash.

(b) Is there any evidence at \(\alpha = 0.05\) for a significant difference between the two polymers? See the appendices for statistical tables.

34. In the presence of water vapor the surface of zirconia, ZrO2, chemically adsorbs H2O, forming surface hydroxyls, ZrOH (additional water is physically adsorbed as H2O). When heated above 200oC, the surface hydroxyls convert to H2O(g), releasing one molecule of water for every two surface hydroxyls. Below 200oC only physically adsorbed water is lost. Nawrocki, et al. used thermogravimetry to determine the density of surface hydroxyls on a sample of zirconia that was heated to 700oC and cooled in a desiccator containing humid N2 [Nawrocki, J.; Carr, P. W.; Annen, M. J.; Froelicher, S. Anal. Chim. Acta 1996, 327, 261–266]. Heating the sample from 200oC to 900oC released 0.006 g of H2O for every gram of dehydroxylated ZrO2. Given that the zirconia had a surface area of 33 m2/g and that one molecule of H2O forms two surface hydroxyls, calculate the density of surface hydroxyls in μmol/m2.
35. The concentration of airborne particulates in an industrial workplace is determined by pulling the air for 20 min through a single-stage air sampler equipped with a glass-fiber filter at a rate of 75 m3/h. At the end of the sampling period, the filter’s mass is found to have increased by 345.2 mg. What is the concentration of particulates in the air sample in mg/m3 and mg/L?

36. The fat content of potato chips is determined indirectly by weighing a sample before and after extracting the fat with supercritical CO2. The following data were obtained for the analysis of potato chips [Fat Determination by SFE, ISCO, Inc. Lincoln, NE].

(a) Determine the mean and standard deviation for the %w/w fat.

(b) This sample of potato chips is known to have a fat content of 22.7% w/w. Is there any evidence for a determinate error at \(\alpha = 0.05\)? See the appendices for statistical tables.

37. Delumyea and McCleary reported results for the %w/w organic material in sediment samples collected at different depths from a cove on the St. Johns River in Jacksonville, FL [Delumyea, R. D.; McCleary, D. L. J. Chem. Educ. 1993, 70, 172–173]. After collecting a sediment core, they sectioned it into 2-cm increments. Each increment was treated using the following procedure: Using the following data, determine the %w/w organic matter as a function of the average depth for each increment. Prepare a plot showing how the %w/w organic matter varies with depth and comment on your results.

38. Yao, et al. described a method for the quantitative analysis of thiourea based on its reaction with I2 [Yao, S. F.; He, F. J.; Nie, L. H. Anal. Chim. Acta 1992, 268, 311–314].

\[\mathrm{CS}\left(\mathrm{NH}_{2}\right)_{2}+4 \mathrm{I}_{2}+6 \mathrm{H}_{2} \mathrm{O} \longrightarrow\left(\mathrm{NH}_{4}\right)_{2} \mathrm{SO}_{4}+8 \mathrm{HI}+\mathrm{CO}_{2} \nonumber\]

The procedure calls for placing a 100-μL aqueous sample that contains thiourea in a 60-mL separatory funnel and adding 10 mL of a pH 7 buffer and 10 mL of 12 μM I2 in CCl4. The contents of the separatory funnel are shaken and the organic and aqueous layers allowed to separate. The organic layer, which contains the excess I2, is transferred to the surface of a piezoelectric crystal on which a thin layer of Au has been deposited. After allowing the I2 to adsorb to the Au, the CCl4 is removed and the crystal’s frequency shift, \(\Delta f\), measured. The following data are reported for a series of thiourea standards.

(a) Characterize this method with respect to the scale of operation shown in figure 3.4.1 of Chapter 3.

(b) Prepare a calibration curve and use a regression analysis to determine the relationship between the crystal’s frequency shift and the concentration of thiourea.

(c) If a sample that contains an unknown amount of thiourea gives a \(\Delta f\) of 176 Hz, what is the molar concentration of thiourea in the sample?

(d) What is the 95% confidence interval for the concentration of thiourea in this sample assuming one replicate? See the appendices for statistical tables.

This page titled 8.5: Problems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
8.7: Chapter Summary and Key Terms
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/08%3A_Gravimetric_Methods/8.07%3A_Chapter_Summary_and_Key_Terms
In a gravimetric analysis, a measurement of mass or a change in mass provides quantitative information about the analyte. The most common form of gravimetry uses a precipitation reaction to generate a product whose mass is proportional to the amount of analyte. In many cases the precipitate includes the analyte; however, an indirect analysis in which the analyte causes the precipitation of another compound also is possible. Precipitation gravimetric procedures must be carefully controlled to produce precipitates that are easy to filter, free from impurities, and of known stoichiometry.

In volatilization gravimetry, thermal or chemical energy decomposes the sample containing the analyte. The mass of residue that remains after decomposition, the mass of volatile products collected using a suitable trap, or a change in mass due to the loss of volatile material are all gravimetric measurements.

When the analyte is already present in a particulate form that is easy to separate from its matrix, then a particulate gravimetric analysis is feasible. Examples include the determination of dissolved solids and the determination of fat in foods.

Key terms: coagulation, definitive technique, electrogravimetry, ignition, occlusion, precipitant, relative supersaturation, surface adsorbate, volatilization gravimetry, conservation of mass, digestion, gravimetry, inclusion, particulate gravimetry, precipitation gravimetry, reprecipitation, thermogram, coprecipitate, direct analysis, homogeneous precipitation, indirect analysis, peptization, quartz crystal microbalance, supernatant, thermogravimetry.

This page titled 8.7: Chapter Summary and Key Terms is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
9.1: Overview of Titrimetry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/09%3A_Titrimetric_Methods/9.01%3A_Overview_of_Titrimetry
In titrimetry we add a reagent, called the titrant, to a solution that contains another reagent, called the titrand, and allow them to react. The type of reaction provides us with a simple way to divide titrimetry into four categories: acid–base titrations, in which an acidic or basic titrant reacts with a titrand that is a base or an acid; complexometric titrations , which are based on metal–ligand complexation; redox titrations, in which the titrant is an oxidizing or reducing agent; and precipitation titrations, in which the titrand and titrant form a precipitate.We will deliberately avoid the term analyte at this point in our introduction to titrimetry. Although in most titrations the analyte is the titrand, there are circumstances where the analyte is the titrant. Later, when we discuss specific titrimetric methods, we will use the term analyte where appropriate.Despite their difference in chemistry, all titrations share several common features. Before we consider individual titrimetric methods in greater detail, let’s take a moment to consider some of these similarities. As you work through this chapter, this overview will help you focus on the similarities between different titrimetric methods. You will find it easier to understand a new analytical method when you can see its relationship to other similar methods.If a titration is to give an accurate result we must combine the titrand and the titrant in stoichiometrically equivalent amounts. We call this stoichiometric mixture the equivalence point. Unlike precipitation gravimetry, where we add the precipitant in excess, an accurate titration requires that we know the exact volume of titrant at the equivalence point, Veq. The product of the titrant’s equivalence point volume and its molarity, MT, is equal to the moles of titrant that react with the titrand.\[\text { moles titrant }=M_{T} \times V_{e q} \nonumber\]If we know the stoichiometry of the titration reaction, then we can calculate the moles of titrand.Unfortunately, for most titration reactions there is no obvious sign when we reach the equivalence point. Instead, we stop adding the titrant at an end point of our choosing. Often this end point is a change in the color of a substance, called an indicator, that we add to the titrand’s solution. The difference between the end point’s volume and the equivalence point’s volume is a determinate titration error. If the end point and the equivalence point volumes coincide closely, then this error is insignificant and is safely ignored. Clearly, selecting an appropriate end point is of critical importance.Instead of measuring the titrant’s volume, we may choose to measure its mass. Although generally we can measure mass more precisely than we can measure volume, the simplicity of a volumetric titration makes it the more popular choice.Almost any chemical reaction can serve as a titrimetric method provided that it meets the following four conditions. The first condition is that we must know the stoichiometry between the titrant and the titrand. If this is not the case, then we cannot convert the moles of titrant used to reach the end point to the moles of titrand in our sample. Second, the titration reaction effectively must proceed to completion; that is, the stoichiometric mixing of the titrant and the titrand must result in their complete reaction. Third, the titration reaction must occur rapidly. 
If we add the titrant faster than it can react with the titrand, then the end point and the equivalence point will differ significantly. Finally, we must have a suitable method for accurately determining the end point. These are significant limitations and, for this reason, there are several common titration strategies.Depending on how we are detecting the endpoint, we may stop the titration too early or too late. If the end point is a function of the titrant’s concentration, then adding the titrant too quickly leads to an early end point. On the other hand, if the end point is a function of the titrand's concentration, then the end point exceeds the equivalence point.A simple example of a titration is an analysis for Ag+ using thiocyanate, SCN–, as a titrant.\[\mathrm{Ag}^{+}(a q)+\mathrm{SCN}^{-}(a q)\rightleftharpoons\mathrm{Ag}(\mathrm{SCN})(s) \nonumber\]This reaction occurs quickly and with a known stoichiometry, which satisfies two of our requirements. To indicate the titration’s end point, we add a small amount of Fe3+ to the analyte’s solution before we begin the titration. When the reaction between Ag+ and SCN– is complete, formation of the red-colored Fe(SCN)2+ complex signals the end point. This is an example of a direct titration since the titrant reacts directly with the analyte.This is an example of a precipitation titration. You will find more information about precipitation titrations later in this chapter.If the titration’s reaction is too slow, if a suitable indicator is not available, or if there is no useful direct titration reaction, then an indirect analysis may be possible. Suppose you wish to determine the concentration of formaldehyde, H2CO, in an aqueous solution. The oxidation of H2CO by \(\text{I}_3^-\)\[\mathrm{H}_{2} \mathrm{CO}(a q)+\mathrm{I}_{3}^-(a q)+3 \mathrm{OH}^{-}(a q)\rightleftharpoons\mathrm{HCO}_{2}^{-}(a q)+3 \mathrm{I}^{-}(a q)+2 \mathrm{H}_{2} \mathrm{O} \nonumber\]is a useful reaction, but it is too slow for a titration. If we add a known excess of \(\text{I}_3^-\) and allow its reaction with H2CO to go to completion, we can titrate the unreacted \(\text{I}_3^-\) with thiosulfate, \(\text{S}_2\text{O}_3^{2-}\).\[\mathrm{I}_{3}^{-}(a q)+2 \mathrm{S}_{2} \mathrm{O}_{3}^{2-}(a q)\rightleftharpoons\mathrm{S}_{4} \mathrm{O}_{6}^{2-}(a q)+3 \mathrm{I}^{-}(a q) \nonumber\]The difference between the initial amount of \(\text{I}_3^-\) and the amount in excess gives us the amount of \(\text{I}_3^-\) that reacts with the formaldehyde. This is an example of a back titration.This is an example of a redox titration. You will find more information about redox titrations later in this chapter.Calcium ions play an important role in many environmental systems. A direct analysis for Ca2+ might take advantage of its reaction with the ligand ethylenediaminetetraacetic acid (EDTA), which we represent here as Y4–.\[\mathrm{Ca}^{2+}(a q)+\mathrm{Y}^{4-}(a q)\rightleftharpoons\mathrm{CaY}^{2-}(a q) \nonumber\]Unfortunately, for most samples this titration does not have a useful indicator. Instead, we react the Ca2+ with an excess of MgY2–\[\mathrm{Ca}^{2+}(a q)+\mathrm{MgY}^{2-}(a q)\rightleftharpoons\mathrm{Ca} \mathrm{Y}^{2-}(a q)+\mathrm{Mg}^{2+}(a q) \nonumber\]releasing an amount of Mg2+ equivalent to the amount of Ca2+ in the sample. Because the titration of Mg2+ with EDTA\[\mathrm{Mg}^{2+}(a q)+\mathrm{Y}^{4-}(a q)\rightleftharpoons\mathrm{MgY}^{2-}(a q) \nonumber\]has a suitable end point, we can complete the analysis. 
The amount of EDTA used in the titration provides an indirect measure of the amount of Ca2+ in the original sample. Because the species we are titrating was displaced by the analyte, we call this a displacement titration.MgY2– is the Mg2+–EDTA metal–ligand complex. You can prepare a solution of MgY2– by combining equimolar solutions of Mg2+ and EDTA. This is an example of a complexation titration. You will find more information about complexation titrations later in this chapter.If a suitable reaction with the analyte does not exist it may be possible to generate a species that we can titrate. For example, we can determine the sulfur content of coal by using a combustion reaction to convert sulfur to sulfur dioxide\[\mathrm{S}(s)+\mathrm{O}_{2}(g) \rightarrow \mathrm{SO}_{2}(g) \nonumber\]and then convert the SO2 to sulfuric acid, H2SO4, by bubbling it through an aqueous solution of hydrogen peroxide, H2O2.\[\mathrm{SO}_{2}(g)+\mathrm{H}_{2} \mathrm{O}_{2}(a q) \longrightarrow \mathrm{H}_{2} \mathrm{SO}_{4}(a q) \nonumber\]Titrating H2SO4 with NaOH\[\mathrm{H}_{2} \mathrm{SO}_{4}(a q)+2 \mathrm{NaOH}(a q)\rightleftharpoons2 \mathrm{H}_{2} \mathrm{O}(l )+\mathrm{Na}_{2} \mathrm{SO}_{4}(a q) \nonumber\]provides an indirect determination of sulfur.This is an example of an acid–base titration. You will find more information about acid–base titrations later in this chapter.To find a titration’s end point, we need to monitor some property of the reaction that has a well-defined value at the equivalence point. For example, the equivalence point for a titration of HCl with NaOH occurs at a pH of 7.0. A simple method for finding the equivalence point is to monitor the titration mixture’s pH using a pH electrode, stopping the titration when we reach a pH of 7.0. Alternatively, we can add an indicator to the titrand’s solution that changes color at a pH of 7.0.Why a pH of 7.0 is the equivalence point for this titration is a topic we will cover later in the section on acid–base titrations.Suppose the only available indicator changes color at a pH of 6.8. Is the difference between this end point and the equivalence point small enough that we safely can ignore the titration error? To answer this question we need to know how the pH changes during the titration.A titration curve provides a visual picture of how a property of the titration reaction changes as we add the titrant to the titrand. The titration curve in Figure 9.1.1 , for example, was obtained by suspending a pH electrode in a solution of 0.100 M HCl (the titrand) and monitoring the pH while adding 0.100 M NaOH (the titrant). A close examination of this titration curve should convince you that an end point pH of 6.8 produces a negligible titration error. Selecting a pH of 11.6 as the end point, however, produces an unacceptably large titration error.For the titration curve in Figure 9.1.1 , the volume of titrant to reach a pH of 6.8 is 24.99995 mL, a titration error of \(-2.00 \times 10^{-4}\)% relative to the equivalence point of 25.00 mL. Typically, we can read the volume only to the nearest ±0.01 mL, which means this uncertainty is too small to affect our results. The volume of titrant to reach a pH of 11.6 is 27.07 mL, or a titration error of +8.28%. This is a significant error.The shape of the titration curve in Figure 9.1.1 is not unique to an acid–base titration. 
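Before continuing, note that the percent titration error quoted above is simply the relative difference between the end point volume and the equivalence point volume. The short Python sketch below reproduces that arithmetic; the function and variable names are our own choices for illustration, and the two end point volumes are the values quoted for Figure 9.1.1.

```python
def titration_error(v_end, v_eq):
    """Percent titration error for an end point volume relative to the
    equivalence point volume (both volumes in the same units)."""
    return 100 * (v_end - v_eq) / v_eq

v_eq = 25.00  # mL of 0.100 M NaOH at the equivalence point (Figure 9.1.1)

print(titration_error(24.99995, v_eq))  # end point at pH 6.8: about -2.0e-4 %
print(titration_error(27.07, v_eq))     # end point at pH 11.6: about +8.28 %
```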
Any titration curve that follows the change in concentration of a species in the titration reaction (plotted logarithmically) as a function of the titrant’s volume has the same general sigmoidal shape. Several additional examples are shown in Figure 9.1.2 .The titrand’s or the titrant’s concentration is not the only property we can use to record a titration curve. Other parameters, such as the temperature or absorbance of the titrand’s solution, may provide a useful end point signal. Many acid–base titration reactions, for example, are exothermic. As the titrant and the titrand react, the temperature of the titrand’s solution increases. Once we reach the equivalence point, further additions of titrant do not produce as exothermic a response. Figure 9.1.3 shows a typical thermometric titration curve where the intersection of the two linear segments indicates the equivalence point.The only essential equipment for an acid–base titration is a means for delivering the titrant to the titrand’s solution. The most common method for delivering titrant is a buret (Figure 9.1.4 ), which is a long, narrow tube with graduated markings and equipped with a stopcock for dispensing the titrant. The buret’s small internal diameter provides a better defined meniscus, making it easier to read precisely the titrant’s volume. Burets are available in a variety of sizes and tolerances (Table 9.1.1 ), with the choice of buret determined by the needs of the analysis. You can improve a buret’s accuracy by calibrating it over several intermediate ranges of volumes using the method described in Chapter 5 for calibrating pipets. Calibrating a buret corrects for variations in the buret’s internal diameter. An automated titration uses a pump to deliver the titrant at a constant flow rate (Figure 9.1.5 ). Automated titrations offer the additional advantage of using a microcomputer for data storage and analysis.This page titled 9.1: Overview of Titrimetry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
9.2: Acid–Base Titrations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/09%3A_Titrimetric_Methods/9.02%3A_AcidBase_Titrations
Before 1800, most acid–base titrations used H2SO4, HCl, or HNO3 as acidic titrants, and K2CO3 or Na2CO3 as basic titrants. A titration’s end point was determined using litmus as an indicator, which is red in acidic solutions and blue in basic solutions, or by the cessation of CO2 effervescence when neutralizing \(\text{CO}_3^{2-}\). Early examples of acid–base titrimetry include determining the acidity or alkalinity of solutions, and determining the purity of carbonates and alkaline earth oxides.The determination of acidity and alkalinity continue to be important applications of acid–base titrimetry. We will take a closer look at these applications later in this section.Three limitations slowed the development of acid–base titrimetry: the lack of a strong base titrant for the analysis of weak acids, the lack of suitable indicators, and the absence of a theory of acid–base reactivity. The introduction, in 1846, of NaOH as a strong base titrant extended acid–base titrimetry to the determination of weak acids. The synthesis of organic dyes provided many new indicators. Phenolphthalein, for example, was first synthesized by Bayer in 1871 and used as an indicator for acid–base titrations in 1877.Despite the increased availability of indicators, the absence of a theory of acid–base reactivity made it difficult to select an indicator. The development of equilibrium theory in the late 19th century led to significant improvements in the theoretical understanding of acid–base chemistry, and, in turn, of acid–base titrimetry. Sørenson’s establishment of the pH scale in 1909 provided a rigorous means to compare indicators. The determination of acid–base dissociation constants made it possible to calculate a theoretical titration curve, as outlined by Bjerrum in 1914. For the first time analytical chemists had a rational method for selecting an indicator, making acid–base titrimetry a useful alternative to gravimetry.In the overview to this chapter we noted that a titration’s end point should coincide with its equivalence point. To understand the relationship between an acid–base titration’s end point and its equivalence point we must know how the titrand’s pH changes during a titration. In this section we will learn how to calculate a titration curve using the equilibrium calculations from Chapter 6. We also will learn how to sketch a good approximation of any acid–base titration curve using a limited number of simple calculations.For our first titration curve, let’s consider the titration of 50.0 mL of 0.100 M HCl using a titrant of 0.200 M NaOH. When a strong base and a strong acid react the only reaction of importance is\[\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{OH}^{-}(a q) \rightarrow 2 \mathrm{H}_{2} \mathrm{O}(\mathrm{l}) \label{9.1}\]Although we have not written reaction \ref{9.1} as an equilibrium reaction, it is at equilibrium; however, because its equilibrium constant is large—it is (Kw)–1 or \(1.00 \times 10^{14}\)—we can treat reaction \ref{9.1} as though it goes to completion.The first task is to calculate the volume of NaOH needed to reach the equivalence point, Veq. At the equivalence point we know from reaction \ref{9.1} that\[\begin{aligned} \text { moles } \mathrm{HCl}=& \text { moles } \mathrm{NaOH} \\ M_{a} \times V_{a} &=M_{b} \times V_{b} \end{aligned} \nonumber\]where the subscript ‘a’ indicates the acid, HCl, and the subscript ‘b’ indicates the base, NaOH. 
The volume of NaOH needed to reach the equivalence point is\[V_{e q}=V_{b}=\frac{M_{a} V_{a}}{M_{b}}=\frac{(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})}{(0.200 \ \mathrm{M})}=25.0 \ \mathrm{mL} \nonumber\]Before the equivalence point, HCl is present in excess and the pH is determined by the concentration of unreacted HCl. At the start of the titration the solution is 0.100 M in HCl, which, because HCl is a strong acid, means the pH is\[\mathrm{pH}=-\log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=-\log \left[\text{HCl} \right] = -\log (0.100)=1.00 \nonumber\]After adding 10.0 mL of NaOH the concentration of excess HCl is\[[\text{HCl}] = \frac {(\text{mol HCl})_\text{initial} - (\text{mol NaOH})_\text{added}} {\text{total volume}} = \frac {M_a V_a - M_b V_b} {V_a + V_b} \nonumber\]\[[\mathrm{HCl}]=\frac{(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})-(0.200 \ \mathrm{M})(10.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}}=0.0500 \ \mathrm{M} \nonumber\]and the pH increases to 1.30.At the equivalence point the moles of HCl and the moles of NaOH are equal. Since neither the acid nor the base is in excess, the pH is determined by the dissociation of water.\[\begin{array}{c}{K_{w}=1.00 \times 10^{-14}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{OH}^{-}\right]=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}} \\ {\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=1.00 \times 10^{-7}}\end{array} \nonumber\]Thus, the pH at the equivalence point is 7.00.For volumes of NaOH greater than the equivalence point, the pH is determined by the concentration of excess OH–. For example, after adding 30.0 mL of titrant the concentration of OH– is\[[\text{OH}^-] = \frac {(\text{mol NaOH})_\text{added} - (\text{mol HCl})_\text{initial}} {\text{total volume}} = \frac {M_b V_b - M_a V_a} {V_a + V_b} \nonumber\]\[\left[\mathrm{OH}^{-}\right]=\frac{(0.200 \ \mathrm{M})(30.0 \ \mathrm{mL})-(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})}{30.0 \ \mathrm{mL}+50.0 \ \mathrm{mL}}=0.0125 \ \mathrm{M} \nonumber\]To find the concentration of H3O+ we use the Kw expression\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{\mathrm{w}}}{\left[\mathrm{OH}^{-}\right]}=\frac{1.00 \times 10^{-14}}{0.0125}=8.00 \times 10^{-13} \ \mathrm{M} \nonumber\]to find that the pH is 12.10. Table 9.2.1 and Figure 9.2.1 show additional results for this titration curve. You can use this same approach to calculate the titration curve for the titration of a strong base with a strong acid, except the strong base is in excess before the equivalence point and the strong acid is in excess after the equivalence point.Construct a titration curve for the titration of 25.0 mL of 0.125 M NaOH with 0.0625 M HCl.The volume of HCl needed to reach the equivalence point is\[V_{e q}=V_{a}=\frac{M_{b} V_{b}}{M_{a}}=\frac{(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})}{(0.0625 \ \mathrm{M})}=50.0 \ \mathrm{mL} \nonumber\]Before the equivalence point, NaOH is present in excess and the pH is determined by the concentration of unreacted OH–. 
For example, after adding 10.0 mL of HCl\[\begin{array}{c}{\left[\mathrm{OH}^{-}\right]=\frac{(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})-(0.0625 \mathrm{M})(10.0 \ \mathrm{mL})}{25.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}}=0.0714 \ \mathrm{M}} \\ {\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{w}}{\left[\mathrm{OH}^{-}\right]}=\frac{1.00 \times 10^{-14}}{0.0714 \ \mathrm{M}}=1.40 \times 10^{-13} \ \mathrm{M}}\end{array} \nonumber\]the pH is 12.85.For the titration of a strong base with a strong acid the pH at the equivalence point is 7.00.For volumes of HCl greater than the equivalence point, the pH is determined by the concentration of excess HCl. For example, after adding 70.0 mL of titrant the concentration of HCl is\[[\mathrm{HCl}]=\frac{(0.0625 \ \mathrm{M})(70.0 \ \mathrm{mL})-(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})}{70.0 \ \mathrm{mL}+25.0 \ \mathrm{mL}}=0.0132 \ \mathrm{M} \nonumber\]giving a pH of 1.88. Some additional results are shown here.For this example, let’s consider the titration of 50.0 mL of 0.100 M acetic acid, CH3COOH, with 0.200 M NaOH. Again, we start by calculating the volume of NaOH needed to reach the equivalence point; thus\[\operatorname{mol} \ \mathrm{CH}_{3} \mathrm{COOH}=\mathrm{mol} \ \mathrm{NaOH} \nonumber\]\[M_{a} \times V_{a}=M_{b} \times V_{b} \nonumber\]\[V_{e q}=V_{b}=\frac{M_{a} V_{a}}{M_{b}}=\frac{(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})}{(0.200 \ \mathrm{M})}=25.0 \ \mathrm{mL} \nonumber\]Before we begin the titration the pH is that for a solution of 0.100 M acetic acid. Because acetic acid is a weak acid, we calculate the pH using the method outlined in Chapter 6\[\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \nonumber\]\[K_{a}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{CH}_{3} \mathrm{COO}^-\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]}=\frac{(x)(x)}{0.100-x}=1.75 \times 10^{-5} \nonumber\]finding that the pH is 2.88.Adding NaOH converts a portion of the acetic acid to its conjugate base, CH3COO–.\[\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{OH}^{-}(a q) \longrightarrow \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \label{9.2}\]Because the equilibrium constant for reaction \ref{9.2} is quite large\[K=K_{\mathrm{a}} / K_{\mathrm{w}}=1.75 \times 10^{9} \nonumber\]we can treat the reaction as if it goes to completion.Any solution that contains comparable amounts of a weak acid, HA, and its conjugate weak base, A–, is a buffer. 
As we learned in Chapter 6, we can calculate the pH of a buffer using the Henderson–Hasselbalch equation.\[\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{\left[\mathrm{A}^{-}\right]}{[\mathrm{HA}]} \nonumber\]Before the equivalence point the concentration of unreacted acetic acid is\[\left[\text{CH}_3\text{COOH}\right] = \frac {(\text{mol CH}_3\text{COOH})_\text{initial} - (\text{mol NaOH})_\text{added}} {\text{total volume}} = \frac {M_a V_a - M_b V_b} {V_a + V_b} \nonumber\]and the concentration of acetate is\[[\text{CH}_3\text{COO}^-] = \frac {(\text{mol NaOH})_\text{added}} {\text{total volume}} = \frac {M_b V_b} {V_a + V_b} \nonumber\]For example, after adding 10.0 mL of NaOH the concentrations of CH3COOH and CH3COO– are\[\left[\mathrm{CH}_{3} \mathrm{COOH}\right]=\frac{(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})-(0.200 \ \mathrm{M})(10.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}} = 0.0500 \text{ M} \nonumber\]\[\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]=\frac{(0.200 \ \mathrm{M})(10.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}}=0.0333 \ \mathrm{M} \nonumber\]which gives us a pH of\[\mathrm{pH}=4.76+\log \frac{0.0333 \ \mathrm{M}}{0.0500 \ \mathrm{M}}=4.58 \nonumber\]At the equivalence point the moles of acetic acid initially present and the moles of NaOH added are identical. Because their reaction effectively proceeds to completion, the predominate ion in solution is CH3COO–, which is a weak base. To calculate the pH we first determine the concentration of CH3COO–\[\left[\mathrm{CH}_{3} \mathrm{COO}^-\right]=\frac{(\mathrm{mol} \ \mathrm{NaOH})_{\mathrm{added}}}{\text { total volume }}= \frac{(0.200 \ \mathrm{M})(25.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+25.0 \ \mathrm{mL}}=0.0667 \ \mathrm{M} \nonumber\]Alternatively, we can calculate acetate’s concentration using the initial moles of acetic acid; thus\[\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]=\frac{\left(\mathrm{mol} \ \mathrm{CH}_{3} \mathrm{COOH}\right)_{\mathrm{initial}}}{\text { total volume }} = \frac{(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+25.0 \ \mathrm{mL}} = 0.0667 \text{ M} \nonumber\]Next, we calculate the pH of the weak base as shown earlier in Chapter 6\[\mathrm{CH}_{3} \mathrm{COO}^{-}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{OH}^{-}(a q)+\mathrm{CH}_{3} \mathrm{COOH}(a q) \nonumber\]\[K_{\mathrm{b}}=\frac{\left[\mathrm{OH}^{-}\right]\left[\mathrm{CH}_{3} \mathrm{COOH}\right]}{\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}=\frac{(x)(x)}{0.0667-x}=5.71 \times 10^{-10} \nonumber\]\[x=\left[\mathrm{OH}^{-}\right]=6.17 \times 10^{-6} \ \mathrm{M} \nonumber\]\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{\mathrm{w}}}{\left[\mathrm{OH}^{-}\right]}=\frac{1.00 \times 10^{-14}}{6.17 \times 10^{-6}}=1.62 \times 10^{-9} \ \mathrm{M} \nonumber\]finding that the pH at the equivalence point is 8.79.After the equivalence point, the titrant is in excess and the titration mixture is a dilute solution of NaOH. We can calculate the pH using the same strategy as in the titration of a strong acid with a strong base. For example, after adding 30.0 mL of NaOH the concentration of OH– is\[\left[\mathrm{OH}^{-}\right]=\frac{(0.200 \ \mathrm{M})(30.0 \ \mathrm{mL})-(0.100 \ \mathrm{M})(50.0 \ \mathrm{mL})}{30.0 \ \mathrm{mL}+50.0 \ \mathrm{mL}}=0.0125 \ \mathrm{M} \nonumber\]\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{\mathrm{w}}}{\left[\mathrm{OH}^{-}\right]}=\frac{1.00 \times 10^{-14}}{0.0125}=8.00 \times 10^{-13} \ \mathrm{M} \nonumber\]giving a pH of 12.10. 
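If you wish to check these results numerically, the following short Python sketch reproduces the strategy outlined above for the titration of 50.0 mL of 0.100 M acetic acid with 0.200 M NaOH. The function and variable names are our own, and the approximations are the same ones used in the text: the weak acid alone before the titration begins, a buffer before the equivalence point, the conjugate weak base at the equivalence point, and excess strong base afterward. The printed values should agree, to two decimal places, with the results calculated above (2.88, 4.58, 8.79, and 12.10).

```python
import math

Kw = 1.0e-14  # water's dissociation constant

def weak_acid_strong_base_pH(v_b, M_a=0.100, V_a=50.0, M_b=0.200, Ka=1.75e-5):
    """pH after adding v_b mL of strong base to V_a mL of a weak acid,
    using the same approximations as the text (volumes in mL, M in mol/L)."""
    mol_a, mol_b = M_a * V_a, M_b * v_b      # mmol of weak acid and of added OH-
    V_tot = V_a + v_b
    if v_b == 0:                             # initial pH: the weak acid alone
        return -math.log10(math.sqrt(Ka * M_a))
    if math.isclose(mol_a, mol_b):           # equivalence point: the conjugate base
        Kb = Kw / Ka
        return 14 + math.log10(math.sqrt(Kb * mol_a / V_tot))
    if mol_b < mol_a:                        # buffer region: Henderson-Hasselbalch
        return -math.log10(Ka) + math.log10(mol_b / (mol_a - mol_b))
    return 14 + math.log10((mol_b - mol_a) / V_tot)  # excess strong base

for v in (0.0, 10.0, 25.0, 30.0):
    print(f"{v:4.1f} mL NaOH: pH = {weak_acid_strong_base_pH(v):.2f}")
```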
Table 9.2.2 and Figure 9.2.2 show additional results for this titration. You can use this same approach to calculate the titration curve for the titration of a weak base with a strong acid, except the initial pH is determined by the weak base, the pH at the equivalence point by its conjugate weak acid, and the pH after the equivalence point by excess strong acid.Construct a titration curve for the titration of 25.0 mL of 0.125 M NH3 with 0.0625 M HCl.The volume of HCl needed to reach the equivalence point is\[V_{e q}=V_{a}=\frac{M_{b} V_{b}}{M_{a}}=\frac{(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})}{(0.0625 \ \mathrm{M})}=50.0 \ \mathrm{mL} \nonumber\]Before adding HCl the pH is that for a solution of 0.125 M NH3.\[K_{\mathrm{b}}=\frac{[\mathrm{OH}^-]\left[\mathrm{NH}_{4}^{+}\right]}{\left[\mathrm{NH}_{3}\right]}=\frac{(x)(x)}{0.125-x}=1.75 \times 10^{-5} \nonumber\]\[x=\left[\mathrm{OH}^{-}\right]=1.48 \times 10^{-3} \ \mathrm{M} \nonumber\]\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{K_{\mathrm{w}}}{[\mathrm{OH}^-]}=\frac{1.00 \times 10^{-14}}{1.48 \times 10^{-3} \ \mathrm{M}}=6.76 \times 10^{-12} \ \mathrm{M} \nonumber\]The pH at the beginning of the titration, therefore, is 11.17.Before the equivalence point the pH is determined by an \(\text{NH}_3/\text{NH}_4^+\) buffer. For example, after adding 10.0 mL of HCl\[\left[\mathrm{NH}_{3}\right]=\frac{(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})-(0.0625 \ \mathrm{M})(10.0 \ \mathrm{mL})}{25.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}}=0.0714 \ \mathrm{M} \nonumber\]\[\left[\mathrm{NH}_{4}^{+}\right]=\frac{(0.0625 \ \mathrm{M})(10.0 \ \mathrm{mL})}{25.0 \ \mathrm{mL}+10.0 \ \mathrm{mL}}=0.0179 \ \mathrm{M} \nonumber\]\[\mathrm{pH}=9.244+\log \frac{0.0714 \ \mathrm{M}}{0.0179 \ \mathrm{M}}=9.84 \nonumber\]At the equivalence point the predominate ion in solution is \(\text{NH}_4^+\). To calculate the pH we first determine the concentration of \(\text{NH}_4^+\)\[\left[\mathrm{NH}_{4}^{+}\right]=\frac{(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})}{25.0 \ \mathrm{mL}+50.0 \ \mathrm{mL}}=0.0417 \ \mathrm{M} \nonumber\]and then calculate the pH\[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{NH}_{3}\right]}{\left[\mathrm{NH}_{4}^{+}\right]}=\frac{(x)(x)}{0.0417-x}=5.70 \times 10^{-10} \nonumber\]obtaining a value of 5.31.After the equivalence point, the pH is determined by the excess HCl. For example, after adding 70.0 mL of HCl\[[\mathrm{HCl}]=\frac{(0.0625 \ \mathrm{M})(70.0 \ \mathrm{mL})-(0.125 \ \mathrm{M})(25.0 \ \mathrm{mL})}{70.0 \ \mathrm{mL}+25.0 \ \mathrm{mL}}=0.0132 \ \mathrm{M} \nonumber\]and the pH is 1.88. Some additional results are shown here.We can extend this approach for calculating a weak acid–strong base titration curve to reactions that involve multiprotic acids or bases, and mixtures of acids or bases. As the complexity of the titration increases, however, the necessary calculations become more time consuming. Not surprisingly, a variety of algebraic and spreadsheet approaches are available to aid in constructing titration curves.The following papers provide information on algebraic approaches to calculating titration curves: (a) Willis, C. J. J. Chem. Educ. 1981, 58, 659–663; (b) Nakagawa, K. J. Chem. Educ. 1990, 67, 673–676; (c) Gordus, A. A. J. Chem. Educ. 1991, 68, 759–761; (d) de Levie, R. J. Chem. Educ. 1993, 70, 209–217; (e) Chaston, S. J. Chem. Educ. 1993, 70, 878–880; (f) de Levie, R. Anal. Chem. 
1996, 68, 585–590.The following papers provide information on the use of spreadsheets to generate titration curves: (a) Currie, J. O.; Whiteley, R. V. J. Chem. Educ. 1991, 68, 923–926; (b) Breneman, G. L.; Parker, O. J. J. Chem. Educ. 1992, 69, 46–47; (c) Carter, D. R.; Frye, M. S.; Mattson, W. A. J. Chem. Educ. 1993, 70, 67–71; (d) Freiser, H. Concepts and Calculations in Analytical Chemistry, CRC Press: Boca Raton, 1992.To evaluate the relationship between a titration’s equivalence point and its end point we need to construct only a reasonable approximation of the exact titration curve. In this section we demonstrate a simple method for sketching an acid–base titration curve. Our goal is to sketch the titration curve quickly, using as few calculations as possible. Let’s use the titration of 50.0 mL of 0.100 M CH3COOH with 0.200 M NaOH to illustrate our approach. This is the same example that we used to develop the calculations for a weak acid–strong base titration curve. You can review the results of that calculation in Table 9.2.2 and in Figure 9.2.2 .We begin by calculating the titration’s equivalence point volume, which, as we determined earlier, is 25.0 mL. Next we draw our axes, placing pH on the y-axis and the titrant’s volume on the x-axis. To indicate the equivalence point volume, we draw a vertical line that intersects the x-axis at 25.0 mL of NaOH. Figure 9.2.3 a shows the first step in our sketch.Before the equivalence point the titrand’s pH is determined by a buffer of acetic acid, CH3COOH, and acetate, CH3COO–. Although we can calculate a buffer’s pH using the Henderson–Hasselbalch equation, we can avoid this calculation by making a simple assumption. You may recall from Chapter 6 that a buffer operates over a pH range that extends approximately ±1 pH unit on either side of the weak acid’s pKa value. The pH is at the lower end of this range, pH = pKa – 1, when the weak acid’s concentration is \(10 \times\) greater than that of its conjugate weak base. The buffer reaches its upper pH limit, pH = pKa + 1, when the weak acid’s concentration is \(10 \times\) smaller than that of its conjugate weak base. When we titrate a weak acid or a weak base, the buffer spans a range of volumes from approximately 10% of the equivalence point volume to approximately 90% of the equivalence point volume.The actual values are 9.09% and 90.9%, but for our purpose, using 10% and 90% is more convenient; that is, after all, one advantage of an approximation!Figure 9.2.3 b shows the second step in our sketch. First, we superimpose acetic acid’s ladder diagram on the y-axis, including its buffer range, using its pKa value of 4.76. Next, we add two points, one for the pH at 10% of the equivalence point volume (a pH of 3.76 at 2.5 mL) and one for the pH at 90% of the equivalence point volume (a pH of 5.76 at 22.5 mL).The third step is to add two points after the equivalence point. The pH after the equivalence point is fixed by the concentration of excess titrant, NaOH. Calculating the pH of a strong base is straightforward, as we saw earlier. Figure 9.2.3 c includes points (see Table 9.2.2 ) for the pH after adding 30.0 mL and after adding 40.0 mL of NaOH.Next, we draw a straight line through each pair of points, extending each line through the vertical line that represents the equivalence point’s volume (Figure 9.2.3 d). Finally, we complete our sketch by drawing a smooth curve that connects the three straight-line segments (Figure 9.2.3 e). 
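The shortcut lends itself to a few lines of code. The following Python sketch, with function and parameter names of our own choosing, generates the anchor points just described for the acetic acid example: the two buffer points at 10% and 90% of the equivalence point volume, where the pH is pKa - 1 and pKa + 1, and two points in the excess-NaOH region.

```python
import math

def sketch_points(M_a, V_a, M_b, pKa, extra_volumes=(5.0, 15.0)):
    """Anchor points for sketching a weak acid-strong base titration curve:
    two buffer points at 10% and 90% of Veq, plus points of excess base."""
    V_eq = M_a * V_a / M_b
    points = [(0.10 * V_eq, pKa - 1),  # 10% of the way to the equivalence point
              (0.90 * V_eq, pKa + 1)]  # 90% of the way to the equivalence point
    for dv in extra_volumes:           # after the equivalence point: excess OH-
        v_b = V_eq + dv
        conc_OH = (M_b * v_b - M_a * V_a) / (V_a + v_b)
        points.append((v_b, 14 + math.log10(conc_OH)))
    return V_eq, points

# 50.0 mL of 0.100 M acetic acid (pKa = 4.76) titrated with 0.200 M NaOH
V_eq, pts = sketch_points(0.100, 50.0, 0.200, 4.76)
print(f"Veq = {V_eq:.1f} mL")
for v, pH in pts:
    print(f"{v:4.1f} mL, pH = {pH:.2f}")
```

The first two printed points, a pH of 3.76 at 2.5 mL and a pH of 5.76 at 22.5 mL, are the same buffer points plotted in Figure 9.2.3 b; the remaining points fall in the excess-NaOH region used to draw the final line segment.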
A comparison of our sketch to the exact titration curve (Figure 9.2.3 f) shows that they are in close agreement.Sketch a titration curve for the titration of 25.0 mL of 0.125 M NH3 with 0.0625 M HCl and compare to the result from Exercise 9.2.2 .The figure below shows a sketch of the titration curve. The black dots and curve are the approximate sketch of the titration curve. The points in red are the calculations from Exercise 9.2.2 . The two black points before the equivalence point (VHCl = 5 mL, pH = 10.24 and VHCl = 45 mL, pH= 8.24) are plotted using the pKa of 9.244 for \(\text{NH}_4^+\). The two black points after the equivalence point (VHCl = 60 mL, pH = 2.13 and VHCl = 80 mL, pH= 1.75 ) are from the answer to Exercise 9.2.2 .As shown in the following example, we can adapt this approach to any acid–base titration, including those where exact calculations are more challenging, including the titration of polyprotic weak acids and bases, and the titration of mixtures of weak acids or weak bases.Sketch titration curves for the following two systems: (a) the titration of 50.0 mL of 0.050 M H2A, a diprotic weak acid with a pKa1 of 3 and a pKa2 of 7; and (b) the titration of a 50.0 mL mixture that contains 0.075 M HA, a weak acid with a pKa of 3, and 0.025 M HB, a weak acid with a pKa of 7. For both titrations, assume that the titrant is 0.10 M NaOH.SolutionFigure 9.2.4 a shows the titration curve for H2A, including the ladder diagram for H2A on the y-axis, the two equivalence points at 25.0 mL and at 50.0 mL, two points before each equivalence point, two points after the last equivalence point, and the straight-lines used to sketch the final titration curve. Before the first equivalence point the pH is controlled by a buffer of H2A and HA–. An HA–/A2– buffer controls the pH between the two equivalence points. After the second equivalence point the pH reflects the concentration of excess NaOH.Figure 9.2.4 b shows the titration curve for the mixture of HA and HB. Again, there are two equivalence points; however, in this case the equivalence points are not equally spaced because the concentration of HA is greater than that for HB. Because HA is the stronger of the two weak acids it reacts first; thus, the pH before the first equivalence point is controlled by a buffer of HA and A–. Between the two equivalence points the pH reflects the titration of HB and is determined by a buffer of HB and B–. After the second equivalence point excess NaOH determines the pH.Sketch the titration curve for 50.0 mL of 0.050 M H2A, a diprotic weak acid with a pKa1 of 3 and a pKa2 of 4, using 0.100 M NaOH as the titrant. The fact that pKa2 falls within the buffer range of pKa1 presents a challenge that you will need to consider.The figure below shows a sketch of the titration curve. The titration curve has two equivalence points, one at 25.0 mL \((\text{H}_2\text{A} \rightarrow \text{HA}^-)\) and one at 50.0 mL (\(\text{HA}^- \rightarrow \text{A}^{2-}\)). 
In sketching the curve, we plot two points before the first equivalence point using the pKa1 of 3 for H2A\[V_{\mathrm{NaOH}}=2.5 \ \mathrm{mL}, \mathrm{pH}=2 \text { and } V_{\mathrm{NaOH}}=22.5 \ \mathrm{mL}, \mathrm{pH}=4 \nonumber\]two points between the equivalence points using the pKa2 of 4 for HA–\[V_{\mathrm{NaOH}}=27.5 \ \mathrm{mL}, \mathrm{pH}=3, \text { and } V_{\mathrm{NaOH}}=47.5 \ \mathrm{mL}, \mathrm{pH}=5 \nonumber\]and two points after the second equivalence point\[V_{\mathrm{NaOH}}=70 \ \mathrm{mL}, \mathrm{pH}=12.22 \text { and } V_{\mathrm{NaOH}}=90 \ \mathrm{mL}, \mathrm{pH}=12.46 \nonumber\]Drawing a smooth curve through these points presents us with the following dilemma—the pH appears to increase as the titrant’s volume approaches the first equivalence point and then appears to decrease as it passes through the first equivalence point. This is, of course, absurd; as we add NaOH the pH cannot decrease. Instead, we model the titration curve before the second equivalence point by drawing a straight line from the first point (VNaOH = 2.5 mL, pH = 2) to the fourth point (VNaOH = 47.5 mL, pH = 5), ignoring the second and third points. The result is a reasonable approximation of the exact titration curve.Earlier we made an important distinction between a titration’s end point and its equivalence point. The difference between these two terms is important and deserves repeating. An equivalence point, which occurs when we react stoichiometrically equal amounts of the analyte and the titrant, is a theoretical not an experimental value. A titration’s end point is an experimental result that represents our best estimate of the equivalence point. Any difference between a titration’s equivalence point and its corresponding end point is a source of determinate error.Earlier we learned how to calculate the pH at the equivalence point for the titration of a strong acid with a strong base, and for the titration of a weak acid with a strong base. We also learned how to sketch a titration curve with only a minimum of calculations. Can we also locate the equivalence point without performing any calculations? The answer, as you might guess, often is yes!For most acid–base titrations the inflection point—the point on a titration curve that has the greatest slope—very nearly coincides with the titration’s equivalence point. The red arrows in Figure 9.2.4 , for example, identify the equivalence points for the titration curves in Example 9.2.1 . An inflection point actually precedes its corresponding equivalence point by a small amount, with the error approaching 0.1% for weak acids and weak bases with dissociation constants smaller than 10–9, or for very dilute solutions [Meites, L.; Goldman, J. A. Anal. Chim. Acta 1963, 29, 472–479].The principal limitation of an inflection point is that it must be present and easy to identify. For some titrations the inflection point is missing or difficult to find. Figure 9.2.5 , for example, demonstrates the effect of a weak acid’s dissociation constant, Ka, on the shape of its titration curve. An inflection point is visible, even if barely so, for acid dissociation constants larger than 10–9, but is missing when Ka is 10–11.An inflection point also may be missing or difficult to see if the analyte is a multiprotic weak acid or weak base with successive dissociation constants that are similar in magnitude. To appreciate why this is true let’s consider the titration of a diprotic weak acid, H2A, with NaOH. 
During the titration the following two reactions occur.\[\mathrm{H}_{2} \mathrm{A}(a q)+\mathrm{OH}^{-}(a q) \longrightarrow \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{HA}^{-}(a q) \label{9.3}\]\[\mathrm{HA}^{-}(a q)+\mathrm{OH}^{-}(a q) \rightarrow \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{A}^{2-}(a q) \label{9.4}\]To see two distinct inflection points, reaction \ref{9.3} must essentially be complete before reaction \ref{9.4} begins.Figure 9.2.6 shows titration curves for three diprotic weak acids. The titration curve for maleic acid, for which Ka1 is approximately \(20000 \times\) larger than Ka2, has two distinct inflection points. Malonic acid, on the other hand, has acid dissociation constants that differ by a factor of approximately 690. Although malonic acid’s titration curve shows two inflection points, the first is not as distinct as the second. Finally, the titration curve for succinic acid, for which the two Ka values differ by a factor of only \(27 \times\), has only a single inflection point that corresponds to the neutralization of \(\text{HC}_4\text{H}_4\text{O}_4^-\) to \(\text{C}_4\text{H}_4\text{O}_4^{2-}\). In general, we can detect separate inflection points when successive acid dissociation constants differ by a factor of at least 500 (a \(\Delta\)pKa of at least 2.7).The same holds true for mixtures of weak acids or mixtures of weak bases. To detect separate inflection points when titrating a mixture of weak acids, their dissociation constants must differ by a factor of at least 500.One interesting group of weak acids and weak bases is organic dyes. Because an organic dye has at least one highly colored conjugate acid–base species, its titration results in a change in both its pH and its color. We can use this change in color to indicate the end point of a titration provided that it occurs at or near the titration’s equivalence point.As an example, let’s consider an indicator for which the acid form, HIn, is yellow and the base form, In–, is red. The color of the indicator’s solution depends on the relative concentrations of HIn and In–. To understand the relationship between pH and color we use the indicator’s acid dissociation reaction\[\mathrm{HIn}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\operatorname{In}^{-}(a q) \nonumber\]and its equilibrium constant expression.\[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{In}^{-}\right]}{[\mathrm{HIn}]} \label{9.5}\]Taking the negative log of each side of Equation \ref{9.5}, and rearranging to solve for pH leaves us with an equation that relates the solution’s pH to the relative concentrations of HIn and In–.\[\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{[\mathrm{In}^-]}{[\mathrm{HIn}]} \label{9.6}\]If we can detect HIn and In– with equal ease, then the transition from yellow-to-red (or from red-to-yellow) reaches its midpoint, which is orange, when the concentrations of HIn and In– are equal, or when the pH is equal to the indicator’s pKa. If the indicator’s pKa and the pH at the equivalence point are identical, then titrating until the indicator turns orange is a suitable end point. Unfortunately, we rarely know the exact pH at the equivalence point. 
In addition, determining when the concentrations of HIn and In– are equal is difficult if the indicator's change in color is subtle.

We can establish the range of pHs over which the average analyst observes a change in the indicator's color by making two assumptions: that the indicator's color is yellow if the concentration of HIn is \(10 \times\) greater than that of In– and that its color is red if the concentration of HIn is \(10 \times\) smaller than that of In–. Substituting these inequalities into Equation \ref{9.6}\[\begin{array}{l}{\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{1}{10}=\mathrm{p} K_{\mathrm{a}}-1} \\ {\mathrm{pH}=\mathrm{p} K_{\mathrm{a}}+\log \frac{10}{1}=\mathrm{p} K_{\mathrm{a}}+1}\end{array} \nonumber\]shows that the indicator changes color over a pH range that extends ±1 unit on either side of its pKa. As shown in Figure 9.2.7 , the indicator is yellow when the pH is less than pKa – 1 and it is red when the pH is greater than pKa + 1. For pH values between pKa – 1 and pKa + 1 the indicator's color passes through various shades of orange. The properties of several common acid–base indicators are listed in Table 9.2.3 .

You may wonder why an indicator's pH range, such as that for phenolphthalein, is not equally distributed around its pKa value. The explanation is simple. Figure 9.2.7 presents an idealized view in which our sensitivity to the indicator's two colors is equal. For some indicators only the weak acid or the weak base is colored. For other indicators both the weak acid and the weak base are colored, but one form is easier to see. In either case, the indicator's pH range is skewed in the direction of the indicator's less colored form. Thus, phenolphthalein's pH range is skewed in the direction of its colorless form, shifting the pH range to values lower than those suggested by Figure 9.2.7 .

The relatively broad range of pHs over which an indicator changes color places additional limitations on its ability to signal a titration's end point. To minimize a determinate titration error, the indicator's entire pH range must fall within the rapid change in pH near the equivalence point. For example, in Figure 9.2.8 we see that phenolphthalein is an appropriate indicator for the titration of 50.0 mL of 0.050 M acetic acid with 0.10 M NaOH. Bromothymol blue, on the other hand, is an inappropriate indicator because its change in color begins well before the initial sharp rise in pH, and, as a result, spans a relatively large range of volumes. The early change in color increases the probability of obtaining an inaccurate result, and the range of possible end point volumes increases the probability of obtaining imprecise results.

Suggest a suitable indicator for the titration of 25.0 mL of 0.125 M NH3 with 0.0625 M HCl. You constructed a titration curve for this titration in Exercise 9.2.2 and Exercise 9.2.3 .

The pH at the equivalence point is 5.31 (see Exercise 9.2.2 ) and the sharp part of the titration curve extends from a pH of approximately 7 to a pH of approximately 4. Of the indicators in Table 9.2.3 , methyl red is the best choice because its pKa value of 5.0 is closest to the equivalence point's pH and because the pH range of 4.2–6.3 for its change in color will not produce a significant titration error.
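To make the ±1 rule concrete, the following short Python sketch uses Equation \ref{9.6} to compute the fraction of an indicator present as In– at several pH values; the pKa of 5.0 is the value quoted above for methyl red, and the script is only an illustration of the relationship, not part of any standard procedure.

```python
# Fraction of an indicator present as In- as a function of pH (Equation 9.6),
# using methyl red (pKa of about 5.0, quoted in the exercise above) as an example.

def fraction_In(pH, pKa):
    """[In-]/([HIn] + [In-]) from pH = pKa + log([In-]/[HIn])."""
    ratio = 10**(pH - pKa)          # [In-]/[HIn]
    return ratio / (1 + ratio)

pKa = 5.0
for pH in (3.0, 4.0, 5.0, 6.0, 7.0):
    print(f"pH {pH}: {100 * fraction_In(pH, pKa):5.1f}% In-")
# roughly 1% at pKa - 2, 9% at pKa - 1, 50% at pKa, 91% at pKa + 1, and 99% at
# pKa + 2, which is why the visible color change spans approximately pKa +/- 1
```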
An alternative approach for locating a titration's end point is to monitor the titration's progress using a sensor whose signal is a function of the analyte's concentration. The result is a plot of the entire titration curve, which we can use to locate the end point with a minimal error.

A pH electrode is the obvious sensor for monitoring an acid–base titration and the result is a potentiometric titration curve. For example, Figure 9.2.9 a shows a small portion of the potentiometric titration curve for the titration of 50.0 mL of 0.050 M CH3COOH with 0.10 M NaOH, which focuses on the region that contains the equivalence point. The simplest method for finding the end point is to locate the titration curve's inflection point, which is shown by the arrow. This is also the least accurate method, particularly if the titration curve has a shallow slope at the equivalence point.

See Chapter 11 for more details about pH electrodes.

Figure 9.2.9 . Titration curves for the titration of 50.0 mL of 0.050 M CH3COOH with 0.10 M NaOH: (a) normal titration curve; (b) first derivative titration curve; (c) second derivative titration curve; (d) Gran plot. The red arrows show the location of each titration's end point.

Another method for locating the end point is to plot the first derivative of the titration curve, which gives its slope at each point along the x-axis. Examine Figure 9.2.9 a and consider how the titration curve's slope changes as we approach, reach, and pass the equivalence point. Because the slope reaches its maximum value at the inflection point, the first derivative shows a spike at the equivalence point (Figure 9.2.9 b). The second derivative of a titration curve can be more useful than the first derivative because it changes sign at the equivalence point, crossing the volume axis and making the equivalence point easier to pinpoint. Figure 9.2.9 c shows the resulting titration curve.

Suppose we have the following three points on our titration curve: 23.65 mL, pH 6.00; 23.91 mL, pH 6.10; and 24.13 mL, pH 6.20. Mathematically, we can approximate the first derivative as \(\Delta \text{pH} / \Delta V\), where \(\Delta \text{pH}\) is the change in pH between successive additions of titrant. Using the first two points, the first derivative is\[\frac{\Delta \mathrm{pH}}{\Delta V}=\frac{6.10-6.00}{23.91-23.65}=0.385 \nonumber\]which we assign to the average of the two volumes, or 23.78 mL. For the second and third points, the first derivative is 0.455 and the average volume is 24.02 mL.

We can approximate the second derivative as \(\Delta (\Delta \text{pH} / \Delta V) / \Delta V\), or \(\Delta^2 \text{pH} / \Delta V^2\). Using the two points from our calculation of the first derivative, the second derivative is\[\frac{\Delta^{2} \mathrm{p} \mathrm{H}}{\Delta V^{2}}=\frac{0.455-0.385}{24.02-23.78}=0.292 \nonumber\]which we assign to the average of the two volumes, or 23.90 mL. Note that calculating the first derivative comes at the expense of losing one piece of information (three points become two points), and calculating the second derivative comes at the expense of losing two pieces of information.

Derivative methods are particularly useful when titrating a sample that contains more than one analyte. If we rely on indicators to locate the end points, then we usually must complete separate titrations for each analyte so that we can see the change in color for each end point. If we record the titration curve, however, then a single titration is sufficient. The precision with which we can locate the end point also makes derivative methods attractive for an analyte that has a poorly defined normal titration curve.
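The derivative calculations described above are easy to automate in a spreadsheet or a short program. The following Python sketch repeats them for the three data points in the example; the function and variable names are ours, chosen only for illustration.

```python
# First- and second-derivative titration curves from (volume, pH) data,
# using the three points from the example above.

volume = [23.65, 23.91, 24.13]   # mL of NaOH
pH = [6.00, 6.10, 6.20]

def derivative(x, y):
    """Approximate dy/dx from successive differences; each value is
    assigned to the average of the two x values used to compute it."""
    x_mid = [(x[i] + x[i + 1]) / 2 for i in range(len(x) - 1)]
    dydx = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]
    return x_mid, dydx

v1, d1 = derivative(volume, pH)   # first derivative, dpH/dV
v2, d2 = derivative(v1, d1)       # second derivative, d2pH/dV2

for v, d in zip(v1, d1):
    print(f"{v:.2f} mL  dpH/dV   = {d:.3f}")   # 23.78 mL 0.385; 24.02 mL 0.455
for v, d in zip(v2, d2):
    print(f"{v:.2f} mL  d2pH/dV2 = {d:.3f}")   # 23.90 mL 0.291 (0.292 when the
                                               # rounded first-derivative values are used)
```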
Derivative methods work well only if we record sufficient data during the rapid increase in pH near the equivalence point. This usually is not a problem if we use an automatic titrator, such as the one seen earlier in this chapter. A manual titration, however, often provides few data points near the equivalence point; what it does provide is an abundance of data along the more gently rising portions of the titration curve before the equivalence point, and this data also contains information about the equivalence point. Consider again the titration of acetic acid, CH3COOH, with NaOH. At any point during the titration acetic acid is in equilibrium with H3O+ and CH3COO–\[\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}(l )\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \nonumber\]for which the equilibrium constant is\[K_{a}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{CH}_{3} \mathrm{COO}^{-}\right]}{\left[\mathrm{CH}_{3} \mathrm{COOH}\right]} \nonumber\]Before the equivalence point the concentrations of CH3COOH and CH3COO– are\[[\text{CH}_3\text{COOH}] = \frac {(\text{mol CH}_3\text{COOH})_\text{initial} - (\text{mol NaOH})_\text{added}} {\text{total volume}} = \frac {M_a V_a - M_b V_b} {V_a + V_b} \nonumber\]\[[\text{CH}_3\text{COO}^-] = \frac {(\text{mol NaOH})_\text{added}} {\text{total volume}} = \frac {M_b V_b} {V_a + V_b} \nonumber\]Substituting these equations into the Ka expression and rearranging leaves us with\[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left(M_{b} V_{b}\right) /\left(V_{a}+V_{b}\right)}{\left(M_{a} V_{a}-M_{b} V_{b}\right) /\left(V_{a}+V_{b}\right)} \nonumber\]\[K_{a} M_{a} V_{a}-K_{a} M_{b} V_{b}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left(M_{b} V_{b}\right) \nonumber\]\[\frac{K_{a} M_{a} V_{a}}{M_{b}}-K_{a} V_{b}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] V_{b} \nonumber\]Finally, recognizing that the equivalence point volume is\[V_{eq}=\frac{M_{a} V_{a}}{M_{b}} \nonumber\]leaves us with the following equation.\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \times V_{b}=K_{\mathrm{a}} V_{eq}-K_{\mathrm{a}} V_{b} \nonumber\]For volumes of titrant before the equivalence point, a plot of \(V_b \times [\text{H}_3\text{O}^+]\) versus Vb is a straight line with an x-intercept of Veq and a slope of –Ka. Figure 9.2.9 d shows a typical result. This method of data analysis, which converts a portion of a titration curve into a straight line, is a Gran plot.

Values of Ka determined by this method may have a substantial error if the effect of activity is ignored. See Chapter 6.9 for a discussion of activity.
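A Gran plot reduces to a simple linear regression. The Python sketch below illustrates the idea using simulated—not experimental—(volume, pH) pairs for the titration of 50.0 mL of 0.050 M acetic acid with 0.10 M NaOH; fitting Vb × [H3O+] against Vb returns Ka from the slope and Veq from the x-intercept.

```python
# Gran plot: for volumes before the equivalence point,
# Vb*[H3O+] = Ka*Veq - Ka*Vb, a straight line with slope -Ka and
# x-intercept Veq. The pH values below are simulated, not measured.

Vb = [10.0, 14.0, 18.0, 22.0]        # mL of 0.10 M NaOH added to 50.0 mL of 0.050 M CH3COOH
pH = [4.584, 4.865, 5.170, 5.625]    # from pH = pKa + log(Vb/(Veq - Vb)), pKa = 4.76, Veq = 25 mL

x = Vb
y = [v * 10**(-p) for v, p in zip(Vb, pH)]   # Vb*[H3O+]

# ordinary least-squares fit of y = a + b*x
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

Ka = -b                              # slope is -Ka
Veq = a / Ka                         # x-intercept of the fitted line
print(f"Ka  = {Ka:.2e}")             # approximately 1.7e-05
print(f"Veq = {Veq:.1f} mL")         # approximately 25.0 mL
```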
The reaction between an acid and a base is exothermic. Heat generated by the reaction is absorbed by the titrand, which increases its temperature. Monitoring the titrand's temperature as we add the titrant provides us with another method for recording a titration curve and identifying the titration's end point (Figure 9.2.10 ).

Before we add the titrant, any change in the titrand's temperature is the result of warming or cooling as it equilibrates with the surroundings. Adding titrant initiates the exothermic acid–base reaction and increases the titrand's temperature. This part of a thermometric titration curve is called the titration branch. The temperature continues to rise with each addition of titrant until we reach the equivalence point. After the equivalence point, any change in temperature is due to the titrant's enthalpy of dilution and the difference between the temperatures of the titrant and titrand. Ideally, the equivalence point is a distinct intersection of the titration branch and the excess titrant branch. As shown in Figure 9.2.10 , however, a thermometric titration curve usually shows curvature near the equivalence point due to an incomplete neutralization reaction or to the excessive dilution of the titrand and the titrant during the titration. The latter problem is minimized by using a titrant that is 10–100 times more concentrated than the analyte, although this results in a very small end point volume and a larger relative error. If necessary, the end point is found by extrapolation.

Although not a common method for monitoring an acid–base titration, a thermometric titration has one distinct advantage over the direct or indirect monitoring of pH. As discussed earlier, the use of an indicator or the monitoring of pH is limited by the magnitude of the relevant equilibrium constants. For example, titrating boric acid, H3BO3, with NaOH does not provide a sharp end point when monitoring pH because boric acid's Ka of \(5.8 \times 10^{-10}\) is too small (Figure 9.2.11 a). Because boric acid's enthalpy of neutralization is fairly large, –42.7 kJ/mole, its thermometric titration curve provides a useful end point (Figure 9.2.11 b).

Thus far we have assumed that the titrant and the titrand are aqueous solutions. Although water is the most common solvent for acid–base titrimetry, switching to a nonaqueous solvent can improve a titration's feasibility.

For an amphoteric solvent, SH, the autoprotolysis constant, Ks, relates the concentration of its protonated form, \(\text{SH}_2^+\), to its deprotonated form, S–\[\begin{aligned} 2 \mathrm{SH} &\rightleftharpoons\mathrm{SH}_{2}^{+}+\mathrm{S}^{-} \\ K_{\mathrm{s}} &=\left[\mathrm{SH}_{2}^{+}\right][\mathrm{S}^-] \end{aligned} \nonumber\]and the solvent's pH and pOH are\[\begin{array}{l}{\mathrm{pH}=-\log \left[\mathrm{SH}_{2}^{+}\right]} \\ {\mathrm{pOH}=-\log \left[\mathrm{S}^{-}\right]}\end{array} \nonumber\]

You should recognize that Kw is just a specific form of Ks when the solvent is water.

The most important limitation imposed by Ks is the change in pH during a titration. To understand why this is true, let's consider the titration of 50.0 mL of \(1.0 \times 10^{-4}\) M HCl using \(1.0 \times 10^{-4}\) M NaOH as the titrant. Before the equivalence point, the pH is determined by the untitrated strong acid. For example, when the volume of NaOH is 90% of Veq, the concentration of H3O+ is\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]=\frac{M_{a} V_{a}-M_{b} V_{b}}{V_{a}+V_{b}} = \frac{\left(1.0 \times 10^{-4} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})-\left(1.0 \times 10^{-4} \ \mathrm{M}\right)(45.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+45.0 \ \mathrm{mL}} = 5.3 \times 10^{-6} \ \mathrm{M} \nonumber\]and the pH is 5.3. When the volume of NaOH is 110% of Veq, the concentration of OH– is\[\left[\mathrm{OH}^{-}\right]=\frac{M_{b} V_{b}-M_{a} V_{a}}{V_{a}+V_{b}} = \frac{\left(1.0 \times 10^{-4} \ \mathrm{M}\right)(55.0 \ \mathrm{mL})-\left(1.0 \times 10^{-4} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{55.0 \ \mathrm{mL}+50.0 \ \mathrm{mL}} = 4.8 \times 10^{-6} \ \mathrm{M} \nonumber\]and the pOH is 5.3. The titrand's pH is\[\mathrm{pH}=\mathrm{p} K_{w}-\mathrm{pOH}=14.0-5.3=8.7 \nonumber\]and the change in the titrand's pH as the titration goes from 90% to 110% of Veq is\[\Delta \mathrm{pH}=8.7-5.3=3.4 \nonumber\]If we carry out the same titration in a nonaqueous amphiprotic solvent that has a Ks of \(1.0 \times 10^{-20}\), the pH after adding 45.0 mL of NaOH is still 5.3. However, the pH after adding 55.0 mL of NaOH is\[\mathrm{pH}=\mathrm{p} K_{s}-\mathrm{pOH}=20.0-5.3=14.7 \nonumber\]In this case the change in pH\[\Delta \mathrm{pH}=14.7-5.3=9.4 \nonumber\]is significantly greater than that obtained when the titration is carried out in water. Figure 9.2.12 shows the titration curves in both the aqueous and the nonaqueous solvents.
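The effect of the solvent's autoprotolysis constant on the change in pH is easy to confirm numerically. The following Python sketch repeats the calculation above—50.0 mL of \(1.0 \times 10^{-4}\) M HCl titrated with \(1.0 \times 10^{-4}\) M NaOH—once for water (pKs = 14) and once for a hypothetical solvent with a pKs of 20.

```python
# Change in pH from 90% to 110% of Veq for the titration of 50.0 mL of
# 1.0e-4 M HCl with 1.0e-4 M NaOH, in solvents with different
# autoprotolysis constants (a minimal sketch of the calculation in the text).

import math

Ma, Va = 1.0e-4, 50.0      # strong acid: molarity and volume (mL)
Mb = 1.0e-4                # strong base titrant (M)
Veq = Ma * Va / Mb         # 50.0 mL

def pH_before(Vb):         # excess strong acid before the equivalence point
    return -math.log10((Ma * Va - Mb * Vb) / (Va + Vb))

def pH_after(Vb, pKs):     # excess strong base after the equivalence point; pH = pKs - pOH
    pOH = -math.log10((Mb * Vb - Ma * Va) / (Va + Vb))
    return pKs - pOH

for pKs in (14.0, 20.0):
    low, high = pH_before(0.90 * Veq), pH_after(1.10 * Veq, pKs)
    print(f"pKs = {pKs}: pH goes from {low:.1f} to {high:.1f} (a change of {high - low:.1f})")
# pKs = 14: 5.3 -> 8.7 (change of 3.4); pKs = 20: 5.3 -> 14.7 (change of 9.4)
```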
Another parameter that affects the feasibility of an acid–base titration is the titrand's dissociation constant. Here, too, the solvent plays an important role.

The strength of an acid or a base is a relative measure of how easy it is to transfer a proton from the acid to the solvent or from the solvent to the base. For example, HF, with a Ka of \(6.8 \times 10^{-4}\), is a better proton donor than CH3COOH, for which Ka is \(1.75 \times 10^{-5}\).

The strongest acid that can exist in water is the hydronium ion, H3O+. HCl and HNO3 are strong acids because they are better proton donors than H3O+ and essentially donate all their protons to H2O, leveling their acid strength to that of H3O+. In a different solvent HCl and HNO3 may not behave as strong acids.

If we place acetic acid in water the dissociation reaction\[\mathrm{CH}_{3} \mathrm{COOH}(a q)+\mathrm{H}_{2} \mathrm{O}( l)\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{CH}_{3} \mathrm{COO}^{-}(a q) \nonumber\]does not proceed to a significant extent because CH3COO– is a stronger base than H2O and H3O+ is a stronger acid than CH3COOH. If we place acetic acid in a solvent that is a stronger base than water, such as ammonia, then the reaction\[\mathrm{CH}_{3} \mathrm{COOH}+\mathrm{NH}_{3}\rightleftharpoons\mathrm{NH}_{4}^{+}+\mathrm{CH}_{3} \mathrm{COO}^{-} \nonumber\]proceeds to a greater extent. In fact, both HCl and CH3COOH are strong acids in ammonia.

All other things being equal, the strength of a weak acid increases if we place it in a solvent that is more basic than water, and the strength of a weak base increases if we place it in a solvent that is more acidic than water. In some cases, however, the opposite effect is observed. For example, the pKb for NH3 is 4.75 in water and it is 6.40 in the more acidic glacial acetic acid. In contradiction to our expectations, NH3 is a weaker base in the more acidic solvent. A full description of the solvent's effect on the pKa of a weak acid or the pKb of a weak base is beyond the scope of this text. You should be aware, however, that a titration that is not feasible in water may be feasible in a different solvent.

The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical acid–base titrimetric method. Although each method is unique, the following description of the determination of protein in bread provides an instructive example of a typical procedure. The description here is based on Method 13.86 as published in Official Methods of Analysis, 8th Ed., Association of Official Agricultural Chemists: Washington, D. C., 1955.

Description of the Method
This method is based on a determination of %w/w nitrogen using the Kjeldahl method. The protein in a sample of bread is oxidized to \(\text{NH}_4^+\) using hot concentrated H2SO4. After making the solution alkaline, which converts \(\text{NH}_4^+\) to NH3, the ammonia is distilled into a flask that contains a known amount of HCl. The amount of unreacted HCl is determined by a back titration using a standard strong base titrant. Because different cereal proteins contain similar amounts of nitrogen—on average there are 5.7 g protein for every gram of nitrogen—multiplying the experimentally determined %w/w N by a factor of 5.7 gives the %w/w protein in the sample.

Procedure
Transfer a 2.0-g sample of bread, which previously has been air-dried and ground into a powder, to a suitable digestion flask along with 0.7 g of a HgO catalyst, 10 g of K2SO4, and 25 mL of concentrated H2SO4. Bring the solution to a boil. Continue boiling until the solution turns clear and then boil for at least an additional 30 minutes.
After cooling the solution below room temperature, remove the Hg2+ catalyst by adding 200 mL of H2O and 25 mL of 4% w/v K2S. Add a few Zn granules to serve as boiling stones and 25 g of NaOH. Quickly connect the flask to a distillation apparatus and distill the NH3 into a collecting flask that contains a known amount of standardized HCl. The tip of the condenser must be placed below the surface of the strong acid. After the distillation is complete, titrate the excess strong acid with a standard solution of NaOH using methyl red as an indicator (Figure 9.2.13 ).Questions1. Oxidizing the protein converts all of its nitrogen to \(\text{NH}_4^+\). Why is the amount of nitrogen not determined by directly titrating the \(\text{NH}_4^+\) with a strong base?There are two reasons for not directly titrating the ammonium ion. First, because \(\text{NH}_4^+\) is a very weak acid (its Ka is \(5.6 \times 10^{-10}\)), its titration with NaOH has a poorly-defined end point. Second, even if we can determine the end point with acceptable accuracy and precision, the solution also contains a substantial concentration of unreacted H2SO4. The presence of two acids that differ greatly in concentration makes for a difficult analysis. If the titrant’s concentration is similar to that of H2SO4, then the equivalence point volume for the titration of \(\text{NH}_4^+\) is too small to measure reliably. On the other hand, if the titrant’s concentration is similar to that of \(\text{NH}_4^+\), the volume needed to neutralize the H2SO4 is unreasonably large.2. Ammonia is a volatile compound as evidenced by the strong smell of even dilute solutions. This volatility is a potential source of determinate error. Is this determinate error negative or positive?Any loss of NH3 is loss of nitrogen and, therefore, a loss of protein. The result is a negative determinate error.3. Identify the steps in this procedure that minimize the determinate error from the possible loss of NH3.Three specific steps minimize the loss of ammonia: the solution is cooled below room temperature before we add NaOH; after we add NaOH, the digestion flask is quickly connected to the distillation apparatus; and we place the condenser’s tip below the surface of the HCl to ensure that the NH3 reacts with the HCl before it is lost through volatilization.4. How does K2S remove Hg2+, and why is its removal important?Adding sulfide precipitates Hg2+ as HgS. This is important because NH3 forms stable complexes with many metal ions, including Hg2+. Any NH3 that reacts with Hg2+ is not collected during distillation, providing another source of determinate error.Although many quantitative applications of acid–base titrimetry have been replaced by other analytical methods, a few important applications continue to find use. In this section we review the general application of acid–base titrimetry to the analysis of inorganic and organic compounds, with an emphasis on applications in environmental and clinical analysis. First, however, we discuss the selection and standardization of acidic and basic titrants.The most common strong acid titrants are HCl, HClO4, and H2SO4. Solutions of these titrants usually are prepared by diluting a commercially available concentrated stock solution. 
Because the concentration of a concentrated acid is known only approximately, the titrant's concentration is determined by standardizing against one of the primary standard weak bases listed in Table 9.2.4 .

The nominal concentrations of the concentrated stock solutions are 12.1 M HCl, 11.7 M HClO4, and 18.0 M H2SO4. The actual concentrations of these acids are given as %w/v and vary slightly from lot-to-lot.

(a) The end point for this titration is improved by titrating to the second equivalence point, boiling the solution to expel CO2, and retitrating to the second equivalence point. The reaction in this case is\[\mathrm{Na}_{2} \mathrm{CO}_{3}+2 \mathrm{H}_{3} \mathrm{O}^{+} \rightarrow \mathrm{CO}_{2}+2 \mathrm{Na}^{+}+3 \mathrm{H}_{2} \mathrm{O} \nonumber\](b) Tris-(hydroxymethyl)aminomethane often goes by the shorter name of TRIS or THAM.

(c) Potassium hydrogen phthalate often goes by the shorter name of KHP.

(d) Because it is not very soluble in water, dissolve benzoic acid in a small amount of ethanol before diluting with water.

The most common strong base titrant is NaOH, which is available both as an impure solid and as an approximately 50% w/v solution. Solutions of NaOH are standardized against any of the primary weak acid standards listed in Table 9.2.4 .

Using NaOH as a titrant is complicated by potential contamination from the following reaction between dissolved CO2 and OH–.\[\mathrm{CO}_{2}(a q)+2 \mathrm{OH}^{-}(a q) \rightarrow \mathrm{CO}_{3}^{2-}(a q)+\mathrm{H}_{2} \mathrm{O}( l) \label{9.7}\]Any solution in contact with the atmosphere contains a small amount of CO2(aq) from the equilibrium\[\mathrm{CO}_{2}(g)\rightleftharpoons\mathrm{CO}_{2}(a q) \nonumber\]During the titration, NaOH reacts both with the titrand and with CO2, which increases the volume of NaOH needed to reach the titration's end point. This is not a problem if the end point pH is less than 6. Below this pH the \(\text{CO}_3^{2-}\) from reaction \ref{9.7} reacts with H3O+ to form carbonic acid.\[\mathrm{CO}_{3}^{2-}(a q)+2 \mathrm{H}_{3} \mathrm{O}^{+}(a q) \rightarrow 2 \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{H}_{2} \mathrm{CO}_{3}(a q) \label{9.8}\]Combining reaction \ref{9.7} and reaction \ref{9.8} gives an overall reaction that does not include OH–.\[\mathrm{CO}_{2}(a q)+\mathrm{H}_{2} \mathrm{O}(l ) \longrightarrow \mathrm{H}_{2} \mathrm{CO}_{3}(a q) \nonumber\]Under these conditions the presence of CO2 does not affect the quantity of OH– used in the titration and is not a source of determinate error.

If the end point pH is between 6 and 10, however, the neutralization of \(\text{CO}_3^{2-}\) requires one proton\[\mathrm{CO}_{3}^{2-}(a q)+\mathrm{H}_{3} \mathrm{O}^{+}(a q) \rightarrow \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{HCO}_{3}^{-}(a q) \nonumber\]and the net reaction between CO2 and OH– is\[\mathrm{CO}_{2}(a q)+\mathrm{OH}^{-}(a q) \rightarrow \mathrm{HCO}_{3}^{-}(a q) \nonumber\]Under these conditions some OH– is consumed in neutralizing CO2, which results in a determinate error. We can avoid the determinate error if we use the same end point pH for both the standardization of NaOH and the analysis of our analyte, although this is not always practical.

Solid NaOH is always contaminated with carbonate due to its contact with the atmosphere, and we cannot use it to prepare a carbonate-free solution of NaOH. Solutions of carbonate-free NaOH are prepared from 50% w/v NaOH because Na2CO3 is insoluble in concentrated NaOH.
When CO2 is absorbed, Na2CO3 precipitates and settles to the bottom of the container, which allows access to the carbonate-free NaOH. When preparing a solution of NaOH, be sure to use water that is free from dissolved CO2. Briefly boiling the water expels CO2; after it cools, the water is used to prepare carbonate-free solutions of NaOH. A solution of carbonate-free NaOH is relatively stable if we limit its contact with the atmosphere. Standard solutions of sodium hydroxide are not stored in glass bottles as NaOH reacts with glass to form silicate; instead, store such solutions in polyethylene bottles.

Acid–base titrimetry is a standard method for the quantitative analysis of many inorganic acids and bases. A standard solution of NaOH is used to determine the concentration of inorganic acids, such as H3PO4 or H3AsO4; inorganic bases, such as Na2CO3, are analyzed using a standard solution of HCl.

If an inorganic acid or base is too weak to be analyzed by an aqueous acid–base titration, it may be possible to complete the analysis by adjusting the solvent or by an indirect analysis. For example, when analyzing boric acid, H3BO3, by titrating with NaOH, accuracy is limited by boric acid's small acid dissociation constant of \(5.8 \times 10^{-10}\). Boric acid's Ka value increases to \(1.5 \times 10^{-4}\) in the presence of mannitol, because it forms a stable complex with the borate ion, which results in a sharper end point and a more accurate titration. Similarly, the analysis of ammonium salts is limited by the ammonium ion's small acid dissociation constant of \(5.7 \times 10^{-10}\). We can determine \(\text{NH}_4^+\) indirectly by using a strong base to convert it to NH3, which is removed by distillation and titrated with HCl. Because NH3 is a stronger base (its Kb is \(1.75 \times 10^{-5}\)) than \(\text{NH}_4^+\) is an acid, the titration has a sharper end point.

We can analyze a neutral inorganic analyte if we can first convert it into an acid or a base. For example, we can determine the concentration of \(\text{NO}_3^-\) by reducing it to NH3 in a strongly alkaline solution using Devarda's alloy, a mixture of 50% w/w Cu, 45% w/w Al, and 5% w/w Zn.\[3 \mathrm{NO}_{3}^{-}(a q)+8 \mathrm{Al}(s)+5 \mathrm{OH}^{-}(a q)+2 \mathrm{H}_{2} \mathrm{O}(l) \rightarrow 8 \mathrm{AlO}_{2}^{-}(a q)+3 \mathrm{NH}_{3}(a q) \nonumber\]The NH3 is removed by distillation and titrated with HCl. Alternatively, we can titrate \(\text{NO}_3^-\) as a weak base by placing it in an acidic nonaqueous solvent, such as anhydrous acetic acid, and using HClO4 as a titrant.

Acid–base titrimetry continues to be listed as a standard method for the determination of alkalinity, acidity, and free CO2 in waters and wastewaters. Alkalinity is a measure of a sample's capacity to neutralize acids. The most important sources of alkalinity are OH–, \(\text{HCO}_3^-\), and \(\text{CO}_3^{2-}\), although other weak bases, such as phosphate, may contribute to the overall alkalinity. Total alkalinity is determined by titrating to a fixed end point pH of 4.5 (or to the bromocresol green end point) using a standard solution of HCl or H2SO4. Results are reported as mg CaCO3/L.

Although a variety of strong bases and weak bases may contribute to a sample's alkalinity, a single titration cannot distinguish between the possible sources.
Reporting the total alkalinity as if CaCO3 is the only source provides a means for comparing the acid-neutralizing capacities of different samples.When the sources of alkalinity are limited to OH–, \(\text{HCO}_3^-\), and \(\text{CO}_3^{2-}\), separate titrations to a pH of 4.5 (or the bromocresol green end point) and a pH of 8.3 (or the phenolphthalein end point) allow us to determine which species are present and their respective concentrations. Titration curves for OH–, \(\text{HCO}_3^-\), and \(\text{CO}_3^{2-}\)are shown in Figure 9.2.14 . For a solution that contains OH– alkalinity only, the volume of strong acid needed to reach each of the two end points is identical (Figure 9.2.14 a). When the only source of alkalinity is \(\text{CO}_3^{2-}\), the volume of strong acid needed to reach the end point at a pH of 4.5 is exactly twice that needed to reach the end point at a pH of 8.3 (Figure 9.2.14 b). If a solution contains \(\text{HCO}_3^-\) alkalinity only, the volume of strong acid needed to reach the end point at a pH of 8.3 is zero, but that for the pH 4.5 end point is greater than zero (Figure 9.2.14 c).A mixture of OH– and \(\text{CO}_3^{2-}\) or a mixture of \(\text{HCO}_3^-\) and \(\text{CO}_3^{2-}\) also is possible. Consider, for example, a mixture of OH– and \(\text{CO}_3^{2-}\). The volume of strong acid to titrate OH– is the same whether we titrate to a pH of 8.3 or a pH of 4.5. Titrating \(\text{CO}_3^{2-}\) to a pH of 4.5, however, requires twice as much strong acid as titrating to a pH of 8.3. Consequently, when we titrate a mixture of these two ions, the volume of strong acid needed to reach a pH of 4.5 is less than twice that needed to reach a pH of 8.3. For a mixture of \(\text{HCO}_3^-\) and \(\text{CO}_3^{2-}\) the volume of strong acid needed to reach a pH of 4.5 is more than twice that needed to reach a pH of 8.3. Table 9.2.5 summarizes the relationship between the sources of alkalinity and the volumes of titrant needed to reach the two end points.A mixture of OH– and \(\text{HCO}_3^-\) is unstable with respect to the formation of \(\text{CO}_3^{2-}\). Problem 15 in the end-of-chapter problems asks you to explain why this is true.Acidity is a measure of a water sample’s capacity to neutralize base and is divided into strong acid and weak acid acidity. Strong acid acidity from inorganic acids such as HCl, HNO3, and H2SO4 is common in industrial effluents and in acid mine drainage. Weak acid acidity usually is dominated by the formation of H2CO3 from dissolved CO2, but also includes contributions from hydrolyzable metal ions such as Fe3+, Al3+, and Mn2+. In addition, weak acid acidity may include a contribution from organic acids.Acidity is determined by titrating with a standard solution of NaOH to a fixed pH of 3.7 (or the bromothymol blue end point) and to a fixed pH of 8.3 (or the phenolphthalein end point). Titrating to a pH of 3.7 provides a measure of strong acid acidity, and titrating to a pH of 8.3 provides a measure of total acidity. Weak acid acidity is the difference between the total acidity and the strong acid acidity. Results are expressed as the amount of CaCO3 that can be neutralized by the sample’s acidity. An alternative approach for determining strong acid and weak acid acidity is to obtain a potentiometric titration curve and use a Gran plot to determine the two equivalence points. This approach has been used, for example, to determine the forms of acidity in atmospheric aerosols [Ferek, R. J.; Lazrus, A. L.; Haagenson, P. L.; Winchester, J. 
W. Environ. Sci. Technol. 1983, 17, 315–324].

As is the case with alkalinity, acidity is reported as mg CaCO3/L.

Water in contact with either the atmosphere or with carbonate-bearing sediments contains free CO2 in equilibrium with CO2(g) and with aqueous H2CO3, \(\text{HCO}_3^-\) and \(\text{CO}_3^{2-}\). The concentration of free CO2 is determined by titrating with a standard solution of NaOH to the phenolphthalein end point, or to a pH of 8.3, with results reported as mg CO2/L. This analysis essentially is the same as that for the determination of total acidity and is used only for water samples that do not contain strong acid acidity.

Free CO2 is the same thing as CO2(aq).

Acid–base titrimetry continues to have a small, but important role for the analysis of organic compounds in pharmaceutical, biochemical, agricultural, and environmental laboratories. Perhaps the most widely employed acid–base titration is the Kjeldahl analysis for organic nitrogen. Examples of analytes determined by a Kjeldahl analysis include caffeine and saccharin in pharmaceutical products, proteins in foods, and the analysis of nitrogen in fertilizers, sludges, and sediments. Any nitrogen present in a –3 oxidation state is oxidized quantitatively to \(\text{NH}_4^+\). Because some aromatic heterocyclic compounds, such as pyridine, are difficult to oxidize, a catalyst is used to ensure a quantitative oxidation. Nitrogen in other oxidation states, such as nitro and azo nitrogens, is oxidized to N2, which results in a negative determinate error. Including a reducing agent, such as salicylic acid, converts this nitrogen to a –3 oxidation state, eliminating this source of error. Table 9.2.6 provides additional examples in which an element is converted quantitatively into a titratable acid or base.

Several organic functional groups are weak acids or weak bases. Carboxylic (–COOH), sulfonic (–SO3H) and phenolic (–C6H5OH) functional groups are weak acids that are titrated successfully in either aqueous or non-aqueous solvents. Sodium hydroxide is the titrant of choice for aqueous solutions. Nonaqueous titrations often are carried out in a basic solvent, such as ethylenediamine, using tetrabutylammonium hydroxide, (C4H9)4NOH, as the titrant. Aliphatic and aromatic amines are weak bases that are titrated using HCl in aqueous solutions, or HClO4 in glacial acetic acid. Other functional groups are analyzed indirectly following a reaction that produces or consumes an acid or base. Typical examples are shown in Table 9.2.7 . One such procedure determines a hydroxyl group by acetylation, (CH3CO)2O + ROH \(\rightarrow\) CH3COOR + CH3COOH, followed by hydrolysis of the unreacted acetic anhydride, (CH3CO)2O + H2O \(\rightarrow\) 2CH3COOH; in Table 9.2.7 the species that is titrated is shown in bold. For alcohols, the reaction is carried out in pyridine to prevent the hydrolysis of acetic anhydride by water. After the reaction is complete, water is added to convert any unreacted acetic anhydride to acetic acid.

Many pharmaceutical compounds are weak acids or weak bases that are analyzed by an aqueous or a nonaqueous acid–base titration; examples include salicylic acid, phenobarbital, caffeine, and sulfanilamide. Amino acids and proteins are analyzed in glacial acetic acid using HClO4 as the titrant. For example, a procedure for determining the amount of nutritionally available protein uses an acid–base titration of lysine residues [(a) Molnár-Perl, I.; Pintée-Szakács, M. Anal. Chim. Acta 1987, 202, 159–166; (b) Barbosa, J.; Bosch, E.; Cortina, J. L.; Rosés, M. Anal. Chim.
Acta 1992, 256, 177–181].

The quantitative relationship between the titrand and the titrant is determined by the titration reaction's stoichiometry. If the titrand is polyprotic, then we must know to which equivalence point we are titrating. The following example illustrates how we can use a ladder diagram to determine a titration reaction's stoichiometry.

A 50.00-mL sample of a citrus drink requires 17.62 mL of 0.04166 M NaOH to reach the phenolphthalein end point. Express the sample's acidity as grams of citric acid, C6H8O7, per 100 mL.

Solution
Because citric acid is a triprotic weak acid, we first must determine if the phenolphthalein end point corresponds to the first, second, or third equivalence point. Citric acid's ladder diagram is shown in Figure 9.2.15 a. Based on this ladder diagram, the first equivalence point is between a pH of 3.13 and a pH of 4.76, the second equivalence point is between a pH of 4.76 and a pH of 6.40, and the third equivalence point is greater than a pH of 6.40. Because phenolphthalein's end point pH is 8.3–10.0 (see Table 9.2.3 ), the titration must proceed to the third equivalence point and the titration reaction is\[ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7}(a q)+3 \mathrm{OH}^{-}(a q) \longrightarrow \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{O}_{7}^{3-}(a q)+3 \mathrm{H}_{2} \mathrm{O}(l) \nonumber\]To reach the equivalence point, each mole of citric acid consumes three moles of NaOH; thus\[(0.04166 \ \mathrm{M} \ \mathrm{NaOH})(0.01762 \ \mathrm{L} \ \mathrm{NaOH})=7.3405 \times 10^{-4} \ \mathrm{mol} \ \mathrm{NaOH} \nonumber\]\[7.3405 \times 10^{-4} \ \mathrm{mol} \ \mathrm{NaOH} \times \frac{1 \ \mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7}}{3 \ \mathrm{mol} \ \mathrm{NaOH}}= 2.4468 \times 10^{-4} \ \mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7} \nonumber\]\[2.4468 \times 10^{-4} \ \mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7} \times \frac{192.1 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7}}{\mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7}}=0.04700 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{O}_{7} \nonumber\]Because this is the amount of citric acid in a 50.00 mL sample, the concentration of citric acid in the citrus drink is 0.09400 g/100 mL. The complete titration curve is shown in Figure 9.2.15 b.
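The arithmetic in this example is simple enough to script. The following Python sketch repeats it; the variable names are ours, and the formula weight of citric acid (192.1 g/mol) is the value used in the example.

```python
# Acidity of the citrus drink, reported as g citric acid per 100 mL,
# repeating the arithmetic of the example above.

M_NaOH = 0.04166        # mol/L
V_NaOH = 17.62e-3       # L of titrant to reach the phenolphthalein end point
V_sample = 50.00        # mL of citrus drink
FW_citric = 192.1       # g/mol for C6H8O7

mol_NaOH = M_NaOH * V_NaOH
mol_citric = mol_NaOH / 3            # titration proceeds to the third equivalence point
g_citric = mol_citric * FW_citric

print(f"{g_citric:.4f} g citric acid in {V_sample:.2f} mL")   # 0.0470 g
print(f"{g_citric * 100 / V_sample:.4f} g per 100 mL")        # 0.0940 g/100 mL
```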
Your company recently received a shipment of salicylic acid, C7H6O3, for use in the production of acetylsalicylic acid (aspirin). You can accept the shipment only if the salicylic acid is more than 99% pure. To evaluate the shipment's purity, you dissolve a 0.4208-g sample in water and titrate to the phenolphthalein end point, using 21.92 mL of 0.1354 M NaOH. Report the shipment's purity as %w/w C7H6O3. Salicylic acid is a diprotic weak acid with pKa values of 2.97 and 13.74.

Because salicylic acid is a diprotic weak acid, we must first determine to which equivalence point it is being titrated. Using salicylic acid's pKa values as a guide, the pH at the first equivalence point is between 2.97 and 13.74, and the second equivalence point is at a pH greater than 13.74. From Table 9.2.3 , phenolphthalein's end point is in the pH range 8.3–10.0. The titration, therefore, is to the first equivalence point for which the moles of NaOH equal the moles of salicylic acid; thus\[(0.1354 \ \mathrm{M})(0.02192 \ \mathrm{L})=2.968 \times 10^{-3} \ \mathrm{mol} \ \mathrm{NaOH} \nonumber\]\[2.968 \times 10^{-3} \ \mathrm{mol} \ \mathrm{NaOH} \times \frac{1 \ \mathrm{mol} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3}}{\mathrm{mol} \ \mathrm{NaOH}} \times \frac{138.12 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3}}{\mathrm{mol} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3}}=0.4099 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3} \nonumber\]\[\frac{0.4099 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3}}{0.4208 \ \mathrm{g} \text { sample }} \times 100=97.41 \ \% \mathrm{w} / \mathrm{w} \ \mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3} \nonumber\]Because the purity of the sample is less than 99%, we reject the shipment.

In an indirect analysis the analyte participates in one or more preliminary reactions, one of which produces or consumes acid or base. Despite the additional complexity, the calculations are straightforward.

The purity of a pharmaceutical preparation of sulfanilamide, C6H4N2O2S, is determined by oxidizing the sulfur to SO2 and bubbling it through H2O2 to produce H2SO4. The acid is titrated to the bromothymol blue end point using a standard solution of NaOH. Calculate the purity of the preparation given that a 0.5136-g sample requires 48.13 mL of 0.1251 M NaOH.

Solution
The bromothymol blue end point has a pH range of 6.0–7.6. Sulfuric acid is a diprotic acid, with a pKa2 of 1.99 (the first Ka value is very large and the acid dissociation reaction goes to completion, which is why H2SO4 is a strong acid). The titration, therefore, proceeds to the second equivalence point and the titration reaction is\[\mathrm{H}_{2} \mathrm{SO}_{4}(a q)+2 \mathrm{OH}^{-}(a q) \longrightarrow 2 \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{SO}_{4}^{2-}(a q) \nonumber\]Using the titration results, there are\[(0.1251 \ \mathrm{M} \ \mathrm{NaOH})(0.04813 \ \mathrm{L} \ \mathrm{NaOH})=6.021 \times 10^{-3} \ \mathrm{mol} \ \mathrm{NaOH} \nonumber\]\[6.021 \times 10^{-3} \text{ mol NaOH} \times \frac{1 \text{ mol} \mathrm{H}_{2} \mathrm{SO}_{4}} {2 \text{ mol NaOH}} = 3.010 \times 10^{-3} \text{ mol} \mathrm{H}_{2} \mathrm{SO}_{4} \nonumber\]produced when the SO2 is bubbled through H2O2. Because all the sulfur in H2SO4 comes from the sulfanilamide, we can use a conservation of mass to determine the amount of sulfanilamide in the sample.\[3.010 \times 10^{-3} \ \mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{4} \times \frac{1 \ \mathrm{mol} \text{ S}}{\mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{4}} \times \ \frac{1 \ \mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S}}{\mathrm{mol} \text{ S}} \times \frac{168.17 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S}}{\mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S}}= 0.5062 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S} \nonumber\]\[\frac{0.5062 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S}}{0.5136 \ \mathrm{g} \text { sample }} \times 100=98.56 \ \% \mathrm{w} / \mathrm{w} \ \mathrm{C}_{6} \mathrm{H}_{4} \mathrm{N}_{2} \mathrm{O}_{2} \mathrm{S} \nonumber\]

The concentration of NO2 in air is determined by passing the sample through a solution of H2O2, which oxidizes NO2 to HNO3, and titrating the HNO3 with NaOH.
What is the concentration of NO2, in mg/L, if a 5.0 L sample of air requires 9.14 mL of 0.01012 M NaOH to reach the methyl red end point?

The moles of HNO3 produced by pulling the sample through H2O2 are\[(0.01012 \ \mathrm{M})(0.00914 \ \mathrm{L}) \times \frac{1 \ \mathrm{mol} \ \mathrm{HNO}_{3}}{\mathrm{mol} \ \mathrm{NaOH}}=9.25 \times 10^{-5} \ \mathrm{mol} \ \mathrm{HNO}_{3} \nonumber\]A conservation of mass on nitrogen requires that each mole of NO2 produces one mole of HNO3; thus, the mass of NO2 in the sample is\[9.25 \times 10^{-5} \ \mathrm{mol} \ \mathrm{HNO}_{3} \times \frac{1 \ \mathrm{mol} \ \mathrm{NO}_{2}}{\mathrm{mol} \ \mathrm{HNO}_{3}} \times \frac{46.01 \ \mathrm{g} \ \mathrm{NO}_{2}}{\mathrm{mol} \ \mathrm{NO}_{2}}=4.26 \times 10^{-3} \ \mathrm{g} \ \mathrm{NO}_{2} \nonumber\]and the concentration of NO2 is\[\frac{4.26 \times 10^{-3} \ \mathrm{g} \ \mathrm{NO}_{2}}{5 \ \mathrm{L} \text { air }} \times \frac{1000 \ \mathrm{mg}}{\mathrm{g}}=0.852 \ \mathrm{mg} \ \mathrm{NO}_{2} / \mathrm{L} \text { air } \nonumber\]

For a back titration we must consider two acid–base reactions. Again, the calculations are straightforward.

The amount of protein in a sample of cheese is determined by a Kjeldahl analysis for nitrogen. After digesting a 0.9814-g sample of cheese, the nitrogen is oxidized to \(\text{NH}_4^+\), converted to NH3 with NaOH, and the NH3 distilled into a collection flask that contains 50.00 mL of 0.1047 M HCl. The excess HCl is back titrated with 0.1183 M NaOH, requiring 22.84 mL to reach the bromothymol blue end point. Report the %w/w protein in the cheese assuming there are 6.38 grams of protein for every gram of nitrogen in most dairy products.

Solution
The HCl in the collection flask reacts with two bases\[\mathrm{HCl}(a q)+\mathrm{NH}_{3}(a q) \rightarrow \mathrm{NH}_{4}^{+}(a q)+\mathrm{Cl}^{-}(a q) \nonumber\]\[\mathrm{HCl}(a q)+\mathrm{OH}^{-}(a q) \rightarrow \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{Cl}^{-}(a q) \nonumber\]The collection flask originally contains\[(0.1047 \ \mathrm{M \ HCl})(0.05000 \ \mathrm{L \ HCl})=5.235 \times 10^{-3} \mathrm{mol} \ \mathrm{HCl} \nonumber\]of which\[(0.1183 \ \mathrm{M} \ \mathrm{NaOH})(0.02284 \ \mathrm{L} \ \mathrm{NaOH}) \times \frac{1 \ \mathrm{mol} \ \mathrm{HCl}}{\mathrm{mol} \ \mathrm{NaOH}}=2.702 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \nonumber\]react with NaOH. The difference between the total moles of HCl and the moles of HCl that react with NaOH is the moles of HCl that react with NH3.\[5.235 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl}-2.702 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} =2.533 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \nonumber\]Because all the nitrogen in NH3 comes from the sample of cheese, we use a conservation of mass to determine the grams of nitrogen in the sample.\[2.533 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \times \frac{1 \ \mathrm{mol} \ \mathrm{NH}_{3}}{\mathrm{mol} \ \mathrm{HCl}} \times \frac{14.01 \ \mathrm{g} \ \mathrm{N}}{\mathrm{mol} \ \mathrm{NH}_{3}}=0.03549 \ \mathrm{g} \ \mathrm{N} \nonumber\]The mass of protein, therefore, is\[0.03549 \ \mathrm{g} \ \mathrm{N} \times \frac{6.38 \ \mathrm{g} \text { protein }}{\mathrm{g} \ \mathrm{N}}=0.2264 \ \mathrm{g} \text { protein } \nonumber\]and the % w/w protein is\[\frac{0.2264 \ \mathrm{g} \text { protein }}{0.9814 \ \mathrm{g} \text { sample }} \times 100=23.1 \ \% \mathrm{w} / \mathrm{w} \text { protein } \nonumber\]
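Because a back titration involves two acid–base reactions—here the HCl in the collection flask reacts with both the distilled NH3 and the NaOH titrant—the bookkeeping is easy to script. The following Python sketch repeats the cheese calculation; the 6.38 g protein per g nitrogen factor is the one given in the problem.

```python
# Kjeldahl back titration: %w/w protein in cheese (the example above).

M_HCl, V_HCl = 0.1047, 0.05000       # collection flask: mol/L and L
M_NaOH, V_NaOH = 0.1183, 0.02284     # back titration: mol/L and L
m_sample = 0.9814                    # g of cheese
g_protein_per_g_N = 6.38             # g protein per g N for dairy products
FW_N = 14.01                         # g/mol

mol_HCl_total = M_HCl * V_HCl
mol_HCl_excess = M_NaOH * V_NaOH             # 1:1 reaction of HCl with NaOH
mol_NH3 = mol_HCl_total - mol_HCl_excess     # HCl consumed by the distilled NH3

g_N = mol_NH3 * FW_N                         # each mole of NH3 carries one mole of N
g_protein = g_N * g_protein_per_g_N

print(f"{100 * g_protein / m_sample:.1f} %w/w protein")   # 23.1 %w/w
```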
Limestone consists mainly of CaCO3, with traces of iron oxides and other metal oxides. To determine the purity of a limestone, a 0.5143-g sample is dissolved using 10.00 mL of 1.396 M HCl. After heating to expel CO2, the excess HCl was titrated to the phenolphthalein end point, requiring 39.96 mL of 0.1004 M NaOH. Report the sample's purity as %w/w CaCO3.

The total moles of HCl used in this analysis is\[(1.396 \ \mathrm{M})(0.01000 \ \mathrm{L})=1.396 \times 10^{-2} \ \mathrm{mol} \ \mathrm{HCl} \nonumber\]Of the total moles of HCl\[(0.1004 \ \mathrm{M} \ \mathrm{NaOH})(0.03996 \ \mathrm{L}) \times \frac{1 \ \mathrm{mol} \ \mathrm{HCl}}{\mathrm{mol} \ \mathrm{NaOH}} =4.012 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \nonumber\]are consumed in the back titration with NaOH, which means that\[ 1.396 \times 10^{-2} \ \mathrm{mol} \ \mathrm{HCl}-4.012 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} =9.95 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \nonumber\]react with the CaCO3. Because \(\text{CO}_3^{2-}\) is dibasic, each mole of CaCO3 consumes two moles of HCl; thus\[9.95 \times 10^{-3} \ \mathrm{mol} \ \mathrm{HCl} \times \frac{1 \ \mathrm{mol} \ \mathrm{CaCO}_{3}}{2 \ \mathrm{mol} \ \mathrm{HCl}} \times \frac{100.09 \ \mathrm{g} \ \mathrm{CaCO}_{3}}{\mathrm{mol} \ \mathrm{CaCO}_{3}}=0.498 \ \mathrm{g} \ \mathrm{CaCO}_{3} \nonumber\]\[\frac{0.498 \ \mathrm{g} \ \mathrm{CaCO}_{3}}{0.5143 \ \mathrm{g} \text { sample }} \times 100=96.8 \ \% \mathrm{w} / \mathrm{w} \ \mathrm{CaCO}_{3} \nonumber\]

Earlier we noted that we can use an acid–base titration to analyze a mixture of acids or bases by titrating to more than one equivalence point. The concentration of each analyte is determined by accounting for its contribution to each equivalence point.

The alkalinity of natural waters usually is controlled by OH–, \(\text{HCO}_3^-\), and \(\text{CO}_3^{2-}\), present singularly or in combination. Titrating a 100.0-mL sample to a pH of 8.3 requires 18.67 mL of 0.02812 M HCl. A second 100.0-mL aliquot requires 48.12 mL of the same titrant to reach a pH of 4.5. Identify the sources of alkalinity and their concentrations in milligrams per liter.

Solution
Because the volume of titrant to reach a pH of 4.5 is more than twice that needed to reach a pH of 8.3, we know from Table 9.2.5 that the sample's alkalinity is controlled by \(\text{CO}_3^{2-}\) and \(\text{HCO}_3^-\). Titrating to a pH of 8.3 neutralizes \(\text{CO}_3^{2-}\) to \(\text{HCO}_3^-\)\[\mathrm{CO}_{3}^{2-}(a q)+\mathrm{HCl}(a q) \rightarrow \mathrm{HCO}_{3}^{-}(a q)+\mathrm{Cl}^{-}(a q) \nonumber\]but there is no reaction between the titrant and \(\text{HCO}_3^-\) (see Figure 9.2.14 ). The concentration of \(\text{CO}_3^{2-}\) in the sample, therefore, is\[{(0.02812 \ \mathrm{M \ HCl})(0.01867 \ \mathrm{L \ HCl}) \times} {\frac{1 \ \mathrm{mol} \ \mathrm{CO}_3^{2-}}{\mathrm{mol} \ \mathrm{HCl}}=5.250 \times 10^{-4} \ \mathrm{mol} \ \mathrm{CO}_{3}^{2-}} \nonumber\]\[\frac{5.250 \times 10^{-4} \ \mathrm{mol} \ \mathrm{CO}_{3}^{2-}}{0.1000 \ \mathrm{L}} \times \frac{60.01 \ \mathrm{g} \ \mathrm{CO}_{3}^{2-}}{\mathrm{mol} \ \mathrm{CO}_{3}^{2-}} \times \frac{1000 \ \mathrm{mg}}{\mathrm{g}}=315.1 \ \mathrm{mg} / \mathrm{L} \nonumber\]Titrating to a pH of 4.5 neutralizes \(\text{CO}_3^{2-}\) to H2CO3 and neutralizes \(\text{HCO}_3^-\) to H2CO3 (see Figure 9.2.14 ).\[\begin{array}{l}{\mathrm{CO}_{3}^{2-}(a q)+2 \mathrm{HCl}(a q) \rightarrow \mathrm{H}_{2} \mathrm{CO}_{3}(a q)+2 \mathrm{Cl}^{-}(a q)} \\ {\mathrm{HCO}_{3}^{-}(a q)+\mathrm{HCl}(a q) \rightarrow \mathrm{H}_{2} \mathrm{CO}_{3}(a q)+\mathrm{Cl}^{-}(a q)}\end{array} \nonumber\]Because we know how many moles of \(\text{CO}_3^{2-}\) are in the sample, we can calculate the volume of HCl it consumes.\[{5.250 \times 10^{-4} \ \mathrm{mol} \ \mathrm{CO}_{3}^{2-} \times \frac{2 \ \mathrm{mol} \ \mathrm{HCl}}{\mathrm{mol} \ \mathrm{CO}_{3}^{2-}} \times} {\frac{1 \ \mathrm{L} \ \mathrm{HCl}}{0.02812 \ \mathrm{mol} \ \mathrm{HCl}} \times \frac{1000 \ \mathrm{mL}}{\mathrm{L}}=37.34 \ \mathrm{mL} \ \mathrm{HCl}} \nonumber\]This leaves 48.12 mL–37.34 mL, or 10.78 mL of HCl to react with \(\text{HCO}_3^-\). The amount of \(\text{HCO}_3^-\) in the sample is\[{(0.02812 \ \mathrm{M \ HCl})(0.01078 \ \mathrm{L} \ \mathrm{HCl}) \times} {\frac{1 \ \mathrm{mol} \ \mathrm{H} \mathrm{CO}_{3}^{-}}{\mathrm{mol} \ \mathrm{HCl}}=3.031 \times 10^{-4} \ \mathrm{mol} \ \mathrm{HCO}_{3}^{-}} \nonumber\]The sample contains 315.1 mg \(\text{CO}_3^{2-}\)/L and 185.0 mg \(\text{HCO}_3^-\)/L.
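The relationships summarized in Table 9.2.5 and the arithmetic of this example combine naturally in a short script. In the Python sketch below, the 2% tolerance used to decide when two volumes are equal is our own assumption, not part of the standard method; the concentration calculations simply repeat those above.

```python
# Alkalinity: classify the source from the volumes of acid needed to reach
# pH 8.3 (V_ph) and pH 4.5 (V_mo), then compute the concentrations for a
# CO3(2-)/HCO3(-) mixture using the data from the example above.

def alkalinity_source(V_ph, V_mo, tol=0.02):
    """Qualitative source of alkalinity (logic of Table 9.2.5). The
    tolerance for judging two volumes equal is an assumption."""
    if V_ph == 0:
        return "HCO3- only"
    if abs(V_mo - V_ph) / V_mo <= tol:
        return "OH- only"
    if abs(V_mo - 2 * V_ph) / V_mo <= tol:
        return "CO3 2- only"
    return "OH- and CO3 2-" if V_mo < 2 * V_ph else "CO3 2- and HCO3-"

M_HCl = 0.02812               # mol/L
V_ph, V_mo = 18.67, 48.12     # mL of HCl to reach pH 8.3 and pH 4.5
V_sample = 0.1000             # L

print(alkalinity_source(V_ph, V_mo))                 # CO3 2- and HCO3-

mol_CO3 = M_HCl * V_ph / 1000                        # CO3(2-) -> HCO3(-) by pH 8.3
V_CO3 = 2 * mol_CO3 / M_HCl * 1000                   # mL of HCl consumed by CO3(2-) by pH 4.5
mol_HCO3 = M_HCl * (V_mo - V_CO3) / 1000             # remaining HCl neutralizes HCO3(-)

print(f"{1000 * mol_CO3 * 60.01 / V_sample:.1f} mg CO3(2-)/L")    # 315.1 mg/L
print(f"{1000 * mol_HCO3 * 61.02 / V_sample:.1f} mg HCO3(-)/L")   # 185.0 mg/L
```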
Samples that contain a mixture of the monoprotic weak acids 2–methylanilinium chloride (C7H10NCl, pKa = 4.447) and 3–nitrophenol (C6H5NO3, pKa = 8.39) can be analyzed by titrating with NaOH. A 2.006-g sample requires 19.65 mL of 0.200 M NaOH to reach the bromocresol purple end point and 48.41 mL of 0.200 M NaOH to reach the phenolphthalein end point. Report the %w/w of each compound in the sample.

Of the two analytes, 2-methylanilinium is the stronger acid and is the first to react with the titrant. Titrating to the bromocresol purple end point, therefore, provides information about the amount of 2-methylanilinium in the sample.\[(0.200\ \mathrm{M} \ \mathrm{NaOH} )(0.01965 \ \mathrm{L}) \times \frac{1 \ \mathrm{mol} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl}}{\mathrm{mol} \ \mathrm{NaOH}} \times \frac{143.61 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl}}{\mathrm{mol} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl}}=0.564 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl} \nonumber\]\[\frac{0.564 \ \mathrm{g} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl}}{2.006 \ \mathrm{g} \text { sample }} \times 100=28.1 \ \% \mathrm{w} / \mathrm{w} \ \mathrm{C}_{7} \mathrm{H}_{10} \mathrm{NCl} \nonumber\]Titrating from the bromocresol purple end point to the phenolphthalein end point, a total of 48.41 mL – 19.65 mL = 28.76 mL, gives the amount of NaOH that reacts with 3-nitrophenol.
The amount of 3-nitrophenol in the sample, therefore, is\[(0.200 \ \mathrm{M} \ \mathrm{NaOH}) (0.02876 \ \mathrm{L}) \times \frac{1 \ \mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3}}{\mathrm{mol} \ \mathrm{NaOH}} \times \frac{139.11 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3}}{\mathrm{mol} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3}}=0.800 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3} \nonumber\]\[\frac{0.800 \ \mathrm{g} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3}}{2.006 \ \mathrm{g} \text { sample }} \times 100=39.8 \ \% \mathrm{w} / \mathrm{w} \ \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{NO}_{3} \nonumber\]

The earlier example on the alkalinity of natural waters shows how we can use an acid–base titration to determine the forms of alkalinity in waters and their concentrations. We can extend this approach to other systems. For example, if we titrate a sample to the methyl orange end point and the phenolphthalein end point using either a strong acid or a strong base, we can determine which of the following species are present and their concentrations: H3PO4, \(\text{H}_2\text{PO}_4^-\), \(\text{HPO}_4^{2-}\), \(\text{PO}_4^{3-}\), HCl, and NaOH. As outlined in Table 9.2.8 , each species or mixture of species has a unique relationship between the volumes of titrant needed to reach these two end points. Note that mixtures containing three or more of these species are not possible.

Use a ladder diagram to convince yourself that mixtures containing three or more of these species are unstable.

In Table 9.2.8 , VPH and VMO are, respectively, the volumes of titrant at the phenolphthalein and methyl orange end points; when no information is provided, the volume at each end point is zero.

In addition to a quantitative analysis and a qualitative analysis, we also can use an acid–base titration to characterize the chemical and physical properties of matter. Two useful characterization applications are the determination of a compound's equivalent weight and the determination of its acid dissociation constant or its base dissociation constant.

Suppose we titrate a sample of an impure weak acid to a well-defined end point using a monoprotic strong base as the titrant. If we assume the titration involves the transfer of n protons, then the moles of titrant needed to reach the end point is\[\text { moles titrant }=\frac{n \text { moles titrant }}{\text { mole analyte }} \times \text { moles analyte } \nonumber\]If we know the analyte's identity, we can use this equation to determine the amount of analyte in the sample\[\text { grams analyte }=\text { moles titrant } \times \frac{1 \text { mole analyte }}{n \text { moles titrant }} \times F W \text { analyte } \nonumber\]where FW is the analyte's formula weight.

But what if we do not know the analyte's identity? If we titrate a pure sample of the analyte, we can obtain some useful information that may help us establish its identity. Because we do not know the number of protons that are titrated, we let n = 1 and replace the analyte's formula weight with its equivalent weight (EW)\[\text { grams analyte }=\text { moles titrant } \times \frac{1 \text { equivalent analyte }}{1 \text { mole titrant }} \times E W \text { analyte } \nonumber\]where\[F W=n \times E W \nonumber\]

A 0.2521-g sample of an unknown weak acid is titrated with 0.1005 M NaOH, requiring 42.68 mL to reach the phenolphthalein end point. Determine the compound's equivalent weight.
Which of the following compounds—ascorbic acid, malonic acid, succinic acid, or citric acid—is most likely to be the unknown weak acid?

Solution
The moles of NaOH needed to reach the end point is\[(0.1005 \ \mathrm{M} \ \mathrm{NaOH})(0.04268 \ \mathrm{L} \ \mathrm{NaOH})=4.289 \times 10^{-3} \ \mathrm{mol} \ \mathrm{NaOH} \nonumber\]The equivalents of weak acid are the same as the moles of NaOH used in the titration; thus, the analyte's equivalent weight is\[E W=\frac{0.2521 \ \mathrm{g}}{4.289 \times 10^{-3} \text { equivalents }}=58.78 \ \mathrm{g} / \mathrm{equivalent} \nonumber\]The possible formula weights for the weak acid are 58.78 g/mol (n = 1), 117.6 g/mol (n = 2), and 176.3 g/mol (n = 3). If the analyte is a monoprotic weak acid, then its formula weight is 58.78 g/mol, eliminating ascorbic acid as a possibility. If it is a diprotic weak acid, then the analyte's formula weight is either 58.78 g/mol or 117.6 g/mol, depending on whether the weak acid was titrated to its first or its second equivalence point. Succinic acid, with a formula weight of 118.1 g/mol, is a possibility, but malonic acid is not. If the analyte is a triprotic weak acid, then its formula weight is 58.78 g/mol, 117.6 g/mol, or 176.3 g/mol. None of these values is close to the formula weight for citric acid, eliminating it as a possibility. Only succinic acid provides a possible match.

Figure 9.2.16 shows the potentiometric titration curve for the titration of a 0.500-g sample of an unknown weak acid. The titrant is 0.1032 M NaOH. What is the weak acid's equivalent weight?

The first of the two visible end points is approximately 37 mL of NaOH. The analyte's equivalent weight, therefore, is\[(0.1032 \ \mathrm{M} \ \mathrm{NaOH})(0.037 \ \mathrm{L}) \times \frac{1 \text { equivalent }}{\mathrm{mol} \ \mathrm{NaOH}}=3.8 \times 10^{-3} \text { equivalents } \nonumber\]\[E W=\frac{0.5000 \ \mathrm{g}}{3.8 \times 10^{-3} \text { equivalents }}=1.3 \times 10^{2} \ \mathrm{g} / \mathrm{equivalent} \nonumber\]
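Calculating an equivalent weight requires only the sample's mass and the moles of titrant. The following Python sketch repeats the first example above (0.2521 g of unknown acid, 42.68 mL of 0.1005 M NaOH); the printed values differ from the example in the last digit only because the example rounds the moles of NaOH before dividing.

```python
# Equivalent weight of an unknown weak acid from its titration data,
# repeating the example above.

m_sample = 0.2521                 # g of the unknown acid
M_NaOH, V_NaOH = 0.1005, 0.04268  # mol/L and L of titrant to reach the end point

equivalents = M_NaOH * V_NaOH     # moles of NaOH = equivalents of acid (n = 1)
EW = m_sample / equivalents
print(f"EW = {EW:.1f} g/equivalent")           # 58.8 g/equivalent

# possible formula weights if the acid is mono-, di-, or triprotic (FW = n * EW)
for n in (1, 2, 3):
    print(f"n = {n}: FW = {n * EW:.1f} g/mol")   # about 58.8, 117.5, and 176.3 g/mol
```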
In this case the concentration of HA before the equivalence point is always greater than that of A–. Determining the pKa by the half-equivalence point method overestimates its value if the acid is too strong and underestimates its value if the acid is too weak. Use the potentiometric titration curve in Figure 9.2.16 to estimate the pKa values for the weak acid in Exercise 9.2.10 . At 1⁄2Veq, or approximately 18.5 mL, the pH is approximately 2.2; thus, we estimate that the analyte’s pKa is 2.2. A second approach for determining a weak acid’s pKa is to use a Gran plot. For example, earlier in this chapter we derived the following equation for the titration of a weak acid with a strong base.\[\left[\mathrm{H}_{3} \mathrm{O}^{+}\right] \times V_{b}=K_{a} V_{e q}-K_{a} V_{b} \nonumber\]A plot of [H3O+] \(\times\) Vb versus Vb for volumes less than the equivalence point yields a straight line with a slope of –Ka. (A short numerical illustration of this linearization appears at the end of this section.) Other linearizations have been developed that use the entire titration curve or that require no assumptions [(a) Gonzalez, A. G.; Asuero, A. G. Anal. Chim. Acta 1992, 256, 29–33; (b) Papanastasiou, G.; Ziogas, I.; Kokkindis, G. Anal. Chim. Acta 1993, 277, 119–135]. This approach to determining an acidity constant has been used to study the acid–base properties of humic acids, which are naturally occurring, large molecular weight organic acids with multiple acidic sites. In one study a humic acid was found to have six titratable sites, three of which were identified as carboxylic acids, two of which were believed to be secondary or tertiary amines, and one of which was identified as a phenolic group [Alexio, L. M.; Godinho, O. E. S.; da Costa, W. F. Anal. Chim. Acta 1992, 257, 35–39]. Values of Ka determined by this method may have a substantial error if the effect of activity is ignored. See Chapter 6.9 for a discussion of activity. In an acid–base titration, the volume of titrant needed to reach the equivalence point is proportional to the moles of titrand. Because the pH of the titrand or the titrant is a function of its concentration, the change in pH at the equivalence point—and thus the feasibility of an acid–base titration—depends on their respective concentrations. Figure 9.2.18 , for example, shows a series of titration curves for the titration of several concentrations of HCl with equimolar solutions of NaOH. For titrand and titrant concentrations smaller than \(10^{-3}\) M, the change in pH at the end point is too small to provide an accurate and a precise result. Acid–base titrimetry is an example of a total analysis technique in which the signal is proportional to the absolute amount of analyte. See Chapter 3 for a discussion of the difference between total analysis techniques and concentration techniques. A minimum concentration of \(10^{-3}\) M places limits on the smallest amount of analyte we can analyze successfully. For example, suppose our analyte has a formula weight of 120 g/mol. To successfully monitor the titration’s end point using an indicator or a pH probe, the titrand needs an initial volume of approximately 25 mL. If the analyte’s concentration in this volume is at least \(10^{-3}\) M, then each sample must contain at least 3 mg of analyte. For this reason, acid–base titrations generally are limited to major and minor analytes. We can extend the analysis of gases to trace analytes by pulling a large volume of the gas through a suitable collection solution. We need a volume of titrand sufficient to cover the tip of the pH probe or to allow for an easy observation of the indicator’s color.
A volume of 25 mL is not an unreasonable estimate of the minimum volume.One goal of analytical chemistry is to extend analyses to smaller samples. Here we describe two interesting approaches to titrating μL and pL samples. In one experimental design (Figure 9.2.19 ), samples of 20–100 μL are held by capillary action between a flat-surface pH electrode and a stainless steel sample stage [Steele, A.; Hieftje, G. M. Anal. Chem. 1984, 56, 2884–2888]. The titrant is added using the oscillations of a piezoelectric ceramic device to move an angled glass rod in and out of a tube connected to a reservoir that contains the titrant. Each time the glass tube is withdrawn an approximately 2 nL microdroplet of titrant is released. The microdroplets are allowed to fall onto the sample, with mixing accomplished by spinning the sample stage at 120 rpm. A total of 450 microdroplets, with a combined volume of 0.81–0.84 μL, is dispensed between each pH measurement. In this fashion a titration curve is constructed. This method has been used to titrate solutions of 0.1 M HCl and 0.1 M CH3COOH with 0.1 M NaOH. Absolute errors ranged from a minimum of +0.1% to a maximum of –4.1%, with relative standard deviations from 0.15% to 4.7%. Samples as small as 20 μL were titrated successfully.Another approach carries out the acid–base titration in a single drop of solution [(a) Gratzl, M.; Yi, C. Anal. Chem. 1993, 65, 2085–2088; (b) Yi, C.; Gratzl, M. Anal. Chem. 1994, 66, 1976–1982; (c) Hui, K. Y.; Gratzl, M. Anal. Chem. 1997, 69, 695–698; (d) Yi, C.; Huang, D.; Gratzl, M. Anal. Chem. 1996, 68, 1580–1584; (e) Xie, H.; Gratzl, M. Anal. Chem. 1996, 68, 3665–3669]. The titrant is delivered using a microburet fashioned from a glass capillary micropipet (Figure 9.2.20 ). The microburet has a 1-2 μm tip filled with an agar gel membrane. The tip of the microburet is placed within a drop of the sample solution, which is suspended in heptane, and the titrant is allowed to diffuse into the sample. The titration’s progress is monitored using an acid–base indicator and the time needed to reach the end point is measured. The rate of the titrant’s diffusion from the microburet is determined by a prior calibration. Once calibrated the end point time is converted to an end point volume. Samples usually consist of picoliter volumes (10–12 liters), with the smallest sample being 0.7 pL. The precision of the titrations is about 2%.Titrations conducted with microliter or picoliter sample volumes require a smaller absolute amount of analyte. For example, diffusional titrations have been conducted on as little as 29 femtomoles (10–15 moles) of nitric acid. Nevertheless, the analyte must be present in the sample at a major or minor level for the titration to give accurate and precise results.When working with a macro–major or a macro–minor sample, an acid–base titration can achieve a relative error of 0.1–0.2%. The principal limitation to accuracy is the difference between the end point and the equivalence point.An acid–base titration’s relative precision depends primarily on the precision with which we can measure the end point volume and the precision in detecting the end point. Under optimum conditions, an acid–base titration has a relative precision of 0.1–0.2%. We can improve the relative precision by using the largest possible buret and by ensuring we use most of its capacity in reaching the end point. 
A smaller volume buret is a better choice when using costly reagents, when waste disposal is a concern, or when we must complete the titration quickly to avoid competing chemical reactions. An automatic titrator is particularly useful for titrations that require small volumes of titrant because it provides significantly better precision (typically about ±0.05% of the buret’s volume). The precision of detecting the end point depends on how it is measured and the slope of the titration curve at the end point. With an indicator the precision of the end point signal usually is ±0.03–0.10 mL. Potentiometric end points usually are more precise. For an acid–base titration we can write the following general analytical equation to express the titrant’s volume in terms of the amount of titrand\[\text { volume of titrant }=k \times \text { moles of titrand } \nonumber\]where k, the sensitivity, is determined by the stoichiometry between the titrand and the titrant. Consider, for example, the determination of sulfurous acid, H2SO3, by titrating with NaOH to the first equivalence point\[\mathrm{H}_{2} \mathrm{SO}_{3}(a q)+\mathrm{OH}^{-}(a q) \rightarrow \mathrm{H}_{2} \mathrm{O}(l )+\mathrm{HSO}_{3}^{-}(a q) \nonumber\]At the equivalence point the relationship between the moles of NaOH and the moles of H2SO3 is\[\mathrm{mol} \ \mathrm{NaOH}=\mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{3} \nonumber\]Substituting the titrant’s molarity and volume for the moles of NaOH and rearranging\[M_{\mathrm{NaOH}} \times V_{\mathrm{NaOH}}=\mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{3} \nonumber\]\[V_{\mathrm{NaOH}}=\frac{1}{M_{\mathrm{NaOH}}} \times \mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{3} \nonumber\]we find that k is\[k=\frac{1}{M_{\mathrm{NaOH}}} \nonumber\]There are two ways in which we can improve a titration’s sensitivity. The first, and most obvious, is to decrease the titrant’s concentration because it is inversely proportional to the sensitivity, k. The second approach, which applies only if the titrand is multiprotic, is to titrate to a later equivalence point. If we titrate H2SO3 to its second equivalence point\[ \mathrm{H}_{2} \mathrm{SO}_{3}(a q)+2 \mathrm{OH}^{-}(a q) \rightarrow 2 \mathrm{H}_{2} \mathrm{O}(l)+\mathrm{SO}_{3}^{2-}(a q)\nonumber\]then each mole of H2SO3 consumes two moles of NaOH\[\mathrm{mol} \ \mathrm{NaOH}=2 \times \mathrm{mol} \ \mathrm{H}_{2} \mathrm{SO}_{3} \nonumber\]and the sensitivity becomes\[k=\frac{2}{M_{\mathrm{NaOH}}} \nonumber\]In practice, however, any improvement in sensitivity is offset by a decrease in the end point’s precision if a larger volume of titrant requires us to refill the buret. For this reason, standard acid–base titrimetric procedures are written to ensure that a titration uses 60–100% of the buret’s volume. Acid–base titrants are not selective. A strong base titrant, for example, reacts with all acids in a sample, regardless of their individual strengths. If the titrand contains an analyte and an interferent, then selectivity depends on their relative acid strengths. Let’s consider two limiting situations. If the analyte is a stronger acid than the interferent, then the titrant will react with the analyte before it begins reacting with the interferent. The feasibility of the analysis depends on whether the titrant’s reaction with the interferent affects the accurate location of the analyte’s equivalence point. If the acid dissociation constants are substantially different, the end point for the analyte can be determined accurately.
Conversely, if the acid dissociation constants for the analyte and interferent are similar, then there may not be an accurate end point for the analyte. In the latter case a quantitative analysis for the analyte is not possible. In the second limiting situation the analyte is a weaker acid than the interferent. In this case the volume of titrant needed to reach the analyte’s equivalence point is determined by the concentration of both the analyte and the interferent. To account for the interferent’s contribution to the end point, an end point for the interferent must be available. Again, if the acid dissociation constants for the analyte and interferent are significantly different, then the analyte’s determination is possible. If the acid dissociation constants are similar, however, there is only a single equivalence point and we cannot separate the analyte’s and the interferent’s contributions to the equivalence point volume. Acid–base titrations require less time than most gravimetric procedures, but more time than many instrumental methods of analysis, particularly when analyzing many samples. With an automatic titrator, however, concerns about analysis time are less significant. When performing a titration manually our equipment needs—a buret and, perhaps, a pH meter—are few in number, inexpensive, routinely available, and easy to maintain. Automatic titrators are available for between $3000 and $10 000.
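The Gran-plot linearization described earlier in this section is easy to explore numerically. The sketch below is only a minimal illustration and is not part of the procedures above: the acid’s Ka and the equivalence-point volume are hypothetical values chosen to generate synthetic pH data from the simple buffer approximation, rather than measured results. Fitting \([\text{H}_3\text{O}^+] \times V_b\) against \(V_b\) then recovers –Ka as the slope and Veq as the x-intercept.

```python
import numpy as np

# Hypothetical titration of a weak acid (assumed Ka = 1.8e-5) with a strong base,
# with an assumed equivalence point at Veq = 25.0 mL. The pH values before the
# equivalence point are simulated from the buffer approximation
#   pH = pKa + log[ Vb / (Veq - Vb) ]
Ka_true, Veq_true = 1.8e-5, 25.0                      # assumed, for illustration only
Vb = np.array([5.0, 8.0, 11.0, 14.0, 17.0, 20.0])     # mL of NaOH added
pH = -np.log10(Ka_true) + np.log10(Vb / (Veq_true - Vb))
h3o = 10.0**(-pH)                                     # [H3O+] at each point

# Gran function: [H3O+] x Vb = Ka*Veq - Ka*Vb, which is linear in Vb
slope, intercept = np.polyfit(Vb, h3o * Vb, 1)

print(f"estimated Ka  = {-slope:.2e}")                # recovers ~1.8e-5
print(f"estimated Veq = {intercept / -slope:.1f} mL") # recovers ~25.0 mL
```

With real data the fit is restricted to volumes well before the equivalence point and, as noted earlier, ignoring the effect of activity can introduce a substantial error in the recovered Ka.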
9.3: Complexation Titrations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/09%3A_Titrimetric_Methods/9.03%3A_Complexation_Titrations
The earliest examples of metal–ligand complexation titrations are Liebig’s determinations, in the 1850s, of cyanide and chloride using, respectively, Ag+ and Hg2+ as the titrant. Practical analytical applications of complexation titrimetry were slow to develop because many metals and ligands form a series of metal–ligand complexes. Liebig’s titration of CN– with Ag+ was successful because they form a single, stable complex of \(\text{Ag(CN)}_2^-\), which results in a single, easily identified end point. Other metal–ligand complexes, such as \(\text{CdI}_4^{2-}\), are not analytically useful because they form a series of metal–ligand complexes (CdI+, CdI2(aq), \(\text{CdI}_3^-\) and \(\text{CdI}_4^{2-}\)) that produce a sequence of poorly defined end points. Recall that an acid–base titration curve for a diprotic weak acid has a single end point if its two Ka values are not sufficiently different. See Figure 9.2.6 for an example. In 1945, Schwarzenbach introduced aminocarboxylic acids as multidentate ligands. The most widely used of these new ligands—ethylenediaminetetraacetic acid, or EDTA—forms a strong 1:1 complex with many metal ions. The availability of a ligand that gives a single, easily identified end point made complexation titrimetry a practical analytical method. Ethylenediaminetetraacetic acid, or EDTA, is an aminocarboxylic acid. EDTA, the structure of which is shown in Figure 9.3.1 a in its fully deprotonated form, is a Lewis base with six binding sites—the four negatively charged carboxylate groups and the two tertiary amino groups—that can donate up to six pairs of electrons to a metal ion. The resulting metal–ligand complex, in which EDTA forms a cage-like structure around the metal ion (Figure 9.3.1 b), is very stable. The actual number of coordination sites depends on the size of the metal ion; however, all metal–EDTA complexes have a 1:1 stoichiometry. To illustrate the formation of a metal–EDTA complex, let’s consider the reaction between Cd2+ and EDTA\[\mathrm{Cd}^{2+}(a q)+\mathrm{Y}^{4-}(a q)\rightleftharpoons\mathrm{Cd} \mathrm{Y}^{2-}(a q) \label{9.1}\]where Y4– is a shorthand notation for the fully deprotonated form of EDTA shown in Figure 9.3.1 a. Because the reaction’s formation constant\[K_{f}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right]\left[\mathrm{Y}^{4-}\right]}=2.9 \times 10^{16} \label{9.2}\]is large, its equilibrium position lies far to the right. Formation constants for other metal–EDTA complexes are found in Appendix 12. In addition to its properties as a ligand, EDTA is also a weak acid. The fully protonated form of EDTA, H6Y2+, is a hexaprotic weak acid with successive pKa values of\[\mathrm{p} K_\text{a1}=0.0 \quad \mathrm{p} K_\text{a2}=1.5 \quad \mathrm{p} K_\text{a3}=2.0 \nonumber\]\[\mathrm{p} K_\text{a4}=2.66 \quad \mathrm{p} K_\text{a5}=6.16 \quad \mathrm{p} K_\text{a6}=10.24 \nonumber\]The first four values are for the carboxylic acid protons and the last two values are for the ammonium protons. Figure 9.3.2 shows a ladder diagram for EDTA. The specific form of EDTA in reaction \ref{9.1} is the predominate species only when the pH is more basic than 10.24. The formation constant for CdY2– in Equation \ref{9.2} assumes that EDTA is present as Y4–. Because EDTA has many forms, when we prepare a solution of EDTA we know its total concentration, CEDTA, not the concentration of a specific form, such as Y4–.
To use Equation \ref{9.2}, we need to rewrite it in terms of CEDTA.At any pH a mass balance on EDTA requires that its total concentration equal the combined concentrations of each of its forms.\[C_{\mathrm{EDTA}}=\left[\mathrm{H}_{6} \mathrm{Y}^{2+}\right]+\left[\mathrm{H}_{5} \mathrm{Y}^{+}\right]+\left[\mathrm{H}_{4} \mathrm{Y}\right]+\left[\mathrm{H}_{3} \mathrm{Y}^-\right]+\left[\mathrm{H}_{2} \mathrm{Y}^{2-}\right]+\left[\mathrm{HY}^{3-}\right]+\left[\mathrm{Y}^{4-}\right] \nonumber\]To correct the formation constant for EDTA’s acid–base properties we need to calculate the fraction, \(\alpha_{\text{Y}^{4-}}\), of EDTA that is present as Y4–.\[\alpha_{\text{Y}^{4-}}=\frac{\left[\text{Y}^{4-}\right]}{C_\text{EDTA}} \label{9.3}\]Table 9.3.1 provides values of \(\alpha_{\text{Y}^{4-}}\) for selected pH levels. Solving Equation \ref{9.3} for [Y4–] and substituting into Equation \ref{9.2} for the CdY2– formation constant\[K_{\mathrm{f}}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] (\alpha_{\mathrm{Y}^{4-}}) C_{\mathrm{EDTA}}} \nonumber\]and rearranging gives\[K_{f}^{\prime}=K_{f} \times \alpha_{\text{Y}^{4-}}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] C_{\mathrm{EDTA}}} \label{9.4}\]where \(K_f^{\prime}\) is a pH-dependent conditional formation constant. As shown in Table 9.3.2 , the conditional formation constant for CdY2– becomes smaller and the complex becomes less stable at more acidic pHs.To maintain a constant pH during a complexation titration we usually add a buffering agent. If one of the buffer’s components is a ligand that binds with Cd2+, then EDTA must compete with the ligand for Cd2+. For example, an \(\text{NH}_4^+ / \text{NH}_3\) buffer includes NH3, which forms several stable Cd2+–NH3 complexes. Because EDTA forms a stronger complex with Cd2+ than does NH3, it displaces NH3; however, the stability of the Cd2+–EDTA complex decreases.We can account for the effect of an auxiliary complexing agent, such as NH3, in the same way we accounted for the effect of pH. Before adding EDTA, the mass balance on Cd2+, CCd, is\[C_{\mathrm{Cd}} = \left[\mathrm{Cd}^{2+}\right] + \left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)^{2+}\right] + \left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{2}^{2+}\right] + \left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{3}^{2+}\right] + \left[\mathrm{Cd}\left(\mathrm{NH}_{3}\right)_{4}^{2+}\right] \nonumber\]and the fraction of uncomplexed Cd2+, \(\alpha_{Cd^{2+}}\), is\[\alpha_{\mathrm{Cd}^{2+}}=\frac{\left[\mathrm{Cd}^{2+}\right]}{C_{\mathrm{Cd}}} \label{9.5}\]The value of \(\alpha_{\mathrm{Cd}^{2+}}\) depends on the concentration of NH3. Contrast this with \(\alpha_{\text{Y}^{4-}}\), which depends on pH.Solving Equation \ref{9.5} for [Cd2+] and substituting into Equation \ref{9.4} gives\[K_{f}^{\prime}=K_{f} \times \alpha_{Y^{4-}} = \frac {[\text{CdY}^{2-}]} {\alpha_{\text{Cd}^{2+}} C_\text{Cd} C_\text{EDTA}} \nonumber\]Because the concentration of NH3 in a buffer essentially is constant, we can rewrite this equation\[K_{f}^{\prime \prime}=K_{f} \times \alpha_{\mathrm{Y}^{4-}} \times \alpha_{\mathrm{Cd}^{2+}}=\frac{\left[\mathrm{CdY}^{2-}\right]}{C_{\mathrm{Cd}} C_{\mathrm{EDTA}}} \label{9.6}\]to give a conditional formation constant, \(K_f^{\prime \prime}\), that accounts for both pH and the auxiliary complexing agent’s concentration. 
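Because the conditional formation constant is simply the product of Kf and the appropriate alpha fractions, it is straightforward to evaluate once \(\alpha_{\text{Y}^{4-}}\) and \(\alpha_{\text{Cd}^{2+}}\) are known. The short sketch below is only a minimal illustration: it takes the alpha values as given inputs (the numbers shown are those quoted later in this section for a pH of 10 and 0.0100 M NH3) rather than calculating them from the acid dissociation constants or the Cd2+–NH3 formation constants.

```python
# Conditional formation constants for CdY2- (Equations 9.4 and 9.6).
Kf       = 2.9e16    # formation constant for CdY2-
alpha_Y4 = 0.367     # fraction of EDTA present as Y4- at pH 10 (Table 9.3.1)
alpha_Cd = 0.0881    # fraction of Cd2+ not complexed by 0.0100 M NH3 (Table 9.3.3)

Kf_prime  = Kf * alpha_Y4            # corrects for pH only
Kf_dprime = Kf_prime * alpha_Cd      # corrects for pH and for the auxiliary ligand

print(f"Kf'  = {Kf_prime:.2e}")      # ~1.1e16
print(f"Kf'' = {Kf_dprime:.2e}")     # ~9.4e14
```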
Table 9.3.3 provides values of \(\alpha_{\text{M}^{2+}}\) for several metal ions when NH3 is the complexing agent. Now that we know something about EDTA’s chemical properties, we are ready to evaluate its usefulness as a titrant. To do so we need to know the shape of a complexometric titration curve. In Chapter 9.2 we learned that an acid–base titration curve shows how the titrand’s pH changes as we add titrant. The analogous result for a complexation titration shows the change in pM, where M is the metal ion’s concentration, as a function of the volume of EDTA. In this section we will learn how to calculate a titration curve using the equilibrium calculations from Chapter 6. We also will learn how to sketch a good approximation of any complexation titration curve using a limited number of simple calculations. (A short computational sketch that automates these calculations appears at the end of this section.) pM = –log[M2+]. Let’s calculate the titration curve for 50.0 mL of \(5.00 \times 10^{-3}\) M Cd2+ using a titrant of 0.0100 M EDTA. Furthermore, let’s assume the titrand is buffered to a pH of 10 using a buffer that is 0.0100 M in NH3. Because the pH is 10, some of the EDTA is present in forms other than Y4–. In addition, EDTA will compete with NH3 for the Cd2+. To evaluate the titration curve, therefore, we first need to calculate the conditional formation constant for CdY2–. From Table 9.3.1 and Table 9.3.3 we find that \(\alpha_{\text{Y}^{4-}}\) is 0.367 at a pH of 10, and that \(\alpha_{\text{Cd}^{2+}}\) is 0.0881 when the concentration of NH3 is 0.0100 M. Using these values, the conditional formation constant is\[K_{f}^{\prime \prime}=K_{f} \times \alpha_{\text{Y}^{4-}} \times \alpha_{\text{Cd}^{2+}}=\left(2.9 \times 10^{16}\right)(0.367)(0.0881)=9.4 \times 10^{14} \nonumber\]Because \(K_f^{\prime \prime}\) is so large, we can treat the titration reaction\[\mathrm{Cd}^{2+}(a q)+\mathrm{Y}^{4-}(a q) \longrightarrow \mathrm{CdY}^{2-}(a q) \nonumber\]as if it proceeds to completion. The next task is to determine the volume of EDTA needed to reach the equivalence point. At the equivalence point we know that the moles of EDTA added must equal the moles of Cd2+ in our sample; thus\[\operatorname{mol} \mathrm{EDTA}=M_{\mathrm{EDTA}} \times V_{\mathrm{EDTA}}=M_{\mathrm{Cd}} \times V_{\mathrm{Cd}}=\mathrm{mol} \ \mathrm{Cd}^{2+} \nonumber\]Substituting in known values, we find that it requires\[V_{eq}=V_{\mathrm{EDTA}}=\frac{M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{M_{\mathrm{EDTA}}}=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{0.0100 \ \mathrm{M}}=25.0 \ \mathrm{mL} \nonumber\]of EDTA to reach the equivalence point. Before the equivalence point, Cd2+ is present in excess and pCd is determined by the concentration of unreacted Cd2+. Because not all unreacted Cd2+ is free—some is complexed with NH3—we must account for the presence of NH3.
For example, after adding 5.0 mL of EDTA, the total concentration of Cd2+ is\[C_{\mathrm{Cd}} = \frac {(\text{mol Cd}^{2+})_\text{initial} - (\text{mol EDTA})_\text{added}} {\text{total volume}} = \frac {M_\text{Cd}V_\text{Cd} - M_\text{EDTA}V_\text{EDTA}} {V_\text{Cd} + V_\text{EDTA}} \nonumber\]\[C_{\mathrm{Cd}}=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})-(0.0100 \ \mathrm{M})(5.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+5.0 \ \mathrm{mL}} \nonumber\]\[C_{\mathrm{Cd}}=3.64 \times 10^{-3} \ \mathrm{M} \nonumber\]To calculate the concentration of free Cd2+ we use Equation \ref{9.5}\[\left[\mathrm{Cd}^{2+}\right]=\alpha_{\mathrm{Cd}^{2+}} \times C_{\mathrm{Cd}}=(0.0881)\left(3.64 \times 10^{-3} \ \mathrm{M}\right)=3.21 \times 10^{-4} \ \mathrm{M} \nonumber\]which gives a pCd of\[\mathrm{pCd}=-\log \left[\mathrm{Cd}^{2+}\right]=-\log \left(3.21 \times 10^{-4}\right)=3.49 \nonumber\]At the equivalence point all Cd2+ initially in the titrand is now present as CdY2–. The concentration of Cd2+, therefore, is determined by the dissociation of the CdY2– complex. First, we calculate the concentration of CdY2–.\[\left[\mathrm{CdY}^{2-}\right]=\frac{\left(\mathrm{mol} \ \mathrm{Cd}^{2+}\right)_{\mathrm{initial}}}{\text { total volume }} = \frac {M_\text{Cd}V_\text{Cd}} {V_\text{Cd} + V_\text{EDTA}} \nonumber\]\[\left[\mathrm{CdY}^{2-}\right]=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+25.0 \ \mathrm{mL}}=3.33 \times 10^{-3} \ \mathrm{M} \nonumber\]Next, we solve for the concentration of Cd2+ in equilibrium with CdY2–.\[K_{\mathrm{f}}^{\prime \prime}=\frac{\left[\mathrm{CdY}^{2-}\right]}{C_{\mathrm{Cd}} C_{\mathrm{EDTA}}}=\frac{3.33 \times 10^{-3}-x}{(x)(x)}=9.5 \times 10^{14} \nonumber\]\[x=C_{\mathrm{Cd}}=1.87 \times 10^{-9} \ \mathrm{M} \nonumber\]In calculating that [CdY2–] at the equivalence point is \(3.33 \times 10^{-3}\) M, we assumed the reaction between Cd2+ and EDTA went to completion. Here we let the system relax back to equilibrium, increasing CCd and CEDTA from 0 to x, and decreasing the concentration of CdY2– by x.Once again, to find the concentration of uncomplexed Cd we must account for the presence of NH3; thus\[\left[\mathrm{Cd}^{2+}\right]=\alpha_{\mathrm{Cd}^{2+}} \times C_{\mathrm{Cd}}=(0.0881)\left(1.87 \times 10^{-9} \ \mathrm{M}\right)=1.64 \times 10^{-10} \ \mathrm{M} \nonumber\]and pCd is 9.78 at the equivalence point.After the equivalence point, EDTA is in excess and the concentration of Cd2+ is determined by the dissociation of the CdY2– complex. First, we calculate the concentrations of CdY2– and of unreacted EDTA. 
For example, after adding 30.0 mL of EDTA the concentration of CdY2– is\[\left[\mathrm{CdY}^{2-}\right]=\frac{\left(\mathrm{mol} \mathrm{Cd}^{2+}\right)_{\mathrm{initial}}}{\text { total volume }} = \frac{M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{V_{\mathrm{Cd}}+V_{\mathrm{EDTA}}} \nonumber\]\[\left[\mathrm{CdY}^{2-}\right]=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+30.0 \ \mathrm{mL}}=3.12 \times 10^{-3} \ \mathrm{M} \nonumber\]and the concentration of EDTA is\[C_{\mathrm{EDTA}} = \frac {(\text{mol EDTA})_\text{added} - (\text{mol Cd}^{2+})_\text{initial}} {\text{total volume}} = \frac{M_{\mathrm{EDTA}} V_{\mathrm{EDTA}}-M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{V_{\mathrm{Cd}}+V_{\mathrm{EDTA}}} \nonumber\]\[C_{\text{EDTA}} = \frac {(0.0100 \text{ M})(30.0 \text{ mL}) - (5.00 \times 10^{-3} \text{ M})(50.0 \text{ mL})} {50.0 \text{ mL} + 30.0 \text{ mL}} \nonumber\]\[C_{\mathrm{EDTA}}=6.25 \times 10^{-4} \ \mathrm{M} \nonumber\]Substituting into Equation \ref{9.6} and solving for [Cd2+] gives\[\frac{\left[\mathrm{CdY}^{2-}\right]}{C_{\mathrm{Cd}} C_{\mathrm{EDTA}}} = \frac{3.12 \times 10^{-3} \ \mathrm{M}}{C_{\mathrm{Cd}}\left(6.25 \times 10^{-4} \ \mathrm{M}\right)} = 9.5 \times 10^{14} \nonumber\]\[C_{\text{Cd}} = 5.27 \times 10^{-15} \text{ M} \nonumber\]\[ \left[ \text{Cd}^{2+} \right] = \alpha_{\text{Cd}^{2+}} \times C_{\text{Cd}} = (0.0881)(5.27 \times 10^{-15} \text{ M}) = 4.64 \times 10^{-16} \text{ M} \nonumber\]or a pCd of 15.33. Table 9.3.4 and Figure 9.3.3 show additional results for this titration. After the equivalence point we know the equilibrium concentrations of CdY2- and of EDTA in all its forms, CEDTA. We can solve for CCd using \(K_f^{\prime \prime}\) and then calculate [Cd2+] using \(\alpha_{\text{Cd}^{2+}}\). Because we used the same conditional formation constant, \(K_f^{\prime \prime}\), for other calculations in this section, this is the approach used here as well. There is a second method for calculating [Cd2+] after the equivalence point. Because the calculation uses only [CdY2-] and CEDTA, we can use \(K_f^{\prime}\) instead of \(K_f^{\prime \prime}\); thus\[\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] C_{\mathrm{EDTA}}}=\alpha_{\mathrm{Y}^{4-}} \times K_{\mathrm{f}} \nonumber\]\[\frac{3.13 \times 10^{-3} \ \mathrm{M}}{\left[\mathrm{Cd}^{2+}\right]\left(6.25 \times 10^{-4}\right)}=(0.367)\left(2.9 \times 10^{16}\right) \nonumber\]Solving gives [Cd2+] = \(4.71 \times 10^{-16}\) M and a pCd of 15.33. We will use this approach when we learn how to sketch a complexometric titration curve. Calculate titration curves for the titration of 50.0 mL of \(5.00 \times 10^{-3}\) M Cd2+ with 0.0100 M EDTA (a) at a pH of 10 and (b) at a pH of 7. Neither titration includes an auxiliary complexing agent. Compare your results with Figure 9.3.3 and comment on the effect of pH on the titration of Cd2+ with EDTA. Let’s begin with the calculations at a pH of 10 where some of the EDTA is present in forms other than Y4–. To evaluate the titration curve, therefore, we need the conditional formation constant for CdY2–, which, from Table 9.3.2 , is \(K_f^{\prime} = 1.1 \times 10^{16}\).
Note that the conditional formation constant is larger in the absence of an auxiliary complexing agent. The titration’s equivalence point requires\[V_{e q}=V_{\mathrm{EDTA}}=\frac{M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{M_{\mathrm{EDTA}}}=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{(0.0100 \ \mathrm{M})}=25.0 \ \mathrm{mL} \nonumber\]of EDTA. Before the equivalence point, Cd2+ is present in excess and pCd is determined by the concentration of unreacted Cd2+. For example, after adding 5.00 mL of EDTA, the total concentration of Cd2+ is\[\left[\mathrm{Cd}^{2+}\right]=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})-(0.0100 \ \mathrm{M})(5.00 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+5.00 \ \mathrm{mL}} \nonumber\]which gives [Cd2+] as \(3.64 \times 10^{-3}\) M and pCd as 2.43. At the equivalence point all Cd2+ initially in the titrand is now present as CdY2–. The concentration of Cd2+, therefore, is determined by the dissociation of the CdY2– complex. First, we calculate the concentration of CdY2–.\[\left[\mathrm{CdY}^{2-}\right]=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+25.00 \ \mathrm{mL}}=3.33 \times 10^{-3} \ \mathrm{M} \nonumber\]Next, we solve for the concentration of Cd2+ in equilibrium with CdY2–.\[K_{f}^{\prime}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] C_{\mathrm{EDTA}}}=\frac{3.33 \times 10^{-3}-x}{(x)(x)}=1.1 \times 10^{16} \nonumber\]Solving gives [Cd2+] as \(5.50 \times 10^{-10}\) M or a pCd of 9.26 at the equivalence point. After the equivalence point, EDTA is in excess and the concentration of Cd2+ is determined by the dissociation of the CdY2– complex. First, we calculate the concentrations of CdY2– and of unreacted EDTA. For example, after adding 30.0 mL of EDTA\[\left[\mathrm{CdY}^{2-}\right]=\frac{\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+30.00 \ \mathrm{mL}}=3.12 \times 10^{-3} \ \mathrm{M} \nonumber\]\[C_{\mathrm{EDTA}}=\frac{(0.0100 \ \mathrm{M})(30.00 \ \mathrm{mL})-\left(5.00 \times 10^{-3} \ \mathrm{M}\right)(50.0 \ \mathrm{mL})}{50.0 \ \mathrm{mL}+30.00 \ \mathrm{mL}} \nonumber\]\[C_{\mathrm{EDTA}}=6.25 \times 10^{-4} \ \mathrm{M} \nonumber\]Substituting into the equation for the conditional formation constant\[K_{f}^{\prime}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] C_{\mathrm{EDTA}}}=\frac{3.12 \times 10^{-3} \ \mathrm{M}}{(\mathrm{x})\left(6.25 \times 10^{-4} \ \mathrm{M}\right)}=1.1 \times 10^{16} \nonumber\]and solving for [Cd2+] gives \(4.54 \times 10^{-16}\) M or a pCd of 15.34. The calculations at a pH of 7 are identical, except the conditional formation constant for CdY2– is \(1.5 \times 10^{13}\) instead of \(1.1 \times 10^{16}\). The following table summarizes results for these two titrations as well as the results from Table 9.3.4 for the titration of Cd2+ at a pH of 10 in the presence of 0.0100 M NH3 as an auxiliary complexing agent.
Volume of EDTA (mL) | pCd at pH 10 | pCd at pH 10 w/ 0.0100 M NH3 | pCd at pH 7
0.00 | 2.30 | 3.36 | 2.30
5.00 | 2.43 | 3.49 | 2.43
10.0 | 2.60 | 3.66 | 2.60
15.0 | 2.81 | 3.87 | 2.81
20.0 | 3.15 | 4.20 | 3.15
23.0 | 3.56 | 4.62 | 3.56
25.0 | 9.26 | 9.77 | 7.83
27.0 | 14.94 | 14.95 | 12.08
Examining these results allows us to draw several conclusions. First, in the absence of an auxiliary complexing agent the titration curve before the equivalence point is independent of pH (compare columns 2 and 4). Second, for any pH, the titration curve after the equivalence point is the same regardless of whether an auxiliary complexing agent is present (compare columns 2 and 3).
Third, the largest change in pCd through the equivalence point occurs at higher pHs and in the absence of an auxiliary complexing agent. For example, from 23.0 mL to 27.0 mL of EDTA the change in pCd is 11.38 at a pH of 10, 10.33 at a pH of 10 in the presence of 0.0100 M NH3, and 8.52 at a pH of 7. To evaluate the relationship between a titration’s equivalence point and its end point, we need to construct only a reasonable approximation of the exact titration curve. In this section we demonstrate a simple method for sketching a complexation titration curve. Our goal is to sketch the titration curve quickly, using as few calculations as possible. Let’s use the titration of 50.0 mL of \(5.00 \times 10^{-3}\) M Cd2+ with 0.0100 M EDTA in the presence of 0.0100 M NH3 to illustrate our approach. This is the same example we used in developing the calculations for a complexation titration curve. You can review the results of that calculation in Table 9.3.4 and Figure 9.3.3 . We begin by calculating the titration’s equivalence point volume, which, as we determined earlier, is 25.0 mL. Next, we draw our axes, placing pCd on the y-axis and the titrant’s volume on the x-axis. To indicate the equivalence point’s volume, we draw a vertical line that intersects the x-axis at 25.0 mL of EDTA. Figure 9.3.4 a shows the result of the first step in our sketch. Before the equivalence point, Cd2+ is present in excess and pCd is determined by the concentration of unreacted Cd2+. Because not all unreacted Cd2+ is free—some is complexed with NH3—we must account for the presence of NH3. The calculations are straightforward, as we saw earlier. Figure 9.3.4 b shows the pCd after adding 5.00 mL and 10.0 mL of EDTA. The third step in sketching our titration curve is to add two points after the equivalence point. Here the concentration of Cd2+ is controlled by the dissociation of the Cd2+–EDTA complex. Beginning with the conditional formation constant\[K_{f}^{\prime}=\frac{\left[\mathrm{CdY}^{2-}\right]}{\left[\mathrm{Cd}^{2+}\right] C_{\mathrm{EDTA}}} = \alpha_{\text{Y}^{4-}} \times K_{f}=(0.367)\left(2.9 \times 10^{16}\right)=1.1 \times 10^{16} \nonumber\]we take the log of each side and rearrange, arriving at\[\begin{array}{c}{\log K_{f}^{\prime}=-\log \left[\mathrm{Cd}^{2+}\right]+\log \frac{\left[\mathrm{CdY}^{2-}\right]}{C_{\mathrm{EDTA}}}} \\ {\mathrm{pCd}=\log K_{f}^{\prime}+\log \frac{C_{\mathrm{EDTA}}}{\left[\mathrm{CdY}^{2-}\right]}}\end{array} \nonumber\]Recall that we can use either of our two possible conditional formation constants, \(K_f^{\prime}\) or \(K_f^{\prime \prime}\), to determine the composition of the system at equilibrium. Note that after the equivalence point, the titrand is a metal–ligand complexation buffer, with pCd determined by CEDTA and [CdY2–].
The buffer is at its lower limit of \(\text{pCd} = \log{K_f^{\prime}} - 1\) when\[\frac{C_{\mathrm{EDTA}}}{\left[\mathrm{CdY}^{2-}\right]} = \frac {(\text{mol EDTA})_\text{added} - (\text{mol Cd}^{2+})_\text{initial}} {(\text{mol Cd}^{2+})_\text{initial}} = \frac {1} {10} \nonumber\]Making appropriate substitutions and solving, we find that\[\frac{M_{\mathrm{EDTA}} V_{\mathrm{EDTA}}-M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{M_{\mathrm{Cd}} V_{\mathrm{Cd}}}=\frac{1}{10} \nonumber\]\[M_{\mathrm{EDTA}} V_{\mathrm{EDTA}}-M_{\mathrm{Cd}} V_{\mathrm{Cd}}=0.1 \times M_{\mathrm{Cd}} V_{\mathrm{Cd}} \nonumber\]\[V_{\mathrm{EDTA}}=\frac{1.1 \times M_{\mathrm{Cd}} V_{\mathrm{Cd}}}{M_{\mathrm{EDTA}}}=1.1 \times V_{e q} \nonumber\]Thus, when the titration reaches 110% of the equivalence point volume, pCd is \(\log{K_f^{\prime}} - 1\). A similar calculation should convince you that pCd is \(\log{K_f^{\prime}}\) when the volume of EDTA is \(2 \times V_\text{eq}\). Figure 9.3.4 c shows the third step in our sketch. First, we add a ladder diagram for the CdY2– complex, including its buffer range, using its \(\log{K_f^{\prime}}\) value of 16.04. Next, we add two points, one for pCd at 110% of Veq (a pCd of 15.04 at 27.5 mL) and one for pCd at 200% of Veq (a pCd of 16.04 at 50.0 mL). Next, we draw a straight line through each pair of points, extending each line through the vertical line that indicates the equivalence point’s volume (Figure 9.3.4 d). Finally, we complete our sketch by drawing a smooth curve that connects the three straight-line segments (Figure 9.3.4 e). A comparison of our sketch to the exact titration curve (Figure 9.3.4 f) shows that they are in close agreement. Our treatment here is general and applies to any complexation titration using EDTA as a titrant. Sketch titration curves for the titration of 50.0 mL of \(5.00 \times 10^{-3}\) M Cd2+ with 0.0100 M EDTA (a) at a pH of 10 and (b) at a pH of 7. Compare your sketches to the calculated titration curves from Exercise 9.3.1 . The figure below shows a sketch of the titration curves. The two black points before the equivalence point (VEDTA = 5 mL, pCd = 2.43 and VEDTA = 15 mL, pCd = 2.81) are the same for both pHs and taken from the results of Exercise 9.3.1 . The two black points after the equivalence point for a pH of 7 (VEDTA = 27.5 mL, pCd = 12.2 and VEDTA = 50 mL, pCd = 13.2) are plotted using the \(\log{K_f^{\prime}}\) of 13.2 for CdY2-. The two points after the equivalence point for a pH of 10 (VEDTA = 27.5 mL, pCd = 15.0 and VEDTA = 50 mL, pCd = 16.0) are plotted using the \(\log{K_f^{\prime}}\) of 16.0 for CdY2-. The points in red are the calculations from Exercise 9.3.1 for a pH of 10, and the points in green are the calculations from Exercise 9.3.1 for a pH of 7. The equivalence point of a complexation titration occurs when we react stoichiometrically equivalent amounts of the titrand and titrant. As is the case for an acid–base titration, we estimate the equivalence point for a complexation titration using an experimental end point. A variety of methods are available for locating the end point, including indicators and sensors that respond to a change in the solution conditions. Most indicators for complexation titrations are organic dyes—known as metallochromic indicators—that form stable complexes with metal ions. The indicator, Inm–, is added to the titrand’s solution where it forms a stable complex with the metal ion, MInn–.
As we add EDTA it reacts first with free metal ions, and then displaces the indicator from MInn–.\[\text{MIn}^{n-}(aq) + \text{Y}^{4-}(aq) \rightarrow \text{MY}^{2-}(aq) + \text{In}^{m-}(aq) \nonumber\]If MInn– and Inm– have different colors, then the change in color signals the end point. The accuracy of an indicator’s end point depends on the strength of the metal–indicator complex relative to the strength of the metal–EDTA complex. If the metal–indicator complex is too strong, the change in color occurs after the equivalence point. If the metal–indicator complex is too weak, however, the end point occurs before we reach the equivalence point. Most metallochromic indicators also are weak acids. One consequence of this is that the conditional formation constant for the metal–indicator complex depends on the titrand’s pH. This provides some control over an indicator’s titration error because we can adjust the strength of a metal–indicator complex by adjusting the pH at which we carry out the titration. Unfortunately, because the indicator is a weak acid, the color of the uncomplexed indicator also may change with pH. Figure 9.3.5 , for example, shows the color of the indicator calmagite as a function of pH and pMg, where H2In–, HIn2–, and In3– are different forms of the uncomplexed indicator, and MgIn– is the Mg2+–calmagite complex. Because the color of calmagite’s metal–indicator complex is red, its use as a metallochromic indicator has a practical pH range of approximately 8.5–11 where the uncomplexed indicator, HIn2–, has a blue color. Table 9.3.5 provides examples of metallochromic indicators and the metal ions and pH conditions for which they are useful. All metal ions carry a +2 charge except for iron, which is +3; metal ions in italic font have poor end points. Even if a suitable indicator does not exist, it often is possible to complete an EDTA titration by introducing a small amount of a secondary metal–EDTA complex if the secondary metal ion forms a stronger complex with the indicator and a weaker complex with EDTA than the analyte. For example, calmagite has a poor end point when titrating Ca2+ with EDTA. Adding a small amount of Mg2+–EDTA to the titrand gives a sharper end point. Because Ca2+ forms a stronger complex with EDTA, it displaces Mg2+, which then forms the red-colored Mg2+–calmagite complex. At the titration’s end point, EDTA displaces Mg2+ from the Mg2+–calmagite complex, signaling the end point by the presence of the uncomplexed indicator’s blue form. An important limitation when using a metallochromic indicator is that we must be able to see the indicator’s change in color at the end point. This may be difficult if the solution is already colored. For example, when titrating Cu2+ with EDTA, ammonia is used to adjust the titrand’s pH. The intensely colored \(\text{Cu(NH}_3)_4^{2+}\) complex obscures the indicator’s color, making an accurate determination of the end point difficult. Other absorbing species present within the sample matrix may also interfere. This often is a problem when analyzing clinical samples, such as blood, or environmental samples, such as natural waters. If at least one species in a complexation titration absorbs electromagnetic radiation, then we can identify the end point by monitoring the titrand’s absorbance at a carefully selected wavelength.
For example, we can identify the end point for a titration of Cu2+ with EDTA in the presence of NH3 by monitoring the titrand’s absorbance at a wavelength of 745 nm, where the \(\text{Cu(NH}_3)_4^{2+}\) complex absorbs strongly. At the beginning of the titration the absorbance is at a maximum. As we add EDTA, however, the reaction\[\text{Cu(NH}_3)_4^{2+}(aq) + \text{Y}^{4-} \rightleftharpoons \text{CuY}^{2-}(aq) + 4\text{NH}_3(aq) \nonumber\]decreases the concentration of \(\text{Cu(NH}_3)_4^{2+}\) and decreases the absorbance until we reach the equivalence point. After the equivalence point the absorbance essentially remains unchanged. The resulting spectrophotometric titration curve is shown in Figure 9.3.6 a. Note that the titration curve’s y-axis is not the measured absorbance, Ameas, but a corrected absorbance, Acorr\[A_\text{corr} = A_\text{meas} \times \frac {V_\text{EDTA} + V_\text{Cu}} {V_\text{Cu}} \nonumber\]where VEDTA and VCu are, respectively, the volumes of EDTA and Cu. Correcting the absorbance for the titrand’s dilution ensures that the spectrophotometric titration curve consists of linear segments that we can extrapolate to find the end point. Other common spectrophotometric titration curves are shown in Figures 9.3.6 b-f. The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical complexation titrimetric method. Although each method is unique, the following description of the determination of the hardness of water provides an instructive example of a typical procedure. The description here is based on Method 2340C as published in Standard Methods for the Examination of Water and Wastewater, 20th Ed., American Public Health Association: Washington, D. C., 1998. Description of the Method: The operational definition of water hardness is the total concentration of cations in a sample that can form an insoluble complex with soap. Although most divalent and trivalent metal ions contribute to hardness, the two most important metal ions are Ca2+ and Mg2+. Hardness is determined by titrating with EDTA at a buffered pH of 10. Calmagite is used as an indicator. Hardness is reported as mg CaCO3/L. Procedure: Select a volume of sample that requires less than 15 mL of titrant to keep the analysis time under 5 minutes and, if necessary, dilute the sample to 50 mL with distilled water. Adjust the sample’s pH by adding 1–2 mL of a pH 10 buffer that contains a small amount of Mg2+–EDTA. Add 1–2 drops of indicator and titrate with a standard solution of EDTA until the red-to-blue end point is reached (Figure 9.3.7 ). Questions: 1. Why is the sample buffered to a pH of 10? What problems might you expect at a higher pH or a lower pH? Of the two primary cations that contribute to hardness, Mg2+ forms the weaker complex with EDTA and is the last cation to react with the titrant. Calmagite is a useful indicator because it gives a distinct end point when titrating Mg2+ (see Table 9.3.5 ). Because of calmagite’s acid–base properties, the range of pMg values over which the indicator changes color depends on the titrand’s pH (Figure 9.3.5 ). Figure 9.3.8 shows the titration curve for a 50-mL solution of \(10^{-3}\) M Mg2+ with \(10^{-2}\) M EDTA at pHs of 9, 10, and 11. Superimposed on each titration curve is the range of conditions for which the average analyst will observe the end point. At a pH of 9 an early end point is possible, which results in a negative determinate error.
A late end point and a positive determinate error are possible if the pH is 11. 2. Why is a small amount of the Mg2+–EDTA complex added to the buffer? The titration’s end point is signaled by the indicator calmagite. The indicator’s end point with Mg2+ is distinct, but its change in color when titrating Ca2+ does not provide a good end point (see Table 9.3.5 ). If the sample does not contain any Mg2+ as a source of hardness, then the titration’s end point is poorly defined, which leads to an inaccurate and imprecise result. Adding a small amount of Mg2+–EDTA to the buffer ensures that the titrand includes at least some Mg2+. Because Ca2+ forms a stronger complex with EDTA, it displaces Mg2+ from the Mg2+–EDTA complex, freeing the Mg2+ to bind with the indicator. This displacement is stoichiometric, so the total concentration of hardness cations remains unchanged. The displacement by EDTA of Mg2+ from the Mg2+–indicator complex signals the titration’s end point. 3. Why does the procedure specify that the titration take no longer than 5 minutes? A time limitation suggests there is a kinetically-controlled interference, possibly arising from a competing chemical reaction. In this case the interference is the possible precipitation of CaCO3 at a pH of 10. Although many quantitative applications of complexation titrimetry have been replaced by other analytical methods, a few important applications continue to find relevance. In this section we review the general application of complexation titrimetry with an emphasis on applications from the analysis of water and wastewater. First, however, we discuss the selection and standardization of complexation titrants. EDTA is a versatile titrant that can be used to analyze virtually all metal ions. Although EDTA is the usual titrant when the titrand is a metal ion, it cannot be used to titrate anions, for which Ag+ or Hg2+ are suitable titrants. Solutions of EDTA are prepared from its soluble disodium salt, Na2H2Y•2H2O, and standardized by titrating against a solution made from the primary standard CaCO3. Solutions of Ag+ and Hg2+ are prepared using AgNO3 and Hg(NO3)2, both of which are secondary standards. Standardization is accomplished by titrating against a solution prepared from primary standard grade NaCl. Complexation titrimetry continues to be listed as a standard method for the determination of hardness, Ca2+, CN–, and Cl– in waters and wastewaters. The evaluation of hardness was described earlier in Representative Method 9.3.1. The determination of Ca2+ is complicated by the presence of Mg2+, which also reacts with EDTA. To prevent an interference the pH is adjusted to 12–13, which precipitates Mg2+ as Mg(OH)2. Titrating with EDTA using murexide or Eriochrome Blue Black R as the indicator gives the concentration of Ca2+. Cyanide is determined at concentrations greater than 1 mg/L by making the sample alkaline with NaOH and titrating with a standard solution of AgNO3 to form the soluble \(\text{Ag(CN)}_2^-\) complex. The end point is determined using p-dimethylaminobenzalrhodamine as an indicator, with the solution turning from a yellow to a salmon color in the presence of excess Ag+. Chloride is determined by titrating with Hg(NO3)2, forming HgCl2(aq). The sample is acidified to a pH of 2.3–3.8 and diphenylcarbazone, which forms a colored complex with excess Hg2+, serves as the indicator. The pH indicator xylene cyanol FF is added to ensure that the pH is within the desired range.
The initial solution is a greenish blue, and the titration is carried out to a purple end point. The quantitative relationship between the titrand and the titrant is determined by the titration reaction’s stoichiometry. For a titration using EDTA, the stoichiometry is always 1:1. The concentration of a solution of EDTA is determined by standardizing against a solution of Ca2+ prepared using a primary standard of CaCO3. A 0.4071-g sample of CaCO3 is transferred to a 500-mL volumetric flask, dissolved using a minimum of 6 M HCl, and diluted to volume. After transferring a 50.00-mL portion of this solution to a 250-mL Erlenmeyer flask, the pH is adjusted by adding 5 mL of a pH 10 NH3–NH4Cl buffer that contains a small amount of Mg2+–EDTA. After adding calmagite as an indicator, the solution is titrated with the EDTA, requiring 42.63 mL to reach the end point. Report the molar concentration of EDTA in the titrant. Solution: The primary standard of Ca2+ has a concentration of\[\frac {0.4071 \text{ g CaCO}_3}{0.5000 \text{ L}} \times \frac {1 \text{ mol Ca}^{2+}}{100.09 \text{ g CaCO}_3} = 8.135 \times 10^{-3} \text{ M Ca}^{2+} \nonumber\]The moles of Ca2+ in the titrand is\[8.135 \times 10^{-3} \text{ M Ca}^{2+} \times 0.05000 \text{ L} = 4.068 \times 10^{-4} \text{ mol Ca}^{2+} \nonumber\]which means that \(4.068 \times 10^{-4}\) moles of EDTA are used in the titration. The molarity of EDTA in the titrant is\[\frac {4.068 \times 10^{-4} \text{ mol Ca}^{2+}}{0.04263 \text{ L}} = 9.543 \times 10^{-3} \text{ M EDTA} \nonumber\]A 100.0-mL sample is analyzed for hardness using the procedure outlined in Representative Method 9.3.1, requiring 23.63 mL of 0.0109 M EDTA. Report the sample’s hardness as mg CaCO3/L. In an analysis for hardness we treat the sample as if Ca2+ is the only metal ion that reacts with EDTA. The moles of Ca2+ in the sample, therefore, are\[(0.0109 \text{ M EDTA})(0.02363 \text{ L}) \times \frac {1 \text{ mol Ca}^{2+}}{\text{mol EDTA}} = 2.58 \times 10^{-4} \text{ mol Ca}^{2+} \nonumber\]\[2.58 \times 10^{-4} \text{ mol Ca}^{2+} \times \frac {1 \text{ mol CaCO}_3}{\text{mol Ca}^{2+}} \times \frac {100.09 \text{ g CaCO}_3}{\text{mol CaCO}_3} = 0.0258 \text{ g CaCO}_3 \nonumber\]and the sample’s hardness is\[\frac {0.0258 \text{ g CaCO}_3}{0.1000 \text{ L}} \times \frac {1000 \text{ mg}}{\text{g}} = 258 \text{ mg CaCO}_3\text{/L} \nonumber\]As shown in the following example, we can extend this calculation to complexation reactions that use other titrants. The concentration of Cl– in a 100.0-mL sample of water from a freshwater aquifer is tested for the encroachment of sea water by titrating with 0.0516 M Hg(NO3)2. The sample is acidified and titrated to the diphenylcarbazone end point, requiring 6.18 mL of the titrant. Report the concentration of Cl–, in mg/L, in the aquifer. Solution: The reaction between Cl– and Hg2+ produces a metal–ligand complex of HgCl2(aq). Each mole of Hg2+ reacts with 2 moles of Cl–; thus\[\frac {0.0516 \text{ mol Hg(NO}_3)_2}{\text{L}} \times 0.00618 \text{ L} \times \frac {2 \text{ mol Cl}^-}{\text{mol Hg(NO}_3)_2} \times \frac {35.453 \text{ g Cl}^-}{\text{mol Cl}^-} = 0.0226 \text{ g Cl}^- \nonumber\]are in the sample. The concentration of Cl– in the sample is\[\frac {0.0226 \text{ g Cl}^-}{0.1000 \text{ L}} \times \frac {1000 \text{ mg}}{\text{g}} = 226 \text{ mg/L} \nonumber\]A 0.4482-g sample of impure NaCN is titrated with 0.1018 M AgNO3, requiring 39.68 mL to reach the end point.
Report the purity of the sample as %w/w NaCN. The titration of CN– with Ag+ produces the metal–ligand complex \(\text{Ag(CN)}_2^-\); thus, each mole of AgNO3 reacts with two moles of NaCN. The grams of NaCN in the sample is\[(0.1018 \text{ M AgNO}_3)(0.03968 \text{ L}) \times \frac {2 \text{ mol NaCN}}{\text{mol AgNO}_3} \times \frac {49.01 \text{ g NaCN}}{\text{mol NaCN}} = 0.3959 \text{ g NaCN} \nonumber\]and the purity of the sample is\[\frac {0.3959 \text{ g NaCN}}{0.4482 \text{ g sample}} \times 100 = 88.33 \text{% w/w NaCN} \nonumber\]Finally, complex titrations involving multiple analytes or back titrations are possible. An alloy of chromel that contains Ni, Fe, and Cr is analyzed by a complexation titration using EDTA as the titrant. A 0.7176-g sample of the alloy is dissolved in HNO3 and diluted to 250 mL in a volumetric flask. A 50.00-mL aliquot of the sample, treated with pyrophosphate to mask the Fe and Cr, requires 26.14 mL of 0.05831 M EDTA to reach the murexide end point. A second 50.00-mL aliquot is treated with hexamethylenetetramine to mask the Cr. Titrating with 0.05831 M EDTA requires 35.43 mL to reach the murexide end point. Finally, a third 50.00-mL aliquot is treated with 50.00 mL of 0.05831 M EDTA, and back titrated to the murexide end point with 6.21 mL of 0.06316 M Cu2+. Report the weight percents of Ni, Fe, and Cr in the alloy. Solution: The stoichiometry between EDTA and each metal ion is 1:1. For each of the three titrations, therefore, we can write an equation that relates the moles of EDTA to the moles of metal ions that are titrated: titration 1: mol Ni = mol EDTA; titration 2: mol Ni + mol Fe = mol EDTA; titration 3: mol Ni + mol Fe + mol Cr + mol Cu = mol EDTA. We use the first titration to determine the moles of Ni in our 50.00-mL portion of the dissolved alloy. The titration uses\[\frac {0.05831 \text{ mol EDTA}}{\text{L}} \times 0.02614 \text{ L} = 1.524 \times 10^{-3} \text{ mol EDTA} \nonumber\]which means the sample contains \(1.524 \times 10^{-3}\) mol Ni. Having determined the moles of EDTA that react with Ni, we use the second titration to determine the amount of Fe in the sample. The second titration uses\[\frac {0.05831 \text{ mol EDTA}}{\text{L}} \times 0.03543 \text{ L} = 2.066 \times 10^{-3} \text{ mol EDTA} \nonumber\]of which \(1.524 \times 10^{-3}\) mol are used to titrate Ni. This leaves \(5.42 \times 10^{-4}\) mol of EDTA to react with Fe; thus, the sample contains \(5.42 \times 10^{-4}\) mol of Fe. Finally, we can use the third titration to determine the amount of Cr in the alloy. The third titration uses\[\frac {0.05831 \text{ mol EDTA}}{\text{L}} \times 0.05000 \text{ L} = 2.916 \times 10^{-3} \text{ mol EDTA} \nonumber\]of which \(1.524 \times 10^{-3}\) mol are used to titrate Ni and \(5.42 \times 10^{-4}\) mol are used to titrate Fe. This leaves \(8.50 \times 10^{-4}\) mol of EDTA to react with Cu and Cr. The amount of EDTA that reacts with Cu is\[\frac {0.06316 \text{ mol Cu}^{2+}}{\text{L}} \times 0.00621 \text{ L} \times \frac {1 \text{ mol EDTA}}{\text{mol Cu}^{2+}} = 3.92 \times 10^{-4} \text{ mol EDTA} \nonumber\]leaving \(4.58 \times 10^{-4}\) mol of EDTA to react with Cr.
The sample, therefore, contains \(4.58 \times 10^{-4}\) mol of Cr. Having determined the moles of Ni, Fe, and Cr in a 50.00-mL portion of the dissolved alloy, we can calculate the %w/w of each analyte in the alloy.\[\frac {1.524 \times 10^{-3} \text{ mol Ni}}{50.00 \text{ mL}} \times 250.0 \text{ mL} \times \frac {58.69 \text{ g Ni}}{\text{mol Ni}} = 0.4472 \text{ g Ni} \nonumber\]\[\frac {0.4472 \text{ g Ni}}{0.7176 \text{ g sample}} \times 100 = 62.32 \text{% w/w Ni} \nonumber\]\[\frac {5.42 \times 10^{-4} \text{ mol Fe}}{50.00 \text{ mL}} \times 250.0 \text{ mL} \times \frac {55.845 \text{ g Fe}}{\text{mol Fe}} = 0.151 \text{ g Fe} \nonumber\]\[\frac {0.151 \text{ g Fe}}{0.7176 \text{ g sample}} \times 100 = 21.0 \text{% w/w Fe} \nonumber\]\[\frac {4.58 \times 10^{-4} \text{ mol Cr}}{50.00 \text{ mL}} \times 250.0 \text{ mL} \times \frac {51.996 \text{ g Cr}}{\text{mol Cr}} = 0.119 \text{ g Cr} \nonumber\]\[\frac {0.119 \text{ g Cr}}{0.7176 \text{ g sample}} \times 100 = 16.6 \text{% w/w Cr} \nonumber\]An indirect complexation titration with EDTA can be used to determine the concentration of sulfate, \(\text{SO}_4^{2-}\), in a sample. A 0.1557-g sample is dissolved in water and any sulfate present is precipitated as BaSO4 by adding Ba(NO3)2. After filtering and rinsing the precipitate, it is dissolved in 25.00 mL of 0.02011 M EDTA. The excess EDTA is titrated with 0.01113 M Mg2+, requiring 4.23 mL to reach the end point. Calculate the %w/w Na2SO4 in the sample. The total moles of EDTA used in this analysis is\[(0.02011 \text{ M EDTA})(0.02500 \text{ L}) = 5.028 \times 10^{-4} \text{ mol EDTA} \nonumber\]Of this,\[(0.01113 \text{ M Mg}^{2+})(0.00423 \text{ L}) \times \frac {1 \text{ mol EDTA}}{\text{mol Mg}^{2+}} = 4.708 \times 10^{-5} \text{ mol EDTA} \nonumber\]are consumed in the back titration with Mg2+, which means that\[5.028 \times 10^{-4} \text{ mol EDTA} - 4.708 \times 10^{-5} \text{ mol EDTA} = 4.557 \times 10^{-4} \text{ mol EDTA} \nonumber\]react with the BaSO4. Each mole of BaSO4 reacts with one mole of EDTA; thus\[4.557 \times 10^{-4} \text{ mol EDTA} \times \frac {1 \text{ mol BaSO}_4}{\text{mol EDTA}} \times \frac {1 \text{ mol Na}_2\text{SO}_4}{\text{mol BaSO}_4} \times \frac {142.04 \text{ g Na}_2\text{SO}_4}{\text{mol Na}_2\text{SO}_4} = 0.06473 \text{ g Na}_2\text{SO}_4 \nonumber\]\[\frac{0.06473 \text{ g Na}_2\text{SO}_4}{0.1557 \text{ g sample}} \times 100 = 41.57 \text{% w/w Na}_2\text{SO}_4 \nonumber\]The scale of operations, accuracy, precision, sensitivity, time, and cost of a complexation titration are similar to those described earlier for acid–base titrations. Complexation titrations, however, are more selective. Although EDTA forms strong complexes with most metal ions, by carefully controlling the titrand’s pH we can analyze samples that contain two or more analytes. The reason we can use pH to provide selectivity is shown in Figure 9.3.9 a. A titration of Ca2+ at a pH of 9 has a distinct break in the titration curve because the conditional formation constant for CaY2– of \(2.6 \times 10^9\) is large enough to ensure that the reaction of Ca2+ and EDTA goes to completion. At a pH of 3, however, the conditional formation constant of 1.23 is so small that very little Ca2+ reacts with the EDTA. Suppose we need to analyze a mixture of Ni2+ and Ca2+. Both analytes react with EDTA, but their conditional formation constants differ significantly. If we adjust the pH to 3 we can titrate Ni2+ with EDTA without titrating Ca2+ (Figure 9.3.9 b).
When the titration is complete, we adjust the titrand’s pH to 9 and titrate the Ca2+ with EDTA. A spectrophotometric titration is a particularly useful approach for analyzing a mixture of analytes. For example, as shown in Figure 9.3.10 , we can determine the concentrations of two metal ions if there is a difference between the absorbance of the two metal-ligand complexes. This page titled 9.3: Complexation Titrations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
9.4: Redox Titrations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/09%3A_Titrimetric_Methods/9.04%3A_Redox_Titrations
Analytical titrations using oxidation–reduction reactions were introduced shortly after the development of acid–base titrimetry. The earliest redox titration took advantage of chlorine’s oxidizing power. In 1787, Claude Berthollet introduced a method for the quantitative analysis of chlorine water (a mixture of Cl2, HCl, and HOCl) based on its ability to oxidize indigo, a dye that is colorless in its oxidized state. In 1814, Joseph Gay-Lussac developed a similar method to determine chlorine in bleaching powder. In both methods the end point is a change in color. Before the equivalence point the solution is colorless due to the oxidation of indigo. After the equivalence point, however, unreacted indigo imparts a permanent color to the solution.The number of redox titrimetric methods increased in the mid-1800s with the introduction of \(\text{MnO}_4^-\), \(\text{Cr}_2\text{O}_7^{2-}\), and I2 as oxidizing titrants, and of Fe2+ and \(\text{S}_2\text{O}_3^{2-}\) as reducing titrants. Even with the availability of these new titrants, redox titrimetry was slow to develop due to the lack of suitable indicators. A titrant can serve as its own indicator if its oxidized and its reduced forms differ significantly in color. For example, the intensely purple \(\text{MnO}_4^-\) ion serves as its own indicator since its reduced form, Mn2+, is almost colorless. Other titrants require a separate indicator. The first such indicator, diphenylamine, was introduced in the 1920s. Other redox indicators soon followed, increasing the applicability of redox titrimetry.To evaluate a redox titration we need to know the shape of its titration curve. In an acid–base titration or a complexation titration, the titration curve shows how the concentration of H3O+ (as pH) or Mn+ (as pM) changes as we add titrant. For a redox titration it is convenient to monitor the titration reaction’s potential instead of the concentration of one species.You may recall from Chapter 6 that the Nernst equation relates a solution’s potential to the concentrations of reactants and products that participate in the redox reaction. Consider, for example, a titration in which a titrand in a reduced state, Ared, reacts with a titrant in an oxidized state, Box.\[A_{red} + B_{ox} \rightleftharpoons B_{red} + A_{ox} \nonumber\]where Aox is the titrand’s oxidized form, Bred is the titrant’s reduced form, and the stoichiometry between the two is 1:1. The reaction’s potential, Erxn, is the difference between the reduction potentials for each half-reaction.\[E_{rxn} = E_{B_{ox}/B_{red}} - E_{A_{ox}/A_{red}} \nonumber\]After each addition of titrant the reaction between the titrand and the titrant reaches a state of equilibrium. Because the potential at equilibrium is zero, the titrand’s and the titrant’s reduction potentials are identical.\[E_{B_{ox}/B_{red}} = E_{A_{ox}/A_{red}} \nonumber\]This is an important observation as it allows us to use either half-reaction to monitor the titration’s progress.Before the equivalence point the titration mixture consists of appreciable quantities of the titrand’s oxidized and reduced forms. The concentration of unreacted titrant, however, is very small. 
The potential, therefore, is easier to calculate if we use the Nernst equation for the titrand’s half-reaction\[E_{rxn} = E_{A_{ox}/A_{red}}^{\circ} - \frac{RT}{nF}\ln{\frac{[A_{red}]}{[A_{ox}]}} \nonumber\]After the equivalence point it is easier to calculate the potential using the Nernst equation for the titrant’s half-reaction.\[E_{rxn} = E_{B_{ox}/B_{red}}^{\circ} - \frac{RT}{nF}\ln{\frac{[B_{red}]}{[B_{ox}]}} \nonumber\]Although the Nernst equation is written in terms of the half-reaction’s standard state potential, a matrix-dependent formal potential often is used in its place. See Appendix 13 for the standard state potentials and formal potentials for selected half-reactions.Let’s calculate the titration curve for the titration of 50.0 mL of 0.100 M Fe2+ with 0.100 M Ce4+ in a matrix of 1 M HClO4. The reaction in this case is\[\text{Fe}^{2+}(aq) + \text{Ce}^{4+}(aq) \rightleftharpoons \text{Ce}^{3+}(aq) + \text{Fe}^{3+}(aq) \label{9.1}\]Because the equilibrium constant for reaction \ref{9.1} is very large—it is approximately \(6 \times 10^{15}\)—we may assume that the analyte and titrant react completely.In 1 M HClO4, the formal potential for the reduction of Fe3+ to Fe2+ is +0.767 V, and the formal potential for the reduction of Ce4+ to Ce3+ is +1.70 V.The first task is to calculate the volume of Ce needed to reach the titration’s equivalence point. From the reaction’s stoichiometry we know that\[\text{mol Fe}^{2+} = M_\text{Fe}V_\text{Fe} = M_\text{Ce}V_\text{Ce} = \text{mol Ce}^{4+} \nonumber\]Solving for the volume of Ce4+ gives the equivalence point volume as\[V_{eq} = V_\text{Ce} = \frac{M_\text{Fe}V_\text{Fe}}{M_\text{Ce}} = \frac{(0.100 \text{ M})(50.0 \text{ mL})}{(0.100 \text{ M})} = 50.0 \text{ mL} \nonumber\]Before the equivalence point, the concentration of unreacted Fe2+ and the concentration of Fe3+ are easy to calculate. For this reason we find the potential using the Nernst equation for the Fe3+/Fe2+ half-reaction.\[E = +0.767 \text{ V} - 0.05916 \log{\frac{[\text{Fe}^{2+}]}{[\text{Fe}^{3+}]}} \label{9.2}\]For example, the concentrations of Fe2+ and Fe3+ after adding 10.0 mL of titrant are\[[\text{Fe}^{2+}] = \frac{(\text{mol Fe}^{2+})_\text{initial} - (\text{mol Ce}^{4+})_\text{added}}{\text{total volume}} = \frac{M_\text{Fe}V_\text{Fe} - M_\text{Ce}V_\text{Ce}}{V_\text{Fe} + V_\text{Ce}} \nonumber\]\[[\text{Fe}^{2+}] = \frac{(0.100 \text{ M})(50.0 \text{ mL}) - (0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 6.67 \times 10^{-2} \text{ M} \nonumber\]\[[\text{Fe}^{3+}] = \frac{(\text{mol Ce}^{4+})_\text{added}}{\text{total volume}} = \frac{M_\text{Ce}V_\text{Ce}}{V_\text{Fe} + V_\text{Ce}} \nonumber\]\[[\text{Fe}^{3+}] = \frac{(0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 1.67 \times 10^{-2} \text{ M} \nonumber\]Substituting these concentrations into Equation \ref{9.2} gives the potential as\[E = +0.767 \text{ V} - 0.05916 \log{\frac{6.67 \times 10^{-2}}{1.67 \times 10^{-2}}} = +0.731 \text{ V} \nonumber\]After the equivalence point, the concentration of Ce3+ and the concentration of excess Ce4+ are easy to calculate. 
For this reason we find the potential using the Nernst equation for the Ce4+/Ce3+ half-reaction in a manner similar to that used above to calculate potentials before the equivalence point.\[E = +1.70 \text{ V} - 0.05916 \log{\frac{[\text{Ce}^{3+}]}{[\text{Ce}^{4+}]}} \label{9.3}\]For example, after adding 60.0 mL of titrant, the concentrations of Ce3+ and Ce4+ are\[[\text{Ce}^{3+}] = \frac{(\text{mol Fe}^{2+})_\text{initial}}{\text{total volume}} = \frac{M_\text{Fe}V_\text{Fe}}{V_\text{Fe}+V_\text{Ce}} \nonumber\]\[[\text{Ce}^{3+}] = \frac{(0.100 \text{ M})(50.0 \text{ mL})}{50.0 \text{ mL} + 60.0 \text{ mL}} = 4.55 \times 10^{-2} \text{ M} \nonumber\]\[[\text{Ce}^{4+}] = \frac{(\text{mol Ce}^{4+})_\text{added}-(\text{mol Fe}^{2+})_\text{initial}}{\text{total volume}} = \frac{M_\text{Ce}V_\text{Ce}-M_\text{Fe}V_\text{Fe}}{V_\text{Fe}+V_\text{Ce}} \nonumber\]\[[\text{Ce}^{4+}] = \frac{(0.100 \text{ M})(60.0 \text{ mL})-(0.100 \text{ M})(50.0 \text{ mL})}{50.0 \text{ mL} + 60.0 \text{ mL}} = 9.09 \times 10^{-3} \text{ M} \nonumber\]Substituting these concentrations into Equation \ref{9.3} gives a potential of\[E = +1.70 \text{ V} - 0.05916 \log{\frac{4.55 \times 10^{-2} \text{ M}}{9.09 \times 10^{-3} \text{ M}}} = +1.66 \text{ V} \nonumber\]At the titration’s equivalence point the potential, Eeq, in Equation \ref{9.2} is identical to that in Equation \ref{9.3}. Adding the two equations together gives\[2E_{eq} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} - 0.05916 \log{\frac{[\text{Fe}^{2+}][\text{Ce}^{3+}]}{[\text{Fe}^{3+}][\text{Ce}^{4+}]}} \nonumber\]Because [Fe2+] = [Ce4+] and [Ce3+] = [Fe3+] at the equivalence point, the log term has a value of zero and the equivalence point’s potential is\[E_{eq} = \frac{E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ}}{2} = \frac{0.767 \text{ V} + 1.70 \text{ V}}{2} = +1.23 \text{ V} \nonumber\]Additional results for this titration curve are shown in Table 9.4.1 and Figure 9.4.1. Calculate the titration curve for the titration of 50.0 mL of 0.0500 M Sn2+ with 0.100 M Tl3+. Both the titrand and the titrant are 1.0 M in HCl. The titration reaction is\[\text{Sn}^{2+}(aq) + \text{Tl}^{3+}(aq) \rightleftharpoons \text{Tl}^+(aq) + \text{Sn}^{4+}(aq) \nonumber\]The volume of Tl3+ needed to reach the equivalence point is\[V_{eq} = V_\text{Tl} = \frac{M_\text{Sn}V_\text{Sn}}{M_\text{Tl}} = \frac{(0.050 \text{ M})(50.0 \text{ mL})}{(0.100 \text{ M})} = 25.0 \text{ mL} \nonumber\]Before the equivalence point, the concentration of unreacted Sn2+ and the concentration of Sn4+ are easy to calculate. For this reason we find the potential using the Nernst equation for the Sn4+/Sn2+ half-reaction. For example, the concentrations of Sn2+ and Sn4+ after adding 10.0 mL of titrant are\[[\text{Sn}^{2+}] = \frac{(0.050 \text{ M})(50.0 \text{ mL}) - (0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 0.0250 \text{ M} \nonumber\]\[[\text{Sn}^{4+}] = \frac{(0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 0.0167 \text{ M} \nonumber\]and the potential is\[E = +0.139 \text{ V} - \frac{0.05916}{2} \log{\frac{0.0250 \text{ M}}{0.0167 \text{ M}}} = +0.134 \text{ V} \nonumber\]After the equivalence point, the concentration of Tl+ and the concentration of excess Tl3+ are easy to calculate. For this reason we find the potential using the Nernst equation for the Tl3+/Tl+ half-reaction.
For example, after adding 40.0 mL of titrant, the concentrations of Tl+ and Tl3+ are\[[\text{Tl}^{+}] = \frac{(0.050 \text{ M})(50.0 \text{ mL})}{50.0 \text{ mL} + 40.0 \text{ mL}} = 0.0278 \text{ M} \nonumber\]\[[\text{Tl}^{3+}] = \frac{(0.100 \text{ M})(40.0 \text{ mL}) - (0.050 \text{ M})(50.0 \text{ mL})}{50.0 \text{ mL} + 40.0 \text{ mL}} = 0.0167 \text{ M} \nonumber\]and the potential is\[E = +0.77 \text{ V} - \frac{0.05916}{2} \log{\frac{0.0278 \text{ M}}{0.0167 \text{ M}}} = +0.76 \text{ V} \nonumber\]At the titration’s equivalence point, the potential, Eeq, is\[E_{eq} = \frac{0.139 \text{ V} + 0.77 \text{ V}}{2} = +0.45 \text{ V} \nonumber\]Some additional results are shown here. To evaluate the relationship between a titration’s equivalence point and its end point we need to construct only a reasonable approximation of the exact titration curve. In this section we demonstrate a simple method for sketching a redox titration curve. Our goal is to sketch the titration curve quickly, using as few calculations as possible. Let’s use the titration of 50.0 mL of 0.100 M Fe2+ with 0.100 M Ce4+ in a matrix of 1 M HClO4. This is the same example that we used in developing the calculations for a redox titration curve. You can review the results of that calculation in Table 9.4.1 and Figure 9.4.1. We begin by calculating the titration’s equivalence point volume, which, as we determined earlier, is 50.0 mL. Next, we draw our axes, placing the potential, E, on the y-axis and the titrant’s volume on the x-axis. To indicate the equivalence point’s volume, we draw a vertical line that intersects the x-axis at 50.0 mL of Ce4+. Figure 9.4.2 a shows the result of the first step in our sketch. Before the equivalence point, the potential is determined by a redox buffer of Fe2+ and Fe3+. Although we can calculate the potential using the Nernst equation, we can avoid this calculation if we make a simple assumption. You may recall from Chapter 6 that a redox buffer operates over a range of potentials that extends approximately ±(0.05916/n) unit on either side of \(E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ}\). The potential at the buffer’s lower limit is\[E = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} - 0.05916 \nonumber\]when the concentration of Fe2+ is \(10 \times\) greater than that of Fe3+. The buffer reaches its upper potential of\[E = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 0.05916 \nonumber\]when the concentration of Fe2+ is \(10 \times\) smaller than that of Fe3+. The redox buffer spans a range of volumes from approximately 10% of the equivalence point volume to approximately 90% of the equivalence point volume. Figure 9.4.2 b shows the second step in our sketch. First, we superimpose a ladder diagram for Fe on the y-axis, using its \(E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ}\) value of 0.767 V and including the buffer’s range of potentials. Next, we add points for the potential at 10% of Veq (a potential of 0.708 V at 5.0 mL) and for the potential at 90% of Veq (a potential of 0.826 V at 45.0 mL). We used a similar approach when sketching the acid–base titration curve for the titration of acetic acid with NaOH; see Chapter 9.2 for details. The third step in sketching our titration curve is to add two points after the equivalence point. Here the potential is controlled by a redox buffer of Ce3+ and Ce4+.
The redox buffer is at its lower limit of\[E = E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} - 0.05916 \nonumber\]when the titrant reaches 110% of the equivalence point volume and the potential is \(E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ}\) when the volume of Ce is \(2 \times V_{eq}\). We used a similar approach when sketching the complexation titration curve for the titration of Mg2+ with EDTA; see Chapter 9.3 for details. Figure 9.4.2 c shows the third step in our sketch. First, we superimpose a ladder diagram for Ce on the y-axis, using its \(E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ}\) value of 1.70 V and including the buffer’s range. Next, we add points representing the potential at 110% of Veq (a value of 1.64 V at 55.0 mL) and at 200% of Veq (a value of 1.70 V at 100.0 mL). Next, we draw a straight line through each pair of points, extending the line through the vertical line that indicates the equivalence point’s volume (Figure 9.4.2 d). Finally, we complete our sketch by drawing a smooth curve that connects the three straight-line segments (Figure 9.4.2 e). A comparison of our sketch to the exact titration curve (Figure 9.4.2 f) shows that they are in close agreement. Sketch the titration curve for the titration of 50.0 mL of 0.0500 M Sn2+ with 0.100 M Tl3+. Both the titrand and the titrant are 1.0 M in HCl. The titration reaction is\[\text{Sn}^{2+}(aq) + \text{Tl}^{3+}(aq) \rightleftharpoons \text{Tl}^{+}(aq) + \text{Sn}^{4+}(aq) \nonumber\]Compare your sketch to your calculated titration curve from Exercise 9.4.1. The figure below shows a sketch of the titration curve. The two points before the equivalence point (VTl = 2.5 mL, E = +0.109 V and VTl = 22.5 mL, E = +0.169 V) are plotted using the redox buffer for Sn4+/Sn2+, which spans a potential range of +0.139 ± 0.05916/2. The two points after the equivalence point (VTl = 27.5 mL, E = +0.74 V and VTl = 50 mL, E = +0.77 V) are plotted using the redox buffer for Tl3+/Tl+, which spans a potential range of +0.77 ± 0.05916/2. The black dots and curve are the approximate sketch of the titration curve. The points in red are the calculations from Exercise 9.4.1. A redox titration’s equivalence point occurs when we react stoichiometrically equivalent amounts of titrand and titrant. As is the case for acid–base titrations and complexation titrations, we estimate the equivalence point of a redox titration using an experimental end point. A variety of methods are available for locating a redox titration’s end point, including indicators and sensors that respond to a change in the solution conditions. For an acid–base titration or a complexometric titration the equivalence point is almost identical to the inflection point on the steeply rising part of the titration curve.
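Before turning to titrations whose stoichiometry is not 1:1, the link between the equivalence point and the inflection point is easy to check numerically for the symmetric Fe2+/Ce4+ titration calculated earlier. The Python sketch below simply reuses the two Nernst expressions and formal potentials from that example, evaluates the curve on a volume grid, and reports where the slope is steepest; the grid spacing, the tolerance near the equivalence point, and the finite-difference slope are choices made only for this illustration.

```python
import numpy as np

# Titration of 50.0 mL of 0.100 M Fe2+ with 0.100 M Ce4+ in 1 M HClO4,
# using the formal potentials from the worked example in the text.
E_Fe, E_Ce = 0.767, 1.70            # V
M_Fe, V_Fe, M_Ce = 0.100, 50.0, 0.100
V_eq = M_Fe * V_Fe / M_Ce           # 50.0 mL

def potential(V_Ce):
    """Solution potential (V) after adding V_Ce mL of Ce4+, assuming complete reaction."""
    mol_Fe = M_Fe * V_Fe            # mmol Fe2+ taken
    mol_Ce = M_Ce * V_Ce            # mmol Ce4+ added
    if abs(mol_Ce - mol_Fe) < 1e-9:                                 # at the equivalence point
        return (E_Fe + E_Ce) / 2
    if mol_Ce < mol_Fe:                                             # before: Fe3+/Fe2+ controls E
        return E_Fe - 0.05916 * np.log10((mol_Fe - mol_Ce) / mol_Ce)
    return E_Ce - 0.05916 * np.log10(mol_Fe / (mol_Ce - mol_Fe))    # after: Ce4+/Ce3+ controls E

volumes = np.linspace(0.1, 99.9, 999)         # 0.1-mL grid
E = np.array([potential(v) for v in volumes])
slope = np.gradient(E, volumes)
print(f"E at 10.0 mL = {potential(10.0):.3f} V")                    # 0.731 V, as in the text
print(f"V_eq = {V_eq:.1f} mL; steepest rise near {volumes[np.argmax(slope)]:.1f} mL")
```

For this 1:1 titration the steepest rise falls at the equivalence point volume, which is why the inflection point is a reliable estimate of the equivalence point here.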
For a redox titration with a 1:1 stoichiometry between the titrand and the titrant, the equivalence point also falls at this inflection point. If the stoichiometry is not 1:1, however, the equivalence point lies closer to the top or to the bottom of the titration curve’s sharp rise, and the equivalence point is asymmetric. As an example, let’s derive a general equation for the potential at the equivalence point for the titration of Fe2+ with \(\text{MnO}_4^-\).\[5\text{Fe}^{2+}(aq) + \text{MnO}_4^-(aq) + 8\text{H}^+(aq) \rightarrow 5\text{Fe}^{3+}(aq) + \text{Mn}^{2+}(aq) + 4\text{H}_2\text{O}(l) \nonumber\]Solution The half-reactions for the oxidation of Fe2+ and the reduction of \(\text{MnO}_4^-\) are\[\text{Fe}^{2+}(aq) \rightarrow \text{Fe}^{3+}(aq) + e^- \nonumber\]\[\text{MnO}_4^-(aq) + 8\text{H}^+(aq) + 5 e^- \rightarrow \text{Mn}^{2+}(aq) + 4\text{H}_2\text{O}(l) \nonumber\]for which the Nernst equations are\[E = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} - 0.05916 \log{\frac{[\text{Fe}^{2+}]}{[\text{Fe}^{3+}]}} \nonumber\]\[E = E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ} - \frac{0.05916}{5} \log{\frac{[\text{Mn}^{2+}]}{[\text{MnO}_4^{-}][\text{H}^+]^8}} \nonumber\]Before we add together these two equations we must multiply the second equation by 5 so that we can combine the log terms; thus\[6E_{eq} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 5E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ} - 0.05916 \log{\frac{[\text{Fe}^{2+}][\text{Mn}^{2+}]}{[\text{Fe}^{3+}][\text{MnO}_4^{-}][\text{H}^+]^8}} \nonumber\]At the equivalence point we know that\[[\text{Fe}^{2+}] = 5 \times [\text{MnO}_4^-] \text{ and } [\text{Fe}^{3+}] = 5 \times [\text{Mn}^{2+}] \nonumber\]Substituting these equalities into the previous equation and rearranging gives us a general equation for the potential at the equivalence point.\[6E_{eq} = E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 5E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ} - 0.05916 \log{\frac{5[\text{MnO}_4^{-}][\text{Mn}^{2+}]}{5[\text{Mn}^{2+}][\text{MnO}_4^{-}][\text{H}^+]^8}} \nonumber\]\[E_{eq} = \frac{E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 5E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ}}{6} - \frac{0.05916}{6} \log{\frac{1}{[\text{H}^+]^8}} \nonumber\]\[E_{eq} = \frac{E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 5E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ}}{6} + \frac{0.05916 \times 8}{6} \log{[\text{H}^+]} \nonumber\]\[E_{eq} = \frac{E_{\text{Fe}^{3+}/\text{Fe}^{2+}}^{\circ} + 5E_{\text{MnO}_4^{-}/\text{Mn}^{2+}}^{\circ}}{6} - 0.07888 \text{ pH} \nonumber\]Our equation for the equivalence point has two terms. The first term is a weighted average of the titrand’s and the titrant’s standard state potentials, in which the weighting factors are the number of electrons in their respective half-reactions. The second term shows that Eeq for this titration is pH-dependent. At a pH of 1 (in H2SO4), for example, the equivalence point has a potential of\[E_{eq} = \frac{0.768 + 5 \times 1.51}{6} - 0.07888 \times 1 = 1.31 \text{ V} \nonumber\]Figure 9.4.3 shows a typical titration curve for the titration of Fe2+ with \(\text{MnO}_4^-\). Note that the titration’s equivalence point is asymmetrical. Derive a general equation for the equivalence point’s potential for the titration of U4+ with Ce4+.
The unbalanced reaction is\[\text{Ce}^{4+}(aq) + \text{U}^{4+}(aq) \rightarrow \text{UO}_2^{2+}(aq) + \text{Ce}^{3+}(aq) \nonumber\]What is the equivalence point’s potential if the pH is 1? The two half-reactions are\[\text{Ce}^{4+}(aq) + e^- \rightarrow \text{Ce}^{3+}(aq) \nonumber\]\[\text{U}^{4+}(aq) + 2\text{H}_2\text{O}(l) \rightarrow \text{UO}_2^{2+}(aq) + 4\text{H}^+(aq) + 2e^- \nonumber\]for which the Nernst equations are\[E = E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} - 0.05916 \log{\frac{[\text{Ce}^{3+}]}{[\text{Ce}^{4+}]}} \nonumber\]\[E = E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ} - \frac{0.05916}{2} \log{\frac{[\text{U}^{4+}]}{[\text{UO}_2^{2+}][\text{H}^+]^4}} \nonumber\]Before adding these two equations together we must multiply the second equation by 2 so that we can combine the log terms; thus\[3E = E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} + 2E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ} - 0.05916 \log{\frac{[\text{Ce}^{3+}][\text{U}^{4+}]}{[\text{Ce}^{4+}][\text{UO}_2^{2+}][\text{H}^+]^4}} \nonumber\]At the equivalence point we know that\[[\text{Ce}^{3+}] = 2 \times [\text{UO}_2^{2+}] \text{ and } [\text{Ce}^{4+}] = 2 \times [\text{U}^{4+}] \nonumber\]Substituting these equalities into the previous equation and rearranging gives us a general equation for the potential at the equivalence point.\[3E = E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} + 2E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ} - 0.05916 \log{\frac{2[\text{UO}_2^{2+}][\text{U}^{4+}]}{2[\text{U}^{4+}][\text{UO}_2^{2+}][\text{H}^+]^4}} \nonumber\]\[E = \frac{E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} + 2E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ}}{3} - \frac{0.05916}{3} \log{\frac{1}{[\text{H}^+]^4}} \nonumber\]\[E = \frac{E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} + 2E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ}}{3} + \frac{0.05916 \times 4}{3} \log{[\text{H}^+]} \nonumber\]\[E = \frac{E_{\text{Ce}^{4+}/\text{Ce}^{3+}}^{\circ} + 2E_{\text{UO}_2^{2+}/\text{U}^{4+}}^{\circ}}{3} - 0.07888 \text{ pH} \nonumber\]At a pH of 1 the equivalence point has a potential of\[E = \frac{1.72 + 2 \times 0.327}{3} - 0.07888 \times 1 = +0.712 \text{ V} \nonumber\]Three types of indicators are used to signal a redox titration’s end point. The oxidized and reduced forms of some titrants, such as \(\text{MnO}_4^-\), have different colors. A solution of \(\text{MnO}_4^-\) is intensely purple. In an acidic solution, however, permanganate’s reduced form, Mn2+, is nearly colorless. When using \(\text{MnO}_4^-\) as a titrant, the titrand’s solution remains colorless until the equivalence point. The first drop of excess \(\text{MnO}_4^-\) produces a permanent tinge of purple, signaling the end point. Some indicators form a colored compound with a specific oxidized or reduced form of the titrant or the titrand. Starch, for example, forms a dark purple complex with \(\text{I}_3^-\). We can use this distinct color to signal the presence of excess \(\text{I}_3^-\) as a titrant—a change in color from colorless to purple—or the completion of a reaction that consumes \(\text{I}_3^-\) as the titrand—a change in color from purple to colorless. Another example of a specific indicator is thiocyanate, SCN–, which forms the soluble red-colored complex of Fe(SCN)2+ in the presence of Fe3+. The most important class of indicators consists of substances that do not participate in the redox titration, but whose oxidized and reduced forms differ in color. When we add a redox indicator to the titrand, the indicator imparts a color that depends on the solution’s potential.
As the solution’s potential changes with the addition of titrant, the indicator eventually changes oxidation state and changes color, signaling the end point. To understand the relationship between potential and an indicator’s color, consider its reduction half-reaction\[\text{In}_\text{ox} + ne^- \rightleftharpoons \text{In}_\text{red} \nonumber\]where Inox and Inred are, respectively, the indicator’s oxidized and reduced forms. For simplicity, Inox and Inred are shown without specific charges. Because there is a change in oxidation state, Inox and Inred cannot both be neutral. The Nernst equation for this half-reaction is\[E = E_{\text{In}_\text{ox}/\text{In}_\text{red}}^{\circ} - \frac{0.05916}{n} \log{\frac{[\text{In}_\text{red}]}{[\text{In}_\text{ox}]}} \nonumber\]As shown in Figure 9.4.4 , if we assume the indicator’s color changes from that of Inox to that of Inred when the ratio [Inred]/[Inox] changes from 0.1 to 10, then the end point occurs when the solution’s potential is within the range\[E = E_{\text{In}_\text{ox}/\text{In}_\text{red}}^{\circ} \pm \frac{0.05916}{n} \nonumber\]This is the same approach we took in considering acid–base indicators and complexation indicators. A partial list of redox indicators is shown in Table 9.4.2 . Examples of an appropriate and an inappropriate indicator for the titration of Fe2+ with Ce4+ are shown in Figure 9.4.5. Another method for locating a redox titration’s end point is a potentiometric titration in which we monitor the change in potential while we add the titrant to the titrand. The end point is found by visually examining the titration curve. The simplest experimental design for a potentiometric titration consists of a Pt indicator electrode whose potential is governed by the titrand’s or the titrant’s redox half-reaction, and a reference electrode that has a fixed potential. Other methods for locating the titration’s end point include thermometric titrations and spectrophotometric titrations. You will find a further discussion of potentiometry in Chapter 11. The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical redox titrimetric method. Although each method is unique, the following description of the determination of the total chlorine residual in water provides an instructive example of a typical procedure. The description here is based on Method 4500-Cl B as published in Standard Methods for the Examination of Water and Wastewater, 20th Ed., American Public Health Association: Washington, D. C., 1998. Description of the Method: The chlorination of a public water supply produces several chlorine-containing species, the combined concentration of which is called the total chlorine residual. Chlorine is present in a variety of chemical states, including the free residual chlorine, which consists of Cl2, HOCl and OCl–, and the combined chlorine residual, which consists of NH2Cl, NHCl2, and NCl3. The total chlorine residual is determined by using the oxidizing power of chlorine to convert I– to \(\text{I}_3^-\). The amount of \(\text{I}_3^-\) formed is then determined by titrating with Na2S2O3 using starch as an indicator. Regardless of its form, the total chlorine residual is reported as if Cl2 is the only source of chlorine, and is reported as mg Cl/L. Procedure: Select a volume of sample that requires less than 20 mL of Na2S2O3 to reach the end point. Using glacial acetic acid, acidify the sample to a pH between 3 and 4, and add about 1 gram of KI.
Titrate with Na2S2O3 until the yellow color of \(\text{I}_3^-\) begins to disappear. Add 1 mL of a starch indicator solution and continue titrating until the blue color of the starch–\(\text{I}_3^-\) complex disappears (Figure 9.4.6 ). Use a blank titration to correct the volume of titrant needed to reach the end point for reagent impurities. Questions 1. Is this an example of a direct or an indirect analysis? This is an indirect analysis because the chlorine-containing species do not react with the titrant. Instead, the total chlorine residual oxidizes I– to \(\text{I}_3^-\), and the amount of \(\text{I}_3^-\) is determined by titrating with Na2S2O3. 2. Why does the procedure rely on an indirect analysis instead of directly titrating the chlorine-containing species using KI as a titrant? Because the total chlorine residual consists of six different species, a titration with I– does not have a single, well-defined equivalence point. By converting the chlorine residual to an equivalent amount of \(\text{I}_3^-\), the indirect titration with Na2S2O3 has a single, useful equivalence point. Even if the total chlorine residual is from a single species, such as HOCl, a direct titration with KI is impractical. Because the product of the titration, \(\text{I}_3^-\), imparts a yellow color, the titrand’s color would change with each addition of titrant, making it difficult to find a suitable indicator. 3. Both oxidizing and reducing agents can interfere with this analysis. Explain the effect of each type of interferent on the total chlorine residual. An interferent that is an oxidizing agent converts additional I– to \(\text{I}_3^-\). Because this extra \(\text{I}_3^-\) requires an additional volume of Na2S2O3 to reach the end point, we overestimate the total chlorine residual. If the interferent is a reducing agent, it reduces back to I– some of the \(\text{I}_3^-\) produced by the reaction between the total chlorine residual and iodide; as a result, we underestimate the total chlorine residual. Although many quantitative applications of redox titrimetry have been replaced by other analytical methods, a few important applications continue to find relevance. In this section we review the general application of redox titrimetry with an emphasis on environmental, pharmaceutical, and industrial applications. We begin, however, with a brief discussion of selecting and characterizing redox titrants, and methods for controlling the titrand’s oxidation state. If a redox titration is to be used in a quantitative analysis, the titrand initially must be present in a single oxidation state. For example, iron is determined by a redox titration in which Ce4+ oxidizes Fe2+ to Fe3+. Depending on the sample and the method of sample preparation, iron initially may be present in both the +2 and +3 oxidation states. Before titrating, we must reduce any Fe3+ to Fe2+ if we want to determine the total concentration of iron in the sample. This type of pretreatment is accomplished using an auxiliary reducing agent or oxidizing agent. A metal that is easy to oxidize—such as Zn, Al, and Ag—can serve as an auxiliary reducing agent. The metal, as a coiled wire or powder, is added to the sample where it reduces the titrand. Because any unreacted auxiliary reducing agent will react with the titrant, it is removed before we begin the titration by removing the coiled wire or by filtering. An alternative method for using an auxiliary reducing agent is to immobilize it in a column.
To prepare a reduction column, an aqueous slurry of the finely divided metal is packed in a glass tube equipped with a porous plug at the bottom. The sample is placed at the top of the column and moves through the column under the influence of gravity or vacuum suction. The length of the reduction column and the flow rate are selected to ensure the analyte’s complete reduction. Two common reduction columns are used. In the Jones reductor the column is filled with amalgamated zinc, Zn(Hg), which is prepared by briefly placing Zn granules in a solution of HgCl2. Oxidation of zinc\[\text{Zn(Hg)}(s) \rightarrow \text{Zn}^{2+}(aq) + \text{Hg}(l) + 2e^- \nonumber\]provides the electrons for reducing the titrand. In the Walden reductor the column is filled with granular Ag metal. The solution containing the titrand is acidified with HCl and passed through the column where the oxidation of silver\[\text{Ag}(s) + \text{Cl}^- (aq) \rightarrow \text{AgCl}(s) + e^- \nonumber\]provides the necessary electrons for reducing the titrand. Table 9.4.3 provides a summary of several applications of reduction columns. Several reagents are used as auxiliary oxidizing agents, including ammonium peroxydisulfate, (NH4)2S2O8, and hydrogen peroxide, H2O2. Peroxydisulfate is a powerful oxidizing agent\[\text{S}_2\text{O}_8^{2-}(aq) + 2e^- \rightarrow 2\text{SO}_4^{2-}(aq) \nonumber\]that is capable of oxidizing Mn2+ to \(\text{MnO}_4^-\), Cr3+ to \(\text{Cr}_2\text{O}_7^{2-}\), and Ce3+ to Ce4+. Excess peroxydisulfate is destroyed by briefly boiling the solution. The reduction of hydrogen peroxide in an acidic solution\[\text{H}_2\text{O}_2(aq) + 2\text{H}^+(aq) + 2e^- \rightarrow 2\text{H}_2\text{O}(l) \nonumber\]provides another method for oxidizing a titrand. Excess H2O2 is destroyed by briefly boiling the solution. If it is to be used quantitatively, the titrant’s concentration must remain stable during the analysis. Because a titrant in a reduced state is susceptible to air oxidation, most redox titrations use an oxidizing agent as the titrant. There are several common oxidizing titrants, including \(\text{MnO}_4^-\), Ce4+, \(\text{Cr}_2\text{O}_7^{2-}\), and \(\text{I}_3^-\). Which titrant is used often depends on how easily it oxidizes the titrand. A titrand that is a weak reducing agent needs a strong oxidizing titrant if the titration reaction is to have a suitable end point. The two strongest oxidizing titrants are \(\text{MnO}_4^-\) and Ce4+, for which the reduction half-reactions are\[\text{MnO}_4^-(aq) + 8\text{H}^+(aq) + 5e^- \rightleftharpoons \text{Mn}^{2+}(aq) + 4\text{H}_2\text{O}(l) \nonumber\]\[\text{Ce}^{4+}(aq) + e^- \rightleftharpoons \text{Ce}^{3+}(aq) \nonumber\]A solution of Ce4+ in 1 M H2SO4 usually is prepared from the primary standard cerium ammonium nitrate, Ce(NO3)4•2NH4NO3. When prepared using a reagent grade material, such as Ce(OH)4, the solution is standardized against a primary standard reducing agent such as Na2C2O4 or Fe2+ (prepared from iron wire) using ferroin as an indicator.
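As a rough illustration of the standardization arithmetic, the short Python sketch below converts a hypothetical mass of Na2C2O4 and a hypothetical end point volume into the molarity of a Ce4+ titrant; the mass, the volume, and the variable names are invented for the example, and the 2:1 Ce4+-to-oxalate stoichiometry is the one given by the standardization reaction shown below.

```python
# Sketch of a Ce4+ standardization against the primary standard Na2C2O4.
# Each mole of oxalate gives up two electrons, so it reacts with two moles
# of Ce4+.  The mass and end point volume are hypothetical.

mass_Na2C2O4 = 0.2500            # g of primary standard weighed out (hypothetical)
fw_Na2C2O4 = 134.00              # g/mol
V_Ce = 35.00 / 1000              # L of Ce4+ to reach the ferroin end point (hypothetical)

mol_Ce = 2 * (mass_Na2C2O4 / fw_Na2C2O4)   # 2 mol Ce4+ per mol C2O4(2-)
print(f"[Ce4+] = {mol_Ce / V_Ce:.4f} M")   # about 0.107 M for these numbers
```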
Despite its availability as a primary standard and its ease of preparation, Ce4+ is not used as frequently as \(\text{MnO}_4^-\) because it is more expensive. The standardization reactions are\[\text{Ce}^{4+}(aq) + \text{Fe}^{2+}(aq) \rightarrow \text{Fe}^{3+}(aq) + \text{Ce}^{3+}(aq) \nonumber\]\[2\text{Ce}^{4+}(aq) + \text{H}_2\text{C}_2\text{O}_4(aq) \rightarrow 2\text{Ce}^{3+}(aq) + 2\text{CO}_2(g) + 2\text{H}^+(aq) \nonumber\]A solution of \(\text{MnO}_4^-\) is prepared from KMnO4, which is not available as a primary standard. An aqueous solution of permanganate is thermodynamically unstable due to its ability to oxidize water.\[4\text{MnO}_4^-(aq) + 2\text{H}_2\text{O}(l) \rightleftharpoons 4\text{MnO}_2(s) + 3\text{O}_2 (g) + 4\text{OH}^-(aq) \nonumber\]This reaction is catalyzed by the presence of MnO2, Mn2+, heat, light, acids, and bases. A moderately stable solution of permanganate is prepared by boiling it for an hour and filtering through a sintered glass filter to remove any solid MnO2 that precipitates. Standardization is accomplished against a primary standard reducing agent such as Na2C2O4 or Fe2+ (prepared from iron wire), with the pink color of excess \(\text{MnO}_4^-\) signaling the end point. A solution of \(\text{MnO}_4^-\) prepared in this fashion is stable for 1–2 weeks, although you should recheck the standardization periodically. The standardization reactions are\[\text{MnO}_4^-(aq) + 5\text{Fe}^{2+}(aq) + 8\text{H}^+(aq) \rightarrow \text{Mn}^{2+}(aq) + 5\text{Fe}^{3+}(aq) + 4\text{H}_2\text{O}(l) \nonumber\]\[2\text{MnO}_4^-(aq) + 5\text{H}_2\text{C}_2\text{O}_4(aq) + 6\text{H}^+(aq) \rightarrow 2\text{Mn}^{2+}(aq) + 10\text{CO}_2(g) + 8\text{H}_2\text{O}(l) \nonumber\]Potassium dichromate is a relatively strong oxidizing agent whose principal advantages are its availability as a primary standard and its long-term stability when in solution. It is not, however, as strong an oxidizing agent as \(\text{MnO}_4^-\) or Ce4+, which makes it less useful when the titrand is a weak reducing agent. Its reduction half-reaction is\[\text{Cr}_2\text{O}_7^{2-}(aq) + 14\text{H}^+(aq) + 6e^- \rightleftharpoons 2\text{Cr}^{3+}(aq) + 7\text{H}_2\text{O}(l) \nonumber\]Although a solution of \(\text{Cr}_2\text{O}_7^{2-}\) is orange and a solution of Cr3+ is green, neither color is intense enough to serve as a useful indicator. Diphenylamine sulfonic acid, whose oxidized form is red-violet and reduced form is colorless, gives a very distinct end point signal with \(\text{Cr}_2\text{O}_7^{2-}\). Iodine is another important oxidizing titrant. Because it is a weaker oxidizing agent than \(\text{MnO}_4^-\), Ce4+, and \(\text{Cr}_2\text{O}_7^{2-}\), it is useful only when the titrand is a stronger reducing agent. This apparent limitation, however, makes I2 a more selective titrant for the analysis of a strong reducing agent in the presence of a weaker reducing agent. The reduction half-reaction for I2 is\[\text{I}_2(aq) + 2e^- \rightleftharpoons 2\text{I}^-(aq) \nonumber\]Because iodine is not very soluble in water, solutions are prepared by adding an excess of I–. The complexation reaction\[\text{I}_2(aq) + \text{I}^-(aq) \rightleftharpoons \text{I}_3^-(aq) \nonumber\]increases the solubility of I2 by forming the more soluble triiodide ion, \(\text{I}_3^-\).
Even though iodine is present as \(\text{I}_3^-\) instead of I2, the number of electrons in the reduction half-reaction is unaffected.\[\text{I}_3^-(aq) + 2e^- \rightleftharpoons 3\text{I}^-(aq) \nonumber\]Solutions of \(\text{I}_3^-\) normally are standardized against Na2S2O3 using starch as a specific indicator for \(\text{I}_3^-\). The standardization reaction is\[\text{I}_3^-(aq) + 2\text{S}_2\text{O}_3^{2-}(aq) \rightarrow 3\text{I}^-(aq) + \text{S}_4\text{O}_6^{2-}(aq) \nonumber\]An oxidizing titrant such as \(\text{MnO}_4^-\), Ce4+, \(\text{Cr}_2\text{O}_7^{2-}\), or \(\text{I}_3^-\) is used when the titrand is in a reduced state. If the titrand is in an oxidized state, we can first reduce it with an auxiliary reducing agent and then complete the titration using an oxidizing titrant. Alternatively, we can titrate it using a reducing titrant. Iodide is a relatively strong reducing agent that could serve as a reducing titrant except that its solutions are susceptible to the air-oxidation of I– to \(\text{I}_3^-\).\[3\text{I}^-(aq) \rightleftharpoons \text{I}_3^- (aq) + 2e^- \nonumber\]A freshly prepared solution of KI is clear, but after a few days it may show a faint yellow coloring due to the presence of \(\text{I}_3^-\). Instead, adding an excess of KI reduces the titrand and releases a stoichiometric amount of \(\text{I}_3^-\). The amount of \(\text{I}_3^-\) produced is then determined by a back titration using thiosulfate, \(\text{S}_2\text{O}_3^{2-}\), as a reducing titrant.\[2\text{S}_2\text{O}_3^{2-}(aq) \rightleftharpoons \text{S}_4\text{O}_6^{2-}(aq) + 2e^- \nonumber\]Solutions of \(\text{S}_2\text{O}_3^{2-}\) are prepared using Na2S2O3•5H2O and are standardized before use. Standardization is accomplished by dissolving a carefully weighed portion of the primary standard KIO3 in an acidic solution that contains an excess of KI. The reaction between \(\text{IO}_3^-\) and I–\[\text{IO}_3^-(aq) + 8\text{I}^-(aq) + 6\text{H}^+(aq) \rightarrow 3\text{I}_3^-(aq) + 3\text{H}_2\text{O}(l) \nonumber\]liberates a stoichiometric amount of \(\text{I}_3^-\). By titrating this \(\text{I}_3^-\) with thiosulfate, using starch as a visual indicator, we can determine the concentration of \(\text{S}_2\text{O}_3^{2-}\) in the titrant. The standardization titration is\[\text{I}_3^-(aq) + 2\text{S}_2\text{O}_3^{2-}(aq) \rightarrow 3\text{I}^-(aq) + \text{S}_4\text{O}_6^{2-}(aq) \nonumber\]which is the same reaction used to standardize solutions of \(\text{I}_3^-\). This approach to standardizing solutions of \(\text{S}_2\text{O}_3^{2-}\) is similar to that used in the determination of the total chlorine residual outlined in Representative Method 9.4.1. Although thiosulfate is one of the few reducing titrants that is not readily oxidized by contact with air, it is subject to a slow decomposition to bisulfite and elemental sulfur. If used over a period of several weeks, a solution of thiosulfate is restandardized periodically. Several forms of bacteria are able to metabolize thiosulfate, which leads to a change in its concentration. This problem is minimized by adding a preservative such as HgI2 to the solution. Another useful reducing titrant is ferrous ammonium sulfate, Fe(NH4)2(SO4)2•6H2O, in which iron is present in the +2 oxidation state. A solution of Fe2+ is susceptible to air-oxidation, but when prepared in 0.5 M H2SO4 it remains stable for as long as a month. Periodic restandardization with K2Cr2O7 is advisable.
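The restandardization arithmetic is equally simple. The Python sketch below, using a hypothetical K2Cr2O7 concentration and hypothetical volumes, converts an end point volume into the molarity of an Fe2+ titrant through the 6:1 Fe2+-to-Cr2O7^2– stoichiometry that follows from a conservation of electrons; none of the numbers are taken from this section.

```python
# Sketch of restandardizing an Fe2+ titrant against K2Cr2O7: each Cr2O7(2-)
# accepts six electrons, so it consumes six Fe2+.  All values are hypothetical.

M_Cr2O7 = 0.01667                # M K2Cr2O7 standard
V_Cr2O7 = 25.00 / 1000           # L of the standard taken
V_Fe = 24.18 / 1000              # L of Fe2+ needed to reach the end point

mol_Fe = 6 * M_Cr2O7 * V_Cr2O7   # 6 mol Fe2+ per mol Cr2O7(2-)
print(f"[Fe2+] = {mol_Fe / V_Fe:.4f} M")   # about 0.103 M for these numbers
```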
Ferrous ammonium sulfate is used as the titrant in a direct analysis of the titrand, or it is added to the titrand in excess and the amount of unreacted Fe2+ is determined by back titrating with a standard solution of Ce4+ or \(\text{Cr}_2\text{O}_7^{2-}\). One of the most important applications of redox titrimetry is evaluating the chlorination of public water supplies. Representative Method 9.4.1, for example, describes an approach for determining the total chlorine residual using the oxidizing power of chlorine to oxidize I– to \(\text{I}_3^-\). The amount of \(\text{I}_3^-\) is determined by titrating with \(\text{S}_2\text{O}_3^{2-}\). The efficiency of chlorination depends on the form of the chlorinating species. There are two contributions to the total chlorine residual—the free chlorine residual and the combined chlorine residual. The free chlorine residual includes forms of chlorine that are available for disinfecting the water supply. Examples of species that contribute to the free chlorine residual include Cl2, HOCl and OCl–. The combined chlorine residual includes those species in which chlorine is in its reduced form and, therefore, no longer capable of providing disinfection. Species that contribute to the combined chlorine residual are NH2Cl, NHCl2 and NCl3. When a sample of iodide-free chlorinated water is mixed with an excess of the indicator N,N-diethyl-p-phenylenediamine (DPD), the free chlorine oxidizes a stoichiometric portion of DPD to its red-colored form. The oxidized DPD is then back-titrated to its colorless form using ferrous ammonium sulfate as the titrant. The volume of titrant is proportional to the free residual chlorine. Having determined the free chlorine residual in the water sample, a small amount of KI is added, which catalyzes the reduction of monochloramine, NH2Cl, and oxidizes a portion of the DPD back to its red-colored form. Titrating the oxidized DPD with ferrous ammonium sulfate yields the amount of NH2Cl in the sample. The amounts of dichloramine and trichloramine are determined in a similar fashion. The methods described above for determining the total, free, or combined chlorine residual also are used to establish a water supply’s chlorine demand. Chlorine demand is defined as the quantity of chlorine needed to react completely with any substance that can be oxidized by chlorine, while also maintaining the desired chlorine residual. It is determined by adding progressively greater amounts of chlorine to a set of samples drawn from the water supply and determining the total, free, or combined chlorine residual. Another important example of redox titrimetry, which finds applications in both public health and environmental analysis, is the determination of dissolved oxygen. In natural waters, such as lakes and rivers, the level of dissolved O2 is important for two reasons: it is the most readily available oxidant for the biological oxidation of inorganic and organic pollutants; and it is necessary for the support of aquatic life. In a wastewater treatment plant dissolved O2 is essential for the aerobic oxidation of waste materials. If the concentration of dissolved O2 falls below a critical value, aerobic bacteria are replaced by anaerobic bacteria, and the oxidation of organic waste produces undesirable gases, such as CH4 and H2S. One standard method for determining dissolved O2 in natural waters and wastewaters is the Winkler method.
A sample of water is collected without exposing it to the atmosphere, which might change the concentration of dissolved O2. The sample first is treated with a solution of MnSO4 and then with a solution of NaOH and KI. Under these alkaline conditions the dissolved oxygen oxidizes Mn2+ to MnO2.\[2\text{Mn}^{2+}(aq) + 4\text{OH}^-(aq) + \text{O}_2(g) \rightarrow 2\text{MnO}_2(s) + 2\text{H}_2\text{O}(l) \nonumber\]After the reaction is complete, the solution is acidified with H2SO4. Under the now acidic conditions, I– is oxidized to \(\text{I}_3^-\) by MnO2.\[\text{MnO}_2(s) + 3\text{I}^-(aq) + 4\text{H}^+(aq) \rightarrow \text{Mn}^{2+}(aq) + \text{I}_3^-(aq) + 2\text{H}_2\text{O}(l) \nonumber\]The amount of \(\text{I}_3^-\) that forms is determined by titrating with \(\text{S}_2\text{O}_3^{2-}\) using starch as an indicator. The Winkler method is subject to a variety of interferences and several modifications to the original procedure have been proposed. For example, \(\text{NO}_2^-\) interferes because it reduces \(\text{I}_3^-\) to I– under acidic conditions. This interference is eliminated by adding sodium azide, NaN3, which reduces \(\text{NO}_2^-\) to N2. Other reducing agents, such as Fe2+, are eliminated by pretreating the sample with KMnO4 and destroying any excess permanganate with K2C2O4.Another important example of redox titrimetry is the determination of water in nonaqueous solvents. The titrant for this analysis is known as the Karl Fischer reagent and consists of a mixture of iodine, sulfur dioxide, pyridine, and methanol. Because the concentration of pyridine is sufficiently large, I2 and SO2 react with pyridine (py) to form the complexes py•I2 and py•SO2. When added to a sample that contains water, I2 is reduced to I– and SO2 is oxidized to SO3.\[\text{py}\cdot\text{I}_2 + \text{py}\cdot\text{SO}_2 + \text{H}_2\text{O} + 2\text{py} \rightarrow 2\text{py}\cdot\text{HI} + \text{py}\cdot\text{SO}_3 \nonumber\]Methanol is included to prevent the further reaction of py•SO3 with water. The titration’s end point is signaled when the solution changes from the product’s yellow color to the brown color of the Karl Fischer reagent.Redox titrimetry also is used for the analysis of organic analytes. One important example is the determination of the chemical oxygen demand (COD) of natural waters and wastewaters. The COD is a measure of the quantity of oxygen necessary to oxidize completely all the organic matter in a sample to CO2 and H2O. Because no attempt is made to correct for organic matter that is decomposed biologically, or for slow decomposition kinetics, the COD always overestimates a sample’s true oxygen demand. The determination of COD is particularly important in the management of industrial wastewater treatment facilities where it is used to monitor the release of organic-rich wastes into municipal sewer systems or into the environment.A sample’s COD is determined by refluxing it in the presence of excess K2Cr2O7, which serves as the oxidizing agent. The solution is acidified with H2SO4, using Ag2SO4 to catalyze the oxidation of low molecular weight fatty acids. Mercuric sulfate, HgSO4, is added to complex any chloride that is present, which prevents the precipitation of the Ag+ catalyst as AgCl. Under these conditions, the efficiency for oxidizing organic matter is 95–100%. 
After refluxing for two hours, the solution is cooled to room temperature and the excess \(\text{Cr}_2\text{O}_7^{2-}\) is determined by a back titration using ferrous ammonium sulfate as the titrant and ferroin as the indicator. Because it is difficult to remove completely all traces of organic matter from the reagents, a blank titration is performed. The difference in the amount of ferrous ammonium sulfate needed to titrate the sample and the blank is proportional to the COD. Iodine has been used as an oxidizing titrant for a number of compounds of pharmaceutical interest. Earlier we noted that the reaction of \(\text{S}_2\text{O}_3^{2-}\) with \(\text{I}_3^-\) produces the tetrathionate ion, \(\text{S}_4\text{O}_6^{2-}\). The tetrathionate ion is actually a dimer that consists of two thiosulfate ions connected through a disulfide (–S–S–) linkage. In the same fashion, \(\text{I}_3^-\) is used to titrate mercaptans of the general formula RSH, forming the dimer RSSR as a product. The amino acid cysteine also can be titrated with \(\text{I}_3^-\). The product of this titration is cystine, which is a dimer of cysteine. Triiodide also is used for the analysis of ascorbic acid (vitamin C) by oxidizing the enediol functional group to an alpha diketone, and for the analysis of reducing sugars, such as glucose, by oxidizing the aldehyde functional group to a carboxylate ion in a basic solution. An organic compound that contains a hydroxyl, a carbonyl, or an amine functional group adjacent to a hydroxyl or a carbonyl group can be oxidized using metaperiodate, \(\text{IO}_4^-\), as an oxidizing titrant.\[\text{IO}_4^-(aq) + \text{H}_2\text{O}(l) + 2e^- \rightleftharpoons \text{IO}_3^-(aq) + 2\text{OH}^-(aq) \nonumber\]A two-electron oxidation cleaves the C–C bond between the two functional groups, with hydroxyl groups oxidized to aldehydes or ketones, carbonyl groups oxidized to carboxylic acids, and amines oxidized to an aldehyde and an amine (ammonia if a primary amine). The analysis is conducted by adding a known excess of \(\text{IO}_4^-\) to the solution that contains the analyte and allowing the oxidation to take place for approximately one hour at room temperature. When the oxidation is complete, an excess of KI is added, which converts any unreacted \(\text{IO}_4^-\) to \(\text{IO}_3^-\) and \(\text{I}_3^-\).\[\text{IO}_4^-(aq) + 3\text{I}^-(aq) + \text{H}_2\text{O}(l) \rightarrow \text{IO}_3^-(aq) + \text{I}_3^-(aq) + 2\text{OH}^-(aq) \nonumber\]The \(\text{I}_3^-\) is then determined by titrating with \(\text{S}_2\text{O}_3^{2-}\) using starch as an indicator. The quantitative relationship between the titrand and the titrant is determined by the stoichiometry of the titration reaction. If you are unsure of the balanced reaction, you can deduce its stoichiometry by remembering that the electrons in a redox reaction are conserved. The amount of Fe in a 0.4891-g sample of an ore is determined by titrating with K2Cr2O7. After dissolving the sample in HCl, the iron is brought into a +2 oxidation state using a Jones reductor. Titration to the diphenylamine sulfonic acid end point requires 36.92 mL of 0.02153 M K2Cr2O7. Report the ore’s iron content as %w/w Fe2O3. Solution Because we are not provided with the titration reaction, we will use a conservation of electrons to deduce the stoichiometry. During the titration the analyte is oxidized from Fe2+ to Fe3+, and the titrant is reduced from \(\text{Cr}_2\text{O}_7^{2-}\) to Cr3+. Oxidizing Fe2+ to Fe3+ requires a single electron.
Reducing \(\text{Cr}_2\text{O}_7^{2-}\), in which each chromium is in the +6 oxidation state, to Cr3+ requires three electrons per chromium, for a total of six electrons. A conservation of electrons for the titration, therefore, requires that each mole of K2Cr2O7 reacts with six moles of Fe2+. The moles of K2Cr2O7 used to reach the end point are\[(0.02153 \text{ M})(0.03692 \text{ L}) = 7.949 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7 \nonumber\]which means the sample contains\[7.949 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7 \times \frac{6 \text{ mol Fe}^{2+}}{\text{mol K}_2\text{Cr}_2\text{O}_7} = 4.769 \times 10^{-3} \text{ mol Fe}^{2+} \nonumber\]Thus, the %w/w Fe2O3 in the sample of ore is\[4.769 \times 10^{-3} \text{ mol Fe}^{2+} \times \frac{1 \text{ mol Fe}_2\text{O}_3}{2 \text{ mol Fe}^{2+}} \times \frac{159.69 \text{ g Fe}_2\text{O}_3}{\text{mol Fe}_2\text{O}_3} = 0.3808 \text{ g Fe}_2\text{O}_3 \nonumber\]\[\frac{0.3808 \text{ g Fe}_2\text{O}_3}{0.4891 \text{ g sample}} \times 100 = 77.86 \text{% w/w Fe}_2\text{O}_3 \nonumber\]Although we can deduce the stoichiometry between the titrant and the titrand in Example 9.4.2 without balancing the titration reaction, the balanced reaction\[\text{K}_2\text{Cr}_2\text{O}_7(aq) + 6\text{Fe}^{2+}(aq) + 14\text{H}^+(aq) \rightarrow 2\text{Cr}^{3+}(aq) + 2\text{K}^+(aq) + 6\text{Fe}^{3+}(aq) + 7\text{H}_2\text{O}(l) \nonumber\]does provide useful information. For example, the presence of H+ reminds us that the reaction must take place in an acidic solution. The purity of a sample of sodium oxalate, Na2C2O4, is determined by titrating with a standard solution of KMnO4. If a 0.5116-g sample requires 35.62 mL of 0.0400 M KMnO4 to reach the titration’s end point, what is the %w/w Na2C2O4 in the sample? Because we are not provided with a balanced reaction, let’s use a conservation of electrons to deduce the stoichiometry. Oxidizing \(\text{C}_2\text{O}_4^{2-}\), in which each carbon has a +3 oxidation state, to CO2, in which carbon has an oxidation state of +4, requires one electron per carbon or a total of two electrons for each mole of \(\text{C}_2\text{O}_4^{2-}\). Reducing \(\text{MnO}_4^-\), in which each manganese is in the +7 oxidation state, to Mn2+ requires five electrons. A conservation of electrons for the titration, therefore, requires that two moles of KMnO4 (10 moles of e-) react with five moles of Na2C2O4 (10 moles of e-). The moles of KMnO4 used to reach the end point are\[(0.0400 \text{ M KMnO}_4)(0.03562 \text{ L}) = 1.42 \times 10^{-3} \text{ mol KMnO}_4 \nonumber\]which means the sample contains\[1.42 \times 10^{-3} \text{ mol KMnO}_4 \times \frac{5 \text{ mol Na}_2\text{C}_2\text{O}_4}{2 \text{ mol KMnO}_4} = 3.55 \times 10^{-3} \text{ mol Na}_2\text{C}_2\text{O}_4 \nonumber\]Thus, the %w/w Na2C2O4 in the sample is\[3.55 \times 10^{-3} \text{ mol Na}_2\text{C}_2\text{O}_4 \times \frac{134.00 \text{ g Na}_2\text{C}_2\text{O}_4}{\text{mol Na}_2\text{C}_2\text{O}_4} = 0.476 \text{ g Na}_2\text{C}_2\text{O}_4 \nonumber\]\[\frac{0.476 \text{ g Na}_2\text{C}_2\text{O}_4}{0.5116 \text{ g sample}} \times 100 = 93.0 \text{% w/w Na}_2\text{C}_2\text{O}_4 \nonumber\]As shown in the following two examples, we can easily extend this approach to an analysis that requires an indirect analysis or a back titration. A 25.00-mL sample of a liquid bleach is diluted to 1000 mL in a volumetric flask.
A 25-mL portion of the diluted sample is transferred by pipet into an Erlenmeyer flask that contains an excess of KI, reducing the OCl– to Cl– and producing \(\text{I}_3^-\). The liberated \(\text{I}_3^-\) is determined by titrating with 0.09892 M Na2S2O3, requiring 8.96 mL to reach the starch indicator end point. Report the %w/v NaOCl in the sample of bleach. Solution To determine the stoichiometry between the analyte, NaOCl, and the titrant, Na2S2O3, we need to consider both the reaction between OCl– and I–, and the titration of \(\text{I}_3^-\) with Na2S2O3. First, in reducing OCl– to Cl– the oxidation state of chlorine changes from +1 to –1, requiring two electrons. The oxidation of three I– to form \(\text{I}_3^-\) releases two electrons as the oxidation state of each iodine changes from –1 in I– to –1⁄3 in \(\text{I}_3^-\). A conservation of electrons, therefore, requires that each mole of OCl– produces one mole of \(\text{I}_3^-\). Second, in the titration reaction, \(\text{I}_3^-\) is reduced to I– and \(\text{S}_2\text{O}_3^{2-}\) is oxidized to \(\text{S}_4\text{O}_6^{2-}\). Reducing \(\text{I}_3^-\) to 3I– requires two electrons as each iodine changes from an oxidation state of –1⁄3 to –1. In oxidizing \(\text{S}_2\text{O}_3^{2-}\) to \(\text{S}_4\text{O}_6^{2-}\), each sulfur changes its oxidation state from +2 to +2.5, releasing one electron for each \(\text{S}_2\text{O}_3^{2-}\). A conservation of electrons, therefore, requires that each mole of \(\text{I}_3^-\) reacts with two moles of \(\text{S}_2\text{O}_3^{2-}\). Finally, because each mole of OCl– produces one mole of \(\text{I}_3^-\), and each mole of \(\text{I}_3^-\) reacts with two moles of \(\text{S}_2\text{O}_3^{2-}\), we know that every mole of NaOCl in the sample ultimately results in the consumption of two moles of Na2S2O3. The moles of Na2S2O3 used to reach the titration’s end point are\[(0.09892 \text{ M})(0.00896 \text{ L}) = 8.86 \times 10^{-4} \text{ mol Na}_2\text{S}_2\text{O}_3 \nonumber\]which means the sample contains\[8.86 \times 10^{-4} \text{ mol Na}_2\text{S}_2\text{O}_3 \times \frac {1 \text{ mol NaOCl}}{2 \text{ mol Na}_2\text{S}_2\text{O}_3} \times \frac {74.44 \text{ g NaOCl}}{\text{mol NaOCl}} = 0.03299 \text{ g NaOCl} \nonumber\]Thus, the %w/v NaOCl in the diluted sample is\[\frac{0.03299 \text{ g NaOCl}}{25.00 \text{ mL}} \times 100 = 0.132 \text{% w/v NaOCl} \nonumber\]Because the bleach was diluted by a factor of \(40 \times\) (25 mL to 1000 mL), the concentration of NaOCl in the bleach is 5.28% w/v. The balanced reactions for this analysis are:\[\text{OCl}^-(aq) + 3\text{I}^-(aq) + 2\text{H}^+(aq) \rightarrow \text{I}_3^-(aq) + \text{Cl}^-(aq) + \text{H}_2\text{O}(l) \nonumber\]\[\text{I}_3^-(aq) + 2\text{S}_2\text{O}_3^{2-}(aq) \rightarrow \text{S}_4\text{O}_6^{2-}(aq) + 3\text{I}^-(aq) \nonumber\]The amount of ascorbic acid, C6H8O6, in orange juice is determined by oxidizing ascorbic acid to dehydroascorbic acid, C6H6O6, with a known amount of \(\text{I}_3^-\), and back titrating the excess \(\text{I}_3^-\) with Na2S2O3. A 5.00-mL sample of filtered orange juice is treated with 50.00 mL of 0.01023 M \(\text{I}_3^-\). After the oxidation is complete, 13.82 mL of 0.07203 M Na2S2O3 is needed to reach the starch indicator end point. Report the concentration of ascorbic acid in mg/100 mL. Solution For a back titration we need to determine the stoichiometry between \(\text{I}_3^-\) and the analyte, C6H8O6, and between \(\text{I}_3^-\) and the titrant, Na2S2O3.
The amount of ascorbic acid, C6H8O6, in orange juice is determined by oxidizing ascorbic acid to dehydroascorbic acid, C6H6O6, with a known amount of \(\text{I}_3^-\), and back titrating the excess \(\text{I}_3^-\) with Na2S2O3. A 5.00-mL sample of filtered orange juice is treated with 50.00 mL of 0.01023 M \(\text{I}_3^-\). After the oxidation is complete, 13.82 mL of 0.07203 M Na2S2O3 is needed to reach the starch indicator end point. Report the concentration of ascorbic acid in mg/100 mL.

Solution: For a back titration we need to determine the stoichiometry between \(\text{I}_3^-\) and the analyte, C6H8O6, and between \(\text{I}_3^-\) and the titrant, Na2S2O3. The latter is easy because we know from Example 9.4.3 that each mole of \(\text{I}_3^-\) reacts with two moles of Na2S2O3.

In oxidizing ascorbic acid to dehydroascorbic acid, the oxidation state of carbon changes from +2⁄3 in C6H8O6 to +1 in C6H6O6. Each carbon releases 1⁄3 of an electron, or a total of two electrons per ascorbic acid. As we learned in Example 9.4.3, reducing \(\text{I}_3^-\) requires two electrons; thus, a conservation of electrons requires that each mole of ascorbic acid consumes one mole of \(\text{I}_3^-\).

The total moles of \(\text{I}_3^-\) that react with C6H8O6 and with Na2S2O3 is\[(0.01023 \text{ M})(0.05000 \text{ L}) = 5.115 \times 10^{-4} \text{ mol I}_3^- \nonumber\]The back titration consumes\[0.01382 \text{ L Na}_2\text{S}_2\text{O}_3 \times \frac{0.07203 \text{ mol Na}_2\text{S}_2\text{O}_3}{\text{ L Na}_2\text{S}_2\text{O}_3} \times \frac{1 \text{ mol I}_3^-}{2 \text{ mol Na}_2\text{S}_2\text{O}_3} = 4.977 \times 10^{-4} \text{ mol I}_3^- \nonumber\]Subtracting the moles of \(\text{I}_3^-\) that react with Na2S2O3 from the total moles of \(\text{I}_3^-\) gives the moles reacting with ascorbic acid.\[5.115 \times 10^{-4} \text{ mol I}_3^- - 4.977 \times 10^{-4} \text{ mol I}_3^- = 1.38 \times 10^{-5} \text{ mol I}_3^- \nonumber\]The grams of ascorbic acid in the 5.00-mL sample of orange juice is\[1.38 \times 10^{-5} \text{ mol I}_3^- \times \frac{1 \text{ mol C}_6\text{H}_8\text{O}_6}{\text{mol I}_3^-} \times \frac{176.12 \text{ g C}_6\text{H}_8\text{O}_6}{\text{mol C}_6\text{H}_8\text{O}_6} = 2.43 \times 10^{-3} \text{ g C}_6\text{H}_8\text{O}_6 \nonumber\]There are 2.43 mg of ascorbic acid in the 5.00-mL sample, or 48.6 mg per 100 mL of orange juice.

The balanced reactions for this analysis are:\[\text{C}_6\text{H}_8\text{O}_6(aq) + \text{I}_3^- (aq) \rightarrow 3\text{I}^-(aq) + \text{C}_6\text{H}_6\text{O}_6(aq) + 2\text{H}^+(aq) \nonumber\]\[\text{I}_3^-(aq) + 2\text{S}_2\text{O}_3^{2-}(aq) \rightarrow \text{S}_4\text{O}_6^{2-}(aq) + 3\text{I}^-(aq) \nonumber\]
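The back-titration arithmetic is shown below as a minimal Python sketch that is not part of the original example; the variable names are ours, and because the sketch carries all digits its final value differs slightly from the rounded result above.

```python
# Minimal sketch of the ascorbic acid back titration above (illustrative only).
mol_I3_total  = 0.01023 * 0.05000             # total mol I3- added
mol_I3_excess = 0.07203 * 0.01382 / 2         # mol I3- found by the back titration
mol_ascorbic  = mol_I3_total - mol_I3_excess  # 1 mol C6H8O6 per mol I3-
mg_per_100mL  = mol_ascorbic * 176.12 * 1000 * (100 / 5.00)
print(round(mg_per_100mL, 1))  # ~48.5 mg/100 mL (48.6 with the text's intermediate rounding)
```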
A quantitative analysis for ethanol, C2H6O, is accomplished by a redox back titration. Ethanol is oxidized to acetic acid, C2H4O2, using excess dichromate, \(\text{Cr}_2\text{O}_7^{2-}\), which is reduced to Cr3+. The excess dichromate is titrated with Fe2+, giving Cr3+ and Fe3+ as products. In a typical analysis, a 5.00-mL sample of a brandy is diluted to 500 mL in a volumetric flask. A 10.00-mL sample is taken and the ethanol is removed by distillation and collected in 50.00 mL of an acidified solution of 0.0200 M K2Cr2O7. A back titration of the unreacted \(\text{Cr}_2\text{O}_7^{2-}\) requires 21.48 mL of 0.1014 M Fe2+. Calculate the %w/v ethanol in the brandy.

For a back titration we need to determine the stoichiometry between \(\text{Cr}_2\text{O}_7^{2-}\) and the analyte, C2H6O, and between \(\text{Cr}_2\text{O}_7^{2-}\) and the titrant, Fe2+. In oxidizing ethanol to acetic acid, the oxidation state of carbon changes from –2 in C2H6O to 0 in C2H4O2. Each carbon releases two electrons, or a total of four electrons per C2H6O. In reducing \(\text{Cr}_2\text{O}_7^{2-}\), in which each chromium has an oxidation state of +6, to Cr3+, each chromium gains three electrons, for a total of six electrons per \(\text{Cr}_2\text{O}_7^{2-}\). Oxidation of Fe2+ to Fe3+ releases one electron. A conservation of electrons requires that each mole of K2Cr2O7 (6 moles of e–) reacts with six moles of Fe2+ (6 moles of e–), and that four moles of K2Cr2O7 (24 moles of e–) react with six moles of C2H6O (24 moles of e–).

The total moles of K2Cr2O7 that react with C2H6O and with Fe2+ is\[(0.0200 \text{ M K}_2\text{Cr}_2\text{O}_7)(0.05000 \text{ L}) = 1.00 \times 10^{-3} \text{ mol K}_2\text{Cr}_2\text{O}_7 \nonumber\]The back titration with Fe2+ consumes\[(0.1014 \text{ M Fe}^{2+})(0.02148 \text{ L}) \times \frac{1 \text{ mol K}_2\text{Cr}_2\text{O}_7}{6 \text{ mol Fe}^{2+}} = 3.63 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7 \nonumber\]Subtracting the moles of K2Cr2O7 that react with Fe2+ from the total moles of K2Cr2O7 gives the moles that react with the analyte.\[(1.00 \times 10^{-3} \text{ mol K}_2\text{Cr}_2\text{O}_7) - (3.63 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7) = 6.37 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7 \nonumber\]The grams of ethanol in the 10.00-mL sample of diluted brandy is\[6.37 \times 10^{-4} \text{ mol K}_2\text{Cr}_2\text{O}_7 \times \frac{6 \text{ mol C}_2\text{H}_6\text{O}}{4 \text{ mol K}_2\text{Cr}_2\text{O}_7} \times \frac{46.07 \text{ g C}_2\text{H}_6\text{O}}{\text{mol C}_2\text{H}_6\text{O}} = 0.0440 \text{ g C}_2\text{H}_6\text{O} \nonumber\]The %w/v C2H6O in the brandy is\[\frac{0.0440 \text{ g C}_2\text{H}_6\text{O}}{10.0 \text{ mL diluted brandy}} \times \frac{500.0 \text{ mL diluted brandy}}{5.00 \text{ mL brandy}} \times 100 = 44.0 \text{% w/v C}_2\text{H}_6\text{O} \nonumber\]

The scale of operations, accuracy, precision, sensitivity, time, and cost of a redox titration are similar to those described earlier in this chapter for an acid–base or a complexation titration. As with an acid–base titration, we can extend a redox titration to the analysis of a mixture of analytes if there is a significant difference in their oxidation or reduction potentials. Figure 9.4.7 shows an example of the titration curve for a mixture of Fe2+ and Sn2+ using Ce4+ as the titrant. A titration of a mixture of analytes is possible if their standard state potentials or formal potentials differ by at least 200 mV.

This page titled 9.4: Redox Titrations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
9.5: Precipitation Titrations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/09%3A_Titrimetric_Methods/9.05%3A_Precipitation_Titrations
Thus far we have examined titrimetric methods based on acid–base, complexation, and oxidation–reduction reactions. A reaction in which the analyte and titrant form an insoluble precipitate also can serve as the basis for a titration. We call this type of titration a precipitation titration.One of the earliest precipitation titrations—developed at the end of the eighteenth century—was the analysis of K2CO3 and K2SO4 in potash. Calcium nitrate, Ca(NO3)2, was used as the titrant, which forms a precipitate of CaCO3 and CaSO4. The titration’s end point was signaled by noting when the addition of titrant ceased to generate additional precipitate. The importance of precipitation titrimetry as an analytical method reached its zenith in the nineteenth century when several methods were developed for determining Ag+ and halide ions.A precipitation titration curve follows the change in either the titrand’s or the titrant’s concentration as a function of the titrant’s volume. As we did for other titrations, we first show how to calculate the titration curve and then demonstrate how we can sketch a reasonable approximation of the titration curve.Let’s calculate the titration curve for the titration of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3. The reaction in this case is\[\text{Ag}^+(aq) + \text{Cl}^-(aq) \rightleftharpoons \text{AgCl}(s) \nonumber\]Because the reaction’s equilibrium constant is so large\[K = (K_\text{sp})^{-1} = (1.8 \times 10^{-10})^{-1} = 5.6 \times 10^9 \nonumber\]we may assume that Ag+ and Cl– react completely.By now you are familiar with our approach to calculating a titration curve. The first task is to calculate the volume of Ag+ needed to reach the equivalence point. The stoichiometry of the reaction requires that\[\text{mol Ag}^+ = M_\text{Ag}V_\text{Ag} = M_\text{Cl}V_\text{Cl} = \text{mol Cl}^- \nonumber\]Solving for the volume of Ag+\[V_{eq} = V_\text{Ag} = \frac{M_\text{Cl}V_\text{Cl}}{M_\text{Ag}} = \frac{(0.0500 \text{ M})(50.0 \text{ mL})}{0.100 \text{ M}} = 25.0 \text{ mL} \nonumber\]shows that we need 25.0 mL of Ag+ to reach the equivalence point.Before the equivalence point the titrand, Cl–, is in excess. The concentration of unreacted Cl– after we add 10.0 mL of Ag+, for example, is\[[\text{Cl}^-] = \frac{(\text{mol Cl}^-)_\text{initial} - (\text{mol Ag}^+)_\text{added}}{\text{total volume}} = \frac{M_\text{Cl}V_\text{Cl} - M_\text{Ag}V_\text{Ag}}{V_\text{Cl} + V_\text{Ag}} \nonumber\]\[[\text{Cl}^-] = \frac{(0.0500 \text{ M})(50.0 \text{ mL}) - (0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 2.50 \times 10^{-2} \text{ M} \nonumber\]which corresponds to a pCl of 1.60.At the titration’s equivalence point, we know that the concentrations of Ag+ and Cl– are equal. To calculate the concentration of Cl– we use the Ksp for AgCl; thus\[K_\text{sp} = [\text{Ag}^+][\text{Cl}^-] = (x)(x) = 1.8 \times 10^{-10} \nonumber\]Solving for x gives [Cl–] as \(1.3 \times 10^{-5}\) M, or a pCl of 4.89.After the equivalence point, the titrant is in excess. We first calculate the concentration of excess Ag+ and then use the Ksp expression to calculate the concentration of Cl–. 
For example, after adding 35.0 mL of titrant\[[\text{Ag}^+] = \frac{(\text{mol Ag}^+)_\text{added} - (\text{mol Cl}^-)_\text{initial}}{\text{total volume}} = \frac{M_\text{Ag}V_\text{Ag} - M_\text{Cl}V_\text{Cl}}{V_\text{Ag} + V_\text{Cl}} \nonumber\]\[[\text{Ag}^+] = \frac{(0.100 \text{ M})(35.0 \text{ mL}) - (0.0500 \text{ M})(50.0 \text{ mL})}{35.0 \text{ mL} + 50.0 \text{ mL}} = 1.18 \times 10^{-2} \text{ M} \nonumber\]\[[\text{Cl}^-] = \frac{K_\text{sp}}{[\text{Ag}^+]} = \frac{1.8 \times 10^{-10}}{1.18 \times 10^{-2}} = 1.5 \times 10^{-8} \text{ M} \nonumber\]or a pCl of 7.81. Additional results for the titration curve are shown in Table 9.5.1 and Figure 9.5.1. When calculating a precipitation titration curve, you can choose to follow the change in the titrant’s concentration or the change in the titrand’s concentration.

Calculate the titration curve for the titration of 50.0 mL of 0.0500 M AgNO3 with 0.100 M NaCl as pAg versus VNaCl, and as pCl versus VNaCl.

The first task is to calculate the volume of NaCl needed to reach the equivalence point; thus\[V_{eq} = V_\text{NaCl} = \frac{M_\text{Ag}V_\text{Ag}}{M_\text{NaCl}} = \frac{(0.0500 \text{ M})(50.0 \text{ mL})}{0.100 \text{ M}} = 25.0 \text{ mL} \nonumber\]Before the equivalence point the titrand, Ag+, is in excess. The concentration of unreacted Ag+ after adding 10.0 mL of NaCl, for example, is\[[\text{Ag}^+] = \frac{(0.0500 \text{ M})(50.0 \text{ mL}) - (0.100 \text{ M})(10.0 \text{ mL})}{50.0 \text{ mL} + 10.0 \text{ mL}} = 2.50 \times 10^{-2} \text{ M} \nonumber\]which corresponds to a pAg of 1.60. To find the concentration of Cl– we use the Ksp for AgCl; thus\[[\text{Cl}^-] = \frac{K_\text{sp}}{[\text{Ag}^+]} = \frac{1.8 \times 10^{-10}}{2.50 \times 10^{-2}} = 7.2 \times 10^{-9} \text{ M} \nonumber\]or a pCl of 8.14. At the titration’s equivalence point, we know that the concentrations of Ag+ and Cl– are equal. To calculate their concentrations we use the Ksp expression for AgCl; thus\[K_\text{sp} = [\text{Ag}^+][\text{Cl}^-] = (x)(x) = 1.8 \times 10^{-10} \nonumber\]Solving for x gives the concentration of Ag+ and the concentration of Cl– as \(1.3 \times 10^{-5}\) M, or a pAg and a pCl of 4.89. After the equivalence point, the titrant is in excess. For example, after adding 35.0 mL of titrant\[[\text{Cl}^-] = \frac{(0.100 \text{ M})(35.0 \text{ mL}) - (0.0500 \text{ M})(50.0 \text{ mL})}{35.0 \text{ mL} + 50.0 \text{ mL}} = 1.18 \times 10^{-2} \text{ M} \nonumber\]or a pCl of 1.93. To find the concentration of Ag+ we use the Ksp for AgCl; thus\[[\text{Ag}^+] = \frac{K_\text{sp}}{[\text{Cl}^-]} = \frac{1.8 \times 10^{-10}}{1.18 \times 10^{-2}} = 1.5 \times 10^{-8} \text{ M} \nonumber\]or a pAg of 7.82. The following table summarizes additional results for this titration.
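In place of tabulating values by hand, the same calculation is easy to automate. The following Python sketch is not part of the original text; the function name and variable names are ours, and because it does not round intermediate results, values near the equivalence point differ slightly from those in the worked example.

```python
# Minimal sketch of a precipitation titration curve for 50.0 mL of 0.0500 M
# NaCl titrated with 0.100 M AgNO3 (illustrative only).
from math import log10, sqrt

Ksp = 1.8e-10               # Ksp for AgCl
M_Cl, V_Cl = 0.0500, 50.0   # titrand (M, mL)
M_Ag = 0.100                # titrant (M)

def pCl_pAg(V_Ag):
    """Return (pCl, pAg) after adding V_Ag mL of AgNO3, neglecting the
    solubility of AgCl except at the equivalence point."""
    excess_Cl = (M_Cl * V_Cl - M_Ag * V_Ag) / (V_Cl + V_Ag)
    if abs(excess_Cl) < 1e-12:      # at the equivalence point
        Cl = sqrt(Ksp)
    elif excess_Cl > 0:             # before the equivalence point
        Cl = excess_Cl
    else:                           # after the equivalence point: excess Ag+
        Cl = Ksp / (-excess_Cl)
    return -log10(Cl), -log10(Ksp / Cl)

for V in (10.0, 20.0, 25.0, 30.0, 35.0):
    print(V, [round(p, 2) for p in pCl_pAg(V)])
```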
To evaluate the relationship between a titration’s equivalence point and its end point we need to construct only a reasonable approximation of the exact titration curve. In this section we demonstrate a simple method for sketching a precipitation titration curve. Our goal is to sketch the titration curve quickly, using as few calculations as possible. Let’s use the titration of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3. This is the same example that we used in developing the calculations for a precipitation titration curve. You can review the results of that calculation in Table 9.5.1 and Figure 9.5.1.

We begin by calculating the titration’s equivalence point volume, which, as we determined earlier, is 25.0 mL. Next we draw our axes, placing pCl on the y-axis and the titrant’s volume on the x-axis. To indicate the equivalence point’s volume, we draw a vertical line that intersects the x-axis at 25.0 mL of AgNO3. Figure 9.5.2a shows the result of this first step in our sketch.

Before the equivalence point, Cl– is present in excess and pCl is determined by the concentration of unreacted Cl–. As we learned earlier, the calculations are straightforward. Figure 9.5.2b shows pCl after adding 10.0 mL and 20.0 mL of AgNO3.

After the equivalence point, Ag+ is in excess and the concentration of Cl– is determined by the solubility of AgCl. Again, the calculations are straightforward. Figure 9.5.2c shows pCl after adding 30.0 mL and 40.0 mL of AgNO3.

Next, we draw a straight line through each pair of points, extending them through the vertical line that represents the equivalence point’s volume (Figure 9.5.2d). Finally, we complete our sketch by drawing a smooth curve that connects the three straight-line segments (Figure 9.5.2e). A comparison of our sketch to the exact titration curve (Figure 9.5.2f) shows that they are in close agreement.

At the beginning of this section we noted that the first precipitation titration used the cessation of precipitation to signal the end point. At best, this is a cumbersome method for detecting a titration’s end point. Before precipitation titrimetry became practical, better methods for identifying the end point were necessary.

There are three general types of indicators for a precipitation titration, each of which changes color at or near the titration’s equivalence point. The first type of indicator is a species that forms a precipitate with the titrant. In the Mohr method for Cl– using Ag+ as a titrant, for example, a small amount of K2CrO4 is added to the titrand’s solution. The titration’s end point is the formation of a reddish-brown precipitate of Ag2CrO4. The Mohr method was first published in 1855 by Karl Friedrich Mohr.

Because \(\text{CrO}_4^{2-}\) imparts a yellow color to the solution, which might obscure the end point, only a small amount of K2CrO4 is added. As a result, the end point is always later than the equivalence point. To compensate for this positive determinate error, an analyte-free reagent blank is analyzed to determine the volume of titrant needed to effect a change in the indicator’s color. Subtracting the end point for the reagent blank from the titrand’s end point gives the titration’s end point. Because \(\text{CrO}_4^{2-}\) is a weak base, the titrand’s solution is made slightly alkaline. If the solution is too acidic, chromate is present as \(\text{HCrO}_4^{-}\) instead of \(\text{CrO}_4^{2-}\), and the Ag2CrO4 end point is delayed. The pH also must be less than 10 to avoid the precipitation of silver hydroxide.

A second type of indicator uses a species that forms a colored complex with the titrant or the titrand. In the Volhard method for Ag+ using KSCN as the titrant, for example, a small amount of Fe3+ is added to the titrand’s solution. The titration’s end point is the formation of the reddish-colored Fe(SCN)2+ complex. The titration is carried out in an acidic solution to prevent the precipitation of Fe3+ as Fe(OH)3. The Volhard method was first published in 1874 by Jacob Volhard.

The third type of end point uses a species that changes color when it adsorbs to the precipitate. In the Fajans method for Cl– using Ag+ as a titrant, for example, the anionic dye dichlorofluorescein is added to the titrand’s solution.
Before the end point, the precipitate of AgCl has a negative surface charge due to the adsorption of excess Cl–. Because dichlorofluorescein also carries a negative charge, it is repelled by the precipitate and remains in solution where it has a greenish-yellow color. After the end point, the surface of the precipitate carries a positive surface charge due to the adsorption of excess Ag+. Dichlorofluorescein now adsorbs to the precipitate’s surface where its color is pink. This change in the indicator’s color signals the end point. The Fajans method was first published in the 1920s by Kasimir Fajans.

Another method for locating the end point is a potentiometric titration in which we monitor the change in the titrant’s or the titrand’s concentration using an ion-selective electrode. The end point is found by visually examining the titration curve. For a discussion of potentiometry and ion-selective electrodes, see Chapter 11.

Although precipitation titrimetry rarely is listed as a standard method of analysis, it is useful as a secondary analytical method to verify other analytical methods. Most precipitation titrations use Ag+ as either the titrand or the titrant. A titration in which Ag+ is the titrant is called an argentometric titration. Table 9.5.2 provides a list of several typical precipitation titrations; each titrand in the table is determined either by a direct titration with AgNO3, using a Mohr or a Fajans end point, or by a back titration with AgNO3 and KSCN, using a Volhard end point. When two titrants are listed (AgNO3 and KSCN), the analysis is by a back titration; the first titrant, AgNO3, is added in excess and the excess is titrated using the second titrant, KSCN. For some of the Volhard methods in the table, the precipitated silver salt is removed before carrying out the back titration.

The quantitative relationship between the titrand and the titrant is determined by the stoichiometry of the titration reaction. If you are unsure of the balanced reaction, you can deduce the stoichiometry from the precipitate’s formula. For example, in forming a precipitate of Ag2CrO4, each mole of \(\text{CrO}_4^{2-}\) reacts with two moles of Ag+.

A mixture containing only KCl and NaBr is analyzed by the Mohr method. A 0.3172-g sample is dissolved in 50 mL of water and titrated to the Ag2CrO4 end point, requiring 36.85 mL of 0.1120 M AgNO3. A blank titration requires 0.71 mL of titrant to reach the same end point. Report the %w/w KCl in the sample.

Solution: To find the moles of titrant reacting with the sample, we first need to correct for the reagent blank; thus\[V_\text{Ag} = 36.85 \text{ mL} - 0.71 \text{ mL} = 36.14 \text{ mL} \nonumber\]\[(0.1120 \text{ M})(0.03614 \text{ L}) = 4.048 \times 10^{-3} \text{ mol AgNO}_3 \nonumber\]Titrating with AgNO3 produces a precipitate of AgCl and AgBr. In forming the precipitates, each mole of KCl consumes one mole of AgNO3 and each mole of NaBr consumes one mole of AgNO3; thus\[\text{mol KCl + mol NaBr} = 4.048 \times 10^{-3} \text{ mol AgNO}_3 \nonumber\]We are interested in finding the mass of KCl, so let’s rewrite this equation in terms of mass. We know that\[\text{mol KCl} = \frac{\text{g KCl}}{74.551 \text{ g KCl/mol KCl}} \nonumber\]\[\text{mol NaBr} = \frac{\text{g NaBr}}{102.89 \text{ g NaBr/mol NaBr}} \nonumber\]which we substitute back into the previous equation\[\frac{\text{g KCl}}{74.551 \text{ g KCl/mol KCl}} + \frac{\text{g NaBr}}{102.89 \text{ g NaBr/mol NaBr}} = 4.048 \times 10^{-3} \nonumber\]Because this equation has two unknowns—g KCl and g NaBr—we need another equation that includes both unknowns. A simple equation takes advantage of the fact that the sample contains only KCl and NaBr; thus,\[\text{g NaBr} = 0.3172 \text{ g} - \text{ g KCl} \nonumber\]\[\frac{\text{g KCl}}{74.551 \text{ g KCl/mol KCl}} + \frac{0.3172 \text{ g} - \text{ g KCl}}{102.89 \text{ g NaBr/mol NaBr}} = 4.048 \times 10^{-3} \nonumber\]\[1.341 \times 10^{-2}(\text{g KCl}) + 3.083 \times 10^{-3} - 9.719 \times 10^{-3} (\text{g KCl}) = 4.048 \times 10^{-3} \nonumber\]\[3.69 \times 10^{-3}(\text{g KCl}) = 9.65 \times 10^{-4} \nonumber\]The sample contains 0.262 g of KCl and the %w/w KCl in the sample is\[\frac{0.262 \text{ g KCl}}{0.3172 \text{ g sample}} \times 100 = 82.6 \text{% w/w KCl} \nonumber\]
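Because the Mohr analysis of the KCl/NaBr mixture reduces to one linear equation in one unknown once g NaBr is eliminated, it is easy to check with a few lines of Python. The sketch below is not part of the original example; the variable names are ours, and because it carries all digits its answer differs slightly from the value obtained above with rounded intermediate coefficients.

```python
# Minimal sketch of the two-component Mohr calculation above (illustrative only).
mol_Ag = 0.1120 * (36.85 - 0.71) / 1000   # mol AgNO3, corrected for the blank
fw_KCl, fw_NaBr = 74.551, 102.89          # g/mol
m_sample = 0.3172                         # g of sample (KCl + NaBr only)

# g_KCl/fw_KCl + (m_sample - g_KCl)/fw_NaBr = mol_Ag, solved for g_KCl
g_KCl = (mol_Ag - m_sample / fw_NaBr) / (1 / fw_KCl - 1 / fw_NaBr)
print(round(100 * g_KCl / m_sample, 1))   # ~82.3 %w/w KCl (82.6 with the text's rounding)
```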
The analysis for I– using the Volhard method requires a back titration. A typical calculation is shown in the following example.

The %w/w I– in a 0.6712-g sample is determined by a Volhard titration. After adding 50.00 mL of 0.05619 M AgNO3 and allowing the precipitate to form, the remaining silver is back titrated with 0.05322 M KSCN, requiring 35.14 mL to reach the end point. Report the %w/w I– in the sample.

Solution: There are two precipitates in this analysis: AgNO3 and I– form a precipitate of AgI, and AgNO3 and KSCN form a precipitate of AgSCN. Each mole of I– consumes one mole of AgNO3 and each mole of KSCN consumes one mole of AgNO3; thus\[\text{mol AgNO}_3 = \text{mol I}^- + \text{mol KSCN} \nonumber\]Solving for the moles of I– we find\[\text{mol I}^- = \text{mol AgNO}_3 - \text{mol KSCN} = M_\text{Ag} V_\text{Ag} - M_\text{KSCN} V_\text{KSCN} \nonumber\]\[\text{mol I}^- = (0.05619 \text{ M})(0.0500 \text{ L}) - (0.05322 \text{ M})(0.03514 \text{ L}) = 9.393 \times 10^{-4} \nonumber\]The %w/w I– in the sample is\[\frac{(9.393 \times 10^{-4} \text{ mol I}^-) \times \frac{126.9 \text{ g I}^-}{\text{mol I}^-}}{0.6712 \text{ g sample}} \times 100 = 17.76 \text{% w/w I}^- \nonumber\]

A 1.963-g sample of an alloy is dissolved in HNO3 and diluted to volume in a 100-mL volumetric flask. Titrating a 25.00-mL portion with 0.1078 M KSCN requires 27.19 mL to reach the end point. Calculate the %w/w Ag in the alloy.

The titration uses\[(0.1078 \text{ M KSCN})(0.02719 \text{ L}) = 2.931 \times 10^{-3} \text{ mol KSCN} \nonumber\]The stoichiometry between SCN– and Ag+ is 1:1; thus, there are\[2.931 \times 10^{-3} \text{ mol Ag}^+ \times \frac{107.87 \text{ g Ag}}{\text{mol Ag}} = 0.3162 \text{ g Ag} \nonumber\]in the 25.00-mL sample. Because this represents 1⁄4 of the total solution, there are \(0.3162 \times 4\) or 1.265 g Ag in the alloy. The %w/w Ag in the alloy is\[\frac{1.265 \text{ g Ag}}{1.963 \text{ g sample}} \times 100 = 64.44 \text{% w/w Ag} \nonumber\]

The scale of operations, accuracy, precision, sensitivity, time, and cost of a precipitation titration are similar to those described elsewhere in this chapter for acid–base, complexation, and redox titrations. Precipitation titrations also can be extended to the analysis of mixtures provided there is a significant difference in the solubilities of the precipitates. Figure 9.5.3 shows an example of a titration curve for a mixture of I– and Cl– using Ag+ as a titrant.

This page titled 9.5: Precipitation Titrations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
9.6: Problems
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/09%3A_Titrimetric_Methods/9.06%3A_Problems
Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants:Appendix 10: Solubility ProductsAppendix 11: Acid Dissociation ConstantsAppendix 12: Metal-Ligand Formation ConstantsAppendix 13: Standard State Reduction Potentials1. Calculate or sketch titration curves for the following acid–base titrations.(a) 25.0 mL of 0.100 M NaOH with 0.0500 M HCl(b) 50.0 mL of 0.0500 M HCOOH with 0.100 M NaOH(c) 50.0 mL of 0.100 M NH3 with 0.100 M HCl(d) 50.0 mL of 0.0500 M ethylenediamine with 0.100 M HCl(e) 50.0 mL of 0.0400 M citric acid with 0.120 M NaOH(f) 50.0 mL of 0.0400 M H3PO4 with 0.120 M NaOH2. Locate the equivalence point(s) for each titration curve in problem 1 and, where feasible, calculate the pH at the equivalence point. What is the stoichiometric relationship between the moles of acid and the moles of base for each of these equivalence points?3. Suggest an appropriate visual indicator for each of the titrations in problem 1.4. To sketch the titration curve for a weak acid we approximate the pH at 10% of the equivalence point volume as pKa – 1, and the pH at 90% of the equivalence point volume as pKa + 1. Show that these assumptions are reasonable.5. Tartaric acid, H2C4H4O6, is a diprotic weak acid with a pKa1 of 3.0 and a pKa2 of 4.4. Suppose you have a sample of impure tartaric acid (purity > 80%), and that you plan to determine its purity by titrating with a solution of 0.1 M NaOH using an indicator to signal the end point. Describe how you will carry out the analysis, paying particular attention to how much sample to use, the desired pH range for the indicator, and how you will calculate the %w/w tartaric acid. Assume your buret has a maximum capacity of 50 mL.6. The following data for the titration of a monoprotic weak acid with a strong base were collected using an automatic titrator. Prepare normal, first derivative, second derivative, and Gran plot titration curves for this data, and locate the equivalence point for each.0.250.861.632.724.296.549.677. Schwartz published the following simulated data for the titration of a \(1.02 \times 10^{-4}\) M solution of a monoprotic weak acid (pKa = 8.16) with \(1.004 \times 10^{-3}\) M NaOH [Schwartz, L. M. J. Chem. Educ. 1992, 69, 879–883]. The simulation assumes that a 50-mL pipet is used to transfer a portion of the weak acid solution to the titration vessel. A calibration of the pipet shows that it delivers a volume of only 49.94 mL. Prepare normal, first derivative, second derivative, and Gran plot titration curves for this data, and determine the equivalence point for each. How do these equivalence points compare to the expected equivalence point? Comment on the utility of each titration curve for the analysis of very dilute solutions of very weak acids.0.036.2124.790.090.298.9940.728. Calculate or sketch the titration curve for a 50.0 mL solution of a 0.100 M monoprotic weak acid (pKa = 8.0) with 0.1 M strong base in a nonaqueous solvent with Ks = \(10^{-20}\). You may assume that the change in solvent does not affect the weak acid’s pKa. Compare your titration curve to the titration curve when water is the solvent.9. The titration of a mixture of p-nitrophenol (pKa = 7.0) and m-nitrophenol (pKa = 8.3) is followed spectrophotometrically. Neither acid absorbs at a wavelength of 545 nm, but their respective conjugate bases do absorb at this wavelength. 
The m-nitrophenolate ion has a greater absorbance than an equimolar solution of the p-nitrophenolate ion. Sketch the spectrophotometric titration curve for a 50.00-mL mixture consisting of 0.0500 M p-nitrophenol and 0.0500 M m-nitrophenol with 0.100 M NaOH. Compare your result to the expected potentiometric titration curves.10. A quantitative analysis for aniline (C6H5NH2, Kb = \(3.94 \times 10^{-10}\)) is carried out by an acid–base titration using glacial acetic acid as the solvent and HClO4 as the titrant. A known volume of sample that contains 3–4 mmol of aniline is transferred to a 250-mL Erlenmeyer flask and diluted to approximately 75 mL with glacial acetic acid. Two drops of a methyl violet indicator are added, and the solution is titrated with previously standardized 0.1000 M HClO4 (prepared in glacial acetic acid using anhydrous HClO4) until the end point is reached. Results are reported as parts per million aniline.(a) Explain why this titration is conducted using glacial acetic acid as the solvent instead of using water.(b) One problem with using glacial acetic acid as solvent is its relatively high coefficient of thermal expansion of 0.11%/oC. For example, 100.00 mL of glacial acetic acid at 25oC occupies 100.22 mL at 27oC. What is the effect on the reported concentration of aniline if the standardization of HClO4 is conducted at a temperature that is lower than that for the analysis of the unknown?(c) The procedure calls for a sample that contains 3–4 mmoles of aniline. Why is this requirement necessary?Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants:Appendix 10: Solubility ProductsAppendix 11: Acid Dissociation ConstantsAppendix 12: Metal-Ligand Formation ConstantsAppendix 13: Standard State Reduction Potentials11. Using a ladder diagram, explain why the presence of dissolved CO2 leads to a determinate error for the standardization of NaOH if the end point’s pH is between 6–10, but no determinate error if the end point’s pH is less than 6.12. A water sample’s acidity is determined by titrating to fixed end point pHs of 3.7 and 8.3, with the former providing a measure of the concentration of strong acid and the later a measure of the combined concentrations of strong acid and weak acid. Sketch a titration curve for a mixture of 0.10 M HCl and 0.10 M H2CO3 with 0.20 M strong base, and use it to justify the choice of these end points.13. Ethylenediaminetetraacetic acid, H4Y, is a weak acid with successive acid dissociation constants of 0.010, \(2.19 \times 10^{-3}\), \(6.92 \times 10^{-7}\), and \(5.75 \times 10^{-11}\). The figure below shows a titration curve for H4Y with NaOH. What is the stoichiometric relationship between H4Y and NaOH at the equivalence point marked with the red arrow?14. A Gran plot method has been described for the quantitative analysis of a mixture that consists of a strong acid and a monoprotic weak acid [(a) Boiani, J. A. J. Chem. Educ. 1986, 63, 724–726; (b) Castillo, C. A.; Jaramillo, A. J. Chem. Educ. 1989, 66, 341]. A 50.00-mL mixture of HCl and CH3COOH is transferred to an Erlenmeyer flask and titrated by using a digital pipet to add successive 1.00-mL aliquots of 0.09186 M NaOH. The progress of the titration is monitored by recording the pH after each addition of titrant. 
Using the two papers listed above as a reference, prepare a Gran plot for the following data and determine the concentrations of HCl and CH3COOH.1.001.8324.004.4547.0012.142.001.864.5348.0012.173.001.894.6149.0012.204.001.924.6950.0012.235.001.954.7651.0012.266.001.994.8452.0012.287.002.034.9353.0012.308.002.105.0254.0012.329.002.185.1355.0012.3410.005.2356.0012.365.3757.0012.385.5258.0012.395.7559.0012.406.1460.0012.4210.3015. Explain why it is not possible for a sample of water to simultaneously have OH– and \(\text{HCO}_3^-\) as sources of alkalinity.16. For each of the samples a–e, determine the sources of alkalinity (OH–, \(\text{HCO}_3^-\), \(\text{CO}_3^{2-}\)) and their respective concentrations in parts per million In each case a 25.00-mL sample is titrated with 0.1198 M HCl to the bromocresol green and the phenolphthalein end points.17. A sample may contain any of the following: HCl, NaOH, H3PO4, \(\text{H}_2\text{PO}_4^-\), \(\text{HPO}_4^{2-}\), or \(\text{PO}_4^{3-}\). The composition of a sample is determined by titrating a 25.00-mL portion with 0.1198 M HCl or 0.1198 M NaOH to the phenolphthalein and to the methyl orange end points. For each of the following samples, determine which species are present and their respective molar concentrations.18. The protein in a 1.2846-g sample of an oat cereal is determined by a Kjeldahl analysis. The sample is digested with H2SO4, the resulting solution made basic with NaOH, and the NH3 distilled into 50.00 mL of 0.09552 M HCl. The excess HCl is back titrated using 37.84 mL of 0.05992 M NaOH. Given that the proteins in grains average 17.54% w/w N, report the %w/w protein in the sample.19. The concentration of SO2 in air is determined by bubbling a sample of air through a trap that contains H2O2. Oxidation of SO2 by H2O2 results in the formation of H2SO4, which is then determined by titrat-ing with NaOH. In a typical analysis, a sample of air is passed through the peroxide trap at a rate of 12.5 L/min for 60 min and required 10.08 mL of 0.0244 M NaOH to reach the phenolphthalein end point. Calculate the μL/L SO2 in the sample of air. The density of SO2 at the temperature of the air sample is 2.86 mg/mL.20. The concentration of CO2 in air is determined by an indirect acid–base titration. A sample of air is bubbled through a solution that contains an excess of Ba(OH)2, precipitating BaCO3. The excess Ba(OH)2 is back titrated with HCl. In a typical analysis a 3.5-L sample of air is bubbled through 50.00 mL of 0.0200 M Ba(OH)2. Back titrating with 0.0316 M HCl requires 38.58 mL to reach the end point. Determine the ppm CO2 in the sample of air given that the density of CO2 at the temperature of the sample is 1.98 g/L.Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants:Appendix 10: Solubility ProductsAppendix 11: Acid Dissociation ConstantsAppendix 12: Metal-Ligand Formation ConstantsAppendix 13: Standard State Reduction Potentials21. The purity of a synthetic preparation of methylethyl ketone, C4H8O, is determined by reacting it with hydroxylamine hydrochloride, liberating HCl (see reaction in Table 9.2.7). In a typical analysis a 3.00-mL sample is diluted to 50.00 mL and treated with an excess of hydroxylamine hydrochloride. The liberated HCl is titrated with 0.9989 M NaOH, requiring 32.68 mL to reach the end point. 
Report the percent purity of the sample given that the density of methylethyl ketone is 0.805 g/mL.22. Animal fats and vegetable oils are triesters formed from the reaction between glycerol (1,2,3-propanetriol) and three long-chain fatty acids. One of the methods used to characterize a fat or an oil is a determination of its saponification number. When treated with boiling aqueous KOH, an ester saponifies into the parent alcohol and fatty acids (as carboxylate ions). The saponification number is the number of milligrams of KOH required to saponify 1.000 gram of the fat or the oil. In a typical analysis a 2.085-g sample of butter is added to 25.00 mL of 0.5131 M KOH. After saponification is complete the excess KOH is back titrated with 10.26 mL of 0.5000 M HCl. What is the saponification number for this sample of butter?23. A 250.0-mg sample of an organic weak acid is dissolved in an appropriate solvent and titrated with 0.0556 M NaOH, requiring 32.58 mL to reach the end point. Determine the compound’s equivalent weight.24. The figure below shows a potentiometric titration curve for a 0.4300-g sample of a purified amino acid that was dissolved in 50.00 mL of water and titrated with 0.1036 M NaOH. Identify the amino acid from the possibilities listed in the table.25. Using its titration curve, determine the acid dissociation constant for the weak acid in problem 9.6.26. Where in the scale of operations do the microtitration techniques discussed in Chapter 9.7 belong?27. An acid–base titration can be used to determine an analyte’s equivalent weight, but it can not be used to determine its formula weight. Explain why.28. Commercial washing soda is approximately 30–40% w/w Na2CO3. One procedure for the quantitative analysis of washing soda contains the following instructions:Transfer an approximately 4-g sample of the washing soda to a 250-mL volumetric flask. Dissolve the sample in about 100 mL of H2O and then dilute to the mark. Using a pipet, transfer a 25-mL aliquot of this solution to a 125-mL Erlenmeyer flask and add 25-mL of H2O and 2 drops of bromocresol green indicator. Titrate the sample with 0.1 M HCl to the indicator’s end point.What modifications, if any, are necessary if you want to adapt this procedure to evaluate the purity of commercial Na2CO3 that is >98% pure?29. A variety of systematic and random errors are possible when standardizing a solution of NaOH against the primary weak acid standard potassium hydrogen phthalate (KHP). Identify, with justification, whether the following are sources of systematic error or random error, or if they have no affect on the error. If the error is systematic, then indicate whether the experimentally determined molarity for NaOH is too high or too low. The standardization reaction is\[\text{C}_8\text{H}_5\text{O}_4^-(aq) + \text{OH}^-(aq) \rightarrow \text{C}_8\text{H}_4\text{O}_4^-(aq) + \text{H}_2\text{O}(l) \nonumber\] (a) The balance used to weigh KHP is not properly calibrated and always reads 0.15 g too low.(b) The indicator for the titration changes color between a pH of 3–4.(c) An air bubble, which is lodged in the buret’s tip at the beginning of the analysis, dislodges during the titration.(d) Samples of KHP are weighed into separate Erlenmeyer flasks, but the balance is tarred only for the first flask.(e) The KHP is not dried before it is used.(f) The NaOH is not dried before it is used.(g) The procedure states that the sample of KHP should be dissolved in 25 mL of water, but it is accidentally dissolved in 35 mL of water.30. 
The concentration of o-phthalic acid in an organic solvent, such as n-butanol, is determined by an acid–base titration using aqueous NaOH as the titrant. As the titrant is added, the o-phthalic acid extracts into the aqueous solution where it reacts with the titrant. The titrant is added slowly to allow sufficient time for the extraction to take place.(a) What type of error do you expect if the titration is carried out too quickly?(b) Propose an alternative acid–base titrimetric method that allows for a more rapid determination of the concentration of o-phthalic acid in n-butanol.Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants:Appendix 10: Solubility ProductsAppendix 11: Acid Dissociation ConstantsAppendix 12: Metal-Ligand Formation ConstantsAppendix 13: Standard State Reduction Potentials31. Calculate or sketch titration curves for 50.0 mL of 0.100 Mg2+ with 0.100 M EDTA at a pH of 7 and 10. Locate the equivalence point for each titration curve.32. Calculate or sketch titration curves for 25.0 mL of 0.0500 M Cu2+ with 0.025 M EDTA at a pH of 10 and in the presence of 10–3 M and 10–1 M NH3. Locate the equivalence point for each titration curve.33. Sketch the spectrophotometric titration curve for the titration of a mixture of \(5.00 \times 10^{-3}\) M Bi3+ and \(5.00 \times 10^{-3}\) M Cu2+ with 0.0100 M EDTA. Assume that only the Cu2+–EDTA complex absorbs at the selected wavelength.34. The EDTA titration of mixtures of Ca2+ and Mg2+ can be followed thermometrically because the formation of the Ca2+–EDTA complex is exothermic and the formation of the Mg2+–EDTA complex is endothermic. Sketch the thermometric titration curve for a mixture of \(5.00 \times 10^{-3}\) M Ca2+ and \(5.00 \times 10^{-3}\) M Mg2+ using 0.0100 M EDTA as the titrant. The heats of formation for CaY2– and MgY2– are, respectively, –23.9 kJ/mole and 23.0 kJ/mole.35. EDTA is one member of a class of aminocarboxylate ligands that form very stable 1:1 complexes with metal ions. The following table provides logKf values for the complexes of six such ligands with Ca2+ and Mg2+. Which ligand is the best choice for a direct titration of Ca2+ in the presence of Mg2+?36. The amount of calcium in physiological fluids is determined by a complexometric titration with EDTA. In one such analysis a 0.100-mL sample of a blood serum is made basic by adding 2 drops of NaOH and titrated with 0.00119 M EDTA, requiring 0.268 mL to reach the end point. Report the concentration of calcium in the sample as milligrams Ca per 100 mL.37. After removing the membranes from an eggshell, the shell is dried and its mass recorded as 5.613 g. The eggshell is transferred to a 250-mL beaker and dissolved in 25 mL of 6 M HCl. After filtering, the solution that contains the dissolved eggshell is diluted to 250 mL in a volumetric flask. A 10.00-mL aliquot is placed in a 125-mL Erlenmeyer flask and buffered to a pH of 10. Titrating with 0.04988 M EDTA requires 44.11 mL to reach the end point. Determine the amount of calcium in the eggshell as %w/w CaCO3.38. The concentration of cyanide, CN–, in a copper electroplating bath is determined by a complexometric titration using Ag+ as the titrant, forming the soluble \(\text{Ag(CN)}_2^-\) complex. 
In a typical analysis a 5.00-mL sample from an electroplating bath is transferred to a 250-mL Erlenmeyer flask, and treated with 100 mL of H2O, 5 mL of 20% w/v NaOH and 5 mL of 10% w/v KI. The sample is titrated with 0.1012 M AgNO3, requiring 27.36 mL to reach the end point as signaled by the formation of a yellow precipitate of AgI. Report the concentration of cyanide as parts per million of NaCN.39. Before the introduction of EDTA most complexation titrations used Ag+ or CN– as the titrant. The analysis for Cd2+, for example, was accomplished indirectly by adding an excess of KCN to form \(\text{Cd(CN)}_4^{2-}\), and back-titrating the excess CN– with Ag+, forming \(\text{Ag(CN)}_2^-\). In one such analysis a 0.3000-g sample of an ore is dissolved and treated with 20.00 mL of 0.5000 M KCN. The excess CN– requires 13.98 mL of 0.1518 M AgNO3 to reach the end point. Determine the %w/w Cd in the ore.40. Solutions that contain both Fe3+ and Al3+ are selectively analyzed for Fe3+ by buffering to a pH of 2 and titrating with EDTA. The pH of the solution is then raised to 5 and an excess of EDTA added, resulting in the formation of the Al3+–EDTA complex. The excess EDTA is back-titrated using a standard solution of Fe3+, providing an indirect analysis for Al3+.(a) At a pH of 2, verify that the formation of the Fe3+–EDTA complex is favorable, and that the formation of the Al3+–EDTA complex is not favorable.(b) A 50.00-mL aliquot of a sample that contains Fe3+ and Al3+ is transferred to a 250-mL Erlenmeyer flask and buffered to a pH of 2. A small amount of salicylic acid is added, forming the soluble red-colored Fe3+–salicylic acid complex. The solution is titrated with 0.05002 M EDTA, requiring 24.82 mL to reach the end point as signaled by the disappearance of the Fe3+–salicylic acid complex’s red color. The solution is buffered to a pH of 5 and 50.00 mL of 0.05002 M EDTA is added. After ensuring that the formation of the Al3+–EDTA complex is complete, the excess EDTA is back titrated with 0.04109 M Fe3+, requiring 17.84 mL to reach the end point as signaled by the reappearance of the red-colored Fe3+–salicylic acid complex. Report the molar concentrations of Fe3+ and Al3+ in the sample.Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants:Appendix 10: Solubility ProductsAppendix 11: Acid Dissociation ConstantsAppendix 12: Metal-Ligand Formation ConstantsAppendix 13: Standard State Reduction Potentials41. Prada and colleagues described an indirect method for determining sulfate in natural samples, such as seawater and industrial effluents [Prada, S.; Guekezian, M.; Suarez-Iha, M. E. V. Anal. Chim. Acta 1996, 329, 197–202]. The method consists of three steps: precipitating the sulfate as PbSO4; dissolving the PbSO4 in an ammonical solution of excess EDTA to form the soluble PbY2– complex; and titrating the excess EDTA with a standard solution of Mg2+. 
The following reactions and equilibrium constants are known\[\text{PbSO}_4(s) \rightleftharpoons \text{Pb}^{2+}(aq) + \text{SO}_4^{2-}(aq) \quad K_\text{sp} = 1.6 \times 10^{-8} \nonumber\]\[\text{Pb}^{2+}(aq) + \text{Y}^{4-}(aq) \rightleftharpoons \text{PbY}^{2-}(aq) \quad K_\text{f} = 1.1 \times 10^{18} \nonumber\]\[\text{Mg}^{2+}(aq) + \text{Y}^{4-}(aq) \rightleftharpoons \text{MgY}^{2-}(aq) \quad K_\text{f} = 4.9 \times 10^{8} \nonumber\]\[\text{Zn}^{2+}(aq) + \text{Y}^{4-}(aq) \rightleftharpoons \text{ZnY}^{2-}(aq) \quad K_\text{f} = 3.2 \times 10^{16} \nonumber\](a) Verify that a precipitate of PbSO4 will dissolve in a solution of Y4–.(b) Sporek proposed a similar method using Zn2+ as a titrant and found that the accuracy frequently was poor [Sporek, K. F. Anal. Chem. 1958, 30, 1030–1032]. One explanation is that Zn2+ might react with the PbY2– complex, forming ZnY2–. Show that this might be a problem when using Zn2+ as a titrant, but that it is not a problem when using Mg2+ as a titrant. Would such a displacement of Pb2+ by Zn2+ lead to the reporting of too much or too little sulfate?(c) In a typical analysis, a 25.00-mL sample of an industrial effluent is carried through the procedure using 50.00 mL of 0.05000 M EDTA. Titrating the excess EDTA requires 12.42 mL of 0.1000 M Mg2+. Report the molar concentration of \(\text{SO}_4^{2-}\) in the sample of effluent.42. Table 9.3.1 provides values for the fraction of EDTA present as Y4–, \(\alpha_{\text{Y}^{4-}}\). Values of \(\alpha_{\text{Y}^{4-}}\) are calculated using the equation\[\alpha_{\text{Y}^{4-}} = \frac{[\text{Y}^{4-}]}{C_\text{EDTA}} \nonumber\]where [Y4-] is the concentration of the fully deprotonated EDTA and CEDTA is the total concentration of EDTA in all of its forms\[C_\text{EDTA} = [\text{H}_6\text{Y}^{2+}]+[\text{H}_5\text{Y}^{+}]+[\text{H}_4\text{Y}]+ [\text{H}_3\text{Y}^{-}] + [\text{H}_2\text{Y}^{2-}] + [\text{H}_\text{Y}^{3-}] + [\text{Y}^{4-}] \nonumber\]\[\text{H}_6\text{Y}^{2+} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_5\text{Y}^{+}(aq) \quad K_\text{a1} \nonumber\]\[\text{H}_5\text{Y}^{+} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_4\text{Y}(aq) \quad K_\text{a2} \nonumber\]\[\text{H}_4\text{Y} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_3\text{Y}^{-}(aq) \quad K_\text{a3} \nonumber\]\[\text{H}_3\text{Y}^{-} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_2\text{Y}^{2-}(aq) \quad K_\text{a4} \nonumber\]\[\text{H}_2\text{Y}^{2-} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}\text{Y}^{3-}(aq) \quad K_\text{a5} \nonumber\]\[\text{H}\text{Y}^{2-} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{Y}^{4-}(aq) \quad K_\text{a6} \nonumber\]to show that\[\alpha_{\text{Y}^{4-}} = \frac{K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4}K_\text{a5}K_\text{a6}}{d} \nonumber\]where\[d = [\text{H}_3\text{O}^+]^6 + [\text{H}_3\text{O}^+]^5K_\text{a1} + [\text{H}_3\text{O}^+]^4K_\text{a1}K_\text{a2} + [\text{H}_3\text{O}^+]^3K_\text{a1}K_\text{a2}K_\text{a3} + [\text{H}_3\text{O}^+]^2K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4} + [\text{H}_3\text{O}^+]K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4}K_\text{a5} + K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4}K_\text{a5}K_\text{a6} \nonumber\]43. Calculate or sketch titration curves for the following redox titration reactions at 25oC. 
Assume the analyte initially is present at a concentration of 0.0100 M and that a 25.0-mL sample is taken for analysis. The titrant, which is the bold species in each reaction, has a concentration of 0.0100 M.(a) V2+(aq) + Ce4+(aq) \(\rightarrow\) V3+(aq) + Ce3+(aq)(b) Sn2+(aq) + 2Ce4+(aq) \(\rightarrow\) Sn4+(aq) +2Ce3+(aq)(c) 5Fe2+(aq) + \(\mathbf{MnO}_\mathbf{4}^\mathbf{-}\)(aq) + 8H+(aq) \(\rightarrow\) 5Fe3+(aq) + Mn2+(aq) +4H2O(l) at a pH of 144. What is the equivalence point for each titration in problem 43?45. Suggest an appropriate indicator for each titration in problem 43.46. The iron content of an ore is determined by a redox titration that uses K2Cr2O7 as the titrant. A sample of the ore is dissolved in concentrated HCl using Sn2+ to speed its dissolution by reducing Fe3+ to Fe2+. After the sample is dissolved, Fe2+ and any excess Sn2+ are oxidized to Fe3+ and Sn4+ using \(\text{MnO}_4^-\). The iron is then carefully reduced to Fe2+ by adding a 2–3 drop excess of Sn2+. A solution of HgCl2 is added and, if a white precipitate of Hg2Cl2 forms, the analysis is continued by titrating with K2Cr2O7. The sample is discarded without completing the analysis if a precipitate of Hg2Cl2 does not form or if a gray precipitate (due to Hg) forms.(a) Explain why the sample is discarded if a white precipitate of Hg2Cl2 does not form or if a gray precipitate forms.(b) Is a determinate error introduced if the analyst forgets to add Sn2+ in the step where the iron ore is dissolved?(c) Is a determinate error introduced if the iron is not quantitatively oxidized back to Fe3+ by the \(\text{MnO}_4^-\)?47. The amount of Cr3+ in an inorganic salt is determined by a redox titration. A portion of sample that contains approximately 0.25 g of Cr3+ is accurately weighed and dissolved in 50 mL of H2O. The Cr3+ is oxidized to \(\text{Cr}_2\text{O}_7^{2-}\) by adding 20 mL of 0.1 M AgNO3, which serves as a catalyst, and 50 mL of 10%w/v (NH4)2S2O8, which serves as the oxidizing agent. After the reaction is complete, the resulting solution is boiled for 20 minutes to destroy the excess \(\text{S}_2\text{O}_8^{2-}\), cooled to room temperature, and diluted to 250 mL in a volumetric flask. A 50-mL portion of the resulting solution is transferred to an Erlenmeyer flask, treated with 50 mL of a standard solution of Fe2+, and acidified with 200 mL of 1 M H2SO4, reducing the \(\text{Cr}_2\text{O}_7^{2-}\) to Cr3+. The excess Fe2+ is then determined by a back titration with a standard solution of K2Cr2O7 using an appropriate indicator. The results are reported as %w/w Cr3+.(a) There are several places in the procedure where a reagent’s volume is specified (see italicized text). Which of these measurements must be made using a volumetric pipet.(b) Excess peroxydisulfate, \(\text{S}_2\text{O}_8^{2-}\) is destroyed by boiling the solution. What is the effect on the reported %w/w Cr3+ if some of the \(\text{S}_2\text{O}_8^{2-}\) is not destroyed during this step?(c) Solutions of Fe2+ undergo slow air oxidation to Fe3+. What is the effect on the reported %w/w Cr3+ if the standard solution of Fe2+ is inadvertently allowed to be partially oxidized?48. The exact concentration of H2O2 in a solution that is nominally 6% w/v H2O2 is determined by a redox titration using \(\text{MnO}_4^-\) as the titrant. A 25-mL aliquot of the sample is transferred to a 250-mL volumetric flask and diluted to volume with distilled water. 
A 25-mL aliquot of the diluted sample is added to an Erlenmeyer flask, diluted with 200 mL of distilled water, and acidified with 20 mL of 25% v/v H2SO4. The resulting solution is titrated with a standard solution of KMnO4 until a faint pink color persists for 30 s. The results are reported as %w/v H2O2.(a) Many commercially available solutions of H2O2 contain an inorganic or an organic stabilizer to prevent the autodecomposition of the peroxide to H2O and O2. What effect does the presence of this stabilizer have on the reported %w/v H2O2 if it also reacts with \(\text{MnO}_4^-\)?(b) Laboratory distilled water often contains traces of dissolved organic material that may react with \(\text{MnO}_4^-\). Describe a simple method to correct for this potential interference.(c) What modifications to the procedure, if any, are needed if the sample has a nominal concentration of 30% w/v H2O2.49. The amount of iron in a meteorite is determined by a redox titration using KMnO4 as the titrant. A 0.4185-g sample is dissolved in acid and the liberated Fe3+ quantitatively reduced to Fe2+ using a Walden reductor. Titrating with 0.02500 M KMnO4 requires 41.27 mL to reach the end point. Determine the %w/w Fe2O3 in the sample of meteorite.50. Under basic conditions, \(\text{MnO}_4^-\) is used as a titrant for the analysis of Mn2+, with both the analyte and the titrant forming MnO2. In the analysis of a mineral sample for manganese, a 0.5165-g sample is dissolved and the manganese reduced to Mn2+. The solution is made basic and titrated with 0.03358 M KMnO4, requiring 34.88 mL to reach the end point. Calculate the %w/w Mn in the mineral sample.Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants:Appendix 10: Solubility ProductsAppendix 11: Acid Dissociation ConstantsAppendix 12: Metal-Ligand Formation ConstantsAppendix 13: Standard State Reduction Potentials51. The amount of uranium in an ore is determined by an indirect redox titration. The analysis is accomplished by dissolving the ore in sulfuric acid and reducing \(\text{UO}_2^+\) to U4+ with a Walden reductor. The solution is treated with an excess of Fe3+, forming Fe2+ and U6+. The Fe2+ is titrated with a standard solution of K2Cr2O7. In a typical analysis a 0.315-g sample of ore is passed through the Walden reductor and treated with 50.00 mL of 0.0125 M Fe3+. Back titrating with 0.00987 M K2Cr2O7 requires 10.52 mL. What is the %w/w U in the sample?52. The thickness of the chromium plate on an auto fender is determined by dissolving a 30.0-cm2 section in acid and oxidizing Cr3+ to \(\text{Cr}_2\text{O}_7^{2-}\) with peroxydisulfate. After removing excess peroxydisulfate by boiling, 500.0 mg of Fe(NH4)2(SO4)2•6H2O is added, reducing the \(\text{Cr}_2\text{O}_7^{2-}\) to Cr3+. The excess Fe2+ is back titrated, requiring 18.29 mL of 0.00389 M K2Cr2O7 to reach the end point. Determine the average thickness of the chromium plate given that the density of Cr is 7.20 g/cm3.53. The concentration of CO in air is determined by passing a known volume of air through a tube that contains I2O5, forming CO2 and I2. The I2 is removed from the tube by distilling it into a solution that contains an excess of KI, producing \(\text{I}_3^-\). The \(\text{I}_3^-\) is titrated with a standard solution of Na2S2O3. 
In a typical analysis a 4.79-L sample of air is sampled as described here, requiring 7.17 mL of 0.00329 M Na2S2O3 to reach the end point. If the air has a density of \(1.23 \times 10^{-3}\) g/mL, determine the parts per million CO in the air.54. The level of dissolved oxygen in a water sample is determined by the Winkler method. In a typical analysis a 100.0-mL sample is made basic and treated with a solution of MnSO4, resulting in the formation of MnO2. An excess of KI is added and the solution is acidified, resulting in the formation of Mn2+ and I2. The liberated I2 is titrated with a solution of 0.00870 M Na2S2O3, requiring 8.90 mL to reach the starch indicator end point. Calculate the concentration of dissolved oxygen as parts per million O2.55. Calculate or sketch the titration curve for the titration of 50.0 mL of 0.0250 M KI with 0.0500 M AgNO3. Prepare separate titration curves using pAg and pI on the y-axis.56. Calculate or sketch the titration curve for the titration of a 25.0 mL mixture of 0.0500 M KI and 0.0500 M KSCN using 0.0500 M AgNO3 as the titrant.57. The analysis for Cl– using the Volhard method requires a back titration. A known amount of AgNO3 is added, precipitating AgCl. The unreacted Ag+ is determined by back titrating with KSCN. There is a complication, however, because AgCl is more soluble than AgSCN.(a) Why do the relative solubilities of AgCl and AgSCN lead to a titration error?(b) Is the resulting titration error a positive or a negative determinate error?(c) How might you modify the procedure to eliminate this source of determinate error?(d) Is this source of determinate error of concern when using the Volhard method to determine Br–?58. Voncina and co-workers suggest that a precipitation titration can be monitored by measuring pH as a function of the volume of titrant if the titrant is a weak base [VonČina, D. B.; DobČnik, D.; GomiŠČek, S. Anal. Chim. Acta 1992, 263, 147–153]. For example, when titrating Pb2+ with K2CrO4 the solution that contains the analyte initially is acidified to a pH of 3.50 using HNO3. Before the equivalence point the concentration of \(\text{CrO}_4^{2-}\) is controlled by the solubility product of PbCrO4. After the equivalence point the concentration of \(\text{CrO}_4^{2-}\) is determined by the amount of excess titrant. Considering the reactions that control the concentration of \(\text{CrO}_4^{2-}\), sketch the expected titration curve of pH versus volume of titrant.59. A 0.5131-g sample that contains KBr is dissolved in 50 mL of distilled water. Titrating with 0.04614 M AgNO3 requires 25.13 mL to reach the Mohr end point. A blank titration requires 0.65 mL to reach the same end point. Report the %w/w KBr in the sample.60. A 0.1093-g sample of impure Na2CO3 is analyzed by the Volhard method. After adding 50.00 mL of 0.06911 M AgNO3, the sample is back titrated with 0.05781 M KSCN, requiring 27.36 mL to reach the end point. Report the purity of the Na2CO3 sample.61. A 0.1036-g sample that contains only BaCl2 and NaCl is dissolved in 50 mL of distilled water. Titrating with 0.07916 M AgNO3 requires 19.46 mL to reach the Fajans end point. Report the %w/w BaCl2 in the sample.Some of the problems that follow require one or more equilibrium constants or standard state potentials. 
This page titled 9.6: Problems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
9.7: Additional Resources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/09%3A_Titrimetric_Methods/9.07%3A_Additional_Resources
The following set of experiments introduces students to the applications of titrimetry. Experiments are grouped into four categories based on the type of reaction: acid–base titrimetry, complexation titrimetry, redox titrimetry, and precipitation titrimetry. Additional experiments emphasizing potentiometric electrodes are found in Chapter 11. The remaining resources collected on this page include sources for a general history of titrimetry; a paper that reviews the use of weight instead of volume as a signal for titrimetry; a text that provides a more thorough discussion of non-aqueous titrations, with numerous practical examples; sources that provide more details on how potentiometric titration data may be used to calculate equilibrium constants; additional information about Gran plots and about calculating or sketching titration curves; texts and articles that provide a complete discussion of the application of complexation titrimetry; and a good source for additional examples of the application of all forms of titrimetry.This page titled 9.7: Additional Resources is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
9.8: Chapter Summary and Key Terms
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/09%3A_Titrimetric_Methods/9.08%3A_Chapter_Summary_and_Key_Terms
In a titrimetric method of analysis, the volume of titrant that reacts stoichiometrically with a titrand provides quantitative information about the amount of analyte in a sample. The volume of titrant that corresponds to this stoichiometric reaction is called the equivalence point. Experimentally we determine the titration’s end point using an indicator that changes color near the equivalence point. Alternatively, we can locate the end point by monitoring a property of the titrand’s solution—absorbance, potential, and temperature are typical examples—that changes as the titration progresses. In either case, an accurate result requires that the end point closely match the equivalence point. Knowing the shape of a titration curve is critical to evaluating the feasibility of a titrimetric method.

Many titrations are direct, in which the analyte participates in the titration as the titrand or the titrant. Other titration strategies are possible when a direct reaction between the analyte and titrant is not feasible. In a back titration a reagent is added in excess to a solution that contains the analyte. When the reaction between the reagent and the analyte is complete, the amount of excess reagent is determined by a titration. In a displacement titration the analyte displaces a reagent, usually from a complex, and the amount of displaced reagent is determined by an appropriate titration.

Titrimetric methods have been developed using acid–base, complexation, oxidation–reduction, and precipitation reactions. Acid–base titrations use a strong acid or a strong base as a titrant. The most common titrant for a complexation titration is EDTA. Because of their stability against air oxidation, most redox titrations use an oxidizing agent as a titrant. Titrations with reducing agents also are possible. Precipitation titrations often involve Ag+ as either the analyte or titrant.

Key terms: acid–base titration, argentometric titration, auxiliary oxidizing agent, buret, direct titration, equivalence point, Gran plot, Kjeldahl analysis, Mohr method, redox indicator, symmetric equivalence point, titrant, titrimetry, acidity, asymmetric equivalence point, auxiliary reducing agent, complexation titration, displacement titration, Fajans method, indicator, leveling, potentiometric titration, redox titration, thermometric titration, titration curve, Volhard method, alkalinity, auxiliary complexing agent, back titration, conditional formation constant, end point, formal potential, Jones reductor, metallochromic indicator, precipitation titration, spectrophotometric titration, titrand, titration error, Walden reductor.

This page titled 9.8: Chapter Summary and Key Terms is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
About the Author
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/00%3A_Front_Matter/About_the_Author
David Harvey, professor of chemistry and biochemistry at DePauw University, is the recipient of the 2016 American Chemical Society Division of Analytical Chemistry J. Calvin Giddings Award for Excellence in Education. The national award recognizes a scientist who has enhanced the professional development of analytical chemistry students, developed and published innovative experiments, designed and improved equipment or teaching labs and published influential textbooks or significant articles on teaching analytical chemistry.
InfoPage
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/00%3A_Front_Matter/02%3A_InfoPage
This text is disseminated via the Open Education Resource (OER) LibreTexts Project and, like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing, and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such efforts. Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts’ web-based origins allow powerful integration of advanced features and new technologies to support learning. The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields) integrated. The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant Nos. 1246120, 1525057, and 1413739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor the US Department of Education. Have questions or comments? More information on our activities can be found via Facebook, Twitter, or our blog. This text was compiled on 07/05/2023.
1.1: Installing and Accessing R and RStudio
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/01%3A_R_and_RStudio/1.01%3A_Installing_and_Accessing_R_and_RStudio
You can download and install R from the R-Project website. On the left side of the page, click on the link to CRAN under the title “Downloads.” Scroll through the list of CRAN mirror sites and click on the link to a site located near you. Versions are available for Mac OS, for Windows, and for Linux. Follow the directions for your operating system. You can download and install the RStudio Desktop Interface from the RStudio website. Click on the Download button for the free version of RStudio Desktop. From the list of available installers, click on the link that is appropriate for your operating system and follow the directions. When you launch RStudio, the program opens with four panes (although some panes may be minimized). Beginning in the lower left corner and moving clockwise, these panes are the console, where you enter commands and view their output; the source editor, where you write and edit script files; the environment and history pane, which lists the objects you create; and the files, plots, packages, help, and viewer pane. As you work with R, take time to examine each pane so that you become comfortable with them. For example, after highlighting several lines in a script file, such as lines 15–21 of "figures_11.R", and clicking Run, the lines of code are sent to the console where R processes them, in this case to create a figure that appears in the lower right pane.This page titled 1.1: Installing and Accessing R and RStudio is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
1.2: The Basics of Working With R
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/01%3A_R_and_RStudio/1.02%3A_The_Basics_of_Working_With_R
The symbol > in the console is the command prompt, which indicates that R is awaiting your instructions. When you type a command in the console and hit Enter or Return, R executes the command and displays any appropriate output in the console; thus, this command adds the numbers 1 and 3

1 + 3

and returns the number 4 as an answer.

[1] 4

The text above is a code block that contains the line of code to enter into the console and the output generated by R. The command prompt (>) is not included here so that you can, if you wish, copy and paste the code into R; if you are copying and pasting the code, do not include the output or R will return an error message. Note that the output here is preceded by the number 1 in brackets, which is the id number of the first value returned on that line.

This is all well and good, but it is even less useful than a calculator because we cannot operate further on the result. If we assign this calculation to an object using an assignment operator, then the result of the calculation remains available to us. There are two common leftward assignment operators in R: an arrow that points from right-to-left, <-, which means the value on the right is assigned to the object on the left, and an equals sign, =. Most style guides for R favor <- over =, but as = is the more common option in most other programming languages—such as Python, C++, and Matlab—we will use it here.

If we assign our calculation to the object answer then the result of the calculation is assigned to the object but not returned to us. To see an object’s value we can look for it in RStudio’s Environment Panel or enter the object’s name as a command in the Console, as shown here.

answer = 1 + 3
answer
[1] 4

Note that an object’s name is case-sensitive so answer and Answer are different objects.

Answer = 2 + 4
Answer
[1] 6

There are just a few limitations to the names you can assign to objects: they can include letters (both upper and lower case), numbers, dots (.), or underscores (_), but not spaces. A name can begin with a letter or with a dot followed by a letter (but not a dot followed by a number). Here are some examples of valid names

answerone answer_one answer1 answerOne answer.one

and examples of invalid names

1stanswer answer* first answer

You will find it helpful to use names that remind you of the object's meaning and that are not overly long. My personal preference is to use all lowercase letters, to use a descriptive noun, and to separate words using an underscore as I find that these choices make my code easier to read. When I find it useful to use the same base name for several objects of different types, then I may append a two or three letter designation to the name similar to the extensions that designate, for example, a spreadsheet stored as a .csv file. For example, when I use R to run a linear regression based on Beer's law, I may store the concentrations and absorbances of my standards in a data frame (see below for a description of data frames) with a name such as zinc.df and store the output of the linear model (see Chapter 8 for a discussion of linear models) in an object with a name such as zinc.lm.

In the code above, answer and Answer are objects that store a single numerical value. There are several different types of objects we can use to store data, including vectors, data frames, matrices and arrays, and lists.

A vector is an ordered collection of elements of the same type, which may be numerical values, integer values, logical values, or character strings.
Note that ordered does not imply that the values are arranged from smallest-to-largest or from largest-to-smallest, or in alphabetical order; it simply means the vector’s elements are stored in the order in which we enter them into the object. The length of a vector is the number of elements it holds. The objects answer and Answer, for example, are vectors with lengths of 1.

length(answer)
[1] 1

Most of the vectors we will use include multiple elements. One way to create a vector with multiple elements is to use the concatenation function, c( ). In the code blocks below and elsewhere, any text that follows a hashtag, #, is a comment that explains what the line of code is accomplishing; comments are not executable code, so R simply ignores them. For example, we can create a vector of numerical values,

v00 = c(1.1, 2.2, 3.3)
v00
[1] 1.1 2.2 3.3

or a vector of integers,

v01 = c(1:3)
v01
[1] 1 2 3

or a vector of logical values,

v02 = c(TRUE, TRUE, FALSE) # we also could enter this as c(T, T, F)
v02
[1] TRUE TRUE FALSE

or a vector of character strings

v03 = c("alpha", "bravo", "charley")
v03
[1] "alpha" "bravo" "charley"

You can view an object’s structure by examining it in the Environment Panel or by using R’s structure command, str( ), which, for example, identifies the vector v02 as a logical vector with an index for its entries of 1, 2, and 3, and with values of TRUE, TRUE, and FALSE.

str(v02)
logi [1:3] TRUE TRUE FALSE

We can use a vector’s index to correct errors, to add additional values, or to create a new vector using already existing vectors. Note that the number within the square brackets, [ ], identifies the element in the vector of interest. For example, the correct spelling for the third element in v03 is charlie, not charley; we can correct this using the following line of code.

v03[3] = "charlie" # correct the vector's third value
v03
[1] "alpha" "bravo" "charlie"

We can also use the square bracket to add a new element to an existing vector,

v00[4] = 4.4 # add a fourth element to the existing vector, increasing its length
v00
[1] 1.1 2.2 3.3 4.4

or to create a new vector using elements from other vectors.

v04 = c(v01, v02, v03)
v04
[1] "1" "2" "3" "TRUE" "TRUE" "FALSE" "alpha" "bravo" "charlie"

Note that the elements of v04 are character strings even though v01 contains integers and v02 contains logical values. This is because the elements of a vector must be of the same type, so R coerces them to a common type, in this case a vector of character strings. Here are several ways to create a vector when its entries follow a defined sequence, seq( ), or use a repetitive pattern, rep( ).

v05 = seq(from = 0, to = 20, by = 4)
v05
[1] 0 4 8 12 16 20
v06 = seq(0, 10, 2) # R assumes the values are provided in the order from, to, and by
v06
[1] 0 2 4 6 8 10
v07 = rep(1:4, times = 2) # repeats the pattern 1, 2, 3, 4 twice
v07
[1] 1 2 3 4 1 2 3 4
v08 = rep(1:4, each = 2) # repeats each element in the string twice before proceeding to the next element
v08
[1] 1 1 2 2 3 3 4 4

Note that 1:4 is equivalent to c(1, 2, 3, 4) or to seq(1, 4, 1).
In R it often is the case that there are multiple ways to accomplish the same thing! Finally, we can complete mathematical operations using vectors, make logical inquiries of vectors, and create sub-samples of vectors.

v09 = v08 - v07 # subtract two vectors, which must be of equal length
v09
[1] 0 -1 -1 -2 2 1 1 0
v10 = (v09 == 0) # returns TRUE for each element in v09 that equals zero
v10
[1] TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
v11 = which(v09 < 1) # returns the index for each element in v09 that is less than 1
v11
[1] 1 2 3 4 8
v12 = v09[!v09 < 1] # returns values for elements in v09 whose values are not less than 1
v12
[1] 2 1 1

A data frame is a collection of vectors—all equal in length but not necessarily of a single type of element—arranged with the vectors as the data frame's columns.

df01 = data.frame(v07, v08, v09, v10)
df01
  v07 v08 v09   v10
1   1   1   0  TRUE
2   2   1  -1 FALSE
3   3   2  -1 FALSE
4   4   2  -2 FALSE
5   1   3   2 FALSE
6   2   3   1 FALSE
7   3   4   1 FALSE
8   4   4   0  TRUE

We can access the elements in a data frame using the data frame's index, which takes the form [row number(s), column number(s)], where [ is the bracket operator.

df02 = df01[1, ] # returns all elements in the data frame's first row
df02
  v07 v08 v09  v10
1   1   1   0 TRUE
df03 = df01[ , 3:4] # returns all elements in the data frame's third and fourth columns
df03
  v09   v10
1   0  TRUE
2  -1 FALSE
3  -1 FALSE
4  -2 FALSE
5   2 FALSE
6   1 FALSE
7   1 FALSE
8   0  TRUE
df04 = df01[4, 3] # returns the element in the data frame's fourth row and third column
df04
[1] -2

We can also extract a single column from a data frame using the dollar sign ($) operator to designate the column's name

df05 = df01$v08
df05
[1] 1 1 2 2 3 3 4 4

If you look carefully at the output above you will see that extracting a single row or multiple columns using the [ operator returns a new data frame. Extracting a single element from a data frame using the bracket operator, or a single column using the $ operator, returns a vector.

A matrix is similar to a data frame, but every element in a matrix is of the same type, usually numerical.

m01 = matrix(1:10, nrow = 5) # places the numbers 1:10 in a matrix with five rows, filling by column
m01
     [,1] [,2]
[1,]    1    6
[2,]    2    7
[3,]    3    8
[4,]    4    9
[5,]    5   10
m02 = matrix(1:10, ncol = 5) # places the numbers 1:10 in a matrix with five columns, also filling by column
m02
     [,1] [,2] [,3] [,4] [,5]
[1,]    1    3    5    7    9
[2,]    2    4    6    8   10

A matrix has two dimensions and an array has three or more dimensions. A list is an object that holds other objects, even if those objects are of different types.

li01 = list(v00, df01, m01)
li01
[[1]]
[1] 1.1 2.2 3.3 4.4
[[2]]
  v07 v08 v09   v10
1   1   1   0  TRUE
2   2   1  -1 FALSE
3   3   2  -1 FALSE
4   4   2  -2 FALSE
5   1   3   2 FALSE
6   2   3   1 FALSE
7   3   4   1 FALSE
8   4   4   0  TRUE
[[3]]
     [,1] [,2]
[1,]    1    6
[2,]    2    7
[3,]    3    8
[4,]    4    9
[5,]    5   10

Note that the double bracket, such as [[1]], identifies an object in the list and that we can extract values from this list using this notation.

li01[[1]] # extract the first object stored in the list
[1] 1.1 2.2 3.3 4.4
li01[[1]][1] # extract the first value of the first object stored in the list
[1] 1.1

Although you can enter commands directly into RStudio’s Console Panel and execute them, you will find it much easier to write your commands in a script file and send them to the console line-by-line, as groups of two or more lines, or all at once by sourcing the file. You will make errors as you enter code.
When your error is in one line of a multi-line script, you can fix the error and then rerun the script at once without the need to retype each line directly into the console. To open a script file, select File: New File: R Script from the main menu. To save your script file, which will have .R as an extension, select File: Save from the main menu and navigate to the folder where you wish to save the file. As an example, enter the following lines of code into a script file

x1 = runif(1000) # a vector of 1000 values drawn at random from a uniform distribution
x2 = runif(1000) # another vector of 1000 values drawn at random from a uniform distribution
y1 = rnorm(1000) # a vector of 1000 values drawn at random from a normal distribution
y2 = rnorm(1000) # another vector of 1000 values drawn at random from a normal distribution
old.par = par(mfrow = c(2, 2)) # create a 2 x 2 grid for plots
plot(x1, x2) # create a scatterplot of two vectors
plot(y1, y2)
plot(x1, y1)
plot(x2, y2)
par(old.par) # restore the initial plot conditions (more on this later)

save it as test_script.R, and then click the Source button; you should see the resulting plots appear in the Plot tab.

Although creating a small vector, data frame, matrix, array, or list is easy, creating one with hundreds of elements or creating dozens of individual data objects is tedious at best; thus, the ability to load data saved during an earlier session, or the ability to read in a spreadsheet file, is helpful. To read in a spreadsheet file saved in .csv format (comma separated values), we use R's read.csv() function, which takes the general form

read.csv(file)

where file provides the path to the file. This is easiest to manage if you navigate to the folder where your .csv file is stored using RStudio's file pane and then set it as the working directory by clicking on More and selecting Set As Working Directory. Download the file "element_data.csv" and store it in a folder on your computer. Navigate to this folder and set it as your working directory. Enter the following line of code

elements = read.csv(file = "element_data.csv")

to read the file's data into a data frame named elements. To view the data frame's contents we use the head() function to display the first six rows of data.

head(elements)
       name symbol at_no     at_wt      mp      bp phase electronegativity electron_affinity
1  Hydrogen      H     1  1.007940   14.01   20.28   Gas              2.20              72.8
2    Helium     He     2  4.002602      NA    4.22   Gas                NA               0.0
3   Lithium     Li     3  6.941000  453.69 1615.15 Solid              0.98              59.6
4 Beryllium     Be     4  9.012182 1560.15 2743.15 Solid              1.57               0.0
5     Boron      B     5 10.811000 2348.15 4273.15 Solid              2.04              26.7
6    Carbon      C     6 12.010700 3823.15 4300.15 Solid              2.55             153.9
  block group period at_radius covalent_radius
1     s     1      1  5.30e-11        3.70e-11
2     p    18      1  3.10e-11        3.20e-11
3     s     1      2  1.67e-10        1.34e-10
4     s     2      2  1.12e-10        9.00e-11
5     p    13      2  8.70e-11        8.20e-11
6     p    14      2  6.70e-11        7.70e-11

Note that cells in the spreadsheet with missing values appear here as NA for not available. The melting points (mp) and boiling points (bp) are in Kelvin, and the electron affinities are in kJ/mol. You can save to your working directory the contents of a data frame by using the write.csv() function; thus, we can save a copy of the data in elements using the following line of code

write.csv(elements, file = "element_data_copy.csv")

Another way to save multiple objects is to use the save() function to create an .RData file.
For example, to save the vectors v00, v01, and v02 to a file with the name vectors.RData, entersave(v00, v01, v02, file = "vectors.RData") To read in the objects in an .RData file, navigate to the folder that contains the file, click on the file's name and RStudio will ask if you wish to load the file into your session.The base installation of R provides many useful functions for working with data. The advantage of these functions is that they work (always a plus) and they are stable (which means they will continue to work even as R is updated to new versions). For the most part, we will rely on R’s built in functions for these two reasons. When we need capabilities that are not part of R’s base installation, then we must write our own functions or use packages of functions written by others.To install a package of functions, click on the Packages tab in the Files, Plots, Packages, Help & Viewer pane. Click on the button labeled Install, enter the name of the package you wish to install, and click on Install to complete the installation. You only need to install a package once.To use a package that is not part of R’s base installation, you need to bring it into your current session, which you do with the command library(name of package) or by clicking on the checkbox next to the name of the package in the list of your installed packages. Once you have loaded the package into your session, it remains available to you until you quit RStudio.One nice feature of RStudio is that the Environment Panel provides a list of the objects you create. If your environment becomes too cluttered, you can delete items by switching to the Grid view, clicking on the check-box next to the object(s) you wish to delete, and then clicking on the broom icon. You can remove all items from the List view by simply clicking on the broom icon.There are extensive help files for R's functions that you can search for using the Help Panel or by using the help() command. A help file shows you the command’s proper syntax, including the types of values you can pass to the command and their default values, if any—more details on this later—and provides you with some examples of how the command is used. R's help files can be difficult to parse at times; you may find it more helpful to simply use a search engine to look for information about "how to use in R." Another good source for finding help with R is stackoverflow.This page titled 1.2: The Basics of Working With R is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
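Installing and loading a package also can be done from the console instead of through the Packages tab. The short sketch below is illustrative only: the package ggplot2 is an arbitrary example of a CRAN package (any package name works the same way), and mean() is simply a convenient function for showing how to open a help page.

install.packages("ggplot2") # install a package; this is needed only once per computer
library(ggplot2)            # load the package into the current session; needed once per session
help(mean)                  # open the help page for a function
?mean                       # a shortcut for the same help page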
10.1: Signals and Noise
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/10%3A_Cleaning_Up_Data/10.1%3A_Signals_and_Noise
When we make a measurement it is the sum of two parts, a determinate, or fixed contribution that arises from the analyte and an indeterminate, or random, contribution that arises from uncertainty in the measurement process. We call the first of these the signal and we call the latter the noise. There are two broad categories of noise: that associated with obtaining samples and that associated with making measurements. Our interest here is in the latter.

Noise is a random event characterized by a mean and standard deviation. There are many types of noise, but we will limit ourselves for now to noise that is stationary, in that its mean and its standard deviation are independent of time, and that is homoscedastic, in that its mean and its variance (and standard deviation) are independent of the signal's magnitude. As an example, consider a noisy signal that meets these criteria in which the x-axis is time—perhaps a chromatogram—although other units, such as wavelength or potential, are possible. If we plot the underlying noise and the underlying signal separately, the noise appears consistent in its central tendency (mean) and its spread (variance) along the x-axis and is independent of the signal's strength. Although we characterize noise by its mean and its standard deviation, the most important benchmark is the signal-to-noise ratio, \(S/N\), which we define as

\[S/N = \frac{S_\text{analyte}}{s_\text{noise}} \nonumber\]

where \(S_\text{analyte}\) is the signal's value at a particular location on the x-axis and \(s_\text{noise}\) is the standard deviation of the noise using a signal-free portion of the data. As general rules-of-thumb, we can measure the signal with some confidence when \(S/N \ge 3\) and we can detect the signal with some confidence when \(3 \ge S/N \ge 2\). For the three peaks in the example just described, the signal-to-noise ratios are, from left-to-right, 10, 6, and 3.

To measure the signal with confidence implies we can use the signal's value in a calculation, such as constructing a calibration curve. To detect the signal with confidence means we are certain that a signal is present (and that an analyte responsible for the signal is present) even if we cannot measure the signal with sufficient confidence to allow for a meaningful calculation.

There are two broad approaches that we can use to improve the signal-to-noise ratio: hardware and software. Hardware approaches are built into the instrument and include decisions on how the instrument is set-up for making measurements (for example, the choice of a scan rate or a slit width), and how the signal is processed by the instrument (for example, using electronic filters); such solutions are not of interest to us here in a textbook with a focus on chemometrics. Software solutions are computational approaches in which we manipulate the data either while we are collecting it or after data acquisition is complete. This page titled 10.1: Signals and Noise is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
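The definition of \(S/N\) is easy to make concrete with a short R sketch. Section 10.4 develops this calculation as part of a fuller example; the version below uses simulated data whose specific values (a Gaussian peak with a maximum near 50 and noise with a standard deviation of 10) are illustrative assumptions, not data from the text.

x = seq(1, 256, 1)
pure_signal = 1250 * dnorm(x, mean = 125, sd = 10) # simulated peak with a maximum near 50 (assumed)
noise = rnorm(256, mean = 0, sd = 10)              # stationary, homoscedastic noise (assumed sd)
noisy_signal = pure_signal + noise
s_to_n = max(pure_signal)/sd(noisy_signal[1:50])   # the first 50 points are signal-free
s_to_n                                             # a value near 5, large enough to measure the signal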
10.2: Improving the Signal-to-Noise Ratio
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/10%3A_Cleaning_Up_Data/10.2%3A_Signal_Averaging
In this section we will consider three common computational tools for improving the signal-to-noise ratio: signal averaging, digital smoothing, and Fourier filtering.

The most important difference between the signal and the noise is that a signal is determinate (fixed in value) and the noise is indeterminate (random in value). If we measure a pure signal several times, we expect its value to be the same each time; thus, if we add together n scans, we expect that the net signal, \(S_n\), is defined as

\[S_n = n S \nonumber\]

where \(S\) is the signal for a single scan. Because noise is random, its value varies from one run to the next, sometimes with a value that is larger and sometimes with a value that is smaller, and sometimes with a value that is positive and sometimes with a value that is negative. On average, the standard deviation of the noise increases as we make more scans, but it does so at a slower rate than for the signal

\[s_n = \sqrt{n} s \nonumber \]

where \(s\) is the standard deviation for a single scan and \(s_n\) is the standard deviation after n scans. Combining these two equations shows us that the signal-to-noise ratio, \(S/N\), after n scans increases as

\[(S/N)_n = \frac{S_n}{s_n} = \frac{nS}{\sqrt{n}s} = \sqrt{n}(S/N)_{n = 1} \nonumber\]

where \((S/N)_{n = 1}\) is the signal-to-noise ratio for the initial scan. Thus, when \(n = 4\) the signal-to-noise ratio improves by a factor of 2, and when \(n = 16\) the signal-to-noise ratio increases by a factor of 4; this improvement is easy to see when comparing the results for 1, 2, 4, and 8 scans. Signal averaging works well when the time it takes to collect a single scan is short and when the analyte's signal is stable with respect to time both because the sample is stable and the instrument is stable; when this is not the case, then we risk a time-dependent change in \(S_\text{analyte}\) and/or \(s_\text{noise}\). Because the equation for \((S/N)_n\) is proportional to the \(\sqrt{n}\), the relative improvement in the signal-to-noise ratio decreases as \(n\) increases; for example, 16 scans gives a \(4 \times\) improvement in the signal-to-noise ratio, but it takes an additional 48 scans (for a total of 64 scans) to achieve an \(8 \times\) improvement in the signal-to-noise ratio.

One characteristic of noise is that its magnitude fluctuates rapidly in contrast to the underlying signal. We see this, for example, in the noisy signals of the previous section, where the underlying signal either remains constant or steadily increases or decreases while the noise fluctuates chaotically. Digital smoothing filters take advantage of this by using a mathematical function to average the data for a small range of consecutive data points, replacing the range's middle value with the average signal over that range.

For a moving average filter, we replace each point by the average signal for that point and an equal number of points on either side; thus, a moving average filter has a width, \(w\), of 3, 5, 7, ... points. For example, suppose the first five points in a sequence are 0.80, 0.30, 0.80, 0.20, and 1.00; then a three-point moving average (\(w = 3\)) returns values of 0.63, 0.43, and 0.67 for the second, third, and fourth points, where, for example, 0.63 is the average of 0.80, 0.30, and 0.80. Note that we lose \((w - 1)/2 = (3 - 1)/2 = 1\) points at each end of the data set because we do not have a sufficient number of data points to complete a calculation for the first and the last point.
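As a quick check of this worked example, the short sketch below uses R's filter() function, which is introduced more fully in Section 10.4, to apply the three-point moving average to these five points.

pts = c(0.80, 0.30, 0.80, 0.20, 1.00)
ma3 = rep(1/3, 3) # equal weights for a three-point moving average
filter(pts, ma3)  # returns NA 0.63 0.43 0.67 NA; the first and last points are lost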
The improvement in the \(S/N\) ratio is evident when we apply moving average filters with widths of 5, 9, and 13 to a noisy signal. One limitation to a moving average filter is that it distorts the original data by removing points from both ends, although this is not a serious concern if the points in question are just noise. Of greater concern is the distortion in a signal's height if we use a range that is too wide; for example, a 23-point moving average filter applied to a noisy signal whose peak has a similar width reduces the height of the original signal. Because the filter's width is similar to the peak's width, as the filter passes through the peak it systematically reduces the signal by averaging together values that are mostly smaller than the maximum signal.

A moving average filter weights all points equally; that is, points near the edges of the filter contribute to the average at a level equal to points near the filter's center. A Savitzky-Golay filter uses a polynomial model that weights each point differently, placing more weight on points near the center of the filter and less weight on points at the edge of the filter. Specific values depend on the size of the window and the polynomial model; for example, a five-point filter using a second-order polynomial has weights of

\[-3/35 \quad \quad 12/35 \quad \quad 17/35 \quad \quad 12/35 \quad \quad -3/35 \nonumber \]

For example, if the first five points in a sequence again are 0.80, 0.30, 0.80, 0.20, and 1.00, then this Savitzky-Golay filter returns a value of 0.41 for the middle point, where

\[0.80 \times \frac{-3}{35} + 0.30 \times \frac{12}{35} + 0.80 \times \frac{17}{35} + 0.20 \times \frac{12}{35} + 1.00 \times \frac{-3}{35} = 0.406 \approx 0.41 \nonumber \]

Note that we lose \((w - 1)/2 = (5 - 1)/2 = 2\) points at each end of the data set, where w is the filter's range, because we do not have a sufficient number of data points to complete the calculations. For other Savitzky-Golay smoothing filters, see Savitzky, A.; Golay, M. J. E. Anal Chem, 1964, 36, 1627-1639. Savitzky-Golay filters using a second-order polynomial with 5, 9, and 13 points show a similar improvement in the \(S/N\) ratio. Because a Savitzky-Golay filter weights points differently than does a moving average smoothing filter, a Savitzky-Golay filter introduces less distortion to the signal.

The third approach to improving the signal-to-noise ratio takes advantage of a mathematical technique called a Fourier transform (FT). The basis of a Fourier transform is that we can express a signal in two separate domains. In the first domain the signal is characterized by one or more peaks, each defined by its position, its width, and its area; this is called the frequency domain. In the second domain, which is called the time domain, the signal consists of a set of oscillations, each defined by its frequency, its amplitude, and its decay rate. The Fourier transform—and the inverse Fourier transform—allow us to move between these two domains. The mathematical details behind the Fourier transform are beyond the level of this textbook; for a more in-depth treatment, consult this chapter's resources. A single peak in the frequency domain has an equivalent signal in the time domain, and there are correlations between the two domains: a broader peak in the frequency domain, for example, corresponds to a time domain signal that decays more quickly. We can use a Fourier transform to improve the signal-to-noise ratio because the signal is a single broad peak and the noise appears as a multitude of very narrow peaks.
As noted above, a broad peak in the frequency domain has a fast decaying signal in the time domain, which means that while the beginning of the time domain signal includes contributions from the signal and the noise, the latter part of the time domain signal includes contributions from noise only. We can take advantage of this to reduce the noise and improve the signal-to-noise ratio for the noisy signal introduced earlier, which has 256 points along the x-axis and has a signal-to-noise ratio of 5.1. First, we use the Fourier transform to convert its original domain into the new domain and examine the first 128 points (note: the first half of the data contains the same information as the second half of the data, so we only need to look at the first half of the data). The points at the beginning are dominated by the signal, which is why there is a systematic decrease in the intensity of the oscillations; the remaining points are dominated by noise, which is why the variation in intensity is random. To filter out the noise we retain the first 24 points as they are and set the intensities of the remaining points to zero (the choice of how many points to retain may require some adjustment). We repeat this for the remaining 128 points, retaining the last 24 points as they are. Finally, we use an inverse Fourier transform to return to our original domain, with the signal-to-noise ratio improving from 5.1 for the original noisy signal to 11.2 for the filtered signal.This page titled 10.2: Improving the Signal-to-Noise Ratio is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
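Of the three tools in this section, signal averaging is the one that does not reappear in the R examples of Section 10.4, so a brief sketch is included here. The simulated peak and noise are assumptions patterned on the data used in Section 10.4, not values from the text; the point of the sketch is the expected \(\sqrt{n}\) improvement when averaging 16 scans.

x = seq(1, 256, 1)
pure_signal = 1250 * dnorm(x, mean = 125, sd = 10)               # simulated peak with a maximum near 50
single_scan = pure_signal + rnorm(256, mean = 0, sd = 10)        # one noisy scan
scans = replicate(16, pure_signal + rnorm(256, mean = 0, sd = 10)) # 16 independent scans, one per column
avg_scan = rowMeans(scans)                                       # average the scans point-by-point
max(pure_signal)/sd(single_scan[1:50])                           # S/N for a single scan, roughly 5
max(pure_signal)/sd(avg_scan[1:50])                              # S/N after 16 scans, roughly 4x larger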
10.3: Background Removal
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/10%3A_Cleaning_Up_Data/10.3%3A_Background_Removal
Another form of noise is a systematic background signal on which the analytical signal of interest is overlaid. For example, the following figure shows a Gaussian signal with a maximum value of 50 centered at \(x = 125\) superimposed on an exponential background. The dotted line is the Gaussian signal, which has a maximum value of 50 at \(x = 125\), and the solid line is the signal as measured, which has a maximum value of 57 at \(x = 125\). If the background signal is consistent across all samples, then we can analyze the data without first removing its contribution. For example, a set of calibration standards measured against a constant background yields a calibration curve whose y-intercept of 7 gives the offset introduced by the background.

But background signals often are not consistent across samples, particularly when the source of the background is a property of the samples we collect (natural water samples, for example, may have variations in color due to differences in the concentration of dissolved organic matter) or a property of the instrument we are using (such as a variation in source intensity over time). When this is true, our data leads to a calibration curve with a greater uncertainty. Because the background changes gradually with the values for x while the analyte's signal changes quickly, we can use a derivative to distinguish between the two. One approach is to use a Savitzky-Golay derivative filter, applied in the same way as the smoothing filters described in the last section. For example, applying a 7-point first-derivative Savitzky-Golay filter with weights of

\[ -3/28 \quad \quad -2/28 \quad \quad -1/28 \quad \quad 0/28 \quad \quad 1/28 \quad \quad 2/28 \quad \quad 3/28 \nonumber\]

to these data gives the results shown below. The calibration signal in this case is the difference between the maximum signal and the minimum signal of the derivative. The fit of the calibration curve to the data and the calibration curve's y-intercept of zero show that we have successfully compensated for the background signals. For other Savitzky-Golay derivative filters, including second-derivative filters, see Savitzky, A.; Golay, M. J. E. Anal Chem, 1964, 36, 1627-1639.This page titled 10.3: Background Removal is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
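Section 10.4 shows how to apply a derivative filter in R; the sketch below bridges to that discussion by applying the seven-point first-derivative weights given above to simulated data and taking the difference between the derivative's maximum and minimum as the calibration signal. The Gaussian peak and exponential background are assumptions patterned on the description above, not data from the text.

x = seq(1, 256, 1)
analyte = 1250 * dnorm(x, mean = 125, sd = 10) # assumed Gaussian peak with a maximum near 50
bkgd = 30 * exp(-0.01 * x)                     # assumed exponential background
measured = analyte + bkgd
sg_fd_7 = c(-3, -2, -1, 0, 1, 2, 3)/28         # the seven-point first-derivative weights given above
deriv = filter(measured, sg_fd_7)
max(deriv, na.rm = TRUE) - min(deriv, na.rm = TRUE) # the calibration signal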
10.4: Using R to Clean Up Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/10%3A_Cleaning_Up_Data/10.4%3A_Using_R_to_Clean_Up_Data
R has two useful functions, filter() and fft(), that we can use to smooth or filter noise and to remove background signals. To explore their use, let's first create two sets of data that we can use as examples: a noisy signal and a pure signal superimposed on an exponential background. To create the noisy signal, we first create a vector of 256 values that defines the x-axis; although we will not specify a unit here, these could be times or frequencies. Next we use R's dnorm() function to generate a pure Gaussian signal with a mean of 125 and a standard deviation of 10, and R's rnorm() function to generate 256 points of random noise with a mean of zero and a standard deviation of 10. Finally, we add the pure signal and the noise to arrive at our noisy signal and then plot the noisy signal and overlay the pure signal.

x = seq(1, 256, 1)
gaus_signal = 1250 * dnorm(x, mean = 125, sd = 10)
noise = rnorm(256, mean = 0, sd = 10)
noisy_signal = gaus_signal + noise
plot(x = x, y = noisy_signal, type = "l", lwd = 2, col = "blue", xlab = "x", ylab = "signal")
lines(x = x, y = gaus_signal, lwd = 2)

To estimate the signal-to-noise ratio, we use the maximum of the pure signal and the standard deviation of the noisy signal as determined using 100 points divided evenly between the two ends.

s_to_n = max(gaus_signal)/sd(noisy_signal[c(1:50,201:250)])
s_to_n
5.14663

To create a signal superimposed on an exponential background, we use R's exp() function to generate 256 points for the background's signal, add that to our pure Gaussian signal, and plot the result.

exp_bkgd = 30*exp(-0.01 * x)
plot(x, exp_bkgd, type = "l")
signal_bkgd = gaus_signal + exp_bkgd
plot(x = x, y = signal_bkgd, type = "l", lwd = 2, col = "blue", xlab = "x", ylab = "signal", ylim = c(0, 60)) # y-axis limits chosen to show both the signal and the background
lines(x = x, y = gaus_signal, lwd = 2, lty = 2)

R's filter() function takes the general form

filter(x, filter)

where x is the object being filtered and filter is an object that contains the filter's coefficients. To create a seven-point moving average filter, we use the rep() function to create a vector that has seven identical values, each equal to 1/7.

mov_avg_7 = rep(1/7, 7)

Applying this filter to our noisy signal returns the following result

noisy_signal_movavg = filter(noisy_signal, mov_avg_7)
plot(x = x, y = noisy_signal_movavg, type = "l", lwd = 2, col = "blue", xlab = "x", ylab = "signal")
lines(x = x, y = gaus_signal, lwd = 2)

with the signal-to-noise ratio improved to

s_to_n_movavg = max(gaus_signal)/sd(noisy_signal_movavg[c(1:50,200:250)], na.rm = TRUE)
s_to_n_movavg
11.29943

Note that we must add na.rm = TRUE to the sd() function because applying a seven-point moving average filter replaces the first three and the last three points with values of NA, which we must tell the sd() function to ignore. To create a seven-point Savitzky-Golay smoothing filter, we create a vector to store the coefficients, obtaining the values from the original paper (Savitzky, A.; Golay, M. J. E.
Anal Chem, 1964, 36, 1627-1639), and then apply it to our noisy signal, obtaining the results below.

sg_smooth_7 = c(-2, 3, 6, 7, 6, 3, -2)/21
noisy_signal_sg = filter(noisy_signal, sg_smooth_7)
plot(x = x, y = noisy_signal_sg, type = "l", lwd = 2, col = "blue", xlab = "x", ylab = "signal")
lines(x = x, y = gaus_signal, lwd = 2)
s_to_n_sg = max(gaus_signal)/sd(noisy_signal_sg[c(1:50,200:250)], na.rm = TRUE)
s_to_n_sg
7.177931

To remove a background from a signal, we use the same approach, substituting a first-derivative (or higher order) Savitzky-Golay filter.

sg_fd_7 = c(22, -67, -58, 0, 58, 67, -22)/252
signal_bkgd_sg = filter(signal_bkgd, sg_fd_7)
plot(x = x, y = signal_bkgd_sg, type = "l", lwd = 2, col = "blue", xlab = "x", ylab = "signal")

To complete a Fourier transform in R we use the fft() function, which takes the form

fft(z, inverse = FALSE)

where z is the object that contains the values to which we wish to apply the Fourier transform and where setting inverse = TRUE allows for an inverse Fourier transform. Before we apply Fourier filtering to our noisy signal, let's first apply the fft() function to a vector that contains the integers 1 through 8.

test_vector = seq(1, 8, 1)
test_vector_ft = fft(test_vector)
test_vector_ft
36+0.000000i -4+9.656854i -4+4.000000i -4+1.656854i -4+0.000000i -4-1.656854i -4-4.000000i -4-9.656854i

Each of the eight results is a complex number with a real and an imaginary component. Note that the real component of the first value is 36, which is the sum of the elements in our test vector. Note, also, the symmetry in the remaining values where the second and eighth values, the third and seventh values, and the fourth and sixth values are identical except for a change in sign for the imaginary component. Taking the inverse Fourier transform returns the original eight values (note that the imaginary terms are now zero), but each is eight times larger in value than in our original vector.

test_vector_ifft = fft(test_vector_ft, inverse = TRUE)
test_vector_ifft
8+0i 16-0i 24+0i 32+0i 40+0i 48+0i 56-0i 64+0i

To compensate for this, we divide by the length of our vector

test_vector_ifft = fft(test_vector_ft, inverse = TRUE)/length(test_vector)
test_vector_ifft
1+0i 2-0i 3+0i 4+0i 5+0i 6+0i 7-0i 8+0i

which returns our original vector. With this background in place, let's use R to complete a Fourier filtering of our noisy signal. First, we complete the Fourier transform of the noisy signal and examine the values for the real component, using R's Re() function to extract them. Because of the symmetry noted above, we need only look at the first half of the real components (x = 1 to x = 128).

noisy_signal_ft = fft(noisy_signal)
plot(x = x[1:128], y = Re(noisy_signal_ft)[1:128], type = "l", col = "blue", xlab = "", ylab = "intensity", lwd = 2)

Next, we look for where the signal's magnitude has decayed to what appears to be random noise and set these values to zero.
In this example, we retain the first 24 points (and the last 24 points; remember the symmetry noted above) and set both the real and the imaginary components to 0 + 0i.noisy_signal_ft[25:232] = 0 + 0i plot(x = x, y = Re(noisy_signal_ft), type = "l", col = "blue", xlab = "", ylab = "intensity", lwd = 2)Finally, we take the inverse Fourier transform and display the resulting filtered signal and report the signal-to-noise ratio.noisy_signal_ifft = fft(noisy_signal_ft, inverse = TRUE)/length(noisy_signal_ft) plot(x = x, y = Re(noisy_signal_ifft), type = "l", col = "blue", xlab = "", ylab = "intensity", ylim = c(-20,60), lwd = 3) lines(x = x,y = gaus_signal,lwd =2, col = "black")s_to_n = 50/sd(Re(noisy_signal_ifft)[c(1:50,200:250)], na.rm = TRUE) s_to_n 9.695329This page titled 10.4: Using R to Clean Up Data is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
10.5: Exercises
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/10%3A_Cleaning_Up_Data/10.5%3A_Exercises
1. The goal when smoothing data is to improve the signal-to-noise ratio without distorting the underlying signal. The data in the file problem10_1.csv consists of four columns of data: the vector x, which contains 200 values for plotting on the x-axis; the vector y, which contains 200 values for a step-function that satisfies the following criteria \[y = 0 \text{ for } x \le 75 \text{ and for } x \ge 126 \nonumber\] \[y = 1 \text{ for } 75 < x < 126 \nonumber\] the vector n, which contains 200 values drawn from random normal distribution with a mean of 0 and standard deviation of 0.1, and the vector s, which is the sum of y and n. In essence, y is the pure signal, n is the noise, and s is a noisy signal. Using this data, complete the following tasks:(a) Determine the mean signal, the standard deviation of the noise, and the signal-to-noise ratio for the noisy signal using just the data in the object s.(b) Explore the effect of applying to the noisy signal, one pass each of moving average filters of widths 5, 7, 9, 11, 13, 15, and 17. For each moving average filter, determine the mean signal, the standard deviation of the noise, and the signal-to-noise ratio. Organize these measurements using a table and comment on your results. Prepare a single plot that displays the original noisy signal and the smoothed signals using widths of 5, 9, 13, and 17, off-setting each so that all five signals are displayed. Comment on your results.(c) Repeat the calculations in (b) using Savitzky-Golay quadratic/cubic smoothing filters of widths 5, 7, 9, 11, 13, 15, and 17; see the original paper for each filter's coefficients.(d) Considering your results for (b) and for (c), what filter and what width provides the greatest improvement in the signal-to-noise ratio with the least distortion of the original signal’s step-function? Be sure to justify your choice.2. The file problem10_2.csv consists of two columns, each with 1024 points: x is an index for the x-axis and y is noisy data with a hint of a signal. Show that there is a signal in this file by using any one moving average or Savitzky-Golay smoothing filter of your choice and using a Fourier filter. Present your results in a single figure that shows the original signal, the signal after smoothing, and the signal after Fourier filtering. Comment on your results.3. The file problem 10_3.csv consists of six columns: x is an index for the x-axis and y1, y2, y3, y4, and y5 are signals superimposed on a variable background. Use a Savitzky-Golay nine-point cubic second-derivative filter to remove the background from the data and then build a calibration model using these results, and report the calibration equation and a plot of the calibration curve. See the original paper for the filter's coefficients.This page titled 10.5: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
11.1: What Do We Mean By Structure?
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.01%3A_What_Do_We_Mean_By_Structure_and_Order%3F
The signals we measure include contributions from determinate and indeterminate sources, with the determinate components resulting from the analytes in our sample and with the indeterminate sources resulting from noise. When we describe our data as having structure, or that we are looking for structure in our data, our interest is in the determinate contributions to the signal. Consider, for example, the data in the following figure, which shows the visible spectra for 24 samples at 635 wavelengths. Each curve in this figure, such as the one shown in red, is one of the 24 samples that make up this data set and shows the extent to which each of the 635 discrete wavelengths of light is absorbed by that sample: this is the determinate contribution to the data. Looking closely at the spectrum shown in red, we see small variations in the absorbance superimposed on the determinate signal: this is the indeterminate contribution to the data. Although the 24 spectra, when first examined, may create a sense of disorder, there is a clear underlying structure to the data. For example, there are four apparent peaks centered at wavelengths around 400 nm, 500 nm, 580 nm, and 800 nm. Each of the individual spectra includes one or more of these peaks. Further, at a wavelength of 800 nm, we see that some samples show no absorbance, and presumably lack whatever analyte is responsible for this peak; other samples, however, clearly include contributions from this analyte. This is what we mean by finding structure in data. In this chapter we explore three tools for finding structure in data—cluster analysis, principal component analysis, and multivariate linear regression—that allow us to make sense of that structure.This page titled 11.1: What Do We Mean By Structure? is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
11.2: Cluster Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.02%3A_Cluster_Analysis
In the previous section we examined the spectra of 24 samples at 635 wavelengths, displaying the data by plotting the absorbance as a function of wavelength. Another way to examine the data is to plot the absorbance of each sample at one wavelength against the absorbance of the same sample at a second wavelength, as we see in the following figure using wavelengths of 403.3 nm and 508.7 nm. Note that this plot suggests an underlying structure to our data, as the 24 points occupy a triangular-shaped space defined by the samples identified as 1, 2, and 3. We can extend this analysis to three wavelengths, as we see in the following figure, and to as many as all 635 wavelengths (of course we cannot examine a plot of this as it exists in 635-dimensional space!). In both of these plots (and the higher dimensional plots that we cannot display), some samples are closer to each other in space than are other points. For example, samples 7 and 20 are closer to each other than any other pair of samples; samples 2 and 3, however, are further from each other than any other pair of samples.

A cluster analysis is a way to examine our data in terms of the similarity of the samples to each other. We can outline the steps using a small set of six points defined by two variables, a and b. The two points closest in distance are 3 and 4, which make the first cluster and which we replace with a point midway between them. The next two points closest in distance are 2 and 6, which make the second cluster and which we replace with a point midway between them. Continuing in this way, the third cluster brings together points 2, 3, 4, and 6, the fourth cluster brings together points 1, 2, 3, 4, and 6, and the final cluster brings together all six points. To visualize the clusters, in terms of the identity of the points in the clusters, the order in which the clusters form, and the relative similarity or difference between points and clusters, we display this information as a dendrogram, which shows, for example, that the clusters of points 3 and 4, and of 2 and 6, are more similar to each other than they are to point 1 and to point 5. The dendrogram's vertical scale, which is identified as Height, provides a measure of the distance of the individual points or clusters of points from each other.

A cluster analysis of the 24 samples from the previous section, using 40 equally-spaced wavelengths, divides the samples into three distinct clusters. There is much we can learn from this result about the structure of these samples: the samples within each cluster are more similar to each other than they are to samples in other clusters. One possible explanation for this structure is that the 24 samples are comprised of three analytes, where, for each cluster, one of the analytes is present at a higher concentration than the other two analytes.This page titled 11.2: Cluster Analysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
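R's dist() and hclust() functions make it easy to carry out a cluster analysis and to display the result as a dendrogram. The sketch below uses a small, made-up set of six points defined by two variables, a and b, in the spirit of the example above but not the text's actual values, and it relies on hclust()'s default complete linkage, which differs in detail from the midpoint-replacement scheme described above.

a = c(1.0, 2.5, 4.0, 4.2, 1.5, 3.0) # made-up values for the first variable
b = c(1.0, 3.5, 4.5, 4.3, 4.0, 3.3) # made-up values for the second variable
d = dist(cbind(a, b))               # pairwise Euclidean distances between the six points
cl = hclust(d)                      # hierarchical clustering; complete linkage by default
plot(cl, xlab = "point")            # display the result as a dendrogram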
11.3: Principal Component Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.03%3A_Principal_Component_Analysis
Picture the samples from the previous section plotted along three wavelength axes, forming a cloud of points in three-dimensional space. Suppose we leave the points in space as they are and rotate the three axes. We might rotate the three axes until one passes through the cloud in a way that maximizes the variation of the data along that axis, which means this new axis accounts for the greatest contribution to the global variance. Having aligned this primary axis with the data, we then hold it in place and rotate the remaining two axes around the primary axis until one of them passes through the cloud in a way that maximizes the data's remaining variance along that axis; this becomes the secondary axis. Finally, the third, or tertiary, axis is left, which explains whatever variance remains. In essence, this is what comprises a principal component analysis (PCA).

One of the challenges with understanding how PCA works is that we cannot visualize our data in more than three dimensions. The data from the previous sections, for example, consists of spectra for 24 samples recorded at 635 wavelengths. To visualize all of this data requires that we plot it along 635 axes in 635-dimensional space! Let's consider a much simpler system that consists of 21 samples for each of which we measure just two properties that we will call the first variable and the second variable. We can express this data as a matrix with 21 rows, one for each of the 21 samples, and 2 columns, one for each of the two variables.

\[ [D]_{21 \times 2} \nonumber \]

Next, we complete a linear regression analysis on the data and add the regression line to a plot of the data; we call this the first principal component. Projecting our data onto the regression line gives the location of each point on the first principal component's axis; these values are called the scores, \(S\). The cosines of the angles between the first principal component's axis and the original axes are called the loadings, \(L\). We can express the relationship between the data, the scores, and the loadings using matrix notation. Note that from the dimensions of the matrices for \(D\), \(S\), and \(L\), each of the 21 samples has a score and each of the two variables has a loading.

\[ [D]_{21 \times 2} = [S]_{21 \times 1} \times [L]_{1 \times 2} \nonumber\]

Next, we draw a line perpendicular to the first principal component axis, which becomes the second (and last) principal component axis, project the original data onto this axis, and record the scores and loadings for the second principal component.

\[ [D]_{21 \times 2} = [S]_{21 \times 2} \times [L]_{2 \times 2} \nonumber\]

In matrix multiplication the number of columns in the first matrix must equal the number of rows in the second matrix. The result of matrix multiplication is a new matrix that has a number of rows equal to that of the first matrix and that has a number of columns equal to that of the second matrix; thus multiplying together a matrix that is \(5 \times 4\) with one that is \(4 \times 8\) gives a matrix that is \(5 \times 8\). If we were working with 21 samples and 10 variables, then we would have

\[ [D]_{21 \times 10} = [S]_{21 \times 10} \times [L]_{10 \times 10} \nonumber\]

The results of a principal component analysis are given by the scores and the loadings. Let's return to the spectral data, but to make things more manageable, we will work with just 24 of the 80 samples and expand the number of wavelengths from three to 16 (a number that is still a small subset of the 635 wavelengths available to us).
The results of a principal component analysis are given by the scores and the loadings. Let's return to our spectroscopic data, but to make things more manageable, we will work with just 24 of the 80 samples and expand the number of wavelengths from three to 16 (a number that is still a small subset of the 635 wavelengths available to us). The figure below shows the full spectra for these 24 samples and, as dotted lines, the specific wavelengths we will use; thus, our data is a matrix with 24 rows and 16 columns, \([D]_{24 \times 16}\). A principal component analysis of this data will yield 16 principal component axes.

Each principal component accounts for a portion of the data's overall variance, and each successive principal component accounts for a smaller proportion of the overall variance than did the preceding principal component. Those principal components that account for insignificant proportions of the overall variance presumably represent noise in the data; the remaining principal components presumably are determinate and sufficient to explain the data. Summarizing the proportion of the overall variance explained by each of the 16 principal components (the full list appears in the output of summary() in Chapter 11.6), we find that the first principal component accounts for 68.62% of the overall variance and the second principal component accounts for 29.98% of the overall variance. Collectively, these two principal components account for 98.59% of the overall variance; adding a third component accounts for more than 99% of the overall variance. Clearly we need to consider at least two components (and maybe three) to explain the data; the remaining 14 (or 13) principal components simply account for noise in the original data. This leaves us with the following equation relating the original data to the scores and loadings

\[ [D]_{24 \times 16} = [S]_{24 \times n} \times [L]_{n \times 16} \nonumber \]

where \(n\) is the number of components needed to explain the data, in this case two or three.
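The claim that a small number of principal components can reproduce the data is easy to verify with simulated spectra. The sketch below (which uses made-up pure-component spectra and concentrations, not the allSpec.csv data) builds a 24 × 16 data matrix from two components and shows that two principal components account for essentially all of its variance.

set.seed(2)
pure1 = dnorm(1:16, mean = 5, sd = 2)                  # two made-up "pure component" spectra
pure2 = dnorm(1:16, mean = 11, sd = 3)
conc = cbind(runif(24), runif(24))                     # made-up concentrations for 24 samples
D = conc %*% rbind(pure1, pure2) + matrix(rnorm(24 * 16, sd = 0.001), 24, 16)
pca = prcomp(D, center = TRUE, scale. = FALSE)         # centering only, to keep the comparison simple
summary(pca)$importance[3, 1:3]                        # cumulative proportion of variance: ~1 by the second PC
n = 2                                                  # number of components to keep
D_approx = pca$x[, 1:n] %*% t(pca$rotation[, 1:n])     # [S] with n columns times [L] with n rows
max(abs(scale(D, scale = FALSE) - D_approx))           # on the order of the added noise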
To examine the principal components more closely, we plot the scores for PC1 against the scores for PC2 to give the scores plot seen below, which shows the scores occupying a triangular-shaped space.

Because our data are visible spectra, it is useful to compare the equation

\[ [D]_{24 \times 16} = [S]_{24 \times n} \times [L]_{n \times 16} \nonumber \]

to Beer's law, which in matrix form is

\[ [A]_{24 \times 16} = [C]_{24 \times n} \times [\epsilon b]_{n \times 16} \nonumber \]

where \([A]\) gives the absorbance values for the 24 samples at 16 wavelengths, \([C]\) gives the concentrations of the two or three components that make up the samples, and \([\epsilon b]\) gives the products of the molar absorptivity and the pathlength for each of the two or three components at each of the 16 wavelengths. Comparing these two equations suggests that the scores are related to the concentrations of the \(n\) components and that the loadings are related to the molar absorptivities of the \(n\) components. Furthermore, we can explain the pattern of the scores in the scores plot if each of the 24 samples contains one to three analytes, with the three vertices being samples that contain a single component each, the samples falling more or less on a line between two vertices being binary mixtures of the three analytes, and the remaining points being ternary mixtures of the three analytes.

If there are three components in our 24 samples, why are two components sufficient to account for almost 99% of the overall variance? Suppose we prepared each sample by using a volumetric digital pipet to combine aliquots drawn from solutions of the pure components, diluting each to a fixed volume in a 10.00 mL volumetric flask. For example, to make a ternary mixture we might pipet in 5.00 mL of component one and 4.00 mL of component two. Because we are diluting to a final volume of 10.00 mL, the volume of the third component must be less than 1.00 mL to allow for diluting to the mark. Because the volume of the third component is limited by the volumes of the first two components, two components are sufficient to explain most of the data.

The loadings, as noted above, are related to the molar absorptivities of our sample's components, providing information on the wavelengths of visible light that are most strongly absorbed by each sample. We can overlay a plot of the loadings on our scores plot (this is called a biplot), as shown here. Each arrow is identified with one of our 16 wavelengths and points toward the combination of PC1 and PC2 with which it is most strongly associated. For example, although difficult to read here, all wavelengths from 672.7 nm to 868.7 nm are strongly associated with the analyte that makes up the single-component sample identified by the number one, and the wavelengths of 380.5 nm, 414.9 nm, 583.2 nm, and 613.3 nm are strongly associated with the analyte that makes up the single-component sample identified by the number two.

If we have some knowledge about the possible source of the analytes, then we may be able to match the experimental loadings to the analytes. The samples here were made using solutions of several first-row transition metal ions; the figure below shows the visible spectra for four such metal ions. Comparing these spectra with the loadings shows that Cu2+ absorbs at those wavelengths most associated with sample 1, that Cr3+ absorbs at those wavelengths most associated with sample 2, and that Co2+ absorbs at wavelengths most associated with sample 3; the last of the metal ions, Ni2+, is not present in the samples.

This page titled 11.3: Principal Component Analysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
11.4: Multivariate Linear Regression
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.04%3A_Multivariate_Regression
In Chapter 11.2 we used a cluster analysis of the spectra for 24 samples measured at 16 wavelengths to show that we could divide the samples into three distinct groups, speculating that the samples contained three analytes and that in each group one of the analytes was present at a concentration greater than that of the other two analytes. In Chapter 11.3 we used a principal component analysis of the same set of samples to suggest that the three analytes are Cu2+, Cr3+, and Co2+. In this section we will use a multivariate linear regression analysis to determine the concentration of these analytes in each of the 24 samples.

In a simple linear regression analysis, as outlined in Chapter 8, we model the relationship between a single dependent variable, y, and a single independent variable, x, using the equation

\[y = \beta_0 + \beta_1 x \nonumber \]

where y is a vector of measured responses for the dependent variable, x is a vector of values for the independent variable, \(\beta_0\) is the expected y-intercept, and \(\beta_1\) is the expected slope. For example, to complete a Beer's law calibration curve for a single analyte, where A is the absorbance and C is the analyte's concentration

\[ A = \epsilon b C \nonumber \]

we prepare a set of n standard solutions, each with a known concentration of the analyte, and measure the absorbance of each standard solution at a single wavelength. A linear regression analysis returns a value for \(\epsilon b\), allowing us to determine the concentration of analyte in a sample by measuring its absorbance. See Chapter 8 for a review of how to complete a linear regression analysis using R.

In a multivariate linear regression we have j dependent variables, Y, and k independent variables, X, and we measure each of the dependent variables for each of the n sets of values of the independent variables; we can represent this using matrix notation as

\[ [ Y ]_{n \times j} = [X]_{n \times k} \times [\beta_1]_{k \times j} \nonumber \]

In this case, to complete a Beer's law calibration curve we prepare a set of n standard solutions, each of which contains known concentrations of the k analytes, and measure the absorbance of each standard at each of the j wavelengths

\[ [ A ]_{n \times j} = [C]_{n \times k} \times [\epsilon b]_{k \times j} \nonumber \]

where [A] is a matrix of absorbance values, [C] is a matrix of concentrations, and [\(\epsilon b\)] is a matrix of \(\epsilon b\) values for each analyte at each wavelength.

Because matrix algebra does not allow for division, we solve for [\(\epsilon b\)] by first pre-multiplying both sides of the equation by the transpose of the matrix of concentrations

\[ [C]_{k \times n}^{\text{T}} \times [ A ]_{n \times j} = [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \times [\epsilon b]_{k \times j} \nonumber \]

and then pre-multiplying both sides of the equation by \( \left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)^{-1} \) to give

\[ \left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)^{-1} \times [C]_{k \times n}^{\text{T}} \times [ A ]_{n \times j} = \left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)^{-1} \times [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \times [\epsilon b]_{k \times j} \nonumber \]

Multiplying \(\left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)^{-1}\) by \(\left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)\) is equivalent to multiplying a value by its inverse, which gives the identity matrix; thus, we have

\[ \left( [C]_{k \times n}^{\text{T}} \times [C]_{n \times k} \right)^{-1} \times [C]_{k \times n}^{\text{T}} \times [ A ]_{n \times j} = [\epsilon b]_{k \times j} \nonumber \]
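These matrix operations map directly onto R's t(), %*%, and solve() functions. The short sketch below, using made-up numbers for two analytes, four standards, and three wavelengths, shows the calculation; Chapter 11.7 wraps the same steps in a reusable function.

C = matrix(c(0.02, 0.00,
             0.00, 0.03,
             0.01, 0.01,
             0.02, 0.02), nrow = 4, ncol = 2, byrow = TRUE)   # concentrations: n = 4 standards, k = 2 analytes
eb_true = matrix(c(10, 25, 40,
                   30, 15,  5), nrow = 2, ncol = 3, byrow = TRUE)   # made-up eb values: k = 2 analytes, j = 3 wavelengths
A = C %*% eb_true                        # absorbances predicted by Beer's law
eb = solve(t(C) %*% C) %*% t(C) %*% A    # (C^T C)^-1 C^T A recovers the eb matrix
eb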
With the \(\epsilon b\) matrix in hand, we can determine the concentration of the analytes in a set of samples using the same general approach, as shown here

\[ [ A ]_{n \times j} = [C]_{n \times k} \times [\epsilon b]_{k \times j} \nonumber \]

\[ [ A ]_{n \times j} \times [\epsilon b]_{j \times k}^{\text{T}} = [C]_{n \times k} \times [\epsilon b]_{k \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \nonumber \]

\[ [ A ]_{n \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \times \left( [\epsilon b]_{k \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \right)^{-1} = [C]_{n \times k} \times [\epsilon b]_{k \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \times \left( [\epsilon b]_{k \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \right)^{-1} \nonumber \]

\[ [ A ]_{n \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \times \left( [\epsilon b]_{k \times j} \times [\epsilon b]_{j \times k}^{\text{T}} \right)^{-1} = [C]_{n \times k} \nonumber \]

Completing these calculations by hand is a chore; see Chapter 11.7 for how to complete a multivariate linear regression using R.

One way to evaluate a calibration based on a multivariate linear regression is to examine each analyte's \(\epsilon b\) values from the calibration and compare them to the spectra of the individual analytes; the shapes of the two plots should be similar. Another way to evaluate such a calibration is to use it to analyze a set of samples with known concentrations of the analytes.

11.4: Multivariate Linear Regression is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
11.5: Using R for a Cluster Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.05%3A_Using_R_for_a_Cluster_Analysis
To illustrate how we can use R to complete a cluster analysis, use this link and save the file allSpec.csv to your working directory. The data in this file consists of 80 rows and 642 columns. Each row is an independent sample that contains one or more of the following transition metal cations: Cu2+, Co2+, Cr3+, and Ni2+. The first seven columns provide information about the samples; the remaining columns contain absorbance values at 635 wavelengths between 380.5 nm and 899.5 nm.

First, we need to read the data into R, which we do using the read.csv() function

spec_data <- read.csv("allSpec.csv", check.names = FALSE)

where the option check.names = FALSE keeps the original column names, overriding the function's default behavior of modifying any column name that begins with a number.

sample_ids = c(1, 6, 11, 21:25, 38:53)
cluster_data = spec_data[sample_ids, wavelength_ids]

where wavelength_ids is a vector that identifies the 16 equally spaced wavelengths, sample_ids is a vector that identifies the 24 samples that contain one or more of the cations Cu2+, Co2+, and Cr3+, and cluster_data is a data frame that contains the absorbance values for these 24 samples at these 16 wavelengths.

Before we can complete the cluster analysis, we first must calculate the distance between each pair of the 24 samples, treating each sample's absorbance values at the 16 wavelengths as a point in a 16-dimensional space. To do this, we use the dist() function, which takes the general form

dist(object, method)

where object is a data frame or matrix with our data. There are a number of options for method, but we will use the default, which is euclidean.

cluster_dist = dist(cluster_data, method = "euclidean")
cluster_dist

            1          6         11         21         22         23         24         25
6  1.53328104
11 1.73128979 0.96493008
21 1.48359716 0.24997370 0.77766228
22 1.49208058 0.32863786 0.68852029 0.09664215
23 1.49457333 0.42903074 0.57495499 0.21089686 0.11755129
24 1.51211374 0.52218072 0.47457024 0.31016429 0.21830998 0.10205547
25 1.55862311 0.61154277 0.39798649 0.39406580 0.30194838 0.19121251 0.09771283
38 1.17069314 0.38098750 0.96982420 0.34254297 0.38830178 0.45418483 0.53114050 0.61729900

Only a small portion of the values in cluster_dist is shown here; each entry gives the distance between two of the 24 samples.

With the distances calculated, we can use R's hclust() function to complete the cluster analysis. The general form of the function is

hclust(object, method)

where object is the output created using dist() that contains the distances between points. There are a number of options for method—here we use the ward.D method—and we save the output to the object cluster_results so that we have access to the results.

cluster_results = hclust(cluster_dist, method = "ward.D")

To view the cluster diagram, we pass the object cluster_results to the plot() function, where hang = -1 extends each vertical line down to a height of zero. By default, the labels at the bottom of the dendrogram are the sample ids; cex adjusts the size of these labels.

plot(cluster_results, hang = -1, cex = 0.75)
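One loose end in the code above is the vector wavelength_ids, which selects the 16 wavelengths from the 635 available but whose definition is not shown in the text. One plausible way to construct it (the exact choice of columns is an assumption, and any similar set of 16 evenly spaced wavelengths will work for what follows) is

wavelength_ids = seq(8, 642, by = 40)   # columns 8:642 hold the 635 wavelengths; every 40th column gives 16 of them
length(wavelength_ids)                  # 16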
With a few lines of code we can add useful details to our plot. Here, for example, we determine the fraction of the stock Cu2+ solution in each sample and use these values as labels, and we divide the 24 samples into three large clusters using the rect.hclust() function, where k is the number of clusters to highlight and which indicates which of these clusters to display inside a rectangular box.

cluster_copper = spec_data$concCu/max(spec_data$concCu)   # fraction of the stock Cu2+ solution in each sample
plot(cluster_results, hang = -1, labels = cluster_copper[sample_ids], main = "Copper", xlab = "fraction of stock in sample", sub = "", cex = 0.75)
rect.hclust(cluster_results, k = 3, which = c(1, 2, 3), border = "blue")

The following code shows how we can use the same data set of 24 samples and 16 wavelengths to complete a cluster diagram for the wavelengths. The use of the t() function within the dist() function takes the transpose of our data so that the rows are the 16 wavelengths and the columns are the 24 samples. We do this because the dist() function calculates distances using the rows.

wavelength_dist = dist(t(cluster_data))
wavelength_clust = hclust(wavelength_dist, method = "ward.D")
plot(wavelength_clust, hang = -1, main = "wavelengths strongly associated with copper")
rect.hclust(wavelength_clust, k = 2, which = 2, border = "blue")

The figure below highlights the cluster of wavelengths most strongly associated with absorption by Cu2+.

This page titled 11.5: Using R for a Cluster Analysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
11.6: Using R for a Principal Component Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.06%3A_Using_R_for_a_Principal_Component_Analysis
To illustrate how we can use R to complete a principal component analysis, use this link and save the file allSpec.csv to your working directory. The data in this file consists of 80 rows and 642 columns. Each row is an independent sample that contains one or more of the following transition metal cations: Cu2+, Co2+, Cr3+, and Ni2+. The first seven columns provide information about the samples; the remaining columns contain absorbance values at 635 wavelengths between 380.5 nm and 899.5 nm.

First, we need to read the data into R, which we do using the read.csv() function

spec_data <- read.csv("allSpec.csv", check.names = FALSE)

where the option check.names = FALSE keeps the original column names, overriding the function's default behavior of modifying any column name that begins with a number.

sample_ids = c(1, 6, 11, 21:25, 38:53)
pca_data = spec_data[sample_ids, wavelength_ids]

where wavelength_ids is a vector that identifies the 16 equally spaced wavelengths, sample_ids is a vector that identifies the 24 samples that contain one or more of the cations Cu2+, Co2+, and Cr3+, and pca_data is a data frame that contains the absorbance values for these 24 samples at these 16 wavelengths.

To complete the principal component analysis we use R's prcomp() function, which takes the general form

prcomp(object, center, scale)

where object is a data frame or matrix that contains our data, and center and scale are logical values that indicate whether we should first center and scale the data before we complete the analysis. When we center and scale our data, each variable (in this case, the absorbance at each wavelength) is adjusted so that its mean is zero and its variance is one. This places all variables on a common scale, which ensures that differences in the relative magnitudes of the variables do not affect the principal component analysis.

pca_results = prcomp(pca_data, center = TRUE, scale = TRUE)

The prcomp() function returns a variety of information that we can use to examine the results, including the standard deviation for each principal component, sdev, a matrix with the loadings, rotation, a matrix with the scores, x, and the values used to center and scale the original data. The summary() function, for example, returns the standard deviation for, the proportion of the overall variance explained by, and the cumulative proportion of the overall variance explained by, each principal component.

summary(pca_results)

Importance of components:
                          PC1    PC2     PC3     PC4     PC5     PC6     PC7     PC8     PC9
Standard deviation     3.3134 2.1901 0.42561 0.17585 0.09384 0.04607 0.04026 0.01253 0.01049
Proportion of Variance 0.6862 0.2998 0.01132 0.00193 0.00055 0.00013 0.00010 0.00001 0.00001
Cumulative Proportion  0.6862 0.9859 0.99725 0.99919 0.99974 0.99987 0.99997 0.99998 0.99999
                           PC10     PC11     PC12    PC13     PC14     PC15     PC16
Standard deviation     0.009211 0.007084 0.004478 0.00416 0.003039 0.002377 0.001504
Proportion of Variance 0.000010 0.000000 0.000000 0.00000 0.000000 0.000000 0.000000
Cumulative Proportion  0.999990 1.000000 1.000000 1.00000 1.000000 1.000000 1.000000

We can also examine each principal component's variance (the square of its standard deviation) in the form of a bar plot by passing the results of the principal component analysis to the plot() function.

plot(pca_results)
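The "Proportion of Variance" and "Cumulative Proportion" rows in the output of summary() are easy to reproduce from the standard deviations stored in sdev, which makes explicit the connection between a principal component's variance and the proportion of the overall variance it explains.

pc_variance = pca_results$sdev^2                   # each principal component's variance
round(pc_variance / sum(pc_variance), 4)           # proportion of the overall variance for each PC
round(cumsum(pc_variance) / sum(pc_variance), 4)   # cumulative proportion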
As noted above, the 24 samples include one, two, or three of the cations Cu2+, Co2+, and Cr3+, which is consistent with our results if the individual solutions are made by combining aliquots of stock solutions of Cu2+, Co2+, and Cr3+ and diluting to a common volume. In this case, the volume of stock solution for one cation places limits on the volumes of the other cations, such that a three-component mixture essentially has two independent variables.

To examine the scores for the principal component analysis, we pass the scores to the plot() function, here using pch = 19 to display them as filled points.

plot(pca_results$x, pch = 19)

By default, the plot() function displays the values for the first two principal components, with the first (PC1) placed on the x-axis and the second (PC2) placed on the y-axis. If we wish to examine other principal components, then we must specify them when calling the plot() function; the following command, for example, uses the scores for the second and the third principal components.

plot(x = pca_results$x[,2], y = pca_results$x[,3], pch = 19, xlab = "PC2", ylab = "PC3")

If we wish to display the first three principal components using the same plot, then we can use the scatter3D() function from the plot3D package, which takes the general form

library(plot3D)
scatter3D(x = pca_results$x[,1], y = pca_results$x[,2], z = pca_results$x[,3], pch = 19, type = "h", theta = 25, phi = 20, ticktype = "detailed", colvar = NULL)

where we use the library() function to load the package into our R session (note: this assumes you have installed the plot3D package). The option type = "h" drops a vertical line from each point down to the plane defined by PC1 and PC2, which helps us orient the points in space. By default, the plot uses color to show each point's value of the third principal component (displayed on the z-axis); here we set colvar = NULL to display all points using the same color.

Although the plots are not shown here, we can use the same commands, replacing x with rotation, to display the loadings.

plot(pca_results$rotation, pch = 19)
plot(x = pca_results$rotation[,2], y = pca_results$rotation[,3], pch = 19, xlab = "PC2", ylab = "PC3")
scatter3D(x = pca_results$rotation[,1], y = pca_results$rotation[,2], z = pca_results$rotation[,3], pch = 19, type = "h", theta = 25, phi = 20, ticktype = "detailed", colvar = NULL)

Another way to view the results of a principal component analysis is to display the scores and the loadings on the same plot, which we can do using the biplot() function.

biplot(pca_results, cex = c(2, 0.6), xlabs = rep("•", 24))

where the option xlabs = rep("•", 24) overrides the function's default of displaying the scores as numbers, replacing them with dots, and cex = c(2, 0.6) increases the size of the dots and decreases the size of the labels for the loadings.

In this biplot, the scores are displayed as dots and the loadings are displayed as arrows that begin at the origin and point toward the individual loadings, which are labeled with their wavelengths. For this set of data, scores and loadings that are co-located with each other represent samples and wavelengths that are strongly correlated with each other. For example, the sample whose score is in the upper right corner is strongly associated with absorbance of light with wavelengths of 613.3 nm, 583.2 nm, 380.5 nm, and 414.9 nm.

Finally, we can use color to highlight features from our data set.
For example, the following lines of code create a scores plot that uses a color palette to indicate the relative concentration of Cu2+ in each sample.

cu_palette = colorRampPalette(c("white", "blue"))
cu_color = cu_palette(50)[as.numeric(cut(spec_data$concCu[sample_ids], breaks = 50))]

The colorRampPalette() function takes a vector of colors—in this case white and blue—and returns a function that we can use to create a palette of colors that runs from pure white to pure blue. We then use this function to create 50 shades of white and blue.

cu_palette(50)
"#FFFFFF" "#F9F9FF" "#F4F4FF" "#EFEFFF" "#EAEAFF" "#E4E4FF" "#DFDFFF" "#DADAFF" "#D5D5FF" "#D0D0FF"
"#CACAFF" "#C5C5FF" "#C0C0FF" "#BBBBFF" "#B6B6FF" "#B0B0FF" "#ABABFF" "#A6A6FF" "#A1A1FF" "#9C9CFF"
"#9696FF" "#9191FF" "#8C8CFF" "#8787FF" "#8282FF" "#7C7CFF" "#7777FF" "#7272FF" "#6D6DFF" "#6868FF"
"#6262FF" "#5D5DFF" "#5858FF" "#5353FF" "#4E4EFF" "#4848FF" "#4343FF" "#3E3EFF" "#3939FF" "#3434FF"
"#2E2EFF" "#2929FF" "#2424FF" "#1F1FFF" "#1A1AFF" "#1414FF" "#0F0FFF" "#0A0AFF" "#0505FF" "#0000FF"

where #FFFFFF is the hexadecimal code for pure white and #0000FF is the hexadecimal code for pure blue. The second line of code

cu_color = cu_palette(50)[as.numeric(cut(spec_data$concCu[sample_ids], breaks = 50))]

retrieves the concentration of copper in each of our 24 samples and assigns to each a hexadecimal code for a shade of blue that indicates the relative concentration of copper in the sample. Here we see that the first sample has a hexadecimal code of #0000FF for pure blue, which means this sample has the largest concentration of copper, and that samples 2–8 have hexadecimal codes of #FFFFFF for pure white, which means these samples do not contain any copper.

cu_color
"#0000FF" "#FFFFFF" "#FFFFFF" "#FFFFFF" "#FFFFFF" "#FFFFFF" "#FFFFFF" "#FFFFFF" "#D0D0FF" "#B6B6FF"
"#9C9CFF" "#8282FF" "#6868FF" "#D0D0FF" "#B6B6FF" "#9C9CFF" "#8282FF" "#6868FF" "#EAEAFF" "#EAEAFF"
"#B6B6FF" "#B6B6FF" "#8282FF" "#8282FF"

Finally, we create the scores plot, using pch = 21 for a circle whose fill color we designate using bg = cu_color and using cex = 2 to increase the size of the points.

plot(pca_results$x, pch = 21, bg = cu_color, cex = 2)

This page titled 11.6: Using R for a Principal Component Analysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
11.7: Using R for a Multivariate Linear Regression
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.07%3A_Using_R_For_A_Multivariate_Regression
To illustrate how we can use R to complete a multivariate linear regression, use this link and save the file allSpec.csv to your working directory. The data in this file consists of 80 rows and 642 columns. Each row is an independent sample that contains one or more of the following transition metal cations: Cu2+, Co2+, Cr3+, and Ni2+. The first seven columns provide information about the samples; the remaining columns contain absorbance values at 635 wavelengths between 380.5 nm and 899.5 nm. We will use a subset of this data that is identical to that used to illustrate a cluster analysis and a principal component analysis.

First, we need to read the data into R, which we do using the read.csv() function

spec_data <- read.csv("allSpec.csv", check.names = FALSE)

where the option check.names = FALSE keeps the original column names, overriding the function's default behavior of modifying any column name that begins with a number.

abs_stds = spec_data[1:15, wavelength_ids]
conc_stds = data.frame(spec_data[1:15, 4], spec_data[1:15, 5], spec_data[1:15, 6])
abs_samples = spec_data[c(1, 6, 11, 21:25, 38:53), wavelength_ids]

where wavelength_ids is a vector that identifies the 16 equally spaced wavelengths, abs_stds is a data frame that gives the absorbance values for 15 standard solutions of the three analytes Cu2+, Cr3+, and Co2+ at the 16 wavelengths, conc_stds is a data frame that contains the concentrations of the three analytes in the 15 standard solutions, and abs_samples is a data frame that contains the absorbances of the 24 samples at the 16 wavelengths. This is the same data used to illustrate cluster analysis and principal component analysis.

To solve for the \(\epsilon b\) matrix, we write and source the following function, which takes two objects—a data frame of absorbance values and a data frame of concentrations—and returns a matrix of \(\epsilon b\) values.

findeb = function(abs, conc){
  abs.m = as.matrix(abs)           # A: absorbance values for the standards
  conc.m = as.matrix(conc)         # C: concentrations of the analytes in the standards
  ct = t(conc.m)                   # C^T
  ctc = ct %*% conc.m              # C^T C
  invctc = solve(ctc)              # (C^T C)^-1
  eb = invctc %*% ct %*% abs.m     # (C^T C)^-1 C^T A
  output = eb
  invisible(output)
}

Passing abs_stds and conc_stds to the function

eb_pred = findeb(abs_stds, conc_stds)

returns the predicted values for \(\epsilon b\) that make up our calibration.
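As an optional cross-check (not part of the workflow in this chapter), R's lm() function fits the same model when we supply a matrix of responses and omit the intercept, so its coefficients should match the \(\epsilon b\) matrix returned by findeb().

fit = lm(as.matrix(abs_stds) ~ 0 + as.matrix(conc_stds))   # one no-intercept fit per wavelength
eb_lm = coef(fit)                                          # a 3 x 16 matrix of eb values
max(abs(eb_lm - eb_pred))                                  # expected to be essentially zero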
As we see below, a plot of the \(\epsilon b\) values for Cu2+ has the same shape as a plot of the absorbance values for one of the Cu2+ standards.

wavelengths = as.numeric(colnames(spec_data[8:642]))
old.par = par(mfrow = c(1, 2))
plot(x = wavelengths[wavelength_ids], y = eb_pred[1,], type = "b", xlab = "wavelength (nm)", ylab = "eb", lwd = 2, col = "blue")
plot(x = wavelengths, y = spec_data[1, 8:642], type = "l", xlab = "wavelength (nm)", ylab = "absorbance", lwd = 2, col = "blue")
par(old.par)

Having completed the calibration, we can determine the concentrations of the three analytes in the 24 samples using the following function, which takes as inputs a data frame of absorbance values and the \(\epsilon b\) matrix returned by the function findeb.

findconc = function(abs, eb){
  abs.m = as.matrix(abs)                                        # A: absorbance values for the samples
  eb.m = as.matrix(eb)                                          # eb matrix from the calibration
  ebt = t(eb.m)                                                 # (eb)^T
  ebebt = eb.m %*% ebt                                          # eb (eb)^T
  invebebt = solve(ebebt)                                       # (eb (eb)^T)^-1
  pred_conc = round(abs.m %*% ebt %*% invebebt, digits = 5)     # C = A (eb)^T (eb (eb)^T)^-1
  output = pred_conc
  invisible(output)
}

pred_conc = findconc(abs_samples, eb_pred)

To determine the error in the predicted concentrations, we first extract the actual concentrations from the original data set as a data frame, adjusting the column names for clarity

real_conc = data.frame(spec_data[c(1, 6, 11, 21:25, 38:53), 4], spec_data[c(1, 6, 11, 21:25, 38:53), 5], spec_data[c(1, 6, 11, 21:25, 38:53), 6])
colnames(real_conc) = c("copper", "cobalt", "chromium")

and then determine the difference between the actual and the predicted concentrations.

conc_error = real_conc - pred_conc

Finally, we can report the mean error, the standard deviation, and the 95% confidence interval for each analyte.

means = apply(conc_error, 2, mean)
round(means, digits = 6)

   copper    cobalt  chromium
-0.000280 -0.000153 -0.000210

sds = apply(conc_error, 2, sd)
round(sds, digits = 6)

  copper   cobalt chromium
0.001037 0.000811 0.000688

conf.int = abs(qt(0.05/2, 20)) * sds
round(conf.int, digits = 6)

  copper   cobalt chromium
0.002163 0.001693 0.001434

Compared to the ranges of concentrations for the three analytes in the 24 samples

range(real_conc$copper)
0.00 0.05
range(real_conc$cobalt)
0.0 0.1
range(real_conc$chromium)
0.0000 0.0375

the mean errors and confidence intervals are sufficiently small that we have confidence in the results.

11.7: Using R for a Multivariate Linear Regression is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
11.8: Exercises
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/11%3A_Finding_Structure_in_Data/11.08%3A_Exercises
The file rare_earths.csv contains data for the 17 rare earth elements, which consist of the lanthanides (La \(\rightarrow\) Lu) plus Sc and Y. The data is from Horovitz, O.; Sârbu, C. "Characterization and Classification of Lanthanides by Multivariate-Analysis Methods," J. Chem. Educ. 2005, 82, 473-483. Each row in the file contains data for one element, and the columns in the file provide values for 16 properties. Two variables included in the original paper—the enthalpy of vaporization and the surface tension at the melting point—are omitted from this data set because they include missing values. Problems 1–3 draw upon the data in this file.

1. Perform a cluster analysis for the 17 elements in the file rare_earths.csv and comment on the results, paying particular attention to the positions of Sc, Y, and the 15 lanthanides. You may wish to compare your results with those reported in the paper cited above.

2. Perform a cluster analysis for the 16 properties in the file rare_earths.csv and comment on the results. You may wish to compare your results with those reported in the paper cited above.

3. Complete a principal component analysis for the 17 elements in the file rare_earths.csv. Create two-dimensional scores plots that compare PC1 to PC2, PC1 to PC3, and PC2 to PC3, and a three-dimensional scores plot for the first three principal components. Comment on your results, paying particular attention to the positions of Sc, Y, and the 15 lanthanides. You may wish to compare your results to those from Problem 1 and the results reported in the paper cited above. Create two-dimensional loadings plots that compare PC1 to PC2, PC1 to PC3, and PC2 to PC3, and a three-dimensional loadings plot for the first three principal components. Comment on your results. You may wish to compare your results to those from Problem 2 and the results reported in the paper cited above.

4. The files mvr_abs and mvr_conc contain, respectively, absorbance values at five wavelengths for 10 samples that contain one or more of the analytes Co2+, Cu2+, and Ni2+, and the concentrations (in mM) of the same analytes in the 10 samples. The data are from Dado, G.; Rosenthal, J. "Simultaneous Determination of Cobalt, Copper, and Nickel by Multivariate Linear Regression," J. Chem. Educ. 1990, 67, 797-800. Use the first seven samples as calibration standards and use a multivariate linear regression to determine the concentrations of the analytes in the last three samples. You may wish to compare your results with those reported in the paper cited above.

This page titled 11.8: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
12.1: Single-Sided Normal Distribution
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/12%3A_Appendices/12.01%3A_Single-Sided_Normal_Distribution
Table \(\PageIndex{1}\), at the bottom of this appendix, gives the proportion, P, of the area under a normal distribution curve that lies to the right of a deviation, z

\[z = \frac {X -\mu} {\sigma} \nonumber\]

where X is the value for which the deviation is defined, \(\mu\) is the distribution's mean value, and \(\sigma\) is the distribution's standard deviation. For example, the proportion of the area under a normal distribution to the right of a deviation of 0.04 is 0.4840 (see the entry in red in the table), or 48.40% of the total area (the area shaded blue in the figure below). The proportion of the area to the left of the deviation is 1 – P. For a deviation of 0.04, this is 1 – 0.4840, or 51.60%. The figure below shows a normal distribution curve with the area for a deviation greater than +0.04 shaded blue and the area for a deviation less than –0.04 shaded green.

When the deviation is negative—that is, when X is smaller than \(\mu\)—the value of z is negative. In this case, the values in the table give the area to the left of z. For example, if z is –0.04, then 48.40% of the area lies to the left of the deviation (the area shaded green in the figure).

To use the single-sided normal distribution table, sketch the normal distribution curve for your problem and shade the area that corresponds to your answer (for example, the figure that accompanies Example 4.4.2). This divides the normal distribution curve into three regions: the area that corresponds to your answer (shown in blue), the area to the right of this, and the area to the left of this. Calculate the values of z for the limits of the area that corresponds to your answer. Use the table to find the areas to the right and to the left of these deviations. Subtract these values from 100% and, voilà, you have your answer.

12.1: Single-Sided Normal Distribution is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
12.2: Critical Values for t-Test
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/12%3A_Appendices/12.02%3A_Critical_Values_for_t-Test
Assuming we have calculated \(t_\text{exp}\), there are two approaches to interpreting a t-test. In the first approach we choose a value of \(\alpha\) for rejecting the null hypothesis and read the value of \(t(\alpha,\nu)\) from the table below. If \(t_\text{exp} > t(\alpha,\nu)\), we reject the null hypothesis and accept the alternative hypothesis. In the second approach, we find the row in the table below that corresponds to the available degrees of freedom and move across the row to find (or estimate) the \(\alpha\) that corresponds to \(t_\text{exp} = t(\alpha,\nu)\); this establishes the largest value of \(\alpha\) for which we can retain the null hypothesis. Finding, for example, that \(\alpha\) is 0.10 means that we retain the null hypothesis at the 90% confidence level, but reject it at the 89% confidence level. The examples in this textbook use the first approach.

The values in this table are for a two-tailed t-test. For a one-tailed test, divide the \(\alpha\) values by 2. For example, the last column has an \(\alpha\) value of 0.005 and a confidence level of 99.5% when conducting a one-tailed t-test.

12.2: Critical Values for t-Test is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
12.3: Critical Values for F-Test
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/12%3A_Appendices/12.03%3A_Critical_Values_for_F-Test
The following tables provide values for \(F(0.05, \nu_\text{num}, \nu_\text{denom})\) for one-tailed and for two-tailed F-tests. To use these tables, we first decide whether the situation calls for a one-tailed or a two-tailed analysis and calculate \(F_\text{exp}\)

\[F_\text{exp} = \frac {s_A^2} {s_B^2} \nonumber\]

where \(s_A^2\) is greater than \(s_B^2\). Next, we compare \(F_\text{exp}\) to \(F(0.05, \nu_\text{num}, \nu_\text{denom})\) and reject the null hypothesis if \(F_\text{exp} > F(0.05, \nu_\text{num}, \nu_\text{denom})\). You may replace s with \(\sigma\) if you know the population's standard deviation.

12.3: Critical Values for F-Test is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.