12.4: Critical Values for Dixon's Q-Test
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/12%3A_Appendices/12.04%3A_Critical_Values_for_Dixon's_Q-Test
The following table provides critical values for \(Q(\alpha, n)\), where \(\alpha\) is the probability of incorrectly rejecting the suspected outlier and \(n\) is the number of samples in the data set. There are several versions of Dixon’s Q-Test, each of which calculates a value for \(Q_{ij}\) where \(i\) is the number of suspected outliers on one end of the data set and \(j\) is the number of suspected outliers on the opposite end of the data set. The critical values for Q given here are for a single outlier, \(Q_{10}\), where

\[Q_\text{exp} = Q_{10} = \frac {|\text{outlier's value} - \text{nearest value}|} {\text{largest value} - \text{smallest value}} \nonumber\]

The suspected outlier is rejected if \(Q_\text{exp}\) is greater than \(Q(\alpha, n)\). For additional information consult Rorabacher, D. B. “Statistical Treatment for Rejection of Deviant Values: Critical Values of Dixon’s ‘Q’ Parameter and Related Subrange Ratios at the 95% Confidence Level,” Anal. Chem. 1991, 63, 139–146.

12.4: Critical Values for Dixon's Q-Test is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
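The calculation itself is easy to carry out in R. The following is a minimal sketch, not part of the original text: the data are invented, and the critical value of 0.568 for \(\alpha = 0.05\) and \(n = 7\) should be verified against the table before use.

# Dixon's Q-test for a single suspected outlier (Q10); illustrative data only
x = sort(c(10.1, 10.2, 10.3, 10.4, 10.5, 10.6, 12.0))
n = length(x)
# the suspected outlier is the value at whichever end of the sorted data
# lies farthest from its nearest neighbor
gap_low = abs(x[1] - x[2])
gap_high = abs(x[n] - x[n - 1])
q_exp = max(gap_low, gap_high) / (x[n] - x[1])
q_crit = 0.568              # Q(0.05, 7); check against the table above
q_exp > q_crit              # TRUE means the suspected outlier is rejected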
12.5: Critical Values for Grubb's Test
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/12%3A_Appendices/12.05%3A_Critical_Values_for_Grubb's_Test
The following table provides critical values for \(G(\alpha, n)\), where \(\alpha\) is the probability of incorrectly rejecting the suspected outlier and \(n\) is the number of samples in the data set. There are several versions of Grubb’s Test, each of which calculates a value for \(G_{ij}\) where \(i\) is the number of suspected outliers on one end of the data set and \(j\) is the number of suspected outliers on the opposite end of the data set. The critical values for G given here are for a single outlier, \(G_{10}\), where

\[G_\text{exp} = G_{10} = \frac {|X_{out} - \overline{X}|} {s} \nonumber\]

The suspected outlier is rejected if \(G_\text{exp}\) is greater than \(G(\alpha, n)\).

12.5: Critical Values for Grubb's Test is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
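As with Dixon's Q-test, the statistic is easy to compute in R. A minimal sketch follows (not from the original text); the data are invented and the critical value of 2.02 for \(\alpha = 0.05\) and \(n = 7\) should be checked against the table before use.

# Grubbs' test for a single suspected outlier (G10); illustrative data only
x = c(10.1, 10.2, 10.3, 10.4, 10.5, 10.6, 12.0)
x_out = x[which.max(abs(x - mean(x)))]   # the value farthest from the mean
g_exp = abs(x_out - mean(x)) / sd(x)
g_crit = 2.02               # G(0.05, 7); check against the table above
g_exp > g_crit              # TRUE means the suspected outlier is rejected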
12.6: Critical Values for the Wilcoxon Signed Rank Test
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/12%3A_Appendices/12.06_Critical_Values_for_the_Wilcoxson_Signed_Rank_Test
The following table provides critical values at \(\alpha = 0.05\) for the Wilcoxon signed rank test, where n is the number of samples in the data set. An entry of NA means the test cannot be applied. The null hypothesis of no difference between the samples can be rejected when the test statistic is less than or equal to the critical value for the number of samples.

12.6: Critical Values for the Wilcoxon Signed Rank Test is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
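The test statistic compared against these critical values is easy to compute directly in R. The sketch below is illustrative only (it is not part of the original text) and assumes paired data with no zero differences and no tied ranks.

# signed rank test statistic for paired data; illustrative values only
x = c(3.12, 3.55, 3.41, 3.02, 3.93, 3.31)
y = c(3.04, 3.25, 3.88, 3.50, 3.97, 3.16)
d = x - y                       # paired differences (assumes none are zero)
r = rank(abs(d))                # rank the absolute differences
w_plus = sum(r[d > 0])          # sum of ranks for positive differences
w_minus = sum(r[d < 0])         # sum of ranks for negative differences
w = min(w_plus, w_minus)        # the test statistic
w
# reject the null hypothesis if w is less than or equal to the critical
# value in the table for this number of samples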
12.7: Critical Values for the Wilcoxon Ranked Sum Test
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/12%3A_Appendices/12.07%3A_Critical_Values_for_the_Wilcoxson_Ranked_Sum_Test
The following table provides critical values at \(\alpha = 0.05\) for the Wilcoxon ranked sum test, where \(n_1\) and \(n_2\) are the number of samples in the two sets of data with \(n_1 \le n_2\). An entry of NA means the test cannot be applied. The null hypothesis of no difference between the samples can be rejected when the test statistic is less than or equal to the critical value for the number of samples.

12.7: Critical Values for the Wilcoxon Ranked Sum Test is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
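R's built-in wilcox.test() function, part of the stats package, carries out both the signed rank and the rank sum versions of the test, reporting a test statistic and a p-value rather than requiring a table lookup. The brief sketch below uses invented data and is not part of the original text.

# illustrative data only; values chosen to avoid ties
x = c(3.12, 2.85, 3.41, 3.02, 2.93, 3.31)
y = c(3.64, 3.25, 3.88, 3.50, 3.97, 3.76)
# Wilcoxon signed rank test (paired samples)
wilcox.test(x, y, paired = TRUE)
# Wilcoxon rank sum test (two independent samples)
wilcox.test(x, y, paired = FALSE)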
13.1: Chemometric Resources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/13%3A_Resources/13.1%3A_Chemometric_Resources
The following small collection of books provides a broad introduction to chemometric methods of analysis. The text by Miller and Miller is a good entry-level textbook suitable for the undergraduate curriculum. The text by Massart et al. is a particularly comprehensive resource.

Although not resources on chemometrics, the following books provide a broad introduction to the statistical methods that underlie chemometrics.

The following books provide more specialized coverage of topics relevant to chemometrics.

The following books provide guidance on the visualization of data, both in figures and in tables.

The following textbook provides a broad introduction to analytical chemistry, including sections on chemometric topics.

The following paper provides a general theory of types of measurements.

The detection of outliers, particularly when working with a small number of samples, is discussed in the following papers.

The following papers provide additional information on error and uncertainty.

The following articles provide thoughts on the limitations of statistical analysis based on significance testing.

The following papers provide insight into organizing data in spreadsheets and visualizing data.

This page titled 13.1: Chemometric Resources is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
2.1: Ways to Describe Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/02%3A_Types_of_Data/2.01%3A_Ways_to_Describe_Data
If we are to consider how to describe data, then we need some data with which we can work. Ideally, we want data that is easy to gather and easy to understand. It also is helpful if you can gather similar data on your own so you can repeat what we cover here. A simple system that meets these criteria is to analyze the contents of bags of M&Ms. Although this system may seem trivial, keep in mind that reporting the percentage of yellow M&Ms in a bag is analogous to reporting the concentration of Cu2+ in a sample of an ore or water: both express the amount of an analyte present in a unit of its matrix.At the beginning of this chapter we identified four contrasting ways to describe data: categorical vs. numerical, ordered vs. unordered, absolute reference vs. arbitrary reference, and discrete vs. continuous. To give meaning to these descriptive terms, let’s consider the data in Table \(\PageIndex{1}\), which includes the year the bag was purchased and analyzed, the weight listed on the package, the type of M&Ms, the number of yellow M&Ms in the bag, the percentage of the M&Ms that were red, the total number of M&Ms in the bag and their corresponding ranks.The entries in Table \(\PageIndex{1}\) are organized by column and by row. The first row—sometimes called the header row—identifies the variables that make up the data. Each additional row is the record for one sample and each entry in a sample’s record provides information about one of its variables; thus, the data in the table lists the result for each variable and for each sample.Of the variables included in Table \(\PageIndex{1}\), some are categorical and some are numerical. A categorical variable provides qualitative information that we can use to describe the samples relative to each other, or that we can use to organize the samples into groups (or categories). For the data in Table \(\PageIndex{1}\), bag id, type, and rank are categorical variables.A numerical variable provides quantitative information that we can use in a meaningful calculation; for example, we can use the number of yellow M&Ms and the total number of M&Ms to calculate a new variable that reports the percentage of M&Ms that are yellow. For the data in Table \(\PageIndex{1}\), year, weight (oz), number yellow, % red M&Ms, and total M&Ms are numerical variables.We can also use a numerical variable to assign samples to groups. For example, we can divide the plain M&Ms in Table \(\PageIndex{1}\) into two groups based on the sample’s weight. What makes a numerical variable more interesting, however, is that we can use it to make quantitative comparisons between samples; thus, we can report that there are \(14.4 \times\) as many plain M&Ms in a 10-oz. bag as there are in a 0.8-oz. bag.\[\frac{333 + 331}{24 + 22} = \frac{664}{46} = 14.4 \nonumber\]Although we could classify year as a categorical variable—not an unreasonable choice as it could serve as a useful way to group samples—we list it here as a numerical variable because it can serve as a useful predictive variable in a regression analysis. On the other hand rank is not a numerical variable—even if we rewrite the ranks as numerals—as there are no meaningful calculations we can complete using this variable.Categorical variables are described as nominal or ordinal. A nominal categorical variable does not imply a particular order; an ordinal categorical variable, on the other hand, coveys a meaningful sense of order. 
For the categorical variables in Table \(\PageIndex{1}\), bag id and type are nominal variables, and rank is an ordinal variable.A numerical variable is described as either ratio or interval depending on whether it has (ratio) or does not have (interval) an absolute reference. Although we can complete meaningful calculations using any numerical variable, the type of calculation we can perform depends on whether or not the variable’s values have an absolute reference.A numerical variable has an absolute reference if it has a meaningful zero—that is, a zero that means a measured quantity of none—against which we reference all other measurements of that variable. For the numerical variables in Table \(\PageIndex{1}\), weight (oz), number yellow, % red, and total M&Ms are ratio variables because each has a meaningful zero; year is an interval variable because its scale is referenced to an arbitrary point in time, 1 BCE, and not to the beginning of time.For a ratio variable, we can make meaningful absolute and relative comparisons between two results, but only meaningful absolute comparisons for an interval variable. For example, consider sample e, which was collected in 1994 and has 331 M&Ms, and sample d, which was collected in 2000 and has 24 M&Ms. We can report a meaningful absolute comparison for both variables: sample e is six years older than sample d and sample e has 307 more M&Ms than sample d. We also can report a meaningful relative comparison for the total number of M&Ms—there are\[\frac{331}{24} = 13.8 \times \nonumber\] as many M&Ms in sample e as in sample d—but we cannot report a meaningful relative comparison for year because a sample collected in 2000 is not\[\frac{2000}{1994} = 1.003 \times \nonumber\] older than a sample collected in 1994. Finally, the granularity of a numerical variable provides one more way to describe our data. For example, we can describe a numerical variable as discrete or continuous. A numerical variable is discrete if it can take on only specific values—typically, but not always, an integer value—between its limits; a continuous variable can take on any possible value within its limits. For the numerical data in Table \(\PageIndex{1}\), year, number yellow, and total M&Ms are discrete in that each is limited to integer values. The numerical variables weight (oz) and % red, on the other hand, are continuous variables. Note that weight is a continuous variable even if the device we use to measure weight yields discrete values.This page titled 2.1: Ways to Describe Data is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
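The absolute and relative comparisons described above are easy to reproduce in R; the following sketch, which is not part of the original text, simply repeats the arithmetic for samples d and e.

# values for samples d and e from the table in this section
total_d = 24      # total M&Ms in sample d (collected in 2000)
total_e = 331     # total M&Ms in sample e (collected in 1994)
year_d = 2000
year_e = 1994
# absolute comparisons are meaningful for ratio and for interval variables
total_e - total_d       # 307 more M&Ms in sample e
year_d - year_e         # sample e is 6 years older than sample d
# relative comparisons are meaningful only for ratio variables
total_e / total_d       # 13.8 times as many M&Ms in sample e
year_d / year_e         # 1.003, which is not a meaningful comparison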
2.2: Using R to Organize and Manipulate Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/02%3A_Types_of_Data/2.02%3A_Using_R_to_Organize_and_Manipulate_Data
The data in Table \(\PageIndex{1}\) should remind you of a data frame, a way of organizing data in R that we introduced in Chapter 1. Here we will learn how to create a data frame that holds the data in Table \(\PageIndex{1}\) and learn how we can make use of the data frame.

To create a data frame we begin by creating vectors for each of the variables. Note that letters is a constant in R that contains the 26 lower case letters of the Roman alphabet; here we are using just the first six letters for the bag ids.

bag_id = letters[1:6]
year = c(2006, 2006, 2000, 2000, 1994, 1994)
weight = c(1.74, 1.74, 0.80, 0.80, 10.0, 10.0)
type = c("peanut", "peanut", "plain", "plain", "plain", "plain")
number_yellow = c(2, 3, 1, 5, 56, 63)
percent_red = c(27.8, 4.35, 22.7, 20.8, 23.0, 21.9)
total = c(18, 23, 22, 24, 331, 333)
rank = c("sixth", "fourth", "fifth", "third", "second", "first")

To create the data frame, we use R’s data.frame() function, passing to it the names of our vectors, each of which must be of the same length. There is an option within this function to treat variables whose values are character strings as factors—another name for a categorical variable—by using the argument stringsAsFactors = TRUE. As the default value for this argument depends on your version of R, it is useful to make your choice explicit by including it in your code, as we do here.

mm_data = data.frame(bag_id, year, weight, type, number_yellow, percent_red, total, rank, stringsAsFactors = TRUE)
mm_data

  bag_id year weight   type number_yellow percent_red total   rank
1      a 2006   1.74 peanut             2       27.80    18  sixth
2      b 2006   1.74 peanut             3        4.35    23 fourth
3      c 2000   0.80  plain             1       22.70    22  fifth
4      d 2000   0.80  plain             5       20.80    24  third
5      e 1994  10.00  plain            56       23.00   331 second
6      f 1994  10.00  plain            63       21.90   333  first

If we examine the structure of this data set using R’s str() function, we see that bag_id, type, and rank are factors and that year, weight, number_yellow, percent_red, and total are numerical variables, assignments that are consistent with our earlier analysis of the data.

str(mm_data)

'data.frame': 6 obs. of 8 variables:
 $ bag_id       : Factor w/ 6 levels "a","b","c","d",..: 1 2 3 4 5 6
 $ year         : num 2006 2006 2000 2000 1994 ...
 $ weight       : num 1.74 1.74 0.8 0.8 10 10
 $ type         : Factor w/ 2 levels "peanut","plain": 1 1 2 2 2 2
 $ number_yellow: num 2 3 1 5 56 63
 $ percent_red  : num 27.8 4.35 22.7 20.8 23 21.9
 $ total        : num 18 23 22 24 331 333
 $ rank         : Factor w/ 6 levels "fifth","first",..: 5 3 1 6 4 2

Finally, we can use the function as.factor() to have R treat a numerical variable as a categorical variable, as we do here for year. Why we might wish to do this is a topic we will return to in later chapters.

mm_year_as_factor = data.frame(bag_id, as.factor(year), percent_red, total)
str(mm_year_as_factor)

'data.frame': 6 obs. of 4 variables:
 $ bag_id         : Factor w/ 6 levels "a","b","c","d",..: 1 2 3 4 5 6
 $ as.factor.year.: Factor w/ 3 levels "1994","2000",..: 3 3 2 2 1 1
 $ percent_red    : num 27.8 4.35 22.7 20.8 23 21.9
 $ total          : num 18 23 22 24 331 333

In Chapter 1.2 we learned how to retrieve individual rows or columns from a data frame and assign them to a new object. Here we learn how to use R’s more flexible subset() function to accomplish the same thing.
Here, for example, we retrieve only the data for plain M&Ms.

plain_mm = subset(mm_data, type == "plain")
plain_mm

  bag_id year weight  type number_yellow percent_red total   rank
3      c 2000    0.8 plain             1        22.7    22  fifth
4      d 2000    0.8 plain             5        20.8    24  third
5      e 1994   10.0 plain            56        23.0   331 second
6      f 1994   10.0 plain            63        21.9   333  first

Note that type == "plain" uses a relational operator to choose only those rows in which the variable type has the value plain. R's relational operators are == (equal to), != (not equal to), < (less than), > (greater than), <= (less than or equal to), and >= (greater than or equal to).

We can string conditions together using the logical & (and) operator.

mm_plain10 = subset(mm_data, (weight == 10.0 & type == "plain"))
mm_plain10

  bag_id year weight  type number_yellow percent_red total   rank
5      e 1994     10 plain            56        23.0   331 second
6      f 1994     10 plain            63        21.9   333  first

We also can narrow the number of variables returned using the subset() function’s select argument. In this example we exclude samples collected before the year 2000 and return only the year, the number of yellow M&Ms, and the percentage of red M&Ms.

mm_20xx = subset(mm_data, year >= 2000, select = c(year, number_yellow, percent_red))
mm_20xx

  year number_yellow percent_red
1 2006             2       27.80
2 2006             3        4.35
3 2000             1       22.70
4 2000             5       20.80

This page titled 2.2: Using R to Organize and Manipulate Data is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
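The subset() function accepts other combinations as well. The brief sketch below is not from the original text; it simply illustrates the logical | (or) operator and a negative selection that drops a column.

# keep bags that either are peanut M&Ms or contain more than 300 M&Ms
mm_peanut_or_big = subset(mm_data, type == "peanut" | total > 300)
mm_peanut_or_big
# the select argument also accepts a negative selection; here we return
# every column except rank
mm_no_rank = subset(mm_data, select = -c(rank))
mm_no_rank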
2.3: Exercises
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/02%3A_Types_of_Data/2.03%3A_Exercises
1. In Exercise 1 of Chapter 1 you created a data frame with the following information about the first 18 elements. (a) Setting aside name and symbol, which of the remaining variables are categorical or numerical? (b) For those variables that are categorical, which are nominal and which are ordinal? (c) For those variables that are numerical, which are ratio and which are interval? (d) For those variables that are numerical, which are discrete and which are continuous?

2. Use this link to download and save the spreadsheet marlybone_2018.csv. The data in this file gives the daily average level of NOX (the combined concentrations of NO and NO2) in µg/m3 and the daily average temperature in °C as recorded in 2018 at a roadside monitoring station located on Marylebone Road in Westminster, which is near Regent's Park, Madame Tussaud's Wax Museum, and Baker Street, the "home" of Sherlock Holmes. The data is made available by London Air, a website managed by King's College London that reports results from the continuous monitoring of air quality at hundreds of sites spread throughout the greater London area. As in most long-term monitoring projects, some data is missing for various reasons, such as equipment failure; these values appear in the spreadsheet as empty cells. If you wish, you can visit the London Air website here. (a) Use the read.csv() function to bring the data into R as a data frame and examine the data set's structure using the head() function. (b) Add a new column to the data frame that contains the running day number (January 1st is day 1 and December 31st is day 365). (c) Use the subset() function to create separate data frames for each month. (d) Save all of your data frames in a single .RData file so that it is available to you when working problems in other chapters. A sketch of one possible workflow appears at the end of this section.

3. Use this link to access a case study on data analysis and complete the five investigations included in Part I: Ways to Describe Data.

This page titled 2.3: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
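The following sketch outlines one possible workflow for Exercise 2. It is not part of the original text, and the column names in the spreadsheet are not specified here, so check the output of names() and adjust the code to match the actual file.

# (a) read the data and examine its structure
marylebone = read.csv("marlybone_2018.csv")
head(marylebone)
# (b) add a running day number; this assumes the file has one row per day
marylebone$day = seq_len(nrow(marylebone))
# (c) 2018 is not a leap year, so the month boundaries by day number are fixed
jan = subset(marylebone, day >= 1 & day <= 31)
feb = subset(marylebone, day >= 32 & day <= 59)
# ...repeat for the remaining months...
# (d) save the data frames to a single .RData file for use in later chapters
save(marylebone, jan, feb, file = "marylebone_2018.RData")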
3.1: Types of Visualizations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/03%3A_Visualizing_Data/3.01%3A_Types_of_Visualizations
Suppose we want to study the composition of 1.69-oz (47.9-g) packages of plain M&Ms. We obtain 30 bags of M&Ms (ten from each of three stores) and remove the M&Ms from each bag one-by-one, recording the number of blue, brown, green, orange, red, and yellow M&Ms. We also record the number of yellow M&Ms in the first five candies drawn from each bag, and record the actual net weight of the M&Ms in each bag. Table \(\PageIndex{1}\) summarizes the data collected on these samples. The bag id identifies the order in which the bags were opened and analyzed.

Having collected our data, we next examine it for possible problems, such as missing values (Did we forget to record the number of brown M&Ms in any of our samples?), for errors introduced when we recorded the data (Is the decimal point recorded incorrectly for any of the net weights?), or for unusual results (Is it really the case that this bag has only yellow M&Ms?). We also examine our data to identify interesting observations that we may wish to explore (It appears that most net weights are greater than the net weight listed on the individual packages. Why might this be? Is the difference significant?). When our data set is small we usually can identify possible problems and interesting observations without much difficulty; however, for a large data set, this becomes a challenge. Instead of trying to examine individual values, we can look at our results visually. While it may be difficult to find a single, odd data point when we have to individually review 1000 samples, it often jumps out when we look at the data using one or more of the approaches we will explore in this chapter.

A dot plot displays data for one variable, with each sample’s value plotted on the x-axis. The individual points are organized along the y-axis with the first sample at the bottom and the last sample at the top. Figure \(\PageIndex{1}\) shows a dot plot for the number of brown M&Ms in the 30 bags of M&Ms from Table \(\PageIndex{1}\). The distribution of points appears random as there is no correlation between the sample id and the number of brown M&Ms. We would be surprised if we discovered that the points were arranged from the lower-left to the upper-right, as this implies that the order in which we open the bags determines whether they have many or few brown M&Ms.

A dot plot provides a quick way to give us confidence that our data are free from unusual patterns, but at the cost of space because we use the y-axis to include the sample id as a variable. A stripchart uses the same x-axis as a dot plot, but does not use the y-axis to distinguish between samples. Because all samples with the same number of brown M&Ms will appear in the same place—making it impossible to distinguish them from each other—we stack the points vertically to spread them out, as shown in Figure \(\PageIndex{2}\). Both the dot plot in Figure \(\PageIndex{1}\) and the stripchart in Figure \(\PageIndex{2}\) suggest that there is a smaller density of points at the lower limit and the upper limit of our results. We see, for example, that there is just one bag each with 8, 16, 18, 19, 20, and 21 brown M&Ms, but there are six bags each with 13 and 17 brown M&Ms.

Because a stripchart does not use the y-axis to provide meaningful categorical information, we can easily display several stripcharts at once. Figure \(\PageIndex{3}\) shows this for the data in Table \(\PageIndex{1}\). Instead of stacking the individual points, we jitter them by applying a small, random offset to each point. Among the things we learn from this stripchart are that only brown and yellow M&Ms have counts of greater than 20 and that only blue and green M&Ms have counts of three or fewer M&Ms.

The stripchart in Figure \(\PageIndex{3}\) is easy for us to examine because the number of samples, 30 bags, and the number of M&Ms per bag is sufficiently small that we can see the individual points. As the density of points becomes greater, a stripchart becomes less useful. A box and whisker plot provides a similar view but focuses on the data in terms of the range of values that encompass the middle 50% of the data. Figure \(\PageIndex{4}\) shows the box and whisker plot for brown M&Ms using the data in Table \(\PageIndex{1}\). The 30 individual samples are superimposed as a stripchart. The central box divides the x-axis into three regions: bags with fewer than 13 brown M&Ms (seven samples), bags with between 13 and 17 brown M&Ms (19 samples), and bags with more than 17 brown M&Ms (four samples). The box's limits are set so that it includes at least the middle 50% of our data. In this case, the box contains 19 of the 30 bags (63%), because moving either end of the box toward the middle results in a box that includes less than 50% of the samples. The difference between the box's upper limit and its lower limit is called the interquartile range (IQR). The thick line in the box is the median, or middle value (more on this and the IQR in the next chapter). The dashed lines at either end of the box are called whiskers, and they extend to the largest or the smallest result that is within \(\pm 1.5 \times \text{IQR}\) of the box's right or left edge, respectively.

Because a box and whisker plot does not use the y-axis to provide meaningful categorical information, we can easily display several plots in the same frame. Figure \(\PageIndex{5}\) shows this for the data in Table \(\PageIndex{1}\). Note that when a value falls outside of a whisker, as is the case here for yellow M&Ms, it is flagged by displaying it as an open circle.

One use of a box and whisker plot is to examine the distribution of the individual samples, particularly with respect to symmetry. With the exception of the single sample that falls outside of the whiskers, the distribution of yellow M&Ms appears symmetrical: the median is near the center of the box and the whiskers extend equally in both directions. The distribution of the orange M&Ms is asymmetrical: half of the samples have 4–7 M&Ms (just four possible outcomes) and half have 7–15 M&Ms (nine possible outcomes), suggesting that the distribution is skewed toward higher numbers of orange M&Ms (see Chapter 5 for more information about the distribution of samples).

Figure \(\PageIndex{6}\) shows box-and-whisker plots for yellow M&Ms grouped according to the store where the bags of M&Ms were purchased. Although the box and whisker plots are quite different in terms of the relative sizes of the boxes and the relative length of the whiskers, the dot plots suggest that the distribution of the underlying data is relatively similar in that most bags contain 12–18 yellow M&Ms and just a few bags deviate from these limits. These observations are reassuring because we do not expect the choice of store to affect the composition of bags of M&Ms. If we saw evidence that the choice of store affected our results, then we would look more closely at the bags themselves for evidence of a poorly controlled variable, such as type (Did we accidentally purchase bags of peanut butter M&Ms from one store?) or the product’s lot number (Did the manufacturer change the composition of colors between lots?).

Although a dot plot, a stripchart, and a box-and-whisker plot provide some qualitative evidence of how a variable’s values are distributed—we will have more to say about the distribution of data in Chapter 5—they are less useful when we need a more quantitative picture of the distribution. For this we can use a bar plot that displays a count of each discrete outcome. Figure \(\PageIndex{7}\) shows bar plots for orange and for yellow M&Ms using the data in Table \(\PageIndex{1}\). Here we see that the most common number of orange M&Ms per bag is four, which is also the smallest number of orange M&Ms per bag, and that there is a general decrease in the number of bags as the number of orange M&Ms per bag increases. For the yellow M&Ms, the most common number of M&Ms per bag is 16, which falls near the middle of the range of yellow M&Ms.

A bar plot is a useful way to look at the distribution of discrete results, such as the counts of orange or yellow M&Ms, but it is not useful for continuous data where each result is unique. A histogram, in which we display the number of results that fall within a sequence of equally spaced bins, provides a view that is similar to that of a bar plot but that works with continuous data. Figure \(\PageIndex{8}\), for example, shows a histogram for the net weights of the 30 bags of M&Ms in Table \(\PageIndex{1}\). Individual values are shown by the vertical hash marks at the bottom of the histogram.

This page titled 3.1: Types of Visualizations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
3.2: Using R to Visualize Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/03%3A_Visualizing_Data/3.02%3A_Using_R_to_Visualize_Data
One of the strengths of R is the ease with which you can plot data and the quality of the plots you can create. R has two pre-installed graphing packages: one is the graphics package, which is available to you when you launch R, and the second is the lattice package tat you can bring into your session by running library(lattice)in the console—and there are many additional graphics packages, such as ggplot2, developed by others. As our interest in this textbook is making R quickly and easily accessible, we will rely on R’s base graphics. See this chapter's resources for a list of other graphing packages.This section uses the M&M data in Table 1 of Chapter 3.1. You can download a copy of the data as a .csv spreadsheet using this link, and save it in your working directory.Before we can create a visualization, we need to make our data available to R. The code below uses the read.csv()function to read in the file MandM.csv as a data frame with the name mm_data. The text"MandM.csv"assumes the file is located in your working directory.mm_data = read.csv("MandM.csv") To create a dot plot in R we use the function dotchart(x,...) where x is the object that holds our data, typically a vector or a single column from a data frame, and ... is a list of optional arguments that affects what we see. In the example below, pch sets the plotting symbol (19 is an solid circle), col is the color assigned to the plotting symbol, labels identifies the samples by name along the y-axis, xlab assigns a label to the x-axis, ylab assigns a label to the y-axis, and cex controls the size of the labels and points. See the last section of this chapter for a more general introduction to creating and displaying plots using R’s base graphics.dotchart(mm_data$brown, pch = 19, col = "brown", labels = mm_data$bag, xlab = "number of brown M&Ms", ylab = "bag id", cex = 0.5)To create a stripchart in R we use the function stripchart(x, ...)where x is the object that holds our data, typically a vector or a column from a data frame, and ... is a list of optional arguments that affects what we see. In the example below,pchsets the plotting symbol (19 is an solid circle), col is the color assigned to the plotting symbol, method defines how points with the same value for x are displayed on the y-axis, in this case stacking them one above the other by an amount defined by an offset, and cex controls the size of the individual data points.stripchart(mm_data$brown, pch = 19, col = "brown", method = "stack", offset = 0.5, cex = 0.6, xlab = "number of brown M&Ms")Because a stripchart does not use the y-axis to provide information, we can easily display several stripcharts at once, as shown in the following example, where we usemm_data[3:8]to identify the data for each stripchart and col to assign a color to each stripchart. Instead of stacking the individual points, they are jittered by applying a small, random offset to each point using jitter. The parameter las forces the labels to be displayed horizontally (las = 0 aligns labels parallel to the axis, las = 1 aligns labels horizontally, las = 2 aligns labels perpendicular to the axis, and las = 4 aligns labels vertically).stripchart(mm_data[3:8], pch = 19, cex = 0.5, xlab = "number of M&MS", col = c("blue", "brown", "green", "orange", "red", "yellow"), method = "jitter", jitter = 0.2, las = 1)To create a box-and-whisker plot in R we use the function boxplot(x,...)where x is the object that holds our data, typically a vector or a column from a data frame, and ... 
is a list of optional arguments that affects what we see. In the example below, the option horizontal = TRUE overrides the default, which is to display a vertical boxplot, and range specifies the length of the whisker as a multiple of the IQR. In this example, we also show the individual values using stripchart() with the option add = TRUE to overlay the stripchart on the boxplot.boxplot(mm_data$brown, horizontal = TRUE, range = 1.5, xlab = "number of brown M&Ms") stripchart(mm_data$brown, method = "jitter", jitter = 0.2, add = TRUE, col = "brown", pch = 19)Because a box and whisker plot does not use the y-axis to provide information, we can easily display several plots at once, as shown in the following example, where we use mm_data[3:8] to identify the data for each plot and col to assign a color to each plot.boxplot(mm_data[3:8], xlab = "number of M&MS", las = 1, horizontal = TRUE, col = c("blue", "brown", "green", "orange", "red", "yellow"))In the example below, the code mm_data$yellow ~ mm_data$store is a formula, which takes the general form of y as a function of x; in this case, it uses the data in the column named store to divide the data into three groups. The option outline = FALSE in the boxplot() function suppresses the function’s default to plot an open circle for each sample that lies outside of the whiskers; by doing this we avoid plotting these points twice.boxplot(mm_data$yellow ~ mm_data$store, horizontal = TRUE, las = 1, col = "yellow", outline = FALSE, xlab = "number of yellow M&Ms")stripchart(mm_data$yellow ~ mm_data$store, add = TRUE, pch = 19, method = "jitter", jitter = 0.2)See Chapter 8.5 for a discussion of the use of formulas in R.To create a bar plot in R we use the function barplot(x,...) where x is the object that holds our data, typically a vector or a column from a data frame and ... is a list of optional arguments that affects what we see. Unlike the previous plots, we cannot pass to barplot() our raw data that consists of the number of orange M&Ms in each bag. Instead, we have to provide the data in the form of a table that gives the number of bags that contain 0, 1, 2, . . . up to the maximum number of orange M&Ms in any bag; we accomplish this using the tabulate() function. Because tabulate() only counts the frequency of positive integers, it will ignore any bags that do not have any orange M&Ms; adding one to each count by using mm_data$orange + 1 ensures they are counted. The argument names.arg allows us to provide categorical labels for the x-axis (and correct for the fact that we increased each index by 1).orange_table = tabulate(mm_data$orange + 1) barplot(orange_table, col = "orange", names.arg = seq(0, max(mm_data$orange), 1), xlab = "number of orange M&Ms", ylab = "number of bags")To create a histogram in R we use the function hist(x,...) where x is the object that holds our data, typically a vector or a column from a data frame, and ... is a list of optional arguments that affects what we see. In the example below, the option main = NULL suppresses the placing of a title above the plot, which otherwise is included by default. The option right = TRUE means the right-most value of a bin is included in that bin. Finally, although a histogram shows how individual values are distributed, it does not show the individual values themselves. 
The rug(x) function adds tick marks along the x-axis that show each individual value.hist(mm_data$net_weight, col = "lightblue", xlab = "net weight of M&Ms (oz)", right = TRUE, main = NULL)rug(mm_data$net_weight, lwd = 1.5)By default, R uses an algorithm to determine how to set the size of bins. As shown in the following example, we can use the option breaks to specify the values of x where one bin ends and the next bin begins.hist(mm_data$net_weight, col = "lightblue", xlab = "net weight of M&Ms (oz)", breaks = seq(46, 52, 0.5), right = TRUE, main = NULL)rug(mm_data$net_weight, lwd = 1.5)This page titled 3.2: Using R to Visualize Data is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
3.3: Creating Plots From Scratch in R Using Base Graphics
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/03%3A_Visualizing_Data/3.03%3A_Creating_Plots_From_Scratch_in_R_Using_Base_Graphics
As we saw in the last section, the functions to create dot charts, stripcharts, boxplots, barplots, and histograms have arguments that we can use to alter the appearance of the function’s output. For example, here is the full list of arguments available when we use dotchart()that control what the plot shows.dotchart(x, labels = NULL, groups = NULL, gdata = NULL, cex = par("cex"), pt.cex = cex, pch = 21, gpch = 21, bg = par("bg"), color = par("fg"), gcolor = par("fg"), lcolor = "gray", xlim = range(x[is.finite(x)]), main = NULL, xlab = NULL, ylab = NULL, ...)Each of the arguments has a default value, which means we need not specify the value for an argument unless we wish to change its value, as we did when we set pch to 19. The final argument of ... indicates that we can change any of a long list of graphical parameters that control what we see when we use dotchart.One of the most common, and most important, visualizations in analytical chemistry is a scatterplot in which we are interested in the relationship, if any, between two measurement by plotting the values for one variable along the x-axis and the values for the other variable along y-axis. For this exercise, we will use some data from the Puget Sound Data Hoard that gives the mass and the diameter for 816 M&Ms obtained from a 14.0-oz bag of plain M&Ms, a 12.7-oz bag of peanut M&Ms, and a 12.7-oz bag of peanut butter M&Ms. Let’s read the data into R and store it in a data frame with the name psmm_data. You can download a copy of the data using this link saving it in your working directory.psmm_data = read.csv("data/PugetSoundM&MData.csv")We might expect that as the diameter of an M&M increases so will the mass of the M&M. We might also expect that the relationship between diameter and mass may depend on whether the M&Ms are plain, peanut, or peanut butter. So that we can access data for each type of M&M, let’s use the which() function to create vectors that designate the row numbers for each of the three types of M&Ms.pb_id = which(psmm_data$type == "peanut butter") plain_id = which(psmm_data$type == "plain") peanut_id = which(psmm_data$type == "peanut")Typically we are interested in how one variable affects the other variable. We call the former the independent variable and place it on the x-axis and we call the latter the dependent variable and place it on the y-axis. Here we will use diameter as the independent variable and mass as the dependent variable. To create a scatterplot for the plain M&Ms we use the function plot(x, y) where x is the data to plot on the x-axis and y is the data to plot on the y-axis.plot(x = psmm_data$diameter[plain_id], y = psmm_data$mass[plain_id])Although our scatterplot shows that the mass of a plain M&M increases as its diameter increases, it is not a particularly attractive plot. In addition to specifying x and y, the plot function allows us to pass additional arguments to customize our plot; here are some of these optional arguments:type = “option”. This argument specifies how points are displayed; there are a number of options, but the most useful are “p” for points (this is the default), “l” for lines without points, “b” for both points and lines that do not touch the points, “o” for points and lines that pass through the points, “h” for histogram-like vertical lines, and “s” for stair steps; use “n” if you wish to suppress the points.pch = number. This argument selects the symbol used to plot the data, with the number assigned to each symbol shown below. 
The default option is 1, or an open circle. Symbols 15–20 are filled using the color of the symbol’s boundary, and symbols 21–25 can take a background color that is different from the symbol’s boundary. See later in this document for more details about setting colors. The figure below shows the different options.

# code adapted from http://www.sthda.com/english/wiki/r-...available-in-r
oldPar = par()
par(font = 2, mar = c(0.5, 0, 0, 0))
y = rev(c(rep(1, 6), rep(2, 5), rep(3, 5), rep(4, 5), rep(5, 5)))
x = c(rep(1:5, 5), 6)
plot(x, y, pch = 0:25, cex = 1.5, ylim = c(1, 5.5), xlim = c(1, 6.5), axes = FALSE, xlab = "", ylab = "", bg = "blue")
text(x, y, labels = 0:25, pos = 3)
par(mar = oldPar$mar, font = oldPar$font)

lty = number. This argument specifies the type of line to draw; the options are 1 for a solid line (this is the default), 2 for a dashed line, 3 for a dotted line, 4 for a dot-dash line, 5 for a long-dash line, and 6 for a two-dash line.lwd = number. This argument sets the width of the line. The default is 1 and any other entry simply scales the width relative to the default; thus lwd = 2 doubles the width and lwd = 0.5 cuts the width in half.bty = “option”. This argument specifies the type of box to draw around the plot; the options are “o” to draw all four sides (this is the default), “l” to draw on the left side and the bottom side only, “7” to draw on the top side and the right side only, “c” to draw all but the right side, “u” to draw all but the top side, “]” to draw all but the left side, and “n” to omit all four sides.axes = logical. This argument indicates whether the axes are drawn (TRUE) or not drawn (FALSE); the default is TRUE.xlim = c(begin, end). This argument sets the limits for the x-axis, overriding the default limits set by the plot() command.ylim = c(begin, end). This argument sets the limits for the y-axis, overriding the default limits set by the plot() command.xlab = “text”. This argument specifies the label for the x-axis, overriding the default label set by the plot() command.ylab = “text”. This argument specifies the label for the y-axis, overriding the default label set by the plot() command.main = “text”. This argument specifies the main title, which is placed above the plot, overriding the default title set by the plot() command.sub = “text”. This argument specifies the subtitle, which is placed below the plot, overriding the default subtitle set by the plot() command.cex = number. This argument controls the relative size of the symbols used to plot points. The default is 1 and any other entry simply scales the size relative to the default; thus cex = 2 doubles the size and cex = 0.5 cuts the size in half.cex.axis = number. This argument controls the relative size of the text used for the scale on both axes; see the entry above for cex for more details.cex.lab = number. This argument controls the relative size of the text used for the label on both axes; see the entry above for cex for more details.cex.main = number. This argument controls the relative size of the text used for the plot’s main title; see the entry above for cex for more details.cex.sub = number. This argument controls the relative size of the text used for the plot’s subtitle; see the entry above for cex for more details.col = number or “string”. This argument controls the color of the symbols used to plot points. There are 657 available colors, for which the default is “black” or 24. You can see a list of colors (number and text string) by typing colors() in the console.col.axis = number or “string”. 
This argument controls the color of the text used for the scale on both axes; see the entry above for col for more details.col.lab = number or “string”. This argument controls the color of the text used for the label on both axes; see the entry above for col for more details.col.main = number or “string”. This argument controls the color of the text used for the plot’s main title; see the entry above for col for more details.col.sub = number or “string”. This argument controls the color of the text used for the plot’s subtitle; see the entry above for col for more details.bg = number or “string”. This argument sets the background color for the plot symbols 21–25; see the entries above for pch and for col for more details.Let’s use some of these arguments to improve our scatterplot by adding some color to and adjusting the size of the symbols used to plot the data, and by adding a title and some more informative labels for the two axes.plot(x = psmm_data$diameter[plain_id], y = psmm_data$mass[plain_id], xlab = "diameter of M&Ms", ylab = "mass of M&Ms", main = "Diameter and Mass of Plain M&Ms", pch = 19, cex = 0.5, col = "blue")We can modify an existing plot in a number of useful ways, such as adding a new set of data, adding a reference line, adding a legend, adding text, and adding a set of grid lines; here are some of the things we can do:points(x, y, . . . ). This command is identical to the plot() command, but overlays the new points on the current plot instead of first erasing the previous plot. Note: the points() command can not re-scale the axes; thus, you must ensure that your original plot—created using the plot() command—has x-axis and y-axis limits that meet your needs.abline(h = number, . . . ). This command adds a horizontal line at y = number with the line’s color, type, and size set using the optional arguments.abline(v = number, . . . ). This command adds a vertical line at x = number with the line’s color, type, and size set using the optional arguments.abline(b = number, a = number, . . . ). This command adds a diagonal line defined by a slope (b) and a y-intercept (a); the line’s color, type, and size are set using the optional arguments. As we will see in Chapter 8, this is a useful command for displaying the results of a linear regression.legend(location, legend, . . . ). This command adds a legend to the current plot. The location is specified in one of two ways:• by giving the x and y coordinates for the legend’s upper-left corner using x = number and y = number)• by using location = “keyword” where the keyword is one of “topleft”, “top”, “topright”, “right”, “bottomright”, “bottom”, “bottomleft”, or “left”; the optional argument inset = numbermoves the legend in from the margin when using a keyword (it takes a value from 0 to 1 as a fraction of the plot’s area; the default is 0)The legend is added as a vector of character strings (one for each item in the legend), and any accompanying formatting, such as plot symbols, lines, or colors, are passed along as vectors of the same length; look carefully at the example at the end of this section to see how this command works.text(location, label, . . . ). This command adds the text given by “label” to the current plot. The location is specified by providing values for x and y using x = number and y = number. By default, the text is centered at its location; to set the text so that it is left-justified (which is easier to work with), add the argument adj = c(0, NA).grid(col, lty, lwd). 
This command adds a set of grid lines to the plot using the color, line type, and line width defined by “col”, “lty”, and “lwd”, respectively.

Here is an example of a figure in which we show how the diameter and mass vary as a function of the type of M&Ms, add a legend, add a grid, and add some text that identifies the source of the data. Note the use of the functions max and min to identify the limits needed to display results for all of the data.

# determine minimum and maximum values for diameter and mass so that we can
# set limits for the x-axis and y-axis that will allow plotting of all data
xmax = max(psmm_data$diameter)
xmin = min(psmm_data$diameter)
ymax = max(psmm_data$mass)
ymin = min(psmm_data$mass)

# create the initial plot using data for plain M&Ms; xlim and ylim values
# ensure the plot window will allow plotting of all data
plot(x = psmm_data$diameter[plain_id], y = psmm_data$mass[plain_id], xlab = "diameter of M&Ms", ylab = "mass of M&Ms", main = "Diameter and Mass of M&Ms", pch = 19, cex = 0.65, col = "red", xlim = c(xmin, xmax), ylim = c(ymin, ymax))

# add the data for the peanut and peanut butter M&Ms using points()
points(x = psmm_data$diameter[peanut_id], y = psmm_data$mass[peanut_id], pch = 18, col = "brown", cex = 0.65)
points(x = psmm_data$diameter[pb_id], y = psmm_data$mass[pb_id], pch = 17, col = "blue", cex = 0.65)

# add a legend, grid, and explanatory text
legend(x = "topleft", legend = c("plain", "peanut", "peanut butter"), col = c("red", "brown", "blue"), pch = c(19, 18, 17), bty = "n")
grid(col = "gray")
text(x = 16.5, y = 1, label = "data from University of Puget Sound Data Hoard", cex = 0.5)

Our new plot shows that the individual M&Ms are reasonably well separated from each other in the space created by the variables diameter and mass, although a few M&Ms encroach into the space occupied by other types of M&Ms. We also see that the distribution of plain M&Ms is much more compact than for peanut and peanut butter M&Ms, which makes sense given the likely variability in the size of individual peanuts and the softer consistency of peanut butter.

This page titled 3.3: Creating Plots From Scratch in R Using Base Graphics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
3.4: Exercises
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/03%3A_Visualizing_Data/3.04%3A_Exercises
1. When copper metal and powdered sulfur are placed in a crucible and ignited, the product is a sulfide with an empirical formula of CuxS. The value of x is determined by weighing the Cu and the S before ignition and finding the mass of CuxS when the reaction is complete (any excess sulfur leaves as SO2). The following table shows the Cu/S ratios from 62 such experiments (note that the values are organized from smallest-to-largest by rows). A copy of the data is available as a .csv file with the data organized in a single column. (a) Construct a boxplot for this data and comment on your results. (b) Construct a histogram and comment on your results.

2. Mizutani, Yabuki, and Asai developed an electrochemical method for analyzing l-malate. As part of their study they analyzed a series of beverages using both their method and a standard spectrophotometric procedure based on a clinical kit purchased from Boehringer Scientific. The following table summarizes their results. All values are in ppm. Construct a scatterplot of this data, placing values for the electrochemical method on the x-axis and values for the spectrophotometric method on the y-axis. Use different symbols for the four types of beverages. The data in this problem are from Mizutani, F.; Yabuki, S.; Asai, M. Anal. Chim. Acta 1991, 245, 145–150. A copy of the data is available as a .csv file.

3. Ten laboratories were asked to determine an analyte's concentration in three standard test samples. Following are the results, in μg/ml. (a) Construct a single plot that contains separate stripcharts for each of the three samples. (b) Construct a single plot that contains separate boxplots for each of the three samples. The data in this problem are adapted from Steiner, E. H. “Planning and Analysis of Results of Collaborative Tests,” in Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists: Washington, D. C., 1975. A copy of the data is available as a .csv file.

4. Real-time quantitative PCR is an analytical method for determining trace amounts of DNA. During the analysis, each cycle doubles the amount of DNA. A probe species that fluoresces in the presence of DNA is added to the reaction mixture and the increase in fluorescence is monitored during the cycling. The cycle threshold, Ct, is the cycle when the fluorescence exceeds a threshold value. The data in the following table shows Ct values for three samples using real-time quantitative PCR. Each sample was analyzed 18 times. Use two or more methods to analyze this data visually and write a brief report on your conclusions. The data in this problem is from Burns, M. J.; Nixon, G. J.; Foy, C. A.; Harris, N. BMC Biotechnol. 2005, 5:31 (open access publication). A copy of the data is available as a .csv file.

5. The file problem3_5.csv contains data for 1061 United States pennies organized into three columns: the year the penny was minted, the penny's mass (to four decimal places), and the location where the penny was minted (D = Denver and P = Philadelphia). Subset the data by year into three groups. Plot separate histograms for the masses of the pennies in each group and comment on your results. The data in this problem was collected by Jordan Katz at Denison University and is available at the Analytical Sciences Digital Library's Active Learning website.

6. Use the element data you created in Exercise 1.3.1 to create several visualizations of your choosing. At least one of your visualizations should be a scatterplot and one should be a boxplot.

7. Use the data set you created in Exercise 2.3.2 on the daily average NOX concentrations and daily average temperatures recorded at a roadside monitoring station located on Marylebone Road in Westminster. Use this data to prepare a scatterplot that shows the daily average NOX concentrations for January on the y-axis and the daily average temperature for January on the x-axis. Add to this plot a second scatterplot that shows the daily average NOX concentrations for July on the y-axis and the daily average temperature for July on the x-axis. Comment on your results. A sketch of one possible approach appears at the end of this section.

8. Use this link to access a case study on data analysis and complete the nine investigations included in Part II: Ways to Visualize Data.

This page titled 3.4: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
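The following sketch shows one possible approach to Exercise 7. It is not part of the original text; the data frame names jan and july and the column names nox and temp are assumptions, so adjust them to match the data frames you created in Exercise 2.3.2.

# plot January data first, with axis limits chosen to accommodate both months
plot(x = jan$temp, y = jan$nox, pch = 19, col = "blue",
     xlab = "daily average temperature (degrees C)",
     ylab = "daily average NOX (ug/m3)",
     xlim = range(c(jan$temp, july$temp), na.rm = TRUE),
     ylim = range(c(jan$nox, july$nox), na.rm = TRUE))
# overlay the July data using points()
points(x = july$temp, y = july$nox, pch = 17, col = "red")
legend(x = "topright", legend = c("January", "July"),
       pch = c(19, 17), col = c("blue", "red"), bty = "n")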
4.1: Ways to Summarize Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/04%3A_Summarizing_Data/4.01%3A_Ways_to_Summarize_Data
In Chapter 3 we used data collected from 30 bags of M&Ms to explore different ways to visualize data. In this chapter we consider several ways to summarize data using the net weights of the same bags of M&Ms. Here is the raw data.

Without completing any calculations, what conclusions can we make by just looking at this data? Here are a few:

Both visualizations provide a good qualitative picture of the data, suggesting that the individual results are scattered around some central value with more results closer to that central value than at a distance from it. Neither visualization, however, describes the data quantitatively. What we need is a convenient way to summarize the data by reporting where the data is centered and how varied the individual results are around that center.

There are two common ways to report the center of a data set: the mean and the median.

The mean, \(\overline{Y}\), is the numerical average obtained by adding together the results for all n observations and dividing by the number of observations

\[\overline{Y} = \frac{ \sum_{i = 1}^n Y_{i} } {n} = \frac{49.287 + 48.870 + \cdots + 48.317} {30} = 48.980 \text{ g} \nonumber\]

The median, \(\widetilde{Y}\), is the middle value after we order our observations from smallest-to-largest, as we show here for our data. If we have an odd number of samples, then the median is simply the middle value, or

\[\widetilde{Y} = Y_{\frac{n + 1}{2}} \nonumber\]

where n is the number of samples. If, as is the case here, n is even, then

\[\widetilde{Y} = \frac {Y_{\frac{n}{2}} + Y_{\frac{n}{2}+1}} {2} = \frac {48.692 + 48.777}{2} = 48.734 \text{ g} \nonumber\]

When our data has a symmetrical distribution, as we believe is the case here, then the mean and the median will have similar values.

There are five common measures of the variation of data about its center: the variance, the standard deviation, the range, the interquartile range, and the median absolute deviation.

The variance, \(s^2\), is an average squared deviation of the individual observations relative to the mean

\[s^{2} = \frac { \sum_{i = 1}^n \big(Y_{i} - \overline{Y} \big)^{2} } {n - 1} = \frac { \big(49.287 - 48.980\big)^{2} + \cdots + \big(48.317 - 48.980\big)^{2} } {30 - 1} = 2.052 \nonumber\]

and the standard deviation, s, is the square root of the variance, which gives it the same units as the mean.

\[s = \sqrt{\frac { \sum_{i = 1}^n \big(Y_{i} - \overline{Y} \big)^{2} } {n - 1}} = \sqrt{\frac { \big(49.287 - 48.980\big)^{2} + \cdots + \big(48.317 - 48.980\big)^{2} } {30 - 1}} = 1.432 \nonumber\]

The range, w, is the difference between the largest and the smallest value in our data set.

\[w = 51.730 \text{ g} - 46.405 \text{ g} = 5.325 \text{ g} \nonumber\]

The interquartile range, IQR, is the difference between the median of the bottom 25% of observations and the median of the top 25% of observations; that is, it provides a measure of the range of values that spans the middle 50% of observations. There is no single, standard formula for calculating the IQR, and different algorithms yield slightly different results. We will adopt the algorithm described here:

1. Divide the sorted data set in half; if there is an odd number of values, then remove the median for the complete data set. For our data, the lower half is

and the upper half is

2. Find \(F_L\), the median for the lower half of the data, which for our data is 48.196 g.

3. Find \(F_U\), the median for the upper half of the data, which for our data is 50.037 g.

4. The IQR is the difference between \(F_U\) and \(F_L\).

\[F_{U} - F_{L} = 50.037 \text{ g} - 48.196 \text{ g} = 1.841 \text{ g} \nonumber\]

The median absolute deviation, MAD, is the median of the absolute deviations of each observation from the median of all observations. To find the MAD for our set of 30 net weights, we first subtract the median from each sample in Table \(\PageIndex{1}\). Next we take the absolute value of each difference and sort them from smallest-to-largest. Finally, we report the median for these sorted values as

\[\frac{0.7425 + 0.8935}{2} = 0.818 \nonumber \]

A good question to ask is why we might desire more than one way to report the center of our data and the variation in our data about the center. Suppose that the result for the last of our 30 samples was reported as 483.17 instead of 48.317. Whether this is an accidental shifting of the decimal point or a true result is not relevant to us here; what matters is its effect on what we report. Here is a summary of the effect of this one value on each of our ways of summarizing our data.

Note that the mean, the variance, the standard deviation, and the range are very sensitive to the change in the last result, but the median, the IQR, and the MAD are not. The median, the IQR, and the MAD are considered robust statistics because they are less sensitive to an unusual result; the others are, of course, non-robust statistics. Both types of statistics have value to us, a point we will return to from time-to-time. A short sketch showing how to explore this effect in R appears at the end of this section.

This page titled 4.1: Ways to Summarize Data is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
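The following sketch, which is not part of the original text, mirrors the calculations described above and then repeats them after replacing the last net weight with 483.17; it assumes the 30 net weights are stored in a vector named net_weight.

# summary statistics computed as described in this section
net_sorted = sort(net_weight)
n = length(net_sorted)
lower_half = net_sorted[1:(n/2)]            # n is even for this data set
upper_half = net_sorted[(n/2 + 1):n]
f_l = median(lower_half)                    # 48.196 g
f_u = median(upper_half)                    # 50.037 g
iqr = f_u - f_l                             # 1.841 g
mad_value = median(abs(net_weight - median(net_weight)))   # 0.818 g

# now replace the last value (48.317) with 483.17 and compare
net_bad = net_weight
net_bad[30] = 483.17
mean(net_weight); mean(net_bad)       # the mean changes dramatically
sd(net_weight); sd(net_bad)           # so does the standard deviation
median(net_weight); median(net_bad)   # the robust statistics barely change
median(abs(net_bad - median(net_bad)))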
4.2: Using R to Summarize Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/04%3A_Summarizing_Data/4.02%3A_Using_R_to_Summarize_Data
One of R’s strengths is its Stats package, which provides access to a rich body of tools for analyzing data. The package is part of R’s base installation and is available whenever you use R without the need to use library() to make it available. Almost all of the statistical functions we will use in this textbook are included in the Stats package.

This section uses the M&M data in Table 1 of Chapter 3.1. You can download a copy of the data as a .csv spreadsheet using this link. Before we can summarize our data, we need to make it available to R. The code below uses the read.csv() function to read in the data from the file MandM.csv as a data frame. The text "MandM.csv" assumes the file is located in your working directory.

mm_data = read.csv("MandM.csv")

To report the mean of a data set we use the function mean(x) where x is the object that holds our data, typically a vector or a single column from a data frame. An important argument to this, and to many other functions, is how to handle missing or NA values. The default is to keep them, which returns a value of NA when we try to calculate the mean. This is a reasonable default as it requires us to make note of the missing values and to set na.rm = TRUE if we wish to remove them from the calculation. As our vector of data is not missing any values, we do not need to include na.rm = TRUE here, but we do so to illustrate its importance.

mean(mm_data$net_weight, na.rm = TRUE)

48.9803

To report the median of a data set we use the function median(x) where x is the object that holds our data, typically a vector or a single column from a data frame.

median(mm_data$net_weight, na.rm = TRUE)

48.7345

To report the variance of a data set we use the function var(x) where x is the object that holds our data, typically a vector or a single column from a data frame.

var(mm_data$net_weight, na.rm = TRUE)

2.052068

To report the standard deviation we use the function sd(x) where x is the object that holds our data, typically a vector or a single column from a data frame.

sd(mm_data$net_weight, na.rm = TRUE)

1.432504

To report the range we have to be creative as R’s range() function does not directly report the range. Instead, it returns the minimum as its first value and the maximum as its second value, which we can extract using the bracket operator and then use to compute the range.

range(mm_data$net_weight, na.rm = TRUE)[2] - range(mm_data$net_weight, na.rm = TRUE)[1]

5.325

Another approach for calculating the range is to use R's max() and min() functions.

max(mm_data$net_weight) - min(mm_data$net_weight)

5.325

To report the interquartile range we use the function IQR(x) where x is the object that holds our data, typically a vector or a single column from a data frame. The function has nine different algorithms for calculating the IQR, identified using type as an argument. To obtain an IQR equivalent to that generated by R’s boxplot() function, we use type = 5 for an even number of values and type = 7 for an odd number of values.

IQR(mm_data$net_weight, na.rm = TRUE, type = 5)

1.841

To find the median absolute deviation we use the function mad(x) where x is the object that holds our data, typically a vector or a single column from a data frame.
The function includes a scaling constant, the default value for which does not match our description for calculating the MAD; the argument constant = 1 gives a result that is consistent with our description of the MAD.

mad(mm_data$net_weight, na.rm = TRUE, constant = 1)

0.818

This page titled 4.2: Using R to Summarize Data is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
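Because each of the functions above accepts a simple numeric vector, it is easy to see directly in R why the previous section distinguished between robust and non-robust statistics. The following sketch is not part of the original example; it assumes the net weights are available as mm_data$net_weight, as read in above, copies them, and replaces the value 48.317 with the erroneous 483.17 discussed in the previous section.

# a minimal sketch: compare non-robust and robust statistics before and after
# a single decimal-point error (483.17 in place of 48.317); assumes exactly
# one of the net weights equals 48.317
weights_ok = mm_data$net_weight
weights_bad = weights_ok
weights_bad[weights_bad == 48.317] = 483.17
# the non-robust statistics change substantially
mean(weights_ok); mean(weights_bad)
sd(weights_ok); sd(weights_bad)
max(weights_ok) - min(weights_ok); max(weights_bad) - min(weights_bad)
# the robust statistics change very little
median(weights_ok); median(weights_bad)
IQR(weights_ok, type = 5); IQR(weights_bad, type = 5)
mad(weights_ok, constant = 1); mad(weights_bad, constant = 1)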
4.3: Exercises
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/04%3A_Summarizing_Data/4.03%3A_Exercises
1. The following masses were recorded for 12 different U.S. quarters (all values given in grams): Report the mean, median, variance, standard deviation, range, IQR, and MAD for this data.

2. A determination of acetaminophen in 10 separate tablets of Excedrin Extra Strength Pain Reliever gives the following results (in mg). The data in this problem are from Simonian, M. H.; Dinh, S.; Fray, L. A. Spectroscopy 1993, 8, 37–47. Report the mean, median, variance, standard deviation, range, IQR, and MAD for this data.

3. Salem and Galan developed a new method to determine the amount of morphine hydrochloride in tablets. An analysis of tablets with different nominal dosages gave the following results (in mg/tablet). The data in this problem are from Salem, I. I.; Galan, A. C. Anal. Chim. Acta 1993, 283, 334–337. For each dosage, report the mean, median, variance, standard deviation, range, IQR, and MAD for this data.

4. Use the data set you create in Exercise 2.32 for the daily roadside monitoring of NOX concentrations and air temperatures along Marylebone Road. Report the mean, median, variance, standard deviation, range, IQR, and MAD for the NOX concentrations in January. Examine a boxplot of the data and note that two values are flagged. Remove these values and recalculate the mean, median, variance, standard deviation, range, IQR, and MAD for this data. Compare these results to those calculated using all of the data and comment on your results.

5. Use this link to access a case study on data analysis and complete the three investigations included in Part III: Ways to Summarize Data.

This page titled 4.3: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
5.1: Terminology
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/05%3A_The_Distribution_of_Data/5.01%3A_Terminology
Before we consider different types of distributions, let's define some key terms. You may wish, as well, to review the discussion of different types of data in Chapter 2.A population includes every possible measurement we could make on a system, while a sample is the subset of a population on which we actually make measurements. These definitions are fluid. A single bag of M&Ms is a population if we are interested only in that specific bag, but it is but one sample from a box that contains a gross of individual bags. That box, itself, can be a population, or it can be one sample from a much larger production lot. And so on.In a discrete distribution the possible results take on a limited set of specific values that are independent of how we make our measurements. When we determine the number of yellow M&Ms in a bag, the results are limited to integer values. We may find 13 yellow M&Ms or 24 yellow M&Ms, but we cannot obtain a result of 15.43 yellow M&Ms.For a continuous distribution the result of a measurement can take on any possible value between a lower limit and an upper limit, even though our measuring device has a limited precision; thus, when we weigh a bag of M&Ms on a three-digit balance and obtain a result of 49.287 g we know that its true mass is greater than 49.2865... g and less than 49.2875... g.This page titled 5.1: Terminology is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
5.2: Theoretical Models for the Distribution of Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/05%3A_The_Distribution_of_Data/5.02%3A_Theoretical_Models_for_the_Distribution_of_Data
There are four important types of distributions that we will consider in this chapter: the uniform distribution, the binomial distribution, the Poisson distribution, and the normal, or Gaussian, distribution. In Chapter 3 and Chapter 4 we used the analysis of bags of M&Ms to explore ways to visualize data and to summarize data. Here we will use the same data set to explore the distribution of data.In a uniform distribution, all outcomes are equally probable. Suppose the population of M&Ms has a uniform distribution. If this is the case, then, with six colors, we expect each color to appear with a probability of 1/6 or 16.7%. shows a comparison of the theoretical results if we draw 1699 M&Ms—the total number of M&Ms in our sample of 30 bags—from a population with a uniform distribution (on the left) to the actual distribution of the 1699 M&Ms in our sample (on the right). It seems unlikely that the population of M&Ms has a uniform distribution of colors!A binomial distribution shows the probability of obtaining a particular result in a fixed number of trials, where the odds of that result happening in a single trial are known. Mathematically, a binomial distribution is defined by the equation\[P(X, N) = \frac {N!} {X! (N - X)!} \times p^{X} \times (1 - p)^{N - X} \nonumber\]where P(X,N) is the probability that the event happens X times in N trials, and where p is the probability that the event happens in a single trial. The binomial distribution has a theoretical mean, \(\mu\), and a theoretical variance, \(\sigma^2\), of\[\mu = Np \quad \quad \quad \sigma^2 = Np(1 - p) \nonumber\] compares the expected binomial distribution for drawing 0, 1, 2, 3, 4, or 5 yellow M&Ms in the first five M&Ms—assuming that the probability of drawing a yellow M&M is 435/1699, the ratio of the number of yellow M&Ms and the total number of M&Ms—to the actual distribution of results. The similarity between the theoretical and the actual results seems evident; in Chapter 6 we will consider ways to test this claim.The binomial distribution is useful if we wish to model the probability of finding a fixed number of yellow M&Ms in a sample of M&Ms of fixed size—such as the first five M&Ms that we draw from a bag—but not the probability of finding a fixed number of yellow M&Ms in a single bag because there is some variability in the total number of M&Ms per bag.A Poisson distribution gives the probability that a given number of events will occur in a fixed interval in time or space if the event has a known average rate and if each new event is independent of the preceding event. Mathematically a Poisson distribution is defined by the equation\[P(X, \lambda) = \frac {e^{-\lambda} \lambda^X} {X !} \nonumber\]where \(P(X, \lambda)\) is the probability that an event happens X times given the event’s average rate, \(\lambda\). The Poisson distribution has a theoretical mean, \(\mu\), and a theoretical variance, \(\sigma^2\), that are each equal to \(\lambda\).The bar plot in shows the actual distribution of green M&Ms in 35 small bags of M&Ms (as reported by M. A. Xu-Friedman “Illustrating concepts of quantal analysis with an intuitive classroom model,” Adv. Physiol. Educ. 2013, 37, 112–116). Superimposed on the bar plot is the theoretical Poisson distribution based on their reported average rate of 3.4 green M&Ms per bag. 
The similarity between the theoretical and the actual results seems evident; in Chapter 6 we will consider ways to test this claim.

A uniform distribution, a binomial distribution, and a Poisson distribution predict the probability of a discrete event, such as the probability of finding exactly two green M&Ms in the next bag of M&Ms that we open. Not all of the data we collect is discrete. The net weights of bags of M&Ms are an example of continuous data as the mass of an individual bag is not restricted to a discrete set of allowed values. In many cases we can model continuous data using a normal (or Gaussian) distribution, which gives the probability of obtaining a particular outcome, P(x), from a population with a known mean, \(\mu\), and a known variance, \(\sigma^2\). Mathematically a normal distribution is defined by the equation

\[P(x) = \frac {1} {\sqrt{2 \pi \sigma^2}} e^{-(x - \mu)^2/(2 \sigma^2)} \nonumber\]

 shows the expected normal distribution for the net weights of our sample of 30 bags of M&Ms if we assume that their mean, \(\overline{X}\), of 48.98 g and standard deviation, s, of 1.433 g are good predictors of the population’s mean, \(\mu\), and standard deviation, \(\sigma\). Given the small sample of 30 bags, the agreement between the model and the data seems reasonable.

This page titled 5.2: Theoretical Models for the Distribution of Data is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
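Although Chapter 5.4 develops these tools more fully, the theoretical values described above are easy to reproduce using R's dbinom() and dpois() functions. The short sketch below is only an illustration; it assumes the probability of drawing a yellow M&M is 435/1699 and that the average rate of green M&Ms is 3.4 per bag, as given in the text.

# probability of drawing 0, 1, 2, 3, 4, or 5 yellow M&Ms in the first five
# M&Ms drawn from a bag, using p = 435/1699
p_yellow = 435/1699
dbinom(0:5, size = 5, prob = p_yellow)
# theoretical mean, N*p, and variance, N*p*(1 - p), for this binomial distribution
5 * p_yellow
5 * p_yellow * (1 - p_yellow)
# probability of finding 0, 1, 2, ..., 10 green M&Ms in a bag, using the
# reported average rate of 3.4 green M&Ms per bag
dpois(0:10, lambda = 3.4)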
5.3: The Central Limit Theorem
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/05%3A_The_Distribution_of_Data/5.03%3A_The_Central_Limit_Theorem
Suppose we have a population for which one of its properties has a uniform distribution where every result between 0 and 1 is equally probable. If we analyze 10,000 samples we should not be surprised to find that the distribution of these 10,000 results looks uniform, as shown by the histogram on the left side of . If we collect 1000 pooled samples—each of which consists of 10 individual samples for a total of 10,000 individual samples—and report the average results for these 1000 pooled samples, we see something interesting as their distribution, as shown by the histogram on the right, looks remarkably like a normal distribution. When we draw single samples from a uniform distribution, each possible outcome is equally likely, which is why we see the distribution on the left. When we draw a pooled sample that consists of 10 individual samples, however, the average values are more likely to be near the middle of the distribution’s range, as we see on the right, because the pooled sample likely includes values drawn from both the lower half and the upper half of the uniform distribution.

This tendency for a normal distribution to emerge when we pool samples is known as the central limit theorem. As shown in , we see a similar effect with populations that follow a binomial distribution or a Poisson distribution.

You might reasonably ask whether the central limit theorem is important as it is unlikely that we will complete 1000 analyses, each of which is the average of 10 individual trials. This is deceiving. When we acquire a sample of soil, for example, it consists of many individual particles each of which is an individual sample of the soil. Our analysis of this sample, therefore, is the mean for a large number of individual soil particles. Because of this, the central limit theorem is relevant.

This page titled 5.3: The Central Limit Theorem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
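The simulation described above is easy to reproduce in R. The code below is one possible sketch, not the code used to prepare the figures referenced in the text; the variable names are arbitrary and the results will vary slightly from run to run because the samples are drawn at random.

# draw 10,000 individual samples from a uniform distribution between 0 and 1
individual_samples = runif(10000, min = 0, max = 1)
# draw 1000 pooled samples, each the mean of 10 individual samples
pooled_means = replicate(1000, mean(runif(10, min = 0, max = 1)))
# plot the two distributions side-by-side; the distribution of the pooled
# means looks remarkably like a normal distribution
old_par = par(mfrow = c(1, 2))
hist(individual_samples, col = "lightblue", main = NULL, xlab = "value for individual samples")
hist(pooled_means, col = "lightblue", main = NULL, xlab = "mean value for pooled samples")
par(old_par)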
5.4: Modeling Distributions Using R
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/05%3A_The_Distribution_of_Data/5.04%3A_Modeling_Distributions_Using_R
The base installation of R includes a variety of functions for working with uniform distributions, binomial distributions, Poisson distributions, and normal distributions. These functions come in four forms that take the general form xdist where dist is the type of distribution (unif for a uniform distribution, binom for a binomial distribution, pois for a Poisson distribution, and norm for a normal distribution), and where x defines the information we extract from the distribution. For example, the function dunif() returns the probability of obtaining a specific value drawn from a uniform distribution, the function pbinom() returns the probability of obtaining a result less than a defined value from a binomial distribution, the function qpois() returns the upper boundary that includes a defined percentage of results from a Poisson distribution, and the function rnorm() returns results drawn at random from a normal distribution.

When you purchase a Class A 10.00-mL volumetric pipet it comes with a tolerance of ±0.02 mL, which is the manufacturer’s way of saying that the pipet’s true volume is no less than 9.98 mL and no greater than 10.02 mL. Suppose a manufacturer produces 10,000 pipets; how many might we expect to have a volume between 9.990 mL and 9.992 mL? A uniform distribution is the choice when the manufacturer provides a tolerance range without specifying a level of confidence and when there is no reason to believe that results near the center of the range are more likely than results at the ends of the range.

To simulate a uniform distribution we use R’s runif(n, min, max) function, which returns n random values drawn from a uniform distribution defined by its minimum (min) and its maximum (max) limits. The result is shown in , where the dots, added using the points() function, show the theoretical uniform distribution at the midpoint of each of the histogram’s bins.

# create vector of volumes for 10000 pipets drawn at random from uniform distribution
pipet = runif(10000, 9.98, 10.02)
# create histogram using 20 bins of size 0.002 mL
pipet_hist = hist(pipet, breaks = seq(9.98, 10.02, 0.002), col = c("blue", "lightblue"), ylab = "number of pipets", xlab = "volume of pipet (mL)", main = NULL)
# overlay points showing expected values for uniform distribution
points(pipet_hist$mids, rep(10000/20, 20), pch = 19)

Saving the histogram to the object pipet_hist allows us to retrieve the number of pipets in each of the histogram’s intervals; thus, there are 476 pipets with volumes between 9.990 mL and 9.992 mL, which is the sixth bar from the left edge of .

pipet_hist$counts[6]

476

Carbon has two stable, non-radioactive isotopes, 12C and 13C, with relative isotopic abundances of, respectively, 98.89% and 1.11%. Suppose we are working with cholesterol, C27H44O, which has 27 atoms of carbon. We can use the binomial distribution to model the expected distribution for the number of atoms of 13C in 1000 cholesterol molecules.

To simulate the distribution we use R’s rbinom(n, size, prob) function, which returns n random values drawn from a binomial distribution defined by the size of our sample, which is the number of possible carbon atoms, and the isotopic abundance of 13C, which is its prob, or probability. The result is shown in , where the dots, added using the points() function, show the theoretical binomial distribution. These theoretical values are calculated using the dbinom() function.
The bar plot is assigned to the object chol_bar to provide access to the values of x when plotting the points.

# create vector with 1000 values drawn at random from binomial distribution
cholesterol = rbinom(1000, 27, 0.0111)
# create bar plot of results; table(cholesterol) determines the number of cholesterol
# molecules with 0, 1, 2... atoms of carbon-13; dividing by 1000 gives probability
chol_bar = barplot(table(cholesterol)/1000, col = "lightblue", ylim = c(0, 1), xlab = "number of atoms of carbon-13", ylab = "probability")
# theoretical results for binomial distribution of carbon-13 in cholesterol
chol_binom = dbinom(seq(0, 27), 27, 0.0111)
# overlay theoretical results for binomial distribution
points(x = chol_bar, y = chol_binom[1:length(chol_bar)], cex = 1.25, pch = 19)

One measure of the quality of water in lakes used for recreational purposes is a fecal coliform test. In a typical test a sample of water is passed through a membrane filter, which is then placed on a medium to encourage growth of the bacteria and incubated for 24 hours at 44.5°C. The number of colonies of bacteria is reported. Suppose a lake has a natural background level of 5 colonies per 50 mL of water tested and must be closed for swimming if it exceeds 10 colonies per 50 mL of water tested. We can use a Poisson distribution to determine, over the course of a year of daily testing, the probability that a test will exceed this limit even though the lake’s true fecal coliform count remains at its natural background level.

To simulate the distribution we use R’s rpois(n, lambda) function, which returns n random values drawn from a Poisson distribution defined by lambda, which is its average incidence. Because we are interested in modeling a full year, n is set to 365 days. The result is shown in , where the dots, added using the points() function, show the theoretical Poisson distribution. These theoretical values are calculated using the dpois() function. The bar plot is assigned to the object coliform_bar to provide access to the values of x when plotting the points.

# create vector of results drawn at random from Poisson distribution
coliforms = rpois(365, 5)
# create table of simulated results
coliform_table = table(coliforms)
# create bar plot; ylim ensures there is some space above the plot's highest bar
coliform_bar = barplot(coliform_table, ylim = c(0, 1.2 * max(coliform_table)), col = "lightblue")
# theoretical results for Poisson distribution
d_coliforms = dpois(seq(0, length(coliform_bar) - 1), 5) * 365
# overlay theoretical results for Poisson distribution
points(coliform_bar, d_coliforms, pch = 19)

To find the number of times our simulated results exceed the limit of 10 coliform colonies per 50 mL we use R’s which() function to identify within coliforms the values that are greater than 10

coliforms[which(coliforms > 10)]

finding that this happens 2 times over the course of a year. To find the theoretical probability that a single test will exceed the limit of 10 colonies per 50 mL of water, we use R’s ppois(q, lambda) function, where q is the value we wish to test, which returns the cumulative probability of obtaining a result less than or equal to q on any day; over the course of 365 days

(1 - ppois(10, 5)) * 365

4.998773

we expect that on 5 days the fecal coliform count will exceed the limit of 10.

If we place copper metal and an excess of powdered sulfur in a crucible and ignite it, copper sulfide forms with an empirical formula of CuxS.
The value of x is determined by weighing the Cu and the S before ignition and finding the mass of CuxS when the reaction is complete (any excess sulfur leaves as the gas SO2). The following are the Cu/S ratios from 62 such experiments, of which just 3 are greater than 2. Because of the central limit theorem, we can use a normal distribution to model the data. shows the distribution of the experimental results as a histogram overlaid with the theoretical normal distribution calculated assuming that \(\mu\) is equal to the mean of the 62 samples and that \(\sigma\) is equal to the standard deviation of the 62 samples. Both the experimental data and theoretical normal distribution suggest that most values of x are between 1.85 and 2.03.

# enter the data into a vector with the name cuxs
cuxs = c(1.764, 1.920, 1.957, 1.993, 1.891, 1.927, 1.943, 1.966, 1.995, 1.919, 1.988, 1.838, 1.922, 1.957, 1.993, 1.897, 1.931, 1.948, 1.968, 1.995, 1.936, 2.017, 1.890, 1.936, 1.963, 2.029, 1.899, 1.935, 1.953, 1.969, 1.865, 1.941, 1.891, 1.937, 1.963, 2.042, 1.910, 1.939, 1.957, 1.977, 1.995, 1.955, 1.906, 1.941, 1.975, 1.866, 1.911, 1.939, 1.959, 1.981, 1.877, 1.963, 1.908, 1.942, 1.976, 1.872, 1.916, 1.940, 1.962, 1.981, 1.900, 1.973)
# sequence of ratios over which to display experimental results and theoretical distribution
x = seq(1.7, 2.2, 0.02)
# create histogram for experimental results
cuxs_hist = hist(cuxs, breaks = x, col = c("blue", "lightblue"), xlab = "value for x", ylab = "frequency", main = NULL)
# calculate theoretical results for normal distribution using the mean and the standard deviation
# for the 62 samples as predictors for mu and sigma
cuxs_theo = dnorm(cuxs_hist$mids, mean = mean(cuxs), sd = sd(cuxs))
# overlay results for theoretical normal distribution
points(cuxs_hist$mids, cuxs_theo, pch = 19)

This page titled 5.4: Modeling Distributions Using R is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
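If we want a numerical check on the claim that most values of x fall between 1.85 and 2.03, one option, sketched below using the cuxs vector defined above, is to compare the fraction predicted by the normal model, calculated with pnorm() and with the sample's mean and standard deviation standing in for \(\mu\) and \(\sigma\), to the fraction of the 62 experimental results that fall in the same interval. The pnorm() function is discussed in more detail in Chapter 6.

# fraction of results the normal model predicts between x = 1.85 and x = 2.03
pnorm(2.03, mean = mean(cuxs), sd = sd(cuxs)) - pnorm(1.85, mean = mean(cuxs), sd = sd(cuxs))
# observed fraction of the 62 experimental results in the same interval
sum(cuxs >= 1.85 & cuxs <= 2.03)/length(cuxs)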
5.5: Exercises
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/05%3A_The_Distribution_of_Data/5.05%3A_Exercises
Behavioral and ecological factors influence dispersion. Uniform patterns of dispersion are generally a result of interactions between individuals like competition and territoriality.

1. In ecology a uniform distribution of an organism may result when the organism exhibits territorial behavior that keeps most organisms spaced apart from one another. In one study, a portion of a field was divided into a \(20 \times 20\) grid and a count made of the number of organisms in each unit of the grid giving the results seen below. Create a plot similar to that in Figure 5.4.1 and comment on your results.

2. Chlorine has two isotopes, 35Cl (75.8% abundance) and 37Cl (24.2% abundance). Create a plot similar to that in Figure 5.4.2 for the molecule PCB 77, a chlorinated compound with the formula C12H6Cl4, and comment on your results.

3. A radioactive decay process has a background level of 3 emissions per minute and follows a Poisson distribution. The number of emissions per minute was monitored for one hour giving the following results. Use this data to create a plot similar to that in Figure 5.4.3 and comment on your results.

4. Using the penny data from Exercise 3.4.5, create a plot similar to that in Figure 5.4.4 using all pennies minted after 1982 and comment on your results.

5. Use this link to access a case study on data analysis and complete the first four investigations included in Part IV: Ways to Model Data.

This page titled 5.5: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
6.1: Properties of a Normal Distribution
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/06%3A_Uncertainty_of_Data/6.01%3A_Properties_of_a_Normal_Distribution
Mathematically a normal distribution is defined by the equation

\[P(x) = \frac {1} {\sqrt{2 \pi \sigma^2}} e^{-(x - \mu)^2/(2 \sigma^2)} \nonumber\]

where \(P(x)\) is the probability of obtaining a result, \(x\), from a population with a known mean, \(\mu\), and a known standard deviation, \(\sigma\). shows the normal distribution curves for \(\mu = 0\) with standard deviations of 5, 10, and 20.

Because the equation for a normal distribution depends solely on the population’s mean, \(\mu\), and its standard deviation, \(\sigma\), the probability that a sample drawn from a population has a value between any two arbitrary limits is the same for all populations. For example, shows that 68.26% of all samples drawn from a normally distributed population have values within the range \(\mu \pm 1\sigma\), and only 0.14% have values greater than \(\mu + 3\sigma\).

This feature of a normal distribution—that the area under the curve between two limits defined relative to \(\sigma\) is the same for all values of \(\sigma\)—allows us to create a probability table (see Appendix 1) based on the relative deviation, \(z\), between a limit, x, and the mean, \(\mu\).

\[z = \frac {x - \mu} {\sigma} \nonumber\]

The value of \(z\) gives the area under the curve between that limit and the distribution’s closest tail, as shown in .

Suppose we know that \(\mu\) is 5.5833 ppb Pb and that \(\sigma\) is 0.0558 ppb Pb for a particular standard reference material (SRM). What is the probability that we will obtain a result that is greater than 5.650 ppb if we analyze a single, random sample drawn from the SRM?

Solution

 shows the normal distribution curve given values of 5.5833 ppb Pb for \(\mu\) and of 0.0558 ppb Pb for \(\sigma\). The shaded area in the figure is the probability of obtaining a sample with a concentration of Pb greater than 5.650 ppb. To determine the probability, we first calculate \(z\)

\[z = \frac {x - \mu} {\sigma} = \frac {5.650 - 5.5833} {0.0558} = 1.195 \nonumber\]

Next, we look up the probability in Appendix 1 for this value of \(z\), which is the average of 0.1170 (for \(z = 1.19\)) and 0.1151 (for \(z = 1.20\)), or a probability of 0.1160; thus, we expect that 11.60% of samples will provide a result greater than 5.650 ppb Pb.

Example \(\PageIndex{1}\) considers a single limit—the probability that a result exceeds a single value. But what if we want to determine the probability that a sample has between 5.580 ppb Pb and 5.625 ppb Pb?

Solution

In this case we are interested in the shaded area shown in . First, we calculate \(z\) for the upper limit

\[z = \frac {5.625 - 5.5833} {0.0558} = 0.747 \nonumber\]

and then we calculate \(z\) for the lower limit

\[z = \frac {5.580 - 5.5833} {0.0558} = -0.059 \nonumber\]

Then, we look up the probability in Appendix 1 that a result will exceed our upper limit of 5.625, which is 0.2275, or 22.75%, and the probability that a result will be less than our lower limit of 5.580, which is 0.4765, or 47.65%. The total unshaded area is 71.4% of the total area, so the shaded area corresponds to a probability of

\[100.00 - 22.75 - 47.65 = 100.00 - 71.40 = 29.6 \% \nonumber\]

This page titled 6.1: Properties of a Normal Distribution is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
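Although Chapter 6.3 introduces these functions more formally, both examples above are easy to check with R's pnorm() function; the values of \(\mu\) and \(\sigma\) in the sketch below are those given for the SRM.

# probability of a result greater than 5.650 ppb Pb (first example)
pnorm(5.650, mean = 5.5833, sd = 0.0558, lower.tail = FALSE)
# probability of a result between 5.580 ppb Pb and 5.625 ppb Pb (second example)
pnorm(5.625, mean = 5.5833, sd = 0.0558) - pnorm(5.580, mean = 5.5833, sd = 0.0558)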
6.2: Confidence Intervals
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/06%3A_Uncertainty_of_Data/6.02%3A_Confidence_Intervals
In the previous section, we learned how to predict the probability of obtaining a particular outcome if our data are normally distributed with a known \(\mu\) and a known \(\sigma\). For example, we estimated that 11.60% of samples drawn at random from a standard reference material will have a concentration of Pb greater than 5.650 ppb given a \(\mu\) of 5.5833 ppb and a \(\sigma\) of 0.0558 ppb. In essence, we determined how many standard deviations 5.650 is from \(\mu\) and used this to define the probability given the standard area under a normal distribution curve.We can look at this in a different way by asking the following question: If we collect a single sample at random from a population with a known \(\mu\) and a known \(\sigma\), within what range of values might we reasonably expect to find the sample’s result 95% of the time? Rearranging the equation\[z = \frac {x - \mu} {\sigma} \nonumber\]and solving for \(x\) gives\[x = \mu \pm z \sigma = 5.5833 \pm (1.96)(0.0558) = 5.5833 \pm 0.1094 \nonumber\]where a \(z\) of 1.96 corresponds to 95% of the area under the curve; we call this a 95% confidence interval for a single sample.It generally is a poor idea to draw a conclusion from the result of a single experiment; instead, we usually collect several samples and ask the question this way: If we collect \(n\) random samples from a population with a known \(\mu\) and a known \(\sigma\), within what range of values might we reasonably expect to find the mean of these samples 95% of the time?We might reasonably expect that the standard deviation for the mean of several samples is smaller than the standard deviation for a set of individual samples; indeed it is and it is given as\[\sigma_{\bar{x}} = \frac {\sigma} {\sqrt{n}} \nonumber\]where \(\frac {\sigma} {\sqrt{n}}\) is called the standard error of the mean. For example, if we collect three samples from the standard reference material described above, then we expect that the mean for these three samples will fall within a range\[\bar{x} = \mu \pm z \sigma_{\bar{X}} = \mu \pm \frac {z \sigma} {\sqrt{n}} = 5.5833 \pm \frac{(1.96)(0.0558)} {\sqrt{3}} = 5.5833 \pm 0.0631 \nonumber\]that is \(\pm 0.0631\) ppb around \(\mu\), a range that is smaller than that of \(\pm 0.1094\) ppb when we analyze individual samples. Note that the relative value to us of increasing the sample’s size diminishes as \(n\) increases because of the square root term, as shown in .Our treatment thus far assumes we know \(\mu\) and \(\sigma\) for the parent population, but we rarely know these values; instead, we examine samples drawn from the parent population and ask the following question: Given the sample’s mean, \(\bar{x}\), and its standard deviation, \(s\), what is our best estimate of the population’s mean, \(\mu\), and its standard deviation, \(\sigma\).To make this estimate, we replace the population’s standard deviation, \(\sigma\), with the standard deviation, \(s\), for our samples, replace the population’s mean, \(\mu\), with the mean, \(\bar{x}\), for our samples, replace \(z\) with \(t\), where the value of \(t\) depends on the number of samples, \(n\)\[\bar{x} = \mu \pm \frac{ts}{\sqrt{n}} \nonumber\]and then rearrange the equation to solve for \(\mu\).\[\mu = \bar{x} \pm \frac {ts} {\sqrt{n}} \nonumber\]We call this a confidence interval. 
Values for \(t\) are available in tables (see Appendix 2) and depend on the probability level, \(\alpha\), where \((1 − \alpha) \times 100\) is the confidence level, and the degrees of freedom, \(n − 1\); note that for any probability level, \(t \longrightarrow z\) as \(n \longrightarrow \infty\).

We need to give special attention to what this confidence interval means and to what it does not mean:

This page titled 6.2: Confidence Intervals is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
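The diminishing benefit of a larger sample size is easy to see numerically. The short sketch below, which uses the value of \(\sigma\) from the SRM example earlier in this chapter, reports the half-width of the 95% confidence interval, \(z \sigma / \sqrt{n}\), for several sample sizes; the qnorm() function used to supply \(z\) is described in Chapter 6.4.

# half-width of the 95% confidence interval as a function of the number of
# samples, using sigma = 0.0558 ppb Pb from the SRM example; qnorm(0.975)
# returns z = 1.96 for a 95% confidence level
sigma = 0.0558
n = c(1, 2, 3, 5, 10, 20, 50, 100)
half_width = qnorm(0.975) * sigma/sqrt(n)
data.frame(n, half_width)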
6.3: Using R to Model Properties of a Normal Distribution
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/06%3A_Uncertainty_of_Data/6.03%3A_Using_R_to_Model_Properties_of_a_Normal_Distribution
Given a mean and a standard deviation, we can use R’s dnorm() function to plot the corresponding normal distribution

dnorm(x, mean, sd)

where mean is the value for \(\mu\), sd is the value for \(\sigma\), and x is a vector of values that spans the range of x-axis values we want to plot.

# define the mean and the standard deviation
mu = 12
sigma = 2
# create vector for values of x that span a sufficient range of
# standard deviations on either side of the mean; here we use values
# for x that are four standard deviations on either side of the mean
x = seq(4, 20, 0.01)
# use dnorm() to calculate probabilities for each x
y = dnorm(x, mean = mu, sd = sigma)
# plot normal distribution curve
plot(x, y, type = "l", lwd = 2, col = "blue", ylab = "probability", xlab = "x")

To annotate the normal distribution curve to show an area of interest to us, we use R’s polygon() function, as illustrated here for the normal distribution curve in , showing the area that includes values between 8 and 15.

# define the mean and the standard deviation
mu = 12
sigma = 2
# create vector for values of x that span a sufficient range of
# standard deviations on either side of the mean; here we use values
# for x that are four standard deviations on either side of the mean
x = seq(4, 20, 0.01)
# use dnorm() to calculate probabilities for each x
y = dnorm(x, mean = mu, sd = sigma)
# plot normal distribution curve; the options xaxs = "i" and yaxs = "i"
# force the axes to begin and end at the limits of the data
plot(x, y, type = "l", lwd = 2, col = "ivory4", ylab = "probability", xlab = "x", xaxs = "i", yaxs = "i")
# create vector for values of x between a lower limit of 8 and an upper limit of 15
lowlim = 8
uplim = 15
dx = seq(lowlim, uplim, 0.01)
# use polygon to fill in area; x and y are vectors of x,y coordinates
# that define the shape that is then filled using the desired color
polygon(x = c(lowlim, dx, uplim), y = c(0, dnorm(dx, mean = 12, sd = 2), 0), border = NA, col = "ivory4")

To find the probability of obtaining a value within the shaded area, we use R’s pnorm() command

pnorm(q, mean, sd, lower.tail)

where q is a limit of interest, mean is the value for \(\mu\), sd is the value for \(\sigma\), and lower.tail is a logical value that indicates whether we return the probability for values below the limit (lower.tail = TRUE) or for values above the limit (lower.tail = FALSE). For example, to find the probability of obtaining a result between 8 and 15, given \(\mu = 12\) and \(\sigma = 2\), we use the following lines of code.

# find probability of obtaining a result greater than 15
prob_greater15 = pnorm(15, mean = 12, sd = 2, lower.tail = FALSE)
# find probability of obtaining a result less than 8
prob_less8 = pnorm(8, mean = 12, sd = 2, lower.tail = TRUE)
# find probability of obtaining a result between 8 and 15
prob_between = 1 - prob_greater15 - prob_less8
# display results
prob_greater15

0.0668072

prob_less8

0.02275013

prob_between

0.9104427

Thus, 91.04% of values fall between the limits of 8 and 15.

This page titled 6.3: Using R to Model Properties of a Normal Distribution is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
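An equivalent and more compact way to arrive at the same probability, shown here as an optional alternative, is to take the difference between two cumulative probabilities, each calculated with the default lower.tail = TRUE.

# the probability of a result between 8 and 15 as the difference between two
# cumulative probabilities
pnorm(15, mean = 12, sd = 2) - pnorm(8, mean = 12, sd = 2)

0.9104427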
6.4: Using R to Find Confidence Intervals
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/06%3A_Uncertainty_of_Data/6.04%3A_Using_R_to_Find_Confidence_Intervals
The confidence interval for a population’s mean, \(\mu\), given an experimental mean, \(\bar{x}\), for \(n\) samples is defined as

\[\mu = \bar{x} \pm \frac {z \sigma} {\sqrt{n}} \nonumber\]

if we know the population's standard deviation, \(\sigma\), and as

\[\mu = \bar{x} \pm \frac {t s} {\sqrt{n}} \nonumber\]

if we assume that the sample's standard deviation, \(s\), is a reasonable predictor of the population's standard deviation. To find values for \(z\) we use R's qnorm() function, which takes the form

qnorm(p)

where p is the probability on one side of the normal distribution curve that a result is not included within the confidence interval. For a 95% confidence interval, \(p = 0.05/2 = 0.025\) because the total probability of 0.05 is equally divided between both sides of the normal distribution. To find \(t\) we use R's qt() function, which takes the form

qt(p, df)

where p is defined as above and where df is the degrees of freedom or \(n - 1\).

For example, if we have a mean of \(\bar{x} = 12\) for 10 samples with a known standard deviation of \(\sigma = 2\), then for the 95% confidence interval the value of \(z\) and the resulting confidence interval are

# for a 95% confidence interval, alpha is 0.05 and the probability, p, on either end of the distribution is 0.025;
# the value of z is positive on one side of the normal distribution and negative on the other side;
# as we are interested in just the magnitude, not the sign, we use the abs() function to return the absolute value
z = qnorm(0.025)
conf_int_pop = abs(z * 2/sqrt(10))
conf_int_pop

1.23959

Adding and subtracting this value from the mean defines the confidence interval, which, in this case, is \(12 \pm 1.2\).

If we have a mean of \(\bar{x} = 12\) for 10 samples with an experimental standard deviation of \(s = 2\), then for the 95% confidence interval the value of \(t\) and the resulting confidence interval are

t = qt(p = 0.025, 9)
conf_int_samp = abs(t * 2/sqrt(10))
conf_int_samp

1.430714

Adding and subtracting this value from the mean defines the confidence interval, which, in this case, is \(12 \pm 1.4\).

This page titled 6.4: Using R to Find Confidence Intervals is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
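If we find ourselves computing confidence intervals repeatedly, it can be convenient to collect the calculation into a small helper function. The function below is only a sketch and is not part of base R or of this textbook; it simply wraps the calculations shown above.

# a small helper function (a sketch, not a base R function) that returns the
# half-width of a confidence interval; it uses z when the population's
# standard deviation is known and t, with n - 1 degrees of freedom, otherwise
conf_int = function(s, n, alpha = 0.05, sigma_known = FALSE) {
  if (sigma_known) {
    abs(qnorm(alpha/2)) * s/sqrt(n)
  } else {
    abs(qt(alpha/2, df = n - 1)) * s/sqrt(n)
  }
}
# reproduce the two results from above
conf_int(s = 2, n = 10, sigma_known = TRUE)
conf_int(s = 2, n = 10)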
6.5: Exercises
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/06%3A_Uncertainty_of_Data/6.05%3A_Exercises
1. Berglund and Wichardt investigated the quantitative determination of Cr in high-alloy steels using a potentiometric titration of Cr(VI). Before the titration, samples of the steel were dissolved in acid and the chromium oxidized to Cr(VI) using peroxydisulfate. Shown here are the results (as %w/w Cr) for the analysis of a reference steel as reported in Berglund, B.; Wichardt, C. Anal. Chim. Acta 1990, 236, 399–410. Calculate the mean, the standard deviation, and the 95% confidence interval about the mean. What does this confidence interval mean?

2. In Exercise 4.3.2 you determined the mean and the variance for the amount of acetaminophen in 10 separate tablets of Excedrin Extra Strength Pain Reliever (in mg). The data in this problem are from Simonian, M. H.; Dinh, S.; Fray, L. A. Spectroscopy 1993, 8, 37–47. Assuming that \(\overline{X}\) and \(s^2\) are good approximations for \(\mu\) and for \(\sigma^2\), and that the population is normally distributed, what percentage of the tablets are expected to contain more than the standard amount of 250 mg acetaminophen per tablet?

3. In Exercise 4.3.3 you determined the mean and the standard deviation for the amount of morphine hydrochloride in each of four different nominal dosage levels using data from Salem, I. I.; Galan, A. C. Anal. Chim. Acta 1993, 283, 334–337. All results are in mg/tablet. For each dosage level, and assuming that \(\overline{X}\) and \(s^2\) are good approximations for \(\mu\) and for \(\sigma^2\), and that the population is normally distributed, what percentage of tablets contain more than the nominal amount of morphine hydrochloride per tablet?

4. Use this link to access a case study on data analysis and complete the last three investigations included in Part IV: Ways to Model Data and the first three investigations included in Part V: Ways to Draw Conclusions from Data.

This page titled 6.5: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.1: Significance Testing
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.01%3A_Significance_Testing
Let’s consider the following problem. To determine if a medication is effective in lowering blood glucose concentrations, we collect two sets of blood samples from a patient. We collect one set of samples immediately before we administer the medication, and we collect the second set of samples several hours later. After we analyze the samples, we report their respective means and variances. How do we decide if the medication was successful in lowering the patient’s concentration of blood glucose?One way to answer this question is to construct a normal distribution curve for each sample, and to compare the two curves to each other. Three possible outcomes are shown in . In , there is a complete separation of the two normal distribution curves, which suggests the two samples are significantly different from each other. In , the normal distribution curves for the two samples almost completely overlap each other, which suggests the difference between the samples is insignificant. , however, presents us with a dilemma. Although the means for the two samples seem different, the overlap of their normal distribution curves suggests that a significant number of possible outcomes could belong to either distribution. In this case the best we can do is to make a statement about the probability that the samples are significantly different from each other.The process by which we determine the probability that there is a significant difference between two samples is called significance testing or hypothesis testing. Before we discuss specific examples let's first establish a general approach to conducting and interpreting a significance test.The purpose of a significance test is to determine whether the difference between two or more results is sufficiently large that we are comfortable stating that the difference cannot be explained by indeterminate errors. The first step in constructing a significance test is to state the problem as a yes or no question, such as“Is this medication effective at lowering a patient’s blood glucose levels?”A null hypothesis and an alternative hypothesis define the two possible answers to our yes or no question. The null hypothesis, H0, is that indeterminate errors are sufficient to explain any differences between our results. The alternative hypothesis, HA, is that the differences in our results are too great to be explained by random error and that they must be determinate in nature. We test the null hypothesis, which we either retain or reject. If we reject the null hypothesis, then we must accept the alternative hypothesis and conclude that the difference is significant.Failing to reject a null hypothesis is not the same as accepting it. We retain a null hypothesis because we have insufficient evidence to prove it incorrect. It is impossible to prove that a null hypothesis is true. This is an important point and one that is easy to forget. To appreciate this point let’s use this data for the mass of 100 circulating United States pennies. After looking at the data we might propose the following null and alternative hypotheses.H0: The mass of a circulating U.S. penny is between 2.900 g and 3.200 gHA: The mass of a circulating U.S. penny may be less than 2.900 g or more than 3.200 gTo test the null hypothesis we find a penny and determine its mass. If the penny’s mass is 2.512 g then we can reject the null hypothesis and accept the alternative hypothesis. Suppose that the penny’s mass is 3.162 g. 
Although this result increases our confidence in the null hypothesis, it does not prove that the null hypothesis is correct because the next penny we sample might weigh less than 2.900 g or more than 3.200 g.

After we state the null and the alternative hypotheses, the second step is to choose a confidence level for the analysis. The confidence level defines the probability that we will incorrectly reject the null hypothesis when it is, in fact, true. We can express this as our confidence that we are correct in rejecting the null hypothesis (e.g. 95%), or as the probability that we are incorrect in rejecting the null hypothesis. For the latter, the confidence level is given as \(\alpha\), where

\[\alpha = 1 - \frac {\text{confidence level (%)}} {100} \nonumber\]

For a 95% confidence level, \(\alpha\) is 0.05.

The third step is to calculate an appropriate test statistic and to compare it to a critical value. The test statistic’s critical value defines a breakpoint between values that lead us to reject or to retain the null hypothesis, which is the fourth, and final, step of a significance test. As we will see in the sections that follow, how we calculate the test statistic depends on what we are comparing. The four steps for a statistical analysis of data using a significance test:

Suppose we want to evaluate the accuracy of a new analytical method. We might use the method to analyze a Standard Reference Material that contains a known concentration of analyte, \(\mu\). We analyze the standard several times, obtaining a mean value, \(\overline{X}\), for the analyte’s concentration. Our null hypothesis is that there is no difference between \(\overline{X}\) and \(\mu\)

\[H_0 \text{: } \overline{X} = \mu \nonumber\]

If we conduct the significance test at \(\alpha = 0.05\), then we retain the null hypothesis if a 95% confidence interval around \(\overline{X}\) contains \(\mu\). If the alternative hypothesis is

\[H_\text{A} \text{: } \overline{X} \neq \mu \nonumber\]

then we reject the null hypothesis and accept the alternative hypothesis if \(\mu\) lies in the shaded areas at either end of the sample’s probability distribution curve. Each of the shaded areas accounts for 2.5% of the area under the probability distribution curve, for a total of 5%. This is a two-tailed significance test because we reject the null hypothesis for values of \(\mu\) at either extreme of the sample’s probability distribution curve.

We can write the alternative hypothesis in two additional ways

\[H_\text{A} \text{: } \overline{X} > \mu \nonumber\]

\[H_\text{A} \text{: } \overline{X} < \mu \nonumber\]

rejecting the null hypothesis if \(\mu\) falls within the shaded areas shown in or , respectively. In each case the shaded area represents 5% of the area under the probability distribution curve. These are examples of a one-tailed significance test.

For a fixed confidence level, a two-tailed significance test is the more conservative test because rejecting the null hypothesis requires a larger difference between the results we are comparing. In most situations we have no particular reason to expect that one result must be larger (or must be smaller) than the other result. This is the case, for example, when we evaluate the accuracy of a new analytical method. A two-tailed significance test, therefore, usually is the appropriate choice.

We reserve a one-tailed significance test for a situation where we specifically are interested in whether one result is larger (or smaller) than the other result.
For example, a one-tailed significance test is appropriate if we are evaluating a medication’s ability to lower blood glucose levels. In this case we are interested only in whether the glucose levels after we administer the medication are less than the glucose levels before we initiated treatment. If a patient’s blood glucose level is greater after we administer the medication, then we know the answer—the medication did not work—and we do not need to conduct a statistical analysis.Because a significance test relies on probability, its interpretation is subject to error. In a significance test, \(\alpha\) defines the probability of rejecting a null hypothesis that is true. When we conduct a significance test at \(\alpha = 0.05\), there is a 5% probability that we will incorrectly reject the null hypothesis. This is known as a type 1 error, and its risk is always equivalent to \(\alpha\). A type 1 error in a two-tailed or a one-tailed significance tests corresponds to the shaded areas under the probability distribution curves in .A second type of error occurs when we retain a null hypothesis even though it is false. This is a type 2 error, and the probability of its occurrence is \(\beta\). Unfortunately, in most cases we cannot calculate or estimate the value for \(\beta\). The probability of a type 2 error, however, is inversely proportional to the probability of a type 1 error.Minimizing a type 1 error by decreasing \(\alpha\) increases the likelihood of a type 2 error. When we choose a value for \(\alpha\) we must compromise between these two types of error. Most of the examples in this text use a 95% confidence level (\(\alpha = 0.05\)) because this usually is a reasonable compromise between type 1 and type 2 errors for analytical work. It is not unusual, however, to use a more stringent (e.g. \(\alpha = 0.01\)) or a more lenient (e.g. \(\alpha = 0.10\)) confidence level when the situation calls for it.This page titled 7.1: Significance Testing is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
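To make the relationship between \(\alpha\), the confidence level, and the choice of a one-tailed or a two-tailed test more concrete, the short sketch below uses R's qnorm() and qt() functions, introduced in Chapter 6, as a stand-in for the tables in the appendices; the exact critical values for t depend, of course, on the degrees of freedom, for which 4 is used here only as an example.

# critical values of z at alpha = 0.05: a two-tailed test splits the 5%
# between the two tails, while a one-tailed test places all 5% in one tail,
# so a two-tailed test requires a larger difference to reject H0
alpha = 0.05
qnorm(1 - alpha/2)    # two-tailed critical value, approximately 1.96
qnorm(1 - alpha)      # one-tailed critical value, approximately 1.64
# the corresponding critical values of t for, as an example, 4 degrees of freedom
qt(1 - alpha/2, df = 4)
qt(1 - alpha, df = 4)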
7.2: Significance Tests for Normal Distributions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.02%3A_Significance_Tests_for_Normal_Distributions
A normal distribution is the most common distribution for the data we collect. Because the area between any two limits of a normal distribution curve is well defined, it is straightforward to construct and evaluate significance tests. You can review the properties of a normal distribution in Chapter 5 and Chapter 6.

One way to validate a new analytical method is to analyze a sample that contains a known amount of analyte, \(\mu\). To judge the method’s accuracy we analyze several portions of the sample, determine the average amount of analyte in the sample, \(\overline{X}\), and use a significance test to compare \(\overline{X}\) to \(\mu\). The null hypothesis is that the difference between \(\overline{X}\) and \(\mu\) is explained by indeterminate errors that affect our determination of \(\overline{X}\). The alternative hypothesis is that the difference between \(\overline{X}\) and \(\mu\) is too large to be explained by indeterminate error.

\[H_0 \text{: } \overline{X} = \mu \nonumber\]

\[H_A \text{: } \overline{X} \neq \mu \nonumber\]

The test statistic is texp, which we substitute into the confidence interval for \(\mu\)

\[\mu = \overline{X} \pm \frac {t_\text{exp} s} {\sqrt{n}} \nonumber\]

Rearranging this equation and solving for \(t_\text{exp}\)

\[t_\text{exp} = \frac {|\mu - \overline{X}| \sqrt{n}} {s} \nonumber\]

gives the value for \(t_\text{exp}\) when \(\mu\) is at either the right edge or the left edge of the sample's confidence interval.

To determine if we should retain or reject the null hypothesis, we compare the value of texp to a critical value, \(t(\alpha, \nu)\), where \(\alpha\) is the significance level and \(\nu\) is the degrees of freedom for the sample. The critical value \(t(\alpha, \nu)\) defines the largest confidence interval explained by indeterminate error. If \(t_\text{exp} > t(\alpha, \nu)\), then our sample’s confidence interval is greater than that explained by indeterminate errors. In this case, we reject the null hypothesis and accept the alternative hypothesis. If \(t_\text{exp} \leq t(\alpha, \nu)\), then our sample’s confidence interval is smaller than that explained by indeterminate error, and we retain the null hypothesis. Example \(\PageIndex{1}\) provides a typical application of this significance test, which is known as a t-test of \(\overline{X}\) to \(\mu\). You will find values for \(t(\alpha, \nu)\) in Appendix 2.

Before determining the amount of Na2CO3 in a sample, you decide to check your procedure by analyzing a standard sample that is 98.76% w/w Na2CO3. Five replicate determinations of the %w/w Na2CO3 in the standard give the following results

\(98.71 \% \quad 98.59 \% \quad 98.62 \% \quad 98.44 \% \quad 98.58 \%\)

Using \(\alpha = 0.05\), is there any evidence that the analysis is giving inaccurate results?

Solution

The mean and standard deviation for the five trials are

\[\overline{X} = 98.59 \quad \quad \quad s = 0.0973 \nonumber\]

Because there is no reason to believe that the results for the standard must be larger or smaller than \(\mu\), a two-tailed t-test is appropriate. The null hypothesis and alternative hypothesis are

\[H_0 \text{: } \overline{X} = \mu \quad \quad \quad H_\text{A} \text{: } \overline{X} \neq \mu \nonumber\]

The test statistic, texp, is

\[t_\text{exp} = \frac {|\mu - \overline{X}|\sqrt{n}} {s} = \frac {|98.76 - 98.59| \sqrt{5}} {0.0973} = 3.91 \nonumber\]

The critical value for t(0.05, 4) from Appendix 2 is 2.78. Since texp is greater than t(0.05, 4), we reject the null hypothesis and accept the alternative hypothesis.
At the 95% confidence level the difference between \(\overline{X}\) and \(\mu\) is too large to be explained by indeterminate sources of error, which suggests there is a determinate source of error that affects the analysis.

There is another way to interpret the result of this t-test. Knowing that texp is 3.91 and that there are 4 degrees of freedom, we use Appendix 2 to estimate the value of \(\alpha\) that corresponds to a t(\(\alpha\), 4) of 3.91. From Appendix 2, t(0.02, 4) is 3.75 and t(0.01, 4) is 4.60. Although we can reject the null hypothesis at the 98% confidence level, we cannot reject it at the 99% confidence level. For a discussion of the advantages of this approach, see J. A. C. Sterne and G. D. Smith “Sifting the evidence—what’s wrong with significance tests?” BMJ 2001, 322, 226–231.

Earlier we made the point that we must exercise caution when we interpret the result of a statistical analysis. We will keep returning to this point because it is an important one. Having determined that a result is inaccurate, as we did in Example \(\PageIndex{1}\), the next step is to identify and to correct the error. Before we expend time and money on this, however, we first should critically examine our data. For example, the smaller the value of s, the larger the value of texp. If the standard deviation for our analysis is unrealistically small, then the probability of a type 1 error—incorrectly rejecting the null hypothesis—increases. Including a few additional replicate analyses of the standard and reevaluating the t-test may strengthen our evidence for a determinate error, or it may show us that there is no evidence for a determinate error.

If we regularly analyze a particular sample, we may be able to establish an expected variance, \(\sigma^2\), for the analysis. This often is the case, for example, in a clinical lab that analyzes hundreds of blood samples each day. A few replicate analyses of a single sample give a sample variance, s2, whose value may or may not differ significantly from \(\sigma^2\).

We can use an F-test to evaluate whether a difference between s2 and \(\sigma^2\) is significant. The null hypothesis is \(H_0 \text{: } s^2 = \sigma^2\) and the alternative hypothesis is \(H_\text{A} \text{: } s^2 \neq \sigma^2\). The test statistic for evaluating the null hypothesis is Fexp, which is given as either

\[F_\text{exp} = \frac {s^2} {\sigma^2} \text{ if } s^2 > \sigma^2 \text{ or } F_\text{exp} = \frac {\sigma^2} {s^2} \text{ if } \sigma^2 > s^2 \nonumber\]

depending on whether s2 is larger or smaller than \(\sigma^2\). This way of defining Fexp ensures that its value is always greater than or equal to one.

If the null hypothesis is true, then Fexp should equal one; however, because of indeterminate errors, Fexp usually is greater than one. A critical value, \(F(\alpha, \nu_\text{num}, \nu_\text{den})\), is the largest value of Fexp that we can attribute to indeterminate error given the specified significance level, \(\alpha\), and the degrees of freedom for the variance in the numerator, \(\nu_\text{num}\), and the variance in the denominator, \(\nu_\text{den}\). The degrees of freedom for s2 is n – 1, where n is the number of replicates used to determine the sample’s variance, and the degrees of freedom for \(\sigma^2\) is defined as infinity, \(\infty\). Critical values of F for \(\alpha = 0.05\) are listed in Appendix 3 for both one-tailed and two-tailed F-tests.

A manufacturer’s process for analyzing aspirin tablets has a known variance of 25.
A manufacturer’s process for analyzing aspirin tablets has a known variance of 25. A sample of 10 aspirin tablets is selected and analyzed for the amount of aspirin, yielding the following results in mg aspirin/tablet. \(254 \quad 249 \quad 252 \quad 252 \quad 249 \quad 249 \quad 250 \quad 247 \quad 251 \quad 252\) Determine whether there is evidence of a significant difference between the sample’s variance and the expected variance at \(\alpha = 0.05\). Solution: The variance for the sample of 10 tablets is 4.3. The null hypothesis and alternative hypotheses are\[H_0 \text{: } s^2 = \sigma^2 \quad \quad \quad H_\text{A} \text{: } s^2 \neq \sigma^2 \nonumber\]and the value for Fexp is\[F_\text{exp} = \frac {\sigma^2} {s^2} = \frac {25} {4.3} = 5.8 \nonumber\]The critical value for F(0.05, \(\infty\), 9) from Appendix 3 is 3.333. Since Fexp is greater than F(0.05, \(\infty\), 9), we reject the null hypothesis and accept the alternative hypothesis that there is a significant difference between the sample’s variance and the expected variance. One explanation for the difference might be that the aspirin tablets were not selected randomly. We can extend the F-test to compare the variances for two samples, A and B, by rewriting our equation for Fexp as\[F_\text{exp} = \frac {s_A^2} {s_B^2} \nonumber\]defining A and B so that the value of Fexp is greater than or equal to 1. The table below shows results for two experiments to determine the mass of a circulating U.S. penny. Determine whether there is a difference in the variances of these analyses at \(\alpha = 0.05\). Solution: The standard deviations for the two experiments are 0.051 for the first experiment (A) and 0.037 for the second experiment (B). The null and alternative hypotheses are\[H_0 \text{: } s_A^2 = s_B^2 \quad \quad \quad H_\text{A} \text{: } s_A^2 \neq s_B^2 \nonumber\]and the value of Fexp is\[F_\text{exp} = \frac {s_A^2} {s_B^2} = \frac {(0.051)^2} {(0.037)^2} = \frac {0.00260} {0.00137} = 1.90 \nonumber\]From Appendix 3 the critical value for F(0.05, 6, 4) is 9.197. Because Fexp < F(0.05, 6, 4), we retain the null hypothesis. There is no evidence at \(\alpha = 0.05\) to suggest that the difference in variances is significant. Three factors influence the result of an analysis: the method, the sample, and the analyst. We can study the influence of these factors by conducting experiments in which we change one factor while holding constant the other factors. For example, to compare two analytical methods we can have the same analyst apply each method to the same sample and then examine the resulting means. In a similar fashion, we can design experiments to compare two analysts or to compare two samples. Before we consider the significance tests for comparing the means of two samples, we need to understand the difference between unpaired data and paired data. This is a critical distinction and learning to distinguish between these two types of data is important. Here are two simple examples that highlight the difference between unpaired data and paired data. In each example the goal is to compare two balances by weighing pennies. In both examples the samples of 10 pennies were drawn from the same population; the difference is how we sampled that population. We will learn why this distinction is important when we review the significance test for paired data; first, however, we present the significance test for unpaired data. One simple test for determining whether data are paired or unpaired is to look at the size of each sample. If the samples are of different size, then the data must be unpaired. The converse is not true. 
If two samples are of equal size, they may be paired or unpaired. Consider two analyses, A and B, with means of \(\overline{X}_A\) and \(\overline{X}_B\), and standard deviations of sA and sB. The confidence intervals for \(\mu_A\) and for \(\mu_B\) are\[\mu_A = \overline{X}_A \pm \frac {t s_A} {\sqrt{n_A}} \nonumber\]\[\mu_B = \overline{X}_B \pm \frac {t s_B} {\sqrt{n_B}} \nonumber\]where nA and nB are the sample sizes for A and for B. Our null hypothesis, \(H_0 \text{: } \mu_A = \mu_B\), is that any difference between \(\mu_A\) and \(\mu_B\) is the result of indeterminate errors that affect the analyses. The alternative hypothesis, \(H_A \text{: } \mu_A \neq \mu_B\), is that the difference between \(\mu_A\) and \(\mu_B\) is too large to be explained by indeterminate error. To derive an equation for texp, we assume that \(\mu_A\) equals \(\mu_B\), and combine the equations for the two confidence intervals\[\overline{X}_A \pm \frac {t_\text{exp} s_A} {\sqrt{n_A}} = \overline{X}_B \pm \frac {t_\text{exp} s_B} {\sqrt{n_B}} \nonumber\]Solving for \(|\overline{X}_A - \overline{X}_B|\) and using a propagation of uncertainty, gives\[|\overline{X}_A - \overline{X}_B| = t_\text{exp} \times \sqrt{\frac {s_A^2} {n_A} + \frac {s_B^2} {n_B}} \nonumber\]Finally, we solve for texp\[t_\text{exp} = \frac {|\overline{X}_A - \overline{X}_B|} {\sqrt{\frac {s_A^2} {n_A} + \frac {s_B^2} {n_B}}} \nonumber\]and compare it to a critical value, \(t(\alpha, \nu)\), where \(\alpha\) is the probability of a type 1 error, and \(\nu\) is the degrees of freedom. Thus far our development of this t-test is similar to that for comparing \(\overline{X}\) to \(\mu\), and yet we do not have enough information to evaluate the t-test. Do you see the problem? With two independent sets of data it is unclear how many degrees of freedom we have. Suppose that the variances \(s_A^2\) and \(s_B^2\) provide estimates of the same \(\sigma^2\). In this case we can replace \(s_A^2\) and \(s_B^2\) with a pooled variance, \(s_\text{pool}^2\), that is a better estimate for the variance. Thus, our equation for \(t_\text{exp}\) becomes\[t_\text{exp} = \frac {|\overline{X}_A - \overline{X}_B|} {s_\text{pool} \times \sqrt{\frac {1} {n_A} + \frac {1} {n_B}}} = \frac {|\overline{X}_A - \overline{X}_B|} {s_\text{pool}} \times \sqrt{\frac {n_A n_B} {n_A + n_B}} \nonumber\]where spool, the pooled standard deviation, is\[s_\text{pool} = \sqrt{\frac {(n_A - 1) s_A^2 + (n_B - 1)s_B^2} {n_A + n_B - 2}} \nonumber\]The denominator of this equation shows us that the degrees of freedom for a pooled standard deviation is \(n_A + n_B - 2\), which also is the degrees of freedom for the t-test. Note that we lose two degrees of freedom because the calculations for \(s_A^2\) and \(s_B^2\) require the prior calculation of \(\overline{X}_A\) and \(\overline{X}_B\). So how do you determine if it is okay to pool the variances? Use an F-test. If \(s_A^2\) and \(s_B^2\) are significantly different, then we calculate texp using the earlier equation in which we do not pool the variances. In this case, we find the degrees of freedom using the following imposing equation.\[\nu = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {\left( \frac {s_A^2} {n_A} \right)^2} {n_A + 1} + \frac {\left( \frac {s_B^2} {n_B} \right)^2} {n_B + 1}} - 2 \nonumber\]Because the degrees of freedom must be an integer, we round to the nearest integer the value of \(\nu\) obtained from this equation. The equation above for the degrees of freedom is from Miller, J.C.; Miller, J.N. 
Statistics for Analytical Chemistry, 2nd Ed., Ellis Horwood: Chichester, UK, 1988. In the 6th Edition, the authors note that several different equations have been suggested for the number of degrees of freedom for t when sA and sB differ, reflecting the fact that the determination of degrees of freedom is an approximation. An alternative equation—which is used by statistical software packages, such as R, Minitab, and Excel—is\[\nu = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {\left( \frac {s_A^2} {n_A} \right)^2} {n_A - 1} + \frac {\left( \frac {s_B^2} {n_B} \right)^2} {n_B - 1}} = \frac {\left( \frac {s_A^2} {n_A} + \frac {s_B^2} {n_B} \right)^2} {\frac {s_A^4} {n_A^2(n_A - 1)} + \frac {s_B^4} {n_B^2(n_B - 1)}} \nonumber\]For typical problems in analytical chemistry, the calculated degrees of freedom is reasonably insensitive to the choice of equation. Regardless of how we calculate texp, we reject the null hypothesis if texp is greater than \(t(\alpha, \nu)\) and retain the null hypothesis if texp is less than or equal to \(t(\alpha, \nu)\). Example \(\PageIndex{3}\) provides results for two experiments to determine the mass of a circulating U.S. penny. Determine whether there is a difference in the means of these analyses at \(\alpha = 0.05\). Solution: First we use an F-test to determine whether we can pool the variances. We completed this analysis in Example \(\PageIndex{3}\), finding no evidence of a significant difference, which means we can pool the standard deviations, obtaining\[s_\text{pool} = \sqrt{\frac {(7 - 1)(0.051)^2 + (5 - 1)(0.037)^2} {7 + 5 - 2}} = 0.0459 \nonumber\]with 10 degrees of freedom. To compare the means we use the following null hypothesis and alternative hypotheses\[H_0 \text{: } \mu_A = \mu_B \quad \quad \quad H_A \text{: } \mu_A \neq \mu_B \nonumber\]Because we are using the pooled standard deviation, we calculate texp as\[t_\text{exp} = \frac {|3.117 - 3.081|} {0.0459} \times \sqrt{\frac {7 \times 5} {7 + 5}} = 1.34 \nonumber\]The critical value for t(0.05, 10), from Appendix 2, is 2.23. Because texp is less than t(0.05, 10) we retain the null hypothesis. For \(\alpha = 0.05\) we do not have evidence that the two sets of pennies are significantly different.
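The pooled calculation also is easy to carry out step-by-step in base R. The sketch below is not part of the original example; it uses the penny masses for these two experiments (the same vectors appear again in Chapter 7.5) and the object names are chosen here for convenience.
# penny masses (g) for the two experiments
sample1 = c(3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198)
sample2 = c(3.052, 3.141, 3.083, 3.083, 3.048)
n1 = length(sample1)
n2 = length(sample2)
# pooled standard deviation and its degrees of freedom
s_pool = sqrt(((n1 - 1) * sd(sample1)^2 + (n2 - 1) * sd(sample2)^2) / (n1 + n2 - 2))
df = n1 + n2 - 2
t_exp = abs(mean(sample1) - mean(sample2)) / s_pool * sqrt(n1 * n2 / (n1 + n2))   # returns approximately 1.33
t_exp > qt(p = 0.975, df = df)            # FALSE, so we retain the null hypothesis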
One method for determining the %w/w Na2CO3 in soda ash is to use an acid–base titration. When two analysts analyze the same sample of soda ash they obtain the results shown here. Analyst A: \(86.82 \% \quad 87.04 \% \quad 86.93 \% \quad 87.01 \% \quad 86.20 \% \quad 87.00 \%\) Analyst B: \(81.01 \% \quad 86.15 \% \quad 81.73 \% \quad 83.19 \% \quad 80.27 \% \quad 83.93 \%\) Determine whether the difference in the mean values is significant at \(\alpha = 0.05\). Solution: We begin by reporting the mean and standard deviation for each analyst.\[\overline{X}_A = 86.83\% \quad \quad s_A = 0.32\% \nonumber\]\[\overline{X}_B = 82.71\% \quad \quad s_B = 2.16\% \nonumber\]To determine whether we can use a pooled standard deviation, we first complete an F-test using the following null and alternative hypotheses.\[H_0 \text{: } s_A^2 = s_B^2 \quad \quad \quad H_A \text{: } s_A^2 \neq s_B^2 \nonumber\]Calculating Fexp, we obtain a value of\[F_\text{exp} = \frac {(2.16)^2} {(0.32)^2} = 45.6 \nonumber\]Because Fexp is larger than the critical value of 7.15 for F(0.05, 5, 5) from Appendix 3, we reject the null hypothesis and accept the alternative hypothesis that there is a significant difference between the variances; thus, we cannot calculate a pooled standard deviation. To compare the means for the two analysts we use the following null and alternative hypotheses.\[H_0 \text{: } \overline{X}_A = \overline{X}_B \quad \quad \quad H_A \text{: } \overline{X}_A \neq \overline{X}_B \nonumber\]Because we cannot pool the standard deviations, we calculate texp as\[t_\text{exp} = \frac {|86.83 - 82.71|} {\sqrt{\frac {(0.32)^2} {6} + \frac {(2.16)^2} {6}}} = 4.62 \nonumber\]and calculate the degrees of freedom as\[\nu = \frac {\left( \frac {(0.32)^2} {6} + \frac {(2.16)^2} {6} \right)^2} {\frac {\left( \frac {(0.32)^2} {6} \right)^2} {6 + 1} + \frac {\left( \frac {(2.16)^2} {6} \right)^2} {6 + 1}} - 2 = 5.3 \approx 5 \nonumber\]From Appendix 2, the critical value for t(0.05, 5) is 2.57. Because texp is greater than t(0.05, 5) we reject the null hypothesis and accept the alternative hypothesis that the means for the two analysts are significantly different at \(\alpha = 0.05\). Suppose we are evaluating a new method for monitoring blood glucose concentrations in patients. An important part of evaluating a new method is to compare it to an established method. What is the best way to gather data for this study? Because the variation in the blood glucose levels amongst patients is large, we may be unable to detect a small, but significant difference between the methods if we use different patients to gather data for each method. Using paired data, in which we analyze each patient’s blood using both methods, prevents a large variance within a population from adversely affecting a t-test of means. Typical blood glucose levels for most non-diabetic individuals range between 80–120 mg/dL (4.4–6.7 mM), rising to as high as 140 mg/dL (7.8 mM) shortly after eating. Higher levels are common for individuals who are pre-diabetic or diabetic. When we use paired data we first calculate the individual differences, di, between each sample's paired results. Using these individual differences, we then calculate the average difference, \(\overline{d}\), and the standard deviation of the differences, sd. 
The null hypothesis, \(H_0 \text{: } d = 0\), is that there is no difference between the two samples, and the alternative hypothesis, \(H_A \text{: } d \neq 0\), is that the difference between the two samples is significant. The test statistic, texp, is derived from a confidence interval around \(\overline{d}\)\[t_\text{exp} = \frac {|\overline{d}| \sqrt{n}} {s_d} \nonumber\]where n is the number of paired samples. As is true for other forms of the t-test, we compare texp to \(t(\alpha, \nu)\), where the degrees of freedom, \(\nu\), is n – 1. If texp is greater than \(t(\alpha, \nu)\), then we reject the null hypothesis and accept the alternative hypothesis. We retain the null hypothesis if texp is less than or equal to \(t(\alpha, \nu)\). This is known as a paired t-test. Marecek et al. developed a new electrochemical method for the rapid determination of the concentration of the antibiotic monensin in fermentation vats [Marecek, V.; Janchenova, H.; Brezina, M.; Betti, M. Anal. Chim. Acta 1991, 244, 15–19]. The standard method for the analysis is a test for microbiological activity, which is both difficult to complete and time-consuming. Samples were collected from the fermentation vats at various times during production and analyzed for the concentration of monensin using both methods. The results, in parts per thousand (ppt), are reported in the following table. Is there a significant difference between the methods at \(\alpha = 0.05\)? Solution: Acquiring samples over an extended period of time introduces a substantial time-dependent change in the concentration of monensin. Because the variation in concentration between samples is so large, we use a paired t-test with the following null and alternative hypotheses.\[H_0 \text{: } \overline{d} = 0 \quad \quad \quad H_A \text{: } \overline{d} \neq 0 \nonumber\]Defining the difference between the methods as\[d_i = (X_\text{elect})_i - (X_\text{micro})_i \nonumber\]we calculate the difference for each sample. The mean and the standard deviation for the differences are, respectively, 2.25 ppt and 5.63 ppt. The value of texp is\[t_\text{exp} = \frac {|2.25| \sqrt{11}} {5.63} = 1.33 \nonumber\]which is smaller than the critical value of 2.23 for t(0.05, 10) from Appendix 2. We retain the null hypothesis and find no evidence for a significant difference in the methods at \(\alpha = 0.05\). One important requirement for a paired t-test is that the determinate and the indeterminate errors that affect the analysis must be independent of the analyte’s concentration. If this is not the case, then a sample with an unusually high concentration of analyte will have an unusually large di. Including this sample in the calculation of \(\overline{d}\) and sd gives a biased estimate for the expected mean and standard deviation. This rarely is a problem for samples that span a limited range of analyte concentrations, such as those in Example \(\PageIndex{4}\) or Exercise \(\PageIndex{6}\). When paired data span a wide range of concentrations, however, the magnitude of the determinate and indeterminate sources of error may not be independent of the analyte’s concentration; when true, a paired t-test may give misleading results because the paired data with the largest absolute determinate and indeterminate errors will dominate \(\overline{d}\). In this situation a regression analysis, which is the subject of the next chapter, is a more appropriate method for comparing the data.
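A step-by-step version of this paired t-test also is easy to set up in base R. The sketch below is not part of the original example; it uses the monensin results for the two methods (the same vectors appear again in Chapter 7.5).
# monensin concentrations (ppt) by the standard (microbiological) and the new (electrochemical) method
microbiological = c(129.5, 89.6, 76.6, 52.2, 110.8, 50.4, 72.4, 141.4, 75.0, 34.1, 60.3)
electrochemical = c(132.3, 91.0, 73.6, 58.2, 104.2, 49.9, 82.1, 154.1, 73.4, 38.1, 60.1)
d = electrochemical - microbiological     # individual differences, d_i
n = length(d)
t_exp = abs(mean(d)) * sqrt(n) / sd(d)    # returns approximately 1.32; the value of 1.33 above reflects rounding
t_exp > qt(p = 0.975, df = n - 1)         # FALSE, so we retain the null hypothesis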
The importance of distinguishing between paired and unpaired data is worth examining more closely. The following is data from some work I completed with a colleague in which we were looking at the concentration of Zn in Lake Erie at the air-water interface and the sediment-water interface. The mean and the standard deviation for the ppm Zn at the air-water interface are 0.5178 ppm and 0.01732 ppm, and the mean and the standard deviation for the ppm Zn at the sediment-water interface are 0.4445 ppm and 0.1418 ppm. We can use these values to draw normal distributions for both by letting the means and the standard deviations for the samples, \(\overline{X}\) and \(s\), serve as estimates for the means and the standard deviations for the population, \(\mu\) and \(\sigma\). As we see in the following figure, the two distributions overlap strongly, suggesting that a t-test of their means is not likely to find evidence of a difference. And yet, we also see that for each site, the concentration of Zn at the sediment-water interface is less than that at the air-water interface. In this case, the difference between the concentration of Zn at individual sites is sufficiently large that it masks our ability to see the difference between the two interfaces. If we take the differences between the air-water and sediment-water interfaces, we have values of 0.015, 0.028, 0.067, 0.121, 0.102, and 0.107 ppm Zn, with a mean of 0.07333 ppm Zn and a standard deviation of 0.04410 ppm Zn. Superimposing all three normal distributions shows clearly that most of the normal distribution for the differences lies above zero, suggesting that a t-test might show evidence that the difference is significant. In Chapter 7.1 we examined a data set consisting of the masses of 100 circulating United States pennies. Table \(\PageIndex{1}\) provides one more data set. Do you notice anything unusual in this data? Of the 100 pennies included in the earlier table, no penny has a mass of less than 3 g. In this table, however, the mass of one penny is less than 3 g. We might ask whether this penny’s mass is so different from the other pennies that it is in error. A measurement that is not consistent with other measurements is called an outlier. An outlier might exist for many reasons: the outlier might belong to a different population (Is this a Canadian penny?), the outlier might be a contaminated or an otherwise altered sample (Is the penny damaged or unusually dirty?), or the outlier may result from an error in the analysis (Did we forget to tare the balance?). Regardless of its source, the presence of an outlier compromises any meaningful analysis of our data. There are many significance tests that we can use to identify a potential outlier, three of which we present here. One of the most common significance tests for identifying an outlier is Dixon’s Q-test. The null hypothesis is that there are no outliers, and the alternative hypothesis is that there is an outlier. The Q-test compares the gap between the suspected outlier and its nearest numerical neighbor to the range of the entire data set. The test statistic, Qexp, is\[Q_\text{exp} = \frac {\text{gap}} {\text{range}} = \frac {|\text{outlier's value} - \text{nearest value}|} {\text{largest value} - \text{smallest value}} \nonumber\]This equation is appropriate for evaluating a single outlier. Other forms of Dixon’s Q-test allow its extension to detecting multiple outliers [Rorabacher, D. B. Anal. Chem. 
1991, 63, 139–146]. The value of Qexp is compared to a critical value, \(Q(\alpha, n)\), where \(\alpha\) is the probability that we will reject a valid data point (a type 1 error) and n is the total number of data points. To protect against rejecting a valid data point, usually we apply the more conservative two-tailed Q-test, even though the possible outlier is the smallest or the largest value in the data set. If Qexp is greater than \(Q(\alpha, n)\), then we reject the null hypothesis and may exclude the outlier. We retain the possible outlier when Qexp is less than or equal to \(Q(\alpha, n)\). Table \(\PageIndex{2}\) provides values for \(Q(\alpha, n)\) for a data set that has 3–10 values. A more extensive table is in Appendix 4. Values for \(Q(\alpha, n)\) assume an underlying normal distribution. Although Dixon’s Q-test is a common method for evaluating outliers, it is no longer favored by the International Standards Organization (ISO), which recommends the Grubb’s test. There are several versions of Grubb’s test depending on the number of potential outliers. Here we will consider the case where there is a single suspected outlier. For details on this recommendation, see International Standard ISO 5725-2 “Accuracy (trueness and precision) of measurement methods and results–Part 2: basic methods for the determination of repeatability and reproducibility of a standard measurement method,” 1994. The test statistic for Grubb’s test, Gexp, is the distance between the sample’s mean, \(\overline{X}\), and the potential outlier, \(X_\text{out}\), in terms of the sample’s standard deviation, s.\[G_\text{exp} = \frac {|X_\text{out} - \overline{X}|} {s} \nonumber\]We compare the value of Gexp to a critical value \(G(\alpha, n)\), where \(\alpha\) is the probability that we will reject a valid data point and n is the number of data points in the sample. If Gexp is greater than \(G(\alpha, n)\), then we may reject the data point as an outlier, otherwise we retain the data point as part of the sample. Table \(\PageIndex{3}\) provides values for G(0.05, n) for a sample containing 3–10 values. A more extensive table is in Appendix 5. Values for \(G(\alpha, n)\) assume an underlying normal distribution. Our final method for identifying an outlier is Chauvenet’s criterion. Unlike Dixon’s Q-Test and Grubb’s test, you can apply this method to any distribution as long as you know how to calculate the probability for a particular outcome. Chauvenet’s criterion states that we can reject a data point if the probability of obtaining the data point’s value is less than \((2n)^{-1}\), where n is the size of the sample. For example, if n = 10, a result with a probability of less than \((2 \times 10)^{-1}\), or 0.05, is considered an outlier. To calculate a potential outlier’s probability we first calculate its standardized deviation, z \[z = \frac {|X_\text{out} - \overline{X}|} {s} \nonumber\]where \(X_\text{out}\) is the potential outlier, \(\overline{X}\) is the sample’s mean and s is the sample’s standard deviation. Note that this equation is identical to the equation for Gexp in the Grubb’s test. For a normal distribution, we can find the probability of obtaining a value of z using the probability table in Appendix 1. Table \(\PageIndex{1}\) contains the masses for nine circulating United States pennies. One entry, 2.514 g, appears to be an outlier. Determine if this penny is an outlier using a Q-test, Grubb’s test, and Chauvenet’s criterion. 
For the Q-test and Grubb’s test, let \(\alpha = 0.05\). Solution: For the Q-test the value for \(Q_\text{exp}\) is\[Q_\text{exp} = \frac {|2.514 - 3.039|} {3.109 - 2.514} = 0.882 \nonumber\]From Table \(\PageIndex{2}\), the critical value for Q(0.05, 9) is 0.493. Because Qexp is greater than Q(0.05, 9), we can assume the penny with a mass of 2.514 g likely is an outlier. For Grubb’s test we first need the mean and the standard deviation, which are 3.011 g and 0.188 g, respectively. The value for Gexp is\[G_\text{exp} = \frac {|2.514 - 3.011|} {0.188} = 2.64 \nonumber\]Using Table \(\PageIndex{3}\), we find that the critical value for G(0.05, 9) is 2.215. Because Gexp is greater than G(0.05, 9), we can assume that the penny with a mass of 2.514 g likely is an outlier. For Chauvenet’s criterion, the critical probability is \((2 \times 9)^{-1}\), or 0.0556. The value of z is the same as Gexp, or 2.64. Using Appendix 1, the probability for z = 2.64 is 0.00415. Because the probability of obtaining a mass of 2.514 g is less than the critical probability, we can assume the penny with a mass of 2.514 g likely is an outlier. You should exercise caution when using a significance test for outliers because there is a chance you will reject a valid result. In addition, you should avoid rejecting an outlier if it leads to a precision that is much better than expected based on a propagation of uncertainty. Given these concerns it is not surprising that some statisticians caution against the removal of outliers [Deming, W. E. Statistical Analysis of Data; Wiley: New York, 1943 (republished by Dover: New York, 1961); p. 171]. You also can adopt a more stringent requirement for rejecting data. When using the Grubb’s test, for example, the ISO 5725 guidelines suggest retaining a value if the probability for rejecting it is greater than \(\alpha = 0.05\), and flagging a value as a “straggler” if the probability for rejecting it is between \(\alpha = 0.05\) and \(\alpha = 0.01\). A “straggler” is retained unless there is compelling reason for its rejection. The guidelines recommend using \(\alpha = 0.01\) as the minimum criterion for rejecting a possible outlier. On the other hand, testing for outliers can provide useful information if we try to understand the source of the suspected outlier. For example, the outlier in Table \(\PageIndex{1}\) represents a significant change in the mass of a penny (an approximately 17% decrease in mass), which is the result of a change in the composition of the U.S. penny. In 1982 the composition of a U.S. penny changed from a brass alloy that was 95% w/w Cu and 5% w/w Zn (with a nominal mass of 3.1 g), to a pure zinc core covered with copper (with a nominal mass of 2.5 g) [Richardson, T. H. J. Chem. Educ. 1991, 68, 310–311]. The pennies in Table \(\PageIndex{1}\), therefore, were drawn from different populations.
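Chapter 7.5 shows how to use the outliers package for Dixon's Q-test and Grubb's test, but there is no ready-made base R function for Chauvenet's criterion. The minimal sketch below is not part of the original example; it applies the criterion to the penny masses from Example \(\PageIndex{7}\) (the same vector appears again in Chapter 7.5).
# penny masses (g); the suspected outlier is 2.514 g
penny = c(3.067, 2.514, 3.094, 3.049, 3.048, 3.109, 3.039, 3.079, 3.102)
x_out = 2.514
n = length(penny)
z = abs(x_out - mean(penny)) / sd(penny)  # standardized deviation; approximately 2.64
prob = pnorm(z, lower.tail = FALSE)       # probability of a result at least this far from the mean; approximately 0.004
prob < 1 / (2 * n)                        # TRUE, so Chauvenet's criterion flags the penny as an outlier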
This page titled 7.2: Significance Tests for Normal Distributions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.3: Analysis of Variance
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.03%3A_Analysis_of_Variance
Consider the following data, which shows the stability of a reagent under different conditions for storing samples; all values are percent recoveries, so a result of 100 indicates that the reagent's concentration remains unchanged and that there was no degradation. To determine if light has a significant effect on the reagent’s stability, we might choose to perform a series of t-tests, comparing all possible mean values; in this case we need three such tests, one for each pair of storage conditions. Each such test has a probability of a type I error of \(\alpha_{test}\). The total probability of a type I error across k tests, \(\alpha_{total}\), is\[\alpha_{total} = 1 - (1 - \alpha_{test})^{k} \nonumber\]For three such tests using \(\alpha = 0.05\), we have\[\alpha_{total} = 1 - (1 - 0.05)^{3} = 0.143 \nonumber\]or a 14.3% probability of a type I error. The relationship between the number of conditions, n, and the number of tests, k, is\[k = \frac {n(n-1)} {2} \nonumber\]which means that k grows quickly as n increases, and that the magnitude of the type I error increases quickly as well. We can compensate for this problem by decreasing \(\alpha_{test}\) for each independent test so that \(\alpha_{total}\) is equal to our desired probability; thus, for \(n = 3\) we have \(k = 3\), and to achieve an \(\alpha_{total}\) of 0.05 each individual value of \(\alpha_{test}\) must be\[\alpha_{test} = 1 - (1 - 0.05)^{1/3} = 0.017 \nonumber\]Values of \(\alpha_{test}\) decrease quickly as the number of conditions increases. The problem here is that we are searching for a significant difference on a pair-wise basis without any evidence that the overall variation in the data across all conditions (also known as treatments) is sufficiently large that it cannot be explained by experimental uncertainty (that is, random error) only. One way to determine if there is a systematic error in the data set, without identifying the source of the systematic error, is to compare the variation within each treatment to the variation between the treatments. We assume that the variation within each treatment reflects uncertainty in the analytical method (random errors) and that the variation between the treatments includes both the method’s uncertainty and any systematic errors in the individual treatments. If the variation between the treatments is significantly greater than the variation within the treatments, then a systematic error seems likely. 
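How quickly the number of pairwise tests and \(\alpha_{total}\) grow with the number of conditions is easy to see with a few lines of R. The short sketch below is not part of the original text; it simply evaluates the two equations above for several values of n with \(\alpha_{test} = 0.05\).
n = 3:7                        # number of conditions (treatments)
k = n * (n - 1) / 2            # number of pairwise t-tests; returns 3, 6, 10, 15, 21
alpha_test = 0.05
1 - (1 - alpha_test)^k         # alpha_total; returns approximately 0.14, 0.26, 0.40, 0.54, 0.66
1 - (1 - 0.05)^(1/k)           # alpha_test needed so that alpha_total is 0.05; returns approximately 0.017, 0.0085, 0.0051, 0.0034, 0.0024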
We call this process an analysis of variance, or ANOVA; for one independent variable (the amount of light in this case), it is a one-way analysis of variance. The basic details of a one-way ANOVA calculation are as follows: Step 1: Treat the data as one large data set and calculate its mean and its variance, which we call the global mean, \(\bar{\bar{x}}\), and the global variance, \(\bar{\bar{s^{2}}}\).\[\bar{\bar{x}} = \frac { \sum_{i=1}^h \sum_{j=1}^{n_{i}} x_{ij} } {N} \nonumber\]\[\bar{\bar{s^{2}}} = \frac { \sum_{i=1}^h \sum_{j=1}^{n_{i}} (x_{ij} - \bar{\bar{x}})^{2} } {N - 1} \nonumber\]where \(h\) is the number of treatments, \(n_{i}\) is the number of replicates for the \(i^{th}\) treatment, and \(N\) is the total number of measurements. Step 2: Calculate the within-sample variance, \(s_{w}^{2}\), using the mean for each treatment, \(\bar{x}_{i}\), and the replicates for that treatment.\[s_{w}^{2} = \frac { \sum_{i=1}^h \sum_{j=1}^{n_{i}} (x_{ij} - \bar{x}_{i})^{2} } {N - h} \nonumber\]Step 3: Calculate the between-sample variance, \(s_{b}^{2}\), using the means for each treatment and the global mean\[s_{b}^{2} = \frac { \sum_{i=1}^h \sum_{j=1}^{n_{i}} (\bar{x}_{i} - \bar{\bar{x}})^2 } {h - 1} = \frac {\sum_{i=1}^h n_{i} (\bar{x}_{i} - \bar{\bar{x}})^2 } {h - 1} \nonumber\]Step 4: If there is a significant difference between the treatments, then \(s_{b}^{2}\) should be significantly greater than \(s_{w}^{2}\), which we evaluate using a one-tailed \(F\)-test where\[H_{0}: s_{b}^{2} = s_{w}^{2} \nonumber\] \[H_{A}: s_{b}^{2} > s_{w}^{2} \nonumber\]Step 5: If there is a significant difference, then we estimate \(\sigma_{rand}^{2}\) and \(\sigma_{systematic}^{2}\) as\[s_{w}^{2} \approx \sigma_{rand}^{2} \nonumber\] \[s_{b}^{2} \approx \sigma_{rand}^{2} + \bar{n}\sigma_{systematic}^{2} \nonumber\] Table \(\PageIndex{1}\) gathers these equations together. Chemical reagents have a limited shelf-life. To determine the effect of light on a reagent's stability, a freshly prepared solution is stored for one hour under three different light conditions: total dark, subdued light, and full light. At the end of one hour, each solution was analyzed three times, yielding the following percent recoveries; a recovery of 100% means that the measured concentration is the same as the actual concentration. The null hypothesis is that there is no difference between the different treatments, and the alternative hypothesis is that at least one of the treatments yields a result that is significantly different than the other treatments. Solution: First, we treat the data as one large data set of nine values and calculate the global mean, \(\bar{\bar{x}}\), and the global variance, \(\bar{\bar{s^{2}}}\); these are 98 and 23.75, respectively. 
We also calculate the mean for each of the three treatments, obtaining a value of 102.0 for treatment A, 100.0 for treatment B, and 92.0 for treatment C. Next, we calculate the total sum-of-squares, \(SS_{total}\)\[SS_{total} = \bar{\bar{s^{2}}}(N - 1) = 23.75(9 - 1) = 190.0 \nonumber\]the between sample sum-of-squares, \(SS_{b}\)\[SS_{b} = \sum_{i=1}^h n_{i} (\bar{x}_{i} - \bar{\bar{x}})^2 = 3(102.0 - 98.0)^2 + 3(100.0 - 98.0)^2 + 3(92.0 - 98.0)^2 = 168.0 \nonumber\]and the within sample sum-of-squares, \(SS_{w}\)\[ SS_{w} = SS_{total} - SS_{b} = 190.0 - 168.0 = 22.0 \nonumber\]The variance between the treatments, \(s_b^2\), is\[s_b^2 = \frac {SS_{b}} {h - 1} = \frac{168.0}{3 - 1} = 84.0 \nonumber\]and the variance within the treatments, \(s_w^2\), is\[s_w^2 = \frac {SS_{w}} {N - h} = \frac{22.0}{9 - 3} = 3.67 \nonumber\]Finally, we complete an F-test, calculating Fexp \[F_{exp} = \frac{s_b^2}{s_w^2} = \frac{84.0}{3.67} = 22.9 \nonumber\]and compare it to the critical value for F(0.05, 2, 6) = 5.143 from Appendix 3. Because Fexp > F(0.05, 2, 6), we reject the null hypothesis and accept the alternative hypothesis that at least one of the treatments yields a result that is significantly different from the other treatments. We can estimate the variance due to random errors as\[\sigma_{random}^{2} = s_{w}^{2} = 3.67 \nonumber\]and the variance due to systematic errors as\[\sigma_{systematic}^{2} = \frac {s_{b}^{2} - s_{w}^{2}} {\bar{n}} = \frac {84.0 - 3.67} {3} = 26.8 \nonumber\]Having found evidence for a significant difference between the treatments, we can use individual t-tests on pairs of treatments to show that the results for treatment C are significantly different from the other two treatments.
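Chapter 7.5 shows how to complete this analysis with R's aov() function. As a quick check that is not part of the original example, the sketch below evaluates the between-sample variance and Fexp directly from the treatment means and the within-sample variance reported above.
x_bar = c(102.0, 100.0, 92.0)    # treatment means
n_i = 3                          # replicates per treatment
h = length(x_bar)                # number of treatments
N = h * n_i                      # total number of measurements
grand_mean = mean(x_bar)         # 98.0; the mean of the treatment means equals the global mean here because every treatment has the same number of replicates
s_b2 = sum(n_i * (x_bar - grand_mean)^2) / (h - 1)   # between-sample variance; returns 84
s_w2 = 3.67                      # within-sample variance from the example
F_exp = s_b2 / s_w2              # returns approximately 22.9
F_exp > qf(p = 0.95, df1 = h - 1, df2 = N - h)       # TRUE, so at least one treatment differs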
This page titled 7.3: Analysis of Variance is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.4: Non-Parametric Significance Tests
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.04%3A_Non-Parametric_Significance_Tests
The significance tests described in Chapter 7.2 assume that we can treat the individual samples as if they are drawn from a population that is normally distributed. Although often a reasonable assumption, there are times when this is a poor assumption, such as when there is a likely outlier that we are not inclined to remove. Non-parametric significance tests allow us to compare data sets, but without making implicit assumptions about our data's distribution. In this section we will consider two non-parametric tests, the Wilcoxon signed rank test, which we can use in place of a paired t-test, and the Wilcoxon rank sum test, which we can use in place of an unpaired t-test. When we use paired data we first calculate the difference, di, between each sample's paired values. We then subtract the expected difference from each di and then sort these adjusted differences from smallest-to-largest without considering the sign. We then assign each difference a rank (1, 2, 3, ...) and add back its sign. If two or more entries have the same absolute difference, then we average their ranks. Finally, we add together the positive ranks and add together the negative ranks. If there is no difference in the two data sets, then we expect that these two sums should be similar in value. If the smaller of the two sums is less than a critical value, then there is reason to believe that the two data sets are significantly different from each other; see Appendix 6 for a table of critical values. Marecek et al. developed a new electrochemical method for the rapid determination of the concentration of the antibiotic monensin in fermentation vats [Marecek, V.; Janchenova, H.; Brezina, M.; Betti, M. Anal. Chim. Acta 1991, 244, 15–19]. The standard method for the analysis is a test for microbiological activity, which is both difficult to complete and time-consuming. Samples were collected from the fermentation vats at various times during production and analyzed for the concentration of monensin using both methods. The results, in parts per thousand (ppt), are reported in the following table. This is the same data as in Example 7.2.6. Is there a significant difference between the methods at \(\alpha = 0.05\)? Solution: Defining the difference between the methods as\[d_i = (X_\text{elect})_i - (X_\text{micro})_i \nonumber\]we calculate the difference for each sample. Next, we order the individual differences from smallest-to-largest without considering the sign, and we then assign each individual difference a rank, retaining the sign. The sum of the negative ranks is 22 and the sum of the positive ranks is 44. The critical value for 11 samples and \(\alpha = 0.05\) is 10. As the smaller of our two sums, 22, is greater than 10, there is no evidence to suggest that there is a difference between the two methods. The Wilcoxon rank sum test (also known as the Mann-Whitney U test) is used to compare two unpaired data sets. The values in the two data sets are sorted from smallest-to-largest, maintaining sample identity. After sorting, each value is assigned a rank (1, 2, 3, ...), again, maintaining sample identity. If two or more entries have the same value, then their ranks are averaged. Next, we add up the ranks for each sample. If there is no difference in the two data sets, then we expect that the two rank sums should be similar in value. To account for differences in the size of each sample, we subtract\[ \frac{n_i(n_i + 1)}{2} \nonumber\]from each sum where \(n_i\) is the size of the sample. 
If the smaller of the two adjusted sums is less than a critical value, then there is reason to believe that the two data sets are significantly different from each other; see Appendix 7 for a table of critical values. To compare two production lots of aspirin tablets, you collect samples from each and analyze them, obtaining the following results (in mg aspirin/tablet). Lot 1: 256, 248, 245, 244, 248, 261 Lot 2: 241, 258, 241, 256, 254 Is there any evidence at \(\alpha = 0.05\) that there is a significant difference between these two sets of results? Solution: First, we sort the results from smallest-to-largest: 241, 241, 244, 245, 248, 248, 254, 256, 256, 258, 261. Of these, the values 244, 245, 248, 248, and 261 come from Lot 1, as does one of the two values of 256. Next we assign ranks, averaging the ranks of tied values: 1.5, 1.5, 3, 4, 5.5, 5.5, 7, 8.5, 8.5, 10, 11. The ranks that belong to Lot 1 are 3, 4, 5.5, 5.5, 8.5, and 11, and the ranks that belong to Lot 2 are 1.5, 1.5, 7, 8.5, and 10. The sum of the ranks for Lot 1 is 37.5 and the sum of the ranks for Lot 2 is 28.5. After adjusting for the size of each sample, we have\[37.5 - \frac{6(6 + 1)}{2} = 16.5 \nonumber\]for Lot 1 and\[28.5 - \frac{5(5+1)}{2} = 13.5 \nonumber\]for Lot 2. From Appendix 7, the critical value for \(\alpha = 0.05\) is 3. As the smaller of our two adjusted sums, 13.5, is greater than 3, there is no evidence to suggest that there is a difference between the two lots.
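The ranking in this example also is easy to reproduce in base R, which is a convenient check when ties make hand-ranking tedious. The sketch below is not part of the original example; note that R's rank() function averages tied ranks by default.
lot1 = c(256, 248, 245, 244, 248, 261)
lot2 = c(241, 258, 241, 256, 254)
all_ranks = rank(c(lot1, lot2))                 # ranks for the combined data, with ties averaged
sum1 = sum(all_ranks[1:6])                      # sum of the ranks for Lot 1; returns 37.5
sum2 = sum(all_ranks[7:11])                     # sum of the ranks for Lot 2; returns 28.5
sum1 - length(lot1) * (length(lot1) + 1) / 2    # adjusted sum for Lot 1; returns 16.5
sum2 - length(lot2) * (length(lot2) + 1) / 2    # adjusted sum for Lot 2; returns 13.5
The smaller adjusted sum, 13.5, then is compared to the critical value of 3 from Appendix 7.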
This page titled 7.4: Non-Parametric Significance Tests is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.5: Using R for Significance Testing and Analysis of Variance
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.05%3A_Using_R_for_Significance_Testing_and_Analysis_of_Variance
The base installation of R has functions for most of the significance tests covered in Chapter 7.2 - Chapter 7.4.The R function for comparing variances isvar.test()which takes the following formvar.test(x, y, ratio = 1, alternative = c("two.sided", "less", "greater"), conf.level = 0.95, ...)where x and y are numeric vectors that contain the two samples, ratio is the expected ratio for the null hypothesis (which defaults to 1), alternative is a character string that states the alternative hypothesis (which defaults to two-sided or two-tailed), and a conf.level that gives the size of the confidence interval, which defaults to 0.95, or 95%, or \(\alpha = 0.05\). We can use this function to compare the variances of two samples, \(s_1^2\) vs \(s_2^2\), but not the variance of a sample and the variance for a population \(s^2\) vs \(\sigma^2\).Let's use R on the data from Example 7.2.3, which considers two sets of United States pennies. # create vectors to store the datasample1 = c(3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198)sample2 = c(3.052, 3.141, 3.083, 3.083, 3.048)# run two-sided variance test with alpha = 0.05 and null hypothesis that variances are equalvar.test(x = sample1, y = sample2, ratio = 1, alternative = "two.sided", conf.level = 0.95)The code above yields the following outputF test to compare two variances data: sample1 and sample2 F = 1.8726, num df = 6, denom df = 4, p-value = 0.5661 alternative hypothesis: true ratio of variances is not equal to 1 95 percent confidence interval: 0.2036028 11.6609726 sample estimates: ratio of variances 1.872598Two parts of this output lead us to retain the null hypothesis of equal variances. First, the reported p-value of 0.5661 is larger than our critical value for \(\alpha\) of 0.05, and second, the 95% confidence interval for the ratio of the variances, which runs from 0.204 to 11.7 includes the null hypothesis that it is 1.R does not include a function for comparing \(s^2\) to \(\sigma^2\).The R function for comparing means is t.test()and takes the following formt.test(x, y = NULL, alternative = c("two.sided", "less", "greater"), mu = 0, paired = FALSE, var.equal = FALSE, conf.level = 0.95, ...)where x is a numeric vector that contains the data for one sample and y is an optional vector that contains data for a second sample, alternative is a character string that states the alternative hypothesis (which defaults to two-tailed), mu is either the population's expected mean or the expected difference in the means of the two samples, paired is a logical value that indicates whether the data is paired , var.equal is a logical value that indicates whether the variances for two samples are treated as equal or unequal (based on a prior var.test()), andconf.level gives the size of the confidence interval (which defaults to 0.95, or 95%, or \(\alpha = 0.05\)).Let's use R on the data from Example 7.2.1, which considers the determination of the \(\% \text{Na}_2 \text{CO}_3\) in a standard sample that is known to be 98.76 % w/w \(\text{Na}_2 \text{CO}_3\).# create vector to store the data na2co3 = c(98.71, 98.59, 98.62, 98.44, 98.58)# run a two-sided t-test, using mu to define the expected mean; because the default values # for paired and var.equal are FALSE, we can omit them here t.test(x = na2co3, alternative = "two.sided", mu = 98.76, conf.level = 0.95)The code above yields the following outputOne Sample t-test data: na2co3 t = -3.9522, df = 4, p-value = 0.01679 alternative hypothesis: true mean is not equal to 98.76 95 percent confidence 
interval: 98.46717 98.70883 sample estimates: mean of x 98.588 Two parts of this output lead us to reject the null hypothesis that the experimental mean and the expected mean of 98.76 are the same. First, the reported p-value of 0.01679 is less than our critical value for \(\alpha\) of 0.05, and second, the 95% confidence interval for the experimental mean of 98.588, which runs from 98.467 to 98.709, does not include the null hypothesis that it is 98.76. When comparing the means for two samples, we have to be careful to consider whether the data is unpaired or paired, and for unpaired data we must determine whether we can pool the variances for the two samples. Let's use R on the data from Example 7.2.4, which considers two sets of United States pennies. This data is unpaired and, as we showed earlier, there is no evidence to suggest that the variances of the two samples are different.# create vectors to store the datasample1 = c(3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198)sample2 = c(3.052, 3.141, 3.083, 3.083, 3.048)# run a two-sided t-test, setting mu to 0 as the null hypothesis is that the means are the same, and setting var.equal to TRUEt.test(x = sample1, y = sample2, alternative = "two.sided", mu = 0, var.equal = TRUE, conf.level = 0.95)The code above yields the following outputTwo Sample t-test data: sample1 and sample2 t = 1.3345, df = 10, p-value = 0.2116 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: -0.02403040 0.09580182 sample estimates: mean of x mean of y 3.117286 3.081400 Two parts of this output lead us to retain the null hypothesis of equal means. First, the reported p-value of 0.2116 is greater than our critical value for \(\alpha\) of 0.05, and second, the 95% confidence interval for the difference in the experimental means, which runs from -0.0240 to 0.0958, includes the null hypothesis that it is 0. Let's use R on the data from Example 7.2.6, which compares two methods for determining the concentration of the antibiotic monensin in fermentation vats.# create vectors to store the datamicrobiological = c(129.5, 89.6, 76.6, 52.2, 110.8, 50.4, 72.4, 141.4, 75.0, 34.1, 60.3)electrochemical = c(132.3, 91.0, 73.6, 58.2, 104.2, 49.9, 82.1, 154.1, 73.4, 38.1, 60.1)# run a two-tailed t-test, setting mu to 0 as the null hypothesis is that the means are the same, and setting paired to TRUE t.test(x = microbiological, y = electrochemical, alternative = "two.sided", mu = 0, paired = TRUE, conf.level = 0.95)The code above yields the following outputPaired t-test data: microbiological and electrochemical t = -1.3225, df = 10, p-value = 0.2155 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: -6.028684 1.537775 sample estimates: mean of the differences -2.245455 Two parts of this output lead us to retain the null hypothesis of equal means. First, the reported p-value of 0.2155 is greater than our critical value for \(\alpha\) of 0.05, and second, the 95% confidence interval for the difference in the experimental means, which runs from -6.03 to 1.54, includes the null hypothesis that it is 0. The base installation of R does not include tests for outliers, but the outliers package provides functions for Dixon's Q-test and Grubb's test. 
To install the package, use the following lines of codeinstall.packages("outliers")library(outliers)You only need to install the package once, but you must use library() to make the package available when you begin a new R session.The R function for Dixon's Q-test is dixon.test()and takes the following formdixon.test(x, type, two.sided)where x is a numeric vector with the data we are considering, type defines the specific value(s) that we are testing (we will use type = 10, which tests for a single outlier on either end of the ranked data), and two.sided, which indicates whether we use a one-tailed or two-tailed test (we will use two.sided = FALSE as we are interested in whether the smallest value is too small or the largest value is too large).Let's use R on the data from Example 7.2.7, which considers the masses of a set of United States pennies.penny = c(3.067, 2.514, 3.094, 3.049, 3.048, 3.109, 3.039, 3.079, 3.102)dixon.test(x = penny, two.sided = FALSE, type = 10)The code above yields the following outputDixon test for outliers data: penny Q = 0.88235, p-value < 2.2e-16 alternative hypothesis: lowest value 2.514 is an outlierThe reported p-value of less than \(2.2 \times 10^{-16}\) is less than our critical value for \(\alpha\) of 0.05, which suggests that the penny with a mass of 2.514 g is drawn from a different population than the other pennies.The R function for the Grubb's test is grubbs.test()and takes the following formgrubbs.test(x, type, two.sided)where x is a numeric vector with the data we are considering, type defines the specific value(s) that we are testing (we will use type = 10, which tests for a single outlier on either end of the ranked data), and two.sided, which indicates whether we use a one-tailed or two-tailed test (we will use two.sided = FALSE as we are interested in whether the smallest value is too small or the largest value is too large).Let's use R on the data from Example 7.2.7, which considers the masses of a set of United States pennies.penny = c(3.067, 2.514, 3.094, 3.049, 3.048, 3.109, 3.039, 3.079, 3.102)grubbs.test(x = penny, two.sided = FALSE, type = 10)The code above yields the following outputGrubbs test for one outlier data: penny G = 2.64300, U = 0.01768, p-value = 9.69e-07 alternative hypothesis: lowest value 2.514 is an outlierThe reported p-value of \(9.69 \times 10^{-7}\) is less than our critical value for \(\alpha\) of 0.05, which suggests that the penny with a mass of 2.514 g is drawn from a different population than the other pennies.The R function for completing the Wilcoxon signed rank test and the Wilcoxon rank sum test is wilcox.test(), which takes the following formwilcox.test(x, y = NULL, alternative = c("two.sided", "less", "greater"), mu = 0, paired = FALSE, conf.level = 0.95, ...)where x is a numeric vector that contains the data for one sample and y is an optional vector that contains data for a second sample, alternative is a character string that states the alternative hypothesis (which defaults to two-tailed), mu is either the population's expected mean or the expected difference in the means of the two samples, paired is a logical value that indicates whether the data is paired, and conf.level gives the size of the confidence interval (which defaults to 0.95, or 95%, or \(\alpha = 0.05\)).Let's use R on the data from Example 7.4.1, which compares two methods for determining the concentration of the antibiotic monensin in fermentation vats.# create vectors to store the datamicrobiological = c(129.5, 89.6, 76.6, 52.2, 
110.8, 50.4, 72.4, 141.4, 75.0, 34.1, 60.3)electrochemical = c(132.3, 91.0, 73.6, 58.2, 104.2, 49.9, 82.1, 154.1, 73.4, 38.1, 60.1)# run a two-tailed Wilcoxon signed rank test, setting mu to 0 as the null hypothesis is that # the means are the same and setting paired to TRUE wilcox.test(x = microbiological, y = electrochemical, alternative = "two.sided", mu = 0, paired = TRUE, conf.level = 0.95)The code above yields the following outputWilcoxon signed rank test data: microbiological and electrochemical V = 22, p-value = 0.3652 alternative hypothesis: true location shift is not equal to 0where the value V is the sum of the ranks for the positive differences. The reported p-value of 0.3652 is greater than our critical value for \(\alpha\) of 0.05, which means we do not have evidence to suggest that there is a difference between the mean values for the two methods.Let's use R on the data from Example 7.4.2, which compares the amount of aspirin in tablets from two production lots.# create vectors to store the datalot1 = c(256, 248, 245, 244, 248, 261)lot2 = c(241, 258, 241, 256, 254)# run a two-tailed Wilcoxon rank sum test, setting mu to 0 as the null hypothesis is # that the means are the same, and setting paired to FALSE wilcox.test(x = lot1, y = lot2, alternative = "two.sided", mu = 0, paired = FALSE, conf.level = 0.95)The code above yields the following outputWilcoxon rank sum test with continuity correction data: lot1 and lot2 W = 16.5, p-value = 0.8541 alternative hypothesis: true location shift is not equal to 0 Warning message:In wilcox.test.default(x = lot1, y = lot2, alternative = "two.sided", : cannot compute exact p-value with tieswhere the value W is the adjusted sum of the ranks for the first sample, lot1. The reported p-value of 0.8541 is greater than our critical value for \(\alpha\) of 0.05, which means we do not have evidence to suggest that there is a difference between the mean values for the two lots. Note: we can ignore the warning message here as our calculated value for p is very large relative to an \(\alpha\) of 0.05.Let's use the data in Example 7.3.1 to show how to complete an analysis of variance in R. First, we need to create individual numerical vectors for each treatment and then combine these vectors into a single numerical vector, which we will call recovery, that contains the results for each treatment.a = cb = cc = crecovery = c(a, b, c)We also need to create a vector of character strings that identifies the individual treatments for each element in the vector recovery.treatment = c(rep("a", 3), rep("b", 3), rep("c", 3))The R function for completing an analysis of variance is aov(), which takes the following formaov(formula, ...)where formula is a way of telling R to "explain this variable by using that variable." We will examine formulas in more detail in Chapter 8, but in this case the syntax is recovery ~ treatment , which means to model the recovery based on the treatment. In the code below, we assign the output of the aov() function to a variable so that we have access to the results of the analysis of varianceaov_output = aov(recovery ~ treatment)through the summary() functionsummary(aov_output)Df Sum Sq Mean Sq F value Pr(>F) treatment 2 168 84.00 22.91 0.00155 ** Residuals 6 22 3.67 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1Note that what we earlier called the between variance is identified here as the variance due to the treatments, and what we earlier called the within variance is identified here as the residual variance. 
As we saw in Example 7.3.1, the value for Fexp is significantly greater than the critical value for F at \(\alpha = 0.05\).Having found evidence that there is a significant difference between the treatments, we can use R's TukeyHSD() function to identify the source(s) of that difference (HSD stands for Honest Significant Difference), which takes the general formTukeyHSD(x, conf.level = 0.95, ...)where x is an object that contains the results of an analysis of variance.TukeyHSD(aov_output)Tukey multiple comparisons of means 95% family-wise confidence level Fit: aov(formula = recovery ~ treatment) $treatment diff lwr upr p adj b-a -2 -6.797161 2.797161 0.4554965 c-a -10 -14.797161 -5.202839 0.0016720 c-b -8 -12.797161 -3.202839 0.0052447The table at the end of the output shows, for each pair of treatments, the difference in their mean values, the lower and the upper values for the confidence interval about the difference in the means, and the adjusted p-value for the null hypothesis that the two means are identical. In this case, we can see that the results for treatment C are significantly different from both treatments A and B.We also can view the results of the TukeyHSD analysis visually by passing it to R's plot() function.plot(TukeyHSD(aov_output))This page titled 7.5: Using R for Significance Testing and Analysis of Variance is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.6: Exercises
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/07%3A_Testing_the_Significance_of_Data/7.06%3A_Exercises
1. Use this link to access a case study on data analysis and complete the last investigation in Part V: Ways to Draw Conclusions from Data. 2. Ketkar and co-workers developed an analytical method to determine trace levels of atmospheric gases. An analysis of a sample that is 40.0 parts per thousand (ppt) 2-chloroethylsulfide gave the following results. Determine whether there is a significant difference between the experimental mean and the expected value at \(\alpha = 0.05\). The data in this problem are from Ketkar, S. N.; Dulak, J. G.; Dheandhanou, S.; Fite, W. L. Anal. Chim. Acta 1991, 245, 267–270. 3. To test a spectrophotometer’s accuracy a solution of 60.06 ppm K2Cr2O7 in 5.0 mM H2SO4 is prepared and analyzed. This solution has an expected absorbance of 0.640 at 350.0 nm in a 1.0-cm cell when using 5.0 mM H2SO4 as a reagent blank. Several aliquots of the solution produce the following absorbance values. Determine whether there is a significant difference between the experimental mean and the expected value at \(\alpha = 0.01\). 4. Monna and co-workers used radioactive isotopes to date sediments from lakes and estuaries. To verify this method they analyzed a 208Po standard known to have an activity of 77.5 decays/min, obtaining the following results. Determine whether there is a significant difference between the mean and the expected value at \(\alpha = 0.05\). The data in this problem are from Monna, F.; Mathieu, D.; Marques, A. N.; Lancelot, J.; Bernat, M. Anal. Chim. Acta 1996, 330, 107–116. 5. A 2.6540-g sample of an iron ore, which is 53.51% w/w Fe, is dissolved in a small portion of concentrated HCl and diluted to volume in a 250-mL volumetric flask. A spectrophotometric determination of the concentration of Fe in this solution yields results of 5840, 5770, 5650, and 5660 ppm. Determine whether there is a significant difference between the experimental mean and the expected value at \(\alpha = 0.05\). 6. Horvat and co-workers used atomic absorption spectroscopy to determine the concentration of Hg in coal fly ash. Of particular interest to the authors was developing an appropriate procedure for digesting samples and releasing the Hg for analysis. As part of their study they tested several reagents for digesting samples. Their results using HNO3 and using a 1 + 3 mixture of HNO3 and HCl are shown here. All concentrations are given as ppb Hg sample. Determine whether there is a significant difference between these methods at \(\alpha = 0.05\). The data in this problem are from Horvat, M.; Lupsina, V.; Pihlar, B. Anal. Chim. Acta 1991, 243, 71–79. 7. Lord Rayleigh, John William Strutt, was one of the most well known scientists of the late nineteenth and early twentieth centuries, publishing over 440 papers and receiving the Nobel Prize in 1904 for the discovery of argon. An important turning point in Rayleigh’s discovery of Ar was his experimental measurements of the density of N2. Rayleigh approached this experiment in two ways: first by taking atmospheric air and removing O2 and H2; and second, by chemically producing N2 by decomposing nitrogen-containing compounds (NO, N2O, and NH4NO3) and again removing O2 and H2. The following table shows his results for the density of N2, as published in Proc. Roy. Soc. 1894, LV, 340 (publication 210); all values are the grams of gas at an equivalent volume, pressure, and temperature. Explain why this data led Rayleigh to look for and to discover Ar. You can read more about this discovery here: Larsen, R. D. J. Chem. Educ. 1990, 67, 925–928. 8. 
Gács and Ferraroli reported a method for monitoring the concentration of SO2 in air. They compared their method to the standard method by analyzing urban air samples collected from a single location. Samples were collected by drawing air through a collection solution for 6 min. Shown here is a summary of their results with SO2 concentrations reported in μL/m3.Using an appropriate statistical test, determine whether there is any significant difference between the standard method and the new method at \(\alpha = 0.05\). The data in this problem are from Gács, I.; Ferraroli, R. Anal. Chim. Acta 1992, 269, 177–185.9. One way to check the accuracy of a spectrophotometer is to measure absorbances for a series of standard dichromate solutions obtained from the National Institute of Standards and Technology. Absorbances are measured at 257 nm and compared to the accepted values. The results obtained when testing a newly purchased spectrophotometer are shown here. Determine if the tested spectrophotometer is accurate at \(\alpha = 0.05\).10. Maskarinec and co-workers investigated the stability of volatile organics in environmental water samples. Of particular interest was establishing the proper conditions to maintain the sample’s integrity between its collection and its analysis. Two preservatives were investigated—ascorbic acid and sodium bisulfate—and maximum holding times were determined for a number of volatile organics and water matrices. The following table shows results for the holding time (in days) of nine organic compounds in surface water.Determine whether there is a significant difference in the effectiveness of the two preservatives at \(\alpha = 0.10\). The data in this problem are from Maxkarinec, M. P.; Johnson, L. H.; Holladay, S. K.; Moody, R. L.; Bayne, C. K.; Jenkins, R. A. Environ. Sci. Technol. 1990, 24, 1665–1670.11. Using X-ray diffraction, Karstang and Kvalhein reported a new method to determine the weight percent of kaolinite in complex clay minerals using X-ray diffraction. To test the method, nine samples containing known amounts of kaolinite were prepared and analyzed. The results (as % w/w kaolinite) are shown here.Evaluate the accuracy of the method at \(\alpha = 0.05\). The data in this problem are from Karstang, T. V.; Kvalhein, O. M. Anal. Chem. 1991, 63, 767–772.12. Mizutani, Yabuki and Asai developed an electrochemical method for analyzing l-malate. As part of their study they analyzed a series of beverages using both their method and a standard spectrophotometric procedure based on a clinical kit purchased from Boerhinger Scientific. The following table summarizes their results. All values are in ppm. The data in this problem are from Mizutani, F.; Yabuki, S.; Asai, M. Anal. Chim. Acta 1991, 245,145–150.13. Alexiev and colleagues describe an improved photometric method for determining Fe3+ based on its ability to catalyze the oxidation of sulphanilic acid by KIO4. As part of their study, the concentration of Fe3+ in human serum samples was determined by the improved method and the standard method. The results, with concentrations in μmol/L, are shown in the following table.Determine whether there is a significant difference between the two methods at \(\alpha = 0.05\). The data in this problem are from Alexiev, A.; Rubino, S.; Deyanova, M.; Stoyanova, A.; Sicilia, D.; Perez Bendito, D. Anal. Chim. Acta, 1994, 295, 211–219.14. Ten laboratories were asked to determine an analyte’s concentration of in three standard test samples. 
Following are the results, in μg/mL.Determine if there are any potential outliers in Sample 1, Sample 2 or Sample 3. Use all three methods—Dixon’s Q-test, Grubb’s test, and Chauvenet’s criterion—and compare the results to each other. For Dixon’s Q-test and for the Grubb’s test, use a significance level of \(\alpha = 0.05\). The data in this problem are adapted from Steiner, E. H. “Planning and Analysis of Results of Collaborative Tests,” in Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists: Washington, D. C., 1975.15. Use an appropriate non-parametric test to reanalyze the data in some or all of Exercises 7.6.2 to 7.6.14.16. The importance of between-laboratory variability on the results of an analytical method are determined by having several laboratories analyze the same sample. In one such study, seven laboratories analyzed a sample of homogenized milk for a selected aflatoxin [data from Massart, D. L.; Vandeginste, B. G. M; Deming, S. N.; Michotte, Y.; Kaufman, L. Chemometrics: A Textbook, Elsevier: Amsterdam, 1988]. The results, in ppb, are summarized below.(a) Determine if the between-laboratory variability is significantly greater than the within-laboratory variability at \(\alpha = 0.05\). If the between-laboratory variability is significant, then determine the source(s) of that variability.(b) Estimate values for \(\sigma_{rand}^2\) and for \(\sigma_{syst}^2\).This page titled 7.6: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
8.1: Unweighted Linear Regression With Errors in y
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.01%3A_Linear_Regression_of_a_Straight-Line_Calibration_Curve
The most common method for completing a linear regression makes three assumptions: (1) the difference between our experimental data and the calculated regression line results from indeterminate errors that affect y; (2) these indeterminate errors are normally distributed; and (3) the indeterminate errors in y are independent of the value of x. Because we assume that the indeterminate errors are the same for all standards, each standard contributes equally in our estimate of the slope and the y-intercept. For this reason the result is considered an unweighted linear regression.
The second assumption generally is true because of the central limit theorem, which we considered in Chapter 5.3. The validity of the two remaining assumptions is less obvious and you should evaluate them before you accept the results of a linear regression. In particular the first assumption is always suspect because there certainly is some indeterminate error in the measurement of x. When we prepare a calibration curve, however, it is not unusual to find that the uncertainty in the signal, S, is significantly greater than the uncertainty in the analyte’s concentration, \(C_A\). In such circumstances the first assumption usually is reasonable.
To understand the logic of a linear regression consider the example in , which shows three data points and two possible straight-lines that might reasonably explain the data. How do we decide how well these straight-lines fit the data, and how do we determine which, if either, is the best straight-line?
Let’s focus on the solid line in . The equation for this line is\[\hat{y} = b_0 + b_1 x \nonumber \]where b0 and b1 are estimates for the y-intercept and the slope, and \(\hat{y}\) is the predicted value of y for any value of x. Because we assume that all uncertainty is the result of indeterminate errors in y, the difference between y and \(\hat{y}\) for each value of x is the residual error, r, in our mathematical model.\[r_i = (y_i - \hat{y}_i) \nonumber\] shows the residual errors for the three data points. The smaller the total residual error, R, which we define as\[R = \sum_{i = 1}^{n} (y_i - \hat{y}_i)^2 \nonumber \]the better the fit between the straight-line and the data. In a linear regression analysis, we seek values of b0 and b1 that give the smallest total residual error.
The reason for squaring the individual residual errors is to prevent a positive residual error from canceling out a negative residual error. You have seen this before in the equations for the sample and population standard deviations introduced in Chapter 4. You also can see from this equation why a linear regression is sometimes called the method of least squares.
Although we will not formally develop the mathematical equations for a linear regression analysis, you can find the derivations in many standard statistical texts [See, for example, Draper, N. R.; Smith, H. Applied Regression Analysis, 3rd ed.; Wiley: New York, 1998]. The resulting equation for the slope, b1, is\[b_1 = \frac {n \sum_{i = 1}^{n} x_i y_i - \sum_{i = 1}^{n} x_i \sum_{i = 1}^{n} y_i} {n \sum_{i = 1}^{n} x_i^2 - \left( \sum_{i = 1}^{n} x_i \right)^2} \nonumber \]and the equation for the y-intercept, b0, is\[b_0 = \frac {\sum_{i = 1}^{n} y_i - b_1 \sum_{i = 1}^{n} x_i} {n} \nonumber \]Although these equations appear formidable, it is necessary only to evaluate the following four summations\[\sum_{i = 1}^{n} x_i \quad \sum_{i = 1}^{n} y_i \quad \sum_{i = 1}^{n} x_i y_i \quad \sum_{i = 1}^{n} x_i^2 \nonumber\]Many calculators, spreadsheets, and other statistical software packages are capable of performing a linear regression analysis based on this model; see Section 8.5 for details on completing a linear regression analysis using R.
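As a minimal sketch of how these four summations lead to the slope and the y-intercept, the following R commands evaluate them directly; the data are the same calibration data used in the worked example below and listed in Section 8.5, and the object names are chosen only for illustration.
x = c(0, 0.100, 0.200, 0.300, 0.400, 0.500)   # concentrations of the standards, C_A
y = c(0, 12.36, 24.83, 35.91, 48.79, 60.42)   # measured signals, S
n = length(x)
sum_x = sum(x)        # 1.500
sum_y = sum(y)        # 182.31
sum_xy = sum(x * y)   # 66.701
sum_x2 = sum(x^2)     # 0.550
b1 = (n * sum_xy - sum_x * sum_y)/(n * sum_x2 - sum_x^2)   # slope, approximately 120.71
b0 = (sum_y - b1 * sum_x)/n                                # y-intercept, approximately 0.21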
For illustrative purposes the necessary calculations are shown in detail in the following example.
Using the calibration data in the following table, determine the relationship between the signal, \(y_i\), and the analyte's concentration, \(x_i\), using an unweighted linear regression.
Solution
We begin by setting up a table to help us organize the calculation. Adding the values in each column gives\[\sum_{i = 1}^{n} x_i = 1.500 \quad \sum_{i = 1}^{n} y_i = 182.31 \quad \sum_{i = 1}^{n} x_i y_i = 66.701 \quad \sum_{i = 1}^{n} x_i^2 = 0.550 \nonumber\]Substituting these values into the equations for the slope and the y-intercept gives\[b_1 = \frac {(6 \times 66.701) - (1.500 \times 182.31)} {(6 \times 0.550) - (1.500)^2} = 120.706 \approx 120.71 \nonumber\]\[b_0 = \frac {182.31 - (120.706 \times 1.500)} {6} = 0.209 \approx 0.21 \nonumber\]The relationship between the signal, \(S\), and the analyte's concentration, \(C_A\), therefore, is\[S = 120.71 \times C_A + 0.21 \nonumber\]For now we keep two decimal places to match the number of decimal places in the signal. The resulting calibration curve is shown in .
As we see in , because of indeterminate errors in the signal, the regression line does not pass through the exact center of each data point. The cumulative deviation of our data from the regression line—the total residual error—is proportional to the uncertainty in the regression. We call this uncertainty the standard deviation about the regression, sr, which is equal to\[s_r = \sqrt{\frac {\sum_{i = 1}^{n} \left( y_i - \hat{y}_i \right)^2} {n - 2}} \nonumber \]where yi is the ith experimental value, and \(\hat{y}_i\) is the corresponding value predicted by the regression equation \(\hat{y} = b_0 + b_1 x\). Note that the denominator indicates that our regression analysis has n – 2 degrees of freedom—we lose two degrees of freedom because we use two parameters, the slope and the y-intercept, to calculate \(\hat{y}_i\).
A more useful representation of the uncertainty in our regression analysis is to consider the effect of indeterminate errors on the slope, b1, and the y-intercept, b0, which we express as standard deviations.\[s_{b_1} = \sqrt{\frac {n s_r^2} {n \sum_{i = 1}^{n} x_i^2 - \left( \sum_{i = 1}^{n} x_i \right)^2}} = \sqrt{\frac {s_r^2} {\sum_{i = 1}^{n} \left( x_i - \overline{x} \right)^2}} \nonumber \]\[s_{b_0} = \sqrt{\frac {s_r^2 \sum_{i = 1}^{n} x_i^2} {n \sum_{i = 1}^{n} x_i^2 - \left( \sum_{i = 1}^{n} x_i \right)^2}} = \sqrt{\frac {s_r^2 \sum_{i = 1}^{n} x_i^2} {n \sum_{i = 1}^{n} \left( x_i - \overline{x} \right)^2}} \nonumber \]We use these standard deviations to establish confidence intervals for the expected slope, \(\beta_1\), and the expected y-intercept, \(\beta_0\)\[\beta_1 = b_1 \pm t s_{b_1} \nonumber \]\[\beta_0 = b_0 \pm t s_{b_0} \nonumber \]where we select t for a significance level of \(\alpha\) and for n – 2 degrees of freedom. Note that these equations do not contain the factor of \((\sqrt{n})^{-1}\) seen in the confidence intervals for \(\mu\) in Chapter 6.2; this is because the confidence interval here is based on a single regression line.
Calculate the 95% confidence intervals for the slope and y-intercept from Example \(\PageIndex{1}\).
Solution
We begin by calculating the standard deviation about the regression. To do this we must calculate the predicted signals, \(\hat{y}_i\), using the slope and the y-intercept from Example \(\PageIndex{1}\), and the squares of the residual error, \((y_i - \hat{y}_i)^2\).
Using the last standard as an example, we find that the predicted signal is\[\hat{y}_6 = b_0 + b_1 x_6 = 0.209 + (120.706 \times 0.500) = 60.562 \nonumber\]and that the square of the residual error is\[(y_6 - \hat{y}_6)^2 = (60.42 - 60.562)^2 = 0.02016 \approx 0.0202 \nonumber\]The following table displays the results for all six solutions. Adding together the data in the last column gives the numerator in the equation for the standard deviation about the regression; thus\[s_r = \sqrt{\frac {0.6512} {6 - 2}} = 0.4035 \nonumber\]Next we calculate the standard deviations for the slope and the y-intercept. The values for the summation terms are from Example \(\PageIndex{1}\).\[s_{b_1} = \sqrt{\frac {6 \times (0.4035)^2} {(6 \times 0.550) - (1.500)^2}} = 0.965 \nonumber\]\[s_{b_0} = \sqrt{\frac {(0.4035)^2 \times 0.550} {(6 \times 0.550) - (1.500)^2}} = 0.292 \nonumber\]Finally, the 95% confidence intervals (\(\alpha = 0.05\), 4 degrees of freedom) for the slope and y-intercept are\[\beta_1 = b_1 \pm ts_{b_1} = 120.706 \pm (2.78 \times 0.965) = 120.7 \pm 2.7 \nonumber\]\[\beta_0 = b_0 \pm ts_{b_0} = 0.209 \pm (2.78 \times 0.292) = 0.2 \pm 0.8 \nonumber\]where t(0.05, 4) from Appendix 2 is 2.78. The standard deviation about the regression, sr, suggests that the signal, Sstd, is precise to one decimal place. For this reason we report the slope and the y-intercept to a single decimal place.
Once we have our regression equation, it is easy to determine the concentration of analyte in a sample. When we use a normal calibration curve, for example, we measure the signal for our sample, Ssamp, and calculate the analyte’s concentration, CA, using the regression equation.\[C_A = \frac {S_{samp} - b_0} {b_1} \nonumber \]What is less obvious is how to report a confidence interval for CA that expresses the uncertainty in our analysis. To calculate a confidence interval we need to know the standard deviation in the analyte’s concentration, \(s_{C_A}\), which is given by the following equation\[s_{C_A} = \frac {s_r} {b_1} \sqrt{\frac {1} {m} + \frac {1} {n} + \frac {\left( \overline{S}_{samp} - \overline{S}_{std} \right)^2} {(b_1)^2 \sum_{i = 1}^{n} \left( C_{std_i} - \overline{C}_{std} \right)^2}} \nonumber\]where m is the number of replicates we use to establish the sample’s average signal, Ssamp, n is the number of calibration standards, Sstd is the average signal for the calibration standards, and \(C_{std_i}\) and \(\overline{C}_{std}\) are the individual and the mean concentrations for the calibration standards. Knowing the value of \(s_{C_A}\), the confidence interval for the analyte’s concentration is\[\mu_{C_A} = C_A \pm t s_{C_A} \nonumber\]where \(\mu_{C_A}\) is the expected value of CA in the absence of determinate errors, and the value of t is based on the desired level of confidence and n – 2 degrees of freedom.
A close examination of these equations should convince you that we can decrease the uncertainty in the predicted concentration of analyte, \(C_A\) if we increase the number of standards, \(n\), increase the number of replicate samples that we analyze, \(m\), and if the sample’s average signal, \(\overline{S}_{samp}\), is equal to the average signal for the standards, \(\overline{S}_{std}\). When practical, you should plan your calibration curve so that Ssamp falls in the middle of the calibration curve. For more information about these regression equations see (a) Miller, J. N. Analyst 1991, 116, 3–14; (b) Sharaf, M. A.; Illman, D. L.; Kowalski, B.
R. Chemometrics, Wiley-Interscience: New York, 1986, pp. 126-127; (c) Analytical Methods Committee “Uncertainties in concentrations estimated from calibration experiments,” AMC Technical Brief, March 2006.
The equation for the standard deviation in the analyte's concentration is written in terms of a calibration experiment. A more general form of the equation, written in terms of x and y, is given here.\[s_{x} = \frac {s_r} {b_1} \sqrt{\frac {1} {m} + \frac {1} {n} + \frac {\left( \overline{Y} - \overline{y} \right)^2} {(b_1)^2 \sum_{i = 1}^{n} \left( x_i - \overline{x} \right)^2}} \nonumber\]
Three replicate analyses for a sample that contains an unknown concentration of analyte yield values for Ssamp of 29.32, 29.16 and 29.51 (arbitrary units). Using the results from Example \(\PageIndex{1}\) and Example \(\PageIndex{2}\), determine the analyte’s concentration, CA, and its 95% confidence interval.
Solution
The average signal, \(\overline{S}_{samp}\), is 29.33, which, using the slope and the y-intercept from Example \(\PageIndex{1}\), gives the analyte’s concentration as\[C_A = \frac {\overline{S}_{samp} - b_0} {b_1} = \frac {29.33 - 0.209} {120.706} = 0.241 \nonumber\]To calculate the standard deviation for the analyte’s concentration we must determine the values for \(\overline{S}_{std}\) and for \(\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2\). The former is just the average signal for the calibration standards, which, using the data in Table \(\PageIndex{1}\), is 30.385. Calculating \(\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2\) looks formidable, but we can simplify its calculation by recognizing that this sum-of-squares is the numerator in a standard deviation equation; thus,\[\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2 = (s_{C_{std}})^2 \times (n - 1) \nonumber\]where \(s_{C_{std}}\) is the standard deviation for the concentration of analyte in the calibration standards. Using the data in Table \(\PageIndex{1}\) we find that \(s_{C_{std}}\) is 0.1871 and\[\sum_{i = 1}^{n} (C_{std_i} - \overline{C}_{std})^2 = (0.1871)^2 \times (6 - 1) = 0.175 \nonumber\]Substituting known values into the equation for \(s_{C_A}\) gives\[s_{C_A} = \frac {0.4035} {120.706} \sqrt{\frac {1} {3} + \frac {1} {6} + \frac {(29.33 - 30.385)^2} {(120.706)^2 \times 0.175}} = 0.0024 \nonumber\]Finally, the 95% confidence interval for 4 degrees of freedom is\[\mu_{C_A} = C_A \pm ts_{C_A} = 0.241 \pm (2.78 \times 0.0024) = 0.241 \pm 0.007 \nonumber\] shows the calibration curve with curves showing the 95% confidence interval for CA.
You should never accept the result of a linear regression analysis without evaluating the validity of the model. Perhaps the simplest way to evaluate a regression analysis is to examine the residual errors. As we saw earlier, the residual error for a single calibration standard, ri, is\[r_i = (y_i - \hat{y}_i) \nonumber\]If the regression model is valid, then the residual errors should be distributed randomly about an average residual error of zero, with no apparent trend toward either smaller or larger residual errors. Trends such as those in and provide evidence that at least one of the model’s assumptions is incorrect. For example, a trend toward larger residual errors at higher concentrations, , suggests that the indeterminate errors affecting the signal are not independent of the analyte’s concentration. In , the residual errors are not random, which suggests we cannot model the data using a straight-line relationship.
Regression methods for the latter two cases are discussed in the following sections.Use your results from Exercise \(\PageIndex{1}\) to construct a residual plot and explain its significance.SolutionTo create a residual plot, we need to calculate the residual error for each standard. The following table contains the relevant information.The figure below shows a plot of the resulting residual errors. The residual errors appear random, although they do alternate in sign, and they do not show any significant dependence on the analyte’s concentration. Taken together, these observations suggest that our regression model is appropriate.8.1: Unweighted Linear Regression With Errors in y is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
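As a rough cross-check on the hand calculations in this section, the following R sketch reproduces the standard deviation about the regression, the standard deviations and 95% confidence intervals for the slope and the y-intercept, and the confidence interval for the analyte's concentration; the object names are illustrative only and the values noted in the comments are approximate.
x = c(0, 0.100, 0.200, 0.300, 0.400, 0.500)   # concentrations of the standards
y = c(0, 12.36, 24.83, 35.91, 48.79, 60.42)   # measured signals
n = length(x)
fit = lm(y ~ x)
b0 = coef(fit)[1]                              # y-intercept, approximately 0.209
b1 = coef(fit)[2]                              # slope, approximately 120.706
s_r = sqrt(sum(residuals(fit)^2)/(n - 2))      # standard deviation about the regression, approximately 0.4035
Sxx = sum((x - mean(x))^2)
s_b1 = sqrt(s_r^2/Sxx)                         # standard deviation of the slope, approximately 0.965
s_b0 = sqrt(s_r^2 * sum(x^2)/(n * Sxx))        # standard deviation of the y-intercept, approximately 0.292
t_crit = qt(0.975, df = n - 2)                 # t(0.05, 4), approximately 2.78
b1 + c(-1, 1) * t_crit * s_b1                  # 95% confidence interval for the slope
b0 + c(-1, 1) * t_crit * s_b0                  # 95% confidence interval for the y-intercept
S_samp = c(29.32, 29.16, 29.51)                # three replicate signals for the sample
m = length(S_samp)
C_A = (mean(S_samp) - b0)/b1                   # analyte's concentration, approximately 0.241
s_CA = (s_r/b1) * sqrt(1/m + 1/n + (mean(S_samp) - mean(y))^2/(b1^2 * Sxx))
C_A + c(-1, 1) * t_crit * s_CA                 # 95% confidence interval, approximately 0.241 +/- 0.007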
8.2: Weighted Linear Regression with Errors in y
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.02%3A_Weighted_Linear_Regression_with_Errors_in_y
Our treatment of linear regression to this point assumes that any indeterminate errors that affect y are independent of the value of x. If this assumption is false, then we must include the variance for each value of y in our determination of the y-intercept, b0, and the slope, b1; thus\[b_0 = \frac {\sum_{i = 1}^{n} w_i y_i - b_1 \sum_{i = 1}^{n} w_i x_i} {n} \nonumber \]\[b_1 = \frac {n \sum_{i = 1}^{n} w_i x_i y_i - \sum_{i = 1}^{n} w_i x_i \sum_{i = 1}^{n} w_i y_i} {n \sum_{i = 1}^{n} w_i x_i^2 - \left( \sum_{i = 1}^{n} w_i x_i \right)^2} \nonumber\]where wi is a weighting factor that accounts for the variance in yi \[w_i = \frac {n (s_{y_i})^{-2}} {\sum_{i = 1}^{n} (s_{y_i})^{-2}} \nonumber\]and \(s_{y_i}\) is the standard deviation for yi. In a weighted linear regression, each xy-pair’s contribution to the regression line is inversely proportional to the variance of yi; that is, the more precise the value of y, the greater its contribution to the regression.
Shown here are data for an external standardization in which sstd is the standard deviation for three replicate determinations of the signal. This is the same data used in the examples in Section 8.1 with additional information about the standard deviations in the signal. Determine the calibration curve’s equation using a weighted linear regression. As you work through this example, remember that x corresponds to Cstd, and that y corresponds to Sstd.
Solution
We begin by setting up a table to aid in calculating the weighting factors. Adding together the values in the fourth column gives\[\sum_{i = 1}^{n} (s_{y_i})^{-2} \nonumber\]which we use to calculate the individual weights in the last column. As a check on your calculations, the sum of the individual weights must equal the number of calibration standards, n. The sum of the entries in the last column is 6.0000, so all is well. After we calculate the individual weights, we use a second table to aid in calculating the four summation terms in the equations for the slope, \(b_1\), and the y-intercept, \(b_0\). Adding the values in the last four columns gives\[\sum_{i = 1}^{n} w_i x_i = 0.3644 \quad \sum_{i = 1}^{n} w_i y_i = 44.9499 \quad \sum_{i = 1}^{n} w_i x_i^2 = 0.0499 \quad \sum_{i = 1}^{n} w_i x_i y_i = 6.1451 \nonumber\]which gives the estimated slope and the estimated y-intercept as\[b_1 = \frac {(6 \times 6.1451) - (0.3644 \times 44.9499)} {(6 \times 0.0499) - (0.3644)^2} = 122.985 \nonumber\]\[b_0 = \frac{44.9499 - (122.985 \times 0.3644)} {6} = 0.0224 \nonumber\]The calibration equation is\[S_{std} = 122.98 \times C_{std} + 0.02 \nonumber\] shows the calibration curve for the weighted regression determined here and the calibration curve for the unweighted regression from Section 8.1. Although the two calibration curves are very similar, there are slight differences in the slope and in the y-intercept. Most notably, the y-intercept for the weighted linear regression is closer to the expected value of zero. Because the standard deviation for the signal, Sstd, is smaller for smaller concentrations of analyte, Cstd, a weighted linear regression gives more emphasis to these standards, allowing for a better estimate of the y-intercept.
Equations for calculating confidence intervals for the slope, the y-intercept, and the concentration of analyte when using a weighted linear regression are not as easy to define as for an unweighted linear regression [Bonate, P. J. Anal. Chem. 1993, 65, 1367–1372].
The confidence interval for the analyte’s concentration, however, is at its optimum value when the analyte’s signal is near the weighted centroid, yc, of the calibration curve.\[y_c = \frac {1} {n} \sum_{i = 1}^{n} w_i y_i \nonumber\]8.2: Weighted Linear Regression with Errors in y is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
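For readers who want to follow the arithmetic of this section in R, here is a minimal sketch of the weighted slope and y-intercept calculation using the equations above; the standard deviations are those given for this example, the object names are illustrative only, and the values noted in the comments are approximate.
x = c(0, 0.100, 0.200, 0.300, 0.400, 0.500)   # C_std
y = c(0, 12.36, 24.83, 35.91, 48.79, 60.42)   # S_std
s_y = c(0.02, 0.02, 0.07, 0.13, 0.22, 0.33)   # standard deviations for the replicate signals
n = length(x)
w = n * s_y^(-2)/sum(s_y^(-2))                # normalized weights; sum(w) equals n
sum(w)                                        # check: 6
b1 = (n * sum(w * x * y) - sum(w * x) * sum(w * y))/(n * sum(w * x^2) - sum(w * x)^2)   # approximately 122.98
b0 = (sum(w * y) - b1 * sum(w * x))/n                                                   # approximately 0.02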
8.3: Weighted Linear Regression With Errors in Both x and y
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.03%3A_Weighted_Linear_Regression_With_Errors_in_Both_x_and_y
If we remove our assumption that indeterminate errors affecting a calibration curve are present only in the signal (y), then we also must factor into the regression model the indeterminate errors that affect the analyte’s concentration in the calibration standards (x). The solution for the resulting regression line is computationally more involved than that for either the unweighted or weighted regression lines. Although we will not consider the details in this textbook, you should be aware that neglecting the presence of indeterminate errors in x can bias the results of a linear regression.
See, for example, Analytical Methods Committee, “Fitting a linear functional relationship to data with error on both variables,” AMC Technical Brief, March, 2002, as well as this chapter’s Additional Resources.
8.3: Weighted Linear Regression With Errors in Both x and y is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
8.4: Curvilinear, Multivariable, and Multivariate Regression
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.04%3A_Curvilinear_and_Multivariate_Regression
A straight-line regression model, despite its apparent complexity, is the simplest functional relationship between two variables. What do we do if our calibration curve is curvilinear—that is, if it is a curved-line instead of a straight-line? One approach is to try transforming the data into a straight-line. Logarithms, exponentials, reciprocals, square roots, and trigonometric functions have been used in this way. A plot of log(y) versus x is a typical example. Such transformations are not without complications, of which the most obvious is that data with a uniform variance in y will not maintain that uniform variance after it is transformed.It is worth noting here that the term “linear” does not mean a straight-line. A linear function may contain more than one additive term, but each such term has one and only one adjustable multiplicative parameter. The function\[y = ax + bx^2 \nonumber\]is an example of a linear function because the terms x and x2 each include a single multiplicative parameter, a and b, respectively. The function\[y = x^b \nonumber\]is nonlinear because b is not a multiplicative parameter; it is, instead, a power. This is why you can use linear regression to fit a polynomial equation to your data.Sometimes it is possible to transform a nonlinear function into a linear function. For example, taking the log of both sides of the nonlinear function above gives a linear function.\[\log(y) = b \log(x) \nonumber\]Another approach to developing a linear regression model is to fit a polynomial equation to the data, such as \(y = a + b x + c x^2\). You can use linear regression to calculate the parameters a, b, and c, although the equations are different than those for the linear regression of a straight-line. If you cannot fit your data using a single polynomial equation, it may be possible to fit separate polynomial equations to short segments of the calibration curve. The result is a single continuous calibration curve known as a spline function. The use of R for curvilinear regression is included in Chapter 8.5.For details about curvilinear regression, see (a) Sharaf, M. A.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York, 1986; (b) Deming, S. N.; Morgan, S. L. Experimental Design: A Chemometric Approach, Elsevier: Amsterdam, 1987.The regression models in this chapter apply only to functions that contain a single dependent variable and a single independent variable. One example is the simplest form of Beer's law in which the absorbance, \(A\), of a sample at a single wavelength, \(\lambda\), depends upon the concentration of a single analyte, \(C_A\)\[A_{\lambda} = \epsilon_{\lambda, A} b C_A \nonumber\]where \(\epsilon_{\lambda, A}\) is the analyte's molar absorptivity at the selected wavelength and \(b\) is the pathlength through the sample. In the presence of an interferent, \(I\), however, the signal may depend on the concentrations of both the analyte and the interferent\[A_{\lambda} = \epsilon_{\lambda, A} b C_A + \epsilon_{\lambda, I} b C_I \nonumber\]where \(\epsilon_{\lambda, I}\) is the interferent’s molar absorptivity and CI is the interferent’s concentration. 
This is an example of multivariable regression, which is covered in more detail in Chapter 9 when we consider the optimization of experiments where there is a single dependent variable and two or more independent variables.For more details on Beer's law, see Chapter 10 of Analytical Chemistry 2.1.In multivariate regression we have both multiple dependent variables, such as the absorbance of samples at two or more wavelengths, and multiple independent variables, such as the concentrations of two or more analytes in the samples. As discussed in Chapter 0.2, we can represent this using matrix notation\[\begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & A & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times c} = \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & \epsilon b & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times n} \times \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & C & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{n \times c} \nonumber\]where there are \(r\) wavelengths, \(c\) samples, and \(n\) analytes. Each column in the \(\epsilon b\) matrix, for example, holds the \(\epsilon b\) value for a different analyte at one of \(r\) wavelengths, and each row in the \(C\) matrix is the concentration of one of the \(n\) analytes in one of the \(c\) samples. We will consider this approach in more detail in Chapter 11.For a nice discussion of the difference between multivariable regression and multivariate regression, see Hidalgo, B.; Goodman, M. "Multivariate or Multivariable Regression," Am. J. Public Health, 2013, 103, 39-40.8.4: Curvilinear, Multivariable, and Multivariate Regression is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
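To make the matrix notation above concrete, here is a small R sketch of this multivariate form of Beer's law for three wavelengths, two analytes, and three samples; the \(\epsilon b\) values and concentrations are invented purely for illustration and are not taken from any real system.
eb = matrix(c(250, 40,
              120, 180,
              30, 310),
            nrow = 3, byrow = TRUE)            # rows are wavelengths, columns are analytes
conc = matrix(c(1.0e-4, 2.0e-4, 0.5e-4,
                3.0e-4, 1.0e-4, 2.5e-4),
              nrow = 2, byrow = TRUE)          # rows are analytes, columns are samples
absorbance = eb %*% conc                       # a 3 x 3 matrix of absorbances: wavelengths by samples
absorbance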
8.5: Using R for a Linear Regression Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.05%3A_Using_R_for_a_Linear_Regression_Analysis
In Section 8.1 we used the data in the table below to work through the details of a linear regression analysis where values of \(x_i\) are the concentrations of analyte, \(C_A\), in a series of standard solutions, and where values of \(y_i\), are their measured signals, \(S\). Let’s use R to model this data using the equation for a straight-line.\[y = \beta_0 + \beta_1 x \nonumber\]To begin, we create two objects, one that contains the concentration of the standards and one that contains their corresponding signals.
conc = c(0, 0.1, 0.2, 0.3, 0.4, 0.5)
signal = c(0, 12.36, 24.83, 35.91, 48.79, 60.42)
A linear model in R is defined using the general syntax
dependent variable ~ independent variable(s)
For example, the syntax for a model with the equation \(y = \beta_0 + \beta_1 x\), where \(\beta_0\) and \(\beta_1\) are the model's adjustable parameters, is \(y \sim x\). Table \(\PageIndex{2}\) provides some additional examples where \(A\) and \(B\) are independent variables, such as the concentrations of two analytes, and \(y\) is a dependent variable, such as a measured signal.
The last formula in this table, \(y \sim A + I(A\text{^2})\), includes the I(), or AsIs function. One complication with writing formulas is that they use symbols that have different meanings in formulas than they have in a mathematical equation. For example, take the simple formula \(y \sim A + B\) that corresponds to the model \(y = \beta_0 + \beta_a A + \beta_b B\). Note that the plus sign here builds a formula that has an intercept and a term for \(A\) and a term for \(B\). But what if we wanted to build a model that used the sum of \(A\) and \(B\) as the variable? Wrapping \(A+B\) inside of the I() function accomplishes this; thus \(y \sim I(A + B)\) builds the model \(y = \beta_0 + \beta_{a+b} (A + B)\).
To create our model we use the lm() function—where lm stands for linear model—assigning the results to an object so that we can access them later.
calcurve = lm(signal ~ conc)
To evaluate the results of a linear regression we need to examine the data and the regression line, and to review a statistical summary of the model. To examine our data and the regression line, we use the plot() function, first introduced in Chapter 3, which takes the following general form
plot(x, y, ...)
where x and y are the objects that contain our data and the ... allow for passing optional arguments to control the plot's style. To overlay the regression line, we use the abline() function
abline(object, ...)
where object is the object that contains the results of the linear regression model and the ... allow for passing optional arguments to control the line's style. Entering the commands
plot(conc, signal, pch = 19, col = "blue", cex = 2)
abline(calcurve, col = "red", lty = 2, lwd = 2)
creates the plot shown in .
The abline() function works only with a straight-line model.
To review a statistical summary of the regression model, we use the summary() function.
summary(calcurve)
The resulting output, which is shown below, contains three sections.
Call:
lm(formula = signal ~ conc)
Residuals:
       1        2        3        4        5        6
-0.20857  0.08086  0.48029 -0.51029  0.29914 -0.14143
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)    0.2086     0.2919   0.715    0.514
conc         120.7057     0.9641 125.205 2.44e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.4033 on 4 degrees of freedom
Multiple R-squared: 0.9997, Adjusted R-squared: 0.9997
F-statistic: 1.568e+04 on 1 and 4 DF, p-value: 2.441e-08
The first section of this summary lists the residual errors. To examine a plot of the residual errors, use the command
plot(calcurve, which = 1)
which produces the result shown in . Note that R plots the residuals against the predicted (fitted) values of y instead of against the known values of x, as we did in Section 8.1; the choice of how to plot the residuals is not critical. The line in is a smoothed fit of the residuals.
The reason for including the argument which = 1 is not immediately obvious. When you use R’s plot() function on an object created using lm(), the default is to create four charts that summarize the model’s suitability. The first of these charts is the residual plot; thus, which = 1 limits the output to this plot.
The second section of the summary provides estimates for the model’s coefficients—the slope, \(\beta_1\), and the y-intercept, \(\beta_0\)—along with their respective standard deviations (Std. Error). The columns t value and Pr(>|t|) are, respectively, the test statistic and the p-value for the following t-tests.
slope: \(H_0 \text{: } \beta_1 = 0 \quad H_A \text{: } \beta_1 \neq 0\)
y-intercept: \(H_0 \text{: } \beta_0 = 0 \quad H_A \text{: } \beta_0 \neq 0\)
The results of these t-tests provide convincing evidence that the slope is not zero and no evidence that the y-intercept differs significantly from zero.
The last section of the summary provides the standard deviation about the regression (residual standard error), the square of the correlation coefficient (multiple R-squared), and the result of an F-test on the model’s ability to explain the variation in the y values. The value for F-statistic is the result of an F-test of the following null and alternative hypotheses.
H0: the regression model does not explain the variation in y
HA: the regression model does explain the variation in y
The p-value reported with the F-statistic is the probability for retaining the null hypothesis. In this example, the probability is \(2.4 \times 10^{-8}\), which is strong evidence for rejecting the null hypothesis and accepting the regression model. As is the case with the correlation coefficient, a small value for the probability is a likely outcome for any calibration curve, even when the model is inappropriate. The probability for retaining the null hypothesis for the data in , for example, is \(9.0 \times 10^{-5}\).
The correlation coefficient is a measure of the extent to which the regression model explains the variation in y. Values of r range from –1 to +1. The closer the correlation coefficient is to +1 or to –1, the better the model is at explaining the data. A correlation coefficient of 0 means there is no relationship between x and y. In developing the calculations for linear regression, we did not consider the correlation coefficient. There is a reason for this. For most straight-line calibration curves the correlation coefficient is very close to +1, typically 0.99 or better. There is a tendency, however, to put too much faith in the correlation coefficient’s significance, and to assume that an r greater than 0.99 means the linear regression model is appropriate. provides a useful counterexample. Although the regression line has a correlation coefficient of 0.993, the data clearly is curvilinear.
The take-home lesson is simple: do not fall in love with the correlation coefficient!
Although R's base installation does not include a command for predicting the uncertainty in the independent variable, \(x\), given a measured value for the dependent variable, \(y\), the chemCal package does. To use this package you need to install it by entering the following command.
install.packages("chemCal")
Once installed, which you need to do just once, you can access the package's functions by using the library() command.
library(chemCal)
The command for predicting the uncertainty in CA is inverse.predict() and takes the following form for an unweighted linear regression
inverse.predict(object, newdata, alpha = value)
where object is the object that contains the regression model’s results, newdata is an object that contains one or more replicate values for the dependent variable, and value is the numerical value for the significance level. Let’s use this command to complete the calibration curve example from Section 8.1 in which we determined the concentration of analyte in a sample using three replicate analyses. First, we create an object that contains the replicate measurements of the signal
rep_signal = c(29.32, 29.16, 29.51)
and then we complete the computation using the following command
inverse.predict(calcurve, rep_signal, alpha = 0.05)
which yields the results shown here
$Prediction
0.2412597
$`Standard Error`
0.002363588
$Confidence
0.006562373
$`Confidence Limits`
0.2346974 0.2478221
The analyte’s concentration, CA, is given by the value $Prediction, and its standard deviation, \(s_{C_A}\), is shown as $`Standard Error`. The value for $Confidence is the confidence interval, \(\pm t s_{C_A}\), for the analyte’s concentration, and $`Confidence Limits` provides the lower limit and upper limit for the confidence interval for CA.
R’s command for an unweighted linear regression also allows for a weighted linear regression if we include an additional argument, weights, whose value is an object that contains the weights.
lm(y ~ x, weights = object)
Let’s use this command to complete the weighted linear regression example in Section 8.2. First, we need to create an object that contains the weights, which in R are the reciprocals of the squared standard deviations in y, \((s_{y_i})^{-2}\). Using the data from the earlier example, we enter
syi = c(0.02, 0.02, 0.07, 0.13, 0.22, 0.33)
w = 1/syi^2
to create the object, w, that contains the weights. The commands
weighted_calcurve = lm(signal ~ conc, weights = w)
summary(weighted_calcurve)
generate the following output.
Call:
lm(formula = signal ~ conc, weights = w)
Weighted Residuals:
     1      2      3      4      5      6
-2.223  2.571  3.676 -7.129 -1.413 -2.864
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.04446    0.08542    0.52     0.63
conc        122.64111    0.93590  131.04 2.03e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.639 on 4 degrees of freedom
Multiple R-squared: 0.9998, Adjusted R-squared: 0.9997
F-statistic: 1.717e+04 on 1 and 4 DF, p-value: 2.034e-08
Any difference between the results shown here and the results in Section 8.2 is the result of round-off errors in our earlier calculations.
You may have noticed that this way of defining weights is different than that shown in Section 8.2. In deriving equations for a weighted linear regression, you can choose to normalize the sum of the weights to equal the number of points, or you can choose not to—the algorithm in R does not normalize the weights.
As we see in the following example, we can use R to model data that is not in the form of a straight-line by simply adjusting the linear model.
Use the data in the table below to explore two models, one using a straight-line, \(y = \beta_0 + \beta_1 x\), and one that is a second-order polynomial, \(y = \beta_0 + \beta_1 x + \beta_2 x^2\).
Solution
First, we create objects to store our data.
x = c(0, 1.00, 2.00, 3.00, 4.00, 5.00)
y = c(0, 0.94, 2.15, 3.19, 3.70, 4.21)
Next, we build our linear models for a straight-line and for a curvilinear fit to the data
straight_line = lm(y ~ x)
curvilinear = lm(y ~ x + I(x^2))
and plot the data and both linear models on the same plot. Because abline() only works for a straight-line, we use our curvilinear model to calculate sufficient values for x and y that we can use to plot the curvilinear model. Note that the coefficients for this model are stored in curvilinear$coefficients with the first value being \(\beta_0\), the second value being \(\beta_1\), and the third value being \(\beta_2\).
plot(x, y, pch = 19, col = "blue", ylim = c(0, 5), xlab = "x", ylab = "y")
abline(straight_line, lwd = 2, col = "blue", lty = 2)
x_seq = seq(-0.5, 5.5, 0.01)
y_seq = curvilinear$coefficients[1] + curvilinear$coefficients[2] * x_seq + curvilinear$coefficients[3] * x_seq^2
lines(x_seq, y_seq, lwd = 2, col = "red", lty = 3)
legend(x = "topleft", legend = c("straight-line", "curvilinear"), col = c("blue", "red"), lty = c(2, 3), lwd = 2, bty = "n")
The resulting plot is shown here.
8.5: Using R for a Linear Regression Analysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
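As a quick cross-check on the hand calculations in Section 8.1, base R's confint() function returns confidence intervals for a model's coefficients; the short sketch below assumes the calcurve object created earlier in this section is still available, and the values noted in the comment are approximate.
confint(calcurve, level = 0.95)
# returns the 95% confidence intervals for the y-intercept and the slope,
# approximately (-0.60, 1.02) and (118.0, 123.4), matching 0.2 +/- 0.8 and 120.7 +/- 2.7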
8.6: Exercises
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/08%3A_Modeling_Data/8.06%3A_Exercises
1. The following data are for a series of external standards of Cd2+ buffered to a pH of 4.6.
(a) Use a linear regression analysis to determine the equation for the calibration curve and report confidence intervals for the slope and the y-intercept.
(b) Construct a plot of the residuals and comment on their significance.
At a pH of 3.7 the following data were recorded for the same set of external standards.
(c) How much more or less sensitive is this method at the lower pH?
(d) A single sample is buffered to a pH of 3.7 and analyzed for cadmium, yielding a signal of 66.3 nA. Report the concentration of Cd2+ in the sample and its 95% confidence interval.
The data in this problem are from Wojciechowski, M.; Balcerzak, J. Anal. Chim. Acta 1991, 249, 433–445.
2. Consider the following three data sets, each of which gives values of y for the same values of x.
(a) An unweighted linear regression analysis for the three data sets gives nearly identical results. To three significant figures, each data set has a slope of 0.500 and a y-intercept of 3.00. The standard deviations in the slope and the y-intercept are 0.118 and 1.125 for each data set. All three standard deviations about the regression are 1.24. Based on these results for a linear regression analysis, comment on the similarity of the data sets.
(b) Complete a linear regression analysis for each data set and verify that the results from part (a) are correct. Construct a residual plot for each data set. Do these plots change your conclusion from part (a)? Explain.
(c) Plot each data set along with the regression line and comment on your results.
(d) Data set 3 appears to contain an outlier. Remove the apparent outlier and reanalyze the data using a linear regression. Comment on your result.
(e) Briefly comment on the importance of visually examining your data.
These three data sets are taken from Anscombe, F. J. “Graphs in Statistical Analysis,” Amer. Statis. 1973, 27, 17-21.
3. Franke and co-workers evaluated a standard additions method for a voltammetric determination of Tl. A summary of their results is tabulated in the following table. Use a weighted linear regression to determine the standardization relationship for this data. The data in this problem are from Franke, J. P.; de Zeeuw, R. A.; Hakkert, R. Anal. Chem. 1978, 50, 1374–1380.
8.6: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
9.1: Response Surfaces
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.01%3A_Response_Surfaces
One of the most effective ways to think about an optimization is to visualize how a system’s response changes when we increase or decrease the levels of one or more of its factors. We call a plot of the system’s response as a function of the factor levels a response surface. The simplest response surface has one factor and is drawn in two dimensions by placing the responses on the y-axis and the factor’s levels on the x-axis. The calibration curve in is an example of a one-factor response surface. We also can define the response surface mathematically. The response surface in , for example, is\[A = 0.008 + 0.0896C_A \nonumber\]where A is the absorbance and CA is the analyte’s concentration in ppm.For a two-factor system, such as the quantitative analysis for vanadium described earlier, the response surface is a flat or curved plane in three dimensions. As shown in , we place the response on the z-axis and the factor levels on the x-axis and the y-axis. shows a pseudo-three dimensional wireframe plot for a system that obeys the equation\[R = 3.0 - 0.30A + 0.020AB \nonumber\]where R is the response, and A and B are the factors. We also can represent a two-factor response surface using the two-dimensional level plot in , which uses a color gradient to show the response on a two-dimensional grid, or using the two-dimensional contour plot in , which uses contour lines to display the response surface.The response surfaces in cover a limited range of factor levels (0 ≤ A ≤ 10, 0 ≤ B ≤ 10), but we can extend each to more positive or to more negative values because there are no constraints on the factors. Most response surfaces of interest to an analytical chemist have natural constraints imposed by the factors, or have practical limits set by the analyst. The response surface in , for example, has a natural constraint on its factor because the analyte’s concentration cannot be less than zero; that is, \(C_A \ge 0\).If we have an equation for the response surface, then it is relatively easy to find the optimum response. Unfortunately, we rarely know any useful details about the response surface. Instead, we must determine the response surface’s shape and locate its optimum response by running appropriate experiments. The focus of this chapter is on useful experimental methods for characterizing a response surface. These experimental methods are divided into two broad categories: searching methods, in which an algorithm guides a systematic search for the optimum response, and modeling methods, in which we use a theoretical model or an empirical model of the response surface to predict the optimum response.This page titled 9.1: Response Surfaces is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
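One way to visualize a two-factor response surface such as the one defined above is sketched here using base R; the factor levels span the same 0 to 10 range discussed in the text, and the plotting choices (grid spacing, viewing angles) are arbitrary.
A = seq(0, 10, 0.1)
B = seq(0, 10, 0.1)
response = outer(A, B, function(a, b) 3.0 - 0.30 * a + 0.020 * a * b)   # response at each (A, B) pair
contour(A, B, response, xlab = "A", ylab = "B")                          # two-dimensional contour plot
image(A, B, response, xlab = "A", ylab = "B")                            # two-dimensional level plot
persp(A, B, response, theta = 30, phi = 30, xlab = "A", ylab = "B", zlab = "response")   # wireframe surface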
9.2: Searching Algorithms
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.02%3A_Searching_Algorithms
 shows a portion of the South Dakota Badlands, a barren landscape that includes many narrow ridges formed through erosion. Suppose you wish to climb to the highest point on this ridge. Because the shortest path to the summit is not obvious, you might adopt the following simple rule: look around you and take one step in the direction that has the greatest change in elevation, and then repeat until no further step is possible. The route you follow is the result of a systematic search that uses a searching algorithm. Of course there are as many possible routes as there are starting points, three examples of which are shown in . Note that some routes do not reach the highest point—what we call the global optimum. Instead, many routes end at a local optimum from which further movement is impossible.
We can use a systematic searching algorithm to locate the optimum response. We begin by selecting an initial set of factor levels and measure the response. Next, we apply the rules of our searching algorithm to determine a new set of factor levels and measure its response, continuing this process until we reach an optimum response. Before we consider two common searching algorithms, let’s consider how we evaluate a searching algorithm.
A searching algorithm is characterized by its effectiveness and its efficiency. To be effective, a searching algorithm must find the response surface’s global optimum, or at least reach a point near the global optimum. A searching algorithm may fail to find the global optimum for several reasons, including a poorly designed algorithm, uncertainty in measuring the response, and the presence of local optima. Let’s consider each of these potential problems.
A poorly designed algorithm may prematurely end the search before it reaches the response surface’s global optimum. As shown in , when you climb a ridge that slopes up to the northeast, an algorithm is likely to fail if it limits your steps only to the north, south, east, or west. An algorithm that cannot respond to a change in the direction of steepest ascent is not an effective algorithm.
All measurements contain uncertainty, or noise, that affects our ability to characterize the underlying signal. When the noise is greater than the local change in the signal, then a searching algorithm is likely to end before it reaches the global optimum. , which provides a different view of , shows us that the relatively flat terrain leading up to the ridge is heavily weathered and very uneven. Because the variation in local height (the noise) exceeds the slope (the signal), our searching algorithm ends the first time we step up onto a less weathered local surface that is higher than the immediately surrounding surfaces.
Finally, a response surface may contain several local optima, only one of which is the global optimum. If we begin the search near a local optimum, our searching algorithm may never reach the global optimum. The ridge in , for example, has many peaks. Only those searches that begin at the far right will reach the highest point on the ridge. Ideally, a searching algorithm should reach the global optimum regardless of where it starts.
A searching algorithm always reaches an optimum. Our problem, of course, is that we do not know if it is the global optimum. One method for evaluating a searching algorithm’s effectiveness is to use several sets of initial factor levels, find the optimum response for each, and compare the results.
If we arrive at or near the same optimum response after starting from very different locations on the response surface, then we are more confident that it is the global optimum.
Efficiency is a searching algorithm’s second desirable characteristic. An efficient algorithm moves from the initial set of factor levels to the optimum response in as few steps as possible. In seeking the highest point on the ridge in , we can increase the rate at which we approach the optimum by taking larger steps. If the step size is too large, however, the difference between the experimental optimum and the true optimum may be unacceptably large. One solution is to adjust the step size during the search, using larger steps at the beginning and smaller steps as we approach the global optimum.
This page titled 9.2: Searching Algorithms is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
9.3: One-Factor-at-a-Time Optimizations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.03%3A_One-Factor-at-a-Time_Optimizations
A simple algorithm for optimizing the quantitative method for vanadium described earlier is to select initial concentrations for H2O2 and H2SO4 and measure the absorbance. Next, we optimize one reagent by increasing or decreasing its concentration—holding constant the second reagent’s concentration—until the absorbance decreases. We then vary the concentration of the second reagent—maintaining the first reagent’s optimum concentration—until we no longer see an increase in the absorbance. We can stop this process, which we call a one-factor-at-a-time optimization, after one cycle or repeat the steps until the absorbance reaches a maximum value or it exceeds an acceptable threshold value.
A one-factor-at-a-time optimization is consistent with a notion that to determine the influence of one factor we must hold constant all other factors. This is an effective, although not necessarily an efficient experimental design when the factors are independent [see Sharaf, M. A.; Illman, D. L.; Kowalski, B. R. Chemometrics, Wiley-Interscience: New York, 1986]. Two factors are independent when a change in the level of one factor does not influence the effect of a change in the other factor’s level. Table \(\PageIndex{1}\) provides an example of two independent factors.
If we hold factor B at level B1, changing factor A from level A1 to level A2 increases the response from 40 to 80, or a change in response, \(\Delta R\), of\[\Delta R = 80 - 40 = 40 \nonumber\]If we hold factor B at level B2, we find that we have the same change in response when the level of factor A changes from A1 to A2.\[\Delta R = 100 - 60 = 40 \nonumber\]We can see this independence visually if we plot the response as a function of factor A’s level, as shown in . The parallel lines show that the level of factor B does not influence factor A’s effect on the response.
Mathematically, two factors are independent if they do not appear in the same term in the equation that describes the response surface. , for example, shows the resulting pseudo-three-dimensional surface and a contour map for the equation \[R = 2.0 + 0.12 A + 0.48 B - 0.03A^2 - 0.03 B^2 \nonumber \]which describes a response surface with independent factors because no term in the equation includes both factor A and factor B.
The easiest way to follow the progress of a searching algorithm is to map its path on a contour plot of the response surface. Positions on the response surface are identified as (a, b) where a and b are the levels for factor A and for factor B. The contour plot in , for example, shows four one-factor-at-a-time optimizations of the response surface in . The effectiveness and efficiency of this algorithm when optimizing independent factors is clear—each trial reaches the optimum response at in a single cycle.
Unfortunately, factors often are not independent. Consider, for example, the data in Table \(\PageIndex{2}\), where a change in the level of factor B from level B1 to level B2 has a significant effect on the response when factor A is at level A1\[\Delta R = 60 - 20 = 40 \nonumber\]but no effect when factor A is at level A2.\[\Delta R = 80 - 80 = 0 \nonumber\] shows this dependent relationship between the two factors.
Factors that are dependent are said to interact and the equation for the response surface includes an interaction term that contains both factor A and factor B. The final term in this equation\[R = 5.5 + 1.5 A + 0.6 B - 0.15 A^2 - 0.0245 B^2 - 0.0857 AB \nonumber \]for example, accounts for the interaction between factor A and factor B.
shows the resulting pseudo-three-dimensional surface and a contour map for the response surface defined by this equation. The progress of a one-factor-at-a-time optimization for this response surface is shown in . Although the optimization for dependent factors is effective, it is less efficient than that for independent factors. In this case it takes four cycles to reach the optimum response of if we begin at.This page titled 9.3: One-Factor-at-a-Time Optimizations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
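A rough R sketch of a one-factor-at-a-time search on the dependent-factor response surface given above is shown here; the starting point, step size, number of cycles, and function names are arbitrary choices made only for illustration.
surface = function(a, b) 5.5 + 1.5 * a + 0.6 * b - 0.15 * a^2 - 0.0245 * b^2 - 0.0857 * a * b
climb = function(a, b, step, vary) {
  # step one factor up or down, holding the other fixed, until the response stops improving
  for (s in c(step, -step)) {
    repeat {
      a_try = if (vary == "a") a + s else a
      b_try = if (vary == "b") b + s else b
      if (surface(a_try, b_try) <= surface(a, b)) break
      a = a_try
      b = b_try
    }
  }
  c(a = a, b = b)
}
a = 0; b = 0; step = 0.2
for (cycle in 1:4) {                        # one cycle optimizes factor A and then factor B
  v = climb(a, b, step, "a"); a = unname(v["a"]); b = unname(v["b"])
  v = climb(a, b, step, "b"); a = unname(v["a"]); b = unname(v["b"])
}
c(a, b, surface(a, b))                      # final factor levels and response after four cycles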
9.4: Simplex Optimization
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.04%3A_Simplex_Optimization
One strategy for improving the efficiency of a searching algorithm is to change more than one factor at a time. A convenient way to accomplish this when there are two factors is to begin with three sets of initial factor levels arranged as the vertices of a triangle. After measuring the response for each set of factor levels, we identify the combination that gives the worst response and replace it with a new set of factor levels using a set of rules. This process continues until we reach the global optimum or until no further optimization is possible. The set of factor levels is called a simplex. In general, for k factors a simplex is a geometric figure with \(k + 1\) vertices [see Spendley, W.; Hext, G. R.; Himsworth, F. R. Technometrics 1962, 4, 441–461, and Deming, S. N.; Parker, L. R. CRC Crit. Rev. Anal. Chem. 1978, 7, 187–202]. Thus, for two factors the simplex is a triangle. For three factors the simplex is a tetrahedron.

To place the initial two-factor simplex on the response surface, we choose a starting point (a, b) for the first vertex and place the remaining two vertices at (a + sa, b) and (a + 0.5sa, b + 0.87sb), where sa and sb are step sizes for factor A and for factor B [see, for example, Long, D. E. Anal. Chim. Acta 1969, 46, 193–206]. The following set of rules moves the simplex across the response surface in search of the optimum response:

Rule 1. Rank the vertices from best (vb) to worst (vw).

Rule 2. Reject the worst vertex (vw) and replace it with a new vertex (vn) by reflecting the worst vertex through the midpoint of the remaining vertices. The new vertex’s factor levels are twice the average factor levels for the retained vertices minus the factor levels for the worst vertex. For a two-factor optimization, the equations are shown here where vs is the third vertex.\[a_{v_n} = 2 \left( \frac {a_{v_b} + a_{v_s}} {2} \right) - a_{v_w} \nonumber \]\[b_{v_n} = 2 \left( \frac {b_{v_b} + b_{v_s}} {2} \right) - b_{v_w} \nonumber \]

Rule 3. If the new vertex has the worst response, then return to the previous vertex and reject the vertex with the second worst response, vs, calculating the new vertex’s factor levels using Rule 2. This rule ensures that the simplex does not return to the previous simplex.

Rule 4. Boundary conditions are a useful way to limit the range of possible factor levels. For example, it may be necessary to limit a factor’s concentration for solubility reasons, or to limit the temperature because a reagent is thermally unstable. If the new vertex exceeds a boundary condition, then assign it the worst response and follow Rule 3.

Because the size of the simplex remains constant during the search, this algorithm is called a fixed-sized simplex optimization. The following example illustrates the application of these rules.

Find the optimum for the response surface described by the equation\[R = 5.5 + 1.5 A + 0.6 B - 0.15 A^2 - 0.0245 B^2 - 0.0857 AB \nonumber \]using the fixed-sized simplex searching algorithm. Use (0, 0) for the initial factor levels and set each factor’s step size to 1.00.

Solution

Letting a = 0, b = 0, sa = 1.00, and sb = 1.00 gives the vertices for the initial simplex as\[\text{vertex 1:} (a, b) = (0, 0) \nonumber\]\[\text{vertex 2:} (a + s_a, b) = (1.00, 0) \nonumber\]\[\text{vertex 3:} (a + 0.5s_a, b + 0.87s_b) = (0.50, 0.87) \nonumber\]The responses for the three vertices are shown in the following table, with \(v_1\) giving the worst response and \(v_2\) the best response. 
Following Rule 2, we reject \(v_1\) and replace it with a new vertex; thus\[a_{v_4} = 2 \left( \frac {1.00 + 0.50} {2} \right) - 0 = 1.50 \nonumber\]\[b_{v_4} = 2 \left( \frac {0 + 0.87} {2} \right) - 0 = 0.87 \nonumber\]The following table gives the vertices of the second simplex, with \(v_3\) giving the worst response and \(v_4\) the best response. Following Rule 2, we reject \(v_3\) and replace it with a new vertex; thus\[a_{v_5} = 2 \left( \frac {1.00 + 1.50} {2} \right) - 0.50 = 2.00 \nonumber\]\[b_{v_5} = 2 \left( \frac {0 + 0.87} {2} \right) - 0.87 = 0 \nonumber\]The following table gives the vertices of the third simplex. The calculation of the remaining vertices is left as an exercise. A contour plot of the response surface shows the progress of the complete optimization; after 29 steps the simplex begins to repeat itself, circling around the optimum response.

The size of the initial simplex ultimately limits the effectiveness and the efficiency of a fixed-size simplex searching algorithm. We can increase its efficiency by allowing the size of the simplex to expand or to contract in response to the rate at which we approach the optimum. For example, if we find that a new vertex is better than any of the vertices in the preceding simplex, then we expand the simplex further in this direction on the assumption that we are moving directly toward the optimum. Other conditions might cause us to contract the simplex—to make it smaller—to encourage the optimization to move in a different direction. We call this a variable-sized simplex optimization. Consult this chapter’s additional resources for further details of the variable-sized simplex optimization.

This page titled 9.4: Simplex Optimization is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
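The fixed-size simplex rules in this section are easy to express in code. The following is a minimal R sketch that applies Rules 1–3 to the response surface from the example above; the helper names, the fixed number of reflections, and the omission of boundary conditions (Rule 4) are simplifications for illustration.

# response surface from the example; v is a vector c(a, b)
response = function(v) {
  5.5 + 1.5 * v[1] + 0.6 * v[2] - 0.15 * v[1]^2 - 0.0245 * v[2]^2 - 0.0857 * v[1] * v[2]
}

# Rule 2: reflect the rejected vertex through the midpoint of the other two
reflect = function(simplex, reject) {
  2 * colMeans(simplex[-reject, , drop = FALSE]) - simplex[reject, ]
}

# initial simplex for a starting point of (0, 0) and step sizes of 1.00
simplex = rbind(c(0, 0), c(1.00, 0), c(0.50, 0.87))

for (i in 1:29) {
  R = apply(simplex, 1, response)
  worst = which.min(R)             # Rule 1: identify the worst vertex
  vnew = reflect(simplex, worst)   # Rule 2: reject and reflect the worst vertex
  if (response(vnew) < min(R)) {   # Rule 3: if the new vertex would be the worst,
    worst = order(R)[2]            # reject the second-worst vertex instead
    vnew = reflect(simplex, worst)
  }
  simplex[worst, ] = vnew
}
simplex   # the final simplex's vertices

If the sketch behaves like the example, the final vertices circle the optimum; printing simplex inside the loop traces the path of the search across the response surface.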
9.5: Mathematical Models of Response Surfaces
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.05%3A_Mathematical_Models_of_Response_Surfaces
A response surface is described mathematically by an equation that relates the response to its factors. If we measure the response for several combinations of factor levels, then we can use a regression analysis to build a model of the response surface. There are two broad categories of models that we can use for a regression analysis: theoretical models and empirical models.

A theoretical model is derived from the known chemical and physical relationships between the response and its factors. In spectrophotometry, for example, Beer’s law is a theoretical model that relates an analyte’s absorbance, A, to its concentration, \(C_A\)\[A = \epsilon b C_A \nonumber\]where \(\epsilon\) is the molar absorptivity and b is the pathlength of the electromagnetic radiation passing through the sample. A Beer’s law calibration curve, therefore, is a theoretical model of a response surface. In Chapter 8 we learned how to use linear regression to build a mathematical model based on a theoretical relationship.

In many cases the underlying theoretical relationship between the response and its factors is unknown. We still can develop a model of the response surface if we make some reasonable assumptions about the underlying relationship between the factors and the response. For example, if we believe that the factors A and B are independent and that each has only a first-order effect on the response, then the following equation is a suitable model.\[R = \beta_0 + \beta_a A + \beta_b B \nonumber\]where R is the response, A and B are the factor levels, and \(\beta_0\), \(\beta_a\), and \(\beta_b\) are adjustable parameters whose values are determined by a linear regression analysis. Other examples of equations include those for dependent factors\[R = \beta_0 + \beta_a A + \beta_b B + \beta_{ab} AB \nonumber\]and those with higher-order terms.\[R = \beta_0 + \beta_a A + \beta_b B + \beta_{aa} A^2 + \beta_{bb} B^2 \nonumber\]Each of these equations provides an empirical model of the response surface because it has no rigorous basis in a theoretical understanding of the relationship between the response and its factors. Although an empirical model may provide an excellent description of the response surface over a limited range of factor levels, it has no basis in theory and we cannot reliably extend it to unexplored parts of the response surface.

To build an empirical model we measure the response for at least two levels for each factor. For convenience we label these levels as high, \(H_f\), and low, \(L_f\), where f is the factor; thus \(H_A\) is the high level for factor A and \(L_B\) is the low level for factor B. If our empirical model contains more than one factor, then each factor’s high level is paired with both the high level and the low level for all other factors. In the same way, the low level for each factor is paired with the high level and the low level for all other factors. This pairing requires \(2^k\) experiments, where k is the number of factors. 
This experimental design is known as a \(2^k\) factorial design. Another system of notation is to use a plus sign (+) to indicate a factor’s high level and a minus sign (–) to indicate its low level. A \(2^2\) factorial design requires four experiments and allows for an empirical model with four variables. With four experiments, we can use a \(2^2\) factorial design to create an empirical model that includes four variables: an intercept, first-order effects in A and B, and an interaction term between A and B\[R = \beta_0 + \beta_a A + \beta_b B + \beta_{ab} AB \nonumber \]The following example walks us through the calculations needed to find this model.

Suppose we wish to optimize the yield of a synthesis and we expect that the amount of catalyst (factor A with units of mM) and the temperature (factor B with units of °C) are likely important factors. The response, \(R\), is the reaction's yield in mg. We run four experiments and obtain the following responses:

Determine an equation for a response surface that provides a suitable model for predicting the effect of the catalyst and temperature on the reaction's yield.

Solution

Examining the data we see from runs 1 & 2 and from runs 3 & 4 that increasing factor A while holding factor B constant results in an increase in the response; thus, we expect that higher concentrations of the catalyst have a favorable effect on the reaction's yield. We also see from runs 1 & 3 and from runs 2 & 4 that increasing factor B while holding factor A constant results in a decrease in the response; thus, we expect that an increase in temperature has an unfavorable effect on the reaction's yield. Finally, we also see from runs 1 & 2 and from runs 3 & 4 that \(\Delta R\) is more positive when factor B is at its higher level; thus, we expect that there is a positive interaction between factors A and B.

With four experiments, we are limited to a model that considers an intercept, first-order effects in A and B, and an interaction term between A and B\[R = \beta_0 + \beta_a A + \beta_b B + \beta_{ab} AB \nonumber \]We can work out values for this model's coefficients by solving the following set of simultaneous equations:\[\beta_0 + 15 \beta_a + 20 \beta_b + (15 \times 20) \beta_{ab} = \beta_0 + 15 \beta_a + 20 \beta_b + 300 \beta_{ab} = 145 \nonumber \]\[\beta_0 + 25 \beta_a + 20 \beta_b + (25 \times 20) \beta_{ab} = \beta_0 + 25 \beta_a + 20 \beta_b + 500 \beta_{ab} = 158 \nonumber \]\[\beta_0 + 15 \beta_a + 30 \beta_b + (15 \times 30) \beta_{ab} = \beta_0 + 15 \beta_a + 30 \beta_b + 450 \beta_{ab} = 135 \nonumber \]\[\beta_0 + 25 \beta_a + 30 \beta_b + (25 \times 30) \beta_{ab} = \beta_0 + 25 \beta_a + 30 \beta_b + 750 \beta_{ab} = 150 \nonumber \]To solve this set of equations, we subtract the first equation from the second equation and subtract the third equation from the fourth equation, leaving us with the following two equations\[10 \beta_a + 200 \beta_{ab} = 13 \nonumber \]\[10 \beta_a + 300 \beta_{ab} = 15 \nonumber \]Next, subtracting the first of these equations from the second gives\[100 \beta_{ab} = 2 \nonumber \]or \(\beta_{ab} = 0.02\). Substituting back gives\[10 \beta_{a} + 200 \times 0.02 = 13 \nonumber \]or \(\beta_a = 0.9\). Subtracting the equation for the first experiment from the equation for the third experiment gives\[10 \beta_b + 150 \beta_{ab} = -10 \nonumber \]Substituting in 0.02 for \(\beta_{ab}\) and solving gives \(\beta_b = -1.3\). Finally, substituting in our values for \(\beta_a\), \(\beta_b\), and \(\beta_{ab}\) into any of the first four equations gives \(\beta_0 = 151.5\). 
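This hand calculation is easy to check numerically. Here is a minimal R sketch that writes the same four simultaneous equations in matrix form and solves them with solve(); the object names are arbitrary.

# one row per experiment: intercept, A, B, and the A x B interaction
X = rbind(c(1, 15, 20, 15 * 20),
          c(1, 25, 20, 25 * 20),
          c(1, 15, 30, 15 * 30),
          c(1, 25, 30, 25 * 30))
yield = c(145, 158, 135, 150)

solve(X, yield)   # returns beta0, beta_a, beta_b, and beta_ab

Because the design provides exactly as many experiments as there are parameters, solve() returns the same values found above—151.5, 0.9, −1.3, and 0.02—with no degrees of freedom left over to estimate uncertainty; Section 9.6 shows how lm() handles designs with replication.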
Our final model is\[R = 151.5 + 0.9 A - 1.3 B + 0.02 AB \nonumber\]When we consider how to interpret our empirical equation for the response surface, we need to keep in mind several important limitations. We can address two of these limitations by using coded factor levels in which we assign \(+1\) for a high level and \(-1\) for a low level. Defining the upper limit and the lower limit of the factors as \(+1\) and \(-1\) does two things for us: it places the intercept at the center of our experiments, which avoids the concern of extrapolating our model; and it places all factors on a common scale, which makes it easier to compare the relative effects of the factors. Coding also makes it easier to determine the empirical model's equation when we complete calculations by hand.

To explore the effect of temperature on a reaction, we assign 30°C to a coded factor level of \(-1\) and assign a coded level of \(+1\) to a temperature of 50°C. What temperature corresponds to a coded level of \(-0.5\) and what is the coded level for a temperature of 60°C?

Solution

The difference between \(-1\) and \(+1\) is 2, and the difference between 30°C and 50°C is 20°C; thus, each unit in coded form is equivalent to 10°C in uncoded form. With this information, it is easy to create a simple scale between the coded and the uncoded values. A temperature of 35°C corresponds to a coded level of \(-0.5\) and a coded level of \(+2\) corresponds to a temperature of 60°C.

As we see in the following example, coded factor levels simplify the calculations for an empirical model.

Rework Example \(\PageIndex{1}\) using coded factor levels.

Solution

The table below shows the original factor levels (A and B), their corresponding coded factor levels (A* and B*), and A*B*, which is the empirical model's interaction term. The empirical equation has four unknowns—the four beta terms—and Table \(\PageIndex{1}\) describes the four experiments. We have just enough information to calculate values for \(\beta_0\), \(\beta_a\), \(\beta_b\), and \(\beta_{ab}\). When working with the coded factor levels, the values of these parameters are easy to calculate using the following equations, where n is the number of runs.\[\beta_{0} \approx b_{0}=\frac{1}{n} \sum_{i=1}^{n} R_{i} \nonumber \]\[\beta_{a} \approx b_{a}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} R_{i} \nonumber \]\[\beta_{b} \approx b_{b}=\frac{1}{n} \sum_{i=1}^{n} B^*_{i} R_{i} \nonumber \]\[\beta_{ab} \approx b_{ab}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} B^*_{i} R_{i} \nonumber \]Solving for the estimated parameters using the data in Table \(\PageIndex{1}\)\[b_{0}=\frac{145 + 158 + 135 + 150}{4} = 147 \nonumber\]\[b_{a}=\frac{-145 + 158 - 135 + 150}{4} = 7 \nonumber\]\[b_{b}=\frac{-145 - 158 + 135 + 150}{4} = -4.5 \nonumber\]\[b_{ab}=\frac{145 - 158 - 135 + 150}{4} = 0.5 \nonumber\]leaves us with the coded empirical model for the response surface.\[R = 147 + 7 A^* - 4.5 B^* + 0.5 A^* B^* \nonumber \]

Do you see why the equations for calculating \(b_0\), \(b_a\), \(b_b\), and \(b_{ab}\) work? Take the equation for \(b_a\) as an example\[\beta_{a} \approx b_{a}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} R_{i} \nonumber \]where\[b_{a}=\frac{-145 + 158 - 135 + 150}{4} = 7 \nonumber\]The first and the third terms in this equation give the response when \(A^*\) is at its low level, and the second and fourth terms in this equation give the response when \(A^*\) is at its high level. 
In the two terms where \(A^*\) is at its low level, \(B^*\) is at both its low level (first term) and its high level (third term), and in the two terms where \(A^*\) is at its high level, \(B^*\) is at both its low level (second term) and its high level (fourth term). As a result, the contribution of \(B^*\) is removed from the calculation. The same holds true for the effect of \(A^* B^*\), although this is left for you to confirm.

We can transform the coded model into a non-coded model by noting that \(A = 20 + 5A^*\) and that \(B = 25 + 5B^*\), solving for \(A^*\) and \(B^*\) to obtain \(A^* = 0.2 A - 4\) and \(B^* = 0.2 B - 5\), and substituting into the coded model and simplifying.\[R = 147 + 7 (0.2A - 4) - 4.5 (0.2B - 5) + 0.5(0.2A - 4)(0.2B - 5) \nonumber\]\[R = 147 + 1.4A - 28 - 0.9B + 22.5 + 0.02AB - 0.5A - 0.4B + 10 \nonumber\]\[R = 151.5 + 0.9A - 1.3B + 0.02AB \nonumber \]Note that this is the same equation that we derived in Example \(\PageIndex{1}\) using uncoded values for the factors.

Although we can convert this coded model into its uncoded form, there is no need to do so. If we want to know the response for a new set of factor levels, we just convert them into coded form and calculate the response. For example, if A is 23 and B is 22, then \(A^* = 0.2 \times 23 - 4 = 0.6\) and \(B^* = 0.2 \times 22 - 5 = -0.6\) and\[R = 147 + 7 \times 0.6 - 4.5 \times (-0.6) + 0.5 \times 0.6 \times (-0.6) = 153.72 \approx 154 \text{ mg} \nonumber \]

We can extend this approach to any number of factors. For a system with three factors—A, B, and C—we can use a \(2^3\) factorial design to determine the parameters in the following empirical model\[R = \beta_0 + \beta_a A + \beta_b B + \beta_c C + \beta_{ab} AB + \beta_{ac} AC + \beta_{bc} BC + \beta_{abc} ABC \nonumber \]where A, B, and C are the factor levels. The terms \(\beta_0\), \(\beta_a\), \(\beta_b\), \(\beta_c\), \(\beta_{ab}\), \(\beta_{ac}\), \(\beta_{bc}\), and \(\beta_{abc}\) are estimated using the following eight equations.\[\beta_{0} \approx b_{0}=\frac{1}{n} \sum_{i=1}^{n} R_{i} \nonumber \]\[\beta_{a} \approx b_{a}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} R_{i} \nonumber \]\[\beta_{b} \approx b_{b}=\frac{1}{n} \sum_{i=1}^{n} B^*_{i} R_{i} \nonumber \]\[\beta_{ab} \approx b_{ab}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} B^*_{i} R_{i} \nonumber \]\[\beta_{c} \approx b_{c}=\frac{1}{n} \sum_{i=1}^{n} C^*_{i} R_{i} \nonumber \]\[\beta_{ac} \approx b_{ac}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} C^*_{i} R_{i} \nonumber \]\[\beta_{bc} \approx b_{bc}=\frac{1}{n} \sum_{i=1}^{n} B^*_{i} C^*_{i} R_{i} \nonumber \]\[\beta_{abc} \approx b_{abc}=\frac{1}{n} \sum_{i=1}^{n} A^*_{i} B^*_{i} C^*_{i} R_{i} \nonumber \]The following table lists the uncoded factor levels, the coded factor levels, and the responses for a \(2^3\) factorial design.

Determine the coded empirical model for the response surface based on the following equation.\[R = \beta_0 + \beta_a A + \beta_b B + \beta_c C + \beta_{ab} AB + \beta_{ac} AC + \beta_{bc} BC + \beta_{abc} ABC \nonumber \]What is the expected response when A is 10, B is 15, and C is 50?

Solution

The equation for the empirical model has eight unknowns—the eight beta terms—and the table above describes eight experiments. 
We have just enough information to calculate values for \(\beta_0\), \(\beta_a\), \(\beta_b\), \(\beta_c\), \(\beta_{ab}\), \(\beta_{ac}\), \(\beta_{bc}\), and \(\beta_{abc}\); these values are\[b_{0}=\frac{1}{8} \times(137.25+54.75+73.75+30.25+61.75+30.25+41.25+18.75 )=56.0 \nonumber\]\[b_{a}=\frac{1}{8} \times(137.25+54.75+73.75+30.25-61.75-30.25-41.25-18.75 )=18.0 \nonumber\]\[b_{b}=\frac{1}{8} \times(137.25+54.75-73.75-30.25+61.75+30.25-41.25-18.75 )=15.0 \nonumber\]\[b_{c}=\frac{1}{8} \times(137.25-54.75+73.75-30.25+61.75-30.25+41.25-18.75 )=22.5 \nonumber\]\[b_{ab}=\frac{1}{8} \times(137.25+54.75-73.75-30.25-61.75-30.25+41.25+18.75 )=7.0 \nonumber\]\[b_{ac}=\frac{1}{8} \times(137.25-54.75+73.75-30.25-61.75+30.25-41.25+18.75 )=9.0 \nonumber\]\[b_{bc}=\frac{1}{8} \times(137.25-54.75-73.75+30.25+61.75-30.25-41.25+18.75 )=6.0 \nonumber\]\[b_{abc}=\frac{1}{8} \times(137.25-54.75-73.75+30.25-61.75+30.25+41.25-18.75 )=3.75 \nonumber\]The coded empirical model, therefore, is\[R = 56.0 + 18.0 A^* + 15.0 B^* + 22.5 C^* + 7.0 A^* B^* + 9.0 A^* C^* + 6.0 B^* C^* + 3.75 A^* B^* C^* \nonumber\]To find the response when A is 10, B is 15, and C is 50, we first convert these values into their coded form: A* is 0, B* is \(-0.5\), and C* is \(+1.33\). Substituting back into the empirical model gives a response of\[R = 56.0 + 18.0 \times 0 + 15.0 \times (-0.5) + 22.5 \times (+1.33) + 7.0 \times 0 \times (-0.5) + 9.0 \times 0 \times (+1.33) + 6.0 \times (-0.5) \times (+1.33) + 3.75 \times 0 \times (-0.5) \times (+1.33) = 74.435 \approx 74.4 \nonumber\]

A \(2^k\) factorial design can model only a factor’s first-order effect, including first-order interactions, on the response. A \(2^2\) factorial design, for example, includes each factor’s first-order effect (\(\beta_a\) and \(\beta_b\)) and a first-order interaction between the factors (\(\beta_{ab}\)). A \(2^k\) factorial design cannot model higher-order effects because there is insufficient information. Here is a simple example that illustrates the problem. Suppose we need to model a system in which the response is a function of a single factor, A, and that we run an experiment using a \(2^1\) factorial design. The only empirical model we can fit to the data is a straight line.\[R = \beta_0 + \beta_a A \nonumber\]If the actual response is a curve instead of a straight-line, then the empirical model is in error. To see evidence of curvature we must measure the response for at least three levels for each factor. We can fit a \(3^1\) factorial design to an empirical model that includes second-order factor effects.\[R = \beta_0 + \beta_a A + \beta_{aa} A^2 \nonumber\]In general, an n-level factorial design can model single-factor and interaction terms up to the (n – 1)th order.

We can judge the effectiveness of a first-order empirical model by measuring the response at the center of the factorial design. If there are no higher-order effects, then the average response of the trials in a \(2^k\) factorial design should equal the measured response at the center of the factorial design. To account for the influence of random errors we make several determinations of the response at the center of the factorial design and establish a suitable confidence interval. 
If the difference between the two responses is significant, then a first-order empirical model probably is inappropriate. One of the advantages of working with a coded empirical model is that b0 is the average response of the \(2^k\) trials in a \(2^k\) factorial design.

One method for the quantitative analysis of vanadium is to acidify the solution by adding H2SO4 and oxidizing the vanadium with H2O2 to form a red-brown soluble compound with the general formula (VO)2(SO4)3. Palasota and Deming studied the effect of the relative amounts of H2SO4 and H2O2 on the solution’s absorbance, reporting the following results for a \(2^2\) factorial design [Palasota, J. A.; Deming, S. N. J. Chem. Educ. 1992, 69, 560–563]. Four replicate measurements at the center of the factorial design give absorbances of 0.334, 0.336, 0.346, and 0.323. Determine if a first-order empirical model is appropriate for this system. Use a 90% confidence interval when accounting for the effect of random error.

Solution

We begin by determining the confidence interval for the response at the center of the factorial design. The mean response is 0.335 with a standard deviation of 0.0094, which gives a 90% confidence interval of\[\mu=\overline{X} \pm \frac{t s}{\sqrt{n}}=0.335 \pm \frac{(2.35)(0.0094)}{\sqrt{4}}=0.335 \pm 0.011 \nonumber\]The average response, \(\overline{R}\), from the factorial design is\[\overline{R}=\frac{0.330+0.359+0.293+0.420}{4}=0.350 \nonumber\]Because \(\overline{R}\) exceeds the confidence interval’s upper limit of 0.346, we can reasonably assume that a \(2^2\) factorial design and a first-order empirical model are inappropriate for this system at the 90% confidence level.

One limitation to a \(3^k\) factorial design, which would allow us to use an empirical model with second-order effects, is the number of trials we need to run. A \(3^2\) factorial design, for example, requires 9 trials. This number increases to 27 for three factors and to 81 for four factors. A more efficient experimental design for a system that contains more than two factors is a central composite design. The central composite design consists of a \(2^k\) factorial design, which provides data to estimate each factor’s first-order effect and interactions between the factors, and a star design that has \(2k + 1\) points, which provides data to estimate second-order effects. Although a central composite design for two factors requires the same number of trials, nine, as a \(3^2\) factorial design, it requires only 15 trials when using three factors and 25 trials when using four factors. See this chapter’s additional resources for details about central composite designs.

This page titled 9.5: Mathematical Models of Response Surfaces is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
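If we want to repeat the center-point check from the vanadium example in R, the calculation is short. The following is a minimal sketch that uses the absorbance values given above; qt() supplies the value of t for a 90% confidence interval with three degrees of freedom, and the object names are arbitrary.

center = c(0.334, 0.336, 0.346, 0.323)      # replicates at the center of the design
factorial = c(0.330, 0.359, 0.293, 0.420)   # responses from the 2^2 factorial design

t_crit = qt(0.95, df = length(center) - 1)  # two-tailed alpha = 0.10
mean(center) + c(-1, 1) * t_crit * sd(center) / sqrt(length(center))  # about 0.324 to 0.346
mean(factorial)                             # about 0.350, which falls outside the interval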
9.6: Using R to Model a Response Surface (Multiple Regression)
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.06%3AUsing_R_to_Model_a_Response_Surface
The calculations for determining an empirical model of a response surface using a \(2^k\) factorial design, as outlined in Section 9.5, are relatively easy to complete for a small number of factors and for experimental designs without replication where the number of experiments is equal to the number of parameters in the model. If we wish to work with more factors, if we wish to explore other experimental designs, and if we wish to build replication into the experimental design so that we can better evaluate our empirical model, then we need to do so by building a regression model, as we did earlier in Chapter 8.

To illustrate how we can use R to create an empirical model, let's use data from an experiment exploring how to optimize a Grignard reaction leading to the synthesis of 1-benzylcyclopentan-1-ol [Bouzidi, N.; Gozzi, C. J. Chem. Educ. 2008, 85, 1544–1547]. In this study, students begin by studying the effect of six possible factors on the reaction's yield: the volume of diethyl ether used to prepare a solution of benzyl chloride, \(x_1\), the time over which benzyl chloride is added to the reaction mixture, \(x_2\), the stirring time used to prepare the benzyl magnesium chloride, \(x_3\), the relative excess of benzyl chloride to cyclopentanone, \(x_4\), the relative excess of magnesium turnings to benzyl chloride, \(x_5\), and the reaction time, \(x_6\).

With six factors to consider, a full \(2^k\) factorial design requires 64 experiments, which is labor intensive. Instead, the students begin with a screening study that uses eight experiments to model only the first-order effects of the six factors, as outlined in the following two tables.

To carry out the calculations in R we first create vectors for the coded factor levels and the responses.

x1 = c(1,-1,-1,1,-1,1,1,-1)
x2 = c(1,1,-1,-1,1,-1,1,-1)
x3 = c(1,1,1,-1,-1,1,-1,-1)
x4 = c(-1,1,1,1,-1,-1,1,-1)
x5 = c(1,-1,1,1,1,-1,-1,-1)
x6 = c(-1,1,-1,1,1,1,-1,-1)
yield = c(72, 33, 29, 74, 31, 52, 47, 27)

Next, we use the lm() function to build a linear regression model that includes just the first-order effects of the factors (see Chapter 8.5 to review the syntax for this function), and the summary() function to review the resulting model.

screening = lm(yield ~ x1 + x2 + x3 + x4 + x5 + x6)
summary(screening)

Call:
lm(formula = yield ~ x1 + x2 + x3 + x4 + x5 + x6)

Residuals:
1 2 3 4 5 6 7 8
5.875 5.875 -5.875 5.875 -5.875 -5.875 -5.875 5.875

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   45.625      5.875   7.766   0.0815 .
x1            15.625      5.875   2.660   0.2290
x2             0.125      5.875   0.021   0.9865
x3             0.875      5.875   0.149   0.9059
x4             0.125      5.875   0.021   0.9865
x5             5.875      5.875   1.000   0.5000
x6             1.875      5.875   0.319   0.8033
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 16.62 on 1 degrees of freedom
Multiple R-squared: 0.8913, Adjusted R-squared: 0.239
F-statistic: 1.366 on 6 and 1 DF, p-value: 0.5749

Because we have one more experiment than there are variables in our empirical model, the summary provides some information on the significance of the model's parameters; however, with just one degree of freedom this information is not really reliable. 
In addition to the intercept, the three factors with the largest coefficients are the volume of diethyl ether, \(x_1\), the relative excess of magnesium, \(x_5\), and the reaction time, \(x_6\).

Having identified three factors for further investigation, the students use a \(2^3\) factorial design to explore interactions between these three factors using the experimental design in the following table (see Table \(\PageIndex{1}\) for the actual factor levels). As before, we create vectors for our factors and the response and then use the lm() and the summary() functions to complete and evaluate the resulting empirical model.

x1 = c(-1,1,-1,1,-1,1,-1,1)
x5 = c(-1,-1,1,1,-1,-1,1,1)
x6 = c(-1,-1,-1,-1,1,1,1,1)
yield = c(28.5,55.5,38,68,49,66,31.5,72)
fact23 = lm(yield ~ x1 * x5 * x6)
summary(fact23)

Call:
lm(formula = yield ~ x1 * x5 * x6)

Residuals:
ALL 8 residuals are 0: no residual degrees of freedom!

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  51.0625         NA      NA       NA
x1           14.3125         NA      NA       NA
x5            1.3125         NA      NA       NA
x6            3.5625         NA      NA       NA
x1:x5         3.3125         NA      NA       NA
x1:x6         0.0625         NA      NA       NA
x5:x6        -4.1875         NA      NA       NA
x1:x5:x6      2.5625         NA      NA       NA

Residual standard error: NaN on 0 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: NaN
F-statistic: NaN on 7 and 0 DF, p-value: NA

With eight experiments and eight variables in the empirical model, we do not have any ability to evaluate the model statistically. Of the three first-order effects, we see that the volume of diethyl ether, \(x_1\), and reaction time, \(x_6\), are more important than the relative excess of magnesium, \(x_5\). We also see that the interaction between \(x_1\) and \(x_5\) is positive (high values for both favor an increased yield) and that the interaction between \(x_5\) and \(x_6\) is negative (yields improve when one factor is high and the other is low).

Finally, the students use a central composite model—which allows for adding second-order effects and curvature in the response surface—to study the effect of the volume of diethyl ether, \(x_1\), and reaction time, \(x_6\), on the percent yield. The relative excess of magnesium, \(x_5\), was set at its high level for this study because this provides for greater percent yields (compare the results for runs 4 and 6 to the results for runs 3 and 5 in Table \(\PageIndex{3}\)). The following table provides the experimental design.

As before, we create vectors for our factors and the response, and then use the lm() and the summary() functions to complete and evaluate the resulting empirical model.

x1 = c(-1,1,-1,1,-1.414,1.414,0,0,0,0,0,0)
x6 = c(-1,-1,1,1,0,0,-1.414,1.414,0,0,0,0)
yield = c(39,66.5,22,72.5,10.5,72.5,38,70,59,57,54.5,63)
centcomp = lm(yield ~ x1 * x6 + I(x1^2) + I(x6^2))
summary(centcomp)

Call:
lm(formula = yield ~ x1 * x6 + I(x1^2) + I(x6^2))

Residuals:
Min 1Q Median 3Q Max
-11.0724 -4.0794 -0.3938 5.2056 9.3695

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   58.375      4.360  13.389 1.07e-05 ***
x1            20.712      3.083   6.718 0.000529 ***
x6             4.282      3.083   1.389 0.214267
I(x1^2)       -7.876      3.447  -2.285 0.062398 .
I(x6^2)       -1.625      3.447  -0.471 0.654130
x1:x6          5.750      4.360   1.319 0.235317
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 8.72 on 6 degrees of freedom
Multiple R-squared: 0.9, Adjusted R-squared: 0.8167
F-statistic: 10.8 on 5 and 6 DF, p-value: 0.005835

With 12 experiments and just six variables, our model has sufficient degrees of freedom to suggest that it provides a reasonable picture of how the reaction time and the volume of diethyl ether affect the reaction's yield, even if the residual errors in the responses range from a minimum of -11.07 to a maximum of +9.37. The middle 50% of residual errors range between -4.1 and +5.2 with a median residual error of -0.4. We can compare the actual experimental yields to the yields predicted by the model by combining them into a data frame.

centcomp_results = data.frame(yield, centcomp$fitted.values, yield - centcomp$fitted.values)
colnames(centcomp_results) = c("expt yield", "pred yield", "residual error")
centcomp_results

expt yield pred yield residual error
1 39.0 29.63046 9.3695385
2 66.5 59.55372 6.9462836
3 22.0 26.69375 -4.6937546
4 72.5 79.61701 -7.1170095
5 10.5 13.34036 -2.8403635
6 72.5 71.91285 0.5871540
7 38.0 49.07236 -11.0723566
8 70.0 61.18085 8.8191471
9 59.0 58.37466 0.6253402
10 57.0 58.37466 -1.3746598
11 54.5 58.37466 -3.8746598
12 63.0 58.37466 4.6253402

The plot3D package provides several functions that we can use to visualize a response surface defined by two factors. Here we consider three functions: one for drawing a two-dimensional contour plot of the response surface, one for drawing a three-dimensional surface plot of the response, and one for plotting a three-dimensional scatter plot of the responses. To begin, we use the library() function to make the package available to us (note: you may need to first install the plot3D package; see Chapter 1 for details on how to do this).

library(plot3D)

Let's begin by creating a two-dimensional contour plot of our response surface that places the volume of diethyl ether, \(x_1\), on the x-axis and the reaction time, \(x_6\), on the y-axis, and that uses calculated responses from the model to draw the contour lines. First, we create vectors with values for the x-axis and the y-axis

x1_axis = seq(-1.5, 1.5, 0.1)
x6_axis = seq(-1.5, 1.5, 0.1)

Next, we create a function that uses our empirical model to calculate the response for every combination of x1_axis and x6_axis

response = function(x, y){coef(centcomp)[1] + coef(centcomp)[2]*x + coef(centcomp)[3]*y + coef(centcomp)[4]*x^2 + coef(centcomp)[5]*y^2 + coef(centcomp)[6]*x*y}

where coef(centcomp)[i] is used to extract the ith coefficient from our empirical model. Now we use R's outer() function to calculate the response for every combination of the variables x1_axis and x6_axis

z_axis = outer(X = x1_axis, Y = x6_axis, response)

Finally, we use the contour2D() function to create the contour plot.

contour2D(x = x1_axis, y = x6_axis, z = z_axis, xlab = "x1: volume", ylab = "x6: time", clab = "yield")

Next, let's create a three-dimensional surface plot of our response surface that places the volume of diethyl ether, \(x_1\), on the x-axis, the reaction time, \(x_6\), on the y-axis, and the calculated responses from the model on the z-axis. 
For this, we use the persp3D() function

persp3D(x = x1_axis, y = x6_axis, z = z_axis, ticktype = "detailed", phi = 15, theta = 25, xlab = "x1: volume", ylab = "x6: time", zlab = "yield", clab = "yield", contour = TRUE, cex.axis = 0.75, cex.lab = 0.75)

where phi and theta adjust the angle at which we view the response surface—you will have to play with these values to create a plot that is pleasing to look at—and ticktype controls how much information is displayed on the axes. The cex.axis and cex.lab arguments adjust the size of the text displayed on the axes, and contour = TRUE places a contour plot on the figure's bottom side.

Finally, let's use the type = "h" option to overlay a scatterplot of the data used to build the empirical model on top of the three-dimensional surface plot.

scatter3D(x = x1, y = x6, z = yield, add = TRUE, type = "h", pch = 19, col = "black", lwd = 2, colkey = FALSE)

The resulting plot overlays the data from Table \(\PageIndex{4}\) on the response surface. Although the general shape of the response surface is consistent with the underlying data, there is sufficient experimental uncertainty in the results of the four replicate experiments used to create this empirical model, as shown by the standard deviation for runs 9–12, to explain why some of the predicted yields have large errors.

sd(yield[9:12])
3.591077

This page titled 9.6: Using R to Model a Response Surface (Multiple Regression) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
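One convenience worth noting: rather than extracting coefficients by hand, we can ask R to evaluate a fitted model directly with the predict() function. The following minimal sketch assumes the centcomp model from this section is still in the workspace; the grid of new coded factor levels is an arbitrary illustration.

# predicted yields for new coded factor levels; expand.grid() builds
# every combination of the values supplied for x1 and x6
new_levels = expand.grid(x1 = seq(-1, 1, 0.5), x6 = seq(-1, 1, 0.5))
new_levels$yield = predict(centcomp, newdata = new_levels)
head(new_levels)

Because the model's formula includes I(x1^2), I(x6^2), and x1:x6, predict() builds these terms for us from the x1 and x6 columns in new_levels.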
9.7: Exercises
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/09%3A_Gathering_Data/9.07%3AExercises
1. For each of the following equations determine the optimum response using a one-factor-at-a-time searching algorithm. Begin the search by first changing factor A, using a step size of 1 for both factors. The boundary conditions for each response surface are 0 ≤ A ≤ 10 and 0 ≤ B ≤ 10. Continue the search through as many cycles as necessary until you find the optimum response. Compare your optimum response for each equation to the true optimum. Note: These equations are from Deming, S. N.; Morgan, S. L. Experimental Design: A Chemometric Approach, Elsevier: Amsterdam, 1987, and pseudo-three-dimensional plots of the response surfaces can be found in their textbook.

(a) R = 1.68 + 0.24A + 0.56B – 0.04A^2 – 0.04B^2

(b) R = 4.0 – 0.4A + 0.08AB

(c) R = 3.264 + 1.537A + 0.5664B – 0.1505A^2 – 0.02734B^2 – 0.05785AB, with μopt = (3.91, 6.22)

2. Use a fixed-sized simplex searching algorithm to find the optimum response for the equation in Problem 1c. For the first simplex, choose a starting vertex and use step sizes of one. Compare your optimum response to the true optimum.

3. A \(2^k\) factorial design was used to determine the equation for the response surface in Problem 1b. The uncoded levels, coded levels, and the responses are shown in the following table. Determine the uncoded equation for the response surface.

4. Koscielniak and Parczewski investigated the influence of Al on the determination of Ca by atomic absorption spectrophotometry using the \(2^k\) factorial design shown in the following table [data from Koscielniak, P.; Parczewski, A. Anal. Chim. Acta 1983, 153, 111–119].

(a) Determine the uncoded equation for the response surface.

(b) If you wish to analyze a sample that is 6.0 ppm Ca2+, what is the maximum concentration of Al3+ that can be present if the error in the response must be less than 5.0%?

5. Strange [Strange, R. S. J. Chem. Educ. 1990, 67, 113–115] studied a chemical reaction using the following \(2^3\) factorial design.

(a) Determine the coded equation for this data.

(b) If \(\beta\) terms of less than \(\pm 1\) are insignificant, what main effects and what interaction terms in the coded equation are important? Write down this simpler form for the coded equation.

(c) Explain why the coded equation for this data can not be transformed into an uncoded form.

(d) Which is the better catalyst, A or B?

(e) What is the yield if the temperature is set to 125°C, the concentration of the reactant is 0.45 M, and we use the appropriate catalyst?

6. Pharmaceutical tablets coated with lactose often develop a brown discoloration. The primary factors that affect the discoloration are temperature, relative humidity, and the presence of a base acting as a catalyst. The following data have been reported for a \(2^3\) factorial design [Armstrong, N. A.; James, K. C. Pharmaceutical Experimental Design and Interpretation, Taylor and Francis: London, 1996 as cited in Gonzalez, A. G. Anal. Chim. Acta 1998, 360, 227–241].

(a) Determine the coded equation for this data.

(b) If \(\beta\) terms of less than 0.5 are insignificant, what main effects and what interaction terms in the coded equation are important? Write down this simpler form for the coded equation.

7. The following data for a \(2^3\) factorial design were collected during a study of the effect of temperature, pressure, and residence time on the % yield of a reaction [Akhnazarova, S.; Kafarov, V. Experimental Optimization in Chemistry and Chemical Engineering, MIR Publishers: Moscow, 1982 as cited in Gonzalez, A. G. Anal. Chim. 
Acta 1998, 360, 227–241].

(a) Determine the coded equation for this data.

(b) If \(\beta\) terms of less than 0.5 are insignificant, what main effects and what interaction terms in the coded equation are important? Write down this simpler form for the coded equation.

(c) Three runs at the center of the factorial design—a temperature of 150°C, a pressure of 0.4 MPa, and a residence time of 15 min—give percent yields of 8%, 9%, and 8.8%. Determine if a first-order empirical model is appropriate for this system at \(\alpha = 0.05\).

8. Duarte and colleagues used a factorial design to optimize a flow-injection analysis method for determining penicillin [Duarte, M. M. M. B.; de O. Netro, G.; Kubota, L. T.; Filho, J. L. L.; Pimentel, M. F.; Lima, F.; Lins, V. Anal. Chim. Acta 1997, 350, 353–357]. Three factors were studied: reactor length, carrier flow rate, and sample volume, with the high and low values summarized in the following table. The authors determined the optimum response using two criteria: the greatest sensitivity, as determined by the change in potential for the potentiometric detector, and the largest sampling rate. The following table summarizes their optimization results.

(a) Determine the coded equation for the response surface where \(\Delta E\) is the response.

(b) Determine the coded equation for the response surface where sample/h is the response.

(c) Based on the coded equations in (a) and in (b), do conditions that favor sensitivity also improve the sampling rate?

(d) What conditions would you choose if your goal is to optimize both sensitivity and sampling rate?

9. Here is a challenge! McMinn, Eatherton, and Hill investigated the effect of five factors for optimizing an H2-atmosphere flame ionization detector using a \(2^5\) factorial design [McMinn, D. G.; Eatherton, R. L.; Hill, H. H. Anal. Chem. 1984, 56, 1293–1298]. The factors and their levels were as follows. The coded (“+” = +1, “–” = –1) factor levels and responses, R, for the 32 experiments are shown in the following table.

(a) Determine the coded equation for this response surface, ignoring \(\beta\) terms less than \(\pm 0.03\).

(b) A simplex optimization of this system finds optimal values for the factors of A = 2278 mL/min, B = 9.90 ppm, C = 260.6 mL/min, and D = 1.71. The value of E was maintained at its high level. Are these values consistent with your analysis of the factorial design?

10. A good empirical model provides an accurate picture of the response surface over the range of factor levels within the experimental design. The same model, however, may yield an inaccurate prediction for the response at other factor levels. For this reason, an empirical model is tested before it is extrapolated to conditions other than those used in determining the model. For example, Palasota and Deming studied the effect of the relative amounts of H2SO4 and H2O2 on the absorbance of solutions of vanadium using the following central composite design [data from Palasota, J. A.; Deming, S. N. J. Chem. Educ. 1992, 69, 560–563].

The reaction of H2SO4 and H2O2 generates a red-brown solution whose absorbance is measured at a wavelength of 450 nm. A regression analysis on their data yields the following uncoded equation for the response (absorbance \(\times\) 1000).\[R = 835.90 - 36.82X_1 - 21.34 X_2 + 0.52 X_1^2 + 0.15 X_2^2 + 0.98 X_1 X_2 \nonumber \]where X1 is the drops of H2O2, and X2 is the drops of H2SO4. 
Calculate the predicted absorbances for 10 drops of H2O2 and 0 drops of H2SO4, 0 drops of H2O2 and 10 drops of H2SO4, and for 0 drops of each reagent. Are these results reasonable? Explain. What does your answer tell you about this empirical model?

This page titled 9.7: Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
What is Chemometrics and Why Study it?
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)/00%3A_Front_Matter/What_is_Chemometrics_and_Why_Study_it%3F
The definition of chemometrics is evident in its name, where chemo– means chemical and –metrics means measurement; thus, chemometrics is the study of chemical (and biochemical) measurements and is a branch of analytical chemistry. Examples of chemometric applications appear throughout this textbook. These topics, and others, are the focus of this textbook.

Why chemometrics is important becomes clear when we consider a simple analytical problem: How do we determine the concentration of copper in a sample, and how and why has the analytical method used for this analysis changed over time?

Prior to the 1950s, gravimetry and titrimetry were the most common analytical methods for determining the concentration of copper in a variety of samples. Both of these methods rely on simple stoichiometric relationships. In a gravimetric analysis, for example, we bring copper into solution as Cu2+(aq), precipitate it as Cu(OH)2(s)\[\text{Cu}^{2+}(aq) + 2 \text{OH}^{–}(aq) \rightarrow \text{Cu(OH)}_{2}(s) \nonumber\]and isolate it as CuO(s) after heating it to a high temperature.\[\text{Cu(OH)}_{2}(s) \rightarrow \text{CuO}(s) + \text{H}_{2}\text{O}(l) \nonumber\]We then use the mass of CuO(s) to determine the amount of copper in the original sample by accounting for the simple stoichiometric relationship between Cu and CuO where each mole of Cu yields one mole of CuO. You can read more about gravimetry in Chapter 8 of the textbook Analytical Chemistry 2.1.

In a titrimetric analysis, we bring copper into solution as Cu2+(aq) and slowly add a solution of ethylenediaminetetraacetic acid, EDTA, until the moles of EDTA added are equal to the moles of Cu2+ in the original sample.\[\text{Cu}^{2+}(aq) + \text{EDTA}(aq) \rightarrow \text{Cu(EDTA)}^{2+}(aq) \nonumber\]If we know the concentration of our EDTA solution, then it is easy to determine the amount of Cu2+ in the original sample using the simple stoichiometric relationship between Cu2+ and EDTA. For both of these analyses, a chemometric treatment of the data consists of little more than reporting an average, a standard deviation, and a confidence interval. You can read more about titrimetry in Chapter 9 of the textbook Analytical Chemistry 2.1.

Gravimetry and titrimetry are useful analytical methods when copper is a major analyte (> 1% w/w) or a minor analyte (0.01% w/w – 1% w/w), but less useful if it is a trace analyte (10−7% w/w – 0.01% w/w). Neither method affords a rapid analysis, which makes them less useful if we need to analyze multiple analytes in a large number of samples. For more information about the scale of operations for analytical chemistry, including the relative concentrations of analytes in samples, see Chapter 3.4 of the textbook Analytical Chemistry 2.1.

Beginning in the 1950s, instrumental methods of analysis emerged in which an analytical signal is related to the analyte’s concentration, not through the stoichiometry of one or more chemical reactions, but through a theoretical relationship in which at least one variable is not known to us. 
For example, a solution of Cu2+(aq) is light blue in color because it absorbs light over a broad range of wavelengths between about 600–900 nm. The relationship between a solution’s absorbance, \(A_{\lambda}\), at a specific wavelength, \(\lambda\), and a given concentration, C, of Cu2+(aq) is given by Beer’s law\[A_{\lambda} = \epsilon_{\lambda} b C \nonumber\]where \(\epsilon_{\lambda}\) is the analyte’s molar absorptivity at the selected wavelength, \(\lambda\), and b is the distance light travels through the sample. Of these variables—\(A_{\lambda}\), \(\epsilon_{\lambda}\), b, and C—the value of \(\epsilon_{\lambda}\) is not known to us. Contrast that to gravimetry and titrimetry where we almost always know the exact stoichiometric relationships. For more information about visible absorption spectroscopy and Beer's Law, see Chapter 10.2 in Analytical Chemistry 2.1.

Although we can measure \(A_{\lambda}\) and b, we cannot calculate C without first determining the value of \(\epsilon_{\lambda}\), which we do using a standard solution for which the concentration of analyte is known, Cstd. If we use a single standard and a single wavelength—which is all early instrumentation allowed—then we have\[\left[ A_{\lambda, std} \right]_{1\ \times\ 1}\ =\ \left[\epsilon_{\lambda} b \right]_{1\ \times\ 1}\ \times\left[C_{std}\right]_{1\ \times\ 1}\nonumber\]which we can solve exactly for \(\epsilon_{\lambda}b\). With this value in hand, we can use the sample’s absorbance to calculate the analyte’s concentration in the sample.

Note that we are expressing Beer's Law here using the matrix notation \(\left[ \ \ \right]_{r \times c}\), where r is the number of rows and c is the number of columns in the matrix. In this equation, each matrix holds a single value: an absorbance, a value for \(\epsilon_{\lambda}b\), or a concentration. A matrix with a single value is a scalar. A matrix with a single column or a single row is a vector. The reason for expressing Beer's Law in this way will soon be evident.

If we use c standards instead of one standard, and if we continue to use a single wavelength, then we can write Beer’s law this way\[\left[\cdots\ A_{\lambda, std}\ \cdots\right]_{1\ \times\ c}\ =\ \left[\epsilon_{\lambda} b\right]_{1\ \times\ 1}\ \times\left[\cdots\ C_{std}\ \cdots\right]_{1\ \times\ c}\ +\ \left[\cdots\ E\ \cdots\right]_{1\ \times\ c} \nonumber\]where the absorbance values and the concentrations are vectors with dimensions of 1×c (1 wavelength and c standards), where the value of \(\epsilon_{\lambda}b\) is a scalar (a constant), and where we have a vector of residual errors, E, that gives the uncertainties in our measured absorbance values. 
Having multiple standards provides a new source of information that allows us to consider experimental uncertainty!Note that the equation \(A_{\lambda ,std} = \epsilon_{\lambda}bC\) is in the form of a straight-line, \(y = \beta_{0}x + \beta_{1}\), for which a standard linear regression analysis returns values for the two constants: the slope, \(\beta_{0}\), which is equivalent to \(\epsilon_{\lambda}b\) and the y-intercept, \(\beta_{1}\), which is equivalent to the residual error.If we use r wavelengths and c standards, then we can write Beer’s law this way\[\begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & A_{\lambda, std} & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times c} = \begin{bmatrix} \vdots \\ \epsilon_{\lambda} b \\ \vdots \end{bmatrix}_{r \times 1} \times [ \cdots C_{std} \cdots]_{1 \times c} + \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & E & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times c} \nonumber\]where the absorbance values and the residual errors are in matrices (with wavelengths in rows and standards in columns), the values for \(\epsilon_{\lambda} b\) at each wavelength are in a vector, and the analyte’s concentration in the standards are in a vector; this is a computationally more difficult form of regression, but, as we will learn in a later chapter, one we can solve.But we can push this even further! Note that the \(\epsilon_{\lambda}b\) matrix has one column because we are using a single wavelength, and the C matrix has one row because we assumed just one analyte. As long as the number of analytes is less than the smaller of the number of wavelengths or the number of standards, then we can include additional analytes. For example, if we have n analytes, then\[\begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & A_{\lambda, std} & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times c} = \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & \epsilon_{\lambda} b & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times n} \times \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & C_{std} & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{n \times c} + \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & E & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times c}\nonumber\]where each column in the \(\epsilon_{\lambda}b\) matrix holds the \(\epsilon_{\lambda}b\) values for a different analyte at one of our wavelengths, and each row in the C matrix is the concentration of a different analyte in one of our standards; again, we can use linear regression to analyze the data.Moving from the analysis of a single analyte in a single standard using a single wavelength\[\left[ A_{\lambda, std} \right]_{1\ \times\ 1}\ =\ \left[\epsilon_{\lambda} b\right]_{1\ \times\ 1}\ \times\left[C_{std}\right]_{1\ \times\ 1} \nonumber\]to the analysis of multiple analytes using multiple standards and multiple wavelengths\[\begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & A_{\lambda, std} & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times c} = \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & \epsilon_{\lambda} b & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times n} \times \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & C_{std} & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{n \times c} + \begin{bmatrix} \cdots & \cdots & \cdots \\ \vdots & E & \vdots \\ \cdots & \cdots & \cdots \end{bmatrix}_{r \times c} \nonumber\]required a significant increase in computational power and a significant growth in the 
capabilities of instrumentation; not surprisingly, new chemometric techniques rely on and are driven by developments in computer science and instrumental analysis! In turn, new chemometric techniques open up new areas of analysis and encourage innovations in computer science and instrumental analysis. This is why chemometrics is an important part of analytical chemistry.
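To see how the simplest of these calibration models translates into an actual calculation, here is a minimal R sketch of a single-wavelength calibration with several standards; the concentrations and absorbance values are hypothetical numbers chosen only for illustration.

conc = c(0.00, 0.10, 0.20, 0.30, 0.40)             # hypothetical Cu2+ standards (mM)
absorbance = c(0.002, 0.121, 0.243, 0.359, 0.482)  # hypothetical measured absorbances

cal = lm(absorbance ~ conc)   # the slope estimates epsilon * b; the intercept absorbs residual error
summary(cal)

A_sample = 0.310              # absorbance of a sample (also hypothetical)
(A_sample - coef(cal)[1]) / coef(cal)[2]   # back-calculated concentration of the sample

Extending this sketch to multiple wavelengths and multiple analytes leads to the matrix forms of Beer's law written above, which later chapters treat as a multivariate regression problem.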
InfoPage
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/00%3A_Front_Matter/02%3A_InfoPage
This text is disseminated via the Open Education Resource (OER) LibreTexts Project and like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such effects.Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts’ web based origins allow powerful integration of advanced features and new technologies to support learning. The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advance to basic level) and horizontally (across different fields) integrated.The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant No. 1246120, 1525057, and 1413739.Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor the US Department of Education.Have questions or comments? For information about adoptions or adaptions contact More information on our activities can be found via Facebook , Twitter , or our blog .This text was compiled on 07/05/2023
1.0: Introduction
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.0%3A_Introduction
Why water boils at 100ºC and methane at -161ºC; why blood is red and grass is green; why diamond is hard and wax is soft; why graphite writes on paper and silk is strong; why glaciers flow and iron gets hard when you hammer it; how muscles contract; how sunlight makes plants grow and how living organisms have been able to evolve into ever more complex forms…? The answers to all these problems have come from structural analysis. Max Perutz, July 1996 (Churchill College, Cambridge)

With the words pronounced by the Nobel laureate Max Perutz we open these pages (*), a continuing work in progress, intended to guide the interested reader into the fascinating world of Crystallography, which forms part of the scientific knowledge developed by many scientists over many years. This allows us to explain what crystals are; what molecules, hormones, nucleic acids, enzymes, and proteins are, along with their properties; and how we can understand their function in a chemical reaction, in a test tube, or inside a living being.

The discovery of X-rays in the late 19th century completely transformed the old field of Crystallography, which previously studied the morphology of minerals. The interaction of X-rays with crystals, discovered in the early 20th century, showed us that X-rays are electromagnetic waves with a wavelength of about 10⁻¹⁰ meters and that the internal structure of crystals is regular, arranged in three-dimensional networks, with separations of that order. Since then, Crystallography has become a basic discipline of many branches of Science and particularly of Physics, Chemistry of condensed matter, Biology and Biomedicine.

Structural knowledge obtained by Crystallography allows us to produce materials with predesigned properties, from catalysts for chemical reactions of industrial interest, to toothpaste, vitroceramic plates, extremely hard materials for surgical use, or certain aircraft components, just to give some examples of small or medium sized atomic or molecular materials. Moreover, as biomolecules are the machines of life, like mechanical machines with moving parts, they modify their structure in the course of performing their respective tasks. It would also be extremely illuminating to follow these modifications and see the motion of the moving parts in a movie. To make a film of a moving object, it is necessary to take many snapshots. Faster movement requires a shorter exposure time and a greater number of snapshots to avoid blurring the pictures. This is where the ultrashort duration of the FEL (free electron laser) pulses will ensure sharp, non-blurred pictures of very fast processes (European XFEL or CXFEL).

We suggest you start by getting an overview of Crystallography, or by looking at some interesting video clips collected by the International Union of Crystallography. Some of them can be reached directly through the following links. In any case, we suggest you first get an overview of the meaning of Crystallography and, if your interest holds, go deeper into the remaining pages that are shown in the menu on the left (if you don't see the left menu, click here). Enjoy it!

(*) We endeavor to assemble these pages and offer them to the interested reader, but obviously we are not immune to errors, inconsistencies or omissions. We are very grateful to several readers who have helped us to correct some previously undetected small errors or who have improved the wording of certain parts of the text. 
For anything that needs further attention, please let us know through Martín Martínez Ripoll. These pages were announced by the International Union of Crystallography (IUCr), have been selected as one of the educational web sites and resources of interest to learn crystallography, offered as such in the commemorative web for the International Year of Crystallography, and suggested as the educational website in the brochure prepared by UNESCO for the crystal growing competition for Associated Schools (and in subsequent calls of this competition). The Cambridge Crystallographic Data Centre also offers this website through its Database of Educational Crystallographic Online Resources (DECOR). Martín Martínez Ripoll (1946- ) and Félix Hernández Cano (1941-2005+) were coauthors of a first version of these pages in the early 1990's. Later, in 2002, they produced a PowerPoint presentation dedicated to drawing students' attention to the enigmatic beauty of the crystallographic world... This file, called XTAL RUNNER (totally virus free, although in Spanish), can be obtained through this link. If you understand Spanish, we also offer you the possibility of reading a short general article by these authors, published in 2003, entitled Cristalografía: Transgrediendo los Límites. Today we ask ourselves: where have those glory days gone?

This page titled 1.0: Introduction is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.1: The structure of crystals
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.01%3A_New_Page
In the context of this chapter, you will also be invited to visit these sections... We all have heard about natural minerals and crystals. We find them daily without entering a museum. A rock and a mountain are made up of minerals, as crystalline as a lump of sugar, a bit of porcelain or a gold ring. However, only occasionally is the size of a crystal large enough to draw our attention, as is the case of these beautiful mineral examples of: Diamond (pure carbon) - Quartz (silicon dioxide) - Scapolite (aluminum silicate) - Pyrite (iron sulfide) Several of these images are property of Amethyst Galleries, Inc. Other excellent images of minerals can be found through this link.

Although you can continue reading these pages without any special difficulty, you would probably like to know some aspects of the historical development of our understanding of crystals. For those readers we offer some further notes that can be found through this link. The ancient Greeks identified quartz with the word crystal (κρύσταλλος, crustallos, or phonetically kroos'-tal-los = cold + drop), ie, very cold icicles of extraordinary hardness. But the formation of crystals is not a unique property of minerals; they are also found (but not necessarily in a natural manner) in the so-called organic compounds, and even in nucleic acids, in proteins and in viruses...

A crystal is a material whose constituents, such as atoms, molecules or ions, are arranged in a highly ordered microscopic structure. These constituents are held together by interatomic forces (chemical bonds) such as metallic bonds, ionic bonds, covalent bonds, van der Waals bonds, and others. The crystalline state of matter is the state with the highest order, ie, with very high internal correlations and over the greatest distance range. This is reflected in their properties: anisotropic and discontinuous. Crystals usually appear as unadulterated, homogeneous and with well-defined geometric shapes (habits) when they are well-formed. However, as we say in Spanish, "the habit does not make the monk" (clothes do not make the man), and their external morphology is not sufficient to evaluate the crystallinity of a material.

The movie below shows the process of crystal growth of lysozyme (a very stable enzyme) from an aqueous medium. The real process, which takes only a few seconds on your screen, lasted approximately 30 minutes. The original movie was found on an old website offered by George M. Sheldrick. The figure on the left shows a representation of the faces of a given crystal. If your browser allows the Java Runtime, clicking on the image will open a new window in which you will be able to turn this object; if you do not have this application, you can still observe the model rotation in continuous mode from this link. Other Java pop-ups of faces and forms (habits) for ideal crystals can be obtained through this link.

So, we ask ourselves, what is unique about crystals which distinguishes them from other types of materials? The so-called microscopic crystal structure is characterized by groups of ions, atoms or molecules arranged in terms of some periodic repetition model, and this concept (periodicity) is easy to understand if we look at the drawings in a carpet, in a mosaic, or in a military parade... Repeated motifs in a mosaic Repeated motifs in a military parade If we look carefully at these drawings, we will discover that there is always a fraction of them that is repeated.
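To make the idea of repetition by translation a little more concrete, the short sketch below builds a small two-dimensional "crystal" by copying one motif (a pair of points, standing for a pair of atoms) along two lattice translations. It is only an illustration: the motif coordinates and the translation vectors are arbitrary values, not data from any real structure.

```python
import numpy as np

# Two lattice translations (arbitrary example values, arbitrary units)
a = np.array([3.0, 0.0])
b = np.array([1.0, 2.5])

# A motif of two "atoms", given relative to the origin of each cell
motif = [np.array([0.5, 0.5]), np.array([1.5, 1.0])]

# Repeat the motif over a small block of cells: every point is
# motif_position + m*a + n*b, with m and n integers
pattern = [atom + m * a + n * b
           for m in range(4) for n in range(3) for atom in motif]

for point in pattern[:6]:          # print only the first few points
    print(f"x = {point[0]:5.2f}   y = {point[1]:5.2f}")
```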
In crystals, the atoms, ions or molecules are packed in such a way that they give rise to "motifs" (a given set or unit) that are repeated at intervals ranging from about 5 Angstrom up to hundreds of Angstrom (1 Angstrom = 10⁻⁸ cm), and this repetition, in three dimensions, is known as the crystal lattice. The motif or unit that is repeated, by orderly shifts in three dimensions, produces the network (the whole crystal), and we call it the elementary cell or unit cell. The content of the unit being repeated (atoms, molecules, ions) can also be drawn as a point (the reticular point) that represents every constituent of the motif. For example, each soldier in the figure above could be a reticular point.

But there are occasions where the repetition is broken, or it is not exact, and this feature is precisely what distinguishes a crystal from glass, or in general, from materials called amorphous (disordered or poorly ordered)... Planar atomic model of an ordered material (crystal) Planar atomic model of glass (an amorphous material) However, matter is not entirely ordered or disordered (crystalline or non-crystalline), and so we can find a continuous degradation of the order (degree of crystallinity) in materials, which goes from the perfectly ordered (crystalline) to the completely disordered (amorphous). This gradual loss of order which is present in materials is equivalent to what we see in the small details of the following photograph of gymnastic training, which is somewhat ordered, but there are some people wearing pants, others wearing skirts, some in different positions or slightly out of line...

In the crystal structure (ordered) of inorganic materials, the repetitive units (or motifs) are atoms or ions, which are linked together in such a way that we normally do not distinguish isolated units; hence their stability and hardness (ionic crystals, mainly)... Where we clearly distinguish isolated units is in the case of the so-called organic materials, where the concept of the isolated entity (molecule) appears. Molecules are made up of atoms linked together. However, the links between the molecules within the crystal are very weak (molecular crystals). Thus, they are generally softer and more unstable materials than the inorganic ones. Crystal structure of an organic material: Cinnamamide Protein crystals also contain molecular units (molecules), as in the organic materials, but much larger. The type of forces that bind these molecules are also similar, but their packing in the crystals leaves many holes that are filled with water molecules (not necessarily ordered), and hence their extreme instability... Crystal structure of a protein: AtHal3.

The different packing modes in crystals lead to the so-called polymorphic phases (allotropic phases of the elements) which confer different properties to these crystals (to these materials). For example, we all know the different appearances and properties of the chemical element carbon, which is present in Nature in two different crystalline forms, diamond and graphite: Left: Diamond (pure carbon) Right: Graphite (pure carbon) Graphite is black, soft and an excellent lubricant, suggesting that its atoms must be distributed (packed) in such a way as to explain these properties. However, diamonds are transparent and very hard, so that we can expect their atoms to be very firmly linked. Indeed, their sub-microscopic structures (at atomic level) show us their differences ...
Right: Graphite, showing its layered crystal structure In the diamond structure, each carbon atom is linked to four other ones in the form of a very compact three-dimensional network (covalent crystals), hence its extreme hardness and its property as an electric insulator. However, in the graphite structure, the carbon atoms are arranged in parallel layers much more separated than the atoms within a single layer. Due to these weak links between the atomic layers of graphite, the layers can slide without much effort; hence graphite's suitability as a lubricant, its use in pencils and as an electrical conductor. And speaking about conductors... The metal atoms in the metallic crystals are structured in such a way that some delocalized electrons give cohesion to the crystals and are responsible for their electrical properties.

Before ending this chapter let us introduce a few words about the so-called quasicrystals... A quasicrystal is an "ordered" structure, but not perfectly periodic as the crystals are. The repeating patterns (sets of atoms, etc.) of the quasicrystalline materials can fill all available space continuously, but they do not display an exact repetition by translation. And, as far as symmetry is concerned, while crystals (according to the laws of classical crystallography) can display axes of rotation of order 2, 3, 4 and 6 only, the quasicrystals show other rotational symmetry axes, as for example of order 10. In this website we will not pay attention to the case of quasicrystals. Therefore, if you are interested in them, please go to this link, where Steffen Weber, in a relatively simple way, describes these types of materials from the theoretical point of view, and where some additional sources of information can also be found. Advanced readers should also consult the site offered by Paul J. Steinhardt at Princeton University. The Nobel Prize in Chemistry 2011 was awarded to Daniel Shechtman for the discovery of quasicrystals in 1984.

There are obviously many questions that the reader will ask, having come this far, and one of the most obvious ones is: how do we know the structure of crystals? This question, and others, will be answered in the following chapters, and therefore we encourage you to consult them...

This page titled 1.1: The structure of crystals is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.2: X-rays
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.02%3A_New_Page
An unexpected result! Discovery of X-rays in 1895. (Illustration by Alejandro Martínez de Andrés, CSIC 2014) By the end of the 19th century, in 1895, Wilhelm Conrad Röntgen, a German scientist from the University of Würzburg, discovered a form of radiation (of unknown nature at that time, and hence the name X-rays) which had the property of penetrating opaque bodies. In the first paragraph of his communication sent to the Society of Physics and Medicine of Würzburg he reports the discovery as follows: After producing an electrical discharge with a Ruhmkorff's coil through a Hittorf's vacuum tube, or a sufficiently evacuated Lenard, Crookes or similar apparatus, covered with a fairly tight-fitting jacket made of thin, black paperboard, one sees that a cardboard sheet coated with a layer of barium platinocyanide, located in the vicinity of the apparatus, lights up brightly in the completely darkened room, regardless of whether the coated side is facing the tube or not. This fluorescence occurs up to 2 meters away from the apparatus. One can easily be convinced that the cause of the fluorescence proceeds from the discharge apparatus and not from any other point of the line.

To learn about some aspects of the discovery, as well as about personal aspects of Röntgen, see also the chapter dedicated to some biographical outlines. But if you can read Spanish, there is an extensive chapter dedicated to both the historical details around Röntgen and his discovery. X-rays are invisible to our eyes but they can produce visible images if we use photographic plates or special detectors... Left: Radiographic image of a hand Right: Radiographic image of a monkey Left: Radiographic image of a well-done weld Right: Poorly-done weld (black line) A painting and its X-ray photograph showing two superimposed paintings on the same canvas (Charles II of Spain, by Carreño de Miranda, Museo del Prado, Madrid) We all know several applications of X-rays in the medical field: angiography (the study of blood vessels) or the so-called CT scans, but the use of X-rays has also been extended to detect failures in metals or for the analysis of paintings.

Many years passed from the discovery of X-rays in 1895 until that finding produced a revolution in the fields of Physics, Chemistry and Biology. The potential applications in these areas came in 1912, indirectly, from the hand of Max von Laue, professor at the Universities of Munich, Zurich, Frankfurt, Würzburg and finally Berlin. Paul Peter Ewald got his friend, Max Laue, interested in his own experiments on the interference between radiations with large wavelengths (practically visible light) on a "crystalline" model based on resonators (note that at that time the question of wave-particle duality was also under discussion). The idea then came to Laue that the much shorter electromagnetic rays, which X-rays were supposed to be, would cause some kind of diffraction or interference phenomena in a medium, and that a crystal could provide this medium. Left: Max von Laue Right: Paul P. Ewald

Max von Laue demonstrated the nature of this new radiation by putting crystals of copper sulfate, and of the mineral zinc blende, in front of an X-ray source, obtaining confirmation of his hypothesis and demonstrating both the undulatory nature of this radiation and the periodic nature of crystals. For these findings he received the Nobel Prize in Physics in 1914. Left: William H. Bragg Right: William L. Bragg
However, those who really benefited from the discovery of the Germans were the British Braggs (father and son), William H. Bragg and William L. Bragg, who together in 1915 received the Nobel Prize in Physics for demonstrating the usefulness of the phenomenon discovered by von Laue for obtaining the internal structure of crystals - but all this will be the subject of later chapters. This chapter will deal exclusively with the nature and production of X-rays...

X-rays are electromagnetic radiations, of the same nature as visible light, ultraviolet or infrared radiations, and the only thing that distinguishes them from other electromagnetic radiations is their wavelength, which is about 10⁻¹⁰ m (equivalent to the unit of length known as one Angstrom). Graphic representation of an electromagnetic wave, showing its associated electric (E) and magnetic (H) fields, moving forwards at the speed of light. The continuous spectrum of visible light (wavelength decreases from red to violet) Excellent information on the electromagnetic spectrum can be found in some pages offered by NASA. The reader can also learn about X-rays and their applications in Medical Radiography and in the pages of The X-Ray Century.

ν(Hz) · λ(m) = 3×10⁸ m·Hz;  E(J) = h(J/Hz) · ν(Hz) = k(J/K·molecule) · T(K);  h = 6.6×10⁻³⁴ J/Hz;  k = 1.4×10⁻²³ J/(K·molecule);  1 eV = 1.6×10⁻¹⁹ J. Figure taken from the Berkeley Lab.

The most interesting X-rays for Crystallography are those having a wavelength close to 1 Angstrom (the hard X-rays in the diagram above), which is a distance very close to the interatomic distances occurring in molecules and crystals. This type of X-rays has a frequency of approximately 3 million THz (terahertz) and an energy of 12.4 keV (kilo-electron-volts), which in turn would correspond to a temperature of about 144 million degrees Celsius. These wavelengths are produced in Crystallography laboratories and in large synchrotrons such as ESRF, ALBA, Diamond, DESY, ... X-ray generator in a Crystallography laboratory. The goniometric and detection systems are shown behind the X-ray tube. Aerial photograph of the synchrotron at the ESRF in Grenoble (France). Conventional X-ray tubes used for crystallographic studies during the 20th century Static sketch and animation of the X-ray production in a conventional X-ray tube

In such tubes, some 50 kV are supplied as a potential difference (high voltage) between an incandescent filament (through which a low-voltage electrical current of intensity i passes: around 5 A at 12 V) and a pure metal (usually copper or molybdenum). This produces an electrical current (of free electrons) between them of about 30 mA. From the incandescent filament (negatively charged) the free electrons jump to the anode (positively charged), causing (in the pure metal) a reorganization in its electronic energy levels. This is a process that generates a lot of heat, so X-ray tubes must be very well cooled. An alternative to conventional X-ray tubes are the rotating anode generators, in which the anode, in the form of a cylinder, is maintained in continuous rotation, so that the incidence of electrons is distributed over its cylindrical surface and a higher power can be obtained. Left: Rotating anode generator Right: Rotating anode of polished copper (images taken from Bruker-AXS)

The so-called "characteristic X-rays" are produced according to the following scheme: a) Energy state of electrons in an atom of the anode that is going to be reached by an electron from the filament.
b) Energy state of the same electrons after impact with the electron from the filament. The incident electron bounces and ejects an electron from the anode, producing the corresponding hole. c) An electron of a higher energy level falls and occupies the hole. This energy jump, perfectly defined, generates the so-called characteristic X-rays of the anodic material.

Left: In an X-ray tube the electrons emitted from the cathode are accelerated towards the metal target anode by an accelerating voltage of typically 50 kV. The high energy electrons interact with the atoms in the metal target. Sometimes the electron comes very close to a nucleus in the target and is deviated by the electromagnetic interaction. In this process, which is called bremsstrahlung (braking radiation), the electron loses much energy and a photon (X-ray) is emitted. The energy of the emitted photon can take any value up to a maximum corresponding to the energy of the incident electron. Right: The high energy electron can also cause an electron close to the nucleus in a metal atom to be displaced. This vacancy is filled by an electron further out from the nucleus. The well-defined difference in binding energy, characteristic of the material, is emitted as a monoenergetic photon. When detected, this X-ray photon gives rise to a characteristic X-ray line in the energy spectrum. Animations taken from Nobelprize.org.

Apart from the developments made on the new synchrotron sources, there still exist several attempts to optimize the efficiency and power of the "in-house" X-ray sources, such as the ones based on microfocus technology, that is, high-brightness sources that additionally use very stable optics mounted on the tube housing, or those based on the use of a liquid metal as anode... Left: New microfocus X-ray tube. Right: New development of an X-ray source based on liquid-metal anodes. Taken from Excillum. There is an animation showing this technology.

The energetic restoration of the excited anodic electron is carried out with an X-ray emission with a frequency that corresponds exactly to the specific energy gap (quantum) that the electron needs to return to its initial state. These X-rays therefore show a specific wavelength and are known as characteristic wavelengths of the anode. The most important characteristic wavelengths in X-ray Crystallography are the so-called K-alpha lines (Kα), produced by the electrons falling to the innermost layer of the atom (higher binding energy). However, in addition to these specific wavelengths, a continuous range of wavelengths, very close to each other, is also produced, known as the continuous radiation, which is due to the braking of the incident electrons when they hit the metal target.

Distribution of X-ray wavelengths produced in a conventional X-ray tube where the anode material is copper (Cu), molybdenum (Mo), chromium (Cr) or tungsten (W). Over the so-called continuous spectrum, the characteristic K-alpha (Kα) and K-beta (Kβ) lines are shown. The starting point of the continuous spectrum appears at a wavelength (in Angstrom) which is approximately 12.4 / V, where V represents the voltage (in kV) between anode and filament. For the voltage shown between anode and filament, only the characteristic wavelengths of molybdenum are obtained (figure on the left).
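As a small numerical check of the relations quoted earlier in this chapter (E = hν, ν·λ = c, and the approximate 12.4/V rule for the short-wavelength limit of the continuous spectrum), the sketch below converts a wavelength into a photon energy and computes the cut-off wavelength of a tube. The wavelength and voltage used are only illustrative.

```python
# Minimal sketch of the wavelength/energy relations quoted in the text.
h = 6.626e-34       # Planck constant, J/Hz
c = 2.998e8         # speed of light, m/s
e = 1.602e-19       # 1 eV in joules

def photon_energy_keV(wavelength_angstrom):
    """Photon energy (keV) of radiation with the given wavelength (Angstrom)."""
    wavelength_m = wavelength_angstrom * 1e-10
    return h * c / wavelength_m / e / 1000.0

def cutoff_wavelength_angstrom(voltage_kV):
    """Approximate short-wavelength limit (Angstrom) of the continuous spectrum."""
    return 12.4 / voltage_kV   # the approximate rule quoted above

print(photon_energy_keV(1.0))          # ~12.4 keV for a 1 Angstrom photon
print(cutoff_wavelength_angstrom(50))  # ~0.25 Angstrom for a 50 kV tube
```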
In synchrotrons, the generation of X-rays is quite different. A synchrotron facility contains a large ring (on the order of kilometers), where electrons move at a very high speed in straight channels that occasionally break to match the curvature of the ring. These electrons are made to change direction, to go from one channel to another, using magnetic fields of high energy. It is at this moment, when the electrons change their direction, that they emit a very high energy radiation known as synchrotron radiation. This radiation is composed of a continuum of wavelengths ranging from microwaves to the so-called hard X-rays.

The appearance of a synchrotron is very similar to that shown in the following schemes: A synchrotron scheme. The linear accelerator (Linac) and the circular accelerator (Booster) are seen in the center, surrounded by the outer storage ring. The emitted X-rays are directed to the beamlines. Left: General sketch of a synchrotron. The central circle is where the charged particles are accelerated (linac & booster). The outer circle is the storage ring, formed by a polygon of straight segments, at the ends of which the experimental stations are installed. Right: Outline of the junction of two straight segments of the storage ring of a synchrotron. X-rays appear due to the change of direction of the charged particles. The interested reader can access a demonstration on the operation of a synchrotron ring through this link, or see the same animation in a larger size through this other link. Outline of the point between two straight segments in the storage ring of a synchrotron. Image taken from the ESRF Details of how X-rays are produced in a synchrotron in the curvature of the electrons' trajectory inside the storage ring. Image taken from the ESRF

The X-rays obtained in synchrotrons have two clear advantages for crystallography: their wavelength can be tuned to the needs of the experiment, and their brilliance is many orders of magnitude higher than that of conventional sources. Here you can find a list of synchrotrons and storage rings used as synchrotron radiation sources, and free electron lasers around the world. The brilliance of X-ray sources: conventional X-ray tubes, synchrotrons and the future XFEL. Image taken from the ESRF. The following image shows an outline of an experimental station of a synchrotron: a) the optics hutch, where X-rays are filtered and focused using curved mirrors and monochromators; b) the experimental hutch, where the goniometer, sample and detector are located and where the diffraction experiment is done; and c) the control cabin, where the experiment is monitored and, if required, also evaluated. Lightsources.org contains news and science highlights from each light source facility, as well as photos and videos, education and outreach resources, a calendar of conferences and events, and information on funding opportunities.

The radiation used for crystallography is usually monochromatic (or nearly monochromatic), that is, a radiation with exclusively (or almost exclusively) a single wavelength. In order to achieve this, the so-called monochromators are used, which consist of a system of crystals that, based on Bragg's Law (which will be presented in another chapter), are able to "filter" (through the interaction between the crystals and the X-rays) the polychromatic radiation, allowing only one wavelength (color), as shown below. Outline of a monochromator. Polychromatic radiation (white) coming from the left (below) is "reflected", in accordance with Bragg's Law (to be seen in a subsequent chapter), in different orientations of the crystal to produce ("to filter") a monochromatic radiation that is reflected again ("filtered") in the secondary crystal.
For the moment it is enough that the reader is aware that this law will allow us to understand how the crystals "reflect" the X-rays, behaving as special mirrors. Image taken from the ESRF.

X-rays interact with the electrons of matter... A monochromatic beam (ie with a single wavelength) suffers an exponential attenuation, proportional to the thickness being crossed. This attenuation may arise from several factors: a) the body heats up; b) a fluorescent radiation, with a different wavelength, is produced, accompanied by photoelectrons, both being characteristic of the material (this leads to the photo-electron spectroscopies, Auger and PES); and c) scattered X-rays with the same wavelength (coherent and Bragg) or with slightly higher wavelengths (Compton), together with the scattered electrons. Of all these effects, the most important one is fluorescence, in which the absorption increases with increasing incident wavelength. However, this behavior shows discontinuities (anomalous dispersion) for those energies that correspond to electronic transitions between different energy levels of the material (this leads to the EXAFS spectroscopy). Spectrum emitted by a metallic anode showing its characteristic wavelengths (continuous line). In the same figure, but referred to a vertical axis of absorbance (not drawn), the increasing and discontinuous variation of the absorption (dashed line) of a given material is also shown. This gives an idea of the use of this property as a filter to obtain monochromatic radiation, at least separating the Kα1-Kα2 doublet from the rest of the spectrum. This approach, using specific materials with well-defined absorption capacities, was used in Crystallography laboratories until the early 1970's to obtain monochromatic radiation.

The recent developments in the field of femtosecond X-ray protein nanocrystallography deserve special mention. Using this technique (XFEL: X-ray Free Electron Laser), based on the use of X-rays obtained from a free electron laser, "snapshots" of X-ray diffraction can be obtained on the femtosecond scale. It has been proposed that femtosecond X-ray pulses can be used to outrun even the fastest damage processes: single pulses are so brief that they terminate before radiation damage can manifest itself in the crystallites. This will imply a giant step to remove virtually all the difficulties in the crystallization process, especially for proteins (see these articles: Nature 470, 73-77, Nature and Nature). In this sense, it is also worth quoting the article published in Radiation Physics and Chemistry 71, 905-916, which had already pointed out the future importance of the free electron laser for structural biology. The European XFEL generates ultrashort X-ray flashes, 27,000 times per second and with a brilliance that is a billion times higher than that of the best conventional X-ray radiation sources. Thanks to its outstanding characteristics, which are unique worldwide, the facility opens up completely new research opportunities for scientists and industrial users. It could be interesting to look at the video offered on the web site of the international consortium, or directly through this link. Regarding the use of these powerful X-ray sources for determining the structure of biological macromolecules, the interested readers should consider the very promising results published in Nature 530, 202-206.
This study provides the opportunity to use not only the information contained in the diffraction spots generated by crystals, but also the information contained in the very weak intensity distribution found around and between the diffraction spots, the so-called continuous diffraction. With X-rays from free-electron lasers, crystallographic applications are extended to nanocrystals and even to single non-crystalline biological objects, and movies of biomolecules in action can be produced. To generate the X-ray flashes, bunches of electrons will first be accelerated to high energies and then directed through special arrangements of magnets (undulators). In the process, the particles will emit radiation that is increasingly amplified until an extremely short and intense X-ray flash is finally created. Recently, a modification that replaces the so-called material undulators (magnets) with a new optical device, also based on laser technology, dramatically reduces the size of the XFEL by about 10,000 times and the size of the accelerator by 100 times, leading to an incredible reduction in the size and price of the so-called CXFEL (compact X-ray free-electron laser).

In any case, X-rays, like any light, "illuminate" and "let us see", but in a different manner from the way we see with our eyes. We encourage you to go forward, to understand how X-rays allow us "to see" inside crystals, that is, to "see" the atoms and the molecules.

This page titled 1.2: X-rays is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.3: The symmetry of crystals
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.03%3A_New_Page
In the context of this chapter, you will also be invited to visit these sections... Often we don't realize it, but we continuously live with symmetry... Symmetry is the consistency, the repetition of something in space and/or in time, as is shown in the examples below: a wall drawing, the petals of flowers, the two sides of a butterfly, the succession of night and day, a piece of music, etc. Symmetry by repeating events: Day - Night - Day Symmetry in music. A fragment from "Six unisono melodies" by Bartók. (The diagram at the bottom represents the symmetrization of the one shown above) The word "Symmetry," carefully written with somewhat distorted letters, shows a two-fold axis (a rotation of 180 degrees) perpendicular to the screen. The following sentence also serves to illustrate the concept of symmetry:

A MAN, A PLAN, A CANAL: PANAMA

where, if we forget the commas and the colon, it becomes:

AMANAPLANACANALPANAMA

which can be read from right to left with exactly the same meaning as above. It is a case similar to the "palindromic" numbers (232 or 679976). There are many links in which the reader can find information on the concept of symmetry, and we have selected some of them: symmetry and shape of space, some others in the context of crystallographic concepts, some with decorative patterns, or in the context of minerals. There is even an international society for the study of symmetry. The essential knowledge on crystal morphology, symmetry elements and their combination to generate repetitive objects in space was well established between the 17th and 19th centuries, as stated elsewhere in these pages...

Specifically, in finite objects, there are a number of operations (elements of symmetry) describing repetitions. In the wall-drawing (shown above) we find translational operations (the motif is repeated by translation). The repetition of the petals in the flowers shows us rotational operations (the motif is repeated by rotation) around a symmetry axis (or rotation axis). And, although not exactly, the symmetry shown in the phrase or in the music fragment (shown above) leads us to consider other symmetry operations known as symmetry planes (reflection planes, or mirror planes); the same operation that occurs when you look into a mirror. Similarly, for example, if we look at the relationship between the three-dimensional objects in some of the pictures shown below, we will discover a new element of symmetry called center of symmetry (or inversion center), which is an imaginary point between objects (or inside the object), as shown in some drawings below. Generally speaking, and taking into account that pure translational operations are not strictly considered as symmetry operations, we can say that finite objects can contain themselves, or may be repeated (excluding translation) by the following symmetry elements: In addition to the name of the symmetry elements, we use graphical and numerical symbols to represent them. For example, a rotation axis of order 2 (a binary axis) is represented by the number 2, and a reflection plane is represented by the letter m.
Left: Polyhedron showing a two-fold rotation axis passing through the centers of the top and bottom edges Right: Polyhedron showing a reflection plane (m) that relates (as a mirror does) the top to the bottom Hands and molecular models related by a twofold axis perpendicular to the drawing plane Hands and molecular models related through a mirror plane (m) perpendicular to the drawing plane Hands (left and right) related through a center of symmetry Two objects related by a center of symmetry and a polyhedron showing a center of symmetry in its center

The association of elements of rotation with centers or planes of symmetry generates new elements of symmetry called improper rotations. Left: A four-fold improper axis implies 90º rotations followed by reflection through a mirror plane perpendicular to the axis. (Animation taken from M. Kastner, T. Medlock & K. Brown, Univ. of Bucknell) Right: Axis of improper rotation, shown vertically, in a crystal of urea. The meaning of the numerical triplets shown will be discussed in another chapter.

Combining the rotation axes and the mirror planes with the characteristic translations of the crystals (which are shown below), new symmetry elements appear, with some "sliding" components: screw axes (or helicoidal axes) and glide planes. Twofold screw axis. Glide plane. A glide plane consists of a reflection followed by a translation. Twofold screw axis applied to a left hand. The hand rotates 180º and moves half of the lattice translation in the direction of the screw axis, and so on. Note that the hand always remains a left hand. (Animation taken from M. Kastner, T. Medlock & K. Brown, Univ. of Bucknell) Glide plane applied to a left hand. The left hand reflects on the plane, generating a right hand that moves half of the lattice translation in the direction of the glide operation. (Animation taken from M. Kastner, T. Medlock & K. Brown, Univ. of Bucknell)

The symmetry elements of type center or mirror plane relate objects in a peculiar way; the same way that our two hands are related one to the other: they are not superimposable. Objects which in themselves do not contain any of these symmetry elements (center or plane) are called chiral, and their repetition through these elements (center or plane) produces objects that are called enantiomers with respect to the original ones. The mirror image of one of our hands is the enantiomer of the one we put in front of the mirror. Regarding the chirality of the crystals and of their building units (molecular or not), advanced readers should also consult the article by Howard D. Flack to be found through this link. The mirror image of either of our hands is the enantiomer of the other hand. They are objects that are not superimposable and, as they do not contain (in themselves) symmetry centers or symmetry planes, they are called chiral objects. Chiral molecules have different properties from their enantiomers, and so it is important that we are able to differentiate them. The correct determination of the absolute configuration or absolute structure of a molecule (differentiation between enantiomers) can be done in a secure manner through X-ray diffraction only, but this will be explained in another chapter.

Thus, any finite object (such as a quartz crystal, a chair or a flower) shows that certain parts of it are repeated by symmetry operations that go through a point of the object. This set of symmetry operations is known as a symmetry point group.
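As a minimal illustration of how the point symmetry operations just described act on coordinates, the sketch below writes a two-fold rotation, a mirror plane and an inversion center as 3×3 matrices acting on a point, and checks that applying the two-fold rotation twice returns the starting position. The coordinates are arbitrary example values, not taken from any real structure.

```python
import numpy as np

# Point symmetry operations written as 3x3 matrices (Cartesian axes)
twofold_z = np.diag([-1.0, -1.0, 1.0])   # 180 degree rotation about z
mirror_xy = np.diag([1.0, 1.0, -1.0])    # reflection through the xy plane
inversion = np.diag([-1.0, -1.0, -1.0])  # center of symmetry at the origin

point = np.array([0.3, 0.7, 0.2])        # an arbitrary example point

print(twofold_z @ point)                  # rotated copy of the point
print(mirror_xy @ point)                  # mirror image of the point
print(inversion @ point)                  # inverted copy of the point

# Applying the two-fold rotation twice gives back the identity operation
assert np.allclose(twofold_z @ twofold_z, np.eye(3))
```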
The advanced reader also has the opportunity to visit the nice work on point group symmetry elements offered through these links. A good general web site about symmetry in crystallography is offered by the Department of Chemistry and Biochemistry of the University of Oklahoma. Additionally, the reader can download (totally virus free!!!) and run on his own computer this Java application that, as an introduction to the symmetry of polyhedra, was developed by Gervais Chapuis and Nicolas Schöni (École Polytechnique Fédérale de Lausanne, Switzerland).

In crystals, the symmetry axes (rotation axes) can only be two-fold, three-fold, four-fold or six-fold, depending on the number of times (order of rotation) that a motif can be repeated by a rotation operation, being transformed into a new state indistinguishable from its starting state. Thus, a rotation axis of order 3 (3-fold) produces 3 repetitions (copies) of the motif, one every 120 degrees (= 360/3) of rotation. If the reader wonders why only symmetry axes of order 2, 3, 4 and 6 can occur in crystals, and not 5-, 7-fold, etc., we recommend the explanations given in another section. Improper rotations (rotations followed by reflection through a plane perpendicular to the rotational axis) are designated by the order of rotation, with a bar above that number. The screw axes (or helicoidal axes, ie, symmetry axes involving rotation followed by a translation along the axis) are represented by the order of rotation, with an added subscript that quantifies the translation along the axis. Thus, a screw axis of type 62 means that each of the six successive rotations is accompanied by a translation of 2/6 of the unit cell axis in that direction. The mirror planes are represented by the letter m. The glide planes (mirror planes involving reflection and a translation parallel to the plane) are represented by the letters a, b, c, n or d, depending on whether the translation associated with the reflection is parallel to the reticular translations (a, b, c), parallel to the diagonal of a reticular plane (n), or parallel to a diagonal of the unit cell (d). The letters and numbers that are used to represent the symmetry elements also have an equivalence with some graphic symbols.

But in order to keep talking about symmetry in crystals, it is necessary to introduce and remember the fundamental aspect that defines crystals, which is the periodic repetition by translation of motifs (atoms, molecules or ions). This repetition, which is illustrated in two dimensions with gray circles in the figure below, is derived from the mathematical concept of lattice that we will see more properly in another chapter. In a periodic and repetitive set of motifs (gray circles in the two-dimensional figure above) one can find infinite basic units (unit cells), vastly different in appearance and specification, the repetition of which generates the same mathematical lattice. Note that all the represented unit cells delimited by black lines contain in total a single circle inside them, since each vertex contains a certain fraction of a circle inside the cell. These are called primitive cells. However, the cell delimited by red lines contains a total of two gray circles inside (one corresponding to the vertices and a complete one in the center).
This type of unit cell is generically called non-primitive. Periodic repetition, which is a characteristic of the internal structure of crystals, is represented by a set of translations in the three directions of space, so that crystals can be seen as the stacking of the same block in three dimensions. Each block, of a certain shape and size (but all of them being identical), is called a unit cell or elementary cell. Its size is determined by the length of its three edges (a, b, c) and the angles between them (alpha, beta, gamma: α, β, γ). Stacking of unit cells forming an octahedral crystal and parameters which characterize the shape and size of an elementary cell (or unit cell)

As mentioned above, all the symmetry elements passing through a point of a finite object define the total symmetry of the object, which is known as the point group symmetry of the object. Obviously, the symmetry elements that imply any lattice translation (glide planes and screw axes) are not point group operations. There are many symmetry point groups, but in crystals they must be consistent with the crystalline periodicity (translational periodicity). Thus, in crystals, only rotations (symmetry axes) of order 2, 3, 4 and 6 are possible, that is, only rotations of 180º (= 360/2), 120º (= 360/3), 90º (= 360/4) and 60º (= 360/6) are allowed. See also the crystallographic restriction theorem (a short numerical sketch of its argument is given below, after the Bravais lattices are introduced). Therefore, only 32 point groups are allowed in the crystalline state of matter. These 32 point groups are also known in Crystallography as the 32 crystal classes. Graphic representation of the 32 crystal classes The motif, represented by a single brick, can also be represented by a lattice point.

The next three tables show animated drawings of the 32 crystal classes, grouped in terms of the so-called crystal system (left column), a classification mode in terms of minimal symmetry, as shown below. These interactive animated drawings need the Java environment and therefore will not run in all browsers. These are non-interactive animated gifs obtained from the Java animations appearing in http://webmineral.com (taken from Marc De Graef). Lluis Casas and Eugenia Estop, from the Department of Geology of the University of Barcelona, offer 32 pdf files which, in an interactive way, allow playing very easily with the 32 point groups through the symmetry of crystalline solids. Additionally, the reader can download and run on his own computer this Java application that, as an introduction to the symmetry of the polyhedra, was developed by Gervais Chapuis and Nicolas Schöni (École Polytechnique Fédérale de Lausanne, Switzerland). Alternatively, the interested reader can interactively view some typical polyhedra of the 7 crystal systems, through the Spanish Gemological Institute.

Of the 32 crystal classes, only 11 contain the operator center of symmetry, and these 11 centro-symmetric crystal classes are known as Laue groups. Graphic representation of the 11 Laue groups (centro-symmetric crystal classes) In addition, the repetition modes by translation in crystals must be compatible with the possible point groups (the 32 crystal classes), and this is why we find only 14 types of translational lattices which are compatible with the crystal classes. These types of lattices (translational repetition modes) are known as the Bravais lattices (you can see them here).
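As a minimal sketch of the crystallographic restriction mentioned above: a rotation compatible with a lattice must map lattice points onto lattice points, so its matrix written on the lattice basis has integer entries, and in particular an integer trace. For a rotation by an angle 2π/n the trace is 1 + 2·cos(2π/n), which is an integer only for n = 1, 2, 3, 4 and 6. The short check below simply tests which orders satisfy that condition.

```python
import math

# Crystallographic restriction: 1 + 2*cos(2*pi/n) must be an integer
# for a rotation of order n to be compatible with a lattice.
allowed = []
for n in range(1, 13):
    trace = 1.0 + 2.0 * math.cos(2.0 * math.pi / n)
    if abs(trace - round(trace)) < 1e-9:
        allowed.append(n)

print(allowed)   # -> [1, 2, 3, 4, 6]
```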
The translational symmetry of an ordered distribution of 3-dimensional objects can be described by many types of lattices, but there is always one of them more suited to the object, ie, the one that best describes the symmetry of the object. As the lattices themselves have their own distribution of symmetry elements, we must fit them to the symmetry elements of the structure. A brick wall can be structured with many different types of lattices, with different origins, and defining reticular points representing the brick. But there is a lattice that is more appropriate to the symmetry of the brick and to the way the bricks build the wall.

The adequacy of a lattice to the structure is illustrated in the two-dimensional examples shown below. In all three cases two different lattices are shown, one oblique and primitive and one rectangular and centered. In the first two cases, the rectangular lattices are the most appropriate ones. However, the deformation of the structure in the third example leads to metric relationships for which the most appropriate lattice is the primitive oblique one, hexagonal in this case. Adequacy of the lattice type to the structure. The blue lattice is the best one in each case.

Finally, combining the 32 crystal classes (crystallographic point groups) with the 14 Bravais lattices, we find up to 230 different ways to replicate a finite object (motif) in 3-dimensional space. These 230 ways to repeat patterns in space, which are compatible with the 32 crystal classes and with the 14 Bravais lattices, are called space groups, and represent the 230 different ways to fit the Bravais lattices to the symmetry of the objects. The interested reader should also consult the excellent work on the symmetry elements present in the space groups, offered by Margaret Kastner, Timothy Medlock and Kristy Brown through this link of Bucknell University. 32 crystal classes + 14 Bravais lattices = 230 space groups A wall of bricks showing the most appropriate lattice which best represents both the brick and its symmetry. Note that in this case the point symmetry of the brick and the point symmetry of the reticular point are coincident. The space group, considering the thickness of the brick, is Cmm2.

The 32 crystal classes, the 14 Bravais lattices and the 230 space groups can be classified, according to their hosted minimum symmetry, into 7 crystal systems. The minimum symmetry produces some restrictions in the metric values (distances and angles) which describe the shape and size of the lattice. 32 classes, 14 lattices, 230 space groups / crystal symmetry = 7 crystal systems. All this is summarized in the following table of the 7 crystal systems, their crystal classes (an asterisk * marks the 11 Laue classes) and their metric restrictions:

Triclinic (no metric restrictions): 1, -1*
Monoclinic (α = γ = 90º): 2, m, 2/m*
Orthorhombic (α = β = γ = 90º): 222, mm2, mmm*
Tetragonal (a = b; α = β = γ = 90º): 4, -4, 4/m*, 422, 4mm, -42m, 4/mmm*
Trigonal (a = b = c; α = β = γ; or hexagonal axes): 3, -3*, 32, 3m, -3m*
Hexagonal (a = b; α = β = 90º, γ = 120º): 6, -6, 6/m*, 622, 6mm, -6m2, 6/mmm*
Cubic (a = b = c; α = β = γ = 90º): 23, m-3*, 432, -43m, m-3m*
Total: 32 crystal classes, of which the 11 marked with * are Laue groups.

The 230 crystallographic space groups are listed and described in the International Tables for X-ray Crystallography, where they are classified according to point groups and crystal systems. Chiral compounds that are prepared as a single enantiomer (for instance, biological molecules) can crystallize in only a subset of 65 space groups, those that do not have mirror and/or inversion symmetry operations. A composition of part of the information contained in these tables is shown below, corresponding to the space group Cmm2, where C means that the structure is described in terms of a lattice centered on the faces separated by the c axis.
The first m represents a mirror plane perpendicular to the a axis. The second m means another mirror plane (in this case perpendicular to the second main crystallographic direction), the b axis. The number 2 refers to the two-fold axis parallel to the third crystallographic direction, the c axis. Summary of the information shown in the International Tables for X-ray Crystallography for the space group P21/cThe advanced reader can also consult:Crystallographers never get bored! Try to enjoy the beauty, looking for the symmetry of the objects around you, and particularly in the objects shown below ... Look for possible unit-cells and symmetry elements in these structures made with bricks (the solution is obtained clicking on the image)There is a question that surely the readers will have considered... In this chapter we have shown elements of symmetry that operate inside the crystals, but we have not yet said how we can find out the existence of such operations, when in fact, and in the best of cases, we could only visualize the external habit of the crystals if they are well formed! Although we will not answer this question here, we can anticipate that this response will be given by the behavior of the crystals when we illuminate them with that special light that we know as X-rays, but this will be the subject of another chapter. In any case, it doesn't end here! There are many more things to talk about. Go on.This page titled 1.3: The symmetry of crystals is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.4: Direct and reciprocal lattices
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.04%3A_New_Page
Any repetitive and periodic distribution of a set of objects (or motifs) can be characterized, or described, by translations that repeat the set of objects periodically. The implied translations generate what we call a direct lattice (or real lattice). Left: Fragment of a distribution of a set of objects that produce a direct lattice in 2 dimensions. As an example, one of the infinite sets of motifs (small tiles) that produce the repetitive and periodic distribution is shown inside the yellow squares. Right: Fragment of a mosaic in La Alhambra showing a 2-dimensional periodic pattern. These periodic translations can be discovered in the mosaic and produce a 2-dimensional direct lattice. The red square represents the translations of the smallest direct lattice produced by the periodic distributions of the small pieces of this mosaic. The yellow square represents another possible lattice, a bigger one, non-primitive. Periodic stacking of balls, producing a 3-dimensional network (direct lattice). The motif being repeated in the three directions of space is the contents of the small box with blue edges, the so-called "unit cell".

The translations that describe the periodicity in crystals can be expressed as a linear combination of three basic translations, not coplanar, ie independent, known as reticular or lattice axes (or unit cell axes). These axes define a parallelogram (in 2 dimensions), or a parallelepiped (in 3 dimensions), known as a unit cell (or elementary cell). This elementary area (in 2-dimensional cases), or elementary volume (in 3-dimensional cases), which holds the minimum set of the periodic distribution, generates (by translations) the full distribution which, in our atomic 3-dimensional case, we call crystal. In addition to the fact that the unit cell is the smallest repetitive unit as far as translations are concerned, the reader should note that the system of axes defining the unit cell actually defines the reference system used to describe the positional coordinates of each atom within the cell. Left: Elementary cell (or unit cell) defined by the 3 non-coplanar reticular translations (cell axes or lattice axes) Right: Crystal formation by stacking of many unit cells in 3 space directions

In general, inside the unit cell there is a minimum set of atoms (ions or molecules) which are repeated inside the cell due to the symmetry elements of the crystal structure. This minimum set of atoms (ions or molecules) which generates the whole contents of the unit cell (after applying the symmetry elements to them) is known as the asymmetric unit. The structural motif shown in the left figure is repeated by a symmetry element (symmetry operation), in this case a screw axis. The repetition of the motif (asymmetric unit) generates the full content of the unit cell, and the repetition of unit cells generates the entire crystal.

The lattice, which is a pure mathematical concept, can be selected in various ways in the same real periodic distribution. However, only one of these lattices "fits" best with the symmetry of the periodic distribution of the motifs... Two-dimensional periodic distribution of one motif containing two objects (a triangle and a circle) Left: Unit cells corresponding to possible direct lattices (= real lattices) that can be drawn over the periodic distribution shown above.
Right: The red cell on the left figure (a centered lattice) fits better with the symmetry of the distribution, and can be decomposed in two identical lattices, one for each object of the motif.As is shown in the figures above, although especially in the right one, any lattice that describes the repetition of the motif (triangle + circle) can be decomposed into two identical equivalent lattices (one for each object of the motif). Thus, the concept of lattice is independent of the complexity of the motif, so that we can use only one lattice, since it represents all the remaining equivalent ones. Once we have chosen a representative lattice, appropriate to the symmetry of the structure, any reticular point (or lattice node) can be described by a vector that is a linear combination (with integer numbers) of the direct reticular axes: R = m a + n b + p c, where m, n and p are integers. Non-reticular points can be reached using the nearest R vector, and adding to it the corresponding fractions of the reticular axes to reach it:r = R + r' = (m a + n b + p c) + (x a + y b + z c)Position vector for any non-reticular point of a direct latticewhere x, y, z represent the corresponding dimensionless fractions of axes X/a, Y/b, Z/c, and X, Y, Z the corresponding lengths. Position vector for a non-reticular point (black circle) The reader should also have a look into the chapters about lattices and unit cells offered by the University of Cambridge. Alternatively, the reader can download and run on his own computer this Java application that illustrates the lattice concept (it is totally virus free and was developed by Gervais Chapuis and Nicolas Schöni, École Polytechnique Fédérale de Lausanne, Switzerland).From a geometric point of view, on a lattice we can consider some reticular lines and reticular planes which are those passing through the reticular points (or reticular nodes). Just as we did with the lattices (choosing one of them from all the equivalent ones), we do the same with the reticular lines and planes. A reticular line or a reticular plane can be used as a representative of the entire family of parallel lines or parallel planes. Following with the argument given above, each motif in a repetitive distribution generates its own lattice, although all these lattices are identical (red and blue). Of the two families of equivalent lattices shown (red and blue) we can choose only one of them, on the understanding that it also represents the remaining equivalent ones. Note that the distance between the planes drawn on each lattice (interplanar spacing) is the same for the blue or red families. However, the family of red planes is separated from the family of blue planes by a distance that depends on the separation between the objects which produced the lattice. This distance between the planes of different families can be called the geometric out-of-phase distance.Left: Family of reticular planes cutting the vertical axis of the cell in 2 parts and the horizontal axis in 1 part. These planes are parallel to the third reticular axis (not shown in the figure). Right: Family of reticular planes cutting the vertical axis of the cell in 3 parts and the horizontal axis in 1 part. These planes are parallel to the third reticular axis (not shown in the figure). The number of parts in which a family of planes cut the cell axes can be associated with a triplet of numbers that identify that family of planes. 
In the three previous figures, the number of cuts, and therefore the numerical triplets would be, and, respectively, according to the vertical, horizontal and perpendicular-to-the-figure axes. In this figure, the numerical triplets for the planes drawn are, that is, the family of planes does not cut the a axis, but cuts the b and c axes in 2 identical parts, respectively.The plane drawn on the left side of the figure above cuts the a axis in 2 equal parts, the b axis in 2 parts and the c axis in 1 part. Hence, the numerical triplet identifying the plane will be. The plane drawn on the right side of the figure cuts the a axis into 2 parts, is parallel to the b axis and cuts the c axis in 1 part. Therefore, the numerical triplet will be. A unique plane, as the one drawn in the top right figure, defined by the numerical triplet known as Miller indices, represents and describes the whole family of parallel planes passing through every element of the motif. Thus, in a crystal structure, there will be as many plane families as possible numerical triplets exist with the condition that these numbers are primes, one to each other (not having a common divisor). The Miller indices are generically represented by the triplet of letters hkl. If there are common divisors among the Miller indices, the numerical triplet would represent a single family of planes only. For example, the family with indices, which are not strictly reticular, can be regarded as the representative of 3 families of indices with a geometric out-of-phase distance (among the families) of 1/3 of the original (see the figures below). Left: Three families of reticular planes, with indices in three equivalent lattices, showing an out-of-phase distance between them of 1/3 of the interplanar spacing in each family. Right: The same set of planes of the figure on the left drawn over one of the equivalent lattices. Therefore its Miller indices are and its interplanar spacing is 1/3 of the interplanar spacing of the family.Thus, the concept of Miller indices, previously restricted to numerical triplets (being prime numbers), can now be generalized to any triplet of integers. In this way, every family of planes, will "cover" the whole crystal. And therefore, for every point of the crystal we can draw an infinite number of plane families with infinite orientations. Through a point in the crystal (in the example in the center of the cell) we can draw an infinite number of plane families with an infinite number of orientations. In this case only 3 families and 3 orientations are shown. Of course, interplanar spacings can be directly calculated from the Miller indices (hkl) and the values of the reticular parameters (unit cell axes). The table below shows that these relations can be simplified for the corresponding metric of the different lattices. Formula to calculate the interplanar spacings (dhkl) for a family of planes with Miller indices hkl in a unit cell of parameters a, b, c, α, β, γ. Vertical bars (for the triclinic case) mean the function "determinant". In the trigonal case a=b=c=A; α=β=γ. In all cases, obviously, the calculated interplanar spacing also represents the distance between the cell origin and the nearest plane of the family. Interested readers should also have a look into the chapter on lattice planes and Miller indices offered by the University of Cambridge.Any plane can also be characterized by a vector (σhkl) perpendicular to it. 
Therefore, the projection of the position vector of any point (belonging to the plane), over that perpendicular line is constant and independent of the point. It is the distance of the plane to the origin, ie, the spacing (dhkl). Any plane can be represented by a vector perpendicular to it. Consider the family of planes hkl with the interplanar distance dhkl. From the set of vectors normal to the planes' family, we take the one (σhkl) with length 1/dhkl. The scalar product between this vector and the position vector (d'hkl ) of a point belonging to a plane from the family is an integer (n), and this integer gives us the order of that plane in the hkl family. That is: (σhkl) . (d'hkl) = (1/dhkl) . (n.dhkl) = n (see left figure below)n will be 0 for the plane passing through the origin, 1 for the first plane, 2 for the second, etc.Thus, σhkl represents the whole family of hkl planes having an interplanar spacing given by dhkl. In particular, for the first plane we get: |σhkl| dhkl = 1.If we define 1/dhkl, as the length of the vector σhkl, the product of this vector, times the dhkl spacing of the planes family is the unit. If we take a vector 2 times longer than σhkl, the interplanar spacing of the corresponding new family of planes would be a half. If from this normal vector σhkl of length 1/dhkl, we take another vector, n times (integer) longer (n.σhkl), the above mentioned product (|σhkl| dhkl = 1) would imply that the new vector (n.σhkl) will correspond to a family of planes of indices nh,nk,nl having an interplanar spacing n times smaller. In other words, for instance, the lengths of the following interplanar spacings will bear the relation: d100 = 2.(d200)= 3.(d 300)..., so that σ100 = (1/2).σ200 = (1/3).σ300 ... and similarly for other hkl planes. Therefore, it appears that the moduli (lengths) of the perpendicular vectors (σhkl) are reciprocal to the interplanar spacings. The end points of these vectors (blue arrows in figure below) also produce a periodic lattice that, due to this reciprocal property, is known as the reciprocal lattice of the original direct lattice. The reciprocal points obtained in this way (green points in figure below) are identified with the same numerical triplets hkl (Miller indices) which represent the corresponding plane family. Geometrical construction of some points of a reciprocal lattice (green points) from a direct lattice. To simplify, we assume that the third axis of the direct lattice (c) is perpendicular to the screen. The red lines represent the reticular planes (perpendicular to the screen) and whose Miller indices are shown in blue. As an example: the reciprocal point with indices will be located on a vector perpendicular to the plane and its distance to the origin O is inversely proportional to the spacing of that family of planes.Animated example showing how to obtain the reciprocal points from a direct lattice It should now be clear that the direct lattice, and its reticular planes, are directly associated (linked) with the reciprocal lattice. Moreover, in this reciprocal lattice we can also define a unit cell (reciprocal unit cell) whose periodic translations will be determined by three reciprocal axes that form reciprocal angles among them. If the unit cell axes and angles of the direct cell are known by the letters a, b, c, α, β, γ, the corresponding parameters for the reciprocal cell are written with the same symbols, adding an asterisk: a*, b*, c*, α*, β*, γ*. 
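A minimal numerical sketch may help at this point. It assumes a hypothetical monoclinic cell and uses the standard crystallographic construction a* = (b × c)/V, b* = (c × a)/V, c* = (a × b)/V (an expression that reappears in the next chapter), so that the reciprocity conditions can be checked directly:

```python
import numpy as np

# Hypothetical monoclinic direct cell vectors (in Angstrom); the rows are a, b, c:
a = np.array([6.0, 0.0, 0.0])
b = np.array([0.0, 8.0, 0.0])
c = np.array([-1.5, 0.0, 9.0])          # beta different from 90 degrees

V = np.dot(np.cross(a, b), c)           # cell volume, (a x b).c

a_star = np.cross(b, c) / V             # reciprocal axes (crystallographic convention,
b_star = np.cross(c, a) / V             # i.e. without the 2*pi factor used in physics)
c_star = np.cross(a, b) / V

# Reciprocity conditions: a.a* = 1 while a.b* = a.c* = 0, and so on:
print(np.dot(a, a_star), np.dot(a, b_star), np.dot(a, c_star))      # 1.0 0.0 0.0
# |a*| is the inverse of the (100) interplanar spacing d100:
print(np.linalg.norm(a_star), 1.0 / np.linalg.norm(a_star))
```

Note that |a*| comes out as the inverse of the (100) interplanar spacing, in agreement with the reciprocal property described above.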
Geometrical relation between direct and reciprocal unit cells. The figure below shows again the strong relationship between the two lattices (direct with blue points, reciprocal in green). In this case, the third axes of the two cells (c and c*) are perpendicular to the screen. Analytically, the relationship between the direct (= real) and reciprocal cells can be written as: \(V = (a \times b) \cdot c = a\,b\,c\,(1 - \cos^2\alpha - \cos^2\beta - \cos^2\gamma + 2\cos\alpha \cos\beta \cos\gamma)^{1/2}\). Note that, in accordance with the definitions given above, the length of a* is the inverse of the interplanar spacing d100 (|a*| = 1/d100), and that |b*| = 1/d010 and |c*| = 1/d001. Therefore, the following scalar products (dot products) can be written: a.a* = 1, a.b* = 0, and similarly with the other pairs of axes. Summarizing: In addition to this, we recommend downloading and running the Java applet by Nicolas Schoeni and Gervais Chapuis of the École Polytechnique Fédérale de Lausanne (Switzerland) to understand the relation between direct and reciprocal lattices and how to build the latter from a direct lattice (it is free of any kind of virus). See also the pages on reciprocal space offered by the University of Cambridge through this link. And although we are revealing aspects corresponding to the next chapter (see the last paragraph of this page), the reader should also look at the video made by www.PhysicsReimagined.com, showing the geometric relationships between direct and reciprocal lattices, displayed below as an animated gif: The reader is probably wondering why we need this new concept (the reciprocal lattice). Well, there are reasons that justify it. One of them is that a family of planes can be represented by just one point, which obviously simplifies things. Another important reason is that this new lattice offers us a very simple geometric model that can interpret the diffraction phenomena in crystals. But this will be described in another chapter. Go on! This page titled 1.4: Direct and reciprocal lattices is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.5: Scattering and diffraction
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.05%3A_New_Page
In the context of this chapter, you will also be invited to visit these sections... Center: Refraction of light after passing through a glass prism. Right: Polarization of light passing through a polarizer. X-ray diffraction is the physical phenomenon that expresses the fundamental interaction between X-rays and crystals (ordered matter). However, to describe the phenomenon, it is advisable to first introduce some physical models that (as all models) do not fully explain reality (as they are an idealization of it), but can be used to help understand the phenomenon.A wave is an undulatory phenomenon (a disturbance) that propagates through space and time, and is regularly repeated. Waves are usually represented graphically by a sinusoidal function (as shown at right), in which we can determine some general parameters that define it. Transverse wave propagation of vibrating longitudinal and circular movements Animations originally taken from physics-animations.com Undulatory phenomena (waves) propagate at a certain speed (v) and can be modeled to meet the so-called wave equation, scalar or vectorial, depending on the nature of the disturbance. The solutions to this equation are usually combinations of trigonometric terms, each of them characterized by: 1) an amplitude (A), which measures the maximum (or minimum) of the disturbance with respect to an equilibrium value, and 2) a phase \(\phi\):\(\phi\) = 2\(\pi\) (K.r - ν.t + \(\alpha\))The intensity of an undulatory disturbance, at any point of the wave, is proportional to the square of the disturbance value at that point, and if it is expressed in terms of complex exponentials, this is equivalent to the product of the disturbance by its complex conjugate. The intensity is a measure of the energy flow per unit of time and per unit of area of the wavefront (spherical or flat, depending on the type of wave).A wave is a regular phenomenon, ie it repeats exactly in time (with a period T) and space (with a period λ, the wavelength), so that λ = ν.T, or λ.ν= v. In the expression of the phase (\(\phi\)), K is the so-called wave vector which gives the sense of progress of the wave (the ray), and is considered with an amplitude 1/λ. Thus, K is the number of repetitions per unit of length.ν is the frequency (the inverse of the period), that is, the number of repetitions (or cycles) per unit of time. We give the name pulse to the magnitude given by: 2\(\pi\).ν, which measures the number of repetitions per radian (180/π degrees) of the cycle.In the full electromagnetic spectrum (ie in the distribution of electromagnetic wavelengths) the hard X-rays (the high energy ones) are located around a wavelength of 1 Angstrom in vacuum (for Cu the average wavelength is 1.5418 Angstrom and for Mo it's 0.7107 Angstrom), while visible light has a wavelength in the range of 4000 to 7000 Angstrom.t and r are, respectively, the time and the position vector with which we measure the disturbance, and \(\alpha\) is the original phase difference relative to the other components of the wave.We speak of waves being in phase if the difference between the phases of the components is an integer multiple of 2\(\pi\), and we say that the waves are in opposition of phase if that difference is an odd multiple of \(\pi\). 
For an easy mathematical treatment to keep track of the relations between phases of the wave components, these terms are usually expressed in an exponential notation, where the exponential imaginary unit i means a phase difference of +\(\pi\)/2.Possible states of interference of two waves shown at the top, having identical amplitude and frequency. The wave drawn at the bottom (bold line) shows the result of the interference, which has maximum amplitude when interfering waves overlap, i.e. they are in phase. Complete destructive interference is obtained (resulting wave vanishes) when the maxima of one of the component waves coincide with the minima of the other, i.e., when the two waves are in phase opposition. Animation taken from The Pennsylvania State UniversityUndulatory disturbance corresponding to the combination of two elementary waves (blue and green) of similar wavelengths (λ, λ), with the same amplitude (A, A) and relative difference of phase \(\alpha\). The disturbance is moving from left to right with a velocity v. The sum of these two elementary waves produces a wave (sum of the individual ones) depicted in red (λ). Interference usually refers to the interaction of waves which are correlated or coherent with each other, either because they come from the same source or because they have the same, or nearly the same, frequency. The solutions to the wave equation, whose amplitude is not inversely dependent on the distance of origin, are called plane waves, since at a given time all points belonging to the plane K.r = constant have the same phase, the plane is perpendicular to the propagation vector K, and propagates with speed v. For a wave resulting from the sum of several components, the pulse travels with the so-called group velocity and interested readers can consult the simulation offered through this link. In the solutions to the equation in which the amplitude depends inversely on the distance, the planes become spheres and thus spherical waves are obtained. However if the distance of observation is very large, they can be considered similar to plane waves at that observation point. Taking into account what it is shown in the figure above, the principle of superposition states that due to a number of coherent sources (which don't vary phase relationships between them), the wave measured at a given time and point, is the sum of the individual waves at that time and point, taking into account the individual phases (the process of interference), as shown above. If there is no coherence between waves, phase relationships vary over time, and to obtain the total intensity of the resultant wave, we just have to add intensities (see figure below): The total disturbance of two non-coherent sources is just the sum of the individual intensitiesTo model the composition of simple trigonometric waves (of type sine or cosine, or in their imaginary exponential form) the Fresnel representation is normally used. In this representation it is assumed that each wave oscillates around the X axis, as the projection of the circular motion of a vector of length equal to its amplitude and with an angular speed equal to the wave pulse ω. In this way, the resultant wave can be obtained by adding the individual vectors and projecting the resultant vector over the same X axis. Fresnel (or Argand) representation in which is shown the composition of several individual waves (fj). |F| is the amplitude of the resultant wave and Φ its phase. 
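A minimal numerical sketch of the Fresnel (Argand) construction just described, with purely illustrative amplitudes and phases: the resultant wave is simply the complex sum of the individual phasors, and the intensity is the product of that sum by its complex conjugate.

```python
import numpy as np

# Each component wave is represented by a phasor f_j = A_j * exp(i * phi_j);
# the resultant wave is the complex (vector) sum of the phasors.
amplitudes = np.array([1.0, 0.8, 1.3])        # hypothetical amplitudes A_j
phases = np.radians([0.0, 60.0, 200.0])       # hypothetical phases phi_j

F = np.sum(amplitudes * np.exp(1j * phases))  # resultant phasor

print("resultant amplitude |F| =", abs(F))
print("resultant phase (deg)  =", np.degrees(np.angle(F)))
print("intensity |F|^2        =", abs(F) ** 2)   # F multiplied by its conjugate
```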
X-ray waves interact with matter through the electrons contained in atoms, which move at speeds much slower than that of light. When the electromagnetic radiation (the X-rays) reaches an electron (a charged particle), the electron becomes a secondary source of electromagnetic radiation that scatters the incident radiation. According to the wavelength and phase relationships of the scattered radiation, we speak of elastic processes (or inelastic processes: Compton scattering), depending on whether the wavelength is unchanged (or changes), and of coherence (or incoherence) if the phase relations are maintained (or not maintained) over time and space. The exchanges of energy and momentum produced during these processes can even lead to the expulsion of an electron out of the atom, followed by the occupation of its energy level by electrons located in higher energy levels. All these types of interactions lead to different processes in the materials, such as refraction, absorption, fluorescence, Rayleigh scattering, Compton scattering, polarization, diffraction, reflection, ... The refractive index of all materials in relation to X-rays is close to 1, so that the phenomenon of refraction of X-rays is negligible. This explains why we are not able to produce lenses for X-rays and why the process of image formation, as in the case of visible light, cannot be carried out with X-rays. It does not mean, however, that reflective optics (catoptric systems) cannot be used; only dioptric (lens-based) systems are excluded. Absorption means an attenuation of the transmitted beam, which loses its energy through all types of interactions, mainly thermal ones, fluorescence, inelastic scattering, formation of free radicals and other chemical modifications that can lead to degradation of the material. This intensity decrease follows an exponential model that depends on the distance crossed and on a coefficient of the material (the linear absorption coefficient), which in turn depends on the density and composition of the material. The process of fluorescence, in which an electron is pulled out of an atom's energy level, provides information on the chemical composition of the material. Due to the expulsion of electrons from the different energy levels, sharp discontinuities in the absorption of radiation are produced. These discontinuities allow local analysis around an atom (EXAFS). In the Compton effect the interaction is inelastic and the radiation loses energy. This phenomenon is always present in the interaction of X-rays with matter, but due to its low intensity, its incoherence and its propagation in all directions, its contribution is only found in the background radiation produced through the interaction. By scattering we will refer here to the changes of direction suffered by the incident radiation, and NOT to dispersion (the phenomenon that causes the separation of a wave into components of varying frequency).
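Regarding absorption, the exponential attenuation mentioned above can be written as I = I0 exp(−μx), where μ is the linear absorption coefficient and x the thickness crossed. A minimal sketch with made-up numbers:

```python
import math

def transmitted_fraction(mu, x):
    """Fraction of the incident X-ray intensity transmitted through a thickness x
    of material with linear absorption coefficient mu: I/I0 = exp(-mu * x)."""
    return math.exp(-mu * x)

# Hypothetical values: mu = 10 cm^-1 and a 0.5 mm (0.05 cm) thick sample
print(transmitted_fraction(10.0, 0.05))    # ~0.61, i.e. about 61% transmitted
```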
Left: Variation in the absorption of a material according to the wavelength of the incident radiation. Right: Dispersion of visible light into its nearly monochromatic wavelengths. Interaction of an X-ray front with an isolated electron, which becomes a new X-ray source, emitting X-ray waves in a spherical mode. The spherical waves produced by two electrons interact with each other, producing positive and negative interferences. When a non-polarized X-ray beam (that is, one whose electromagnetic field vibrates at random in all directions perpendicular to the propagation) interacts with an electron, the interaction takes place primarily through its electric field. Thus, in a first approximation, we can neglect both the magnetic and nuclear interactions. According to the electromagnetic theory of Maxwell, the electron scatters electric waves which propagate perpendicular to the electric field, in such a way that the scattered energy (which crosses the unit of area perpendicular to the direction of propagation, per unit of time) is: \(I_e(K_s) = I_0\, [e^4 / (R_0^2\, m^2\, c^4)]\, [(1 + \cos^2 2\theta)/2]\) (Thomson scattering model). Ks is the scattering vector, R0 is the distance to the observation point, 2θ is the angle between the incident direction and the direction where the scattering is observed; e and m are the charge and mass of the electron, respectively, and c is the speed of propagation of radiation in the vacuum. The equation above describes Thomson's model, established in 1906 [Joseph John Thomson], for the spherical wave elastically scattered by a free electron, which is similar to the Rayleigh scattering of visible light. The scattered wave is elastic, coherent and spherical. The mass factor (m) in the denominator justifies neglecting the nuclear scattering. The binding forces between atom and electron are not considered in the model. It is assumed that the natural frequencies of vibration of the electron are much smaller than those of the incident radiation. In this "normal" scattering model (in contrast to the anomalous case in which those frequencies are comparable) the scattered wave is in opposition of phase with the incident radiation. The second factor (in brackets in the equation above), which depends on the θ angle, is known as the polarization factor, because the scattered radiation becomes partially polarized, which creates a certain anisotropy in the vibrational directions of the electron, as well as a reduction in the scattered intensity (depending on the direction). The scattered intensity shows symmetry around the incident direction. As the scattered wave is spherical, the inverse proportionality to the squared distance makes the energy per unit of solid angle a constant. A solid angle is the angle in three-dimensional space that an object subtends at a point. It is a measure of how big that object appears to an observer looking from that point. Metrically it is the constant ratio between the areas in which a cone intersects concentric spheres and the corresponding squared radii of the spheres: \(A_1/R_1^2 = A_2/R_2^2 = A_3/R_3^2 = \dots\) = solid angle in steradians. With regard to the phenomenon of diffraction and interference, it is important to consider the phase relationship between two waves due to their different geometric paths.
This affects the difference of phase \(\alpha\) of the resultant wave: \(\phi = 2\pi(K_0 \cdot r - \nu t + \alpha)\), with \(\alpha = 2\pi (K_s - K_0) \cdot r_{ij} + \alpha'\), where K0 is the wave vector of the incident wave, Ks is the wave vector in the direction of propagation and rij is the vector between the two propagation centers which produces the phase difference. If we have several disturbance centers whose phase differences are measured from a common origin, and we consider the position vectors rj of their phase differences, the phase difference of one of the centers can be written (using unit vectors in the directions of propagation, with λK = s) as: \(\alpha_j = 2\pi [(s - s_0)/\lambda] \cdot r_j + \alpha'\). This means that all rj points for which the product (s - s0)·rj has a constant value will have the same phase, given by: \(\alpha = (\text{constant} \cdot 2\pi/\lambda) + \alpha'\). An atom, which can be considered as a set of Z electrons (Z being its atomic number), could be expected to scatter Z times what a single electron does. But the distances between the electrons of an atom are of the order of the X-ray wavelength, and therefore we can also expect some partial destructive interference among the scattered waves. In fact, an atom scatters Z times what an electron does only in the direction of the incident beam, and the scattering decreases with increasing θ angle (the angle between the incident radiation and the direction where we measure the scattering). The more diffuse the distribution of electrons around the nucleus, the greater the reduction. Diagram showing the variation of the amplitudes scattered by an electron, without considering the polarization (left figure), and by an atom (right figure). The amplitude (intensity) scattered by an atom decreases with increasing scattering angle. Scheme taken from School of Crystallography (Birkbeck College, Univ. of London). The atomic scattering factor is the ratio between the amplitude scattered by an atom and by a single electron. As the speed of the electrons in the atom is much greater than the rate of variation of the electric vector of the wave, the incident radiation only "sees" an average electronic cloud, which is characterized by an electron density of charge ρ(r). If this distribution is considered spherically symmetric, it depends only on the distance to the nucleus, so that, with H = 2 sin θ / λ (which is the length of the scattering vector H = Ks - K0 = (s - s0)/λ): \(f(H) = 4\pi \int_0^\infty r^2 \rho(r)\, \dfrac{\sin(Hr)}{Hr}\, dr\). Thus, the atomic scattering factor represents a number of electrons (the effective number of electrons of a particular atom type) that scatter in phase in that direction, so that for θ = 0, f = Z. The hypothesis of isotropy, i.e. that this atomic factor does not depend on the direction of H, is not well suited for atoms in which d or f orbitals are involved (as in the transition metals), nor for the valence electrons. By quantum-mechanical calculations we can obtain the values for the atomic scattering factors, and analytical approximations to them can be derived. Left: Atomic scattering factors calculated for several ions with the same number of electrons as Ne. One can observe that O2- has a more diffuse electronic cloud than Si4+ and thus it shows a faster decay. Right: Atomic scattering factors calculated for atoms and ions with different numbers of electrons. Note that the single electron of the hydrogen atom (H) scatters very little as compared with other elements, especially with increasing θ.
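The integral above can be evaluated numerically. The following sketch assumes a hypothetical spherically symmetric Gaussian electron cloud (not a real, tabulated density); it merely illustrates that f equals Z for θ = 0 and decays as the scattering angle grows, the faster the more diffuse the cloud is.

```python
import numpy as np

Z = 10            # hypothetical 10-electron atom (Ne-like)
sigma = 0.4       # hypothetical width of the electron cloud, in Angstrom

# Spherical Gaussian density normalized so that 4*pi * integral(r^2 rho dr) = Z
r = np.linspace(1e-6, 10.0, 4000)
dr = r[1] - r[0]
rho = Z * np.exp(-r**2 / (2 * sigma**2)) / ((2 * np.pi) ** 1.5 * sigma**3)

def f_atomic(H):
    """f(H) = 4*pi * integral r^2 rho(r) sin(Hr)/(Hr) dr  (simple numerical sum)."""
    kernel = np.sinc(H * r / np.pi)   # np.sinc(x) = sin(pi x)/(pi x), so this is sin(Hr)/(Hr)
    return 4 * np.pi * np.sum(r**2 * rho * kernel) * dr

for H in (0.0, 0.5, 1.0, 2.0):        # H = 2 sin(theta)/lambda, in 1/Angstrom
    print(H, round(f_atomic(H), 2))
# f(0) is Z (all electrons scatter in phase); f decays as the angle grows,
# and it would decay faster for a larger (more diffuse) sigma.
```

(Real scattering factors are obtained from quantum-mechanical densities, as mentioned above; this sketch only reproduces their qualitative behaviour.)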
Hydrogen will therefore be "difficult to see" among the other scattering effects. When the frequency of the incident radiation is close to the natural vibration of the electron bound to the atom, we have to make some corrections (Δ) due to the phase differences that occur between the individual waves scattered by electrons, whose vibration (due to the incident wave) is affected by that binding. Thus: \(f(H) = f_0 + \Delta f' + i\,\Delta f''\), also written as \(f(H) = f_0 + f' + i f''\), where f0 is the atomic scattering factor without binding, as previously defined, and i is the imaginary unit that represents the phase differences between individual scattered waves. This situation occurs for atoms with large atomic numbers (heavy atoms), or with atomic numbers close to (but smaller than) that of the metal atoms in the X-ray anode. These corrections, which will be discussed in another chapter, depend only weakly on the θ angle, so that this anomalous effect is better seen at larger values of this angle, although this is where the scattered beams have lower intensity due to thermal effects (see below). [These corrections allow us to distinguish the chirality (Bijvoet, 1951) of the crystals and provide us with a method for solving the structure of molecules (SAD, MAD)]. Due to the atomic thermal vibrations within the material, the effective volume of the atom appears larger, leading to an exponential decrease of the scattering power, characterized by a coefficient B (initially isotropic) in the Debye-Waller exponential factor: \(f(H)\, \exp[-B_{iso} \sin^2\theta / \lambda^2]\), where \(B = 8\pi^2 \langle u^2 \rangle\), \(\langle u^2 \rangle\) being the mean square amplitude of thermal vibration in the direction of H. In the isotropic model of vibration, B is considered to be identical in all directions (with normal values between 3 and 6 Angstrom² in crystals of organic compounds). In the anisotropic model, B is considered to follow an ellipsoidal vibration model. Unfortunately, these thermal parameters may reflect not only thermal vibration, as they are affected by other factors such as atomic static disorder, absorption, wrong scattering factors, etc. Decrease of the atomic scattering factor due to the thermal vibration. If the browser allows it, interested readers can also use this applet made by Steffen Weber, which shows the decrease of the atomic scattering factor of an atom when the temperature increases its thermal vibration state. Just write in the left column of the applet the atomic number of an atom (e.g. 80 for mercury), and the same number in the box shown below. Then activate the box marked with the word "Execute" and note the decrease of the scattering factor as a function of the selected temperature. Now increase the temperature (e.g. 2), and re-activate the "Execute" box. X-rays scattered by a set of atoms produce X-ray radiation in all directions, leading to interferences due to the coherent phase differences between the interatomic vectors that describe the relative positions of the atoms. In a molecule or in an aggregate of atoms, this effect is known as the effect of internal interference, while we refer to external interference as the effect that occurs between molecules or aggregates. The scattering diagrams below show the relative intensity of each of these effects: Scattering diagrams of a monoatomic material in different states. In the intensity axis we have neglected the background contribution.
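Before describing those scattering diagrams, the thermal damping just discussed can be made concrete with a minimal sketch (all numbers are purely illustrative):

```python
import numpy as np

def debye_waller_factor(f0, B_iso, theta_deg, wavelength):
    """Atomic scattering factor damped by the isotropic Debye-Waller factor
    exp(-B_iso sin^2(theta) / lambda^2), with B_iso = 8 pi^2 <u^2>."""
    s = np.sin(np.radians(theta_deg)) / wavelength
    return f0 * np.exp(-B_iso * s**2)

# Hypothetical values: f0 = 10 electrons, B = 4 Angstrom^2, Cu K-alpha (1.5418 Angstrom)
for theta in (0, 15, 30, 45):
    print(theta, round(debye_waller_factor(10.0, 4.0, theta, 1.5418), 2))
# a larger B (stronger thermal vibration) would make the fall-off with angle faster
```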
The figures mainly represent the effect of the external interference, while the internal interference (in this case due to a single atom only) is simply reflected by the relative intensity of the maxima. Note how the thermal movement in the liquid softens and reduces the scattering profile, and how the maxima produced by the glass also decrease. In the crystal, where the phase relations are fixed and repetitive, the scattering profile becomes sharp, with well defined peaks, whereas in the other diagrams the peaks are broad and somewhat continuous. In the crystal case the scattering effect is known as diffraction. Note how the scattering phenomenon reflects the internal order of the sample -- the positional correlations between atoms. In the case of monoatomic gases, the effects of interference between atoms m and n lead (in terms of the intensity scattered by an electron) to: \(I(H) = I_e(H) \sum_m \sum_n f_m(H)\, f_n(H)\, \exp[2\pi i\,(s - s_0)\cdot r_{m,n}/\lambda]\), which, when averaged over the duration of the experiment and over all directions of space, gives rise to the Debye formula: \(\langle I(H)\rangle = I_e(H) \sum_m \sum_n f_m(H)\, f_n(H)\, \dfrac{\sin(2\pi |H|\,|r_{m,n}|)}{2\pi |H|\,|r_{m,n}|}\). Geometry of the scattering produced by a set of identical atoms. In the case of monoatomic liquids some effects appear at short distances, due to correlations between atomic positions. If the density of atoms per unit of volume (at a distance r from any atom with spherical symmetry) is, on average, ρ(r), then the expression \(4\pi r^2 \rho(r)\) is known as the radial distribution, and the Debye formula becomes: \(\langle I(H)\rangle = I_e(H)\, N f^2(H) \left[1 + \int_0^\infty 4\pi r^2 \rho(r)\, \dfrac{\sin(2\pi |H| r)}{2\pi |H| r}\, dr\right]\). All these relationships allow the analysis of the X-ray scattering in amorphous, glassy, liquid and gaseous samples. However complex the phenomenon of X-ray scattering may appear, the nonspecialist reader only needs to remember some simple ideas that are outlined below (drawings taken from the lecture by Stephen Curry)...
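As a complement to the gas and liquid cases above, the Debye formula itself is straightforward to evaluate. The following minimal sketch uses a hypothetical rigid pair of identical atoms (a crude diatomic molecule) and arbitrary numbers:

```python
import numpy as np

def debye_intensity(H, positions, f):
    """Orientationally averaged intensity (in units of I_e) from the Debye formula:
    <I> = sum_m sum_n f_m f_n sin(2 pi H r_mn) / (2 pi H r_mn), for identical atoms."""
    I = 0.0
    for ri in positions:
        for rj in positions:
            x = 2 * np.pi * H * np.linalg.norm(ri - rj)
            I += f * f * np.sinc(x / np.pi)    # sin(x)/x, equal to 1 when x = 0
    return I

# Hypothetical diatomic molecule: 1.5 Angstrom bond, carbon-like f = 6
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
for H in (0.01, 0.2, 0.5, 1.0):                # |H| = 2 sin(theta)/lambda, in 1/Angstrom
    print(H, round(debye_intensity(H, pos, 6.0), 1))
# near H = 0 the two atoms scatter in phase (I ~ (2f)^2 = 144); the interference
# term oscillates and damps out as |H| increases.
```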
Note that the figures show a direct unit cell and a reciprocal unit cell only, corresponding to the diffraction patterns shown on the left side of the page. See also direct and reciprocal lattices. Structured in a lattice, any atom can be defined by a vector referred to a common origin: \(R_{j,m_1,m_2,m_3} = m_1\,a + m_2\,b + m_3\,c\), where Rj represents the position of the j-th node in the lattice, m1, m2, m3 are integers and a, b and c are the vectors defining the lattice. According to this, the intensity scattered by a material would be: \(I(H) = I_e(H) \sum_{m_1}\sum_{m'_1}\sum_{m_2}\sum_{m'_2}\sum_{m_3}\sum_{m'_3} f_j(H)\, f_{j'}(H)\, \exp[2\pi i\,(s - s_0)\cdot r_{m,m'}/\lambda]\), where \(r_{m,m'} = R_{m_1,m_2,m_3} - R_{m'_1,m'_2,m'_3} = (m_1 - m'_1)\,a + (m_2 - m'_2)\,b + (m_3 - m'_3)\,c\). Calculating this sum we have: \(I(H) = I_e(H)\; \dfrac{\sin^2[\pi (s - s_0)\cdot M_1 a/\lambda]}{\sin^2[\pi (s - s_0)\cdot a/\lambda]} \cdot \dfrac{\sin^2[\pi (s - s_0)\cdot M_2 b/\lambda]}{\sin^2[\pi (s - s_0)\cdot b/\lambda]} \cdot \dfrac{\sin^2[\pi (s - s_0)\cdot M_3 c/\lambda]}{\sin^2[\pi (s - s_0)\cdot c/\lambda]} = I_e(H)\, I_L(H)\). In this expression, M1, M2, M3 represent the number of unit cells contained in the crystal along the a, b and c directions, respectively, so that in the total sample the number of unit cells would be M = M1·M2·M3 (around \(10^{15}\) in crystals of an average thickness of 0.5 mm). IL(H) is the factor of external interference due to the monoatomic lattice. It consists of several products of the type \(\sin^2(Cx)/\sin^2(x)\), where C is a very large number. This function is almost zero for all x values, except at those points where x is an integer multiple of \(\pi\), where it takes its maximum value of C². The total expression reaches a maximum only when all three products are simultaneously non-zero, where it takes the value M². That is, the diffraction diagram of the direct lattice is another lattice that takes non-zero values at its nodes and that, due to the Ie(H) factor, varies from one place to another... Due to the finite size of the samples, the small chromatic differences of the incident radiation, the mosaic of the sample, etc., the maxima show some spreading around them. Therefore, in order to set the experimental conditions for measurement, one needs a small sample oscillation around the maximum position (rocking) to integrate all these effects and to collect the total scattered energy. Graphical representation of one of the products of the IL(H) function between two consecutive maxima. Note the transformation from scattering to diffraction, that is, from broad to very sharp peaks, as the number of cells M1 increases. The maxima are proportional to M1² and the first minimum appears closer to the maximum with increasing M1. When the material is not structured in terms of a monoatomic lattice, but is formed by a group of atoms of the same or of different types, the position of every atom with respect to a common origin is given by: \(R = T_{m_1,m_2,m_3} + r_j\). Reduction inside a unit cell of the absolute position of an atom through lattice translations; that is, to go from the origin to the atom, at position R, we first go, through the T translation, to the unit cell origin, and from there, with the vector r, we reach the atom.
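Before following the atoms into the unit cell, the interference function IL(H) deserves a quick numerical look. Each of its factors has the form sin²(Mx)/sin²(x); the minimal sketch below, with arbitrary values of M, shows how the peak at x = π grows as M² and sharpens as M increases, which is precisely the transformation from scattering to diffraction mentioned in the caption above.

```python
import numpy as np

def interference_factor(x, M):
    """One factor of the lattice interference function, sin^2(M x)/sin^2(x);
    x plays the role of pi (s - s0).a / lambda and M is the number of cells along a."""
    num = np.sin(M * x) ** 2
    den = np.sin(x) ** 2
    return np.where(den > 1e-12, num / den, float(M) ** 2)   # limit value M^2 at the maxima

x = np.linspace(0.90 * np.pi, 1.10 * np.pi, 5)   # sample around the maximum at x = pi
for M in (5, 20, 100):
    print(M, np.round(interference_factor(x, M), 1))
# the larger M, the higher (M^2) and the narrower the peak around x = pi
```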
As the atom is always included within a unit cell, its coordinates referred to the cell are smaller than the axes, and are often expressed as fractions of them: r = (X/a) a + (Y/b) b + (Z/c) c = x a + y b + z c, where x, y, z, as fractions of the axes, are now numbers between -1 and +1. Then, under the conditions initially raised, i.e. with a monochromatic and depolarised X-ray beam (a plane wave, formed by parallel rays with a common wavefront perpendicular to the propagation unit vector s0) that completely covers the sample, the kinematic model of interaction indicates that the sample produces diffracted beams in the direction s with an intensity given by: I(H) = Ie(H) IF(H) IL(H), where Ie is the intensity scattered by an electron, IL is the external interference effect due to the three-dimensional lattice structure, and IF is the square of the so-called structure factor, a magnitude which takes into account the effect of all internal interferences due to the geometric phase relationships between all atoms contained in the unit cell. This internal structural effect is: \(I_F(H) = |F(H)|^2 = F(H)\,F^*(H)\). As a consequence of the complex representation of waves, mentioned at the beginning, the square of a complex magnitude is obtained by multiplying the complex by its conjugate. Thus, specifically, we give the name structure factor, F(H), to the resultant wave of all the scattered waves produced by all atoms in a given direction: \(F(H) = \sum_{j=1}^{n} f_j(H)\, \exp[2\pi i\,(s - s_0)\cdot r_j/\lambda]\). As already stated, the phase differences due to geometric distances R are proportional to (s - s0)·R/λ. This means that if we change the origin, the phase differences will change according to the geometric changes, in such a way that, as the exponential parts of the intensity functions are complex conjugates of each other, they affect the intensities only through a constant factor. Thus, a change of origin is not relevant to the phenomenon. In the equation of the total intensity, I(H), the conditions to get a maximum lead to the following consequences: Diffraction patterns of: (a) a single molecule, (b) two molecules, (c) four molecules, (d) a periodically distributed linear array of molecules, (e) two linear arrays of molecules, and (f) a two-dimensional lattice of molecules. Note how the pattern of the latter is the pattern of the molecule sampled at the reciprocal points. To clarify what has been said above, the reader can analyze further objects and their corresponding diffraction patterns through this link. We also suggest watching the video prepared by the Royal Institution to demonstrate optically the basis of diffraction using a wire coil (representing a molecule) and a laser (representing an X-ray beam). We have seen that the diffraction diagram of a direct lattice defined by three translations, a, b and c, can be expressed in terms of another lattice (the reciprocal lattice) with its reciprocal translations a*, b* and c*, and these translation vectors (direct and reciprocal) meet the conditions of reciprocity: a·a* = b·b* = c·c* = 1 and a·b* = a·c* = b·c* = 0, and they also satisfy (for instance): a* = (b x c) / V (x means vectorial or cross product), where V is the volume of the direct unit cell defined by the 3 vectors of the direct cell, and therefore: a* = N100 / d100, where N100 is a unit vector perpendicular to the planes of indices h=1, k=0, l=0, and where d100 is the corresponding interplanar spacing.
And similarly with b* and c*. In this way, any vector in the reciprocal lattice will be given by: \(H^*_{hkl} = h\,a^* + k\,b^* + l\,c^* = N_{hkl}/d_{hkl}\), with \(|H^*_{hkl}|\, d_{hkl} = 1\). On the other hand, we have seen that the maxima in the diffraction diagram of a crystal correspond to the maxima of the function IL(H), meaning that each of the products that define this function must be individually different from zero, as a sufficient condition to obtain a maximum for the diffracted intensity. If we remember that H = (s - s0) / λ, this also means that the three so-called Laue equations must be fulfilled [Max von Laue]: \(a \cdot (s - s_0)/\lambda = h\), \(b \cdot (s - s_0)/\lambda = k\), \(c \cdot (s - s_0)/\lambda = l\), where h, k, l are integers (Laue equations). There is also a less formal way to derive and/or to understand the Laue equations, and therefore we invite interested readers to visit this link... These three Laue conditions are met if the vector H represents a vector of the reciprocal lattice, so that: H = h a* + k b* + l c*, since, due to the properties of the reciprocal lattice, it can be stated that: \(H_{hkl} \cdot a = h\), \(H_{hkl} \cdot b = k\), \(H_{hkl} \cdot c = l\). Said in other words: the three conditions of Laue (Nobel Prize in Physics in 1914) are sufficient to establish that the vector H is a vector of the reciprocal lattice (H = H*hkl), so that \(|H| = 2 \sin\theta_{hkl}/\lambda = |(s - s_0)|/\lambda = |H^*_{hkl}| = 1/d_{hkl}\), and this is Bragg's Law [William L. Bragg], which can be rewritten in its usual form as: \(\lambda = 2 d_{hkl} \sin\theta_{hkl}\). But taking into account that geometrically we can consider spacings of the type dhkl/2, dhkl/3, and in general dhkl/n (i.e. dnh,nk,nl, where n is an integer), Bragg's equation (Nobel Prize in Physics in 1915) can be written in the form: \(\lambda = 2 (d_{hkl}/n) \sin\theta_{nh,nk,nl}\), where n is an integer (Bragg's Law). There is also a less formal way to derive and/or to understand Bragg's Law, and therefore we invite interested readers to visit this link... Moreover, if the Laue conditions are fulfilled (as explained in the following figure), all atoms located on the sequence of planes parallel to the one with indices hkl at a given distance (DP) from the origin (DP being an integer multiple of dhkl) will diffract in phase, and their geometric difference-of-phase factor will be: (s - s0)·r = n λ, and consequently a diffraction maximum will be produced in the direction: s = s0 + λ H*hkl, with Nhkl = H*hkl dhkl. The plane equation can, therefore, be written as: \(H^*_{hkl} \cdot r = H^*_{hkl} \cdot r_i = |H^*_{hkl}|\,|r_i| \cos(H^*_{hkl}, r_i) = (1/d_{hkl})\, DP = n\). Moreover, this equation contains all the traditional reciprocity relations of diffraction, between spacing and direction, or position and momentum: the shorter the spacing, the larger the angle, and vice versa; direct lattices with large unit cells produce closely spaced diffracted beams, and vice versa. The figure geometrically describes the direction of the diffracted beam due to the constructive interference between atoms located on the planes with interplanar spacing d(hkl). The figure depicts a description of Bragg's model when different types of atoms are located on their respective parallel planes with Δd spacing. The separation between blue and green planes creates interferences and differences of phase (between the reflected beams), giving rise to changes in intensity (depending on the direction). These intensity changes allow us to get information on the arrangement of the atoms that form the crystal. Readers with installed Java Runtime tools can play with Bragg's model using this applet.
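To make the last two ideas concrete, here is a minimal sketch with a completely hypothetical two-atom motif and made-up numbers. Since H·rj = hxj + kyj + lzj for fractional coordinates, the structure factor can be evaluated as F(hkl) = Σj fj exp[2πi(hxj + kyj + lzj)], and Bragg's Law gives the angle at which a given spacing would diffract:

```python
import numpy as np

def structure_factor(hkl, atoms):
    """F(hkl) = sum_j f_j exp[2 pi i (h x_j + k y_j + l z_j)], with (x, y, z)
    the fractional coordinates of each atom in the unit cell."""
    h, k, l = hkl
    return sum(f * np.exp(2j * np.pi * (h * x + k * y + l * z))
               for f, (x, y, z) in atoms)

def bragg_angle(d_hkl, wavelength, n=1):
    """theta (degrees) from n lambda = 2 d sin(theta); None if sin(theta) > 1."""
    s = n * wavelength / (2.0 * d_hkl)
    return np.degrees(np.arcsin(s)) if s <= 1.0 else None

# Hypothetical two-atom motif (scattering factors and coordinates are made up):
atoms = [(10.0, (0.0, 0.0, 0.0)), (6.0, (0.5, 0.5, 0.5))]
for hkl in [(1, 0, 0), (2, 0, 0), (1, 1, 1)]:
    F = structure_factor(hkl, atoms)
    print(hkl, "F =", np.round(F, 2), "|F|^2 =", round(abs(F) ** 2, 1))
# (1,0,0): the atoms scatter in phase opposition, F = 10 - 6; (2,0,0): in phase, F = 10 + 6

# Bragg angles for some hypothetical spacings with Cu K-alpha (1.5418 Angstrom):
for d in (3.0, 1.5, 0.7):
    print(d, bragg_angle(d, 1.5418))    # 0.7 Angstrom < lambda/2 gives no reflection
```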
On the other hand, we have seen that, in general, H = (s - s0)/λ, with s and s0 unit vectors, and this means that the vectors H can be considered as belonging to a sphere of radius 1/λ centered at a point defined by the vector -s0/λ with respect to the origin where the crystal is. This is known as Ewald's sphere (Ewald, 1921), which provides a very easy geometric interpretation of the directions of the diffracted beams. When the H vectors belong to the reciprocal lattice and the end of the vector (a reciprocal point) lies on that spherical surface, diffracted beams are produced, and obviously the crystal planes are in Bragg's position. It is amazing how quickly Paul Peter Ewald developed this interpretation, only some months after Max von Laue's experiments. His original article, published in 1913 (in German), is available through this link. The advanced reader can also consult the article published by Ewald in Acta Crystallographica A25, 103-108. This figure describes Ewald's geometric model. When a reciprocal point, P*(hkl), touches the surface of Ewald's sphere, a diffracted beam is produced starting at the centre of the sphere and passing through the point P*(hkl). Actually the origin of the reciprocal lattice, O*, coincides with the position of the crystal, and the diffracted beam starts from this common origin, parallel to the one drawn in this figure, exactly as depicted in the figure below. This figure shows the whole reciprocal volume that can give rise to diffracted beams when the sample rotates. By changing the orientation of the reciprocal lattice, one can collect all the beams corresponding to the reciprocal points contained in a sphere of radius 2/λ known as the limiting sphere. Reciprocal points are shown as small gray spheres. To obtain all possible diffracted beams that a sample can provide, using radiation of wavelength λ, it is sufficient to conveniently orient the crystal and make it turn, so that its reciprocal points will have the opportunity to lie on the surface of Ewald's sphere. In these circumstances, diffracted beams will originate as described above. With larger wavelengths, the volume of the reciprocal space that can be explored will be smaller, but the diffracted beams will appear more separated. Ewald's model showing how diffraction occurs. The incident X-ray beam, with wavelength λ, shown as a white line, "creates" an imaginary Ewald's sphere of diameter 2/λ (shown in green). The reciprocal lattice (red points) rotates as the crystal rotates, and every time a reciprocal point cuts the sphere surface a diffracted beam is produced from the center of the sphere (yellow arrows). According to Bragg's Law, the maximum angle at which one can observe diffraction corresponds to the angle where the sine function is maximum (= 1). This also means that the theoretical maximum resolution that can be achieved is λ/2.
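The Ewald construction can also be checked numerically. In the minimal sketch below (all values are hypothetical), a reciprocal vector H fulfils the diffraction condition when its end point lies on the sphere of radius 1/λ centered at -s0/λ, that is, when |H + s0/λ| = 1/λ:

```python
import numpy as np

def on_ewald_sphere(H, s0, wavelength, tol=1e-6):
    """True if the reciprocal point H (in 1/Angstrom) fulfils the diffraction
    condition |H + s0/lambda| = 1/lambda, s0 being a unit vector along the incident beam."""
    center = -np.asarray(s0, float) / wavelength    # sphere center, relative to O*
    radius = 1.0 / wavelength
    return abs(np.linalg.norm(np.asarray(H, float) - center) - radius) < tol

lam = 1.5418                                        # Cu K-alpha, in Angstrom
s0 = np.array([1.0, 0.0, 0.0])                      # incident beam direction

# A reciprocal point of modulus 1/d rotated exactly into the Bragg orientation:
d = 2.0                                             # hypothetical spacing, in Angstrom
theta = np.arcsin(lam / (2 * d))
H = (1.0 / d) * np.array([-np.sin(theta), np.cos(theta), 0.0])

print(on_ewald_sphere(H, s0, lam))                  # True: this point diffracts
print(on_ewald_sphere([0.5, 0.0, 0.0], s0, lam))    # False for a general orientation
```

Rotating the crystal, and with it the whole set of H vectors, is what brings successive reciprocal points into this condition, as described above.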
In practice, due to the decrease of the atomic scattering factors with increasing Bragg angles, appreciable intensities will appear only up to a maximum angular value of θmax < 90º, and the real maximum resolution reached will be dmin = λ / (2 sin θmax). Considering that the interplanar spacings dhkl are a characteristic of the sample, Bragg's Law indicates that by reducing the wavelength the diffraction angles (θ) will decrease; the pattern shrinks, but, on the other hand, more diffraction data will be obtained, and therefore a better structural resolution will be achieved. According to Ewald's model, the amount of reciprocal space to be measured can be increased by reducing the wavelength, that is, by increasing the radius of the Ewald sphere. It is also very helpful to visit the pages on reciprocal space offered by the University of Cambridge through this link, as well as to look at the video made by www.PhysicsReimagined.com, showing the geometric relationships between direct and reciprocal lattices, displayed below as an animated gif: Once the foundations of the theoretical model which describes the phenomenon of diffraction are set, we encourage the reader to visit the pages dedicated to the different experimental methods to measure the diffraction intensities. This page titled 1.5: Scattering and diffraction is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.6: Experimental diffraction
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.06%3A_New_Page
In the context of this chapter, you will also be invited to visit these sections...Regardless of the huge improvements that have occurred for X-ray generation, the techniques used to measure the intensities and angles of diffraction patterns have evolved over time. In the first diffraction experiment, Friedrich and Knipping used a film sensitive to X-rays, but even in the same year, Bragg used an ionization chamber mounted on a rotating arm that, in general, could more accurately determine angles and intensities. However, the film technique had the advantage of being able to collect many diffracted beams at the same time, and thus during the first years of structural Crystallography (from 1920 to 1970) an extensive use of photographic methods was made. Among them the following techniques should be highlighted: Laue, Weissenberg, precession and oscillation. Since the mid-1970's, photographic methods have been gradually replaced by goniometers coupled with point detectors which subsequently have been replaced by area detectors.For his first experiments, Max von Laue (1879-1960 (Nobel Prize in Physics in 1914) used continuous radiation (with all possible wavelengths) to impact on a stationary crystal. With this procedure the crystal generates a set of diffracted beams that show the internal symmetry of the crystal. In these circumstances, and taking into account Bragg's Law, the experimental constants are the interplanar spacing d and the crystal position referred to the incident beam. The variables are the wavelength λ and the integer number n:n λ = 2 dhkl sin θnh,nk,nlThus, for the same interplanar spacing d, the diffraction pattern will contain the diffracted beams corresponding to the first order of diffraction (n=1) of a certain wavelength, the second order (n=2) of half the wavelength (λ/2), the third order (n=3) with wavelength λ/3, etc. Therefore, the Laue diagram is simply a stereographic projection of the crystal. See also the Java simulation offered through this link. Right: The Laue method in reflection modeThe Weissenberg method is based on a camera with the same name, developed in 1924 by the Austrian scientist Karl Weissenberg. In order to understand Weissenberg’s contribution to X-ray crystallography one should read the two following articles that some years ago were offered to the British Society of Rheology: "Weissenberg’s Influence on Crystallography" (by H. Lipson) (use this link in case of problems) and "Karl Weissenberg and the development of X-ray crystallography" (by M.J. Buerger). The camera consists of a metallic cylinder that contains a film sensitive to X-rays. The crystal is mounted on a shaft (coaxial with the cylinder) that rotates. According to Ewald's model, the reciprocal points will intersect the surface of Ewald's sphere and diffracted beams will be produced. The diffracted beams generate black spots on the photographic film, which when removed from the metallic cylinder, appears as shown below. Left: Scheme and example of a Weissenberg camera. This camera type was used in crystallographic laboratories until about 1975. Right: Camera developed by K. Weissenberg in 1924Two types of diffraction diagrams can be easily obtained with the Weissenberg cameras, depending on the amount of crystal rotation: oscillation diagrams (rotation of approx. +/-20 degrees) or full rotation diagrams (360 degrees) respectively. 
Oscillation diagrams are used to center the crystal, that is, to ensure that the rotation of axis coincides exactly with a direct axis, which is equivalent to saying that reciprocal planes (which by geometric construction are perpendicular to a direct axis ) generate lines of spots on the photographic film. Once centering is achieved, the full rotation diagrams are used to evaluate the direct axis of the crystal, which coincides with the spacing between the dot lines on the diagram. Scheme explaining the production of a Weissenberg diagram of the rotation or oscillation variety. When the reciprocal points, belonging to the same reciprocal plane, touch the surface of Ewald's sphere, they produce diffracted beams arranged in cones.As shown in the diagram above, each horizontal line of points represents a reciprocal plane perpendicular to the axis of rotation as projected on the photographic plate. The figure on the left shows the real appearance of a Weissenberg diagram of this type, rotation-oscillation. As explained below, the distance between the horizontal spot lines provides information on the crystal repetition period in the vertical direction of the film. These diagrams were also used to align mounted crystals... This technique requires that the crystal rotation axis is coincident with an axis of its direct lattice, so that the reciprocal planes are collected as lines of spots as is shown on the left. The crystal must be mounted in such a way that the rotation axis coincides with a direct axis of the unit cell. Thus, by definition of the reciprocal lattice, there will be reciprocal planes perpendicular to that axis. The reciprocal points (lying on these reciprocal planes) rotate when the crystal rotates and (after passing through the Ewald sphere) produce diffracted beams that arranged in cones, touch the cylindrical film and appear as aligned spots (photograph on the left). It seems obvious that these diagrams immediately provide information about the repetition period of the direct lattice in the direction perpendicular to the horizontal lines (reciprocal planes). However, those reciprocal planes (two dimensional arrays of reciprocal points) are represented as projections (one dimension) on the film and therefore a strong spot overlapping is to be expected.The problem with spot overlap was solved by Weissenberg by adding a translation mechanism to the camera, in such a way that the cylinder containing the film could be moved in a "back-and-forth" mode (in the direction parallel to the axis of rotation) coupled with the crystal rotation. At the same time, he introduced two internal cylinders (as is shown in the left figure, and also below). In this way, only one of the diffracted cones (those from a reciprocal layer) is "filtered" and therefore allowed to reach the photographic film. Thus, a single reciprocal plane (a 2-dimensional array of reciprocal points) is distributed on the film surface (two dimensions) and therefore the overlap effect is avoided. However, as a consequence of the back and forth translation of the camera during the rotation of the crystal, a deformation is originated in the distribution of the spots (diffraction intensities)The appearance of such a diagram, which produces a geometrical deformation of the collected reciprocal plane, is shown below. Taking into account this deformation, one can easily identify every spot of the selected reciprocal plane and measure its intensity. 
To select the remaining reciprocal planes one just has to shift the internal cylinders and collect their corresponding diffracted beams (arranged in cones). Left: Details of the Weissenberg camera used to collect a cone of diffracted beams. Two internal cylinders showing a slit, through which a cone of diffracted beams is allowed to reach the photographic film. The outer cylinder, containing the film, moves back-and-forth while the crystal rotates, and so the spots that in the previous diagram type were in a line (see above) are now distributed on the film surface (see the figure on the right). Right: Weissenberg diagram showing the reciprocal plane of indices hk2 of the copper metaborate. The precession method was developed by Martin J. Buerger at the beginning of the 1940's as a very clever alternative to collect diffracted intensities without distorting the geometry of the reciprocal planes. As in the Weissenberg technique, precession methodology is also based on a moving crystal, but here the crystal moves (and so does the coupled reciprocal lattice) as the planets do, and hence its name. In this case the film is placed on a planar cassette that moves following the crystal movements. In the precession method the crystal has to be oriented so that the reciprocal plane to be collected is perpendicular to the X-rays' direct beam, ie a direct axis coincides with the direction of the incident X-rays. Two schematic views showing the principle on which the precession camera is based. μ is the precession angle around which the reciprocal plane and the photographic film move. During this movement the reciprocal plane and the film are always kept parallel.The camera designed for this purpose and the appearance of a precession diagram showing the diffraction pattern of an inorganic crystal are shown in the figures below. Left: Scheme and appearance of a precession camera Right: Precession diagram of a perovskite showing cubic symmetry Precession diagrams are much simpler to interpret than those of Weissenberg, as they show the reciprocal planes without any distortion. They show a single reciprocal plane on a photographic plate (picture above) when a circular slit is placed between the crystal and the photographic film. As in the case of Weissenberg diagrams, we can readily measure distances and diffraction intensities. However, with these diagrams it is much easier to observe the symmetry of the reciprocal space. The only disadvantage of the precession method is a consequence of the film, which is flat instead of cylindrical, and therefore the explored solid angle is smaller than in the Weissenberg case.The precession method has been used successfully for many years, even for protein crystals: Left: Precession diagram of a lysozyme crystal. One can easily distinguish a four-fold symmetry axis perpendicular to the diagram. According to the relationships between direct and reciprocal lattices, if the axes of the unit cell are large (as in this case), the separation between reciprocal points is small. Right: Precession diagram of a simple organic compound, showing mm symmetry (two mirror planes perpendicular to the diagram). Note that the distances between reciprocal points is much larger (smaller direct unit cell axes) than in the case of proteins (see the figure on the left).Originally, the methods of rotating the crystal with a wide rotation angle were very successfully used. 
However, when it was applied to crystals with larger direct cells (i.e. small reciprocal cells), the collecting time increased. Therefore, these methods were replaced by methods using small oscillation angles, allowing multiple parts of different reciprocal planes to be collected at once. Collecting this type of diagram at different starting positions of the crystal is sufficient to obtain enough data in a reasonable time. The geometry of collection is described in the figures shown below. Nowadays, with rotating anode generators, synchrotrons, and area detectors (image plate or CCD, see below), this is the method widely used, especially for proteins. Outline of the geometrical conditions for diffraction in the oscillation method. The crystal, and therefore its reciprocal lattice, oscillates through a small angle around an axis (perpendicular to the plane of the figure) which passes through the center. In the figure on the right, the reciprocal region that passes through the diffraction condition, within the sphere of radius 2·sin 90°/λ = 2/λ (the limiting sphere of Ewald's construction), is denoted in yellow. The maximum resolution that can be obtained in the experiment is given by 2·sin θmax/λ. When the reciprocal lattice is oscillated through a small angle around the rotation axis, small areas of different reciprocal planes will cross the surface of Ewald's sphere, reaching the diffraction condition. Thus, the detector screen will show diffraction spots from the different reciprocal planes forming small "lunes" on the diagram (figure on the right). A "lune" is a plane figure bounded by two circular arcs of unequal radii, i.e., a crescent. The introduction of digital computers in the late 1970s led to the design of the so-called automatic four-circle diffractometers. These goniometers, with very precise mechanics and by means of three rotation axes, allow crystal samples to be brought to any orientation in space, fulfilling Ewald's requirements to produce diffraction. Once the crystal is oriented, a fourth axis of rotation, which supports the electronic detector, is placed in the right position to collect the diffracted beam. All these movements can be programmed in an automatic mode, with minimal operator intervention. Two different goniometric geometries have been used very successfully for many years. In the Eulerian goniometer (see the figure below) the crystal is oriented through the three Euler angles (three circles): Φ represents the rotation axis around the goniometer head (where the crystal is mounted), χ allows the crystal to roll over the closed circle, and ω allows the full goniometer to rotate around a vertical axis. The fourth circle represents the rotation of the detector, 2θ, which is coaxial with ω. This geometry has the advantage of a high mechanical stability, but presents some restrictions for external devices (for instance, low- or high-temperature devices) to access the crystal. Right: Rotations in a four-circle goniometer with Eulerian geometry. An alternative to the Eulerian geometry is the so-called Kappa geometry, which does not have an equivalent to the closed χ circle. The role of the Eulerian χ rotation is fulfilled by means of two new axes: κ (kappa) and ωκ (see the figure below), in such a way that with a combination of both new angles one can obtain Eulerian χ angles in the range -90 to +90 degrees. The main advantage of this Kappa geometry is the wide accessibility to the crystal.
The introduction of digital computers in the late 1970s led to the design of the so-called automatic four-circle diffractometers. These goniometers, with very precise mechanics and by means of three rotation axes, allow crystal samples to be brought to any orientation in space, fulfilling Ewald's requirements to produce diffraction. Once the crystal is oriented, a fourth axis of rotation, which supports the electronic detector, is placed in the right position to collect the diffracted beam. All these movements can be programmed in an automatic mode, with minimal operator intervention. Two different goniometric geometries have been used very successfully for many years. In the Eulerian goniometer (see the figure below) the crystal is oriented through the three Euler angles (three circles): Φ represents the rotation axis around the goniometer head (where the crystal is mounted), χ allows the crystal to roll over the closed circle, and ω allows the full goniometer to rotate around a vertical axis. The fourth circle represents the rotation of the detector, 2θ, which is coaxial with ω. This geometry has the advantage of high mechanical stability, but presents some restrictions for external devices (for instance, low or high temperature devices) to access the crystal. Right: Rotations in a four-circle goniometer with Eulerian geometry. An alternative to the Eulerian geometry is the so-called Kappa geometry, which does not have an equivalent to the closed χ circle. The role of the Eulerian χ rotation is fulfilled by means of two new axes: κ (kappa) and ωκ (see the figure below), in such a way that with a combination of both new angles one can obtain Eulerian χ angles in the range -90 to +90 degrees. The main advantage of this Kappa geometry is the wide accessibility to the crystal. The angles Φ and 2θ are identical to those in Eulerian geometry: Scheme and appearance of a four-circle goniometer with Kappa geometry. The detection system widely used for many years for both geometries (Euler and Kappa) was based on small-area counters or point detectors. With these detectors the intensity of the diffracted beams must be measured individually, one after the other, and therefore all angles had to be changed automatically according to previously calculated values. Typical measurement times for such detector systems are around 1 minute per reflection. One of the point detectors most widely used for many years is the scintillation counter, whose scheme is shown below: Scheme of a scintillation counter As an alternative to the point detectors, the development of electronic technology has led to the emergence of so-called area detectors which allow the detection of many diffracted beams simultaneously, thereby saving time in the experiment. This technology is particularly useful for proteins and generally for any material that can deteriorate during its exposure to X-rays, since the detection of every collected image (with several hundreds of reflections) is done in a minimum time, on the order of minutes (or seconds if the X-ray source is a synchrotron). One of the area detectors most commonly used is based on the so-called CCDs (Charge-Coupled Devices), whose scheme is shown below: Schematic view of a CCD with its main components. The X-ray converter, in the figure shown as Phosphor, can also be made with other materials, such as GdOS, etc. The CCD converts X-ray photons at high speed, but its disadvantage is that it has to operate at very low temperatures (around -70 °C). Image taken from ADSC Products. CCD-type detectors are usually mounted on Kappa goniometers and their use is widespread in the field of protein crystallography, with rotating anode generators or synchrotron sources. Left: Goniometer with Kappa geometry and CCD detector (Image taken from Bruker-AXS) Right: Details of a Kappa goniometer (in this case with a fixed κ angle) Another type of detector widely used today, especially in protein crystallography, is the Image Plate Scanner, which is usually mounted on a relatively rudimentary goniometer, whose only freedom is a rotation axis parallel to the crystal mounting axis. The sensor itself is a circular plate of material sensitive to X-rays. After exposure, a laser is used to scan the plate and read out the intensities. Left: Image Plate Scanner. (image taken from Marxperts) Right: Components of an Image Plate Scanner. The latest technology involves the use of area detectors based on CMOS (complementary metal-oxide semiconductor) technology, which has a very short readout time, allowing for increased frame rates during the data collection. Area detectors. XALOC, the beamline for macromolecular crystallography (left) at the Spanish synchrotron ALBA (right) In summary, a complete data collection with this type of detector consists of multiple images such as the ones shown below. The collected images are subsequently analyzed in order to obtain the crystal unit cell data, symmetry (space group) and intensities of the diffraction pattern (reciprocal space). This process is explained in more detail in another section. Left: Diffraction image of a protein, obtained with the oscillation method in an Image Plate Scanner. During the exposure time (approx. 5 minutes with a rotating anode generator, or approx. 
5 seconds at a synchrotron facility) the crystal rotates about 0.5 degrees around the mounting axis. The read-out of the image takes about 20 seconds (depending on the area of the image plate). This could also be the appearance of an image taken with a CCD detector. However, with a CCD the exposure time would be shorter. Right: A set of consecutive diffraction images obtained with an Image Plate Scanner or a CCD detector. After several images two concentric dark circles appear, corresponding to an infinite number of reciprocal points. They correspond to two consecutive diffraction orders of randomly oriented ice microcrystals that appear due to some defect of the cryoprotectant or to some humidity of the cold nitrogen used to cool down the sample. Images are taken from Janet Smith Lab. See also the example published by Aritra Pal and Georg Sheldrick. In all of these described experimental methodologies (except for the Laue method), the radiation used is usually monochromatic (or nearly monochromatic), which is to say, radiation with a single wavelength. Monochromatic radiation is usually obtained with the so-called monochromators, a system composed of single crystals which, based on Bragg's Law, are able to "filter" the polychromatic input radiation and select only one of its wavelengths (color), as shown below: Scheme of a monochromator. Polychromatic radiation (white) coming from the left is "reflected" according to Bragg's Law, which "filters" the input radiation; the selected beam is then reflected again on a secondary crystal. Image taken from ESRF. At present, in crystallographic laboratories or even in synchrotron beamlines, the traditional monochromators are being replaced by new optical components that have demonstrated superior efficacy. These components, usually known as "focusing mirrors", can be based on several different physical phenomena. It can also be very instructive to look at this animated diagram showing the path of each X-ray photon in a given diffraction system: The original video can be seen at //vimeo.com/52155723. In order to get the largest and best collection of diffraction data, crystal samples are usually maintained at a very low temperature (about 100 K, that is, about -170 °C) using a dry nitrogen stream. At low temperatures, crystals (and especially those of macromolecules) are more stable and resist the effects of X-ray radiation much better. At the same time, the low temperature further reduces the atomic thermal vibration factors, facilitating their subsequent location within the crystal structure. Cooling system using dry nitrogen obtained from liquid nitrogen. Image taken from Oxford Cryosystems. To mount the crystals on the goniometer head, in front of the cold nitrogen stream, crystallographers use special loops (like the one depicted in the left figure) which fix the crystal in a matrix transparent to X-rays. This is especially useful for protein crystals, where the matrix also acts as cryo-protectant (anti-freeze). The molecules of the cryo-protectant spread through the crystal channels, replacing the water molecules and thus avoiding crystal rupture due to frozen water. Left: Detail of a mounted crystal using a loop filled with an antifreeze matrix Right: Checking the position of the crystal in the goniometric optical center. Video courtesy of Ed Berry. In any case, the crystal center must be coincident with the optical center of the goniometer, where the X-ray beam is also passing through. 
In this way, when the crystal rotates, it will always be centered on that point, and in any of its positions will be bathed by the X-ray beam. Cryo-protection system mounted on a goniometer. The nitrogen flow at -170 °C (coming through the upper tube) cools the crystal mounted on the goniometer head. The collimator of the X-ray beam points toward the crystal from the left of the image. Note the slight mist generated by the cold nitrogen when mixed with air humidity. Visually analyzing the quality of the diffraction pattern. In summary, all of these methodologies can be used to obtain a data collection, consisting of three Miller indices and an intensity for each diffracted beam, that is, for the largest possible number of points of the reciprocal lattice. This implies evaluating both the geometry and the intensities of the whole diffraction pattern. All these data, crystal unit cell dimensions, crystal symmetry (space group) and intensities associated with the reciprocal points (diffraction pattern), will allow us to "see" the internal structure of the crystal, but this issue will be shown in another chapter... This page titled 1.6: Experimental diffraction is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.7: Structural Resolution
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.07%3A_New_Page
In previous chapters, we have seen how X-rays interact with periodically structured matter (crystals), and the implicit question raised in those earlier chapters is: Can we "see" the internal structure of crystals? Or, in other words, can we "see" the atoms and molecules that build crystals? The answer is definitely yes! Left: Molecular structure of a pneumococcal surface enzyme Center: Molecular packing in the crystal of a simple organic compound, showing its crystallographic unit cell Right: Geometric details showing several molecular interactions in a fragment of the molecular structure of a protein. As the examples above demonstrate, crystallography can show us very large and complicated molecular structures (left figure) and how molecules pack together in a crystal structure (center figure). We can also see every geometric detail, as well as the different types of interactions, among molecules or parts of them (right figure). However, for a better understanding of the fundamentals on which this response is based, it is necessary to introduce some new concepts or refresh some of the previously seen ones... In previous chapters we have seen that crystals represent organized and ordered matter, consisting of associations of atoms and/or molecules and corresponding to a natural state of matter with a minimum of energy. We also know that crystals can be described by repeating units in the three directions of space, and that this space is known as direct or real space. These repeating units are known as unit cells (which also serve as a reference system to describe the atomic positions). This direct or real space, the same in which we live, can be described by the electron density, ρ(xyz), a function defined at each point of coordinates (xyz) in the unit cell, where, in addition, symmetry elements operate, repeating atoms and molecules within the cell. Unit cell (left) whose three-dimensional stacking builds a crystal (right) Motifs (atoms, ions or molecules) are repeated by symmetry operators inside the unit cell. Unit cells are stacked in three dimensions, following the rules of the lattice, building the crystal. We have also learned that X-rays interact with the electrons of the atoms in the crystals, resulting in a diffraction pattern, also known as reciprocal space, with the properties of a lattice (reciprocal lattice) with a certain symmetry, and where we can also define a repeating cell (reciprocal cell). The "points" of this reciprocal lattice contain the information on the diffraction intensity. Left: Interaction between two waves scattered by electrons. The resulting waves show areas of darkness (destructive interference), depending on the angle considered. Image originally taken from physics-animations.com. Right: One of the hundreds of diffraction images of a protein crystal. 
The black spots on the image are the result of the cooperative scattering (diffraction) from the electrons of all atoms contained in the crystal. Through this cooperative scattering (diffraction), scattered waves interact with each other, producing a single diffracted beam in each direction of space, so that, depending on the phase differences (advance or delay) among the individual scattered waves, they add or subtract, as shown in the two figures below: Interference of two waves with the same amplitude and frequency (animation taken from The Pennsylvania State University) Composition of two scattered waves. A = resultant amplitude; I = resultant intensity (~ A²) (a) totally in phase (the total effect is the sum of both waves) (b) with a certain difference of phase (they add, but not totally) (c) out of phase (the resultant amplitude is zero) 
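The composition of scattered waves shown in cases (a), (b) and (c) above can be reproduced numerically by adding the waves as complex numbers. The small sketch below (with assumed, equal amplitudes) prints the resultant amplitude A and the intensity I ~ A² for the three phase differences; it is only an illustration of the figure, not part of the original text.

```python
# Composition of two scattered waves of equal amplitude (illustrative values).
import numpy as np

A1 = A2 = 1.0                                   # equal amplitudes (assumed)
for delta_phi in (0.0, np.pi / 3, np.pi):       # in phase, partly out of phase, out of phase
    resultant = A1 * np.exp(1j * 0.0) + A2 * np.exp(1j * delta_phi)
    A = abs(resultant)                          # resultant amplitude
    print(f"phase difference = {np.degrees(delta_phi):5.1f} deg -> A = {A:.3f}, I ~ {A**2:.3f}")
```

For a zero phase difference the amplitudes simply add (A = 2, I = 4), while for a 180 degree difference the resultant amplitude is zero, exactly as sketched in cases (a) and (c).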
Between the two mentioned spaces (direct and reciprocal) there is a holistic relationship (every detail of one of the spaces affects the whole of the other, and vice versa). Mathematically speaking this relationship is a Fourier transform that cannot be solved directly, since the diffraction experiment does not allow us to know one of the fundamental magnitudes of the equation, the relative phases (Φ) of the diffracted beams. Left: Holistic relationship between direct space (left) and reciprocal space (right). Every detail of the direct space (left) depends on the total information contained in the reciprocal space (right), and vice versa... Every detail of the reciprocal space (right) depends on the total information contained in the direct space (left). Right: Graphical representation of the phase difference between two waves. Relative phase between waves. The diagram below, with the help of the following paragraph, summarizes what the resolution of a crystalline structure through X-ray diffraction implies ... Atoms, ions, and molecules are packed into units (unit cells) that are stacked in three dimensions to form a crystal in space that we call direct or real space. The diffraction effects of the crystal can be represented as points of a mathematical lattice space that we call the reciprocal lattice. The diffraction intensities, that is, the blackening of these points of the reciprocal lattice, are related to the moduli of some fundamental vector quantities, which we call structure factors. If we get to know not only the moduli of these vectors (obtained from the intensities), but their relative orientations (that is, their relative phases), we will be able to obtain the value of the electron density function at each point of the unit cell, thus providing the positions of the atoms that make up the crystal. Outline on basic crystallographic concepts: direct and reciprocal spaces. The issue is to obtain information on the left side (direct space) from the diffraction experiment (reciprocal space). In order to know (or to see) the internal structure of a crystal we have to solve a mathematical function known as the "electron density", a function that is defined at every point in the unit cell (a basic concept of the crystal structure introduced in another chapter). The function of electron density, represented by the letter ρ, has to be solved at each point within the unit cell given by the coordinates (x, y, z), referred to the unit cell axes. The points where this function takes maximum values (estimated in electrons per cubic Angstrom) are where the atoms are located. That means that if we are able to calculate this function, we will "see" the atomic structure of the crystal. Formula 1. Function defining the electron density at a point of the unit cell given by the coordinates (x, y, z): \[\rho(x y z)=\frac{1}{V} \sum_{h k l}|F(h k l)| \cdot e^{-2 \pi i[h x+k y+l z-\Phi(h k l)]}\] Left: Appearance of a zone of the electron density map of a protein crystal, before it is interpreted. Right: The same electron density map after its interpretation in terms of a peptidic fragment. The equation above (Formula 1) represents the Fourier transform between the real or direct space (where the atoms are, represented by the function ρ) and the reciprocal space (the X-ray pattern) represented by the structure factor amplitudes and their phases. Formula 1 also shows the holistic character of diffraction, because in order to calculate the value of the electron density at a single point of coordinates (xyz) it is necessary to use the contributions of all structure factors produced by the crystal diffraction. The structure factors F(hkl) are waves and therefore can be represented as vectors by their amplitudes, |F(hkl)|, and phases Φ(hkl) measured from a common origin of phases. When the unit cell is centrosymmetric, for each atom at coordinates (xyz) there is an identical one located at (-x,-y,-z). This implies that Friedel's law holds, F(h,k,l) = F(-h,-k,-l), and the expression of the electron density (Formula 1) is simplified, becoming Formula 1.1. And the phases of the structure factors are also simplified, becoming 0° or 180°... Formula 1.1. Electron density function at a point of coordinates (x, y, z) in a centrosymmetric unit cell: \[\rho(x y z)=\frac{1}{V} \sum_{h k l} \pm|F(h k l)| \cdot \cos [2 \pi(h x+k y+l z)]\] where the sign of each term corresponds to a phase of 0° or 180°. It is important to realize that the quantity and quality of information provided by the electron density function, ρ, are very dependent on the quantity and quality of the data used in the formula: the structure factors F(hkl) (amplitudes and phases!). We will see later on that the amplitudes of the structure factors are directly obtained from the diffraction experiment. The analytic expression of the structure factors, F(hkl), is simple and involves a new magnitude (ƒj) called atomic scattering factor (defined in a previous chapter) which takes into account the different scattering powers with which the electrons of the j atoms scatter the X-rays: Formula 2. Structure factor for each diffracted beam: \[F(h k l)=\sum_{j} f_{j} \cdot e^{2 \pi i(h x_{j}+k y_{j}+l z_{j})}\] This equation is the Fourier transform of the electron density (Formula 1). The expression takes into account the scattering factors ƒ of all j atoms contained in the crystal unit cell. 
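The interplay between Formula 2 (atoms producing structure factors) and Formula 1 (structure factors producing the electron density) can be illustrated with a one-dimensional toy calculation. The sketch below (not part of the original text; the positions and "scattering powers" are assumed values) computes F(h) for a few point atoms and then rebuilds the density by the Fourier sum; the maxima of the reconstructed density fall back on the atomic positions.

```python
# 1D toy illustration of Formulas 1 and 2 (assumed point-atom structure).
import numpy as np

positions = np.array([0.10, 0.35, 0.80])        # fractional coordinates (assumed)
f = np.array([8.0, 6.0, 16.0])                  # "scattering powers" (assumed)
h = np.arange(-15, 16)                          # resolution cut-off of the toy data

# Formula 2: structure factors for each reflection h
F = np.array([np.sum(f * np.exp(2j * np.pi * hh * positions)) for hh in h])

# Formula 1: Fourier synthesis of the density on a grid of the 1D "unit cell"
x = np.linspace(0.0, 1.0, 500, endpoint=False)
rho = np.real(F[:, None] * np.exp(-2j * np.pi * h[:, None] * x[None, :])).sum(axis=0)

# locate the three highest local maxima of the density
local_max = (rho > np.roll(rho, 1)) & (rho > np.roll(rho, -1))
peaks = x[local_max][np.argsort(rho[local_max])[-3:]]
print("density maxima near x =", np.sort(np.round(peaks, 3)))   # expected near 0.10, 0.35, 0.80
```

Note that the synthesis above uses both the amplitudes and the phases of F(h); the point of the following paragraphs is precisely that only the amplitudes are available from experiment.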
From the experimental point of view, it is relatively simple to measure the amplitudes |F(hkl)| of all diffracted waves produced by a crystal. We just need an X-ray source, a single crystal of the material to be studied and an appropriate detector. With these conditions fulfilled we can then measure the intensities, I(hkl), of the diffracted beams in terms of: Formula 3. Relationship between the amplitudes of the structure factors |F(hkl)| and their intensities I(hkl). K is a factor that puts the experimental structure factors, Frel, measured on a relative scale (which depends on the power of the X-ray source, crystal size, etc.) into an absolute scale, which is to say, the scale of the calculated (theoretical) structure factors (if we could know them from the real structure, Formula 2 above). As the structure is unknown at this stage, this factor can be roughly evaluated using the experimental data by means of the so-called Wilson plot. Wilson plot: Irel represents the average intensity (on a relative scale) collected in a given interval of θ (the Bragg angle); fj are the atomic scattering factors in that angular range, and λ is the X-ray wavelength. By plotting the magnitudes shown in the left figure (green dots), a straight line is obtained, from which the scale factor and an overall thermal vibration factor can be derived. In Formula 3, A is an absorption factor, which can be estimated from the dimensions and composition of the crystal, and L is known as the Lorentz factor, responsible for correcting for the different angular velocities with which the reciprocal points cross the surface of Ewald's sphere. For four-circle goniometers this factor can be calculated as 1/sin 2θ, where θ is the Bragg angle of the reflections. 
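As a sketch of the Wilson-plot scaling just described, the fragment below fits the straight line mentioned above to purely illustrative numbers (not a real data set). Assuming that the mean relative intensity in each shell follows ⟨Irel⟩ = C · Σfj² · exp(-2B·s²), with s = sin θ/λ, the plot of ln(⟨Irel⟩/Σfj²) against s² is a line whose slope gives the overall thermal parameter B and whose intercept gives the scale C.

```python
# Wilson-plot sketch with illustrative shell data (assumed values).
import numpy as np

s2     = np.array([0.02, 0.06, 0.10, 0.14, 0.18, 0.22])      # sin^2(theta)/lambda^2 per shell
mean_I = np.array([410.0, 250.0, 148.0, 86.0, 50.0, 29.0])    # <I_rel> per shell (assumed)
sum_f2 = np.array([520.0, 480.0, 430.0, 380.0, 335.0, 295.0]) # sum of f_j^2 per shell (assumed)

y = np.log(mean_I / sum_f2)
slope, intercept = np.polyfit(s2, y, 1)        # straight-line fit of the Wilson plot
B = -slope / 2.0                               # overall thermal vibration parameter
C = np.exp(intercept)                          # relative-to-absolute scale
print(f"overall B ~ {B:.1f} A^2, scale C ~ {C:.2f}")
```

The exact sign conventions of the scale depend on how K is defined in Formula 3; the sketch only shows how a straight-line fit yields the two quantities mentioned in the text.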
The measured intensities thus give us the amplitudes |F(hkl)|; but, unfortunately, the other valuable information, the relative phases Φ(hkl), is lost during the diffraction experiment (there is no experimental technique available to measure the phases!). Thus, we must face the so-called phase problem if we want to solve Formula 1. The phase problem can be very easily understood if we compare the diffraction experiment (as a procedure to see the internal structure of crystals) with a conventional optical microscope... Illustration on the phase problem. Comparison between an optical microscope and the "impossible" X-ray microscope. There are no optical lenses able to combine diffracted X-rays to produce a zoomed image of the crystal contents (atoms and molecules). In a conventional optical microscope the visible light illuminates the sample and the scattered beams can be recombined (with intensity and phase) using a system of lenses, leading to an enlarged image of the sample under observation. In what we might call the impossible X-ray microscope (the process of viewing inside the crystals to locate the atomic positions), the visible light is replaced by X-rays (with wavelengths close to 1 Angstrom) and the sample (the crystal) also scatters this "light" (the X-rays). However, we do not have any system of lenses that could play the role of the optical lenses, to recombine the diffracted waves, providing us with a direct "picture" of the internal structure of the crystal. The X-ray diffraction experiment just gives us a picture of the reciprocal lattice of the crystal on a photographic plate or detector. The only thing we can do at this stage is to measure the positions and intensities of the spots collected on the detector. These intensities are proportional to the squares of the structure factor amplitudes, |F(hkl)|². But regarding the phases, Φ(hkl), nothing can be concluded for the moment, preventing us from obtaining a direct solution of the electron density function (Formula 1 above). We therefore need some alternatives in order to retrieve the phase values, lost during the diffraction experiment... Once the phase problem is known and understood, let's now see the general steps (see the scheme below) that a crystallographer must face in order to solve the structure of a crystal and therefore locate the positions of atoms, ions or molecules contained in the unit cell... General diagram illustrating the process of resolution of molecular and crystal structures by X-ray diffraction The process consists of different steps that have been treated previously or are described below. For the study to be successful, some important aspects must be taken into account. But let us come back to the most important issue: how do we solve the phase problem? The very first solution to the phase problem was introduced by Arthur Lindo Patterson. Basing his work on the inability to solve the electron density function directly (Formula 1 above or below), and after his training (under the U.S. mathematician Norbert Wiener) on the convolution of Fourier transforms, Patterson introduced a new function P(uvw) (Formula 4, below) in 1934. This formula, which defines a new space (the Patterson space), can be considered as the most important single development in crystal-structure analysis since the discovery of X-rays by Röntgen in 1895 or X-ray diffraction by Laue in 1912. His elegant formula, known as the Patterson function (Formula 4, below), introduces a simplification of the information contained in the electron density function. The Patterson function removes the term containing the phases, and the amplitudes of the structure factors are replaced by their squares. It is thus a function that can be calculated immediately from the available experimental data (intensities, which are related to the amplitudes of the structure factors). Formally, from the mathematical point of view, the Patterson function is equivalent to the convolution of the electron density (Formula 1, below) with its inverse: ρ(x,y,z) * ρ(-x,-y,-z). Formula 1. The electron density function calculated at the point of coordinates (x,y,z). Formula 4. The Patterson function calculated at the point (u, v, w): \[P(u v w)=\frac{1}{V} \sum_{h k l}|F(h k l)|^{2} \cdot \cos [2 \pi(h u+k v+l w)]\] This is a simplification of Formula 1, since the summation is done on F²(hkl) and all phases are assumed to be zero. It seems obvious that after omitting the crucial information contained in the phases [Φ(hkl) in Formula 1], the Patterson function will no longer show the direct positions of the atoms in the unit cell, as the electron density function would do. In fact, the Patterson function only provides a map of interatomic vectors (relative atomic positions), the heights of its maxima being proportional to the product of the numbers of electrons of the atoms involved. We will see that this feature means an advantage in detecting the positions of "heavy" atoms (with many electrons) in structures where the remaining atoms have lower atomic numbers. Once the Patterson map is calculated, it has to be correctly interpreted (at least partially) to get the absolute positions (x,y,z) of the heavy atoms within the unit cell. These atomic positions can now be used to obtain the phases Φ(hkl) of the diffracted beams by inverting Formula 1 and therefore this will allow the calculation of the electron density function ρ(xyz), but this will be the object of another section of these pages. 
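A one-dimensional sketch can make the Patterson synthesis (Formula 4) concrete. The fragment below (not part of the original text; it reuses the assumed toy structure introduced earlier) sums |F(h)|² with all phases set to zero: the resulting maxima appear at the interatomic vectors xi - xj, not at the atomic positions themselves, exactly as described above.

```python
# 1D Patterson synthesis from intensities only (assumed toy structure).
import numpy as np

positions = np.array([0.10, 0.35, 0.80])        # fractional coordinates (assumed)
f = np.array([8.0, 6.0, 16.0])                  # "scattering powers" (assumed)
h = np.arange(-15, 16)

F = np.array([np.sum(f * np.exp(2j * np.pi * hh * positions)) for hh in h])
I = np.abs(F) ** 2                              # only the intensities are needed

u = np.linspace(0.0, 1.0, 500, endpoint=False)
P = (I[:, None] * np.cos(2.0 * np.pi * h[:, None] * u[None, :])).sum(axis=0)

local_max = (P > np.roll(P, 1)) & (P > np.roll(P, -1))
top = np.sort(np.round(u[local_max][np.argsort(P[local_max])[-5:]], 3))
print("largest Patterson peaks near u =", top)
# expected: the origin peak, plus peaks near 0.30/0.70 and 0.45/0.55,
# i.e. the vectors between the "heavy" atom at 0.80 and the other two atoms
```

The heights of the non-origin peaks scale with the products of the scattering powers involved, which is why the vectors to the heaviest atom dominate; this is the property exploited when locating heavy atoms.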
The phase problem for crystals formed by small and medium size molecules was solved satisfactorily by several authors throughout the twentieth century, with special mention of Jerome Karle and Herbert A. Hauptman, who shared the Nobel Prize in Chemistry in 1985 (without forgetting the role of Isabella Karle, 1921-2017). The methodology introduced by these authors, known as direct methods, generally exploits constraints or statistical correlations between the phases of different Fourier components. Left: Herbert A. Hauptman Center: Jerome Karle Right: Isabella Karle. The atomicity of molecules, and the fact that the electron density function should be zero or positive at any point of the unit cell, create certain limitations in the distribution of phases associated with the structure factors. In this context, the direct methods establish systems of equations that use the intensities of diffracted beams to describe these limitations. The resolution of these systems of equations provides direct information on the distribution of phases. However, since the validity of each of these equations is established in terms of probability, it is necessary to have a large number of equations to overdetermine the phase values of the unknowns (phases Φ(hkl)). The direct methods use equations that relate the phase of a reflection (hkl) with the phases of other neighbor reflections (h',k',l' and h-h',k-k',l-l'), assuming that these relationships are "probably true" (with probability P): \[\Phi(h k l) \approx \Phi(h' k' l')+\Phi(h-h', k-k', l-l')\] where Ehkl, Eh'k'l' and Eh-h',k-k',l-l' are the so-called "normalized structure factors", that is, structure factors corrected for thermal motion, brought to an absolute scale and assuming that structures are made of point atoms. In other words, structure factor normalization converts measured |F| values into "point atoms at rest" coefficients known as |E| values. At present, direct methods are the preferred ones for phasing structure factors produced by small or medium-sized molecules having up to 100 atoms in the asymmetric unit. However, they are generally not feasible by themselves for larger molecules such as proteins. The interested reader should look into an excellent introduction to direct methods through this link offered by the International Union of Crystallography. 
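The normalization and the triplet relation described above can be sketched numerically. The fragment below (an illustration only, reusing the assumed toy structure) converts |F(h)| into |E| values and then evaluates the triplet phase sums Φ(-h) + Φ(h') + Φ(h-h') for the strongest reflections; for real atomic structures these sums tend, statistically, to cluster near zero, which is the sense in which the relation is "probably true".

```python
# Sketch of normalized structure factors and triplet phase sums (assumed toy data).
import numpy as np

positions = np.array([0.10, 0.35, 0.80])
f = np.array([8.0, 6.0, 16.0])
h = np.arange(1, 21)
F = np.array([np.sum(f * np.exp(2j * np.pi * hh * positions)) for hh in h])
E2 = np.abs(F) ** 2 / np.sum(f ** 2)            # |E|^2 for point atoms at rest (epsilon ignored)

strong = h[np.argsort(E2)[-8:]]                 # the eight strongest normalized reflections
phi = {hh: np.angle(np.sum(f * np.exp(2j * np.pi * hh * positions))) for hh in range(-40, 41)}

sums = []
for h1 in strong:
    for h2 in strong:
        if h1 != h2:
            s = phi[-int(h1)] + phi[int(h2)] + phi[int(h1 - h2)]   # triplet phase sum
            sums.append(np.degrees((s + np.pi) % (2 * np.pi) - np.pi))
print("mean |triplet phase sum| over strong reflections:", round(np.mean(np.abs(sums)), 1), "deg")
```

Random, uncorrelated phases would give a mean absolute sum of about 90 degrees; the clustering towards smaller values for strong |E| is what the probability relations of direct methods quantify.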
For crystals composed of large molecules, such as proteins and enzymes, the phase problem can be solved successfully with three main methods, depending on the case: (i) introducing atoms with high scattering power into the structure, a methodology known as MIR (Multiple Isomorphous Replacement), which is based on the Patterson method; (ii) introducing atoms that scatter X-rays anomalously, also known as MAD (Multi-wavelength Anomalous Diffraction); and (iii) by means of the method known as MR (Molecular Replacement), which uses the previously known structure of a similar protein. MIR (Multiple Isomorphous Replacement) This technique, based on the Patterson method, was introduced by David Harker, but was successfully applied for the first time by Max F. Perutz and John C. Kendrew, who received the Nobel Prize in Chemistry in 1962 for solving the very first protein structures, those of myoglobin and hemoglobin. Left: David Harker Center: Max Ferdinand Perutz Right: John Cowdery Kendrew The MIR method is applied after introducing "heavy" atoms (large scatterers) in the crystal structure. This method is conducted by soaking the crystal of the sample to be analyzed with a heavy atom solution or by co-crystallization with the heavy atom, in the hope that the heavy atoms go through the channels of the crystal structure and remain linked to amino acid side chains with the ability to coordinate metal atoms (eg SH groups of cysteine). In the case of metalloproteins, one can replace their endogenous metals by heavier ones (for instance Zn by Hg, Ca by Sm, etc.). Heavy atoms (with a large number of electrons) show a higher scattering power than the normal atoms of a protein (C, H, N, O and S), and therefore they appreciably change the intensities of the diffraction pattern when compared with the native protein. These differences in intensity between the two diffraction patterns (heavy-atom derivative and native structures) are used to calculate a map of interatomic vectors between the heavy atom positions (Patterson map), from which it is relatively easy to determine their coordinates within the unit cell. Scheme of a Patterson function derived from a crystal containing three atoms in the unit cell. To obtain this function graphically from a known crystal structure (left figure) all possible interatomic vectors are plotted (center figure). These vectors are then moved parallel to themselves to the origin of the Patterson unit cell (right figure). The calculated function will show maximum values at the end of these vectors, whose heights are proportional to the product of the atomic numbers of the involved atoms. The positions of these maxima (with coordinates u, v, w) represent the differences between the coordinates of each pair of atoms in the crystal, ie u=x1-x2, v=y1-y2, w=z1-z2. With the known positions of the heavy atoms, the structure factors are now calculated using Formula 2 (see also the diagram below), that is their amplitudes |Fc(hkl)| and phases Φc(hkl), where the c subscript means "calculated". By using Formula 1, an electron density map, ρ(xyz), is now calculated using the amplitudes of the structure factors observed in the experiment, |Fo(hkl)| (containing the contribution of the whole structure) combined with the calculated phases Φc(hkl). If these phases are good enough, the calculated electron density map will show not only the known heavy atoms, but will also yield additional information on further atomic positions (see diagram below). In summary, the MIR methodology steps are: (1) prepare a native crystal and one or more heavy-atom derivative crystals, (2) collect a diffraction data set from each of them, (3) locate the heavy atoms from the difference Patterson map, (4) calculate the structure factors (amplitudes and phases) of the heavy atoms, and (5) use these calculated phases, combined with the observed amplitudes, to obtain a first electron density map of the whole structure. MAD (Multi-wavelength Anomalous Diffraction) The changes in the intensity of the diffraction data produced by introducing heavy atoms in the protein crystals can be regarded as a chemical modification of the diffraction experiment. Similarly, we can cause changes in the intensity of diffraction by modifying the physical properties of atoms. Thus, if the incident X-ray radiation has a frequency close to the natural vibration frequency of the electrons in a given atom, the atom behaves as an "anomalous scatterer". This produces some changes in the atomic scattering factor, ƒj (see Formula 2), so that its expression is modified by two terms, ƒ' and ƒ'', which account for its real and imaginary components, respectively. For atoms which behave anomalously, their scattering factor is given by the expression shown below (Formula 5). Formula 5. In the presence of anomalous scattering, the atomic scattering factor, ƒ0, has to be modified by adding two new terms, a real and an imaginary part: \[f=f_{0}+f^{\prime}+i \, f^{\prime \prime}\] The advanced reader should also read the section about the phenomenon of anomalous dispersion. The ƒ' and ƒ'' corrections vs. X-ray energy (see below for the case of Cu Kα) can be calculated taking into account some theoretical considerations... Real and imaginary components of the Selenium scattering factor vs. the energy of the incident X-rays. The vertical line indicates the wavelength for CuKα. At X-ray energies where resonance occurs (close to an absorption edge), both corrections change dramatically: ƒ' falls to large negative values while ƒ'' rises sharply. This has practical importance considering that many heavy atoms used in crystallography show absorption peaks at energies (wavelengths) which can be easily obtained with synchrotron radiation. 
Diffraction data collected under these conditions will show a normal component, mainly due to the light atoms (nitrogen, carbon and hydrogen), and an anomalous part produced by the heavy atoms, which will produce a global change in the phase of each reflection. All this leads to an intensity change between those reflections known as Friedel pairs (pairs of reflections which under normal conditions have the same amplitudes and phases of equal magnitude but opposite sign). The detectable change in intensity between these reflection pairs (Friedel pairs) is what we call anomalous diffraction. The MAD method, developed by Hendrickson and Kahn, involves measuring diffraction data from the protein crystal (containing a strong anomalous scatterer) using X-ray radiation at different energies (wavelengths): one that maximizes ƒ'', another which minimizes ƒ' and a third measurement at an energy value distinct from these two. Combining these diffraction data sets, and specifically analyzing the differences between them, it is possible to calculate the distribution of amplitudes and phases generated by the anomalous scatterers. These phases can then be used, as a first approximation, to calculate an electron density map for the whole protein. In general, there is no current need to introduce individual atoms as anomalous scatterers in protein crystals. It is relatively easy to obtain recombinant proteins in which methionine residues are replaced by selenomethionine. Selenium (and even sulfur) atoms of methionine (or cysteine) behave as suitable anomalous scatterers for carrying out a MAD experiment. The MAD method presents some advantages compared with the MIR technique, the most obvious one being that all measurements can be made on a single crystal, avoiding the problems of non-isomorphism between native and derivative crystals. Argand diagram showing the scattering contribution from an anomalous scatterer in a matrix of normal scatterers. This effect implies that Friedel's law fails. Image taken from "Crystallography 101". The anomalous behavior of the atomic scattering factor only produces small differences between the intensities (and therefore among the amplitudes of the structure factors) of the reflections that are related by a centre of symmetry or a mirror plane (such as, for instance, I(h,k,l) vs. I(-h,-k,-l), or I(h,k,l) vs. I(h,-k,l)). Therefore, to estimate these small differences between the experimental intensities, additional precautions must be taken. Thus, it is recommended that reflections expected to show these differences are collected on the same diffraction image, or, alternatively, that after each collected image the crystal is rotated 180 degrees and a new image is collected. Moreover, since changes in ƒ' and ƒ'' occur with very small X-ray energy variations, it is necessary to have good control of the energy values (wavelengths). Therefore, it is essential to use a synchrotron radiation facility, where wavelengths can be tuned easily. The advanced reader should also have a look into the web pages on anomalous scattering, prepared by Bernhard Rupp, as well as the practical summary prepared by Georg M. Sheldrick. 
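Why anomalous scattering breaks Friedel's law can be shown with a very small numerical sketch (not part of the original text; the atomic positions and the ƒ0, ƒ' and ƒ'' values below are invented, not tabulated ones). One "selenium-like" atom is given a complex scattering factor; the amplitudes of the Friedel pair F(h) and F(-h) then differ, whereas with ƒ'' = 0 they are exactly equal.

```python
# Friedel-pair amplitudes with and without an anomalous scatterer (assumed values).
import numpy as np

positions = np.array([0.12, 0.47, 0.71])        # fractional coordinates (assumed)
f_normal = np.array([7.0, 6.0])                 # two light atoms (assumed)
f0, f_prime, f_dprime = 34.0, -7.0, 4.0         # anomalous atom f0, f', f'' (assumed)

def F(h, include_fdprime=True):
    f_anom = f0 + f_prime + (1j * f_dprime if include_fdprime else 0.0)
    fs = np.array([f_normal[0], f_normal[1], f_anom], dtype=complex)
    return np.sum(fs * np.exp(2j * np.pi * h * positions))

for h in (3, 7):
    print(f"h={h}: |F(+h)| = {abs(F(h)):7.2f}   |F(-h)| = {abs(F(-h)):7.2f}   "
          f"(with f''=0 both equal {abs(F(h, False)):7.2f})")
```

The small Bijvoet differences printed by the sketch are exactly the quantities that MAD (and the absolute-configuration analysis described later) exploit.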
MR (Molecular Replacement) If we know the structural model of a protein with a homologous amino acid sequence, the phase problem can be solved by using the methodology known as molecular replacement (MR). The known structure of the homologous protein is taken as an approximation to the protein to be determined and serves as a first model to be subsequently refined. This procedure is obviously based on the observation that proteins with similar peptide sequences show a very similar folding. The problem in this case is transferring the molecular structure of the known protein from its own crystal structure to a new crystal packing of the protein with an unknown structure. The positioning of the known molecule into the unit cell of the unknown protein requires determining its correct orientation and position within the unit cell. Both operations, rotation and translation, are calculated using the so-called rotation and translation functions (see below). Scheme of the molecular replacement (MR) method. The molecule with known structure (A) is rotated through the [R] operation and shifted through T to bring it over the position of the unknown molecule (A'). The rotation function. If we consider the case of two identical molecules, oriented in a different way, then the Patterson function will contain three sets of vectors. The first one will contain the Patterson vectors of one of the molecules, ie all interatomic vectors within molecule one (also called self-vectors). The second set will contain the same vectors but for the second molecule, identical to the first one, but rotated due to their different orientation. The third set of vectors will be the interatomic cross vectors between the two molecules. While the self-vectors are confined to the volume occupied by the molecule, the cross vectors will extend beyond this limit. If both molecules (known and unknown) are very similar in structure, the rotation function R(α,β,γ) rotates the Patterson vectors of one of the molecules until they coincide, as far as possible, with those of the other. This methodology was first described by Rossmann and Blow. \[R(\alpha, \beta, \gamma)=\int_{u} P_{1}(u) \cdot P_{2}(u_{r}) \, d u\] Formula 6. P1 is the experimental Patterson function and P2 the rotated Patterson function; the integration extends over the volume u of the Patterson map where interatomic vectors are calculated. The quality of the solutions of these functions is expressed by the correlation coefficient between both Patterson functions: the experimental one and the calculated one (with the known protein). A high correlation coefficient between these functions is equivalent to a good agreement between the experimental diffraction pattern and the diffraction pattern calculated with the known protein structure. Once the known protein structure is properly oriented and translated (within the unit cell of the unknown protein), an electron density map is calculated using these atomic positions and the experimental structure factors. It is worth consulting the article published on this methodology by Eleanor Dodson. The advanced reader may also find it useful to consult a review article that, although published in 2010, remains a valid description of the different methodologies for determining the relative phases of the diffracted beams. 
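The idea behind the rotation function (Formula 6) can be illustrated with a deliberately simple two-dimensional sketch, using made-up vector sets rather than real Patterson maps. Here the "observed" self-vectors are simply the model's self-vectors rotated by 40 degrees; a grid search over trial rotations scores the overlap between the two sets and recovers that angle, in the same spirit as the overlap integral of Formula 6.

```python
# 2D sketch of a rotation search over Patterson self-vectors (invented data).
import numpy as np

model_vectors = np.array([[0.8, 0.1], [-0.3, 0.6], [0.2, -0.7], [-0.6, -0.2]])  # assumed

def rotate(v, deg):
    t = np.radians(deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return v @ R.T

observed_vectors = rotate(model_vectors, 40.0)      # the "unknown" orientation

def overlap(deg, sigma=0.05):
    """Gaussian-weighted overlap between the observed vectors and the rotated model."""
    rotated = rotate(model_vectors, deg)
    d2 = ((observed_vectors[:, None, :] - rotated[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2)).sum()

angles = np.arange(0.0, 360.0, 1.0)
scores = np.array([overlap(a) for a in angles])
print("best trial rotation:", angles[np.argmax(scores)], "degrees")    # expected ~40
```

Real rotation functions work on full three-dimensional Patterson maps and three Euler angles, but the scoring principle, maximum overlap between self-vector sets, is the same.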
All these methods (Patterson, direct methods, MIR, MAD, MR) provide (directly or indirectly) knowledge about approximate phases which must subsequently be improved. As indicated above, the calculated initial phases, Φc(hkl), together with the observed experimental amplitudes, |Fo(hkl)|, allow us to calculate an electron density map, also approximate, over which we can build the structural model. The overall process is summarized in the cyclic diagram shown below. The initial phases, Φc(hkl), are combined with the amplitudes of the experimental (observed) structure factors, |Fo(hkl)|, and an electron density map is calculated (shown at the bottom of the scheme). Alternatively, if the initial known data are the coordinates (xyz) of some atoms, they will provide the initial phases (shown at the top of the scheme), and so on in a cyclic way until the process does not produce any new information. Scheme showing a cyclic process to calculate electron density maps ρ(xyz) which produce further structural information. From several known atomic positions we can always calculate the structure factors: their amplitudes, |Fc(hkl)|, and their phases, Φc(hkl), as shown at the top of the scheme. Obviously, the calculated amplitudes are discarded, because they correspond to a partial structure only, whereas the experimental ones represent the whole, real structure. Therefore, the electron density map (shown at the bottom of the scheme) is calculated with the experimental (or observed) amplitudes, |Fo(hkl)|, and the calculated phases, Φc(hkl). This function is now evaluated in terms of possible new atomic positions that are added to the previously known ones, and the cycle is repeated. Historically this process was known as "successive Fourier syntheses", because the electron density is calculated in terms of a Fourier sum. In any case, from atomic positions or directly from phases, if the information is correct, the function of electron density will be interpretable and will contain additional information (new atomic coordinates) that can be injected into the cyclic procedure shown above until structure completion, which is to say until the calculated function ρ(xyz) shows no changes from the last calculation. The lighter atoms of the structure (those with lower atomic number, ie, usually hydrogen atoms) are the most difficult ones to find on an electron density map. Their scattering power is almost obscured by the scattering of the remaining atoms. For this reason, the location of H atoms is normally done via a somewhat modified electron density function (the difference electron density), whose coefficients are the differences between the observed and calculated structure factors of the model known so far: \[\Delta \rho(x y z)=\frac{1}{V} \sum_{h k l}\left(|F_{o}(h k l)|-|F_{c}(h k l)|\right) \cdot e^{-2 \pi i[h x+k y+l z-\Phi_{c}(h k l)]}\] Formula 7. In practice, if the structural model obtained is good enough, if the experiment provided precise structure factors, and there are no specific errors such as X-ray absorption, the difference map Δρ will contain enough signal (maxima) where H atoms can be located. Additionally, to get an enhanced signal from the scattering of the light atoms, this function is usually calculated with the structure factors appearing at lower diffraction angles only, usually with those appearing at sin θ / λ < 0.4, that is, using the region where the scattering factors for hydrogens are still "visible". 
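The behaviour of the difference synthesis (Formula 7) can be illustrated with the same kind of one-dimensional toy used earlier (assumed positions and scattering powers, not part of the original text). The "observed" amplitudes come from a complete structure, while the calculated amplitudes and phases come from a partial model that is missing the lightest atom; the difference map then peaks at the missing position.

```python
# 1D difference Fourier synthesis revealing a missing light atom (assumed data).
import numpy as np

x_true = np.array([0.15, 0.62, 0.40]); f_true = np.array([16.0, 12.0, 1.0])   # full structure
x_model = x_true[:2];                  f_model = f_true[:2]                    # light atom missing

h = np.arange(-15, 16)
Fo = np.abs([np.sum(f_true * np.exp(2j * np.pi * hh * x_true)) for hh in h])   # observed |Fo|
Fc = np.array([np.sum(f_model * np.exp(2j * np.pi * hh * x_model)) for hh in h])  # calculated Fc

x = np.linspace(0.0, 1.0, 500, endpoint=False)
coeff = (Fo - np.abs(Fc)) * np.exp(1j * np.angle(Fc))       # (|Fo| - |Fc|) with calculated phases
drho = np.real(coeff[:, None] * np.exp(-2j * np.pi * h[:, None] * x[None, :])).sum(axis=0)

print("highest difference-map peak near x =", round(x[np.argmax(drho)], 3))   # expected ~0.40
```

In a real refinement the same kind of map, computed with the current model's phases, is what reveals hydrogen atoms, solvent molecules or mis-placed groups.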
1.8: The Structural Model
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.08%3A_New_Page
The analysis and interpretation of the electron density function, ie the resolution of a crystal structure (molecular or non-molecular), leads to an initial distribution of atomic positions within the unit cell which can be represented by points or small spheres: Once the structural model is completed, makes stereochemical sense and includes its crystal packing, it is necessary to make use of all the information we can extract from the experimental data, since the diffraction pattern generally contains much more data (intensities) than needed to locate the atoms at their 3-dimensional coordinates. For instance, for a medium sized structure, with 50 independent atoms in the asymmetric unit (in the structural unit which is repeated by the symmetry operations), the diffraction pattern usually contains around 2500 structure factors, which implies approximately 50 observations per atom (each atom needs 3 coordinates). However, for more complex structures, as in the case of macromolecules, the amount of experimental data available normally does not reach these limits. The basic parameters associated with a three-dimensional structure are, obviously, the three positional coordinates (x, y, z) for each atom, given in terms of unit cell fractions. But, in general, given the experimental overdetermination mentioned above, the atomic model can be made more complex, for instance by associating each atom with an additional parameter reflecting its thermal vibrational state, in a first approach as an isotropic (spherical) thermal vibration around its position of equilibrium. This new parameter is normally represented graphically by the radius of the sphere that represents the atom. Thus an isotropic structural model would be represented by 4 variables per atom: 3 positional + 1 thermal. However, for small and medium-sized structures (up to several hundred atoms), the diffraction experiment usually contains enough data to complete the thermal vibration model, associating with each atom a tensor (6 variables) which expresses the state of vibration in an anisotropic manner, ie distinguishing between different directions of vibration in the form of an ellipsoid (which resembles the shape of a rugby ball). Therefore, a crystallographic anisotropic model will require 9 variables per atom (3 positional + 6 vibrational). Left: Three bonded atoms represented with the isotropic thermal vibration model Right: The same three atoms shown on the left, but represented using the anisotropic thermal vibration model. Left: Anisotropic model of the 3-dimensional structure of a molecule, showing some atoms from neighboring molecules. Right: Anisotropic model of the 3-dimensional structure of a molecule showing its crystal packing. Regardless of the model type, isotropic or anisotropic, the above-mentioned overabundance of experimental data allows a description of the structural model in terms of very precise atomic parameters (positional and vibrational) which lead to very precise geometrical parameters of the whole structure (interatomic distances, bond angles, etc.). This refined model is obtained by the analytical method of least-squares. Using this technique, atoms are allowed to "move" slightly from their previous positions and thermal factors are applied to each atom so that the diffraction pattern calculated with this model is essentially the same as the experimental one (observed), ie minimizing the differences between the calculated and observed structure factors. 
This process is carried out by minimizing the function:\[\sum_{hkl} w \,\big(|F_o| - |F_c|\big)^{2} \rightarrow \text{minimum}\]Least-squares function used to refine the final model of a crystal structure, where \(w\) represents a "weight" factor assigned to each observation (intensity), weighting the effects of the less-precise observations vs. the more accurate ones and avoiding possible systematic errors in the experimental observations which could bias the model. Fo and Fc are the observed and calculated structure factors, respectively. Although usually the mentioned experimental overdetermination ensures the success of this analytical process of refinement, it must always be controlled through the stereochemical aspects, ie, ensuring that the positional shifts of the atoms are reasonable and therefore generate interatomic distances within the expected values. Similarly, the thermal vibration factors (isotropic or anisotropic) associated with the atoms must always show reasonable values. In addition to the aforementioned control of the model changes during the refinement process, it seems obvious that (if everything goes well) the diffraction pattern calculated (Fc) with the refined model (coordinates + thermal vibration factors) will show increasing similarity to the observed pattern (Fo). The comparison between both patterns (observed vs. calculated) is done via the so-called \(R\) parameter, which defines the "disagreement" factor between the two patterns:\[R = \dfrac{\sum \big| |F_o| - |F_c| \big|}{\sum |F_o|}\]Disagreement factor of a structural model, calculated in terms of differences between observed and calculated structure factors with the final model. The value of the disagreement factor (R) is estimated as a percentage (%), ie, multiplied by 100, so that "well" solved structures, with an appropriate degree of precision, will show an R factor below 0.10 (10%), which implies that the calculated pattern differs from the observed one (experimental) by less than 10%. The diffraction patterns of macromolecules (enzymes, proteins, etc.) usually do not show such large overdetermination of experimental data and therefore it is difficult to reach an anisotropic final model. Moreover, in these cases the values of the R factor are greater than those for small and medium-sized molecules, so that values around or below 20% are usually acceptable. In addition, as a result of this relative scarcity of experimental data, the analytical procedure of refinement (least-squares) must be combined with an interactive stereochemical modeling process and with the imposition of certain "soft restraints" on the molecular geometry. 
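As a toy illustration of the least-squares minimization and of the R factor defined above (not a real refinement program; the structure, scattering powers and starting coordinates are assumed values with unit weights, w = 1), the fragment below shifts two point-atom coordinates of a one-dimensional "structure" away from their true values, refines them against the observed amplitudes, and evaluates R before and after.

```python
# Toy least-squares refinement of atomic coordinates and R-factor evaluation (assumed data).
import numpy as np
from scipy.optimize import least_squares

f = np.array([16.0, 12.0, 8.0])
x_true = np.array([0.15, 0.62, 0.40])
h = np.arange(1, 11)

def amplitudes(x23):
    x = np.concatenate(([x_true[0]], x23))      # first atom kept fixed (origin choice)
    return np.abs([np.sum(f * np.exp(2j * np.pi * hh * x)) for hh in h])

Fo = amplitudes(x_true[1:])                     # "observed" amplitudes of the toy structure

def R_factor(x23):
    Fc = amplitudes(x23)
    return np.sum(np.abs(Fo - Fc)) / np.sum(Fo)

x_start = np.array([0.61, 0.415])               # model slightly away from the truth
fit = least_squares(lambda p: Fo - amplitudes(p), x_start)   # minimizes sum w (|Fo|-|Fc|)^2

print(f"R before refinement: {R_factor(x_start):.3f}")
print(f"R after  refinement: {R_factor(fit.x):.3f}, refined coordinates = {np.round(fit.x, 4)}")
```

Real programs refine thousands of parameters (coordinates, thermal factors, occupancies) with carefully chosen weights and, for macromolecules, with the geometric restraints mentioned above, but the minimized quantity is the same.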
The reliability of a structural model has to be assessed in terms of several tests, a procedure known as model validation. Thus, the structural model should be continuously checked and validated using consistent stereochemical criteria (for example, bond lengths and bond angles must be acceptable). For instance, a C---O distance of 0.8 Angstrom would not be acceptable for a carbonyl group (C = O). Similarly, the bond angles must also be consistent with an acceptable geometry. These criteria are very restrictive for small or medium-sized structures, but even the structures of macromolecules must meet some minimum requirements. Maximum dispersion values generally accepted for interatomic distances and bond angles in the structural model of a macromolecule. In the case of proteins, the peptide bond (the bond between two consecutive amino acids) must also satisfy some geometrical restrictions. The torsional angles of this bond should not deviate much from the acceptable values of the usual conformations shown by the amino acid chains, as is shown in the so-called Ramachandran plot: Left: Schematic representation of the peptide bond, showing the two torsional angles (Ψ and Φ) defining it. Right: Ramachandran plot showing the different allowed (acceptable) areas for the torsional angles of the peptide bonds in a macromolecule. The different areas depend on the different structural arrangements (α-helices, β-sheets, etc.). Similarly, the thermal factors associated with each atom should show physically acceptable values. These parameters account for the thermal vibrational mobility of the different structural parts. Thus, in the structure of a macromolecule, these values should be consistent with the internal or external location of the chain, being generally lower for the internal parts, and higher for external parts near the solvent. A model that has been "validated" according to the criteria described above is a reliable model. However, the concept of reliability is not a quantitative parameter which can be written in terms of a single number. Therefore, to interpret a structural model up to its logical consequences one has to bear in mind that it is just a simplified representation, extracted from an electron density function:\[\rho(x y z)=\frac{1}{V} \sum_{h k l}|F(h k l)| \cdot e^{-2 \pi i[h x+k y+l z-\Phi(h k l)]}\]on which the atoms have been positioned and which is being affected by some conditions described in another section, which we invite you to read. But, in any case, well-done crystallographic work always provides atomic parameters (positional and vibrational) along with their associated precision estimates. This means that any direct crystallographic parameter (atomic coordinates and vibration factors) or derived parameter (distances, angles, etc.) is usually expressed by a number followed by its standard deviation (in parentheses) affecting the last figure. For example, an interatomic distance expressed as 1.541(2) Angstroms means a distance of 1.541 Angstroms with a standard deviation of 0.002 Angstroms in the last figure. THE ABSOLUTE CONFIGURATION (OR ABSOLUTE STEREOCHEMISTRY) As stated in a previous chapter, all molecules or structures in which neither mirror planes nor centres of symmetry are present have an absolute configuration, that is, they are different from their mirror images (they cannot be superimposed). Structural models showing two enantiomers of a compound (the two molecules are mirror images). These particular structural differences, very important as far as the molecular properties are concerned, can be unambiguously determined through the diffraction experiment (without using any external standard). This can be carried out using the so-called anomalous scattering effect which atoms show when appropriate X-ray wavelengths are used. This feature is also very successfully used as a method to solve the phase problem for macromolecular crystals. 
It doesn't seem difficult to understand that the molecular enantiomers have different properties, as in the end they are different molecules, but regarding their biological activity (if any) the situation is particularly striking. The enantiomeric molecules represented in the left figure were introduced onto the market by a pharmaceutical company and, as expected, they showed different properties. The properties of DARVON (Dextropropoxyphene Napsylate) are available through this link, while production of NOVRAD (Levopropoxyphene Napsylate) was discontinued. The experimental diffraction signal that allows this structural differentiation is a consequence of the fact that the atomic scattering factor does not behave as a real number when the frequency of X-rays is similar to the natural absorption frequency of the atom. See also the chapter dedicated to anomalous dispersion. Under these conditions, Friedel's Law is no longer fulfilled and therefore structure factors such as |F(h,k,l)| and |F(-h,-k,-l)| will be slightly different. These differences are evaluated in terms of the so-called Bijvoet estimators, which compare the ratios of the observed structure factors for such reflection pairs with the corresponding ratios for the calculated structure factors using the two possible absolute models. Only one of these two comparisons will maintain the same type of bias:\[\frac{|F(h k l)|_{o}}{|F(\bar{h} \bar{k} \bar{l})|_{o}} \text { vs. } \frac{|F(h k l)|_{c}}{|F(\bar{h} \bar{k} \bar{l})|_{c}}\]Comparison of Bijvoet ratios (Johannes Martin Bijvoet). Thus, if the quotient between the observed structure factors is <1, the same quotient for the calculated structure factors should also be <1. Or, on the contrary, both quotients should be >1. If this is true for a large number of reflection pairs it will indicate that the absolute model is the right one. If it is not so, the structural model has to be inverted. The interested reader should also have a look into the web pages on anomalous scattering, prepared by Ethan A. Merritt. The information describing a final crystallographic model is composed of the atomic positions, the thermal vibration factors and, where necessary, the population (occupancy) factors. The atomic positions are usually given as fractional coordinates (fractions of the unit cell axes), but sometimes, especially for macromolecules where the information usually refers to the isolated molecule, they are given as absolute coordinates, ie, expressed in Angstrom and referred to a system of orthogonal axes independent of the crystallographic ones (see below). Information about several atoms of a protein structure using the so-called PDB format (Protein Data Bank), ie atomic coordinates in Angstrom on a system of orthogonal axes, different from the crystallographic ones. For clarity, the estimated standard deviations have been omitted. 
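The conversion between the fractional coordinates mentioned above and the orthogonal (Cartesian, Angstrom) coordinates used, for instance, in PDB-style files can be sketched as follows. The cell parameters and the fractional position below are made-up values; the columns of the matrix M are simply the unit-cell vectors a, b and c expressed in an orthogonal frame with a along X and b in the XY plane, which is the usual orthogonalization convention.

```python
# Fractional to Cartesian (Angstrom) coordinate conversion (assumed triclinic cell).
import numpy as np

a, b, c = 12.3, 15.7, 20.1                                # cell edges in Angstrom (assumed)
alpha, beta, gamma = np.radians([88.0, 95.0, 102.0])      # cell angles (assumed)

ca, cb, cg, sg = np.cos(alpha), np.cos(beta), np.cos(gamma), np.sin(gamma)
v = np.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg) # reduced cell volume

M = np.array([[a, b * cg, c * cb],
              [0, b * sg, c * (ca - cb * cg) / sg],
              [0, 0,      c * v / sg]])

frac = np.array([0.25, 0.10, 0.40])                       # fractional coordinates (assumed)
cart = M @ frac
print("Cartesian coordinates (Angstrom):", np.round(cart, 3))
```

The inverse matrix performs the opposite conversion, from Cartesian coordinates back to fractions of the unit cell axes.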
The population factor is the fraction of atom located in a specific position, although this factor is usually 1. The meaning of this parameter requires an explanation for the beginner, since it could be understood that atoms could be divided into parts, which obviously has no physical meaning. Due to atomic vibrations, and to the fact that the diffraction experiment has a duration in time, it is possible that in some of the unit cells atoms are missing. Thus, instead of a complete occupancy (population factor = 1), the corresponding site, in an average unit cell, will contain only a fraction of the atom. In these cases it is said that the crystal lattice has defects, and population factors smaller than 1 reflect the fraction of unit cells where a specific atomic position is occupied. Obviously, a fraction of unit cells where the same position is empty complements the population factor to unity. Therefore, the crystallographic model reflects the average structure of all unit cells during the experiment time. The atomic coordinates, and in general all the information collected from a crystallographic study, are stored in accessible databases. There are different databases, depending on the type of compound or molecule, but this will be discussed in another chapter of these pages. The final structural model (atomic coordinates, thermal factors and, possibly, population factors) directly provides additional information which leads to a detailed knowledge of the structure itself, including bond lengths, bond angles, torsional angles, molecular planes, dipole moment, etc., and any other structural detail that might be useful for understanding the functionality and/or properties of the material under study. In the case of complex biological molecules, the use of high-quality graphic processors and relatively simple models greatly facilitates the understanding of the relationship between structure and function, as shown in the figure on the left. At present the available computational and graphic techniques allow us to obtain beautiful and very descriptive models which help to visualize and understand structures, as is shown in the examples below: Left: Model of balls and sticks to represent the structure of a simple inorganic compound. Right: Representation of an inorganic compound, in which a partial polyhedral representation has been added Left: Animated stick model representing the packing and molecular structure of a simple organic compound. Right: Given the complexity of biological molecules, the models which represent them are usually simple, showing the overall folding and the different structural motifs (α-helices, β-strands, loops, etc.) with the ribbon model. The example also shows a stick representation of a cofactor linked to the enzyme. Left: Combined ribbon-and-stick model representing the dimeric structure of a protein, which also shows a sulfate ion in the middle (represented with balls) Right: Representation of the surface of a biological molecule where the colours represent different degrees of hydrophobicity. The arrow represents the dipole moment of the molecule. Finally, using additional information from other techniques (such as cryo-electron microscopy), or combining two different crystal conformations of a molecule, other models are available as shown below. Moreover, using the ultrashort exposure times of X-rays produced by free electron lasers (European XFEL), crystallographers are able to collect diffraction data of macromolecules in different conformations, that is, during the course of performing their respective tasks. In this manner, using a huge number of X-ray snapshots, we can produce something like a film in which we are able to follow the molecular changes and therefore understand their function. Left: Combined model of the molecular structure of a protein and an envelope (as obtained by high-resolution electron microscopy) showing a pore formed by the association of four protein molecules Right: Simplified animated model showing the backbone folding of an enzyme and the structural changes between two molecular states: active (open) and inactive (closed). 
The structures of both states were determined by crystallography.

This page titled 1.8: The Structural Model is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.9: Crystallographic computing
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.09%3A_New_Page
Readers who have arrived at this chapter in a sequential manner will notice that, apart from the phase problem, the relationship between the diffraction pattern (reciprocal space) and the crystal structure (direct space) is mediated by a Fourier transform represented by the electron density function \(ρ(xyz)\) (see the drawing on the left). Readers will also know that the relationship between these two spaces is "holistic", meaning that the value of this function, at each point of coordinates \((xyz)\) in the unit cell, is the result of "adding" the contribution of "all" structure factors [i.e., diffracted waves in terms of their amplitudes \(|F(hkl)|\) and phases \(Φ(hkl)\)] contained in the diffraction pattern. They will also remember that the diffraction pattern contains many structure factors (several thousand for a simple structure, and hundreds of thousands for a protein structure).

The "jump" between direct and reciprocal spaces, mediated by a Fourier transform represented by the electron density function

Moreover, the number of points in the unit cell at which the ρ function has to be calculated is very high. In a cell of about 100 x 100 x 100 Angstrom³, it would be necessary to calculate at least 1000 points along every unit cell direction to obtain a resolution of 100/1000, that is, 0.1 Angstrom in each direction. This means calculating at least 1000 x 1000 x 1000 = 1,000,000,000 points (one billion points) and, at each point, "adding" several thousand (or hundreds of thousands of) structure factors F(hkl). It should therefore be clear that, regardless of the difficulties of the phase problem, solving a crystal structure implies the use of computers. Finally, the analysis of a crystal or molecular structure also implies calculating many geometric parameters that define interatomic distances, bond angles, torsional angles, molecular surfaces, etc., using the atomic coordinates (xyz).

For the reasons described above, since the beginning of the use of Crystallography as a discipline to determine molecular and crystal structures, crystallographers have devoted special attention to the development of calculation tools to facilitate crystallographic work. With this aim, and even before the early computers appeared, crystallographers introduced the so-called "Beevers-Lipson strips," which were widely used in all Crystallography laboratories.

The Beevers-Lipson strips

The Beevers-Lipson strips (strips of paper containing the values of some trigonometric functions) were used in laboratories to speed up the calculation (by hand) of Fourier transforms (see above: the electron density function, for example). These strips were introduced in 1936 by C.A. Beevers and H. Lipson. In the 1960s, more than 300 boxes were distributed to nearly all the laboratories in the world. You can also have a look at the description made by the International Union of Crystallography. The nightmare was keeping the box upright: it had a very narrow base, and otherwise it was impossible to keep the strips correctly stored!
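To give a concrete feeling for what such a Fourier summation involves, the toy sketch below sums a handful of made-up structure factors over a coarse grid of fractional coordinates. It is only an illustration under invented assumptions (the amplitudes, phases, cell volume and grid size are all arbitrary), not a program actually used by crystallographers; a real synthesis does essentially this, but with thousands of reflections and millions of grid points (nowadays via fast Fourier transforms), which is why computing machinery became indispensable.

```python
# Toy Fourier synthesis of the electron density rho(x, y, z) from a few
# invented structure factors; all numerical values are arbitrary and serve
# only to show the shape of the calculation.
import numpy as np

V = 1000.0  # unit-cell volume in Angstrom^3 (arbitrary value for this example)

# (h, k, l), amplitude |F| and phase Phi in degrees -- all invented
reflections = [((1, 0, 0), 50.0, 0.0),
               ((0, 1, 0), 30.0, 90.0),
               ((1, 1, 1), 20.0, 180.0)]

n = 10  # grid points per cell edge (a real map would use hundreds per edge)
frac = np.linspace(0.0, 1.0, n, endpoint=False)      # fractional coordinates
x, y, z = np.meshgrid(frac, frac, frac, indexing="ij")

rho = np.zeros((n, n, n))
for (h, k, l), amp, phi in reflections:
    # Each reflection contributes one cosine wave; the factor 2 accounts for
    # summing each reflection together with its Friedel mate, which keeps rho real.
    rho += (2.0 / V) * amp * np.cos(2.0 * np.pi * (h * x + k * y + l * z) - np.radians(phi))

print("rho evaluated at", rho.size, "grid points from", len(reflections), "reflections")
```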
As expected, the introduction of early computers (or electro-mechanical calculators) inspired great hope in crystallographers...

ENIAC (Electronic Numerical Integrator and Computer, 1945), the very first electronic computer. Some pictures of the rooms where it was installed.

ENIAC, short for Electronic Numerical Integrator And Computer, was the first general-purpose electronic computer, whose design and construction were financed by the United States Army during the Second World War. It was the first digital computer capable of being reprogrammed to solve a full range of computing problems, especially calculating artillery firing tables for the U.S. Army's Ballistic Research Laboratory. The ENIAC had immediate importance. When it was announced in 1946, it was heralded in the press as a "Giant Brain". It boasted speeds one thousand times faster than electro-mechanical machines, a leap in computing power that no single machine has matched. This mathematical power, coupled with general-purpose programmability, excited scientists and industrialists. Besides its speed, the most remarkable thing about ENIAC was its size and complexity. ENIAC had 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints. It weighed 27 tons, was roughly 2.6 m by 0.9 m by 26 m in size, took up 63 m², and consumed 150 kW of power.

Later, with the development of Electronics and Microelectronics, which introduced integrated circuits, computers became accessible to crystallographers, who flocked to these facilities with large boxes of "punched cards" (the only means of data storage at that time), containing the diffraction intensities and their own computer programs.

A punch card or punched card (or punchcard, Hollerith card or IBM card) is a piece of stiff paper which contains digital information represented by the presence or absence of holes in predefined positions. It was used by crystallographers until the end of the 1970s.

Punched paper tape (shown in yellow) and different magnetic tapes (as well as some small disks) used for data storage during the 1970s and 1980s.

Around the early 1970s, and for over a decade, crystallographers became a nightmare for the managers and operators of the so-called "computing centers" running in some universities and research centers. In the 1980s the laboratories of Crystallography became "flooded" with computers, which for the first time gave crystallographers independence from the large computing centers. The VAX series of computers (sold by the company Digital Equipment Corporation) marked a splendid era for crystallographic calculations. They allowed the use of magnetic tapes and the first hard disk drives, with limited capacity (only a few hundred MB), very big and heavy, but they eliminated the need for the tedious punched cards. Nostalgic readers should have a look at this link!

A typical computer (of the VAX series) used in many Crystallography laboratories during the 1980s.

Over the years, crystallographic computing has become easy and affordable thanks to personal computers (PC), which meet nearly all the needs of most conventional crystallographic calculations, at least concerning crystals of low and medium complexity (up to hundreds of atoms). Their relatively low price and their ability to be assembled into "farms" (for distributed calculation) provide crystallographers with the best solution for almost any type of calculation.

Left: A typical personal computer (PC) used in the 2000s. Right: A typical PC farm used in the 2000s.

However, crystallography applied to macromolecules does not only need what we could call "hard" computing.
The management of large electron density maps, which are used to build the molecular structure of proteins, as well as the subsequent structural analysis, requires more sophisticated computers with powerful graphic processors and, if possible, with the capability of displaying 3-dimensional images using specialized glasses...

A Silicon Graphics computer used to visualize 3-dimensional electron density maps and structures. The processor and the screen are complemented by an infrared transmitter (black box on the screen) and the glasses used by the crystallographer.

The current computing facilities represent a big jump with respect to the capabilities available during the mid-twentieth century, as shown by the structural model used for the structural description of penicillin, which was based on three 2-dimensional electron density maps... and even 3-dimensional maps were also used!

Left: Three-dimensional model of the structure of penicillin, based on the use of three 2-dimensional electron density maps, as used by Dorothy C. Hodgkin, Nobel laureate in 1964. Right: Representation of 3-dimensional electron density maps used until the middle of the 1970s.

A typical personal computer commonly used since 2010 for crystallographic calculations and also for its graphic capabilities

At present there are enough personal, institutional or commercial computer program developments, or even computing facilities available through remote servers, to fulfill nearly all of the needs of crystallographic computing, as well as many sources from which one can download most of those programs. In this context, it could be useful to check the following links:

Crystallographic computer programs

Specifically for compounds of small and medium size (molecular or not), we recommend using the WinGX package, which can be freely downloaded courtesy of Louis J. Farrugia (University of Glasgow, UK). It is easy to install on a PC and contains an interface which includes the most important programs for small and medium size crystallographic problems. Also, for these types of compounds there is a very useful computer program (Mercury), user-friendly and free, which includes powerful graphics and some other analytical tools to analyze crystal structures. It can be downloaded from the Cambridge Crystallographic Data Centre, UK. Protein crystallographers need more specific programs, and in this context we recommend using the link offered by CCP4, Collaborative Computational Project No. 4, Software for Macromolecular X-Ray Crystallography.

On the other hand, crystallographic work is currently unimaginable without access to crystallographic databases, which contain all the structural information that is being published and which have a clear added value for the researcher. The type of structure is what determines its inclusion in any of the existing databases. Thus, metals and intermetallic compounds are made available in the database CRYSTMET; inorganic compounds are centralized in the ICSD database (Inorganic Crystal Structure Database); organic and organometallic compounds in the CSD (Cambridge Structural Database); and proteins in the PDB (Protein Data Bank), which is a databank (not a database). Other databases, databanks, etc., do not necessarily contain structural information in the most precise sense, but they can also be very helpful for crystallographers.
This is the case of WebCite, published by the Cambridge Crystallographic Data Centre (CCDC), containing over 2000 articles with very important information for structural chemistry research in its broadest sense, and in particular for pharmaceutical drug discovery, materials design or drug development, among others.

Structural databases and databanks

As indicated, some of these databases (or databanks) are public (glycoSCIENCES.de, LipidBank, PDB and NDB), and can therefore be searched online. However, others (CRYSTMET, ICSD and CSD) require a license or even a local installation. During the period 1990-2012, CRYSTMET, ICSD and CSD were licensed free of charge to all CSIC research institutes (CRYSTMET and ICSD) and to all academic institutions in Spain and Latin American countries (CSD). However, due to economic constraints, the CSIC's authorities decided to drastically reduce this program, which was managed through the Department of Crystallography and Structural Biology (at the Institute of Physical Chemistry "Rocasolano"). Nowadays this program is maintained in a reduced manner, only for Spanish institutions, as can be seen through this link.

This page titled 1.9: Crystallographic computing is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.10: Biographical outlines
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.10%3A_Biographical_outlines
In the context of this chapter, you will also be invited to visit these sections...

As mentioned in the introduction, Crystallography is one of the scientific disciplines that has most clearly influenced the development of Chemistry, Biology, Biochemistry and Biomedicine. Although in other pages we made some reference to the scientists directly involved in the early stages, this chapter is aimed at presenting short biographical outlines. As a supplement to the biographical notes presented in this chapter, the reader can also consult the early historical notes about crystals and Crystallography offered in another section. The biographical outlines that are the object of the present chapter (shown below) have been distributed in groups, in chronological order, using the terminology of some musical sections and tempos, in an attempt to describe their relevance, at least from a historical perspective.

Wilhelm Conrad Röntgen. None of this would have been possible without the contribution of Wilhelm Conrad Röntgen, who won the first Nobel Prize in Physics for his discovery of X-rays. Although many other biographical references to Röntgen can be found on the internet, we recommend visiting the site prepared by Jose L. Fresquet (in Spanish). In the following paragraphs we summarize the most relevant details and add a few others.

Wilhelm Conrad Röntgen was born on March 27, 1845, at Lennep in the Lower Rhine Province of Germany, the only child of a manufacturer and merchant of cloth. His mother was Charlotte Constanze Frowein of Amsterdam, a member of an old Lennep family which had settled in Amsterdam. When he was 3 years old his family moved to Holland. From 16 to 20 years of age he studied at the Technical School in Utrecht, and he then moved to Zurich, where he obtained his academic degree in mechanical engineering. After some years in Zurich as assistant professor of physics under August Kundt, in 1872 (at 27 years of age) he moved to the University of Würzburg. However, as he could not find any position there (he had previously been unable to pass his exams in Latin and Greek), he moved to Strasbourg, where he finally obtained a position as professor in 1874. Five years later he accepted a teaching position at the University of Giessen and finally, at 45 years of age, he obtained a professorship in physics at Würzburg, where he became Rector. His work on cathode rays led him to the discovery of a new and different kind of rays. On the evening of November 8, 1895, working with an enclosed and sealed discharge tube (to exclude all light), he found that a paper plate (covered on one side with barium platinocyanide and placed accidentally in the path of the rays) became unexpectedly fluorescent, even when it was as far as two metres from the discharge tube.
It took a month until Röntgen understood the importance of this new radiation; he then immediately sent a scientific communication to the Society for Physics and Medicine in Würzburg... Specifically, the first sentences of his official statement (written in elegant German) read:

Lässt man durch eine Hittorf’sche Vacuumröhre, oder einen genügend evacuirten Lenard’schen, Crookes’schen oder ähnlichen Apparat die Entladungen eines grösseren Ruhmkorff’s gehen und bedeckt die Röhre mit einem ziemlich eng anliegenden Mantel aus dünnem, schwarzem Carton, so sieht man in dem vollständig verdunkelten Zimmer einen in die Nähe des Apparates gebrachten, mit Bariumplatincyanür angestrichenen Papierschirm bei jeder Entladung hell aufleuchten, fluoresciren, gleichgültig ob die angestrichene oder die andere Seite des Schirmes dem Entladungsapparat zugewendet ist. Die Fluorescenz ist noch in 2 m Entfernung vom Apparat bemerkbar. Man überzeugt sich leicht, dass die Ursache der Fluorescenz vom Entladungsapparat und von keiner anderen Stelle der Leitung ausgeht.

After producing an electrical discharge with a Ruhmkorff coil through a Hittorf vacuum tube, or a sufficiently evacuated Lenard, Crookes or similar apparatus, covered with a fairly tight-fitting jacket made of thin, black paperboard, one sees that a cardboard sheet coated with barium platinocyanide, located in the vicinity of the apparatus, lights up brightly in the completely darkened room at every discharge, regardless of whether the coated side is pointing towards the tube or not. This fluorescence is noticeable up to 2 metres away from the apparatus. One can easily convince oneself that the cause of the fluorescence proceeds from the discharge apparatus and not from any other point of the line.

Röntgen's discovery quickly produced a social commotion... "Incredible light!". However, almost at the same speed, his public celebrity dropped to a minimum... "his high-flying stopped...". It was during the first months of 1896, after sending to the British Medical Journal an X-ray photograph of a broken arm, that Röntgen began to regain the public's confidence, demonstrating the diagnostic capacity of his discovery. However, it still took many years until his "incredible light" was recognized as being of medical interest. He was awarded the first Nobel Prize in Physics in 1901. Wilhelm Conrad Röntgen died in Munich on 10 February 1923 from carcinoma of the intestine. It is not believed that his carcinoma was a result of his work with ionizing radiation, because of the brief time he spent on those investigations and because he was one of the few pioneers in the field who routinely used protective lead shields. If you can read Spanish, there is also an extensive chapter dedicated to both the historical details around Röntgen and his discovery.

1914 "Overture", by Max von Laue, with accompaniment by Paul P. Ewald

Max von Laue. If Röntgen's discovery was important for the development of Crystallography, the second qualitative step forward was due to another German, Max von Laue, Nobel Prize in Physics in 1914, who, trying to demonstrate the undulatory nature of X-rays, discovered the phenomenon of X-ray diffraction by crystals. A complete biographical description can also be found through this link. Max von Laue was born on October 9, 1879 at Pfaffendorf, a little town near Koblenz.
He was the son of Julius von Laue, an official in the German military administration, who was raised to hereditary nobility in 1913 and who was often sent to various towns, so that von Laue spent his youth in Brandenburg, Altona, Posen, Berlin and Strassburg, going to school in the three last-named cities. At the Protestant school at Strassburg he came under the influence of Professor Goering, who introduced him to the exact sciences, and there he studied Mathematics, Physics and Chemistry. However, he soon moved to the University of Göttingen and, in 1902, to the University of Berlin, where he began working with Max Planck. A year later, after obtaining his doctorate, he returned to Göttingen, and in 1905 he went back to Berlin as assistant to Max Planck, who also won the Nobel Prize in Physics, in 1918, i.e., four years after von Laue. Between 1909 and 1919 he passed through the Universities of Munich, Zurich, Frankfurt and Würzburg, and he finally returned to Berlin, where he earned a position as a professor.

Paul Peter Ewald. It was during this last period, namely in 1912, that he met Paul Peter Ewald in Munich. Ewald was then finishing his doctoral thesis under Arnold Sommerfeld, and he got Laue interested in his experiments on the interference between radiations of large wavelengths (practically visible light) in a "crystalline" model based on resonators. Note that at that time the question of wave-particle duality was also under discussion. The idea then came to Laue that the much shorter electromagnetic rays, which X-rays were supposed to be, would cause some kind of diffraction or interference phenomena in a medium, and that a crystal could provide this medium. An excellent historical description of these facts and of the corresponding experiments, conducted by Walter Friedrich and Paul Knipping under the direction of Max von Laue, can be found in an article by Michael Eckert. The original article on that experiment, signed by Friedrich, W., Knipping, P. and Laue, M., was published with the reference Sitzungsberichte der Kgl. Bayer. Akad. der Wiss. 303–322, although it was later collected in Annalen der Physik 346, 971-988. It is amazing how quickly Ewald developed the interpretation of Max von Laue's experiments, as can be seen in his original article, published in 1913 (in German) and available through this link. In recognition of the role played by Ewald in the development of Crystallography, the International Union of Crystallography grants the Prize and Medal that carry the name of Paul Peter Ewald.

And so it was, using a crystal of copper sulfate and some others of zinc blende in front of an X-ray beam, that Laue obtained confirmation of the undulatory nature of the rays discovered by Röntgen (see images below). For this discovery, and its interpretation, Max von Laue received the Nobel Prize in Physics in 1914. But at the same time, his experiment raised many questions about the nature of crystals...

Left: First X-ray diffraction pattern obtained by Laue and his collaborators using a crystal of copper sulphate. Right: One of the first X-ray diffraction patterns obtained by Laue and his collaborators using some crystals of the mineral blende.

Laue was always opposed to National Socialism, and after the Second World War he was brought to England for a short time with several other German scientists, contributing there to the International Union of Crystallography. He returned to Germany in 1946 as director of the Max Planck Institute and professor at the University of Göttingen.
He retired in 1958 as director of the Fritz Haber Institute of Physical Chemistry in Berlin, a position to which he had been elected in 1951. On 8 April 1960, while Laue was driving to his laboratory in Berlin, his car was struck by a motorcyclist. The cyclist, who had received his license only two days earlier, was killed, and Laue's car flipped over. Max von Laue (80 years old) died from his injuries sixteen days later, on April 24.

Right: William Lawrence Bragg

This time it did not happen as with Röntgen. Max von Laue's discovery became immediately known, at least by the British William Henry Bragg and his son William Lawrence Bragg, who in 1915 shared the Nobel Prize in Physics for demonstrating the usefulness of the phenomenon discovered by von Laue (X-ray diffraction) in studying the internal structure of crystals. They showed that X-ray diffraction can be described as specular reflection by a set of parallel planes passing through all lattice elements, in such a way that a diffracted beam is obtained if

\[2 d \sin \theta = n \lambda\]

where d is the distance between the planes, θ is the angle of incidence, n is an integer and λ is the wavelength. Through this simple approach the determination of crystal structures was made possible.

William Henry Bragg studied Mathematics at Trinity College, Cambridge, and subsequently Physics at the Cavendish Laboratory. At the end of 1885 he was appointed professor at the University of Adelaide (Australia), where his son (William Lawrence Bragg) was born. W. Henry Bragg became successively Cavendish Professor of Physics at Leeds, Quain Professor of Physics at University College London, and Fullerian Professor of Chemistry at the Royal Institution. His son, William Lawrence, studied Mathematics at the University of Adelaide. In 1909 the family returned to England and W. Lawrence Bragg entered Trinity College, Cambridge, as a fellow. In the autumn of 1912, the same year in which Max von Laue made his experiment public, the young W. Lawrence Bragg started examining the phenomenon that occurs when a crystal is placed in front of X-rays, presenting his first results (The diffraction of short electromagnetic waves by a crystal) at the headquarters of the Cambridge Philosophical Society during its meeting on November 11th, 1912.

In 1914, W. Lawrence Bragg was appointed Professor of Natural Sciences at Trinity College, and that same year he was awarded the Barnard Medal. The two years he worked with his father on the experiments of refraction and diffraction by crystals led to a lecture by W.H. Bragg (Bakerian Lecture: X-Rays and Crystal Structure) and to the famous article X-rays and Crystal Structure, also published in 1915. That same year, he (25 years old!) and his father shared the Nobel Prize in Physics. Father and son were able to explain the phenomenon of X-ray diffraction in crystals through crystallographic planes acting as special mirrors for X-rays (Bragg's Law), and showed that crystals of substances such as sodium chloride (NaCl, or common salt) do not contain molecules of NaCl, but simply ions of Na+ and Cl-, both regularly ordered. These ideas revolutionized Theoretical Chemistry and caused the birth of a new science: X-ray Crystallography. Unfortunately, after the First World War, some difficulties arose between William Lawrence and his father when the general public did not directly credit W. Lawrence with his contributions to their discoveries.
Lawrence Bragg desperately wanted to make his own name in research, but he sensed the triumph of their discoveries passing to his father, as the senior man. W. Henry Bragg tried his best to remedy the situation, always pointing out which aspects of their work were his son's ideas; however, much of their work was in the form of joint papers, which made the situation more difficult. Sadly, they never discussed the problem, and the trouble lingered for many years. The close collaboration between father and son ended, but it was natural that their work would continue to overlap. They decided to divide up the available work, and agreed to focus on separate areas of X-ray crystallography. W. Lawrence was to focus on inorganic compounds, metals and silicates, whereas William H. Bragg was to focus on organic compounds.

In 1919, William Lawrence was made Langworthy Professor of Physics at Victoria University, Manchester, where he married and remained until 1937. There, in 1929, he published an excellent article on the use of Fourier series to determine crystal structures, The Determination of Parameters in Crystal Structures by means of Fourier Series. In 1941 father and son were knighted (Sir), and a year later William Henry died. In subsequent years, William Lawrence was interested in the structure of silicates, metals, and especially in the chemistry of proteins. He was appointed Director of the National Physical Laboratory in Teddington and professor of Experimental Physics at the Cavendish Laboratory (Cambridge). In 1954 he was appointed Director of the Royal Institution in London, establishing his own research group aimed at studying the structure of proteins using X-rays. William Lawrence Bragg died in 1971, aged 81. The IUCr published an obituary that you can reach through this link.

The year 2012 marked the centennial of the first single-crystal X-ray experiments, performed at the Ludwig Maximilian Universität, Munich (Germany), by Paul Knipping and Walter Friedrich under the supervision of Max von Laue, and especially of the experiments done by the Braggs. The interested reader can enjoy reading the chapters published as a reminder by the International Union of Crystallography, to be found through the links shown below.

Arthur Lindo Patterson. It is inexplicable how the name of Arthur Lindo Patterson is slowly fading and entering history almost as a stranger, at least since the last decade of the Twentieth Century. Probably his name remains associated only with some crystallographic calculation subroutine. However, as mentioned in another chapter, the contribution of Patterson to Crystallography can be seen as the single most important development after the discovery of X-rays by Röntgen in 1895. Arthur Lindo Patterson was born in the early years of the Twentieth Century in New Zealand, but his family soon emigrated to Canada, where he spent his youth. For some unknown reason, he went to school in England before returning to Montreal (Canada) to study Physics at McGill University, where he obtained his master's degree with a thesis on the production of hard X-rays (with small wavelengths) using the interaction of radium β radiation with solids. He performed his first experiments on X-ray diffraction during a period of two years at the laboratory of W.H. Bragg at the Royal Institution in London.
At that time he was aware that, although in small crystal structures the location of atoms in the unit cell was a relatively simple problem, the situation was virtually unfeasible in the case of molecular compounds, or in general with more complex compounds. After his stay in the lab of W.H. Bragg, Lindo Patterson spent a very productive year at the Kaiser-Wilhelm Institute in Berlin, with a grant from the National Research Council of Canada to work under Hermann Mark. With his work, he contributed decisively to the determination of particle size using X-ray diffraction, and started to become interested in the theory of the Fourier transform, an idea that some years later would become his obsession in connection with the solution of crystal structures. In 1927, he returned to Canada and a year later completed his PhD at McGill University. After two years with R.W.G. Wyckoff at the Rockefeller Institute in New York, he accepted a position at the Johnson Foundation for Medical Physics in Philadelphia, which gave him the chance to learn X-ray diffraction applied to biological materials. In 1931 he published two articles on Fourier series as a tool to interpret X-ray diffraction data: Methods in Crystal Analysis: I. Fourier Series and the Interpretation of X-ray Data and Methods in Crystal Analysis: II. The Enhancement Principle and the Fourier Series of Certain Types of Function. In 1933, he moved to MIT (Massachusetts Institute of Technology) where, through his friendship with the mathematician Norbert Wiener, he started learning Fourier theory, and especially the properties of the Fourier transform and convolution. That was how, in 1934, his equation (the Patterson Function) was formulated in an article entitled A Fourier Series Method for the Determination of the Components of Interatomic Distances in Crystals, opening enormous expectations for the solution of crystal structures. However, given the technological precariousness of those days in addressing the large number of sums involved in his function, it took some years until his discovery became effective in indirectly solving the phase problem. Patterson's death, in November 1966, resulted from a massive cerebral hemorrhage.

In addition to the technical difficulties existing at that time in solving complex mathematical equations, the function introduced by Arthur L. Patterson clearly presented significant difficulties in the case of complex structures. At least it was so until, in 1935, David Harker, a "trainee", realized the existence of special circumstances that significantly facilitated the interpretation of the Patterson Function, and of which Arthur L. Patterson had not been aware. David Harker was born in California and graduated in 1928 as a chemist at Berkeley. In 1930, he accepted a job as a technician in the laboratory of the Atmospheric Nitrogen Corp. in New York, where, through reading articles related to crystal structures, his interest in crystallography increased. Due to the great economic depression, in 1933 he lost his job and returned to California. Using some savings, he was able to enter the California Institute of Technology. There, supervised by Linus Pauling, he began to experiment with the solution of some simple crystal structures. During one of the weekly talks in Pauling's lab, the function recently introduced by Arthur L.
Patterson was described, and Harker was immediately aware of the difficulties implied by the many calculations needed to obtain the Patterson map, but especially of the difficulty of interpreting it in structures with many atoms. However, a few nights after the talk, he woke up suddenly and said: "it has to work!". Indeed, it became clear to Harker that the Patterson map contains regions where the interatomic vectors (between atoms related by symmetry elements) are concentrated. Therefore, in order to look for interatomic vectors, one only has to explore certain areas of the map, and not the entire Patterson unit cell, which simplifies the interpretation qualitatively. From 1936 until 1941, Harker held a position teaching Physical Chemistry at Johns Hopkins University, where he learned classical Crystallography and Mineralogy. During the remaining years of the 1940s, he held a research position at the General Electric Company and from there, together with his colleague John S. Kasper, made another important contribution to Crystallography: the Harker-Kasper inequalities, the first contribution to the so-called direct methods for solving the phase problem. During the 1950s, Harker accepted the offer of joining the Irving Langmuir Brooklyn Polytechnic Institute to solve the structure of ribonuclease. This opportunity helped him to establish the methodology that, years later, was used by Max Perutz and John Kendrew to solve the structure of hemoglobin. In 1959, Harker moved his team and project to the Roswell Park Cancer Institute and completed the ribonuclease structure in 1967. He retired officially in 1976, but remained somewhat active at the Medical Foundation of Buffalo (today the Hauptman-Woodward Institute) until his death in 1991 from pneumonia. There is a nice obituary of Harker written by William Duax.

John Desmond Bernal. Following the findings and developments of Arthur Lindo Patterson and David Harker, interest was directed to the structure of molecules, especially those related to life: proteins. In this movement an Irishman settled in England, John Desmond Bernal, played a crucial role in the further development of crystallography. John Desmond Bernal was born in Nenagh, Co. Tipperary, in 1901. The Bernals were originally Sephardic Jews who came to Ireland in 1840 from Spain via Amsterdam and London. They converted to Catholicism and John was Jesuit-educated. John enthusiastically supported the Easter Rising and, as a boy, organized a Society for Perpetual Adoration. He moved away from religion as an adult, becoming an atheist. Bernal was strongly influenced by the Russian Revolution of 1917 and became a very active member of the Communist Party of Great Britain. John graduated in 1919 in Mineralogy and Mathematics (applied to symmetry) at the University of Cambridge. In 1923, he obtained a position as assistant in the laboratory of W.H. Bragg at the Royal Institution in London, and in 1927 he returned to Cambridge as a professor. His fellow students in Cambridge nicknamed him 'Sage' because of his great knowledge. From there, he attracted many young researchers from Birkbeck College and King's College to the field of macromolecular crystallography. In 1937, he obtained a professorship in London at Birkbeck College, where he trained many crystallographers (Rosalind Franklin, Dorothy Hodgkin, Aaron Klug and Max Perutz, among others). Undoubtedly, John D. Bernal has earned a prominent position in the Science of the Twentieth Century.
He showed that, under appropriate conditions, a protein crystal can maintain its crystallinity under exposure to X-rays. Some of his students were able to solve complex structures such as hemoglobin and other biological materials of importance, such that crystallographic analysis started to revolutionize Biology. John Bernal, who died at the age of 70, was also the engine of crystallographic studies on viruses, together with his collaborator Isadore Fankuchen.

The developments of the Braggs, based on the previous discovery of Laue and on the work by Patterson and Harker, raised the expectations of structural biology. Due to the Second World War, England became an attractive center, especially around John D. Bernal. Max Ferdinand Perutz was born in Vienna, on May 19th, 1914, into a family of textile manufacturers. They had made their fortune in the 19th Century by introducing mechanical spinning and weaving to the Austrian monarchy. Max was sent to school at the Theresianum, a grammar school derived from an officers' academy of the time of the empress Maria Theresia. His parents suggested that he should study law in preparation for entering the family business. However, a good schoolmaster awakened his interest in chemistry and he entered the University of Vienna where, in his own words, he "wasted five semesters in an exacting course of inorganic analysis". His curiosity was aroused, however, by organic chemistry, and especially by a course on organic biochemistry given by F. von Wessely, in which Sir F.G. Hopkins' work at Cambridge was mentioned. It was here that Perutz decided that Cambridge was the place where he wanted to work on his Ph.D. thesis. With financial help from his father, in September 1936 Perutz became a research student at the Cavendish Laboratory in Cambridge under John D. Bernal. His relationship with Lawrence Bragg was also critical, and in 1937 he conducted the first diffraction experiments with hemoglobin crystals, which had been crystallized in Keilin's Molteno Institute. Thus, from 1938 until the early fifties, the protein chemistry was done at Keilin's Molteno Institute and the X-ray work at the Cavendish, with Perutz busily bridging the gap between biology and physics on his bicycle. After the invasion of Austria by Hitler, the family business was expropriated, his parents became refugees, and his own funds were soon exhausted. Max Perutz was saved by being appointed research assistant to Lawrence Bragg, under a grant from the Rockefeller Foundation, on January 1st, 1939. The grant continued, with various interruptions due to the war, until 1945, when Perutz was given an Imperial Chemical Industries Research Fellowship. In October 1947, he was made head of the newly constituted Medical Research Council Unit for Molecular Biology. His collaboration with Sir Lawrence Bragg continued through many years. As a memorial to Perutz you may consult this obituary published in Nature on the occasion of his death in 2002 (or you may always download this obituary written in Spanish).

John Cowdery Kendrew was born on 24th March, 1917, in Oxford. He graduated in Chemistry in 1939 from Trinity College. He spent the first few months of the war doing research on reaction kinetics in the Department of Physical Chemistry at Cambridge under the supervision of E.A. Moelwyn-Hughes. The personal influence of John D.
Bernal led him to work on the structure of proteins, and in 1946 he joined the Cavendish Laboratory, working with Max Perutz under the direction of Lawrence Bragg; he received his Ph.D. there in 1949. Kendrew and Perutz formed the entire staff of the Molecular Biology Unit of the recently established Medical Research Council. Although the work of Kendrew focused on myoglobin, Max Ferdinand Perutz and John Cowdery Kendrew received the Nobel Prize in Chemistry in 1962 for their work on the structure of hemoglobin, and both were the first to successfully implement the MIR (multiple isomorphous replacement) methodology introduced by David Harker.

Rosalind Elsie Franklin. One of the great scientists of those years who also emerged under the direct influence of John D. Bernal was the controversial and unfortunate Rosalind Franklin. There are many texts concerning Rosalind, but perhaps it is worthwhile to read the detailed pages (in Spanish) prepared by Miguel Vicente: La dama ausente: Rosalind Franklin y la doble hélice and Jaque a la dama: Rosalind Franklin en King's College, both of which do justice to her personality and to her short but fruitful work in the science of the mid-twentieth century. In the summer of 1938, Rosalind Franklin went to Newnham College, Cambridge. She passed her finals in 1941, but was only awarded a titular degree, as women were not entitled to degrees from Cambridge at the time. In 1945, Franklin received her PhD from Cambridge University. After the war, Franklin accepted an offer to work in Paris at the Laboratoire de Services Chimiques de L'Etat with Jacques Mering, where she learned X-ray diffraction techniques on coal and related inorganic materials. In January 1951, Franklin started working as a research associate at King's College, London, in the Medical Research Council Biophysics Unit directed by John Randall. Although originally she was to have worked on X-ray diffraction of proteins and lipids in solution, Randall redirected her work to DNA fibers before she started working at King's, as Franklin was to be the only experienced experimental diffraction researcher at King's in 1951. In Randall's laboratory, Rosalind's trajectory crossed with that of Maurice Wilkins, as both were dedicated to DNA research. Unfortunately, unfair competition led to a conflict with Wilkins which finally "took its toll". In Rosalind's absence, Wilkins showed the diffraction diagrams, which Rosalind had taken from DNA fibers, to two young scientists lacking excessive scruples... James Watson and Francis Crick. John Bernal called her DNA X-ray photographs "the most beautiful X-ray photographs of any substance ever taken." Rosalind's DNA diagrams made possible the establishment of the double-helical structure of DNA. It might be interesting for the reader to see this short video prepared by "My Favourite Scientist" (also available through this link). Using a laser pen and some bent wire, Andrew Marmery, from the Royal Institution in London, demonstrates the principles of diffraction and reproduces the characteristic diffraction pattern of the helical structure of DNA (use this other link in case of problems). The interested reader can also access the original manuscripts prepared by Rosalind Franklin on the structure of DNA. Rosalind Franklin died very young, at age 37, from ovarian cancer.

Maurice Wilkins was born in New Zealand. He graduated as a physicist in 1938 from St. John's College, Cambridge, and joined John Randall at the University of Birmingham.
After obtaining his PhD in 1940, he joined the Manhattan Project in California. After World War II, in 1945, he returned to Europe when John Randall was organizing the study of biophysics at the University of St Andrews in Scotland. A year later, he obtained a position at King's College, London, in the newly created Medical Research Council unit, where he became deputy director in 1950. James Dewey Watson (1928-), born in Chicago, obtained a PhD in Zoology in 1950 at the University of Indiana. He spent a year in Copenhagen as a Merck Fellow and, during a symposium held in 1951 in Naples, met Maurice Wilkins, who awoke his interest in the structure of proteins and nucleic acids. Thanks to the intervention of his director (Salvador E. Luria), Watson in the same year obtained a position to work with John Kendrew at the Cavendish Laboratory, where he also met Francis Crick. After two years at the California Institute of Technology, Watson returned to England in 1955 to work one more year in the Cavendish Laboratory with Crick. In 1956 he joined the Department of Biology at Harvard. Francis Crick was born in England and studied Physics at University College London. During the war he worked for the British Admiralty, and later went to the laboratory of W. Cochran to study biology and the principles of crystallography. In 1949, through a grant from the Medical Research Council, he joined the laboratory of Max Perutz, where, in 1954, he completed his doctoral thesis. There he met James Watson, who would later shape his career. He spent his last years at the Salk Institute for Biological Studies in California. In connection with the unfortunate story of Rosalind Franklin, Maurice Wilkins, James Watson and Francis Crick received the Nobel Prize in Physiology or Medicine in 1962 for the discovery of the right-handed double-helix structure of DNA. The decisive role of Rosalind Franklin was forgotten. It is very instructive to watch the video that HHMI BioInteractive offers about this discovery.

Dorothy C. Hodgkin was born in Cairo, but she also spent part of her youth in Sudan and Israel, where her father became director of the British School of Archeology in Jerusalem. From 1928 to 1932 she settled in Oxford thanks to a grant from Somerville College, where she learned the methods of crystallography and diffraction, and soon was attracted by the character and work of John D. Bernal. In 1933, she moved to Cambridge, where she spent two happy years, making many friends and exploring a variety of problems with Bernal. In 1934, she returned to Oxford, which she never left again, except for short periods. In 1946, she obtained a position as Associate Professor for Crystallography and, although she was initially linked to Mineralogy, her work soon pointed towards the area which had always interested her and which she had learned under John D. Bernal: sterols and other interesting biological molecules. Dorothy Hodgkin took part in the meetings in 1946 which led to the foundation of the International Union of Crystallography, and she visited many countries for scientific purposes, including China, the USA and the USSR. She was elected a Fellow of the Royal Society in 1947, a foreign member of the Royal Netherlands Academy of Sciences in 1956, and of the American Academy of Arts and Sciences (Boston) in 1958. In 1964 she was awarded the Nobel Prize in Chemistry.
Although what happened in the first 60 years of the Twentieth Century is astonishing and somewhat unique, the "crystallographic melody" continued, and in this sense it is still worthwhile to mention other scientists who made Crystallography go further. William Nunn Lipscomb was born in Cleveland, Ohio, USA, but moved to Kentucky in 1920 and lived in Lexington throughout his university years. After his bachelor's degree at the University of Kentucky, he entered graduate school at the California Institute of Technology in 1941, first in physics. Under the influence of Linus Pauling, he returned to chemistry in early 1942. From then until the end of 1945 he was involved in research and development related to the war. After completing his Ph.D., he joined the University of Minnesota in 1946, and moved to Harvard University in 1959. His Harvard recognitions include the Abbott and James Lawrence Professorship in 1971, and the George Ledlie Prize, also in 1971. In 1976 Lipscomb was awarded the Nobel Prize in Chemistry for his contributions to the structural chemistry of boranes.

This chapter cannot be concluded without mentioning the efforts made by other crystallographers, who for many years tried to solve the phase problem with approaches different from those provided by the Patterson method, i.e., trying to solve the problem directly from the intensities of the diffraction pattern and based on probability equations: the direct methods. Herbert A. Hauptman, born in New York, graduated in 1939 as a mathematician from Columbia University. His collaboration with Jerome Karle began in 1947 at the Naval Research Laboratory in Washington DC. He earned his PhD in 1954 from the University of Maryland. In 1970, he joined the group of crystallographers at the Medical Foundation in Buffalo, where he became research director in 1972. Hauptman was the second non-chemist to win a Chemistry Nobel Prize (the first one was the physicist Ernest Rutherford). Jerome Karle, also from New York, studied mathematics, physics, chemistry and biology, obtaining his master's degree in Biology from Harvard University in 1938. In 1940, he moved to the University of Michigan, where he met and married Isabella Lugosky. He worked on the Manhattan Project at the University of Chicago and earned a doctoral degree in 1944. Finally, in 1946, he moved to the Naval Research Laboratory in Washington DC, where he met Herbert Hauptman.

The monograph published in 1953 by Hauptman and Karle, Solution of the Phase Problem I. The Centrosymmetric Crystal, already contained the most important ideas on probabilistic methods which, applied to the phase problem, made them worthy of the Nobel Prize in Chemistry in 1985. However, it would be unfair not to mention Jerome's wife, Isabella Karle, who played an important role in putting the theory into practice.

In memory of these important persons, we show this photograph taken in 1994, during the XIII Ibero-American Congress of Crystallography (Montevideo, Uruguay). Left (front to back): Jerome Karle, Isabella Karle and Martin Martinez-Ripoll (author of these pages). Right (front to back): Herbert A. Hauptman and Ray A. Young (neutron expert and one of the pioneers of the Rietveld method).

Crystallography is (and has been) one of the most inter- and multidisciplinary sciences. It links together frontier areas of research and has, directly or indirectly, produced the largest number of Nobel Laureates throughout history.
Additionally, since 1986 the International Union of Crystallography (IUCr) has granted the Ewald Prize, awarded every three years for outstanding contributions to the science of Crystallography. This chapter is dedicated to the many scientists who have made Crystallography one of the most powerful and competitive branches of Science for looking into the "tiny" world of atoms and molecules. It could definitely have been more extensive and detailed, because we cannot forget the participation and effort of many other scientists, past and present, but the important issue is that, after our "finale", the "crystallographic music" plays on ...

The United Nations, in its General Assembly resolution A/66/L.51 (issued on 15 June 2012), after considering the relevant role of Crystallography in Science, decided to proclaim 2014 the International Year of Crystallography. We send congratulations to Gautam R. Desiraju, President of the IUCr, and Sine Larsen, former President of the IUCr, who held these positions when this initiative was launched! In this context, 11 November 2012 marked the centenary of the presentation of the paper by a young William Lawrence Bragg in which the foundations of X-ray crystallography were outlined. For this reason, the International Union of Crystallography (IUCr) published a fascinating set of articles that the reader can find via the following links:

The first 50 years of X-ray diffraction were commemorated in 1962 by the International Union of Crystallography (IUCr) with the publication of an interesting book entitled Fifty Years of X-Ray Diffraction, edited by Paul Peter Ewald. Bart Kahr and Alexander G. Shtukenberg wrote an interesting chapter, Histories of Crystallography by Shafranovskii and Schuh (included in Recent Advances in Crystallography), where they offer a short summary of the two volumes on the History of Crystallography written by Ilarion Ilarionovich Shafranovskii, a Russian crystallographer who held the E.S. Fedorov Chair of Crystallography at the Leningrad Mining Institute. The chapter by Kahr and Shtukenberg also includes many other references, especially those taken from Curtis P. Schuh, author of a remarkable book entitled Mineralogy & crystallography: an annotated bio-bibliography of books published 1469 through 1919. M.A. Cuevas-Diarte and S. Alvarez Reverter are the authors of an extensive annotated chronology on crystallography and structural chemistry, starting in the 4th Century BC. Also noteworthy are the exhibition offered by the University of Illinois (Vera V. Mainz and Gregory S. Girolami, Crystallography - Defining the Shape of Our Modern World, University of Illinois at Urbana-Champaign), commemorating the 100th Anniversary of the Discovery of X-ray Diffraction, as well as a lecture by Prof. Seymour Mauskopf from Duke University, which can also be found directly through these links: PowerPoint format or pdf format. It is also very interesting to read the articles collected in the special issue of Nature dedicated to Crystallography, among other articles from the archive included in the same special issue. In nearly the same context, Nature has also released this interesting article, entitled Structural biology: More than a crystallographer, about the training currently expected from crystallographers working in the field of structural biology.
Science, the journal, also joined the celebration of the International Year of Crystallography, devoting a special issue to it that you can find via this link.

This page titled 1.10: Biographical outlines is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.11: Crystallographic Associations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.11%3A_Crystallographic_Associations
The following table shows links to some scientific associations of crystallographic interest, distributed around the world and ordered alphabetically... Look also at this link from the IUCr.

This page titled 1.11: Crystallographic Associations is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.12: Crystallography in Spain
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Crystallography_in_a_Nutshell_(Ripoll_and_Cano)/01%3A_Chapters/1.12%3A_Crystallography_in_Spain
In the context of this chapter, you will also be invited to visit these sections...

Crystallography is one of the branches of Science whose importance has been critical to the development of Chemistry around the world. Its influence, which was spectacular in Spain during the last third of the twentieth century, led (through many efforts) to the establishment of several groups of crystallographers whose relevance is nowadays beyond any doubt. However, contrary to what happened in other developed countries, Crystallography in Spain, and especially in academic institutions, seems in general to remain an unresolved matter. This is probably due to the fact that it has erroneously been considered a minor technical issue, whose application and interpretation is trivial.

No scientific discipline has so profoundly influenced the field of structural Biochemistry as X-ray diffraction analysis applied to crystals of macromolecules. It is among the most prolific techniques available for providing significant new data. In theory, there is no limit on molecular size, and thus it covers, in addition to proteins, viruses, ribo- and deoxyribonucleic acids and protein complexes. Its influence has also affected the development of Biology and Biomedicine, leading to the so-called structural genomics. The detailed knowledge of the structure of biological macromolecules enables us not only to understand the relationship between structure and function, but also to make rational proposals for functional improvement. In contrast with the importance of these issues, and with the rather large number of research groups in Spain that are very competitive in Cellular and Molecular Biology, the lack of resources dedicated to the few Spanish laboratories active in macromolecular Crystallography becomes very apparent. In a separate part of this chapter the reader will find a brief historical outline of the initial development of Crystallography in Spain.

Most crystallographers working in Spain are associated with the Specialized Group for Crystallography and Crystal Growth (Grupo Especializado de Cristalografía y Crecimiento Cristalino, GE3C), a group associated with the Spanish Royal Society of Chemistry. Similarly, European crystallographers are associated with the European Crystallographic Association, ECA. Further, the Spanish Committee of Crystallography is the Spanish association responsible for coordinating the official Spanish representation to the International Union of Crystallography, IUCr. There are some other Spanish associations related to Crystallography, organized according to various radiation sources of interest in the field:

Crystallographers working in Spain had the responsibility of organizing the XXII Congress and General Assembly of the International Union of Crystallography, IUCr, which was held in Madrid (August 22-30, 2011). This type of congress, held every three years in a different country, attracted over 2,800 participants from around the world to Madrid and implied an explicit recognition of "Spanish crystallography" by the IUCr. The event was officially supported by the IUCr, the Spanish Ministry of Science and Innovation, the Spanish National Research Council (CSIC) and several Spanish universities (especially Alcalá, Complutense of Madrid, Autonomous of Madrid, UIMP, Oviedo, Cantabria and Barcelona). Special mention must be made of the support received from the BBVA Foundation (which specifically supported the participation of the three 2009 Chemistry Nobel Laureates).
AECID and Metro-Madrid supported the participation of a large number of young researchers from less developed countries. The "Madrid Convention Bureau" supported, in 2005, the preparation of Spain's candidature to organize this unique event. Crystallographers working in Spain, and especially those involved in the Local Organizing Committee, appreciate the support obtained from all these organizations. See also the contents of this link.

The most relevant research groups located in Spain that use Crystallography as a main research tool are listed below. See also the following link. Nine of the groups listed below formed an association in the context of a joint project called The Factory of Crystallization, a collaborative project to create an integrated platform for research and services in crystallization and crystallography. The project was conceived as a setting that combines advanced research on crystallization and crystallography with service delivery in these fields to companies and research groups in the biomedical, pharmacological, biotechnological, nanotechnological, natural or material sciences. The aim was that any group extracting, synthesizing or, in general, developing a new molecule or potentially interesting material could have access, with an adequate level of confidentiality, to the knowledge and technology needed for crystallization, diffraction data collection and structure solution. The project was funded with €5.0 M by the former Spanish Ministry of Education and Science (now Ministry of Science and Innovation) as part of the Consolider-Ingenio/2010 program. The following information corresponds to a relatively splendid stage of Crystallography in Spain (the first decade of the 21st century). Unfortunately, with the passage of time the situation has worsened, so many of the links shown below may no longer be operational. The list of groups shown below may contain involuntary errors or omissions. Groups who would like to be included here should get in contact through this link.

Division for X-ray Diffraction, University of Cádiz, Campus Universitario del Río San Pedro, E-11510 Puerto Real
General Service Unit, Institute for Materials Science of Seville (CSIC-University of Seville), Américo Vespucio 49, Isla de la Cartuja, E-41092 Sevilla
Group of Coordination Chemistry and Structural Analysis, University of Granada, Avda.
Group of Organometallic Chemistry and Homogeneous Catalysis, Institute of Chemical Research (CSIC-Universidad de Sevilla), Américo Vespucio 49, E-41092 Sevilla
Group of Protein Structure, Department of Physical-Chemistry, Biochemistry and Inorganic Chemistry, University of Almería, Edificio Científico Técnico de Química, Ctra.
Laboratory of Crystallographic Studies (LEC), CSIC-University of Granada, Edificio Inst.
X-ray Diffraction Service, University of Málaga, Bulevar Louis Pasteur 33, Campus de Teatinos, Edificio SCAI, Planta 1, B1-04, E-29071 Málaga

Institute of Chemical Synthesis and Homogeneous Catalysis (iSQCH), CSIC-University of Zaragoza, Pedro Cerbuna 12, E-50009 Zaragoza
Department of Physics of Condensed Matter, CSIC-University of Zaragoza, Plaza de San Francisco s/n, E-50009 Zaragoza

Department of Physics, University of Oviedo, Julián Clavería 8, E-33006 Oviedo
Crystallography and Mineralogy, Department of Geology, University of Oviedo, Jesús Arias de Velasco, E-33005 Oviedo
Synthesis, Structure and Technological Application of Materials, Department of Physical and Analytical Chemistry, Faculty of Chemistry, University of Oviedo, Julián Clavería 8, E-33006 Oviedo

Group of High Pressure and Spectroscopy, Faculty of Sciences, University of Cantabria, Avda.
Group of Matter Magnetism, University of Cantabria, Avda.

Laboratory of X-Rays and Molecular Materials, Department of Fundamental Physics II, University of La Laguna, Avda. Astrofísico Francisco Sánchez s/n, E-38204 La Laguna
Some other groups (no web link) from the Dept. Avda. E-47002 Valladolid
Structural Biology Group, Cancer Research Center, CSIC-University of Salamanca, Campus Miguel de Unamuno, E-37007 Salamanca
X-ray Diffraction Service, University of Burgos, Plaza de Misael Bañuelos s/n, E-09001 Burgos
X-ray Diffraction Service, University of Salamanca, Plaza de la Merced s/n, E-37008 Salamanca

Grupo de Química Organometálica y Catálisis, Facultad de Ciencias Químicas, Universidad de Castilla-La Mancha, Avenida de Camilo José Cela 10, E-13071 Ciudad Real

Department of Crystallography, Institute of Material Sciences of Barcelona, CSIC, Campus de la Universidad Autónoma de Barcelona, E-08193 Bellaterra
Department of Chemical Engineering, Polytechnic University of Catalonia, Escola d'Enginyeria Barcelona Est, Campus Diagonal Besos, Building (EEBE) I 2.21, c/ Eduard Maristany 10-14, E-08019 Barcelona
Department of Crystallography, Mineralogy and Mineral Deposits, Faculty of Geology, University of Barcelona, c/ Martí i Franqués s/n, E-08028 Barcelona
Department of Structural Biology, Institute of Molecular Biology of Barcelona, CSIC, Parque Científico de Barcelona, Baldiri i Reixach 15-21, E-08028 Barcelona
Group for Physics and Crystallography of Nanomaterials, Faculty of Chemistry, University Rovira i Virgili, Marcel.li Domingo s/n, Campus Sescelades, E-43007 Tarragona
Unit for X-ray Diffraction, Autonomous University of Barcelona, Campus de la Universidad Autónoma de Barcelona, E-08193 Bellaterra
X-ray Diffraction Unit, Institute of Chemical Research of Catalonia, Avgda.

Metallosupramolecular Chemistry Group (QI5), Universidad de Vigo, Facultad de Química, E-36310 Vigo
Research Group on Molecular and Structural Chemistry (GIQIMO), University of Santiago de Compostela, Campus Universitario Sur, E-15782 Santiago de Compostela
Structural Analysis Unit, Service for Research Support, University of A Coruña, E-15001 A Coruña
Support Center for Science and Technology, University of Vigo, Facultad de Química, E-36310 Vigo
X-ray Unit, University of Santiago de Compostela, Edificio CACTUS, Campus Universitario Sur s/n, E-15782 Santiago de Compostela

Central X-ray Diffraction Unit (no web site), Avda., Campus de Cantoblanco, E-28049 Madrid
Crystallography Group of the UAH, Department of Inorganic Chemistry, University of Alcalá de Henares (UAH), Campus Universitario, E-28871 Alcalá de Henares
Department of Analytical Sciences, Faculty of Sciences, Universidad Nacional de Educación a Distancia (UNED), Senda del Rey 9, E-28080 Madrid
Department of Crystallography and Mineralogy, Faculty of Geological Sciences, University Complutense of Madrid, José Antonio Novais 2, E-28040 Madrid
Department of Crystallography and Structural Biology, Institute of Physical-Chemistry Rocasolano, CSIC, Serrano 119, E-28006 Madrid
Some group (with no web link) at the Department of Inorganic Chemistry I, University Complutense of Madrid, Ciudad Universitaria, E-28040 Madrid
Department of Macromolecular Structures, National Center for Biotechnology, CSIC, Darwin 3, Campus de Cantoblanco, E-28049 Madrid (Structural biology of viral fibres; Cell-Cell and Virus-Cell Interactions)
Group of Structural Biology of Proteins, Centre for Biological Research, CSIC, Ramiro de Maeztu 9, E-28040 Madrid
Institute of Material Sciences of Madrid, CSIC, Cantoblanco, Ctra. de Colmenar Km. 15, E-28049 Madrid (Department of Energy, Environment and Sustainable Technologies; Department of New Architectures in Materials Chemistry)
National Center for Cancer Research, CNIO, Melchor Fernández Almagro 3, E-28029 Madrid (Cell Signalling and Adhesion Group; Crystallography and Protein Engineering Unit; Structural Bases of Genome Integrity Group)
Single Crystal Unit, Faculty of Chemistry, University Complutense of Madrid, Edificio C (Aulario), Planta Sótano, Ciudad Universitaria, E-28040 Madrid

Department of Mining, Geological and Cartographic Engineering, Area of Chemistry, Technical University of Cartagena, Campus Muralla del Mar, E-30202 Cartagena

Group of Physical Properties and Applications of Materials, Public University of Navarre, Edificio Departamental de los Acebos, Campus Arrosadía, E-31006 Pamplona

Biophysics Unit, CSIC-University of the Basque Country, Campus de Leioa, E-48940 Leioa
Department of Mineralogy and Petrology, Faculty of Science and Technology, University of the Basque Country, Campus de Leioa, E-48940 Leioa
Department of Inorganic Chemistry, Faculty of Science and Technology, University of the Basque Country, Campus de Leioa, E-48940 Leioa
Group of Magnetism and Magnetic Materials, Faculty of Science and Technology, University of the Basque Country, Campus de Leioa, E-48940 Leioa
Group of Structural and Dynamical Properties of Solids, Faculty of Science and Technology, University of the Basque Country, Campus de Leioa, E-48940 Leioa
Structural Biology Unit, CIC bioGUNE, Ed. 801 A, Parque Tecnológico de Vizcaya, E-48160 Derio
X-ray Diffraction Service of the University of the Basque Country, Campus de Leioa, E-48940 Leioa

Department of Geology, University of Valencia, Doctor Moliner 50, E-46100 Burjassot
Some group (with no web link) at the Department of Inorganic Chemistry, University of Valencia, Doctor Moliner 50, E-46100 Burjassot
Some group (with no web link) at the Department of Organic Chemistry, University of Valencia, Doctor Moliner 50, E-46100 Burjassot
Some group (with no web link) at the Department of Inorganic and Organic Chemistry, University Jaume I, Campus del Riu Sec, E-12071 Castellón de la Plana
Institute for Biomedicine of Valencia, CSIC, Jaime Roig 11, E-46010 Valencia (Unit for Structural Enzymopathology; Unit for Macromolecular Crystallography; Unit for Signal Transduction)
Molecular Materials Research Group, Department of Chemistry, Physics and Analytics, University Jaume I, Campus del Riu Sec, E-12071 Castellón de la Plana

This page titled 1.12: Crystallography in Spain is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Martín Martínez Ripoll & Félix Hernández Cano via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
InfoPage
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/00%3A_Front_Matter/02%3A_InfoPage
This text is disseminated via the Open Education Resource (OER) LibreTexts Project and, like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such actions.

Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts' web-based origins allow powerful integration of advanced features and new technologies to support learning. The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields) integrated.

The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant Nos. 1246120, 1525057, and 1413739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor the US Department of Education.

Have questions or comments? For information about adoptions or adaptations, contact . More information on our activities can be found via Facebook, Twitter, or our blog.

This text was compiled on 07/05/2023
1.1: Classification of Analytical Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/01%3A_Introduction/1.01%3A_Classification_of_Analytical_Methods
Analytical chemistry has a long history. On the bookshelf of my office, for example, there is a copy of the first American edition of Fresenius's A System of Instruction in Quantitative Chemical Analysis, which was published by John Wiley & Sons in 1886. Nearby are many newer texts, such as Bard and Faulkner's Electrochemical Methods: Fundamentals and Applications, the most recent edition of which was published by Wiley in 2000. In 883 pages, Fresenius's text covers essentially all that was known in the 1880s about analytical chemistry and what we now call classical methods of analysis. Bard and Faulkner's text, which is 864 pages, covers just one category of what we now call modern instrumental methods of analysis. Whether a classical method of analysis or a modern instrumental method of analysis, the species of interest, which we call the analyte, is probed in a way that provides qualitative or quantitative information.

The distinguishing feature of a classical method of analysis is that the principal measurements are observations of reactions (Did a precipitate form? Did the solution change color?) or the measurement of one of a small number of physical properties, such as mass or volume. Because these measurements are not selective for a single analyte, a classical method of analysis usually required extensive work to isolate the analyte of interest from other species that would interfere in the analysis. As we see in , Fresenius's method for determining the amount of nickel in ores required 58 hours, most of which was spent bringing the ore into solution and then isolating the analyte from interferents by a sequence of precipitations and filtrations. The final determination of the amount of nickel in the ore was derived from two measurements of mass: the combined mass of Co and Ni, and the mass of Co. Although of historic interest, we will not consider classical methods of analysis further in this text.

The distinguishing feature of modern instrumental methods of analysis is that they extend measurements to many more physical properties, such as current, potential, the absorption or emission of light, and mass-to-charge ratios, to name a few. Instrumental methods for separating analytes, such as chromatographic separations, and instrumental methods that allow for the simultaneous analysis of multiple analytes make for a much more rapid analysis. By the 1970s, flame atomic absorption spectrometry (FAAS) replaced gravimetry as the standard method for analyzing nickel in ores [see, for example, Van Loon, J. C. Analytical Atomic Absorption Spectroscopy, Academic Press: New York, 1980]. Because FAAS is much more selective than precipitation, there is less need to chemically isolate the analyte; as a result, the time to analyze a single sample decreased to a few hours and the throughput of samples increased to hundreds per day.

This page titled 1.1: Classification of Analytical Methods is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
1.2: Types of Instrumental Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/01%3A_Introduction/1.02%3A_Types_of_Instrumental_Methods
It is useful to organize instrumental methods of analysis into several groups based on the chemical or physical properties that we use to generate a signal that we can measure and relate to the analyte of interest to us. One group of instrumental methods is based on the interaction of photons of electromagnetic radiation with matter, which we call collectively spectroscopy. We can divide spectroscopy into two broad classes of techniques. In one class of techniques there is a transfer of energy between the photon and the sample. Table \(\PageIndex{1}\) provides a list of several representative examples: electron spin resonance, nuclear magnetic resonance, fluorescence spectroscopy, phosphorescence spectroscopy, and atomic fluorescence spectroscopy.

In the second broad class of spectroscopic techniques, the electromagnetic radiation undergoes a change in amplitude, phase angle, polarization, or direction of propagation as a result of its refraction, reflection, scattering, diffraction, or dispersion by the sample. Several representative spectroscopic techniques are listed in Table \(\PageIndex{2}\): nephelometry and turbidimetry.

A second group of instrumental methods is based on the measurement of current, charge, or potential at the surface of an electrode, sometimes while controlling one or both of the other two variables, and sometimes while stirring the solution. provides a visual introduction to these methods.

Our third group of instrumental methods gathers together a variety of other measurements that can provide a useful analytical signal; these are summarized in Table \(\PageIndex{3}\): kinetic methods, flow injection analysis, neutron activation analysis, isotope dilution analysis, thermal gravimetry, differential thermal analysis, differential scanning calorimetry, gas chromatography, liquid chromatography, and supercritical fluid chromatography.

This page titled 1.2: Types of Instrumental Methods is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
1.3: Instruments For Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/01%3A_Introduction/1.03%3A_Instruments_For_Analysis
An early example of a colorimetric analysis is Nessler's method for ammonia, which was introduced in 1856. Nessler found that adding an alkaline solution of HgI2 and KI to a dilute solution of ammonia produced a yellow-to-reddish brown colloid whose color depended on the concentration of ammonia. In addition to the sample, Nessler prepared a series of standard solutions, each containing a known amount of ammonia, and placed each in a glass tube with a flat bottom. Allowing sunlight to pass through the tubes from bottom-to-top, Nessler observed them from above, as seen in . By visually comparing the color of the sample to the colors of the standards, Nessler was able to estimate the concentration of ammonia in the sample.

Nessler's method converts a sample's chemical and/or physical properties—the color that forms when NH3 reacts with HgI2 and KI—into a signal that we can detect, process, and report as a relative measure of the amount of NH3 in the sample. Although we might not think of a Nessler tube as an instrument, the process of probing a sample in a way that converts its chemical or physical properties into a form of information that we can report is the essence of any instrument.

The basic components of an instrument include a probe that interacts with the sample, an input transducer that converts the sample's chemical and/or physical properties into an electrical signal, and a signal processor that converts the electrical signal into a form that an output transducer can convert into a numerical or visual output that we can understand. We can represent this as a sequence of actions that take place within the instrument\[\text{probe} \rightarrow \text{sample} \rightarrow \text{input transducer} \rightarrow \text{raw data} \rightarrow \text{signal processor} \rightarrow \text{output transducer} \nonumber \]and as a general flow of information\[\text{chemical and/or physical information} \rightarrow \text{electrical information} \rightarrow \text{numerical or visual response} \nonumber \]In Nessler's method, the probe is sunlight, the analyst's eye is the input transducer, the raw data is the response of the eye's optic nerve to the attenuation of light, the signal processor is the brain, and the output is a visual report of the sample's color relative to the standards.\[\text{sunlight} \rightarrow \text{sample} \rightarrow \text{eye} \rightarrow \text{response of optic nerve} \rightarrow \text{brain} \rightarrow \text{visual report of color} \nonumber \]As suggested above, information is encoded in two broad ways: as electrical information (such as currents and potentials) and as information in other, non-electrical forms (such as chemical and physical properties).

Nessler's method begins and ends with non-electrical forms of information: the sample has a color and we use that color to report that the concentration of NH3 in our sample is greater than 0.50 mg/L and less than 1.00 mg/L.
Other non-electrical ways to encode information are the observation that a precipitate forms when we add Ag+ to a solution of NaCl, the balance beam scale that my doctor uses to measure my weight, the percentage of light that passes through a sample, and the volume and moles of Cu(NO3)2 in a graduated cylinder.

Although my doctor's balance beam scale encodes my mass by the position of two movable weights along a signal arm (a decidedly non-electrical means of encoding information), the electronic analytical balance that is found in almost all chemistry labs encodes the mass in the form of electrical information. An electromagnet levitates the sample pan above a permanent cylindrical magnet. When we place an object on the sample pan, it displaces the sample pan downward by a force equal to the product of the sample's mass and its acceleration due to gravity. The balance detects this downward movement and generates a counterbalancing force by increasing the current to the electromagnet. The current needed to return the balance to its original position is proportional to the object's mass.

Although we tend to use the terms "weight" and "mass" interchangeably, there is an important distinction between them. Mass is the absolute amount of matter in an object, measured in grams. Weight, W, is a measure of the gravitational force, g, acting on that mass, m:\[W = m \times g \nonumber \]An object has a fixed mass but its weight depends upon the acceleration due to gravity, which varies subtly from location to location.

A balance measures an object's weight, not its mass. Because weight and mass are proportional to each other, we can calibrate a balance using a standard weight whose mass is traceable to the standard prototype for the kilogram. A properly calibrated balance gives an accurate value for an object's mass.

Electrical information comes in three domains: analog, time, and digital. In the analog domain, the signal shows the amplitude of the electrical signal—say current or potential—as a function of an independent variable, which might be wavelength when recording a spectrum, applied potential in a cyclic voltammetry experiment, or time when separating a mixture by gas chromatography. A time domain signal shows the frequency with which the electrical signal rises above or below a threshold value, as when counting the rate at which ionizing radiation, such as alpha or beta particles, is detected by a Geiger counter. Finally, in the digital domain, the signal is a count of discrete events, such as counting the number of drops dispensed by an autotitrator by allowing the drops to disrupt a beam of light.

As defined above, a transducer is a device that converts information from a non-electrical form to an electrical form (the input transducer) or from an electrical form to a non-electrical form (the output transducer). Detector is a much broader term that includes all aspects of the instrument from the input transducer to the output transducer; thus, a visible spectrometer is a detector that uses an input transducer to convert the attenuation of the source radiation to a reported absorbance. A sensor is a detector designed to monitor a particular analyte, such as a pH electrode.

An instrument's output transducer converts the information carried in electrical form into a non-electrical form that we can understand.
Common examples of output transducers, or readout devices, are a simple meter, a digital display, a physical trace of the signal as a function of an independent variable, such as a spectrum or a chromatogram, or a photographic plate.

Many instruments include a computer that provides us with the ability to control the instrument and, perhaps of greater importance, to process the data both by modifying the electrical signal as it passes from the input transducer to the output transducer, and by providing tools for processing the data after it leaves the output transducer.

This page titled 1.3: Instruments For Analysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
1.4: Selecting an Analytical Method
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/01%3A_Introduction/1.04%3A_Selecting_an_Analytical_Method
The analysis of a sample generates a chemical or a physical signal that is proportional to the amount of analyte in the sample. This signal may be anything we can measure, such as the examples described in Section 1.2. It is convenient to divide analytical techniques into two general classes based on whether the signal is directly proportional to the mass or moles of analyte, or is directly proportional to the analyte's concentration.

Consider the two graduated cylinders in , each of which contains a solution of 0.010 M Cu(NO3)2. The cylinder on the left contains 10 mL, or \(1.0 \times 10^{-4}\) moles of Cu2+, and the cylinder on the right contains 20 mL, or \(2.0 \times 10^{-4}\) moles of Cu2+. If a technique responds to the absolute amount of analyte in the sample, then the signal due to the analyte SA is given as\[S_A = k_A n_A \label{totalanalysis} \]where nA is the moles or grams of analyte in the sample, and kA is a proportionality constant. Because the cylinder on the right contains twice as many moles of Cu2+ as the cylinder on the left, analyzing its contents gives a signal twice as large as that for the other cylinder.

A second class of analytical techniques is made up of those that respond to the analyte's concentration, CA \[S_A = k_A C_A \label{concanalysis} \]In this case, an analysis of the contents of the two cylinders gives the same result. As most instruments respond to the analyte's concentration, we will limit ourselves to using Equation \ref{concanalysis} for the remainder of this section.

To select an appropriate analytical method for a particular problem we need to consider our needs and compare them to the strengths and weaknesses of the available analytical methods. If we are screening samples on a production line to determine if an analyte exceeds a threshold so that we can set them aside for a more careful analysis, then we may wish to give more consideration to speed than to accuracy or precision. On the other hand, if our analyte is part of a complex mixture, then we may wish to give more consideration to analytical methods that provide for greater selectivity. Or, if we expect that our samples will vary substantially in the concentration of analyte, then we may give more consideration to an analytical method for which Equation \ref{concanalysis} applies over a wide range of concentrations.

As suggested above, when we choose an analytical method, we match its performance characteristics (or figures of merit) to our needs. Some of these characteristics are quantitative (accuracy, precision, sensitivity, detection limit, selectivity, and dynamic range) and others are more qualitative (robustness, ruggedness, scale of operation, time, and cost).

Accuracy, or bias, is a measure of how close the result of an experiment is to the "true" or expected result. We can express accuracy as an absolute error, e \[e = x - \mu \nonumber \]where \(x\) is the experimental result and \(\mu\) is the expected result, or as a percentage relative error, %er \[\% e_r = \frac {x - \mu} {\mu} \times 100 \nonumber \]A method's accuracy depends on many things, including the signal's source, the value of kA in Equation \ref{concanalysis}, and the ease of handling samples without loss or contamination.

Because it is unlikely that we know the true result, we can use an expected or accepted result to evaluate accuracy. For example, we might use a standard reference material, which has an accepted value for our analyte, to establish the analytical method's accuracy.
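The two error expressions above are simple to evaluate numerically. The following short Python sketch shows the calculation; the experimental result and the accepted value are assumed, illustrative numbers and do not come from the text.

```python
# Hypothetical illustration: absolute and percent relative error for a
# measured result x against an accepted (expected) value mu.
x = 99.2     # experimental result, e.g. mg/L of analyte (assumed value)
mu = 100.0   # accepted or expected result (assumed value)

e = x - mu                          # absolute error, e = x - mu
percent_er = (x - mu) / mu * 100    # percent relative error

print(f"absolute error e = {e:.2f}")            # -0.80
print(f"relative error %er = {percent_er:.2f}%")  # -0.80%
```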
You will find a more detailed treatment of accuracy, including a discussion of sources of error, in Appendix 1.

When we analyze a sample several times, the individual results vary from trial-to-trial. Precision is a measure of this variability. The closer the agreement between individual analyses, the more precise the results. For example, the results shown in the upper half of for the concentration of potassium in a sample of serum are more precise than those in the lower half of . It is important to understand that precision does not imply accuracy. That the data in the upper half of are more precise does not mean that the first set of results is more accurate. In fact, neither set of results may be accurate.

A method's precision depends on several factors, including the uncertainty in measuring the signal and the ease of handling samples reproducibly, and is reported as an absolute standard deviation, s\[s = \sqrt{\frac {\sum_{i = 1}^{n} (X_i - \overline{X})^{2}} {n - 1}} \label{sd} \]or a relative standard deviation, sr\[s_r = \frac {s} {\overline{X}} \label{rsd} \]where \(\overline{X}\) is the average, or mean value of the individual measurements.\[\overline{X} = \frac {\sum_{i = 1}^n X_i} {n} \label{mean} \]Confusing accuracy and precision is a common mistake. See Ryder, J.; Clark, A. U. Chem. Ed. 2002, 6, 1–3, and Tomlinson, J.; Dyson, P. J.; Garratt, J. U. Chem. Ed. 2001, 5, 16–23 for discussions of this and other common misconceptions about the meaning of error. You will find a more detailed treatment of precision in Appendix 1, including a discussion of sources of error.

The ability to demonstrate that two samples have different amounts of analyte is an essential part of many analyses. A method's sensitivity is a measure of its ability to establish that such a difference is significant. Sensitivity is often confused with a method's detection limit, which is the smallest amount of analyte we can determine with confidence. See Pardue, H. L. Clin. Chem. 1997, 43, 1831-1837 for an explanation of why a method's sensitivity is not the same as its detection limit.

Sensitivity is equivalent to the proportionality constant, kA, in Equation \ref{concanalysis} [IUPAC Compendium of Chemical Terminology, Electronic version]. If \(\Delta S_A\) is the smallest difference we can measure between two signals, then the smallest detectable difference in the analyte's concentration is\[\Delta C_A = \frac {\Delta S_A} {k_A} \nonumber \]Suppose, for example, that our analytical signal is a measurement for which the smallest detectable increment is ±0.001 (arbitrary units). If our method's sensitivity is \(0.200 \text{ M}^{-1}\), then our method can conceivably detect a difference in concentration of as little as\[\Delta C_A = \frac {\pm 0.001 } {0.200 \text{ M}^{-1}} = \pm 0.005 \text{ M} \nonumber \]For two methods with the same \(\Delta S_A\), the method with the greater sensitivity—that is, the method with the larger kA—is better able to discriminate between smaller amounts of analyte.
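As a quick numerical check on the precision and sensitivity equations above, here is a short Python sketch. The replicate values are invented for illustration; the sensitivity numbers reproduce the ±0.001 signal increment and 0.200 M⁻¹ sensitivity used in the example above.

```python
import statistics

# Hypothetical replicate results (e.g., mM of potassium in serum); these
# values are illustrative only and are not taken from the figure cited above.
replicates = [4.12, 4.08, 4.15, 4.10, 4.11]

x_bar = statistics.mean(replicates)   # mean, X-bar
s = statistics.stdev(replicates)      # absolute standard deviation (n - 1 denominator)
s_r = s / x_bar                       # relative standard deviation

# Sensitivity example from the text: delta_CA = delta_SA / kA.
delta_SA = 0.001      # smallest detectable signal difference (arbitrary units)
kA = 0.200            # sensitivity, M^-1
delta_CA = delta_SA / kA   # = 0.005 M

print(f"mean = {x_bar:.3f}, s = {s:.4f}, s_r = {s_r:.5f}")
print(f"smallest detectable concentration difference = +/-{delta_CA:.3f} M")
```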
The International Union of Pure and Applied Chemistry (IUPAC) defines a method's detection limit as the smallest concentration or absolute amount of analyte that has a signal significantly larger than the signal from a suitable blank [IUPAC Compendium of Chemical Terminology, Electronic Version]. Although our interest is in the amount of analyte, in this section we will define the detection limit in terms of the analyte's signal. Knowing the signal, we can calculate the analyte's concentration, CA, using Equation \ref{concanalysis}, \(S_A = k_A C_A\), where kA is the method's sensitivity.

Let's translate the IUPAC definition of the detection limit into a mathematical form by letting Smb represent the average signal for a method blank, and letting \(\sigma_{mb}\) represent the method blank's standard deviation. To detect the analyte, its signal must exceed Smb by a suitable amount; thus,\[(S_A)_{DL} = S_{mb} + z \sigma_{mb} \label{detlimit} \]where \((S_A)_{DL}\) is the analyte's detection limit.

The value we choose for z depends on our tolerance for reporting the analyte's concentration even if it is absent from the sample (what is called a type 1 error). Typically, z is set to three, which corresponds to a probability, \(\alpha\), of 0.00135, or 0.135%. As shown in a, there is only a 0.135% probability of detecting the analyte in a sample that actually is analyte-free.

A detection limit also is subject to a type 2 error in which we fail to find evidence for the analyte even though it is present in the sample. Consider, for example, the situation shown in b where the signal for a sample that contains the analyte is exactly equal to (SA)DL. In this case the probability of a type 2 error is 50% because half of the sample's possible signals are below the detection limit. We correctly detect the analyte at the IUPAC detection limit only half the time. The IUPAC definition for the detection limit is the smallest signal for which we can say, at a significance level of \(\alpha\), that an analyte is present in the sample; however, failing to detect the analyte does not mean it is not present in the sample.

The detection limit often is represented, particularly when discussing public policy issues, as a distinct line that separates detectable concentrations of analytes from concentrations we cannot detect. This use of a detection limit is incorrect [Rogers, L. B. J. Chem. Educ. 1986, 63, 3–6]. As suggested by , for an analyte whose concentration is near the detection limit there is a high probability that we will fail to detect the analyte.

An alternative expression for the detection limit, the limit of identification, minimizes both type 1 and type 2 errors [Long, G. L.; Winefordner, J. D. Anal. Chem. 1983, 55, 712A–724A]. The analyte's signal at the limit of identification, (SA)LOI, includes an additional term, \(z \sigma_A\), to account for the distribution of the analyte's signal.\[(S_A)_\text{LOI} = (S_A)_\text{DL} + z \sigma_A = S_{mb} + z \sigma_{mb} + z \sigma_A \label{loi} \]As shown in , the limit of identification provides an equal probability of a type 1 and a type 2 error at the detection limit. When the analyte's concentration is at its limit of identification, there is only a 0.135% probability that its signal is indistinguishable from that of the method blank.

The ability to detect the analyte with confidence is not the same as the ability to report with confidence its concentration, or to distinguish between its concentration in two samples. For this reason the American Chemical Society's Committee on Environmental Analytical Chemistry recommends the limit of quantitation, (SA)LOQ ["Guidelines for Data Acquisition and Data Quality Evaluation in Environmental Chemistry," Anal. Chem. 1980, 52, 2242–2249].\[(S_A)_\text{LOQ} = S_{mb} + 10 \sigma_{mb} \label{loq} \]
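The three signal thresholds defined above differ only in how far above the method blank they sit. The Python sketch below evaluates them side by side; the blank statistics are assumed values chosen for illustration, not data from the text.

```python
# Sketch of the detection limit, limit of identification, and limit of
# quantitation equations above. All input values are assumptions.
S_mb = 0.012       # mean signal for the method blank (assumed)
sigma_mb = 0.002   # standard deviation of the method blank (assumed)
sigma_A = 0.002    # standard deviation of the analyte's signal (assumed)
z = 3              # typical choice, corresponding to alpha = 0.00135

S_DL = S_mb + z * sigma_mb                  # detection limit, (SA)DL
S_LOI = S_mb + z * sigma_mb + z * sigma_A   # limit of identification, (SA)LOI
S_LOQ = S_mb + 10 * sigma_mb                # limit of quantitation, (SA)LOQ

print(f"(SA)DL  = {S_DL:.3f}")
print(f"(SA)LOI = {S_LOI:.3f}")
print(f"(SA)LOQ = {S_LOQ:.3f}")
```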
A method's dynamic range (or linear range) runs from its limit of quantitation (Equation \ref{loq}) to the highest concentration for which the sensitivity, kA, remains constant, resulting in a straight-line relationship between \(S_A\) and \(C_A\). This upper limit is called the limit of linearity, LOL. Between the LOQ and the LOL we can use Equation \ref{concanalysis} to convert a measured signal into the corresponding concentration of the analyte. Above the LOL the relationship between the signal and the analyte's concentration no longer is a straight line.

An analytical method is specific if its signal depends only on the analyte [Persson, B-A; Vessman, J. Trends Anal. Chem. 1998, 17, 117–119; Persson, B-A; Vessman, J. Trends Anal. Chem. 2001, 20, 526–532]. Although specificity is the ideal, few analytical methods are free from interferences. When an interferent, I, contributes to the signal, we expand Equation \ref{totalanalysis} and Equation \ref{concanalysis} to include its contribution to the sample's signal, Ssamp \[S_{samp} = S_A + S_I = k_A C_A + k_I C_I \label{concsamp} \]where SI is the interferent's contribution to the signal, kI is the interferent's sensitivity, and CI is the concentration of interferent in the sample.

Selectivity is a measure of a method's freedom from interferences [Valcárcel, M.; Gomez-Hens, A.; Rubio, S. Trends Anal. Chem. 2001, 20, 386–393]. A method's selectivity for an interferent relative to the analyte is defined by a selectivity coefficient, KA,I\[K_{A,I} = \frac {k_I} {k_A} \label{selectcoef} \]which may be positive or negative depending on the signs of kI and kA. The selectivity coefficient is greater than +1 or less than –1 when the method is more selective for the interferent than for the analyte.

Although kA and kI usually are positive, they can be negative. For example, some analytical methods work by measuring the concentration of a species that remains after it reacts with the analyte. As the analyte's concentration increases, the concentration of the species that produces the signal decreases, and the signal becomes smaller. If the signal in the absence of analyte is assigned a value of zero, then the subsequent signals are negative.

Determining the selectivity coefficient's value is easy if we already know the values for kA and kI. As shown by Example \(\PageIndex{1}\), we also can determine KA,I by measuring Ssamp in the presence of and in the absence of the interferent.

A method for the analysis of Ca2+ in water suffers from an interference in the presence of Zn2+. When the concentration of Ca2+ is 100 times greater than that of Zn2+, an analysis for Ca2+ has a relative error of +0.5%. What is the selectivity coefficient for this method?

Since only relative concentrations are reported, we can arbitrarily assign absolute concentrations. To make the calculations easy, we will let CCa = 100 (arbitrary units) and CZn = 1. A relative error of +0.5% means the signal in the presence of Zn2+ is 0.5% greater than the signal in the absence of Zn2+. Again, we can assign values to make the calculation easier.
If the signal for Ca2+ in the absence of Zn2+ is 100 (arbitrary units), then the signal in the presence of Zn2+ is 100.5. The value of kCa is determined using Equation \ref{concanalysis}\[k_\text{Ca} = \frac {S_\text{Ca}} {C_\text{Ca}} = \frac {100} {100} = 1 \nonumber \]In the presence of Zn2+ the signal is given by Equation \ref{concsamp}; thus\[S_{samp} = 100.5 = k_\text{Ca} C_\text{Ca} + k_\text{Zn} C_\text{Zn} = (1 \times 100) + k_\text{Zn} \times 1 \nonumber \]Solving for kZn gives its value as 0.5. The selectivity coefficient is\[K_\text{Ca,Zn} = \frac {k_\text{Zn}} {k_\text{Ca}} = \frac {0.5} {1} = 0.5 \nonumber \]If you are unsure why, in the above example, the signal in the presence of zinc is 100.5, note that the percentage relative error for this problem is given by\[\frac {\text{obtained result} - 100} {100} \times 100 = +0.5 \% \nonumber \]Solving gives an obtained result of 100.5.

A selectivity coefficient provides us with a useful way to evaluate an interferent's potential effect on an analysis. Solving Equation \ref{selectcoef} for kI \[k_I = K_{A,I} \times k_A \label{ki} \]and substituting in Equation \ref{concsamp} and simplifying gives\[S_{samp} = k_A \{ C_A + K_{A,I} \times C_I \} \label{S_samp} \]An interferent will not pose a problem as long as the term \(K_{A,I} \times C_I\) in Equation \ref{S_samp} is significantly smaller than CA.

Barnett and colleagues developed a method to determine the concentration of codeine (structure shown below) in poppy plants [Barnett, N. W.; Bowser, T. A.; Geraldi, R. D.; Smith, B. Anal. Chim. Acta 1996, 318, 309–317]. As part of their study they evaluated the effect of several interferents. For example, the authors found that equimolar solutions of codeine and the interferent 6-methoxycodeine gave signals, respectively, of 40 and 6 (arbitrary units).

(a) What is the selectivity coefficient for the interferent, 6-methoxycodeine, relative to that for the analyte, codeine?

(b) If we need to know the concentration of codeine with an accuracy of ±0.50%, what is the maximum relative concentration of 6-methoxycodeine that we can tolerate?

(a) The signals due to the analyte, SA, and the interferent, SI, are\[S_A = k_A C_A \quad \quad S_I = k_I C_I \nonumber \]Solving these equations for kA and for kI, and substituting into Equation \ref{selectcoef} gives\[K_{A,I} = \frac {S_I / C_I} {S_A / C_A} \nonumber \]Because the concentrations of analyte and interferent are equimolar (CA = CI), the selectivity coefficient is\[K_{A,I} = \frac {S_I} {S_A} = \frac {6} {40} = 0.15 \nonumber \](b) To achieve an accuracy of better than ±0.50% the term \(K_{A,I} \times C_I\) in Equation \ref{S_samp} must be less than 0.50% of CA; thus\[K_{A,I} \times C_I \le 0.0050 \times C_A \nonumber \]Solving this inequality for the ratio CI/CA and substituting in the value for KA,I from part (a) gives\[\frac {C_I} {C_A} \le \frac {0.0050} {K_{A,I}} = \frac {0.0050} {0.15} = 0.033 \nonumber \]Therefore, the concentration of 6-methoxycodeine must be less than 3.3% of codeine's concentration.

Problems with selectivity also are more likely when the analyte is present at a very low concentration [Rogers, L. B. J. Chem. Educ. 1986, 63, 3–6].
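To make the selectivity arithmetic concrete, the short Python sketch below reproduces the codeine calculation above (signals of 40 and 6 arbitrary units for equimolar codeine and 6-methoxycodeine); it is only a numerical check of the worked example, not new data.

```python
# Numerical check of the codeine selectivity example above.
S_A = 40.0        # signal for the analyte, codeine (arbitrary units)
S_I = 6.0         # signal for the interferent, 6-methoxycodeine
C_A = C_I = 1.0   # equimolar solutions, so the concentrations cancel

K_AI = (S_I / C_I) / (S_A / C_A)   # selectivity coefficient, K_A,I = 0.15

# Maximum interferent-to-analyte concentration ratio for a 0.50% error,
# from K_AI * C_I <= 0.0050 * C_A.
max_ratio = 0.0050 / K_AI          # about 0.033, i.e. 3.3%

print(f"K_A,I = {K_AI:.2f}")
print(f"maximum C_I/C_A = {max_ratio:.3f}")
```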
For a method to be useful it must provide reliable results. Unfortunately, methods are subject to a variety of chemical and physical interferences that contribute uncertainty to the analysis. If a method is relatively free from chemical interferences, we can use it to analyze an analyte in a wide variety of sample matrices. Such methods are considered robust.

Random variations in experimental conditions introduce uncertainty. If a method's sensitivity, k, is too dependent on experimental conditions, such as temperature, acidity, or reaction time, then a slight change in any of these conditions may give a significantly different result. A rugged method is relatively insensitive to changes in experimental conditions.

Another way to narrow the choice of methods is to consider three potential limitations: the amount of sample available for the analysis, the expected concentration of analyte in the samples, and the minimum amount of analyte that will produce a measurable signal. Collectively, these limitations define the analytical method's scale of operations.

We can display the scale of operations visually by plotting the sample's size on the x-axis and the analyte's concentration on the y-axis. For convenience, we divide samples into macro (>0.1 g), meso (10 mg–100 mg), micro (0.1 mg–10 mg), and ultramicro (<0.1 mg) sizes, and we divide analytes into major (>1% w/w), minor (0.01% w/w–1% w/w), trace (\(10^{-7}\)% w/w–0.01% w/w), and ultratrace (<\(10^{-7}\)% w/w) components. Together, the analyte's concentration and the sample's size provide a characteristic description for an analysis. For example, in a micro–trace analysis the sample weighs between 0.1 mg and 10 mg and contains a concentration of analyte between \(10^{-7}\)% w/w and \(10^{-2}\)% w/w.

The diagonal lines connecting the axes show combinations of sample size and analyte concentration that contain the same absolute mass of analyte. As shown in , for example, a 1-g sample that is 1% w/w analyte has the same amount of analyte (10 mg) as a 100-mg sample that is 10% w/w analyte, or a 10-mg sample that is 100% w/w analyte.

We can use to establish limits for analytical methods. If a method's minimum detectable signal is equivalent to 10 mg of analyte, then it is best suited to a major analyte in a macro or meso sample. Extending the method to an analyte with a concentration of 0.1% w/w requires a sample of 10 g, which rarely is practical due to the complications of carrying such a large amount of material through the analysis. On the other hand, a small sample that contains a trace amount of analyte places significant restrictions on an analysis. For example, a 1-mg sample that is \(10^{-4}\)% w/w in analyte contains just 1 ng of analyte. If we isolate the analyte in 1 mL of solution, then we need an analytical method that reliably can detect it at a concentration of 1 ng/mL.

Finally, we can compare analytical methods with respect to their equipment needs, the time needed to complete an analysis, and the cost per sample. Methods that rely on instrumentation are equipment-intensive and may require significant operator training. For example, the graphite furnace atomic absorption spectroscopic method for determining lead in water requires a significant capital investment in the instrument and an experienced operator to obtain reliable results. Other methods, such as titrimetry, require less expensive equipment and less training.

The time to complete an analysis for one sample often is fairly similar from method to method. This is somewhat misleading, however, because much of this time is spent preparing samples, preparing reagents, and gathering together equipment. Once the samples, reagents, and equipment are in place, the sampling rate may differ substantially.
For example, it takes just a few minutes to analyze a single sample for lead using graphite furnace atomic absorption spectroscopy, but several hours to analyze the same sample using gravimetry. This is a significant factor in selecting a method for a laboratory that handles a high volume of samples.

The cost of an analysis depends on many factors, including the cost of equipment and reagents, the cost of hiring analysts, and the number of samples that can be processed per hour. In general, methods that rely on instruments cost more per sample than other methods.

Unfortunately, the design criteria discussed in this section are not mutually independent [Valcárcel, M.; Ríos, A. Anal. Chem. 1993, 65, 781A–787A]. Working with smaller samples or improving selectivity often comes at the expense of precision. Minimizing cost and analysis time may decrease accuracy. Selecting a method requires carefully balancing the various design criteria. Usually, the most important design criterion is accuracy, and the best method is the one that gives the most accurate result. When the need for a result is urgent, as is often the case in clinical labs, analysis time may become the critical factor.

In some cases it is the sample's properties that determine the best method. A sample with a complex matrix, for example, may require a method with excellent selectivity to avoid interferences. Samples in which the analyte is present at a trace or ultratrace concentration usually require a concentration method. If the quantity of sample is limited, then the method must not require a large amount of sample.

This page titled 1.4: Selecting an Analytical Method is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
1.5: Calibration of Instrumental Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/01%3A_Introduction/1.05%3A_Calibration_of_Instrumental_Methods
To standardize an analytical method we also must determine the analyte's sensitivity, kA, in the following equation\[S_{total} = k_A C_A + S_{blank} \label{s_total} \]where \(S_{total}\) is the measured signal, \(C_A\) is the analyte's concentration, and \(S_{blank}\) is the signal in the absence of the analyte. In principle, it is possible to derive the value of kA for any analytical method if we understand fully all the chemical reactions and physical processes responsible for the signal. Unfortunately, such calculations are not feasible if we lack a sufficiently developed theoretical model of the physical processes or if the chemical reactions evince non-ideal behavior. In such situations we must determine the value of kA by analyzing one or more standard solutions, each of which contains a known amount of analyte. In this section we consider several approaches for determining the value of kA. For simplicity we assume that \(S_{blank}\) is accounted for by a proper reagent blank, allowing us to replace \(S_{total}\) in Equation \ref{s_total} with the analyte's signal, SA.\[S_A = k_A C_A \label{sa} \]

The simplest way to determine the value of kA in Equation \ref{sa} is to use a single-point standardization in which we measure the signal for a standard, Sstd, that contains a known concentration of analyte, Cstd. Substituting these values into Equation \ref{sa} and rearranging\[k_A = \frac {S_{std}} {C_{std}} \label{ka} \]gives us the value for kA. Having determined kA, we can calculate the concentration of analyte in a sample by measuring its signal, Ssamp, and calculating CA as\[C_A = \frac {S_{samp}} {k_A} \label{ca} \]

A single-point standardization is the least desirable method for standardizing a method. There are two reasons for this. First, any error in our determination of kA carries over into our calculation of CA. Second, our experimental value for kA is based on a single concentration of analyte. To extend this value of kA to other concentrations of analyte requires that we assume a linear relationship between the signal and the analyte's concentration, an assumption that often is not true [Cardone, M. J.; Palmero, P. J.; Sybrandt, L. B. Anal. Chem. 1980, 52, 1187–1191]. shows how assuming a constant value of kA leads to a determinate error in CA if kA becomes smaller at higher concentrations of analyte. Despite these limitations, single-point standardizations find routine use when the expected range for the analyte's concentrations is small. Under these conditions it often is safe to assume that kA is constant (although you should verify this assumption experimentally). This is the case, for example, in clinical labs where many automated analyzers use only a single standard.
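The single-point calculation reduces to two divisions. The Python sketch below shows the algebra; the signal and concentration values are assumed here for illustration, although they mirror the single-standard Pb2+ example worked later in this section.

```python
# Minimal sketch of a single-point standardization (Equations ka and ca).
# The numerical values are assumed, chosen to mirror the Pb2+ example below.
S_std = 0.474    # signal for the single external standard
C_std = 1.75     # ppb, concentration of analyte in that standard
S_samp = 0.361   # signal for the sample

k_A = S_std / C_std    # sensitivity from the single standard
C_A = S_samp / k_A     # analyte concentration in the sample

print(f"kA = {k_A:.4f} per ppb")   # about 0.2709 ppb^-1
print(f"CA = {C_A:.2f} ppb")       # about 1.33 ppb
```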
The better way to standardize a method is to prepare a series of standards, each of which contains a different concentration of analyte. Standards are chosen such that they bracket the expected range for the analyte's concentration. A multiple-point standardization should include at least three standards, although more are preferable. A plot of Sstd versus Cstd is called a calibration curve. The exact standardization, or calibration relationship, is determined by an appropriate curve-fitting algorithm. Linear regression, which also is known as the method of least squares, is one such algorithm. Its use is covered in Appendix 1.

There are two advantages to a multiple-point standardization. First, although a determinate error in one standard introduces error into the standardization, its effect is minimized by the remaining standards. Second, because we measure the signal for several concentrations of analyte, we no longer must assume kA is independent of the analyte's concentration. Instead, we can construct a calibration curve similar to the "actual relationship" in .

The most common method of standardization uses one or more external standards, each of which contains a known concentration of analyte. We call these standards "external" because they are prepared and analyzed separately from the samples. Appending the adjective "external" to the noun "standard" might strike you as odd at this point, as it seems reasonable to assume that standards and samples are analyzed separately. As we will soon learn, however, we can add standards to our samples and analyze both simultaneously.

With a single external standard we determine kA using Equation \ref{ka} and then calculate the concentration of analyte, CA, using Equation \ref{ca}.

A spectrophotometric method for the quantitative analysis of Pb2+ in blood yields an Sstd of 0.474 for a single standard for which the concentration of lead is 1.75 ppb. What is the concentration of Pb2+ in a sample of blood for which Ssamp is 0.361?

Equation \ref{ka} allows us to calculate the value of kA using the data for the single external standard.\[k_A = \frac {S_{std}} {C_{std}} = \frac {0.474} {1.75 \text{ ppb}} = 0.2709 \text{ ppb}^{-1} \nonumber \]Having determined the value of kA, we calculate the concentration of Pb2+ in the sample of blood using Equation \ref{ca}.\[C_A = \frac {S_{samp}} {k_A} = \frac {0.361} {0.2709 \text{ ppb}^{-1}} = 1.33 \text{ ppb} \nonumber \]

shows a typical multiple-point external standardization. The volumetric flask on the left contains a reagent blank and the remaining volumetric flasks contain increasing concentrations of Cu2+. Shown below the volumetric flasks is the resulting calibration curve. Because this is the most common method of standardization, the resulting relationship is called a normal calibration curve.

When a calibration curve is a straight-line, as it is in , the slope of the line gives the value of kA. This is the most desirable situation because the method's sensitivity remains constant throughout the analyte's concentration range. When the calibration curve is not a straight-line, the method's sensitivity is a function of the analyte's concentration. In , for example, the value of kA is greatest when the analyte's concentration is small and it decreases continuously for higher concentrations of analyte. The value of kA at any point along the calibration curve in is the slope at that point. In either case, a calibration curve allows us to relate Ssamp to the analyte's concentration.

A second spectrophotometric method for the quantitative analysis of Pb2+ in blood has a normal calibration curve for which\[S_{std} = (0.296 \text{ ppb}^{-1} \times C_{std}) + 0.003 \nonumber \]What is the concentration of Pb2+ in a sample of blood if Ssamp is 0.397?

To determine the concentration of Pb2+ in the sample of blood, we replace Sstd in the calibration equation with Ssamp and solve for CA.\[C_A = \frac {S_{samp} - 0.003} {0.296 \text{ ppb}^{-1}} = \frac {0.397 - 0.003} {0.296 \text{ ppb}^{-1}} = 1.33 \text{ ppb} \nonumber \]It is worth noting that the calibration equation in this problem includes an extra term that does not appear in Equation \ref{ca}. Ideally we expect our calibration curve to have a signal of zero when CA is zero. This is the purpose of using a reagent blank to correct the measured signal. The extra term of +0.003 in our calibration equation results from the uncertainty in measuring the signal for the reagent blank and the standards.
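Because the normal calibration curve usually is fit by linear regression, a short Python sketch of a multiple-point external standardization is given below. The standard concentrations and signals are invented values constructed to resemble the Pb2+ calibration equation in the example above, and numpy's polyfit stands in for the least-squares treatment covered in Appendix 1.

```python
import numpy as np

# Hypothetical multiple-point external standardization: the standards below
# are invented values lying on the line S_std = 0.296*C_std + 0.003.
C_std = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])                 # ppb
S_std = np.array([0.003, 0.151, 0.299, 0.447, 0.595, 0.743])     # signals

# Linear least-squares fit; slope is the sensitivity kA, plus an intercept.
kA, intercept = np.polyfit(C_std, S_std, 1)

# Invert the calibration equation for a sample's signal.
S_samp = 0.397
C_A = (S_samp - intercept) / kA

print(f"kA = {kA:.3f} per ppb, intercept = {intercept:.3f}")
print(f"CA = {C_A:.2f} ppb")   # about 1.33 ppb
```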
An external standardization allows us to analyze a series of samples using a single calibration curve. This is an important advantage when we have many samples to analyze. Not surprisingly, many of the most common quantitative analytical methods use an external standardization.

There is a serious limitation, however, to an external standardization. When we determine the value of kA using Equation \ref{ka}, the analyte is present in the external standard's matrix, which usually is a much simpler matrix than that of our samples. When we use an external standardization we assume the matrix does not affect the value of kA. If this is not true, then we introduce a proportional determinate error into our analysis. This is not the case in , for instance, where we show calibration curves for an analyte in the sample's matrix and in the standard's matrix. In this case, using the calibration curve for the external standards leads to a negative determinate error in the analyte's reported concentration. If we expect that matrix effects are important, then we try to match the standard's matrix to that of the sample, a process known as matrix matching. If we are unsure of the sample's matrix, then we must show that matrix effects are negligible or use an alternative method of standardization. Both approaches are discussed in the following section.

The matrix for the external standards in , for example, is dilute ammonia. Because the \(\ce{Cu(NH3)4^{2+}}\) complex absorbs more strongly than Cu2+, adding ammonia increases the signal's magnitude. If we fail to add the same amount of ammonia to our samples, then we will introduce a proportional determinate error into our analysis.

We can avoid the complication of matching the matrix of the standards to the matrix of the sample if we carry out the standardization in the sample. This is known as the method of standard additions.

The simplest version of a standard addition is shown in . First we add a portion of the sample, Vo, to a volumetric flask, dilute it to volume, Vf, and measure its signal, Ssamp. Next, we add a second identical portion of sample to an equivalent volumetric flask along with a spike, Vstd, of an external standard whose concentration is Cstd. After we dilute the spiked sample to the same final volume, we measure its signal, Sspike.

The following two equations relate Ssamp and Sspike to the concentration of analyte, CA, in the original sample.\[S_{samp} = k_A C_A \frac {V_o} {V_f} \label{sa_samp1} \]\[S_{spike} = k_A \left( C_A \frac {V_o} {V_f} + C_{std} \frac {V_{std}} {V_f} \right) \label{sa_spike1} \]As long as Vstd is small relative to Vo, the effect of the standard's matrix on the sample's matrix is insignificant. Under these conditions the value of kA is the same in Equation \ref{sa_samp1} and Equation \ref{sa_spike1}. Solving both equations for kA and equating gives\[\frac {S_{samp}} {C_A \frac {V_o} {V_f}} = \frac {S_{spike}} {C_A \frac {V_o} {V_f} + C_{std} \frac {V_{std}} {V_f}} \label{method_one} \]which we can solve for the concentration of analyte, CA, in the original sample.

A third spectrophotometric method for the quantitative analysis of Pb2+ in blood yields an Ssamp of 0.193 when a 1.00 mL sample of blood is diluted to 5.00 mL. A second 1.00 mL sample of blood is spiked with 1.00 μL of a 1560-ppb Pb2+ external standard and diluted to 5.00 mL, yielding an Sspike of 0.419.
What is the concentration of Pb2+ in the original sample of blood?We begin by making appropriate substitutions into Equation \ref{method_one} and solving for CA. Note that all volumes must be in the same units; thus, we first convert Vstd from 1.00 μL to \(1.00 \times 10^{-3} \text{ mL}\).\[\frac {0.193} {C_A \frac {1.00 \text{ mL}} {5.00 \text{ mL}}} = \frac {0.419} {C_A \frac {1.00 \text{ mL}} {5.00 \text{ mL}} + 1560 \text{ ppb} \frac {1.00 \times 10^{-3} \text{ mL}} {5.00 \text{ mL}}} \nonumber \]\[\frac {0.193} {0.200C_A} = \frac {0.419} {0.200C_A + 0.3120 \text{ ppb}} \nonumber \]\[0.0386C_A + 0.0602 \text{ ppb} = 0.0838 C_A \nonumber \]\[0.0452 C_A = 0.0602 \text{ ppb} \nonumber \]\[C_A = 1.33 \text{ ppb} \nonumber \]The concentration of Pb2+ in the original sample of blood is 1.33 ppb.It also is possible to add the standard addition directly to the sample, measuring the signal both before and after the spike. In this case the final volume after the standard addition is Vo + Vstd and Equation \ref{sa_samp1}, Equation \ref{sa_spike1}, and Equation \ref{method_one} become\[S_{samp} = k_A C_A \label{sa_samp2} \]\[S_{spike} = k_A \left( C_A \frac {V_o} {V_o + V_{std}} + C_{std} \frac {V_{std}} {V_o + V_{std}} \right) \label{sa_spike2} \]\[\frac {S_{samp}} {C_A} = \frac {S_{spike}} {C_A \frac {V_o} {V_o + V_{std}} + C_{std} \frac {V_{std}} {V_o + V_{std}}} \label{method_two} \]A fourth spectrophotometric method for the quantitative analysis of Pb2+ in blood yields an Ssamp of 0.712 for a 5.00 mL sample of blood. After spiking the blood sample with 5.00 μL of a 1560-ppb Pb2+ external standard, an Sspike of 1.546 is measured. What is the concentration of Pb2+ in the original sample of blood?\[\frac {0.712} {C_A} = \frac {1.546} {C_A \frac {5.00 \text{ mL}} {5.005 \text{ mL}} + 1560 \text{ ppb} \frac {5.00 \times 10^{-3} \text{ mL}} {5.005 \text{ mL}}} \nonumber \]\[\frac {0.712} {C_A} = \frac {1.546} {0.9990C_A + 1.558 \text{ ppb}} \nonumber \]\[0.7113C_A + 1.109 \text{ ppb} = 1.546C_A \nonumber \]\[C_A = 1.33 \text{ ppb} \nonumber \]The concentration of Pb2+ in the original sample of blood is 1.33 ppb.We can adapt a single-point standard addition into a multiple-point standard addition by preparing a series of samples that contain increasing amounts of the external standard. shows two ways to plot a standard addition calibration curve based on Equation \ref{sa_spike1}. In a we plot Sspike against the volume of the spikes, Vstd. If kA is constant, then the calibration curve is a straight line. It is easy to show that the x-intercept is equivalent to –CAVo/Cstd.Beginning with Equation \ref{sa_spike1}, show that the equations in a for the slope, the y-intercept, and the x-intercept are correct.We begin by rewriting Equation \ref{sa_spike1} as\[S_{spike} = \frac {k_A C_A V_o} {V_f} + \frac {k_A C_{std}} {V_f} \times V_{std} \nonumber \]which is in the form of the equation for a straight line\[y = y\text{-intercept} + \text{slope} \times x \nonumber \]where y is Sspike and x is Vstd. The slope of the line, therefore, is kACstd/Vf and the y-intercept is kACAVo/Vf.
The x-intercept is the value of x when y is zero, or\[0 = \frac {k_A C_A V_o} {V_f} + \frac {k_A C_{std}} {V_f} \times x\text{-intercept} \nonumber \]\[x\text{-intercept} = - \frac {k_A C_A V_o / V_f} {k_A C_{std} / V_f} = - \frac {C_A V_o} {C_{std}} \nonumber \]Because we know the volume of the original sample, Vo, and the concentration of the external standard, Cstd, we can calculate the analyte’s concentration from the x-intercept of a multiple-point standard addition.A fifth spectrophotometric method for the quantitative analysis of Pb2+ in blood uses a multiple-point standard addition based on Equation \ref{sa_spike1}. The original blood sample has a volume of 1.00 mL and the standard used for spiking the sample has a concentration of 1560 ppb Pb2+. All samples were diluted to 5.00 mL before measuring the signal. A calibration curve of Sspike versus Vstd has the following equation\[S_{spike} = 0.266 + 312 \text{ mL}^{-1} \times V_{std} \nonumber \]What is the concentration of Pb2+ in the original sample of blood?To find the x-intercept we set Sspike equal to zero.\[0 = 0.266 + 312 \text{ mL}^{-1} \times V_{std} \nonumber \]Solving for Vstd, we obtain a value of \(-8.526 \times 10^{-4} \text{ mL}\) for the x-intercept. Substituting the x-intercept’s value into the equation from a\[-8.526 \times 10^{-4} \text{ mL} = - \frac {C_A V_o} {C_{std}} = - \frac {C_A \times 1.00 \text{ mL}} {1560 \text{ ppb}} \nonumber \]and solving for CA gives the concentration of Pb2+ in the blood sample as 1.33 ppb.Since we construct a standard additions calibration curve in the sample, we cannot use the calibration equation for other samples. Each sample, therefore, requires its own standard additions calibration curve. This is a serious drawback if you have many samples. For example, suppose you need to analyze 10 samples using a five-point calibration curve. For a normal calibration curve you need to analyze only 15 solutions (five standards and ten samples). If you use the method of standard additions, however, you must analyze 50 solutions (each of the ten samples is analyzed five times, once before spiking and after each of four spikes).We can use the method of standard additions to validate an external standardization when matrix matching is not feasible. First, we prepare a normal calibration curve of Sstd versus Cstd and determine the value of kA from its slope. Next, we prepare a standard additions calibration curve using Equation \ref{sa_spike1}, plotting the data as shown in b. The slope of this standard additions calibration curve provides an independent determination of kA. If there is no significant difference between the two values of kA, then we can ignore the difference between the sample’s matrix and that of the external standards. When the values of kA are significantly different, then using a normal calibration curve introduces a proportional determinate error.To use an external standardization or the method of standard additions, we must be able to treat identically all samples and standards. When this is not possible, the accuracy and precision of our standardization may suffer. For example, if our analyte is in a volatile solvent, then its concentration will increase if we lose solvent to evaporation. Suppose we have a sample and a standard with identical concentrations of analyte and identical signals. If both experience the same proportional loss of solvent, then their respective concentrations of analyte and signals remain identical.
In effect, we can ignore evaporation if the samples and the standards experience an equivalent loss of solvent. If an identical standard and sample lose different amounts of solvent, however, then their respective concentrations and signals are no longer equal. In this case a simple external standardization or standard addition is not possible.We can still complete a standardization if we reference the analyte’s signal to a signal from another species that we add to all samples and standards. The species, which we call an internal standard, must be different from the analyte.Because the analyte and the internal standard receive the same treatment, the ratio of their signals is unaffected by any lack of reproducibility in the procedure. If a solution contains an analyte of concentration CA and an internal standard of concentration CIS, then the signals due to the analyte, SA, and the internal standard, SIS, are\[S_A = k_A C_A \nonumber \]\[S_{IS} = k_{IS} C_{IS} \nonumber \]where \(k_A\) and \(k_{IS}\) are the sensitivities for the analyte and the internal standard, respectively. Taking the ratio of the two signals gives the fundamental equation for an internal standardization.\[\frac {S_A} {S_{IS}} = \frac {k_A C_A} {k_{IS} C_{IS}} = K \times \frac {C_A} {C_{IS}} \label{sa_sis} \]Because K is a ratio of the analyte’s sensitivity and the internal standard’s sensitivity, it is not necessary to determine independently values for either kA or kIS.In a single-point internal standardization, we prepare a single standard that contains the analyte and the internal standard, and use it to determine the value of K in Equation \ref{sa_sis}.\[K = \left( \frac {C_{IS}} {C_A} \right)_{std} \times \left( \frac {S_A} {S_{IS}} \right)_{std} \label{K} \]Having standardized the method, the analyte’s concentration is given by\[C_A = \frac {C_{IS}} {K} \times \left( \frac {S_A} {S_{IS}} \right)_{samp} \nonumber \]A sixth spectrophotometric method for the quantitative analysis of Pb2+ in blood uses Cu2+ as an internal standard. A standard that is 1.75 ppb Pb2+ and 2.25 ppb Cu2+ yields a ratio of (SA/SIS)std of 2.37. A sample of blood spiked with the same concentration of Cu2+ gives a signal ratio, (SA/SIS)samp, of 1.80. What is the concentration of Pb2+ in the sample of blood? Solution: Equation \ref{K} allows us to calculate the value of K using the data for the standard\[K = \left( \frac {C_{IS}} {C_A} \right)_{std} \times \left( \frac {S_A} {S_{IS}} \right)_{std} = \frac {2.25 \text{ ppb } \ce{Cu^{2+}}} {1.75 \text{ ppb } \ce{Pb^{2+}}} \times 2.37 = 3.05 \frac {\text{ppb } \ce{Cu^{2+}}} {\text{ppb } \ce{Pb^{2+}}} \nonumber \]The concentration of Pb2+, therefore, is\[C_A = \frac {C_{IS}} {K} \times \left( \frac {S_A} {S_{IS}} \right)_{samp} = \frac {2.25 \text{ ppb } \ce{Cu^{2+}}} {3.05 \frac {\text{ppb } \ce{Cu^{2+}}} {\text{ppb } \ce{Pb^{2+}}}} \times 1.80 = 1.33 \text{ ppb } \ce{Pb^{2+}} \nonumber \]A single-point internal standardization has the same limitations as a single-point normal calibration. To construct an internal standard calibration curve we prepare a series of standards, each of which contains the same concentration of internal standard and a different concentration of analyte.
Under these conditions a calibration curve of (SA/SIS)std versus CA is linear with a slope of K/CIS.Although the usual practice is to prepare the standards so that each contains an identical amount of the internal standard, this is not a requirement.A seventh spectrophotometric method for the quantitative analysis of Pb2+ in blood gives a linear internal standards calibration curve for which\[\left( \frac {S_A} {S_{IS}} \right)_{std} = (2.11 \text{ ppb}^{-1} \times C_A) - 0.006 \nonumber \]What is the ppb Pb2+ in a sample of blood if (SA/SIS)samp is 2.80?To determine the concentration of Pb2+ in the sample of blood we replace (SA/SIS)std in the calibration equation with (SA/SIS)samp and solve for CA.\[C_A = \frac {\left( \frac {S_A} {S_{IS}} \right)_{samp} + 0.006} {2.11 \text{ ppb}^{-1}} = \frac {2.80 + 0.006} {2.11 \text{ ppb}^{-1}} = 1.33 \text{ ppb } \ce{Pb^{2+}} \nonumber \]The concentration of Pb2+ in the sample of blood is 1.33 ppb.In some circumstances it is not possible to prepare the standards so that each contains the same concentration of internal standard. This is the case, for example, when we prepare samples by mass instead of volume. We can still prepare a calibration curve, however, by plotting \((S_A / S_{IS})_{std}\) versus CA/CIS, giving a linear calibration curve with a slope of K.You might wonder if it is possible to include an internal standard in the method of standard additions to correct for both matrix effects and uncontrolled variations between samples; well, the answer is yes as described in the paper “Standard Dilution Analysis,” the full reference for which is Jones, W. B.; Donati, G. L.; Calloway, C. P.; Jones, B. T. Anal. Chem. 2015, 87, 2321-2327.This page titled 1.5: Calibration of Instrumental Methods is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
10.1: Emission Spectroscopy Based on Flame and Plasma Sources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/10%3A_Atomic_Emission_Spectrometry/10.01%3A_Emission_Spectroscopy_Based_on_Plasma_Sources
An analyte in an excited state possesses an energy, E2, that is greater than its energy when it is in a lower energy state, E1. When the analyte returns to its lower energy state—a process we call relaxation—the excess energy, \(\Delta E\), is\[\Delta E=E_{2}-E_{1} \nonumber \]There are several ways in which an atom may end up in an excited state, including thermal energy, which is the focus of this chapter. The amount of time an atom, A, spends in its excited state—what we call the excited state's lifetime—is short, typically \(10^{-5}\) to \(10^{-9}\) s for an electronic excited state. Relaxation of the atom's excited state, A*, occurs through several mechanisms, including collisions with other species in the sample and the emission of photons. In the first process, which we call nonradiative relaxation, the excess energy is released as heat.\[A^{*} \longrightarrow A+\text { heat } \nonumber \]In the second mechanism, the excess energy is released as a photon of electromagnetic radiation.\[A^{*} \longrightarrow A+h \nu \nonumber \]The release of a photon following thermal excitation is called emission. The focus of this chapter is on the emission of ultraviolet and visible radiation following the thermal excitation of atoms. Atomic emission spectroscopy has a long history. Qualitative applications based on the color of flames were used in the smelting of ores as early as 1550 and were more fully developed around 1830 with the observation of atomic spectra generated by flame emission and spark emission [Dawson, J. B. J. Anal. At. Spectrosc. 1991, 6, 93–98]. Quantitative applications based on the atomic emission from electric sparks were developed by Lockyer in the early 1870s and quantitative applications based on flame emission were pioneered by Lundegardh in 1930. Atomic emission based on emission from a plasma was introduced in 1964.Atomic emission occurs when a valence electron in a higher energy atomic orbital returns to a lower energy atomic orbital. shows a portion of the energy level diagram for sodium, which consists of a series of discrete lines at wavelengths that correspond to the difference in energy between two atomic orbitals.The intensity of an atomic emission line, Ie, is proportional to the number of atoms, \(N^*\), that populate the excited state\[I_{e}=k N^* \label{10.1} \]where k is a constant that accounts for the efficiency of the transition. If a system of atoms is in thermal equilibrium, the population of excited state i is related to the total concentration of atoms, N, by the Boltzmann distribution. For many elements at temperatures of less than 5000 K the Boltzmann distribution is approximated as\[N^* = N\left(\frac{g_{i}}{g_{0}}\right) e^{-E_i / k T} \label{10.2} \]where gi and g0 are statistical factors that account for the number of equivalent energy levels for the excited state and the ground state, Ei is the energy of the excited state relative to the ground state, E0, k is Boltzmann's constant (\(1.3807 \times 10^{-23}\) J/K), and T is the temperature in Kelvin. From Equation \ref{10.2} we expect that excited states with lower energies have larger populations and more intense emission lines. We also expect emission intensity to increase with temperature. The emission spectrum for sodium is shown in . An atomic emission spectrometer is similar in design to the instrumentation for atomic absorption.
In fact, it is easy to adapt most flame atomic absorption spectrometers for atomic emission by turning off the hollow cathode lamp and monitoring the difference between the emission intensity when aspirating the sample and when aspirating a blank. Many atomic emission spectrometers, however, are dedicated instruments designed to take advantage of features unique to atomic emission, including the use of plasmas, arcs, sparks, and lasers as atomization and excitation sources, and an enhanced capability for multielemental analysis.Atomization and excitation in flame atomic emission is accomplished with the same nebulization and spray chamber assembly used in atomic absorption (see Chapter 9). The burner head consists of a single or multiple slots, or a Meker-style burner. Older atomic emission instruments often used a total consumption burner in which the sample is drawn through a capillary tube and injected directly into the flame.A Meker burner is similar to the more common Bunsen burner found in most laboratories; it is designed to allow for higher temperatures and for a larger diameter flame.A plasma is a hot, partially ionized gas that contains an abundant concentration of cations and electrons. The plasma used in atomic emission is formed by ionizing a flowing stream of argon gas, producing argon ions and electrons. A plasma’s high temperature results from resistive heating as the electrons and argon ions move through the gas. Because a plasma operates at a much higher temperature than a flame, it provides for a better atomization efficiency and a higher population of excited states.A schematic diagram of the inductively coupled plasma source (ICP) is shown in . The ICP torch consists of three concentric quartz tubes, surrounded at the top by a radio-frequency induction coil. The sample is mixed with a stream of Ar using a nebulizer, and is carried to the plasma through the torch’s central capillary tube. Plasma formation is initiated by a spark from a Tesla coil. An alternating radio-frequency current in the induction coil creates a fluctuating magnetic field that induces the argon ions and the electrons to move in a circular path. The resulting collisions with the abundant unionized gas give rise to resistive heating, providing temperatures as high as 10000 K at the base of the plasma, and between 6000 and 8000 K at a height of 15–20 mm above the coil, where emission usually is measured. At these high temperatures the outer quartz tube must be thermally isolated from the plasma. This is accomplished by the tangential flow of argon shown in the schematic diagram. Samples are brought into the ICP using the same basic types of nebulization described in Chapter 8 for flame atomic absorption spectroscopy.An alternative to the inductively coupled plasma source is the direct current (dc) plasma jet, one example of which is illustrated in . The argon plasma (shown here in blue) forms between two graphite anodes and a tungsten cathode. The sample is aspirated into the plasma's excitation region where it undergoes atomization, excitation, and emission at temperatures of 5000 K.One advantage of atomic emission over atomic absorption is the ease of analyzing samples for multiple analytes. This additional capability arises because atomic emission, unlike atomic absorption, does not need an analyte-specific source of radiation. The two most common types of spectrometers are sequential and multichannel. 
In a sequential spectrometer the instrument has a single detector and uses the monochromator to move from one emission line to the next. A multichannel spectrometer uses the monochromator to disperse the emission across a field of detectors, each of which measures the emission intensity at a different wavelength.A sequential instrument uses a programmable scanning monochromator, such as those described in Chapter 7, to rapidly move the monochromator's grating over wavelength regions that are not of interest, and then pauses and scans slowly over the emission lines of the analytes. Sampling rates of 300 determinations per hour are possible with this configuration. Another option, which is less common, is to move the exit slit and the detector across the monochromator's focal plane, pausing and recording the emission at the desired wavelengths.Another approach to a multielemental analysis is to use a multichannel instrument that allows us to monitor simultaneously many analytes. A simple design for a multichannel spectrometer, shown in , couples a monochromator with multiple detectors that are positioned in a semicircular array around the monochromator at positions that correspond to the wavelengths for the analytes. A sample throughput of 3000 determinations per hour is possible using a multichannel ICP.Another option for a multichannel instrument takes advantage of the charge-injection device, or CID, as a detector (see Chapter 7 for discussion of the charge-coupled device, another type of charge-transfer device used as a detector). Light from the plasma source is dispersed across the CID in two dimensions. The surface of the CID has in excess of 90000 detecting elements, or pixels, which allows for a resolution between detecting elements on the order of 0.04 nm. Light from the atomic emission source is distributed across the detector's surface by a diffraction grating such that each element of interest is detected using its own set of pixels, called a read window. shows that individual read windows consist of a set of detecting elements, nine of which collect photons from the spectral line and 30 of which provide a measurement of the source's background.Atomic emission is used widely for the analysis of trace metals in a variety of sample matrices. The development of a quantitative atomic emission method requires several considerations, including choosing a source for atomization and excitation, selecting a wavelength and slit width, preparing the sample for analysis, minimizing spectral and chemical interferences, and selecting a method of standardization.Except for the alkali metals, detection limits when using an ICP are significantly better than those obtained with flame emission (Table \(\PageIndex{1}\)). Plasmas also are subject to fewer spectral and chemical interferences. For these reasons a plasma emission source is usually the better choice.The choice of wavelength is dictated by the need for sensitivity and the need to avoid interferences from the emission lines of other constituents in the sample. Because an analyte's atomic emission spectrum has an abundance of emission lines—particularly when using a high temperature plasma source—it is inevitable that there will be some overlap between emission lines. For example, an analysis for Ni using the atomic emission line at 349.30 nm is complicated by the atomic emission line for Fe at 349.06 nm.A narrower slit width provides better resolution, but at the cost of less radiation reaching the detector.
The easiest approach to selecting a wavelength is to record the sample's emission spectrum and look for an emission line that provides an intense signal and is resolved from other emission lines.Flame and plasma sources are best suited for samples in solution and in liquid form. Although a solid sample can be analyzed by directly inserting it into the flame or plasma, it usually is first brought into solution by digestion or extraction.The most important spectral interference is broad, background emission from the flame or plasma and emission bands from molecular species. This background emission is particularly severe for flames because the temperature is insufficient to break down refractory compounds, such as oxides and hydroxides. Background corrections for flame emission are made by scanning over the emission line and drawing a baseline. Because a plasma's temperature is much higher, a background interference due to molecular emission is less of a problem. Although emission from the plasma's core is strong, it is insignificant at a height of 10–30 mm above the core where measurements normally are made.Flame emission is subject to the same types of chemical interferences as atomic absorption; they are minimized using the same methods: by adjusting the flame's composition and by adding protecting agents, releasing agents, or ionization suppressors. An additional chemical interference results from self-absorption. Because the flame's temperature is greatest at its center, the concentration of analyte atoms in an excited state is greater at the flame's center than at its outer edges. If an excited state atom in the flame's center emits a photon, then a ground state atom in the cooler, outer regions of the flame may absorb the photon, which decreases the emission intensity. For higher concentrations of analyte, self-absorption may invert the center of the emission band.Chemical interferences when using a plasma source generally are not significant because the plasma's higher temperature limits the formation of nonvolatile species. For example, \(\text{PO}_4^{3-}\) is a significant interferent when analyzing samples for Ca2+ by flame emission, but has a negligible effect when using a plasma source. In addition, the high concentration of electrons from the ionization of argon minimizes ionization interferences.From Equation \ref{10.1} we know that emission intensity is proportional to the population of the analyte's excited state, \(N^*\). If the flame or plasma is in thermal equilibrium, then the excited state population is proportional to the analyte's total population, N, through the Boltzmann distribution (Equation \ref{10.2}).A calibration curve for flame emission usually is linear over two to three orders of magnitude, with ionization limiting linearity when the analyte's concentration is small and self-absorption limiting linearity at higher concentrations of analyte. When using a plasma, which suffers from fewer chemical interferences, the calibration curve often is linear over four to five orders of magnitude and is not affected significantly by changes in the matrix of the standards.Emission intensity is affected significantly by many parameters, including the temperature of the excitation source and the efficiency of atomization. An increase in temperature of 10 K, for example, produces a 4% increase in the fraction of Na atoms in the 3p excited state, an uncertainty in the signal that may limit the use of external standards.
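The 4% figure quoted above is easy to verify from Equation \ref{10.2}. The Python sketch below is not part of the original text: it compares the Boltzmann factor for the sodium 3p excited state at two temperatures 10 K apart. The excitation energy is taken from the 589 nm sodium emission line, and the 2500 K flame temperature is an assumed, typical value rather than one given in this chapter.

```python
# Hypothetical illustration of Equation 10.2: how a 10 K change in source
# temperature shifts the excited-state population for the Na 3p state. The
# statistical factor g_i/g_0 cancels when we take a ratio of populations, so
# only the exponential term matters here. T = 2500 K is an assumed value.
import math

h = 6.626e-34        # Planck's constant, J s
c = 2.998e8          # speed of light, m/s
k = 1.3807e-23       # Boltzmann's constant, J/K

E = h * c / 589e-9   # energy of the Na 3p excited state (589 nm line), J

def boltzmann_factor(T):
    """Exponential part of Equation 10.2, exp(-E_i / kT)."""
    return math.exp(-E / (k * T))

T = 2500.0           # assumed flame temperature, K
increase = boltzmann_factor(T + 10) / boltzmann_factor(T) - 1
print(f"fractional increase in N*: {increase:.1%}")   # roughly 4%
```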
The method of internal standards is used when the variations in source parameters are difficult to control. To compensate for changes in the temperature of the excitation source, the internal standard is selected so that its emission line is close to the analyte’s emission line. In addition, the internal standard should be subject to the same chemical interferences to compensate for changes in atomization efficiency. To accurately correct for these errors the analyte and internal standard emission lines are monitored simultaneously.This page titled 10.1: Emission Spectroscopy Based on Flame and Plasma Sources is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
10.2: Emission Spectroscopy Based on Arc and Spark Sources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/10%3A_Atomic_Emission_Spectrometry/10.02%3A_Emission_Spectroscopy_Based_on_Arc_and_Spark_Sources
An arc source consists of two electrodes separated by a gap of up to 20 mm (see for one configuration). A potential of 50 V (or more) is applied and a continuous current in the range of 2–30 A is maintained throughout the analysis. If the sample is a metal, then it can be fashioned into the electrodes. For nonmetallic samples, the electrodes typically are fashioned from graphite and a cup-like depression is drilled into one of the electrodes. The sample is ground into a powder and packed into the sample cup. The plasma generated by an arc source typically has a temperature of 4000 K to 5000 K and has an abundance of emission lines for the analyte with a relatively small background emission.Unlike an arc source, which generates a continuous emission of electromagnetic radiation, a spark source generates a series of short emissions, each lasting on the order of a few µs. The sample serves as one of the two electrodes, with the other electrode fashioned from tungsten (see ). The two electrodes are separated by a gap of 3–6 mm. A potential as small as 300–500 V, or as large as 10–20 kV, is applied across the gap. The frequency of the spark is in the range of 100–500 per second. The temperature within the plasma can be quite high, which gives rise not only to emission lines from the atoms, but also to emission from ions formed in the plasma.For both the arc source and the spark source, emission from the plasma is collected and analyzed using the same types of optical benches discussed in the previous section on atomic emission from flames and plasma sources. shows an emission spectrum for a sample of the alkaline earth metals, which shows a single intense emission line for Ca at 422.673 nm and a single intense emission line for Sr at 460.7331 nm. Mg exhibits three closely spaced emission lines at 516.7322 nm, 517.2684 nm, and 518.3604 nm. Finally, Ba has a single strong emission line at 553.5481 nm, but also many less intense emission lines above 600 nm. The presence of faint but measurable emission lines can create complications when trying to identify the elements present in a sample.
11.1: General Features of Atomic Mass Spectrometry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/11%3A_Atomic_Mass_Spectrometry/11.01%3A_General_Features_of_Atomic_Mass_Spectrometry
In mass spectrometry—whether of atoms, which is covered in this chapter, or of molecules, which is covered in Chapter 20—we convert the analyte into ions and then separate these ions based on the ratio of their masses to their charges. In this section we give careful attention to what we mean by mass, by charge, and by mass-to-charge ratio. We also give brief consideration to how we generate and measure ions, topics covered in greater detail in subsequent sections.We trace the modern era of chemistry to John Dalton's development of atomic theory, which made three hypotheses: that each element is composed of tiny, indivisible particles called atoms; that every compound contains atoms of its elements combined in a fixed ratio; and that atoms are neither created nor destroyed in a chemical reaction.Dalton's first hypothesis simply recognized the atom as the basic building block of chemistry. Water, for example, is made from atoms of hydrogen and oxygen. The second hypothesis recognizes that for every compound there is a fixed combination of atoms. Regardless of its source (rain, tears, or a bottle of Evian) a molecule of water always consists of two hydrogen atoms for every atom of oxygen. Dalton's third hypothesis is a statement that atoms are conserved in a reaction; this is more commonly known as the conservation of mass.Although Dalton believed that atoms were indivisible, we know now that they are made from three smaller subatomic particles: the electron, the proton, and the neutron. The atom, however, remains the smallest division of matter with distinct chemical properties.Electrons, Protons, and Neutrons. The characteristic properties of electrons, protons, and neutrons are shown in Table \(\PageIndex{1}\).The proton and the neutron make up the atom's nucleus, which is located at the center of the atom and has a radius of approximately \(5 \times 10^{-3} \text{ pm}\). The remainder of the atom, which has a radius of approximately 100 pm, is mostly empty space in which the electrons are free to move. Of the three subatomic particles, only the electron and the proton carry a charge, which we can express as a relative unit charge, such as \(+1\) or \(-2\), or as an absolute charge in Coulombs. Because elements have no net charge (that is, they are neutral), the number of electrons and protons in an element must be the same.Atomic Numbers. Why is an atom of carbon different from an atom of hydrogen or helium? One possible explanation is that carbon and hydrogen and helium have different numbers of electrons, protons, or neutrons; Table \(\PageIndex{2}\) provides the relevant numbers.Note that although Table \(\PageIndex{2}\) shows that a helium atom has two neutrons, an atom of hydrogen or carbon has three possibilities for the numbers of neutrons. It is even possible for a hydrogen atom to exist without a neutron. Clearly the number of neutrons is not crucial to determining if an atom is carbon, hydrogen, or helium. Although hydrogen, helium, and carbon have different numbers of electrons, the number is not critical to an element's identity. For example, it is possible to strip an electron away from helium to form a helium ion with a charge of \(+1\) that has the same number of electrons as hydrogen; nevertheless, it is still helium.What makes an atom carbon is the presence of six protons, whereas every atom of hydrogen has one proton and every atom of helium has two protons. The number of protons in an atom is called its atomic number, which we represent as Z.Atomic Mass and Isotopes. Protons and neutrons are of similar mass and much heavier than electrons (see Table \(\PageIndex{1}\)); thus, most of an atom's mass is in its nucleus.
Because not all of an element’s atoms necessarily have the same number of neutrons, it is possible for two atoms of an element to differ in mass. For this reason, the sum of an atom’s protons and neutrons is known as its mass number (A). Carbon, for example, can have a mass number of 12, 13, or 14 (six protons and six, seven, or eight neutrons), and hydrogen can have a mass number of 1, 2, or 3 (one proton and zero, one, or two neutrons).Atoms of the same element (same Z), but with a different number of neutrons (different A) are called isotopes. Hydrogen, for example has three isotopes (see Table \(\PageIndex{2}\)). The isotope with 0 neutrons is the most abundant, accounting for 99.985% of all stable hydrogen atoms, and is known, somewhat self-referentially, as hydrogen. Deuterium, which accounts for 0.015% of all stable hydrogen atoms, has 1 neutron. The isotope of hydrogen with two neutrons is called tritium. Because tritium is radioactive it is unstable and disappears with time.The usual way to represent isotopes is with the symbol \(^A _Z X\) where X is the atomic symbol for the element. The three isotopes of hydrogen, which has an elemental symbol of H, are \(^1 _1 \text{H}\), \(^2 _1 \text{H}\), and \(^3 _1 \text{H}\). Because the elemental symbol (X) and the atomic number (Z) provide redundant information, we often omit the atomic number; thus, deuterium becomes \(^2 \text{H}\). Unlike hydrogen, the isotopes of other elements do not have specific names. Instead they are named by taking the element’s name and appending the atomic mass. For example, the isotopes of carbon are called carbon-12, carbon-13, and carbon-14.Individual atoms weigh very little, typically about \(10^{-24} \text{ g}\) to \(10^{-22} \text{ g}\). This amount is so small that there is no easy way to measure the mass of a single atom. To assign masses to atoms it is necessary to assign a mass to one atom and to report the masses of all other atoms relative to that absolute standard. By agreement, atomic mass is stated in terms of atomic mass units (amu) or Daltons (Da), where 1 amu and 1 Da are defined as 1/12 of the mass of an atom of carbon-12. The atomic mass of carbon-12, therefore, is exactly 12 amu. The atomic mass of carbon-13 is 13.00335 amu because the mass of an atom of carbon-13 is \(1.0836125 \times\) greater than the mass of an atom of carbon-12.If you calculate the masses of carbon-12 and carbon-13 by adding together the masses of each isotope’s electrons, neutrons, and protons from Table \(\PageIndex{1}\) you will obtain a mass ratio of 1.08336, not 1.0836125. The reason for this is that the masses in Table \(\PageIndex{1}\) are for “free” electrons, protons, and neutrons; that is, for electrons, protons, and neutrons that are not in an atom. When an atom forms, some of the mass is lost. “Where does it go?,” you ask. Remember Einstein and \(E = mc^2\)? Mass can be converted to energy and the lost mass is the nuclear binding energy that holds the nucleus together.Average Atomic Mass. Because carbon exists in several isotopes, the atomic mass of an “average” carbon atom is not exactly 12 amu. Instead it is usually reported on periodic tables as 12.01 or 12.011, values that are closer to 12.0 because 98.90% of all carbon atoms are carbon-12. The IUPAC's Commission on Isotopic Abundances and Atomic Weights currently reports its mass as [12.0096, 12.0116] amu where the values in the brackets are the lower and the upper estimates for the average mass in a variety of naturally occurring materials. 
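Before working through the examples that follow, it may help to see the weighted-average idea as a short calculation. The Python sketch below is not part of the original text; it reproduces carbon's tabulated average atomic mass from the isotopic masses and abundances quoted above, under the assumption that only carbon-12 and carbon-13 contribute (the trace of carbon-14 is negligible).

```python
# A quick numerical check (not from the original text) of the weighted average
# that gives carbon's tabulated atomic mass of ~12.011 amu. Assumes only the
# two stable isotopes contribute; carbon-14 is present only in trace amounts.

isotopes = [
    # (atomic mass in amu, fractional abundance)
    (12.000000, 0.9890),   # carbon-12 (exactly 12 amu by definition)
    (13.003350, 0.0110),   # carbon-13
]

average_mass = sum(mass * abundance for mass, abundance in isotopes)
print(f"average atomic mass of carbon = {average_mass:.3f} amu")  # ~12.011 amu
```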
As shown in the following example, if you know the percent abundance and atomic masses of an element's isotopes, then you can calculate its average atomic mass.The element magnesium, Mg, has three stable isotopes with the following atomic masses and percent abundances: 23.994 amu (78.70%), 24.9938 amu (10.13%), and 25.9898 amu (11.17%). Calculate the average atomic mass for magnesium.To find the average atomic mass we multiply each isotope's atomic mass by its fractional abundance (the decimal equivalent of its percent abundance) and add together the results; thus avg. amu = (0.7870)(23.994 amu) + (0.1013)(24.9938 amu) + (0.1117)(25.9898 amu) = 24.32 amu.As the next example shows, we also can work such problems in reverse, using an element's average atomic mass and the atomic masses of its isotopes to find each isotope's percent abundance.The element gallium, Ga, has two naturally occurring isotopes. The isotope \(^{69} \text{Ga}\) has an atomic mass of 68.926 amu and the isotope \(^{71} \text{Ga}\) has an atomic mass of 70.926 amu. The average atomic mass for gallium is 69.723 amu. Find the percent abundances for gallium's two isotopes.If we let x be the fractional abundance of \(^{69} \text{Ga}\), then the fractional abundance of \(^{71} \text{Ga}\) is 1 – x (that is, the total amounts of \(^{69} \text{Ga}\) and \(^{71} \text{Ga}\) must add up to one). Using the same general approach as Example \(\PageIndex{1}\), we find that
69.723 amu = (x)(68.926 amu) + (1 – x)(70.926 amu)
69.723 amu = 68.926x amu + 70.926 amu – 70.926x amu
2.000x amu = 1.203 amu
x = 0.6015
1 – x = 1 – 0.6015 = 0.3985
Thus, 60.15% of naturally occurring gallium is \(^{69} \text{Ga}\) and 39.85% is \(^{71} \text{Ga}\).Although many periodic tables report atomic masses to two decimal places—the periodic table I consult most frequently, for example, gives the average atomic mass of carbon as 12.01 amu—the high resolving power of some mass spectrometers allows us to report masses to three or four decimal places.As we will learn later, a mass spectrometer separates ions on the basis of their mass-to-charge ratio (m/z), and not on their mass only or their charge only. As most ions that form during mass spectrometry are singly charged, spectra are often reported using masses (m) instead of mass-to-charge ratios; be sure to remain alert for this when looking at mass spectra.This page titled 11.1: General Features of Atomic Mass Spectrometry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
11.2: Mass Spectrometers
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/11%3A_Atomic_Mass_Spectrometry/11.02%3A_Mass_Spectrometers
A mass spectrometer has three essential needs: a means for producing ions, in this case (mostly) singly charged atoms; a means for separating these ions in space or in time by their mass-to-charge ratios; and a means for counting the number of ions for each mass-to-charge ratio. provides a general view of a mass spectrometer in the same way that we first introduced optical instruments in Chapter 7. The ionization of the sample is analogous to the source of photons in optical spectroscopy as it generates the particles (ions, instead of photons) that ultimately make up the measured signal. The separation of the resulting ions by their mass-to-charge ratios, which is accomplished using a mass analyzer, is analogous to the role of a monochromator in optical spectroscopy. The means for counting ions serves the same role as, for example, a photomultiplier tube in optical spectroscopy. Note that the mass spectrometer is held under vacuum as this allows the ions to travel great distances without undergoing collisions that might alter their charge or energy.The most common means for generating ions are plasmas of various sorts, lasers, electrical sparks, and other ions. We will give greater attention to these in the next several sections as we consider specific examples of atomic mass spectrometry.The transducer for mass spectrometry must be able to report the number of ions that emerge from the mass analyzer. Here we consider two common types of transducers.In Chapter 7 we introduced the photomultiplier tube as a way to convert photons into electrons, amplifying the signal so that a single photon produces \(10^6\) to \(10^7\) electrons, which generates a measurable current. An electron multiplier serves the same role in mass spectrometry. shows two versions of this transducer. The electron multiplier in uses a set of individual dynodes. When an ion strikes the first dynode, it generates several electrons, each of which is passed along to the next dynode before arriving at a collecting plate where the current is measured. The result is an amplification, or gain, in the signal of approximately \(10^7 \times\). The electron multiplier in uses a horn-shaped cylinder—typically made from glass coated with a thin layer of a semiconducting material—whose surface acts as a single, continuous dynode. When an ion strikes the continuous dynode it generates several electrons that are reflected toward the collector plate where the current is measured. The result is an amplification of \(10^5 \text{ to } 10^8 \times\).A Faraday cup, as its name suggests, is a simple device shaped like a cup. Ions enter the cup where they strike a collector electrode. A current is directed to the collector plate that is sufficient to neutralize the charge of the ions. The magnitude of this current is proportional to the number of ions. A Faraday cup has the advantage of simplicity, but is less sensitive than an electron multiplier because it lacks the amplification provided by the dynodes.Before we can detect the ions, we need to separate them so that we can generate a spectrum that shows the intensity of ions as a function of their mass-to-charge ratio. In this section we consider the three most common mass analyzers for atomic mass spectrometry.The quadrupole mass analyzer is the most important of the mass analyzers included in this chapter: it is compact in size, low in cost, easy to use, and easy to maintain.
As shown in , a quadrupole mass analyzer consists of four cylindrical rods, two of which are connected to the positive terminal of a variable direct current (dc) power supply and two of which are connected to the power supply's negative terminal; the two positive rods are positioned opposite of each other and the two negative rods are positioned opposite of each other. Each pair of rods is also connected to a variable alternating current (ac) source operated such that the alternating currents are 180° out-of-phase with each other. An ion beam from the source is drawn into the channel between the quadrupoles and, depending on the applied dc and ac voltages, ions with only one mass-to-charge ratio successfully travel the length of the mass analyzer and reach the transducer; all other ions collide with one of the four rods and are destroyed.To understand how a quadrupole mass analyzer achieves this separation of ions, it helps to consider the movement of an ion relative to just two of the four rods, as shown in for the poles that carry a positive dc voltage. When the ion beam enters the channel between the rods, the ac voltage causes the ion to begin to oscillate. If, as in the top diagram, the ion is able to maintain a stable oscillation, it will pass through the mass analyzer and reach the transducer. If, as in the middle diagram, the ion is unable to maintain a stable oscillation, then the ion eventually collides with one of the rods and is destroyed. When the rods have a positive dc voltage, as they do here, ions with larger mass-to-charge ratios will be slow to respond to the alternating ac voltage and will pass through to the transducer. The result is shown in the figure at the bottom (and repeated in ) where we see that ions with a sufficiently large mass-to-charge ratio successfully pass through to the transducer; ions with smaller mass-to-charge ratios do not. In this case, the quadrupole mass analyzer acts as a high-pass filter.We can extend this to the behavior of the ions when they interact with rods that carry a negative dc voltage. In this case, the ions are attracted to the rods, but those ions that have a sufficiently small mass-to-charge ratio are able to respond to the alternating current's voltage and remain in the channel between the rods. The ions with larger mass-to-charge ratios move more sluggishly and eventually collide with one of the rods. As shown in , in this case, the quadrupole mass analyzer acts as a low-pass filter. Together, as we see in , a quadrupole mass analyzer operates as both a high-pass and a low-pass filter, allowing a narrow band of mass-to-charge ratios to pass through to the transducer. By varying the applied dc voltage and the applied ac voltage, we can obtain a full mass spectrum.Quadrupole mass analyzers provide a modest mass-to-charge resolution of about 1 amu and extend to \(m/z\) ratios of approximately 2000. Quadrupole mass analyzers are particularly useful for sources based on plasmas.In a time-of-flight mass analyzer, ions are created in small clusters by applying a periodic pulse of energy to the sample using a laser beam or a beam of energetic particles to ionize the sample. The small cluster of ions is then drawn into a tube by applying an electric field and then allowed to drift through the tube in the absence of any additional applied field; the tube, for obvious reasons, is called a drift tube.
All of the ions in the cluster enter the drift tube with the same kinetic energy, KE, which means, given\[\text{KE} = \frac{1}{2} m v^2 \label{kineticenergy} \]that the square of an ion's velocity is inversely proportional to the ion's mass. As a result, lighter ions move more quickly than heavier ions. Flight times are typically less than 30 µs. A time-of-flight mass analyzer provides better resolution than a quadrupole mass analyzer, but is limited to sources that can be pulsed.In a double-focusing mass analyzer, two mechanisms are used to focus a beam of ions onto the transducer. One of the mechanisms is an electrostatic analyzer that serves to confine the kinetic energy of the ions to a narrow range of energies. The second mechanism is a magnetic sector analyzer that uses an applied magnetic field to separate the ions by their mass-to-charge ratio. The combination of the two analyzers allows for a significant improvement in resolution. More details on this type of mass analyzer are included in Chapter 20.This page titled 11.2: Mass Spectrometers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
11.3: Inductively Coupled Plasma Mass Spectrometer
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/11%3A_Atomic_Mass_Spectrometry/11.03%3A_Inductively_Coupled_Plasma_Mass_Spectrometer
In Chapter 10 we introduced the inductively coupled plasma (ICP) as a source for atomic emission. The plasma in ICP is formed by ionizing a flowing stream of argon gas, producing argon ions and electrons. The sample is introduced into the plasma where the high operating temperature of 6000–8000 K is sufficient to atomize and ionize the sample. In optical ICP we measure the emission of photons from the atoms and ions that are in excited states. In ICP-MS we use the plasma as a source of ions that we can send to a mass spectrometer for analysis.An ICP torch operates at room pressure and at an elevated temperature, and a mass spectrometer, as noted in Section 11.2, operates under a vacuum and at room temperature. This difference in pressure and temperature means that a coupling together of these two instruments requires an interface that can bring the pressure and temperature in line with the demands of the mass spectrometer. provides a schematic diagram of a typical ICP-MS instrument with the ICP torch on the right and the mass spectrometer's quadrupole mass analyzer and a continuous electron multiplier on the left. In between the two is a two-stage interface. Note that none of the components in are drawn to scale.The first stage of the interface consists of two cone-shaped openings: a sampler cone and a skimmer cone. The hot plasma from the ICP torch enters the first stage of the interface through the sampler cone, which is a pin-hole with a diameter of approximately 1 mm. Samples in solution form are drawn directly into the ICP torch using a nebulizer. Solid samples are vaporized using a laser (a process called laser ablation) and the vapor is drawn directly into the ICP torch.A pump is used to drop the pressure in the first stage to approximately 1 torr. The expansion of the plasma as it enters the first stage results in some cooling of the plasma. The skimmer cone allows a small portion of the plasma in the first stage to pass into the second stage, which is held at the mass spectrometer's operating pressure of approximately \(10^{-5}\) torr. A series of ion lenses is used to narrow the conical dispersion of the plasma, to isolate positive ions from electrons, neutral species, and photons—all of which will generate a signal if they reach the transducer—and to focus the ion beam onto the quadrupole's entrance. shows an example of an ICP-MS spectrum for the analysis of a metal coating using laser ablation to volatilize the sample. The quadrupole mass analyzer operates over a mass-to-charge range of approximately 3 to 300 and can resolve lines that differ by \(\pm1 \text{ m/z}\). Data are collected either by scanning the quadrupole to provide a survey spectrum of all ions generated in the plasma, as is the case in , or by peak hopping in which we gather data for just a few discrete mass-to-charge ratios, adjusting the quadrupole so that it passes only a single mass-to-charge ratio and counting the ions for a set period of time before moving to the next mass-to-charge ratio.An ICP-MS spectrum is much simpler than the corresponding ICP atomic emission spectrum because each element in the latter has many emission lines and because the plasma itself has many emission lines. Still, an ICP-MS is not free from interferences, the two most important of which are isobaric ions and polyatomic ions.Isobaric Ions.
Iso- means same and -baric means weight; thus, isobaric means same weight and refers to two (or more) species that have—within the resolution of the mass spectrometer—identical weights and that both contribute to the same peak in the mass spectrum. The source of this interference is the existence of isotopes. For example, the most abundant ions for argon and for calcium are 40Ar and 40Ca, and, given the resolving power of a quadrupole mass analyzer, the two ions appear as a single peak at \(m/z = 40\) even though the mass of 40Ar is 39.962383 amu and the mass of 40Ca is 39.962591 amu. We can correct for this interference because the second most abundant isotope of calcium, 44Ca, does not share a \(m/z\) with argon (or with another element). shows the ICP-MS spectrum for a sample that contains calcium and argon, and Example \(\PageIndex{1}\) shows how we can use this spectrum to determine the contribution of each element.For the spectrum in , the intensity at \(m/z = 40\) is 972.07 cps and the intensity at \(m/z = 44\) is 18.77 cps. Given that the isotopic abundance of 40Ca is 96.941% and the isotopic abundance of 44Ca is 2.086%, what are the counts-per-second at \(m/z = 40\) for Ca and for Ar?Given that only 44Ca contributes to the peak at \(m/z = 44\) we can use the relative abundances of 40Ca and 44Ca to determine the expected contribution of 40Ca to the total intensity at \(m/z = 40\).\[18.77 \text{ cps} \times \frac{96.941}{2.086} = 872.28 \text{ cps} \nonumber \]Subtracting this result from the total intensity gives the intensity at \(m/z = 40\) for argon as\[972.07 \text{ cps} - 872.28 \text{ cps} = 99.79 \text{ cps} \nonumber \]Polyatomic Ions. Compensating for isobaric ions is relatively straightforward because we can rely on the known isotopic abundances of the elements. A more difficult problem is an interference between the isotope of an elemental analyte and a polyatomic ion that has the same mass. Such polyatomic ions may arise from the sample's matrix or from the plasma. For example, the ion 40Ar16O+ has a mass-to-charge ratio of 56, which overlaps with the peak for 56Fe, the most abundant isotope of iron. Although we could choose to monitor iron at a different mass-to-charge ratio, we will lose sensitivity as we are using a less abundant isotope. Corrections can be made using the method outlined in Example \(\PageIndex{1}\), although it may require using multiple peaks, which increases the uncertainty of the final result.A matrix effect occurs when the sample's matrix affects the relationship between the signal and the concentration of the analyte. Matrix effects are common in ICP-MS and may lead to either a suppression or an enhancement in the signal. Although not always well understood, matrix effects likely result from the way easily ionized elements in the matrix affect the ability to ionize other elements. Matrix matching, using the method of standard additions, or using an internal standard can help minimize matrix effects for quantitative work.ICP-MS finds application for analytes in a wide variety of matrices, including both solutions and solids. Solution samples with high concentrations of dissolved ions may present problems due to the deposition of the salts onto the sampler and skimmer cones, which reduces the size of the pinhole that provides entry into the interface between the ICP torch and the mass spectrometer.
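Returning to the isobaric-ion example above, the correction is simple enough to script. The Python sketch below is not part of the original text; the intensities and isotopic abundances are the ones quoted in that example, and the variable names are illustrative.

```python
# Sketch (not from the original text) of the isobaric-ion correction worked in
# the Ca/Ar example above: use the interference-free 44Ca peak and the known
# isotopic abundances to estimate how much of the m/z = 40 peak belongs to 40Ca.

I_40_total = 972.07      # measured intensity at m/z = 40 (40Ca + 40Ar), cps
I_44 = 18.77             # measured intensity at m/z = 44 (44Ca only), cps
abundance_Ca40 = 96.941  # isotopic abundance of 40Ca, %
abundance_Ca44 = 2.086   # isotopic abundance of 44Ca, %

I_40_Ca = I_44 * (abundance_Ca40 / abundance_Ca44)  # expected 40Ca contribution
I_40_Ar = I_40_total - I_40_Ca                      # remainder assigned to 40Ar

print(f"40Ca: {I_40_Ca:.2f} cps, 40Ar: {I_40_Ar:.2f} cps")  # ~872 cps and ~100 cps
```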
The use of laser ablation makes it possible to analyze surfaces—such as glasses, metals, and ceramics—without additional sample preparation.Qualitative and Semiquantitative Applications. One of the strengths of ICP-MS is its ability to provide a survey scan, such as that in , that allows for the identification of the elements present in a sample. Analysis of a single standard that contains known concentrations of these elements provides a rough estimate of their concentrations in the sample.Quantitative Analysis. For a more accurate and precise quantitative analysis, one can prepare multiple external standards and construct a calibration curve, which typically is linear across approximately six orders of magnitude with detection limits of less than 1 ppb. Including an internal standard in the external standards can help reduce matrix effects. The ideal internal standard will not produce isobaric ions and its primary ionization potential should be similar to that for the analyte; when working with several analytes, it may be necessary to choose a different internal standard for each analyte.Isotope Ratios. An important advantage of ICP-MS over other analytical methods is its ability to monitor multiple isotopes for a single element.This page titled 11.3: Inductively Coupled Plasma Mass Spectrometer is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
11.4: Other Forms of Atomic Mass Spectrometry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/11%3A_Atomic_Mass_Spectrometry/11.04%3A_Spark_Source_Mass_Spectrometry
Although ICP-MS is the most widely used method of atomic mass spectrometry, there are other forms of atomic mass spectrometry, three of which we highlight here.In SSMS, a solid sample is vaporized using a spark source, as described in Chapter 10.2 for atomic emission. Because the spark is generated in an evacuated housing, the interface between the spark source and the mass spectrometer is simpler. Because the spark generates ions with a large distribution of kinetic energies, a quadrupole mass analyzer is not practicable; instead, the mass spectrum is recorded using a double-focusing mass analyzer (see Chapter 20 for more details about this type of mass spectrometer). One advantage of the double-focusing mass analyzer is that it is capable of resolving small differences in masses. For example, in ICP-MS the peaks for 56Fe+ and the polyatomic ion 40Ar16O+ overlap, appearing as a single peak. A double-focusing mass analyzer can separate these two ions, which have, respectively, masses of 55.934942 amu and 55.957298 amu.A glow discharge source generates ions in a manner similar to that used to generate the emission of photons in a hollow cathode lamp (see Chapter 9.2 for a discussion of the hollow cathode lamp). The sample serves as the cathode in a cell that contains a very low pressure of argon gas. The application of a high voltage pulse between the cathode and an anode that also is in the cell converts some of the Ar to Ar+ ions, which then collide with the cathode, sputtering some of the solid sample into a mixture of gas-phase atoms and ions, the latter of which are drawn into the mass spectrometer for analysis.When analyzing a solid sample, we often are interested in how its composition varies either across the surface or as a function of depth. We can gather information across a surface if we can focus the ion source to a small spot and then raster that spot across the surface, and we can gather information as a function of depth if we sputter away a portion of the surface. See Chapter 21 for a discussion of two such techniques: secondary ion mass spectrometry (SIMS) and laser microprobe mass spectrometry.This page titled 11.4: Other Forms of Atomic Mass Spectrometry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
302
12.1: Fundamental Principles
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/12%3A_Atomic_X-Ray_Spectrometry/12.01%3A_Fundamental_Principles
In Chapter 6 we introduced the electromagnetic spectrum and the characteristic properties of photons, such as the wavelengths, the frequencies, and the energies of ultraviolet, visible, and infrared light. The wavelength range for photons of X-ray radiation extends from approximately 0.01 nm to 10 nm. Although we are used to reporting a photon's wavelength in nanometers, for historical reasons the wavelength of an X-ray photon usually is reported in angstroms (for which the symbol is Å) where 1 Å = 0.1 nm; thus the wavelength range of 0.01 nm to 10 nm for X-ray radiation also is expressed as 0.1 Å to 100 Å. This range of wavelengths corresponds to a range of frequencies from approximately \(3 \times 10^{19} \text{ s}^{-1}\) to \(3 \times 10^{16} \text{ s}^{-1}\), and a range of energies from approximately \(2 \times 10^{-14} \text{ J}\) to \(2 \times 10^{-17} \text{ J}\). There are three routine ways to generate X-rays, each of which is covered in this section: we can bombard a suitable metal with a beam of high-energy electrons, we can use one X-ray to stimulate the emission of additional X-rays through fluorescence, and we can use a radioactive isotope that emits X-rays as it decays. An electron beam is created by heating a tungsten wire filament to a temperature at which it releases electrons. These electrons are pulled toward a metal target by applying an accelerating voltage between the metal target and the tungsten wire. The result is the broad continuum of X-ray emission in . The source of this continuous emission spectrum is the reduction in the kinetic energy of the electrons as they collide with the metal target. The loss of kinetic energy results in the production of photons over a broad range of wavelengths and is known as Bremsstrahlung, or braking radiation. In earlier chapters we divided the sources of photons into two broad groups: continuous sources, such as a tungsten lamp, that produce photons at all wavelengths between a lower limit and an upper limit, and line sources, such as a hollow cathode lamp, that produce photons for one or more discrete wavelengths. The sources used to generate X-rays also generate continuum and/or line spectra. The lower wavelength limit for X-ray emission, identified here as \(\lambda_0\), corresponds to the maximum possible loss of kinetic energy, KE, which is equal to\[KE = \frac{hc}{\lambda_0} = Ve \label{lmin} \]where h is Planck's constant, c is the speed of light, V is the accelerating voltage, and e is the charge on the electron. The product of the accelerating voltage and the charge on the electron is the kinetic energy of the electrons. Solving Equation \ref{lmin} for \(\lambda_0\) gives\[\lambda_0 = \frac{hc}{Ve} = \frac{12.398 \text{ kV Å}}{V} \label{lambdamin2} \]where \(\lambda_0\) is in angstroms and V is in kilovolts. Note that Equation \ref{lmin} and Equation \ref{lambdamin2} do not include any terms that depend on the target metal, which means that for any accelerating voltage, \(\lambda_0\) is the same for all metal targets. Table \(\PageIndex{1}\) gives values of \(\lambda_0\) that span the range of accelerating voltages in . If we apply a sufficiently large accelerating voltage, then the emission spectrum will consist of both a continuum spectrum and a line spectrum, as we see in with molybdenum as the target metal. The spectrum consists of both a continuum similar to that in , and two lines, one at a wavelength of 0.63 Å and one at a wavelength of 0.71 Å.
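Before considering the source of these two lines, note that Equation \ref{lambdamin2} is easy to evaluate for any accelerating voltage. The short R sketch below computes \(\lambda_0\) for a few illustrative voltages (chosen for illustration rather than taken from Table \(\PageIndex{1}\)) and converts each wavelength into the corresponding maximum photon energy.

V       <- c(20, 35, 50)           # accelerating voltages in kV (illustrative values)
lambda0 <- 12.398 / V              # short-wavelength limit, lambda0, in angstroms

h       <- 6.626e-34               # Planck's constant, J s
c_light <- 2.998e8                 # speed of light, m/s
E       <- h * c_light / (lambda0 * 1e-10)   # photon energy in joules (1 angstrom = 1e-10 m)

data.frame(V_kV = V, lambda0_A = round(lambda0, 3), E_J = signif(E, 3))

As expected, \(\lambda_0\) depends only on the accelerating voltage and not on the target metal.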
The source of these lines is the emission of X-rays from excited state ions that form when a sufficiently high-energy electron from the electron beam removes an electron from an atomic orbital close to the nucleus. As electrons in atomic orbitals at a greater distance from the nucleus drop into the atomic orbital with a vacancy, they release their extra energy as a photon. Although the background emission from the continuum is the same for all metal targets, the energies of the lines have values that are characteristic for different metals because the energy to remove an electron varies from element to element, increasing with atomic number. For example, an accelerating voltage of at least\[V = \frac{12.398 \text{ kV Å}}{0.61 \text{ Å}} = 20 \text{ kV} \nonumber \]is needed to generate the line spectrum for molybdenum in . The characteristic emission lines for molybdenum in are identified as \(K_{\alpha}\) and \(K_{\beta}\), a notation with which you may not be familiar. The simplified energy level diagram in will help us understand this notation. Each arrow in this energy-level diagram shows a transition in which an electron moves from an orbital at greater distance from the nucleus to an orbital closer to the nucleus. The letters K, L, and M correspond to the principal quantum number n, which has values of 1, 2, 3... that indicate the initial vacancy created by the collision of the electron beam with the target metal. The Greek symbols \(\alpha\), \(\beta\), and \(\gamma\) indicate the source of the electron that fills this vacancy in terms of its change in the principal quantum number, \(\Delta n\). An electron moving from n = 2 to n = 1 and an electron moving from n = 4 to n = 3 have the same designation of \(\alpha\). The emission line in identified as \(K_{\beta}\), therefore, is the result of an electron in the n = 3 shell moving into a vacancy in the n = 1 shell (the K shell). Why is this a simplified energy-level diagram? For each n > 1 there is more than one atomic orbital. When n = 2 there are three energy levels: one that corresponds to l = 0, one that corresponds to l = 1 and ml = 0, and one that corresponds to l = 1 and ml = ±1. The allowed transitions to the n = 1 energy levels require a change in the value for l; thus, we expect to find two emission lines from n = 2 to n = 1 instead of the one shown in . These two lines, which we can identify as \(\text{K}_{\alpha 1}\) and \(\text{K}_{\alpha 2}\), generally are sufficiently close in value that they are not resolved in the X-ray emission spectrum. For example, \(\text{K}_{\alpha 1} = 0.709\) Å and \(\text{K}_{\alpha 2} = 0.714\) Å for molybdenum. You can find a table of X-ray emission lines in many online sources. When an atom in an excited state emits a photon as a means of returning to a lower energy state, how we describe the process depends on the source of energy that created the excited state. When excitation is the result of thermal energy, we call the process atomic emission. When excitation is the result of the absorption of a photon, we call the process atomic fluorescence. In X-ray fluorescence, excitation is brought about using photons from a source of continuous X-ray radiation. More details on X-ray fluorescence are provided later in this chapter. Atoms that have the same number of protons but a different number of neutrons are isotopes. To identify an isotope we use the notation \({}_Z^A E\), where E is the element's atomic symbol, Z is the element's atomic number, and A is the element's atomic mass number.
Although an element's different isotopes have the same chemical properties, their nuclear properties are not identical. The most important difference between isotopes is their stability. The nuclear configuration of a stable isotope remains constant with time. Unstable isotopes, however, disintegrate spontaneously, emitting radioactive decay particles as they transform into a more stable form. An element's atomic number, Z, is equal to the number of protons and its atomic mass number, A, is equal to the sum of the number of protons and neutrons. We represent an isotope of carbon-13 as \(_{6}^{13} \text{C}\) because carbon has six protons and seven neutrons. Sometimes we omit Z from this notation—identifying the element and the atomic number is repetitive because all isotopes of carbon have six protons and any atom that has six protons is an isotope of carbon. Thus, 13C and C–13 are alternative notations for this isotope of carbon. Radioactive particles can decay in several ways, one of which results in the emission of X-rays. For example, 55Fe can capture an electron and undergo a process in which a proton becomes a neutron, becoming 55Mn and releasing the excess energy as a \(\text{K}_{\alpha}\) X-ray. We will not give further consideration to radioactive sources of atomic X-ray emission; see Chapter 32, however, for a further discussion of radioactive methods of analysis. shows a portion of molybdenum's X-ray absorption spectrum over the same range of wavelengths as shown in for its emission spectrum. Both spectra are relatively simple: the emission spectrum consists of two lines superimposed on a continuum background, and the absorption spectrum consists of a single feature, identified here as the K edge. If an X-ray photon is of sufficient energy, then its absorbance by an atom results in the ejection of an electron from one of the atom's innermost atomic orbitals, which you may recognize as the production of a photoelectron. For molybdenum, a wavelength of 0.62 Å (an energy of 20.0 keV) is needed to eject a photoelectron from the K shell (n = 1). At this wavelength the probability of absorption is at its greatest. At shorter wavelengths (greater energies) there is sufficient energy to eject the electron; the probability of absorption, however, decreases and the relative absorbance decreases slowly. The abrupt decrease in absorbance for wavelengths longer than 0.62 Å—this abrupt decrease is the source of the term edge—happens because the photons no longer have sufficient energy to eject an electron from the K shell. The slowly increasing absorbance at wavelengths greater than the K edge is the result of ejecting electrons from the L shell, which has edges at 4.3 Å, 4.7 Å, and 4.9 Å. The simplified energy level diagram in Figure \(\PageIndex{3}\) shows only one energy level for n = 2 (the L shell). As we noted earlier, there are three energy levels when n = 2: one that corresponds to l = 0, one that corresponds to l = 1 and ml = 0, and one that corresponds to l = 1 and ml = ±1. The three edges corresponding to these energy levels are identified as LI, LII, and LIII. When a beam of X-rays passes through a sample with a thickness of x, the following equation holds\[A = -\ln \frac{P}{P_0} = \mu_{\text{M}} \rho x \label{beerxray} \]where A is the absorbance, \(P_0\) is the power of the X-ray source incident on the sample, \(P\) is the power of the X-ray source after it passes through the sample, \(\mu_{\text{M}}\) is the sample's mass absorption coefficient and \(\rho\) is the sample's density.
You may have noticed the similarity between this equation and the equation for Beer's law that we first encountered in Chapter 6\[A = -\log \frac{P}{P_0} = \epsilon b C \label{beer} \]where \(\epsilon\) is the molar absorptivity, \(b\) is the pathlength, and \(C\) is the molar concentration. Note that both density (g/mL) and molarity (mol/L) are measures of concentration that express the amount of the absorbing material present in the sample. When an electron is ejected from a shell near the nucleus by the absorption of an X-ray, the vacancy created is eventually filled when an electron at a greater distance from the nucleus moves down. Because it takes more energy to eject an electron and create a vacancy than is returned by the movement of other electrons into the vacancy, the resulting fluorescent emission of X-rays is always at wavelengths that are longer (lower energy) than the wavelength that was absorbed. We see this in and for molybdenum where it absorbs an X-ray with a wavelength of 0.62 Å and emits X-rays with wavelengths of 0.63 Å and 0.71 Å. When an X-ray beam is focused onto a sample that has a regular (crystalline) pattern of atoms in three dimensions, some of the radiation scatters from the surface and some of the radiation passes through to the next layer of atoms where the combination of scattering and passing through continues. As a result of this process, the radiation undergoes diffraction in which X-rays of some wavelengths appear to reflect off the surface while X-rays of other wavelengths do not. The conditions that result in diffraction are easy to understand using the diagram in . The red and green arrows are two parallel beams of X-rays that are focused on an ordered crystalline solid that consists of a layered, repeating pattern of atoms shown by the blue circles. The two beams of X-rays encounter the solid at an angle of \(\theta\). The X-ray shown in red scatters off of the first layer, exiting at the same angle of \(\theta\). The X-ray shown in green penetrates to the second layer where it undergoes scattering, exiting at the same angle of \(\theta\). We know from the superposition of waves (see Chapter 6) that the two beams of X-rays will remain in phase, and thus experience constructive interference, only if the additional distance traveled by the green wave—the sum of the line segments \(\overline{bc}\) and \(\overline{cd}\)—is an integer multiple of the wavelength; thus\[\overline{bc} + \overline{cd} = n \lambda \label{bragg1} \]We also know that the lengths of the line segments \(\overline{bc}\) and \(\overline{cd}\) are given by\[\overline{bc}= \overline{cd} = d \sin \theta \label{bragg2} \]where \(d\) is the distance between the crystal's layers. Combining Equation \ref{bragg1} and Equation \ref{bragg2} gives\[n \lambda = 2 d \sin \theta \label{bragg3} \]Rearranging Equation \ref{bragg3} shows that we will observe diffraction only at angles that satisfy the equation\[\sin \theta = \frac{n \lambda}{2d} \label{bragg4} \]This page titled 12.1: Fundamental Principles is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
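Equation \ref{beerxray} is simple to apply. The R sketch below uses an assumed mass absorption coefficient and density (the numbers are placeholders for illustration, not values for any particular material) to estimate the absorbance of a thin sample and the fraction of the incident X-ray power that it transmits.

mu_M <- 20       # assumed mass absorption coefficient, cm^2/g (illustrative)
rho  <- 2.7      # assumed density, g/cm^3 (illustrative)
x    <- 0.010    # sample thickness, cm

A       <- mu_M * rho * x    # absorbance, A = mu_M * rho * x
P_ratio <- exp(-A)           # fraction transmitted, P/P0 = exp(-A), since A = -ln(P/P0)
c(absorbance = A, fraction_transmitted = P_ratio)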
303
12.2: Instrument Components
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/12%3A_Atomic_X-Ray_Spectrometry/12.02%3A_Instrument_Components
Atomic X-ray spectrometry has the same needs as other forms of optical spectroscopy: a source of X-rays, a means for isolating a desired range of wavelengths of X-rays, a means for detecting the X-rays, and a means for converting the signal at the transducer into a meaningful number. The most important source of X-rays is the X-ray tube, a basic diagram of which is shown in . A heated tungsten filament (shown in orange) serves as the cathode, which is held at a negative potential; it emits a beam of electrons (shown in red) that is drawn toward an anode that has a positive potential. The tip of the anode is made from a metal target (shown in blue) that will produce X-rays (shown in green) with the desired wavelengths when struck by the electron beam. Typical metal targets include tungsten, molybdenum, silver, copper, iron, and cobalt. The filament and the target metal are housed inside an evacuated tube. The emitted X-rays exit the tube through an optical window. Any material that is naturally radioactive emits characteristic X-rays that potentially can serve as a source of X-rays that another species can absorb. For example, in the absorption spectrum for molybdenum (see Chapter 12.1), the K edge has a wavelength of 0.62 Å, which corresponds to an energy of 20.0 keV. A radioactive source with an emission line that has a wavelength slightly shorter than 0.62 Å (between, for example, 0.5 Å and 0.6 Å) is sufficient. One possibility is 109Cd, which emits X-rays with a wavelength of 0.56 Å, or an energy of 22 keV. A filter and a monochromator are designed to take a broad range of emission from a source and narrow the range of wavelengths that reach the sample. shows how to accomplish this using an absorption filter. The blue line shows the emission spectrum for a source that includes two lines—the \(\text{K}_{\alpha}\) line and the \(\text{K}_{\beta}\) line—superimposed on a broad continuum. The green line shows the absorption spectrum for a different element whose K edge falls in between the source's \(\text{K}_{\alpha}\) and \(\text{K}_{\beta}\) lines. In this case the K edge filter removes most of the continuum and the \(\text{K}_{\beta}\) line, allowing just the \(\text{K}_{\alpha}\) line and a small amount of the continuum to reach the sample. shows the basic design for an X-ray monochromator, which can operate in either an absorption mode, in which X-rays from the source pass through the sample before entering the monochromator, or in an emission mode, in which X-rays from the source excite the sample and fluorescent emission is sampled at 90°. In either mode, the X-rays pass through a collimator that focuses them onto a crystal where the X-rays undergo diffraction. X-rays are collected by a second collimator before arriving at the transducer. To scan the spectrum, the crystal rotates through an angle of \(\theta\); the transducer must rotate twice as fast, traversing an angle of \(2 \theta\) to maintain an identical angle between the source and the transducer. An X-ray monochromator's effective range is determined by the properties of the crystal used for diffraction. We know from Chapter 12.1 that\[n \lambda = 2 d \sin \theta \label{diffract1} \]where \(n\) is the diffraction order, \(\lambda\) is the wavelength, \(\theta\) is the X-ray's angle of incidence, and \(d\) is the spacing between the crystal's layers. The practical limit for the angle depends on the monochromator's design, but typically \(\theta\) is 7.5° to 75° (or \(2 \theta\) angles of 15° to 150°).
A common crystal is LiF, which has a spacing of 2.01 Å; thus, it provides a wavelength range from a lower limit of\[ \lambda = 2 d \sin \theta = 2 \times 2.01 \text{ Å} \times \sin(7.5^{\circ}) = 0.52 \text{ Å} \nonumber \]to an upper limit of\[ \lambda = 2 d \sin \theta = 2 \times 2.01 \text{ Å} \times \sin(75^{\circ}) = 3.9 \text{ Å} \nonumber \]when \(n = 1\). This range of wavelengths is sufficient to study the elements K to Cd using their \(\text{K}_{\alpha}\) lines. The most common transducers for atomic X-ray spectrometry are the flow proportional counter, the scintillation counter, and the Si(Li) semiconductor. All three transducers act as photon counters. The most common transducer for measuring atomic absorbance and atomic emission of ultraviolet and visible light is a photomultiplier tube. As we learned in Chapter 7, a photon strikes a photosensitive surface and generates several electrons. These electrons collide with a series of dynodes, each collision of which generates additional electrons. This amplification of one photon into \(10^6\)–\(10^7\) electrons results in a steady-state current that we can measure. When the intensity of radiation from the source is smaller, as it is with X-rays, then it is possible to store the electrons in a capacitor that, when discharged, provides a pulsed signal that carries information about the photons. shows the basic structure of a flow proportional counter. The transducer's cell has an inlet and an outlet for creating the flow of argon gas. The cell has windows made from an X-ray transparent material, such as beryllium. X-rays enter the cell and, as shown by the reaction in the upper left, ionize the argon, generating a photoelectron. This photoelectron is sufficiently energetic that it further ionizes the argon, as shown by the reaction in the lower right. The result is an amplification of a single photon into as many as 10,000 electrons. These electrons are drawn to a tungsten wire that is held at a positive potential, and then flow into a capacitor. Discharging the capacitor gives a pulsed signal whose height is proportional to the initial number of electrons and, therefore, to the energy, and thus the frequency, of the photon. A flow proportional counter is not an efficient transducer for shorter wavelength (higher energy) X-rays that are likely to pass through the cell without being absorbed by the argon gas, leading to a reduction in the signal. In this case we can use a scintillation counter. shows how this works. X-ray photons are focused onto a single crystal of NaI that is doped with a small amount, approximately 0.2%, of Tl+ as an iodide salt. Absorption of the X-rays results in the fluorescent emission of multiple photons of visible light with a wavelength of 410 nm. Each of these photons falls on the photocathode of a photomultiplier, eventually producing a voltage pulse. Each pulse corresponds to a single X-ray photon, and the pulse's height is proportional to the photon's energy. In Chapter 7.5 we introduced the use of the pn junction of a silicon semiconductor as a transducer for optical spectroscopy. Absorption of a photon of sufficient energy results in the formation of an electron-hole pair. Movement of the electron through the n-layer and movement of the hole through the p-region generates a current that is proportional to the number of photons reaching the detector.
shows the structure of the semiconductor used in monitoring X-rays, which consists of a p-type layer and an n-type layer on either side of a single crystal of silicon doped with lithium or germanium. The Si(Li) layer has the same role here as Ar has in the flow proportional counter. An X-ray photon that enters into the Si(Li) layer generates electron-hole pairs, leading to a measurable current that is proportional to the energy of the X-ray. The flow proportional counter, scintillation counter, and semiconductor transducers pass a stream of pulses to the signal processor, where a pulse-height selector is used to isolate only those pulses of interest and a pulse-height analyzer is used to summarize the distribution of pulses. Not all pulses measured by the transducer are of interest. For example, pulses with small heights are likely to be noise and pulses with large heights may be a higher-order (\(n > 1\)) diffraction of shorter, more energetic wavelengths. shows the basic details of how a pulse-height selector works. The pulse-height selector is set to pass only those pulse heights that are between a lower limit and an upper limit. The figure shows three pulses, one that is too small (in blue), one that is too large (in red), and one that we wish to keep (in green). The pulses run through two channels, one that removes only the blue signal and one that retains only the red signal. The latter signal is inverted and combined with the signal from the other channel. Because the red signal has a different sign in the two channels, it, too, is removed, leaving only the one pulse height that meets the criteria for selection. Having removed pulses with heights that are too small or too large, the remaining pulses are analyzed by counting the number of pulses that share a range of pulse heights. Each unique range of pulse heights is called a channel and corresponds to a specific energy of the photons. A spectrum is a plot showing the count of pulses as a function of the energy of the photons. This page titled 12.2: Instrument Components is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
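The wavelength range worked out earlier in this section for a LiF crystal follows directly from Equation \ref{diffract1}. A short R sketch that repeats the calculation for first-order diffraction, and that is easy to adapt to other crystals by changing the value of d:

d     <- 2.01                        # spacing between LiF layers, in angstroms
theta <- c(7.5, 75) * pi / 180       # practical angular limits, converted to radians
n     <- 1                           # first-order diffraction

(lambda <- 2 * d * sin(theta) / n)   # wavelength limits in angstroms, about 0.52 and 3.9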
304
12.3: Atomic X-Ray Fluorescence Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/12%3A_Atomic_X-Ray_Spectrometry/12.03%3A_X-Ray_Fluorescence
In X-ray fluorescence a source of X-rays—emission from an X-ray tube or emission from a radioactive element—is used to excite the atoms of an analyte in a sample. These excited-state atoms return to their ground state by emitting X-rays, the process we know as fluorescence. The wavelengths of these emission lines are characteristic of the elements that make up the sample; thus, atomic X-ray fluorescence is a useful method for both a qualitative analysis and a quantitative analysis. In the previous section we covered the basic components that make up an atomic X-ray spectrometer: a source of X-rays, a means for isolating those wavelengths of interest, a transducer to measure the intensity of fluorescence, and a signal processor to convert the transducer's signal into a useful measurement. How we string these units together is the subject of this section in which we consider two ways to acquire a sample's spectrum: wavelength dispersive instruments and energy dispersive instruments. A wavelength dispersive instrument relies on diffraction using a monochromator, such as that in . In a sequential, or scanning, instrument the monochromator's angles—\(\theta\) for the diffracting crystal and \(2 \theta\) for the transducer—are set for the analyte of interest and the fluorescence intensity is measured for 1–100 s. The monochromator is adjusted for the next analyte and the process repeated until the analysis for all analytes is complete. Analyzing a sample for 20 analytes may take 30 min or more. A simultaneous, or multichannel, wavelength dispersive instrument contains as many as 30 crystals and transducers, each at a fixed angle that is preset for an analyte of interest. Each individual channel has a dedicated transducer and pulse-height selector and analyzer. Analysis of a complex sample with many analytes requires less than a minute. This is similar to the multichannel ICP used in atomic emission (see Chapter 10). An energy dispersive instrument eschews a scanning monochromator and, instead, uses a semiconductor transducer to analyze the fluorescent emission by determining the energies of the emitted photons. Each photon reaching the transducer produces a pulse of electrons whose height is measured and converted into the photon's energy. The result is a spectrum showing a count of photons with the same energy as a function of the energy. The collection of data is very fast: if it takes 25 µs to complete the collection and processing of a single photon, then the instrument can count 40,000 photons each second (40 kcps, or kilo counts per second). One limitation to an energy dispersive instrument is its limited resolution with respect to energy. An instrument that operates with 2048 channels—that is, an instrument that divides the energies into 2048 bins—and that processes photons with energies up to 20 keV, has a resolution of approximately 10 eV per channel. Because it does not rely on a monochromator, an energy dispersive instrument occupies a smaller footprint, and portable, hand-held versions are available. shows the X-ray fluorescence spectrum for the yellow pigment known as Naples yellow, the major elements of which are zinc, lead, and antimony. It is easy to identify the major elements in the sample by matching the energies of the individual lines to the published emission lines of the elements, which are available in many on-line sources.
For example, the first line highlighted in this spectrum is at an energy of 8.66 keV, which is close to the \(\text{K}_{\alpha}\) line for Zn at 8.64 keV, and the last highlighted line is at an energy of 29.97 keV, which is close to the \(\text{K}_{\beta}\) line for Sb at 29.7 keV. A semi-quantitative analysis is possible if we assume that there is a linear relationship between the intensity of an element's emission line and its %w/w concentration in the sample. The intensity of emission from a pure sample of the element, \(I_\text{pure}\), is measured along with the intensity of emission for the element in a sample, \(I_\text{sample}\), and the %w/w calculated as\[\% \text{w/w} = \frac {I_\text{sample}} {I_\text{pure}} \times 100 \label{semiquant} \]Equation \ref{semiquant} is essentially a one-point standardization that makes the significant assumption that the intensity of fluorescent emission is independent of the matrix in which the analyte sits. When this is not true, then errors of \(2 \text{-} 3 \times\) are likely. For fluorescent emission to occur, the analyte must first absorb a photon that can eject a photoelectron. For Equation \ref{semiquant} to hold, the photons that initiate the fluorescent emission must come from the source only. If other elements within the sample's matrix produce fluorescent emission with sufficient energy to eject photoelectrons from the analyte, then the total fluorescence increases and we overestimate the analyte's concentration. If an element in the matrix absorbs the X-rays from the source more strongly than the analyte, then the analyte's total fluorescence becomes smaller and we underestimate the analyte's concentration. There are three common strategies for compensating for matrix effects. External Standards with Matrix Matching. Instead of using a single, pure sample for the calibration, we prepare a series of standards with different concentrations of the analyte. By matching, as best we can, the matrix of the standards to the matrix of the samples, we can improve the accuracy of a quantitative analysis. This assumes, of course, that we have sufficient knowledge of our sample's matrix. Internal Standards. An internal standard is an element that we add to the standards and samples so that its concentration is the same in each. If the analyte and the internal standard experience similar matrix effects, then the ratio of their intensities is proportional to the ratio of their concentrations\[\frac{I_\text{analyte, sample}}{I_\text{int std, sample}} = K \times \frac{C_\text{analyte, sample}}{C_\text{int std, sample}} \label{intstd} \]Dilution. A third approach is to dilute the samples and standards by adding a quantity of non-absorbing or poorly absorbing material. Dilution has the effect of minimizing the difference in the matrix of the original samples and standards. This page titled 12.3: Atomic X-Ray Fluorescence Methods is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
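Both Equation \ref{semiquant} and Equation \ref{intstd} are simple enough to evaluate directly. A minimal R sketch, using invented intensities and concentrations purely for illustration:

# semi-quantitative estimate from a single pure-element standard
I_sample <- 1.8e4    # fluorescence intensity for the analyte in the sample (counts)
I_pure   <- 2.4e5    # intensity for a pure sample of the element (counts)
(pct_w_w <- I_sample / I_pure * 100)   # approximate %w/w, assuming no matrix effects

# internal-standard ratio method
K            <- 1.05   # sensitivity ratio, determined from a standard (assumed value)
ratio_sample <- 0.62   # measured ratio I_analyte / I_int_std for the sample
C_int_std    <- 2.0    # %w/w of internal standard added to the sample
(C_analyte   <- ratio_sample / K * C_int_std)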
305
12.4: Other X-Ray Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/12%3A_Atomic_X-Ray_Spectrometry/12.04%3A_Other_X-Ray_Methods
The application of X-rays to the analysis of materials can take forms other than X-ray fluorescence. In X-ray absorption spectrometry, the ability of a sample to absorb radiation from an X-ray source is measured. Absorption follows Beer's law (see Section 12.1) and, compared to emission, is relatively free of matrix effects. X-ray absorption, however, is a less selective technique than atomic fluorescence because we are not measuring the emission from an analyte's characteristic lines. X-ray absorption finds its greatest utility for the quantitative analysis of samples that contain just one or two major analytes. In powder X-ray diffraction we focus the radiation from an X-ray tube line source on a powdered sample and measure the intensity of diffracted radiation as a function of the transducer's angle (\(2 \theta\)). A typical powder X-ray diffraction spectrum is in for the mineral calcite (CaCO3). Qualitative identification is obtained by matching the \(2 \theta\) peaks to those in published databases. A quantitative analysis for the compound—not the elements that make up the compound—is possible by comparing the intensity of a unique diffraction line in a sample to that for a pure sample. for a mixture of calcite and magnesite (MgCO3) shows that a simultaneous quantitative analysis for both compounds is possible using the diffraction line at a \(2 \theta\) of 29.44° for calcite and of 32.65° for magnesite. This page titled 12.4: Other X-Ray Methods is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
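The diffraction lines quoted above for calcite and magnesite are easily converted into interlayer d spacings using the Bragg equation, provided we know the wavelength of the X-ray source. The R sketch below assumes, for illustration only, a Cu \(\text{K}_{\alpha}\) source with a wavelength of 1.5406 Å, a common choice for powder diffraction that the text does not specify.

lambda    <- 1.5406                                  # assumed Cu K-alpha wavelength, angstroms
two_theta <- c(calcite = 29.44, magnesite = 32.65)   # diffraction lines in degrees
theta     <- (two_theta / 2) * pi / 180              # convert to radians

(d <- lambda / (2 * sin(theta)))                     # Bragg's law with n = 1; d in angstroms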
306
13.1: Transmittance and Absorbance
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/13%3A_Introduction_to_Ultraviolet_Visible_Absorption_Spectrometry/13.01%3A_Transmittance_and_Absorbance
As light passes through a sample, its power decreases as some of it is absorbed. This attenuation of radiation is described quantitatively by two separate, but related terms: transmittance and absorbance. As shown in , transmittance is the ratio of the source radiation’s power as it exits the sample, PT, to that incident on the sample, P0.\[T=\frac{P_{\mathrm{T}}}{P_{0}} \label{10.1} \]Multiplying the transmittance by 100 gives the percent transmittance, %T, which varies between 100% (no absorption) and 0% (complete absorption). All methods of detecting photons—including the human eye and modern photoelectric transducers—measure the transmittance of electromagnetic radiation. Equation \ref{10.1} does not distinguish between different mechanisms that prevent a photon emitted by the source from reaching the detector. In addition to absorption by the analyte, several additional phenomena contribute to the attenuation of radiation, including reflection and absorption by the sample’s container, absorption by other components in the sample’s matrix, and the scattering of radiation. To compensate for this loss of the radiation’s power, we use a method blank. As shown in , we redefine P0 as the power exiting the method blank. An alternative method for expressing the attenuation of electromagnetic radiation is absorbance, A, which we define as\[A=-\log T=-\log \frac{P_{\mathrm{T}}}{P_{0}} \label{10.2} \]Absorbance is the more common unit for expressing the attenuation of radiation because—as we will see in the next section—it is a linear function of the analyte’s concentration. A sample has a percent transmittance of 50%. What is its absorbance? A percent transmittance of 50.0% is the same as a transmittance of 0.500. Substituting into Equation \ref{10.2} gives\[A=-\log T=-\log (0.500)=0.301 \nonumber \]What is the %T for a sample if its absorbance is 1.27? To find the transmittance, \(T\), we begin by noting that\[A=1.27=-\log T \nonumber \]Solving for T \[\begin{align*}-1.27 &=\log T \\[4pt] 10^{-1.27} &=T \end{align*}\]gives a transmittance of 0.054, or a %T of 5.4%. This page titled 13.1: Transmittance and Absorbance is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
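The two worked examples above are easy to verify numerically; a short R sketch that converts between percent transmittance and absorbance using Equation \ref{10.2}:

# absorbance from percent transmittance
pct_T <- 50.0
(A <- -log10(pct_T / 100))        # gives 0.301

# percent transmittance from absorbance
A2 <- 1.27
(pct_T2 <- 10^(-A2) * 100)        # gives 5.4%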
307
13.2: Beer's Law
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/13%3A_Introduction_to_Ultraviolet_Visible_Absorption_Spectrometry/13.02%3A_Beer's_Law
When monochromatic electromagnetic radiation passes through an infinitesimally thin layer of sample of thickness dx, it experiences a decrease in its power of dP. This fractional decrease in power is proportional to the sample’s thickness and to the analyte’s concentration, C; thus\[-\frac{d P}{P}=\alpha C d x \label{BL1} \]where P is the power incident on the thin layer of sample and \(\alpha\) is a proportionality constant. Integrating Equation \ref{BL1} over the sample’s full thickness\[-\int_{P=P_0}^{P=P_t} \frac{d P}{P}=\alpha C \int_{x=0}^{x=b} d x \nonumber \]\[\ln \frac{P_{0}}{P_T}=\alpha b C \nonumber \]converting from ln to log, and substituting into the equation relating transmittance to absorbance\[A = -\text{log}T = -\text{log}\frac{P_\text{T}}{P_0} \nonumber \]gives\[A=a b C \label{BL2} \]where a is the analyte’s absorptivity with units of \(\text{cm}^{-1} \text{ conc}^{-1}\). If we express the concentration using molarity, then we replace a with the molar absorptivity, \(\varepsilon\), which has units of \(\text{cm}^{-1} \text{ M}^{-1}\).\[A=\varepsilon b C \label{BL3} \]The absorptivity and the molar absorptivity are proportional to the probability that the analyte absorbs a photon of a given energy. As a result, values for both a and \(\varepsilon\) depend on the wavelength of the absorbed photon. A \(5.00 \times 10^{-4}\) M solution of analyte is placed in a sample cell that has a pathlength of 1.00 cm. At a wavelength of 490 nm, the solution’s absorbance is 0.338. What is the analyte’s molar absorptivity at this wavelength? Solving Equation \ref{BL3} for \(\epsilon\) and making appropriate substitutions gives\[\varepsilon=\frac{A}{b C}=\frac{0.338}{(1.00 \ \mathrm{cm})\left(5.00 \times 10^{-4} \ \mathrm{M}\right)}=676 \ \mathrm{cm}^{-1} \ \mathrm{M}^{-1} \nonumber \]A solution of the analyte from Example 13.2.1 has an absorbance of 0.228 in a 1.00-cm sample cell. What is the analyte’s concentration? Making appropriate substitutions into Beer’s law\[A=0.228=\varepsilon b C=\left(676 \ \mathrm{M}^{-1} \ \mathrm{cm}^{-1}\right)(1 \ \mathrm{cm}) C \nonumber \]and solving for C gives a concentration of \(3.37 \times 10^{-4}\) M. Equation \ref{BL2} and Equation \ref{BL3}, which establish the linear relationship between absorbance and concentration, are known as Beer’s law. Calibration curves based on Beer’s law are common in quantitative analyses. As is often the case, the formulation of a law is more complicated than its name suggests. This is the case, for example, with Beer’s law, which also is known as the Beer-Lambert law or the Beer-Lambert-Bouguer law. Pierre Bouguer, in 1729, and Johann Lambert, in 1760, noted that the transmittance of light decreases exponentially with an increase in the sample’s thickness.\[T \propto e^{-b} \nonumber \]Later, in 1852, August Beer noted that the transmittance of light decreases exponentially as the concentration of the absorbing species increases.\[T \propto e^{-C} \nonumber \]Together, and when written in terms of absorbance instead of transmittance, these two relationships make up what we know as Beer’s law. We can extend Beer’s law to a sample that contains several absorbing components. If there are no interactions between the components, then the individual absorbances, Ai, are additive.
For a two-component mixture of analytes X and Y, the total absorbance, Atot, is\[A_{tot}=A_{X}+A_{Y}=\varepsilon_{X} b C_{X}+\varepsilon_{Y} b C_{Y} \nonumber \]Generalizing, the absorbance for a mixture of n components, Amix, is\[A_{m i x}=\sum_{i=1}^{n} A_{i}=\sum_{i=1}^{n} \varepsilon_{i} b C_{i} \label{BL4} \]Beer’s law suggests that a plot of absorbance vs. concentration—we will call this a Beer’s law plot—is a straight line with a y-intercept of zero and a slope of ab or \(\varepsilon b\). In some cases a Beer’s law plot deviates from this ideal behavior (see ), and such deviations from linearity are divided into three categories: fundamental, chemical, and instrumental. Beer’s law is a limiting law that is valid only for low concentrations of analyte. There are two contributions to this fundamental limitation to Beer’s law. At higher concentrations the individual particles of analyte no longer are independent of each other. The resulting interaction between particles of analyte may change the analyte’s absorptivity. A second contribution is that an analyte’s absorptivity depends on the solution’s refractive index. Because a solution’s refractive index varies with the analyte’s concentration, values of a and \(\varepsilon\) may change. For sufficiently low concentrations of analyte, the refractive index essentially is constant and a Beer’s law plot is linear. A chemical deviation from Beer’s law may occur if the analyte is involved in an equilibrium reaction. Consider, for example, the weak acid, HA. To construct a Beer’s law plot we prepare a series of standard solutions—each of which contains a known total concentration of HA—and then measure each solution’s absorbance at the same wavelength. Because HA is a weak acid, it is in equilibrium with its conjugate weak base, A–. In the equations that follow, the conjugate weak base A– is sometimes written as A because it is easy to mistake the symbol for an anionic charge as a minus sign; thus, we will write \(C_A\) instead of \(C_{A^-}\).\[\mathrm{HA}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons\mathrm{H}_{3} \mathrm{O}^{+}(a q)+\mathrm{A}^{-}(a q) \nonumber \]If both HA and A– absorb at the selected wavelength, then Beer’s law is\[A=\varepsilon_{\mathrm{HA}} b C_{\mathrm{HA}}+\varepsilon_{\mathrm{A}} b C_{\mathrm{A}} \label{BL5} \]Because the weak acid’s total concentration, Ctotal, is\[C_{\mathrm{total}}=C_{\mathrm{HA}}+C_{\mathrm{A}} \nonumber \]we can write the concentrations of HA and A– as\[C_{\mathrm{HA}}=\alpha_{\mathrm{HA}} C_{\mathrm{total}} \label{BL6} \]\[C_{\text{A}} = (1 - \alpha_\text{HA})C_\text{total} \label{BL7} \]where \(\alpha_\text{HA}\) is the fraction of weak acid present as HA. Substituting Equation \ref{BL6} and Equation \ref{BL7} into Equation \ref{BL5} and rearranging gives\[A=\left(\varepsilon_{\mathrm{HA}} \alpha_{\mathrm{HA}}+\varepsilon_{\mathrm{A}}-\varepsilon_{\mathrm{A}} \alpha_{\mathrm{HA}}\right) b C_{\mathrm{total}} \label{BL8} \]To obtain a linear Beer’s law plot, we must satisfy one of two conditions.
If \(\varepsilon_\text{HA}\) and \(\varepsilon_{\text{A}}\) have the same value at the selected wavelength, then Equation \ref{BL8} simplifies to\[A = \varepsilon_{\text{A}}bC_\text{total} = \varepsilon_\text{HA}bC_\text{total} \nonumber \]Alternatively, if \(\alpha_\text{HA}\) has the same value for all standard solutions, then each term within the parentheses of Equation \ref{BL8} is constant—which we replace with k—and a linear calibration curve is obtained at any wavelength.\[A=k b C_{\mathrm{total}} \nonumber \]Because HA is a weak acid, the value of \(\alpha_\text{HA}\) varies with pH. To hold \(\alpha_\text{HA}\) constant we buffer each standard solution to the same pH. Depending on the relative values of \(\varepsilon_\text{HA}\) and \(\varepsilon_{\text{A}}\), the calibration curve has a positive or a negative deviation from Beer’s law if we do not buffer the standards to the same pH. There are two principal instrumental limitations to Beer’s law: stray radiation and polychromatic radiation. Stray radiation is the first contribution to instrumental deviations from Beer’s law. Stray radiation arises from imperfections in the wavelength selector that allow light to enter the instrument and to reach the detector without passing through the sample. Stray radiation adds an additional contribution, Pstray, to the radiant power that reaches the detector; thus\[A=-\log \frac{P_{\mathrm{T}}+P_{\text { stray }}}{P_{0}+P_{\text { stray }}} \nonumber \]For a small concentration of analyte, Pstray is significantly smaller than P0 and PT, and the absorbance is unaffected by the stray radiation. For higher concentrations of analyte, less light passes through the sample and PT and Pstray become similar in magnitude. This results in an absorbance that is smaller than expected, and a negative deviation from Beer’s law. The second limitation is that Beer’s law assumes that radiation reaching the sample is of a single wavelength—that is, it assumes a purely monochromatic source of radiation. Even the best wavelength selector, however, passes radiation with a small, but finite effective bandwidth. Let's assume we have a line source that emits light at two wavelengths, \(\lambda^{\prime}\) and \(\lambda^{\prime \prime}\). When treated separately, the absorbances at these wavelengths, A′ and A′′, are\[A^{\prime}=-\log \frac{P_{\mathrm{T}}^{\prime}}{P_{0}^{\prime}}=\varepsilon^{\prime} b C \quad \quad A^{\prime \prime}=-\log \frac{P_{\mathrm{T}}^{\prime \prime}}{P_{0}^{\prime \prime}}=\varepsilon^{\prime \prime} b C \nonumber \]If both wavelengths are measured simultaneously the absorbance is\[A=-\log \frac{\left(P_{\mathrm{T}}^{\prime}+P_{\mathrm{T}}^{\prime \prime}\right)}{\left(P_{0}^{\prime}+P_{0}^{\prime \prime}\right)} \nonumber \]Expanding the logarithmic function of the equation's right side gives\[A = \log (P_0^{\prime} + P_0^{\prime \prime}) - \log (P_\text{T}^\prime + P_\text{T}^{\prime \prime}) \label{IL1} \]Next, we need to find a relationship between \(P_\text{T}\) and \(P_0\) for any wavelength.
To do this, we start with Beer's law\[A = - \log \frac{P_\text{T}}{P_0} = \epsilon b C \nonumber \]and then solve for \(P_\text{T}\) in terms of \(P_0\)\[\log \frac{P_\text{T}}{P_0} = - \epsilon b C \nonumber \]\[\frac{P_\text{T}}{P_0} = 10^{- \epsilon b C} \nonumber \]\[P_\text{T} = P_0 \times 10^{- \epsilon b C} \nonumber \]Substituting this general relationship back into our wavelength-specific equation for absorbance, Equation \ref{IL1}, we obtain\[A = \log (P_0^{\prime} + P_0^{\prime \prime}) - \log (P_0^{\prime} \times 10^{- \epsilon^{\prime} b C} + P_0^{\prime \prime} \times 10^{- \epsilon^{\prime \prime} b C}) \label{IL2} \]For monochromatic radiation, we have \(\epsilon^{\prime} = \epsilon^{\prime \prime} = \epsilon\) and Equation \ref{IL2} simplifies to Beer's law\[A = -\log (10^{- \epsilon b C}) = \epsilon b C \nonumber \]For non-monochromatic radiation, Equation \ref{IL2} predicts that the absorbance is smaller than expected if \(\epsilon^{\prime} > \epsilon^{\prime \prime}\). Polychromatic radiation always gives a deviation from Beer’s law, but the effect is smaller if the value of \(\varepsilon\) essentially is constant over the wavelength range passed by the wavelength selector. For this reason, as shown in , it is better to make absorbance measurements at the top of a broad absorption peak. In addition, the deviation from Beer’s law is less serious if the source’s effective bandwidth is less than one-tenth of the absorbing species’ natural bandwidth [(a) Strong, F. C., III Anal. Chem. 1984, 56, 16A–34A; (b) Gilbert, D. D. J. Chem. Educ. 1991, 68, A278–A281]. When measurements must be made on a slope, linearity is improved by using a narrower effective bandwidth. This page titled 13.2: Beer's Law is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
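Equation \ref{IL2} makes it easy to see how polychromatic radiation produces a negative deviation from Beer's law. The R sketch below compares the apparent absorbance for a source that emits equally at two wavelengths with different molar absorptivities (the values of \(\epsilon^{\prime}\) and \(\epsilon^{\prime \prime}\) are invented for illustration) to the absorbance expected for monochromatic radiation at \(\lambda^{\prime}\).

b  <- 1.00                            # pathlength in cm
C  <- seq(0, 1e-3, length.out = 6)    # analyte concentrations in mol/L
e1 <- 1000                            # molar absorptivity at lambda' (assumed), M^-1 cm^-1
e2 <- 500                             # molar absorptivity at lambda'' (assumed), M^-1 cm^-1
P0_1 <- 1                             # equal source power at the two wavelengths
P0_2 <- 1

# apparent absorbance for the two-wavelength source (Equation IL2)
A_poly <- log10(P0_1 + P0_2) -
  log10(P0_1 * 10^(-e1 * b * C) + P0_2 * 10^(-e2 * b * C))

# absorbance expected if the source were monochromatic at lambda'
A_mono <- e1 * b * C

data.frame(C = C, A_mono = round(A_mono, 3), A_poly = round(A_poly, 3))

The apparent absorbance falls increasingly below the monochromatic value as the concentration increases, which is the negative deviation described above.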
308
13.3: Effect of Noise on Transmittance and Absorbance Measurements
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/13%3A_Introduction_to_Ultraviolet_Visible_Absorption_Spectrometry/13.03%3A_Effect_of_Noise_on_Transmittance_and_Absorbance_Measurements
In absorption spectroscopy, precision is limited by indeterminate errors—primarily instrumental noise—which are introduced when we measure absorbance. Precision generally is worse for low absorbances where P0 ≈ PT, and for high absorbances where PT approaches 0. We might expect, therefore, that precision will vary with transmittance. We can derive an expression relating precision to transmittance by rewriting Beer's law as\[C=-\frac{1}{\varepsilon b} \log T \label{noise1} \]and completing a propagation of uncertainty (see the Appendices for a discussion of propagation of error), which gives\[s_{c}=-\frac{0.4343}{\varepsilon b} \times \frac{s_{T}}{T} \label{noise2} \]where sT is the absolute uncertainty in the transmittance. Dividing Equation \ref{noise2} by Equation \ref{noise1} gives the relative uncertainty in concentration, sC/C, as\[\frac{s_c}{C}=\frac{0.4343 s_{T}}{T \log T} \nonumber \]If we know the transmittance’s absolute uncertainty, then we can determine the relative uncertainty in concentration for any measured transmittance. Determining the relative uncertainty in concentration is complicated because sT is a function of the transmittance. As shown in Table \(\PageIndex{1}\), three categories of indeterminate instrumental error are observed [Rothman, L. D.; Crouch, S. R.; Ingle, J. D. Jr. Anal. Chem. 1975, 47, 1226–1233]; typical sources include the %T readout resolution, noise in thermal detectors, positioning of the sample cell, and fluctuations in source intensity. A constant sT is observed for the uncertainty associated with reading %T on a meter’s analog or digital scale, both common on less-expensive spectrophotometers. Typical values are ±0.2–0.3% (a k1 of ±0.002–0.003) for an analog scale and ±0.001% (a k1 of ±0.00001) for a digital scale. A constant sT also is observed for the thermal transducers used in infrared spectrophotometers. The effect of a constant sT on the relative uncertainty in concentration is shown by curve A in . Note that the relative uncertainty is very large for both high absorbances and low absorbances, reaching a minimum when the absorbance is 0.4343. This source of indeterminate error is important for infrared spectrophotometers and for inexpensive UV/Vis spectrophotometers. To obtain a relative uncertainty in concentration of ±1–2%, the absorbance is kept within the range 0.1–1. Values of sT are a complex function of transmittance when indeterminate errors are dominated by the noise associated with photon detectors. Curve B in shows that the relative uncertainty in concentration is very large for low absorbances, but is smaller at higher absorbances. Although the relative uncertainty reaches a minimum when the absorbance is 0.963, there is little change in the relative uncertainty for absorbances between 0.5 and 2. This source of indeterminate error generally limits the precision of high quality UV/Vis spectrophotometers for mid-to-high absorbances. Finally, the value of sT is directly proportional to transmittance for indeterminate errors that result from fluctuations in the source’s intensity and from uncertainty in positioning the sample within the spectrometer. The latter is particularly important because the optical properties of a sample cell are not uniform. As a result, repositioning the sample cell may lead to a change in the intensity of transmitted radiation. As shown by curve C in , the effect is important only at low absorbances.
This source of indeterminate errors usually is the limiting factor for high quality UV/Vis spectrophotometers when the absorbance is relatively small. When the relative uncertainty in concentration is limited by the %T readout resolution, it is possible to improve the precision of the analysis by redefining 100% T and 0% T. Normally 100% T is established using a blank and 0% T is established while preventing the source’s radiation from reaching the detector. If the absorbance is too high, precision is improved by resetting 100% T using a standard solution of analyte whose concentration is less than that of the sample. For a sample whose absorbance is too low, precision is improved by redefining 0% T using a standard solution of the analyte whose concentration is greater than that of the sample. In this case a calibration curve is required because a linear relationship between absorbance and concentration no longer exists. Precision is further increased by combining these two methods. Again, a calibration curve is necessary since the relationship between absorbance and concentration is no longer linear. This page titled 13.3: Effect of Noise on Transmittance and Absorbance Measurements is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
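The behavior of curve A is easy to reproduce numerically. The R sketch below evaluates the relative uncertainty in concentration for a constant sT (using ±0.003, one of the typical readout uncertainties quoted above) over a range of absorbances and confirms that the minimum falls near an absorbance of 0.4343.

s_T   <- 0.003                       # assumed constant uncertainty in transmittance
A     <- seq(0.05, 2, by = 0.05)     # range of absorbances
trans <- 10^(-A)                     # corresponding transmittances

rel_unc <- abs(0.4343 * s_T / (trans * log10(trans)))   # relative uncertainty in concentration

A[which.min(rel_unc)]                # absorbance that minimizes s_C/C (close to 0.4343)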
309
13.4: Instrumentation
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/13%3A_Introduction_to_Ultraviolet_Visible_Absorption_Spectrometry/13.4%3A_Instrumentation
As covered in Chapter 7, the basic instrumentation for absorbance measurements consists of a source of radiation, a means for selecting the wavelengths to use, a means for detecting the amount of light absorbed by the sample, and a means for processing and displaying the data. In this section we consider two other essential components of an instrument for measuring the absorbance of UV/Vis radiation by molecules: the optical path that connects the source to the detector and a means for placing the sample in this optical path. Frequently an analyst must select, from among several different optical paths, the one that is best suited for a particular analysis. In this section we examine several different instruments for molecular absorption spectroscopy with an emphasis on their advantages and limitations. The simplest instrument for molecular UV/Vis absorption is a filter photometer, which uses an absorption or interference filter to isolate a band of radiation. The filter is placed between the source and the sample to prevent the sample from decomposing when exposed to higher energy radiation. A filter photometer has a single optical path between the source and detector, and is called a single-beam instrument. The instrument is calibrated to 0% T while using a shutter to block the source radiation from the detector. After opening the shutter, the instrument is calibrated to 100% T using an appropriate blank. The blank is then replaced with the sample and its transmittance measured. Because the source’s incident power and the sensitivity of the detector vary with wavelength, the photometer is recalibrated whenever the filter is changed. Photometers have the advantage of being relatively inexpensive, rugged, and easy to maintain. Another advantage of a photometer is its portability, making it easy to take into the field. Disadvantages of a photometer include the inability to record an absorption spectrum and the source’s relatively large effective bandwidth, which limits the calibration curve’s linearity. The percent transmittance varies between 0% and 100%. We use a blank to determine P0, which corresponds to 100%T. Even in the absence of light the detector records a signal. Closing the shutter allows us to assign 0%T to this signal. Together, setting 0% T and 100%T calibrates the instrument. The amount of light that passes through a sample produces a signal that is greater than or equal to 0%T and smaller than or equal to 100%T. In the schematic diagram of a filter photometer, the analyst either inserts a removable filter or the filters are placed in a carousel, an example of which is shown in the photographic inset; the analyst selects a filter by rotating it into place. An instrument that uses a monochromator for wavelength selection is called a spectrophotometer. The simplest spectrophotometer is a single-beam instrument equipped with a fixed-wavelength monochromator. Single-beam spectrophotometers are calibrated and used in the same manner as a photometer. One example of a single-beam spectrophotometer is Thermo Scientific’s Spectronic 20D+, which is shown in the photographic insert to . The Spectronic 20D+ has a wavelength range of 340–625 nm (950 nm when using a red-sensitive detector), and a fixed effective bandwidth of 20 nm. Battery-operated, hand-held single-beam spectrophotometers are available, which are easy to transport into the field. Other single-beam spectrophotometers also are available with effective bandwidths of 2–8 nm.
Fixed-wavelength single-beam spectrophotometers are not practical for recording spectra because manually adjusting the wavelength and recalibrating the spectrophotometer is awkward and time-consuming. The accuracy of a single-beam spectrophotometer is limited by the stability of its source and detector over time. The limitations of a fixed-wavelength, single-beam spectrophotometer are minimized by using a double-beam spectrophotometer. A chopper controls the radiation’s path, alternating it between the sample, the blank, and a shutter. The signal processor uses the chopper’s speed of rotation to resolve the signal that reaches the detector into the transmission of the blank, P0, and the sample, PT. By including an opaque surface as a shutter, it also is possible to continuously adjust 0%T. The effective bandwidth of a double-beam spectrophotometer is controlled by adjusting the monochromator’s entrance and exit slits. Effective bandwidths of 0.2–3.0 nm are common. A scanning monochromator allows for the automated recording of spectra. Double-beam instruments are more versatile than single-beam instruments, being useful for both quantitative and qualitative analyses, but also are more expensive and not particularly portable. An instrument with a single detector can monitor only one wavelength at a time. If we replace a single photomultiplier with an array of photodiodes, we can use the resulting detector to record a full spectrum in as little as 0.1 s. In a diode array spectrometer the source radiation passes through the sample and is dispersed by a grating. The photodiode array detector is situated at the grating’s focal plane, with each diode recording the radiant power over a narrow range of wavelengths. Because we replace a full monochromator with just a grating, a diode array spectrometer is small and compact. One advantage of a diode array spectrometer is the speed of data acquisition, which allows us to collect multiple spectra for a single sample. Individual spectra are added and averaged to obtain the final spectrum. This signal averaging improves a spectrum’s signal-to-noise ratio. If we add together n spectra, the sum of the signal at any point, x, increases as nSx, where Sx is the signal. The noise at any point, Nx, is a random event, which increases as \(\sqrt{n} N_x\) when we add together n spectra. The signal-to-noise ratio after n scans, (S/N)n is\[\left(\frac{S}{N}\right)_{n}=\frac{n S_{x}}{\sqrt{n} N_{x}}=\sqrt{n} \frac{S_{x}}{N_{x}} \nonumber \]where Sx/Nx is the signal-to-noise ratio for a single scan. The impact of signal averaging is shown in . The first spectrum shows the signal after one scan, which consists of a single, noisy peak. Signal averaging using 4 scans and 16 scans decreases the noise and improves the signal-to-noise ratio. One disadvantage of a photodiode array is that the effective bandwidth per diode is roughly an order of magnitude larger than that for a high quality monochromator. The sample compartment provides a light-tight environment that limits stray radiation. Samples normally are in a liquid or solution state, and are placed in cells constructed with UV/Vis transparent materials, such as quartz, glass, and plastic. A quartz or fused-silica cell is required when working at a wavelength <300 nm where other materials show a significant absorption. The most common pathlength is 1 cm (10 mm), although cells with shorter (as little as 0.1 cm) and longer pathlengths (up to 10 cm) are available.
Longer pathlength cells are useful when analyzing a very dilute solution or for gas samples. The highest quality cells allow the radiation to strike a flat surface at a 90° angle, minimizing the loss of radiation to reflection. A test tube often is used as a sample cell with simple, single-beam instruments, although differences in the cell's pathlength and optical properties add an additional source of error to the analysis. If we need to monitor an analyte's concentration over time, it may not be possible to remove samples for analysis. This often is the case, for example, when monitoring an industrial production line or waste line, when monitoring a patient's blood, or when monitoring an environmental system, such as a stream. With a fiber-optic probe we can analyze samples in situ. An example of a remote sensing fiber-optic probe is shown in . The probe consists of two bundles of fiber-optic cable. One bundle transmits radiation from the source to the probe's tip, which is designed to allow the sample to flow through the sample cell. Radiation from the source passes through the solution and is reflected back by a mirror. The second bundle of fiber-optic cable transmits the nonabsorbed radiation to the wavelength selector. Another design replaces the flow cell shown in with a membrane that contains a reagent that reacts with the analyte. When the analyte diffuses into the membrane it reacts with the reagent, producing a product that absorbs UV or visible radiation. The nonabsorbed radiation from the source is reflected or scattered back to the detector. Fiber optic probes that show chemical selectivity are called optrodes [(a) Seitz, W. R. Anal. Chem. 1984, 56, 16A–34A; (b) Angel, S. M. Spectroscopy 1987, 2, 38–48].This page titled 13.4: Instrumentation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
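As a closing numerical aside for this section, the \(\sqrt{n}\) improvement in signal-to-noise ratio described earlier for signal averaging with a diode array spectrometer is easy to verify by simulation. The sketch below is illustrative only; the peak shape, noise level, and numbers of scans are arbitrary choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 250, 501)
true_signal = 50 * np.exp(-(x - 125) ** 2 / (2 * 10 ** 2))  # noise-free peak

def snr_after_averaging(n_scans, noise_sd=5.0):
    """Average n noisy scans and estimate the resulting signal-to-noise ratio."""
    scans = true_signal + rng.normal(0, noise_sd, size=(n_scans, x.size))
    averaged = scans.mean(axis=0)
    noise = (averaged - true_signal).std()   # residual noise after averaging
    return true_signal.max() / noise

for n in (1, 4, 16):
    print(f"n = {n:2d}  S/N ~ {snr_after_averaging(n):.1f}")
# the S/N grows roughly as sqrt(n): about 2x better for n = 4, about 4x better for n = 16
```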
14.1: What is Molar Absorptivity?
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/14%3A_Applications_of_Ultraviolet_Visible_Molecular_Absorption_Spectrometry/14.01%3A_What_is_Molar_Absorptivity%3F
Beer's law, as we learned in Chapter 13, gives the relationship between the amount of light absorbed by a sample, \(A\), the concentration of the species absorbing light, \(C\), the distance (path length) the light travels through the sample, \(b\), and the molar absorptivity of the species absorbing light, \(\epsilon\)\[A = \epsilon b C \nonumber \]The meanings of path length and concentration are self-evident, as are their effects on the extent of absorbance: the more absorbing species that are present (concentration) and the more opportunity for any one molecule to absorb light (path length), the greater the absorbance. The meaning of molar absorptivity—what it represents—is less intuitive. It is, of course, a proportionality constant that converts the product of path length and concentration, \(b C\), into absorbance, but that is not a particularly satisfying definition. Maximum values for \(\epsilon\) are on the order of \(10^5\) L/(mol•cm) for simple molecules, and are proportional to the cross-sectional area of the absorbing species and the probability that a photon passing through this cross-sectional area is absorbed. Here we have a self-evident relationship: the greater the cross-sectional area—the more space occupied by the absorbing species—the greater the opportunity for absorbance; and the more favorable the probability of absorption—with probabilities ranging from 0 to 1—the greater the absorbance. Although molar absorptivity values are often reported in the literature, their values usually vary significantly from study to study, presumably due to differences in the purity of the reagents, the solvents used to prepare solutions, the precision with which path length is measured, and the instrument used for the measurements. For this reason, molar absorptivity values are usually calculated as needed by making careful measurements of \(A\), \(b\), and \(C\), or by simply reducing Beer's law to \(A = k C\) where \(k\) is determined from a calibration curve.This page titled 14.1: What is Molar Absorptivity? is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
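To make the arithmetic concrete, here is a minimal sketch of both approaches mentioned above: computing \(\epsilon\) directly from one careful measurement of A, b, and C, and reducing Beer's law to \(A = kC\) with k taken from a small external calibration. All numerical values below are hypothetical.

```python
import numpy as np

# (1) molar absorptivity from one careful measurement: epsilon = A / (b * C)
A, b, C = 0.412, 1.00, 2.50e-5          # absorbance, path length in cm, mol/L (hypothetical)
epsilon = A / (b * C)
print(f"epsilon = {epsilon:.3e} L/(mol*cm)")

# (2) Beer's law reduced to A = k*C, with k taken as the slope of a calibration curve
std_conc = np.array([0.0, 1.0e-5, 2.0e-5, 3.0e-5, 4.0e-5])   # standard concentrations, mol/L
std_abs  = np.array([0.001, 0.165, 0.330, 0.493, 0.660])      # measured absorbances
k = np.polyfit(std_conc, std_abs, 1)[0]   # slope of A versus C
unknown_conc = 0.412 / k                  # concentration of a sample with A = 0.412
print(f"k = {k:.3e} L/mol, unknown C = {unknown_conc:.2e} mol/L")
```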
14.2: Absorbing Species
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/14%3A_Applications_of_Ultraviolet_Visible_Molecular_Absorption_Spectrometry/14.02%3A_Absorbing_Species
There are two general requirements for an analyte’s absorption of electromagnetic radiation. First, there must be a mechanism by which the radiation’s electric field or magnetic field can interact with the analyte. For ultraviolet and visible radiation, absorption of a photon changes the energy of the analyte’s valence electrons. The second requirement is that the photon’s energy, \(h \nu\), must exactly equal the difference in energy, \(\Delta E\), between two of the analyte’s quantized energy states. We can use the energy level diagram in to explain an absorbance spectrum. The lines labeled E0 and E1 represent the analyte’s ground (lowest) electronic state and its first electronic excited state. Superimposed on each electronic energy level is a series of lines representing vibrational energy levels.The valence electrons in organic molecules and polyatomic ions, such as \(\text{CO}_3^{2-}\), occupy quantized sigma bonding (\(\sigma\)), pi bonding (\(\pi\)), and non-bonding (n) molecular orbitals (MOs). Unoccupied sigma antibonding (\(\sigma^*\)) and pi antibonding (\(\pi^*\)) molecular orbitals are slightly higher in energy. Because the difference in energy between the highest-energy occupied molecular orbitals (HOMO) and the lowest-energy unoccupied molecular orbitals (LUMO) corresponds to ultraviolet and visible radiation, absorption of a photon is possible.Four types of transitions between quantized energy levels account for most molecular UV/Vis spectra. Table \(\PageIndex{1}\) lists the approximate wavelength ranges for these transitions, as well as a partial list of bonds, functional groups, or molecules responsible for these transitions. Of these transitions, the most important are \(n \rightarrow \pi^*\) and \(\pi \rightarrow \pi^*\) because they involve important functional groups that are characteristic of many analytes and because the wavelengths are easily accessible. The bonds and functional groups that give rise to the absorption of ultraviolet and visible radiation are called chromophores.Many transition metal ions, such as Cu2+ and Co2+, form colorful solutions because the metal ion absorbs visible light. The transitions that give rise to this absorption are valence electrons in the metal ion’s d-orbitals, which are shown in . For a free metal ion, the five d-orbitals are of equal energy.In the presence of a complexing ligand or solvent molecule, however, the d-orbitals split into two or more groups that differ in energy. For example, in an octahedral complex of \(\text{Cu(H}_2\text{O)}_6^{2+}\) the six water molecules, which are aligned with the metal rods in , perturb the d-orbitals into the two groups shown in . The magnitude of the splitting of the \(d\)-orbitals is called the octahedral field strength, \(\Delta_\text{oct}\).Although the magnitude of the resulting \(d \rightarrow d\) transitions for transition metal ions are relatively weak, solutions of the metal-ligand complexes show distinct colors that depend on the metal ion and the ligand, which affect the magnitude of \(\Delta_\text{oct}\). shows the variation in color for a series of seven octahedral complexes of Co3+. The spectra for three of these complexes are shown in , which we can use to estimate the relative size of \(\Delta_\text{oct}\). Each of the spectra shows two absorption bands, one near 400 nm and one a somewhat longer wavelength: a shoulder at about 470 nm for phenanthroline, a peak at about 550 nm for glycine, and a peak at about 620 nm for oxalate. 
Because \(\Delta_\text{oct}\) is inversely proportional to wavelength, the relative magnitude of \(\Delta_\text{oct}\) decreases from \(\text{Co(phen)}_3^{3+}\) to \(\text{Co(glycine)}_3^{3+}\) to \(\text{Co(oxalate)}_3^{3-}\). In , the octahedral field strengths of the ligands decrease from \(\text{Co(NO}_2)_6^{3-}\) to \(\text{Co(CO}_3)_3^{3-}\). A more important source of UV/Vis absorption for inorganic metal–ligand complexes is charge transfer, in which absorption of a photon produces an excited state in which there is transfer of an electron from the metal, M, to the ligand, L.\[M-L+h \nu \rightarrow\left(M^{+}-L^{-}\right)^{*} \nonumber \]Charge-transfer absorption is important because it produces very large absorbances. One important example of a charge-transfer complex is that of o-phenanthroline with Fe2+, the UV/Vis spectrum for which is shown in . Charge-transfer absorption in which an electron moves from the ligand to the metal also is possible.This page titled 14.2: Absorbing Species is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
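As a brief numerical footnote to the discussion of d→d transitions above, the inverse relationship between \(\Delta_\text{oct}\) and the wavelength of maximum absorbance follows from \(E = hc/\lambda\). The sketch below converts the three approximate band positions quoted above into energies per mole; treating the band maximum as a direct measure of \(\Delta_\text{oct}\) is a simplification, so the numbers serve only to compare relative field strengths.

```python
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
N_A = 6.022e23  # Avogadro's number, 1/mol

# approximate band maxima quoted in the text
bands_nm = {"Co(phen)3 3+": 470, "Co(glycine)3 3+": 550, "Co(oxalate)3 3-": 620}

for complex_ion, lam in bands_nm.items():
    E = h * c / (lam * 1e-9)                 # energy per photon, J
    print(f"{complex_ion:16s} {lam} nm -> {E * N_A / 1000:5.0f} kJ/mol")
# shorter wavelength -> larger transition energy -> larger octahedral field strength
```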
14.3: Qualitative and Characterization Applications
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/14%3A_Applications_of_Ultraviolet_Visible_Molecular_Absorption_Spectrometry/14.03%3A_Qualitative_Applications
As discussed in Chapter 14.2, ultraviolet, visible, and infrared absorption bands result from the absorption of electromagnetic radiation by specific valence electrons or bonds. The energy at which the absorption occurs, and the intensity of that absorption, is determined by the chemical environment of the absorbing moiety. For example, benzene has several ultraviolet absorption bands due to \(\pi \rightarrow \pi^*\) transitions. The position and intensity of two of these bands, 203.5 nm (\(\epsilon\) = 7400 M–1cm–1) and 254 nm (\(\epsilon\) = 204 M–1 cm–1), are sensitive to substitution. For benzoic acid, in which a carboxylic acid group replaces one of the aromatic hydrogens, the two bands shift to 230 nm (\(\epsilon\) = 11600 M–1 cm–1) and 273 nm (\(\epsilon\) = 970 M–1 cm–1). A variety of rules have been developed to aid in correlating UV/Vis absorption bands to chemical structure. With the availability of computerized data acquisition and storage it is possible to build digital libraries of standard reference spectra. The identity of an a unknown compound often can be determined by comparing its spectrum against a library of reference spectra, a process known as spectral searching.Molecular absorption, particularly in the UV/Vis range, has been used for a variety of different characterization studies, including determining the stoichiometry of metal–ligand complexes and determining equilibrium constants. Both of these examples are examined in this section.We can determine the stoichiometry of the metal–ligand complexation reaction\[\mathrm{M}+y \mathrm{L} \rightleftharpoons \mathrm{ML}_{y} \nonumber \]using one of three methods: the method of continuous variations, the mole-ratio method, and the slope-ratio method. Of these approaches, the method of continuous variations, also called Job’s method, is the most popular. In this method a series of solutions is prepared such that the total moles of metal and of ligand, ntotal, in each solution is the same. If (nM)i and (nL)i are, respectively, the moles of metal and ligand in solution i, then\[n_{\text { total }}=\ \left(n_{\mathrm{M}}\right)_{i} \ + \ \left(n_{\mathrm{L}}\right)_{i} \nonumber \]The relative amount of ligand and metal in each solution is expressed as the mole fraction of ligand, (XL)i, and the mole fraction of metal, (XM)i,\[\left(X_{\mathrm{L}}\right)_{i}=\frac{\left(n_{\mathrm{L}}\right)_{i}}{n_{\mathrm{total}}} \nonumber \]\[\left(X_{M}\right)_{i}=1-\frac{\left(n_\text{L}\right)_{i}}{n_{\text { total }}}=\frac{\left(n_\text{M}\right)_{i}}{n_{\text { total }}} \nonumber \]The concentration of the metal–ligand complex in any solution is determined by the limiting reagent, with the greatest concentration occurring when the metal and the ligand are mixed stoichiometrically. If we monitor the complexation reaction at a wavelength where only the metal–ligand complex absorbs, a graph of absorbance versus the mole fraction of ligand has two linear branches—one when the ligand is the limiting reagent and a second when the metal is the limiting reagent. The intersection of the two branches represents a stoichiometric mixing of the metal and the ligand. We use the mole fraction of ligand at the intersection to determine the value of y for the metal–ligand complex MLy.\[y=\frac{n_{\mathrm{L}}}{n_{\mathrm{M}}}=\frac{X_{\mathrm{L}}}{X_{\mathrm{M}}}=\frac{X_{\mathrm{L}}}{1-X_{\mathrm{L}}} \nonumber \]You also can plot the data as absorbance versus the mole fraction of metal. 
In this case, y is equal to (1 – XM)/XM.To determine the formula for the complex between Fe2+ and o-phenanthroline, a series of solutions is prepared in which the total concentration of metal and ligand is held constant at \(3.15 \times 10^{-4}\) M. The absorbance of each solution is measured at a wavelength of 510 nm. Using the following data, determine the formula for the complex.A plot of absorbance versus the mole fraction of ligand is shown in . To find the maximum absorbance, we extrapolate the two linear portions of the plot. The two lines intersect at a mole fraction of ligand of 0.75. Solving for y gives\[y=\frac{X_{L}}{1-X_{L}}=\frac{0.75}{1-0.75}=3 \nonumber \]The formula for the metal–ligand complex is \(\text{Fe(phen)}_3^{2+}\).Use the continuous variations data in the following table to determine the formula for the complex between Fe2+ and SCN–. The data for this problem is adapted from Meloun, M.; Havel, J.; Högfeldt, E. Computation of Solution Equilibria, Ellis Horwood: Chichester, England, 1988, p. 236.The figure below shows a continuous variations plot for the data in this exercise. Although the individual data points show substantial curvature—enough curvature that there is little point in trying to draw linear branches for excess metal and excess ligand—the maximum absorbance clearly occurs at XL ≈ 0.5. The complex’s stoichiometry, therefore, is Fe(SCN)2+.Several precautions are necessary when using the method of continuous variations. First, the metal and the ligand must form only one metal–ligand complex. To determine if this condition is true, plots of absorbance versus XL are constructed at several different wavelengths and for several different values of ntotal. If the maximum absorbance does not occur at the same value of XL for each set of conditions, then more than one metal–ligand complex is present. A second precaution is that the metal–ligand complex’s absorbance must obey Beer’s law. Third, if the metal–ligand complex’s formation constant is relatively small, a plot of absorbance versus XL may show significant curvature. In this case it often is difficult to determine the stoichiometry by extrapolation. Finally, because the stability of a metal–ligand complex may be influenced by solution conditions, it is necessary to control carefully the composition of the solutions. When the ligand is a weak base, for example, each solutions must be buffered to the same pH.In the mole-ratio method the moles of one reactant, usually the metal, is held constant, while the moles of the other reactant is varied. The absorbance is monitored at a wavelength where the metal–ligand complex absorbs. A plot of absorbance as a function of the ligand-to-metal mole ratio, nL/nM, has two linear branches that intersect at a mole–ratio corresponding to the complex’s formula. shows a mole-ratio plot for the formation of a 1:1 complex in which the absorbance is monitored at a wavelength where only the complex absorbs. shows a mole-ratio plot for a 1:2 complex in which all three species—the metal, the ligand, and the complex—absorb at the selected wavelength. Unlike the method of continuous variations, the mole-ratio method can be used for complexation reactions that occur in a stepwise fashion if there is a difference in the molar absorptivities of the metal–ligand complexes, and if the formation constants are sufficiently different. 
A typical mole-ratio plot for the step-wise formation of ML and ML2 is shown in . For both the method of continuous variations and the mole-ratio method, we determine the complex's stoichiometry by extrapolating absorbance data from conditions in which there is a linear relationship between absorbance and the relative amounts of metal and ligand. If a metal–ligand complex is very weak, a plot of absorbance versus XL or nL/nM becomes so curved that it is impossible to determine the stoichiometry by extrapolation. In this case the slope-ratio method is used. In the slope-ratio method two sets of solutions are prepared. The first set of solutions contains a constant amount of metal and a variable amount of ligand, chosen such that the total concentration of metal, CM, is much larger than the total concentration of ligand, CL. Under these conditions we may assume that essentially all the ligand reacts to form the metal–ligand complex. The concentration of the complex, which has the general form MxLy, is\[\left[\mathrm{M}_{x} \mathrm{L_y}\right]=\frac{C_{\mathrm{L}}}{y} \nonumber \]If we monitor the absorbance at a wavelength where only MxLy absorbs, then\[A=\varepsilon b\left[\mathrm{M}_{x} \mathrm{L}_{y}\right]=\frac{\varepsilon b C_{\mathrm{L}}}{y} \nonumber \]and a plot of absorbance versus CL is linear with a slope, sL, of\[s_{\mathrm{L}}=\frac{\varepsilon b}{y} \nonumber \]A second set of solutions is prepared with a fixed concentration of ligand that is much greater than a variable concentration of metal; thus\[\left[\mathrm{M}_{x} \mathrm{L}_{y}\right]=\frac{C_{\mathrm{M}}}{x} \nonumber \]\[A=\varepsilon b\left[\mathrm{M}_{x} \mathrm{L}_{y}\right]=\frac{\varepsilon b C_{\mathrm{M}}}{x} \nonumber \]\[s_{M}=\frac{\varepsilon b}{x} \nonumber \]A ratio of the slopes provides the relative values of x and y.\[\frac{s_{\text{M}}}{s_{\text{L}}}=\frac{\varepsilon b / x}{\varepsilon b / y}=\frac{y}{x} \nonumber \]An important assumption in the slope-ratio method is that the complexation reaction goes to completion in the presence of a sufficiently large excess of metal or ligand. The slope-ratio method also is limited to systems in which only a single complex forms and for which Beer's law is obeyed. Another important application of molecular absorption spectroscopy is the determination of equilibrium constants. Let's consider, as a simple example, an acid–base reaction of the general form\[\operatorname{HIn}(a q)+ \ \mathrm{H}_{2} \mathrm{O}(l) \rightleftharpoons \ \mathrm{H}_{3} \mathrm{O}^{+}(a q)+\operatorname{In}^{-}(a q) \nonumber \]where HIn and In– are the conjugate weak acid and weak base forms of an acid–base indicator. The equilibrium constant for this reaction is\[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{In}^{-}\right]}{[\mathrm{HIn}]} \nonumber \]To determine the equilibrium constant's value, we prepare a solution in which the reaction is in a state of equilibrium and determine the equilibrium concentration for H3O+, HIn, and In–. The concentration of H3O+ is easy to determine by measuring the solution's pH. To determine the concentration of HIn and In– we can measure the solution's absorbance. If both HIn and In– absorb at the selected wavelength, then, from Beer's law, we know that\[A=\varepsilon_{\mathrm{HIn}} b[\mathrm{HIn}]+\varepsilon_{\mathrm{In}} b[\mathrm{In}^-] \label{10.5} \]where \(\varepsilon_\text{HIn}\) and \(\varepsilon_{\text{In}}\) are the molar absorptivities for HIn and In–.
The indicator's total concentration, C, is given by a mass balance equation\[C=[\mathrm{HIn}]+ [\text{In}^-] \label{10.6} \]Solving Equation \ref{10.6} for [HIn] and substituting into Equation \ref{10.5} gives\[A=\varepsilon_{\mathrm{HIn}} b\left(C-\left[\mathrm{In}^{-}\right]\right)+\varepsilon_{\mathrm{In}} b\left[\mathrm{In}^{-}\right] \nonumber \]which we simplify to\[A=\varepsilon_{\mathrm{HIn}} bC- \varepsilon_{\mathrm{HIn}}b\left[\mathrm{In}^{-}\right]+\varepsilon_{\mathrm{In}} b\left[\mathrm{In}^{-}\right] \nonumber \]\[A=A_{\mathrm{HIn}}+b\left[\operatorname{In}^{-}\right]\left(\varepsilon_{\mathrm{In}}-\varepsilon_{\mathrm{HIn}}\right) \label{10.7} \]where AHIn, which is equal to \(\varepsilon_\text{HIn}bC\), is the absorbance when the pH is acidic enough that essentially all the indicator is present as HIn. Solving Equation \ref{10.7} for the concentration of In– gives\[\left[\operatorname{In}^{-}\right]=\frac{A-A_{\mathrm{HIn}}}{b\left(\varepsilon_{\mathrm{In}}-\varepsilon_{\mathrm{HIn}}\right)} \label{10.8} \]Proceeding in the same fashion, we derive a similar equation for the concentration of HIn\[[\mathrm{HIn}]=\frac{A_{\mathrm{In}}-A}{b\left(\varepsilon_{\mathrm{In}}-\varepsilon_{\mathrm{HIn}}\right)} \label{10.9} \]where AIn, which is equal to \(\varepsilon_{\text{In}}bC\), is the absorbance when the pH is basic enough that only In– contributes to the absorbance. Substituting Equation \ref{10.8} and Equation \ref{10.9} into the equilibrium constant expression for HIn gives\[K_a = \frac {[\text{H}_3\text{O}^+][\text{In}^-]} {[\text{HIn}]} = [\text{H}_3\text{O}^+] \times \frac {A - A_\text{HIn}} {A_{\text{In}} - A} \label{10.10} \]We can use Equation \ref{10.10} to determine Ka in one of two ways. The simplest approach is to prepare three solutions, each of which contains the same amount, C, of indicator. The pH of one solution is made sufficiently acidic such that [HIn] >> [In–]. The absorbance of this solution gives AHIn. The value of AIn is determined by adjusting the pH of the second solution such that [In–] >> [HIn]. Finally, the pH of the third solution is adjusted to an intermediate value, and the pH and absorbance, A, recorded. The value of Ka is calculated using Equation \ref{10.10}.The acidity constant for an acid–base indicator is determined by preparing three solutions, each of which has a total concentration of indicator equal to \(5.00 \times 10^{-5}\) M. The first solution is made strongly acidic with HCl and has an absorbance of 0.250. The second solution is made strongly basic and has an absorbance of 1.40. The pH of the third solution is 2.91 and has an absorbance of 0.662. What is the value of Ka for the indicator?The value of Ka is determined by making appropriate substitutions into Equation \ref{10.10} where [H3O+] is \(1.23 \times 10^{-3}\); thus\[K_{\mathrm{a}}=\left(1.23 \times 10^{-3}\right) \times \frac{0.662-0.250}{1.40-0.662}=6.87 \times 10^{-4} \nonumber \]To determine the Ka of a merocyanine dye, the absorbance of a solution of \(3.5 \times 10^{-4}\) M dye was measured at a pH of 2.00, a pH of 6.00, and a pH of 12.00, yielding absorbances of 0.000, 0.225, and 0.680, respectively. What is the value of Ka for this dye? The data for this problem is adapted from Lu, H.; Rutan, S. C. Anal.
Chem., 1996, 68, 1381–1386.The value of Ka is\[K_{\mathrm{a}}=\left(1.00 \times 10^{-6}\right) \times \frac{0.225-0.000}{0.680-0.225}=4.95 \times 10^{-7} \nonumber \]A second approach for determining Ka is to prepare a series of solutions, each of which contains the same amount of indicator. Two solutions are used to determine values for AHIn and AIn. Taking the log of both sides of Equation \ref{10.10} and rearranging leaves us with the following equation.\[\log \frac{A-A_{\mathrm{HIn}}}{A_{\mathrm{In}}-A}=\mathrm{pH}-\mathrm{p} K_{\mathrm{a}} \label{10.11} \]A plot of log[(A – AHIn)/(AIn – A)] versus pH is a straight line with a slope of +1 and a y-intercept of –pKa.To determine the Ka for the indicator bromothymol blue, the absorbance of each of a series of solutions that contain the same concentration of bromothymol blue is measured at pH levels of 3.35, 3.65, 3.94, 4.30, and 4.64, yielding absorbance values of 0.170, 0.287, 0.411, 0.562, and 0.670, respectively. Acidifying the first solution to a pH of 2 changes its absorbance to 0.006, and adjusting the pH of the last solution to 12 changes its absorbance to 0.818. What is the value of Ka for bromothymol blue? The data for this problem is from Patterson, G. S. J. Chem. Educ., 1999, 76, 395–398.To determine Ka we use Equation \ref{10.11}, plotting log[(A – AHIn)/(AIn – A)] versus pH, as shown below.Fitting a straight line to the data gives a regression model of\[\log \frac{A-A_{\mathrm{HIn}}}{A_{\mathrm{In}}-A}=-3.80+0.962 \mathrm{pH} \nonumber \]The y-intercept is –pKa; thus, the pKa is 3.80 and the Ka is \(1.58 \times 10^{-4}\).In developing these approaches for determining Ka we considered a relatively simple system in which the absorbance of HIn and In– are easy to measure and for which it is easy to determine the concentration of H3O+. In addition to acid–base reactions, we can adapt these approaches to any reaction of the general form\[X(a q)+Y(a q)\rightleftharpoons Z(a q) \nonumber \]including metal–ligand complexation reactions and redox reactions, provided we can determine spectrophotometrically the concentration of the product, Z, and one of the reactants, either X or Y, and that we can determine the concentration of the other reactant by some other method. With appropriate modifications, a more complicated system in which we cannot determine the concentration of one or more of the reactants or products also is possible [Ramette, R. W. Chemical Equilibrium and Analysis, Addison-Wesley: Reading, MA, 1981, Chapter 13].This page titled 14.3: Qualitative and Characterization Applications is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
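The straight-line treatment in Equation \ref{10.11} is easy to carry out numerically. The sketch below is offered only as an illustration, not a prescribed procedure; it uses the bromothymol blue data from the practice exercise above and should reproduce, to within rounding, the slope and intercept quoted there.

```python
import numpy as np

# bromothymol blue data from the practice exercise
pH = np.array([3.35, 3.65, 3.94, 4.30, 4.64])
A  = np.array([0.170, 0.287, 0.411, 0.562, 0.670])
A_HIn, A_In = 0.006, 0.818          # limiting absorbances measured at pH 2 and pH 12

y = np.log10((A - A_HIn) / (A_In - A))      # left-hand side of Equation 10.11
slope, intercept = np.polyfit(pH, y, 1)

pKa = -intercept                    # the y-intercept is -pKa; the slope should be near +1
print(f"slope = {slope:.3f}, intercept = {intercept:.2f}")
print(f"pKa = {pKa:.2f}, Ka = {10**-pKa:.2e}")
```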
14.4: Quantitative Applications
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/14%3A_Applications_of_Ultraviolet_Visible_Molecular_Absorption_Spectrometry/14.04%3A_Quantitative_Applications
The determination of an analyte’s concentration based on its absorption of ultraviolet or visible radiation is one of the most frequently encountered quantitative analytical methods. One reason for its popularity is that many organic and inorganic compounds have strong absorption bands in the UV/Vis region of the electromagnetic spectrum. In addition, if an analyte does not absorb UV/Vis radiation—or if its absorbance is too weak—we often can react it with another species that is strongly absorbing. For example, a dilute solution of Fe2+ does not absorb visible light. Reacting Fe2+ with o-phenanthroline, however, forms an orange–red complex of \(\text{Fe(phen)}_3^{2+}\) that has a strong, broad absorbance band near 500 nm. An additional advantage to UV/Vis absorption is that in most cases it is relatively easy to adjust experimental and instrumental conditions so that Beer’s law is obeyed.The analysis of waters and wastewaters often relies on the absorption of ultraviolet and visible radiation. Many of these methods are outlined in Table \(\PageIndex{1}\). Several of these methods are described here in more detail.Although the quantitative analysis of metals in waters and wastewaters is accomplished primarily by atomic absorption or atomic emission spectroscopy, many metals also can be analyzed following the formation of a colorful metal–ligand complex. One advantage to these spectroscopic methods is that they easily are adapted to the analysis of samples in the field using a filter photometer. One ligand used for the analysis of several metals is diphenylthiocarbazone, also known as dithizone. Dithizone is not soluble in water, but when a solution of dithizone in CHCl3 is shaken with an aqueous solution that contains an appropriate metal ion, a colored metal–dithizonate complex forms that is soluble in CHCl3. The selectivity of dithizone is controlled by adjusting the sample’s pH. For example, Cd2+ is extracted from solutions made strongly basic with NaOH, Pb2+ from solutions made basic with an NH3/ NH4+ buffer, and Hg2+ from solutions that are slightly acidic.The structure of dithizone is shown below.When chlorine is added to water the portion available for disinfection is called the chlorine residual. There are two forms of chlorine residual. The free chlorine residual includes Cl2, HOCl, and OCl–. The combined chlorine residual, which forms from the reaction of NH3 with HOCl, consists of monochloramine, NH2Cl, dichloramine, NHCl2, and trichloramine, NCl3. Because the free chlorine residual is more efficient as a disinfectant, there is an interest in methods that can distinguish between the total chlorine residual’s different forms. One such method is the leuco crystal violet method. The free residual chlorine is determined by adding leuco crystal violet to the sample, which instantaneously oxidizes to give a blue-colored compound that is monitored at 592 nm. Completing the analysis in less than five minutes prevents a possible interference from the combined chlorine residual. The total chlorine residual (free + combined) is determined by reacting a separate sample with iodide, which reacts with both chlorine residuals to form HOI. When the reaction is complete, leuco crystal violet is added and oxidized by HOI, giving the same blue-colored product. The combined chlorine residual is determined by difference.The concentration of fluoride in drinking water is determined indirectly by its ability to form a complex with zirconium. 
In the presence of the dye SPADNS, a solution of zirconium forms a red colored compound, called a lake, that absorbs at 570 nm. When fluoride is added, the formation of the stable \(\text{ZrF}_6^{2-}\) complex causes a portion of the lake to dissociate, decreasing the absorbance. A plot of absorbance versus the concentration of fluoride, therefore, has a negative slope.SPADNS, the structure of which is shown below, is an abbreviation for the sodium salt of 2-(4-sulfophenylazo)-1,8-dihydroxy-3,6-napthalenedisulfonic acid, which is a mouthful to say.Spectroscopic methods also are used to determine organic constituents in water. For example, the combined concentrations of phenol and ortho- and meta-substituted phenols are determined by using steam distillation to separate the phenols from nonvolatile impurities. The distillate reacts with 4-aminoantipyrine at pH 7.9 ± 0.1 in the presence of K3Fe(CN)6 to form a yellow colored antipyrine dye. After extracting the dye into CHCl3, its absorbance is monitored at 460 nm. A calibration curve is prepared using only the unsubstituted phenol, C6H5OH. Because the molar absorptivities of substituted phenols generally are less than that for phenol, the reported concentration represents the minimum concentration of phenolic compounds.4-aminoantipyrineMolecular absorption also is used for the analysis of environmentally significant airborne pollutants. In many cases the analysis is carried out by collecting the sample in water, converting the analyte to an aqueous form that can be analyzed by methods such as those described in Table \(\PageIndex{1}\). For example, the concentration of NO2 is determined by oxidizing NO2 to \(\text{NO}_3^-\). The concentration of \(\text{NO}_3^-\) is then determined by first reducing it to \(\text{NO}_2^-\) with Cd, and then reacting \(\text{NO}_2^-\) with sulfanilamide and N-(1-naphthyl)-ethylenediamine to form a red azo dye. Another important application is the analysis for SO2, which is determined by collecting the sample in an aqueous solution of \(\text{HgCl}_4^{2-}\) where it reacts to form \(\text{Hg(SO}_3)_2^{2-}\). Addition of p-rosaniline and formaldehyde produces a purple complex that is monitored at 569 nm. Infrared absorption is useful for the analysis of vapors, including HCN, SO2, nitrobenzene, methyl mercaptan, and vinyl chloride. Frequently, these analyses are accomplished using portable, dedicated infrared photometers.The analysis of clinical samples often is complicated by the complexity of the sample's matrix, which may contribute a significant background absorption at the desired wavelength. The determination of serum barbiturates provides one example of how this problem is overcome. The barbiturates are first extracted from a sample of serum with CHCl3 and then extracted from the CHCl3 into 0.45 M NaOH (pH ≈ 13). The absorbance of the aqueous extract is measured at 260 nm, and includes contributions from the barbiturates as well as other components extracted from the serum sample. The pH of the sample is then lowered to approximately 10 by adding NH4Cl and the absorbance remeasured. Because the barbiturates do not absorb at this pH, we can use the absorbance at pH 10, ApH 10, to correct the absorbance at pH 13, ApH 13\[A_\text{barb} = A_\text{pH 13} - \frac {V_\text{samp} + V_{\text{NH}_4\text{Cl}}} {V_\text{samp}} \times A_\text{pH 10} \nonumber \]where Abarb is the absorbance due to the serum barbiturates and Vsamp and \(V_{\text{NH}_4\text{Cl}}\) are the volumes of sample and NH4Cl, respectively.
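The dilution-corrected subtraction in the equation above amounts to a one-line calculation. The following sketch is a minimal, hypothetical example; the absorbances and volumes are invented for illustration and do not come from a real assay.

```python
def barbiturate_absorbance(A_pH13, A_pH10, V_samp_mL, V_NH4Cl_mL):
    """Correct the pH 13 absorbance for background absorbers that remain at pH 10."""
    dilution = (V_samp_mL + V_NH4Cl_mL) / V_samp_mL
    return A_pH13 - dilution * A_pH10

# hypothetical measurements: 3.00 mL of extract, 0.50 mL of NH4Cl solution added
A_barb = barbiturate_absorbance(A_pH13=0.652, A_pH10=0.122, V_samp_mL=3.00, V_NH4Cl_mL=0.50)
print(f"A_barb = {A_barb:.3f}")
```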
Table \(\PageIndex{2}\) provides a summary of several other methods for analyzing clinical samples.UV/Vis molecular absorption is used for the analysis of a diverse array of industrial samples including pharmaceuticals, food, paint, glass, and metals. In many cases the methods are similar to those described in Table \(\PageIndex{1}\) and in Table \(\PageIndex{2}\). For example, the amount of iron in food is determined by bringing the iron into solution and analyzing using the o-phenanthroline method listed in Table \(\PageIndex{1}\).Many pharmaceutical compounds contain chromophores that make them suitable for analysis by UV/Vis absorption. Products analyzed in this fashion include antibiotics, hormones, vitamins, and analgesics. One example of the use of UV absorption is in determining the purity of aspirin tablets, for which the active ingredient is acetylsalicylic acid. Salicylic acid, which is produced by the hydrolysis of acetylsalicylic acid, is an undesirable impurity in aspirin tablets, and should not be present at more than 0.01% w/w. Samples are screened for unacceptable levels of salicylic acid by monitoring the absorbance at a wavelength of 312 nm. Acetylsalicylic acid absorbs at 280 nm, but absorbs poorly at 312 nm. Conditions for preparing the sample are chosen such that an absorbance of greater than 0.02 signifies an unacceptable level of salicylic acid.UV/Vis molecular absorption routinely is used for the analysis of narcotics and for drug testing. One interesting forensic application is the determination of blood alcohol using the Breathalyzer test. In this test a 52.5-mL breath sample is bubbled through an acidified solution of K2Cr2O7, which oxidizes ethanol to acetic acid. The concentration of ethanol in the breath sample is determined by a decrease in the absorbance at 440 nm where the dichromate ion absorbs. A blood alcohol content of 0.10%, which is above the legal limit, corresponds to 0.025 mg of ethanol in the breath sample.To develop a quantitative analytical method, the conditions under which Beer’s law is obeyed must be established. First, the most appropriate wavelength for the analysis is determined from an absorption spectrum. In most cases the best wavelength corresponds to an absorption maximum because it provides greater sensitivity and is less susceptible to instrumental limitations. Second, if the instrument has adjustable slits, then an appropriate slit width is chosen. The absorption spectrum also aids in selecting a slit width by choosing a width that is narrow enough to avoid instrumental limitaions to Beer’s law, but wide enough to increase the throughput of source radiation. Finally, a calibration curve is constructed to determine the range of concentrations for which Beer’s law is valid. Additional considerations that are important in any quantitative method are the effect of potential interferents and establishing an appropriate blank.To determine the concentration of an analyte we measure its absorbance and apply Beer’s law using any of the standardization methods described in Chapter 5. The most common methods are a normal calibration curve using external standards and the method of standard additions. A single point standardization also is possible, although we must first verify that Beer’s law holds for the concentration of analyte in the samples and the standard.The determination of iron in an industrial waste stream is carried out by the o-phenanthroline described in Representative Method 10.3.1. 
Using the data in the following table, determine the mg Fe/L in the waste stream.Linear regression of absorbance versus the concentration of Fe in the standards gives the calibration curve and calibration equation shown here\[A=0.0006+\left(0.1817 \ \mathrm{L} \ \mathrm{mg}^{-1}\right) \times(\mathrm{mg} \ \mathrm{Fe} / \mathrm{L}) \nonumber \]Substituting the sample's absorbance into the calibration equation gives the concentration of Fe in the waste stream as 1.48 mg Fe/L.The concentration of Cu2+ in a sample is determined by reacting it with the ligand cuprizone and measuring its absorbance at 606 nm in a 1.00-cm cell. When a 5.00-mL sample is treated with cuprizone and diluted to 10.00 mL, the resulting solution has an absorbance of 0.118. A second 5.00-mL sample is mixed with 1.00 mL of a 20.00 mg/L standard of Cu2+, treated with cuprizone and diluted to 10.00 mL, giving an absorbance of 0.162. Report the mg Cu2+/L in the sample.For this standard addition we write equations that relate absorbance to the concentration of Cu2+ in the sample before the standard addition\[0.118=\varepsilon b \left[ C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}\right] \nonumber \]and after the standard addition\[0.162=\varepsilon b\left(C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}+\frac{20.00 \ \mathrm{mg} \ \mathrm{Cu}}{\mathrm{L}} \times \frac{1.00 \ \mathrm{mL}}{10.00 \ \mathrm{mL}}\right) \nonumber \]in each case accounting for the dilution of the original sample and for the standard. The value of \(\varepsilon b\) is the same in both equations. Solving each equation for \(\varepsilon b\) and equating\[\frac{0.162}{C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}+\frac{20.00 \ \mathrm{mg} \ \mathrm{Cu}}{\mathrm{L}} \times \frac{1.00 \ \mathrm{mL}}{10.00 \ \mathrm{mL}}}=\frac{0.118}{C_{\mathrm{Cu}} \times \frac{5.00 \text{ mL}}{10.00 \text{ mL}}} \nonumber \]leaves us with an equation in which CCu is the only variable. Solving for CCu gives its value as\[\frac{0.162}{0.500 \times C_{\mathrm{Cu}}+2.00 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L}}=\frac{0.118}{0.500 \times C_{\mathrm{Cu}}} \nonumber \]\[0.0810 \times C_{\mathrm{Cu}}=0.0590 \times C_{\mathrm{Cu}}+0.236 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L} \nonumber \]\[0.0220 \times C_{\mathrm{Cu}}=0.236 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L} \nonumber \]\[C_{\mathrm{Cu}}=10.7 \ \mathrm{mg} \ \mathrm{Cu} / \mathrm{L} \nonumber \]Suppose we need to determine the concentration of two analytes, X and Y, in a sample. If each analyte has a wavelength where the other analyte does not absorb, then we can proceed using the approach in Example 14.4.1 . Unfortunately, UV/Vis absorption bands are so broad that frequently it is not possible to find suitable wavelengths. Because Beer's law is additive the mixture's absorbance, Amix, is\[\left(A_{m i x}\right)_{\lambda_{1}}=\left(\varepsilon_{x}\right)_{\lambda_{1}} b C_{X}+\left(\varepsilon_{Y}\right)_{\lambda_{1}} b C_{Y} \label{10.1} \]where \(\lambda_1\) is the wavelength at which we measure the absorbance. Because Equation \ref{10.1} includes terms for the concentration of both X and Y, the absorbance at one wavelength does not provide enough information to determine either CX or CY.
If we measure the absorbance at a second wavelength\[\left(A_{m i x}\right)_{\lambda_{2}}=\left(\varepsilon_{x}\right)_{\lambda_{2}} b C_{X}+\left(\varepsilon_{Y}\right)_{\lambda_{2}} b C_{Y} \label{10.2} \]then we can determine CX and CY by solving simultaneously Equation \ref{10.1} and Equation \ref{10.2}. Of course, we also must determine the value for \(\varepsilon_X\) and \(\varepsilon_Y\) at each wavelength. For a mixture of n components, we must measure the absorbance at n different wavelengths.The concentrations of Fe3+ and Cu2+ in a mixture are determined following their reaction with hexacyanoruthenate (II), \(\text{Ru(CN)}_6^{4-}\), which forms a purple-blue complex with Fe3+ (\(\lambda_\text{max}\) = 550 nm) and a pale-green complex with Cu2+ (\(\lambda_\text{max}\) = 396 nm) [DiTusa, M. R.; Schlit, A. A. J. Chem. Educ. 1985, 62, 541–542]. The molar absorptivities (M–1 cm–1) for the metal complexes at the two wavelengths are summarized in the following table.When a sample that contains Fe3+ and Cu2+ is analyzed in a cell with a pathlength of 1.00 cm, the absorbance at 550 nm is 0.183 and the absorbance at 396 nm is 0.109. What are the molar concentrations of Fe3+ and Cu2+ in the sample?Substituting known values into Equation \ref{10.1} and Equation \ref{10.2} gives\[\begin{aligned} A_{550} &=0.183=9970 C_{\mathrm{Fe}}+34 C_{\mathrm{Cu}} \\ A_{396} &=0.109=84 C_{\mathrm{Fe}}+856 C_{\mathrm{Cu}} \end{aligned} \nonumber \]To determine CFe and CCu we solve the first equation for CCu\[C_{\mathrm{Cu}}=\frac{0.183-9970 C_{\mathrm{Fe}}}{34} \nonumber \]and substitute the result into the second equation.\[\begin{aligned} 0.109 &=84 C_{\mathrm{Fe}}+856 \times \frac{0.183-9970 C_{\mathrm{Fe}}}{34} \\ &=4.607-\left(2.51 \times 10^{5}\right) C_{\mathrm{Fe}} \end{aligned} \nonumber \]Solving for CFe gives the concentration of Fe3+ as \(1.8 \times 10^{-5}\) M. Substituting this concentration back into the equation for the mixture's absorbance at 396 nm gives the concentration of Cu2+ as \(1.3 \times 10^{-4}\) M.Another approach to solving Example 14.4.2 is to multiply the first equation by 856/34 giving\[4.607=251009 C_{\mathrm{Fe}}+856 C_\mathrm{Cu} \nonumber \]Subtracting the second equation from this equation\[\begin{aligned} 4.607 &=251009 C_{\mathrm{Fe}}+856 C_{\mathrm{Cu}} \\-0.109 &=84 C_{\mathrm{Fe}}+856 C_{\mathrm{Cu}} \end{aligned} \nonumber \]gives\[4.498=250925 C_{\mathrm{Fe}} \nonumber \]and we find that CFe is \(1.8 \times 10^{-5}\). Having determined CFe we can substitute back into one of the other equations to solve for CCu, which is \(1.3 \times 10^{-4}\).The absorbance spectra for Cr3+ and Co2+ overlap significantly. To determine the concentration of these analytes in a mixture, its absorbance is measured at 400 nm and at 505 nm, yielding values of 0.336 and 0.187, respectively.
The individual molar absorptivities (M–1 cm–1) for Cr3+ are 15.2 at 400 nm and 0.533 at 505 nm; the values for Co2+ are 5.60 at 400 nm and 5.07 at 505 nm.Substituting into Equation \ref{10.1} and Equation \ref{10.2} gives\[A_{400} = 0.336 = 15.2C_\text{Cr} + 5.60C_\text{Co} \nonumber \]\[A_{505} = 0.187 = 0.533C_\text{Cr} + 5.07C_\text{Co} \nonumber \]To determine CCr and CCo we solve the first equation for CCo\[C_{\mathrm{Co}}=\frac{0.336-15.2 C_{\mathrm{Cr}}}{5.60} \nonumber \]and substitute the result into the second equation.\[0.187=0.533 C_{\mathrm{Cr}}+5.07 \times \frac{0.336-15.2 C_{\mathrm{Cr}}}{5.60} \nonumber \]\[0.187=0.3042-13.23 C_{\mathrm{Cr}} \nonumber \]Solving for CCr gives the concentration of Cr3+ as \(8.86 \times 10^{-3}\) M. Substituting this concentration back into the equation for the mixture's absorbance at 400 nm gives the concentration of Co2+ as \(3.60 \times 10^{-2}\) M.To obtain results with good accuracy and precision the two wavelengths should be selected so that \(\varepsilon_X > \varepsilon_Y\) at one wavelength and \(\varepsilon_X < \varepsilon_Y\) at the other wavelength. It is easy to appreciate why this is true. Because the absorbance at each wavelength is dominated by one analyte, any uncertainty in the concentration of the other analyte has less of an impact. shows that the choice of wavelengths for Practice Exercise 14.4.2 is reasonable. When the choice of wavelengths is not obvious, one method for locating the optimum wavelengths is to plot \(\varepsilon_X / \varepsilon_Y\) as a function of wavelength, and determine the wavelengths where \(\varepsilon_X / \varepsilon_Y\) reaches maximum and minimum values [Mehra, M. C.; Rioux, J. J. Chem. Educ. 1982, 59, 688–689].When the analytes' spectra overlap severely, such that \(\varepsilon_X \approx \varepsilon_Y\) at all wavelengths, other computational methods may provide better accuracy and precision. In a multiwavelength linear regression analysis, for example, a mixture's absorbance is compared to that for a set of standard solutions at several wavelengths [Blanco, M.; Iturriaga, H.; Maspoch, S.; Tarin, P. J. Chem. Educ. 1989, 66, 178–180]. If ASX and ASY are the absorbance values for standard solutions of components X and Y at any wavelength, then\[A_{SX}=\varepsilon_{X} b C_{SX} \label{10.3} \]\[A_{SY}=\varepsilon_{Y} b C_{SY} \label{10.4} \]where CSX and CSY are the known concentrations of X and Y in the standard solutions. Solving Equation \ref{10.3} and Equation \ref{10.4} for \(\varepsilon_X\) and for \(\varepsilon_Y\), substituting into Equation \ref{10.1}, and rearranging, gives\[\frac{A_{\operatorname{mix}}}{A_{S X}}=\frac{C_{X}}{C_{S X}}+\frac{C_{Y}}{C_{S Y}} \times \frac{A_{S Y}}{A_{S X}} \nonumber \]To determine CX and CY the mixture's absorbance and the absorbances of the standard solutions are measured at several wavelengths. Graphing Amix/ASX versus ASY/ASX gives a straight line with a slope of CY/CSY and a y-intercept of CX/CSX. This approach is particularly helpful when it is not possible to find wavelengths where \(\varepsilon_X > \varepsilon_Y\) and \(\varepsilon_X < \varepsilon_Y\).The approach outlined here for a multiwavelength linear regression uses a single standard solution for each analyte. A more rigorous approach uses multiple standards for each analyte. The math behind the analysis of such data—which we call a multiple linear regression—is beyond the level of this text. For more details about multiple linear regression see Brereton, R. G.
Chemometrics: Data Analysis for the Laboratory and Chemical Plant, Wiley: Chichester, England, 2003. shows visible absorbance spectra for a standard solution of 0.0250 M Cr3+, a standard solution of 0.0750 M Co2+, and a mixture that contains unknown concentrations of each ion. The data for these spectra are shown here.Use a multiwavelength regression analysis to determine the composition of the unknown.First we need to calculate values for Amix/ASX and for ASY/ASX. Let’s define X as Co2+ and Y as Cr3+. For example, at a wavelength of 375 nm Amix/ASX is 0.53/0.01, or 53 and ASY/ASX is 0.26/0.01, or 26. Completing the calculation for all wavelengths and graphing Amix/ASX versus ASY/ASX gives the calibration curve shown in . Fitting a straight-line to the data gives a regression model of\[\frac{A_{\operatorname{mix}}}{A_{S X}}=0.636+2.01 \times \frac{A_{S Y}}{A_{S X}} \nonumber \]Using the y-intercept, the concentration of Co2+ is\[\frac{C_{X}}{C_{S X}}=\frac{\left[\mathrm{Co}^{2+}\right]}{0.0750 \mathrm{M}}=0.636 \nonumber \]or [Co2+] = 0.048 M; using the slope the concentration of Cr3+ is\[\frac{C_{Y}}{C_{S Y}}=\frac{\left[\mathrm{Cr}^{3+}\right]}{0.0250 \mathrm{M}}=2.01 \nonumber \]or [Cr3+] = 0.050 M.A mixture of \(\text{MnO}_4^{-}\) and \(\text{Cr}_2\text{O}_7^{2-}\), and standards of 0.10 mM KMnO4 and of 0.10 mM K2Cr2O7 give the results shown in the following table. Determine the composition of the mixture. The data for this problem is from Blanco, M. C.; Iturriaga, H.; Maspoch, S.; Tarin, P. J. Chem. Educ. 1989, 66, 178–180.Letting X represent \(\text{MnO}_4^{-}\) and letting Y represent \(\text{Cr}_2\text{O}_7^{2-}\), we plot the equation\[\frac{A_{\operatorname{mix}}}{A_{SX}}=\frac{C_{X}}{C_{SX}}+\frac{C_{Y}}{C_{S Y}} \times \frac{A_{S Y}}{A_{SX}} \nonumber \]placing Amix/ASX on the y-axis and ASY/ASX on the x-axis. For example, at a wavelength of 266 nm the value Amix/ASX of is 0.766/0.042, or 18.2, and the value of ASY/ASX is 0.410/0.042, or 9.76. Completing the calculations for all wavelengths and plotting the data gives the result shown hereFitting a straight-line to the data gives a regression model of\[\frac{A_{\text { mix }}}{A_{\text { SX }}}=0.8147+1.7839 \times \frac{A_{SY}}{A_{SX}} \nonumber \]Using the y-intercept, the concentration of \(\text{MnO}_4^{-}\) is\[\frac{C_{X}}{C_{S X}}=0.8147=\frac{\left[\mathrm{MnO}_{4}^{-}\right]}{1.0 \times 10^{-4} \ \mathrm{M} \ \mathrm{MnO}_{4}^{-}} \nonumber \]or \(8.15 \times 10^{-5}\) M \(\text{MnO}_4^{-}\), and using the slope, the concentration of \(\text{Cr}_2\text{O}_7^{2-}\) is\[\frac{C_{Y}}{C_{S Y}}=1.7839=\frac{\left[\mathrm{Cr}_{2} \mathrm{O}_{7}^{2-}\right]}{1.00 \times 10^{-4} \ \mathrm{M} \ \text{Cr}_{2} \mathrm{O}_{7}^{2-}} \nonumber \]or \(1.78 \times 10^{-4}\) M \(\text{Cr}_2\text{O}_7^{2-}\).Sometimes our signal is superimposed on a background signal, which complicates our analysis because the measure absorbance has contributions from both our analyte and from the background. For example, the following figure shows a Gaussian signal with a maximum value of 50 centered at \(x = 125\) that is superimposed on an exponential background. The dotted line is the Gaussian signal, which has a maximum value of 50 at \(x = 125\), and the solid line is the signal as measured, which has a maximum value of 57 at \(x = 125\).If the background signal is consistent across all samples, then we can analyze the data without first removing its contribution. 
For example, the following figure shows a set of calibration standards and their resulting calibration curve, for which the y-intercept of 7 gives the offset introduced by the background.But background signals often are not consistent across samples, particularly when the source of the background is a property of the samples we collect (natural water samples, for example, may have variations in color due to differences in the concentration of dissolved organic matter) or a property of the instrument we are using (such as a variation in source intensity over time). When this is true, our data may look more like what we see in the following figure, which leads to a calibration curve with a greater uncertainty.Because the background changes gradually with the values for x while the analyte's signal changes quickly, we can use a derivative to distinguish between the two. One approach is to calculate and plot the derivative, \(\frac{\Delta y}{\Delta x}\), as a function of \(x\), as shown in . The calibration signal in this case is the difference between the maximum signal and the minimum signal, which are shown by the dotted red lines in the top part of the figure. The fit of the calibration curve to the data and the calibration curve's y-intercept of zero show that we have successfully compensated for the background signals.This page titled 14.4: Quantitative Applications is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
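The derivative trick described above can be sketched in a few lines. The example below builds a Gaussian peak with a maximum of 50 at x = 125, matching the earlier description, on top of an exponential background whose parameters are invented for illustration, and shows that the peak-to-trough height of \(\frac{\Delta y}{\Delta x}\) is nearly unaffected by the background.

```python
import numpy as np

x = np.linspace(0, 250, 251)
peak = 50 * np.exp(-(x - 125) ** 2 / (2 * 15 ** 2))      # analyte signal
background = 10 * np.exp(-x / 100)                        # slowly varying background (arbitrary)
measured = peak + background

dy_dx = np.diff(measured) / np.diff(x)                    # simple finite-difference derivative
signal = dy_dx.max() - dy_dx.min()                        # peak-to-trough height of the derivative

dy_dx_clean = np.diff(peak) / np.diff(x)
print(f"derivative signal with background:    {signal:.3f}")
print(f"derivative signal without background: {dy_dx_clean.max() - dy_dx_clean.min():.3f}")
# the two values are nearly identical because the background's derivative is small and smooth
```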
14.5: Photometric Titrations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/14%3A_Applications_of_Ultraviolet_Visible_Molecular_Absorption_Spectrometry/14.05%3A_Photometric_Titrations
If at least one species in a titration absorbs electromagnetic radiation, then we can identify the end point by monitoring the titrand's absorbance at a carefully selected wavelength. For example, we can identify the end point for a titration of Cu2+ with EDTA in the presence of NH3 by monitoring the titrand's absorbance at a wavelength of 745 nm, where the \(\text{Cu(NH}_3)_4^{2+}\) complex absorbs strongly. At the beginning of the titration the absorbance is at a maximum. As we add EDTA, however, the reaction\[\text{Cu(NH}_3)_4^{2+}(aq) + \text{Y}^{4-} \rightleftharpoons \text{CuY}^{2-}(aq) + 4\text{NH}_3(aq) \nonumber \]decreases the concentration of \(\text{Cu(NH}_3)_4^{2+}\) and decreases the absorbance until we reach the equivalence point. After the equivalence point the absorbance essentially remains unchanged. The resulting spectrophotometric titration curve is shown in . Note that the titration curve's y-axis is not the measured absorbance, Ameas, but a corrected absorbance, Acorr\[A_\text{corr} = A_\text{meas} \times \frac {V_\text{EDTA} + V_\text{Cu}} {V_\text{Cu}} \nonumber \]where VEDTA and VCu are, respectively, the volumes of EDTA and Cu. Correcting the absorbance for the titrand's dilution ensures that the spectrophotometric titration curve consists of linear segments that we can extrapolate to find the end point. Other common spectrophotometric titration curves are shown in .This page titled 14.5: Photometric Titrations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
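A minimal sketch of the dilution correction and end-point location described above follows. The titration data are idealized values invented for illustration (a hypothetical 25.00 mL sample titrated with EDTA), not data from the text, and real curves show some curvature near the equivalence point.

```python
import numpy as np

V_Cu = 25.00                                   # mL of Cu(NH3)4 2+ titrand (hypothetical)
V_EDTA = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])        # mL of titrant added
A_meas = np.array([0.800, 0.648, 0.517, 0.403, 0.303, 0.214, 0.203, 0.192])

A_corr = A_meas * (V_EDTA + V_Cu) / V_Cu       # correct for dilution by the titrant

# extrapolate the two linear branches (before and after the equivalence point)
m1, b1 = np.polyfit(V_EDTA[:5], A_corr[:5], 1)    # descending branch
m2, b2 = np.polyfit(V_EDTA[-3:], A_corr[-3:], 1)  # nearly flat branch
end_point = (b2 - b1) / (m1 - m2)                 # intersection of the two branches
print(f"end point ~ {end_point:.2f} mL EDTA")
```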
15.1: Theory of Fluorescence and Phosphorescence
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/15%3A_Molecular_Luminescence/15.01%3A_Theory_of_Fluorescence_and_Phosphorescence
The use of molecular fluorescence for qualitative analysis and for semi-quantitative analysis dates to the early to mid 1800s, with more accurate quantitative methods appearing in the 1920s. Instrumentation for fluorescence spectroscopy using a filter or a monochromator for wavelength selection appeared in, respectively, the 1930s and 1950s. Although the discovery of phosphorescence preceded that of fluorescence by almost 200 years, qualitative and quantitative applications of molecular phosphorescence did not receive much attention until after the development of fluorescence instrumentation.Photoluminescence is divided into two categories: fluorescence and phosphorescence. A pair of electrons that occupy the same electronic ground state have opposite spins and are in a singlet spin state ).When an analyte absorbs an ultraviolet or a visible photon, one of its valence electrons moves from the ground state to an excited state with a conservation of the electron's spin ). Emission of a photon from a singlet excited state to the singlet ground state—or between any two energy levels with the same spin—is called fluorescence. The probability of fluorescence is very high and the average lifetime of an electron in the excited state is only \(10^{-5}\) to \(10^{-8}\) s. Fluorescence, therefore, rapidly decays once the source of excitation is removed.In some cases an electron in a singlet excited state is transformed to a triplet excited state ) in which its spin is no longer paired with the ground state. Emission between a triplet excited state and a singlet ground state—or between any two energy levels that differ in their respective spin states—is called phosphorescence. Because the average lifetime for phosphorescence can be quite long—it ranges from \(10^{-4}\) to \(10^{4}\) seconds—phosphorescence may continue for some time after we remove the excitation source.To appreciate the origin of fluorescence and phosphorescence we must consider what happens to a molecule following the absorption of a photon. Let's assume the molecule initially occupies the lowest vibrational energy level of its electronic ground state, which is the singlet state labeled S0 in . Absorption of a photon excites the molecule to one of several vibrational energy levels in the first excited electronic state, S1, or the second electronic excited state, S2, both of which are singlet states. Relaxation to the ground state occurs by a number of mechanisms, some of which result in the emission of a photon and others that occur without the emission of a photon. These relaxation mechanisms are shown in . The most likely relaxation pathway from any excited state is the one with the shortest lifetime.A molecule in an excited state can return to its ground state in a variety of ways that we collectively call deactivation processes.When a molecule relaxes without emitting a photon we call the process radiationless deactivation. One example of radiationless deactivation is vibrational relaxation, in which a molecule in an excited vibrational energy level loses energy by moving to a lower vibrational energy level in the same electronic state. Vibrational relaxation is very rapid, with an average lifetime of \(<10^{-12}\) s.
Because vibrational relaxation is so efficient, a molecule in one of its excited state’s higher vibrational energy levels quickly returns to the excited state’s lowest vibrational energy level.Another form of radiationless deactivation is an internal conversion in which a molecule in the ground vibrational level of an excited state passes directly into a higher vibrational energy level of a lower energy electronic state of the same spin state. By a combination of internal conversions and vibrational relaxations, a molecule in an excited electronic state may return to the ground electronic state without emitting a photon. A related form of radiationless deactivation is an external conversion in which excess energy is transferred to the solvent or to another component of the sample’s matrix.Let’s use to illustrate how a molecule can relax back to its ground state without emitting a photon. Suppose our molecule is in the highest vibrational energy level of the second electronic excited state. After a series of vibrational relaxations brings the molecule to the lowest vibrational energy level of S2, it undergoes an internal conversion into a higher vibrational energy level of the first excited electronic state. Vibrational relaxations bring the molecule to the lowest vibrational energy level of S1. Following an internal conversion into a higher vibrational energy level of the ground state, the molecule continues to undergo vibrational relaxation until it reaches the lowest vibrational energy level of S0.A final form of radiationless deactivation is an intersystem crossing in which a molecule in the ground vibrational energy level of an excited electronic state passes into one of the higher vibrational energy levels of a lower energy electronic state with a different spin state. For example, an intersystem crossing is shown in between the singlet excited state S1 and the triplet excited state T1.Fluorescence occurs when a molecule in an excited state’s lowest vibrational energy level returns to a lower energy electronic state by emitting a photon. Because molecules return to their ground state by the fastest mechanism, fluorescence is observed only if it is a more efficient means of relaxation than a combination of internal conversions and vibrational relaxations.A quantitative expression of fluorescence efficiency is the fluorescent quantum yield, \(\Phi_f\), which is the fraction of excited state molecules that return to the ground state by fluorescence. The fluorescent quantum yields range from 1 when every molecule in an excited state undergoes fluorescence, to 0 when fluorescence does not occur.The intensity of fluorescence, If, is proportional to the amount of radiation absorbed by the sample, P0 – PT, and the fluorescence quantum yield\[I_{f}=k \Phi_{f}\left(P_{0}-P_{\mathrm{T}}\right) \label{10.1} \]where k is a constant that accounts for the efficiency of collecting and detecting the fluorescent emission. From Beer’s law we know that\[\frac{P_{\mathrm{T}}}{P_{0}}=10^{-\varepsilon b C} \label{10.2} \]where C is the concentration of the fluorescing species. Solving Equation \ref{10.2} for PT and substituting into Equation \ref{10.1} gives, after simplifying\[I_{f}=k \Phi_{f} P_{0}\left(1-10^{-\varepsilon b C}\right) \label{10.3} \]When \(\varepsilon bC\) < 0.01, which often is the case when the analyte's concentration is small, Equation \ref{10.3} simplifies to\[I_{f}=2.303 k \Phi_{f} \varepsilon b C P_{0}=k^{\prime} P_{0} \label{10.4} \]where k′ is a collection of constants. 
The intensity of fluorescence, therefore, increases with an increase in the quantum efficiency, the source’s incident power, and the molar absorptivity and the concentration of the fluorescing species. Fluorescence generally is observed when the molecule’s lowest energy absorption is a \(\pi \rightarrow \pi^*\) transition, although some \(n \rightarrow \pi^*\) transitions show weak fluorescence. Many unsubstituted, nonheterocyclic aromatic compounds have a favorable fluorescence quantum yield, although substitutions on the aromatic ring can affect \(\Phi_f\) significantly. For example, the presence of an electron-withdrawing group, such as –NO2, decreases \(\Phi_f\), while adding an electron-donating group, such as –OH, increases \(\Phi_f\). Fluorescence also increases for aromatic ring systems and for aromatic molecules with rigid planar structures. shows the fluorescence of quinine under a UV lamp. A molecule’s fluorescent quantum yield also is influenced by external variables, such as temperature and solvent. Increasing the temperature generally decreases \(\Phi_f\) because more frequent collisions between the molecule and the solvent increase external conversion. A decrease in the solvent’s viscosity decreases \(\Phi_f\) for similar reasons. For an analyte with acidic or basic functional groups, a change in pH may change the analyte’s structure and its fluorescent properties. As shown in , fluorescence may return the molecule to any of several vibrational energy levels in the ground electronic state. Fluorescence, therefore, occurs over a range of wavelengths. Because the change in energy for fluorescent emission generally is less than that for absorption, a molecule’s fluorescence spectrum is shifted to higher wavelengths than its absorption spectrum. A molecule in a triplet electronic excited state’s lowest vibrational energy level normally relaxes to the ground state by an intersystem crossing to a singlet state or by an external conversion. Phosphorescence occurs when the molecule relaxes by emitting a photon. As shown in , phosphorescence occurs over a range of wavelengths, all of which are at lower energies than the molecule’s absorption band. The intensity of phosphorescence, \(I_p\), is given by an equation similar to Equation \ref{10.4} for fluorescence\[\begin{align} I_{P} &= 2.303 k \Phi_{P} \varepsilon b C P_{0} \nonumber \\[4pt] &= k^{\prime} P_{0} \label{10.5} \end{align}\]where \(\Phi_p\) is the phosphorescence quantum yield. Phosphorescence is most favorable for molecules with \(n \rightarrow \pi^*\) transitions, which have a higher probability for an intersystem crossing than \(\pi \rightarrow \pi^*\) transitions. For example, phosphorescence is observed with aromatic molecules that contain carbonyl groups or heteroatoms. Aromatic compounds that contain halide atoms also have a higher efficiency for phosphorescence. In general, an increase in phosphorescence corresponds to a decrease in fluorescence. Because the average lifetime for phosphorescence can be quite long, ranging from \(10^{-4}\) to \(10^{4}\) s, the phosphorescent quantum yield usually is quite small. An improvement in \(\Phi_p\) is realized by decreasing the efficiency of external conversion. This is accomplished in several ways, including lowering the temperature, using a more viscous solvent, depositing the sample on a solid substrate, or trapping the molecule in solution.
shows an example of phosphorescence. Photoluminescence spectra are recorded by measuring the intensity of emitted radiation as a function of either the excitation wavelength or the emission wavelength. An excitation spectrum is obtained by monitoring emission at a fixed wavelength while varying the excitation wavelength. When corrected for variations in the source’s intensity and the detector’s response, a sample’s excitation spectrum is nearly identical to its absorbance spectrum. The excitation spectrum provides a convenient means for selecting the best excitation wavelength for a quantitative or qualitative analysis. In an emission spectrum a fixed wavelength is used to excite the sample and the intensity of emitted radiation is monitored as a function of wavelength. Although a molecule has a single excitation spectrum, it has two emission spectra, one for fluorescence and one for phosphorescence. shows the UV absorption spectrum and the UV fluorescence emission spectrum for quinine. This page titled 15.1: Theory of Fluorescence and Phosphorescence is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
15.2: Instruments for Measuring Fluorescence and Phosphorescence
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/15%3A_Molecular_Luminescence/15.02%3A_Instruments_for_Measuring_Fluorescence_and_Phosphorescence
The basic instrumentation for monitoring fluorescence and phosphorescence—a source of radiation, a means of selecting a narrow band of radiation, and a detector—are the same as those for absorption spectroscopy. The unique demands of fluorescence and phosphorescence, however, require some modifications to the instrument designs discussed in earlier chapters: the filter photometer, the single-beam spectrophotometer, the double-beam spectrophotometer, and the diode array spectrometer. The most important difference is that the detector cannot be placed directly across from the source. shows why this is the case. If we place the detector along the source’s axis it receives both the transmitted source radiation, PT, and the fluorescent, If, or phosphorescent, Ip, radiation. Instead, we rotate the detector and place it at 90° to the source. shows the basic design of an instrument for measuring fluorescence, which includes two wavelength selectors, one for selecting the source's excitation wavelength and one for selecting the analyte's emission wavelength. In a fluorometer the excitation and emission wavelengths are selected using absorption or interference filters. The excitation source for a fluorometer usually is a low-pressure Hg vapor lamp that provides intense emission lines distributed throughout the ultraviolet and visible region. When a monochromator is used to select the excitation and the emission wavelengths, the instrument is called a spectrofluorometer. With a monochromator the excitation source usually is a high-pressure Xe arc lamp, which has a continuous emission spectrum. Either instrumental design is appropriate for quantitative work, although only a spectrofluorometer can record an excitation or emission spectrum. A Hg vapor lamp has emission lines at 254, 312, 365, 405, 436, 546, 577, 691, and 773 nm. The sample cells for molecular fluorescence are similar to those for molecular absorption. Remote sensing using a fiber optic probe is possible with either a fluorometer or a spectrofluorometer. An analyte that is fluorescent is monitored directly. For an analyte that is not fluorescent, a suitable fluorescent probe molecule is incorporated into the tip of the fiber optic probe. The analyte’s reaction with the probe molecule leads to an increase or decrease in fluorescence. An instrument for molecular phosphorescence must discriminate between phosphorescence and fluorescence. Because the lifetime for fluorescence is shorter than that for phosphorescence, discrimination is achieved by incorporating a delay between exciting the sample and measuring the phosphorescent emission. shows how two out-of-phase choppers allow us to block fluorescent emission from reaching the detector when the sample is being excited and to prevent the source radiation from causing fluorescence when we are measuring the phosphorescent emission. Because phosphorescence is such a slow process, we must prevent the excited state from relaxing by external conversion. One way this is accomplished is by dissolving the sample in a suitable organic solvent, usually a mixture of ethanol, isopentane, and diethylether. The resulting solution is frozen at liquid-N2 temperatures to form an optically clear solid. The solid matrix minimizes external conversion due to collisions between the analyte and the solvent. External conversion also is minimized by immobilizing the sample on a solid substrate, making possible room temperature measurements.
One approach is to place a drop of a solution that contains the analyte on a small disc of filter paper. After drying the sample under a heat lamp, the sample is placed in the spectrofluorometer for analysis. Other solid substrates include silica gel, alumina, sodium acetate, and sucrose. This approach is particularly useful for the analysis of thin layer chromatography plates. This page titled 15.2: Instruments for Measuring Fluorescence and Phosphorescence is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
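The time discrimination between fluorescence and phosphorescence described in this section is easy to illustrate numerically. The R sketch below uses one hypothetical lifetime from each of the ranges given in Section 15.1 (10 ns for fluorescence, 10 ms for phosphorescence) and a 1 ms delay between ending the excitation and measuring the emission; both decays are modeled as simple exponentials.

```r
# Hypothetical lifetimes drawn from the ranges given in Section 15.1:
# 10 ns for fluorescence and 10 ms for phosphorescence.
tau_f <- 1e-8    # fluorescence lifetime, s
tau_p <- 1e-2    # phosphorescence lifetime, s
delay <- 1e-3    # delay between ending the excitation and measuring, s

# fraction of each type of emission that survives the delay
exp(-delay / tau_f)   # essentially zero: the fluorescence has fully decayed
exp(-delay / tau_p)   # about 0.9: most of the phosphorescence remains
```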
15.3: Applications and Photoluminescence methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/15%3A_Molecular_Luminescence/15.03%3A_Applications_and_Photoluminescence_methods
Molecular fluorescence and, to a lesser extent, phosphorescence are used for the direct or indirect quantitative analysis of analytes in a variety of matrices. A direct quantitative analysis is possible when the analyte’s fluorescent or phosphorescent quantum yield is favorable. If the analyte is not fluorescent or phosphorescent, or if the quantum yield is unfavorable, then an indirect analysis may be feasible. One approach is to react the analyte with a reagent to form a product that is fluorescent or phosphorescent. Another approach is to measure a decrease in fluorescence or phosphorescence when the analyte is added to a solution that contains a fluorescent or phosphorescent probe molecule. A decrease in emission is observed when the reaction between the analyte and the probe molecule enhances radiationless deactivation or results in a nonemitting product. The application of fluorescence and phosphorescence to inorganic and organic analytes is considered in this section. Except for a few metal ions, most notably \(\text{UO}_2^{2+}\), most inorganic ions are not sufficiently fluorescent for a direct analysis. Many metal ions are determined indirectly by reacting with an organic ligand to form a fluorescent or, less commonly, a phosphorescent metal–ligand complex. One example is the reaction of Al3+ with the sodium salt of 2, 4, 3′-trihydroxyazobenzene-5′-sulfonic acid—also known as alizarin garnet R—which forms a fluorescent metal–ligand complex. The analysis is carried out using an excitation wavelength of 470 nm, with fluorescence monitored at 500 nm. Table \(\PageIndex{1}\) provides additional examples of chelating reagents that form fluorescent metal–ligand complexes with metal ions. A few inorganic nonmetals are determined by their ability to decrease, or quench, the fluorescence of another species. One example is the analysis for F– based on its ability to quench the fluorescence of the Al3+–alizarin garnet R complex. As noted earlier, organic compounds that contain aromatic rings generally are fluorescent and aromatic heterocycles often are phosphorescent. Table \(\PageIndex{2}\) provides examples of several important biochemical, pharmaceutical, and environmental compounds that are analyzed quantitatively by fluorimetry or phosphorimetry. If an organic analyte is not naturally fluorescent or phosphorescent, it may be possible to incorporate it into a chemical reaction that produces a fluorescent or phosphorescent product. For example, the enzyme creatine phosphokinase is determined by using it to catalyze the formation of creatine from phosphocreatine. Reacting the creatine with ninhydrin produces a fluorescent product of unknown structure. The compounds listed in Table \(\PageIndex{2}\) include phenylalanine (F), tyrosine (F), tryptophan (F, P), vitamin A (F), vitamin B2 (F), vitamin B6 (F), vitamin B12 (F), vitamin E (F), folic acid (F), dopamine (F), norepinephrine (F), quinine (F), salicylic acid (F, P), morphine (F), barbiturates (F), LSD (F), codeine (P), caffeine (P), sulfanilamide (P), pyrene (F), benzo[a]pyrene (F), organothiophosphorous pesticides (F), carbamate insecticides (F), and DDT (P), where F indicates analysis by fluorescence and P indicates analysis by phosphorescence. In Section 15.1 we showed that the intensity of fluorescence or phosphorescence is a linear function of the analyte’s concentration provided that the sample’s absorbance of source radiation (\(A = \varepsilon bC\)) is less than approximately 0.01. Calibration curves often are linear over four to six orders of magnitude for fluorescence and over two to four orders of magnitude for phosphorescence.
For higher concentrations of analyte the calibration curve becomes nonlinear because the assumption that the absorbance is negligible no longer applies. Nonlinearity may be observed for smaller concentrations of analyte if fluorescent or phosphorescent contaminants are present. As discussed earlier, quantum efficiency is sensitive to temperature and sample matrix, both of which must be controlled when using external standards. In addition, emission intensity depends on the molar absorptivity of the photoluminescent species, which is sensitive to the sample matrix. The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of quinine in urine provides an instructive example of a typical procedure. The description here is based on Mule, S. J.; Hushin, P. L. Anal. Chem. 1971, 43, 708–711, and O’Reilly, J. E. J. Chem. Educ. 1975, 52, 610–612. Quinine is an alkaloid used to treat malaria. It is a strongly fluorescent compound in dilute solutions of H2SO4 (\(\Phi_f = 0.55\)). Quinine’s excitation spectrum has absorption bands at 250 nm and 350 nm and its emission spectrum has a single emission band at 450 nm. Quinine is excreted rapidly from the body in urine and is determined by measuring its fluorescence following its extraction from the urine sample. Transfer a 2.00-mL sample of urine to a 15-mL test tube and use 3.7 M NaOH to adjust its pH to between 9 and 10. Add 4 mL of a 3:1 (v/v) mixture of chloroform and isopropanol and shake the contents of the test tube for one minute. Allow the organic and the aqueous (urine) layers to separate and transfer the organic phase to a clean test tube. Add 2.00 mL of 0.05 M H2SO4 to the organic phase and shake the contents for one minute. Allow the organic and the aqueous layers to separate and transfer the aqueous phase to the sample cell. Measure the fluorescent emission at 450 nm using an excitation wavelength of 350 nm. Determine the concentration of quinine in the urine sample using a set of external standards in 0.05 M H2SO4, prepared from a 100.0 ppm solution of quinine in 0.05 M H2SO4. Use distilled water as a blank. 1. Chloride ion quenches the intensity of quinine’s fluorescent emission. For example, in the presence of 100 ppm NaCl (61 ppm Cl–) quinine’s emission intensity is only 83% of its emission intensity in the absence of chloride. The presence of 1000 ppm NaCl (610 ppm Cl–) further reduces quinine’s fluorescent emission to less than 30% of its emission intensity in the absence of chloride. The concentration of chloride in urine typically ranges from 4600–6700 ppm Cl–. Explain how this procedure prevents an interference from chloride. The procedure uses two extractions. In the first of these extractions, quinine is separated from urine by extracting it into a mixture of chloroform and isopropanol, leaving the chloride ion behind in the original sample. 2. Samples of urine may contain small amounts of other fluorescent compounds, which will interfere with the analysis if they are carried through the two extractions. Explain how you can modify the procedure to take this into account. One approach is to prepare a blank that uses a sample of urine known to be free of quinine. Subtracting the blank’s fluorescent signal from the measured fluorescence from urine samples corrects for the interfering compounds. 3.
The fluorescent emission for quinine at 450 nm can be induced using an excitation wavelength of either 250 nm or 350 nm. The fluorescent quantum efficiency is the same for either excitation wavelength. Quinine’s absorption spectrum shows that \(\varepsilon_{250}\) is greater than \(\varepsilon_{350}\). Given that quinine has a stronger absorbance at 250 nm, explain why its fluorescent emission intensity is greater when using 350 nm as the excitation wavelength. We know that If is a function of the following terms: k, \(\Phi_f\), P0, \(\varepsilon\), b, and C. We know that \(\Phi_f\), b, and C are the same for both excitation wavelengths and that \(\varepsilon\) is larger for a wavelength of 250 nm; we can, therefore, ignore these terms. The greater emission intensity when using an excitation wavelength of 350 nm must be due to a larger value for P0 or k. In fact, P0 at 350 nm for a high-pressure Xe arc lamp is about 170% of that at 250 nm. In addition, the sensitivity of a typical photomultiplier detector (which contributes to the value of k) at 350 nm is about 140% of that at 250 nm. To evaluate the method described above, a series of external standards are prepared and analyzed, providing the results shown in the following table. All fluorescent intensities are corrected using a blank prepared from a quinine-free sample of urine. The fluorescent intensities are normalized by setting If for the highest concentration standard to 100. After ingesting 10.0 mg of quinine, a volunteer provides a urine sample 24 h later. Analysis of the urine sample gives a relative emission intensity of 28.16. Report the concentration of quinine in the sample in mg/L and the percent recovery for the ingested quinine. Linear regression of the relative emission intensity versus the concentration of quinine in the standards gives the calibration curve shown below and the following calibration equation.\[I_{f}=0.122+9.978 \times \frac{\mu \mathrm{g} \text { quinine }}{\mathrm{mL}} \nonumber \]Substituting the sample’s relative emission intensity into the calibration equation gives the concentration of quinine as 2.81 μg/mL. Because the volume of urine taken, 2.00 mL, is the same as the volume of 0.05 M H2SO4 used to extract the quinine, the concentration of quinine in the urine also is 2.81 μg/mL. The recovery of the ingested quinine is\[\frac{\frac{2.81 \ \mu \mathrm{g} \text { quinine }}{\mathrm{mL} \text { urine }} \times 2.00 \ \mathrm{mL} \text { urine } \times \frac{1 \mathrm{mg}}{1000 \ \mu \mathrm{g}}} {10.0 \ \mathrm{mg} \text { quinine ingested }} \times 100=0.0562 \% \nonumber \]It can take 10–11 days for the body to completely excrete quinine so it is not surprising that such a small amount of quinine is recovered from this sample of urine. This page titled 15.3: Applications and Photoluminescence methods is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
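The inverse prediction and recovery arithmetic in the quinine example can be checked with a short script. The R sketch below simply reuses the calibration equation and the sample intensity reported above; no additional data are assumed.

```r
# Reproduce the inverse prediction and the percent recovery reported above
# for the quinine-in-urine example.
b0 <- 0.122            # intercept of the calibration equation
b1 <- 9.978            # slope, relative intensity per (ug quinine/mL)

If_sample <- 28.16                     # relative emission intensity of the extract
C_extract <- (If_sample - b0) / b1     # ug quinine per mL of the 0.05 M H2SO4 extract

# the 2.00 mL of urine and the 2.00 mL of acid are equal volumes, so the
# concentration in the urine is the same as in the extract
C_urine <- C_extract

mass_recovered_mg <- C_urine * 2.00 / 1000        # ug in 2.00 mL of urine, as mg
recovery_pct <- 100 * mass_recovered_mg / 10.0    # relative to 10.0 mg ingested

round(C_urine, 2)        # 2.81 ug/mL
round(recovery_pct, 4)   # 0.0562 %
```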
15.4: Chemiluminescence
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/15%3A_Molecular_Luminescence/15.04%3A_Chemiluminscence
The focus of this chapter has been on molecular luminescence methods in which emission from the analyte's excited state is achieved following its absorption of a photon. In Chapter 10 we considered atomic emission following excitation of the analyte by thermal energy. An exothermic reaction may also serve as a source of energy. In chemiluminescence the analyte is raised to a higher-energy state by means of a chemical reaction, emitting characteristic radiation when it returns to a lower-energy state. When the chemical reaction results from a biological or enzymatic reaction, the emission of radiation is called bioluminescence. Commercially available “light sticks” and the flash of light from a firefly are examples of chemiluminescence and bioluminescence. The intensity of emitted light, \(I\), is proportional to the quantum yield for chemiluminescent emission, \(\Phi_{CL}\), which is itself the product of the quantum yield for creating excited states, \(\Phi_{EX}\), and the quantum yield for relaxation by emission of a photon, \(\Phi_{EM}\). The intensity also depends on the rate of the chemical reaction(s) responsible for creating the excited state; thus\[I = \Phi_{CL} \times \frac{dC}{dt} \nonumber \]where \(dC/dt\) is the rate of the chemical reaction. Chemiluminescent measurements require less equipment than do other forms of molecular emission because there is no need for a source of photons and no need for a monochromator, as the only source of photons is the chemiluminescent reaction. A sample cell to hold the reaction mixture and a photomultiplier tube may be sufficient for the optical bench. Because chemiluminescent emission depends on the reaction's rate, and because the rate decreases with time, the intensity of emission is time-dependent. As a result, the analytical signal is often the integrated emission intensity over a fixed interval of time. This page titled 15.4: Chemiluminescence is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
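Because the signal is the rate-dependent emission integrated over a measurement window, a simple kinetic sketch helps make the relationship concrete. The R code below assumes a hypothetical first-order reaction, rate constant, and quantum yield; it only illustrates how the integrated intensity over a fixed interval tracks the amount of analyte that reacts.

```r
# A hypothetical first-order chemiluminescent reaction: C(t) = C0 * exp(-k * t),
# so the reaction rate is k * C0 * exp(-k * t). The quantum yield and rate
# constant below are made-up values used only to illustrate the idea.
phi_CL <- 0.05     # chemiluminescence quantum yield
k_rxn  <- 0.25     # first-order rate constant, s^-1
C0     <- 1e-6     # initial concentration, mol/L

intensity <- function(t) phi_CL * k_rxn * C0 * exp(-k_rxn * t)

# the analytical signal: emission integrated over a fixed 0-10 s window
signal <- integrate(intensity, lower = 0, upper = 10)$value
signal   # proportional to the amount of analyte that reacts during the window
```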
15.5: Evaluation of Molecular Luminescence
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/15%3A_Molecular_Luminescence/15.05%3A_Evaluation_of_Molecular_Luminescence
Photoluminescence spectroscopy is used for the routine analysis of trace and ultratrace analytes in macro and meso samples. Detection limits for fluorescence spectroscopy are influenced by the analyte’s quantum yield. For an analyte with \(\Phi_f > 0.5\), a picomolar detection limit is possible when using a high quality spectrofluorometer. For example, the detection limit for quinine sulfate, for which \(\Phi\) is 0.55, generally is between 1 part per billion and 1 part per trillion. Detection limits for phosphorescence are somewhat higher, with typical values in the nanomolar range for low-temperature phosphorimetry and in the micromolar range for room-temperature phosphorimetry using a solid substrate. The accuracy of a fluorescence method generally is between 1–5% when spectral and chemical interferences are insignificant. Accuracy is limited by the same types of problems that affect other optical spectroscopic methods. In addition, accuracy is affected by interferences that affect the fluorescent quantum yield. The accuracy of phosphorescence is somewhat greater than that for fluorescence. The relative standard deviation for fluorescence usually is between 0.5–2% when the analyte’s concentration is well above its detection limit. Precision usually is limited by the stability of the excitation source. The precision for phosphorescence often is limited by reproducibility in preparing samples for analysis, with relative standard deviations of 5–10% being common. The sensitivity of a fluorescent or a phosphorescent method is affected by a number of parameters. We already have considered the importance of quantum yield and the effect of temperature and solution composition on \(\Phi_f\) and \(\Phi_p\). Besides quantum yield, sensitivity is improved by using an excitation source that has a greater emission intensity, P0, at the desired wavelength, and by selecting an excitation wavelength for which the analyte has a greater molar absorptivity, \(\varepsilon\). Another approach for improving sensitivity is to increase the volume from which emission is monitored. shows how rotating a monochromator’s slits from their usual vertical orientation to a horizontal orientation increases the sampling volume. The result can increase the emission from the sample by \(5-30 \times\). The selectivity of fluorescence and phosphorescence is superior to that of absorption spectrophotometry for two reasons: first, not every compound that absorbs radiation is fluorescent or phosphorescent; and, second, selectivity between an analyte and an interferent is possible if there is a difference in either their excitation or their emission spectra. The total emission intensity is a linear sum of that from each fluorescent or phosphorescent species. The analysis of a sample that contains n analytes, therefore, is accomplished by measuring the total emission intensity at n wavelengths. As with other optical spectroscopic methods, fluorescent and phosphorescent methods provide a rapid means for analyzing samples and are capable of automation. Fluorometers are relatively inexpensive, ranging from several hundred to several thousand dollars, and often are satisfactory for quantitative work. Spectrofluorometers are more expensive, with models often exceeding $50,000. This page titled 15.5: Evaluation of Molecular Luminescence is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
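Because the total emission from a mixture is a linear sum of the contributions from each species, a multicomponent analysis reduces to solving a small system of linear equations. The R sketch below uses hypothetical sensitivities and intensities for a two-analyte mixture measured at two wavelengths; the numbers are illustrative only.

```r
# A two-analyte mixture measured at two wavelengths. The sensitivities
# (emission intensity per unit concentration) and the measured intensities
# are hypothetical; the point is only that additivity of emission turns the
# analysis into a small linear system.
S <- matrix(c(12.0, 2.0,    # sensitivities of analytes 1 and 2 at wavelength 1
               3.0, 9.0),   # sensitivities of analytes 1 and 2 at wavelength 2
            nrow = 2, byrow = TRUE)
I_total <- c(5.4, 4.8)      # total emission measured at the two wavelengths

solve(S, I_total)           # concentrations of the two analytes
```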
16.1: Theory of Infrared Absorption Spectrometry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/16%3A_An_Introduction_to_Infrared_Spectrometry/16.01%3A_Theory_of_Infrared_Absorption_Spectrometry
shows the infrared spectrum for ethanol. Unlike a UV/Vis absorbance spectrum, the y-axis is displayed as percent transmittance (%T) instead of absorbance, reflecting the fact that IR is used more for qualitative purposes than for quantitative purposes, where Beer's law (\(A = \epsilon b C\)), which is a linear function of concentration, makes absorbance the more useful measurement. The x-axis for an IR spectrum usually is given in wavenumbers, \(\overline{\nu} = \lambda^{-1}\), with units of cm–1. The peaks in an IR spectrum are inverted relative to an absorbance spectrum; that is, they descend from a baseline of 100%T instead of rising from a baseline of zero absorbance. The energy of a photon of infrared radiation (see ) is not sufficient to effect a change in the electronic energy levels of electrons, as in the UV/Vis atomic or molecular absorption or emission spectroscopies covered in Chapters 9, 10, and 12–15. Instead, infrared radiation is confined to changes in the vibrational energy states of molecules and molecular ions. To absorb an IR photon, the absorbing species must experience a change in its dipole moment, which allows the oscillation in the photon's electrical field to interact with an oscillation in charge within the absorbing species. If the two oscillations have the same frequency, then absorption is possible. Each vibrational energy state in also has a set of rotational energy states, which means that the peak for a particular change in vibrational energy levels may consist of a series of closely spaced lines, one for each of several changes in rotational energy. Because rotation is difficult for analytes that are in liquid or solid forms, we usually see just a single, broad absorption line; for this reason, we will consider only vibrational transitions in this chapter. Although we tend to think of the atoms in a molecule as being rigidly fixed in space relative to each other, the individual atoms are in a constant state of motion: bond lengths increase and decrease by stretching and compressing, and bond angles change as the result of the bending of the bonds relative to each other. shows two different types of stretching (symmetric and asymmetric) and four different types of bending (in-plane rocking, in-plane scissoring, out-of-plane wagging, and out-of-plane twisting). Even a simple molecule can have many vibrational modes that give rise to a peak in the IR spectrum, as is the case for ethanol. The number of possible normal vibrational modes for a linear molecule is \(3N - 5\), where N is the number of atoms, and \(3N - 6\) for a non-linear molecule. Ethanol, for example, has \(3 \times 9 - 6 = 21\) possible vibrational modes. As we will see later in this section, some of these modes may not lead to a change in dipole moment, decreasing the number of peaks in the IR spectrum. Why does a non-linear molecule have \(3N - 6\) vibrational modes? Consider a molecule of methane, CH4. Each of methane’s five atoms can move in one of three directions (x, y, and z) for a total of \(3 \times 5 = 15\) different ways in which the molecule’s atoms can move. A molecule can move in three ways: it can move from one place to another, which we call translational motion; it can rotate around an axis, which we call rotational motion; and its bonds can stretch and bend, which we call vibrational motion. Because the entire molecule can move in the x, y, and z directions, three of methane’s 15 different ways of moving are translational.
In addition, the molecule can rotate about its x, y, and z axes, accounting for three additional forms of motion. This leaves \(15 - 3 - 3 = 9\) vibrational modes. A linear molecule, such as CO2, has \(3N - 5\) vibrational modes because it can rotate around only two axes. The simplest model system for the stretching and compressing of a bond is a weight with a mass, m, attached to an ideal spring that hangs from the ceiling as shown in . If we pull on the mass and then release it, we initiate a simple oscillating harmonic motion that we can model using Hooke's law. If we displace the weight by a distance, y, then the force, F, that acts on the weight is\[F = - k y \label{hookeslaw} \]where \(k\) is the spring's force constant—a measure of the spring's springiness. The negative sign in Equation \ref{hookeslaw} indicates that this is the force needed to restore the spring to its original position; that is, the force is in the direction opposite to our action of pulling down on the weight. Let's take the potential energy, E, of the spring and weight as 0 when they are at rest (y = 0). If we pull down on the weight by a distance of \(dy\), then the change in the system's potential energy, \(dE\), must increase by the product of force and distance\[dE = - F \times dy = ky \times dy \label{PEchange} \]Integrating Equation \ref{PEchange} from \(E = 0\) to \(E = E\) and from \(y = 0\) to \(y = y\)\[\int_0^E dE = k \int_0^y ydy \label{PEintegrals} \]gives the energy as\[E = \frac{1}{2} k y^2 \label{PE} \] shows the resulting potential energy curve, for which the maximum potential energy is \(\frac{1}{2}kA^2\) when the weight is at its maximum displacement, A. Note that the potential energy curve is a parabola. The simple harmonic oscillator described above and shown in vibrates with a frequency, \(\nu_0\), given by the equation\[\nu_0 = \frac{1}{2 \pi} \sqrt{\frac{k}{m}} \label{natfreq} \]where \(k\) is the spring's force constant and \(m\) is the weight's mass. We can extend this to a spring that connects two weights to each other by substituting for the mass, \(m\), the system's reduced mass, \(\mu\)\[\mu = \frac{m_1 \times m_2}{m_1 + m_2} \label{redmass} \]where \(m_1\) and \(m_2\) are the masses of the two weights. Substituting Equation \ref{redmass} into Equation \ref{natfreq} gives\[\nu_0 = \frac{1}{2 \pi} \sqrt{\frac{k}{\mu}} = \frac{1}{2 \pi} \sqrt{\frac{k(m_1 + m_2)}{m_1 \times m_2}} \label{natfreq2} \]If we make the assumption that Equation \ref{natfreq2} applies to simple diatomic molecules, then we can estimate the bond's force constant, \(k\), by measuring its vibrational frequency. Equations \ref{PE} and \ref{natfreq2} are based on a classical mechanics treatment of the simple harmonic oscillator in which any displacement, and, thus, any energy is possible. Molecular vibrations, however, are quantized; thus\[E = \left( v + \frac{1}{2} \right) \times h \times \frac{1}{2 \pi} \sqrt{\frac{k}{\mu}} = \left( v + \frac{1}{2} \right) h \nu_0 \label{quantizedE} \]where \(v\) is the vibrational quantum number, which has allowed values of \(0, 1, 2, \dots\). The difference in energy, \(\Delta E\), between any two consecutive vibrational energy levels is \(h \nu_0\).
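Equation \ref{natfreq2} can be turned around to estimate a force constant from a measured vibrational frequency. The R sketch below does this for a generic diatomic molecule; the H–35Cl fundamental near 2886 cm–1 and the approximate atomic masses are used purely as illustrative inputs.

```r
# Estimate a force constant from a vibrational frequency by rearranging
# nu0 = (1/(2*pi)) * sqrt(k/mu). The H-35Cl fundamental near 2886 cm^-1 and
# the approximate atomic masses are illustrative inputs only.
c_cm <- 2.998e10                 # speed of light, cm/s
amu  <- 1.6605e-27               # kg per atomic mass unit

wavenumber <- 2886               # cm^-1
nu0 <- c_cm * wavenumber         # vibrational frequency, s^-1

m1 <- 1.008 * amu                # mass of H, kg
m2 <- 34.97 * amu                # mass of 35Cl, kg
mu <- (m1 * m2) / (m1 + m2)      # reduced mass, kg

k <- (2 * pi * nu0)^2 * mu       # force constant, N/m
round(k)                         # roughly 480 N/m
```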
As allowed transitions in quantum mechanics are limited to \(\Delta v = \pm 1\) and as the difference in energy is limited to \(\Delta E = h \nu_0\), any particular mode of vibration should give rise to a single peak. The ideal behavior described in the last section, in which each vibrational motion that produces a change in dipole moment results in a single peak, does not hold due to a variety of reasons, including the coulombic interactions between the atoms as they move toward and away from each other. One result of these non-ideal behaviors is that the value \(\Delta E\) does not remain constant for all values of the vibrational quantum number \(v\). For larger values of \(v\), the value of \(\Delta E\) becomes smaller and transitions where \(\Delta v = \pm 2\) or \(\Delta v = \pm 3\) become possible, giving rise to what are called overtone lines at frequencies that are \(2 \times\) or \(3 \times\) that for \(\nu_0\). shows the IR spectrum for carbon dioxide, CO2, which consists of three clusters of peaks located at approximately 670 cm–1, 2350 cm–1, and 3700 cm–1. As carbon dioxide is a linear molecule that consists of two carbon-oxygen double bonds (O=C=O), it has \(3 \times 3 - 5 = 9 - 5 = 4\) vibrational modes. So why do we see just three clusters of peaks? One of the requirements for the absorption of infrared radiation is that the vibrational motion must result in a change in dipole moment. shows the four vibrational modes for CO2. Of these four vibrational modes, the symmetric stretch does not result in a change in dipole moment. Although this appears to explain why we see just three clusters of peaks, a close examination of the two bending motions in should convince you that they are identical and, therefore, will appear as a single peak. So what is the source of the cluster of peaks around 3700 cm–1? Sometimes the absorption of a single photon excites two or more vibrational modes. In this case, the wavenumber for this absorption band is equivalent to the sum of the wavenumbers for the asymmetric stretch and the two degenerate bending modes (2349 + 667 = 3016 cm–1, and 2349 + 667 + 667 = 3683 cm–1). These are called combination bands. Another source of additional peaks is overtone bands in which \(\Delta v = \pm 2\) or \(\Delta v = \pm 3\). shows the IR spectrum for carbonyl sulfide, OCS, which is analogous to CO2 in which one of the oxygens is replaced with sulfur. The peak at 520 cm–1 is for its two degenerate bending motions and is labeled \(\nu_2\). The asymmetric stretch at 2062 cm–1 \((\nu_3)\) and the symmetric stretch at 859 cm–1 \((\nu_1)\) are the other two fundamental absorption bands. The remaining peaks are overtones, such as the peak labeled \(2 \nu_2\) at 1040 cm–1, or combination bands, such as the peak labeled \(\nu_3 + \nu_1\) at 2921 cm–1. Many of the peaks appear as two peaks; this is the result of changes in rotational energy as well. This page titled 16.1: Theory of Infrared Absorption Spectrometry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
16.2: Infrared Sources and Transducers
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/16%3A_An_Introduction_to_Infrared_Spectrometry/16.02%3A_Infrared_Sources_and_Transducers
Instrumentation for IR spectroscopy requires a source of infrared radiation and a transducer for detecting the radiation after it passes through the sample. Most IR sources consist of a solid material that emits radiation when heated by passing a current through the device. The intensity of emitted light typically is greatest at 5900–5000 cm–1 and then decreases steadily to 500 cm–1. Common examples of IR sources include the Nernst glower (a ceramic rod heated to 1200–2200 K), a globar (a silicon carbide rod heated to 1300–1500 K), and an incandescent wire (a nichrome wire heated to approximately 1100 K). Because IR radiation has a much lower energy than visible and ultraviolet light, the types of detectors used in UV/Vis spectroscopy are not suitable for recording IR spectra. Most IR detectors measure heat either directly or by a temperature-dependent change in one of the detector's properties. When two different metals, M1 and M2, are connected to each other in a closed loop, forming two M1–M2 junctions, a potential difference exists between the two junctions. The magnitude of this difference in potential depends on the difference in the temperatures of the two junctions. If the temperature of one junction is held constant or, if the source radiation is chopped—see Chapter 9.2 for a discussion of chopping—then the change in temperature of the other junction can be measured. The active junction is usually coated with a dark material to enhance the absorbance of thermal energy, and is small in size. A high-quality thermocouple is sensitive to temperature differences as small as \(10^{-6}\) K. A bolometer is fashioned from materials for which the resistance is temperature dependent. As is true for a thermocouple, the active part of the detector is coated with a dark material and kept small in size. Triglycine sulfate, (NH2CH2COOH)3 • H2SO4, TGS, is a crystalline pyroelectric material. It usually is partially deuterated (DTGS) and, perhaps, doped with L-alanine (DLaTGS). The pyroelectric material is placed between two electrodes, one of which is optically transparent to infrared radiation. The absorption of infrared radiation results in a change in temperature and a resulting change in the detector's capacitance and, therefore, the current that flows. This page titled 16.2: Infrared Sources and Transducers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
16.3: Infrared Instruments
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/16%3A_An_Introduction_to_Infrared_Spectrometry/16.03%3A_Infrared_Instruments
Instrumentation for infrared spectroscopy uses one of three common optical benches: non-dispersive instruments, dispersive instruments, and Fourier transform instruments. As we have already examined non-dispersive and dispersive instruments in Chapter 13, and because they are no longer as common as they once were, we give them only a brief consideration here. Fourier transform instruments, which dominate the current marketplace, receive a more detailed treatment. The simplest instrument for IR absorption spectroscopy is a filter photometer similar to that shown earlier in . There are four key components that make up the interferometer: the drive mechanism that moves the moving mirror, the beam splitter, the light source, and the detector. As we learned in Chapter 7, the Fourier transform encodes information about the wavelength or frequency of source radiation absorbed by the sample by observing how the signal reaching the detector varies with time. As the moving mirror is displaced in space, some frequencies of light experience complete constructive interference, some frequencies of light experience complete destructive interference, and other frequencies fall somewhere in between, giving rise to a time domain spectrum. As the signal is monitored as a function of time and the moving mirror is traversing a variable distance, the drive mechanism must allow for a precise and accurate relationship between the two. The mechanism of the moving mirror must be capable of moving the mirror through a distance of up to 20 cm at a scan rate as fast as 10 cm/s; it must also accomplish this while maintaining the mirror's orientation relative to the axis of its movement. To maintain accuracy, a HeNe laser, which emits visible light with a wavelength of 632.8 nm, is aligned with the light source so that they follow the same optical path. The beam splitter is designed to reflect 50% of the source radiation to the fixed mirror and to pass the remaining 50% of the source radiation to the moving mirror. The materials used to construct the beam splitter depend on the range of wavelengths being used. The most common range of wavelengths, which is called mid-IR, runs from approximately 670 cm–1 to 4000 cm–1. Instruments for mid-IR use a beam splitter that consists of silicon or germanium coated onto a substrate of KBr or NaCl. The most common sources for FT-IR are those discussed in the previous section, such as a Nernst glower. The most common transducer for FT-IR is pyroelectric triglycine sulfate. This page titled 16.3: Infrared Instruments is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
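The idea that the detector signal encodes each source frequency as a different modulation, which the Fourier transform then unscrambles, can be illustrated with a toy interferogram. The R sketch below builds a signal from two cosine components with arbitrary frequencies and recovers them with fft(); it is an illustration of the principle, not a model of a real instrument.

```r
# A toy interferogram built from two cosine components and decoded with a
# Fourier transform. The frequencies (50 and 120 cycles per unit retardation)
# are arbitrary choices for this illustration.
n <- 1024
x <- (0:(n - 1)) / n                                   # retardation, arbitrary units
interferogram <- cos(2 * pi * 50 * x) + 0.5 * cos(2 * pi * 120 * x)

magnitude <- Mod(fft(interferogram))[1:(n / 2)]        # single-sided magnitude spectrum
frequency <- 0:(n / 2 - 1)                             # cycles per unit retardation

frequency[order(magnitude, decreasing = TRUE)[1:2]]    # recovers 50 and 120
```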
17.1: Mid-Infrared Absorption Spectrometry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/17%3A_Applications_of_Infrared_Spectrometry/17.01%3A_Mid-Infrared_Absorption_Spectometry
Mid-infrared spectrometry is used for the routine qualitative analysis and, to a lesser extent, the quantitative analysis of organic molecules. In this section we consider absorption spectrometry in which we measure the absorbance of IR light as it passes through a gas, solution, liquid, or solid sample. In Section 17.2 we consider reflectance spectrometry in which we measure the absorbance of IR light as it reflects off the surface of a solid sample or a thin film of a liquid sample. Infrared spectroscopy is routinely used to analyze gas, liquid, and solid samples. We know from Beer's law, \(A = \epsilon b C\), that absorbance is a linear function of the analyte's concentration, \(C\), and the distance, \(b\), the light travels through the sample. The challenge with obtaining an IR spectrum is rarely the analyte's concentration or path length; instead it is finding materials and solvents that are transparent to IR radiation. The optical windows in IR cells are made from materials, such as NaCl and KBr, that are transparent to infrared radiation. The cell for analyzing a sample in the gas phase generally is a 5–10 cm glass cylinder fitted with optically transparent windows. For an analyte with a particularly small concentration, the sample cell is designed with reflective surfaces that allow the infrared radiation to make several passes through the cell before it exits the sample cell, increasing the pathlength and, therefore, the absorbance. The analysis of a sample in solution is limited by the solvent’s IR absorbing properties, with carbon tetrachloride, CCl4, carbon disulfide, CS2, and chloroform, CHCl3, being common solvents. A typical solution cell is shown in . It is fashioned with two NaCl windows separated by a spacer. By changing the spacer, pathlengths from 0.015–1.0 mm are obtained. The sample is introduced into the cell using a syringe and the sample inlet port. A sample that is a volatile liquid may be analyzed using the solution cell in . For a non-volatile liquid sample, however, a suitable sample for qualitative work can be prepared by placing a drop of the liquid between the two NaCl plates shown in , forming a thin film that typically is less than 0.01 mm thick. An alternative approach is to place a drop of the sample on a disposable card equipped with a polyethylene "window" that is IR transparent with the exception of strong absorption bands at 2918 cm–1 and 2849 cm–1. Transparent solid samples are analyzed by placing them directly in the IR beam. Most solid samples, however, are opaque, and are first dispersed in a more transparent medium before recording the IR spectrum. If a suitable solvent is available, then the solid is analyzed by preparing a solution and analyzing as described above. When a suitable solvent is not available, solid samples are analyzed by preparing a mull of the finely powdered sample with a suitable oil and then smearing it on a NaCl salt plate or a disposable IR card. Alternatively, the powdered sample is mixed with KBr and pressed, under high pressure, into a thin, optically transparent pellet, as shown in . The most important application of mid-infrared spectroscopy is in the qualitative identification of organic molecules. shows mid-IR solution spectra for four simple alcohols: methanol, CH3OH, ethanol, CH3CH2OH, propanol, CH3CH2CH2OH, and isopropanol, (CH3)2CHOH.
Clearly there are similarities and differences in these four spectra: similarities that might lead us to expect that each molecule contains the same functional groups and differences that appear as features unique to a particular molecule. The similarities in these four spectra appear at the higher wavenumber end of the x-axis scale; we call the peaks we find there group frequencies. The differences in these four spectra occur below approximately 1500 cm–1 in what we call the fingerprint region. The fingerprint region is defined here as beginning at 1500 cm–1, extending to the lowest wavenumber shown on the x-axis. If you do some searching on the fingerprint region you will see that there is no broad agreement on where it begins. In my searching, I found sources that place the beginning of the fingerprint region as 1500 cm–1, 1450 cm–1, 1300 cm–1, 1200 cm–1, and 1000 cm–1. All four of the spectra in share a small intensity, sharp peak at approximately 3650 cm–1, a strong intensity, broad peak at approximately 3350 cm–1, and two medium intensity, sharp peaks at 2950 cm–1 and 2850 cm–1. By comparing spectra for these and other compounds, we know that the presence of a broad peak between approximately 3200 cm–1 and 3600 cm–1 is good evidence that the compound contains a hydrogen-bonded –OH functional group. The sharp peak at approximately 3650 cm–1 also is evidence of an –OH functional group, but one that is not hydrogen-bonded. The two sharp peaks at 2950 cm–1 and 2850 cm–1 are consistent with C–H bonds. All four of these peaks are for stretching vibrations. Tables of group frequencies are routinely available. shows a close-up of the fingerprint region for the alcohol samples in . Of particular interest with this set of samples is the increasing complexity of the spectra as we move from the simplest of these alcohols (methanol), to the most complex of these alcohols (propanol and isopropanol). Also of interest is that each spectrum is unique in a way that allows us to confirm a sample by matching it against a library of recorded spectra. There are a number of accessible collections of spectra that are available for this purpose. One such collection of spectra is the NIST Webbook—NIST is the National Institute of Standards and Technology—which is the source of the data used to display the spectra included in this section's figures and which includes spectra for over 16,000 compounds. With the availability of computerized data acquisition and storage it is possible to build digital libraries of standard reference spectra. The identity of an unknown compound often can be determined by comparing its spectrum against a library of reference spectra, a process known as spectral searching. Comparisons are made using an algorithm that calculates the cumulative difference between the sample’s spectrum and a reference spectrum. For example, one simple algorithm uses the following equation\[D = \sum_{i = 1}^n | (A_{sample})_i - (A_{reference})_i | \label{spec_sub} \]where D is the cumulative difference, Asample is the sample’s absorbance at wavelength or wavenumber i, Areference is the absorbance of the reference compound at the same wavelength or wavenumber, and n is the number of digitized points in the spectra. Note that the spectra are defined here by absorbance instead of transmittance as absorbance is directly proportional to concentration. The cumulative absolute difference is calculated for each reference spectrum.
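A minimal version of this search is easy to script. In the R sketch below, the sample and the two reference spectra are short, made-up vectors of digitized absorbance values standing in for a real spectral library, and the compound names are hypothetical.

```r
# A minimal spectral search using the cumulative absolute difference, D.
# The "spectra" are short, made-up vectors of digitized absorbance values,
# and the compound names are hypothetical.
sample_spec <- c(0.12, 0.45, 0.80, 0.33, 0.05)

reference_library <- list(
  compound_A = c(0.10, 0.40, 0.85, 0.30, 0.07),
  compound_B = c(0.50, 0.20, 0.10, 0.60, 0.40)
)

D <- sapply(reference_library, function(ref) sum(abs(sample_spec - ref)))
D                      # cumulative difference for each reference spectrum
names(which.min(D))    # the best match is the reference with the smallest D
```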
The reference compound with the smallest value of D is the closest match to the unknown compound. The accuracy of spectral searching is limited by the number and type of compounds included in the library, and by the effect of the sample’s matrix on the spectrum. Another advantage of computerized data acquisition is the ability to subtract one spectrum from another. When coupled with spectral searching it is possible to determine the identity of several components in a sample without the need of a prior separation step by repeatedly searching and subtracting reference spectra. An example is shown in in which the composition of a two-component mixture is determined by successive searching and subtraction. shows the spectrum of the mixture. A search of the spectral library selects cocaine•HCl as a likely component of the mixture. Subtracting the reference spectrum for cocaine•HCl from the mixture’s spectrum leaves a result that closely matches mannitol’s reference spectrum. Subtracting the reference spectrum for mannitol leaves a small residual signal. A quantitative analysis based on the absorption of infrared radiation, although important, is encountered less frequently than with UV/Vis absorption, primarily due to the three issues raised here. One challenge for quantitative IR is the greater tendency for instrumental deviations from Beer’s law when using infrared radiation. Because an infrared absorption band is relatively narrow, any deviation due to the lack of monochromatic radiation is more pronounced. In addition, infrared sources are less intense than UV/Vis sources, which makes stray radiation more of a problem. Differences between the path lengths for samples and for standards when using thin liquid films or KBr pellets are a problem, although an internal standard can correct for any difference in pathlength; alternatively, we can use the cell shown in to maintain a constant path length. The water and carbon dioxide in air have strong absorbances in the mid-IR. A double-beam dispersive instrument corrects for the contributions of CO2 and H2O vapor because they are present in both pathways through the instrument. An FT-IR, however, includes only a single optical path, so it is necessary to collect a separate spectrum to compensate for the absorbance of atmospheric CO2 and H2O vapor. This is done by collecting a background spectrum without the sample and storing the result in the instrument’s computer memory. The background spectrum is removed from the sample’s spectrum by taking the ratio of the two signals. Another approach is to flush the sample compartment with nitrogen. Another challenge for quantitative IR is that establishing a 100% T (A = 0) baseline often is difficult because the optical properties of NaCl sample cells may change significantly with wavelength due to contamination and degradation. We can minimize this problem by measuring absorbance relative to a baseline established for the absorption band. shows how this is accomplished. A recent review paper [Fahelelbom, K. M.; Saleh, A.; Al-Tabakha, M. A.; Ashames, A. A. Rev. Anal. Chem. 2022, 41, 21–33] summarizes the rich literature in quantitative mid-infrared spectrometry. Among the areas covered are the analysis of pharmaceuticals, including antibiotics, antihypertensives, antivirals, and counterfeit drugs.
Mid-infrared spectrometry also finds use for the analysis of environmentally significant gases, such as methane, CH4, hydrogen chloride, HCl, sulfur dioxide, SO2, and nitric oxide, NO. This page titled 17.1: Mid-Infrared Absorption Spectrometry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
17.2: Mid-Infrared Reflection Spectrometry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/17%3A_Applications_of_Infrared_Spectrometry/17.02%3A_Mid-Infrared_Reflection_Spectrometry
The first section of this chapter considered mid-IR absorption spectrometry in which we measure the amount of light that is transmitted by the sample, which we can convert, if we wish, into absorbance values. In the process, we examined both transmittance and absorbance spectra. In this section, we consider experiments in which we measure the reflection of infrared radiation by a sample. There are two broad classes of reflection: internal and external. As shown in , internal reflection occurs when light encounters an interface between two media—here identified as the sample and the support—that have different refractive indices, n. When the refractive index of the support is greater than the refractive index of the sample, then some of the light reflects off the interface. Attenuated total reflectance spectrometry is one example of an instrumental method that relies on internal reflection. External reflectance occurs when light reflects off of the sample's surface. As shown in , the way in which light reflects depends on the nature of the sample's surface. In specular reflectance, the angle of reflection is the same at all locations because the sample's surface is smooth; in diffuse reflectance, the angle of reflection varies between locations due to the roughness of the sample's surface. Diffuse reflectance spectrometry is one example of an instrumental method that relies on external reflection. The analysis of an aqueous sample is complicated by the solubility of the NaCl cell window in water. One approach to obtaining an infrared spectrum of an aqueous solution is to use attenuated total reflectance instead of transmission. shows a diagram of a typical attenuated total reflectance (ATR) FT–IR instrument. The ATR cell consists of a high refractive index material, such as ZnSe or diamond, sandwiched between a low refractive index substrate and a lower refractive index sample. Radiation from the source enters the ATR crystal where it undergoes a series of internal reflections before exiting the crystal. During each reflection the radiation penetrates a short distance into the sample. This depth of penetration, \(d_p\), depends on the wavelength of the light, \(\lambda\), the refractive index of the ATR crystal, \(n_1\), the refractive index of the sample, \(n_2\), and the angle of the incident radiation, \(\theta\).\[d_p = \frac {\lambda} {2 \pi \sqrt{n_1^2 \sin^2 \theta - n_2^2}} \label{depth} \]For example, when using ZnSe as the ATR crystal (\(n_1 = 2.4\)) and an angle of incidence of \(45^{\circ}\), light of 1000 cm–1 penetrates to a depth of 2.0 µm in a sample with a refractive index similar to that for KBr (\(n_2 = 1.5\)). Solid samples also can be analyzed using an ATR sample cell. After placing the solid in the sample slot, a compression tip ensures that it is in contact with the ATR crystal. Examples of solids analyzed by ATR include polymers, fibers, fabrics, powders, and biological tissue samples. ATR spectra are similar, but not identical, to those obtained by measuring transmission. An important contribution to this is the wavelength-dependent depth of penetration of the infrared radiation where a decrease in wavenumber (longer wavelength) results in a greater depth of penetration, which changes the intensity and width of absorption bands. Another reflectance method is diffuse reflectance, in which radiation is reflected from a rough surface, such as a powder.
Powdered samples are mixed with a non-absorbing material, such as powdered KBr, and the reflected light is collected and analyzed. As with ATR, the resulting spectrum is similar to that obtained by conventional transmission methods. shows the IR spectrum for urea obtained using transmission and diffuse reflectance (both collected using an FT-IR). Both spectra show similar features between 1000 cm–1 and 2000 cm–1, although there are differences in relative peak heights and background absorption. This page titled 17.2: Mid-Infrared Reflection Spectrometry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
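Equation \ref{depth} is easy to evaluate directly. The short R function below reproduces the ZnSe example given in this section (a 45° angle of incidence, light of 1000 cm–1, and a sample with \(n_2 = 1.5\)); the function itself is just a convenience wrapper written for this example.

```r
# Depth of penetration for ATR, evaluated from the equation given in this
# section. The function is a convenience wrapper written for this example.
penetration_depth <- function(wavenumber_cm, n1, n2, theta_deg) {
  lambda_um <- 1e4 / wavenumber_cm            # wavelength in micrometers
  theta <- theta_deg * pi / 180
  lambda_um / (2 * pi * sqrt(n1^2 * sin(theta)^2 - n2^2))
}

# the ZnSe example: n1 = 2.4, n2 = 1.5, 45 degree incidence, 1000 cm^-1
penetration_depth(1000, n1 = 2.4, n2 = 1.5, theta_deg = 45)   # about 2.0 um
```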
17.3: Near-Infrared and Far-Infrared Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/17%3A_Applications_of_Infrared_Spectrometry/17.03%3A_Near_Far_IR
At the beginning of this chapter we divided infrared radiation into three areas: the near-IR, the mid-IR, and the far-IR. The mid-IR, which runs from 4000 cm–1 to 670 cm–1 (2.5 µm to 15 µm), is the most analytically useful region and was the subject of the previous two sections. Here we briefly turn our attention to applications using the near-IR and the far-IR. The near-IR extends from approximately 13,000 cm–1 (a wavelength of 770 nm or 0.77 µm, the upper wavelength limit of visible light) to 4000 cm–1 (a wavelength of 2,500 nm or 2.5 µm). Earlier we noted that absorption bands in the region that extends from 1500 cm–1 to 4000 cm–1 are called group frequencies. The absorption bands in the near-infrared often are overtones and combination bands of these group frequencies. Of particular importance are functional groups that include hydrogen: OH, CH, and NH are examples. Absorption bands generally are less intense and less broad. Compared to mid-IR, the NIR is more useful for a quantitative analysis of aqueous samples because the OH absorption bands are much weaker. The instrumentation for NIR spectroscopy, both in transmission mode and in reflectance mode, is similar to that for UV/visible spectrometers and for mid-IR spectrometry. The far-IR extends from approximately 670 cm–1 (a wavelength of 15 µm) to 10 cm–1 (a wavelength of 1000 µm or 1 mm). FIR spectroscopy finds applications in the analysis of materials that contain metals, including metal oxides, metal sulfides, and metal-ligand complexes. FIR spectroscopy has also been applied to the analysis of polyamides, peptides, and proteins. Because the FIR merges into the microwave region, it also finds use in the analysis of the rotational energies of gases. This page titled 17.3: Near-Infrared and Far-Infrared Spectroscopy is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
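The wavenumber-to-wavelength conversions quoted in this section all follow from \(\lambda\) (µm) \(= 10^4 / \overline{\nu}\) (cm–1); the one-line R helper below, written here only for convenience, reproduces them.

```r
# lambda (um) = 10^4 / wavenumber (cm^-1), applied to the limits quoted above
wavenumber_to_um <- function(wavenumber_cm) 1e4 / wavenumber_cm

wavenumber_to_um(c(13000, 4000, 670, 10))
# 0.77 um, 2.5 um, about 15 um, and 1000 um (1 mm)
```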
18.1: Theory of Raman Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/18%3A_Raman_Spectroscopy/18.01%3A_Theory_of_Raman_Spectroscopy
The blue color of the sky during the day and the red color of the sun at sunset are the result of the scattering of light by small particles of dust, by molecules of water vapor, and by other gases in the atmosphere. The efficiency of a photon's scattering depends on its wavelength. We see the sky as blue during the day because violet and blue light scatter to a greater extent than other, longer wavelengths of light. For the same reason, the sun appears red at sunset because red light scatters less efficiently and is more likely to pass through the atmosphere than other wavelengths of light. If we send a focused, monochromatic beam of radiation with a wavelength of \(\lambda\) through a medium of particles—whether solid particulates or individual molecules—that have dimensions \(<1.5 \lambda\), then the radiation scatters in all directions. For example, light with a wavelength of 700 nm will scatter from any particle whose longest dimension is less than 1,050 nm. Even in an otherwise transparent sample, scattering from molecules occurs.

There are two general classes of scattering: elastic scattering and inelastic scattering. In elastic scattering, a photon is first absorbed by a particle and then emitted without a change in its energy (\(\Delta E = 0\)); this is called Rayleigh scattering. In inelastic scattering, a photon is first absorbed by a particle and then emitted with a change in its energy (\(\Delta E \ne 0\)); this is called Raman scattering. A plot that shows the intensity of scattered radiation as a function of the scattered photon's energy, expressed as a change in the wavenumber, \(\Delta \overline{\nu}\), is called a Raman spectrum and values of \(\Delta \overline{\nu}\) are called Raman shifts. shows a portion of the Raman spectrum for carbon tetrachloride and illustrates several important features. First, Rayleigh scattering produces an intense peak at \(\Delta \overline{\nu} = 0\). Although the peak is intense, it carries no useful information because its energy is simply that of the source. Second, Raman scattering has two components—the Stokes lines and the anti-Stokes lines—that have identical absolute shifts relative to the line for Rayleigh scattering, but that have different signs. The Stokes lines have positive values for \(\Delta\overline{\nu}\) and the anti-Stokes lines have negative values for \(\Delta \overline{\nu}\). Third, each of the Stokes lines is more intense than the corresponding anti-Stokes line. Fourth, because we measure the shift in a peak's wavenumber relative to the source radiation, the spectrum is independent of the source radiation.

The energy—and, thus, the wavenumber—of a photon that experiences Stokes scattering is less than the energy—and, thus, the wavenumber—of the source radiation, which raises the question of why a Stokes shift is reported as a positive value instead of a negative value. Although you will find most Raman spectra with positive values for the Stokes shift, you also will find examples where Stokes shifts are reported with negative values. Because the Stokes lines are more intense than the anti-Stokes lines, and, therefore, more useful, and because their respective shifts result from the same changes in vibrational energy states that we find in IR spectroscopy, it is convenient to report the Stokes lines as positive values so that we can align a species's Raman and IR spectra. See the next two sections for additional details.
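Because Raman shifts are reported relative to the source, it sometimes is useful to convert between the wavelength at which scattered light is observed and the corresponding shift in wavenumbers. The R sketch below illustrates the conversion; the 488.0 nm source (an Ar ion laser line) and the observed wavelength are illustrative values only.

# Raman shift (cm^-1) from the source and scattered wavelengths (nm)
raman_shift <- function(lambda_source, lambda_scattered) {
  1e7 / lambda_source - 1e7 / lambda_scattered
}
raman_shift(lambda_source = 488.0, lambda_scattered = 499.2)
# returns approximately +460 cm^-1 (a Stokes line); an anti-Stokes line observed
# at 477.3 nm gives a shift of approximately -460 cm^-1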
In Chapter 6 we examined the mechanism by which absorption and emission occur. In subsequent chapters we explored atomic absorption and atomic emission spectrometry, ultraviolet and visible molecular absorption spectrometry, molecular luminescence spectrometry, and infrared molecular absorption spectrometry. In each case we began by considering an energy level diagram that explains the origin of absorption and emission. provides an energy diagram that we can use to explain the origin of the lines that make up a Raman spectrum, such as the spectrum for carbon tetrachloride in .

The first thing to note about the energy level diagram in is that, in addition to showing the ground electronic state and the first excited electronic state—each with three vibrational energy levels—it also shows a virtual electronic state, something we did not encounter with other methods (see, for example, the energy diagram for UV and IR molecular absorption spectrometry in ). The ground and the first excited electronic states are quantized, which means that absorption cannot happen if the source's energy does not match exactly the change in energy between the two electronic states. The energy of an emitted photon also is fixed by the difference in the energy of the two electronic states. A virtual electronic state, however, is not quantized and is determined by the energy of the source radiation. The source radiation, therefore, does not need to match a particular change in energy.

Absorption of a photon of source radiation moves the analyte from the ground electronic state to a virtual electronic state without a change in vibrational energy state, as seen by the two arrows at the far left of the diagram. Because the ground vibrational energy state, \(\nu_0\), is more populated than the vibrational energy state, \(\nu_1\), more of the analyte ends up in the virtual electronic state's lowest vibrational energy level than in a higher vibrational energy level, which is shown here by the relative thickness of the two arrows.

Once in a virtual electronic state, the analyte can return to the ground electronic state in one of three ways. It can do so without a change in the vibrational energy level. In this case, the energy of absorption and the energy of emission are the same, so \(\Delta E = 0\) and \(\Delta \overline{\nu} = 0\). This is Rayleigh scattering and, as suggested by the combined thickness of the two arrows in , it is the most important mechanism of relaxation.

When relaxation includes a change in the vibrational energy level, the result is an absolute change in energy equivalent to the difference in energy, \(\Delta E\), between adjacent vibrational energy levels. For Stokes scattering, relaxation is to a higher vibrational energy level, such as \(\nu_0 \rightarrow \nu_1\), and, for anti-Stokes scattering, relaxation is to a lower vibrational energy level, such as \(\nu_1 \rightarrow \nu_0\). As suggested by the thickness of the lines for Stokes and anti-Stokes scattering in , the Stokes lines are more intense than the anti-Stokes lines because Stokes scattering originates from the more heavily populated ground vibrational state, \(\nu_0\), while anti-Stokes scattering originates from the less populated \(\nu_1\) state.
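The relative populations of \(\nu_0\) and \(\nu_1\) follow a Boltzmann distribution, which is why the anti-Stokes lines are weaker than the Stokes lines. The following R sketch estimates the ratio \(N_{\nu_1}/N_{\nu_0}\) at room temperature for a vibrational energy of 459 cm–1 (close to the symmetric stretch of CCl4 noted later in this section); the calculation assumes a simple two-level Boltzmann picture and is offered only as an illustration.

# ratio of the populations of the first excited and ground vibrational states
h       <- 6.626e-34   # Planck's constant, J s
c_light <- 2.998e10    # speed of light, cm/s
k_B     <- 1.381e-23   # Boltzmann's constant, J/K
wavenumber <- 459      # vibrational energy, cm^-1
temp       <- 298      # temperature, K
exp(-h * c_light * wavenumber / (k_B * temp))
# returns approximately 0.11, which is why the anti-Stokes line is much weaker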
One important feature of is that the transition that gives rise to a particular Stokes line or anti-Stokes line is the same transition that gives rise to a corresponding IR band. If the selection rules for these transitions are the same for a particular species, then we expect that its IR spectrum and its Raman spectrum will have peaks at the same (or similar) values of \(\overline{\nu}\) and \(\Delta \overline{\nu}\) for its fundamental vibrations; however, as we see in Table \(\PageIndex{1}\) for carbon tetrachloride, CCl4, there are five fundamental vibrations in its Raman spectrum, but just three in its IR spectrum.

In Chapter 16 we learned that in IR spectroscopy a compound's fundamental vibrational energy is active—that is, we see a peak in its IR spectrum—only if the corresponding stretch or bend results in a change in the compound's dipole moment. For Raman spectroscopy, a compound's fundamental vibrational energy is active only if the corresponding stretch or bend results in a change in the polarizability of its electrons. Polarizability essentially is a measure of how easy it is to distort a compound's electron cloud by applying an external electric field, such as when a photon from the source is absorbed; in general, polarizability increases when a stretching or bending motion increases the compound's volume, as the electrons are then spread over a greater amount of space. shows the four stretching and bending modes for CCl4. The stretching motion in (a), in which all four C–Cl bond lengths increase and decrease together, means the molecule's volume increases and decreases; thus, this vibrational mode is Raman active. The symmetry of the stretching motion, however, means there is no change in the molecule's dipole moment and the vibrational mode is IR inactive. The asymmetric stretch in (b), on the other hand, is both IR and Raman active. The bending motion in (c) results in the molecule becoming more or less compact in size, and is Raman active; the symmetry of the scissoring motions, however, means that the vibrational mode is IR inactive. The bending motions in (d) are both IR and Raman active.

In general, symmetric stretching and bending modes result in relatively strong Raman scattering peaks, but no absorption in the IR, while asymmetric stretching and bending modes result in both IR and Raman peaks. As a result, IR and Raman are complementary techniques.

If the source of electromagnetic radiation is plane-polarized, then it is possible to collect a Raman spectrum using light scattered in a plane that is parallel to the source and, separately, in a plane that is perpendicular to the source. The ratio of a line's intensity of scattering in the perpendicular spectrum, \(I_{\perp}\), to its intensity of scattering in the parallel spectrum, \(I_{||}\), is called the depolarization ratio, \(p\).

\[p = \frac{I_{\perp}}{I_{||} } \label{depolarization} \]

A Raman line that originates from a vibrational mode that does not change the molecule's shape will result in a depolarization ratio close to zero and an absence of the line in the perpendicular spectrum. shows the Raman spectrum when collecting data parallel (top) and perpendicular (bottom) to the light source. The absence of the peak at 458.7 cm–1 in the perpendicular spectrum confirms that this is the symmetric stretch illustrated in .

This page titled 18.1: Theory of Raman Spectroscopy is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
18.2: Instrumentation
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/18%3A_Raman_Spectroscopy/18.02%3A_Instrumentation
The basic instrumentation for Raman spectroscopy is similar to that for other spectroscopic techniques: a source of radiation, an optical bench for bringing the source radiation to the sample, and a suitable detector.

One of the notable features of the Raman spectrum for CCl4 (see ) is the low intensity of the Stokes lines and the anti-Stokes lines relative to the line for Rayleigh scattering. The low intensity of these lines requires that we use a high intensity source so that there are a sufficient number of scattered photons to collect. For this reason, a laser is the most common source for Raman spectroscopy, providing a high intensity, monochromatic source. Table \(\PageIndex{1}\) summarizes some of the more common lasers.

The intensity of Raman scattering is proportional to \(\frac{1}{\lambda^4}\), where \(\lambda\) is the wavelength of the source radiation; thus, the shorter the wavelength, the greater the intensity of the scattered light. For example, the intensity of scattering using an Ar ion laser at 488.0 nm is almost \(23 \times\) greater than the intensity of scattering using a Nd/YAG laser at 1064 nm

\[\frac{(1/488.0)^4}{(1/1064)^4} = 22.6 \nonumber \]

The increased scattering when using a shorter wavelength laser comes at a cost, however, of interference from fluorescence by species that are promoted into excited electronic states by the source. The NIR diode laser and the Nd/YAG laser, which operates at 1064 nm, discriminate against fluorescence and are useful, therefore, for samples where fluorescence is a problem.

Raman spectroscopy has several advantages over infrared spectroscopy. Because water does not exhibit much Raman scattering, it is possible to analyze aqueous samples; this is a serious limitation for IR spectroscopy where water absorbs strongly. The ability to focus a laser onto a small area makes it possible to analyze very small samples. A liquid sample, for example, can be held in the tip of a 1-mm inner diameter capillary tube, such as that used for measuring melting points. Solid samples and gaseous samples can be sampled using the same types of cells used in IR and FT-IR (see Chapter 17). Fiber optic probes make it possible to collect spectra remotely. shows the basic set-up. A small bundle of fibers (shown in blue) brings light from the source to the sample, where a second bundle of fibers (shown in green) brings the scattered light to the slit that passes light onto the detector.

Raman spectrometers use optical benches similar to those for UV/Vis or IR spectroscopy, which were covered in Chapter 7. Dispersive instruments place the laser source and the detector at 90° to each other so that any unscattered high intensity emission from the laser source is not collected by the detector. A filter is used to remove the Rayleigh scattering. To record a spectrum one either uses a scanning monochromator or a multichannel detector. Fourier transform instruments are similar to those used in FT-IR and include a filter to isolate the Stokes lines.

This page titled 18.2: Instrumentation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
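The \(\frac{1}{\lambda^4}\) dependence described in this section is easy to verify with a short calculation. The R sketch below reproduces the comparison of the 488.0 nm Ar ion laser and the 1064 nm Nd/YAG laser; the function name is illustrative only.

# relative intensity of Raman scattering for two source wavelengths (in nm)
scattering_ratio <- function(lambda_1, lambda_2) (lambda_2 / lambda_1)^4
scattering_ratio(lambda_1 = 488.0, lambda_2 = 1064)
# returns approximately 22.6, as calculated above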
18.3: Applications of Raman Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/18%3A_Raman_Spectroscopy/18.03%3A_Applications_of_Raman_Spectroscopy
Raman spectroscopy is useful for both qualitative and quantitative analyses, examples of which are provided in this section.

There are numerous databases that provide reference spectra for inorganic compounds, for minerals, for synthetic organic pigments, for natural and synthetic inorganic and organic pigments, and for carbohydrates. Such databases often are searchable not only by name and formula, but also by the positions of the prominent Raman scattering lines. Examples of spectra are included here using data from the databases noted above.

The intensity of Raman scattering, \(I(\nu)_R\), is directly proportional to the intensity of the source radiation, \(I_l\), and to the concentration of the scattering species, \(C\). The direct proportionality between \(I(\nu)_R\) and \(I_l\) is important given that each photon that experiences Raman scattering requires approximately \(10^8\) excitation photons. Using a laser as the source of radiation and increasing its power leads to an improvement in sensitivity. The direct proportionality between \(I(\nu)_R\) and the concentration of the scattering species means that a calibration curve of band intensity (or band area) is a linear function of concentration, allowing for a quantitative analysis.

This page titled 18.3: Applications of Raman Spectroscopy is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
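A quantitative analysis of this sort typically uses external standards: measure the band intensity for a series of standards, fit a straight line, and use the fitted model to estimate the concentration of an unknown. The R sketch below illustrates the workflow; the concentrations and intensities are hypothetical values chosen only for illustration.

# hypothetical external standards: concentration (mM) and Raman band intensity
conc      <- c(0.0, 2.0, 4.0, 6.0, 8.0, 10.0)
intensity <- c(1, 103, 199, 305, 398, 502)
calib <- lm(intensity ~ conc)          # linear calibration curve
coef(calib)                            # intercept and slope
unknown_intensity <- 251
(unknown_intensity - coef(calib)[1]) / coef(calib)[2]
# returns the unknown's concentration, approximately 5.0 mM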
18.4: Other Types of Raman Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/18%3A_Raman_Spectroscopy/18.04%3A_Other_Types_of_Raman_Spectroscopy
Traditional Raman spectroscopy has several limitations, perhaps the most important of which is that the probability of Raman scattering is much smaller than that for Rayleigh scattering, which leads to low sensitivity with detection limits often as large as 0.1 M. Here we briefly describe two forms of Raman spectroscopy that allow for significant improvements in detection limits.

If the wavelength of the source is similar to the wavelength needed to move the species from its ground electronic state to its first excited electronic state (not the virtual excited state shown in ), then the lines associated with the symmetric fundamental vibrations increase in intensity by a factor of \(10^2\) to \(10^6\); this approach is known as resonance Raman spectroscopy (RRS). The improvement in sensitivity results in a substantial reduction in detection limits, to as low as \(10^{-8} \text{ M}\). The use of a tunable laser makes it possible to adjust the wavelength of light emitted by the source to maximize the intensity of scattering.

For reasons that are poorly understood, the intensity of Raman scattering lines also is enhanced when the scattering species is adsorbed to the surface of colloidal particles of metals such as Ag, Au, or Cu, or to the surface of etched metals; this approach is known as surface-enhanced Raman spectroscopy (SERS). The phenomenon is not limited to just a few lines—as is the case for RRS—and results in a \(10^3\) to \(10^6\) improvement in the intensity of scattering. If a tunable laser is used for the source, allowing for both RRS and SERS, detection limits of \(10^{-9} \text{ M}\) to \(10^{-12} \text{ M}\) are possible.

This page titled 18.4: Other Types of Raman Spectroscopy is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
19.1: Theory of Nuclear Magnetic Resonance
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/19%3A_Nuclear_Magnetic_Resonance_Spectroscopy/19.01%3A_Theory_of_Nuclear_Magnetic_Resonance
As is the case with other forms of optical spectroscopy, the signal in nuclear magnetic resonance (NMR) spectroscopy arises from a difference in the energy levels occupied by the nuclei in the analyte. In this section we develop a general theory of nuclear magnetic resonance spectroscopy that draws on quantum mechanics and on classical mechanics to explain these energy levels.

The quantum mechanical description of an electron is given by four quantum numbers: the principal quantum number, \(n\), the angular momentum quantum number, \(l\), the magnetic quantum number, \(m_l\), and the spin quantum number, \(m_s\). The first three of these quantum numbers tell us something about where the electron is relative to the nucleus and something about the electron's energy. The last of these four quantum numbers, the spin quantum number, tells us something about the ability of an electron to interact with an applied magnetic field. An electron has possible spins of +1/2 or of –1/2, which we often refer to as spin up, using an upwards arrow, \(\uparrow\), to represent it, or as spin down, using a downwards arrow, \(\downarrow\), to represent it.

A nucleus, like an electron, carries a charge and has a spin quantum number. The overall spin, \(I\), of a nucleus is a function of the number of protons and neutrons that make up the nucleus. Here are three simple rules for nuclear spin states: if the number of protons and the number of neutrons are both even, then \(I = 0\); if the sum of the number of protons and the number of neutrons is odd, then \(I\) is a half-integer, such as 1/2 or 3/2; and if the number of protons and the number of neutrons are both odd, then \(I\) is an integer, such as 1 or 2. These rules tell us whether a nucleus has a spin, but predicting that 13C has a spin of \(I = 1/2\), that 127I has a spin of \(I = 5/2\), and that 17O has a spin of \(I = 5/2\) is not trivial. A periodic table that provides spin states for the elements is available here.

The total number of spin states—that is, the total number of possible orientations of the spin—is equal to \((2 \times I) + 1\). To be NMR active, a nucleus must have at least two spin states so that a change in spin states, and, therefore, a change in energy, is possible; thus, 12C, for which there are \((2 \times 0) + 1 = 1\) spin states, is NMR inactive, but 13C, for which there are \((2 \times 1/2) + 1 = 2\) spin states with values of \(m = +1/2\) and of \(m = -1/2\), is NMR active, as is 2H, for which there are \((2 \times 1) + 1 = 3\) spin states with values of \(m = +1\), \(m = 0\), and \(m = -1\). As our interest in this chapter is in the NMR spectra for 1H and for 13C, we will limit ourselves to considering \(I = 1/2\) and spin states of \(m = +1/2\) and of \(m = -1/2\).

Suppose we have a large population of 1H atoms. In the absence of an applied magnetic field the atoms are divided equally between their possible spin states: 50% of the atoms have a spin of +1/2 and 50% of the atoms have a spin of –1/2. Both spin states have the same energy, as is the case on the left side of , and neither absorption nor emission occurs.

In the presence of an applied magnetic field, as on the right side of , the nuclei are either aligned with the magnetic field with spins of \(m = +1/2\), or aligned against the magnetic field with spins of \(m = -1/2\). The energies of these two spin states, \(E_\text{lower}\) and \(E_\text{upper}\), are given by the equations

\[E_\text{lower} = - \frac{\gamma h}{4 \pi}B_0 \label{nmr1} \]

\[E_\text{upper} = + \frac{\gamma h}{4 \pi}B_0 \label{nmr2} \]

where \(\gamma\) is the magnetogyric ratio for the nucleus, \(h\) is Planck's constant, and \(B_0\) is the strength of the applied magnetic field.
The difference in energy, \(\Delta E\), between the two states is

\[\Delta E = E_\text{upper} - E_\text{lower} = + \frac{\gamma h}{4 \pi}B_0 - \left( - \frac{\gamma h}{4 \pi}B_0 \right) = \frac{\gamma h}{2 \pi}B_0 \label{nmr3} \]

Substituting Equation \ref{nmr3} into the more familiar equation \(\Delta E = h \nu\) gives the frequency, \(\nu\), of electromagnetic radiation needed to effect a change in spin state as

\[\nu = \frac{\gamma B_0}{2 \pi} \label{nmr4} \]

This is called the Larmor frequency for the nucleus. For example, if the magnet has a field strength of 11.74 Tesla, then the frequency needed to effect a change in spin state for 1H, for which \(\gamma\) is \(2.68 \times 10^8 \text{ rad}\text{ T}^{-1} \text{s}^{-1}\), is

\[\nu = \frac{(2.68 \times 10^8 \text{rad} \text{ T}^{-1}\text{s}^{-1})(11.74 \text{ T})}{2 \pi} = 5.01 \times 10^8 \text{ s}^{-1} \nonumber \]

or approximately 500 MHz, which is in the radio frequency (RF) range of the electromagnetic spectrum. This is the Larmor frequency for 1H.

The relative population of the upper spin state, \(N_\text{upper}\), and of the lower spin state, \(N_\text{lower}\), is given by the Boltzmann equation

\[\frac{N_\text{upper}}{N_\text{lower}} = e^{- \Delta E/k T} \label{nmr5} \]

where \(k\) is Boltzmann's constant (\(1.38066 \times 10^{-23} \text{ J/K}\)) and \(T\) is the temperature in Kelvin. Substituting in Equation \ref{nmr3} for \(\Delta E\) gives this ratio as

\[\frac{N_\text{upper}}{N_\text{lower}} = e^{-\gamma h B_0/2 \pi k T} \label{nmr6} \]

If we place a population of 1H atoms in a magnetic field with a strength of 11.74 Tesla, the ratio \(\frac{N_\text{upper}}{N_\text{lower}}\) at 298 K is

\[\frac{N_\text{upper}}{N_\text{lower}} = e^{-\frac{(2.68 \times 10^{8} \text{ rad} \text{ T}^{-1} \text{ s}^{-1})(6.626 \times 10^{-34} \text{ Js})(11.74 \text{ T})}{(2 \pi)(1.38 \times 10^{-23} \text{ JK}^{-1})(298 \text{ K})}} = 0.99992 \nonumber \]

If this ratio were exactly 1:1, then the probability of absorption and that of emission would be equal and there would be no net signal. Here, the difference in the two populations is on the order of 8 per 100,000, or 80 per 1,000,000, or 80 ppm. This small difference in the two populations means that NMR is less sensitive than many other spectroscopic methods.

To understand the classical description of an NMR experiment we draw upon . For simplicity, let's assume that in the population of nuclei available to us, there is an excess of just one nucleus with a spin state of +1/2. In , we see that the spin of this nucleus is not perfectly aligned with the applied magnetic field, \(B_0\), which is aligned with the z-axis; instead the nucleus precesses around the z-axis at an angle of theta, \(\Theta\). As a result, the net magnetic moment along the z-axis, \(\mu_z\), is less than the magnetic moment, \(\mu\), of the nucleus. The precession occurs with an angular velocity, \(\omega_0\), of \(\gamma B_0\).

If we apply a source of radio frequency (RF) electromagnetic radiation along the x-axis such that its magnetic field component, \(B_1\), is perpendicular to \(B_0\), then it will generate its own angular velocity in the xy-plane. When the angular velocity of the precessing nucleus matches the angular velocity of \(B_1\), absorption takes place and the spin flips, as seen in .

When the magnetic field \(B_1\) is removed, the nucleus returns to its original state, as seen in , a process called relaxation. In the absence of relaxation, the system becomes saturated, with equal populations of the two spin states, and absorption approaches zero.
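The two numerical results above (the 500 MHz Larmor frequency and the 0.99992 population ratio for 1H at 11.74 T and 298 K) are easy to reproduce using Equations \ref{nmr4} and \ref{nmr6}; the short R sketch below does so.

# Larmor frequency and spin-state population ratio for 1H at 11.74 T and 298 K
gamma <- 2.68e8       # magnetogyric ratio for 1H, rad T^-1 s^-1
h     <- 6.626e-34    # Planck's constant, J s
k_B   <- 1.381e-23    # Boltzmann's constant, J/K
B0    <- 11.74        # applied magnetic field, T
temp  <- 298          # temperature, K
gamma * B0 / (2 * pi)                          # Larmor frequency: ~5.01e8 s^-1, or ~500 MHz
exp(-gamma * h * B0 / (2 * pi * k_B * temp))   # population ratio: ~0.99992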
This process of relaxation has two separate mechanisms: spin-lattice relaxation and spin-spin relaxation.

In spin-lattice relaxation the nucleus in its higher energy spin state returns to its lower energy spin state by transferring energy to other species present in the sample (the lattice in spin-lattice). Spin-lattice relaxation is characterized by first-order exponential decay with a characteristic relaxation time, \(T_1\), that is a measure of the average time the nucleus remains in its higher energy spin state. Smaller values for \(T_1\) result in more efficient relaxation.

If two nuclei of the same type, but in different spin states, are in close proximity to each other, they can trade places, with the nucleus in the higher energy spin state giving up its energy to the nucleus in the lower energy spin state. The result is a decrease in the average lifetime of an excited state. This is called spin-spin relaxation and it is characterized by a relaxation time of \(T_2\).

In Chapter 16 we learned that we can record an infrared spectrum by using a scanning monochromator to pass, sequentially, different wavelengths of IR radiation through a sample, obtaining a spectrum of absorbance as a function of wavelength. We also learned that we can obtain the same spectrum by passing all wavelengths of IR radiation through the sample at the same time using an interferometer, and then using a Fourier transform to convert the resulting interferogram into a spectrum of absorbance as a function of wavelength. Here we consider their equivalents for NMR spectroscopy.

If we scan \(B_1\) while holding \(B_0\) constant—or scan \(B_0\) while holding \(B_1\) constant—then we can identify the Larmor frequencies where a particular nucleus absorbs. The result is an NMR spectrum that shows the intensity of absorption as a function of the frequency at which that absorption takes place. Because we record the spectrum by scanning through a continuum of frequencies, the method is known as continuous wave NMR. provides a useful visualization for this experiment.

In Fourier transform NMR, the magnetic field \(B_1\) is applied as a brief pulse of radio frequency (RF) electromagnetic radiation centered at a frequency appropriate for the nucleus of interest and for the strength of the primary magnetic field, \(B_0\). The pulse typically is 1–10 µs in length and is applied in the xy-plane. From the Heisenberg uncertainty principle, a short pulse of length \(\Delta t\) results in a broad range of frequencies, as \(\Delta f = 1/\Delta t\); this ensures that the pulse spans a sufficient range of frequencies that the nucleus of interest to us will absorb energy and enter into an excited state.

Before we apply the pulse, the nuclei are aligned with or against the applied magnetic field, \(B_0\), some with a spin of +1/2 and others with a spin of –1/2. As we learned above, there is a slight excess of nuclei with spins of +1/2, which we can represent as a single vector that shows their combined magnetic moments along the z-axis, \(\mu_z\), as shown in . When we apply a pulse of RF electromagnetic radiation with a magnetic field strength of \(B_1\), the spins of the nuclei tip away from the z-axis by an angle that depends on the nucleus's magnetogyric ratio, \(\gamma\), the value of \(B_1\), and the length of the pulse.
If, for example, a pulse of 5 µs tips the magnetic vector by 45°, then a pulse of 10 µs will tip the magnetic vector by 90°, so that it now lies completely within the xy-plane.

At the end of the pulse, the nuclei begin to relax back to their original state. shows that this relaxation occurs both in the xy-plane (spin-spin relaxation) and along the z-axis (spin-lattice relaxation). If we were to trace the path of the magnetic vector with time, we would see that it follows a spiral-like motion as its contribution in the xy-plane decreases and its contribution along the z-axis increases. We measure this signal—called the free induction decay, or FID—during this period of relaxation.

The FID for a system that consists of only one type of nucleus is the simple exponentially damped oscillating signal in . The Fourier transform of this simple FID gives the spectrum in , which has a single peak. A sample with more than one type of nucleus yields a more complex FID pattern, such as that in , and a more complex spectrum, such as the two peaks in . Note that, as we learned in an earlier treatment of the Fourier transform in Chapter 7, a broader peak in the frequency domain results in a faster decay in the time domain. shows a typical pulse sequence highlighting the total cycle time and its component parts: the pulse width, the acquisition time during which the FID is recorded, and a recycle delay before applying the next pulse and beginning the next cycle.

This page titled 19.1: Theory of Nuclear Magnetic Resonance is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
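The relationship between an exponentially damped FID and its single-peak spectrum can be illustrated with a simulated signal. The R sketch below generates a damped cosine with an offset frequency of 200 Hz and a spin-spin relaxation time of 0.1 s, then uses R's built-in fft() function to recover the frequency; the specific values are hypothetical and chosen only for illustration.

# simulate a simple one-line free induction decay and recover its frequency
dt  <- 1e-4                          # sampling interval, s
t   <- seq(0, 1, by = dt)            # acquisition time, s
nu  <- 200                           # offset frequency of the nucleus, Hz
T2  <- 0.1                           # spin-spin relaxation time, s
fid <- cos(2 * pi * nu * t) * exp(-t / T2)
spec <- Mod(fft(fid))                # magnitude spectrum of the FID
frequency <- (seq_along(fid) - 1) / (length(fid) * dt)
frequency[which.max(spec[1:(length(fid) %/% 2)])]
# returns approximately 200 Hz; a shorter T2 gives a faster decay and a broader peak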