7.4: Sample Containers
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/07%3A_Components_of_Optical_Instruments/7.04%3A_Sample_Containers
The sample compartment provides a light-tight environment that limits stray radiation. Samples normally are in a liquid or solution state, and are placed in cells constructed from UV/Vis-transparent materials, such as quartz, glass, and plastic. A quartz or fused-silica cell is required when working at wavelengths below 300 nm, where other materials show a significant absorption. The most common pathlength is 1 cm (10 mm), although cells with shorter pathlengths (as little as 0.1 cm) and longer pathlengths (up to 10 cm) are available. Longer pathlength cells are useful when analyzing a very dilute solution or for gas samples. The highest quality cells allow the radiation to strike a flat surface at a 90° angle, minimizing the loss of radiation to reflection. A test tube often is used as a sample cell with simple, single-beam instruments, although differences in the cell's pathlength and optical properties add an additional source of error to the analysis.

Infrared spectroscopy routinely is used to analyze gas, liquid, and solid samples. Sample cells are made from materials, such as NaCl and KBr, that are transparent to infrared radiation. Gases are analyzed using a cell with a pathlength of approximately 10 cm. Longer pathlengths are obtained by using mirrors to pass the beam of radiation through the sample several times.

A liquid sample may be analyzed using a variety of different sample cells. For non-volatile liquids a suitable sample is prepared by placing a drop of the liquid between two NaCl plates, forming a thin film that typically is less than 0.01 mm thick. Volatile liquids are placed in a sealed cell to prevent their evaporation.

This page titled 7.4: Sample Containers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
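The usefulness of a longer pathlength for a dilute solution follows from Beer's law, \(A = \epsilon b C\), in which absorbance scales linearly with the pathlength \(b\). The short sketch below is a minimal illustration of this scaling; the molar absorptivity and concentration are assumed values chosen only for the example.

```python
# Minimal sketch: effect of pathlength on absorbance via Beer's law (A = epsilon * b * C).
# The molar absorptivity and concentration below are illustrative assumptions,
# not values taken from the text.

epsilon = 250.0       # molar absorptivity, L mol^-1 cm^-1 (assumed)
concentration = 4e-6  # mol L^-1, a dilute sample (assumed)

for pathlength_cm in (0.1, 1.0, 10.0):
    absorbance = epsilon * pathlength_cm * concentration
    print(f"b = {pathlength_cm:5.1f} cm -> A = {absorbance:.4f}")

# With a 1 cm cell the absorbance (0.001) is difficult to measure reliably;
# a 10 cm cell raises it to 0.01, a tenfold improvement for the same solution.
```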
7.5: Radiation Transducers
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/07%3A_Components_of_Optical_Instruments/7.05%3A_Radiation_Transducers
In Nessler's original method for determining ammonia (see Section 7.4) the analyst's eye serves as the detector, matching the sample's color to that of a standard. The human eye, of course, has a poor range—it responds only to visible light—and it is not particularly sensitive or accurate. Modern detectors use a sensitive transducer to convert a signal consisting of photons into an easily measured electrical signal. Ideally the detector's signal, S, is a linear function of the electromagnetic radiation's power, P,\[S = kP + D \]where k is the detector's sensitivity, and D is the detector's dark current, or the background current when we prevent the source's radiation from reaching the detector (a short numerical sketch of this relationship appears at the end of this section). There are two broad classes of spectroscopic transducers: photon transducers and thermal transducers, although we will subdivide the photon transducers given their rich variety. Table \(\PageIndex{1}\) provides several representative examples of each class of transducers.

Transducer is a general term that refers to any device that converts a chemical or a physical property into an easily measured electrical signal. The retina in your eye, for example, is a transducer that converts photons into an electrical nerve impulse; your eardrum is a transducer that converts sound waves into a different electrical nerve impulse.

A photon transducer takes a photon and converts it into an electrical signal, such as a current, a change in resistance, or a voltage. Many such detectors use a semiconductor as the photosensitive surface. When the semiconductor absorbs photons, valence electrons move to the semiconductor's conduction band, producing a measurable current.

A photovoltaic cell consists of a thin film of a semiconducting material, such as selenium, sandwiched between two electrodes: a base electrode of iron or copper and a thin semi-transparent layer of silver or gold that serves as the collector electrode. When a photon of visible light falls on the photovoltaic cell it generates an electron and a hole with a positive charge within the semiconductor. Movement of the electrons from the collector electrode to the base electrode generates a current that is proportional to the power of the incoming radiation and that serves as the signal.

Phototubes and photomultipliers use a photosensitive surface that absorbs radiation in the ultraviolet, visible, or near IR to produce an electrical current that is proportional to the number of photons that reach the transducer. The current results from applying a negative potential to the photoemissive surface and a positive potential to a wire that serves as the anode. In a photomultiplier tube, a series of positively charged dynodes serves to amplify the current, producing \(10^6\)–\(10^7\) electrons per photon.

Applying a reverse-biased voltage to the pn junction of a silicon semiconductor creates a depletion zone in which conductance is close to zero (see Chapter 2 for an earlier discussion of semiconductors). When a photon of light of sufficient energy impinges on the depletion zone, an electron-hole pair is formed. Movement of the electron through the n–region and of the hole through the p–region generates a current that is proportional to the number of photons reaching the detector. A silicon photodiode has a wide spectral range from approximately 190 nm to 1100 nm, which makes it versatile; however, a photodiode is less sensitive than a photomultiplier.

The photon transducers discussed above detect light at a single wavelength passed by the monochromator to the detector.
If we wish to record a complete spectrum then we must continually adjust the monochromator, either manually or by using a servo motor. In a multichannel instrument we create a one-dimensional or two-dimensional array of detectors that allows us to monitor simultaneously radiation spanning a broad range of wavelengths.

An individual silicon photodiode is quite small, typically with a width of approximately 0.025 mm. As a result, a linear (one-dimensional) array that consists of 1024 individual photodiodes has a width of just 25.6 mm. One example is the UV detector from an HPLC: light from the deuterium lamp passes through a flow cell, is dispersed by a diffraction grating, and then is focused onto a linear array of photodiodes. The active portion of the photodiode array is covered by an optical window; its active width is approximately 6 mm and includes more than 200 individual photodiodes, sufficient to provide 1 nm resolution from 180 nm to 400 nm.

One way to increase the sensitivity of a detector is to collect and store charges before counting them. This is the approach taken with two types of charge-transfer devices: charge-coupled devices and charge-injection devices. Individual detectors, or pixels, consist of a layer of silicon dioxide coated on top of a semiconductor. When a photon impinges on the detector it creates an electron-hole pair. An electrode on top of the silicon dioxide layer collects and stores either the negatively charged electrons or the positively charged holes. After a sufficient time, during which 10,000–100,000 charges are collected, the total accumulated charge is measured. Because individual pixels are small, typically 10 µm, they can be arranged in either a linear, one-dimensional array or a two-dimensional array. A charge-transfer device with 1024 × 1024 pixels will be approximately 10 mm × 10 mm in size.

There are two important charge-transfer devices used as detectors: a charge-coupled device (CCD), which is discussed below, and a charge-injection device (CID), which is discussed in Chapter 10. Both types of devices use a two-dimensional array of individual detectors that store charge. The two devices differ primarily in how the accumulated charges are read. In a CCD the individual pixels are arranged in a two-dimensional array. Within each pixel, electron-hole pairs are created in a layer of p-doped silicon. The holes migrate to the n-doped silicon layer and the electrons are drawn to the area below a positively charged electrode. When it is time to record the accumulated charges, the charge is read in the upper-right corner of the array, with charges in the same row measured by shifting them from left-to-right. When the first row is read, the charges in the remaining rows are shifted up and recorded. In a charge-injection device, the roles of the electrons and holes are reversed and the accumulated positive charges are recorded. One example of a spectrophotometer equipped with a linear CCD detector includes 2048 individual elements with a wavelength range from 200 nm to 1100 nm; the spectrometer is housed in a compact space of 90 mm × 60 mm.

Infrared photons do not have enough energy to produce a measurable current with a photon transducer. A thermal transducer, therefore, is used for infrared spectroscopy.
The absorption of infrared photons increases a thermal transducer's temperature, changing one or more of its characteristic properties. A pneumatic transducer, for example, is a small tube of xenon gas with an IR transparent window at one end and a flexible membrane at the other end. Photons enter the tube and are absorbed by a blackened surface, increasing the temperature of the gas. As the temperature inside the tube fluctuates, the gas expands and contracts and the flexible membrane moves in and out. Monitoring the membrane's displacement produces an electrical signal.

This page titled 7.5: Radiation Transducers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
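The short Python sketch below ties together two ideas from this section: recovering radiant power from a detector signal using \(S = kP + D\), and mapping a pixel index on a linear photodiode array to a wavelength. The function names and the sensitivity, dark current, and array parameters are illustrative assumptions, not specifications for any particular detector.

```python
# Minimal sketch with assumed, representative numbers (not a specific detector's specs).

def radiant_power(signal, k=2.0e3, dark_current=0.05):
    """Invert S = k*P + D to recover the radiant power P from a measured signal S."""
    return (signal - dark_current) / k

def pixel_to_wavelength(pixel, first_nm=180.0, last_nm=400.0, n_pixels=220):
    """Map a pixel index on a linear diode array to its nominal wavelength (nm)."""
    dispersion = (last_nm - first_nm) / (n_pixels - 1)   # roughly 1 nm per diode
    return first_nm + pixel * dispersion

# Example: a raw signal of 0.85 (arbitrary units) measured on pixel 75
print(f"P = {radiant_power(0.85):.2e} (arbitrary power units)")
print(f"wavelength = {pixel_to_wavelength(75):.1f} nm")
```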
7.6: Fiber Optics
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/07%3A_Components_of_Optical_Instruments/7.06%3A_Fiber_Optics
If we need to monitor an analyte's concentration over time, it may not be possible to remove samples for analysis. This often is the case, for example, when monitoring an industrial production line or waste line, when monitoring a patient's blood, or when monitoring an environmental system, such as a stream. With a fiber-optic probe we can analyze samples in situ. One example of a remote sensing fiber-optic probe consists of two bundles of fiber-optic cable. One bundle transmits radiation from the source to the probe's tip, which is designed to allow the sample to flow through the sample cell. Radiation from the source passes through the solution and is reflected back by a mirror. The second bundle of fiber-optic cable transmits the nonabsorbed radiation to the wavelength selector. Another design replaces the flow cell with a membrane that contains a reagent that reacts with the analyte. When the analyte diffuses into the membrane it reacts with the reagent, producing a product that absorbs UV or visible radiation. The nonabsorbed radiation from the source is reflected or scattered back to the detector. Fiber-optic probes that show chemical selectivity are called optrodes.

This page titled 7.6: Fiber Optics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
7.7: Fourier Transform Optical Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/07%3A_Components_of_Optical_Instruments/7.07%3A_Types_of_Optical_Instruments
Thus far, the optical benches described in this chapter either use a single detector and a monochromator to pass a single wavelength of light to the detector, or use a multichannel array of detectors and a diffraction grating to disperse the light across the detectors. Both of these approaches have advantages and limitations. For the first of these designs, we can improve resolution by using a smaller slit width, although this comes with a decrease in the throughput of light that reaches the detector, which increases noise. Recording a complete spectrum requires scanning the monochromator; a slow scan rate can improve resolution by reducing the range of wavelengths reaching the detector per unit time, but at the expense of a longer analysis time, which is a problem if the composition of our samples changes with time. For the second of these designs, resolution is limited by the size of the array; for example, a spectral range of 190 nm to 800 nm and a photodiode array with 512 individual elements has a digital resolution of\[\frac{800 - 190}{512} = 1.2 \text{ nm/diode} \nonumber \]although the optical resolution—defined by the actual number of individual diodes over which a wavelength of light is dispersed—is greater and may vary with wavelength. Because a photodiode array allows for the simultaneous detection of radiation by each diode in the array, data acquisition is fast and a complete spectrum is acquired in approximately one second.

We can overcome the limitations described above if we can find a way to avoid dispersing the source radiation in time by scanning the monochromator, or dispersing the source radiation in space across an array of sensors. An interferometer provides one way to accomplish this. Radiation from the source is collected by a collimating mirror and passed to a beam splitter, where half of the radiation is directed toward a mirror set at a fixed distance from the beam splitter, and the other half of the radiation is passed through to a mirror that moves back and forth. The radiation from the two mirrors is recombined at the beam splitter and half of it is passed along to the detector.

When the radiation recombines at the beam splitter, constructive and destructive interference determines, for each wavelength, the intensity of light that reaches the detector. As the moving mirror changes position, the wavelength of light that experiences maximum constructive interference and maximum destructive interference also changes. The signal at the detector shows intensity as a function of the moving mirror's position, expressed in units of distance or time. The result is called an interferogram or a time domain spectrum. The time domain spectrum is converted mathematically, by a process called a Fourier transform, to a frequency domain spectrum that shows intensity as a function of the radiation's frequency. A few simple examples illustrate the relationship between the time domain spectrum and the frequency domain spectrum. For a monochromatic source of radiation with a frequency, \(\nu_1\), of 1 and an amplitude, \(A_1\), of 1.0, the frequency domain spectrum is a single line and the time domain spectrum is a simple cosine function with the general form\[S = A_1 \times \cos{(2 \pi \nu_1 t)} \label{signal1} \]where \(S\) is the signal and \(t\) is the time.
A second monochromatic source of radiation with a frequency, \(\nu_2\), of 1.2 and an amplitude, \(A_2\), of 1.5 gives the same kind of result, with a time domain spectrum described by the equation\[S = A_2 \times \cos{(2 \pi \nu_2 t)} \label{signal2} \]If we have a source that emits just these two frequencies of light, then the corresponding time domain spectrum is the sum of the two cosine functions\[S = A_1 \times \cos{(2 \pi \nu_1 t)} + A_2 \times \cos{(2 \pi \nu_2 t)} \label{signal3} \]and the frequency domain spectrum consists of two lines, one at each frequency. Although this combined time domain spectrum is more complex than that for either source alone, it has a clear repeating pattern. Note that for each of these three examples, the time domain spectrum and the frequency domain spectrum encode the same information about the source radiation.

The two monochromatic signals considered above are line spectra with line widths that are essentially zero. But what if our signal has a measurable linewidth? We might consider such a signal to be the sum of a series of cosine functions, each with an amplitude and a frequency. For a frequency domain spectrum that contains a single peak with a finite width, the corresponding time domain spectrum consists of an oscillating signal with an amplitude that decays over time. The mathematical process of converting between the time domain and the frequency domain is called a Fourier transform. The details of the mathematics are sufficiently complex that calculations by hand are impractical.

In comparison to a monochromator, an interferometer has several significant advantages. The first advantage, which is termed Jacquinot's advantage, is the greater throughput of source radiation. Because an interferometer does not use slits and has fewer optical components from which radiation is scattered and lost, the throughput of radiation reaching the detector is \(80-200 \times\) greater than that for a monochromator. The result is less noise. A second advantage, which is called Fellgett's advantage, is a savings in the time needed to obtain a spectrum. Because the detector monitors all frequencies simultaneously, a spectrum takes approximately one second to record, as compared to 10–15 minutes when using a scanning monochromator. A third advantage is that increased resolution is achieved by increasing the distance traveled by the moving mirror, without the need to decrease a scanning monochromator's slit width or to increase the size of an array detector.

This page titled 7.7: Fourier Transform Optical Spectroscopy is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
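The conversion from the time domain to the frequency domain is easy to demonstrate numerically. The short sketch below builds the two-frequency time domain signal from the equations above and recovers the two frequencies with a discrete Fourier transform; the sampling window and number of points are arbitrary choices made only for illustration.

```python
import numpy as np

# Build the two-component time domain signal from the text:
# S = A1*cos(2*pi*v1*t) + A2*cos(2*pi*v2*t), with v1 = 1, A1 = 1.0, v2 = 1.2, A2 = 1.5.
A1, v1 = 1.0, 1.0
A2, v2 = 1.5, 1.2

t = np.linspace(0.0, 50.0, 4096, endpoint=False)   # arbitrary sampling window
S = A1 * np.cos(2 * np.pi * v1 * t) + A2 * np.cos(2 * np.pi * v2 * t)

# Discrete Fourier transform: the "frequency domain spectrum"
spectrum = np.abs(np.fft.rfft(S))
freqs = np.fft.rfftfreq(S.size, d=t[1] - t[0])

# Report the two strongest peaks; they fall at 1.0 and 1.2, matching v1 and v2.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(np.round(peaks, 2).tolist()))   # -> [1.0, 1.2]
```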
8.1: Optical Atomic Spectra
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/08%3A_An_Introduction_to_Optical_Atomic_Spectroscopy/8.01%3A_Optical_Atomic_Spectra
The energy of ultraviolet and visible electromagnetic radiation is sufficient to cause a change in an atom's valence electron configuration. Sodium, for example, has a single valence electron in its 3s atomic orbital, along with unoccupied, higher-energy atomic orbitals. The valence shell energy level diagram for sodium might strike you as odd because it shows the 3p orbitals split into two groups of slightly different energy (the two lines differ by just 0.6 nm). This splitting is a consequence of an electron's angular momentum and its spin: when these two are in opposite directions, the energy is slightly smaller than when the two are in the same direction. The effect is largest for p orbitals and sufficiently smaller for d and f orbitals that we do not bother to show the difference in their energies.

Absorption of a photon is accompanied by the excitation of an electron from a lower-energy atomic orbital to an atomic orbital of higher energy. Not all possible transitions between atomic orbitals are allowed. For sodium the only allowed transitions are those in which there is a change of ±1 in the orbital quantum number (l); thus transitions from \(s \rightarrow p\) orbitals are allowed, but transitions from \(s \rightarrow s\) and from \(s \rightarrow d\) orbitals are forbidden.

The atomic absorption spectrum for Na is typical of that found for most atoms. Its most obvious feature is that it consists of a small number of discrete absorption lines that correspond to transitions between the ground state (the 3s atomic orbital) and the 3p and the 4p atomic orbitals. Absorptions from excited states, such as the \(3p \rightarrow 4s\) and the \(3p \rightarrow 3d\) transitions, are too weak to detect. Because an excited state's lifetime is short—an excited state atom typically returns to a lower energy state in \(10^{-7}\) to \(10^{-8}\) seconds—an atom in the excited state is likely to return to the ground state before it has an opportunity to absorb a photon.

Atomic emission occurs when electrons in higher energy orbitals return to a lower energy state, releasing the excess energy as a photon. The ground state electron configuration for Na of \(1s^2 2s^2 2p^6 3s^1\) places a single electron in the \(3s\) valence shell. Introducing a solution of NaCl to a flame results in the formation of Na atoms (more on this in Chapter 9) and provides sufficient energy to promote the valence electron in the \(3s\) orbital to higher energy excited states, such as the \(3p\) orbitals. When an electron returns to its ground state, the excess energy is released as a photon. The emission spectrum for Na is dominated by the pair of lines with wavelengths of 589.0 and 589.6 nm.

When an atom in an excited state emits a photon as a means of returning to a lower energy state, how we describe the process depends on the source of energy creating the excited state. When excitation is the result of thermal energy, we call the process atomic emission spectroscopy. When excitation is the result of the absorption of a photon, we call the process atomic fluorescence spectroscopy. The absorption and emission spectra for Na show that it has both strong absorption and emission lines at 589.0 and 589.6 nm.
If we use a source of light at 589.6 nm to move the 3s valence electron to a 3p excited state, we can then measure the emission of light at the same wavelength, making the measurement at 90° to avoid an interference from the original light source.

Fluorescence also may occur when an electron in an excited state first loses energy by a process other than the emission of a photon—we call this a radiationless transition—reaching a lower energy excited state from which it then emits a photon. For example, a ground state Na atom may first absorb a photon with a wavelength of 330.2 nm (a \(3s \rightarrow 4p\) transition), then lose energy through a radiationless transition to the 3p orbital, from which it emits a photon to reach the 3s orbital.

Another feature of atomic absorption and atomic emission spectra is the narrow width of the absorption and emission lines, which is a consequence of the fixed difference in energy between the ground state and the excited state, and the lack of vibrational and rotational energy levels. The width of an atomic absorption or emission line arises from several factors that we consider here.

From the uncertainty principle, the product of the uncertainty in the frequency of light and the uncertainty in time must be greater than 1.\[\Delta \nu \times \Delta t > 1 \nonumber \]To determine the frequency with infinite precision, \(\Delta \nu = 0\), requires that the lifetime of an electron in a particular orbital be infinitely large. While this may be essentially true for an electron in the ground state, it is not true for an electron in an excited state where the average lifetime—how long it takes before it returns to the ground state—may be on the order of \(10^{-7} \text{ to }10^{-8} \text{ s}\). For example, if \(\Delta t = 5 \times 10^{-8} \text{ s}\) for the emission of a photon with a wavelength of 500.0 nm, then\[\Delta \nu = 2 \times 10^7 \text{ s}^{-1} \nonumber \]To convert this to an uncertainty in wavelength, \(\Delta \lambda\), we begin with the relationship\[\nu = \frac{c}{\lambda} \nonumber \]and take the derivative of \(\nu\) with respect to wavelength\[d \nu = - \frac{c}{\lambda^2} d \lambda \nonumber \]Rearranging to solve for the uncertainty in wavelength, and letting \(\Delta \nu\) and \(\Delta \lambda\) serve as estimates for \(d \nu\) and \(d \lambda\), leaves us with\[ \left| \Delta \lambda \right| = \frac{\Delta \nu \times \lambda^2}{c} = \frac{(2 \times 10^7 \text{ s}^{-1}) \times (500.0 \times 10^{-9} \text{ m})^2}{2.998 \times 10^8 \text{ m/s}} = 1.7 \times 10^{-14} \text{ m} \nonumber \]or \(1.7 \times 10^{-5} \text{ nm}\). Natural line widths for atomic spectra are approximately \(10^{-5}\) nm.

When an atom emits a photon, the frequency (and, thus, the wavelength) of the photon depends on whether the emitting atom is moving toward the detector or moving away from the detector. When the atom is moving toward the detector, its emitted light reaches the detector at a greater frequency—a shorter wavelength—than when the light source is stationary. An atom moving away from the detector emits light that reaches the detector with a smaller frequency and a longer wavelength. Because the atoms in a flame or plasma move in all directions relative to the detector, this distribution of frequencies broadens the line, an effect known as Doppler broadening.

Atoms are in constant motion, which means that they also experience constant collisions, each of which results in a small change in the energy of an electron in the ground state or in an excited state, and a corresponding change in the wavelength emitted or absorbed. This effect is called pressure (or collisional) broadening.
As is the case for Doppler broadening, pressure broadening increases with temperature. Together, Doppler broadening and pressure broadening result in an approximately 100-fold increase in line width, with line widths on the order of approximately \(10^{-3}\) nm.

As noted in the previous section, temperature contributes to the broadening of atomic absorption and atomic emission lines. Temperature also has an effect on the intensity of emission lines because it determines the relative population of an atom's various excited states. The Boltzmann distribution\[\frac{N_i}{N_0} = \frac{P_i}{P_0} e^{-E_i/kT} \nonumber \]gives the number of atoms in a specific excited state, \(N_i\), relative to the number of atoms in the ground state, \(N_0\), as a function of the difference in their energies, \(E_i\), Boltzmann's constant, \(k\), and the temperature in Kelvin, \(T\); \(P_i\) and \(P_0\) are statistical factors that account for the number of equivalent energy states for the excited state and the ground state. For sodium's two intense emission lines at 589.0 and 589.6 nm, the emission intensity increases sharply with temperature over the range 2500 K to 7500 K; on an absolute intensity scale, the emission at 2500 K is negligible. A change in temperature from 5500 K to 4500 K reduces the emission intensity by 62%. As you might guess from this, a small change in temperature—perhaps as little as 10 K—can result in a measurable decrease in emission intensity of a few percent.

An increase in temperature may also change the relative emission intensity of different lines. For copper, for example, raising the temperature from 5000 K to 7000 K shifts the most intense emission line from 510.55 nm to 521.82 nm, and several additional peaks between 400 nm and 500 nm become more intense.

The atomic emission spectrum for sodium consists of discrete, narrow lines because they arise from transitions between discrete, well-defined energy levels. Atomic emission from a flame also includes contributions from two additional sources: emission from molecular species that form in the flame and emission from the flame itself. A sample of water, for example, is likely to contain a variety of ions, such as Ca2+, that form molecular species, such as CaOH, in the flame and that emit photons over a much broader range of wavelengths than do atoms. The flame, itself, emits photons throughout the range of wavelengths used in UV/Vis atomic emission.

This page titled 8.1: Optical Atomic Spectra is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
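Both the natural line width estimate and the temperature sensitivity of emission intensity are easy to reproduce numerically. The sketch below repeats the \(\Delta\lambda\) calculation from the uncertainty principle and then uses the Boltzmann distribution to compare excited-state populations at 4500 K and 5500 K; because the statistical factors \(P_i/P_0\) are temperature-independent, they cancel when taking the ratio and are omitted here.

```python
import math

h = 6.626e-34      # Planck's constant, J s
c = 2.998e8        # speed of light, m/s
k = 1.381e-23      # Boltzmann's constant, J/K

# Natural line width from the uncertainty principle: delta_nu ~ 1/delta_t
delta_t = 5e-8                      # excited-state lifetime, s
wavelength = 500.0e-9               # m
delta_nu = 1 / delta_t              # ~2e7 s^-1
delta_lambda = delta_nu * wavelength**2 / c
print(f"natural line width ~ {delta_lambda:.1e} m")   # ~1.7e-14 m, i.e. ~1.7e-5 nm

# Temperature dependence of emission intensity for sodium's 589.0 nm line.
# The photon energy equals the energy gap between the 3p excited state and the 3s ground state.
E = h * c / 589.0e-9                # J

def boltzmann_factor(T):
    return math.exp(-E / (k * T))

change = 1 - boltzmann_factor(4500) / boltzmann_factor(5500)
print(f"drop in excited-state population from 5500 K to 4500 K: {change:.0%}")
# prints ~63%, consistent with the roughly 62% decrease quoted above
```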
8.2: Atomization Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/08%3A_An_Introduction_to_Optical_Atomic_Spectroscopy/8.02%3A_Atomization_Methods
Atomic methods require that the sample consist of individual gas phase atoms or gas phase atomic ions. With rare exceptions, this is not the form in which we obtain samples. If we are interested in analyzing seawater for the concentration of sodium, we need to find a way to convert the solution of aqueous sodium ions, Na+(aq), into gas phase sodium atoms, Na(g), or gas phase sodium ions, Na+(g). The process by which this happens is called atomization and requires a source of thermal energy. Examples of atomization methods include the use of flames, resistive heating, plasmas, and electric arcs and sparks. More details on specific atomization methods appear in the chapters that follow.

This page titled 8.2: Atomization Methods is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
8.3: Sample Introduction Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/08%3A_An_Introduction_to_Optical_Atomic_Spectroscopy/8.03%3A_Sample_Introduction_Methods
In addition to a method of atomization, atomic spectroscopic methods require a means of placing the sample within the device used for atomization. The analysis of seawater for sodium ions requires a means for working with a sample that is in solution. The analysis of a salt-substitute for sodium, on the other hand, requires a means for working with solid samples, which could mean first bringing the sample into solution or working directly with the solid. How a sample is introduced also depends on the method of atomization. Examples of different methods of sample introduction include aspirating a solution directly into a flame, injecting a small aliquot of solution onto a resistive heating mechanism, or exposing a solid sample to a laser or electric spark. More details on specific methods for introducing samples appear in the chapters that follow.

This page titled 8.3: Sample Introduction Methods is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
9.1: Sample Atomization Techniques
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/09%3A_Atomic_Absorption_and_Atomic_Fluorescence_Spectrometry/9.01%3A_Sample_Atomization_Techniques
The process of converting an analyte to a free gaseous atom is called atomization. Converting an aqueous analyte into a free atom requires that we strip away the solvent, volatilize the analyte, and, if necessary, dissociate the analyte into free atoms. Desolvating an aqueous solution of CuCl2, for example, leaves us with solid particulates of CuCl2. Converting the particulate CuCl2 to gas phase atoms of Cu and Cl requires thermal energy.\[\mathrm{CuCl}_{2}(a q) \rightarrow \mathrm{CuCl}_{2}(s) \rightarrow \mathrm{Cu}(g)+2 \mathrm{Cl}(g) \nonumber \]There are two common atomization methods: flame atomization and electrothermal atomization, although a few elements are atomized using other methods.

A typical flame atomization assembly consists of a nebulizer, a spray chamber, and a burner head. In such a unit, the aqueous sample is drawn into the assembly by passing a high-pressure stream of compressed air past the end of a capillary tube immersed in the sample. When the sample exits the nebulizer it strikes a glass impact bead, which converts it into a fine aerosol mist within the spray chamber. The aerosol mist is swept through the spray chamber by the combustion gases—compressed air and acetylene, for example—to the burner head where the flame's thermal energy desolvates the aerosol mist to a dry aerosol of small, solid particulates. The flame's thermal energy then volatilizes the particles, producing a vapor that consists of molecular species, ionic species, and free atoms.

Burner. A slot burner provides a long optical pathlength and a stable flame. Because absorbance is directly proportional to pathlength, a long pathlength provides greater sensitivity. A stable flame minimizes uncertainty due to fluctuations in the flame.

The burner is mounted on an adjustable stage that allows the entire assembly to move horizontally and vertically. Horizontal adjustments ensure the flame is aligned with the instrument's optical path. Vertical adjustments change the height within the flame from which absorbance is monitored. This is important because two competing processes affect the concentration of free atoms in the flame. The more time an analyte spends in the flame the greater the atomization efficiency; thus, the production of free atoms increases with height. On the other hand, a longer residence time allows more opportunity for the free atoms to combine with oxygen to form a molecular oxide. For a metal that is easy to oxidize, such as Cr, the concentration of free atoms is greatest just above the burner head. For a metal, such as Ag, which is difficult to oxidize, the concentration of free atoms increases steadily with height.

Flame. The flame's temperature, which affects the efficiency of atomization, depends on the fuel–oxidant mixture, several examples of which are listed in Table \(\PageIndex{1}\). Of these, the air–acetylene and the nitrous oxide–acetylene flames are the most popular. Normally the fuel and oxidant are mixed in an approximately stoichiometric ratio; however, a fuel-rich mixture may be necessary for easily oxidized analytes. A cross-section through the flame, looking down the source radiation's optical path, reveals several distinct regions. The primary combustion zone usually is rich in gas combustion products that emit radiation, limiting its usefulness for atomic absorption. The interzonal region generally is rich in free atoms and provides the best location for measuring atomic absorption. The hottest part of the flame typically is 2–3 cm above the primary combustion zone.
As atoms approach the flame's secondary combustion zone, the decrease in temperature allows for the formation of stable molecular species.

Sample Introduction. The most common means for introducing a sample into a flame atomizer is continuous aspiration, in which the sample flows through the burner while we monitor the absorbance. Continuous aspiration is sample intensive, typically requiring 2–5 mL of sample.

Flame microsampling allows us to introduce a discrete sample of fixed volume, and is useful if we have a limited amount of sample or when the sample's matrix is incompatible with the flame atomizer. For example, continuously aspirating a sample that has a high concentration of dissolved solids—sea water, for example, comes to mind—may build up a solid deposit on the burner head that obstructs the flame and that lowers the absorbance. Flame microsampling is accomplished using a micropipet to place 50–250 μL of sample in a Teflon funnel connected to the nebulizer, or by dipping the nebulizer tubing into the sample for a short time. Dip sampling usually is accomplished with an automatic sampler. The signal for flame microsampling is a transitory peak whose height or area is proportional to the amount of analyte that is injected.

Advantages and Disadvantages of Flame Atomization. The principal advantage of flame atomization is the reproducibility with which the sample is introduced into the spectrophotometer; a significant disadvantage is that the efficiency of atomization is quite poor. There are two reasons for poor atomization efficiency. First, the majority of the aerosol droplets produced during nebulization are too large to be carried to the flame by the combustion gases. Consequently, as much as 95% of the sample never reaches the flame, which is why the spray chamber includes a waste line at its bottom. A second reason for poor atomization efficiency is that the large volume of combustion gases significantly dilutes the sample. Together, these contributions to the efficiency of atomization reduce sensitivity because the analyte's concentration in the flame may be a factor of \(2.5 \times 10^{-6}\) less than that in solution [Ingle, J. D.; Crouch, S. R. Spectrochemical Analysis, Prentice-Hall: Englewood Cliffs, NJ, 1988; p. 275].

A significant improvement in sensitivity is achieved by using the resistive heating of a graphite tube in place of a flame. A typical electrothermal atomizer, also known as a graphite furnace, consists of a cylindrical graphite tube approximately 1–3 cm in length and 3–8 mm in diameter. The graphite tube is housed in a sealed assembly that has an optically transparent window at each end. A continuous stream of inert gas is passed through the furnace, which protects the graphite tube from oxidation and removes the gaseous products produced during atomization. A power supply is used to pass a current through the graphite tube, resulting in resistive heating.

Samples of 5–50 μL are injected into the graphite tube through a small hole at the top of the tube. Atomization is achieved in three stages. In the first stage the sample is dried to a solid residue using a current that raises the temperature of the graphite tube to about 110°C. In the second stage, which is called ashing, the temperature is increased to between 350–1200°C. At these temperatures organic material in the sample is converted to CO2 and H2O, and volatile inorganic materials are vaporized. These gases are removed by the inert gas flow.
In the final stage the sample is atomized by rapidly increasing the temperature to between 2000–3000°C. The result is a transient absorbance peak whose height or area is proportional to the absolute amount of analyte injected into the graphite tube. Together, the three stages take approximately 45–90 s, with most of this time used for drying and ashing the sample.

Electrothermal atomization provides a significant improvement in sensitivity by trapping the gaseous analyte in the small volume within the graphite tube. The analyte's concentration in the resulting vapor phase is as much as \(1000 \times\) greater than in flame atomization [Parsons, M. L.; Major, S.; Forster, A. R. Appl. Spectrosc. 1983, 37, 411–418]. This improvement in sensitivity—and the resulting improvement in detection limits—is offset by a significant decrease in precision. Atomization efficiency is influenced strongly by the sample's contact with the graphite tube, which is difficult to control reproducibly.

A few elements are atomized by using a chemical reaction to produce a volatile product. Elements such as As, Se, Sb, Bi, Ge, Sn, Te, and Pb, for example, form volatile hydrides when they react with NaBH4 in the presence of acid. An inert gas carries the volatile hydride to either a flame or to a heated quartz observation tube situated in the optical path. Mercury is determined by the cold-vapor method in which it is reduced to elemental mercury with SnCl2. The volatile Hg is carried by an inert gas to an unheated observation tube situated in the instrument's optical path.

The most important factor in choosing a method of atomization is the analyte's concentration. Because of its greater sensitivity, it takes less analyte to achieve a given absorbance when using electrothermal atomization. Table \(\PageIndex{2}\), which compares the amount of analyte needed to achieve an absorbance of 0.20 when using flame atomization and electrothermal atomization, is useful when selecting an atomization method. For example, flame atomization is the method of choice if our samples contain 1–10 mg Zn2+/L, but electrothermal atomization is the best choice for samples that contain 1–10 μg Zn2+/L.

Source for Table \(\PageIndex{2}\): Varian Cookbook, SpectraAA Software Version 4.00 Pro. Notes: As, 10 mg/L by hydride vaporization; Hg, 11.5 mg/L by cold-vapor; Sn, 18 mg/L by hydride vaporization.

This page titled 9.1: Sample Atomization Techniques is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
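The three-stage electrothermal atomization program described above maps naturally onto a simple data structure. The sketch below encodes a representative drying, ashing, and atomization sequence; the specific temperatures and hold times are illustrative choices from within the ranges given in the text, and a real program would be optimized for each sample matrix.

```python
# A representative graphite-furnace temperature program (illustrative values chosen
# from within the ranges given in the text; real programs are optimized per sample).

furnace_program = [
    # (stage,        temperature_C, hold_time_s, purpose)
    ("drying",        110,           30, "evaporate the solvent to leave a solid residue"),
    ("ashing",        800,           25, "burn off organic matter, volatilize inorganics"),
    ("atomization",  2500,            5, "produce free gas-phase atoms; record absorbance"),
]

total = sum(hold for _, _, hold, _ in furnace_program)
for stage, temp, hold, purpose in furnace_program:
    print(f"{stage:<12s} {temp:>5d} C for {hold:>3d} s  : {purpose}")
print(f"total program time: {total} s")   # 60 s, within the 45-90 s quoted above
```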
9.2: Atomic Absorption Instrumentation
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/09%3A_Atomic_Absorption_and_Atomic_Fluorescence_Spectrometry/9.02%3A_Atomic_Absorption_Instrumentation
Atomic absorption spectrophotometers use optical benches similar to those described earlier in Chapter 7, including a source of radiation, a method for introducing the sample (covered in the previous section), a means for isolating the wavelengths of interest, and a way to measure the amount of light absorbed or emitted.

Because atomic absorption lines are narrow, we need to use a line source instead of a continuum source to record atomic absorption spectra. A comparison of line widths helps explain why this is necessary. As discussed in Chapter 7, a typical continuum source has an effective bandwidth on the order of 1 nm after passing through a monochromator. An atomic absorption line, as we learned in Chapter 8, has an effective line width on the order of 0.002 nm due to the Doppler broadening and pressure broadening that take place in a flame. If we pass the radiation from a continuum source through the flame, the incident power from the source, \(P_0\), and the power that reaches the detector, \(P_T\), are essentially identical, leading to an absorbance of zero. A line source, which operates at a temperature that is lower than that of a flame, has a line width on the order of 0.001 nm. Passing this source radiation through the flame results in a measurable \(P_T\) and a measurable absorbance.

The source for atomic absorption is a hollow cathode lamp that consists of a cathode and anode enclosed within a glass tube filled with a low pressure of an inert gas, such as Ne or Ar. Applying a potential across the electrodes ionizes the filler gas. The positively charged gas ions collide with the negatively charged cathode, sputtering atoms from the cathode's surface. Some of the sputtered atoms are in the excited state and emit radiation characteristic of the metal(s) from which the cathode is manufactured. By fashioning the cathode from the metallic analyte, a hollow cathode lamp provides emission lines that correspond to the analyte's absorption spectrum.

Each element in a hollow cathode lamp provides several atomic emission lines that we can use for atomic absorption. Usually the wavelength that provides the best sensitivity is the one we choose to use, although a less sensitive wavelength may be more appropriate for a sample that has a higher concentration of analyte. For the Cr hollow cathode lamp in Table \(\PageIndex{1}\), the best sensitivity is obtained using a wavelength of 357.9 nm, as this line requires the smallest concentration of analyte to achieve an absorbance of 0.20.

Another consideration is the emission line's intensity, \(P_0\). If several emission lines meet our requirements for sensitivity, we may wish to use the emission line with the largest relative \(P_0\) because there is less uncertainty in measuring \(P_0\) and \(P_T\). When analyzing a sample that is ≈10 mg Cr/L, for example, the first three wavelengths in Table \(\PageIndex{1}\) provide good sensitivity; the wavelengths of 425.4 nm and 429.0 nm, however, have a greater \(P_0\) and will provide less uncertainty in the measured absorbance.

The emission spectrum for a hollow cathode lamp includes, in addition to the analyte's emission lines, additional emission lines from impurities present in the metallic cathode and from the filler gas. These additional lines are a potential source of stray radiation that could result in an instrumental deviation from Beer's law.
The monochromator's slit width is set as wide as possible to improve the throughput of radiation, yet narrow enough to eliminate these sources of stray radiation.

Atomic absorption spectrometers are available with either a single-beam or a double-beam optical bench. A typical single-beam spectrometer consists of a hollow cathode lamp as a source, a flame, a grating monochromator, a detector (usually a photomultiplier tube), and a signal processor. Also included in this design is a chopper that periodically blocks light from the hollow cathode lamp from passing through the flame and reaching the detector. The purpose of the chopper is to provide a means for discriminating against the emission of light from the flame, which otherwise contributes to the total amount of light that reaches the detector. When the chopper is closed, the only light reaching the detector is emission from the flame; when the chopper is open, both the emission from the flame and the light from the lamp that passes through the flame reach the detector. The difference between the two signals gives the amount of light that reaches the detector after being absorbed by the sample. An alternative method that accomplishes the same thing is to modulate the amount of radiation emitted by the hollow cathode lamp.

In a typical double-beam instrument for atomic absorption spectroscopy, the chopper alternates between two optical paths: one in which light from the hollow cathode lamp bypasses the flame and that measures the total emission of radiation from the flame and the lamp, and one that passes the light from the hollow cathode lamp through the flame and that measures the emission of light from the flame and the amount of light from the hollow cathode lamp that is not absorbed by the sample. The difference between the two signals gives the amount of light that reaches the detector after being absorbed by the sample.

This page titled 9.2: Atomic Absorption Instrumentation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
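The chopper logic lends itself to a short numerical illustration. In the sketch below, the flame-only signal (chopper closed) is subtracted from the combined signal (chopper open) to isolate the transmitted lamp power, from which the absorbance follows as \(A = \log(P_0/P_T)\); all of the signal values are invented for the example.

```python
import math

# Invented example signals (arbitrary units); not measurements from a real instrument.
P0 = 100.0               # lamp power reaching the detector with no analyte in the flame
signal_closed = 12.0     # chopper closed: flame emission only
signal_open = 75.0       # chopper open: flame emission + lamp light transmitted by the flame

# The difference isolates the lamp light that survived absorption by the sample.
P_T = signal_open - signal_closed      # 63.0

absorbance = math.log10(P0 / P_T)
print(f"P_T = {P_T:.1f}, A = {absorbance:.3f}")   # A is about 0.201
```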
9.3: Interferences in Absorption Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/09%3A_Atomic_Absorption_and_Atomic_Fluorescence_Spectrometry/9.03%3A_Interferences_in_Absorption_Spectroscopy
In describing the optical benches for atomic absorption spectroscopy, we noted the need to modulate the radiation from the source in order to discriminate against the emission of radiation from the flame. In this section we consider additional sources of interference and discuss ways to compensate for them.

A spectral interference occurs when an analyte's absorption line overlaps with an interferent's absorption line or band. Because atomic absorption lines are so narrow, the overlap of two such lines seldom is a problem. On the other hand, a molecule's broad absorption band or the scattering of source radiation is a potentially serious spectral interference.

An important consideration when using a flame as an atomization source is its effect on the measured absorbance. Among the products of combustion are molecular species that exhibit broad absorption bands and particulates that scatter radiation from the source. If we fail to compensate for these spectral interferences, then the intensity of transmitted radiation is smaller than expected. The result is an apparent increase in the sample's absorbance. Fortunately, absorption and scattering of radiation by the flame are corrected by analyzing a blank that does not contain the sample.

Spectral interferences also occur when components of the sample's matrix other than the analyte react to form molecular species, such as oxides and hydroxides. The resulting absorption and scattering constitutes the sample's background and may present a significant problem, particularly at wavelengths below 300 nm where the scattering of radiation becomes more important. If we know the composition of the sample's matrix, then we can prepare our standards using an identical matrix. In this case the background absorption is the same for both the samples and the standards. Alternatively, if the background is due to a known matrix component, then we can add that component in excess to all samples and standards so that the contribution of the naturally occurring interferent is insignificant. Finally, many interferences due to the sample's matrix are eliminated by increasing the atomization temperature. For example, switching to a higher temperature flame helps prevent the formation of interfering oxides and hydroxides.

If the identity of the matrix interference is unknown, or if it is not possible to adjust the flame or furnace conditions to eliminate the interference, then we must find another method to compensate for the background interference. Several methods have been developed to compensate for matrix interferences, and most atomic absorption spectrophotometers include one or more of these methods.

One of the most common methods for background correction is to use a continuum source, such as a D2 lamp. Because a D2 lamp is a continuum source, absorbance of its radiation by the analyte's narrow absorption line is negligible. Only the background, therefore, absorbs radiation from the D2 lamp. Both the analyte and the background, on the other hand, absorb the hollow cathode's radiation. Subtracting the absorbance for the D2 lamp from that for the hollow cathode lamp gives a corrected absorbance that compensates for the background interference (a short numerical sketch of this subtraction appears at the end of this section). Although this method of background correction is effective, it does assume that the background absorbance is constant over the range of wavelengths passed by the monochromator. If this is not true, then subtracting the two absorbances underestimates or overestimates the background.
Another approach to removing the background is to take advantage of the Zeeman effect. In the absence of an applied magnetic field—B = 0, where B is the strength of the magnetic field—a \(p \rightarrow d\) absorbance by the analyte takes place between two well-defined energy levels and yields a single well-defined absorption line. When a magnetic field is applied, B > 0, the three equal energy p-orbitals split into three closely spaced energy levels and the five equal energy d-orbitals split into five closely spaced energy levels. The allowed transitions between these energy levels, those with \(\Delta M_l = 0, \pm 1\), yield three well-defined absorption lines, the central one of which (\(\Delta M_l = 0\)) is at the same wavelength as the absorption line in the absence of the applied magnetic field. This central band is the only wavelength at which the analyte absorbs.

To make use of this effect, we apply a magnetic field to the instrument's electrothermal atomizer and place a rotating polarizer between it and the hollow cathode lamp. When the rotating polarizer is in one position, radiation from the hollow cathode lamp is absorbed only by the central absorption line, giving a measure of absorption by both the background and the analyte. When the rotating polarizer is in the other position, radiation from the hollow cathode lamp is absorbed only by the two outside lines, providing a measure of absorption by the background only. The difference in these two absorption values is a function of the analyte's concentration.

A third method for compensating for background absorption is to take advantage of what happens to the emission intensity of a hollow cathode lamp when it is operated at a high current. When using a high current the emission band becomes significantly broader than when using a normal (low) current and, at the analytical wavelength, the emission intensity from the lamp decreases due to self-absorption, a process in which the ground state atoms in the hollow cathode lamp absorb photons emitted by the excited state atoms in the hollow cathode lamp. When using a low current we measure absorption from both the analyte and the background; when using a high current, absorption is due almost exclusively to the background. This approach is called Smith-Hieftje background correction.

The quantitative analysis of some elements is complicated by chemical interferences that occur during atomization. The most common chemical interferences are the formation of nonvolatile compounds that contain the analyte and ionization of the analyte.

One example of the formation of a nonvolatile compound is the effect of \(\text{PO}_4^{3-}\) or Al3+ on the flame atomic absorption analysis of Ca2+. In one study, for example, adding 100 ppm Al3+ to a solution of 5 ppm Ca2+ decreased calcium ion's absorbance from 0.50 to 0.14, while adding 500 ppm \(\text{PO}_4^{3-}\) to a similar solution of Ca2+ decreased the absorbance from 0.50 to 0.38. These interferences are attributed to the formation of nonvolatile particles of Ca3(PO4)2 and an Al–Ca–O oxide [Hosking, J. W.; Snell, N. B.; Sturman, B. T. J. Chem. Educ. 1977, 54, 128–130].
When using flame atomization, we can minimize the formation of nonvolatile compounds by increasing the flame's temperature, either by changing the fuel-to-oxidant ratio or by switching to a different combination of fuel and oxidant. Another approach is to add a releasing agent or a protecting agent to the sample. A releasing agent is a species that reacts preferentially with the interferent, releasing the analyte during atomization. For example, Sr2+ and La3+ serve as releasing agents for the analysis of Ca2+ in the presence of \(\text{PO}_4^{3-}\) or Al3+. Adding 2000 ppm SrCl2 to the Ca2+/\(\text{PO}_4^{3-}\) and to the Ca2+/Al3+ mixtures described in the previous paragraph increased the absorbance to 0.48. A protecting agent reacts with the analyte to form a stable volatile complex. Adding 1% w/w EDTA to the Ca2+/\(\text{PO}_4^{3-}\) solution described in the previous paragraph increased the absorbance to 0.52.

An ionization interference occurs when thermal energy from the flame or the electrothermal atomizer is sufficient to ionize the analyte\[\mathrm{M}(g)\rightleftharpoons \ \mathrm{M}^{+}(g)+e^{-} \label{10.1} \]where M is the analyte. Because the absorption spectra for M and M+ are different, the position of the equilibrium in reaction \ref{10.1} affects the absorbance at wavelengths where M absorbs. To limit ionization we add a high concentration of an ionization suppressor, which is a species that ionizes more easily than the analyte. If the ionization suppressor's concentration is sufficient, then the increased concentration of electrons in the flame pushes reaction \ref{10.1} to the left, preventing the analyte's ionization. Potassium and cesium frequently are used as ionization suppressors because of their low ionization energies.

This page titled 9.3: Interferences in Absorption Spectroscopy is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
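Continuum-source (D2) background correction reduces to a simple subtraction of absorbances. The sketch below, with invented absorbance readings, shows how the D2 measurement (background only) is subtracted from the hollow cathode measurement (analyte plus background) to give the corrected absorbance.

```python
# Invented absorbance readings for illustration; not data from a real analysis.
A_hollow_cathode = 0.275   # analyte + background, measured with the line source
A_deuterium = 0.060        # background only, measured with the continuum (D2) source

# The D2 lamp's radiation is absorbed negligibly by the narrow atomic line, so its
# absorbance estimates the broad molecular/scattering background alone.
A_corrected = A_hollow_cathode - A_deuterium
print(f"corrected absorbance = {A_corrected:.3f}")   # 0.215

# The correction assumes the background is constant across the monochromator's bandpass;
# if it is not, the subtraction under- or over-corrects, as noted in the text.
```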
9.4: Atomic Absorption Techniques
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/09%3A_Atomic_Absorption_and_Atomic_Fluorescence_Spectrometry/9.04%3A_Atomic_Absorption_Techniques
Flame and electrothermal atomization require that the analyte be in solution. Solid samples are brought into solution by dissolving them in an appropriate solvent. If the sample is not soluble it is digested, either on a hot-plate or by microwave, using HNO3, H2SO4, or HClO4. Alternatively, we can extract the analyte using a Soxhlet extractor. Liquid samples are analyzed directly or their analytes are extracted if the matrix is incompatible with the method of atomization. A serum sample, for instance, is difficult to aspirate when using flame atomization and may produce an unacceptably high background absorbance when using electrothermal atomization. A liquid–liquid extraction using an organic solvent and a chelating agent frequently is used to concentrate analytes. Dilute solutions of Cd2+, Co2+, Cu2+, Fe3+, Pb2+, Ni2+, and Zn2+, for example, are concentrated by extracting with a solution of ammonium pyrrolidine dithiocarbamate in methyl isobutyl ketone.

Because Beer's law also applies to atomic absorption, we might expect atomic absorption calibration curves to be linear. In practice, however, most atomic absorption calibration curves are nonlinear or are linear over only a limited range of concentrations. Nonlinearity in atomic absorption is a consequence of instrumental limitations, including stray radiation from the hollow cathode lamp and the variation in molar absorptivity across the absorption line. Accurate quantitative work, therefore, requires a suitable means for computing the calibration curve from a set of standards.

When possible, a quantitative analysis is best conducted using external standards. Unfortunately, matrix interferences are a frequent problem, particularly when using electrothermal atomization. For this reason the method of standard additions often is used. One limitation to this method of standardization, however, is the requirement of a linear relationship between absorbance and concentration.

Most instruments include several different algorithms for computing the calibration curve. The instrument in my lab, for example, includes five algorithms. Three of the algorithms fit absorbance data using linear, quadratic, or cubic polynomial functions of the analyte's concentration. It also includes two algorithms that fit the concentrations of the standards to quadratic functions of the absorbance.

Atomic absorption spectroscopy is ideally suited for the analysis of trace and ultratrace analytes, particularly when using electrothermal atomization. For minor and major analytes, samples are diluted before the analysis. Most analyses use a macro or a meso sample. The small volume requirement for electrothermal atomization or for flame microsampling, however, makes practical the analysis of micro and ultramicro samples.

If spectral and chemical interferences are minimized, an accuracy of 0.5–5% is routinely attainable. When the calibration curve is nonlinear, accuracy is improved by using a pair of standards whose absorbances closely bracket the sample's absorbance and assuming that the change in absorbance is linear over this limited concentration range (a short sketch of this interpolation appears at the end of this section). Determinate errors for electrothermal atomization often are greater than those obtained with flame atomization due to more serious matrix interferences.

For an absorbance greater than 0.1–0.2, the relative standard deviation for atomic absorption is 0.3–1% for flame atomization and 1–5% for electrothermal atomization.
The principal limitation is the uncertainty in the concentration of free analyte atoms that results from variations in the rate of aspiration, nebulization, and atomization for a flame atomizer, and from the consistency of injecting samples for electrothermal atomization.

The sensitivity of a flame atomic absorption analysis is influenced by the flame's composition and by the position in the flame from which we monitor the absorbance. Normally the sensitivity of an analysis is optimized by aspirating a standard solution of analyte and adjusting the fuel-to-oxidant ratio, the nebulizer flow rate, and the height of the burner to give the greatest absorbance. With electrothermal atomization, sensitivity is influenced by the drying and ashing stages that precede atomization. The temperature and time at each stage are optimized for each type of sample.

Sensitivity also is influenced by the sample's matrix. We already noted, for example, that sensitivity is decreased by a chemical interference. An increase in sensitivity may be realized by adding a low molecular weight alcohol, ester, or ketone to the solution, or by using an organic solvent.

Due to the narrow width of absorption lines, atomic absorption provides excellent selectivity. Atomic absorption is used for the analysis of over 60 elements at concentrations at or below the level of μg/L.

The analysis time when using flame atomization is short, with sample throughputs of 250–350 determinations per hour when using a fully automated system. Electrothermal atomization requires substantially more time per analysis, with maximum sample throughputs of 20–30 determinations per hour. The cost of a new instrument ranges from $10,000 to $50,000 for flame atomization, and from $18,000 to $70,000 for electrothermal atomization. The more expensive instruments in each price range include double-beam optics and automatic samplers, and can be programmed for multielemental analysis by allowing the wavelength and hollow cathode lamp to be changed automatically.

This page titled 9.4: Atomic Absorption Techniques is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
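The two-standard bracketing calculation described in this section is simple enough to express in a few lines of code. The following is a minimal sketch, not part of the original text; the concentration and absorbance values are hypothetical, and it simply interpolates linearly between the two standards whose absorbances bracket the sample's absorbance.

```python
# Sketch: estimate a concentration from two bracketing standards, assuming the
# change in absorbance is linear over this narrow concentration range.
# All numerical values below are hypothetical.

def bracketing_concentration(A_sample, std_low, std_high):
    """std_low and std_high are (concentration, absorbance) pairs whose
    absorbances closely bracket A_sample."""
    (c1, A1), (c2, A2) = std_low, std_high
    if not (min(A1, A2) <= A_sample <= max(A1, A2)):
        raise ValueError("Sample absorbance is not bracketed by the standards.")
    # Linear interpolation between the two standards
    return c1 + (A_sample - A1) * (c2 - c1) / (A2 - A1)

# Example: 2.0 and 3.0 ppm standards with measured absorbances 0.414 and 0.554
conc = bracketing_concentration(0.490, (2.0, 0.414), (3.0, 0.554))
print(f"Estimated concentration: {conc:.2f} ppm")
```

Because only a short segment of the calibration curve is used, this approach tolerates modest nonlinearity in the overall curve, which is why it improves accuracy when the full calibration curve is not linear.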
About the Author
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Instrumental_Analysis_(LibreTexts)/00%3A_Front_Matter/About_the_Author
David Harvey, professor of chemistry and biochemistry at DePauw University, is the recipient of the 2016 American Chemical Society Division of Analytical Chemistry J. Calvin Giddings Award for Excellence in Education. The national award recognizes a scientist who has enhanced the professional development of analytical chemistry students, developed and published innovative experiments, designed and improved equipment or teaching labs and published influential textbooks or significant articles on teaching analytical chemistry.
InfoPage
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/00%3A_Front_Matter/02%3A_InfoPage
This text is disseminated via the Open Education Resource (OER) LibreTexts Project and, like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing, and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such efforts.

Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts' web-based origins allow powerful integration of advanced features and new technologies to support learning. The LibreTexts mission is to unite students, faculty, and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields) integrated.

The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant Nos. 1246120, 1525057, and 1413739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor the US Department of Education.

Have questions or comments? Contact the LibreTexts project for information about adoptions or adaptions. More information on our activities can be found via Facebook, Twitter, or our blog. This text was compiled on 07/05/2023.
1.1: Introduction to Molecular Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/1%3A_General_Background_on_Molecular_Spectroscopy/1.1%3A_Introduction_to_Molecular_Spectroscopy
Molecular spectroscopy relates to the interactions that occur between molecules and electromagnetic radiation. Electromagnetic radiation is a form of radiation in which the electric and magnetic fields simultaneously vary. One well known example of electromagnetic radiation is visible light. Electromagnetic radiation can be characterized by its energy, intensity, frequency and wavelength. What is the relationship between the energy (E) and frequency (\(\nu\)) of electromagnetic radiation? The fundamental discoveries of Max Planck, who explained the emission of light by a blackbody radiator, and Albert Einstein, who explained the observations in the photoelectric effect, led to the realization that the energy of electromagnetic radiation is proportional to its frequency. The proportionality expression can be converted to an equality through the use of Planck’s constant.\[\mathrm{E = h\nu} \nonumber \]What is the relationship between the energy and wavelength (\(\lambda\)) of electromagnetic radiation?Using the knowledge that the speed of electromagnetic radiation (c) is the frequency times the wavelength (\(\mathrm{c = \lambda\nu}\)), we can solve for the frequency and substitute in to the expression above to get the following.\[\mathrm{E = \dfrac{hc}{\lambda}} \nonumber \]Therefore the energy of electromagnetic radiation is inversely proportional to the wavelength. Long wavelength electromagnetic radiation will have low energy. Short wavelength electromagnetic radiation will have high energy.Write the types of radiation observed in the electromagnetic spectrum going from high to low energy. Also include what types of processes occur in atoms or molecules for each type of radiation.Atoms and molecules have the ability to absorb or emit electromagnetic radiation. A species absorbing radiation undergoes a transition from the ground to some higher energy excited state. A species emitting radiation undergoes a transition from a higher energy excited state to a lower energy state. Spectroscopy in analytical chemistry is used in two primary manners: to identify a species and to quantify a species.Identification of a species involves recording the absorption or emission of a species as a function of the frequency or wavelength to obtain a spectrum (the spectrum is a plot of the absorbance or emission intensity as a function of wavelength). The features in the spectrum provide a signature for a molecule that may be used for purposes of identification. The more unique the spectrum for a species, the more useful it is for compound identification. Some spectroscopic methods (e.g., NMR spectroscopy) are especially useful for compound identification, whereas others provide spectra that are all rather similar and therefore not as useful. Among methods that provide highly unique spectra, there are some that are readily open to interpretation and structure assignment (e.g., NMR spectra), whereas others (e.g., infrared spectroscopy) are less open to interpretation and structure assignment. Since molecules do exhibit unique infrared spectra, an alternative means of compound identification is to use a computer to compare the spectrum of the unknown compound to a library of spectra of known compounds and identify the best match. In this case, identification is only possible if the spectrum of the unknown compound is in the library.Quantification of a species using a spectroscopic method involves measuring the magnitude of the absorbance or intensity of the emission and relating that to the concentration. 
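Before turning to quantification, a quick numerical illustration of the energy relationships above may be helpful (this worked example is added here, and the 500 nm wavelength is chosen arbitrarily). The energy of a single photon of 500 nm visible light is

\[\mathrm{E = \dfrac{hc}{\lambda} = \dfrac{(6.626 \times 10^{-34}\: J\: s)(2.998 \times 10^{8}\: m/s)}{500 \times 10^{-9}\: m} = 3.97 \times 10^{-19}\: J} \nonumber \]

A 250 nm ultraviolet photon, with half the wavelength, carries twice this energy, consistent with the inverse relationship between energy and wavelength.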
At this point, we will focus on the use of absorbance measurements for quantification.Consider a sample through which you will send radiation of a particular wavelength as shown in . You measure the power from the radiation source (Po) using a blank solution (a blank is a sample that does not have any of the absorbing species you wish to measure). You then measure the power of radiation that makes it through the sample (P).The ratio P/Po is a measure of how much radiation passed through the sample and is defined as the transmittance (T).\[\mathrm{T = \dfrac{P}{P_o} \hspace{20px} and \hspace{20px} \%T = \left(\dfrac{P}{P_o}\right)\times 100} \nonumber \]The higher the transmittance, the more similar P is to Po. The absorbance (A) is defined as:\[\mathrm{A = -\log T \textrm{ or } \log\left(\dfrac{P_o}{P}\right).} \nonumber \]The higher the absorbance, the lower the value of P, and the less light that makes it through the sample and to the detector.This page titled 1.1: Introduction to Molecular Spectroscopy is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
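The conversion from measured powers to transmittance and absorbance is a one-line calculation. The short sketch below is added as an illustration of the definitions given above; the power readings are hypothetical.

```python
import math

def transmittance(P, P0):
    """Fraction of the incident radiation that passes through the sample."""
    return P / P0

def absorbance(P, P0):
    """A = -log10(T) = log10(P0/P)."""
    return -math.log10(transmittance(P, P0))

# Hypothetical readings: the blank gives P0 = 100.0, the sample gives P = 25.0
P0, P = 100.0, 25.0
print(f"%T = {100 * transmittance(P, P0):.1f}%")   # 25.0%
print(f"A  = {absorbance(P, P0):.3f}")             # 0.602
```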
1.2: Beer’s Law
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/1%3A_General_Background_on_Molecular_Spectroscopy/1.2%3A_Beers_Law
One factor that influences the absorbance of a sample is the concentration (c). The expectation would be that, as the concentration goes up, more radiation is absorbed and the absorbance goes up. Therefore, the absorbance is directly proportional to the concentration.

A second factor is the path length (b). The longer the path length, the more molecules there are in the path of the beam of radiation, therefore the absorbance goes up. Therefore, the absorbance is directly proportional to the path length.

When the concentration is reported in moles/liter and the path length is reported in centimeters, the third factor is known as the molar absorptivity (\(\varepsilon\)). In some fields of work, it is more common to refer to this as the extinction coefficient. When we use a spectroscopic method to measure the concentration of a sample, we select out a specific wavelength of radiation to shine on the sample. As you likely know from other experiences, a particular chemical species absorbs some wavelengths of radiation and not others. The molar absorptivity is a measure of how well the species absorbs the particular wavelength of radiation that is being shined on it. The process of absorbance of electromagnetic radiation involves the excitation of a species from the ground state to a higher energy excited state. This process is described as an excitation transition, and excitation transitions have probabilities of occurrence. It is appropriate to talk about the degree to which possible energy transitions within a chemical species are allowed. Some transitions are more allowed, or more favorable, than others. Transitions that are highly favorable or highly allowed have high molar absorptivities. Transitions that are only slightly favorable or slightly allowed have low molar absorptivities. The higher the molar absorptivity, the higher the absorbance. Therefore, the absorbance is directly proportional to the molar absorptivity.

If we return to the experiment in which a spectrum (recording the absorbance as a function of wavelength) is recorded for a compound for the purpose of identification, the concentration and path length are constant at every wavelength of the spectrum. The only difference is the molar absorptivities at the different wavelengths, so a spectrum represents a plot of the relative molar absorptivity of a species as a function of wavelength.

Since the absorbance is directly proportional to the concentration, the path length, and the molar absorptivity, we can write the following equation, which is known as the Beer-Lambert law (often referred to as Beer's Law), to show this relationship.

\[\mathrm{A = \varepsilon bc} \nonumber \]

Note that Beer's Law is the equation for a straight line with a y-intercept of zero.

Measuring the concentration of a species in a sample involves a multistep process. One important consideration is the wavelength of radiation to use for the measurement. Remember that the higher the molar absorptivity, the higher the absorbance. What this also means is that the higher the molar absorptivity, the lower the concentration of species that still gives a measurable absorbance value. Therefore, the wavelength that has the highest molar absorptivity (\(\lambda\)max) is usually selected for the analysis because it will provide the lowest detection limits. If the species you are measuring is one that has been commonly studied, literature reports or standard analysis methods will provide the \(\lambda\)max value.
If it is a new species with an unknown \(\lambda\)max value, then it is easily measured by recording the spectrum of the species. The wavelength that has the highest absorbance in the spectrum is \(\lambda\)max.The second step of the process is to generate a standard curve. The standard curve is generated by preparing a series of solutions (usually 3-5) with known concentrations of the species being measured. Every standard curve is generated using a blank. The blank is some appropriate solution that is assumed to have an absorbance value of zero. It is used to zero the spectrophotometer before measuring the absorbance of the standard and unknown solutions. The absorbance of each standard sample at \(\lambda\)max is measured and plotted as a function of concentration. The plot of the data should be linear and should go through the origin as shown in the standard curve in . If the plot is not linear or if the y-intercept deviates substantially from the origin, it indicates that the standards were improperly prepared, the samples deviate in some way from Beer’s Law, or that there is an unknown interference in the sample that is complicating the measurements. Assuming a linear standard curve is obtained, the equation that provides the best linear fit to the data is generated.Note that the slope of the line of the standard curve in is (\(\varepsilon\)b) in the Beer’s Law equation. If the path length is known, the slope of the line can then be used to calculate the molar absorptivity.The third step is to measure the absorbance in the sample with an unknown concentration. The absorbance of the sample is used with the equation for the standard curve to calculate the concentration.The way to think about this question is to consider the expression we wrote earlier for the absorbance.\[\mathrm{A = \log\left(\dfrac{P_o}{P}\right)} \nonumber \]Since stray radiation always leaks in to the detector and presumably is a fixed or constant quantity, we can rewrite the expression for the absorbance including terms for the stray radiation. It is important to recognize that Po, the power from the radiation source, is considerably larger than \(P_S\). Also, the numerator (Po + Ps) is a constant at a particular wavelength.\[\mathrm{A = \log\left(\dfrac{P_o + P_s}{P + P_s}\right)} \nonumber \]Now let’s examine what happens to this expression under the two extremes of low concentration and high concentration. At low concentration, not much of the radiation is absorbed and P is not that much different than Po. Since \(P_o \gg P_S\), \(P\) will also be much greater than \(P_S\). If the sample is now made a little more concentrated so that a little more of the radiation is absorbed, P is still much greater than PS. Under these conditions the amount of stray radiation is a negligible contribution to the measurements of Po and P and has a negligible effect on the linearity of Beer’s Law.As the concentration is raised, P, the radiation reaching the detector, becomes smaller. If the concentration is made high enough, much of the incident radiation is absorbed by the sample and P becomes much smaller. If we consider the denominator (P + PS) at increasing concentrations, P gets small and PS remains constant. At its limit, the denominator approaches PS, a constant. Since Po + PS is a constant and the denominator approaches a constant (Ps), the absorbance approaches a constant. A plot of what would occur is shown in .The ideal plot is the straight line. 
The curvature that occurs at higher concentrations because of the presence of stray radiation represents a negative deviation from Beer's Law.

The sample molecules are more likely to interact with each other at higher concentrations, thus the assumption used to derive Beer's Law breaks down at high concentrations. This effect, which we will not explain in any more detail in this document, also leads to a negative deviation from Beer's Law at high concentration.

Spectroscopic instruments typically have a device known as a monochromator. There are two key features of a monochromator. The first is a device to disperse the radiation into distinct wavelengths. You are likely familiar with the dispersion of radiation that occurs when radiation of different wavelengths is passed through a prism. The second is a slit that blocks the wavelengths that you do not want to shine on your sample and only allows \(\lambda\)max to pass through to your sample.

An examination of the monochromator shows that the slit has to allow some "packet" of wavelengths through to the sample. The packet is centered on \(\lambda\)max, but clearly nearby wavelengths of radiation pass through the slit to the sample. The term effective bandwidth defines the packet of wavelengths, and it depends on the slit width and the ability of the dispersing element to divide the wavelengths. Reducing the width of the slit reduces the packet of wavelengths that makes it through to the sample, meaning that smaller slit widths lead to more monochromatic radiation and less deviation from linearity in Beer's Law.

The important thing to consider is the effect that this has on the power of radiation making it through to the sample (Po). Reducing the slit width will lead to a reduction in Po and hence P. An electronic measuring device called a detector is used to monitor the magnitude of Po and P. All electronic devices have a background noise associated with them (rather analogous to the static noise you may hear on a speaker, and to the discussion of stray radiation from earlier, which represents a form of noise). Po and P represent measurements of signal over the background noise. As Po and P become smaller, the background noise becomes a more significant contribution to the overall measurement. Ultimately the background noise restricts the signal that can be measured and sets the detection limit of the spectrophotometer. Therefore, it is desirable to have a large value of Po. Since reducing the slit width reduces the value of Po, it also raises (worsens) the detection limit of the device. Selecting the appropriate slit width for a spectrophotometer is therefore a balance or tradeoff between the desire for high source power and the desire for high monochromaticity of the radiation.

It is not possible to get purely monochromatic radiation using a dispersing element with a slit, and usually the sample has a slightly different molar absorptivity for each wavelength of radiation shining on it. The net effect is that the total absorbance added over all the different wavelengths is no longer linear with concentration. Instead a negative deviation occurs at higher concentrations due to the polychromaticity of the radiation. Furthermore, the deviation is more pronounced the greater the difference in the molar absorptivities. The accompanying figure compares the deviation for two wavelengths of radiation with molar absorptivities that are (a) both 1,000, (b) 500 and 1,500, and (c) 250 and 1,750.
As the molar absorptivities become further apart, a greater negative deviation is observed.Therefore, it is preferable to perform the absorbance measurement in a region of the spectrum that is relatively broad and flat. The hypothetical spectrum in shows a species with two wavelengths that have the same molar absorptivity. The peak at approximately 250 nm is quite sharp whereas the one at 330 nm is rather broad. Given such a choice, the broader peak will have less deviation from the polychromaticity of the radiation and is less prone to errors caused by slight misadjustments of the monochromator.It is important to consider the error that occurs at the two extremes (high concentration and low concentration). Our discussion above about deviations to Beer’s Law showed that several problems ensued at higher concentrations of the sample. Also, the point where only 10% of the radiation is transmitted through the sample corresponds to an absorbance value of 1. Because of the logarithmic relationship between absorbance and transmittance, the absorbance values rise rather rapidly over the last 10% of the radiation that is absorbed by the sample. A relatively small change in the transmittance can lead to a rather large change in the absorbance at high concentrations. Because of the substantial negative deviation to Beer’s law and the lack of precision in measuring absorbance values above 1, it is reasonable to assume that the error in the measurement of absorbance would be high at high concentrations.At very low sample concentrations, we observe that Po and P are quite similar in magnitude. If we lower the concentration a bit more, P becomes even more similar to Po. The important realization is that, at low concentrations, we are measuring a small difference between two large numbers. For example, suppose we wanted to measure the weight of a captain of an oil tanker. One way to do this is to measure the combined weight of the tanker and the captain, then have the captain leave the ship and measure the weight again. The difference between these two large numbers would be the weight of the captain. If we had a scale that was accurate to many, many significant figures, then we could possibly perform the measurement in this way. But you likely realize that this is an impractical way to accurately measure the weight of the captain and most scales do not have sufficient precision for an accurate measurement. Similarly, trying to measure a small difference between two large signals of radiation is prone to error since the difference in the signals might be on the order of the inherent noise in the measurement. Therefore, the degree of error is expected to be high at low concentrations.The discussion above suggests that it is best to measure the absorbance somewhere in the range of 0.1 to 0.8. Solutions of higher and lower concentrations have higher relative error in the measurement. Low absorbance values (high transmittance) correspond to dilute solutions. Often, other than taking steps to concentrate the sample, we are forced to measure samples that have low concentrations and must accept the increased error in the measurement. It is generally undesirable to record absorbance measurements above 1 for samples. Instead, it is better to dilute such samples and record a value that will be more precise with less relative error.Another question that arises is whether it is acceptable to use a non-linear standard curve. 
As we observed earlier, standard curves of absorbance versus concentration will show a non-linearity at higher concentrations. Such a non-linear plot can usually be fit using a higher order equation and the equation may predict the shape of the curve quite accurately. Whether or not it is acceptable to use the non-linear portion of the curve depends in part on the absorbance value where the non-linearity starts to appear. If the non-linearity occurs at absorbance values higher than one, it is usually better to dilute the sample into the linear portion of the curve because the absorbance value has a high relative error. If the non-linearity occurs at absorbance values lower than one, using a non-linear higher order equation to calculate the concentration of the analyte in the unknown may be acceptable.One thing that should never be done is to extrapolate a standard curve to higher concentrations. Since non-linearity will occur at some point, and there is no way of knowing in advance when it will occur, the absorbance of any unknown sample must be lower than the absorbance of the highest concentration standard used in the preparation of the standard curve. It is also not desirable to extrapolate a standard curve to lower concentrations. There are occasions when non-linear effects occur at low concentrations. If an unknown has an absorbance that is below that of the lowest concentration standard of the standard curve, it is preferable to prepare a lower concentration standard to ensure that the curve is linear over such a concentration region.Another concern that always exists when using spectroscopic measurements for compound quantification or identification is the potential presence of matrix effects. The matrix is everything else that is in the sample except for the species being analyzed. A concern can occur when the matrix of the unknown sample has components in it that are not in the blank solution and standards. Components of the matrix can have several undesirable effects.One concern is that a component of the matrix may absorb radiation at the same wavelength as the analyte, giving a false positive signal. Particulate matter in a sample will scatter the radiation, thereby reducing the intensity of the radiation at the detector. Scattered radiation will be confused with absorbed radiation and result in a higher concentration than actually occurs in the sample.Another concern is that some species have the ability to change the value of \(\lambda\)max. For some species, the value of \(\lambda\)max can show a pronounced dependence on pH. If this is a consideration, then all of the standard and unknown solutions must be appropriately buffered. Species that can hydrogen bond or metal ions that can form donor-acceptor complexes with the analyte may alter the position of \(\lambda\)max. Changes in the solvent can affect \(\lambda\)max as well.This page titled 1.2: Beer’s Law is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
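The standard-curve workflow described in this section, fitting the standards and then using the fit to convert a sample's absorbance into a concentration, can be sketched as follows. This is an added illustration: the concentrations and absorbances are hypothetical, a simple unweighted least-squares line is used, and a real analysis would also confirm linearity and check that the sample falls within the range of the standards.

```python
import numpy as np

# Hypothetical standards that obey Beer's law (A = eps*b*c) with a little noise
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])            # concentration (any unit)
absorb = np.array([0.001, 0.162, 0.321, 0.478, 0.642])  # measured absorbance

# Fit A = m*c + i; the slope m corresponds to (epsilon * b) in Beer's law
m, i = np.polyfit(conc, absorb, 1)
print(f"slope (eps*b) = {m:.4f}, intercept = {i:.4f}")

# If the path length b is known (say 1.00 cm), the molar absorptivity follows
b = 1.00
print(f"molar absorptivity = {m / b:.4f}")

# Concentration of an unknown from its measured absorbance.
# A_unknown should lie within the range of the standards (no extrapolation).
A_unknown = 0.400
c_unknown = (A_unknown - i) / m
print(f"unknown concentration = {c_unknown:.2f}")
```

For a calibration that is nonlinear at higher concentrations, the same approach can be used with `np.polyfit(conc, absorb, 2)`, subject to the cautions about absorbance values above one discussed above.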
1.3: Instrumental Setup of a Spectrophotometer
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/1%3A_General_Background_on_Molecular_Spectroscopy/1.3%3A__Instrumental_Setup_of_a_Spectrophotometer
A spectrophotometer has five major components: a source, a monochromator, a sample holder, a detector, and a readout device. Most spectrophotometers in use today are linked to and operated by a computer, and the data recorded by the detector are displayed in some form on the computer screen.

1.3A: Radiation Sources
1.3B: Monochromators
1.3C: Detectors

This page titled 1.3: Instrumental Setup of a Spectrophotometer is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.3A: Radiation Sources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/1%3A_General_Background_on_Molecular_Spectroscopy/1.3%3A__Instrumental_Setup_of_a_Spectrophotometer/1.3A%3A_Radiation_Sources
Describe the desirable features of a radiation source for a spectrophotometer.An obvious feature is that the source must cover the region of the spectrum that is being monitored. Beyond that, one important feature is that the source has high power or intensity, meaning that it gives off more photons. Since any detector senses signal above some noise, having more signal increases what is known as the signal-to-noise ratio and improves the detection limit. The second important feature is that the source be stable. Instability on the power output from a source can contribute to noise and can contribute to inaccuracy in the readings between standards and unknown samples.Plot the relative intensity of light emitted from an incandescent light bulb (y-axis) as a function of wavelength (x-axis). This plot is a classic observation known as blackbody radiation. On the same graph, show the output from a radiation source that operated at a hotter temperature.As shown in , the emission from a blackbody radiator has a specific wavelength that exhibits maximum intensity or power. The intensity diminishes at shorter and longer wavelengths. The output from a blackbody radiator is a function of temperature. As seen in , at hotter temperatures, the wavelength with maximum intensity moves toward the ultraviolet region of the spectrum.Examining the plots in , what does this suggest about the power that exists in radiation sources for the infrared portion of the spectrum?The intensity of radiation in the infrared portion of the spectrum diminishes considerably for most blackbody radiators, especially at the far infrared portions of the spectrum. That means that infrared sources do not have high power, which ultimately has an influence on the detection limit when using infrared absorption for quantitative analysis.Blackbody radiators are known as continuous sources. An examination of the plots in shows that a blackbody radiator emits radiation over a large continuous band of wavelengths. A monochromator can then be used to select out a single wavelength needed for the quantitative analysis. Alternatively, it is possible to scan through the wavelengths of radiation from a blackbody radiator and record the spectrum for the species under study.Explain the advantages of a dual- versus single-beam spectrophotometer.One way to set up a dual-beam spectrophotometer is to split the beam of radiation from the source and send half through a sample cell and half through a reference cell. The reference cell has a blank solution in it. The detector is set up to compare the two signals. Instability in the source output will show up equally in the sample and reference beam and can therefore be accounted for in the measurement. Remember that the intensity of radiation from the source varies with wavelength and drops off toward the high and low energy region of the spectrum. The changes in relative intensity can be accounted for in a dual-beam configuration.A laser (LASER = Light Amplification by Stimulated Emission of Radiation) is a monochromatic source of radiation that emits one specific frequency or wavelength of radiation. Because lasers put out a specific frequency of radiation, they cannot be used as a source to obtain an absorbance spectrum. However, lasers are important sources for many spectroscopic techniques, as will be seen at different points as we further develop the various spectroscopic methods. What you probably know about lasers is that they are often high-powered radiation sources. 
They emit a highly focused and coherent beam. Coherency refers to the observation that the photons emitted by a laser have identical frequencies and waves that are in phase with each other.A laser relies on two important processes. The first is the formation of a population inversion. A population inversion occurs for an energy transition when more of the species are in the excited state than are in the ground state. The second is the process of stimulated emission. Emission is when an excited state species emits radiation a). Absorption occurs when a photon with the exact same energy as the difference in energy between the ground and excited state of a species interacts with and transfers its energy to the species to promote it to the excited state c). Stimulated emission occurs when an incident photon that has exactly the same energy as the difference in energy between the ground and excited state of a transition interacts with the species in the excited state. In this case, the extra energy that the species has is converted to a photon that is emitted. In addition, though, the incident photon also is emitted. One final point is that the two photons in the stimulated emission process have their waves in phase with each other (are coherent) b). In absorption, one incident photon comes in and no photons come out. In stimulated emission, one incident photon comes in and two photons come out.Why is it impossible to create a 2-level laser?A 2-level laser involves a process with only two energy states, the ground and excited state. In a resting state, the system will have a large population of species in the ground state (essentially 100% as seen in ) and only a few or none in the excited state. Incident radiation of an energy that matches the transition is then applied and ground state species absorb photons and become excited. The general transition process is illustrated in a.Species in the excited state will give up the excess energy either as an emitted photon or heat to the surroundings. We will discuss this in more detail later on, but for now, it is acceptable to realize that excited state species have a finite lifetime before they lose their energy and return to the ground state. Without worrying about the excited state lifetime, let’s assume that the excited species remain in that state and incident photons can continue to excite additional ground state species into the excited state. As this occurs, the number of species in the excited state (e.g., the excited state population) will grow and the number in the ground state will diminish. The key point to consider is the system where 50% of the species are in the excited state and 50% of the species are in the ground state, as shown in b.For a system with exactly equal populations of the ground and excited state, incident photons from the radiation source have an equal probability of interacting with a species in the ground or excited state.If a photon interacts with a species in the ground state, absorption of the photon occurs and the species becomes excited. However, if another photon interacts with a species in the excited state, stimulated emission occurs, the species returns to the ground state and two photons are emitted. The net result is that for every ground state species that absorbs a photon and becomes excited there is a corresponding excited species that undergoes stimulated emission and returns to the ground state. 
Therefore it is not possible to get beyond the point of a 50-50 population and never possible to get a population inversion. A 2-level system with a 50-50 population is said to be a saturated transition.

Using your understanding of a 2-level system, explain what is meant by a 3-level and 4-level system. 3- and 4-level systems can function as a laser. How is it possible to achieve a population inversion in a 3- and 4-level system?

The diagrams for a 3-level and 4-level laser system are shown in the figure. For the 3-level system, incident photons from the radiation source excite species from level 1 to level 2, and species in level 2 rapidly relax to level 3, which has a relatively long lifetime. If they are stuck in level 3 long enough, it may be possible to deplete enough of the population from level 1 such that the population in level 3 is now higher than the population in level 1. The level 3 to level 1 transition is the lasing transition, and note that the incident photons from the source have a different energy than this transition, so no stimulated emission occurs. When the population inversion is achieved, a photon emitted from a species in level 3 can interact with another species that is excited to level 3, causing the stimulated emission of two photons. These emitted photons can interact with additional excited state species in level 3 to cause more stimulated emission, and the result is a cascade of stimulated emission. The photons in this large cascade or pulse all have the same frequency and are coherent. The process of populating level 3 in either the 3- or 4-level system using energy from the incident photons from the radiation source is referred to as optical pumping.

For the 4-level laser, the lasing transition is from level 3 to level 4, meaning that a population inversion is needed between levels 3 and 4 and not levels 3 and 1. Since the population of level 4 is much lower than the population of level 1, it is much easier to achieve a population inversion in a 4-level laser compared to a 3-level laser. Therefore, the 4-level laser is generally preferred and more common than a 3-level laser.

This page titled 1.3A: Radiation Sources is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
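The argument that a 2-level system saturates at a 50:50 population can be illustrated with a very simple rate model. The sketch below is an added illustration, not part of the original text; it assumes equal probabilities for absorption and stimulated emission and ignores spontaneous emission and other relaxation pathways. Under those assumptions the ground state population decays toward 0.5 and never drops below it, so no inversion is possible.

```python
# Minimal rate model for a 2-level system under optical pumping.
# Assumptions (for illustration only): absorption and stimulated emission are
# equally probable; spontaneous emission and other relaxation are ignored.

def two_level_populations(pump_rate=0.05, steps=200):
    n_ground, n_excited = 1.0, 0.0   # start with everything in the ground state
    for _ in range(steps):
        absorbed = pump_rate * n_ground      # ground-state species absorbing photons
        stimulated = pump_rate * n_excited   # excited-state species stimulated back down
        n_ground += stimulated - absorbed
        n_excited += absorbed - stimulated
    return n_ground, n_excited

print(two_level_populations(steps=50))    # approaches (0.5, 0.5)
print(two_level_populations(steps=5000))  # still (0.5, 0.5): no population inversion
```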
1.3B: Monochromators
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/1%3A_General_Background_on_Molecular_Spectroscopy/1.3%3A__Instrumental_Setup_of_a_Spectrophotometer/1.3B%3A_Monochromators
The two most common ways of achieving monochromatic radiation from a continuous radiation source are to use either a prism or a grating.Explain in general terms the mechanism in a prism and grating that leads to the attainment of monochromatic radiation. Compare the advantages and disadvantages of each type of device. What is meant by second order radiation in a grating? Describe the difference between a grating that would be useful for the infrared region of the spectrum and one that would be useful for the ultraviolet region of the spectrum.A prism disperses radiation because different wavelengths of radiation have different refractive indices in the material that makes up the prism. That causes different angles of refraction that disperse the radiation as it moves through the prism ).A grating is a device that consists of a series of identically shaped, angled grooves as shown in .The grating illustrated in is a reflection grating. Incoming light represented as A and B is collimated and appears as a plane wave. Therefore, as seen in a, the crest of the wave for A strikes a face of the grating before the crest of the wave for B strikes the adjoining face. Light that strikes the surface of the grating is scattered in all directions, one direction of which is shown in b for A and B. An examination of the paths for A and B in shows that B travels a further distance than A. For monochromatic radiation, if B travels an integer increment of the wavelength further than A, the two constructively interfere. If not, destructive interference results. Diffraction of polychromatic radiation off the grating leads to an interference pattern in which different wavelengths of radiation constructively and destructively interfere at different points in space.The advantage of a grating over a prism is that the dispersion is linear ). This means that a particular slit width allows an identical packet of wavelengths of radiation through to the sample. The dispersion of radiation with a prism is non-linear and, for visible radiation, there is less dispersion of the radiation toward the red end of the spectrum. See for a comparison of a glass and quartz prism. Note, the glass prism absorbs ultraviolet radiation in the range of 200-350 nm. The non-linear dispersion of a prism means that the resolution (ability to distinguish two nearby peaks) in a spectrum will diminish toward the red end of the spectrum. Linear dispersion is preferable. The other disadvantage of a prism is that it must transmit the radiation, whereas gratings usually rely on a reflection process.An important aspect of a grating is that more than one wavelength of radiation will exhibit constructive interference at a given position. Without incorporating other specific design features into the monochromator, all wavelengths that constructively interfere will be incident on the sample. For example, radiation with a wavelength of 300 nm will constructively interfere at the same position as radiation with a wavelength of 600 nm. This is referred to as order overlap. There are a variety of procedures that can be used to eliminate order overlap, details of which can be found at the following: Diffraction Gratings.The difference between gratings that are useful for the ultraviolet and visible region as compared to those that are useful for the infrared region involves the distance between the grooves. 
Gratings for the infrared region have a much wider spacing between the grooves.As discussed earlier, the advantage of making the slit width smaller is that it lets a smaller packet of wavelengths through to the sample. This improves the resolution in the spectrum, which means that it is easier to identify and distinguish nearby peaks. The disadvantage of making the slit width smaller is that it allows fewer photons (less power) through to the sample. This decreases the signal-to-noise ratio and raises the detection limit for the species being analyzed.This page titled 1.3B: Monochromators is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
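Order overlap follows directly from the grating equation for a reflection grating, n\(\lambda\) = d(sin \(\theta_i\) + sin \(\theta_r\)), where d is the groove spacing, \(\theta_i\) the angle of incidence, \(\theta_r\) the angle of diffraction, and n the order. The short sketch below is added here as an illustration; the groove density and incidence angle are arbitrary choices. It shows that first-order 600 nm radiation and second-order 300 nm radiation diffract at exactly the same angle, which is the order overlap described above.

```python
import math

def diffraction_angle(wavelength_nm, order, grooves_per_mm=1200, incidence_deg=30.0):
    """Angle of diffraction (degrees) from the reflection-grating equation
    n*lambda = d*(sin(theta_i) + sin(theta_r))."""
    d_nm = 1e6 / grooves_per_mm                       # groove spacing in nm
    sin_r = order * wavelength_nm / d_nm - math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(sin_r))

# First-order 600 nm and second-order 300 nm constructively interfere
# at the same position: order overlap.
print(diffraction_angle(600, order=1))   # same angle...
print(diffraction_angle(300, order=2))   # ...as this one
```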
1.3C: Detectors
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/1%3A_General_Background_on_Molecular_Spectroscopy/1.3%3A__Instrumental_Setup_of_a_Spectrophotometer/1.3C%3A_Detectors
A photomultiplier tube is commonly used to measure the intensity of ultraviolet and visible radiation. The measurement is based initially on the photoelectric effect and then on the amplification of the signal through a series of dynodes ). The initiation of the detection process involves radiation striking the surface of a photoactive surface and dislodging electrons. Electrons dislodged from this surface are accelerated toward the first dynode. This acceleration is accomplished by having the first dynode at a high voltage. Because of the acceleration, each electron released from the photoactive surface dislodges several electrons when it strikes the surface of the first dynode. Electrons emitted from the first dynode are accelerated toward the second dynode, etc. to eventually create a cascade of electrons that causes a large current.The advantage of the photomultiplier tube is its ability to measure relatively small amounts of electromagnetic radiation because of the amplification process that occurs. A disadvantage is that any spurious signal such as stray radiation is also amplified in the process, leading to an enhancement of the noise. The noise can be reduced by cooling the photomultiplier tube, which is done with some instruments. A caution when using a photomultiplier tube is that it must not be exposed to too high an intensity of radiation, since high intensity radiation can damage the photoelectric surface.Photomultiplier tubes are useful for the measurement of radiation that produces a current through the photoelectric effect – primarily ultraviolet and visible radiation. It is not useful for measuring the intensity of low energy radiation in the infrared and microwave portion of the spectrum.A photodiode array detector consists of an array or series of adjacent photosensitive diodes ). Radiation striking a diode causes a charge buildup that is proportional to the intensity of the radiation. The individual members of the array are known as pixels and are quite small in size. Since many pixels or array elements can be fit onto a small surface area, it is possible to build an array of these pixels and shine dispersed light from a monochromator onto it, thereby measuring the intensity of radiation for an entire spectrum. The advantage of the photodiode array detector is the potential for measuring multiple wavelengths at once, thereby measuring the entire spectrum of a species at once. Unfortunately, photodiode arrays are not that sensitive.A more sensitive array device uses a charge-transfer process. These are often two-dimensional arrays with many more pixels than a photodiode array. Radiation striking pixels in the array builds up a charge that is measured in either a charge-injection device (CID) or charge-coupled device (CCD).This page titled 1.3C: Detectors is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
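The amplification provided by the dynode chain can be estimated with a simple calculation, added here for illustration: if each dynode releases roughly \(\delta\) secondary electrons per incident electron and there are N dynodes, the overall gain is approximately \(\delta^{N}\). For representative, assumed values of \(\delta \approx 4\) and N = 10,

\[\mathrm{gain \approx \delta^{N} = 4^{10} \approx 1 \times 10^{6}} \nonumber \]

so a single photoelectron can produce on the order of a million electrons at the anode, which is why a photomultiplier tube can measure such low levels of radiation.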
2.1: Introduction
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/2%3A_Ultraviolet_Visible_Absorption_Spectroscopy/2.1%3A_Introduction
Compare and contrast the absorption of ultraviolet (UV) and visible (VIS) radiation by an atomic substance (something like helium) with that of a molecular substance (something like ethylene).Do you expect different absorption peaks or bands from an atomic or molecular substance to have different intensities? If so, what does this say about the transitions?UV/VIS radiation has the proper energy to excite valence electrons of chemical species and cause electronic transitions.For atoms, the only process we need to think about is the excitation of electrons, (i.e., electronic transitions), from one atomic orbital to another. Since the atomic orbitals have discrete or specific energies, transitions among them have discrete or specific energies. Therefore, atomic absorption spectra consist of a series of “lines” at the wavelengths of radiation (or frequency of radiation) that correspond in energy to each allowable electronic transition. The diagram in represents the energy level diagram of any multielectron atom.The different lines in the spectrum will have different intensities. As we have already discussed, different transitions have different probabilities or different molar absorptivities, which accounts for the different intensities. The process of absorption for helium is shown in in which one electron is excited to a higher energy orbital. Several possible absorption transitions are illustrated in the diagram.The illustration in represents the atomic emission spectrum of helium and clearly shows the “line” nature of an atomic spectrum.For molecules, there are two other important processes to consider besides the excitation of electrons from one molecular orbital to another. The first is that molecules vibrate. Molecular vibrations or vibrational transitions occur in the infrared portion of the spectrum and are therefore lower in energy than electronic transitions. The second is that molecules can rotate. Molecular rotations or rotational transitions occur in the microwave portion of the spectrum and are therefore lower in energy than electronic and vibrational transitions. The diagram in represents the energy level diagram for a molecule. The arrows in the diagram represent possible transitions from the ground to excited states.Note that the vibrational and rotational energy levels in a molecule are superimposed over the electronic transitions. An important question to consider is whether an electron in the ground state (lowest energy electronic, vibrational and rotational state) can only be excited to the first excited electronic state (no extra vibrational or rotational energy), or whether it can also be excited to vibrationally and/or rotationally excited states in the first excited electronic state. It turns out that molecules can be excited to vibrationally and/or rotationally excited levels of the first excited electronic state, as shown by arrows in . Molecules can also be excited to the second and higher excited electronic states. 
Therefore, we can speak of a molecule as existing in the second excited rotational state of the third excited vibrational state of the first excited electronic state.One consequence in the comparison of atomic and molecule absorption spectra is that molecular absorption spectra ought to have many more transitions or lines in them than atomic spectra because of all the vibrational and rotational excited states that exist.Compare a molecular absorption spectrum of a dilute species dissolved in a solvent at room temperature versus the same sample at 10K.The difference to consider here is that the sample at 10K will be frozen into a solid whereas the sample at room temperature will be a liquid. In the liquid state, the solute and solvent molecules move about via diffusion and undergo frequent collisions with each other. In the solid state, collisions are reduced considerably.What is the effect of collisions of solvent and solute molecules? Collisions between molecules cause distortions of the electrons. Since molecules in a mixture move with a distribution of different speeds, the collisions occur with different degrees of distortion of the electrons. Since the energy of electrons depends on their locations in space, distortion of the electrons causes slight changes in the energy of the electrons. Slight changes in the energy of an electron means there will be a slight change in the energy of its transition to a higher energy state. The net effect of collisions is to cause a broadening of the lines in the spectrum. The spectrum at room temperature will show significant collisional broadening whereas the spectrum at 10K will have minimal collisional broadening. The collisional broadening at room temperature in a solvent such as water is significant enough to cause a blurring together of the energy differences between the different rotational and vibrational states, such that the spectrum consists of broad absorption bands instead of discrete lines. By contrast, the spectrum at 10K will consist of numerous discrete lines that distinguish between the different rotationally and vibrationally excited levels of the excited electronic states. The diagrams in show the difference between the spectrum at room temperature and 10K, although the one at 10K does not contain nearly the number of lines that would be observed in the actual spectrum.Are there any other general processes that contribute to broadening in an absorption spectrum?The other general contribution to broadening comes from something known as the Doppler Effect. The Doppler Effect occurs because the species absorbing or emitting radiation is moving relative to the detector. Perhaps the easiest way to think about this is to consider a species moving away from the detector that emits a specific frequency of radiation heading toward the detector. The frequency of radiation corresponds to that of the energy of the transition, so the emitted radiation has a specific, fixed frequency. The picture in shows two species emitting waves of radiation toward a detector. It is worth focusing on the highest amplitude portion of each wave. Also, in , assume that the detector is on the right side of the diagram and the right side of the two emitting spheres. The emission process to produce the wave of radiation requires some finite amount of time. 
If the species is moving away from the detector, even though the frequency is fixed, to the detector it will appear as if each of the highest amplitude regions of the wave is lagging behind where it would be if the species were stationary (see the upper sphere in the figure). The result is that the wavelength of the radiation appears longer, meaning that the frequency appears lower. For visible radiation, we say that the radiation from the emitting species is red-shifted. The lower sphere in the figure is moving towards the detector. Now the highest amplitude regions of the wave are appearing at the detector faster than expected. This radiation is blue-shifted. In a solution, different species are moving in different directions relative to the detector. Some exhibit no Doppler shift. Others would be blue-shifted whereas others would be red-shifted, and the degree of red- and blue-shift varies among different species. The net effect would be that the emission peak is broadened. The same process occurs with the absorption of radiation as well.

The emission spectrum in the figure represents the Doppler broadening that would occur for a gas phase atomic species where the atoms are not moving (top) and then moving with random motion (bottom).

A practical application of the Doppler Effect is the measurement of the distance of galaxies from the Earth. Because the universe is expanding, distant galaxies recede from us. Hubble's Law is the observation that the farther away a galaxy is, the faster it recedes, and there is a precise formula that relates a galaxy's recessional speed to its distance. More distant galaxies therefore show a larger red shift in their radiation due to the Doppler Effect than galaxies that are closer. Measurements of the red shift are used to determine the placement of galaxies in the universe.

This page titled 2.1: Introduction is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
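The size of the Doppler shift for molecular speeds is easy to estimate; the following worked example is added here, with a representative gas-phase speed chosen for illustration. For a species moving at v = 500 m/s directly away from the detector while emitting at \(\lambda_o\) = 500 nm, the non-relativistic Doppler expression gives

\[\mathrm{\Delta\lambda = \lambda_o \dfrac{v}{c} = (500\: nm)\left(\dfrac{500\: m/s}{2.998 \times 10^{8}\: m/s}\right) \approx 8 \times 10^{-4}\: nm} \nonumber \]

The shift from any one emitter is tiny, but because the emitters move with a distribution of speeds and directions, the sum of all of these small red and blue shifts broadens the observed line.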
2.2: Effect of Conjugation
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/2%3A_Ultraviolet_Visible_Absorption_Spectroscopy/2.2%3A_Effect_of_Conjugation
In organic compounds, the bonding orbitals are almost always filled and the anti-bonding orbitals are almost always empty. The important consideration becomes the ordering of the molecular orbitals in an energy level diagram. The figure shows the typical ordering that would occur for an organic compound with \(\pi\) orbitals.

The most important energy transition to consider is the one from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO). This will be the lowest energy transition. In the case of 1-butene, the lowest energy transition would be the \(\pi\)-\(\pi\)* transition. The UV/VIS absorption spectrum for 1-butene is shown in the figure. The \(\lambda\)max value is about 176 nm, which is in the vacuum ultraviolet portion of the spectrum.

1,3-Butadiene has two double bonds and is said to be a conjugated system. There are two ways we could consider what happens in butadiene. The first is to consider each double bond separately, showing how the p-orbitals overlap to form the \(\pi\) and \(\pi\)* orbitals. Each of these double bonds and energy level diagrams is comparable to the double bond in 1-butene. However, because of the conjugation in 1,3-butadiene, you can think of the \(\pi\) and \(\pi\)* orbitals from each double bond as further overlapping to create the energy level diagram in the bottom picture. Because of the additional overlap, the lowest energy transition in butadiene is lower than that in 1-butene. Therefore, the spectrum is expected to shift toward the red.

A better way to consider the situation is to examine all the possible orientations of the p-orbitals in 1,3-butadiene. The representation of 1,3-butadiene in the figure shows how the four p-orbitals are all positioned side-by-side to overlap with each other.

The four pictures in the figure represent the possible alignments of the signs of the wave functions in 1,3-butadiene. In a, all four p-orbitals constructively overlap with each other. In b, two adjacent pairs of p-orbitals constructively overlap with each other. In c, only the pair of p-orbitals in the center has constructive overlap. In d, there is no constructive overlap and only destructive overlap occurs.

Rank these from high to low energy. The orbital in which all four p-orbitals overlap would be the lowest in energy. The next has two regions of overlap. The third has only one region of overlap and the highest energy orbital has no regions of overlap. Because there are four electrons to put in the orbitals (one from each of the contributing p-orbitals), the bottom two orbitals are filled and the top two are empty.

The lowest energy HOMO to LUMO transition will be lower than observed in 1-butene. The UV/VIS spectrum of 1,3-butadiene is shown in the figure. In this case, the \(\lambda\)max value is at about 292 nm, a significant difference from the value of 176 nm in 1-butene. The effect of increasing conjugation is to shift the spectrum toward longer wavelength (lower frequency, lower energy) absorptions.

Another comparative set of conjugated systems occurs with fused ring polycyclic aromatic hydrocarbons such as naphthalene, anthracene and pentacene. The spectra in the figure are for benzene, naphthalene, anthracene and pentacene. Note that as more rings and more conjugation are added, the spectrum shifts further toward and into the visible region of the spectrum.
UV/VIS absorption spectra of benzene, naphthalene, anthracene and pentacene. This page titled 2.2: Effect of Conjugation is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
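As a quick numerical illustration of the trend discussed in this section, the short Python sketch below converts the quoted \(\lambda\)max values (176 nm for 1-butene and 292 nm for 1,3-butadiene) into approximate transition energies using the relation \(E = hc/\lambda\). The physical constants are standard values; the script itself is only an illustrative sketch.

```python
# A minimal sketch: convert the lambda-max values quoted above into transition
# energies using E = h*c/lambda.
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

for name, lam_nm in [("1-butene", 176), ("1,3-butadiene", 292)]:
    energy_J = h * c / (lam_nm * 1e-9)
    print(f"{name}: lambda-max = {lam_nm} nm -> transition energy ~ {energy_J / eV:.2f} eV")

# The longer lambda-max of the conjugated diene corresponds to a smaller HOMO-LUMO
# gap, which is the shift toward longer wavelengths described in the text.
```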
2.3: Effect of Non-bonding Electrons
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/2%3A_Ultraviolet_Visible_Absorption_Spectroscopy/2.3%3A_Effect_of_Non-bonding_Electrons
Benzene has a set of conjugated \(\pi\)-bonds and the lowest energy transition would be a \(\pi\)-\(\pi\)* transition as shown in . The UV/VIS absorption spectrum for benzene is shown in . Benzene absorbs radiation in the vacuum ultraviolet over the range from 160-208 nm with a \(\lambda\)max value of about 178 nm. Pyridine has a conjugation of double bonds comparable to what occurs in benzene, but the nitrogen atom also has a lone pair of non-bonding electrons. For pyridine, the lowest energy transition therefore involves the n-\(\pi\)* orbitals and this will be much lower in energy than the \(\pi\)-\(\pi\)* transition in pyridine or benzene. The UV/VIS absorption spectrum of pyridine is shown in . The shift toward longer wavelengths when compared to benzene is quite noticeable in the spectrum of pyridine, where the peaks from 320-380 nm represent the n-\(\pi\)* transition and the peak at about 240 nm is a \(\pi\)-\(\pi\)* transition. Note that the intensity and therefore the molar absorptivity of the n-\(\pi\)* transition is lower than that of the \(\pi\)-\(\pi\)* transition. This is usually the case with organic compounds. Dye molecules absorb in the visible portion of the spectrum. They absorb wavelengths complementary to the color of the dye. Most \(\pi\)-\(\pi\)* transitions in organic molecules are in the ultraviolet portion of the spectrum unless the system is highly conjugated. Visible absorption is achieved in dye molecules by having a combination of conjugation and non-bonding electrons. Azo dyes with the N=N group are quite common, one example of which is shown in . This page titled 2.3: Effect of Non-bonding Electrons is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.4: Effect of Solvent
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/2%3A_Ultraviolet_Visible_Absorption_Spectroscopy/2.4%3A_Effect_of_Solvent
When the absorption spectrum of pyridine is recorded in hexane and then in methanol, the peaks in the 320-380 nm range shift toward shorter wavelengths (a blue-shift). These are the lowest energy peaks in the spectrum and correspond to the n-\(\pi\)* transition in pyridine. Hexane (C6H14) is a non-polar hydrocarbon. Methanol (CH3OH) is a polar solvent with the ability to form hydrogen bonds. For pyridine, the hydrogen atom of the hydroxyl group of methanol will form hydrogen bonds with the lone pair on the nitrogen atom, as shown in . Hexane cannot form such hydrogen bonds. In order to account for the blue-shift in the spectrum, we need to consider what, if anything, will happen to the energies of the n, \(\pi\), and \(\pi\)* orbitals. Bonding between two atomic orbitals leads to the formation of a bonding and anti-bonding molecular orbital, one of which drops in energy and the other of which rises in energy. The electrostatic attraction between a positively charged hydrogen atom and negatively charged lone pair of electrons in a hydrogen-bond (as illustrated in for methanol and pyridine) is a stabilizing interaction. Therefore, the energy of the non-bonding electrons will be lowered. The picture in shows representations of a \(\pi\)- and \(\pi\)*-orbital. Electrons in a \(\pi\)-orbital may be able to form a weak dipole-dipole interaction with the hydroxyl hydrogen atom of methanol. This weak interaction may cause a very slight drop in energy of the \(\pi\)-orbital, but it will not be nearly as pronounced as that of the non-bonding electrons. Similarly, if an electron has been excited to the \(\pi\)*-orbital, it has the ability to form a weak dipole-dipole interaction with the hydroxyl hydrogen atom of methanol. This weak interaction will cause a drop in energy of the \(\pi\)*-orbital, but it will not be nearly as pronounced as that of the non-bonding electrons. However, the drop in energy of the \(\pi\)*-orbital will be larger than that of the \(\pi\)-orbital because the \(\pi\)*-orbital points out from the C=C bond and is more accessible to interact with the hydroxyl hydrogen atom of methanol than the \(\pi\)-orbital. The diagram in shows the relative changes in the energies of the n, \(\pi\), and \(\pi\)* orbitals that would occur on changing the solvent from hexane to methanol, with stabilization occurring in the order n > \(\pi\)* > \(\pi\). An examination of the relative energies between hexane and methanol shows that both the n and \(\pi\)* levels drop in energy, but the drop of the n level is greater than the drop of the \(\pi\)* level. Therefore, the n-\(\pi\)* transition moves to higher energy, and hence a blue-shift is observed in the peaks in the 320-380 nm range of the pyridine spectrum. The blue-shift that is observed is referred to as a hypsochromic shift. The absorption in benzene corresponds to the \(\pi\)-\(\pi\)* transition. Using the diagram in , the drop in energy of the \(\pi\)*-orbital is more than that of the \(\pi\)-orbital. Therefore, the \(\pi\)-\(\pi\)* transition is slightly lower in energy and the peaks shift toward the red.
The red-shift is referred to as a bathochromic shift. Note as well that the change in the position of the peak for the \(\pi\)-\(\pi\)* transition of benzene would be less than that for the n-\(\pi\)* transition of pyridine because the stabilization of the non-bonding electrons is greater than the stabilization of the electrons in the \(\pi\)*-orbital. This page titled 2.4: Effect of Solvent is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
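The direction of these shifts can be illustrated with a small bookkeeping exercise. In the Python sketch below, the orbital energies and the hydrogen-bond stabilizations are arbitrary, hypothetical numbers chosen only so that the stabilization follows the order n > \(\pi\)* > \(\pi\) argued above; the sketch is not a calculation of real orbital energies.

```python
# A toy bookkeeping of the orbital stabilizations described above, in arbitrary units.
orbitals_hexane = {"n": -2.0, "pi": -3.0, "pi*": 1.0}   # hypothetical orbital energies
stabilization   = {"n": 0.60, "pi": 0.05, "pi*": 0.20}  # hypothetical H-bond stabilization
orbitals_methanol = {k: orbitals_hexane[k] - stabilization[k] for k in orbitals_hexane}

for solvent, E in [("hexane", orbitals_hexane), ("methanol", orbitals_methanol)]:
    print(f"{solvent}: n->pi* gap = {E['pi*'] - E['n']:.2f}, "
          f"pi->pi* gap = {E['pi*'] - E['pi']:.2f}")

# The n->pi* gap widens (hypsochromic/blue shift) while the pi->pi* gap narrows slightly
# (bathochromic/red shift), matching the behavior described for pyridine and benzene.
```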
2.5: Applications
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/2%3A_Ultraviolet_Visible_Absorption_Spectroscopy/2.5%3A_Applications
Is UV/VIS absorption spectroscopy a useful tool for qualitative analysis? The answer depends in part on what type of system you are examining. Ultraviolet-absorbing organic molecules usually involve n-\(\pi\)* and \(\pi\)-\(\pi\)* transitions. Since UV/VIS absorption spectra are usually recorded at room temperature in solution, collisional broadening leads to a blurring together of all of the individual lines that would correspond to excitations to the different vibrational and rotational states of a given electronic state. As such, UV/VIS absorption spectra of organic compounds are not all that different and distinct from each other. Whereas we can reliably assign unique structures to molecules using the spectra that are obtained in NMR spectroscopy, the spectra in UV/VIS spectroscopy do not possess enough detail for such an analysis. Therefore, UV/VIS spectroscopy is not that useful a tool for qualitative analysis of organic compounds. However, a particular organic compound does have a specific UV/VIS absorption spectrum as seen in the various examples provided above. If the spectrum of an unknown compound exactly matches that of a known compound (provided both have been recorded under the same conditions – in the same solvent, at the same pH, etc.), it is strong evidence that the compounds are the same. However, because of the featureless nature of many UV/VIS spectra, such a conclusion must be reached with caution. The use of a UV-diode array detector as a liquid chromatographic detection method is quite common. In this case, a match of the spectra combined with a match in retention time between a known and an unknown can be used to confirm the identity of the compound. Many transition metal ions have distinct UV/VIS absorption spectra that involve d-d electron transitions. The position of peaks in the spectra can vary significantly depending on the ligand, and there is something known as the spectrochemical series that can be used to predict certain changes that will be observed as the ligands are varied. UV/VIS spectroscopy can oftentimes be used to reliably confirm the presence of a particular metal species in solution. Some metal species also have absorption processes that result from a charge transfer process. In a charge transfer process, the electron goes from the HOMO on one species to the LUMO on the other. In metal complexes, this can involve a ligand-to-metal transition or metal-to-ligand transition. The ligand-to-metal transition is more common and the process effectively represents an internal electron transfer or redox reaction. Certain pairs of organic compounds also associate in solution and exhibit charge-transfer transitions. An important aspect of charge transfer transitions is that they tend to have very high molar absorptivities. We have the ability to sensitively measure UV/VIS radiation using devices like photomultiplier tubes or array detectors. Provided the molar absorptivity is high enough, UV/VIS absorption is a highly sensitive detection method and is a useful tool for quantitative analysis. Since many substances absorb broad regions of the spectrum, however, the method is prone to interferences from other components of the matrix. Therefore, UV/VIS absorption spectroscopy is not that selective a method. The compound under study must often be separated from other constituents of the sample prior to analysis. The coupling of liquid chromatography with ultraviolet detection is one of the more common analysis techniques.
In addition to the high sensitivity, the use of UV/VIS absorption for quantitative analysis has wide applicability, is accurate, and is easy to use. The best wavelength to use is the one with the highest molar absorptivity (\(\lambda\)max), provided there are no interfering substances that absorb at the same wavelength. If there are such interferences, then either a separation step is needed or it may be possible to use a different wavelength that has a high enough molar absorptivity but no interference from components of the matrix. Several variables can influence the appearance of the spectrum, and we have discussed a number of these already in the unit. The solvent can have an effect and cause bathochromic and hypsochromic shifts. Species in the matrix that may form dipole-dipole interactions including hydrogen bonds can alter the spectra as well. Metal ions that can form donor-acceptor complexes can have the same effect. Temperature can have an effect on the spectrum. The electrolyte concentration can have an effect as well. As discussed above, the possibility that the sample has interferences that absorb the same radiation must always be considered. Finally, pH can have a pronounced effect because the spectrum of protonated and deprotonated acids and bases can be markedly different from each other. In fact, UV/VIS spectroscopy is commonly used to measure the pKa of new substances. The reaction below shows a generalized dissociation of a weak acid (HA) into its conjugate base.\[\mathrm{HA + H_2O = A^– + H_3O^+} \nonumber \]The dissociation reaction represented above is slow on the time scale of absorption (the absorption of a photon occurs over a time scale of \(10^{-14}\) to \(10^{-15}\) seconds). Because the reaction is slow on this time scale, during the absorption of a photon the species is in only one of the two forms (either HA or A–). Therefore, if the solution is at a pH where both species are present, peaks for both will show up in the spectrum. To measure the pKa, standards must first be analyzed in a strongly acidic solution, such that all of the species is in the HA form, and a standard curve for HA can be generated. Then standards must be analyzed in a strongly basic solution, such that all of the species is in the A– form, to generate a standard curve for A–. At intermediate pH values close to the pKa, both HA and A– will be present and the two standard curves can be used to calculate the concentration of each species. The pH and two concentrations can then be substituted into the Henderson-Hasselbalch equation to determine the pKa value.\[\mathrm{pH = pKa + \log\left(\dfrac{[A^–]}{[HA]}\right)} \nonumber \] This page titled 2.5: Applications is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
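As a sketch of the pKa calculation described in this section, suppose the two standard curves have already been used to determine the concentrations of HA and A– in a solution of known intermediate pH. The pH and concentrations in the Python example below are hypothetical values used only for illustration.

```python
import math

# Hypothetical measured values: the solution pH and the concentrations of HA and A-
# obtained from the standard curves prepared under strongly acidic and strongly
# basic conditions, respectively.
pH = 4.90
conc_HA = 3.2e-5   # mol/L of the protonated form
conc_A = 1.8e-5    # mol/L of the deprotonated form

# Rearranged Henderson-Hasselbalch equation: pKa = pH - log([A-]/[HA])
pKa = pH - math.log10(conc_A / conc_HA)
print(f"estimated pKa = {pKa:.2f}")   # ~5.15 for these illustrative numbers
```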
2.6: Evaporative Light Scattering Detection
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/2%3A_Ultraviolet_Visible_Absorption_Spectroscopy/2.6%3A_Evaporative_Light_Scattering_Detection
Evaporative light scattering detection is a specialized technique in which UV radiation is used to detect non-UV-absorbing compounds separated by liquid chromatography. The column effluent is passed through a heated chamber that evaporates the mobile phase solvent. Non-volatile analyte compounds, which are typical of compounds separated by liquid chromatography, form solid particulates when the solvent is evaporated. The solid particulates scatter UV radiation, which will lead to a reduction in the UV power at the detector (i.e., photomultiplier tube) when a compound elutes from the chromatographic column. The method is more commonly used to determine the presence and retention time of non-UV-absorbing species in a chromatographic analysis rather than their concentration. It is common in liquid chromatographic separations to employ a buffer to control the pH of the mobile phase. Many buffers will form particulates on evaporation of the solvent and interfere with evaporative light scattering detection. Evaporative light scattering detection is encompassed more broadly within a technique known as turbidimetry. In turbidimetric measurements, the detector is placed in line with the source and the decrease in power from scattering by particulate matter is measured. Nephelometry is another technique based on scattering, except now the detector is placed at 90o to the source and the power of the scattered radiation is measured. Turbidimetry can be measured using a standard UV/VIS spectrophotometer; nephelometry can be measured using a standard fluorescence spectrophotometer (discussed in Chapter 3). Turbidimetry is better for samples that have a high concentration of scattering particles where the power reaching the detector will be significantly less than the power of the source. Nephelometry is preferable for samples with only a low concentration of scattering particles. Turbidimetry and nephelometry are widely used to determine the clarity of solutions such as water, beverages, and food products. This page titled 2.6: Evaporative Light Scattering Detection is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.2: Energy States and Transitions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/3%3A_Molecular_Luminescence/3.2%3A_Energy_States_and_Transitions
Fluorescence only occurs after a chemical species has first been excited by electromagnetic radiation. The emission of radiation by a solid object heated in a flame (e.g., a piece of iron) is not fluorescence because the excitation has occurred thermally rather than through the absorption of electromagnetic radiation. Fluorescence can occur from species that have been excited by UV/VIS radiation. To consider what happens in the process of fluorescence, we need to think of the possible energy states for a ground and excited state system. Draw an energy level diagram for a typical organic compound with \(\pi\) and \(\pi\)* orbitals. represents the energy levels for a typical organic compound in which the \(\pi\) orbitals are full and the \(\pi\)* orbitals are empty. Now consider the electron spin possibilities for the ground and excited state. Are there different possible ways to orient the spins? (If so, these represent different spin states.) The ground state, which is shown on the left in , has two electrons in the \(\pi\)-orbital. These two electrons must have opposite spins or else they would have the same four quantum numbers. Therefore, there is only one possible way to align the spins of the two electrons in the \(\pi\)-orbital. The excited state has one electron in the \(\pi\)-orbital and one electron in the \(\pi\)*-orbital as shown in . In this case, there are two possible ways we might align the spins. In one case, the electron in the \(\pi\)*-orbital could have the opposite spin of the electron in the \(\pi\)-orbital (e.g., the electrons have paired spins, even though they are in different orbitals – see , middle diagram). In the other case, the electron in the \(\pi\)*-orbital could have a spin that is parallel with the electron in the \(\pi\)-orbital (see – far right diagram). In both cases, it does not matter which electron has spin-up and which has spin-down; the only important point is that in one case the two spins are opposite and in the other they are parallel. The energy level diagram in shows representations for the two possibilities. Do you think these different spin states have different energies? Since they are different from each other (i.e., spins parallel versus spins paired), it makes sense that they would have different energies. Which one do you expect to be lower in energy? To answer this question, we have to think back to a rule we established with placing electrons into atomic or molecular orbitals that have the same energy (i.e., are degenerate). We learned that electrons go into degenerate orbitals with parallel spins and only pair up their spins when forced to do so (e.g., an atomic \(p^3\) configuration has three unpaired electrons with parallel spins; only when we add a fourth electron to make a \(p^4\) configuration do two of the electrons have paired spins). The rationale we gave for this observation is that configurations with parallel spins in degenerate orbitals are lower in energy than configurations with paired spins (i.e., it takes energy to pair up electron spins). Applying this general concept to the situation above, we can reason that the configuration in which the electrons in the \(\pi\)- and \(\pi\)*-orbitals have parallel spins is lower in energy than the configuration in which the two electrons have paired spins.
The energy level diagrams in show the lower energy of the configuration where the electrons have parallel spins. If the spin state is defined as (2S + 1) where S represents the total electronic spin for the system, try to come up with names for the ground and possible excited states for the system that are based on their spin state. Remember that spin quantum numbers are either +½ or –½. S, the total electronic spin for the system, is the sum of the individual spin quantum numbers for all of the electrons. In the case of the ground state, for every electron with a spin of +½ there is an electron with a spin of –½. Therefore, the value of S is zero. The spin state, which is 2S + 1, would have a value of 1. In the case of the excited state in which the electrons have paired spins (+½ and –½), the value of S is also zero. Therefore, the spin state, which is 2S + 1, would have a value of 1. In the case of the excited state in which the electrons have parallel spins (+½ and +½; by convention, we use the positive value of the spin for parallel spins when determining the spin state), the value of S is now one. Therefore, the spin state, which is 2S + 1, would have a value of 3. The name we use to signify a system with a spin state of one is a singlet state. The name we use to signify a system with a spin state of three is a triplet state. Note that the ground state is a singlet state and that one of the excited states is a singlet state as well. We differentiate these by denoting the energy level with a number subscript. So the ground singlet state is denoted as S0 whereas the first excited singlet state is denoted as S1. It is possible to excite a molecular species to higher electronic states so that higher energy S2, S3, etc. singlet states exist as well. The lowest energy triplet state would be denoted as T1. There are T2, T3, etc. states as well. Now we can draw a more complex energy diagram for the molecule that shows different singlet and triplet levels. Draw a diagram of the energy levels for such a molecule. Draw arrows for the possible transitions that could occur for the molecule. Note how a triplet state is slightly lower in energy than the corresponding singlet state. Note as well that there are vibrational and rotational levels superimposed within the electronic states as we observed before when considering UV/VIS spectroscopy. The energy level diagram in shows the transitions that can occur within this manifold of energy states for an organic molecule. The transitions are numbered to facilitate our discussion of them.
Transition 1 (Absorption)
The transitions labeled with the number 1 represent the process of absorption of incident radiation that promotes the molecule to an excited electronic state. The diagram shows the absorption process to the S1 and S2 states. It is also possible to excite the molecule to higher vibrational and rotational levels within the excited electronic states, so there are many possible absorption transitions. The following are equations that show the absorption of different frequencies of radiation needed to excite the molecule to S1 and S2.\[\mathrm{S_0 + h\nu = S_1} \nonumber \]\[\mathrm{S_0 + h\nu' = S_2} \nonumber \]It is reasonable at first to think that there is an absorption transition that goes directly from the S0 to the T1 state.
This is a transition that involves a spin-flip, and it turns out that transitions that involve a spin-flip or change in spin state are forbidden, meaning that they do not happen (although, as we will soon see, sometimes transitions that are forbidden do happen). What is important here is that you will not get direct excitation from the S0 level to a higher energy triplet state. These transitions are truly forbidden and do not happen.
Transition 2 (Internal Conversion)
Internal conversion is the process in which an electron crosses over to another electronic state of the same spin multiplicity (e.g., singlet-to-singlet, triplet-to-triplet). The internal conversion shown is from S2 to S1 and involves a crossover into a higher energy vibrational state of S1. It is also possible to have internal conversion from S1 to a higher vibrational level of S0.
Transition 3 (Radiationless decay – loss of energy as heat)
The transitions labeled with the number 3 are known as radiationless decay or external conversion. These generally correspond to the loss of energy as heat to surrounding solvent or other solute molecules.\[\mathrm{S_1 = S_0 + heat} \nonumber \]\[\mathrm{T_1 = S_0 + heat} \nonumber \]Note that systems in S1 and T1 can lose their extra energy as heat. Also, systems excited to higher energy vibrational and rotational states also lose their extra energy as heat. The energy level diagram shows systems excited to higher vibrational levels of S1, and all of these will rapidly lose some of the extra energy as heat and drop down to the S1 level that is only electronically excited. An important consideration that affects the various processes that take place for excited state systems is the lifetimes of the different excited states. The lifetime of a particular excited state (e.g. the S1 state) depends to some degree on the specific molecular species being considered and the orbitals involved, but measurements of excited state lifetimes for many different compounds allow us to provide ballpark numbers for the lifetimes of different excited states. The lifetime of an electron in an S2 state is typically on the order of \(10^{-15}\) second. The lifetime of an electron in an S1 state depends on the energy levels involved. For a \(\pi\)-\(\pi\)* system, the lifetimes range from \(10^{-7}\) to \(10^{-9}\) second. For an n-\(\pi\)* system, the lifetimes range from \(10^{-5}\) to \(10^{-7}\) second. Since \(\pi\)-\(\pi\)* molecules are more commonly studied by fluorescence spectroscopy, S1 lifetimes are typically on the order of \(10^{-8}\) second. While this is a small number on an absolute scale of numbers, note that it is a large number compared to the lifetime of the S2 state. The lifetime of a vibrational state is typically on the order of \(10^{-12}\) second. Note that the lifetime of an electron in the S1 state is significantly longer than the lifetime of an electron in a vibrationally excited state of S1. That means that systems excited to vibrationally excited states of S1 rapidly lose heat (in about \(10^{-12}\) second) until reaching S1, where they then “pause” for about \(10^{-8}\) second.
Transition 4 (Fluorescence)
The transition labeled with the number 4 denotes the loss of energy from S1 as radiation. This process is known as fluorescence.\[\mathrm{S_1 = S_0 + h\nu} \nonumber \]Therefore, molecular fluorescence is a term used to describe a singlet-to-singlet transition in a system where the chemical species was first excited by absorption of electromagnetic radiation. Note that the diagram does not show molecular fluorescence occurring from the S2 level.
Fluorescence from the S2 state is extremely rare in molecules and there are only a few known systems where it occurs. Instead, what happens is that most molecules excited to energy states higher than S1 quickly (\(10^{-15}\) second) undergo an internal conversion to a high energy vibrational state of S1. They then rapidly lose the extra vibrational energy as heat and “pause” in the S1 state. From S1, they can either undergo fluorescence or undergo another internal conversion to a high energy vibrational state of S0 and then lose the energy as heat. The extent to which fluorescence or loss of heat occurs from S1 depends on particular features of the molecule and solution that we will discuss in more detail later in this unit. An important aspect of fluorescence from the S1 state is that the molecule can end up in vibrationally excited states of S0, as shown in the diagram above. Therefore, fluorescence emission from an excited state molecule can occur at a variety of different wavelengths. Just like we talked about with absorbance and the probability of different transitions (reflected in the magnitude of the molar absorptivity), fluorescent transitions have different probabilities as well. In some molecules, the S1-to-S0 fluorescent transition is the most probable, whereas in other molecules the most probable fluorescent transition may involve a higher vibrational level of S0. A molecule ending up in a higher vibrational level of S0 after a fluorescent emission will quickly lose the extra energy as heat and drop down to the lowest vibrational level of S0. So how do fluorescent light bulbs work? Inside the tube that makes up the bulb is a gas composed of argon and a small amount of mercury. An electrical current that flows through the gas excites the mercury atoms causing them to emit light. This light is not fluorescence because the gaseous species was excited by an electrical current rather than radiation. The light emitted by the mercury strikes the white powdery coating on the inside of the glass tube and excites it. This coating then emits light. Since the coating was excited by light and emits light, it is a fluorescence emission.
Transition 5 (Intersystem crossing)
The transition labeled with the number 5 is referred to as intersystem crossing. Intersystem crossing involves a spin-flip of the excited state electron. Remember that the electron has “paused” in S1 for about \(10^{-8}\) second. While there, it is possible for the species to interact with things in the matrix (e.g. collide with a solvent molecule) that can cause the electron in the ground and/or excited state to flip its spin. If the spin flip occurs, the molecule is now in a vibrationally excited level of T1 and it rapidly loses the extra vibrational energy as heat to drop down to the T1 electronic level. What do you expect for the lifetime of an electron in the T1 state? Earlier we had mentioned that transitions that involve a change in spin state are forbidden. Theoretically that means that an electron in the T1 state ought to be trapped there, because the only place for it to go on losing energy is to the S0 state. The effect of this is that electrons in the T1 state have a long lifetime, which can be on the order of \(10^{-4}\) to \(10^{0}\) seconds. There are two possible routes for an electron in the T1 state. One is that another spin flip can occur for one of the two electrons causing the spins to be paired.
If this happens, the system is now in a high-energy vibrational state of S0 and the extra energy is rapidly lost as heat to the surroundings through radiationless decay (transition 3).
Transition 6 (Phosphorescence)
The other possibility that can occur for a system in T1 is to emit a photon of radiation. Although theoretically a forbidden process, it does happen for some molecules. This emission, which is labeled with the number 6, is known as phosphorescence. There are two common occasions where you have likely seen phosphorescence emission. One is from glow-in-the-dark stickers. The other is if you have ever turned off your television in a dark room and observed that the screen has a glow that takes a few seconds to die down. Phosphorescence is usually a weak emission from most substances. Why is phosphorescence emission weak in most substances? One reason why phosphorescence is usually weak is that it requires intersystem crossing and population of the T1 state. In many compounds, radiationless decay and/or fluorescence from the S1 state occurs preferentially over intersystem crossing, and not many of the species ever make it to the T1 state. Systems that happen to have a close match between the energy of the S1 state and a higher vibrational level of the T1 state may have relatively high rates of intersystem crossing. Compounds with non-bonding electrons often have higher degrees of intersystem crossing because the energy difference between the S1 and T1 states in these molecules is less. Paramagnetic substances such as oxygen gas (O2) promote intersystem crossing because the magnetic dipole of the unpaired electrons of oxygen can interact with the magnetic spin dipole of the electrons in the species under study, although the paramagnetism also diminishes phosphorescence from T1. Heavy atoms such as Br and I in a molecule also tend to promote intersystem crossing. A second reason why phosphorescence is often weak has to do with the long lifetime of the T1 state. The longer the species is in the excited state, the more collisions it has with surrounding molecules. Collisions tend to promote the loss of excess energy as radiationless decay. Such collisions are said to quench fluorescence or phosphorescence. Observable levels of phosphorescent emission will require that collisions in the sample be reduced to a minimum. Hence, phosphorescence is usually measured on solid substances. Glow-in-the-dark stickers are a solid material. Chemical substances dissolved in solution are usually cooled to the point that the sample is frozen into a solid glass to reduce collisions before recording the phosphorescence spectrum. This requires a solvent that freezes to a clear glass, something that can be difficult to achieve with water as it tends to expand and crack when frozen. Which transition (\(\pi\)*-\(\pi\) or \(\pi\)*-n) would have a higher fluorescent intensity? Justify your answer. There are two reasons why you would expect the \(\pi\)*-n transition to have a lower fluorescent intensity. The first is that the molar absorptivity of n-\(\pi\)* transitions is less than that of \(\pi\)-\(\pi\)* transitions. Fewer molecules are excited for the n-\(\pi\)* case, so fewer are available to fluoresce. The second is that the excited state lifetime of the n-\(\pi\)* state (\(10^{-5}\) to \(10^{-7}\) second) is longer than that of the \(\pi\)-\(\pi\)* state (\(10^{-7}\) to \(10^{-9}\) second).
The longer lifetime means that more collisions and more collisional deactivation will occur for the n-\(\pi\)* system than the \(\pi\)-\(\pi\)* system. Now that we understand the transitions that can occur in a system to produce fluorescence and phosphorescence, we can examine the instrumental setup of a fluorescence spectrophotometer. This page titled 3.2: Energy States and Transitions is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
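The spin-state bookkeeping used in this section (S is the sum of the individual electron spins, and the multiplicity is 2S + 1) can be captured in a few lines of Python. The sketch below simply reproduces the singlet and triplet assignments worked out above.

```python
from fractions import Fraction

def multiplicity(spins):
    """Spin multiplicity 2S + 1, where S is the sum of the individual electron spins."""
    S = sum(spins)
    return 2 * S + 1

# Ground state (and the paired excited state): spins of +1/2 and -1/2 cancel -> singlet.
print(multiplicity([Fraction(1, 2), Fraction(-1, 2)]))   # prints 1
# Excited state with parallel spins (using the positive values, as noted above) -> triplet.
print(multiplicity([Fraction(1, 2), Fraction(1, 2)]))    # prints 3
```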
3.3: Instrumentation
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/3%3A_Molecular_Luminescence/3.3%3A_Instrumentation
What would constitute the basic instrumental design of a fluorescence spectrophotometer? In many ways the design of a fluorescence spectrophotometer is similar to that of a UV/VIS absorption spectrophotometer. We need a source of radiation and a monochromator to select out the desired wavelength of light. The device needs a sample holder and a detector to measure the intensity of the radiation. Just as in UV/VIS absorption spectroscopy, radiation is used to excite the sample. Unlike absorption spectroscopy, a fluorescent sample emits radiation, and the emission goes from the S1 level to either the S0 level or higher vibrational states of the S0 level. Since fluorescence involves an excitation and emission process, and the wavelengths at which these two processes occur will almost always be different, a fluorescence spectrophotometer requires an excitation and an emission monochromator. Also, since the emitted radiation leaves the sample in all directions, the detector does not need to be at 180o relative to the source as in an absorption instrument. Usually the detector is set at 90o to the incident beam, and mirrors are placed around the sample cell 180o to the source and 180o to the detector to reflect the source beam back through the sample and to reflect emitted radiation toward the detector. A diagram of the components of a fluorescence spectrophotometer is shown in . This page titled 3.3: Instrumentation is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.4: Excitation and Emission Spectra
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/3%3A_Molecular_Luminescence/3.4%3A_Excitation_and_Emission_Spectra
What would be the difference between an excitation and emission spectrum in fluorescence spectroscopy? In an excitation spectrum, the emission monochromator is set to some wavelength where the sample is known to emit radiation and the excitation monochromator is scanned through the different wavelengths. The excitation spectrum will look similar if not identical to the absorption spectrum obtained in UV/VIS spectroscopy. In an emission spectrum, the excitation monochromator is set to some wavelength known to excite the sample and the emission monochromator is scanned through the different wavelengths. Draw representative examples of the excitation and emission spectrum for a molecule. The important point to realize is that the only peak that overlaps between the excitation and emission spectrum is the S0-S1 transition. Otherwise, all the excitation peaks occur at higher frequencies or shorter wavelengths and all of the emission peaks occur at lower frequencies or longer wavelengths. The spectra in show the excitation and emission spectra of anthracene. Note that the only overlap occurs at 380 nm, which corresponds to the S0-S1 transition. Describe a way to measure the phosphorescence spectrum of a species that is not compromised by the presence of any fluorescence emission. The important thing to consider in addressing this question is that the lifetime of the S1 state from which fluorescence occurs is approximately \(10^{-8}\) second whereas the lifetime of the T1 state from which phosphorescence occurs is on the order of \(10^{-4}\) to \(10^{0}\) seconds. Because of these different lifetimes, fluorescence emission will decay away rather quickly while phosphorescence emission will decay away more slowly. The diagram in shows representations for the decay of fluorescence versus phosphorescence as a function of time if the radiation source were turned off. The two can be distinguished by using a pulsed source. A pulsed source is turned on for a brief instant and then turned off. Many fluorescence spectrophotometers use a pulsed source. The electronics on the detector can be coordinated with the source pulsing. When measuring fluorescence, the detector reads signal when the pulse is on. When measuring phosphorescence, a delay time during which the detector is turned off occurs after the pulse ends. Then the detector is turned on for some period of time, which is referred to as the gate time. The diagram also shows where the delay and gate times might be set for the sample represented in the decay curves. The proper gate time depends in part on how slowly the phosphorescence decays. You want a reasonable length of time to measure enough signal, but if the gate time is too long and little to no phosphorescence occurs toward the end of it, the detector is mostly measuring noise and the signal-to-noise ratio will be reduced. If performing quantitative analysis in fluorescence spectroscopy, which wavelengths would you select from the spectra you drew in the problem above? The two best wavelengths would be those that produced the maximum signal on the excitation and emission spectra. That will lead to the most sensitivity and lowest detection limits in the analysis. For the spectra of anthracene drawn in , that would correspond to an excitation wavelength of 360 nm and emission wavelength of 402 nm. The one exception is if the S0-S1 transition is the maximum on both spectra, which would mean having the excitation and emission monochromators set to the same wavelength.
The problem that occurs here is that the excitation beam of radiation will always exhibit some scatter as it passes through the sample. Scattered radiation appears in all directions and the detector has no way to distinguish this from fluorescence. Usually the excitation and emission wavelengths must be offset by some suitable value (often 30 nm) to keep the scatter to acceptable levels.This page titled 3.4: Excitation and Emission Spectra is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
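The delay-and-gate scheme described in this section can be illustrated with a simple model in which both emissions decay exponentially after the source pulse ends. In the Python sketch below, the lifetimes are ballpark values from this chapter, while the delay and gate times are hypothetical choices made only for illustration.

```python
import math

# Simple exponential-decay model of gated detection.
tau_fluorescence = 1e-8     # s, typical S1 lifetime
tau_phosphorescence = 1e-2  # s, chosen from the ~1e-4 to 1 s range quoted for T1
delay, gate = 1e-4, 5e-2    # s, detector off until `delay`, then integrates for `gate`

def integrated_emission(tau, t_start, t_stop):
    """Integral of exp(-t/tau) from t_start to t_stop (relative emission units)."""
    return tau * (math.exp(-t_start / tau) - math.exp(-t_stop / tau))

print("fluorescence collected in gate:    %.3e" % integrated_emission(tau_fluorescence, delay, delay + gate))
print("phosphorescence collected in gate: %.3e" % integrated_emission(tau_phosphorescence, delay, delay + gate))
# The short-lived fluorescence has decayed away before the gate opens, so essentially
# only the phosphorescence contributes to the measured signal.
```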
3.5: Quantum Yield of Fluorescence
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/3%3A_Molecular_Luminescence/3.5._Quantum_Yield_of_Fluorescence_((varphi_ce_F))
The quantum yield (\(\varphi_F\)) is a ratio that expresses the number of species that fluoresce relative to the total number of species that were excited. Earlier we said that anything that reduces the number of excited state species that undergo fluorescence is said to quench the fluorescence. The expression for the quantum yield will depend on the rate constants for the different processes that can occur for excited state species. Referring back to our original drawing of the different processes that can occur, we can write the following expression for the quantum yield, where kF is the rate constant for fluorescence, kIC is the rate constant for internal conversion, kEC is the rate constant for external conversion, kISC is the rate constant for intersystem crossing and kC is the rate constant for any other competing processes and includes photodecomposition of the sample. Excited state species sometimes have sufficient energy to decompose through processes of dissociation or predissociation. In dissociation, the electron is excited to a high enough vibrational level that the bond ruptures. In predissociation, the molecule undergoes internal conversion from a higher electronic state to an upper vibrational level of a lower electronic state prior to bond rupture. When putting a sample into a fluorescence spectrophotometer, it is usually desirable to block the excitation beam until just before making the measurement to minimize photodecomposition. \[\mathrm{\varphi_F = \dfrac{k_F}{k_F + k_{IC} + k_{EC} +k_{ISC} + k_C}} \nonumber \]Since this is a ratio, the limits of \(\varphi\)F are from 0 to 1. Species with quantum yields of 0.01 or higher (1 out of 100 excited species actually undergo fluorescence) are useful for analysis purposes.On first consideration it might seem reasonable to think that absorption spectroscopy is more sensitive than fluorescence spectroscopy. As stated above, for some compounds that we measure by fluorescence, only one of the 100 species that is excited undergoes fluorescence emission. In this case, 100 photons are absorbed but only one is emitted. The answer though requires a different consideration.The measurement of absorption involves a comparison of \(P\) to \(P_o\). At low concentrations, these two values are large and similar in magnitude. Therefore, at low concentrations, absorption involves the measurement of a small difference between two large signals. Fluorescence, on the other hand, is measured at 90o to the source. In the absence of fluorescence, as in a blank solution, there ought to be no signal reaching the detector (however, there is still some scattered and stray light that may reach the detector as noise). At low concentrations, fluorescence involves the measurement of a small signal over no background. For comparison, suppose you tried to use your eyes to distinguish the difference between a 100 and 99 Watt light bulb and the difference between complete darkness and a 1 Watt light bulb. Your eyes would have a much better ability to determine the small 1 Watt signal over darkness than the difference between two large 100 and 99 Watt signals. The same occurs for the electronic measurements in a spectrophotometer. Therefore, because emission involves the measurement of a small signal over no background, any type of emission spectroscopy has an inherent sensitivity advantage of one to three orders of magnitude over measurements of absorption. 
Fluorescence spectroscopy is an especially sensitive analysis method for those compounds that have suitable quantum yields.This page titled 3.5: Quantum Yield of Fluorescence is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
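As a brief numerical illustration of the quantum yield expression given in this section, the sketch below evaluates \(\varphi_F\) for a hypothetical set of rate constants; the values are arbitrary and chosen only to show how the competing processes reduce the yield.

```python
# Hypothetical rate constants (s^-1) for fluorescence, internal conversion, external
# conversion, intersystem crossing, and other competing processes such as photodecomposition.
k_F, k_IC, k_EC, k_ISC, k_C = 1e8, 5e7, 3e7, 1e7, 1e6

phi_F = k_F / (k_F + k_IC + k_EC + k_ISC + k_C)
print(f"quantum yield of fluorescence = {phi_F:.2f}")   # ~0.52 for these values
```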
3.6: Variables that Influence Fluorescence Measurements
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/3%3A_Molecular_Luminescence/3.6%3A_Variables_that_Influence_Fluorescence_Measurements
There are a variety of variables that influence the signal observed in fluorescence spectroscopy. As seen in the original diagram showing the various energy levels and transitions that can occur, anything that can quench the fluorescent transition will affect the intensity of the fluorescence. When discussing absorption spectroscopy, an important consideration is Beer’s Law. A similar relationship exists for fluorescence spectroscopy, as shown below, in which \(I\) is the fluorescence intensity, \(\varepsilon\) is the molar absorptivity, \(b\) is the path length, \(c\) is the concentration, and \(P_o\) is the source power.\[\mathrm{I = 2.303K'\varepsilon bcP_o} \nonumber \]Not surprisingly, fluorescence intensity varies linearly with the path length and with the concentration. K' is a constant that is dependent on the geometry and other factors and includes the fluorescence quantum yield. Since \(\varphi_F\) is a constant for a given system, K' is defined as K''\(\varphi_F\). Of particular interest is that the fluorescence intensity relates directly to the source power. It stands to reason that the higher the source power, the more species that absorb photons and become excited, and therefore the more that eventually emit fluorescence radiation. This suggests that high-powered lasers, provided they emit at the proper wavelength of radiation to excite a system, have the potential to be excellent sources for fluorescence spectroscopy. The equation above predicts a linear relationship between fluorescence intensity and concentration. However, the utility of this equation breaks down at absorbance values of 0.05 or higher, leading to a negative deviation of the standard curve. Something else that can possibly occur with fluorescence or other emission processes is that emitted photons can be reabsorbed by ground state molecules. This is a particular problem if the S1-S0 emission transition is the one being monitored. In this situation, at high concentrations of analyte, the fluorescence intensity measured at the detector may actually start to drop as shown in the standard curve in . Any changes in the system that will affect the number and force of collisions taking place in the solution will influence the magnitude of the fluorescence emission. Collisions promote radiationless decay and loss of extra energy as heat, so more collisions or more forceful collisions will promote radiationless decay and reduce fluorescence emission. Therefore, fluorescence intensity is dependent on the temperature of the solution. Higher temperatures will speed up the movement of the molecules (i.e., higher translational energy) leading to more collisions and more forceful collisions, thereby reducing the fluorescence intensity. Ensuring that all the measurements are done at the same temperature is important. Reducing the temperature of the sample will also increase the signal-to-noise ratio. Another factor that will affect the number of collisions is the solvent viscosity. More viscous solutions will have fewer collisions, less collisional deactivation, and higher fluorescence intensity. The solvent can have other effects as well, similar to what we previously discussed in the section on UV/VIS absorption spectroscopy. For example, a hydrogen-bonding solvent can influence the value of \(\lambda\)max in the excitation and emission spectra by altering the energy levels of non-bonding electrons and electrons in \(\pi\)* orbitals.
Other species in the solution (e.g., metal ions) may also associate with the analyte and change the \(\lambda\)max values. Many metal ions and dissolved oxygen are paramagnetic. We already mentioned that paramagnetic species promote intersystem crossing, thereby quenching the fluorescence. Removal of paramagnetic metal ions from a sample is not necessarily a trivial matter. Removing dissolved oxygen gas is easily done by purging the sample with a diamagnetic, inert gas such as nitrogen, argon or helium. All solution-phase samples should be purged of oxygen gas prior to the analysis. Another concern is the possibility that the unknown solutions contain impurities, not present in the blank and standards, that can absorb the fluorescent emission from the analyte. Comparing the fluorescent excitation and emission spectra of the unknown samples to the standards may provide an indication of whether the unknown has impurities that are interfering with the analysis. The pH will also have a pronounced effect on the fluorescence spectrum for organic acids and bases. An interesting example is to consider the fluorescence emission spectrum for the compound 2-naphthol. The hydroxyl hydrogen atom is acidic and the compound has a pKa of 9.5. At a pH of 1, the compound exists almost exclusively as the protonated 2-naphthol. At a pH of 13, the compound exists almost exclusively as the deprotonated 2-naphtholate ion. At a pH equal to the pKa value, the solution would consist of a 50-50 mixture of the protonated and deprotonated form. The most obvious thing to note is the large difference in the \(\lambda\)max value for the neutral 2-naphthol (355 nm) and the anionic 2-naphtholate ion (415 nm). The considerable difference between the two emission spectra occurs because the presence of more resonance forms leads to stabilization (i.e., lower energy) of the excited state. As shown in , the 2-naphtholate species has multiple resonance forms involving the oxygen atom whereas the neutral 2-naphthol species only has a single resonance form. Therefore, the emission spectrum of the 2-naphtholate ion is red-shifted relative to that of the 2-naphthol species. Consider the reaction shown below for the dissociation of 2-naphthol. This reaction may be either slow (slow exchange) or fast (fast exchange) on the time scale of fluorescence spectroscopy. Draw the series of spectra that would result for an initial concentration of 2-naphthol of \(10^{-6}\) M if the pH were adjusted to 2, 8.5, 9.5, 10.5, and 13 and slow exchange occurred. Draw the spectra at the same pH when the exchange rate is fast. If slow exchange occurs, an individual 2-naphthol or 2-naphtholate species stays in its protonated or deprotonated form during the entire excitation-emission process and emits its characteristic spectrum. Therefore, when both species are present in appreciable concentrations, two peaks occur in the spectrum, one for each of the individual species. On the left side of , at pH 2, all of the species is in the neutral 2-naphthol form, whereas at pH 13 it is all in the anionic 2-naphtholate form. At pH 9.5, which equals the pKa value, there is a 50-50 mixture of the two and the peaks for both species are equal in intensity. At pH 8.5 and 10.5, one of the forms predominates. The intensity of each species is proportional to the concentration. If fast exchange occurs, as seen on the right side of , a particular species rapidly changes between its protonated and deprotonated form during the excitation and emission process.
Now the emission is a weighted time average of the two forms. If the pH is such that more neutral 2-naphthol is present in solution, the maximum is closer to 355 nm (pH = 8.5). If the pH is such that more anionic 2-naphtholate is present in solution, the maximum is closer to 415 nm (pH = 10.5). At the pKa value (9.5), the peak appears in the middle of the two extremes. What actually happens – is the exchange fast or slow? The observation is that the exchange of protons that occurs in the acid-base reaction is slow on the time scale of fluorescence spectroscopy. Remember that the lifetime of an excited state is about \(10^{-8}\) second. This means that the exchange of protons among the species in solution takes place on a time scale longer than \(10^{-8}\) second, and the fluorescence emission spectrum has peaks for both the 2-naphthol and 2-naphtholate species. The pKa value of an acid is incorporated into an expression called the Henderson-Hasselbalch equation, which is shown below where HA represents the protonated form of any weak acid and A– is its conjugate base.\[\mathrm{pH = pKa + \log \dfrac{[A^–]}{[HA]}} \nonumber \]If a standard curve were prepared for 2-naphthol at a highly acidic pH and 2-naphtholate at a highly basic pH, the concentration of each species at different intermediate pH values when both are present could be determined. These concentrations, along with the known pH, can be substituted into the Henderson-Hasselbalch equation to calculate pKa. As described earlier, this same process is used quite often in UV/VIS spectroscopy to determine the pKa of acids, so long as the acid and base forms of the conjugate pair have substantially different absorption spectra. If you do this with the fluorescence spectra of 2-naphthol, however, you get a rather perplexing set of results in that slightly different pKa values are calculated at different pH values where appreciable amounts of the neutral and anionic form are present. This occurs because the pKa of excited state 2-naphthol is different from the pKa of the ground state. Since the fluorescence emission occurs from the excited state, this difference will influence the calculated pKa values. A more complicated set of calculations can be done to determine the excited state pKa values. UV/VIS spectroscopy is therefore often an easier way to measure the pKa of a species than fluorescence spectroscopy. Because many compounds are weak acids or bases, and therefore the fluorescence spectra of the conjugate pairs might vary considerably, it is important to adjust the pH to ensure that all of the compound is in either the protonated or the deprotonated form. Consider, as an example, whether anthracene or diphenylmethane would be expected to show the stronger fluorescence. Answering this question involves a consideration of the effect that collisions of the molecules will have in causing radiationless decay. Note that anthracene is quite a rigid molecule. Diphenylmethane is rather floppy because of the methylene bridge between the two phenyl rings. Hopefully it is reasonable to see that collisions of the floppy diphenylmethane are more likely to lead to radiationless decay than collisions of the rigid anthracene molecules. Another way to think of this is the consequences of a crash between a Greyhound bus (i.e., anthracene) and a car towing a boat (i.e., diphenylmethane). It might be reasonable to believe that under most circumstances, the car would suffer more damage in the collision. Molecules that are suitable for analysis by fluorescence spectroscopy are therefore rigid species, often with conjugated \(\pi\) systems, that undergo less collisional deactivation.
As such, fluorescence spectroscopy is a much more selective method than UV/VIS absorption spectroscopy. In many cases, a suitable fluorescent chromophore is first attached to the compound under study. For example, a fluorescent derivatization agent is commonly used to analyze amino acids that have been separated by high performance liquid chromatography. The advantage of performing such a derivatization step comes from the high sensitivity of fluorescence spectroscopy. That same high sensitivity makes it all the more important to control the variables described above, as they will then have a more pronounced effect and the potential to cause errors in the measurement. This page titled 3.6: Variables that Influence Fluorescence Measurements is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
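One practical consequence of the intensity expression discussed earlier in this section is the limited linear range: the working curve stays linear only while the absorbance of the sample remains below roughly 0.05. The Python sketch below turns that limit into an approximate upper concentration, using a hypothetical molar absorptivity and a 1 cm path length.

```python
# Estimate the upper end of the linear range from the A <= ~0.05 condition.
epsilon = 2.0e4   # L mol^-1 cm^-1, hypothetical molar absorptivity
b = 1.0           # cm, path length
absorbance_limit = 0.05

c_max = absorbance_limit / (epsilon * b)
print(f"approximate upper end of the linear range: {c_max:.1e} mol/L")
```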
3.7: Other Luminescent Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/3%3A_Molecular_Luminescence/3.7%3A_Other_Luminescent_Methods
Two other important forms of luminescence are chemiluminescence and bioluminescence. Chemiluminescence refers to a process in which a chemical reaction forms a product molecule that is in an excited state. The excited state product then emits radiation. The classic example of a chemiluminescent process involves the reaction of luminol with hydrogen peroxide (H2O2) in the presence of a catalyst as shown below. The reaction generates 3-aminophthalate in an excited state and it emits a bluish light. The luminol reaction is used in forensics to detect the presence of blood. In this case, the iron from the hemoglobin serves as the catalyst. Another important example of a chemiluminescent reaction involves the reaction of nitric oxide (NO) with ozone (O3) to produce excited state nitrogen dioxide (NO2*) and oxygen gas. Nitric oxide is an important compound in atmospheric chemistry and, with the use of an ozone generator, it is possible to use the chemiluminescent reaction as a sensitive way of measuring NO.\[\mathrm{NO + O_3 = NO_2^* + O_2} \nonumber \]\[\mathrm{NO_2^* = NO_2 + h\nu} \nonumber \]An important feature of both chemiluminescent reactions above is that peroxide and ozone, which are strong oxidants, have an unstable or energetic chemical bond. Chemiluminescence is a rare process only occurring in a limited number of chemical reactions. Bioluminescence refers to a situation when living organisms use a chemiluminescent reaction to produce a luminescent emission. The classic example is fireflies. There are also a number of bioluminescent marine organisms. Triboluminescence is a form of luminescence caused by friction. Breaking or crushing a wintergreen-flavored lifesaver in the dark produces triboluminescence. The friction of the crushing action excites sugar molecules that emit ultraviolet radiation, which is triboluminescence but cannot be seen by our eyes. However, the ultraviolet radiation emitted by the sugar is absorbed by fluorescent methyl salicylate molecules that account for the wintergreen flavor. The methyl salicylate molecules emit the light that can be seen by our eyes. Finally, light sticks also rely on a fluorescent process. Bending the light stick breaks a vial that leads to the mixing of phenyl oxalate ester and hydrogen peroxide. Two subsequent decomposition reactions occur, the last of which releases energy that excites a fluorescent dye. Emission from the dye accounts for the glow from the light stick. This page titled 3.7: Other Luminescent Methods is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.1: Introduction to Infrared Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/4%3A_Infrared_Spectroscopy/4.1%3A_Introduction_to_Infrared_Spectroscopy
Infrared radiation is of the proper energy to excite vibrations in molecules. The IR spectrum consists of near (4,000-12,800 cm-1), mid (200-4,000 cm-1) and far (10-200 cm-1) regions. The mid-IR region is most commonly used for analysis purposes. Vibrational excitations correspond to changes in the internuclear distances within molecules. You have likely recorded infrared spectra in your organic chemistry course. Thinking back to the instrument you used to record the spectrum, you presumably realize that no attempt was made to remove air from the system. The beam of infrared radiation passed through the air, indicating that the major constituents of air (nitrogen gas, N2, and oxygen gas, O2) either do not absorb infrared radiation or absorb in another region of the spectrum. You likely know that double and triple bonds have strong absorptions in the mid-IR region of the spectrum. N2 and O2 have a triple and a double bond, respectively, yet it turns out that N2 and O2 do not absorb infrared radiation. There are certainly minor constituents of the air (e.g., carbon dioxide) that do absorb infrared radiation, and these are accounted for either by using a dual beam configuration on a continuous wave infrared spectrophotometer or by recording a background spectrum on a Fourier transform infrared spectrophotometer.
In order for a vibration to absorb infrared radiation and become excited, the molecule must change its dipole moment during the vibration. Homonuclear diatomic molecules such as N2 and O2 do not have dipole moments. If the molecule undergoes a stretching motion as shown in , where the spheres represent the two nuclei, there is no change in the dipole moment during the vibrational motion; therefore, N2 and O2 do not absorb infrared radiation.
HCl does have a dipole moment. Stretching the HCl bond leads to a change in the dipole moment. If we stretched the bond so far as to break it and produce the two original neutral atoms, there would be no dipole moment. Therefore, as we lengthen the bond in HCl, the dipole moment gets smaller. Because the dipole moment of HCl changes during a stretching vibration, it absorbs infrared radiation.
The number of possible vibrations for a molecule is determined by its vibrational degrees of freedom. The number of vibrational degrees of freedom for most molecules is (3N – 6), where N is the number of atoms. For a linear molecule it is (3N – 5). Carbon dioxide is a linear molecule, so it has four vibrational degrees of freedom and four possible vibrations.
One vibration is the symmetrical stretch. Each bond dipole, which is represented by the arrows, does change on stretching, but the overall molecular dipole is zero throughout. Since there is no net change in the molecular dipole, this vibration is not IR active.
A second vibration is the asymmetrical stretch. Each bond dipole does change on stretching and the molecule now has a net dipole. Since the molecular dipole changes during an asymmetrical stretch, this vibration is IR active.
The third vibration is the bending vibration. There are two bending vibrations that occur in two different planes. Both are identical so both have the same energy and are degenerate. The bending motion does lead to a net molecular dipole. 
Since the molecular dipole changes during the bending motion, these vibrations are IR active.
A stretching vibration can be represented by a potential energy diagram as shown in (also referred to as a potential energy well). The x-axis is the internuclear distance. Note that the different vibrational energy levels, which are shown on the diagram as a series of parallel lines, are superimposed onto the potential well. Also note that, if the bond is excited to too high a vibrational state, it can rupture.
IR spectra are recorded in wavenumbers (cm-1), and certain parts of the mid-IR spectrum correspond to specific vibrational modes of organic compounds.
2700-3700 cm-1: Hydrogen stretching
1950-2700 cm-1: Triple bond stretching
1550-1950 cm-1: Double bond stretching
700-1500 cm-1: Fingerprint region
An important consideration is that, as molecules get more complex, the various vibrational modes become coupled and the infrared (IR) absorption spectrum becomes quite complex and difficult to interpret in detail. Therefore, while each compound has a unique IR spectrum (suggesting that IR spectroscopy ought to be especially useful for qualitative analysis, i.e., compound identification), interpreting IR spectra is not an easy process. When using IR spectra for compound identification, a computer is usually used to compare the spectrum of the unknown compound to a library of spectra of known compounds to find the best match.
IR spectroscopy can also be used for quantitative analysis. One limitation to the use of IR spectroscopy for quantitative analysis is that IR sources have low power, which increases the noise relative to the signal and reduces the sensitivity of the method relative to UV/Visible absorption spectroscopy. Also, IR detectors are much less sensitive than those for the UV/VIS region of the spectrum. IR bands are narrower than those observed in UV/VIS spectra, so instrumental deviations from Beer’s Law (e.g., polychromatic radiation) are of more concern. Fourier transform methods are often used to enhance the sensitivity of infrared methods, and there are some specialized IR techniques that are used as well.This page titled 4.1: Introduction to Infrared Spectroscopy is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
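As a quick check on the (3N – 6) and (3N – 5) counting rules described above, the short Python sketch below computes the number of vibrational modes for a few molecules; the molecules listed are simply illustrative examples.

```python
# Sketch: counting vibrational modes from the (3N - 6) / (3N - 5) rules.
# The molecules chosen here are only illustrative examples.

def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """Return the number of vibrational degrees of freedom."""
    return 3 * n_atoms - (5 if linear else 6)

examples = [
    ("CO2", 3, True),     # linear: 3(3) - 5 = 4 modes
    ("H2O", 3, False),    # bent:   3(3) - 6 = 3 modes
    ("C6H6", 12, False),  # benzene: 3(12) - 6 = 30 modes
]

for name, n, linear in examples:
    print(f"{name}: {vibrational_modes(n, linear)} vibrational modes")
```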
4.2: Specialized Infrared Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/4%3A_Infrared_Spectroscopy/4.2%3A_Specialized_Infrared_Methods
One technique is called non-dispersive infrared (NDIR) spectroscopy. NDIR is usually used to measure a single constituent of an air sample. Think about what the name implies and consider how such an instrument might be designed. The word non-dispersive implies that the instrument does not use a monochromator. The design of an NDIR instrument is illustrated in . Common things that are often measured using NDIR are the amounts of carbon monoxide and hydrocarbons in automobile exhaust.
The device either splits the beam or uses two identical sources, one of which goes through a reference cell and the other of which goes through the sample cell. The sample of air (e.g., auto exhaust) is continually drawn through the sample cell during the measurement. The reference cell is filled with a non-absorbing gas. The detector cells are filled with the analyte (i.e., carbon monoxide, which has an IR absorption band in the region from 2050-2250 cm-1). If the system is designed to measure carbon monoxide, the reference cell does not absorb any radiation from 2050-2250 cm-1. The sample cell absorbs an amount of radiation from 2050-2250 cm-1 proportional to the concentration of carbon monoxide in the sample. The two detector cells, which are filled with carbon monoxide, absorb all of the radiation from 2050-2250 cm-1 that reaches them. The infrared energy absorbed by the detector cells is converted to heat, meaning that the molecules in the cell move faster and exert a greater pressure. Because the reference cell did not absorb any of the radiation from 2050-2250 cm-1, the detector cell on the reference side will have a higher temperature and pressure than the detector cell on the side with the sample. A flexible metal diaphragm is placed between the two cells and forms part of an electronic device known as a capacitor. Note that the capacitor has a gap between the two metal plates, and the measured capacitance varies according to the distance between the two plates. Therefore, the capacitance is a measure of the pressure difference between the two cells, which can be related back to the amount of carbon monoxide in the sample cell. The device is calibrated using a sealed sample cell with a known amount of carbon monoxide. When measuring hydrocarbons, methane (CH4) is used for the calibration since it is a compound that has a C-H stretch of similar energy to the C-H stretching modes of other hydrocarbons. Another common application of NDIR is as a monitoring device for lethal levels of carbon monoxide in a coal mine.
Another specialty application is known as attenuated total reflectance spectroscopy (ATR). ATR involves the use of an IR-transparent crystal onto which the sample is coated or over which it flows on both sides. A representation of the ATR device is shown in .
The radiation enters the crystal in such a way that it undergoes total internal reflection inside the crystal. The path is such that many reflections occur as the radiation passes through the crystal. At each reflection, the radiation penetrates slightly into the coated material and a slight absorption occurs. The reason for multiple reflections is to increase the path length of the radiation through the sample. The method can be used to analyze opaque materials that do not transmit infrared radiation.
An inconvenience when recording IR spectra is that glass cells cannot be used since glass absorbs IR radiation. Liquid samples are often run neat between two salt plates. 
Since solvents absorb IR radiation, IR cells usually have rather narrow path lengths to keep solvent absorption to acceptable levels. Solid samples are often mixed with KBr and pressed into an IR-transparent pellet.
Another way to record an IR spectrum of a solid sample is to perform a diffuse reflectance measurement. The beam strikes the surface of a fine powder and, as in ATR, some of the radiation is absorbed. Suitable signal-to-noise for diffuse reflectance IR usually requires the use of Fourier transform IR methods.This page titled 4.2: Specialized Infrared Methods is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
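Although the NDIR measurement described earlier in this section produces a pressure-difference (capacitance) signal rather than a conventional absorbance, it is still related back to concentration through calibration with known gas standards. The sketch below is a minimal illustration of that calibration step, assuming a simple linear response; all numbers are invented for illustration.

```python
# Minimal sketch of an NDIR-style calibration, assuming the detector's
# pressure-difference signal responds linearly to CO concentration.
# All numbers below are invented for illustration.
import numpy as np

# Calibration standards: CO concentration (ppm) and detector signal (arbitrary units)
conc_std = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
signal_std = np.array([0.02, 0.51, 1.00, 2.03, 3.98])

# Least-squares fit of signal = slope * concentration + intercept
slope, intercept = np.polyfit(conc_std, signal_std, 1)

# Convert an unknown exhaust reading back to concentration
signal_unknown = 1.55
conc_unknown = (signal_unknown - intercept) / slope
print(f"Estimated CO concentration: {conc_unknown:.0f} ppm")
```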
4.3: Fourier-Transform Infrared Spectroscopy (FT-IR)
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/4%3A_Infrared_Spectroscopy/4.3%3A_Fourier-Transform_Infrared_Spectroscopy_(FT-IR)
Up until this point, when recording a spectrum, we have described methods in which a monochromator is used to systematically scan through the different wavelengths or frequencies while recording either the absorbance or emission intensity. Spectra recorded in such a mode are said to be in the frequency domain. Fourier transform methods are designed in such a way that they record the spectra in the time domain. The plot in represents a particular wavelength or frequency of radiation in its time domain. What we observe in the time domain is the oscillation of the amplitude of the wave as a function of time.
The waveform drawn above has a certain amplitude as well as a single, specific frequency. If a species in a sample could absorb this particular frequency of radiation, we would observe that the amplitude of this wave diminishes. We could then convert this to a frequency domain spectrum, which would consist of a single line as shown in . The frequency domain spectrum would have a single line at the same frequency as before, but its amplitude would be reduced.
Suppose we have a frequency domain spectrum that consists of two single lines, each with a different frequency. The time domain spectrum of this would now consist of two waves, one for each of the frequencies. The net time domain spectrum would be the addition of those two waves. If there were many frequencies, then the time domain waveform would be a complex pattern. A Fourier transform (FT) is a mathematical procedure that can be used to determine the individual frequency components and their amplitudes that make up a composite wave. The Fourier transform allows you to convert a time domain spectrum to a frequency domain spectrum.
Note that time domain spectra are difficult to interpret for either qualitative or quantitative analysis. Frequency domain spectra are more readily interpreted and used for qualitative and quantitative analysis. Yet there are certain advantages to recording a spectrum in the time domain using FT methods. The two most common spectroscopic techniques that are done in an FT mode are IR and NMR spectroscopy. These are two methods that are not the most sensitive among the various spectroscopic techniques that are available, and one advantage of FT methods is that they can be used to improve the signal-to-noise ratio.
Recording an FT-IR spectrum requires a process in which the radiation from the source is somehow converted to the time domain. The most common way of achieving this with IR radiation is to use a device known as a Michelson interferometer. A diagram of a Michelson interferometer is shown in .
In the Michelson interferometer, radiation from the source is collimated and sent to the beam splitter. At the splitter, half of the radiation is reflected and goes to the fixed mirror. The other half is transmitted through and goes to the moveable mirror. The two beams of radiation reflect off of the two mirrors and meet back up at the beam splitter. Half of the light from the fixed mirror and half of the light from the moveable mirror recombine and go to the sample. When the moveable mirror is at position 0, it is exactly the same distance from the beam splitter as the fixed mirror. Knowing the exact location of the 0-position is essential to the proper functioning of a Michelson interferometer. 
The critical factor is to consider what happens to particular wavelengths of light as the moveable mirror is moved to different positions. An important thing to recognize in drawing these plots is that, if the mirror is at –½x, the radiation that goes to the moveable mirror travels an extra distance x compared to the radiation that goes to the fixed mirror (it travels an extra ½x to get to the moveable mirror and an extra ½x to get back to the zero position). If the two beams of radiation recombine at the beam splitter in phase with each other, they will constructively interfere. If the two beams of radiation recombine at the beam splitter out of phase with each other, they will destructively interfere. Using this information, we can then determine what mirror positions will lead to constructive and destructive interference for radiation of wavelengths x, 2x and 4x. The plots that are obtained for wavelengths x, 2x and 4x are shown in .
There are two important consequences from the plots in . The first is that, for each of these wavelengths, the intensity of the radiation at the sample oscillates from full amplitude to zero amplitude as the mirror is moved. In a Michelson interferometer, the moveable mirror is moved at a fixed speed from one extreme (e.g., the +x extreme) to the other (e.g., the –x extreme). After the relatively slow movement in one direction, the moveable mirror is then rapidly reset to the original position (in the example we are using, it is reset back to the +x extreme), and then moved again to record a second spectrum that is added to the first. Because the mirror moves at a set, fixed rate, the intensity of any one of these three wavelengths varies as a function of time. Each wavelength now has a time domain property associated with it.
The second important consequence is that the time domain property of radiation with wavelengths x, 2x and 4x is different. An examination of the plots in shows that the pattern of when the radiation is at full and zero amplitude is different for the radiation with wavelength x, 2x or 4x. The aggregate plot of all of these wavelengths added together is called an interferogram. If a sample could absorb infrared radiation of wavelength x, the intensity of light at this wavelength would drop after the sample and this drop would be reflected in the interferogram.
The usual process of recording an FT-IR spectrum is to record a background interferogram with no sample in the cell. The interferogram with a sample in the cell is then recorded and subtracted from the background interferogram. The difference is an interferogram reflecting the radiation absorbed by the sample. This time domain infrared spectrum can then be converted to a frequency domain infrared spectrum using the Fourier transform.
It is common to record several interferograms involving repetitive scans of the moveable mirror and then add them together. An advantage of using multiple scans is that the signal of each scan is additive. Noise is a random process, so adding together several scans leads to a partial cancellation of the noise. Therefore, adding together multiple scans will lead to an improvement in the signal-to-noise ratio. The improvement in the signal-to-noise ratio goes up as the square root of the number of scans. This means that recording twice as many scans, which takes twice as long, does not double the signal-to-noise ratio. 
As such, there are diminishing returns to running very large numbers of scans on a sample with an especially weak signal (e.g., due to a low concentration), because the time for the experiment can become excessive.
Two important characteristics of an FT-IR spectrophotometer are an accurate location of the zero position and a highly reproducible movement of the mirror. Identifying the exact location of the zero position and controlling the mirror movement is usually accomplished in FT-IR spectrophotometers using a laser system. With regards to mirror movement, since the mirror position is equated with time, it is essential that the mirror move with exactly the same speed over the entire scan, and that the speed remain identical for each scan. More expensive FT-IR spectrophotometers have better control of the mirror movement.
We have already mentioned one advantage of FT-IR instruments, which is the ease of recording multiple spectra and adding them together. Whereas a conventional scanning spectrophotometer that uses a monochromator takes several minutes to scan through the wavelengths, the mirror movement in an FT-IR occurs over a few seconds.
Another advantage is that an FT-IR has no slits and therefore has a high throughput of radiation. Essentially all of the photons from the source are used in the measurement and there are no losses of power because of the monochromator. Since IR sources have weaker power than UV and visible sources, this is an important advantage of FT-IR instruments. This is especially so in the far IR region where the source power drops off considerably.
The ability to add together multiple scans combined with the higher throughput of radiation leads to a significant sensitivity advantage of FT-IR over conventional IR spectrophotometers that use a monochromator. As such, FT-IR instruments can be used with much lower concentrations of substances.
An FT-IR will also have much better resolution than a conventional scanning IR, especially if there is reproducible movement of the mirror. Resolution is the ability to distinguish two nearby peaks in the spectrum. The more reproducible the mirror movement, the better the resolution. Distinguishing nearby frequencies is more readily accomplished by a Fourier transform of a composite time domain wave than by using a monochromator comprised of a grating and slits.This page titled 4.3: Fourier-Transform Infrared Spectroscopy (FT-IR) is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
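A small numerical sketch can make the time-domain/frequency-domain relationship and the square-root-of-N averaging advantage concrete. The code below builds a composite wave from two frequencies, recovers them with a Fourier transform, and shows how averaging repeated noisy "scans" improves the signal-to-noise ratio; the frequencies and noise level are arbitrary choices for illustration.

```python
# Sketch: composite time-domain wave -> frequency domain via FFT,
# plus the sqrt(N) signal-to-noise benefit of co-adding scans.
# Frequencies and noise level are arbitrary, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048, endpoint=False)   # "time" axis (arbitrary units)
clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

def one_scan():
    """One noisy scan of the composite wave."""
    return clean + rng.normal(scale=2.0, size=t.size)

# Co-add N scans; the signal is additive while random noise partially cancels
for n_scans in (1, 4, 16, 64):
    avg = np.mean([one_scan() for _ in range(n_scans)], axis=0)
    spectrum = np.abs(np.fft.rfft(avg))
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    noise = np.std(spectrum[freqs > 200])          # region with no signal
    peak = spectrum[np.argmin(np.abs(freqs - 50))]  # the 50-unit frequency component
    print(f"{n_scans:3d} scans: S/N ~ {peak / noise:.0f}")
```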
6.1: Introduction to Atomic Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.1%3A_Introduction_to_Atomic_Spectroscopy
Earlier we discussed the difference between atomic spectra, which consist only of electronic transitions and therefore appear as sharp lines, and molecular spectra, which, because of the presence of lower energy vibrational and rotational energy states, appear as a broad continuum. Provided we have atoms present in a sample, it is possible to analyze them spectroscopically using either absorption or emission measurements. One problem is that most samples we analyze do not consist of atoms but instead consist of molecules with covalent or ionic bonds. Therefore, performing atomic spectroscopy on most samples involves the use of an atomization source, which is a device that has the ability to convert molecules to atoms.It is also important to recognize that the absorption or emission spectrum of a neutral atom will be different from that of its ions (e.g., Cr0, Cr3+ and Cr6+ all have different lines in their absorption or emission spectra). Atomic absorbance measurements are performed on neutral, ground-state atoms. Atomic emission measurements can be performed on either neutral atoms or ions, but are usually performed on neutral atoms as well. It is important to recognize that certain metal species exist in nature in various ionic forms. For example, chromium is commonly found as its +3 or +6 ion. Furthermore, Cr3+ is relatively benign, whereas Cr6+ is a carcinogen. In this case, an analysis of the particular chromium species might be especially important to determine the degree of hazard of a sample containing chromium. The methods we will describe herein cannot be used to distinguish the different metal species in samples. They will provide a measurement of the total metal concentration. Metal speciation would require a pre-treatment step involving the use of suitable chemical reagents that selectively separate one species from the other without altering their distribution. Metal speciation is usually a complex analysis process and it is far more common to analyze total metal concentrations. Many environmental regulations that restrict the amounts of metals in samples (e.g., standards for drinking water, food products and sludge from wastewater treatment plants) specify total metal concentrations instead of concentrations of specific species.The measurement of atomic absorption or emission requires selection of a suitable wavelength. Just like the selection of the best wavelength in molecular spectroscopic measurements, provided there are no interfering substances, the optimal wavelength in atomic spectroscopic measurements is the wavelength of maximum absorbance or emission intensity. This page titled 6.1: Introduction to Atomic Spectroscopy is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
6.2: Atomization Sources
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.2%3A_Atomization_Sources
There are a variety of strategies that can be used to create atoms from molecular substances. The three main methods involve the use of a flame, a device known as a graphite furnace, or a plasma. These three atomization methods are commonly used with liquid samples. While various plasma devices have been developed, only the most common one – the inductively coupled plasma – will be discussed herein. Some specialized techniques that have been designed for especially important elements (e.g., mercury, arsenic) will be described as well. Since many samples do not come in liquid form (e.g., soils, sludges, foods, plant matter), liquid samples suitable for introduction into flame, furnace or plasma instruments are often obtained by digestion of the sample. Digestion usually involves heating the sample in concentrated acids to solubilize the metal species. Digestion can be done in an appropriate vessel on a hotplate or using a microwave oven. Microwave digesters are specialized instruments designed to measure the temperature and pressure in sealed chambers so that the digestion is completed under optimal conditions. In some cases it is desirable to measure a sample in its solid form. There are arc or spark sources that can be used for the analysis of solid samples.
6.2A: Flames
6.2B: Electrothermal Atomization – Graphite Furnace
6.2C: Specialized Atomization Methods
6.2D: Inductively Coupled Plasma
6.2E: Arcs and Sparks
This page titled 6.2: Atomization Sources is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
6.2A: Flames
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.2%3A_Atomization_Sources/6.2A%3A_Flames
As alluded to earlier, flames can be used as an atomization source for liquid samples. The sample is introduced into the flame as an aerosol mist. The process of creating the aerosol is referred to as nebulization. Common nebulizer designs include pneumatic and ultrasonic devices, the details of which we will not go into here. The most common flame atomization device, which is illustrated in , is known as a laminar flow or pre-mix burner. Note the unusual design of the burner head, which, instead of having the shape of a common Bunsen burner, produces a long, thin flame that is 10 cm in length. Radiation from the source passes through the 10 cm length of the flame. Often the monochromator is placed after the flame and before the detector. If atomic emission is being measured, there is no light source. The burner design provides a much longer path length to increase the sensitivity of the method.
A flame requires a fuel and an oxidant. In the laminar flow burner, the fuel and oxidant are pre-mixed at the bottom of a chamber. The force created by the flowing gases draws sample up through a thin piece of tubing, where it is nebulized into the bottom of the chamber. The chamber has a series of baffles in it that create an obstructed pathway up to the burner head. The purpose of the baffles is to allow only the finest aerosol particles to reach the flame. Larger particles strike the baffles, collect, and empty out through the drain tube. Even using the best nebulizers that have been developed, only about 2% of the sample actually makes it through the baffles and to the flame. The remaining 98% empties out the drain.
At first it might seem counterintuitive to discard 98% of the sample rather than introduce the entire sample into the flame, but we must consider what happens to an aerosol droplet after it is created and as it enters the flame. Remembering that the solution contains molecules but we need atoms, several steps are required to complete this transformation. The first involves evaporating the solvent (Equation \ref{eq1}). Many metal complexes form hydrates and the next step involves dehydration (Equation \ref{eq2}). The metal complexes must be volatilized (Equation \ref{eq3}) and then decomposed (Equation \ref{eq4}). Finally, the metal ions must be reduced to neutral atoms (Equation \ref{eq5}). Only now are we able to measure the absorbance by the metal atoms. If the measurement involves atomic emission, then a sixth step (Equation \ref{eq6}) involves the excitation of the atoms.
\[\begin{align} \ce{ML(aq)} &= \ce{ML*xH2O(s)} \label{eq1}\\[4pt] \ce{ML*xH2O(s)} &= \ce{ML(s)} \label{eq2}\\[4pt] \ce{ML(s)} &= \ce{ML(g)} \label{eq3}\\[4pt] \ce{ML(g)} &= \ce{M+ + L-} \label{eq4}\\[4pt] \ce{M+ + e-} &= \ce{M} \label{eq5}\\[4pt] \ce{M + heat} &= \ce{M^{*}} \label{eq6} \end{align} \nonumber \]
The problem with large aerosol droplets is that they will not make it through all of the necessary steps during their lifetime in the flame. These drops will contribute little to the signal, but their presence in the flame will create noise and instability in the flame that will compromise the measurement. Hence, only the finest aerosol droplets will lead to atomic species and only those are introduced into the flame.
The various steps outlined in Equations \ref{eq1}-\ref{eq6} also imply that there will be a distinct profile to the flame. Profiles result because the efficiency with which neutral and excited atoms are formed varies with position in the flame. 
Therefore, a specific section of the flame will have the highest concentration of ground state atoms for the metal being analyzed. The absorbance profile that shows the concentration of ground state atoms in the flame is likely to be different from the emission profile that shows the concentration of excited state atoms in the flame. shows representative absorption profiles for chromium, magnesium and silver. Magnesium shows a peak in its profile. The increase in the lower part of the flame occurs because exposure to the heat creates more neutral ground state atoms. The decrease in the upper part of the flame occurs due to the formation of magnesium oxide species that do not absorb the atomic line. Silver is not as easily oxidized and its concentration continually increases the longer the sample is exposed to the heat of the flame. Chromium forms very stable oxides and the concentration of ground state atoms decreases the longer it is exposed to the heat of the flame.
When performing atomic absorbance or emission measurements using a flame atomization source, it is important to measure the section of the flame with the highest concentration of the atoms being measured. There are controls in the instrument to raise and lower the burner head to ensure that the light beam passes through the optimal part of the flame.
An important factor in the characteristics of a flame is the identity of the fuel and oxidant. Standard Bunsen burner flames use methane as the fuel and air as the oxidant and have a temperature in the range of 1,700-1,900oC. A flame with acetylene as the fuel and air as the oxidant has a temperature in the range of 2,100-2,400oC. For most elements, the methane/air flame is too cool to provide suitable atomization efficiencies for atomic absorbance or emission measurements, and an acetylene/air flame must be used. For some elements, the use of a flame with acetylene as the fuel and nitrous oxide (N2O) as the oxidant is recommended. The acetylene/nitrous oxide flame has a temperature range of about 2,600-2,800oC. There are standard reference books on atomic methods that specify the type of flame that is best suited for the analysis of particular elements.
It is also important to recognize that some elements do not atomize well in flames. Flame and other atomization methods are most suitable for the measurement of metals. Non-metallic elements rarely atomize with enough efficiency to permit analysis of trace levels. Metalloids such as arsenic and selenium have intermediate atomization efficiencies and may require specialized atomization methods for certain samples with trace levels of the elements. Mercury is another element that does not atomize well and often requires the use of a specialized atomization procedure. Flame methods are usually used for atomic absorbance measurements because most elements do not produce high enough concentrations of excited atoms in a flame to facilitate sensitive detection based on atomic emission. Alkali metals can be measured in a flame by atomic emission. Alkaline earth metals can possibly be measured by flame emission as well, provided the concentration is high enough. This page titled 6.2A: Flames is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
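One way to see why flames are generally better suited to absorbance than to emission measurements is to estimate the fraction of atoms that are thermally excited at flame temperatures. The sketch below applies the Boltzmann distribution to a two-level atom; the 589 nm transition energy and the 3:1 statistical-weight ratio (appropriate for the sodium 3p/3s levels) are used purely as an illustrative example.

```python
# Sketch: Boltzmann estimate of the excited-state fraction for a two-level atom,
# N*/N0 = (g*/g0) * exp(-dE / kT).  The 589 nm transition energy and g*/g0 = 3
# (the sodium 3p/3s statistical weights, 6/2) are used only as an illustration.
import math

h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
k = 1.381e-23      # Boltzmann constant, J/K

wavelength = 589e-9                 # m (sodium D line, illustrative)
dE = h * c / wavelength             # transition energy, J
g_ratio = 3                         # ratio of statistical weights g*/g0

for T in (2000, 2500, 3000, 6000):  # flame temperatures and an ICP-like temperature, K
    fraction = g_ratio * math.exp(-dE / (k * T))
    print(f"T = {T} K: N*/N0 ~ {fraction:.2e}")
```

Even at acetylene/nitrous oxide flame temperatures, only a tiny fraction of the atoms is excited, whereas at plasma-like temperatures the excited-state population is orders of magnitude higher.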
6.2B: Electrothermal Atomization – Graphite Furnace
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.2%3A_Atomization_Sources/6.2B%3A_Electrothermal_Atomization__Graphite_Furnace
The graphite furnace, which is pictured in , is a small, hollow graphite tube about 2 inches long by ¼ inch in diameter with a hole in the top. Graphite furnaces are used for atomic absorbance measurements. Radiation from the source shines through the tube to the detector. A small volume of sample (typically \(0.5\) to \(10\ \mu L\)) is introduced through the hole into the tube either through the use of a micropipette or a spray system. The entire furnace system is maintained under an argon atmosphere.
After introduction of the sample into the furnace, a three-step heating process is followed. The first step (heating to about 100oC) evaporates the solvent. The second (heating to about 800oC) ashes the sample to a metal powder or metal oxide. The third (heating to between 2,000-3,000oC) atomizes the sample. The first two steps are on the order of seconds to a minute. The third step occurs over a few milliseconds to seconds. The atomization step essentially creates a “puff” of gas phase atoms in the furnace and the absorbance is measured during this time, yielding a signal similar to what is shown in . This “puff” of atoms only occurs over a second or so before the sample is swept from the furnace. The area under the curve is integrated and related back to the concentration through the use of a standard curve.
The flame and the furnace differ in several important respects.
Sample size: One obvious difference is the amount of sample needed for the analysis. Use of the flame requires establishing a steady state system in which sample is continuously introduced into the flame. A flame analysis usually requires about 3-5 mL of sample for the measurement. Triplicate measurements on a furnace require less than \(50\ \mu L\) of sample. In cases where only small amounts of sample are available, the furnace is the obvious choice.
Sensitivity: The furnace has a distinct advantage over the flame with regards to sensitivity and limits of detection. One reason is that the entire sample is put into the furnace whereas only 2% of the sample makes it into the flame. Another is that the furnace integrates the signal over the “puff” of atoms whereas the flame involves establishment of a steady state reading. A disadvantage of the flame is that atoms only spend a brief amount of time (about \(10^{-4}\) seconds) in the optical path. Finally, for certain elements, the atomization efficiency (the percentage of the element that ends up as ground state atoms suitable for absorption of energy) is higher for the furnace than for the flame.
Reproducibility: The flame has a distinct advantage over the furnace in terms of reproducibility of measurements. Remember that more reproducible measurements mean that there is better precision. One concern is whether the amount of sample being introduced to the atomization source is reproducible. Even though we often use micropipettes and do not question their accuracy and reproducibility, they can get out of calibration and have some degree of irreproducibility from injection to injection. Introduction of the sample into the flame tends to be a more reproducible process.
Another concern with atomic methods is the presence of matrix effects. The matrix is everything else in the sample besides the species being analyzed. Atomic methods are highly susceptible to matrix effects. Matrix effects can enhance or diminish the response in atomic methods. For example, when using a flame, the response for the same concentration of a metal in a sample where water is the solvent may be different when compared to a sample with a large percentage of alcohol as the solvent (e.g., a hard liquor). 
One difference is that alcohol burns so it may alter the temperature of the flame. Another is that alcohol has a different surface tension than water so the nebulization efficiency and production of smaller aerosol particles may change. Another example of a matrix effect would be the presence of a ligand in the sample that leads to the formation of a non-volatile metal complex. This complex may not be as easy to vaporize and then atomize. While it is somewhat sample dependent, matrix effects are more variable with a furnace than the flame. An issue that comes up with the furnace that does not exist in the flame is the condition of the interior walls of the furnace. These walls “age” as repeated samples are taken through the evaporation/ash/atomize steps and the atomization efficiency changes as the walls age. The furnace may also exhibit memory effects from run to run because not all of the material may be completely removed from the furnace. Evaporation of the solvent in the furnace may lead to the formation of salt crystals that rupture with enough force to spew material out the openings in the furnace during the ashing step. This observation is why some manufacturers have developed spray systems that spread the sample in a thinner film over more of the interior surface than would occur if adding a drop from a micropipette. These various processes that can occur in the furnace often lead to less reproducibility and reduced precision (relative precision on the order of 5-10%) when compared to flame (relative precision of 1% or better) atomization. This page titled 6.2B: Electrothermal Atomization – Graphite Furnace is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
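Because the furnace produces a transient "puff" of atoms, the quantity that is related to concentration is the integrated area under the absorbance-versus-time peak. The sketch below shows that data-handling step with invented numbers: a simulated transient is numerically integrated and converted to concentration using a linear standard curve.

```python
# Sketch: integrating a graphite-furnace transient absorbance peak and
# converting the peak area to concentration with a linear standard curve.
# The transient shape, standards, and sensitivities are invented for illustration.
import numpy as np

t = np.linspace(0, 3, 300)                          # time, s
transient = 0.45 * np.exp(-((t - 1.0) / 0.25)**2)   # simulated absorbance "puff"
peak_area = np.sum(transient) * (t[1] - t[0])       # simple numerical integration (A*s)

# Standard curve: peak area (A*s) measured for known concentrations (ppb)
conc_std = np.array([0.0, 5.0, 10.0, 20.0])
area_std = np.array([0.01, 0.11, 0.21, 0.40])
slope, intercept = np.polyfit(conc_std, area_std, 1)

conc_sample = (peak_area - intercept) / slope
print(f"Peak area = {peak_area:.3f} A*s -> estimated concentration = {conc_sample:.1f} ppb")
```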
6.2C: Specialized Atomization Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.2%3A_Atomization_Sources/6.2C%3A_Specialized_Atomization_Methods
There are a few elements for which the atomization efficiencies with other sources are diminished to the point that trace analysis sometimes requires specialized procedures. The most common element for which this is done is mercury. Mercury is important because of its high toxicity. The procedure is referred to as a cold vapor method. One design of a cold vapor system consists of a closed loop containing a pump to circulate the air flow, a reaction vessel, and a gas cell. The sample is placed in the reaction vessel and all of the mercury is first oxidized to the +2 state through the addition of strong acids. When the oxidation is complete, tin(II) chloride is added as a reducing agent to reduce the mercury to neutral mercury atoms. Mercury has sufficient vapor pressure at room temperature that enough atoms enter the gas phase and distribute throughout the system, including the gas cell. A mercury hollow cathode lamp shines radiation through the gas cell and absorbance by atomic mercury is measured.Two other toxic elements that are sometimes measured using specialized techniques are arsenic and selenium. In this process, sodium borohydride is added to generate arsine (AsH3) and selenium hydride (H2Se). These compounds are volatile and are introduced into the flame. The volatile nature of these hydrides leads to a much higher atomization efficiency. Commercial vendors sell special devices that have been developed for the cold vapor or hydride generation processes. This page titled 6.2C: Specialized Atomization Methods is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
6.2D: Inductively Coupled Plasma
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.2%3A_Atomization_Sources/6.2D%3A_Inductively_Coupled_Plasma
A plasma is a gaseous mixture in which a significant proportion of the gas-phase species are ionized. An illustration of an inductively coupled plasma (ICP) is shown in . The device consists of a quartz tube (about ¾ inch in diameter), the end of which is wrapped in a high power radiofrequency (RF) induction coil. Argon gas flows down the quartz tube at a high rate (about 15 liters/minute). A current is run through the RF coil, which produces a magnetic field inside the end of the quartz tube. Sparking the argon creates some Ar+ ions, which are now paramagnetic and absorb energy from the magnetic field. The argon ions absorb enough energy that a plasma is created in the area of the tube covered by the RF induction coil. The nature of the magnetic field causes the plasma to flow in a closed annular path (basically a donut shape). What is especially impressive is that enough energy is absorbed from the magnetic field to heat the plasma to a temperature of about 6,000 K. As a comparison, this temperature is about the same as the temperature of the surface of the sun. The hot temperature means that new argon flowing into the plasma is ionized, which maintains the plasma. The plasma is kept from melting the walls of the quartz tube by an additional tangential flow of argon along the walls of the tube. Finally, the sample is nebulized and sprayed as an aerosol mist into the center of the plasma.
An ICP offers several advantages over flame and furnace atomization sources. One is that the plasma is so hot that it produces more complete atomization and forms many excited state atoms. Because sufficient numbers of atoms are excited, they can be detected by emission instead of absorbance. The illustration in shows the plume that forms in an ICP above the RF coil. Above the plasma is a zone in which argon regeneration occurs. A continuum background emission is given off in this zone. Above this zone in the plume, there are excited atoms that emit the characteristic lines of each particular element in the sample. In our discussion of fluorescence spectroscopy, we learned that emission methods have an inherent sensitivity advantage over absorbance methods. This occurs because emission entails measuring a small signal over no background, whereas absorbance entails measuring a small difference between two large signals. This same sensitivity advantage exists for measurements of atomic emission over atomic absorbance. Light emitted by atoms in the plume can be measured either radially (off to the side of the plume) or axially (looking down into the plume). Axial measurements are often more sensitive because of the increase in path length. However, in some cases, depending on the element profile in the plasma, radial measurements may be preferable. Instruments today often allow for either axial or radial measurements.
A second advantage of an ICP is that all of the elements can be measured simultaneously. All metals in the sample are atomized at the same time and all are emitting light. Some instruments measure elements in a sequential arrangement. In this case, the operator programs in the elements to be measured, and the monochromator moves one-by-one through the specific wavelengths necessary for the measurement of each element. Other instruments use an array detector with photoactive pixels that can measure all of the elements at once. Array instruments are preferable as the analysis will be faster and less sample is consumed. 
shows the printout of the pixels on an array detector that include and surround the lead emission that occurs at 220.353 nm. The peak due to the lead emission from the four different samples is apparent. Also note that there is a background emission on the neighboring pixels, and the intensity of this background emission must be subtracted from the overall emission occurring at the lead wavelength.
An observation with emission spectroscopy to be aware of is the possibility of self-absorption. We already discussed this in the unit on fluorescence spectroscopy. Self-absorption refers to the situation in which an excited state atom emits a photon that is then absorbed by another atom in the ground state. If the photon was headed toward the detector, then it will not be detected. Self-absorption becomes more of a problem at higher concentrations as the emitted photons are more likely to encounter a ground state atom. The presence of self-absorption can lead to a diminishment of the response in a calibration curve at high concentrations as shown in . Atomic emission transitions always correspond with absorption transitions for the element being analyzed, so the likelihood of observing self-absorption is higher in atomic emission spectroscopy than in fluorescence spectroscopy. For a set of samples with unknown concentrations of analyte, it may be desirable to test one or two after dilution to ensure that the concentration decreases by a proportional factor and that the samples are not so high in concentration as to be in the self-absorption portion of the standard curve.
Another advantage is that the high number of Ar+ ions and free electrons suppresses the ionization of other elements being measured, thereby increasing the number of neutral atoms whose emission is being measured. The argon used to generate the plasma is chemically inert compared to the chemical species that make up a flame, which increases the atomization efficiency. The inductively coupled plasma tends to be quite stable and reproducible. The combination of high temperature with a chemically inert environment reduces matrix effects in the plasma relative to other atomization sources, but it does not eliminate them, and matrix effects must always be considered. Some elements (e.g., mercury, arsenic, phosphorus) that are impractical to analyze on a flame or furnace instrument without specialized atomization techniques can often be measured on an ICP.
A final advantage of the plasma is that there are now methods to introduce the atoms into a mass spectrometer (MS). The use of the mass spectrometer may further reduce certain matrix effects. Also, mass spectrometry usually provides more sensitive detection than emission spectroscopy.This page titled 6.2D: Inductively Coupled Plasma is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
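The background subtraction described above for the array detector amounts to a few lines of arithmetic. The sketch below estimates the background from pixels on either side of the analyte peak and subtracts it from the on-peak intensity; the pixel intensities are invented for illustration.

```python
# Sketch: background correction of an emission peak measured on an array detector.
# The pixel window is imagined to be centered on an analyte line (e.g., Pb emission
# near 220.353 nm); all intensities are invented for illustration.
import numpy as np

# Intensities on 11 neighboring pixels; the analyte emission sits on the center pixels
pixels = np.array([102, 101, 103, 150, 420, 890, 460, 160, 104, 102, 103], dtype=float)

# Estimate the background from the outer pixels on each side of the peak
background = np.mean(np.concatenate([pixels[:3], pixels[-3:]]))

# Net (background-corrected) emission summed over the peak pixels
peak_region = pixels[3:8]
net_emission = np.sum(peak_region - background)
print(f"Background = {background:.0f} counts/pixel, net peak emission = {net_emission:.0f} counts")
```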
6.2E: Arcs and Sparks
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.2%3A_Atomization_Sources/6.2E%3A_Arcs_and_Sparks
Arc and spark devices can be used as atomization sources for solid samples. illustrates the setup for an arc device. A high voltage applied across a gap between two conducting electrodes causes an arc or spark to form. As the electrical arc or spark strikes the positively charged electrode, it can create a “puff” of gas phase atoms and emission from the atoms can be measured. The arc also creates a plasma between the two electrodes. Depending on the nature of the solid material to be measured, it can either be molded into an electrode or coated onto a carbon electrode. This page titled 6.2E: Arcs and Sparks is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
6.3A: Source Design
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.3%3A_Instrument_Design_Features_of_Atomic_Absorption_Spectrophotometers/6.3A%3A_Source_Design
While ICP devices do offer certain advantages over flame atomic absorption (AA) spectrophotometers, flame AAs are still widely used for measurement purposes. They are cheaper to purchase and operate than an ICP and, for someone only needing to measure a few specific elements on a regular basis, a flame AA may be the better choice. There are a variety of instrumental design features on AA spectrophotometers that are worth consideration.
One of these concerns the radiation source. Atomic absorption spectrophotometers require a separate source lamp, called a hollow cathode lamp, for each individual element that you wish to measure. An illustration of a hollow cathode lamp is shown in . The hollow cathode is coated with the element you wish to measure. The interior is filled with a relatively low pressure (1 Torr) of an inert gas such as argon or helium. A voltage is applied across the anode and cathode. The filler gas (e.g., argon) is ionized to Ar+ at the anode. The Ar+ ions are drawn toward the cathode and, when they strike the surface, sputter some of the coated atoms off into the gas phase. In the sputtering process, some of the atoms are excited and emit the characteristic lines of radiation of the atoms. Hollow cathode lamps cost about $200 apiece, so buying lamps for many elements can get a bit expensive.
Why is the cathode designed with a hollow configuration?
There are two reasons for the hollow cathode design. One is that the configuration helps to focus the light beam, allowing a higher intensity of photons to be directed toward the flame or furnace. The second is that it helps prolong the lifetime of the lamp. It is desirable to have sputtered atoms coat back onto the cathode, since it is only those atoms that can be excited by collisions with the Ar+ ions. Over time the number of atoms coated onto the cathode will diminish and the intensity of the lamp will decrease. The lamps also have an optimal current at which they should be operated. The higher the current, the more Ar+ ions strike the cathode. While a higher current will provide a higher intensity, it will also reduce the lamp lifetime. (Note: There is another reason not to use high currents that we will explore later after developing some other important concepts about the instrument design.) The lifetime of a hollow cathode lamp run at the recommended current is about 500 hours. The need to use a separate line source for each element raises the following question.
Why is it apparently not feasible to use a broadband continuum source with a monochromator when performing atomic absorption spectroscopy?
One thing you might consider is whether continuum lamps have enough power in the part of the electromagnetic spectrum absorbed by elements.
In what part of the electromagnetic spectrum do most atoms absorb (or emit) light?
Recollecting back to the emission of metal salts in flames, or the light given off in firework displays, it turns out that atoms emit, and hence absorb, electromagnetic radiation in the visible and ultraviolet portions of the spectrum.
Do powerful enough continuum sources exist in the ultraviolet and visible region of the spectrum?
Yes. We routinely use continuum sources to measure the ultraviolet/visible spectrum of molecules at low concentrations, so these sources certainly have enough power to measure corresponding concentrations of atomic species.
Another thing to consider is the width of an atomic line. What are two contributions to the broadening of atomic lines? 
(Hint: We went over both of these earlier in the course.)
Earlier in the course we discussed collisional and Doppler broadening as two general contributions to line broadening in spectroscopic methods. When these contributions to line broadening are considered, the width of an atomic line is observed to be in the range of 0.002-0.005 nm.
Using this information about the width of an atomic line, explain why a continuum source will not be suitable for measuring atomic absorption.
The information provided above indicates that atomic lines are extremely narrow. If we examine the effective bandwidth of a common continuum ultraviolet/visible source/monochromator system, it will be a wavelength packet on the order of 1 nm wide. superimposes the atomic absorption line onto the overall output from a continuum source. What should be apparent is that the reduction in power due to the atomic absorbance is only a small fraction of the overall radiation emitted by the continuum source. In fact, it is such a small portion that it is essentially non-detectable and lost in the noise of the system.
What is the problem with reducing the slit width of the monochromator to get a narrower line?
The problem with reducing the slit width is that it reduces the number of photons or source power reaching the sample. Reducing the slit width on a continuum source to a level that would provide a narrow enough line to respond to atomic absorption would reduce the power so that it would not be much above the noise. Therefore, hollow cathode lamps, which emit intense narrow lines of radiation specific to the element being analyzed, are needed for atomic absorption measurements.
With this understanding we can ask why the hollow cathode lamp has a low pressure of argon filler gas.
The pressure of the argon is low to minimize collisions of argon atoms with sputtered atoms. Collisions of excited state sputtered atoms with argon atoms will lead to broadening of the output of the hollow cathode lamp and potentially lead to the same problem described above with the use of a continuum source. A low pressure of argon in the lamp ensures that the line width from the hollow cathode lamp is less than the line width of the absorbing species.This page titled 6.3A: Source Design is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
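To make the argument above concrete, one can estimate how small the apparent absorbance would be if a 0.002-0.005 nm wide atomic line sat within a roughly 1 nm monochromator bandpass. The short calculation below assumes, for illustration only, that the atomic line completely absorbs the radiation within its own width and that the continuum output is flat across the bandpass.

```python
# Rough estimate of the apparent absorbance when a narrow atomic line sits inside
# a 1 nm continuum bandpass.  Assumes (for illustration) that the line absorbs all
# of the radiation within its own width and none outside it.
import math

bandpass = 1.0                      # nm, effective bandwidth of the monochromator
for line_width in (0.002, 0.005):   # nm, typical atomic line widths
    transmitted_fraction = 1 - line_width / bandpass   # P/Po
    apparent_absorbance = -math.log10(transmitted_fraction)
    print(f"line width {line_width} nm -> apparent absorbance ~ {apparent_absorbance:.4f}")
```

Even under this best-case assumption, the apparent absorbance is only about 0.001-0.002, which is easily lost in the noise of the measurement.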
6.3B: Interferences of Flame Noise
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.3%3A_Instrument_Design_Features_of_Atomic_Absorption_Spectrophotometers/6.3B%3A_Interferences_of_Flame_Noise
Background signal from the flame is measured at the detector and is indistinguishable from the source power. Flame noise in the form of emission from the flame, or changes in the flame background as a sample is introduced, can cause a significant interference in atomic methods.
Can you design a feature that could be incorporated into a flame atomic absorption spectrophotometer to account for flame noise?
We can account for flame noise and changes in the flame noise by using a device called a chopper. A chopper is a spinning wheel that alternately lets source light through to the flame and then blocks the source light from reaching the flame. illustrates several chopper designs. shows the output from the detector when using a chopper. When the chopper blocks the source, the detector only reads the background flame noise. When the chopper lets the light through, both the flame background and the source signal are detected. The magnitudes of Po and P are shown on the diagram. By subtracting the flame background from the combined source/flame signal, it is possible to measure the magnitudes of Po and P and to determine whether the introduction of the sample is altering the magnitude of the flame background.This page titled 6.3B: Interferences of Flame Noise is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
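The chopper correction described above amounts to subtracting the chopper-blocked (flame-only) reading from the chopper-open reading before computing absorbance. The sketch below shows that arithmetic with invented detector readings; the variable names are hypothetical.

```python
# Sketch: using chopper-blocked readings to subtract the flame background
# before computing absorbance.  All detector readings are invented.
import math

flame_only_blank  = 120.0   # chopper blocked, blank aspirated (flame background)
open_blank        = 5120.0  # chopper open, blank aspirated (source + background)
flame_only_sample = 125.0   # chopper blocked, sample aspirated
open_sample       = 3125.0  # chopper open, sample aspirated (attenuated source + background)

Po = open_blank - flame_only_blank     # source power with no analyte absorption
P  = open_sample - flame_only_sample   # source power attenuated by the analyte

absorbance = math.log10(Po / P)
print(f"Po = {Po:.0f}, P = {P:.0f}, A = {absorbance:.3f}")
```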
6.3C: Spectral Interferences
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.3%3A_Instrument_Design_Features_of_Atomic_Absorption_Spectrophotometers/6.3C%3A_Spectral_Interferences
Particulate matter in a flame will scatter light from the hollow cathode lamp. Some metals are prone to forming solid refractory oxides in the flame that scatter radiation. Organic matter in a flame may lead to carbonaceous particles that scatter radiation. This is a problem since the detector cannot distinguish between light that is scattered and light that is absorbed. Similarly, molecular species in a flame exhibit broadband absorption of light. shows a plot of an atomic absorption line superimposed over molecular absorption. As with scattered radiation, the detector cannot distinguish broadband absorption by molecular species from line absorption by atomic species.
Can you design a feature that could be incorporated into an atomic absorption spectrophotometer that can account for both scattered light and light absorbed by molecular species?
To address this question, we need to think back to the previous discussion of the source requirement for atomic absorption spectrophotometers. Earlier we saw that it was not possible to use a continuum source with a monochromator since the atomic absorption was so negligible as to be non-detectable. However, a continuum source will measure molecular absorption and will respond to any scattered radiation. The answer is to alternately send the output from the hollow cathode lamp and a continuum source (the common one used in AA instruments is a deuterium lamp) to the flame. The output of the hollow cathode lamp will be diminished by atomic absorption, molecular absorption and scatter. The continuum lamp will only be diminished by molecular absorption and scatter, since any contribution from atomic absorption is negligible. By comparing these two measurements, it is possible to correct the signal measured when the hollow cathode lamp passes through the flame for scattered radiation and molecular absorption. In atomic absorption spectroscopy, this process is referred to as background correction.
An alternative way of getting a broadened source signal to pass through the flame is known as the Smith-Hieftje method (named after the investigators who devised it). The Smith-Hieftje method only uses a hollow cathode lamp. Earlier, when we discussed hollow cathode lamps, we learned that the argon pressure inside the lamp is kept low to avoid collisional broadening. We also learned that the current is not set to a high value because it would sputter off too many atoms and shorten the lamp lifetime. Another observation when running a hollow cathode lamp at a high current is that the lamp emission lines broaden. This occurs because, at a high current, so many atoms get sputtered off into the hollow cathode that they collide with each other and broaden the wavelength distribution of the emitted light. The Smith-Hieftje method relies on using a pulsed lamp current. For most of the time, the lamp is run at its optimal current and emits narrow lines whose intensity diminishes when passing through the flame due to atomic absorption, molecular absorption and scatter. For a brief pulse, the current is set to a very high value such that the lamp emits a broadened signal. When this broadened signal passes through the flame, atomic absorption is negligible and only molecular absorption and scatter decrease the intensity of the beam.
A third strategy is to use what is known as the “two-line” method. This can be used in a situation where you have a source that emits two narrow atomic lines, one of which is your analysis wavelength and the other of which is close by. 
The analysis wavelength is diminished in intensity by atomic absorption, molecular absorption and scattering. A nearby line does not experience any atomic absorption and is reduced in intensity only by molecular absorption and scattering. While it might at first seem difficult to see how it is possible to get nearby atomic lines for many elements, there is something known as the Zeeman effect that can be used for this purpose. Without going into the details of the Zeeman effect, what is important to know is that exposing an atomic vapor to a strong magnetic field causes a slight splitting of the energy levels of the atom, producing a series of closely spaced lines for each electronic transition. The neighboring lines are about 0.01 nm from each other, making them ideal for monitoring background molecular absorption and scatter. Corrections using the Zeeman effect are more reliable than those using a continuum source. The magnetic field can be applied either to the hollow cathode lamp or the atomization source. The method is useful in flame and graphite furnace measurements.

This page titled 6.3C: Spectral Interferences is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
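Background correction with a continuum lamp amounts to simple arithmetic on the two alternating measurements described above. The sketch below (hypothetical readings, not instrument software) shows that arithmetic in Python; the function names and the example transmittance values are assumptions used only for illustration.

```python
# A minimal sketch of deuterium-lamp background correction, assuming matched
# power readings taken alternately through the same flame.
import math

def absorbance(P, P0):
    """Absorbance from transmitted power P and incident power P0."""
    return -math.log10(P / P0)

def background_corrected_absorbance(P_hcl, P0_hcl, P_d2, P0_d2):
    A_total = absorbance(P_hcl, P0_hcl)      # atomic + molecular absorption + scatter
    A_background = absorbance(P_d2, P0_d2)   # molecular absorption + scatter only
    return A_total - A_background            # approximately atomic absorption alone

# Hypothetical readings: 60% of the HCL beam and 90% of the D2 beam are transmitted.
print(background_corrected_absorbance(0.60, 1.00, 0.90, 1.00))  # ~0.176
```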
6.4A: Chemical Interferences
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.4%3A_Other_Considerations/6.4A%3A_Chemical_Interferences
It is also possible to have chemical processes that interfere with atomic absorption and emission measurements. It is important to realize that the chemical interferences described herein can potentially occur in flame, furnace and plasma devices. One example of a chemical interference occurs for metal complexes that have low volatility. These are often difficult to analyze at trace concentrations because the atomization efficiency is reduced to unacceptably low levels. One possibility is to use a higher temperature flame. Switching from an acetylene/air flame to an acetylene/nitrous oxide flame may overcome the volatility limitations of the metal complex and produce sufficient atomization efficiencies.

Another strategy is to add a chemical that eliminates the undesirable metal-ligand complex. One possibility is to add a ligand that preferentially binds to the metal to form a more volatile complex. This is referred to as a protecting agent. The sensitivity of calcium measurements is reduced by the presence of aluminum, silicon, phosphate and sulfate. Ethylenediaminetetraacetic acid (EDTA) complexes with the calcium and eliminates these interferences. The other strategy is to add another metal ion that preferentially binds to the undesirable ligand to free up the desired metal. This is known as a releasing agent. The presence of phosphate ion decreases the sensitivity of measurements of calcium. Excess strontium or lanthanum ions will complex with the phosphate and improve the sensitivity of the calcium measurement.

Another potential problem that can occur in flames and plasmas is for too high a fraction of the analyte metal to exist in an ionic form. Since neutral atoms are usually being measured (although, when using an ICP, it may actually be preferable to measure emission from an ionic species), the presence of ionic species reduces the sensitivity and worsens the detection limits. One possibility might be to use a cooler atomization source, although there are limits on how far this is feasible. The RF power used in an inductively coupled plasma does influence the temperature of the plasma, and there are recommendations about the source power for specific elements. Similarly, changes in the fuel/oxidant ratio cause changes in the temperature of a flame.

A more common strategy is to add something to the sample known as an ionization suppression agent. An ionization suppressor is a species that is easily ionized; common ionization suppressors include alkali metals such as potassium. Thinking in terms of Le Chatelier's principle, ionization of the suppressor produces a large excess of electrons (and of positive suppressor ions) in the flame or plasma, which suppresses the ionization of the analyte species.

This page titled 6.4A: Chemical Interferences is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
6.4B: Accounting for Matrix Effects
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Molecular_and_Atomic_Spectroscopy_(Wenzel)/6%3A_Atomic_Spectroscopy/6.4%3A_Other_Considerations/6.4B%3A_Accounting_for_Matrix_Effects
Flame noise, spectral interferences and chemical interferences are all examples of matrix effects. Atomic methods are among the analysis methods that are most sensitive to matrix effects. The previous sections have described ways of trying to account for some types of matrix effects. Even with these methods, there is still the possibility that some aspect of the matrix (remember that the matrix is everything except what is being analyzed) either enhances or decreases the signal measured at the detector. A concern is that standard solutions often have a different matrix than the unknowns that are being analyzed.

A process called standard addition can often be used to assess whether a sample has a matrix effect. If the sample does have a matrix effect, the standard addition procedure will provide a more accurate measurement of the concentration of analyte in the sample than the use of a standard curve. The process involves adding a series of small increments of the analyte to the sample and measuring the signal. The assumption is that the additional analyte experiences the same matrix effects as the species already in the sample. The additional increments are kept small to minimize the chance that they swamp out the matrix and no longer experience the same matrix effects.

The signal for each increment is plotted against the concentration of analyte that was added. Consider plots for two different samples, both of which have the exact same concentration of analyte, where one of the samples has a matrix that enhances the signal relative to the other. An examination of the plots shows that the sample with an enhancing matrix produces a linear plot with a higher slope than the linear plot obtained for the other sample. Each plot is then extrapolated back to the x-intercept; the magnitude of the x-intercept corresponds to the concentration of analyte in the original sample (a short numerical sketch of this extrapolation is given at the end of this section).

The experimental steps involved in conducting a standard addition are more complex than those involving the use of a standard curve. If someone is testing a series of samples with similar properties that have similar matrices, it is desirable to use the standard addition procedure on one or a few samples and compare the concentration to that obtained using a standard curve. If the two results are similar, then the matrix effects are minimal and the use of a standard curve is justified.

This page titled 6.4B: Accounting for Matrix Effects is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Thomas Wenzel via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
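To make the extrapolation concrete, the sketch below works through a hypothetical standard addition in Python. The data values are invented, and the calculation assumes the added increments cause negligible dilution of the sample.

```python
import numpy as np

# Hypothetical standard-addition data: concentration of analyte added to the
# sample (in the units of the final answer) and the measured signal for each spike.
added  = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
signal = np.array([0.20, 0.28, 0.36, 0.44, 0.52])

slope, intercept = np.polyfit(added, signal, 1)   # fit: signal = slope*added + intercept
x_intercept = -intercept / slope                  # extrapolation back to zero signal
c_sample = abs(x_intercept)                       # its magnitude is the analyte concentration

print(f"x-intercept = {x_intercept:.2f}, analyte concentration = {c_sample:.2f}")
# -> x-intercept = -2.50, analyte concentration = 2.50 (in the added-concentration units)
```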
1.1: Empirical Relationships and Specifics of Calculation Methods Used for Solving Non-Isothermal Kinetic Problems
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Non-Isothermal_Kinetic_Methods_(Arhangel'skii_et_al.)/01%3A_Chapters/1.01%3A_Empirical_Relationships_and_Specifics_of_Calculation_Methods_Used_for_Solving_Non-Isothermal_Kinetic_Problems
The solving of the kinetic problem consists of several steps. At the first step the inverse kinetic problem is solved by evaluating the Arrhenius parameters. However, the solution requires a series of experiments with different heating rates and the determination of optimal experimental conditions. Consider a process of the type

\[\ce{A_{s} -> B_{s} + C_{g}} \label{1.1}\]

where As is the initial powdered solid reagent forming a flat layer, Bs is the solid reaction product located on the grains of the initial reagent, and Cg is the gaseous reaction product released into the environment. This process is referred to as quasi-one-stage, because reactions of the type of Equation \ref{1.1} involve at least three stages: the chemical reaction itself, heat transfer, and mass transfer. However, depending on the experimental conditions, one of the stages can be limiting. In our case, we believe the chemical reaction to be the rate-limiting stage. Let us assume that the dependence of this process on time and temperature is a single-mode thermoanalytical curve.

For such a process, the change in the reaction rate as a function of temperature can be described as follows:

\[-\frac{d \alpha}{d t}=A e^{\frac{-E}{R T}} f(\alpha) \label{1.2}\]

where A and E are the Arrhenius parameters, T is the temperature, and f(α) is some function of the conversion of the reaction characterizing its mechanism. The conversion \(\alpha\), according to Equation \ref{1.1}, is defined as the fraction of the initial reagent As that has reacted by time ti and changes from 0 to 1. It is worth noting that the conversion \(\alpha\) can be calculated from TG and DSC data, as well as from differential thermogravimetry (DTG) data.

The so-called non-isothermal kinetic techniques are widely used owing to the apparent simplicity of processing experimental data according to the formal kinetic model described by Equation \ref{1.2}. Equation \ref{1.2} characterizes a single measurement curve. As applied to experimental TA data, the kinetic model can be represented in the following form:

\[-\left.\frac{\mathrm{d} \alpha}{\mathrm{d} T}\right|_{T=T_{i}}=\frac{A}{\beta_{i}} \exp \left(-\frac{E}{R T_{i}}\right) \alpha_{i}^{m}\left(1-\alpha_{i}\right)^{n} \label{1.3}\]

where (1 - αi) is the experimentally measured degree of reaction incompleteness, Ti is the current temperature in kelvins (K), βi is the instantaneous heating rate (in our experiment, βi = β = constant), A is the preexponential factor, E is the activation energy, and m and n are the kinetic equation parameters.

Equation \ref{1.3} is easily linearized:

\[\ln \left(-\left.\frac{\mathrm{d} \alpha}{\mathrm{d} T}\right|_{T=T_{i}}\right)=\ln \left(\frac{A}{\beta_{i}}\right)-\frac{E}{R T_{i}}+m \cdot \ln \alpha_{i}+n \cdot \ln \left(1-\alpha_{i}\right) \label{1.4}\]

that is, it reduces to a linear least-squares problem. The least-squares problem for Equation \ref{1.4} reduces to solving the following set of equations:

\[C \vec{x}=\vec{b} \label{1.5}\]

where C is the matrix of coefficients of Equation \ref{1.4} (Ci1 = 1/βi, Ci2 = -1/Ti, Ci3 = ln αi, Ci4 = ln(1-αi)), and \(\vec{x}\) is the vector of the sought parameters. Since the experimental data are random quantities, they are determined with a certain error. Solving the problem in Equation \ref{1.5} involves certain difficulties. The 1/T value changes insignificantly in the temperature range of the reaction; therefore the first and second columns of matrix C are practically identical (up to a constant multiplier), and as a result matrix C is almost degenerate (ill-conditioned). A detailed description of this problem can be found in the literature. The sketch below illustrates this ill-conditioning numerically.
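A rough numerical illustration of this near-degeneracy, using synthetic data and assumed parameter values rather than real measurements, is the following:

```python
# Sketch of why fitting Equation 1.4 to a single curve is ill-conditioned: the
# columns built from 1/beta and -1/T are almost proportional because 1/T barely
# changes over the reaction interval. All numbers are assumed for illustration.
import numpy as np

beta = 10.0                                  # K/min, single heating rate
T = np.linspace(500.0, 560.0, 30)            # K, temperature range of the reaction
alpha = np.linspace(0.05, 0.95, 30)          # conversion over the same interval

C = np.column_stack([
    np.full_like(T, 1.0 / beta),             # column from 1/beta_i
    -1.0 / T,                                # column from -1/T_i
    np.log(alpha),                           # column from ln(alpha_i)
    np.log(1.0 - alpha),                     # column from ln(1 - alpha_i)
])

print(f"condition number of C: {np.linalg.cond(C):.2e}")
# A very large condition number means the least-squares solution (and hence the
# Arrhenius parameters obtained from one curve) is extremely sensitive to noise.
```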
To minimize uncertainty one should provide several experiments with different heating rates and calculate kinetic parameters using all experimental data.The above arguments suggest only that the calculation of Arrhenius parameters using a single thermoanalytical curve is incorrect. The Netzsch Termokinetics Software uses a different approach, when the Arrhenius parameters are estimated using model-free calculation methods according to a series of experiments at different heating rates (see below).The inverse kinetic problem thus solved does not guarantee the adequate description of experimental data. To verify the adequacy of the solution, it is necessary to solve the direct problem, that is, to integrate Equation \ref{1.3} and compare the calculated and experimental dependences in the entire range of heating rates. This procedure has been implemented in the NETZSCH Termokinetics, Peak Separation, and Thermal Simulation program packages with the help of NETZSCH. These programs and their applications are discussed in detail below.Joint processing of experimental results obtained at different heating rates necessitates the constancy of the mechanism of a process, that is, the constancy of the type of f(α) function at different heating rates. Whether this condition is met can be verified by affine transformation of experimental curves, that is, by using reduced coordinates. To do this, one should select variables on the abscissa and ordinate so that each of them would change independently of the process under consideration. In addition, it is desirable that the relationship between the selected variables and experimental values be simple. These requirements are met by reduced coordinates. A reduced quantity is defined as the ratio of some variable experimental quantity to another experimental quantity of the same nature. As one of the variables, the conversion α is used, which is defined as the fraction of the initial amount of the reagent that has converted at a given moment of time. In heterogeneous kinetics, this variable is, as a rule, the conversion of the initial solid reagent.If it is necessary to reflect the relationship between the conversion and time or temperature in the thermoanalytical experiment (at various heating rates), then the α is used as the ordinate and the reduced quantity equal to the ratio of the current time t or temperature T corresponding to this time to the time tα* it takes to achieve the desired conversion. For example, if the time or temperature required to achieve 50 or 90% conversion (α* = 0.5 or 0.9) is selected, the reduced quantities will be t/t0.5 (T/T0.5) or t/t0.9 (T/T0.9).The above formalism pertains to the chemical stage of a heterogeneous process. However, it should be taken into account that in general case heterogeneous process may involve heat and mass transfer, that is, the process may be never strictly one-step. The multistage character of a process significantly complicates the solution of the kinetic problem. In this case, a set of at least three differential equations with partial derivatives should be solved, which often makes the problem unsolvable. At the same time, experimental conditions can be found under which one of the stages, most frequently, the chemical stage, would be the rate-limiting stage of the process. 
Such experimental conditions are found in a special kinetic experiment.This page titled 1.1: Empirical Relationships and Specifics of Calculation Methods Used for Solving Non-Isothermal Kinetic Problems is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Igor V. Arhangel`skii, Alexander V. Dunaev, Irina V. Makarenko, Nikolay A. Tikhonov, & Andrey V. Tarasov (Max Planck Research Library for the History and Development of Knowledge) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
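As a rough illustration of the reduced-coordinates check described above, the sketch below rescales synthetic conversion curves by the temperature at which α = 0.5. The simulated curves and parameter values are assumptions chosen only to show the bookkeeping; for a single mechanism the rescaled curves should nearly coincide at every heating rate.

```python
# Sketch (synthetic curves, assumed parameters) of the reduced-coordinates check:
# compare conversion against T / T_0.5 for several heating rates.
import numpy as np

def simulated_curve(beta, n_points=200):
    """Return (T, alpha) for a made-up single-step process at heating rate beta."""
    T = np.linspace(450.0, 600.0, n_points)                  # K
    T_half = 500.0 + 8.0 * np.log(beta)                      # assumed shift with heating rate
    alpha = 1.0 / (1.0 + np.exp(-(T - T_half) / 6.0))        # sigmoidal conversion curve
    return T, alpha

for beta in (2.5, 5.0, 10.0, 20.0):                          # K/min
    T, alpha = simulated_curve(beta)
    T_half = np.interp(0.5, alpha, T)                        # temperature at alpha = 0.5
    reduced_T = T / T_half
    # Reduced temperature needed to reach alpha = 0.9; for an unchanged mechanism
    # these values are nearly identical for all heating rates.
    print(beta, round(float(np.interp(0.9, alpha, reduced_T)), 4))
```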
1.2: Kinetic Experiment and Separation Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Non-Isothermal_Kinetic_Methods_(Arhangel'skii_et_al.)/01%3A_Chapters/1.02%3A_Kinetic_Experiment_and_Separation_Methods
As is known, the thermoanalytical experiment is carried out under variable temperature conditions, most frequently at a constant heating rate. Thereby, a so-called quasi-stationary temperature gradient appears in the bottom-heated sample. The temperature at every point of a thermally inert cylindrical sample with radius R and height H ≤ 4R is described by the following equation:

\[T_{i}\left(r_{i}, t\right)=T_{0}+\beta t-\frac{\left(\beta R^{2}-r^{2}\right)}{4 a}\left[1+\frac{2 \lambda}{h R}-\frac{r^{2}}{R^{2}}\right] \label{2.1}\]

where Ti(ri,t) is the temperature at the ith point of the sample, T0 is the starting temperature of the experiment, β is the temperature change rate dT/dt = constant, t is time, R is the radius of the cylindrical sample, ri is the radius vector of a point of the sample, a is the thermal diffusivity, λ is the thermal conductivity, and h is the heat emission coefficient in the sample–holder system. Equation \ref{2.1}, which is an analytical representation of the solution to the heat transfer equation under certain assumptions, shows that a so-called quasi-stationary temperature regime is established in the sample, corresponding to a parabolic temperature field in the sample–holder system that is identical at any moment in time before the onset of thermal processes. Hence, the “conversion field” has the same shape, that is, each point of the sample is in its own state different from that of a neighboring one. Thus, different processes can occur at different points of the sample. In a chemical reaction accompanied by heat release or absorption (exo- and endothermic reactions), the temperature field can change significantly and temperature gradients can be as large as several tens of kelvins. To avoid this, conditions should be created under which the temperature gradients in the reacting system would not exceed the quasi-stationary gradient within the error of determination. This requirement is fulfilled under heat dilution conditions, when the temperature field and heat exchange conditions are dictated by the thermophysical properties of the sample holder. This occurs in studying small amounts of a substance, when the sample holder is made of a metal with high thermal conductance and its weight significantly exceeds the weight of the sample. Under these conditions, a so-called degenerate regime is realized, and heat exchange conditions have little effect on the kinetics of the process.

The mathematical description of mass transfer events accompanying heterogeneous processes is beyond the scope of this section. Rather, the aim is to show, at the qualitative level, how they can be experimentally affected. Let us consider the simplest heterogeneous process described by Equation 1.1. In this process, several possible diffusion steps can be discerned. One of these is diffusion of gaseous products through the solid surface–environment interface. This mass transfer step can be controlled by purging the reaction volume with an inert gas. If the measured signal remains constant at different conversions when the purge gas flow rate is varied, it can be stated that this type of diffusion is not a rate-limiting stage. Thus, diffusion hindrances can manifest themselves at different steps of the process under consideration and can depend on both the design of the equipment and the nature of the substances involved in the process.

A conclusion that can be drawn from the above is that, to mitigate a noticeable effect of transfer processes on experimental results, small amounts of the initial reagent (a few milligrams) with minimal porosity or lower heating rates should be used.
In addition, it is important that the sample is placed on a rather large surface and that purge gases at a rather high flow rate are used.If our experiment is carried out under conditions such that transfer phenomena have no effect on the shape of thermoanalytical curves, the reaction can be thought of, to a first approximation, as a quasi-one-stage process representing the chemical transformations of reaction 1.1. However, the experimental results depend also on a change in the morphology of the initial reagent, i.e., on the formation of the reaction product, first of all, on the reagent surface. In this case, the conversion kinetics is dominated by the nucleation of the new phase and the subsequent growth of its nuclei. For heterogeneous processes, we are usually not aware of what atomic or molecular transformations lead to the nucleation of the product phase, so that the process is represented by a set of some formally geometric transformations. Non-isothermal kinetics is aimed at finding the forms of functions and their parameters describing these transformations.Nucleation is related to the chemical stage of the process. However, because of the complexity and diversity of nucleation processes, we believe it is necessary to briefly dwell on this phenomenon, without going into theoretical descriptions of different steps of these processes. Consideration focuses on the manifestations of nucleation processes in the thermoanalytical experiment and on the proper design of the latter.In the case of heterogeneous topochemical processes, the interface between the initial solid reagent and the solid reaction product is most frequently formed via nucleation processes. The reaction can simultaneously begin over the entire surface. In addition, nucleation can occur at separate sites of the surface, or by the branched chain mechanism, or another one. Possible mechanisms of these processes have been well documented (see, e.g.,). Here, we do not intend to go into details of all possible mechanisms; we will consider these phenomena in more detail when describing NETZSCH software.For carrying out a kinetic experiment and obtaining reproducible results, it is necessary to standardize the surface of the initial reagent and create a definite amount of nuclei prior to the kinetic experiment. In the framework of thermoanalytical study, we can measure only the conversion (determination of the conversion or the overall reaction rate). Here, we do not consider the use of other physical methods for determining the number of nuclei on the reagent surface, for example, direct nucleus counting under the microscope. In thermal analysis, the most accessible and efficient method is natural nucleation under standard conditions. In this method, prior to the kinetic experiment, a noticeable amount of the initial reagent is heat treated up to a certain conversion. As a rule, the conversion amounts to several percent. The method is based on the fact that the last nucleation stages have little effect on the development of the reaction interface, since a large part of potential centers has already been activated. The sample thus standardized is used in all kinetic experiments, that is, at different heating rates.Using the CuSO4·5H2O dehydration as an example, let us consider how the thermoanalytical curves change after natural nucleation under standard conditions as compared with the nonstandardized sample. 
Compare the TG and DTG curves recorded for the initial untreated copper sulfate pentahydrate (curve 2) and for the sample subjected to natural nucleation under standard conditions (curve 1). To this end, the powder of the initial reagent was heated at T = 70 °C until 10% H2O was lost. As is seen, the shapes of the TG and DTG curves of the treated reagent differ from those of the initial reagent. Hence, the dehydration kinetics changes.

Thus, using non-isothermal kinetics methods necessitates carrying out a special experiment involving a series of runs at various heating rates, using methods of separation of the rate-limiting stage, small amounts of the solid reagent, purge gases, crucibles of appropriate size, and so forth.

This page titled 1.2: Kinetic Experiment and Separation Methods is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Igor V. Arhangel`skii, Alexander V. Dunaev, Irina V. Makarenko, Nikolay A. Tikhonov, & Andrey V. Tarasov (Max Planck Research Library for the History and Development of Knowledge) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.3: NETZSCH ThermoKinetics Software
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Non-Isothermal_Kinetic_Methods_(Arhangel'skii_et_al.)/01%3A_Chapters/1.03%3A_NETZSCH_ThermoKinetics_Software
The NETZSCH software suite intended for use in kinetic calculations from thermoanalytical data is based on the above principles. It naturally has its own specific features and computational procedure. Let us consider the software design philosophy and operational principles. In this workbook, NETZSCH Proteus® 4.8.3 and NETZSCH Thermokinetics® 3.0 are used to demonstrate the main operating procedures. As NETZSCH continuously improves its software, we strongly recommend using current versions of Proteus® and Thermokinetics®. New versions of the software always include the basic procedures presented in this workbook, as well as new useful functions.

The inverse kinetic problem is solved with the use of the model-free Friedman and Ozawa–Flynn–Wall methods. The model-free methods (Friedman analysis, Ozawa–Flynn–Wall analysis, evaluation according to ASTM E698) are applied to non-isothermal kinetic analysis when the experimental data are represented as a set of measurements at different heating rates. The model-free methods provide information on kinetic parameters, such as the activation energy and preexponential factor, without determining a concrete kinetic model. The Arrhenius parameters obtained by these methods are used as starting approximations in solving the direct kinetic problem. This solution makes it possible to find the type of function approximating the experimental data and to refine the Arrhenius parameters.

In thermal analysis, the concept of conversion is used. The NETZSCH ThermoKinetics software operates with the partial mass loss (for thermogravimetry) and the partial peak area (for DSC, DTA and mass spectrometry) rather than with the common term conversion degree. For integral measurements (thermogravimetry, dilatometry), the measured curve is converted to the plot of conversion αi versus time ti by Equation 3.1:

\[\alpha_{i}=\frac{m\left(t_{s}\right)-m\left(t_{i}\right)}{m\left(t_{s}\right)-m\left(t_{f}\right)} \label{3.1}\]

where m(ts) is the signal at the starting moment of time ts, m(ti) is the signal at the ith moment of time ti, and m(tf) is the signal at the final moment of time tf. For differential measurements (DSC, DTA, mass spectrometry), the conversion is calculated by Equation 3.2:

\[\alpha_{i}=\frac{\int_{t_{s}}^{t_{i}}[S(t)-B(t)] d t}{\int_{t_{s}}^{t_{f}}[S(t)-B(t)] d t} \label{3.2}\]

where S(t) is the signal at the moment of time t and B(t) is the baseline at the moment of time t.

The Friedman method is a differential method in which the initial experimental parameter is the instantaneous rate dαi/dti. Given several measurements at different heating rates, one can plot the logarithm of the rate against inverse temperature for a given αi. As noted above, Equation 1.4 can be easily linearized for any f(α), yielding a linear dependence of the logarithm of the rate on inverse temperature at a given αi. In the Friedman method, the slope of this line, equal to -E/R, is found. Thus, the activation energy for each conversion value can be calculated from the slope of the ln(dαi/dT) vs 1/T curve. The conversion rate on the left-hand side of the equation is found directly from the initial measured curve (e.g., thermogravimetric) by its differentiation with respect to time.
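Before the Friedman treatment is applied, the bookkeeping implied by Equation 3.1 and the numerical differentiation it feeds can be sketched as follows. The TG trace and all numbers are synthetic assumptions; this is not the Proteus implementation.

```python
# Sketch of converting a mass-loss signal into conversion alpha via Equation 3.1
# and differentiating it to obtain the rate needed for a Friedman-type analysis.
import numpy as np

beta = 10.0 / 60.0                              # heating rate, K/s (10 K/min)
t = np.linspace(0.0, 3600.0, 600)               # time, s
T = 450.0 + beta * t                            # temperature program, K

# Made-up TG signal: mass falls sigmoidally from 25 mg to 20 mg during the step.
m = 20.0 + 5.0 / (1.0 + np.exp((T - 520.0) / 8.0))

alpha = (m[0] - m) / (m[0] - m[-1])             # Equation 3.1 with m(ts)=m[0], m(tf)=m[-1]
dalpha_dt = np.gradient(alpha, t)               # instantaneous rate d(alpha)/dt

# Values a Friedman plot would use at, say, alpha = 0.5 for this heating rate:
i = np.argmin(np.abs(alpha - 0.5))
print(f"T = {T[i]:.1f} K, ln(d(alpha)/dt) = {np.log(dalpha_dt[i]):.3f}")
```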
In practice, this differentiation is performed with the NETZSCH Proteus software used for processing the experimental data.

\[\ln \left(-\frac{d \alpha}{d T}\right)_{T=T_{i}}=\ln \left(\frac{A}{\beta_{i}}\right)-\frac{E}{R T_{i}}+\ln \left(f\left(\alpha_{i}\right)\right) \label{3.3}\]

The second Arrhenius parameter, the logarithm of the preexponential factor, is also calculated from Equation 3.3. Thus, the software allows the calculation of both Arrhenius parameters, the activation energy and the logarithm of the preexponential factor. The calculation results are given in tabulated form as the dependence of the Arrhenius parameters on the conversion, as well as in graphical form.

The Ozawa method uses the integral dependence for solving Equation 1.3. Integration of the Arrhenius equation leads to Equation 3.4:

\[g(\alpha)=\int_{0}^{\alpha} \frac{\mathrm{d} \alpha}{f(\alpha)}=\frac{A}{\beta} \int_{T_{0}}^{T} \exp \left(-\frac{E}{R T}\right) \mathrm{d} T \label{3.4}\]

If T0 is lower than the temperature at which the reaction occurs actively, the lower integration limit can be taken as zero, T0 = 0, giving Equation 3.5, and after integration Equation 3.5 takes the form of Equation 3.6:

\[g(\alpha)=\frac{A}{\beta} \int_{0}^{T} \exp \left(-\frac{E}{R T}\right) \mathrm{d} T \label{3.5}\]

\[g(\alpha)=\frac{A E}{\beta R} p(z), \qquad p(z)=\int_{z}^{\infty} \frac{e^{-u}}{u^{2}} \mathrm{~d} u, \qquad z=\frac{E}{R T} \label{3.6}\]

Analytical calculation of the integral in Equation 3.6 is impossible; therefore, it is determined as follows. Using the Doyle approximation (ln p(z) = -5.3305 - 1.052z), we reduce Equation 3.6 to Equation 3.7:

\[\ln \beta_{i}=\ln \left(\frac{A E}{R\, g(\alpha)}\right)-5.3305-1.052 \frac{E}{R T} \label{3.7}\]

It follows from Equation 3.7 that, for a series of measurements with different heating rates at the fixed conversion value α = αk, the plot of the dependence

\[\ln \beta_{i} \text { versus } \frac{1}{T_{i k}} \label{3.8}\]

is a straight line with the slope m = -1.052 E/R. Tik is the temperature at which the conversion αk is achieved at the heating rate βi. It is evident that the slope of the linear dependence is directly proportional to the activation energy (a short numerical sketch of this isoconversional fit is given at the end of this section). If the activation energy has the same value at different αk values, we can state with confidence that the reaction is one-stage. Otherwise, a change in the activation energy with an increase in the conversion is evidence that the process is multistep. The separation of the variables in Equation 1.3 is thus impossible. If E, αk, and zik are known, ln A can be calculated from Equation 3.7 rearranged as Equation 3.9:

\[\ln A=\ln \beta_{i}+\ln g\left(\alpha_{k}\right)-\ln \frac{E}{R}+5.3305+1.052 z_{i k} \label{3.9}\]

The presence of several extreme points on the experimental TA curves is unambiguous evidence of the multistep character of the process. In this case, the use of the NETZSCH Peak Separation program makes it possible to separate individual stages and to estimate the Arrhenius parameters for each stage. The Peak Separation program is discussed below.

The direct kinetic problem is solved by the linear least-squares method for one-stage reactions or by the nonlinear least-squares method for multistage processes. For one-stage reactions, it is necessary to choose the type of function that best approximates (from the statistical viewpoint) the experimental curves for all heating rates used. The NETZSCH Thermokinetics software includes a set of basic equations describing the macrokinetics of the processes to be analyzed. Each stage of a process can correspond to one (or several) of the equations listed in Table 3.1. The type of f(α) function depends on the nature of the process and is usually selected a priori. For the user's convenience, the notation of parameters and variables in Table 3.1 is the same as in the Thermokinetics software. Here, the p parameter corresponds to the conversion, p = α, and e = 1 - α. If the type of function corresponding to the process under consideration is unknown, the program performs calculations for the entire set of functions presented in Table 3.1.
Then, on the basis of statistical criteria, the function is selected that best approximates the experimental data. This approach is a formal statistical-geometric method and, to a first approximation, the type of function approximating the experimental curves for all heating rates has no physical meaning. Even for quasi-one-stage processes where the chemical conversion stage has been separated, the equations presented in Table 3.1 can be correlated with a change in the morphology of the initial reagent, but no unambiguous conclusions can be drawn about the types of chemical transformations responsible for the nucleation of the reaction product. It often occurs that several functions adequately describe the experiment according to statistical criteria (Table 3.1 lists the reaction types and the corresponding forms of the function f(α) in Equation 1.2). The choice of the function is based on the search for the physical meaning of the resulting relation. In this context, some a priori ideas are used concerning the mechanisms of possible processes in the system under consideration. These can be literature data, results of other physicochemical studies, or general considerations based on the theories of heterogeneous processes. However, such kinetic analysis provides a better insight into the effect of various external factors on the change in the morphology of the initial reagent and on the course of the process as a whole. Let us consider in detail the procedure of kinetic analysis based on the thermoanalytical experimental data for the dehydration of calcium oxalate monohydrate (CaC2O4 ∙ H2O).

This page titled 1.3: NETZSCH ThermoKinetics Software is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Igor V. Arhangel`skii, Alexander V. Dunaev, Irina V. Makarenko, Nikolay A. Tikhonov, & Andrey V. Tarasov (Max Planck Research Library for the History and Development of Knowledge) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
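Returning to the Ozawa–Flynn–Wall relation (Equations 3.7 and 3.8), the isoconversional fit referred to above can be sketched as follows. The heating rates and temperatures are invented for illustration only.

```python
# Sketch of the Ozawa-Flynn-Wall step: at a fixed conversion alpha_k, ln(beta) is
# regressed on 1/T_ik and the slope m = -1.052*E/R yields the apparent activation energy.
import numpy as np

R = 8.314  # J/(mol K)

# Hypothetical temperatures (K) at which alpha_k = 0.5 is reached at each heating rate.
beta = np.array([2.5, 5.0, 10.0, 20.0])        # K/min
T_k  = np.array([512.0, 521.0, 530.5, 540.3])  # K (assumed values)

slope, intercept = np.polyfit(1.0 / T_k, np.log(beta), 1)
E = -slope * R / 1.052                          # J/mol, from m = -1.052 E/R
print(f"apparent activation energy = {E/1000:.0f} kJ/mol")
```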
1.4: Kinetic Analysis Based on Thermogravimetry Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Non-Isothermal_Kinetic_Methods_(Arhangel'skii_et_al.)/01%3A_Chapters/1.04%3A_Kinetic_Analysis_Based_on_Thermogravimetry_Data
Let us consider the dehydration as the reaction that occurs by the scheme \(\ce{CaC2O4 . H2O -> CaC2O4 + H2O}\). The experimental curves must first be exported from the measurement file, that is, as an ASCII file. To do this, the user must select the desired curve and click the Extras Export data button in the Proteus toolbar. The user must enter the lower and upper limits of the data range to be exported. To correctly specify the limits, the derivative of the selected curve is used. The left- and right-hand limits are chosen in the ranges where the derivative becomes zero. Remember that the derivative of the selected curve can be obtained by clicking the corresponding icon in the NETZSCH Proteus program window.

Let us consider the computation results obtained by the linear regression method for the CaC2O4 · H2O dehydration. For the reaction under consideration, the best fitting function is the Prout–Tompkins equation with autocatalysis (the Bna code), which is indicated at the top left of the results table. However, before discussing the meaning of the results obtained, let us consider the F-test (Fit Quality) window. Step significance windows present the statistical analysis of the fit quality for different models. This allows us to determine, using statistical methods, which of the models provides the best fit for the experimental data.

To perform such an analysis, Fisher's exact test is used. In general, Fisher's test is a variance ratio which makes it possible to verify whether the difference between two independent estimates of the variance of some data samples is significant. To do this, the ratio of the two variances is compared with the corresponding tabulated value of the Fisher distribution for a given number of degrees of freedom and significance level. If the ratio of the two variances exceeds the corresponding theoretical Fisher test value, the difference between the variances is significant.

In the Thermokinetics software, Fisher's test is used for comparing the fit qualities ensured by different models. The best-fit model, that is, the model with the minimal sum of squared deviations, is taken as a reference (conventionally denoted as model 1). Then, each model is compared to the reference model. If the Fisher test value does not exceed the critical value, the difference between current model 2 and reference model 1 is insignificant; there is then no reason to believe that model 1 provides a more adequate description of the experiment than model 2. The Fexp value is estimated by means of Fisher's test:

\[F_{e x p}=\frac{L S Q_{1} / f_{1}}{L S Q_{2} / f_{2}} \label{4.2}\]

The Fexp value is compared with the Fisher distribution Fcrit(0.95) for the significance level of 0.95 and the corresponding number of degrees of freedom (a small numerical sketch of this comparison is given at the end of this section).

In the Const. column, the option ‘false’ is set for the parameters that should be varied and the option ‘true’ is chosen for the parameters that remain constant. The three columns to the right of this column are intended for imposing constraints on the selected values. The computation results are presented in the corresponding table.

This page titled 1.4: Kinetic Analysis Based on Thermogravimetry Data is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Igor V. Arhangel`skii, Alexander V. Dunaev, Irina V. Makarenko, Nikolay A. Tikhonov, & Andrey V. Tarasov (Max Planck Research Library for the History and Development of Knowledge) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
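As a numerical sketch of the F-test comparison referred to above, consider the snippet below. The fit statistics are hypothetical, and the ratio is formed with the candidate model's variance in the numerator so that values above the critical value flag a significantly worse fit; that orientation is an assumption about how Equation 4.2 is intended to be applied.

```python
# Sketch: compare a candidate kinetic model against the best-fit (reference) model
# using a variance-ratio F-test at the 0.95 significance level.
from scipy import stats

LSQ1, f1 = 4.2e-3, 120   # hypothetical reference model: sum of squares, degrees of freedom
LSQ2, f2 = 5.0e-3, 121   # hypothetical competing model

F_exp = (LSQ2 / f2) / (LSQ1 / f1)          # candidate variance over reference variance (assumed orientation)
F_crit = stats.f.ppf(0.95, f2, f1)

print(f"F_exp = {F_exp:.3f}, F_crit(0.95) = {F_crit:.3f}")
if F_exp <= F_crit:
    print("Difference is not significant: the candidate fits as well as the reference model.")
else:
    print("Difference is significant: the candidate fits worse than the reference model.")
```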
1.5: Kinetic Analysis Based on Differential Scanning Calorimetry Data
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Non-Isothermal_Kinetic_Methods_(Arhangel'skii_et_al.)/01%3A_Chapters/1.05%3A_Kinetic_Analysis_Based_on_Differential_Scanning_Calorimetry_Data
The procedure of kinetic analysis of a reaction based on DSC experiment data can be exemplified by the curing of an epoxy resin. The curing reaction involves the opening of the epoxy ring by an amine and is accompanied by an exotherm, which is recorded on a differential scanning calorimeter. The fraction of the cured resin is directly proportional to the evolved heat. The knowledge of how the conversion (that is, the fraction of the reacted resin) depends on time and temperature, provided by DSC, enables one, in studying actual epoxy binders, to optimize the conditions of their treatment and the forming of products, for example, a polymer composite material. It is worth noting that curing occurs without weight change; TG measurements are therefore inapplicable in this case.

It is well known that, depending on the composition of the reagents and the process conditions, curing can occur as either a one-stage or a two-stage process. In the present section both variants are considered. Let us consider a classical system consisting of an epoxy diane resin based on 4,4’-dihydroxydiphenylpropane (bisphenol A) and a curing agent, metaphenylenediamine. The curing of this system was studied on a Netzsch DSC-204 Phoenix analyzer. Measurements were taken at five heating rates: 2.5, 5, 7.5, 10, and 15 K/min. Samples were placed in Netzsch aluminum crucibles with a lid; a hole was preliminarily made in the lid. The process was carried out in an argon flow at a flow rate of 100 mL/min. A mixture of the resin components was freshly prepared before taking measurements. The samples were 5–5.5 mg for each of the heating rates.

Experimental data acquired using the NETZSCH equipment is processed with the NETZSCH Proteus program. The melting of a pure metal standard serves as the calibration measurement for determining the correction parameters. This metal is chosen because it melts in the same temperature range in which the epoxy resin is cured. The path to the file with the preliminarily calculated correction parameters is specified in the same window. The data is loaded by clicking the Load ASCII file icon, analogously to the procedure used for the TG data. If necessary, the user corrects the evaluation range limits. For further calculation, the type of baseline should be selected; in this case, we use a linear baseline. The loaded data is checked and model-free analysis is performed. The resulting activation energies and preexponential factors are used as a zero approximation in solving the direct kinetic problem.

Let us consider, first of all, the computation results obtained by the linear regression method under the assumption of a one-stage process. The calculation was performed for all models. As expected, however, the only relevant model turned out to be the model of a reaction with autocatalysis described by the Prout–Tompkins equation (Bna code), which is indicated at the top left of the results table. In the software, this equation is represented as a model with two parallel reactions: a reaction with autocatalysis described by the Prout–Tompkins equation (Bna) and an nth-order reaction (Fn). The kinetic parameters corresponding to the two-step model with two parallel reactions are presented in the corresponding table. Let us consider how to obtain the conversion versus time plot (at a given temperature) from the calculated data. To do this, we open the Predictions toolbar.
The resulting curves are shown in the corresponding figure.

This page titled 1.5: Kinetic Analysis Based on Differential Scanning Calorimetry Data is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Igor V. Arhangel`skii, Alexander V. Dunaev, Irina V. Makarenko, Nikolay A. Tikhonov, & Andrey V. Tarasov (Max Planck Research Library for the History and Development of Knowledge) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
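The partial-area definition of conversion used for DSC data (Equation 3.2) can be sketched as follows, with a synthetic exotherm and a flat baseline standing in for real measurements; the peak shape and numbers are assumptions for illustration.

```python
# Sketch of Equation 3.2: the conversion of the curing reaction at time t is the
# partial area of the baseline-corrected DSC peak up to t, divided by the total area.
import numpy as np

t = np.linspace(0.0, 1800.0, 900)                       # time, s
signal = 0.8 * np.exp(-((t - 900.0) / 250.0) ** 2)      # made-up exothermic peak, mW/mg
baseline = np.zeros_like(t)                             # linear baseline (flat here)

corrected = signal - baseline
partial_area = np.cumsum((corrected[:-1] + corrected[1:]) / 2 * np.diff(t))  # trapezoids
alpha = np.concatenate([[0.0], partial_area / partial_area[-1]])

i = np.argmin(np.abs(t - 900.0))
print(f"conversion at t = 900 s: alpha = {alpha[i]:.2f}")   # ~0.5 for this symmetric peak
```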
1.6: Analysis of Multistage Processes
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Non-Isothermal_Kinetic_Methods_(Arhangel'skii_et_al.)/01%3A_Chapters/1.06%3A_Analysis_of_Multistage_Processes
For a multistage process, it is recommended to perform kinetic analysis using the following algorithm. Kinetic analysis is performed separately for each of the stages of the process as for a one-stage reaction, and the corresponding kinetic model and kinetic parameters are determined. Then, kinetic analysis is performed for the entire process using the data obtained for the model of each stage. In so doing, the kinetic parameters of each stage are refined. Thus, when the process is studied as a whole, there is already no need to try different kinetic models since they have been chosen for each stage.

However, for most multistage processes, the effects overlap. To separate quasi-one-stage processes, the Peak Separation software can be used. In the general case the reaction steps are dependent, so the separation procedure is formal: the obtained peaks do not correspond to single reactions and may be used only for an initial approximation of the Arrhenius parameters. In the case of independent reactions, the separated peaks do correspond to single reactions and thus may be used for the calculation of the Arrhenius parameters.

In multistage processes with competing or parallel reactions, separate stages, as a rule, overlap, which can lead to considerable errors in the calculated Arrhenius parameters and to an incorrect choice of the scheme of the processes. This in turn will lead to significant errors of the nonlinear regression method because of the nonlinearity of the problem. To solve this problem, the experimental curve is represented by a superposition of separate one-stage processes. This procedure is implemented in the NETZSCH Peak Separation software.

The NETZSCH Peak Separation software fits experimental data by a superposition of separate peaks, each of which can be described by one of several functions. In thermal analysis, chemical reaction steps are described in most cases by the Fraser–Suzuki function (asymmetric Gaussian function). For other processes, e.g., polymer melting, the modified Laplace function must be used. The corresponding expression is given below (a numerical sketch of this peak shape is given at the end of this section).

\[A_{F \text { raser }}=0.5 \cdot \sqrt{\pi / \ln 2} \cdot A m p l \cdot \exp \left[\frac{A s y m^{2}}{4 \ln 2}\right]\]

The software outputs the corresponding peak parameters. First, multimodal curves are decomposed into separate components and the parameters of each process are found by the above procedures, assuming that all processes are quasi-one-stage reactions. Then, the phenomenon is described as a whole. For multiple-step processes, the program suggests a list of schemes of similar transformations, and appropriate schemes are selected from this list. The schemes are presented in the Appendix. The choice of the corresponding scheme is based on some a priori ideas about the character of the stages of the process under consideration.

As an example of the kinetic analysis of multistage processes, let us consider the carbonization of oxidized polyacrylonitrile fiber yielding carbon fiber. The results of thermogravimetric analysis are convenient to use as input data. Then, the resulting peaks are sorted on the basis of the stage number and the inverse kinetic problem is solved. As a result, we obtain the type of model for the given stage and approximate Arrhenius parameters. As a rule, several models of comparable statistical significance satisfy the solution of this problem.
If no information is available on the true mechanism of the process, both variants are used for solving the direct kinetic problem. The estimated parameters and types of models calculated at this step are listed in Table 6.1, and statistical data of the calculation are presented in Table 6.3. In Table 6.1, the “apparent” activation energy is expressed in kelvins, since the notion of mole has no physical meaning for the fiber. The Ea/R constant is the temperature coefficient of the reaction rate.

Table 6.1 Estimated kinetic models and parameters of separate stages of the carbonization process.
Table 6.2 Calculated kinetic models and parameters of separate stages of the carbonization process.
Table 6.3 Statistical data of the calculation.
Table 6.4 Statistical data of the calculation.
Table 6.5 Calculated kinetic models and parameters of separate stages of the carbonization process for the successive-parallel scheme.

According to the authors, the carbonization process may be represented by a set of successive-parallel processes. Let us consider this situation (the qffc scheme in the Appendix). The results are shown in Table 6.5. This treatment makes it possible to evaluate kinetic parameters and to determine the number of stages and their sequence (successive, parallel, etc.). In addition, the statistically optimal kinetic parameters allow one to model temperature change conditions resulting in a constant weight loss or enthalpy change rate and to obtain dependences of these characteristics under isothermal conditions or in other temperature programs.

References:
J. Sestak. Thermophysical Properties of Solids. Prague: Academia, 1984.
S. Vyazovkin et al. ICTAC Kinetics Committee Recommendations for Performing Kinetic Computations on Thermal Analysis Data. Thermochimica Acta 520: 1.
V.A. Sipachev, I.V. Arkhangel'skii. Calculation Techniques Solving Non-Isothermal Kinetic Problems. Journal of Thermal Analysis 38: 1283-1291.
B. Delmon. Introduction to Heterogeneous Kinetics. Paris: Technip, 1969.
P. Barret. Kinetics in Heterogeneous Chemical Systems, 1974.
H.L. Friedman. A Quick, Direct Method for the Determination of Activation Energy from Thermogravimetric Data. J. Polym. Lett. 4: 323-328.
T. Ozawa. A New Method of Analyzing Thermogravimetric Data. Bull. Chem. Soc. Jpn. 38: 1881-1886.
J.H. Flynn, L.A. Wall. A Quick, Direct Method for the Determination of Activation Energy from Thermogravimetric Data. J. Polym. Sci. Polym. Lett. 4: 323-328.
NETZSCH-Thermokinetics 3.1 Software Help.
J. Opfermann. Kinetic Analysis Using Multivariate Non-Linear Regression. Journal of Thermal Analysis & Calorimetry 60: 641-658.
S.Z.D. Cheng. Handbook of Thermal Analysis and Calorimetry: Applications to Polymers and Plastics, 2001.
L. Shechter, J. Wynstra, et al. Glycidyl Ether Reactions with Amines. Ind. Eng. Chem. 48: 94-97.
NETZSCH-Peakseparation 3.0 Software Help.
P. Morgan. Carbon Fibers and Their Composites. New York: Taylor & Francis, 2005.
V.Y. Varshavskii. Carbon Fibers. Moscow: Varshavskii, 2005.

This page titled 1.6: Analysis of Multistage Processes is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Igor V. Arhangel`skii, Alexander V. Dunaev, Irina V. Makarenko, Nikolay A. Tikhonov, & Andrey V. Tarasov (Max Planck Research Library for the History and Development of Knowledge) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
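The peak separation step described in this section relies on the Fraser–Suzuki (asymmetric Gaussian) shape. The sketch below builds a two-peak synthetic curve from a commonly used parameterization of that function; the parameter names (pos, ampl, hwhm, asym) and the exact functional form are assumptions here and may differ from the conventions used inside the NETZSCH Peak Separation software.

```python
# Sketch of a Fraser-Suzuki (asymmetric Gaussian) peak and a two-peak superposition
# of the kind used to decompose an overlapping multistage thermoanalytical signal.
import numpy as np

def fraser_suzuki(x, pos, ampl, hwhm, asym):
    arg = 1.0 + 2.0 * asym * (x - pos) / hwhm
    y = np.zeros_like(x)
    ok = arg > 0                                          # the function is zero outside this range
    y[ok] = ampl * np.exp(-np.log(2.0) * (np.log(arg[ok]) / asym) ** 2)
    return y

T = np.linspace(500.0, 900.0, 800)                        # temperature axis, K
# A made-up two-stage DTG-like signal built from two overlapping asymmetric peaks:
signal = fraser_suzuki(T, 650.0, 1.0, 40.0, -0.3) + fraser_suzuki(T, 730.0, 0.6, 60.0, 0.4)
# In a real analysis the individual peak parameters would be fitted (e.g., by
# nonlinear least squares) so that their sum reproduces the measured curve.
print(f"total area of the synthetic signal: {np.trapz(signal, T):.1f}")
```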
1.1: Introduction to Elemental Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.01%3A_Introduction_to_Elemental_Analysis
The purpose of elemental analysis is to determine the quantity of a particular element within a molecule or material. Elemental analysis can be subdivided in two ways: qualitatively (determining which elements are present) and quantitatively (determining how much of each element is present). In either case elemental analysis is independent of structural unit or functional group, i.e., the determination of carbon content in toluene (\(\ce{C6H5CH3}\)) does not differentiate between the aromatic \(sp^2\) carbon atoms and the methyl \(sp^3\) carbon.

Elemental analysis can be performed on a solid, liquid, or gas. However, depending on the technique employed, the sample may have to be pre-reacted, e.g., by combustion or acid digestion. The amounts required for elemental analysis range from a few grams (g) to a few milligrams (mg) or less. Elemental analysis can also be subdivided into general categories related to the approach involved in determining quantities. Many classical methods can be further classified into categories such as gravimetric, volumetric, and colorimetric analysis. The biggest limitation in classical methods is most often due to sample manipulation rather than equipment error, i.e., operator error in weighing a sample or observing an end point. In contrast, the errors in modern analytical methods are almost entirely computer sourced and inherent in the software that analyzes and fits the data.

This page titled 1.1: Introduction to Elemental Analysis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.2: Spot Tests
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.02%3A_Spot_Tests
Spot tests (spot analysis) are simple chemical procedures that uniquely identify a substance. They can be performed on small samples, even microscopic samples of matter, with no preliminary separation. The first report of a spot test was in 1859 by Hugo Schiff for the detection of uric acid. In a typical spot test, a drop of chemical reagent is added to a drop of an unknown mixture. If the substance under study is present, it produces a chemical reaction characterized by one or more unique observables, e.g., a color change.

A typical example of a spot test is the detection of chlorine in the gas phase by exposure to paper impregnated with 0.1% 4,4'-bis(dimethylamino)thiobenzophenone (thio-Michler's ketone) dissolved in benzene. In the presence of chlorine the paper will change from yellow to blue. The mechanism involves the zwitterionic form of the thioketone, which, in turn, undergoes an oxidation reaction and subsequent disulfide coupling.

This page titled 1.2: Spot Tests is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.3: Introduction to Combustion Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.03%3A_Introduction_to_Combustion_Analysis
Combustion, or burning as it is more commonly known, is simply the mixing and exothermic reaction of a fuel and an oxidizer. It has been used since prehistoric times in a variety of ways, such as a source of direct heat, as in furnaces, boilers, stoves, and metal forming, or in piston engines, gas turbines, jet engines, rocket engines, guns, and explosives. Automobile engines use internal combustion in order to convert chemical into mechanical energy. Combustion is currently utilized in the production of large quantities of \(\ce{H2}\). Coal or coke is combusted at 1000 °C in the presence of water in a two-step reaction. The first step involves the partial oxidation of carbon to carbon monoxide.

\[\ce{C(g) + H2O(g) -> CO(g) + H2(g)} \nonumber \]

The second step involves the reaction of the produced carbon monoxide with water to produce hydrogen and is commonly known as the water gas shift reaction.

\[\ce{CO(g) + H2O(g) -> CO2(g) + H2(g)} \nonumber \]

Although combustion provides a multitude of uses, it was not employed as a scientific analytical tool until the late 18th century. In the 1780's, Antoine Lavoisier (figure \(\PageIndex{1}\)) was the first to analyze organic compounds with combustion, using an extremely large and expensive apparatus (figure \(\PageIndex{2}\)) that required over 50 g of the organic sample and a team of operators. The method was simplified and optimized throughout the 19th and 20th centuries, first by Joseph Gay-Lussac, who began to use copper oxide in 1815, which is still used as the standard catalyst. William Prout invented a new method of combustion analysis in 1827 by heating a mixture of the sample and \(\ce{CuO}\) using a multiple-flame alcohol lamp and measuring the change in gaseous volume. In 1831, Justus von Liebig simplified the method of combustion analysis into a "combustion train" system that linearly heated the sample using coal, absorbed water using calcium chloride, and absorbed carbon dioxide using potash (KOH). This new method only required 0.5 g of sample and a single operator, and Liebig moved the sample through the apparatus by sucking on an opening at the far right end of the apparatus. Jean-Baptiste André Dumas used a combustion train similar to Liebig's; however, he added a U-shaped aspirator that prevented atmospheric moisture from entering the apparatus. In 1923, Fritz Pregl received the Nobel Prize for inventing a micro-analysis method of combustion. This method required only 5 mg or less, which is 0.01% of the amount required in Lavoisier's apparatus. Today, combustion analysis of an organic or organometallic compound only requires about 2 mg of sample. Although this method of analysis destroys the sample and is not as sensitive as other techniques, it is still considered a necessity for characterizing an organic compound.

There are several categories of combustion, which can be identified by their flame types (Table \(\PageIndex{1}\)). At some point in the combustion process, the fuel and oxidant must be mixed together. If these are mixed before being burned, the flame type is referred to as a premixed flame, and if they are mixed simultaneously with combustion, it is referred to as a nonpremixed flame. In addition, the flow of the flame can be categorized as either laminar (streamlined) or turbulent. The amount of oxygen in the combustion system can alter the flow of the flame and its appearance.
As illustrated in the figure, a flame with no oxygen tends to have a very turbulent flow, while a flame with an excess of oxygen tends to have a laminar flow. A combustion system is referred to as stoichiometric when all of the fuel and oxidizer are consumed and only carbon dioxide and water are formed. On the other hand, a fuel-rich system has an excess of fuel, and a fuel-lean system has an excess of oxygen (Table \(\PageIndex{2}\)). If the reaction of a stoichiometric mixture is written to describe the reaction of exactly 1 mol of fuel (\(\ce{H2}\) in this case), then the mole fraction of the fuel content can be easily calculated as follows, where \( ν \) denotes the mole number of \(\ce{O2}\) in the combustion reaction equation for a complete reaction to \(\ce{H2O}\) and \(\ce{CO2}\),\[ x_{\text{fuel, stoich}} = \dfrac{1}{1+v} \nonumber \]For example, in the reaction\[\ce{H2 + 1/2 O2 → H2O} \nonumber \]we have \( v = \frac{1}{2} \), so the stoichiometric fuel mole fraction is\[ x_{\ce{H2}, \text{stoich}}= \dfrac{1}{1+0.5} = 2/3 \nonumber \]However, this calculation applies to combustion in an environment of pure oxygen. Air, on the other hand, contains only 21% oxygen (78% nitrogen, 1% noble gases). Therefore, if air is used as the oxidizer, this must be taken into account in the calculations, i.e.\[ x_{\ce{N2}} = 3.762 (x_{\ce{O2}}) \nonumber \]The mole fractions for a stoichiometric mixture in air are therefore calculated in the following way:\[ x_{\text{fuel, stoich}} = \dfrac{1}{1+v(4.762)} \label{eq:xfuel} \]\[ x_{\ce{O2},\text{stoich}} = v(x_{\text{fuel, stoich}}) \nonumber \]\[ x_{\ce{N2},\text{stoich}} = 3.762(x_{\ce{O2}, \text{stoich}}) \nonumber \]Calculate the fuel mole fraction (\(x_{\text{fuel}} \)) for the stoichiometric reaction:\[ \ce{CH4 + 2O2} + (2 \times 3.762)\ce{N2 → CO2 + 2H2O} + (2 \times 3.762)\ce{N2} \nonumber \]SolutionIn this reaction \( ν \) = 2, as 2 moles of oxygen are needed to fully oxidize methane into \(\ce{H2O}\) and \(\ce{CO2}\).\[ x_{\text{fuel, stoich}} = \dfrac{1}{1+2 \times 4.762} = 0.09502 = 9.502~\text{mol}\% \nonumber \]Calculate the fuel mole fraction for the stoichiometric reaction:\[ \ce{C3H8 + 5O2} + (5 \times 3.762)\ce{N2 → 3CO2 + 4H2O} + (5 \times 3.762)\ce{N2} \nonumber \]The fuel mole fraction is 4.03%.Premixed combustion reactions can also be characterized by the air equivalence ratio, \( \lambda \):\[ \lambda = \dfrac{x_{\text{air}}/x_{\text{fuel}}}{x_{\text{air, stoich}}/x_{\text{fuel,stoich}}} \nonumber \]The fuel equivalence ratio, \( Φ\), is the reciprocal of this value\[ Φ = 1/\lambda \nonumber \]Rewriting \ref{eq:xfuel} in terms of the fuel equivalence ratio gives:\[ x_{\text{fuel}} = \frac { 1 } { 1 + v( 4.762 / \Phi ) } \nonumber \]\[ x_{\text{air}} = 1 - x_{\text{fuel}} \nonumber \]\[ x_{\ce{O2}} = x_{\text{air}}/4.762 \nonumber \]\[ x_{\ce{N2}} = 3.762(x_{\ce{O2}}) \nonumber \]The premixed combustion processes can also be identified by their air and fuel equivalence ratios (Table \(\PageIndex{3}\)).With a premixed type of combustion, there is much greater control over the reaction. If performed at lean conditions, high temperatures, the pollutant nitric oxide, and the production of soot can be minimized or even avoided, allowing the system to combust efficiently. However, a premixed system requires large volumes of premixed reactants, which pose a fire hazard. As a result, nonpremixed combustion, while not as efficient, is more commonly used.
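These relations are straightforward to evaluate programmatically. The short Python sketch below is a minimal illustration (the function name and structure are arbitrary choices); it computes the stoichiometric fuel, \(\ce{O2}\), and \(\ce{N2}\) mole fractions for a fuel burned in air from the oxygen coefficient \(v\), with an optional fuel equivalence ratio \(\Phi\).

```python
# Minimal sketch of the premixed-flame mole-fraction relations above.
# `nu` is the stoichiometric O2 coefficient (e.g., 2 for CH4, 5 for C3H8);
# `phi` is the fuel equivalence ratio (phi = 1 for a stoichiometric mixture).

N2_PER_O2 = 3.762  # mol N2 accompanying each mol O2 in air

def air_mole_fractions(nu: float, phi: float = 1.0) -> dict:
    """Return fuel, O2, and N2 mole fractions for a fuel burned in air."""
    x_fuel = 1.0 / (1.0 + nu * (1.0 + N2_PER_O2) / phi)
    x_air = 1.0 - x_fuel
    x_o2 = x_air / (1.0 + N2_PER_O2)
    x_n2 = N2_PER_O2 * x_o2
    return {"fuel": x_fuel, "O2": x_o2, "N2": x_n2}

# Reproduces the worked example (CH4, nu = 2): x_fuel ~ 0.095, i.e., ~9.5 mol%
print(air_mole_fractions(2))
# Exercise check (C3H8, nu = 5): x_fuel ~ 0.040, i.e., ~4.0 mol%
print(air_mole_fractions(5))
```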
Though the instrumentation of combustion analysis has greatly improved, the basic components of the apparatus have not changed much since the late 18th century. The sample of an organic compound, such as a hydrocarbon, is contained within a furnace or exposed to a flame and burned in the presence of oxygen, creating water vapor and carbon dioxide gas. The sample first moves through the apparatus to a chamber in which \(\ce{H2O}\) is absorbed by a hydrophilic substance and then through a chamber in which \(\ce{CO2}\) is absorbed. The change in weight of each chamber is determined to calculate the weight of \(\ce{H2O}\) and \(\ce{CO2}\). After the masses of \(\ce{H2O}\) and \(\ce{CO2}\) have been determined, they can be used to characterize and calculate the composition of the original sample.Combustion analysis is a standard method of determining the chemical formula of a substance that contains hydrogen and carbon. First, a sample is weighed and then burned in a furnace in the presence of excess oxygen. In this way, all of the carbon is converted to carbon dioxide and all of the hydrogen is converted to water. Each of these is absorbed in a separate compartment, which is weighed before and after the reaction. From these measurements, the chemical formula can be determined.Generally, the following reaction takes place in combustion analysis:\[ \ce{C_{a}H_{b} + O2(xs) → aCO2 + b/2 H2O} \nonumber \]After burning 1.333 g of a hydrocarbon in a combustion analysis apparatus, 1.410 g of \(\ce{H2O}\) and 4.305 g of \(\ce{CO2}\) were produced. Separately, the molar mass of this hydrocarbon was found to be 204.35 g/mol. Calculate the empirical and molecular formulas of this hydrocarbon.Step 1: Using the molar masses of water and carbon dioxide, determine the moles of hydrogen and carbon that were produced.\[ 1.410~\text{g}~\ce{H2O} \times \dfrac{1~\text{mol}~\ce{H2O}}{18.015~\text{g}~\ce{H2O}} \times \dfrac{2~\text{mol H}}{1~\text{mol}~\ce{H2O}} = 0.1565~\text{mol H} \nonumber \]\[ 4.305~\text{g}~\ce{CO2} \times \dfrac{1~\text{mol}~\ce{CO2}}{44.010~\text{g}~\ce{CO2}} \times \dfrac{1~\text{mol C}}{1~\text{mol}~\ce{CO2}} = 0.09782 ~\text{mol C} \nonumber \]Step 2: Divide the larger molar amount by the smaller molar amount. In some cases, the ratio is not made up of two integers. Convert the numerator of the ratio to an improper fraction and rewrite the ratio in whole numbers as shown.\[ \frac { 0.1565~\mathrm { mol~H } } { 0.09782~\mathrm{ mol~C} } = \frac { 1.600~\mathrm { mol~H } } { 1~\mathrm { mol~C } } = \frac { 16 / 10~\mathrm { mol~H } } { 1~\mathrm { mol~C } } = \frac { 8 / 5~\mathrm { mol~H } } { 1~\mathrm { mol~C } } = \frac { 8~\mathrm { mol~H } } { 5~\mathrm { mol~C } } \nonumber \]Therefore, the empirical formula is \(\ce{C5H8}\).Step 3: To get the molecular formula, divide the experimental molar mass of the unknown hydrocarbon by the empirical formula weight.\[ \frac { \text { Molar mass } } { \text { Empirical formula weight } } = \frac { 204.35~\mathrm { g } / \mathrm { mol } } { 68.114~\mathrm { g } / \mathrm { mol } } = 3 \nonumber \]Therefore, the molecular formula is \(\ce{(C5H8)3}\) or \(\ce{C15H24}\).
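The arithmetic in Steps 1-3 can be captured in a few lines of Python. The sketch below is a minimal illustration (the helper name is arbitrary); it converts the measured \(\ce{H2O}\) and \(\ce{CO2}\) masses into moles of H and C, reduces their ratio to small whole numbers, and scales the empirical formula to the measured molar mass.

```python
from fractions import Fraction

M_H2O, M_CO2 = 18.015, 44.010   # g/mol
M_H, M_C = 1.008, 12.011        # g/mol

def hydrocarbon_formula(m_h2o: float, m_co2: float, molar_mass: float):
    """Empirical and molecular formula of a hydrocarbon from combustion data."""
    mol_h = 2 * m_h2o / M_H2O          # 2 mol H per mol H2O
    mol_c = m_co2 / M_CO2              # 1 mol C per mol CO2
    ratio = Fraction(mol_h / mol_c).limit_denominator(10)  # H per C as a small fraction
    n_h, n_c = ratio.numerator, ratio.denominator
    efw = n_c * M_C + n_h * M_H        # empirical formula weight
    n = round(molar_mass / efw)        # multiplier giving the molecular formula
    return (f"C{n_c}H{n_h}", f"C{n * n_c}H{n * n_h}")

# Worked example: 1.410 g H2O, 4.305 g CO2, molar mass 204.35 g/mol -> ('C5H8', 'C15H24')
print(hydrocarbon_formula(1.410, 4.305, 204.35))
```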
After burning 1.082 g of a hydrocarbon in a combustion analysis apparatus, 1.583 g of \(\ce{H2O}\) and 3.315 g of \(\ce{CO2}\) were produced. Separately, the molar mass of this hydrocarbon was found to be 258.52 g/mol. Calculate the empirical and molecular formulas of this hydrocarbon.The empirical formula is \(\ce{C3H7}\), and the molecular formula is \(\ce{(C3H7)6}\) or \(\ce{C18H42}\).Combustion analysis can also be utilized to determine the empirical and molecular formulas of compounds containing carbon, hydrogen, and oxygen. However, as the reaction is performed in an environment of excess oxygen, the amount of oxygen in the sample must be determined from the sample mass rather than from the combustion data.A 2.0714 g sample containing carbon, hydrogen, and oxygen was burned in a combustion analysis apparatus; 1.928 g of \(\ce{H2O}\) and 4.709 g of \(\ce{CO2}\) were produced. Separately, the molar mass of the sample was found to be 116.16 g/mol. Determine the empirical formula, molecular formula, and identity of the sample.Step 1: Using the molar masses of water and carbon dioxide, determine the moles of hydrogen and carbon that were produced. \[ 1.928~\text{g}~\ce{H2O} \times \dfrac{1~\text{mol}~\ce{H2O}}{18.015~\text{g}~\ce{H2O}} \times \dfrac{2~\text{mol H}}{1~\text{mol}~\ce{H2O}} = 0.2140~\text{mol H} \nonumber \] \[ 4.709~\text{g}~\ce{CO2} \times \dfrac{1~\text{mol}~\ce{CO2}}{44.010~\text{g}~\ce{CO2}} \times \dfrac{1~\text{mol C}}{1~\text{mol}~\ce{CO2}} = 0.1070 ~\text{mol C} \nonumber \]Step 2: Using the molar amounts of carbon and hydrogen, calculate the masses of each in the original sample.\[ 0.2140~\mathrm { mol~H } \times \frac { 1.008~\mathrm { g~H } } { 1~\mathrm { mol~H } } = 0.2157~\mathrm { g~H } \nonumber \]\[ 0.1070 ~\mathrm{mol~C} \times \frac { 12.011~\mathrm{g~C} } { 1~\mathrm{ mol~C} } = 1.285~\mathrm{g~C} \nonumber \]Step 3: Subtract the masses of carbon and hydrogen from the sample mass. Now that the mass of oxygen is known, use this to calculate the molar amount of oxygen in the sample.\[ 2.0714 \mathrm{ g~sample } - 0.2157~\mathrm{ g~H} - 1.285~\mathrm{ g~C} = 0.5707~\mathrm{ g~O } \nonumber \]\[ 0.5707~\mathrm{g~O} \times \frac { 1~\mathrm{mol~O} } { 16.00~\mathrm{ g~O} } = 0.03567~\mathrm{mol~O} \nonumber \]Step 4: Divide each molar amount by the smallest molar amount in order to determine the ratio between the three elements.\[ \frac { 0.03567~\mathrm { mol~O } } { 0.03567 } = 1.00~\mathrm { mol~O } = 1~\mathrm { mol~O } \nonumber \]\[ \frac { 0.1070~\mathrm { mol~C } } { 0.03567 } = 3.00 \mathrm { mol~C } = 3~\mathrm { mol~C } \nonumber \]\[ \frac { 0.2140~\mathrm { mol~H } } { 0.03567 } = 5.999~\mathrm { mol~H } = 6~\mathrm { mol~H } \nonumber \]Therefore, the empirical formula is \(\ce{C3H6O}\).Step 5: To get the molecular formula, divide the experimental molar mass of the unknown compound by the empirical formula weight.\[ \frac { \text { Molar mass } } { \text { Empirical formula weight } } = \frac { 116.16~\mathrm { g /mol } } { 58.08~\mathrm { g /mol } } = 2 \nonumber \]Therefore, the molecular formula is \(\ce{(C3H6O)2}\) or \(\ce{C6H12O2}\).Structures of possible compounds with the molecular formula \(\ce{C6H12O2}\): (a) butyl acetate, (b) sec-butyl acetate, (c) tert-butyl acetate, (d) ethyl butyrate, (e) hexanoic acid, (f) isobutyl acetate, (g) methyl pentanoate, and (h) propyl propanoate.A 4.846 g sample containing carbon, hydrogen, and oxygen was burned in a combustion analysis apparatus; 4.843 g of \(\ce{H2O}\) and 11.83 g of \(\ce{CO2}\) were produced. Separately, the molar mass of the sample was found to be 144.22 g/mol.
Determine the empirical formula, molecular formula, and identity of the sample.The empirical formula is \(\ce{C4H8O}\), and the molecular formula is \(\ce{(C4H8O)2}\) or \(\ce{C8H16O2}\).Structures of possible compounds with the molecular formula \(\ce{C8H16O2}\): (a) octanoic acid (caprylic acid), (b) hexyl acetate, (c) pentyl propanoate, (d) 2-ethylhexanoic acid, (e) valproic acid (VPA), (f) cyclohexanedimethanol (CHDM), and (g) 2,2,4,4-tetramethyl-1,3-cyclobutanediol (CBDO).By using combustion analysis, the chemical formula of a binary compound containing oxygen can also be determined. This is particularly helpful in the case of the combustion of a metal, which can result in oxides of multiple possible oxidation states.A sample of iron weighing 1.7480 g is combusted in the presence of excess oxygen. A metal oxide (\(\ce{Fe_{x}O_{y}}\)) is formed with a mass of 2.4982 g. Determine the chemical formula of the oxide product and the oxidation state of Fe.Step 1: Subtract the mass of Fe from the mass of the oxide to determine the mass of oxygen in the product.\[ 2.4982~\mathrm { g~Fe } _ { \mathrm { x } } \mathrm { O } _ { \mathrm { y } } - 1.7480~\mathrm { g~Fe } = 0.7502~\mathrm { g~O } \nonumber \]Step 2: Using the molar masses of Fe and O, calculate the molar amounts of each element.\[ 1.7480~\mathrm { g~Fe } \times \frac { 1 \text { mol Fe } } { 55.845 \text { g Fe } } = 0.031301 \text { mol Fe } \nonumber \]\[ 0.7502~\text { g O} \times \frac { 1 \text { mol O }} { 16.00~\text { g O} } = 0.04689~\text { mol O } \nonumber \]Step 3: Divide the larger molar amount by the smaller molar amount. In some cases, the ratio is not made up of two integers. Convert the numerator of the ratio to an improper fraction and rewrite the ratio in whole numbers as shown.\[ \frac { 0.031301~\text{ mol Fe } } { 0.04689~\mathrm { mol~O } } = \frac { 0.6675~\mathrm { mol~Fe } } { 1~\mathrm { mol~O } } = \frac{ \frac{2}{3} \mathrm { mol~Fe } } { 1~\mathrm { mol~O } } = \frac { 2~\mathrm { mol~Fe } } { 3~\mathrm { mol~O } } \nonumber \]Therefore, the chemical formula of the oxide is \(\ce{Fe2O3}\), and Fe has a 3+ oxidation state.A sample of copper weighing 7.295 g is combusted in the presence of excess oxygen. A metal oxide (\(\ce{Cu_{x}O_{y}}\)) is formed with a mass of 8.2131 g. Determine the chemical formula of the oxide product and the oxidation state of Cu.The chemical formula is \(\ce{Cu2O}\), and Cu has a 1+ oxidation state.This page titled 1.3: Introduction to Combustion Analysis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.4: Introduction to Atomic Absorption Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.04%3A_Introduction_to_Atomic_Absorption_Spectroscopy
Spectroscopy was first described by Marcus Marci von Kronland in 1648, who analyzed sunlight as it passed through water droplets, creating a rainbow. Further analysis of sunlight by William Hyde Wollaston led to the discovery of black lines in the spectrum, which in 1820 Sir David Brewster explained as absorption of light in the sun's atmosphere.Robert Bunsen and Gustav Kirchhoff studied the sodium spectrum and came to the conclusion that every element has its own unique spectrum that can be used to identify elements in the vapor phase. Kirchhoff further explained the phenomenon by stating that if a material can emit radiation of a certain wavelength, it may also absorb radiation of that wavelength. Although Bunsen and Kirchhoff took a large step in defining the technique of atomic absorption spectroscopy (AAS), it was not widely utilized as an analytical technique, except in the field of astronomy, due to many practical difficulties.In 1953, Alan Walsh drastically improved the AAS methods. He advocated AAS to many instrument manufacturers, but to no avail: although he had improved the methods, he had not yet shown how they could be useful in any application. In 1957, he discovered uses for AAS that convinced manufacturers to market the first commercial AAS spectrometers. Since that time, AAS's popularity has fluctuated as other analytical techniques have emerged and improvements to the methods have been made.In order to understand how atomic absorption spectroscopy works, some background information is necessary. Atomic theory began with John Dalton in the early 19th century when he proposed the concept of atoms, that all atoms of an element are identical, and that atoms of different elements can combine to form molecules. In 1913, Niels Bohr revolutionized atomic theory by proposing quantum numbers, a positively charged nucleus, and electrons orbiting around the nucleus in what became known as the Bohr model of the atom. Soon afterward, Louis de Broglie proposed quantized energies of electrons, which is an extremely important concept in AAS. Wolfgang Pauli then elaborated on de Broglie's theory by stating that no two electrons can share the same four quantum numbers. These landmark discoveries in atomic theory are necessary for understanding the mechanism of AAS.Atoms have valence electrons, which are the outermost electrons of the atom. Atoms can be excited when irradiated, which creates an absorption spectrum. When an atom is excited, a valence electron moves up an energy level. The energies of the various stationary states, or restricted orbits, can then be determined from these emission lines. The resonance line is then defined as the specific radiation absorbed to reach the excited state.The Maxwell-Boltzmann equation gives the number of electrons in any given orbital; it relates the distribution to the thermal temperature of the system (as opposed to the electronic, vibrational, or rotational temperature). Planck proposed that radiation is emitted as energy in discrete packets (quanta),\[ E= h \nu \nonumber \]which can be related to Einstein's equation\[ E=mc^2 \label{eq:mc2} \]Both atomic emission and atomic absorption spectroscopy can be used to analyze samples. Atomic emission spectroscopy measures the intensity of light emitted by the excited atoms, while atomic absorption spectroscopy measures the light absorbed by the atoms. This light is typically in the visible or ultraviolet region of the electromagnetic spectrum.
The percentage of light absorbed is then compared to a calibration curve to determine the amount of material in the sample. The energy of the system can be used to find the frequency of the radiation, and thus the wavelength, through the combination of equations \ref{eq:mc2} and \ref{eq:ncl}.\[ \nu = c/\lambda \label{eq:ncl} \]Because the energy levels are quantized, only certain wavelengths are allowed and each atom has a unique spectrum. There are many variables that can affect the system. For example, if the sample is changed in a way that increases the population of atoms, there will be an increase in both emission and absorption, and vice versa. There are also variables that affect the ratio of excited to unexcited atoms, such as an increase in the temperature of the vapor.There are many applications of atomic absorption spectroscopy (AAS) due to its specificity. These can be divided into the broad categories of biological analysis, environmental and marine analysis, and geological analysis.Biological samples can include both human tissue samples and food samples. In human tissue samples, AAS can be used to determine the levels of various metals and other electrolytes within the tissue. These tissue samples can be many things, including but not limited to blood, bone marrow, urine, hair, and nails. Sample preparation is dependent upon the sample. This is extremely important in that many elements are toxic at certain concentrations in the body, and AAS can determine the concentrations at which they are present. Some examples of trace elements that samples are analyzed for are arsenic, mercury, and lead.An example of an application of AAS to human tissue is the measurement of the electrolytes sodium and potassium in plasma. This measurement is important because the values can be indicative of various diseases when outside of the normal range. The typical method used for this analysis is atomization of a 1:50 dilution in strontium chloride (\(\ce{SrCl2}\)) using an air-hydrogen flame. The sodium is detected at its secondary line (330.2 nm) because detection at the primary line would require further dilution of the sample due to the signal intensity. Strontium chloride is used because it reduces ionization of the potassium and sodium ions while eliminating interference from phosphate and calcium.In the food industry, AAS provides analysis of vegetables, animal products, and animal feeds. These kinds of analyses are some of the oldest applications of AAS. An important consideration that needs to be taken into account in food analysis is sampling. The sample should be an accurate representation of what is being analyzed. Because of this, it must be homogeneous, and it is often necessary to run several samples. Food samples are most often run in order to determine mineral and trace element amounts so that consumers know if they are consuming an adequate amount. Samples are also analyzed to determine heavy metals, which can be detrimental to consumers.Environmental and marine analysis typically refers to water analysis of various types. Water analysis includes many things, ranging from drinking water to waste water to sea water. Unlike biological samples, the preparation of water samples is governed more by laws than by the sample itself. The analytes that can be measured also vary greatly and can often include lead, copper, nickel, and mercury.An example of water analysis is an analysis of the leaching of lead and zinc from tin-lead solder into water.
The solder is what binds the joints of copper pipes. In this particular experiment, soft water, acidic water, and chlorinated water were all analyzed. The sample preparation consisted of exposing the various water samples to copper plates with solder for various intervals of time. The samples were then analyzed for copper and zinc with air-acetylene flame AAS. A deuterium lamp was used. For the samples that had copper levels below 100 µg/L, the method was changed to graphite furnace electrothermal AAS due to its higher sensitivity.Geological analysis encompasses both mineral reserves and environmental research. When prospecting mineral reserves, the method of AAS used needs to be cheap, fast, and versatile because the majority of prospects end up being of no economic use. When studying rocks, preparation can include acid digestion or leaching. If the silicon content of the sample needs to be analyzed, acid digestion is not a suitable preparation method.An example is the analysis of lake and river sediment for lead and cadmium. Because this experiment involves a solid sample, more preparation is needed than for the other examples. The sediment was first dried, then ground into a powder, and then decomposed in a bomb with nitric acid (\(\ce{HNO3}\)) and perchloric acid (\(\ce{HClO4}\)). Standards of lead and cadmium were prepared. Ammonium sulfate (\(\ce{(NH4)2SO4}\)) and ammonium phosphate (\(\ce{(NH4)3PO4}\)) were added to the samples to correct for the interferences caused by the sodium and potassium present in the sample. The standards and samples were then analyzed with electrothermal AAS.In order for the sample to be analyzed, it must first be atomized. This is an extremely important step in AAS because it determines the sensitivity of the reading. The most effective atomizers create a large number of homogeneous free atoms. There are many types of atomizers, but only two are commonly used: flame and electrothermal atomizers.Flame atomizers are widely used for a multitude of reasons, including their simplicity, low cost, and the long length of time that they have been utilized. Flame atomizers accept an aerosol from a nebulizer into a flame that has enough energy to both volatilize and atomize the sample. When this happens, the sample is dried, vaporized, atomized, and ionized. Within this category of atomizers, there are many subcategories determined by the chemical composition of the flame. The composition of the flame is often chosen based on the sample being analyzed. The flame itself should meet several requirements, including sufficient energy, adequate length, a non-turbulent flow, and safe operation.Although electrothermal atomizers were developed before flame atomizers, they did not become popular until more recently due to improvements made to the detection level. They employ graphite tubes that increase the temperature in a stepwise manner. Electrothermal atomization first dries the sample and evaporates much of the solvent and impurities, then atomizes the sample, and then raises it to an extremely high temperature to clean the graphite tube. Some requirements for this form of atomization are the ability to maintain a constant temperature during atomization, rapid atomization, the ability to hold a large volume of solution, and minimal emission of radiation. Electrothermal atomization is much less harsh than the method of flame atomization.The radiation source then irradiates the atomized sample. The sample absorbs some of the radiation, and the rest passes through the spectrometer to a detector.
Radiation sources can be separated into two broad categories: line sources and continuum sources. Line sources excite the analyte element within the lamp and thus emit its own line spectrum. Hollow cathode lamps and electrodeless discharge lamps are the most commonly used examples of line sources. On the other hand, continuum sources have radiation that spreads out over a wider range of wavelengths. These sources are typically only used for background correction. Deuterium lamps and halogen lamps are often used for this purpose.Spectrometers are used to separate the different wavelengths of light before they pass to the detector. The spectrometer used in AAS can be either single-beam or double-beam. Single-beam spectrometers only require radiation that passes directly through the atomized sample, while double-beam spectrometers, as implied by the name, require two beams of light: one that passes directly through the sample, and one that does not pass through the sample at all. Single-beam spectrometers have fewer optical components and therefore suffer less radiation loss. Double-beam spectrometers have more optical components, but they are also more stable over time because they can compensate for changes more readily.Sample preparation is extremely varied because of the range of samples that can be analyzed. Regardless of the type of sample, certain considerations should be made. These include the laboratory environment, the vessel holding the sample, storage of the sample, and pretreatment of the sample.Sample preparation begins with having a clean environment to work in. AAS is often used to measure trace elements, in which case contamination can lead to severe error. Possible equipment includes laminar flow hoods, clean rooms, and closed, clean vessels for transportation of the sample. Not only must the sample be kept clean, it also needs to be conserved in terms of pH, constituents, and any other properties that could alter the contents.When trace elements are stored, the material of the vessel walls can adsorb some of the analyte, leading to poor results. To correct for this, perfluoroalkoxy polymers (PFA), silica, glassy carbon, and other materials with inert surfaces are often used as the storage material. Acidifying the solution with hydrochloric or nitric acid can also help prevent ions from adhering to the walls of the vessel by competing for the available sites. The vessels should also have a minimal surface area in order to minimize possible adsorption sites.Pretreatment of the sample is dependent upon the nature of the sample. See Table \(\PageIndex{1}\) for sample pretreatment methods.In order to determine the concentration of the analyte in the solution, calibration curves can be employed. Using standards, a plot of concentration versus absorbance can be created. Three common methods used to make calibration curves are the standard calibration technique, the bracketing technique, and the analyte addition technique.The standard calibration technique is both the simplest and the most commonly used. The concentration of the sample is found by comparing its absorbance or integrated absorbance to a curve of the concentration of the standards versus the absorbances or integrated absorbances of the standards. In order for this method to be applied, certain conditions must be met. The curve is typically linear and involves at least five points from five standards at concentrations equidistant from each other. This ensures that the fit is acceptable. A least-squares calculation is used to fit the line linearly. In most cases, the curve is linear only up to absorbance values of 0.5 to 0.8. The absorbance values of the standards should have the absorbance value of a blank subtracted.
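As a rough numerical illustration of the standard calibration and bracketing techniques, the Python sketch below fits a least-squares line through a set of made-up, blank-corrected standards and also applies the two-standard bracketing interpolation; the data values are hypothetical.

```python
import numpy as np

def calibration_concentration(std_conc, std_abs, sample_abs):
    """Standard calibration: fit a least-squares line through the standards,
    then invert it to obtain the unknown concentration."""
    slope, intercept = np.polyfit(std_conc, std_abs, 1)
    return (sample_abs - intercept) / slope

def bracketing_concentration(c1, a1, c2, a2, a_x):
    """Bracketing technique: linear interpolation between two close standards."""
    return (a_x - a1) * (c2 - c1) / (a2 - a1) + c1

# Hypothetical blank-corrected standards (concentration in ppm, absorbance unitless)
std_conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
std_abs  = np.array([0.11, 0.20, 0.31, 0.39, 0.50])

print(calibration_concentration(std_conc, std_abs, 0.27))   # ~2.7 ppm
print(bracketing_concentration(2.0, 0.20, 3.0, 0.31, 0.27)) # ~2.6 ppm
```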
The bracketing technique is a variation of the standard calibration technique. In this method, only two standards are necessary, with concentrations \(c_1\) and \(c_2\) that bracket the approximate value of the sample concentration very closely. Applying Equation \ref{bracketing} determines the value for the sample, where \(c_x\) and \(A_x\) are the concentration and absorbance of the unknown, and \(A_1\) and \(A_2\) are the absorbances for \(c_1\) and \(c_2\), respectively.\[ c _ { x } = \frac { \left( A _ { x } - A _ { 1 } \right) \left( c _ { 2 } - c _ { 1 } \right) } { A _ { 2 } - A _ { 1 } } + c _ { 1 } \label{bracketing} \]This method is very useful when the concentration of the analyte in the sample is outside of the linear portion of the calibration curve, because the bracket is so small that the portion of the curve being used can be treated as linear. Although this method can be used accurately for nonlinear curves, the further the curve is from linear, the greater the error will be. To help reduce this error, the standards should bracket the sample very closely.The analyte addition technique is often used when the concomitants in the sample are expected to create many interferences and the composition of the sample is unknown. The previous two techniques both require that the standards have a similar matrix to that of the sample, but that is not possible when the matrix is unknown. To compensate for this, the analyte addition technique uses an aliquot of the sample itself as the matrix. The aliquots are then spiked with various amounts of the analyte. This technique must be used only within the linear range of the absorbances.Interference is caused by contaminants within the sample that absorb at the same wavelength as the analyte, and thus can cause inaccurate measurements. Corrections can be made through a variety of methods such as background correction, addition of chemical additives, or addition of analyte.This page titled 1.4: Introduction to Atomic Absorption Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.5: ICP-AES Analysis of Nanoparticles
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.05%3A_ICP-AES_Analysis_of_Nanoparticles
Inductively coupled plasma atomic emission spectroscopy (ICP-AES) is a spectral method used to determine very precisely the elemental composition of samples; it can also be used to quantify the elemental concentration within the sample. ICP-AES uses a high-energy plasma from an inert gas like argon to atomize and excite analytes very rapidly. The color that is emitted from the analyte is indicative of the elements present, and the intensity of the spectral signal is indicative of the concentration of the elements present. A schematic view of a typical experimental set-up is shown here.ICP-AES works by the emission of photons from analytes that are brought to an excited state by the use of high-energy plasma. The plasma source is induced by passing argon gas through an alternating electric field that is created by an inductively coupled coil. When the analyte is excited, its electrons dissipate the induced energy by moving to a lower-energy ground state; in doing this, they emit the excess energy in the form of light. The wavelength of light emitted depends on the energy gap between the excited energy level and the ground state. This is specific to the element, based on the number of electrons the element has and how its electron orbitals are filled. In this way the wavelength of light can be used to determine what elements are present by detection of the light at specific wavelengths.As a simple example, consider the situation when placing a piece of copper wire into the flame of a candle. The flame turns green due to the emission of excited electrons within the copper metal; as the electrons dissipate the energy incurred from the flame, they move to a more stable state, emitting energy in the form of light. The energy gap between the excited state and the ground state (\(ΔE\)) dictates the color of the light, or the wavelength of the light, Equation \ref{eq:DeltaE}, where \(h\) is Planck's constant (\(6.626 \times 10^{-34}\) m\(^2\) kg/s), and \(\nu\) is the frequency of the emitted light.\[ \Delta E = h \nu \label{eq:DeltaE} \]The wavelength of light is indicative of the element present. If another metal, such as iron, is placed in the flame, a different color flame will be emitted because the electronic structure of iron is different from that of copper. This is a very simple analogy for what is happening in ICP-AES and how it is used to determine what elements are present. By detecting the wavelength of light that is emitted from the analyte one can deduce what elements are present.Naturally, if there is a lot of the material present then there will be a cumulative effect making the intensity of the signal large. However, if there is very little material present the signal will be low. By this rationale one can create a calibration curve from analyte solutions of known concentrations, whereby the intensity of the signal changes as a function of the concentration of the material that is present. When measuring the intensity from a sample of unknown concentration, the intensity from this sample can be compared to that from the calibration curve, and so this can be used to determine the concentration of the analytes within the sample.
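Equation \ref{eq:DeltaE}, together with \(\nu = c/\lambda\), is all that is needed to move between an emission wavelength and the corresponding transition energy. The short Python sketch below is a minimal illustration (the function name is arbitrary), using the 259.940 nm iron emission line discussed later in this section.

```python
# Convert an atomic emission wavelength to the transition energy (Delta E = h*c/lambda).
H = 6.626e-34      # Planck's constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # J per electron volt

def transition_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given emission wavelength in nm."""
    wavelength_m = wavelength_nm * 1e-9
    return H * C / (wavelength_m * EV)

# Fe emission line at 259.940 nm -> roughly 4.8 eV
print(transition_energy_ev(259.940))
```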
As with any sample studied by ICP-AES, nanoparticles need to be digested so that all the atoms can be vaporized in the plasma equally. If a metal-containing nanoparticle were not digested using a strong acid to bring the metal atoms into solution, the form of the particle could hinder some of the material from being vaporized. The analyte would then not be detected even though it is present in the sample, and this would give an erroneous result. Nanoparticles are often covered with a protective layer of organic ligands, and this must be removed as well. Furthermore, the solvent used for the nanoparticles may also be organic, and this should be removed too, as it will not be miscible with the aqueous medium.Several of the organic solvents used have relatively low boiling points, so it is relatively easy to remove the solvent by heating the samples and allowing it to evaporate. To remove the organic ligands that are present on the nanoparticle, chloric acid can be used. This is a very strong acid and can break down the organic ligands readily. To digest the particles and get the metal into solution, concentrated nitric acid is often used.A typical protocol may use 0.5 mL of concentrated nanoparticle solution and digest this with 9.5 mL of concentrated nitric acid over the period of a few days. After this, 0.5 mL of the digested solution is placed in 9.5 mL of nanopure water. The reason why nanopure water is used is because DI water or regular water will have some amount of metal ions present, and these will be detected by the ICP-AES measurement and will lead to figures that are not truly representative of the analyte concentration alone. This is especially pertinent when there is a very low concentration of metal analyte to be detected, and is even more of a problem when the metal to be detected is commonly found in water, such as iron. Once the nanopure water and digested solution are prepared, the sample is ready for analysis.Another point to consider when using ICP-AES on nanoparticles to determine chemical compositions is the potential for wavelength overlap. The energy that is released in the form of light is unique to each element, but elements that are very similar in atomic structure will have emission wavelengths that are very similar to one another. Consider the example of iron and cobalt: these are both transition metals and sit right beside each other on the periodic table. Iron has an emission wavelength at 238.204 nm and cobalt has an emission wavelength at 238.892 nm. So if you were to try to determine the amount of each element in an alloy of the two, you would have to select another wavelength that is unique to each element and does not overlap with the wavelengths of other analytes in solution. For this case of iron and cobalt it would be wiser to use a detection wavelength of 259.940 nm for iron and 228.616 nm for cobalt. Bearing this in mind, a good rule of thumb is primarily to use the wavelength of the analyte that affords the best detection. But if this choice leads to a possible wavelength overlap within 15 nm of another analyte in the solution, then a different detection wavelength should be chosen to prevent wavelength overlap from occurring.Some people have also used the ICP-AES technique to determine the size of nanoparticles. The signal that is detected is determined by the amount of the material that is present in solution. If very dilute solutions of nanoparticles are being analyzed, particles are analyzed one at a time, i.e., there will be one nanoparticle per droplet in the nebulizer. The signal intensity would then differ according to the size of the particle. In this way the ICP-AES technique can be used to determine the concentration of the particles in the solution as well as the size of the particles.
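A simple way to screen a set of candidate detection wavelengths for this kind of overlap is sketched below in Python; the helper is illustrative only, and the 15 nm window and the Fe/Co lines are taken from the discussion above.

```python
from itertools import combinations

def overlapping_lines(lines_nm: dict, window_nm: float = 15.0):
    """Return pairs of analytes whose chosen emission lines fall within `window_nm`."""
    clashes = []
    for (el_a, wl_a), (el_b, wl_b) in combinations(lines_nm.items(), 2):
        if abs(wl_a - wl_b) < window_nm:
            clashes.append((el_a, el_b, round(abs(wl_a - wl_b), 3)))
    return clashes

# The primary Fe and Co lines clash...
print(overlapping_lines({"Fe": 238.204, "Co": 238.892}))   # [('Fe', 'Co', 0.688)]
# ...whereas the alternative lines are well separated.
print(overlapping_lines({"Fe": 259.940, "Co": 228.616}))   # []
```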
In order to perform ICP-AES, stock solutions must be prepared in dilute nitric acid. To do this, a concentrated solution should be diluted with nanopure water to prepare a 7 wt% nitric acid solution. If the concentrated solution is 69.8 wt% (check the assay amount that is written on the side of the bottle), then the dilution is calculated as follows.Concentrated percentage 69.8 wt% from the assay. First you must determine the molarity of the concentrated solution: \[ \text { Molarity } = \left[ ( \% ) ( \mathrm { d } ) / \left( \mathrm { M } _ { \mathrm { W } } \right) \right] \times 10 \label{eq:molarity} \]For the present assay amount, the figure is calculated as follows\[ \mathrm { M } = [ ( 69.8 ) ( 1.42 ) / ( 63.01 ) ] \times 10 \nonumber \]\[ \therefore \mathrm { M } = 15.73 \nonumber \]This is the initial concentration \( C_I\). To determine the molarity of the 7% solution we again use Equation \ref{eq:molarity} to find the final concentration \( C_F\).\[ \mathrm { M } = [ ( 7 ) ( 1.42 ) / ( 63.01 ) ] \times 10 \nonumber \]\[ \therefore \mathrm{M} = 1.58 \nonumber \]We use these figures to determine the amount of dilution required to dilute the concentrated nitric acid to make it a 7% solution.\[ \text { volume } _ { \mathrm{I} } \times \text { concentration } _ { \mathrm{I} } = \text { volume } _ { \mathrm { F } }\times \text { concentration } _ { \mathrm { F } } \nonumber \]Since we are dealing with solutions, the amounts are measured as volumes in mL and the concentrations as molarities; \(C_I\) and \(C_F\) have been calculated above.\[ \mathrm { mL } _ { \mathrm{I} } \times \mathrm { C } _ { \mathrm{I} } = \mathrm { mL } _ { \mathrm { F } } \times \mathrm { C } _ { \mathrm { F } } \label{eq:MV} \]\[ \therefore \mathrm { mL } _ { \mathrm{I} } = \left[ \mathrm { mL } _ { \mathrm { F } } \times \mathrm { C } _ { \mathrm { F } } \right]/ \mathrm { C } _ { \mathrm{I} } \nonumber \]In addition, the amount of dilute solution required will depend on the user and how much is needed to complete the ICP analysis; for the sake of argument let's say that we need 10 mL of dilute solution, this is mL\(_F\):\[ \mathrm { mL } _ { \mathrm{I} } = [ 10 \times 1.58 ] / 15.73 \nonumber \] \[ \therefore \mathrm { mL } _ { \mathrm{I} } = 1.00~\mathrm { mL } \nonumber \]This means that 1.00 mL of the concentrated nitric acid (69.8%) should be diluted up to a total of 10 mL with nanopure water (equivalently, 10.0 mL of the concentrated acid diluted up to a total of 100 mL).Now that you have your stock solution with the correct percentage, you can use this solution to prepare solutions of varying concentration. Let's take the example that the stock solution that you purchase from a supplier has a concentration of 100 ppm of analyte, which is equivalent to 100 μg/mL.In order to make your calibration curve more accurate it is important to be aware of two issues. Firstly, as with all straight-line graphs, the more points that are used, the better the statistical confidence that the line is correct. But, secondly, the more measurements that are made, the more room for error is introduced to the system; to avoid these errors one should be very vigilant and skilled in the pipetting and diluting of solutions. Especially when working with very low concentration solutions, a small drop of material taking the dilution above or below the exactly required amount can alter the concentration and hence affect the calibration deleteriously.
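The two calculations above, converting an assay percentage to molarity and applying the \(C_I V_I = C_F V_F\) relation, are easy to script. The Python sketch below is a minimal illustration of those relations; the numbers reproduce the 69.8 wt% nitric acid example and one calibration-standard dilution.

```python
def molarity_from_assay(wt_percent: float, density_g_ml: float, mol_weight: float) -> float:
    """Molarity = (% * density / molecular weight) * 10, as in the text."""
    return wt_percent * density_g_ml / mol_weight * 10

def volume_of_concentrate(c_initial: float, c_final: float, v_final_ml: float) -> float:
    """C_I * V_I = C_F * V_F  ->  volume of concentrate needed for the dilution."""
    return v_final_ml * c_final / c_initial

c_conc = molarity_from_assay(69.8, 1.42, 63.01)   # ~15.7 M concentrated HNO3
c_dil  = molarity_from_assay(7.0, 1.42, 63.01)    # ~1.6 M target (using the text's figures)
print(volume_of_concentrate(c_conc, c_dil, 10.0)) # ~1.0 mL of concentrate per 10 mL

# The same relation handles the calibration standards, e.g., 1 mL of a
# 100 ppm stock diluted to a total of 10 mL gives a 10 ppm standard:
print(volume_of_concentrate(100.0, 10.0, 10.0))   # 1.0 mL
```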
The premise upon which the calculation is done is based on Equation \ref{eq:MV}, whereby C refers to concentration in ppm and mL refers to volume in mL.The choice of concentrations to make will depend on the samples and the concentration of analyte within the samples that are being analyzed. For first-time users it is wise to make a calibration curve with a large range to encompass all the possible outcomes. When users are more aware of the kind of concentrations that they are producing in their synthesis, they can narrow down the range to fit the concentrations that they are anticipating.In this example we will make concentrations ranging from 10 ppm to 0.1 ppm, with a total of five samples. In a typical ICP-AES analysis about 3 mL of solution is used; however, if you have situations with substantial wavelength overlap then you may choose to do two separate runs, and so you will need approximately 6 mL of solution. In general it is wise to have at least 10 mL of solution to prepare for any eventuality that may occur. There will also be some extra amount needed for samples that are being used for the quality control check. For this reason 10 mL should be a sufficient amount to prepare of each concentration.The unknowns in Equation \ref{eq:MV} are the initial and final concentrations and the initial and final volumes.The methodology adopted works as follows: make the highest-concentration solution first, then take aliquots from that solution and dilute them further to the desired concentrations.Let's say the concentration of the stock solution from the supplier is 100 ppm of analyte. First we should dilute to a concentration of 10 ppm. To make 10 mL of a 10 ppm solution we should take 1 mL of the 100 ppm solution and dilute it up to 10 mL with nanopure water; now the concentration of this solution is 10 ppm. Then we can take from the 10 ppm solution and dilute it down to get a solution of 5 ppm. To do this, take 5 mL of the 10 ppm solution and dilute it to 10 mL with nanopure water; you will then have 10 mL of a solution with a concentration of 5 ppm. You can continue in this way, taking aliquots from each solution and working your way down in incremental steps, until you have a series of solutions with concentrations ranging from 10 ppm all the way down to 0.1 ppm or lower, as required.While ICP-AES is a useful method for quantifying the presence of a single metal in a given nanoparticle, another very important application comes from the ability to determine the ratio of metals within a sample of nanoparticles.In the following example we consider bi-metallic nanoparticles of iron with copper. In a typical synthesis 0.75 mmol of \(\ce{Fe(acac)3}\) is used to prepare iron-oxide nanoparticles of the form \(\ce{Fe3O4}\). It is possible to replace a quantity of the \(\ce{Fe^{n+}}\) ions with another metal of similar charge. In this manner bi-metallic particles were made with a precursor containing a suitable metal. In this example the additional metal precursor will be \(\ce{Cu(acac)2}\).The total metal amount in this example is kept at 0.75 mmol. So if we want to see the effect of having 10% of the metal in the reaction as copper, then we will use 10% of 0.75 mmol, that is, 0.075 mmol of \(\ce{Cu(acac)2}\), and the corresponding amount of iron is 0.675 mmol of \(\ce{Fe(acac)3}\). We can do this for successive increments of the metals until 100% copper oxide particles are made.Subsequent \(\ce{Fe}\) and \(\ce{Cu}\) ICP-AES of the samples will allow the determination of the \(\ce{Fe : Cu}\) ratio that is present in the nanoparticle.
This can be compared to the ratio of \(\ce{Fe}\) and \(\ce{Cu}\) that was applied as reactants. The graph shows how the percentage of \(\ce{Fe}\) in the nanoparticle changes as a function of how much \(\ce{Fe}\) is used as a reagent.Once the nanoparticles are digested and the ICP-AES analysis has been completed, you must turn the figures from the ICP-AES analysis into working numbers to determine the concentration of metals in the solution that was synthesized initially.Let's first consider nanoparticles of one metal alone. The figure given by the analysis in this case is given in units of mg/L; this is the value in ppm. This figure was recorded for the solution that was analyzed, which is of a much lower concentration than the initially synthesized solution, because the particles first had to be digested in acid and then diluted further into nanopure water.As mentioned above in the experimental procedure, 0.5 mL of the synthesized nanoparticle solution was first digested in 9.5 mL of concentrated nitric acid. When the digestion was complete, 0.5 mL of this solution was dissolved in 9.5 mL of nanopure water. This was the final solution that was analyzed using ICP, and the concentration of metal in this solution is far lower than that of the original solution. Each of these two steps dilutes the solution by a factor of 20 (0.5 mL taken up to a total of 10 mL), so the concentration of analyte in the final solution being analyzed is 1/400th of that in the solution that was originally synthesized.Let us take the example that upon analysis by ICP-AES the amount of \(\ce{Fe}\) detected is 6.38 mg/L. First convert the figure to mg/mL,\[ 6.38~\mathrm { mg } / \mathrm { L } \times 1 / 1000~\mathrm { L } / \mathrm { mL } = 6.38 \times 10 ^ { - 3 }~\mathrm { mg } / \mathrm { mL } \nonumber \]The analyzed solution had a total volume of 10 mL. Therefore we should multiply this value by 10 mL to see how much mass was in the whole container.\[ 6.38 \times 10 ^ { - 3 }~\mathrm { mg } / \mathrm { mL } \times 10~\mathrm { mL } = 6.38 \times 10 ^ { - 2 }~\mathrm { mg } \nonumber \]This is the total mass of iron that was present in the solution that was analyzed using the ICP device. This mass came from the 0.5 mL aliquot of the digested solution that was diluted to 10 mL, so to find the concentration of the digested solution we divide this mass by the volume of that aliquot.\[ 6.38 \times 10 ^ { - 2 }~\mathrm { mg } / 0.5~\mathrm { mL } = 0.1276~\mathrm { mg } / \mathrm { mL } \nonumber \]To attain the value in ppm, this should be multiplied by a thousand (1000 mL/L), giving 127.6 mg/L, that is, 127.6 ppm of \(\ce{Fe}\) in the digested solution.We now need to factor in the first dilution, in which 0.5 mL of the original synthesized solution was diluted to a total of 10 mL with concentrated nitric acid, again a factor of 20.\[ 0.1276~\mathrm { mg } / \mathrm { mL } \times 20 = 2.552~\mathrm { mg } / \mathrm { mL } \nonumber \]This is the concentration of analyte in the original synthesized solution. To convert this to ppm we multiply by 1000 mL/L, in the following way:\[ 2.552~\mathrm { mg } / \mathrm { mL } \times 1000~\mathrm { mL } / \mathrm { L } = 2552~\mathrm { mg } / \mathrm { L } \nonumber \]This is essentially your answer: 2552 ppm. This is the \(\ce{Fe}\) concentration of the original batch when it was synthesized and made soluble in hexanes.
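The chain of conversions above amounts to multiplying the measured concentration by the overall dilution factor. The Python sketch below is a minimal summary of that bookkeeping (the function and variable names are arbitrary), reproducing the 6.38 mg/L iron example with the two 0.5 mL to 10 mL dilutions described in the text.

```python
def original_concentration(measured_mg_per_l: float, dilution_factors) -> float:
    """Back-calculate the concentration of the as-synthesized solution (mg/L = ppm)
    from the ICP-AES reading and the successive dilution factors."""
    conc = measured_mg_per_l
    for factor in dilution_factors:
        conc *= factor
    return conc

# Two dilutions, each 0.5 mL taken up to a total of 10 mL (factor 10/0.5 = 20):
factors = [10 / 0.5, 10 / 0.5]
print(original_concentration(6.38, factors))   # 2552 mg/L, i.e., 2552 ppm Fe
```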
Moving on from calculating the concentration of individual elements, we can now consider the calculation of stoichiometric ratios in bi-metallic nanoparticles.Consider the case when we have both iron and copper in the nanoparticle. The amounts determined by ICP are shown in the calculations below.We must account for the molecular weight of each element by dividing the ICP-obtained value by the molecular weight of that particular element. For iron this is calculated by\[ \frac{1.429~\mathrm { mg }/ \mathrm { L }}{ 55.85} = 0.0211 \nonumber \]and thus this is the molar ratio of iron. On the other hand the ICP returns a value for copper that is given by:\[ \frac{1.837~\mathrm { mg } / \mathrm { L } }{ 63.55} = 0.0289 \nonumber \]To determine the percentage of iron we use the following equation, which gives a percentage value of 42.15% Fe.\[ \% \text { Fe } = \left[ \frac{ \text { molar ratio of iron } }{\text { sum of molar ratios } } \right] \times 100 \nonumber \]We work out the copper percentage similarly, which leads to an answer of 57.85% Cu.\[ \% \text { Cu} = \left[ \frac{ \text { molar ratio of copper} }{\text { sum of molar ratios } } \right] \times 100 \nonumber \]In this way the percentage of iron in the nanoparticle can be determined as a function of the reagent concentration prior to the synthesis.
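The molar-ratio bookkeeping for a bi-metallic particle can be wrapped in a few lines of Python. The sketch below is a minimal illustration with hypothetical mg/L readings; only the atomic weights and the percentage formula above are taken from the text.

```python
# Convert ICP-AES readings (mg/L) into atomic percentages for a bi-metallic particle,
# following the %Fe formula above. The readings below are hypothetical, for illustration.
ATOMIC_WEIGHTS = {"Fe": 55.85, "Cu": 63.55}  # g/mol

def atomic_percentages(readings_mg_per_l: dict) -> dict:
    """Divide each reading by its atomic weight, then normalize to percent."""
    molar = {el: conc / ATOMIC_WEIGHTS[el] for el, conc in readings_mg_per_l.items()}
    total = sum(molar.values())
    return {el: 100 * m / total for el, m in molar.items()}

print(atomic_percentages({"Fe": 2.0, "Cu": 2.0}))  # ~53% Fe, ~47% Cu by atom
```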
The previous examples have shown how to calculate both the concentration of one analyte and the effective shared concentration of metals in the solution. These figures pertain to the concentration of elemental atoms present in solution. To use this to determine the concentration of nanoparticles we must first consider how many of the atoms being detected make up one nanoparticle. Let us consider that the \(\ce{Fe3O4}\) nanoparticles are 7 nm in diameter. In a 7 nm particle we expect to find about 20,000 atoms. However, in this analysis we have only detected Fe atoms, so we must still account for the oxygen atoms that also form the crystal lattice.For every 3 Fe atoms there are 4 O atoms, but as iron is slightly larger than oxygen, this roughly makes up for the fact that there is one less Fe atom. This is an oversimplification, but at this time it serves the purpose of making the reader aware of the steps required when judging nanoparticle concentration. Let us consider that half of the nanoparticle is attributed to iron atoms and the other half to oxygen atoms.As there are 20,000 atoms in total in a 7 nm particle, when considering the effect of the oxide state we will say that for every 10,000 atoms of Fe there is one 7 nm particle. So now we must find out how many Fe atoms are present in the sample, so that we can divide by 10,000 to determine how many nanoparticles are present.In the case from above, we found that the solution as synthesized had a concentration of 2552 ppm Fe atoms in solution. To determine how many atoms this equates to we will use the fact that 1 mole of material contains the Avogadro number of atoms.\[ 2552~\mathrm { ppm } = 2552~\mathrm { mg } / \mathrm { L } = 2.552~\mathrm { g } / \mathrm { L } \nonumber \]1 mole of iron weighs 55.847 g. To determine how many moles we now have, we divide the values like this:\[ \frac{ 2.552~\mathrm{g / L} }{ 55.847~\mathrm{g/mol} } = 0.0457~\text { mol/L } \nonumber \]The number of atoms is found by multiplying this by Avogadro's number (\(6.022 \times 10^{23}\)):\[ ( 0.0457~\text { mol/L} ) \times \left( 6.022 \times 10 ^ { 23 } \text { atoms/mol } \right) = 2.75 \times 10 ^ { 22 }~\text { atoms/L } \nonumber \]For every 10,000 atoms we have a nanoparticle (NP) of 7 nm diameter; assuming all the particles are equivalent in size, we can then divide the values. This gives the concentration of nanoparticles per liter of solution as synthesized.\[ \left( 2.75 \times 10 ^ { 22 } \text { atoms/L } \right) / ( 10,000 \text { atoms/NP} ) = 2.75 \times 10 ^ { 18 }~\mathrm { NP } / \mathrm { L } \nonumber \]One very interesting aspect of nanoparticles is their incredible ratio of surface area to volume. As the particles get smaller and smaller, the surface area becomes more prominent. And as much of the chemistry is done on surfaces, nanoparticles are good contenders for future use where high surface-area-to-volume ratios are required.In the example above we considered the particles to be 7 nm in diameter. The surface area of such a particle is \(1.539 \times 10^{-16}\) m\(^2\). The combined surface area of all the particles is found by multiplying the number of particles by the surface area of each individual particle.\[ \left( 1.539 \times 10 ^ { - 16 }~\mathrm { m } ^ { 2 } \right) \times \left( 2.75 \times 10 ^ { 18 }~\mathrm { NP } / \mathrm { L } \right) = 423~\mathrm { m } ^ { 2 } / \mathrm { L } \nonumber \]To put this into context, an American football field is approximately 5321 m². So a liter of this nanoparticle solution has a combined particle surface area of roughly 8% of a football field. That is a lot of area in one liter of solution when you consider how much material it would take to cover that area with a thin layer of metallic iron; remember, there is only about 2.5 g/L of iron in this solution!This page titled 1.5: ICP-AES Analysis of Nanoparticles is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.6: ICP-MS for Trace Metal Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.06%3A_ICP-MS_for_Trace_Metal_Analysis
Inductively coupled plasma mass spectroscopy (ICP-MS) is an analytical technique for determining trace multi-elemental and isotopic concentrations in liquid, solid, or gaseous samples. It combines an ion-generating argon plasma source with the sensitive detection limits of mass spectrometry. Although ICP-MS is used for many different types of elemental analysis, including pharmaceutical testing and reagent manufacturing, this module will focus on its applications in mineral and water studies. Although akin to ICP-AES (inductively coupled plasma atomic emission spectroscopy), ICP-MS has significant differences, which will be mentioned as well.As shown in the figure, there are several basic components of an ICP-MS instrument: a sampling interface, a peristaltic pump leading to a nebulizer, a spray chamber, a plasma torch, a detector, an interface and ion-focusing system, a mass-separation device, and a vacuum chamber maintained by turbomolecular pumps. The basic operation works as follows: a liquid sample is pumped into the nebulizer to convert the sample into a spray. An internal standard, such as germanium, is pumped into a mixer along with the sample prior to nebulization to compensate for matrix effects. Large droplets are filtered out, and small droplets continue into the plasma torch, where they are turned into ions. The mass-separation device separates these ions based on their mass-to-charge ratio. An ion detector then converts these ions into an electrical signal, which is multiplied and read by computer software.The main difference between ICP-MS and ICP-AES is the way in which the ions are generated and detected. In ICP-AES, the ions are excited by a vertical plasma, emitting photons that are separated on the basis of their emission wavelengths. As implied by the name, ICP-MS separates the ions, generated by a horizontal plasma, on the basis of their mass-to-charge ratios (m/z). In fact, caution is taken to prevent photons from reaching the detector and creating background noise. The difference in ion formation and detection methods has a significant impact on the relative sensitivities of the two techniques. While both methods are capable of very fast, high-throughput multi-elemental analysis (~10 - 40 elements per minute per sample), ICP-MS has a detection limit of a few ppt to a few hundred ppm, compared to the ppb-ppm range (~1 ppb - 100 ppm) of ICP-AES. ICP-MS also works over eight orders of magnitude of detection range, compared to six for ICP-AES. As a result of its higher sensitivity, ICP-MS is a more expensive system. One other important difference is that only ICP-MS can distinguish between different isotopes of an element, as it segregates ions based on mass. A comparison of the two techniques is summarized in this table.With such small sample sizes, care must be taken to ensure that collected samples are representative of the bulk material. This is especially relevant for rocks and minerals, which can vary widely in elemental content from region to region. Random, composite, and integrated sampling are each different approaches for obtaining representative samples.Because ICP-MS can detect elements in concentrations as minute as a few nanograms per liter (parts per trillion), contamination is a very serious issue associated with collecting and storing samples prior to measurements. In general, the use of glassware should be minimized, due to leaching of impurities from the glass or absorption of analyte by the glass.
If glass is used, it should be washed periodically with a strong oxidizing agent, such as chromic acid (\(\ce{H2Cr2O7}\)), or a commercial glass detergent. In terms of sample containers, plastic is usually better than glass, with polytetrafluoroethylene (PTFE) and Teflon® regarded as the cleanest plastics. However, even these materials can contain leachable contaminants, such as phosphorus or barium compounds. All containers, pipettes, pipette tips, and the like should be soaked in 1 - 2% \(\ce{HNO3}\). Nitric acid is preferred over \(\ce{HCl}\), which can ionize in the plasma to form \(\ce{^{35}Cl^{16}O+}\) and \(\ce{^{40}Ar^{35}Cl+}\), which have the same mass-to-charge ratios as \(\ce{^{51}V+}\) and \(\ce{^{75}As+}\), respectively. If possible, samples should be prepared as close as possible to the ICP-MS instrument without being in the same room.

With the exception of solid samples analyzed by laser ablation ICP-MS, samples must be in liquid or solution form. Solids are ground into a fine powder with a mortar and pestle and passed through a mesh sieve. Often the first sample is discarded to prevent contamination from the mortar or sieve. Powders are then digested with ultrapure concentrated acids or oxidizing agents, like chloric acid (\(\ce{HClO3}\)), and diluted to the correct order of magnitude with 1 - 2% trace metal grade nitric acid.

Once in liquid or solution form, the samples must be diluted with 1 - 2% ultrapure \(\ce{HNO3}\) to a low enough concentration to produce a signal intensity lower than about \(10^6\) counts. Not all elements have the same concentration-to-intensity correlation; therefore, it is safer to test unfamiliar samples on ICP-AES first. Once properly diluted, the sample should be filtered through a 0.25 - 0.45 μm membrane to remove particulates.

Gaseous samples can also be analyzed by direct injection into the instrument. Alternatively, gas chromatography equipment can be coupled to an ICP-MS machine for separation of multiple gases prior to sample introduction.

Multi- and single-element standards can be purchased commercially and must be diluted further with 1 - 2% nitric acid to prepare a series of concentrations, from which the instrument software constructs a calibration curve used to determine the unknown concentration of the sample. There should be several standards, encompassing the expected concentration of the sample. Completely unknown samples should be tested on less sensitive instruments, such as ICP-AES or EDXRF (energy dispersive X-ray fluorescence), before ICP-MS.

While ICP-MS is a powerful technique, users should be aware of its limitations. Firstly, the intensity of the signal varies with each isotope, and there is a large group of elements that cannot be detected by ICP-MS. This consists of H, He and most gaseous elements, C, and elements without naturally occurring isotopes, including most actinides.

There are many different kinds of interferences that can occur with ICP-MS, when plasma-formed species have the same mass as the ionized analyte species. These interferences are predictable and can be corrected with element correction equations or by evaluating isotopes with lower natural abundances. Using a mixed gas with the argon source can also alleviate the interference.

The accuracy of ICP-MS is highly dependent on the user's skill and technique. Standard and sample preparations require utmost care to prevent incorrect calibration curves and contamination.
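To make the calibration and internal-standard steps described above concrete, the sketch below works through the arithmetic for a single element. The standard concentrations, count rates, and germanium internal-standard intensities are hypothetical values invented for illustration, and commercial instrument software performs this reduction automatically.

```python
# Hypothetical external calibration with internal-standard normalization (ICP-MS).

# Standards: concentration (ppb), raw analyte counts, Ge internal-standard counts.
standards = [
    (1.0, 5.2e3, 1.00e5),
    (10.0, 5.05e4, 0.98e5),
    (100.0, 5.1e5, 1.02e5),
]

# Normalize the analyte signal by the internal standard to compensate for
# matrix effects and drift, then fit a straight line through the origin.
ratios = [(conc, counts / istd) for conc, counts, istd in standards]
slope = sum(r * c for c, r in ratios) / sum(c * c for c, _ in ratios)

# Unknown sample: raw analyte counts and its own internal-standard reading.
sample_counts, sample_istd = 2.4e5, 0.95e5
concentration = (sample_counts / sample_istd) / slope

print(f"calibration slope: {slope:.4f} ratio per ppb")
print(f"estimated concentration: {concentration:.1f} ppb")
```

Fitting the internal-standard-normalized ratio rather than the raw counts is what compensates for sample-to-sample changes in matrix and instrument drift; as noted above, the standards should bracket the expected concentration of the sample.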
As exemplified below, a thorough understanding of chemistry is necessary to predict conflicting species that can be formed in the plasma and produce false positives. While an inexperienced user may be able to obtain results fairly easily, those results may not be trustworthy. Spectral interference and matrix effects are problems that the user must work diligently to correct.

In order to illustrate the capabilities of ICP-MS, various geochemical applications are described below. The chosen examples are representative of the types of studies that rely heavily on ICP-MS, highlighting its unique capabilities.

With its high throughput, ICP-MS has made sensitive multi-element analysis of rock and mineral samples feasible. Studies of trace components in rock can reveal information about the chemical evolution of the mantle and crust. For example, spinel peridotite xenoliths, which are igneous rock fragments derived from the mantle, were analyzed for 27 elements, including lithium, scandium, and titanium at the parts per million level and yttrium, lutetium, tantalum, and hafnium at the parts per billion level. X-ray fluorescence was used to complement ICP-MS, detecting metals at bulk concentrations. Both liquid and solid samples were analyzed, the latter using laser-ablation ICP-MS, which illustrates the flexibility of the technique and its use in tandem with other methods. In order to prepare the solution samples, optically pure minerals were sonicated in 3 M HCl, then 5% \(\ce{HF}\), then 3 M \(\ce{HCl}\) again, and dissolved in distilled water. The solid samples were ablated with a laser, and the resulting aerosol was carried into the plasma of the LA-ICP-MS instrument. The results showed good agreement between the laser ablation and solution methods. Furthermore, this comprehensive study shed light on the partitioning behavior of incompatible elements, which, due to their size and charge, have difficulty entering cation sites in minerals. In the upper mantle, incompatible trace elements, especially barium, niobium, and tantalum, were found to reside in glass pockets within the peridotite samples.

Another important area of geology that requires knowledge of trace elemental compositions is water analysis. In order to demonstrate the full capability of ICP-MS as an analytical technique in this field, researchers aim to use the identification of trace metals present in groundwater to determine a fingerprint for a particular water source. In one study, the analysis of four different Nevada springs determined trace metal concentrations at the parts per billion and even parts per trillion (ng/L) level. Because they were present in such low concentrations, samples containing the rare earth elements lutetium, thulium, and terbium were preconcentrated on a cation exchange column to enable detection at 0.05 ppt. For some isotopes, special corrections were necessary to account for false positives, which are produced by plasma-formed molecules with the same mass-to-charge ratio as the isotopic ions. For instance, false positives for Sc (m/z = 45) or Ti (m/z = 47) could result from \(\ce{CO2H+}\) (m/z = 45) or \(\ce{PO+}\) (m/z = 47); and \(\ce{BaO+}\) (m/z = 151, 153) conflicts with Eu-151 and Eu-153. In the latter case, barium has many isotopes in various abundances, with Ba-138 comprising 71.7% of natural barium. ICP-MS detects peaks corresponding to \(\ce{BaO+}\) for all of these isotopes.
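One common correction strategy, which the next paragraph describes for the Nevada springs study, is to monitor a Ba isotope that suffers no interference, use the known isotope abundances to estimate how much \(\ce{BaO+}\) appears at m/z = 151 and 153, and subtract that contribution from the apparent Eu signal. The sketch below illustrates the idea with invented count rates and an assumed (hypothetical) oxide-formation factor; it is not the exact correction equation used in the study.

```python
# Illustrative isobaric-interference correction for Eu-151 (hypothetical numbers).

# Natural Ba isotope abundances (fractional); Ba-138 is the most abundant at ~71.7%.
ba_abundance = {135: 0.0659, 138: 0.717}

# Measured count rates (counts/s) at selected m/z values for the sample.
counts_ba138 = 8.0e5      # interference-free Ba peak
counts_mz151 = 1.5e3      # apparent Eu-151 signal (Eu-151 plus 135Ba16O+)

# Assumed BaO+ formation factor: fraction of Ba ions observed as BaO+.
# In practice this would be determined from a Ba-only standard.
oxide_factor = 0.002

# Estimate the total Ba signal from the Ba-138 peak, then the 135Ba16O+
# contribution at m/z 151, and subtract it from the apparent Eu-151 signal.
total_ba = counts_ba138 / ba_abundance[138]
bao_151 = total_ba * ba_abundance[135] * oxide_factor

eu_151_corrected = counts_mz151 - bao_151
print(f"estimated 135Ba16O+ at m/z 151: {bao_151:.0f} counts/s")
print(f"corrected Eu-151 signal:        {eu_151_corrected:.0f} counts/s")
```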
Using this approach, the researchers were able to approximate a more accurate europium concentration by monitoring a non-interfering barium peak and extrapolating back to the concentration of barium in the system. This concentration was then subtracted out to give a more realistic europium concentration. By employing such strategies, false positives could be taken into account and corrected. Additionally, a 10 ppb internal standard was added to all samples to correct for changes in sample matrix, viscosity, and salt buildup throughout collection. In total, 54 elements were detected at levels spanning seven orders of magnitude. This study demonstrates the incredible sensitivity and working range of ICP-MS.

Elemental analysis in water is also important for the health of aquatic species, which can ultimately affect the entire food chain, including people. With this in mind, arsenic content was determined in fresh water and aquatic organisms in the Hayakawa River in Kanagawa, Japan, which has very high arsenic concentrations due to its hot spring source in Owakudani Valley. While water samples were simply filtered prior to analysis, organisms required special preparation in order to be compatible with the sampler. Organisms collected for this study included water bugs, green macroalgae, fish, and crustaceans. For total As content determination, the samples were freeze-dried to remove all water from the sample in order to know the exact final volume upon resuspension. Next, the samples were ground into a powder, followed by soaking in nitric acid and heating at 110 °C. The sample then underwent heating with hydrogen peroxide, dilution, and filtering through a 0.45 μm membrane. This protocol served to oxidize the entire sample and remove large particles prior to introduction into the ICP-MS instrument. Samples that are not properly digested can build up on the plasma torch and cause expensive damage to the instrument. Since the plasma converts the sample into its various ion constituents, it is unnecessary to know the exact oxidized products prior to sample introduction. In addition to total As content, the As concentration of the different organic arsenic-containing compounds (arsenicals) produced in the organisms was measured by high performance liquid chromatography coupled to ICP-MS (HPLC/ICP-MS). The arsenicals were separated by HPLC before travelling into the ICP-MS instrument for As concentration determination. For this experiment, the organic compounds were extracted from the biological samples by dissolving the freeze-dried samples in methanol/water solutions, sonicating, and centrifuging. The extracts were dried under vacuum, redissolved in water, and filtered prior to loading. This did not account for all compounds, however, because over 50% of the arsenicals were insoluble in aqueous solution. One important plasma side product to account for was \(\ce{ArCl+}\), which has the same mass-to-charge ratio (m/z = 75) as As. This was corrected for by oxidizing the arsenic ions within the mass-separation device in the ICP-MS vacuum chamber to generate \(\ce{AsO+}\), with m/z = 91. The total arsenic concentration of the samples ranged from 17 to 18 ppm.

This page titled 1.6: ICP-MS for Trace Metal Analysis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.7: Ion Selective Electrode Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.07%3A_Ion_Selective_Electrode_Analysis
Ion selective electrode (ISE) analysis is an analytical technique used to determine the activity of ions in aqueous solution by measuring an electrical potential. ISE has a number of advantages compared to other techniques. Based on these advantages, ISE has a wide variety of applications, which is reasonable considering the importance of measuring ion activity. For example, ISE finds use in pollution monitoring in natural waters (CN-, F-, S-, Cl-, etc.), in food processing (NO3- and NO2- in meat preservatives), for Ca2+ in dairy products, and for K+ in fruit juices.

Before focusing on how ISE works, it is helpful to consider what an ISE setup looks like and the components of the instrument. The setup has an ion selective electrode, which allows the measured ion to pass but excludes the passage of other ions. Within this ion selective electrode there is an internal reference electrode, made of a silver wire coated with solid silver chloride, embedded in a concentrated potassium chloride solution (filling solution) saturated with silver chloride. This solution also contains the same ion as that to be measured. There is also a reference electrode similar to the ion selective electrode, but there is no to-be-measured ion in its internal electrolyte, and the selective membrane is replaced by a porous frit, which allows the slow passage of the internal filling solution and forms a liquid junction with the external test solution. The ion selective electrode and reference electrode are connected by a millivoltmeter. Measurement is accomplished simply by immersing the two electrodes in the same test solution.

There is commonly more than one type of ion in solution, so how does an ISE manage to measure the concentration of a certain ion without being affected by the others? This is done by applying a selective membrane at the ion selective electrode, which only allows the desired ion to go in and out. At equilibrium, a potential difference exists between the two sides of the membrane, and it is governed by the concentration of the test solution as described by the Nernst equation (Equation \ref{eq:nernst}), where E is the potential, E0 is a constant characteristic of a particular ISE, R is the gas constant (8.314 J/K·mol), T is the temperature (in K), n is the charge of the ion, and F is the Faraday constant (96,500 coulombs/mol). In practical terms, the measured potential difference is proportional to the logarithm of the ion concentration. Thus, the relationship between potential difference and ion concentration can be determined by measuring the potentials of solutions of known ion concentration and plotting the measured potential against the logarithm of the ion concentration. Based on this plot, the ion concentration of an unknown solution can be found by measuring its potential and reading the concentration off the plot.

\[ E = E ^ { 0 } + ( 2.303~RT / nF ) \log C \label{eq:nernst} \]

Fluoride is added to drinking water and toothpaste to prevent dental caries, and thus the determination of its concentration is of great importance to human health. Here, we will give some data and calculations to show how the concentration of fluoride ion is determined, and have a glance at how relevant ISE is to our daily life.
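Before working through the fluoride data, the calibration workflow just described can be sketched in code. The snippet below is a minimal illustration only: the standard concentrations and measured potentials are hypothetical values, not data from this module, and a real analysis would use the experimentally measured slope rather than the ideal Nernstian one. The simplified form of the Nernst equation that underlies the linear fit is developed in the next paragraphs.

```python
import math

# Ideal Nernstian slope, 2.303*R*T/(n*F), at 25 °C.  For the F- anion the ion
# charge is -1, which makes the slope negative (about -59.2 mV per decade);
# the module's simplified form quotes |n| = 1 and lets the sign appear in the
# measured slope.
R, F = 8.314, 96485.0            # J/(K·mol), C/mol
T = 298.15                       # K
n = -1                           # charge of F-
ideal_slope = 2.303 * R * T / (n * F)      # volts per decade of concentration

# Hypothetical calibration standards: concentration (mg/L) and potential (V).
standards = [(0.195, 0.155), (2.0, 0.095), (20.0, 0.036), (200.0, -0.023)]

# Least-squares fit of E = K + S*log10(C).
xs = [math.log10(c) for c, _ in standards]
ys = [e for _, e in standards]
x_bar, y_bar = sum(xs) / len(xs), sum(ys) / len(ys)
S = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
K = y_bar - S * x_bar
print(f"fitted slope {S * 1000:.1f} mV/decade (ideal {ideal_slope * 1000:.1f} mV/decade)")

# Invert the calibration to read back an unknown from its measured potential.
E_unknown = 0.066                # V, hypothetical reading
C_unknown = 10 ** ((E_unknown - K) / S)
print(f"unknown fluoride concentration ≈ {C_unknown:.2f} mg/L")
```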
According to the Nernst equation (Equation \ref{eq:nernst}), in this case n = 1, T = 25 °C, and E0, R, and F are constants, so the equation can be simplified as

\[ E= K+S\log C \nonumber \]

The first step is to obtain a calibration curve for fluoride ion, and this can be done by preparing several fluoride standard solutions of known concentration and making a plot of E versus log C.

From the plot we can clearly identify the linear relationship between E and log C, with a slope measured at -59.4 mV per decade, which is very close to the theoretical value of -59.2 mV at 25 °C. This plot can give the concentration of any solution containing fluoride ion within the range of 0.195 mg/L to 200 mg/L by measuring the potential of the unknown solution.

Though ISE is a cost-effective and useful technique, it has some drawbacks that cannot be avoided. The ion selective membrane ideally only allows the measured ion to pass, so that the potential is determined only by this particular ion. However, the truth is that no membrane permits the passage of only one ion, and so there are cases in which more than one ion can pass through the membrane. As a result, the measured potential is affected by the passage of these “unwanted” ions. Also, because of its dependence on an ion selective membrane, one ISE is only suitable for one ion, which may sometimes be inconvenient. Another problem worth noting is that an ISE measures the concentration of ions in equilibrium at the surface of the membrane. This does not matter much if the solution is dilute, but at higher concentrations the inter-ionic interactions between the ions in the solution tend to decrease their mobility, and thus the concentration near the membrane is lower than that in the bulk. This is one source of inaccuracy of ISE. To properly interpret the results of ISE, we have to be aware of these inherent limitations.

This page titled 1.7: Ion Selective Electrode Analysis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.8: A Practical Introduction to X-ray Absorption Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.08%3A_A_Practical_Introduction_to_X-ray_Absorption_Spectroscopy
X-ray absorption spectroscopy (XAS) is a technique that uses synchrotron radiation to provide information about the electronic, structural, and magnetic properties of certain elements in materials. This information is obtained when X-rays are absorbed by an atom at energies near and above the core level binding energies of that atom. Therefore, a brief description of X-rays, synchrotron radiation, and X-ray absorption is provided prior to a description of sample preparation for powdered materials.

X-rays were discovered by Wilhelm Röntgen in 1895 (figure \(\PageIndex{1}\)). They are a form of electromagnetic radiation, in the same manner as visible light but with a very short wavelength, around 0.25 - 25 Å. As electromagnetic radiation, X-rays have a specific energy. The characteristic range is divided into soft and hard X-rays: soft X-rays cover the range from hundreds of eV to a few keV, and hard X-rays have an energy range from a few keV up to around 100 keV.

X-rays are commonly produced by X-ray tubes, when high-speed electrons strike a metal target. The electrons are accelerated by a high voltage towards the metal target; X-rays are produced when the electrons collide with the nuclei of the metal target.

Synchrotron radiation is generated when charged particles moving at very high (relativistic) velocities are deflected along a curved trajectory by a magnetic field. The charged particles are first accelerated by a linear accelerator (LINAC) (figure \(\PageIndex{2}\)); then, they are accelerated in a booster ring that injects the particles, moving almost at the speed of light, into the storage ring. There, the particles are accelerated toward the center of the ring each time their trajectory is changed so that they travel in a closed loop. X-rays with a broad spectrum of energies are generated and emitted tangential to the storage ring. Beamlines are placed tangential to the storage ring to use the intense X-ray beams at a wavelength that can be selected by varying the setup of the beamline. These are well suited for XAS measurements because the X-ray energies produced span 1000 eV or more, as needed for an XAS spectrum.

Light is absorbed by matter through the photoelectric effect. It is observed when an X-ray photon is absorbed by an electron in a strongly bound core level (such as the 1s or 2p level) of an atom (figure \(\PageIndex{3}\)). In order for a particular electronic core level to participate in the absorption, the binding energy of this core level must be less than the energy of the incident X-ray. If the binding energy is greater than the energy of the X-ray, the bound electron will not be perturbed and will not absorb the X-ray. If the binding energy of the electron is less than that of the X-ray, the electron may be removed from its quantum level.
In this case, the X-ray is absorbed and any energy in excess of the electronic binding energy is given as kinetic energy to a photo-electron that is ejected from the atom.

When X-ray absorption is discussed, the primary concern is the absorption coefficient, µ, which gives the probability that X-rays will be absorbed according to Beer's law (Equation \ref{eq:BeerLambert}), where I0 is the X-ray intensity incident on a sample, t is the sample thickness, and I is the intensity transmitted through the sample.

\[I = I _ { 0 } e ^ { - \mu t } \label{eq:BeerLambert} \]

The absorption coefficient, µE, is a smooth function of energy, with a value that depends on the sample density ρ, the atomic number Z, atomic mass A, and the X-ray energy E roughly as

\[ \mu _ { E } \approx \frac { \rho Z ^ { 4 } } { A E ^ { 3 } } \nonumber \]

When the incident X-ray has an energy equal to that of the binding energy of a core-level electron, there is a sharp rise in absorption: an absorption edge corresponding to the promotion of this core level to the continuum. For XAS, the main concern is µ as a function of energy, at and just above these absorption edges. An XAS measurement is simply a measure of the energy dependence of µ at and above the binding energy of a known core level of a known atomic species. Since every atom has core-level electrons with well-defined binding energies, the element to probe can be selected by tuning the X-ray energy to an appropriate absorption edge. These absorption edge energies are well known. Because the element of interest is chosen in the experiment, XAS is element-specific.

X-ray absorption fine structure (XAFS) spectroscopy, also named X-ray absorption spectroscopy, is a technique that can be applied across a wide variety of disciplines because the measurements can be performed on solids, gases, or liquids, including moist or dry soils, glasses, films, membranes, suspensions or pastes, and aqueous solutions. Despite this broad adaptability to the kind of material used, some samples limit the quality of an XAFS spectrum. Because of that, the sample requirements and sample preparation are reviewed in this section, as well as the experiment design; these are vital factors in the collection of good data for further analysis.

The main information obtained from XAFS spectra consists of small changes in the absorption coefficient, µ(E), which can be measured directly in transmission mode or indirectly in fluorescence mode. Therefore, a good signal-to-noise ratio is required (better than \(10^3\)). In order to obtain this signal-to-noise ratio, an intense beam is required (on the order of \(10^{10}\) photons/second or better), with an energy bandwidth of 1 eV or less, and the capability of scanning the energy of the incident beam over a range of about 1 keV above the edge within seconds to a few minutes. As a result, synchrotron radiation is preferred over the other kinds of X-ray sources mentioned previously.

Although the setup of a synchrotron beamline is mostly done with the assistance of specialist beamline scientists, it is useful to understand the system behind the measurement. The main components of an XAFS beamline, as shown in the figure below, are as follows.

Slits are used to define the X-ray beam profile and to block unwanted X-rays. Slits can be used to increase the energy resolution of the X-rays incident on the sample at the expense of some loss in X-ray intensity. Slits are either fixed or adjustable.
Fixed slits have a pre-cut opening with a height between 0.2 and 1.0 mm and a width of a few centimeters. Adjustable slits use metal plates that move independently to define each edge of the X-ray beam.

The monochromator is used to select the X-ray energy incident on the sample. There are two main kinds of X-ray monochromators. Most monochromator crystals are made of silicon or germanium and are cut and polished such that a particular atomic plane of the crystal is parallel to the surface of the crystal. The energy of the X-rays diffracted by the crystal is controlled by rotating the crystals in the white beam.

The harmonic X-ray intensity needs to be reduced, as these X-rays will adversely affect the XAS measurement. A common method for removing harmonic X-rays is using a harmonic rejection mirror. This mirror is usually made of Si for low energies, Rh for X-ray energies below the Rh absorption edge at 23 keV, or Pt for higher X-ray energies. The mirror is placed at a grazing angle in the beam such that the X-rays with the fundamental energy are reflected toward the sample, while the harmonic X-rays are not.

Most X-ray absorption measurements use ionization detectors. These contain two parallel plates separated by a gas-filled space that the X-rays travel through. Some of the X-rays ionize the gas particles. A voltage bias applied to the parallel plates separates the gas ions, creating a current. The applied voltage should give a linear detector response for a given change in the incident X-ray intensity. There are also other kinds, such as fluorescence and electron yield detectors.

X-ray absorption measurements can be performed in several modes: transmission, fluorescence, and electron yield, of which the first two are the most common. The choice of the most appropriate mode to use in an experiment is a crucial decision.

The transmission mode is the most used because it only requires measuring the X-ray flux before and after the beam passes through the sample. The absorption is therefore obtained as

\[ \mu _ { E } t = \ln \left( \frac { I _ { 0 } } { I } \right) \nonumber \]

Transmission experiments are standard for hard X-rays, because the use of soft X-rays would require samples thinner than 1 μm. This mode should also be used for concentrated samples. The sample should have the right thickness and be uniform and free of pinholes.

The fluorescence mode measures the incident flux I0 and the fluorescence X-rays If that are emitted following the X-ray absorption event. Usually the fluorescence detector is placed at 90° to the incident beam in the horizontal plane, with the sample at an angle, commonly 45°, with respect to the beam, because in that position the scattered contribution from the incident X-ray flux (I0) reaching the detector is minimized. The use of fluorescence mode is preferred for thicker samples or lower concentrations, even ppm concentrations or lower. For a highly concentrated sample, the fluorescence X-rays are reabsorbed by the absorber atoms in the sample, causing an attenuation of the fluorescence signal; this effect is known as self-absorption and is one of the most important concerns in the use of this mode.

The samples should have a uniform distribution of the absorber atom and have the correct absorption for the measurement. The X-ray beam typically probes a millimeter-sized portion of the sample. This volume should be representative of the entire sample. For transmission-mode samples, the thickness of the sample is very important.
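As a brief illustration of the transmission-mode arithmetic described above, the sketch below converts incident and transmitted intensities measured just below and just above an absorption edge into a total absorption and an edge step. The intensity values are invented for illustration and are not data from this module; the quantitative thickness criteria behind the target values printed at the end are developed in the next paragraph.

```python
import math

def total_absorption(I0, I):
    """mu(E)*t from incident and transmitted intensities, per Beer's law."""
    return math.log(I0 / I)

# Hypothetical ion-chamber readings (arbitrary units) below and above the edge.
I0_below, I_below = 1.00e6, 2.73e5   # gives mu*t of about 1.3 below the edge
I0_above, I_above = 1.00e6, 1.00e5   # gives mu*t of about 2.3 above the edge

mu_t_below = total_absorption(I0_below, I_below)
mu_t_above = total_absorption(I0_above, I_above)
edge_step = mu_t_above - mu_t_below   # Delta(mu*t), ideally around 1

print(f"mu*t below edge = {mu_t_below:.2f}")
print(f"mu*t above edge = {mu_t_above:.2f}  (should stay below ~2.5)")
print(f"edge step       = {edge_step:.2f}  (a value near 1 is desirable)")
```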
The sample should have a thickness, t, such that the total absorption is less than about 2.5 absorption lengths, µEt ≈ 2.5, and the partial absorption due to the absorber atoms is around one absorption length, ∆µEt ≈ 1, which corresponds to the edge step. The thickness that gives ∆µEt = 1 is

\[t = \frac { 1 } { \Delta \mu } = \frac { 1.66 \sum _ { i } n _ { i } M _ { i } } { \rho \sum _ { i } n _ { i } \left[ \sigma _ { i } \left( E _ { + } \right) - \sigma _ { i } \left( E _ { - } \right) \right] } \nonumber \]

where ρ is the compound density, n is the elemental stoichiometry, M is the atomic mass, σ(E) is the absorption cross-section in barns/atom (1 barn = \(10^{-24}\) cm\(^2\)) tabulated in the McMaster tables, and E+ and E- are energies just above and just below the edge. This calculation can be accomplished using the freely downloadable software HEPHAESTUS.

For non-concentrated samples, the total X-ray absorption of the sample is the most important quantity. It is related to the area concentration of the sample (\(ρt\), in g/cm2). The area concentration of the sample multiplied by the difference of the mass absorption coefficient (\(∆µE/ρ\)) gives the edge step; a desired value for a good measurement is an edge step equal to one, \((∆µE/ρ)ρt ≈ 1\).

The difference of the mass absorption coefficient is given by

\[ \left( \frac { \Delta \mu _ { E } } { \rho } \right) = \sum_i f _ { i } \left[ \left( \frac { \mu _ { E } } { \rho } \right) _ { i , ( E_+ ) } - \left( \frac { \mu _ { E } } { \rho } \right) _ { i , \left( E _{ - } \right) } \right] \nonumber \]

where \((µE/ρ)_i \) is the mass absorption coefficient of element i just above (\(E_+\)) and just below (\(E_-\)) the edge energy and \(f_i\) is the mass fraction of element i. Multiplying the area concentration, \(ρt\), by the cross-sectional area of the sample holder gives the amount of sample needed.

As described in the previous section, dilute solid samples can be prepared on large substrates, while concentrated solid samples have to be prepared as thin films; both methods are described below. Liquid and gas samples can also be measured, but the preparation of those kinds of samples is not discussed here because it depends on the specific requirements of each sample. Several designs can be used as long as they prevent the escape of the sample and the material used as the container does not absorb radiation at the energies used for the measurement.

This page titled 1.8: A Practical Introduction to X-ray Absorption Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.9: Neutron Activation Analysis (NAA)
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.09%3A_Neutron_Activation_Analysis_(NAA)
Neutron activation analysis (NAA) is a non-destructive analytical method commonly used to determine the identities and concentrations of elements within a variety of materials. Unlike many other analytical techniques, NAA is based on nuclear rather than electronic transitions. In NAA, samples are subjected to neutron radiation (i.e., bombarded with neutrons), which causes the elements in the sample to capture free neutrons and form radioactive isotopes, such as

\[ ^{59}_{27}\ce{Co} + ^1_0 n \rightarrow ^{60}_{27}\ce{Co} \nonumber \]

The excited isotope undergoes nuclear decay and loses energy by emitting a series of particles that can include neutrons, protons, alpha particles, beta particles, and high-energy gamma ray photons. Each element on the periodic table has a unique emission and decay path that allows the identity and concentration of the element to be determined.

Almost eighty years ago, in 1936, George de Hevesy and Hilde Levi published the first paper on the process of neutron activation analysis. They had discovered that rare earth elements such as dysprosium became radioactive after being activated by thermal neutrons from a radon-beryllium (226Ra + Be) source. Using a Geiger counter to count the beta particles emitted, Hevesy and Levi were able to identify the rare earth elements by half-life. This discovery led to the increasingly popular process of inducing radioactivity and observing the resulting nuclear decay in order to identify an element, a process we now know as NAA. In the years immediately following Hevesy and Levi's discovery, however, the advancement of this technique was restricted by the lack of stable neutron sources and adequate spectrometry equipment. Even with the development of charged-particle accelerators in the 1930s, analyzing multi-element samples remained time-consuming and tedious. The method was improved in the mid-1940s with the availability of the X-10 reactor at the Oak Ridge National Laboratory, the first research-type nuclear reactor. As compared with the earlier neutron sources used, this reactor increased the sensitivity of NAA by a factor of a million. Yet the detection step of NAA still revolved around Geiger or proportional counters; thus, many technological advancements were still to come. As technology has progressed in recent decades, the NAA method has grown tremendously, and scientists now have a plethora of neutron sources and detectors to choose from when analyzing a sample with NAA.

In order to analyze a material with NAA, a small sample of at least 50 milligrams must be obtained from the material, usually by drilling. It is suggested that two different samples be obtained from the material using two drill bits of different compositions. This will reveal any contamination from the drill bits and, thus, minimize error. Prior to irradiation, the small samples are encapsulated in vials of either quartz or high purity linear polyethylene.

Neutron activation analysis works through the processes of neutron activation and radioactive decay. In neutron activation, radioactivity is induced by bombarding a sample with free neutrons from a neutron source. The target atomic nucleus captures a free neutron and, in turn, enters an excited state. This excited and therefore unstable isotope undergoes nuclear decay, a process in which the unstable nucleus emits a series of particles that can include neutrons, protons, alpha, and beta particles in an effort to return to a low-energy, stable state.
As suggested by the several different particles of ionizing radiation listed above, there are many different types of nuclear decay possible. These are summarized in the figure below.

An additional type of nuclear decay is that of gamma radiation (denoted as γ), a process in which the excited nucleus emits high-energy gamma ray photons. There is no change in either neutron number N or atomic number Z, yet the nucleus undergoes a nuclear transformation involving the loss of energy. In order to distinguish the higher energy parent nucleus (prior to gamma decay) from the lower energy daughter nucleus (after gamma decay), the mass number of the parent nucleus is labeled with the letter m, which means “metastable.” An example of gamma radiation with the element technetium is shown here.

\[ ^{99m}_{43}\ce{Tc} \rightarrow ^{99}_{43}\ce{Tc} + ^0_0\gamma \nonumber \]

In NAA, the radioactive nuclei in the sample undergo both gamma and particle nuclear decay. The figure below presents a schematic example of nuclear decay. After capturing a free neutron, the excited 60mCo nucleus undergoes an internal transformation by emitting gamma rays. The lower-energy daughter nucleus 60Co, which is still radioactive, then emits a beta particle. This results in a high-energy 60Ni nucleus, which once again undergoes an internal transformation by emitting gamma rays. The nucleus then reaches the stable 60Ni state.

Although alpha and beta particle detectors do exist, most detectors used in NAA are designed to detect the gamma rays that are emitted from the excited nuclei following neutron capture. Each element has a unique radioactive emission and decay path that is scientifically known. Thus, based on the path and the spectrum produced by the instrument, NAA can determine the identity and concentration of the element.

As mentioned above, there are many different neutron sources that can be used in modern-day NAA. A chart comparing three common sources is shown in the table below.

As mentioned earlier, most detectors used in NAA are designed to detect the gamma rays emitted from the decaying nucleus. Two widely used gamma detectors are the scintillation type and the semiconductor type. The former uses a sensitive crystal, often sodium iodide doped with thallium (NaI(Tl)), that emits light when gamma rays strike it. Semiconductor detectors, on the other hand, use germanium to form a diode that produces a signal in response to gamma radiation. The signal produced is proportional to the energy of the emitted gamma radiation. Both types of gamma detectors have excellent sensitivity, with detection limits ranging from 0.1 to \(10^6\) nanograms of element per gram of sample, but semiconductor-type detectors usually have superior resolution.

Furthermore, particle detectors designed to detect the alpha and beta particles that are emitted in nuclear decay are also available; however, gamma detectors are favored. Particle detectors require a high vacuum since atmospheric gases in the air can absorb and affect the emission of these particles. Gamma rays are not affected in this way.

Instrumental neutron activation analysis (INAA) is the simplest and most widely used form of NAA. It involves the direct irradiation of the sample, meaning that the sample does not undergo any chemical separation or treatment prior to detection. INAA can only be used if the activity of the other radioactive isotopes in the sample does not interfere with the measurement of the element(s) of interest.
Interference often occurs when the element(s) of interest are present in trace or ultratrace amounts. If interference does occur, the activity of the other radioactive isotopes must be removed or eliminated. Radiochemical separation is one way to do this. NAA that involves sample decomposition and elemental separation is known as radiochemical neutron activation analysis (RNAA). In RNAA, the interfering elements are separated from the element(s) of interest through an appropriate separation method. Such methods include extractions, precipitations, distillations, and ion exchanges. Inactive elements and matrices are often added to ensure appropriate conditions and typical behavior for the element(s) of interest. A schematic comparison of INAA and RNAA is shown below.

Another experimental parameter that must be considered is the kinetic energy of the neutrons used for irradiation. In epithermal neutron activation analysis (ENAA), the neutrons – known as epithermal neutrons – are partially moderated in the reactor and have kinetic energies between 0.5 eV and 0.5 MeV. These are lower-energy neutrons as compared to fast neutrons, which are used in fast neutron activation analysis (FNAA). Fast neutrons are high-energy, unmoderated neutrons with kinetic energies above 0.5 MeV.

The final parameter to be discussed is the time of measurement. The nuclear decay products can be measured either during or after neutron irradiation. If the gamma rays are measured during irradiation, the procedure is known as prompt gamma neutron activation analysis (PGNAA). This is a special type of NAA that requires additional equipment, including an adjacent gamma detector and a neutron beam guide. PGNAA is often used for elements with rapid decay rates, elements with weak gamma emission intensities, and elements that cannot easily be determined by delayed gamma neutron activation analysis (DGNAA), such as hydrogen, boron, and carbon. In DGNAA, the emitted gamma rays are measured after irradiation. DGNAA procedures include much longer irradiation and decay periods than PGNAA, often extending into days or weeks. This means that DGNAA is ideal for long-lasting radioactive isotopes. A schematic comparison of PGNAA and DGNAA is shown below.

Throughout recent decades, NAA has often been used to characterize many different types of samples, including archaeological materials. In 1961, the Demokritos nuclear reactor, a water moderated and cooled reactor, went critical at low power at the National Center for Scientific Research “Demokritos” (NCSR “Demokritos”) in Athens, Greece. Since then, NCSR “Demokritos” has been a leading center for the analysis of archaeological materials.

Ceramics, carbonates, silicates, and steatite are routinely analyzed at NCSR “Demokritos” with NAA. A routine analysis begins by weighing and placing 130 milligrams of the powdered sample into a polyethylene vial. Two batches of ten vials, eight samples and two standards, are then irradiated in the Demokritos nuclear reactor for 45 minutes at a thermal neutron flux of 6 × \(10^{13}\) neutrons cm\(^{-2}\) s\(^{-1}\). The first measurement occurs seven days after irradiation. The gamma ray emissions of both the samples and standards are counted with a germanium gamma detector (semiconductor type) for one hour. This measurement determines the concentrations of the following elements: As, Ca, K, La, Lu, Na, Sb, Sm, U, and Yb. A second measurement is performed three weeks after irradiation, in which the samples and standards are counted for two hours.
In this measurement, the concentrations of the following elements are determined: Ba, Ce, Co, Cr, Cs, Eu, Fe, Hf, Nd, Ni, Rb, Sc, Ta, Tb, Th, Zn, and Zr.

Using the method described above, NCSR “Demokritos” analyzed 195 samples of black-on-red painted pottery from the late Neolithic age in what is now known as the Black-On-Red Pottery Project. An example of black-on-red painted pottery is shown here.

This project aimed to identify production patterns in this ceramic group and explore the degree of standardization, localization, and scale of production from 14 sites throughout the Strymonas Valley in northern Greece. A map of the area of interest is provided below in figure \(\PageIndex{6}\). NCSR “Demokritos” also sought to analyze the variations in pottery traditions by differentiating so-called ceramic recipes. By using NAA, NCSR “Demokritos” was able to determine the unique chemical make-ups of the many pottery fragments. The chemical patterning revealed through the analyses suggested that the 195 samples of black-on-red Neolithic pottery came from four distinct production areas, with the primary production area located in the valley of the Strymon and Angitis rivers. Although distinct, the pottery from the four different geographical areas all had common technological and stylistic characteristics, which suggests that a level of standardization did exist throughout the area of interest during the late Neolithic age.

Additionally, NAA has been used in hematology laboratories to determine specific elemental concentrations in blood and provide information to aid in the diagnosis and treatment of patients. Identifying abnormalities and unusual concentrations of certain elements in the bloodstream can also aid in the prediction of damage to the organ systems of the human body.

In one study, NAA was used to determine the concentrations of sodium and chlorine in blood serum. In order to investigate the accuracy of the technique in this setting, 26 blood samples of healthy male and female donors – aged between 25 and 60 years and weighing between 50 and 85 kilograms – were selected from the Paulista Blood Bank in São Paulo. The samples were initially irradiated for 2 minutes at a neutron flux ranging from approximately 1 × \(10^{11}\) to 6 × \(10^{11}\) neutrons cm\(^{-2}\) s\(^{-1}\) and counted for 10 minutes using a gold activation detector. The procedure was later repeated using a longer irradiation time of 10 minutes. The determined concentrations of sodium and chlorine were then compared to standard values. The NAA analyses resulted in concentrations that strongly agreed with the adopted reference values. For example, the chlorine concentration was found to be 3.41 - 3.68 µg/µL of blood, which agrees closely with the reference value of 3.44 - 3.76 µg/µL of blood. This illustrates that NAA can accurately measure elemental concentrations in a variety of materials, including blood samples.

Although NAA is an accurate (~5%) and precise (<0.1%) multi-element analytical technique, it has several limitations that should be addressed. Firstly, samples irradiated in NAA will remain radioactive for a period of time (often years) following the analysis procedures. These radioactive samples require special handling and disposal protocols. Secondly, the number of available nuclear reactors has declined in recent years. In the United States, only 31 nuclear research and test reactors are currently licensed and operating.
A map of these reactors is shown here. As a result of the declining number of reactors and irradiation facilities in the nation, the cost of neutron activation analysis has increased. The popularity of NAA has declined in recent decades due to both the increasing cost and the development of other successful multi-element analytical methods such as inductively coupled plasma atomic emission spectroscopy (ICP-AES).

This page titled 1.9: Neutron Activation Analysis (NAA) is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.10: Total Carbon Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.10%3A_Total_Carbon_Analysis
Carbon is one of the more abundant elements on the planet; all living things and many non-living things have some form of carbon in them. Having the ability to measure and characterize the carbon content of a sample is of extreme value in a variety of different industries and research environments.

Total carbon (TC) content is just one important piece of information needed by analysts concerned with the carbon content of a sample. Knowledge of the origin of the carbon in the sample, whether it is derived from organic or inorganic material, is also of extreme importance. For example, oil companies are interested in finding petroleum, a carbon-containing material derived from organic matter; knowing the carbon content and the type of carbon in a sample of interest can mean the difference between investing millions of dollars and not doing so. Regulatory agencies like the U.S. Environmental Protection Agency (EPA) are another such example, where regulation of the carbon content and the character of that carbon is essential for environmental and human health.

Considering the importance of identifying and quantifying the carbon content of an analyte, it may be surprising to learn that there is no single method to measure the carbon content of a sample. Unlike other techniques, no fancy instrument is required (although some exist that can be useful). In fact, the methods used to measure the different forms of carbon (organic or inorganic) differ from one another because they take advantage of properties characteristic of the form of carbon being measured; you will most likely use multiple techniques, not just one, to fully characterize the carbon content of a sample.

Measurements of carbon content are related: total carbon content (TC), total inorganic carbon content (TIC), and total organic carbon content (TOC) are related to one another by

\[ \mathrm { TC } = \mathrm { TIC } + \mathrm { TOC }. \label{eq:TC} \]

This means that measurement of two of these quantities can indirectly give you the third, as there are only two classes of carbon: organic carbon and inorganic carbon.

Herein several of the methods used in measuring the TOC, TIC, and TC of samples will be outlined. Not all samples require the same kinds of instruments and methods. The goal of this module is to get the reader to see the simplicity of some of these methods and understand the need for such quantification and analysis.

The total organic carbon content for a variety of different samples can be determined; there are very few samples that cannot be measured for total carbon content. Before treatment, a sample must be homogenized, whereby the sample is mixed or broken up such that a measurement done on the sample is representative of the entire sample. For example, if our sample were a rock, we would want to make sure that the inner core of the rock, which could have a different composition than the outer surface, is measured as well. Not homogenizing the sample would lead to inconsistent and perhaps irreproducible results. Techniques for homogenization vary widely depending on the sample.

In order to measure the organic carbon content in a sample, the inorganic sources of carbon, which exist in the form of carbonate and bicarbonate salts and minerals, must be removed from the sample.
This is typically done by treating the sample with non-oxidizing acids such as H2SO4 and HCl, releasing CO2 and H2O, as shown:

\[ \ce{2HCl + CaCO3 -> CaCl2 + CO2 + H2O} \nonumber \]

\[ \ce{HCl + NaHCO3 -> NaCl + H2O + CO2} \nonumber \]

Non-oxidizing acids are chosen so that minimal amounts of organic carbon are affected. The selection of the acid used to remove the inorganic sources of carbon is nevertheless important; depending on the measurement technique, the acid may interfere with the measurement. For example, in the wet measurement technique that will be discussed later, the counter ion Cl- will add systematic error to the measurement.

Treatment of a sample with acid is intended to dissolve all inorganic forms of carbon in the sample. By selectively digesting and dissolving the inorganic forms of carbon, be they aqueous carbonates, bicarbonates, or trapped CO2, one can selectively remove inorganic sources of carbon from organic ones, thereby leaving behind, in theory, only organic carbon in the sample.

The importance of sample homogenization becomes apparent in this treatment. Using the rock example again: if a rock is treated with acid without homogenizing, only the inorganic carbon at the surface of the sample may be dissolved. Only with homogenization can the acid dissolve the inorganic carbon on the inside of the rock. Otherwise this inorganic carbon may be interpreted as organic carbon, leading to gross errors in total organic carbon determination.

A large problem and a potential source of error in these measurements are the assumptions that have to be made, particularly in the case of TOC measurement, that all of the inorganic carbon has been washed away and separated from the sample. There is no way to distinguish TOC from TIC spectroscopically; the experimenter is forced to assume that what they are looking at is all organic carbon or all inorganic carbon, when in reality there may be some of both still in the sample.

Most TOC quantification methods are destructive in nature, meaning that none of the sample may be recovered. Two destructive techniques will be discussed in this module: the first is the wet method used to measure the TOC of solid sediment samples, and the second is dry combustion.

Following sample pre-treatment with inorganic acids to dissolve away any inorganic material from the sample, a known amount of potassium dichromate (K2Cr2O7) in concentrated sulfuric acid is added to the sample as per the Walkley-Black procedure, a well-known wet technique. The amounts of dichromate and H2SO4 added can vary depending on the expected organic carbon content of the sample; typically, enough H2SO4 is added that the solid potassium dichromate dissolves in solution.

The mixture of potassium dichromate with H2SO4 is an exothermic one, meaning that heat is evolved from the solution. As the dichromate reacts according to

\[ \ce{2Cr2O7^2- + 3C^0 + 16 H+ -> 4Cr^3+ + 3CO2 + 8H2O} \label{eq:dichromate} \]

the solution will bubble away CO2. Because the only source of carbon in the sample is, in theory, the organic forms of carbon (assuming adequate pre-treatment of the sample to remove the inorganic forms of carbon), the evolved CO2 comes from organic sources of carbon.

Elemental forms of carbon present a problem in this method because they are difficult to oxidize to CO2, meaning that not all of the carbon will be converted to CO2; this will lead to an underestimation of total organic carbon content in the quantification steps.
In order to facilitate the oxidation of elemental carbon, the digestive solution of dichromate and H2SO4 is heated at 150 °C for some time (~30 min, depending on the total carbon content of the sample and the amount of dichromate added). It is important that the solution not be heated above 150 °C, as the dichromate solution decomposes above this temperature.

Other shortcomings, in addition to incomplete digestion, exist with this method. Fe2+ and Cl- in the sample can interfere with the dichromate solution: Fe2+ can be oxidized to Fe3+, and Cl- can form CrO2Cl2, leading to a systematic error towards higher apparent organic carbon content. Conversely, MnO2, like dichromate, will oxidize organic carbon, thereby leading to a negative bias and an underestimation of the TOC content of samples.

In order to counteract these biases, several additives can be used in the pre-treatment process. Fe2+ can be oxidized with the mild oxidant phosphoric acid, which will not oxidize organic carbon. Treatment of the digestive solution with Ag2SO4 can precipitate the chloride as silver chloride. MnO2 interferences can be dealt with using FeSO4, where the oxidizing power of the manganese is consumed by taking the iron(II) to the +3 oxidation state. Any excess iron(II) can be dealt with using phosphoric acid.

What follows sample treatment, in which all of the organic carbon has been digested, is a back-titration to quantify the excess dichromate remaining in the sample. Comparing the excess that is titrated to the amount that was originally added, one can do stoichiometric calculations according to Equation \ref{eq:dichromate} and calculate the amount of dichromate that oxidized the organic carbon in the sample, thereby allowing the determination of TOC in the sample; a numerical sketch of this back-calculation is given below. How this titration is run is up to the user; manual and potentiometric titrations, among others, are available to the investigator doing the TOC measurement.

Measurement of TOC via the described wet techniques is a rather crude way to measure organic carbon content in a sample. The technique relies on several assumptions that in reality are not wholly accurate, leading to TOC values that are, in reality, approximations.

As mentioned previously, measurement of TOC levels in water is extremely valuable to regulatory agencies concerned with water quality. The presence of organic carbon in a substance that should have no carbon is of concern. Measurement of TOC in water uses a variant of the wet method in order to avoid highly toxic oxidants: typically a persulfate salt is used as an oxidant instead of dichromate.

The procedure for measuring TOC levels in water is essentially the same as in the typical wet oxidation technique. The water is first acidified to remove inorganic sources of carbon. Because water is being measured, one cannot simply wash away the inorganic carbon; instead, the inorganic carbon escapes from the solution as CO2. The remaining carbon in the solution is assumed to be organic. Treatment of the solution with persulfate alone does nothing; irradiating the persulfate-treated solution with UV radiation, or heating it, activates a radical species. This radical species mediates the oxidation of the organic carbon to CO2, which can then be quantified by methods similar to those of the traditional wet oxidation technique.

As an alternative for TOC measurement, dry techniques present several advantages over wet techniques. Dry techniques frequently involve the measurement of carbon evolved from the combustion of a sample.
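Before moving on to the dry techniques, here is the back-titration arithmetic of the wet (Walkley-Black) oxidation promised above. The amounts of dichromate, the titrated excess, and the sample mass are hypothetical values chosen only for illustration, and the calculation simply applies the 2 : 3 dichromate-to-carbon stoichiometry of Equation \ref{eq:dichromate}; it is not a substitute for a validated protocol, which normally also applies an empirical recovery factor for incomplete oxidation.

```python
# Hypothetical Walkley-Black style back-calculation (illustration only).
# Assumes the excess dichromate is quantified by back-titration; all of the
# numbers below are invented for the example.

M_C = 12.011                      # g/mol, molar mass of carbon

dichromate_added_mol = 0.010      # mol Cr2O7^2- initially added to the digest
excess_dichromate_mol = 0.004     # mol Cr2O7^2- found by back-titration

# Dichromate consumed by oxidation of organic carbon.
consumed = dichromate_added_mol - excess_dichromate_mol

# From 2 Cr2O7^2- + 3 C + 16 H+ -> 4 Cr^3+ + 3 CO2 + 8 H2O:
# 3 mol C are oxidized per 2 mol Cr2O7^2- consumed.
carbon_mol = consumed * 3 / 2
carbon_g = carbon_mol * M_C

sample_mass_g = 1.00              # mass of digested sample
toc_percent = 100 * carbon_g / sample_mass_g
print(f"dichromate consumed: {consumed:.4f} mol")
print(f"organic carbon:      {carbon_g:.3f} g  ->  TOC ≈ {toc_percent:.1f} %")
```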
In this section of the module, TOC measurements using dry techniques will be discussed. As in the wet-oxidation case, measurement of TOC by dry techniques requires the removal of inorganic forms of carbon, and therefore samples are treated with inorganic acids to do so. The inorganic acids are washed away, and theoretically only organic forms of carbon remain. Before combustion, the treated sample must be completely dried so as to remove any moisture. In the case where non-volatile organics are present, or where there is little concern about the escape of organic material (e.g., rock samples or kerogen), the sample can be placed in a 100 °C oven overnight. In the case where evolution of organic matter at slightly elevated temperatures is a problem, drying can be done under vacuum and in the presence of a desiccant such as Drierite. Volatile organics are difficult to measure using dry techniques because the sample needs to be free of moisture, and removal of moisture by any technique will most likely also remove the volatile organics.

As mentioned before, quantification of TOC in the dry method proceeds via complete combustion of the sample in a carbon-free atmosphere (typically a pure oxygen atmosphere). Quantification is performed with a non-dispersive infrared detection cell. A characteristic asymmetric stretch at 2350 cm-1 can be seen for CO2, and the intensity of this CO2 infrared signal is proportional to the quantity of CO2 evolved from the sample. Therefore, in order to translate signal intensity to amount, a calibration curve is constructed from known amounts of pure calcium carbonate, looking specifically at the intensity of the CO2 peak. One may point out that calcium carbonate is an inorganic source of carbon, but it is important to note that the source of the carbon has no effect on its quantification. Preparation of the calibration standards follows a procedure similar to that for an analyte; while no pre-treatment with acid is needed, the standards must be thoroughly dried in an oven. When a sample is ready to be analyzed, it is first weighed on an analytical balance and then placed in a combustion analyzer, such as a LECO analyzer, in which the furnace and the non-dispersive IR cell are combined in one machine.

Combustion proceeds at temperatures in excess of 1350 °C in a stream of pure oxygen. Comparing the intensity of the characteristic IR peak of the sample to the intensities of the characteristic IR peaks of the known standards, the TOC of the sample can be determined. By comparing the mass of carbon obtained from the analyzer to the mass of the sample, the percent organic carbon in the sample can be determined according to

\[ \% \text { TOC } = \frac{ \text {mass carbon} }{ \text {mass sample} } \times 100\% \nonumber \]

Use of this dry technique is most common for rock and other solid samples. In the oil and gas industry, it is extremely important to know the organic carbon content of rock samples in order to ascertain the production viability of a well. The sample can be loaded into the LECO combustion analyzer and pyrolyzed in order to quantify TOC.

As shown in Equation \ref{eq:TC}, the total carbon in a sample (TC) is the sum of the inorganic forms of carbon and the organic forms of carbon in the sample. It is known that no other sources of carbon contribute to the TC determination because no other classes of carbon exist.
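To tie the combustion-based measurements together, the sketch below shows how the quantities discussed above might be reduced numerically: a calibration curve built from the CO2 signal of known carbon masses, a TOC value and a TC value read from that curve (the TC run on untreated sample is described in the next paragraphs), and TIC obtained from Equation \ref{eq:TC}. The calibration intensities and sample readings are hypothetical values chosen only for illustration; real analyzers perform this reduction in their own software.

```python
# Minimal sketch of dry-combustion carbon quantification (hypothetical data).

# Calibration: mass of carbon (mg) in dried CaCO3 standards vs. integrated
# CO2 NDIR signal (arbitrary units). Assumed to be linear through the origin.
standards = [(0.5, 102.0), (1.0, 198.0), (2.0, 405.0), (4.0, 797.0)]
slope = sum(s * m for m, s in standards) / sum(m * m for m, _ in standards)

def carbon_mass(signal):
    """Convert an integrated CO2 signal into mg of carbon via the calibration."""
    return signal / slope

sample_mass_mg = 100.0        # mass of sample combusted in each run

# Run 1: acid-pretreated sample -> inorganic carbon removed -> TOC.
toc_mg = carbon_mass(310.0)
# Run 2: untreated sample -> all carbon combusted -> TC.
tc_mg = carbon_mass(520.0)

toc_percent = 100 * toc_mg / sample_mass_mg
tc_percent = 100 * tc_mg / sample_mass_mg
tic_percent = tc_percent - toc_percent      # TC = TIC + TOC

print(f"TOC ≈ {toc_percent:.2f} %, TC ≈ {tc_percent:.2f} %, TIC ≈ {tic_percent:.2f} %")
```

Dividing the carbon mass by the sample mass and multiplying by 100 is exactly the %TOC relation given above, and the final subtraction is the TC = TIC + TOC relation introduced at the start of this module.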
So in theory, if one could quantify the TOC by a method described in the previous section, and follow that with a measurement of the TIC in the pre-treatment acid waste, one could find the TC of a sample by summing the value obtained for TIC and the value obtained for TOC. In practice, however, TC quantification is rarely done this way, partly to avoid the propagation of error associated with combining the other two methods and partly because of cost constraints. In measuring the TC of a sample, the same dry combustion technique is used as in the quantification of TOC, and the same analyzer used to measure TOC can handle a TC measurement. No sample pre-treatment with acid is needed, so it is important to remember that the characteristic CO2 peak now seen is representative of the carbon of the entire sample. Using Equation \ref{eq:TC}, the TIC of the sample can then be found as well: subtraction of the TOC from the TC measured in the analyzer gives the value for TIC. Direct methods to measure the TIC of a sample, in addition to indirect measurement by taking advantage of Equation \ref{eq:TC}, are also possible. Typical TIC measurements are done on water samples, where the alkalinity and hardness of the water is a result of inorganic carbonates, be it bicarbonate or carbonate. Treatment of these samples follows procedures similar to the treatment of samples for organic carbon. A sample of water is acidified, such that the equilibrium in Equation \ref{eq4}, in accordance with Le Chatelier's principle, shifts to favor the release of CO2. The CO2 released can be measured in a variety of different ways.\[\ce{CO2 + H2O <=> H2CO3 <=> HCO3^{-} + H^{+}} \label{eq4}\]As with the combustion technique for measuring TC and TOC, measurement of the intensity of the characteristic IR stretch for CO2, compared to standards, can be used to quantify the TIC in a sample. However, in this case it is the emission of IR radiation that is measured, not absorption. An instrument that can make such a measurement is a FIRE-TIC, where FIRE stands for flame infrared emission; it consists of a purge-like device connected to a FIRE detector. Measurement of carbon content is crucial for many industries. In this module you have seen a variety of ways to measure total carbon (TC), as well as the source of that carbon, whether it be organic in nature (TOC) or inorganic (TIC). This information is extremely important for several industries: from oil exploration, where information on carbon content is needed to evaluate a formation's production viability, to regulatory agencies, where carbon content and its origin are needed to ensure quality control and public safety. TOC, TC, and TIC measurements do have significant limitations. Almost all of the techniques are destructive in nature, meaning that the sample cannot be recovered. Further limitations include the assumptions that have to be made in the measurement. In TOC measurement, for example, it must be assumed that all TIC has been removed in the pre-treatment with acid and that all organic carbon is completely oxidized to CO2. In TIC measurements, it is assumed that all inorganic carbon is liberated from the sample and detected.
Several things can be done to promote these conditions so as to make such assumptions valid. Since all measurements cost money, and TOC, TIC, and TC are all related by Equation \ref{eq:TC}, more often than not only two measurements are made and the third value is found from their relation to one another. This page titled 1.10: Total Carbon Analysis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.11: Fluorescence Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.11%3A_Fluorescence_Spectroscopy
Atomic fluorescence spectroscopy (AFS) is a method that was invented by Winefordner and Vickers in 1964 as a means to analyze the chemical concentration of a sample. The idea is to excite a sample vapor with the appropriate UV radiation, and by measuring the emitted radiation, the amount of the specific element being measured can be quantified. In its most basic form, AFS consists of a UV light source to excite the sample, a monochromator, a detector and a readout device (figure \(\PageIndex{1}\)). Cold vapor atomic fluorescence spectroscopy (CVAFS) uses the same technique as AFS, but the preparation of the sample is adapted specifically to quantify the presence of heavy metals that are volatile, such as mercury, and allows for these elements to be measured at room temperature. The theory behind CVAFS is that as the sample absorbs photons from the radiation source, it will enter an excited state. As the atom falls back into the ground state from its excited vibrational state(s), it will emit a photon, which can then be measured to determine the concentration. In its most basic sense, this process is represented by \ref{1}, where PF is the power given off as photons from the sample, Pabs is the power of the radiation absorbed by the sample, and φ is the proportionality factor accounting for the energy lost through collisions and interactions between the atoms present rather than through photon emission.\[ \text{P}_{F}\ =\ \phi \text{P}_{\text{abs}} \label{1} \]For CVAFS, the sample must be digested, usually with an acid, to break down the compound being tested so that all metal atoms in the sample are accessible to be vaporized. The sample is put into a bubbler, usually with an agent that will convert the element to its gaseous species. An inert carrier gas such as argon is then passed through the bubbler to carry the metal vapors to the fluorescence cell. It is important that the carrier gas is inert, so that the signal will only be absorbed and emitted by the sample in question and not by the carrier gas. Once the sample is loaded into the cell, a collimated (almost parallel) UV light source passes through the sample so that it will fluoresce. A monochromator is often used, either between the light source and the sample, or between the sample and the detector; these two configurations give rise to excitation and emission spectra, respectively. In an excitation spectrum, the emission is monitored at a fixed wavelength while the sample is exposed to a range of wavelengths from the excitation source, whereas in an emission spectrum, the excitation wavelength is held constant and the light emitted from the sample is measured over a range of wavelengths. The fluorescence is detected by a photomultiplier tube or photodiode, which is extremely light sensitive and converts the light into a voltage or current, which can then in turn be interpreted as the amount of the chemical present. Mercury poisoning can damage the nervous system, kidneys, and also fetal development in pregnant women, so it is important to evaluate the levels of mercury present in our environment. Some of the more common sources of mercury are in the air (from industrial manufacturing, mining, and burning coal), the soil (deposits, waste), water (byproduct of bacteria, waste), and in food (especially seafood). Although regulations for food, water and air mercury content differ, the EPA regulation for mercury content in water is the lowest: it cannot exceed 2 ppb (2 µg/L). In 1972, J.
F. Kopp et al. first published a method to detect minute concentrations of mercury in soil, water, and air using gold amalgamation and cold vapor atomic fluorescence spectroscopy. While atomic absorption can also measure mercury concentrations, it is not as sensitive or selective as cold vapor atomic fluorescence spectroscopy (CVAFS). As is common with all forms of atomic fluorescence spectroscopy (AFS) and atomic absorption spectrometry (AAS), the sample must be digested, usually with an acid, to break down the compounds so that all the mercury present can be measured. The sample is put in the bubbler with a reducing agent such as stannous chloride (SnCl2) so that Hg0 is the only state present in the sample. Once the mercury is in its elemental form, the argon enters the bubbler through a gold trap and carries the mercury vapors out of the bubbler to the first gold trap, after first passing through a soda lime (a mixture of Ca(OH)2, NaOH, and KOH) trap where any remaining acid or water vapors are caught. After all the mercury from the sample has been absorbed by the first gold trap, the trap is heated to 450 °C, which causes the mercury absorbed onto the gold trap to be carried by the argon gas to the second gold trap. Once the mercury from the sample has been absorbed by the second trap, it is heated to 450 °C, releasing the mercury to be carried by the argon gas into the fluorescence cell, where light at a wavelength of 253.7 nm is used for mercury samples. The detection limit for mercury using gold amalgamation and CVAFS is around 0.05 ng/L, but the detection limit will vary with the equipment being used, as well as with human error. A standard solution of mercury should be made, and from this, dilutions are used to make at least five different standard solutions. Depending on the detection limit and what is being analyzed, the concentrations in the standard solutions will vary. Note that the other chemicals the standard solutions contain will depend upon how the sample is digested. A 1.00 µg/mL Hg (1 ppm) working solution is made, and by dilution, five standards are made from the working solution, at 5.0, 10.0, 25.0, 50.0, and 100.0 ng/L (ppt). If these five standards give peak heights of 10 units, 23 units, 52 units, 110 units, and 207 units, respectively, then \ref{2} is used to calculate the calibration factor, where CFx is the calibration factor, Ax is the area of the peak or peak height, and Cx is the concentration in ng/L of the standard, \ref{3}.\[ \text{CF}_{x}\ =\ \text{A}_{X}/\text{C}_{X} \label{2} \]\[ 10\text{ units}/5.0\ \text{ng}/\text{L}\ =\ 2.00\text{ units L/ng} \label{3} \]The calibration factors for the other four standards are calculated in the same fashion: 2.30, 2.08, 2.20, and 2.07, respectively. The average of the five calibration factors is then taken, \ref{4}.\[ \text{CF}_{m}\ =\ (2.00\ +\ 2.30\ +\ 2.08\ +\ 2.20\ +\ 2.07)/5\ =\ 2.13\text{ units L/ng} \label{4} \]Now, to calculate the concentration of mercury in the sample, \ref{5} is used, where As is the area of the peak of the sample, CFm is the mean calibration factor, Vstd is the volume of the standard solution minus the reagents added, and Vsmp is the volume of the initial sample (total volume minus volume of reagents added).
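The calibration-factor arithmetic of \ref{2} and \ref{4}, and the concentration formula of \ref{5}, can be summarized in a short sketch using the five standards above; the function at the end can be used to reproduce the worked example that follows.

```python
# Sketch of the CVAFS calibration-factor arithmetic, using the five
# standards quoted in the text. Only the sample values in the worked
# example below are needed to reproduce its result.

standards_ng_per_L = [5.0, 10.0, 25.0, 50.0, 100.0]   # standard concentrations (ng/L)
peak_heights      = [10., 23., 52., 110., 207.]       # measured peak heights (units)

# CFx = Ax / Cx for each standard, then take the mean.
cfs = [a / c for a, c in zip(peak_heights, standards_ng_per_L)]
cf_mean = sum(cfs) / len(cfs)
print([round(cf, 2) for cf in cfs], round(cf_mean, 2))   # [2.0, 2.3, 2.08, 2.2, 2.07] 2.13

def hg_ng_per_L(a_sample, cf_mean, v_std_L, v_smp_L):
    """[Hg] = (As / CFm) * (Vstd / Vsmp), as in the concentration formula above."""
    return (a_sample / cf_mean) * (v_std_L / v_smp_L)
```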
If As is measured at 49 units, Vstd = 0.47 L, and Vsmp = 0.26 L, then the concentration can be calculated, \ref{6}.\[ [\text{Hg}]\ (\text{ng/L})\ =\ (\text{A}_{s}/\text{CF}_{m})\cdot (\text{V}_{std}/V_{smp}) \label{5} \]\[ (49\ \text{units}/2.13\ \text{units L/ng})\cdot (0.47\ \text{L}/0.26\ \text{L})\ =\ 41.6\ \text{ng/L of Hg present} \label{6} \]Contamination during sample collection is one of the biggest sources of error: if the sample is not properly collected or hands/gloves are not clean, this can alter the measured concentration. The glassware and equipment must also be kept clean of any sources of contamination. Furthermore, sample vials that are used to store mercury-containing samples should be made of borosilicate glass or fluoropolymer, because mercury can adsorb onto or leach from other materials, which could cause an inaccurate concentration reading. Mercury pollution has become a global problem and seriously endangers human health. Inorganic mercury can be easily released into the environment through a variety of anthropogenic sources, such as coal mining, solid waste incineration, fossil fuel combustion, and chemical manufacturing. It can also be released through nonanthropogenic sources in the form of forest fires, volcanic emissions, and oceanic emission. Mercury is easily transported into the atmosphere in the form of mercury vapor. The atmospheric deposition of mercury ions leads to accumulation on plants, in topsoil, in water, and in underwater sediments. Some prokaryotes living in the sediments can convert inorganic mercury into methylmercury, which can enter the food chain and finally be ingested by humans. Mercury seriously endangers people's health. One example is that many people died due to exposure to methylmercury through seafood consumption in Minamata, Japan. Exposure to organic mercury causes a series of neurological problems, such as prenatal brain damage, cognitive and motion disorders, vision and hearing loss, and even death. Moreover, inorganic mercury also targets the renal epithelial cells of the kidney, which results in tubular necrosis and proteinuria. The crisis of mercury in the environment and in biological systems compels people to carry out related work to confront the challenge. Designing and implementing new mercury detection tools will ultimately aid these endeavors. Therefore, this section mainly introduces fluorescence molecular sensors, which are becoming more and more important in mercury detection due to their ease of use, low cost, and high efficiency. A fluorescence molecular sensor, one type of fluorescence molecular probe, can give a fast, reversible response in the recognition process. Four factors (selectivity, sensitivity, in-situ detection, and real-time response) are generally used to evaluate the performance of a sensor. Here, four fundamental principles for the design of fluorescence molecular sensors are introduced. Photoinduced electron transfer (PET) is the most popular principle in the design of fluorescence molecular sensors. The characteristic structure of PET sensors includes three parts as shown in : In the PET sensors, photoinduced electron transfer makes possible the transfer of recognition information into a fluorescence signal between receptor and fluorophore. shows the detailed process of how PET works in a fluorescence molecular sensor. The receptor can provide an electron to the vacated orbital of the excited fluorophore.
The excited electron in the fluorophore then cannot return to its original orbital, resulting in quenching of the fluorescence emission. Coordination of the receptor with a guest decreases the electron-donating ability of the receptor, which reduces or even disrupts the PET process, leading to an enhancement of the fluorescence emission intensity. Therefore, such sensors have weak or no fluorescence emission before coordination, but the intensity of the fluorescence emission increases rapidly after coordination of the receptor and guest. Intramolecular charge transfer (ICT) is also called photoinduced charge transfer. The characteristic structure of ICT sensors includes only the fluorophore and recognition group, with no spacer; the recognition group binds directly to the fluorophore. The electron-withdrawing or electron-donating substituents on the recognition group play an important role in the recognition. When recognition happens, the coordination between the recognition group and the guest affects the electron density in the fluorophore, resulting in a change of the fluorescence emission in the form of a blue shift or red shift. When two fluorophores are at the proper distance, an intermolecular excimer can be formed between the excited state and the ground state. The fluorescence emission of the excimer is different from that of the monomer and mainly takes the form of a new, broad, strong, long-wavelength emission without fine structure. Because the proper distance determines the formation of the excimer, modulation of the distance between the two fluorophores becomes crucial in the design of sensors based on this mechanism. Fluorophores with long singlet-state lifetimes form excimers easily and are often used in such sensors. FRET is a popular principle in the design of fluorescence molecular sensors. In such a system there are two different fluorophores, one of which acts as a donor of excited-state energy to the other, which acts as the acceptor. As shown in , the acceptor accepts the energy from the excited state of the donor and gives fluorescence emission, while the donor returns to the electronic ground state. There are three factors affecting the performance of FRET: the distance between the donor and the acceptor, the relative orientation between the donor emission dipole moment and the acceptor absorption dipole moment, and the extent of spectral overlap between the donor emission and acceptor absorption spectra ). Fluorescence is a process involving the emission of light from a substance in an excited state. Generally speaking, fluorescence is the emission of electromagnetic radiation (light) by a substance that has absorbed radiation of a different wavelength. Absorption and emission are illustrated in the Jablonski diagram ): a fluorophore is excited from the ground state to a higher electronic and vibrational state. The excited molecule can relax to a lower vibrational state through vibrational relaxation and then further return to the ground state in the form of fluorescence emission. Most spectrofluorometers can record both excitation and emission spectra. They mainly consist of four parts: light sources, monochromators, optical filters and a detector ). Light sources that can emit light over the ultraviolet and visible range provide the excitation energy.
There are different light sources, including arc and incandescent xenon lamps, high-pressure mercury (Hg) lamps, Xe-Hg arc lamps, low-pressure Hg and Hg-Ar lamps, pulsed xenon lamps, quartz-tungsten halogen (QTH) lamps, LED light sources, etc. The proper light source is chosen based on the application. Prisms and diffraction gratings are the two main types of monochromators, which provide the experimentally required chromatic light with a wavelength bandwidth of about 10 nm. Typically, monochromators are evaluated based on dispersion, efficiency, stray light level and resolution. Optical filters are used in addition to monochromators in order to further purify the light. There are two kinds of optical filters. The first is the colored filter, which is the most traditional filter and is divided into two categories: monochromatic filters and long-pass filters. The other is the thin-film filter, which supplements the colored filter in many applications and is gradually replacing it. An InGaAs array is the standard detector used in many spectrofluorometers. It can provide rapid and robust spectral characterization in the near-IR. As a PET sensor, 2-{5-[(2-{[bis-(2-ethylsulfanyl-ethyl)-amino]-methyl}-phenylamino)-methyl]-2-chloro-6-hydroxy-3-oxo-3H-xanthen-9-yl}-benzoic acid (MS1) ) shows good selectivity for mercury ions in buffer solution (pH = 7, 50 mM PIPES, 100 mM KCl). From , it is clear that, upon increasing the concentration of Hg2+ ions, the coordination between the sensor and Hg2+ ions disrupted the PET process, leading to an increase in the intensity of the fluorescence emission with a slight red shift to 528 nm. Sensor MS1 also showed good selectivity for Hg2+ ions over other cations of interest, as shown in the right bars in ; moreover, it had good resistance to interference from other cations when detecting Hg2+ ions in a mixed solution, with the exception of Cu2+ ions, as shown in the dark bars in . 2,2',2'',2'''-(3-(benzo[d]thiazol-2-yl)-2-oxo-2-H-chromene-6,7-diyl) bis(azanetriyl)tetrakis(N-(2-hydroxyethyl)acetamide) (RMS) ) has been shown to be an ICT fluorescence sensor. From , it is clear that, with a gradual increase in the concentration of Hg2+ ions, the fluorescence emission spectra revealed a significant blue shift, an emission band shift of about 100 nm from 567 to 475 nm in the presence of 40 equiv of Hg2+ ions. The fluorescence change came from the coexistence of two electron-rich aniline nitrogen atoms in the electron-donating receptor moiety, which prevented Hg2+ ions from being ejected from them simultaneously in the excited ICT fluorophore. Sensor RMS also showed good selectivity over other cations of interest. As shown in , it is easy to find that only Hg2+ ions modulate the fluorescence of RMS in a neutral buffered water solution. The (NE,N'E)-2,2'-(ethane-1,2-diyl-bis(oxy))bis(N-(pyren-4-ylmethylene)aniline) (BA) is an excimer fluorescence sensor. As shown in , when BA was present without mercury ions in a mixture of HEPES-CH3CN (80:20, v/v, pH 7.2), it showed only weak monomer fluorescence emission. Upon increasing the concentration of mercury ions in the solution of BA, a strong excimer fluorescence emission at 462 nm appeared and grew with the concentration of mercury ions. From , it is clear that BA showed good selectivity for mercury ions.
Moreover, it had good resistance to interference when detecting mercury ions in a mixed solution. The calixarene derivative bearing two pyrene and rhodamine fluorophores (CPR) ) is a characteristic FRET fluorescence sensor. A fluorescence titration experiment of CPR (10.0 μM) with Hg2+ ions was carried out in CHCl3/CH3CN (50:50, v/v) with excitation at 343 nm. As shown in , upon gradually increasing the concentration of Hg2+ ions in the solution of CPR, an increased fluorescence emission of the ring-opened rhodamine at 576 nm was observed with a concomitantly declining excimer emission of pyrene at 470 nm. Moreover, an isosbestic point centered at 550 nm appeared. This change in the fluorescence emission demonstrated that energy was transferred from the pyrene excimer to the rhodamine, triggered by the binding of Hg2+ ions. showed that CPR had good resistance to other cations of interest when detecting Hg2+ ions, though Pb2+ ions caused slight interference in this process. This page titled 1.11: Fluorescence Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.12: An Introduction to Energy Dispersive X-ray Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.12%3A_An_Introduction_to_Energy_Dispersive_X-ray_Spectroscopy
Energy-dispersive X-ray spectroscopy (EDX or EDS) is an analytical technique used to probe the composition of solid materials. Several variants exist, but they all rely on exciting electrons near the nucleus, causing more distant electrons to drop down in energy to fill the resulting "holes." Each element emits a different set of X-ray frequencies as its vacated lower energy states are refilled, so measuring these emissions can provide both qualitative and quantitative information about the near-surface makeup of the sample. However, accurate interpretation of this data is dependent on the presence of high-quality standards, and technical limitations can compromise the resolution. In the quantum mechanical model of the atom, an electron's energy state is defined by a set of quantum numbers. The principal quantum number, n, provides the coarsest description of the electron's energy level, and all the sublevels that share the same principal quantum number are sometimes said to comprise an energy "shell." Instead of describing the lowest-energy shell as the "n = 1 shell," it is more common in spectroscopy to use alphabetical labels: the K shell has n = 1, the L shell has n = 2, the M shell has n = 3, and so on. Subsequent quantum numbers divide the shells into subshells: one for K, three for L, and five for M. Increasing principal quantum numbers correspond to increasing average distance from the nucleus and increasing energy ). An atom's core shells are those with lower principal quantum numbers than the highest occupied shell, or valence shell. Transitions between energy levels follow the law of conservation of energy. Excitation of an electron to a higher energy state requires an input of energy from the surroundings, and relaxation to a lower energy state releases energy to the surroundings. One of the most common and useful ways energy can be transferred into and out of an atom is by electromagnetic radiation. Core shell transitions correspond to radiation in the X-ray portion of the spectrum; however, because the core shells are normally full by definition, these transitions are not usually observed. X-ray spectroscopy uses a beam of electrons or high-energy radiation (see instrument variations, below) to excite core electrons to high energy states, creating a low-energy vacancy in the atoms' electronic structures. This leads to a cascade of electrons from higher energy levels until the atom regains a minimum-energy state. Due to conservation of energy, the electrons emit X-rays as they transition to lower energy states; it is these X-rays that are measured in X-ray spectroscopy. The energy transitions are named using the letter of the shell where ionization first occurred, a Greek letter denoting the group of lines that the transition belongs to, in order of decreasing importance, and a numeric subscript ranking the peak's intensity within that group. Thus, the most intense peak resulting from ionization in the K shell would be Kα1 ). Since each element has a different nuclear charge, the energies of the core shells and, more importantly, the spacing between them vary from one element to the next. While not every peak in an element's spectrum is exclusive to that element, there are enough characteristic peaks to be able to determine the composition of the sample, given sufficient resolving power. There are two common methods for exciting the core electrons off the surface atoms. The first is to use a high-energy electron beam like the one in a scanning electron microscope (SEM).
The beam is produced by an electron gun, in which electrons emitted thermionically from a hot cathode are guided down the column by an electric field and focused by a series of negatively charged "lenses." X-rays emitted by the sample strike a lithium-drifted silicon p-i-n junction plate. This promotes electrons in the plate into the conduction band, inducing a voltage proportional to the energy of the impacting X-ray, which generally falls between about 1 and 10 keV. The detector is cooled to liquid nitrogen temperatures to reduce electronic noise from thermal excitations. It is also possible to use X-rays to excite the core electrons to the point of ionization. In this variation, known as energy-dispersive X-ray fluorescence analysis (EDXRFA or XRF), the electron column is replaced by an X-ray tube and the X-rays emitted by the sample in response to the bombardment are called secondary X-rays, but these variants are otherwise identical. Regardless of the excitation method, subsequent interactions between the emitted X-rays and the sample can lead to poor resolution in the X-ray spectrum, producing a Gaussian-like curve instead of a sharp peak. Indeed, this spreading of energy within the sample, combined with the penetration of the electron or X-ray beam, leads to the analysis of a roughly 1 µm3 volume instead of only the surface features. Peak broadening can lead to overlapping peaks and a generally misleading spectrum. In cases where a normal EDS spectrum is inadequately resolved, a technique called wavelength-dispersive X-ray spectroscopy (WDS) can be used. The required instrument is very similar to the ones discussed above, and can use either excitation method. The major difference is that instead of having the X-rays emitted by the sample hit the detector directly, they first encounter an analytical crystal of known lattice dimensions. Bragg's law predicts that the strongest reflections off the crystal will occur for wavelengths such that the path difference between rays reflecting from consecutive layers in the lattice is equal to an integral number of wavelengths. This is represented mathematically as \ref{1}, where n is an integer, λ is the wavelength of the impinging light, d is the distance between layers in the lattice, and θ is the angle of incidence. The relevant variables for the equation are labeled in .\[ n\lambda \ =\ 2d\ sin\ \theta \label{1} \]By moving the crystal and the detector around the Rowland circle, the spectrometer can be tuned to examine specific wavelengths (\ref{1}); a short numerical sketch of this tuning appears below. Generally, an initial scan across all wavelengths is taken first, and then the instrument is programmed to more closely examine the wavelengths that produced strong peaks. The resolution available with WDS is about an order of magnitude better than with EDS because the analytical crystal helps filter out the noise of subsequent, non-characteristic interactions. For clarity, "X-ray spectroscopy" will be used to refer to all of the technical variants just discussed, and points made about EDS will hold true for XRF unless otherwise noted. Compared with some analytical techniques, the sample preparation required for X-ray spectroscopy or any of the related methods just discussed is trivial. The sample must be stable under vacuum, since the sample chamber is evacuated to prevent the atmosphere from interfering with the electron beam or X-rays.
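As a brief numerical illustration of the WDS tuning just described, the sketch below solves the Bragg condition \ref{1} for the first-order diffraction angle. The X-ray line energy and the crystal spacing (a 2d value typical of a pentaerythritol analytical crystal) are assumed, illustrative numbers, not taken from the text.

```python
# Solve n*lambda = 2*d*sin(theta) for the angle at which a WDS spectrometer
# would be set to pass a chosen emission line. All numerical values are
# illustrative assumptions.
import math

def bragg_angle_deg(wavelength_nm, d_nm, n=1):
    """First-order (n=1 by default) Bragg angle, in degrees, for a given crystal."""
    s = n * wavelength_nm / (2.0 * d_nm)
    if s > 1.0:
        raise ValueError("This wavelength cannot be diffracted by this crystal.")
    return math.degrees(math.asin(s))

def kev_to_nm(energy_kev):
    """Convert X-ray energy (keV) to wavelength (nm): lambda ~ 1.2398 / E."""
    return 1.2398 / energy_kev

# Example: a ~2.3 keV emission line analyzed with a crystal of 2d = 0.8742 nm.
wavelength = kev_to_nm(2.3)
print(round(bragg_angle_deg(wavelength, d_nm=0.8742 / 2), 1))   # ~38.1 degrees
```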
Returning to sample preparation, it is also advisable to have the surface as clean as possible; X-ray spectroscopy is a near-surface technique, so it should analyze the desired material for the most part regardless, but any grime on the surface will throw off the composition calculations. Simple qualitative readings can be obtained from a solid of any thickness, as long as it fits in the machine, but for reliable quantitative measurements, the sample should be shaved as thin as possible. Qualitative analysis, the determination of which elements are present in the sample but not necessarily the stoichiometry, relies on empirical standards. The energies of the commonly used core shell transitions have been tabulated for all the natural elements. Since combinations of elements can act differently than a single element alone, standards with compositions as similar as possible to the suspected makeup of the sample are also employed. To determine the sample's composition, the peaks in the spectrum are matched with peaks from the literature or standards. Quantitative analysis, the determination of the sample's stoichiometry, requires resolution good enough that the ratio of the number of counts at each characteristic frequency reflects the ratio of those elements in the sample. It takes about 40,000 counts for the spectrum to attain a 2σ precision of ±1%. It is important to note, however, that this is not necessarily the same as the empirical formula, since not all elements are visible. Spectrometers with a beryllium window between the sample and the detector typically cannot detect anything lighter than sodium. Spectrometers equipped with polymer-based windows can quantify elements heavier than beryllium. Either way, hydrogen cannot be observed by X-ray spectroscopy. X-ray spectra are presented with energy in keV on the x-axis and the number of counts on the y-axis. The EDX spectra of biotite and NIST glass K309 are shown as examples and respectively). Biotite is a mineral similar to mica which has the approximate chemical formula K(Mg,Fe)3AlSi3O10(F,OH)2. Strong peaks for magnesium, aluminum, silicon, potassium, and iron can be seen in the spectrum. The lack of visible hydrogen is expected, and the absence of oxygen and fluorine peaks suggests the instrument had a beryllium window. The titanium peak is small and unexpected, so it may only be present in trace amounts. K309 is a glass mixture developed by the National Institute of Standards and Technology. The spectrum shows that it contains significant amounts of silicon, aluminum, calcium, oxygen, iron, and barium. The large peak at the far left is the carbon signal from the carbon substrate the glass was placed on. As has just been discussed, X-ray spectroscopy is incapable of seeing elements lighter than boron. This is a problem given the abundance of hydrogen in natural and man-made materials. The related techniques X-ray photoelectron spectroscopy (XPS) and Auger spectroscopy are able to detect Li and Be, but are likewise unable to measure hydrogen. X-ray spectroscopy relies heavily on standards for peak identification. Because a combination of elements can have noticeably different properties from the individual constituent elements in terms of X-ray fluorescence or absorption, it is important to use a standard as compositionally similar to the sample as possible.
Naturally, this is more difficult to accomplish when examining new materials, and there is always a risk of the structure of the sample being appreciably different than expected.The energy-dispersive variants of X-ray spectroscopy sometimes have a hard time distinguishing between emissions that are very near each other in energy or distinguishing peaks from trace elements from background noise. Fortunately, the wavelength-dispersive variants are much better at both of these. The rough, stepwise curve in represents the EDS spectrum of molybdenite, a mineral with the chemical formula MoS2. Broadened peaks make it difficult to distinguish the molybdenum signals from the sulfur ones. Because WDS can select specific wavelengths, it has much better resolution and can pinpoint the separate peaks more accurately. Similarly, the trace silicon signal in the EDS spectrum of the nickel-aluminum-manganese alloy in a is barely distinguishable as a bump in the baseline, but the WDS spectrum in b clearly picks it up.This page titled 1.12: An Introduction to Energy Dispersive X-ray Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.13: X-ray Photoelectron Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.13%3A_X-ray_Photoelectron_Spectroscopy
X-Ray photoelectron spectroscopy (XPS), also known as electron spectroscopy for chemical analysis (ESCA), is one of the most widely used surface techniques in materials science and chemistry. It allows the determination of the atomic composition of a sample in a non-destructive manner, as well as other chemical information, such as binding energies, oxidation states and speciation. The sample under study is subjected to irradiation by a high energy X-ray source. Although the X-rays penetrate relatively deeply, the photoelectrons that escape and are detected originate from only the top 5 – 20 Å of the sample, allowing for surface specific, rather than bulk chemical, analysis. As an atom absorbs the X-rays, the energy of the X-ray will cause a K-shell electron to be ejected, as illustrated by . The K-shell is the lowest energy shell of the atom. The ejected electron has a kinetic energy (KE) that is related to the energy of the incident beam (hν), the electron binding energy (BE), and the work function of the spectrometer (φ) (\ref{1}). Thus, the binding energy of the electron can be calculated.\[ BE\ =\ h\nu \ -\ KE\ -\ \phi _{s} \label{1} \]Table \(\PageIndex{1}\) shows the binding energy of the ejected electron, and the orbital from which the electron is ejected, which is characteristic of each element. The number of electrons detected with a specific binding energy is proportional to the number of corresponding atoms in the sample. This then provides the percent of each atom in the sample. The chemical environment and oxidation state of the atom can be determined through the shifts of the peaks within the expected range (Table \(\PageIndex{2}\)). If the core electrons are well shielded, it is easier, i.e., it requires less energy, to remove them from the atom, so the binding energy is low and the corresponding peaks shift to lower energy within the expected range. If the core electrons are less shielded, such as when the atom is in a high oxidation state, then just the opposite occurs. Similar effects occur with electronegative or electropositive elements in the chemical environment of the atom in question. By synthesizing compounds with known structures, reference patterns can be built up using XPS, and the structures of unknown compounds can then be determined. Sample preparation is important for XPS. Although the technique was originally developed for use with thin, flat films, XPS can be used with powders. In order to use XPS with powders, a different method of sample preparation is required. One of the more common methods is to press the powder into a high purity indium foil. A different approach is to dissolve the powder in a quickly evaporating solvent, if possible, which can then be drop-cast onto a substrate. Using sticky carbon tape to adhere the powder to a disc or pressing the sample into a tablet are options as well. Each of these sample preparations is designed to make the powder compact, as powder not attached to the substrate will contaminate the vacuum chamber. The sample also needs to be completely dry. If it is not, solvent present in the sample can destroy the necessary high vacuum and contaminate the machine, affecting the data of the current and future samples. When analyzing a sample a) by XPS, questions often arise that deal with layers of the sample. For example, is the sample homogeneous, with a consistent composition throughout, or layered, with certain elements or components residing in specific places in the sample? b,c). A simple way to determine the answer to this question is to perform a depth analysis.
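Before turning to depth profiling, the binding-energy relation in \ref{1} can be illustrated with a minimal sketch. The photon energy (an Al Kα source) and the spectrometer work function used here are assumed, typical values rather than parameters given in the text.

```python
# Minimal sketch of BE = h*nu - KE - phi from Equation (1). The photon
# energy and work function below are assumed illustrative values.

H_NU_AL_KALPHA_EV = 1486.6   # Al K-alpha photon energy, eV
WORK_FUNCTION_EV  = 4.5      # assumed spectrometer work function, eV

def binding_energy(kinetic_energy_ev,
                   photon_energy_ev=H_NU_AL_KALPHA_EV,
                   work_function_ev=WORK_FUNCTION_EV):
    """Return the electron binding energy in eV."""
    return photon_energy_ev - kinetic_energy_ev - work_function_ev

# A photoelectron detected at ~1397.3 eV kinetic energy corresponds to a
# binding energy near 84.8 eV, i.e., in the region of the Au 4f peaks.
print(round(binding_energy(1397.3), 1))
```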
By sputtering away the sample, data can be collected at different depths within the sample. It should be noted that sputtering is a destructive process. Within the XPS instrument, the sample is subjected to an Ar+ ion beam that etches the surface. This creates a hole in the surface, allowing the X-rays to hit layers that would not have otherwise been analyzed. However, it should be realized that different surfaces and layers may be etched at different rates, meaning the same amount of etching does not occur during the same amount of time, depending on the element or compound currently being sputtered.It is important to note that hydrocarbons sputter very easily and can contaminate the high vacuum of the XPS instrument and thus later samples. They can also migrate to a recently sputtered (and hence unfunctionalized) surface after a short amount of time, so it is imperative to sputter and take a measurement quickly, otherwise the sputtering may appear to have had no effect.When running XPS, it is important that the sample is prepared correctly. If it is not, there is a high chance of ruining not only data acquisition, but the instrument as well. With organic functionalization, it is very important to ensure the surface functional group (or as is the case with many functionalized nanoparticles, the surfactant) is immobile on the surface of the substrate. If it is removed easily in the vacuum chamber, it not only will give erroneous data, but it will contaminate the machine, which may then contaminate future samples. This is particularly important when studying thiol functionalization of gold samples, as thiol groups bond strongly with the gold. If there is any loose thiol group contaminating the machine, the thiol will attach itself to any gold sample subsequently placed in the instrument, providing erroneous data. Fortunately, with the above exception, preparing samples that have been functionalized is not much different than standard preparation procedures. However, methods for analysis may have to be modified in order to obtain good, consistent data.A common method for the analysis of surface modified material is angle resolved X-ray photoelectron spectroscopy (ARXPS). ARXPS is a non-destructive alternative to sputtering, as it relies upon using a series of small angles to analyze the top layer of the sample, giving a better picture of the surface than standard XPS. ARXPS allows for the analysis of the topmost layer of atoms to be analyzed, as opposed to standard XPS, which will analyze a few layers of atoms into the sample, as illustrated in . ARXPS is often used to analyze surface contaminations, such as oxidation, and surface modification or passivation. Though the methodology and limitations are beyond the scope of this module, it is important to remember that, like normal XPS, ARXPS assumes homogeneous layers are present in samples, which can give erroneous data, should the layers be heterogeneous.There are many limitations to XPS that are not based on the samples or preparation, but on the machine itself. One such limitation is that XPS cannot detect hydrogen or helium. This, of course, leads to a ratio of elements in the sample that is not entirely accurate, as there is always some amount of hydrogen. 
It is a common fallacy to assume that the atomic percentages obtained from XPS data are completely accurate, given this presence of undetected hydrogen (Table \(\PageIndex{1}\)). It is possible to indirectly measure the amount of hydrogen in a sample using XPS, but it is not very accurate and has to be done in a roundabout, often time consuming manner. If the sample contains hydrogen with a partial positive charge (i.e. OH), the sample can be washed in sodium naphthalenide (C10H8Na). This replaces that hydrogen with sodium, which can then be measured. The sodium to oxygen ratio that is obtained indicates the hydrogen to oxygen ratio, assuming that all the hydrogen atoms have reacted. XPS can only give an average measurement, as the electrons lower down in the sample lose more energy as they pass other atoms while the electrons at the surface retain their original kinetic energy. The electrons from lower layers can also undergo inelastic or elastic scattering, seen in . This scattering may have a significant impact on data at higher angles of emission. The beam itself is also relatively wide, with the smallest width ranging from 10 – 200 μm, so the observed composition is an average over the beam area. Due to this, XPS cannot differentiate sections of elements if the sections are smaller than the size of the beam. Sample reaction or degradation are important considerations. Caution should be exercised when analyzing polymers, as they are often chemically active and X-rays will provide energy that starts degrading the polymer, altering the properties of the sample. One method found to help overcome this particular limitation is to use angle-resolved X-ray photoelectron spectroscopy (ARXPS). XPS can also often reduce certain metal salts, such as Cu2+. This reduction will give peaks that suggest a particular set of properties or chemical environments when the original sample could be completely different. It also needs to be understood that charges can build up on the surface of the sample for a number of reasons, specifically due to the loss of electrons during the XPS experiment. The charge on the surface will interact with the electrons escaping from the sample, affecting the data obtained. If the accumulated charge is positive, the electrons that have been knocked off will be attracted to the charge, slowing the electrons. The detector will then record a lower kinetic energy for the electrons, and thus calculate a different binding energy than the one expected, giving peaks which could be labeled with an incorrect oxidation state or chemical environment. To overcome this, the spectra must be charge referenced by one of the following methods: using the naturally occurring graphite peak as a reference, sputtering with gold and using the gold peak as a reference, or flooding the sample with the ion gun and waiting until the desired peak stops shifting. While it is known that sputtering is destructive, there are a few other limitations that are not often considered. As mentioned above, the beam of X-rays is relatively large, giving an average composition in the analysis. Sputtering has the same limitation. If the surfactant or layers are not homogeneous, then when the sputtering is finished and detection begins, the analysis will show a homogeneous section, due to the size of both the beam and the sputtered area, while the sample actually consists of separate sections of elements. The chemistry of the compounds can also be changed by sputtering, as it removes atoms that were bonded, changing the oxidation state of a metal or the hybridization of a non-metal.
It can also introduce charges if the sample is non-conducting or supported on a non-conducting surface.X-ray photoelectron spectroscopy (XPS) is a surface technique developed for use with thin films. More recently, however, it has been used to analyze the chemical and elemental composition of nanoparticles. The complication of nanoparticles is that they are neither flat nor larger than the diameter of the beam, creating issues when using the data obtained at face value. Samples of nanoparticles will often be large aggregates of particles. This creates problems with the analysis acquisition, as there can be a variety of cross-sections, as seen in . This acquisition problem is also compounded by the fact that the surfactant may not be completely covering the particle, as the curvature of the particle creates defects and divots. Even if it is possible to create a monolayer of particles on a support, other issues are still present. The background support will be analyzed with the particle, due to their small size and the size of the beam and the depth at which it can penetrate.Many other factors can introduce changes in nanoparticles and their properties. There can be probe, environmental, proximity, and sample preparation effects. The dynamics of particles can wildly vary depending on the reactivity of the particle itself. Sputtering can also be a problem. The beam used to sputter will be roughly the same size or larger than the particles. This means that what appears in the data is not a section of particle, but an average composition of several particles.Each of these issues needs to be taken into account and preventative measures need to be used so the data is the best representation possible.Sample preparation of nanoparticles is very important when using XPS. Certain particles, such as iron oxides without surfactants, will interact readily with oxygen in the air. This causes the particles to gain a layer of oxygen contamination. When the particles are then analyzed, oxygen appears where it should not and the oxidation state of the metal may be changed. As shown by these particles, which call for handling, mounting and analysis without exposure to air, knowing the reactivity of the nanoparticles in the sample is very important even before starting analysis. If the reactivity of the nanoparticle is known, such as the reactivity of oxygen and iron, then preventative steps can be taken in sample preparation in order to obtain the best analysis possible.When preparing a sample for XPS, a powder form is often used. This preparation, however, will lead to aggregation of nanoparticles. If analysis is performed on such a sample, the data obtained will be an average of composition of each nanoparticle. If composition of a single particle is what is desired, then this average composition will not be sufficient. Fortunately, there are other methods of sample preparation. Samples can be supported on a substrate, which will allow for analysis of single particles. A pictorial representation in shows the different types of samples that can occur with nanoparticles.Nanoparticles are dynamic; their properties can change when exposed to new chemical environments, leading to a new set of applications. It is the dynamics of nanoparticles that makes them so useful and is one of the reasons why scientists strive to understand their properties. However, it is this dynamic ability that makes analysis difficult to do properly. 
Nanoparticles are easily damaged and can change properties over time or with exposure to air, light or any other environment, chemical or otherwise. Surface analysis is often difficult because of the high rate of contamination. Once the particles are inserted into XPS, even more limitations appear. There are often artifacts introduced by the simple mechanism of conducting the analysis. When XPS is used to analyze the relatively large surface of a thin film, there is a small change in temperature as energy is transferred. Thin films, however, are large enough that this small change in energy does not significantly change their properties. A nanoparticle is much smaller, and even a small amount of energy can drastically change the shape of the particles, in turn changing their properties and giving a much different set of data than expected. The electron beam itself can also affect how the particles are supported on a substrate. Theoretically, nanoparticles would be considered separate from each other and from any other chemical environments, such as solvents or substrates. This, however, is not possible, as the particles must be suspended in a solution or placed on a substrate when attempting analysis. The chemical environment around the particle will have some amount of interaction with the particle. This interaction will change characteristics of the nanoparticles, such as oxidation states or partial charges, which will then shift the peaks observed. If particles can be separated and suspended on a substrate, the supporting material will also be analyzed, due to the fact that the X-ray beam is larger than the size of each individual particle. If the substrate is made of porous materials, it can adsorb gases and those will be detected along with the substrate and the particle, giving erroneous data. Nanoparticles will often react, or at least interact, with their environments. If the particles are highly reactive, there will often be induced charges in the near environment of the particle. Gold nanoparticles have a well-documented ability to undergo plasmon interactions with each other. When XPS is performed on these particles, the charges will change the kinetic energy of the electrons, shifting the apparent binding energy. When working with nanoparticles that are well known for creating charges, it is often best to use an ion gun or a coating of gold. The purpose of the ion gun or gold coating is to try to move the peaks back to their appropriate energies. If the peaks do not move, then the chance of there being no induced charge is high and thus the obtained data is fairly reliable. The proximity of the particles to each other will cause interactions between the particles. If there is a charge accumulation near one particle, and that particle is in close proximity with other particles, the charge will become enhanced as it spreads, affecting the signal strength and the binding energies of the electrons. While knowledge of charge enhancement could be useful for potential applications, it is not beneficial if knowledge of the various properties of individual particles is sought. Less isolated (i.e., more crowded) particles will have different properties as compared to more isolated particles. A good example of this is the plasmon effect in gold nanoparticles. The closer gold nanoparticles are to each other, the more likely they are to induce the plasmon effect. This can change the properties of the particles, such as oxidation states and partial charges. These changes will then shift the peaks seen in XPS spectra.
These proximity effects are often introduced in the sample preparation. This, of course, shows why it is important to prepare samples correctly to get desired results.Unfortunately there is no good general procedure for all nanoparticles samples. There are too many variables within each sample to create a basic procedure. A scientist wanting to use XPS to analyze nanoparticles must first understand the drawbacks and limitations of using their sample as well as how to counteract the artifacts that will be introduced in order to properly use XPS.One must never make the assumption that nanoparticles are flat. This assumption will only lead to a misrepresentation of the particles. Once the curvature and stacking of the particles, as well as their interactions with each other are taken into account, XPS can be run.This page titled 1.13: X-ray Photoelectron Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.14: Auger Electron Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.14%3A_Auger_Electron_Spectroscopy
Auger electron spectroscopy (AES) is one of the most commonly employed surface analysis techniques. It uses the energy of emitted electrons to identify the elements present in a sample, similar to X-ray photoelectron spectroscopy (XPS). The main difference is that XPS uses an X-ray beam to eject an electron while AES uses an electron beam to eject an electron. In AES, the sample depth is dependent on the escape energy of the electrons. It is not a function of the excitation source as in XPS. In AES, the collection depth is limited to 1-5 nm due to the small escape depth of electrons, which permits analysis of the first 2 - 10 atomic layers. In addition, a typical analysis spot size is roughly 10 nm. A representative AES spectrum illustrating the number of emitted electrons, N, as a function of kinetic energy, E, in direct form (red) and in differentiated form (black) is shown in .Like XPS, AES measures the kinetic energy (Ek) of an electron to determine its binding energy (Eb). The binding energy is inversely proportional to the kinetic energy and can be found from \ref{1}, where hν is the energy of the incident photon and ΔΦ is the difference in work function between the sample and the detector material.\[ E_{b}\ =\ h\nu \ -\ E_{k}\ +\ \Delta \Phi \label{1} \]Since the Eb is dependent on the element and the electronic environment of the nucleus, AES can be used to distinguish elements and their oxidation states. For instance, the energy required to remove an electron from Fe3+ is more than in Fe0. Therefore, the Fe3+ peak will have a lower Ek than the Fe0 peak, effectively distinguishing the oxidation states.An Auger electron comes from a cascade of events. First, an electron beam comes in with sufficient energy to eject a core electron creating a vacancy (see a). Typical energies of the primary electrons range from 3 - 30 keV. A secondary electron (imaging electron) of higher energy drops down to fill the vacancy (see b) and emits sufficient energy to eject a tertiary electron (Auger electron) from a higher shell (see c).The shells from which the electrons move from lowest to highest energy are described as the K shell, L shell, and M shell. This nomenclature is related to quantum numbers. Explicitly, the K shell represents the 1s orbital, the L shell represents the 2s and 2p orbitals, and the M shell represents the 3s, 3p, and 3d orbitals. The cascade of events typically begins with the ionization of a K shell electron, followed by the movement of an L shell electron into the K shell vacancy. Then, either an L shell electron or M shell electron is ejected. It depends on the element, which peak is prevalent but often both peaks will be present. The peak seen in the spectrum is labeled according to the shells involved in the movement of the electrons. For example, an electron ejected from a gold atom could be labeled as Au KLL or Au KLM.The intensity of the peak depends on the amount of material present, while the peak position is element dependent. Auger transitions characteristic of each elements can be found in the literature. Auger transitions of the first forty detectable elements are listed in Table \(\PageIndex{1}\).Important elements of an Auger spectrometer include a vacuum system, an electron source, and a detector. AES must be performed at pressures less than 10-3 pascal (Pa) to keep residual gases from adsorbing to the sample surface. This can be achieved using an ultra-high-vacuum system with pressures from 10-8 to 10-9 Pa. 
Typical electron sources include tungsten filaments with an electron beam diameter of 3 - 5 μm, LaB6 electron sources with a beam diameter of less than 40 nm, and Schottky barrier filaments with a 20 nm beam diameter and high beam current density. Two common detectors are the cylindrical mirror analyzer and the concentric hemispherical analyzer discussed below. Notably, concentric hemispherical analyzers typically have better energy resolution.A CMA is composed of an electron gun, two cylinders, and an electron detector ). The operation of a CMA involves an electron gun being directed at the sample. An ejected electron then enters the space between the inner and outer cylinders (IC and OC). The inner cylinder is at ground potential, while the outer cylinder’s potential is proportional to the kinetic energy of the electron. Due to its negative potential, the outer cylinder deflects the electron towards the electron detector. Only electrons within the solid angle cone are detected. The resulting signal is proportional to the number of electrons detected as a function of kinetic energy.A CHA contains three parts ):Electrons ejected from the surface enter the input lens, which focuses the electrons and retards their energy for better resolution. Electrons then enter the hemispheres through an entrance slit. A potential difference is applied on the hemispheres so that only electrons with a small range of energy differences reach the exit. Finally, an electron detector analyzes the electrons.AES has widespread use owing to its ability to analyze small spot sizes with diameters from 5 μm down to 10 nm depending on the electron gun. For instance, AES is commonly employed to study film growth and surface-chemical composition, as well as grain boundaries in metals and ceramics. It is also used for quality control surface analyses in integrated circuit production lines due to short acquisition times. Moreover, AES is used for areas that require high spatial resolution, which XPS cannot achieve. AES can also be used in conjunction with transmission electron microscopy (TEM) and scanning electron microscopy (SEM) to obtain a comprehensive understanding of microscale materials, both chemically and structurally. As an example of combining techniques to investigate microscale materials, shows the characterization of a single wire from a Sn-Nb multi-wire alloy. a is a SEM image of the singular wire and b is a schematic depicting the distribution of Nb and Sn within the wire. Point analysis was performed along the length of the wire to determine the percent concentrations of Nb and Sn.AES is widely used for depth profiling. Depth profiling allows the elemental distributions of layered samples 0.2 – 1 μm thick to be characterized beyond the escape depth limit of an electron. Varying the incident and collection angles, and the primary beam energy controls the analysis depth. In general, the depth resolution decreases with the square root of the sample thickness. Notably, in AES, it is possible to simultaneously sputter and collect Auger data for depth profiling. The sputtering time indicates the depth and the intensity indicates elemental concentrations. Since, the sputtering process does not affect the ejection of the Auger electron, helium or argon ions can be used to sputter the surface and create the trench, while collecting Auger data at the same time. The depth profile does not have the problem of diffusion of hydrocarbons into the trenches. 
Thus, AES is well suited to depth profiling of metals and semiconductors (e.g., gold). Yet, care should be taken because sputtering can mix up different elements, changing the sample composition. While AES is a very valuable surface analysis technique, there are limitations. Because AES is a three-electron process, elements with fewer than three electrons cannot be analyzed; therefore, hydrogen and helium cannot be detected. Nonetheless, detection is better for lighter elements with fewer transitions. The numerous transition peaks in heavier elements can cause peak overlap, as can the increased peak width of higher energy transitions. Detection limits of AES include 0.1 – 1% of a monolayer, 10-16 – 10-15 g of material, and 1012 – 1013 atoms/cm2. Another limitation is sample destruction. Although focusing of the electron beam can improve resolution, the high-energy electrons can destroy the sample. To limit destruction, beam current densities of less than 1 mA/cm2 should be used. Furthermore, charging of insulating samples under the electron beam can deteriorate the sample and result in high-energy peak shifts or the appearance of large peaks. This page titled 1.14: Auger Electron Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.15: Rutherford Backscattering of Thin Films
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.15%3A_Rutherford_Backscattering_of_Thin_Films
One of the main research interests of the semiconductor industry is to improve the performance of semiconducting devices and to construct new materials with reduced size or thickness that have potential application in transistors and microelectronic devices. However, the most significant challenge regarding thin film semiconductor materials is measurement. Properties such as the thickness, composition at the surface, and contamination are all critical parameters of the thin films. To address these issues, we need an analytical technique which can measure accurately through the depth of the semiconductor surface without destruction of the material. Rutherford backscattering spectroscopy is a unique analysis method for this purpose. It can give us depth-profile information in a non-destructive manner. However, X-ray photoelectron spectroscopy (XPS), energy dispersive X-ray analysis (EDX), and Auger electron spectroscopy (AES) are also able to probe the depth profile of semiconductor films. Table \(\PageIndex{1}\) compares these techniques with RBS. At a basic level, RBS exploits the electrostatic repulsion between high energy incident ions and target nuclei. The specimen under study is bombarded with a monoenergetic beam of 4He+ particles and the backscattered particles are detected by the detector-analysis system, which measures the energies of the particles. During the collision, energy is transferred from the incident particle to the target specimen atoms; the change in energy of the scattered particle depends on the masses of the incoming and target atoms. For an incident particle of mass M1 the energy is E0, while the mass of the target atom is M2. After the collision, the residual energy E of the particle scattered at angle θ can be expressed as:\[ E\ =\ k^{2}E_{0} \label{1} \]\[ k\ =\ \frac{M_{1}\ \cos \theta \ +\ \sqrt{M_{2}^{2}\ -\ M_{1}^{2}\sin^{2}\theta }}{M_{1}\ +\ M_{2}} \label{2} \]where k is the kinematic scattering factor, so that k2 is the ratio of the particle's energy after and before the collision. Since k depends on the masses of the incident particle and target atom and the scattering angle, the energy of the scattered particle is also determined by these three parameters. A simplified layout of a backscattering experiment is shown in . The probability of a scattering event can be described by the differential scattering cross section of a target atom for scattering an incoming particle through the angle θ into the differential solid angle dΩ as follows,\[ \frac{d \sigma_{R}}{d \Omega }\ =\ \left( \frac{zZe^{2}}{4E_{0}} \right)^{2} \frac{4}{\sin^{4}\theta }\ \frac{\left[ \cos \theta \ +\ \sqrt{1\ -\ \left(\frac{M_{1}}{M_{2}}\sin \theta \right)^{2}} \right]^{2}}{\sqrt{1\ -\ \left(\frac{M_{1}}{M_{2}}\sin \theta \right)^{2}}} \label{3} \]where dσR/dΩ is the differential cross section for scattering of the incident particle into the solid angle element dΩ. The above equation may look complicated, but it conveys the key point: the probability of a scattering event is governed by a cross section that scales as (zZ)2 when a particle with charge ze approaches a target atom with charge Ze. Helium ions not scattered at the surface lose energy as they traverse the solid. They lose energy due to interaction with electrons in the target. After the collision the He particles lose further energy on their way out to the detector. We need to know two quantities to measure the energy loss, the distance Δt that the particles penetrate into the target and the energy loss ΔE over this distance . 
The rate of energy loss, or stopping power, is a critical quantity in backscattering experiments as it determines the depth profile in a given experiment. In thin film analysis, it is convenient to assume that the total energy loss ΔE into depth t is simply proportional to t for a given target. This assumption allows a simple derivation of the energy loss in backscattering, since a more complete analysis requires numerical techniques. In the constant dE/dx approximation, the total energy loss becomes linearly related to the depth t, . The apparatus for Rutherford backscattering analysis of a thin solid surface typically consists of three components: an accelerator/ion source, a sample stage in an evacuated scattering chamber, and a detector-analysis system. There are two types of accelerator/ion source available. In a single-stage accelerator, the He+ source is placed within an insulating gas-filled tank ). It is difficult to install a new ion source when the old one is exhausted in this type of accelerator. Moreover, it is also difficult to achieve particle energies much above 1 MeV, since it is difficult to apply very high voltages in this type of system. Another variation is the “tandem accelerator.” Here the ion source is at ground and produces negative ions. The positive terminal is located at the center of the acceleration tube ). Initially the negative ion is accelerated from ground to the terminal. At the terminal, a two-electron stripping process converts He- to He++. The positive ions are then further accelerated toward ground due to coulombic repulsion from the positive terminal. This arrangement can achieve highly accelerated He++ ions (~ 2.25 MeV) with a moderate voltage of 750 kV. Particles that are backscattered by surface atoms of the bombarded specimen are detected by a surface barrier detector. The surface barrier detector is a thin layer of p-type silicon on an n-type substrate, resulting in a p-n junction. When the scattered ions reach the detector and exchange energy with the electrons on its surface, electrons are promoted from the valence band to the conduction band. Thus, each exchange of energy creates electron-hole pairs. The energy of the scattered ions is determined by simply counting the number of electron-hole pairs. The energy resolution of the surface barrier detector in a standard RBS experiment is 12 - 20 keV. The surface barrier detector is generally set between 90° and 170° to the incident beam. Films are usually set normal to the incident beam. A simple layout is shown in . As stated earlier, it is a good approximation in thin film analysis that the total energy loss ΔE is proportional to depth t. With this approximation, we can derive the relation between the energy width ΔE of the signal and the thickness Δt of the film as follows,\[ \Delta E\ =\ \Delta t \left(k^{2}\ \frac{dE}{dx_{in}}\ +\ \frac{1}{|\cos \theta |} \ \frac{dE}{dx_{out}}\right) \label{4} \]where θ is the lab scattering angle. It is worth noting that k is the kinematic scattering factor defined above (so k2 is the energy ratio), and the subscripts “in” and “out” indicate the energies at which the rate of energy loss dE/dx is evaluated. As an example, we consider the backscattering spectrum, at a scattering angle of 170°, for 2 MeV He++ ions incident on a silicon layer deposited onto a 2 mm thick niobium substrate . The energy loss rate dE/dx of the incoming He++ along the inward path in elemental Si is ≈24.6 eV/Å at 2 MeV and is ≈26 eV/Å for the outgoing particle at 1.12 MeV (since k2 for Si is 0.56 at a scattering angle of 170°, the energy of the outgoing particle is 2 × 0.56 = 1.12 MeV) . The measured value of ΔESi is ≈133.3 keV. 
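The arithmetic behind \ref{2} and \ref{4} is easy to script as a cross-check of the hand calculation that follows. The sketch below is a minimal Python example, not part of the original treatment: the function names are mine, the atomic masses are rounded values in amu, and the numerical inputs are simply the ones quoted above for the Si-on-Nb example.

```python
import math

def kinematic_factor_squared(m1, m2, theta_deg):
    """k^2 from Eqs. (1)-(2): fraction of the incident energy retained by a
    projectile of mass m1 backscattered at theta_deg from a target of mass m2."""
    th = math.radians(theta_deg)
    k = (m1 * math.cos(th) + math.sqrt(m2**2 - (m1 * math.sin(th))**2)) / (m1 + m2)
    return k ** 2

def film_thickness(delta_E_eV, k2, dEdx_in, dEdx_out, theta_deg):
    """Delta-t from Eq. (4) in the constant dE/dx approximation (angstroms)."""
    th = math.radians(theta_deg)
    energy_loss_factor = k2 * dEdx_in + dEdx_out / abs(math.cos(th))  # eV per angstrom
    return delta_E_eV / energy_loss_factor

# 2 MeV 4He backscattered from Si at a 170 degree lab angle (values quoted above)
k2_si = kinematic_factor_squared(4.0026, 28.086, 170.0)         # ~0.56
thickness = film_thickness(133.3e3, k2_si, 24.6, 26.0, 170.0)   # ~3.3e3 angstroms
print(f"k^2(Si) = {k2_si:.2f}, Si layer thickness = {thickness:.0f} angstroms")
```

With the quoted stopping powers this reproduces the ca. 3300 Å result worked out by hand in the next paragraph.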
Putting the values into the above equation we get\[ \Delta t \approx \frac{133.3\ keV}{(0.56 \times 24.6\ \frac{eV}{Å}) \ +\ (\frac{1}{|\cos 170^{\circ }| }\ \times \ 26\ \frac{eV}{Å})} \nonumber \]\[ =\ \frac{133.3\ keV}{13.77\ eV/Å \ +\ 26.4\ eV/Å} \nonumber \]\[ =\ \frac{133.3\ keV}{40.17\ eV/Å} \nonumber \]\[ =\ 3318\ Å \nonumber \]Hence a Si layer of ca. 3300 Å thickness has been deposited on the niobium substrate. However, we need to remember that the value of dE/dx is approximated in this calculation. In addition to depth profile analysis, we can study the composition of an element quantitatively by backscattering spectroscopy. The basic equation for quantitative analysis is\[ Y\ =\ \sigma \Omega Q N \Delta t \nonumber \]where Y is the yield of scattered ions from a thin layer of thickness Δt, σ is the scattering cross section, Ω is the detector solid angle, Q is the number of incident ions, and NΔt is the areal density of specimen atoms (atoms/cm2). shows the RBS spectrum for a sample of silicon deposited on a niobium substrate and subjected to laser mixing. The Nb has reacted with the silicon to form a NbSi2 interphase layer. The Nb signal has broadened after the reaction, as shown in . We can use the ratio of the heights HSi/HNb of the backscattering spectrum after formation of NbSi2 to determine the composition of the silicide layer. Since the yield of each element scales with its cross section, the stoichiometric ratio of Si and Nb can be approximated as,\[ \frac{N_{Si}}{N_{Nb}}\ \approx \ \frac{H_{Si}\ \sigma_{Nb}}{H_{Nb}\ \sigma _{Si}} \nonumber \]Hence the concentration of Si and Nb can be determined if we know the appropriate cross sections σSi and σNb. However, the yield in the backscattering spectrum is better represented as the product of the signal height and the energy width ΔE. Thus the stoichiometric ratio is better approximated as\[ \frac{N_{Si}}{N_{Nb}}\ \approx \ \frac{H_{Si}\ \Delta E_{Si}\ \sigma_{Nb}}{H_{Nb}\ \Delta E_{Nb}\ \sigma _{Si}} \nonumber \]It is of interest to understand the limitations of the backscattering technique in comparison with other thin film analysis techniques such as AES, XPS, and SIMS (Table \(\PageIndex{1}\)). AES has better mass resolution, lateral resolution, and depth resolution than RBS, but AES suffers from sputtering artifacts. Compared to RBS, SIMS has better sensitivity. RBS does not provide any chemical bonding information, which we can get from XPS; however, sputtering artifact problems are also associated with XPS. The strength of RBS lies in quantitative analysis. However, conventional RBS systems cannot analyze ultrathin films, since the depth resolution is only about 10 nm using a surface barrier detector. Rutherford backscattering analysis is a straightforward technique to determine the thickness and composition of thin films (< 4000 Å). Areas that have recently been explored include the compositional analysis of new superconducting oxides, the analysis of lattice-mismatched epitaxial layers, and the use of RBS as a probe of thin-film morphology and surface clustering. This page titled 1.15: Rutherford Backscattering of Thin Films is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.16: An Accuracy Assessment of the Refinement of Crystallographic Positional Metal Disorder in Molecular Solid Solutions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.16%3A_An_Accuracy_Assessment_of_the_Refinement_of_Crystallographic_Positional_Metal_Disorder_in_Molecular_Solid_Solutions
Crystallographic positional disorder is evident when a position in the lattice is occupied by two or more atoms, the average of which constitutes the bulk composition of the crystal. If a particular atom occupies a certain position in one unit cell and another atom occupies the same position in other unit cells, the resulting electron density will be a weighted average of the situation in all the unit cells throughout the crystal. Since the diffraction experiment involves the average of a very large number of unit cells (ca. 1018 in a crystal used for single crystal X-ray diffraction analysis), minor static displacements of atoms closely simulate the effects of vibrations on the scattering power of the “average” atom. Unfortunately, the determination of the “average” atom in a crystal may be complicated if positional disorder is encountered. Crystal disorder involving groups such as CO, CN and Cl has been documented to create problems in assigning the correct structure through refinement procedures. While attempts have been made to correlate crystallographic lattice parameters with the bulk chemical composition of the solution from which a single crystal was grown, there has been little effort to correlate crystallographic site occupancy with the chemical composition of the crystal from which the single crystal diffraction data were obtained. These are two very different issues that must be considered when solving a crystal structure with site occupancy disorder. What is the relationship of a single crystal to the bulk material? Does the refinement of a site-occupancy factor actually give a realistic value for the % occupancy when compared to the "actual" % composition for that particular single crystal? The following represents a description of a series of methods for the refinement of a site occupancy disorder between two atoms (e.g., two metal atoms within a mixture of isostructural compounds). An atom in a structure is defined by several parameters: the type of atom, the positional coordinates (x, y, z), the occupancy factor (how many “atoms” are at that position) and atomic displacement parameters (often called temperature or thermal parameters). The latter can be thought of as being a “picture” of the volume occupied by the atom over all the unit cells, and can be isotropic (1 parameter defining a spherical volume) or anisotropic (6 parameters defining an ellipsoidal volume). For a “normal” atom, the occupancy factor is fixed as being equal to one, and the positions and displacement parameters are “refined” using least-squares methods to values in which the best agreement with the observed data is obtained. In crystals with site-disorder, one position is occupied by different atoms in different unit cells. This refinement requires a more complicated approach. Two broad methods may be used: either a new atom type that is the appropriate combination of the different atoms is defined, or the same positional parameters are used for different atoms in the model, each of which has occupancy values less than one, and for which the sum is constrained to total one. In both approaches, the relative occupancies of the two atoms are required. For the first approach, these occupancies have to be defined. For the second, the value can be refined. However, there is a relationship between the thermal parameter and the occupancy value, so care must be taken when doing this. 
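As a minimal illustration of the two modelling approaches just described, the toy sketch below compares a single composite "average" atom with two co-located atoms whose occupancies are constrained to sum to one. This is not refinement code, and the per-atom scattering contributions are placeholder numbers rather than tabulated scattering factors; the point is only that the two descriptions give the same averaged site, differing in which quantities are treated as refinable.

```python
# Toy comparison of the two site-disorder models (placeholder numbers only).
x_cr = 0.35                                   # assumed/refined Cr fraction on the metal site
occupancy = {"Cr": x_cr, "Al": 1.0 - x_cr}    # occupancies constrained to sum to one

f_contrib = {"Cr": 24.0, "Al": 13.0}          # placeholder per-atom scattering contributions

# Approach 1: one composite "average" atom type with a fixed composition.
f_composite = sum(occupancy[el] * f_contrib[el] for el in occupancy)

# Approach 2: two atoms sharing the same coordinates, each weighted by an
# occupancy that could be refined subject to the sum-to-one constraint.
f_shared_site = occupancy["Cr"] * f_contrib["Cr"] + occupancy["Al"] * f_contrib["Al"]

assert abs(f_composite - f_shared_site) < 1e-12   # both describe the same average site
print(f"average-site contribution: {f_composite:.2f}")
```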
These issues can be addressed in several ways. Method 1: the simplest assumption is that the crystal from which the X-ray structure is determined is representative of the bulk sample from which it was crystallized. With this value, either a new atom type can be generated that is the appropriate combination of the measured atom type 1 (M) and atom type 2 (M’) percent composition, or two different atoms can be input with the occupancy factors set to reflect the percent composition of the bulk material. In either case the thermal parameters can be allowed to refine as usual. Method 2: the occupancy values for the two atoms (M and M’) are refined (such that their sum is equal to 1), while the two atoms are constrained to have the same displacement parameters. Method 3: the occupancy values (such that their sum is equal to 1) and the displacement parameters are refined independently for the two atoms. Method 4: once the best values for the occupancy are obtained using either Method 2 or 3, these values are fixed and the displacement parameters are allowed to refine freely. Metal β-diketonate complexes ) for metals in the same oxidation state are isostructural and often isomorphous. Thus, crystals obtained from co-crystallization of two or more metal β-diketonate complexes [e.g., Al(acac)3 and Cr(acac)3] may be thought of as a hybrid of the precursors; that is, the metal position in the crystal lattice may be defined as having the average metal composition. A series of solid solutions of Al(acac)3 and Cr(acac)3 can be prepared for study by X-ray diffraction, by crystallization from acetone solutions of specific mixtures of Al(acac)3 and Cr(acac)3 (Table \(\PageIndex{1}\), Column 1). The pure derivatives and the solid solution, Al1-xCrx(acac)3, crystallize in the monoclinic space group P21/c with Z = 4. Substitution of Cr for Al in the M(acac)3 structure could possibly occur in a random manner, i.e., a metal site has an equal probability of containing an aluminum or a chromium atom. Alternatively, if the chromium had a preference for specific sites, a superlattice structure of lower symmetry would be present. Such ordering is not observed, since all the samples show no additional reflections other than those that may be indexed to the monoclinic cell. Therefore, it may be concluded that Al(acac)3 and Cr(acac)3 do indeed form solid solutions: Al1-xCrx(acac)3. Electron microprobe analysis, using wavelength-dispersive spectrometry (WDS), on the individual crystal from which X-ray crystallographic data were collected provides the “actual” composition of each crystal. Analysis was performed on at least 6 sites on each crystal using a 10 μm analysis spot, providing a measure of the homogeneity within the individual crystal for which X-ray crystallographic data were collected. An example of an SEM image of one of the crystals and the point analyses is given in . The data in Table \(\PageIndex{1}\) and demonstrate that while a batch of crystals may contain individual crystals with different compositions, each individual crystal is actually reasonably homogeneous. There is, for most samples, a significant variance between the molar Al:Cr ratio in the bulk material and an individual crystal chosen for X-ray diffraction. 
The variation in Al:Cr ratio within each individual crystal (±10%) is much less than that between crystals. Since Method 1 does not refine the %Cr and relies on an input of the Al and Cr percent composition of the "bulk" material, i.e., the %Cr in the total mass of the material (Table \(\PageIndex{1}\), Column 1), as opposed to the analysis of the single crystal on which X-ray diffraction was performed (Table \(\PageIndex{1}\), Column 2), the closer these values are to the "actual" value determined by WDS for the crystal on which X-ray diffraction was performed (Table \(\PageIndex{1}\), Column 1 vs 2), the closer the overall refinement of the structure comes to those of Methods 2 - 4. While this assumption is obviously invalid for many of the samples, it is one often used when bulk data (for example, from NMR) are available. However, as there is no reason to assume that one crystal is completely representative of the bulk sample, it is unwise to rely only on such data. Method 2 always produced final, refined occupancy values that were close to those obtained from WDS (Table \(\PageIndex{1}\)). This approach assumes that the motion of the central metal atoms is identical. While this is obviously not strictly true, as they are of different size, the results obtained herein imply that this is a reasonable approximation where simple connectivity data are required. For samples where the amount of one of the elements (i.e., Cr) is very low, a good refinement often cannot be obtained; in these cases, when refining the occupancy values, that for Al would exceed 1 while that for Cr would drop below 0! With Method 3, in some cases, despite the interrelationship between the occupancy and the displacement parameters, convergence was obtained successfully. In these cases the refined occupancies were both slightly closer to those observed from WDS than the occupancy values obtained using Method 2. However, for some samples with higher Cr content the refinement was unstable and would not converge. Whether this observation was due to the increased percentage of Cr or simply lower data quality is not certain. While this method does allow refinement of any differences in atomic motion between the two metals, it requires extremely high quality data for this difference to be determined reliably. Method 4 adds little to the final results. shows the relationship between the chromium concentration (%Cr) determined from WDS and the refinement of X-ray diffraction data using Methods 2 or 3 (labeled in ). Clearly there exists a good correlation, with only a slight divergence at high Cr concentration. This is undoubtedly a consequence of trying to refine a low fraction of a light atom (Al) in the presence of a large fraction of a heavier atom (Cr). X-ray diffraction is, therefore, an accurate method of determining the M:M' ratios in crystalline solid solutions. This page titled 1.16: An Accuracy Assessment of the Refinement of Crystallographic Positional Metal Disorder in Molecular Solid Solutions is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.17: Principles of Gamma-ray Spectroscopy and Applications in Nuclear Forensics
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/01%3A_Elemental_Analysis/1.17%3A_Principles_of_Gamma-ray_Spectroscopy_and_Applications_in_Nuclear_Forensics
Gamma-ray (γ-ray) spectroscopy is a quick and nondestructive analytical technique that can be used to identify various radioactive isotopes in a sample. In gamma-ray spectroscopy, the energy of incident gamma-rays is measured by a detector. By comparing the measured energy to the known energy of gamma-rays produced by radioisotopes, the identity of the emitter can be determined. This technique has many applications, particularly in situations where rapid nondestructive analysis is required. The field of chemistry typically concerns itself with the behavior and interactions of stable isotopes of the elements. However, elements can exist in numerous states which are not stable. For example, a nucleus can have too many neutrons for the number of protons it has or, conversely, it can have too few neutrons for the number of protons it has. Alternatively, the nucleus can exist in an excited state, wherein a nucleon is present in an energy state that is higher than the ground state. In all of these cases, the unstable state is at a higher energy and the nucleus must undergo some kind of decay process to reduce that energy. There are many types of radioactive decay, but the type most relevant to gamma-ray spectroscopy is gamma decay. When a nucleus undergoes radioactive decay by α or β decay, the resultant nucleus produced by this process, often called the daughter nucleus, is frequently in an excited state. Similar to how electrons are found in discrete energy levels around a nucleus, nucleons are found in discrete energy levels within the nucleus. In γ decay, the excited nucleon decays to a lower energy state and the energy difference is emitted as a quantized photon. Because nuclear energy levels are discrete, the transitions between energy levels are fixed for a given transition. The photon emitted from a nuclear transition is known as a γ-ray. Radioactive decay, with few exceptions, is independent of the physical conditions surrounding the radioisotope. As a result, the probability of decay at any given instant is constant for any given nucleus of that particular radioisotope. We can use calculus to see how the number of parent nuclei present varies with time. The decay constant, λ, is a representation of the rate of decay for a given nuclide, \ref{1}.\[ \frac{dN}{N}\ =\ -\lambda dt \label{1} \]If the symbol N0 is used to represent the number of radioactive nuclei present at t = 0, then \ref{2} describes the number of nuclei present at some given time.\[ N\ =\ N_{0}e^{-\lambda t} \label{2} \]The same equation can be applied to the measurement of radiation with some sort of detector. The count rate will decrease from some initial count rate in the same manner that the number of nuclei will decrease from some initial number of nuclei. The decay rate can also be represented in a way that is more easily understood. The equation describing half-life (t1/2) is shown in \ref{3}.\[ t_{1/2}\ =\ \frac{ln\ 2}{\lambda } \label{3} \]The half-life has units of time and is a measure of how long it takes for the number of radioactive nuclei in a given sample to decrease to half of the initial quantity. It provides a conceptually easy way to compare the decay rates of two radioisotopes. 
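A short numerical sketch of \ref{2} and \ref{3} makes this comparison concrete. The example below is mine (plain Python with hypothetical isotopes and half-lives, not data from the text), and it anticipates the count-rate point developed in the next paragraph: for equal starting populations, the shorter-lived isotope decays, and therefore counts, faster.

```python
import math

def nuclei_remaining(n0, half_life, t):
    """Eq. (2): parent nuclei left after time t (t and half_life in the same units)."""
    decay_constant = math.log(2) / half_life      # Eq. (3) rearranged for lambda
    return n0 * math.exp(-decay_constant * t)

def activity(n0, half_life, t):
    """Decay rate lambda*N, which is proportional to the detected count rate."""
    decay_constant = math.log(2) / half_life
    return decay_constant * nuclei_remaining(n0, half_life, t)

n0 = 1.0e6                                # identical starting populations
short_lived, long_lived = 10.0, 1000.0    # hypothetical half-lives, in days

print(f"initial decay rate, t1/2 = 10 d:   {activity(n0, short_lived, 0):.0f} per day")
print(f"initial decay rate, t1/2 = 1000 d: {activity(n0, long_lived, 0):.0f} per day")
```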
If one has the same number of starting nuclei for two radioisotopes, one with a short half-life and one with a long half-life, then the count rate will be higher for the radioisotope with the short half-life, as many more decay events must happen per unit time in order for the half-life to be shorter. When a radioisotope decays, the daughter product can also be radioactive. Depending upon the relative half-lives of the parent and daughter, several situations can arise: no equilibrium, a transient equilibrium, or a secular equilibrium. This module will not discuss the former two possibilities, as they are of less relevance to this particular discussion. Secular equilibrium takes place when the half-life of the parent is much longer than the half-life of the daughter. In any arbitrary equilibrium, the ratio of atoms of each can be described as in \ref{4}.\[ \frac{N_{P}}{N_{D}}\ =\ \frac{\lambda _{D}\ -\ \lambda _{P}}{\lambda _{P}} \label{4} \]Because the half-life of the parent is much, much greater than that of the daughter, as the parent decays, the observed amount of activity changes very little.\[ \frac{N_{P}}{N_{D}}\ =\ \frac{\lambda _{D}}{\lambda _{P}} \label{5} \]This can be rearranged to show that the activity of the daughter should equal the activity of the parent.\[ A_{P}\ =\ A_{D} \label{6} \]Once this point is reached, the parent and the daughter are in secular equilibrium with one another and the ratio of their activities should be fixed. One particularly useful application of this concept, to be discussed in more detail later, is in the analysis of the enrichment level of long-lived radioisotopes that are relevant to trafficking. A scintillation detector is one of several possible methods for detecting ionizing radiation. Scintillation is the process by which some material, be it a solid, liquid, or gas, emits light in response to incident ionizing radiation. In practice, this is used in the form of a single crystal of sodium iodide that is doped with a small amount of thallium, referred to as NaI(Tl). This crystal is coupled to a photomultiplier tube which converts the small flash of light into an electrical signal through the photoelectric effect. This electrical signal can then be detected by a computer. A semiconductor accomplishes the same effect as a scintillation detector, conversion of gamma radiation into electrical pulses, except through a different route. In a semiconductor, there is a small energy gap between the valence band of electrons and the conduction band. When a semiconductor is hit with gamma-rays, the energy imparted by the gamma-ray is enough to promote electrons to the conduction band. This change in conductivity can be detected and a signal can be generated correspondingly. Germanium crystals doped with lithium, Ge(Li), and high-purity germanium (HPGe) detectors are among the most common types. Each detector type has its own advantages and disadvantages. The NaI(Tl) detectors are generally inferior to Ge(Li) or HPGe detectors in many respects, but are superior to Ge(Li) or HPGe detectors in cost, ease of use, and durability. Germanium-based detectors generally have much higher resolution than NaI(Tl) detectors. Many small photopeaks that are plainly visible on germanium detectors are completely undetectable on NaI(Tl) detectors. However, Ge(Li) detectors must be kept at cryogenic temperatures for the entirety of their lifetime or else they rapidly become incapable of functioning as gamma-ray detectors. 
Sodium iodide detectors are much more portable and can even potentially be used in the field because they do not require cryogenic temperatures, so long as the photopeak that is being investigated can be resolved from the surrounding peaks. There are several dominant features that can be observed in a gamma spectrum. The dominant feature that will be seen is the photopeak. The photopeak is the peak that is generated when a gamma-ray is totally absorbed by the detector. Higher density detectors and larger detector sizes increase the probability of the gamma-ray being absorbed. The second major feature that will be observed is that of the Compton edge and distribution. The Compton edge arises due to the Compton effect, wherein a portion of the energy of the gamma-ray is transferred to the semiconductor detector or the scintillator. This occurs when the relatively high energy gamma ray strikes a relatively low energy electron. There is a relatively sharp edge to the Compton edge that corresponds to the maximum amount of energy that can be transferred to the electron via this type of scattering. The broad peak lower in energy than the Compton edge is the Compton distribution and corresponds to the energies that result from a variety of scattering angles. A feature in the Compton distribution is the backscatter peak. This peak is a result of the same effect but corresponds to the minimum amount of energy transferred. The sum of the energies of the Compton edge and the backscatter peak should yield the energy of the photopeak. Another group of features in a gamma spectrum are the peaks that are associated with pair production. Pair production is the process by which a gamma ray of sufficiently high energy (>1.022 MeV) can produce an electron-positron pair. The electron and positron can annihilate and produce two 0.511 MeV gamma photons. If all three gamma rays, the original with its energy reduced by 1.022 MeV and the two annihilation gamma rays, are detected simultaneously, then a full energy peak is observed. If one of the annihilation gamma rays is not absorbed by the detector, then a peak that is equal to the full energy less 0.511 MeV is observed. This is known as an escape peak. If both annihilation gamma rays escape, then a full energy peak less 1.022 MeV is observed. This is known as a double escape peak. Natural uranium is composed mostly of 238U with low levels of 235U and 234U. In the process of making enriched uranium (uranium with a higher level of 235U), depleted uranium is produced. Depleted uranium is used in many applications, particularly for its high density. Unfortunately, uranium is toxic and a potential health hazard, and it is sometimes found in trafficked radioactive materials, so it is important to have a methodology for its detection and analysis. One easy method for this determination is achieved by examining the spectrum of the sample and comparing it qualitatively to the spectrum of a sample that is known to be natural uranium. This type of qualitative approach is not suitable for issues that are of concern to national security. Fortunately, the same approach can be used in a quantitative fashion by examining the ratios of various gamma-ray photopeaks. The concept of a radioactive decay chain is important in this determination. In the case of 238U, it decays over many steps to 206Pb. In the process, it goes through 234Th, 234mPa, and 234Pa. These three isotopes have detectable gamma emissions that are capable of being used quantitatively. 
As can be seen in Table \(\PageIndex{1}\), the half-life of these three emitters is much less than the half-life of 238U. As a result, these should exist in secular equilibrium with 238U. Given this, the ratio of activity of 238U to each daughter product should be 1:1. They can thus be used as a surrogate for measuring 238U decay directly via gamma spectroscopy. The total activity of the 238U can be determined from the relation below, where A is the total activity of 238U, R is the count rate of the given daughter isotope, and B is the probability of decay via that mode. The count rate may need to be corrected for self-absorption if the sample is particularly thick. It may also need to be corrected for detector efficiency if the instrument does not have some sort of internal calibration.\[ A\ =\ R/B \nonumber \]Question: A gamma spectrum of a sample is obtained. The 63.29 keV photopeak associated with 234Th was found to have a count rate of 5.980 kBq. What is the total activity of 238U present in the sample? Answer: 234Th exists in secular equilibrium with 238U. The total activity of 234Th must be equal to the activity of the 238U. First, the observed activity must be converted to the total activity using the relation A = R/B. It is known that the emission probability for the 63.29 keV gamma-ray of 234Th is 4.84%. Therefore, the total activity of 238U in the sample is 123.6 kBq. The count rate of 235U can be observed directly with gamma spectroscopy. This can be converted, as was done in the case of 238U above, to the total activity of 235U present in the sample. Given that the natural abundances of 238U and 235U are known, the ratio of the expected activity of 238U to 235U can be calculated to be 21.72 : 1. If the calculated ratio of disintegration rates varies significantly from this expected value, then the sample can be determined to be depleted or enriched. Question: As shown above, the activity of 238U in a sample was calculated to be 123.6 kBq. If the gamma spectrum of this sample shows a count rate of 23.73 kBq at the 185.72 keV photopeak for 235U, can this sample be considered enriched uranium? The emission probability for this photopeak is 57.2%. Answer: As shown in the example above, the count rate can be converted to a total activity for 235U. This yields a total activity of 41.49 kBq for 235U. The ratio of activities of 238U and 235U can then be calculated to be 2.979. This is lower than the expected ratio of 21.72, indicating that the 235U content of the sample is greater than the natural abundance of 235U; the sample can therefore be considered enriched. This type of calculation is not unique to 238U. It can be used in any circumstance where the ratio of two isotopes needs to be compared, so long as the isotope itself, or a daughter product it is in secular equilibrium with, has a usable gamma-ray photopeak. In the investigation of trafficked radioactive materials, particularly fissile materials, it is of interest to determine how long it has been since the sample was enriched. This can help provide an idea of the source of the fissile material: whether it was enriched for the purpose of trade or left over from cold war era enrichment, etc. When uranium is enriched, 235U is concentrated in the enriched sample by removing it from natural uranium. This process separates the uranium from the daughter products that it was in secular equilibrium with. In addition, when 235U is concentrated in the sample, 234U is also concentrated due to the particulars of the enrichment process. 
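Before turning to age dating, the activity-ratio arithmetic used in the two questions above can be collected into a short script. This is a sketch only: the count rates, emission probabilities, and the 21.72:1 natural-activity ratio are the values quoted in the text, and the simple comparison below ignores uncertainties and detector-efficiency corrections that a real determination would include.

```python
def total_activity(count_rate, emission_probability):
    """A = R/B: total activity of the emitter inferred from a single photopeak."""
    return count_rate / emission_probability

# Numbers from the two worked questions above (count rates in kBq)
a_u238 = total_activity(5.980, 0.0484)   # via the 63.29 keV 234Th peak, ~123.6 kBq
a_u235 = total_activity(23.73, 0.572)    # via the 185.72 keV 235U peak,  ~41.5 kBq

ratio = a_u238 / a_u235                  # ~2.98
NATURAL_ACTIVITY_RATIO = 21.72           # expected 238U:235U ratio for natural uranium

if ratio < NATURAL_ACTIVITY_RATIO:
    verdict = "235U-enriched relative to natural uranium"
elif ratio > NATURAL_ACTIVITY_RATIO:
    verdict = "depleted in 235U"
else:
    verdict = "consistent with natural uranium"

print(f"238U/235U activity ratio = {ratio:.2f}: {verdict}")
```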
The 234U that ends up in the enriched sample will decay through several intermediates to 214Bi. By comparing the activities of 234U and 214Bi or 226Ra, the age of the sample can be determined.\[ A_{Bi}\ =\ A_{Ra}\ =\ \frac{A_{U}}{2} \lambda _{Th}\lambda _{Ra} T^{2} \label{7} \]In \ref{7}, ABi is the activity of 214Bi, ARa is the activity of 226Ra, AU is the activity of 234U, λTh is the decay constant for 230Th, λRa is the decay constant for 226Ra, and T is the age of the sample. This is a simplified form of a more complicated equation that holds true over all practical sample ages (on the order of years) due to the very long half-lives of the isotopes in question. The results of this can be graphically plotted as they are in . Question: The gamma spectrum for a sample is obtained. The count rate of the 121 keV 234U photopeak is 4500 counts per second and the associated emission probability is 0.0342%. The count rate of the 609.3 keV 214Bi photopeak is 5.83 counts per second and the emission probability is 46.1%. How old is the sample? Answer: The observed count rates can be converted to the total activities for each radionuclide. Doing so yields a total activity for 234U of 13,160 kBq and a total activity for 214Bi of 12.65 Bq. This gives a ratio of 9.614 x 10-7. Using the graph in , this indicates that the sample must have been enriched 22.0 years prior to analysis. This page titled 1.17: Principles of Gamma-ray Spectroscopy and Applications in Nuclear Forensics is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
10.1: A Simple Test Apparatus to Verify the Photoresponse of Experimental Photovoltaic Materials and Prototype Solar Cells
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/10%3A_Device_Performance/10.01%3A_A_Simple_Test_Apparatus_to_Verify_the_Photoresponse_of_Experimental_Photovoltaic_Materials_and_Prototype_Solar_Cells
One of the problems associated with testing a new unproven photovoltaic material or cell design is that significant processing is required in order to create a fully functioning solar cell. If it is desired to screen a wide range of materials or synthetic conditions, it can be time consuming (and costly in research funds) to prepare fully functioning devices. In addition, the success of each individual cell may be more dependent on fabrication steps not associated with the variations under study. For example, lithography and metallization could cause more variability than the parameters of the materials synthesis. Thus, the result could be to give no useful information as to the viability of each material under study, or, even worse, a false indication of research direction. So-called quick and dirty qualitative measurements can be employed to assess not only the relative photoresponse of new absorber layer materials, but also the relative power output of photovoltaic devices. The measurement procedure can provide a simple, inexpensive and rapid evaluation of cell materials and structures that can help guide the development of new materials for solar cell applications. Everything needed for the measurements can be purchased at a local electronics store and a hardware or big box store. A qualitative measurement of a solar cell’s current-voltage (I-V) characteristics can be obtained using the simple circuit diagram illustrated in . shows an I-V test setup using a household flood lamp for the light source. A small fan sits to the right, just out of the picture. Driving the potentiometer to its maximum value will place the cell close to open circuit operation, depending on the potentiometer range, so that the open circuit voltage can be simply extrapolated from the I versus V curve. If desired, the circuit can simply be opened to make the actual measurement once the rest of the data have been recorded. Data in this case were simply recorded by hand and later entered into a spreadsheet so an I-V plot could be generated. A sample plot is shown in . Keep in mind that cell efficiency cannot be determined with this technique unless the light source has been calibrated and color corrected to match terrestrial sunlight. The fact that the experimental device actually generated net power was the result sought. The shape of the curve and the very low voltage are the result of very large resistive losses in the device along with a very “leaky” junction. One improvement that can be made to the above system is to replace the floodlight with a simple slide projector. The floodlight will typically have a spectrum very heavily weighted in the red and infrared and will be deficient in the shorter wavelengths. Though still not a perfect match to the solar spectrum, the slide projector does at least have more output at the shorter wavelengths; at the same time it will have less IR output compared to the floodlight, and the combination should give a somewhat more representative response. A typical set up is shown in . The mirror in serves two purposes. First, it turns the beam so the test object can be laid flat on a measurement bed, and second, it serves to collimate and concentrate the beam by focusing it on a smaller area, giving a better approximation of terrestrial solar intensity over a range of intensities such as AM2 (air mass 2) through AM0 ). 
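Hand-recorded (V, I) pairs from such a sweep are easy to post-process. The sketch below is illustrative only: the data points are made up, and the simple figures of merit extracted (short-circuit current, open-circuit voltage, and maximum output power) are standard solar-cell quantities rather than part of this specific procedure. With a calibrated light source, the same maximum-power point feeds directly into the efficiency estimate discussed below.

```python
# Hypothetical hand-recorded sweep: bias voltage (V) and measured current (mA)
voltage = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
current = [9.8, 9.1, 8.0, 6.4, 4.3, 2.0, 0.0]

power = [v * i for v, i in zip(voltage, current)]   # output power at each point, mW

i_sc = current[0]                    # short-circuit current (bias = 0)
v_oc = voltage[current.index(0.0)]   # bias at which the current falls to zero
p_max = max(power)                   # maximum output power
idx = power.index(p_max)
v_m, i_m = voltage[idx], current[idx]

print(f"Isc = {i_sc} mA, Voc = {v_oc} V")
print(f"maximum power = {p_max:.2f} mW at ({v_m} V, {i_m} mA)")
```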
An estimate of the intensity can be made using a calibrated silicon solar cell of the sort that can be purchased online from any of several scientific hobby shops, such as Edmunds Scientific. While still far from enabling a quantitative measurement of device output, the technique will at least provide indications within a ballpark range of actual cell efficiency. shows a measurement made with the test device placed at a distance from the mirror for which the intensity was previously determined to be equivalent to AM1 solar intensity, or 1000 watts per square meter. Since the beam passes through the projector lens and reflects from the second surface of the slightly concave mirror, there is essentially no UV light left in the beam that could be harmful to the naked eye. Still, if this technique is used, it is recommended that observations be made through a piece of ordinary glass such as eyeglasses or even a small glass shield inserted for that purpose. The blue area in the figure represents the largest rectangle that can be drawn under the curve and gives the maximum output power of the cell, which is simply the product of the current and voltage at maximum power. is a plot of current density, obtained by dividing the current from the device by its area. It is common to normalize the output in this manner. If the power density of the incident light (P0) is known in W/cm2, the device efficiency can be obtained by dividing the maximum power (as determined from Im and Vm) by the incident power density times the area of the cell (Acell), \ref{1}.\[ \eta \ =\ I_{m}V_{m}/P_{0}A_{cell} \label{1} \]In many cases it is beneficial to determine the photoconductivity of a new material prior to cell fabrication. This allows for the rapid screening of materials, or of synthesis variables for a single material, even before issues of cell design and construction are considered. shows the circuit diagram of a simple photoconductivity test made with a slightly different set up compared to that shown above. In this case a voltage is placed across the sample, which is connected in series with a resistor. A simple 9 V battery secured with a battery holder or a small ac to dc power converter can be used to supply the voltage. The sample and resistor sit inside a small box with an open top. The voltage across (in this case) the 10 ohm resistor was measured with a shutter held over the sample (a simple piece of cardboard sitting on the top of the box) and with the shutter removed. The difference in voltage is a direct indication of the change in the photoconductance of the sample, and again this is a very quick and simple test to see if the material being developed does indeed have a photoresponse of some sort, without having to make a full device structure. Adjusting the position of the light source so that the incident light power density at the sample surface is 200 or 500 or 1000 W/m2 enables an approximate numerical estimate of the photocurrent that was generated, and again can help guide the development of new materials for solar cell applications. The results from such a measurement are shown in for a sample of carbon nanotubes (CNT) coated with CdSe by liquid phase deposition (LPD). This page titled 10.1: A Simple Test Apparatus to Verify the Photoresponse of Experimental Photovoltaic Materials and Prototype Solar Cells is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. 
Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
10.2: Measuring Key Transport Properties of FET Devices
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/10%3A_Device_Performance/10.02%3A_Measuring_Key_Transport_Properties_of_FET_Devices
Arguably the most important invention of modern times, the transistor was invented in 1947 at Bell Labs by John Bardeen, William Shockley, and Walter Brattain. It was the result of efforts to replace inefficient and bulky vacuum tubes in current regulation and switching functions. Further advances in transistor technology led to the field effect transistors (FETs), the bedrock of modern electronics. FETs operate by utilizing an electric field to control the flow of charge carriers along a channel, analogous to a water valve controlling the flow of water in your kitchen sink. The FET consists of 3 terminals, a source (S), drain (D), and gate (G). The region between the source and drain is called the channel. The conduction in the channel depends on the availability of charge carriers, which is controlled by the gate voltage. shows the associated cross-section of a FET with the source, drain, and gate terminals labeled. FETs come in a variety of flavors depending on their channel doping (leading to enhancement and depletion modes) and gate types, as seen in . The two FET types are junction field effect transistors (JFETs) and metal oxide semiconductor field effect transistors (MOSFETs). Junction field effect transistors (JFETs), as their name implies, utilize a PN-junction to control the flow of charge carriers. The PN-junction is formed when opposing doping schemes are brought together on both sides of the channel. The doping schemes can be made either n-type (electrons) or p-type (holes) by doping with phosphorus/arsenic or boron/gallium, respectively. The n-channel JFET consists of pnp junctions where the source and drain are n-doped and the gate is p-doped. shows the cross section of an n-channel JFET in the “ON” state, obtained by applying a positive drain-source voltage in the absence of a gate-source voltage. Alternatively, the p-channel JFET consists of npn junctions where the source and drain are p-doped and the gate is n-doped. For the p-channel device, a negative drain-source voltage is applied in the absence of a gate voltage to turn “ON” the npn device, as seen in . Since JFETs are “ON” when no gate-source voltage is applied, they are called depletion mode devices, meaning that a depletion region is required to turn “OFF” the device. This is where the PN-junction comes into play. The PN-junction works by enabling a depletion region to form where electrons and holes combine, leaving behind positive and negative ions which inhibit further charge transfer as well as depleting the availability of charge carriers at the interface. This depletion region is pushed further into the channel by applying a gate-source voltage. If the voltage is sufficient, the depletion regions on either side of the channel will “pinch off” the flow through the channel and the device will be “OFF”. This voltage is called the pinch-off voltage, VP. The n-channel VP is obtained by increasing the gate-source voltage in the negative direction, while the p-channel VP is obtained by increasing the gate-source voltage in the positive direction. The metal oxide semiconductor field effect transistor (MOSFET) utilizes an oxide layer (typically SiO2) to isolate the gate from the source and drain. 
The thin layer of oxide prevents flow of current to the gate, but enables an electric field to be applied to the channel, which regulates the flow of charge carriers through the channel. MOSFETs, unlike JFETs, can operate in depletion or enhancement mode, characterized by their ON or OFF state at zero gate-source voltage, VGS. For depletion mode MOSFETs the device is “ON” when VGS is zero as a result of the device's structure and doping scheme. The n-channel depletion mode MOSFET consists of heavily n-doped source and drain terminals on top of a p-doped substrate. Underneath an insulating oxide layer there is a thin layer of n-type silicon which allows charge carriers to flow in the absence of a gate voltage. When a negative voltage is applied to the gate, a depletion region forms inside the channel and the device turns “OFF”. Enhancement mode MOSFETs, in contrast, are “OFF” at zero gate-source voltage; applying a sufficient gate voltage draws charge carriers to the oxide interface and forms a conducting inversion layer, as seen in for n-channel and for p-channel enhancement mode MOSFETs, respectively. The thickness of this inversion layer is controlled by the magnitude of the gate voltage. The minimum voltage required to form the inversion layer is called the gate-to-source threshold voltage, VT. In the case of n-channel enhancement mode MOSFETs, the “ON” state is reached when VGS > VT and a positive drain-source voltage, VDS, is applied. If the VGS is too low, then increasing the VDS further results only in increasing the depletion region around the drain. The p-channel enhancement mode MOSFETs operate similarly except that the voltages are reversed. Specifically, the “ON” state occurs when VGS < VT and a negative drain-source voltage is applied. In both an academic and industrial setting, characterization of FETs is beneficial for determining device performance. The quality and type of FET can easily be identified by measuring the transport characteristics under different experimental conditions utilizing a semiconductor characterization system (SCS). By analyzing the V-I characteristics through what are called voltage sweeps, the following key device parameters can be determined. Pinch-off voltage, VP: the voltage needed to turn “OFF” a JFET. When designing circuits it is essential that the pinch-off voltage be determined to avoid current leakage, which can dramatically reduce performance. Threshold voltage, VT: the voltage needed to turn “ON” a MOSFET. This is a critical parameter in effective circuit design. Channel resistance, RDS: the resistance between the drain and source in the channel. This influences the amount of current being transferred between the two terminals. Power dissipation, PD: the power dissipation determines the amount of heat generated by the transistor. This becomes a real problem since the transport properties deteriorate as the channel is heated. Charge carrier mobility: the mobility determines how quickly the charge carriers can move through the channel. In most cases higher mobility leads to better device performance. The mobility can also be used to gauge the impurity, defect, temperature, and charge carrier concentrations. Transconductance gain, gm: the gm is a measure of the gain or amplification of a current for a given change in gate voltage. This is critical for amplification type electronics. The equipment needed comprises: a PC with Keithley Interactive Test Environment (KITE) software; a semiconductor characterization system (Keithley 4200-SCS or equivalent); a probe station; probe tips; and protective gloves. The Semiconductor Characterization System is an automated system that provides both (V-I) and (V-C) characterization of semiconductor devices and test structures. The advanced digital sweep parameter analyzer provides sub-micron characterization with accuracy and speed. 
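Before walking through the V-I regions below, it is worth noting that sweep data exported from such a system are straightforward to post-process. The sketch below is not KITE code: the (VDS, ID) points are hypothetical, and it simply estimates the channel resistance RDS from the low-VDS slope and the power dissipation PD at each bias point, anticipating the relations developed in the following sections.

```python
# Hypothetical exported sweep at a fixed gate-source voltage (illustrative only):
# drain-source voltage VDS (V) and measured drain current ID (mA).
vds = [0.0, 0.1, 0.2, 0.3, 0.5, 1.0, 2.0, 3.0]
id_ma = [0.00, 0.48, 0.95, 1.40, 2.10, 2.90, 3.00, 3.02]

# Channel resistance from the low-VDS (linear) part of the sweep: RDS = dVDS / dID
r_ds = (vds[2] - vds[0]) / ((id_ma[2] - id_ma[0]) * 1e-3)   # ohms
print(f"RDS estimated from the linear region: {r_ds:.0f} ohm")

# Power dissipated in the channel at each bias point: PD = ID x VDS
for v, i in zip(vds, id_ma):
    print(f"VDS = {v:4.1f} V, ID = {i:4.2f} mA, PD = {v * i:5.2f} mW")
```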
This system utilizes the Keithley Interactive Test Environment (KITE) software designed specifically for semiconductor characterization. Voltage sweeps are a great way to learn about the device. shows a typical plot of drain-source voltage sweeps at various gate-source voltages while measuring the drain current, ID, for an n-channel JFET. The V-I characteristics have four distinct regions. Analysis of these regions can provide critical information about the device characteristics such as the pinch-off voltage, VP, transconductance gain, gm, drain-source channel resistance, RDS, and power dissipation, PD. Ohmic region: this region is bounded by VDS < VP. Here the JFET begins to flow a drain current with a linear response to the voltage, behaving like a variable resistor. In this region the drain-source channel resistance, RDS, is modeled by \ref{1}, where ΔVDS is the change in drain-source voltage, ΔID is the change in drain current, and gm is the transconductance gain. Solving for gm results in \ref{2}.\[ R_{DS}\ =\ \frac{\Delta V_{DS}}{\Delta I_{D}}\ =\ \frac{1}{g_{m}} \label{1} \]\[ g_m\ =\ \frac{\Delta I_{D}}{\Delta V_{DS}}\ =\ \frac{1}{R_{DS}} \label{2} \]Saturation region: this is the region where the JFET is completely “ON”. The maximum amount of current is flowing for the given gate-source voltage. In this region the drain current can be modeled by \ref{3}, where ID is the drain current, IDSS is the maximum current, VGS is the gate-source voltage, and VP is the pinch-off voltage. Solving for the pinch-off voltage results in \ref{4}.\[ I_{D}\ =\ I_{DSS}\left(1\ -\ \frac{V_{GS}}{V_{P}}\right)^{2} \label{3} \]\[ V_{P}\ =\ \frac{V_{GS}}{1\ -\ \sqrt{\frac{I_D}{I_{DSS}}}} \label{4} \]Breakdown region: this region is characterized by a sudden increase in current. The drain-source voltage supplied exceeds the resistive limit of the semiconducting channel, causing the transistor to break down and flow an uncontrolled current. Cut-off region: in this region the gate-source voltage is sufficient to restrict the flow through the channel, in effect cutting off the drain current. The power dissipation, PD, can be calculated for any region utilizing Ohm's law (I = V/R) using \ref{5}.\[ P_{D}\ =\ I_{D}\ \times \ V_{DS}\ =\ (I_{D})^{2}\ \times \ R_{DS}\ =\ (V_{DS})^{2}/R_{DS} \label{5} \]The p-channel JFET V-I characteristics behave similarly except that the voltages are reversed. Specifically, the pinch-off point is reached when the gate-source voltage is increased in a positive direction, and the saturation region is met when the drain-source voltage is increased in the negative direction. shows a typical plot of drain-source voltage sweeps at various gate-source voltages while measuring the drain current, ID, for an ideal n-channel enhancement MOSFET. Like JFETs, the V-I characteristics of MOSFETs have distinct regions that provide valuable information about device transport properties. Linear region: the n-channel enhancement MOSFET behaves linearly, acting like a variable resistor, when the gate-source voltage is greater than the threshold voltage and the drain-source voltage is small compared to (VGS - VT). 
In this region the drain current can be modeled by \ref{6}, where ID is the drain current, VGS is the gate-source voltage, VT is the threshold voltage, VDS is the drain-source voltage, and k is the geometric factor described by \ref{7}, where µn is the charge-carrier effective mobility, COX is the gate oxide capacitance, W is the channel width, and L is the channel length.\[ I_{D}\ =\ 2k\left[(V_{GS}-V_{T})V_{DS}\ -\ \frac{(V_{DS})^{2}}{2}\right] \label{6} \]\[ k\ =\ \frac{\mu _{n} C_{OX}}{2} \frac{W}{L} \label{7} \]Saturation region: in this region the MOSFET is considered fully “ON”. The drain current for the saturation region is modeled by \ref{8}. The drain current is mainly influenced by the gate-source voltage, while the drain-source voltage has no effect.\[ I_{D}\ =\ k(V_{GS}\ -\ V_{T})^{2} \label{8} \]Solving for the threshold voltage VT results in \ref{9}.\[ V_{T}\ =\ V_{GS}\ -\ \sqrt{\frac{I_{D}}{k}} \label{9} \]Cut-off region: when the gate-source voltage, VGS, is below the threshold voltage VT, the charge carriers in the channel are not available, “cutting off” the charge flow. Power dissipation for MOSFETs can also be calculated in any region using \ref{5}, as in the JFET case. The typical I-V characteristics for the whole family of FETs seen in are plotted in . From we can see how the doping schemes that lead to enhancement and depletion are displaced along the VGS axis. In addition, from the plot the ON or OFF state can be determined for a given gate-source voltage, where (+) is positive, (0) is zero, and (-) is negative, as seen in Table \(\PageIndex{1}\). This page titled 10.2: Measuring Key Transport Properties of FET Devices is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
526
2.1: Melting Point Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.01%3A_Melting_Point_Analysis
Melting point (Mp) is a quick and easy analysis that may be used to qualitatively identify relatively pure samples (approximately <10% impurities). It is also possible to use this analysis to quantitatively determine purity. Melting point analysis, as the name suggests, characterizes the melting point, a stable physical property, of a sample in a straightforward manner, which can then be used to identify the sample.Although different designs of apparatus exist, they all have some sort of heating or heat transfer medium with a control, a thermometer, and often a backlight and magnifying lens to assist in observing melting ). Most models today utilize capillary tubes containing the sample submerged in a heated oil bath. The sample is viewed with a simple magnifying lens. Some new models have digital thermometers and controls and even allow for programming. Programming allows more precise control over the starting temperature, ending temperature and the rate of change of the temperature.For melting point analysis, preparation is straight forward. The sample must be thoroughly dried and relatively pure ( <10% impurities). The dry sample should then be packed into a melting point analysis capillary tube, which is simply a glass capillary tube with only one open end. Only 1 to 3 mm of sample is needed for sufficient analysis. The sample needs to be packed down into the closed end of the tube. This may be done by gently tapping the tube or dropping it upright onto a hard surface ). Some apparatuses have a vibrator to assist in packing the sample. Finally the tube should be placed into the machine. Some models can accommodate multiple samples.Performing analysis is different from machine to machine, but the overall process is the same ). If possible, choose a starting temperature, ending temperature, and rate of change of temperature. If the identity of the sample is known, base the starting and ending temperatures from the known melting point of the chemical, providing margins on both sides of the range. If using a model without programming, simply turn on the machine and monitor the rate of temperature change manually. A video discussing sample preparation, recording data and melting point analysis in general. Made by Indiana University-Purdue University Indianapolis chemistry department.Visually inspect the sample as it heats. Once melting begins, note the temperature. When the sample is completely melted, note the temperature again. That is the melting point range for the sample. Pure samples typically have a 1 - 2 °C melting point range, however, this may be broadened due to colligative properties.There are two primary uses of melting point analysis data. The first is for qualitative identification of the sample, and the second is for quantitative purity characterization of the sample.For identification, compare the experimental melting point range of the unknown to literature values. There are several vast databases of these values. Obtain a pure sample of the suspected chemical and mix a small amount of the unknown with it and conduct melting point analysis again. If a sharp melting point range is observed at similar temperatures to the literature values, then the unknown has likely been identified correctly. 
Conversely, if the melting point range is depressed or broadened, which would be due to colligative properties, then the unknown was not successfully identified.To characterize purity, first the identity of the solvent (the main constituent of the sample) and the identity of the primary solute need to be known. This may be done using other forms of analysis, such as gas chromatography-mass spectroscopy coupled with a database. Because melting point depression is unique between chemicals, a mixed melting curve comparing molar fractions of the two constituents with melting point needs to either be obtained or prepared ). Simply prepare standards with known molar fraction ratios, then perform melting point analysis on each standard and plot the results. Compare the melting point range of the experimental sample to the curve to identify the approximate molar fractions of the constituents. This sort of purity characterization cannot be performed if there are more than two primary components to the sample.Melting point analysis is fairly specific and accurate given its simplicity. Because melting point is a unique physical characteristic of a substance, melting point analysis does have high specificity. Although, many substances have similar melting points, so having an idea of possible chemicals in mind can greatly narrow down the choices. The thermometers used are also accurate. However, melting point is dependent on pressure as well, so experimental results can vary from literature values, especially at extreme locations, i.e., places of high altitude. The biggest source of error stems from the visual detection of melting by the experimenter. Controlling the change rate and running multiple trials can lessen the degree of error introduced at this step.Melting point analysis is a quick, relatively easy, and inexpensive preliminary analysis if the sample is already mostly pure and has a suspected identity. Additionally, analysis requires small samples only.As with any analysis, there are certain drawbacks to melting point analysis. If the sample is not solid, melting point analysis cannot be done. Also, analysis is destructive of the sample. For qualitative identification analysis, there are now more specific and accurate analyses that exist, although they are typically much more expensive. Also, samples with more than one solute cannot be analyzed quantitatively for purity.This page titled 2.1: Melting Point Analysis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
527
2.2: Molecular Weight Determination
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.02%3A_Molecular_Weight_Determination
The cryoscopic method was formally introduced in the 1880s when François-Marie Raoult published how solutes depressed the freezing points of various solvents such as benzene, water, and formic acid. He concluded from his experimentation "if one molecule of a substance can be dissolved in one-hundred molecules of any given solvent then the solvent temperature is lowered by a specific temperature increment". Based on Raoult’s research, Ernst Otto Beckmann invented the Beckmann thermometer and the associated freezing-point apparatus, which was a significant improvement in measuring freezing-point depression values for a pure solvent. The simplicity, ease, and accuracy of this apparatus have allowed it to remain a current standard, with few modifications, for molecular weight determination of unknown compounds.The work of Raoult, Beckmann, and many other investigators has developed into a physical chemistry technique that is currently applied to a vast range of disciplines, from food science to petroleum fluids. For example, measured cryoscopic molecular weights of crude oil are used to predict the viscosity and surface tension needed for fluid flow calculations in pipelines.Freezing point depression is a colligative property in which the freezing temperature of a pure solvent decreases in proportion to the number of solute molecules dissolved in the solvent. Knowledge of the mass of the added solute and of the freezing point of the pure solvent permits an accurate calculation of the molecular weight of the solute.The freezing point depression of a non-ionic solution is described by \ref{1}, where ∆Tf is the difference between the initial and final temperature of the pure solvent, Kf is the freezing point depression constant for the pure solvent, and m (moles solute/kg solvent) is the molality of the solution.\[\Delta T _ { f } = K _ { f } m \label{1} \]For an ionic solution, shown in \ref{2}, the dissociated particles must be accounted for with the number of solute particles per formula unit, \(i\) (the van’t Hoff factor).\[\Delta T _ { f } = K _ { f } m i\ \label{2} \]For cryoscopy, the apparatus to measure freezing point depression of a pure solvent may be representative of the Beckmann apparatus previously shown in . The apparatus consists of a test tube containing the solute dissolved in a pure solvent along with a stir bar or magnetic wire, closed with a rubber stopper encasing a mercury thermometer. The test tube component is immersed in an ice-water bath in a beaker. An example of the apparatus is shown in . The rubber stopper and stir bar/wire stirrer are not shown in the figure.The cryoscopic method may be used for a wide range of samples with various degrees of polarity. The solute and solvent selection should follow the premise of "like dissolves like," or, in terms of Raoult’s principle, the dissolution of one molecule of solute in one hundred molecules of solvent. The most common solvents, such as benzene, are generally selected because they are unreactive, volatile, and miscible with many compounds.Table \(\PageIndex{1}\) shows the cryoscopic constants (Kf) for the common solvents used for cryoscopy.
A complete list of Kf values is available in Knovel Critical Tables.The detailed procedure used for cryoscopy is summarized below: allow the solution to stir continuously to avoid supercooling; the observed freezing point of the solution is reached when the temperature reading remains constant.Table \(\PageIndex{2}\) represents an example of a data set collected for cryoscopy.Calculate the freezing point (Fpt) depression of the solution (ΔTf) from \ref{3}.\[\Delta T_{f} =\ (Fpt\ of\ pure\ solvent)\ -\ (Fpt\ of\ solution) \label{3} \]\[\Delta T_{f} = \ 6.5^{\circ}C\ -\ 4.2^{\circ}C \nonumber \]\[\Delta T_{f} =\ 2.3^{\circ}C \nonumber \]Calculate the molal concentration, m, of the solution using the freezing point depression and Kf, \ref{4}.\[ \Delta T_{f} = K_{f}m \label{4} \]\[m = (2.3^{\circ}C)/(20.2^{\circ}C/molal) \nonumber \]\[m = 0.113\ molal \nonumber \]\[m = g(solute)/kg(solvent) \nonumber \]Calculate the MW of the unknown sample, taking i = 1 for covalent compounds in \ref{2}.\[M_{W} =\frac{K_{f}(g\ solute)}{\Delta T_{f} (kg\ solvent)} \nonumber \]\[M_{W} = \frac{20.2^{\circ}C \cdot kg/mol \times 0.405\ g}{2.3^{\circ}C \times 0.00903\ kg} \nonumber \]\[M_{W} =\ 393\ g/mol \nonumber \]1. Nicotine is an extracted pale yellow oil from tobacco leaves that dissolves in water at temperatures less than 60 °C. What is the molality of nicotine in an aqueous solution that begins to freeze at -0.445 °C? See Table \(\PageIndex{1}\) for Kf values. 2. If the solution used in Problem 1 is obtained by dissolving 1.200 g of nicotine in 30.56 g of water, what is the molar mass of nicotine? 3. What would be the freezing point depression when 0.500 molal of Ca(NO3)2 is dissolved in 60 g of water? 4. Calculate the mass in grams of Ca(NO3)2 that must be added to 60 g of water to achieve the freezing point depression from Problem 3.
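The worked example above can be reproduced in a few lines of code. The sketch below (illustrative only, not part of the original text) simply re-applies \ref{3} and \ref{4} with the example values: Kf = 20.2 °C·kg/mol, 0.405 g of solute in 9.03 g of solvent.

```python
# Minimal sketch of the cryoscopic molecular-weight calculation worked above.
k_f = 20.2              # degC*kg/mol, cryoscopic constant of the solvent
fpt_pure = 6.5          # degC, freezing point of the pure solvent
fpt_solution = 4.2      # degC, freezing point of the solution
mass_solute = 0.405     # g of unknown (covalent) solute, so i = 1
mass_solvent = 0.00903  # kg of solvent

delta_tf = fpt_pure - fpt_solution           # Eq. (3)
molality = delta_tf / k_f                    # Eq. (4)
mw = (k_f * mass_solute) / (delta_tf * mass_solvent)

print(f"dT_f     = {delta_tf:.1f} degC")
print(f"molality = {molality:.3f} mol/kg")
print(f"MW       = {mw:.0f} g/mol")   # ~393-394 g/mol, matching the example to rounding
```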
Knowledge of the molecular weight of polymers is very important because the physical properties of macromolecules are affected by their molecular weight. For example, shows the interrelation between molecular weight and strength for a typical polymer. Dependence of mechanical strength on polymer molecular weight. Adapted from G. Odian, Principles of Polymerization, 4th edition, Wiley-Interscience, New York. The melting points of polymers also depend slightly on their molecular weight. shows the relationship between molecular weight and melting temperature of polyethylene ). Most linear polyethylenes have melting temperatures near 140 °C. The approach to the theoretical asymptote, that is, a line whose distance to a given curve tends to zero, is indicative that a theoretical polyethylene of infinite molecular weight (i.e., M = ∞) would have a melting point of 145 °C.The molecular weight-melting temperature relationship for the alkane series. Adapted from L. H. Sperling, Introduction to Physical Polymer Science, 4th edition, Wiley-Interscience, New York.There are several ways to express the molecular weight of polymers: the number average molecular weight, the weight average molecular weight, the Z-average molecular weight, the viscosity average molecular weight, and the distribution of molecular weight.The number average molecular weight reflects the number of particles present; it is the total weight of polymer, \ref{5}, divided by the number of polymer molecules, \ref{6}. The number average molecular weight (Mn) is given by \ref{7}, where Mi is the molecular weight of species i and Ni is the number of molecules of that molecular weight.\[ Total\ weight =\ \Sigma _{i=1} ^{∞} M_{i} N_{i} \label{5} \]\[ Total\ number = \ \Sigma _{i=1} ^{∞} N _{i} \label{6} \]\[ M_{n} =\frac{ \Sigma _{i=1} ^{∞} M_{i} N_{i}}{\Sigma _{i=1} ^{∞} N _{i}} \label{7} \]Consider a polymer sample comprising 5 moles of polymer molecules with a molecular weight of 40,000 g/mol and 15 moles of polymer molecules with a molecular weight of 30,000 g/mol.The weight average molecular weight (Mw) reflects the mass of the particles present. Mw is defined by \ref{8}, where Mi is the molecular weight of species i and Ni is the number of molecules of that molecular weight.\[ M_{W} =\frac{\Sigma _{i=1} ^{∞} N_{i} (M_{i})^{2}}{\Sigma _{i=1} ^{∞} N_{i} M_{i}} \label{8} \]Consider the polymer described in the previous problem. Calculate the MW for a polymer sample comprising 9 moles of polymer molecules with a molecular weight of 30,000 g/mol and 5 moles of polymer molecules with a molecular weight of 50,000 g/mol.The Z-average molecular weight (Mz) is measured in some sedimentation equilibrium experiments and is not as commonly used as the other averages. Ultracentrifugation techniques are employed to determine Mz. Mz emphasizes the largest particles and is defined by the following expression, where Mi is the molecular weight and Ni is the number of molecules of that weight.\[ M_{Z} =\frac{\Sigma N_{i} M_{i}^{3}}{\Sigma N_{i} M_{i}^{2}} \nonumber \]Consider the polymer described in the previous problem.Another way to measure the average molecular weight of polymers is from the viscosity of the solution. The viscosity of a polymer solution depends on the concentration and molecular weight of the polymer. Viscosity techniques are common because they are experimentally simple. The viscosity average molecular weight is defined by \ref{9}, where Mi is the molecular weight, Ni is the number of molecules, and a is a constant which depends on the polymer-solvent pair used in the viscosity experiments. When a is equal to 1, Mv is equal to the weight average molecular weight; when it is not equal to 1, Mv lies between the weight average and the number average molecular weights.\[ M_{V}\ =\ \left(\frac{\Sigma N_{i} M_{i} ^{1+a}}{\Sigma N_{i} M_{i}}\right)^{\frac{1}{a}} \label{9} \]The molecular weight distribution is one of the important characteristics of a polymer because it affects polymer properties. A typical molecular weight distribution of a polymer is shown in \(\PageIndex{6}\). There are various molecular weights in the range of the curve. The distribution of sizes in a polymer sample is not totally defined by its central tendency; the width and shape of the distribution must also be known. The various molecular weight averages always follow the order given in \ref{10}, with the equalities holding only when all polymer molecules in the sample have the same molecular weight.\[ M_{N} \leq M_{V} \leq M_{W} \leq M_{Z} \leq M_{Z+1} \label{10} \]
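As a quick numerical check on these definitions, the averages for the example sample above can be computed directly. The short Python sketch below is illustrative only; the Mark-Houwink-type exponent a = 0.7 is an assumed value, not one given in the text.

```python
import numpy as np

# Example sample from above: 5 mol of 40,000 g/mol chains and
# 15 mol of 30,000 g/mol chains.
N = np.array([5.0, 15.0])              # moles of chains of each length
M = np.array([40_000.0, 30_000.0])     # molecular weights, g/mol

M_n = np.sum(N * M) / np.sum(N)              # Eq. (7), number average
M_w = np.sum(N * M**2) / np.sum(N * M)       # Eq. (8), weight average
M_z = np.sum(N * M**3) / np.sum(N * M**2)    # z-average
a = 0.7                                      # assumed exponent for Eq. (9)
M_v = (np.sum(N * M**(1 + a)) / np.sum(N * M))**(1 / a)

print(f"M_n = {M_n:,.0f} g/mol")   # 32,500
print(f"M_w = {M_w:,.0f} g/mol")   # ~33,077
print(f"M_z = {M_z:,.0f} g/mol")   # ~33,721
print(f"M_v = {M_v:,.0f} g/mol")   # between M_n and M_w, consistent with Eq. (10)
```

The printed values respect the ordering Mn ≤ Mv ≤ Mw ≤ Mz of \ref{10}.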
Gel permeation chromatography (GPC) is also called size exclusion chromatography. It is a widely used method to determine the molecular weight distribution of high molecular weight species. In this technique, substances separate according to their molecular size: large molecules elute first, followed by the smaller molecules. The sample is injected into the mobile phase, and the mobile phase then enters the columns. Retention time is the length of time that a particular fraction remains in the column. As shown in , as the mobile phase passes through the porous particles, the separation between large and small molecules increases. GPC gives the full molecular weight distribution, but its cost is high.According to the basic theory of GPC, the fundamental quantity measured in chromatography is the retention volume, \ref{11}, where V0 is the mobile phase (void) volume, Vp is the volume of the stationary phase, and K is a distribution coefficient related to the size and type of the molecules.\[V_{e} = V_{0} + V_{p} K \label{11} \]The essential features of gel permeation chromatography are shown in . Solvent leaves the solvent supply and is pumped through a filter. The desired flow through the sample column is set by the sample control valves, and the reference flow is adjusted so that flow through the reference column and flow through the sample column reach the detector in a common front. The reference column is used to remove any slight impurities in the solvent. A detector located at the end of the columns determines the amount of sample eluting; detectors may also be used to continuously verify the molecular weight of species eluting from the column. The solvent volume flow is likewise monitored to provide a means of characterizing the molecular size of the eluting species.As an example, consider the block copolymer of ethylene glycol (PEG, ) and poly(lactide) (PLA, ), i.e., . The first step starts with a sample of PEG with an Mn of 5,700 g/mol. After polymerization, the molecular weight increases because of the progress of the lactide polymerization initiated from the end of the PEG chain. The varying compositions of PEG-PLA shown in Table \(\PageIndex{3}\) can be detected by GPC ).One of the most widely used methods to characterize the molecular weight is light scattering. When polarizable particles are placed in the oscillating electric field of a beam of light, light scattering occurs. As light passes through a polymer solution, it loses energy because of absorption, conversion to heat, and scattering. The intensity of the scattered light depends on the concentration, size, and polarizability of the particles, with a proportionality constant that depends on the molecular weight. shows light scattering off a particle in solution.A schematic of a laser light-scattering instrument is shown in . A major challenge of light scattering is preparing perfectly clear solutions, which is usually accomplished by ultracentrifugation. The solution should be as clear and dust-free as possible in order to determine the absolute molecular weight of the polymer. The advantages of this method are that it requires no calibration to obtain the absolute molecular weight and that it can give information about the shape of the molecules as well as Mw. It can also be performed rapidly with small amounts of sample. The weaknesses of the method are the high cost of the instrument and the often difficult clarification of the solutions.The weight average molecular weight of scattering polymers in solution is related to their light scattering properties by \ref{12}, where K is an optical constant defined by \ref{13}, C is the solution concentration, R(θ) is the reduced Rayleigh ratio, P(θ) is the particle scattering function, θ is the scattering angle, A2 and A3 are the osmotic virial coefficients, n0 is the solvent refractive index, λ is the light wavelength, and Na is Avogadro’s number.
The particle scattering function is given by \ref{14}, where Rz is the radius of gyration.\[ KC / R( \theta )\ = \ \frac{1}{M_{W}} \left( P( \theta) \ +\ 2A_{2}C\ +\ 3A_{3}C^{2}\ +\ \cdots \right) \label{12} \]\[ K\ =\ 2 \pi ^{2}n_{0}^{2}(dn/dC)^{2}/N_{a} \lambda ^{4} \label{13} \]\[ 1/(P(\theta )) \ =\ 1+\frac{16 \pi ^{2} n_{0} ^{2} ( R _{z} ^{2} )\sin ^{2} ( \theta /2)}{3 \lambda ^{2}} \label{14} \]The weight average molecular weight of a polymer is found from extrapolation of data in the form of a Zimm plot ( ). Experiments are performed at several angles and at least 4 different concentrations. The straight-line extrapolations provide Mw.X-rays are a form of electromagnetic radiation with wavelengths between 0.001 nm and 0.2 nm. X-ray scattering is particularly useful for semicrystalline polymers, which include thermoplastics, thermoplastic elastomers, and liquid crystalline polymers. Two types of X-ray scattering are used for polymer studies: wide-angle X-ray scattering (WAXS) and small-angle X-ray scattering (SAXS). Schematic representation of X-ray scattering is shown in . At least two SAXS curves are required to determine the molecular weight of a polymer. The SAXS procedure to determine the molecular weight of a polymer sample in a monomeric or multimeric state in solution requires the following conditions: (a) the system should be monodisperse, (b) the solution should be dilute enough to avoid spatial correlation effects, (c) the solution should be isotropic, and (d) the polymer should be homogeneous.Osmometry is applied to determine the number average molecular weight (Mn). There are two types of osmometer: Vapor pressure osmometry measures vapor pressure indirectly by measuring the change in temperature of a polymer solution on dilution by solvent vapor and is generally useful for polymers with Mn below 10,000–40,000 g/mol. When the molecular weight is higher than that limit, the quantity being measured becomes too small to detect. A typical vapor pressure osmometer is shown in . Because the vapor pressure change is very small, it is measured indirectly by using thermistors to measure voltage changes caused by changes in temperature.Membrane osmometry is an absolute technique to determine Mn ). The solvent is separated from the polymer solution with a semipermeable membrane that is strongly held between the two chambers. One chamber is sealed by a valve with a transducer attached to a thin stainless steel diaphragm, which permits continuous measurement of the pressure in the chamber. Membrane osmometry is useful to determine Mn from about 20,000-30,000 g/mol up to about 500,000 g/mol. When the Mn of a polymer sample is more than 500,000 g/mol, the osmotic pressure of the polymer solution becomes too small to measure an absolute number average molecular weight. In this technique, there are problems with membrane leakage and asymmetry. The advantages of this technique are that it does not require calibration and it gives an absolute value of Mn for polymer samples.The properties of polymers depend on their molecular weight. There are different kinds of molecular weight averages, and each can be measured by a different technique. A summary of these techniques and the molecular weight averages they provide is shown in Table \(\PageIndex{4}\).Size exclusion chromatography (SEC) is a useful technique that is specifically applicable to high-molecular-weight species, such as polymers. It is a method to sort molecules according to their sizes in solution. The sample solution is injected into the column, which is filled with rigid, porous materials, and is carried by the solvent through the packed column.
The sizes of molecules are determined by the pore size of the packing particle in the column within which the separation occurs.For polymeric materials, the molecular weight (Mw) or molecular size plays a key role in determining the mechanical, bulk, and solution properties of materials. It is known that the sizes of polymeric molecules depend on their molecular weights, side chain configurations, molecular interaction, and so on. For example, the exclusion volume of polymers with rigid side group is larger than those with soft long side chains. Therefore, in order to determine the molecular weight and molecular weight distribution of a polymer, one of the most widely applied methods is gel-permeation chromatography.Gel permeation chromatography (GPC) is a term used for when the separation technique size exclusion chromatography (SEC) is applied to polymers.The primary purpose and use of the SEC technique is to provide molecular weight distribution information about a particular polymeric material. Typically, in about 30 minutes using standard SEC, the complete molecular weight distribution of a polymer as well as all the statistical information of the distribution can be determined. Thus, SEC has been considered as a technique essentially supplanting classical molecular weight techniques. To apply this powerful technique, there is some basic work that needs to be done before its use. The selection of an appropriate solvent and the column, as well as the experimental conditions, are important for proper separation of a sample. Also, it is necessary to have calibration curves in order to determine the relative molecular weight from a given retention volume/time.It is well known that both the majority of natural and synthetic polymers are polydispersed with respect to molar mass. For synthetic polymers, the more mono-dispersed a polymer can be made, the better the understanding of its inherent properties will be obtained.A polymer is a large molecule (macromolecule) composed of repeating structural units typically connected by covalent chemical bonds. Polymers are common materials that are widely used in our lives. One of the most important features which distinguishes most synthetic polymers from simple molecular compounds is the inability to assign an exact molar mass to a polymer. This is a consequence of the fact that during the polymerization reaction the length of the chain formed is determined by several different events, each of which have different reaction rates. Hence, the product is a mixture of chains of different length due to the random nature of growth. In addition, some polymers are also branched (rather than linear) as a consequence of alternative reaction steps. The molecular weight (Mw) and molecular weight distribution influences many of the properties of polymers:Consequently, it is important to understand how to determine the molecular weight and molecular weight distribution.Simpler pure compounds contain the same molecular composition for the same species. For example, the molecular weight of any sample of styrene will be the same (104.16 g/mol). In contrast, most polymers are not composed of identical molecules. The molecular weight of a polymer is determined by the chemical structure of the monomer units, the lengths of the chains and the extent to which the chains are interconnected to form branched molecules. 
Because virtually all polymers are mixtures of many large molecules, we have to resort to averages to describe polymer molecular weight.The polymers produced in polymerization reactions have lengths which are distributed according to a probability function that is governed by the polymerization reaction. To define a particular polymer weight average, the average molecular weight Mavg is defined by \ref{15}, where Ni is the number of molecules with molecular weight Mi.\[ M_{avg} \ =\ \frac{\Sigma N_{i} M_{i}^{a}}{\Sigma N_{i} M_{i} ^{a-1}} \label{15} \]There are several possible ways of reporting polymer molecular weight. Three commonly used molecular weight descriptions are the number average (Mn), weight average (Mw), and z-average molecular weight (Mz). All three correspond to different values of the constant a in \ref{15} and are shown in .When a = 1, the number average molecular weight is obtained, \ref{16}.\[ M_{n,\ avg} \ =\ \frac{\Sigma N_{i}M_{i}}{\Sigma N_{i}}\ =\ \frac{w}{N} \label{16} \]When a = 2, the weight average molecular weight is obtained, \ref{17}.\[ M_{w,\ avg} \ =\ \frac{\Sigma N_{i}M_{i}^{2}}{\Sigma N_{i}M_{i}}\ =\ \frac{\Sigma N_{i}M_{i}^{2}}{w} \label{17} \]When a = 3, the z-average molecular weight is obtained, \ref{18}.\[ M_{z,\ avg} \ =\ \frac{\Sigma N_{i}M_{i}^{3}}{\Sigma N_{i}M_{i}^{2}} \label{18} \]For bulk properties, the weight average molecular weight, Mw, is the most useful one, because it fairly accounts for the contributions of different sized chains to the overall behavior of the polymer and correlates best with most of the physical properties of interest.Various methods have been published to determine each of these three primary average molecular weights. For instance, a colligative method, such as osmotic pressure, effectively counts the number of molecules present and provides a number average molecular weight regardless of the shape or size of the polymer molecules. The classical van’t Hoff equation for the osmotic pressure of an ideal, dilute solution is shown in \ref{19}.\[ \frac{\pi }{c}\ =\ \frac{RT}{M_{n}} \label{19} \]The weight average molecular weight of a polymer in solution can be determined by either measuring the intensity of light scattered by the solution or studying the sedimentation of the solute in an ultracentrifuge. Light scattering, which depends on the size rather than the number of molecules, yields the weight average molecular weight. This measurement relies on concentration fluctuations, which are the main source of the light scattered by a polymer solution. The intensity of the light scattering of a polymer solution is often expressed by its turbidity τ, which is given by Rayleigh’s law in \ref{20}, where iθ is the scattered intensity at a single angle θ, r is the distance from the scattering particle to the detection point, and I0 is the incident intensity.\[ \tau\ =\ \frac{16\pi i_{\Theta} r^{2}}{3I_{0}(1+cos^{2}\Theta )} \label{20} \]The intensity scattered by molecules (Ni) of molecular weight (Mi) is proportional to NiMi². Thus, the total light scattered by all molecules is described in \ref{21}, where c is the total weight of the sample ∑NiMi.\[ \frac{\tau}{c}\ \propto\ \frac{\Sigma N_{i}M_{i}^{2}}{\Sigma N_{i}M_{i}}\ =\ M_{W,\ avg} \label{21} \]
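To make the colligative route concrete, the following minimal Python sketch applies the van’t Hoff relation \ref{19} to an assumed, illustrative osmotic-pressure measurement; the concentration, pressure, and temperature values are hypothetical and not data from the text.

```python
# Estimating M_n from an ideal-dilute osmotic pressure measurement, Eq. (19).
R = 0.082057                # L*atm/(mol*K)
T = 298.15                  # K
c = 5.0                     # polymer concentration, g/L (assumed)
osmotic_pressure = 1.5e-3   # atm, measured osmotic pressure (assumed)

# pi/c = RT/M_n  =>  M_n = R*T*c / pi
M_n = R * T * c / osmotic_pressure
print(f"M_n ~ {M_n:,.0f} g/mol")    # ~82,000 g/mol for these illustrative values
```

In practice π/c is measured at several concentrations and extrapolated to zero concentration before Mn is evaluated.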
The polydispersity index (PDI) is a measure of the distribution of molecular mass in a given polymer sample. As shown in , it follows from the definitions that Mw ≥ Mn. The equality of Mw and Mn would correspond to a perfectly uniform (monodisperse) sample. The ratio of these average molecular weights is often used as a guide to the dispersity of the chain lengths in a polymer sample. The greater Mw/Mn is, the greater the dispersity is.The properties of a polymer sample are strongly dependent on the way in which the weights of the individual molecules are distributed about the average. The ratio Mw/Mn gives sufficient information to characterize the distribution when the mathematical form of the distribution curve is known.Generally, materials with a narrow molecular weight distribution serve as models for much of the work aimed at understanding material behavior. For example, polystyrene and its block copolymer polystyrene-b-polyisoprene have quite narrow distributions. As a result, narrow molecular weight distributions are a necessary requirement when studying behavior such as the self-assembly of block copolymers. Nonetheless, there are still many open questions about the influence of polydispersity; research on self-assembly, one of the most interesting fields in polymer science, shows that polydispersity cannot simply be ignored.In SEC, sample components migrate through the column at different velocities and elute separately from the column at different times. In liquid chromatography and gas chromatography, as a solute moves along with the carrier fluid, it is at times held back either by the surface of the column packing, by the stationary phase, or by both. Unlike gas chromatography (GC) and liquid chromatography (LC), in SEC the separation is governed by molecular size, or more precisely molecular hydrodynamic volume, and is not altered by the type of mobile phase. The smallest molecules are able to penetrate deeply into pores whereas the largest molecules are excluded by the smaller pore sizes. shows the regular instrumental setup of SEC.The properties of the mobile phase are still important: it should have a strong affinity for the stationary phase and dissolve the sample well. Good solubility is needed so that the polymer is suspended in solution as a well-solvated coil as the mixture of solutes of different sizes passes through the column packed with porous particles. As shown in , which depicts the general idea of size separation by SEC, the main setup of SEC involves three elements: the stationary phase (column), the mobile phase (solvent), and sample preparation.Solvent selection for SEC involves a number of considerations, such as convenience, sample type, column packing, operating variables, safety, and purity.As far as samples are concerned, the solvents used for the mobile phase of SEC are limited to those that meet several criteria. Therefore, only certain solvents qualify, such as THF, chlorinated hydrocarbons (chloroform, methylene chloride, dichloroethane, etc.), and aromatic hydrocarbons (benzene, toluene, trichlorobenzene, etc.).Normally, high-purity (HPLC-grade) solvent is recommended. The reasons are to avoid suspended particulates that may abrade the solvent pumping system or cause plugging of small-particle columns, to avoid impurities that may generate baseline noise, and to avoid impurities that become concentrated due to evaporation of solvent.Column selection for SEC depends mainly on the desired molecular weight range of separation and the nature of the solvents and samples. Solute molecules should be separated solely by their size by the porous gel, without interaction with the packing materials.
Better column efficiencies and separations can be obtained with small-particle packing in columns and high diffusion rates for sample solutes. Furthermore, optimal performance of an SEC packing material involves high resolution and low column backpressure. A compatible solvent and column must be chosen because, for example, an organic solvent is used both to swell the organic column packing and to dissolve and separate the samples.Conveniently, columns suited to the various types of samples are now usually available from manufacturers, who provide information such as maximum tolerated flow rates, backpressure tolerances, recommended sample concentrations, injection volumes, etc. Nonetheless, users should keep a few practical points in mind when using columns.The sample solutions should be prepared at dilute concentrations (less than 2 mg/mL) for several reasons. Polymer samples must be dissolved in the same solvent as used for the mobile phase, except in some special cases. A good solvent can dissolve a sample in any proportion over a range of temperatures. Dissolution is a slow process because the rate-determining step is solvent diffusion into the polymer to produce swollen gels; gradual disintegration of the gels then turns the sample-solvent mixture into a true solution. Agitation and warming of the mixture are useful ways to speed up sample preparation.It is recommended to filter the sample solutions before injecting them into columns or storing them in sample vials in order to avoid clogging and excessively high pressure. If excessively high pressure or clogging does occur because the sample solution is too concentrated, raising the column temperature will reduce the viscosity of the mobile phase and may help redissolve precipitated or adsorbed solutes in the column. Back flushing of the columns should only be used as a last resort.The size exclusion separation mechanism is based on the effective hydrodynamic volume of the molecule, not the molecular weight, and therefore the system must be calibrated using standards of known molecular weight and homogeneous chemical composition. The elution curve of the sample is then compared with the calibration curve to obtain information relative to the standards; a further step is required to convert this relative molecular weight into the absolute molecular weight of the polymer.The purpose of calibration in SEC is to define the relationship between molecular weight and retention volume/time in the chosen permeation range of the column set and to calculate molecular weights relative to the standard molecules. Several calibration methods are commonly employed in modern SEC: direct standard calibration, polydisperse standard calibration, and universal calibration.The most commonly used calibration method is direct standard calibration. In the direct standard calibration method, narrowly distributed standards of the same polymer being analyzed are used. The narrow molecular weight standards that are normally commercially available are polystyrene (PS). The molecular weights of the standards are measured originally by membrane osmometry for the number-average molecular weight and by light scattering for the weight-average molecular weight, as described above. The retention volume at the peak maximum of each standard is equated with its stated molecular weight.The molecular weight and molecular weight distribution can be determined from the calibration curves as described above.
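The direct standard calibration idea can be sketched in a few lines of code. In the Python example below (illustrative only; all retention volumes, standard molecular weights, and detector heights are made-up values), log10(M) of narrow standards is fitted linearly against retention volume, the sample chromatogram is converted slice by slice to molecular weight, and Mn and Mw are computed treating the concentration-detector signal as proportional to the mass eluting in each slice.

```python
import numpy as np

# --- Direct standard calibration: log10(M) vs retention volume ---
# Hypothetical narrow polystyrene standards (retention volume in mL, M in g/mol).
rv_std = np.array([12.0, 13.5, 15.0, 16.5, 18.0])
m_std = np.array([8.0e5, 2.0e5, 5.0e4, 1.2e4, 3.0e3])
slope, intercept = np.polyfit(rv_std, np.log10(m_std), 1)   # linear calibration

def mw_from_rv(rv):
    """Convert a retention volume to a (relative) molecular weight."""
    return 10 ** (slope * rv + intercept)

# --- Hypothetical sample chromatogram ---
# A concentration-sensitive (e.g. RI) detector height h_i is taken as
# proportional to the mass of polymer eluting in each retention-volume slice.
rv_sample = np.linspace(12.5, 17.5, 11)
h = np.array([0.02, 0.08, 0.20, 0.45, 0.80, 1.00, 0.85, 0.50, 0.22, 0.08, 0.02])

M_i = mw_from_rv(rv_sample)
M_n = np.sum(h) / np.sum(h / M_i)       # number average from mass-weighted slices
M_w = np.sum(h * M_i) / np.sum(h)       # weight average
print(f"M_n ~ {M_n:,.0f} g/mol, M_w ~ {M_w:,.0f} g/mol, PDI ~ {M_w / M_n:.2f}")
```

The resulting values are relative to polystyrene; as the text notes next, converting them to absolute molecular weights requires either matched standards or an absolute detector.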
But as the relationship between molecular weight and size depends on the type of polymer, the calibration curve depends on the polymer used, with the result that true molecular weight can only be obtained when the sample is the same type as calibration standards. As depicted, large deviations from the true molecular weight occur in the instance of branched samples because the molecular density of these is higher than in the linear chains.Light-scattering detector is now often used to overcome the limitations of conventional SEC. These signals depend only on concentration, not on molecular weight or polymer size. For instance, for LS detector, \ref{22} applies:\[LS\ Signal\ =\ K_{LS} \cdot (dn/dc)^{2}\cdot M_{W} \cdot c \label{22} \]Where KLS is an apparatus-specific sensitivity constant, dn/dc is the refractive index increment and c is concentration. Therefore, accurate molecular weight can be determined while the concentration of the sample is known without calibration curve.The syntheses of poly(3-hexylthiophene) are well developed during last decade. It is an attractive polymer due to its potential as electronic materials. Due to its excellent charge transport performances and high solubility, several studies discuss its further improvement such as making block copolymer even triblock copolymer. The details are not discussed here. However, the importance of molecular weight and molecular weight distribution is still critical.As shown in , they studied the mechanism of chain-growth polymerization and successfully produced low polydispersity P3HT. The figure also demonstrates that the molecule with larger molecular size/ or weight elutes out of the column earlier than those which has smaller molecular weight.The real molecular weight of P3HT is smaller than the molecular weight relative to polystyrene. In this case, the backbone of P3HT is harder compared with polystyrenes’ backbone because of the position of aromatic groups. It results in less flexibility. We can briefly judge the authentic molecular weight of the synthetic polymer according to its molecular structure.This page titled 2.2: Molecular Weight Determination is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
528
2.3: BET Surface Area Analysis of Nanoparticles
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.03%3A_BET_Surface_Area_Analysis_of_Nanoparticles
In the past few years, nanotechnology research has expanded out of the chemistry department and into the fields of medicine, energy, aerospace and even computing and information technology. With bulk materials, the surface area to volume ratio is insignificant in relation to the number of atoms in the bulk; however, when the particles are only 1 to 100 nm across, different properties begin to arise. For example, commercial grade zinc oxide has a surface area range of 2.5 to 12 m2/g while nanoparticle zinc oxide can have surface areas as high as 54 m2/g. The nanoparticles have superior UV blocking properties when compared to the bulk material, making them useful in applications such as sunscreen. Many useful properties of nanoparticles arise from their small size, making it very important to be able to determine their surface area.The BET theory was developed by Stephen Brunauer ), Paul Emmett ), and Edward Teller ) in 1938. The first letter of each author’s surname was taken to name this theory. The BET theory was an extension of the Langmuir theory, developed by Irving Langmuir ) in 1916.The Langmuir theory relates the monolayer adsorption of gas molecules ), also called adsorbates, onto a solid surface to the gas pressure of a medium above the solid surface at a fixed temperature by \ref{1}, where θ is the fractional coverage of the surface, P is the gas pressure, and α is a constant.\[ \Theta \ =\ \frac{\alpha \cdot P}{1\ +\ (\alpha \cdot P)} \label{1} \]The Langmuir theory is based on a set of simplifying assumptions about the surface and the adsorption process. The Langmuir theory has a few flaws that are addressed by the BET theory, which extends the Langmuir theory to multilayer adsorption ) with three additional assumptions.Adsorption is defined as the adhesion of atoms or molecules of gas to a surface. It should not be confused with absorption, in which a fluid permeates a liquid or solid. The amount of gas adsorbed depends on the exposed surface area but also on the temperature, gas pressure and strength of interaction between the gas and solid. In BET surface area analysis, nitrogen is usually used because of its availability in high purity and its strong interaction with most solids. Because the interaction between gaseous and solid phases is usually weak, the surface is cooled using liquid N2 to obtain detectable amounts of adsorption. Known amounts of nitrogen gas are then released stepwise into the sample cell. Relative pressures less than atmospheric pressure are achieved by creating conditions of partial vacuum. After the saturation pressure, no more adsorption occurs regardless of any further increase in pressure. Highly precise and accurate pressure transducers monitor the pressure changes due to the adsorption process. After the adsorption layers are formed, the sample is removed from the nitrogen atmosphere and heated to cause the adsorbed nitrogen to be released from the material and quantified. The data collected is displayed in the form of a BET isotherm, which plots the amount of gas adsorbed as a function of the relative pressure. There are five types of adsorption isotherms possible.Type I is a pseudo-Langmuir isotherm because it depicts monolayer adsorption ). A type I isotherm is obtained when P/Po < 1 and c > 1 in the BET equation, where P/Po is the partial pressure value and c is the BET constant, which is related to the adsorption energy of the first monolayer and varies from solid to solid.
The characterization of microporous materials, those with pore diameters less than 2 nm, gives this type of isotherm.A type II isotherm ) is very different from the Langmuir model. The flatter region in the middle represents the formation of a monolayer. A type II isotherm is obtained when c > 1 in the BET equation. This is the most common isotherm obtained when using the BET technique. At very low pressures, the micropores fill with nitrogen gas. At the knee, monolayer formation is beginning, and multilayer formation occurs at medium pressure. At the higher pressures, capillary condensation occurs.A type III isotherm ) is obtained when c < 1 and shows the formation of a multilayer. Because there is no asymptote in the curve, no monolayer is formed and BET is not applicable.Type IV isotherms ) occur when capillary condensation occurs. Gases condense in the tiny capillary pores of the solid at pressures below the saturation pressure of the gas. At the lower pressure regions, it shows the formation of a monolayer followed by a formation of multilayers. BET surface area characterization of mesoporous materials, which are materials with pore diameters between 2 - 50 nm, gives this type of isotherm.Type V isotherms ) are very similar to type IV isotherms and are not applicable to BET.The BET equation, \ref{2}, uses the information from the isotherm to determine the surface area of the sample, where X is the quantity of nitrogen adsorbed at a given relative pressure (P/Po), Xm is the monolayer capacity, which is the volume of gas adsorbed at standard temperature and pressure (STP), and C is the BET constant. STP is defined as 273 K and 1 atm.\[ \frac{1}{X[(P_{0}/P)-1]} = \frac{1}{X_{m}C} + \frac{C-1}{X_{m}C} (\frac{P}{P_{0}}) \label{2} \]Ideally five data points, with a minimum of three data points, in the P/P0 range 0.025 to 0.30 should be used to successfully determine the surface area using the BET equation. At relative pressures higher than 0.5, there is the onset of capillary condensation, and at relative pressures that are too low, only monolayer formation is occurring. When the BET equation is plotted, the graph should be linear with a positive slope. If such a graph is not obtained, then the BET method was insufficient in obtaining the surface area. The monolayer capacity, Xm, is obtained from the slope, s, and intercept, i, of the BET plot using \ref{3}, and the total surface area, S, is then calculated from \ref{4}, where Lav is Avogadro’s number, Am is the cross-sectional area of the adsorbate molecule (0.162 nm2 for N2), and Mv is the molar volume of the adsorbate gas.\[ X_{m}\ = \frac{1}{s\ +\ i} = \frac{C-1}{Cs} \label{3} \]\[S\ = \frac{X_{m} L_{av} A_{m}}{M_{v}} \label{4} \]Single-point BET can also be used by setting the intercept to 0 and ignoring the value of C. The data point at the relative pressure of 0.3 will match up the best with a multi-point BET. Single-point BET can be used alongside the more accurate multi-point BET to determine the appropriate relative pressure range for the multi-point measurement.
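The full multi-point BET workflow of \ref{2}, \ref{3} and \ref{4} can be illustrated with a short Python sketch. The isotherm points below are made-up values chosen to give a clean type II-like BET range; they are not data from the text.

```python
import numpy as np

# Hypothetical N2 adsorption data in the BET range (all values illustrative).
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])       # P/P0
v_ads = np.array([88.5, 101.9, 111.3, 120.2, 129.5, 139.6])  # cm^3 (STP) per g

# BET transform, Eq. (2): y = 1 / (X * [(P0/P) - 1])
y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))

s, i = np.polyfit(p_rel, y, 1)     # slope and intercept of the BET plot
X_m = 1.0 / (s + i)                # monolayer capacity, Eq. (3), cm^3 (STP)/g
C = s / i + 1.0                    # BET constant

# Surface area, Eq. (4): S = X_m * N_A * A_m / V_molar
N_A = 6.022e23        # Avogadro's number, 1/mol
A_m = 0.162e-18       # cross-sectional area of N2, m^2
V_molar = 22414.0     # molar volume of an ideal gas at STP, cm^3/mol
S = X_m * N_A * A_m / V_molar      # m^2 per g of sample

print(f"X_m ~ {X_m:.1f} cm^3(STP)/g, C ~ {C:.0f}, S ~ {S:.0f} m^2/g")
```

For these illustrative points the fit returns Xm of roughly 100 cm3/g and a surface area of roughly 435 m2/g; a real measurement simply substitutes the instrument's isotherm data.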
Prior to any measurement the sample must be degassed to remove water and other contaminants before the surface area can be accurately measured. Samples are degassed in a vacuum at high temperatures. The highest temperature possible that will not damage the sample’s structure is usually chosen in order to shorten the degassing time. IUPAC recommends that samples be degassed for at least 16 hours to ensure that unwanted vapors and gases are removed from the surface of the sample. Generally, samples that can withstand higher temperatures without structural changes require shorter degassing times. A minimum of 0.5 g of sample is required for the BET analysis to successfully determine the surface area.Samples are placed in glass cells to be degassed and analyzed by the BET machine. Glass rods are placed within the cell to minimize the dead space in the cell. Sample cells typically come in sizes of 6, 9 and 12 mm and come in different shapes. 6 mm cells are usually used for fine powders, 9 mm cells for larger particles and small pellets, and 12 mm cells are used for large pieces that cannot be further reduced. The cells are placed into heating mantles and connected to the outgas port of the machine.After the sample is degassed, the cell is moved to the analysis port ). Dewars of liquid nitrogen are used to cool the sample and maintain it at a constant temperature. A low temperature must be maintained so that the interaction between the gas molecules and the surface of the sample will be strong enough for measurable amounts of adsorption to occur. The adsorbate, nitrogen gas in this case, is injected into the sample cell with a calibrated piston. The dead volume in the sample cell must be calibrated before and after each measurement. To do that, helium gas is used for a blank run, because helium does not adsorb onto the sample.The BET technique has some disadvantages when compared to NMR, which can also be used to measure the surface area of nanoparticles. BET measurements can only be used to determine the surface area of dry powders. This technique requires a lot of time for the adsorption of gas molecules to occur. A lot of manual preparation is required.The BET technique was used to determine the surface areas of metal-organic frameworks (MOFs), which are crystalline compounds of metal ions coordinated to organic molecules. Possible applications of MOFs, which are porous, include gas purification and catalysis. An isoreticular MOF (IRMOF) with the chemical formula Zn4O(pyrene-1,2-dicarboxylate)3 ) was used as an example to see if BET could accurately determine the surface area of microporous materials. The predicted surface area was calculated directly from the geometry of the crystals and agreed with the data obtained from the BET isotherms. Data was collected at a constant temperature of 77 K and a type II isotherm ) was obtained.The isotherm data obtained over the relative pressure range of 0.05 to 0.3 are plugged into the BET equation, \ref{2}, to obtain the BET plot ).Using \ref{5}, the monolayer capacity is determined to be 391.2 cm3/g.\[ X_{m}\ = \frac{1}{(2.66\times 10^{-3})\ +\ (-5.212\times 10^{-5})} \label{5} \]Now that Xm is known, \ref{6} can be used to determine that the surface area is 1702.3 m2/g.\[S\ =\frac{391.2\ cm^{3} \times 0.162\ nm^{2} \times 6.02\times 10^{23}}{22.414\ L} \label{6} \]This page titled 2.3: BET Surface Area Analysis of Nanoparticles is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
529
2.4: Dynamic Light Scattering
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.04%3A_Dynamic_Light_Scattering
Dynamic light scattering (DLS), which is also known as photon correlation spectroscopy (PCS) or quasi-elastic light scattering (QLS), is a spectroscopy method used in the fields of chemistry, biochemistry, and physics to determine the size distribution of particles (polymers, proteins, colloids, etc.) in solution or suspension. In the DLS experiment, normally a laser provides the monochromatic incident light, which impinges onto a solution with small particles in Brownian motion. Then, through the Rayleigh scattering process, particles whose sizes are sufficiently small compared to the wavelength of the incident light scatter the incident light in all directions, with wavelengths and intensities that vary as a function of time. Since the scattering pattern of the light is highly correlated to the size distribution of the analyzed particles, size-related information about the sample can then be acquired by mathematically processing the spectral characteristics of the scattered light.Herein, a brief introduction to the basic theory of DLS is given, followed by descriptions of and guidance on the instrument itself and the sample preparation and measurement process. Finally, data analysis of the DLS measurement and the applications of DLS, as well as a comparison against other size-determination techniques, are shown and summarized.The theory of DLS can be introduced using a model system of spherical particles in solution. According to Rayleigh scattering ), when the particles in a sample have diameters smaller than the wavelength of the incident light, each particle scatters the incident light in all directions, with an intensity \(I\) determined by \ref{1}, where \(I_0\) and \(λ\) are the intensity and wavelength of the unpolarized incident light, \(R\) is the distance to the particle, \(θ\) is the scattering angle, \(n\) is the refractive index of the particle, and \(r\) is the radius of the particle.\[ I\ =\ I_{0} \frac{1\ +\cos^{2}\theta}{2R^{2}} \left(\frac{2\pi }{\lambda }\right)^{4}\left(\frac{n^{2}\ -\ 1}{n^{2}\ +\ 2}\right)^{2}r^{6} \label{1} \]If that scattered light is projected as an image onto a screen, it will generate a “speckle" pattern ); the dark areas represent regions where the scattered light from the particles arrives out of phase, interfering destructively, and the bright areas represent regions where the scattered light arrives in phase, interfering constructively.In practice, particle samples are normally not stationary but move randomly due to collisions with solvent molecules, as described by Brownian motion, \ref{2}, where \(\overline{(\Delta x)^{2}} \) is the mean squared displacement in time t, and D is the diffusion constant, which is related to the hydrodynamic radius a of the particle according to the Stokes-Einstein equation, \ref{3}, where kB is the Boltzmann constant, T is the temperature, and μ is the viscosity of the solution. Importantly, for a system undergoing Brownian motion, small particles should diffuse faster than large ones.\[ \overline{(\Delta x)^{2}}\ =\ 2D\Delta t \label{2} \]\[D\ =\frac{k_{B}T}{6\pi \mu a} \label{3} \]
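To put numbers on the Stokes-Einstein relation, the short Python sketch below (illustrative only; the particle radius and the viscosity of water at 25 °C are assumed values) evaluates \ref{3} and uses \ref{2} to estimate how far such a particle diffuses in a millisecond.

```python
import math

# Stokes-Einstein estimate for a hypothetical 50 nm radius sphere in water.
k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 298.15           # K
mu = 0.89e-3         # Pa*s, approximate viscosity of water at 25 degC (assumed)
a = 50e-9            # m, hydrodynamic radius of the particle (assumed)

D = k_B * T / (6 * math.pi * mu * a)      # Eq. (3), m^2/s
msd_1ms = 2 * D * 1e-3                    # Eq. (2), mean squared displacement in 1 ms

print(f"D = {D:.2e} m^2/s")                                   # ~4.9e-12 m^2/s
print(f"rms displacement in 1 ms ~ {math.sqrt(msd_1ms) * 1e9:.0f} nm")
```

A smaller assumed radius gives a proportionally larger D, which is exactly the size dependence that DLS exploits.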
As a result of the Brownian motion, the distance between particles is constantly changing, and this results in a Doppler shift between the frequency of the incident light and the frequency of the scattered light. Since the distance between particles also affects the phase overlap/interference of the scattered light, the brightness and darkness of the spots in the “speckle” pattern will in turn fluctuate in intensity as a function of time as the particles change position with respect to each other. Then, as the rate of these intensity fluctuations depends on how fast the particles are moving (smaller particles diffuse faster), information about the size distribution of particles in the solution can be acquired by processing the fluctuations of the intensity of scattered light. shows the hypothetical fluctuation of scattering intensity of larger particles and smaller particles.In order to mathematically process the fluctuation of intensity, there are several principles/terms to be understood. First, the intensity correlation function is used to describe the rate of change in scattering intensity by comparing the intensity \(I(t)\) at time \(t\) to the intensity \(I(t + τ)\) at a later time \((t + τ)\), and is quantified and normalized by \ref{4} and \ref{5}, where braces indicate averaging over \(t\).\[ G_{2} ( \tau ) =\ \langle I(t)I(t\ +\ \tau)\rangle \label{4} \]\[g_{2}(\tau )=\frac{\langle I(t)I(t\ +\ \tau)\rangle}{\langle I(t)\rangle ^{2}} \label{5} \]Second, since it is not possible to know how each particle moves from the fluctuation, the electric field correlation function is instead used to correlate the motion of the particles relative to each other, and is defined by \ref{6} and \ref{7}, where E(t) and E(t + τ) are the scattered electric fields at times t and t + τ. \[ G_{1}(\tau ) =\ \langle E(t)E(t\ +\ \tau )\rangle \label{6} \]\[g_{1}(\tau ) = \frac{\langle E(t)E(t\ +\ \tau)\rangle}{\langle E(t) E(t)\rangle} \label{7} \]For a monodisperse system undergoing Brownian motion, g1(τ) will decay exponentially with a decay rate Γ, which is related through the Brownian motion to the diffusivity by \ref{8}, \ref{9}, and \ref{10}, where q is the magnitude of the scattering wave vector and q2 reflects the distance the particle travels, n is the refractive index of the solution, and θ is the angle at which the detector is located.\[ g_{1}(\tau )=\ e^{- \Gamma \tau} \label{8} \]\[ \Gamma \ =\ Dq^{2} \label{9} \]\[ q = \frac{4\pi n}{\lambda } \sin\frac{\Theta }{2} \label{10} \]For a polydisperse system, however, g1(τ) can no longer be represented as a single exponential decay and must be represented as an intensity-weighted integral over a distribution of decay rates \(G(Γ)\) by \ref{11}, where G(Γ) is normalized, \ref{12}.\[ g_{1}(\tau )= \int ^{\infty}_{0} G(\Gamma )e^{-\Gamma \tau} d\Gamma \label{11} \]\[ \int ^{\infty}_{0} G(\Gamma ) d\Gamma\ =\ 1 \label{12} \]Third, the two correlation functions above can be related using the Seigert relationship, which is based on the principles of Gaussian random processes (which the scattered light usually follows) and can be expressed as \ref{13}, where β is a factor that depends on the experimental geometry, and B is the long-time value of g2(τ), which is referred to as the baseline and is normally equal to 1. shows the decay of g2(τ) for a small-size sample and a large-size sample.\[ g_{2}(\tau ) =\ B\ +\ \beta [g_{1}(\tau )]^{2} \label{13} \]When determining the size of particles in solution using DLS, g2(τ) is calculated based on the time-dependent scattering intensity and is converted through the Seigert relationship to g1(τ), which usually is an exponential decay or a sum of exponential decays.
The decay rate Γ is then mathematically determined (will be discussed in section ) from the g1(τ) curve, and the value of diffusion constant D and hydrodynamic radius a can be easily calculated afterwards.In a typical DLS experiment, light from a laser passes through a polarizer to define the polarization of the incident beam and then shines on the scattering medium. When the sizes of the analyzed particles are sufficiently small compared to the wavelength of the incident light, the incident light will scatters in all directions known as the Rayleigh scattering. The scattered light then passes through an analyzer, which selects a given polarization and finally enters a detector, where the position of the detector defines the scattering angle θ. In addition, the intersection of the incident beam and the beam intercepted by the detector defines a scattering region of volume V. As for the detector used in these experiments, a phototube is normally used whose dc output is proportional to the intensity of the scattered light beam. shows a schematic representation of the light-scattering experiment.In modern DLS experiments, the scattered light spectral distribution is also measured. In these cases, a photomultiplier is the main detector, but the pre- and postphotomultiplier systems differ depending on the frequency change of the scattered light. The three different methods used are filter (f > 1 MHz), homodyne (f > 10 GHz), and heterodyne methods (f < 1 MHz), as schematically illustrated in . Note that that homodyne and heterodyne methods use no monochromator of “filter” between the scattering cell and the photomultiplier, and optical mixing techniques are used for heterodyne method. shows the schematic illustration of the various techniques used in light-scattering experiments.As for an actual DLS instrument, take the Zetasizer Nano (Malvern Instruments Ltd.) as an example ), it actually looks like nothing other than a big box, with components of power supply, optical unit (light source and detector), computer connection, sample holder, and accessories. The detailed procedure of how to use the DLS instrument will be introduced afterwards.Although different DLS instruments may have different analysis ranges, we are usually looking at particles with a size range of nm to μm in solution. For several kinds of samples, DLS can give results with rather high confidence, such as monodisperse suspensions of unaggregated nanoparticles that have radius > 20 nm, or polydisperse nanoparticle solutions or stable solutions of aggregated nanoparticles that have radius in the 100 - 300 nm range with a polydispersity index of 0.3 or below. For other more challenging samples such as solutions containing large aggregates, bimodal solutions, very dilute samples, very small nanoparticles, heterogeneous samples, or unknown samples, the results given by DLS could not be really reliable, and one must be aware of the strengths and weaknesses of this analytical technique.Then, for the sample preparation procedure, one important question is how much materials should be submit, or what is the optimal concentration of the solution. Generally, when doing the DLS measurement, it is important to submit enough amount of material in order to obtain sufficient signal, but if the sample is overly concentrated, then light scattered by one particle might be again scattered by another (known as multiple scattering), and make the data processing less accurate. 
An ideal sample submission for DLS analysis has a volume of 1 – 2 mL and is sufficiently concentrated to show a strong color hue, or opaqueness/turbidity in the case of a white or black sample. Alternatively, 100 - 200 μL of a highly concentrated sample can be diluted to 1 mL or analyzed in a low-volume microcuvette.

In order to get high quality DLS data, there are also other issues to consider. The first is to minimize particulate contaminants, since it is common for a single particulate contaminant to scatter a million times more strongly than a suspended nanoparticle; this is done by using ultra-high-purity water or solvents, extensively rinsing pipettes and containers, and sealing samples tightly. The second is to filter the sample through a 0.2 or 0.45 μm filter to remove visible particulates from the sample solution. The third is to avoid probe sonication, to prevent particulates being ejected from the sonication tip, and to use bath sonication instead.

Once the sample is prepared and placed in the sample holder of the instrument, the next step is to carry out the DLS measurement. Generally the DLS instrument is provided with software that makes the measurement rather easy to perform, but it is still worthwhile to understand the important parameters used during the measurement.

First, a laser light source with an appropriate wavelength should be selected. For the Zetasizer Nano series (Malvern Instruments Ltd.), either a 633 nm “red” laser or a 532 nm “green” laser is available. One should keep in mind that the 633 nm laser is least suitable for blue samples, while the 532 nm laser is least suitable for red samples, since otherwise the sample will absorb a large portion of the incident light.

For the measurement itself, one has to select an appropriate stabilization time and duration time. Normally, longer stabilization/duration times result in a more stable signal with less noise, but the time cost should also be considered. Another important parameter is the temperature of the sample: since many DLS instruments are equipped with temperature-controlled sample holders, one can measure the size distribution at different temperatures and obtain extra information about the thermal stability of the sample analyzed.

Next, since they are used in the calculation of particle size from the light scattering data, the viscosity and refractive index of the solution are also needed. Normally, for solutions with low concentration, the viscosity and refractive index of the solvent/water can be used as an approximation.

Finally, to obtain more reliable data, the DLS measurement on the same sample is normally conducted multiple times, which helps eliminate unexpected results and also provides error bars for the size distribution data.

Although size distribution data can be readily acquired from the software of the DLS instrument, it is still worthwhile to understand the details of the data analysis process. As mentioned in the theory portion above, the decay rate Γ is determined mathematically from the g1(τ) curve; if the sample solution is monodisperse, g1(τ) can be regarded as a single exponential decay function e−Γτ, and the decay rate Γ can in turn be easily calculated.
However, in most practical cases the sample solution is polydisperse, so g1(τ) is the sum of many single exponential decay functions with different decay rates, and the fitting process then becomes significantly more difficult. There are, however, a few methods developed to meet this mathematical challenge: linear fitting and cumulant expansion for monomodal distributions, and exponential sampling and CONTIN regularization for non-monomodal distributions. Among these approaches, cumulant expansion is the most common method and is illustrated in detail in this section.

Generally, the cumulant expansion method is based on two relations: one between g1(τ) and the moment-generating function of the distribution, and one between the logarithm of g1(τ) and the cumulant-generating function of the distribution.

To start with, the form of g1(τ) is equivalent to the definition of the moment-generating function M(-τ, Γ) of the distribution G(Γ), \ref{14}.

\[ g_{1}(\tau ) =\ \int _{0}^{\infty} G(\Gamma )e^{- \Gamma \tau} d\Gamma \ =\ M(-\tau ,\Gamma) \label{14} \]

The mth moment of the distribution \(m_{m}(\Gamma)\) is given by the mth derivative of M(-τ, Γ) with respect to τ, \ref{15}.

\[ m_{m}(\Gamma )=\ \int ^{\infty}_{0} G(\Gamma )\Gamma^{m} e^{-\Gamma \tau} d\Gamma \mid_{- \tau = 0} \label{15} \]

Similarly, the logarithm of g1(τ) is equivalent to the definition of the cumulant-generating function K(-τ, Γ), \ref{16}, and the mth cumulant of the distribution km(Γ) is given by the mth derivative of K(-τ, Γ) with respect to τ, \ref{17}.

\[ ln\ g_{1}(\tau )= ln\ M(-\tau ,\Gamma)\ =\ K(-\tau , \Gamma) \label{16} \]

\[ k_{m}(\Gamma )=\frac{d^{m}K(-\tau , \Gamma )}{d(-\tau )^{m} } \mid_{-\tau = 0} \label{17} \]

Making use of the fact that the cumulants, except for the first, are invariant under a change of origin, the km(Γ) can be rewritten in terms of the moments about the mean as \ref{18}, \ref{19}, \ref{20}, and \ref{21}, where μm are the moments about the mean, defined as given in \ref{22}.

\[ \begin{align} k_{1}(\tau ) &=\ \int _{0}^{\infty} G(\Gamma )\Gamma d\Gamma = \bar{\Gamma } \label{18} \\[4pt] k_{2}(\tau ) &=\ \mu _{2} \label{19} \\[4pt] k_{3}(\tau ) &=\ \mu _{3} \label{20} \\[4pt] k_{4}(\tau ) &=\ \mu _{4} - 3\mu ^{2}_{2} \cdots \label{21} \end{align} \]

\[ \mu_{m}\ =\ \int _{0}^{\infty} G(\Gamma )(\Gamma \ -\ \bar{\Gamma})^{m} d\Gamma \label{22} \]

Based on the Taylor expansion of K(-τ, Γ) about τ = 0, the logarithm of g1(τ) is given as \ref{23}.

\[ ln\ g_{1}(\tau )=\ K(-\tau , \Gamma )=\ -\bar{\Gamma} \tau \ +\frac{k_{2}}{2!}\tau ^{2}\ -\frac{k_{3}}{3!}\tau^{3}\ +\frac{k_{4}}{4!}\tau^{4} \cdots \label{23} \]

Importantly, the Siegert relation can be recast in logarithmic form, \ref{24}.

\[ ln(g_{2}(\tau )-B)=ln\beta \ +\ 2ln\ g_{1}(\tau ) \label{24} \]

The measured g2(τ) data can then be fitted with the parameters km using the relationship \ref{25}, where \(\bar{\Gamma }\) (k1), k2, and k3 describe the average, variance, and skewness (or asymmetry) of the decay rate distribution, and the polydispersity index \(\gamma \ =\ \frac{k_{2}}{\bar{\Gamma}^{2}} \) is used to indicate the width of the distribution. Parameters beyond k3 are seldom used, to avoid overfitting the data.

\[ ln(g_{2}(\tau )-B)= ln\beta \ +\ 2(-\bar{\Gamma} \tau \ +\frac{k_{2}}{2!}\tau^{2}\ -\frac{k_{3}}{3!}\tau^{3} \cdots ) \label{25} \]

Finally, the size distribution can be easily calculated from the decay rate distribution as described in the theory section above.
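The sketch below shows what a second-order cumulant fit of \ref{25} can look like in practice: the logarithm of the baseline-subtracted correlation function is fitted with a polynomial in τ, whose linear and quadratic coefficients give the mean decay rate and the polydispersity index. The choice of baseline B = 1, the synthetic two-component decay, and the function name are illustrative assumptions rather than material from the text.

```python
import numpy as np

def cumulant_fit(tau, g2, B=1.0, order=2):
    """Second-order cumulant fit of Eq. 25:
    ln(g2 - B) = ln(beta) - 2*Gamma_bar*tau + k2*tau**2 + ...
    Returns the mean decay rate Gamma_bar, the second cumulant k2,
    and the polydispersity index PDI = k2 / Gamma_bar**2."""
    y = np.log(g2 - B)
    coeffs = np.polyfit(tau, y, order)     # highest power first
    slope, intercept = coeffs[-2], coeffs[-1]
    Gamma_bar = -slope / 2.0               # linear term is -2*Gamma_bar
    k2 = coeffs[-3] if order >= 2 else 0.0 # quadratic term equals 2*(k2/2!) = k2
    pdi = k2 / Gamma_bar**2
    return Gamma_bar, k2, pdi

# Synthetic, mildly polydisperse example: two decay rates around 2000 1/s
tau = np.linspace(1e-6, 5e-4, 400)                    # s (short-lag region)
g1 = 0.5 * np.exp(-1500 * tau) + 0.5 * np.exp(-2500 * tau)
g2 = 1.0 + 0.8 * g1 ** 2                              # Siegert relation, beta = 0.8

Gamma_bar, k2, pdi = cumulant_fit(tau, g2)
print(f"Gamma_bar = {Gamma_bar:.1f} 1/s, PDI = {pdi:.3f}")
```

Restricting the fit to short lag times (Γ̄τ of order one or less), as done here, is the usual design choice, since the truncated expansion in \ref{25} is only valid near τ = 0.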
An example of data fitting using the cumulant method is shown in the accompanying figure. When using the cumulant expansion method, however, one should keep in mind that it is only suitable for monomodal distributions (Gaussian-like distributions centered about the mean); for non-monomodal distributions, other methods such as exponential sampling and CONTIN regularization should be applied instead.

Now that the size distribution can be acquired from the fluctuation data of the scattered light using cumulant expansion or other methods, it is worthwhile to understand the three kinds of weighting usually used in size analysis: the number weighted, volume weighted, and intensity weighted distributions.

First of all, based on the theory discussed above, it should be clear that the size distribution given by DLS experiments is the intensity weighted distribution, as it is always the intensity of the scattering that is analyzed. In an intensity weighted distribution, the contribution of each particle is related to the intensity of light scattered by that particle; for example, using the Rayleigh approximation, the relative contribution of very small particles is proportional to a6.

In a number weighted distribution, as given by image analysis for example, each particle is given equal weighting irrespective of its size, i.e., the contribution is proportional to a0. This weighting is most useful where the absolute number of particles is important, or where high resolution (particle by particle) is required.

In a volume weighted distribution, as given by laser diffraction for example, the contribution of each particle is related to the volume of that particle, which is proportional to a3. This is often extremely useful from a commercial perspective, as the distribution represents the composition of the sample in terms of its volume/mass, and therefore its potential monetary value.

When comparing particle size data for the same sample represented using different weightings, it is important to realize that the results can be very different between the number weighted and intensity weighted distributions. This is clearly illustrated in the example below ), for a sample consisting of equal numbers of particles with diameters of 5 nm and 50 nm. The number weighted distribution gives equal weighting to both types of particles, emphasizing the presence of the finer 5 nm particles, whereas the intensity weighted distribution has a signal one million times higher for the coarser 50 nm particles.
The volume weighted distribution is intermediate between the two.

Furthermore, based on the different powers of the particle size a that govern each particle's contribution, it is possible to convert particle size data from one type of distribution to another, which is also why the DLS software can give size distributions in three different forms (number, volume, and intensity), where the first two are actually derived from the raw intensity weighted distribution data.

As the DLS method can be used for the size analysis of many materials, such as polymers, proteins, metal nanoparticles, or carbon nanomaterials, an example is given here of the application of DLS in the size-controlled synthesis of monodisperse gold nanoparticles. The size and size distribution of the gold particles are controlled by subtle variation of the structure of the polymer, which is used to stabilize the gold nanoparticles during the reaction. These variations include monomer type, polymer molecular weight, end-group hydrophobicity, end-group denticity, and polymer concentration; a total of 88 different trials were conducted based on these variations. By using the DLS method, the authors were able to determine the gold particle size distribution for all these trials rather easily, and the correlation between polymer structure and particle size could also be plotted without further processing of the data. Although other sizing techniques such as UV-vis spectroscopy and TEM are also used in this paper, it is the DLS measurement that provides a much easier and more reliable approach to the size distribution analysis.

Since DLS is not the only method available to determine the size distribution of particles, it is also necessary to compare DLS with the other commonly used sizing techniques, especially TEM and AFM. First of all, it has to be made clear that both TEM and AFM measure particles that are deposited on a substrate (a Cu grid for TEM, mica for AFM), while DLS measures particles that are dispersed in solution. In this way, DLS measures bulk-phase properties and gives more comprehensive information about the size distribution of the sample. For AFM or TEM, it is very common that a relatively small sampling area is analyzed, and the size distribution in the sampling area may not be the same as the size distribution of the original sample, depending on how the particles are deposited.

On the other hand, for DLS the calculation is highly dependent on mathematical and physical assumptions and models, namely a monomodal distribution (cumulant method) and a spherical shape for the particles, so the results can be inaccurate when analyzing non-monomodal distributions or non-spherical particles. In contrast, since the size determination for AFM or TEM is nothing more than measuring sizes from the image and then applying statistics, these two methods can provide much more reliable data when dealing with “irregular” samples.

Another important issue to consider is the time cost and complexity of the size measurement. Generally speaking, the DLS measurement is a much easier technique, requiring less operation time and cheaper equipment, whereas it can be quite troublesome to analyze the size distribution data from TEM or AFM images without specially programmed software.

In addition, there are some special issues to consider when choosing size analysis techniques.
For example, if the original sample is already on a substrate (e.g., synthesized by the CVD method), or if the particles cannot be stably dispersed in solution, the DLS method is clearly not suitable. Conversely, when the particles tend to have a similar imaging contrast to the substrate (e.g., carbon nanomaterials on a TEM grid), or tend to self-assemble and aggregate on the surface of the substrate, the DLS approach might be the better choice.

In general research work, however, the best way to perform size distribution analysis is to combine these methods and obtain complementary information from different aspects. One thing to keep in mind is that, since DLS measures the hydrodynamic radius of the particles, the size from a DLS measurement is always larger than the size from an AFM or TEM measurement. A comparison between DLS and AFM/TEM is summarized in Table \(\PageIndex{1}\).

In general, relying on the fluctuating Rayleigh scattering of small particles that move randomly in solution, DLS is a very useful and rapid technique for determining the size distribution of particles in the fields of physics, chemistry, and biochemistry, especially for monomodally dispersed spherical particles; by combining it with other techniques such as AFM and TEM, a comprehensive understanding of the size distribution of the analyte can be readily acquired.

This page titled 2.4: Dynamic Light Scattering is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.5: Zeta Potential Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.05%3A_Zeta_Potential_Analysis
The physical properties of colloids (nanoparticles) and suspensions are strongly dependent on the nature and extent of the particle-liquid interface. The behavior of aqueous dispersions of particles in a liquid is especially sensitive to the ionic and electrical structure of the interface.

Zeta potential is a parameter that characterizes the electrochemical equilibrium at the particle-liquid interface. It measures the magnitude of the electrostatic repulsion/attraction between particles and thus has become one of the fundamental parameters known to affect the stability of colloidal particles. It should be noted that the term stability, when applied to colloidal dispersions, generally means the resistance of the dispersion to change with time. illustrates the basic concept of zeta potential.

From the perspective of fundamental theory, the zeta potential is the electrical potential in the interfacial double layer (DL) at the location of the slipping plane (shown in ). We can regard the zeta potential as the potential difference between the dispersion medium and the stationary layer of fluid attached to the particle. In experimental terms, therefore, the zeta potential is a key factor in processes such as the preparation of colloidal dispersions, the utilization of colloidal phenomena, and the destruction of unwanted colloidal dispersions. Moreover, zeta potential analysis and measurement nowadays have many real-world applications. In the field of biomedical research, zeta potential measurement, in contrast to chemical methods of analysis which can disrupt the organism, has the particular merit of providing information about the outermost regions of an organism. It is also widely utilized in water purification and treatment; zeta potential analysis has established optimum coagulation conditions for the removal of particulate matter and organic dyestuffs from aqueous waste products.

Zeta potential is a scientific term for the electrokinetic potential in colloidal dispersions. In the literature it is usually denoted by the Greek letter zeta, ζ, hence the name zeta potential or ζ-potential. The earliest theory for calculating zeta potential from experimental data was developed by Marian Smoluchowski in 1903 ). Even today, this theory is still the most well-known and widely used method for calculating zeta potential. Interestingly, this theory was originally developed for electrophoresis; only later did people begin to apply it to the calculation of zeta potential. The main reason this theory is powerful is its universality and validity for dispersed particles of any shape and any concentration. However, there are still some limitations to this early theory, as it was mainly determined experimentally. The main limitations are that Smoluchowski's theory neglects the contribution of surface conductivity and only works for particles whose size is much larger than the thickness of the interfacial layer, a condition characterized by κa (1/κ is called the Debye length and a is the particle radius).

Overbeek and Booth, early pioneers in this direction, started to develop more rigorous theoretical electrokinetic theories that were able to incorporate surface conductivity for electrokinetic applications.
Modern rigorous electrokinetic theories that are valid for almost any κa were developed mostly by Ukrainian (Dukhin) and Australian (O’Brien) scientists.

Because an electric double layer (EDL) exists between a surface and a solution, any relative motion between the rigid and mobile parts of the EDL will result in the generation of an electrokinetic potential. As described above, the zeta potential is essentially an electrokinetic potential which arises from electrokinetic phenomena, so it is important to understand the different situations in which an electrokinetic potential can be produced. There are generally four fundamental ways in which it can be produced: via electrophoresis, electro-osmosis, streaming potential, and sedimentation potential, as shown in .

There are many different ways of calculating zeta potential . In this section, the methods of calculating zeta potential in electrophoresis and electroosmosis will be introduced.

Electrophoresis is the movement of charged colloidal particles or polyelectrolytes, immersed in a liquid, under the influence of an external electric field. In this case, the electrophoretic velocity, ve (m s-1), is the velocity during electrophoresis, and the electrophoretic mobility, ue (m2 V-1 s-1), is the magnitude of the velocity divided by the magnitude of the electric field strength. The mobility is counted as positive if the particles move toward lower potential and negative in the opposite case. Therefore, we have the relationship ve = ueE, where E is the externally applied field. Thus, the formula for the zeta potential in the electrophoresis case is given in \ref{1} and \ref{2}, where εrs is the relative permittivity of the electrolyte solution, ε0 is the electric permittivity of vacuum, and η is the viscosity.

\[ \mathit{u}_{e}\ =\frac{\varepsilon _{rs} \varepsilon_{0} \zeta}{\eta } \label{1} \]

\[ \mathit{v}_{e}\ =\frac{\varepsilon _{rs} \varepsilon_{0} \zeta}{\eta } E \label{2} \]

There are two cases regarding the size of κa:

\[ \mathit{u}_{e} = \frac{2}{3} \frac{\varepsilon _{rs} \varepsilon_{0} \zeta}{\eta } \label{3} \]

\[ \frac{3}{2}\frac{\eta e}{\varepsilon _{rs} \varepsilon _{0}kT} \mathit{u_{e}} =\frac{3}{2}y^{ek} -\frac{6[\frac{y^{ek}}{2}-\frac{ln\ 2}{\zeta}\{1-e^{-\zeta y^{ek}}\}]}{2+ \frac{ka}{1+3m/\zeta ^{2}}e^{\frac{-\zeta y^{ek}}{2}}} \label{4} \]

Electroosmosis is the motion of a liquid through an immobilized set of particles, a porous plug, a capillary, or a membrane, in response to an applied electric field. Analogous to electrophoresis, the electroosmotic velocity, veo (m s-1), is the uniform velocity of the liquid far from the charged interface. Usually, the measured quantity is the volume flow rate of liquid divided by the electric field strength, Qeo,E (m4 V-1 s-1), or divided by the electric current, Qeo,I (m3 C-1). Therefore, the relationship is given by \ref{5}.

\[ Q_{eo} =\ \int \int v_{eo} dS \label{5} \]

Thus the formulae for the zeta potential in electroosmosis are given in \ref{6}. As with electrophoresis, there are two cases regarding the size of κa:

\[ Q_{eo , E} =\frac{-\varepsilon _{rs} \varepsilon_{0} \zeta }{\eta} Ac \nonumber \]

\[ Q_{eo , I} =\frac{-\varepsilon _{rs} \varepsilon_{0} \zeta }{\eta} \frac{1}{K_{L}} \nonumber \]

\[ Q_{eo , I} =\frac{-\varepsilon _{rs} \varepsilon_{0} \zeta }{\eta} \frac{1}{K_{L}(1+2\Delta u)} \label{6} \]

Using the above theoretical methods, we can calculate the zeta potential for particles in electrophoresis.
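As a minimal numerical sketch of \ref{1} and \ref{3}, the snippet below converts a measured electrophoretic mobility into a zeta potential in the Smoluchowski (large κa) and Hückel (small κa) limits. The solvent constants for water at 25 °C, the example mobility, and the function names are illustrative assumptions, not values from the text.

```python
# Zeta potential from electrophoretic mobility in the two limiting cases of ka.
EPS0  = 8.854e-12   # F/m, vacuum permittivity
EPS_R = 78.5        # relative permittivity of water at 25 C (assumed)
ETA   = 0.89e-3     # Pa.s, viscosity of water at 25 C (assumed)

def zeta_smoluchowski(mobility):
    """Eq. 1 rearranged: zeta = eta * u_e / (eps_rs * eps_0); valid for large ka."""
    return ETA * mobility / (EPS_R * EPS0)

def zeta_hueckel(mobility):
    """Eq. 3 rearranged: zeta = 3 * eta * u_e / (2 * eps_rs * eps_0); valid for small ka."""
    return 3 * ETA * mobility / (2 * EPS_R * EPS0)

# Hypothetical measured mobility of -2.5e-8 m^2 V^-1 s^-1
u_e = -2.5e-8
print(f"Smoluchowski limit: {zeta_smoluchowski(u_e) * 1e3:.1f} mV")
print(f"Hueckel limit:      {zeta_hueckel(u_e) * 1e3:.1f} mV")
```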
The following table summarizes the stability behavior of colloidal particles with respect to zeta potential; thus, we can use the zeta potential to predict the stability of colloidal particles in the electrokinetic phenomenon of electrophoresis.

In this section, a commercially available zeta potential analyzer is used as an example of how the zeta potential is measured experimentally. shows an example of a typical zeta potential analyzer for electrophoresis. The measuring principle is described in the following diagram, which shows the detailed mechanism of the zeta potential analyzer ). When a voltage is applied to the solution in which particles are dispersed, the particles are attracted to the electrode of the opposite polarity, accompanied by the fixed layer and part of the diffuse double layer, or the internal side of the "sliding surface". Using the formula supplied for this specific analyzer and its computer program, we can obtain the zeta potential for electrophoresis using this typical zeta potential analyzer .

This page titled 2.5: Zeta Potential Analysis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.6: Viscosity
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.06%3A_Viscosity
All liquids have a natural internal resistance to flow termed viscosity. Viscosity is the result of frictional interactions within a given liquid and is commonly expressed in two different ways. The first is dynamic viscosity, also known as absolute viscosity, which measures a fluid’s resistance to flow. In precise terms, dynamic viscosity is the tangential force per unit area necessary to move one plane past another at unit velocity at unit distance apart. As one plane moves past another in a fluid, a velocity gradient is established between the two layers ). Viscosity can be thought of as a drag coefficient proportional to this gradient.

The force necessary to move a plane of area A past another in a fluid is given by Equation \ref{1}, where \(V\) is the velocity of the liquid, Y is the separation between planes, and η is the dynamic viscosity.

\[ F = \eta A \frac{V}{Y} \label{1} \]

V/Y also represents the velocity gradient (sometimes referred to as the shear rate). Force over area is equal to τ, the shear stress, so the equation simplifies to Equation \ref{2}.

\[ \tau = \eta \frac{V}{Y} \label{2} \]

For situations where V does not vary linearly with the separation between plates, the differential formula based on Newton’s equations is given in Equation \ref{3}.

\[ \tau = \eta \frac{\delta V}{\delta Y} \label{3} \]

Kinematic viscosity, the other type of viscosity, requires knowledge of the density, ρ, and is given by Equation \ref{4}, where v is the kinematic viscosity and \(\eta \) is the dynamic viscosity.

\[ \nu = \frac{\eta }{\rho } \label{4} \]

Viscosity is commonly expressed in stokes, poise, Saybolt Universal Seconds, degrees Engler, and SI units. The SI unit for dynamic (absolute) viscosity is N·s/m2, equivalently Pa·s or kg/(m·s), where N stands for newton and Pa for pascal. The poise (P) is the corresponding CGS unit, expressed as dyne·s/cm2 or g/(cm·s), and is related to the SI unit by 1 P = 1 g/(cm·s) = 0.1 Pa·s. The centipoise (cP), the most commonly used unit of viscosity, is related by 100 cP = 1 P. Table \(\PageIndex{1}\) shows the interconversion factors for dynamic viscosity.

The CGS unit for kinematic viscosity is the stokes (St), which is equal to 10-4 m2/s; dividing by 100 yields the more commonly used centistokes (cSt). The SI unit for kinematic viscosity is m2/s. The Saybolt Universal Second, commonly used in the oilfield for petroleum products, represents the time required for 60 milliliters of liquid to efflux from a Saybolt Universal viscometer at a fixed temperature according to ASTM D-88. The Engler scale, often used in Britain, quantifies the viscosity of a given liquid in comparison to water, measured in an Engler viscometer for 200 cm3 of each liquid at a set temperature.

One of the invaluable applications of the determination of viscosity is identifying a given liquid as Newtonian or non-Newtonian in nature. Moreover, non-Newtonian liquids can be further subdivided into classes by their viscous behavior with shear stress:

Viscometers are used to measure viscosity. There are seven different classes of viscometer:

Capillary viscometers are the most widely used viscometers when working with Newtonian fluids and measure the flow rate through a narrow, usually glass, tube. In some capillary viscometers, an external force is required to move the liquid through the capillary; in this case, the pressure difference across the length of the capillary is used to obtain the viscosity coefficient. Capillary viscometers require a liquid reservoir, a capillary of known dimensions, a pressure controller, a flow meter, and a thermostat.
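Before continuing with the capillary viscometer designs described below, the relations and unit conversions above can be collected into a short script. The function names and the water values used in the example are illustrative assumptions, not material from the text.

```python
# Minimal sketch of the viscosity relations and unit conversions above.

def shear_stress(eta, velocity, separation):
    """Newton's law of viscosity, Eq. 2: tau = eta * V / Y (Pa), linear gradient."""
    return eta * velocity / separation

def kinematic_viscosity(eta, rho):
    """Eq. 4: nu = eta / rho (m^2/s)."""
    return eta / rho

def pa_s_to_centipoise(eta_pa_s):
    """1 P = 0.1 Pa.s and 100 cP = 1 P, so 1 Pa.s = 1000 cP."""
    return eta_pa_s * 1000.0

def m2_s_to_centistokes(nu_m2_s):
    """1 St = 1e-4 m^2/s, so 1 m^2/s = 1e6 cSt."""
    return nu_m2_s * 1e6

# Water near 20 C (assumed values): eta ~ 1.0e-3 Pa.s, rho ~ 998 kg/m^3
eta, rho = 1.0e-3, 998.0
print(pa_s_to_centipoise(eta))                              # ~1 cP
print(m2_s_to_centistokes(kinematic_viscosity(eta, rho)))   # ~1 cSt
print(shear_stress(eta, velocity=0.5, separation=1e-3))     # shear stress in Pa
```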
These viscometers include modified Ostwald viscometers, suspended-level viscometers, and reverse-flow viscometers, and they measure kinematic viscosity. The equation governing this type of viscometry is Poiseuille's law (Equation \ref{5}), where Q is the overall flow rate, ΔP the pressure difference, a the internal radius of the tube, η the dynamic viscosity, and l the path length of the fluid.

\[ Q\ =\frac{\pi \Delta Pa^{4}}{8\eta l} \label{5} \]

Here, Q is equal to V/t, the volume of liquid measured over the course of the experiment divided by the time required for it to move through the capillary, where V is volume and t is time. For gravity-type capillary viscometers, those relying on gravity rather than an applied force to move the liquid through the tube, Equation \ref{6} is used to find the viscosity; it is obtained by substituting ΔP = ρgh and Q = V/t into Equation \ref{5}, where ρ is the density, g is the acceleration due to gravity, and h is the height of the liquid column.

\[ \eta \ =\frac{\pi gha^{4}}{8lV} \rho t \label{6} \]

An example of a capillary viscometer (Ostwald viscometer) is shown in .

Commonly found in the oil industry, orifice viscometers consist of a reservoir, an orifice, and a receiver. These viscometers report viscosity in units of efflux time, as the measurement consists of measuring the time it takes for a given liquid to travel from the orifice to the receiver. These instruments are not accurate, as the set-up does not ensure that the pressure on the liquid remains constant, and energy is lost to friction at the orifice. The most common types of these viscometers include the Redwood, Engler, Saybolt, and Ford cup viscometers. A Saybolt viscometer is represented in .

These viscometers, also known as cylinder-piston type viscometers, are employed when viscosities above 1000 poise need to be determined, especially for non-Newtonian fluids. In a typical set-up, fluid in a cylindrical reservoir is displaced by a piston. Because the pressure can be varied, this type of viscometry is well suited to determining viscosities over varying shear rates, ideal for characterizing fluids whose primary environment is a high-temperature, high-shear-rate environment, e.g., motor oil. A typical cylinder-piston type viscometer is shown in .

Well suited for non-Newtonian fluids, rotational viscometers measure the rate at which a solid rotates in a viscous medium. Since the rate of rotation is controlled, the amount of force necessary to spin the solid can be used to calculate the viscosity. They are advantageous in that a wide range of shear stresses and temperatures can be sampled. Common rotational viscometers include the coaxial-cylinder viscometer, cone and plate viscometer, and coni-cylinder viscometer. A cone and plate viscometer is shown in .

This type of viscometer relies on the terminal velocity achieved by a ball falling through the viscous liquid whose viscosity is being measured.
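Returning briefly to the gravity capillary relation, Equation \ref{6}, before the falling-ball design is developed next, the snippet below computes a dynamic viscosity from a gravity-driven efflux measurement. The capillary geometry and the 320 s efflux time are hypothetical values chosen only for illustration; they are not from the text.

```python
import math

def capillary_dynamic_viscosity(a, h, l, V, rho, t, g=9.81):
    """Gravity capillary viscometer, Eq. 6: eta = (pi*g*h*a**4 / (8*l*V)) * rho * t.
    a: capillary radius (m), h: liquid column height (m), l: capillary length (m),
    V: efflux volume (m^3), rho: liquid density (kg/m^3), t: efflux time (s)."""
    return (math.pi * g * h * a**4) / (8 * l * V) * rho * t

# Hypothetical Ostwald-type geometry with water and a 320 s efflux time
eta = capillary_dynamic_viscosity(a=0.3e-3, h=0.05, l=0.10, V=5e-6,
                                  rho=998.0, t=320.0)
print(f"eta = {eta * 1e3:.2f} mPa.s")   # ~1 mPa.s for this assumed geometry
```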
A sphere is the simplest object to use because its terminal velocity can be related to the viscosity by rearranging Stokes’ law, Equation \ref{7}, to give Equation \ref{8}, where r is the sphere’s radius, η the dynamic viscosity, v the terminal velocity of the sphere, σ the density of the sphere, ρ the density of the liquid, and g the acceleration due to gravity.

\[ 6\pi r\eta v\ =\ \frac{4}{3} \pi r^{3} (\sigma - \rho)g \label{7} \]

\[ \eta\ =\frac{\frac{4}{3} \pi r^{3}(\sigma - \rho)g}{6\pi r v}\ =\ \frac{2r^{2}(\sigma - \rho)g}{9v} \label{8} \]

A typical falling ball viscometric apparatus is shown in .

Often used in industry, vibrational viscometers are attached to fluid production processes where a constant viscosity of the product is desired. Viscosity is measured by the damping of an electromechanical resonator immersed in the liquid to be tested. The resonator is either a cantilever, an oscillating beam, or a tuning fork. This type of viscometer works by measuring the power needed to keep the oscillator oscillating at a given frequency, the decay time after the oscillation is stopped, or the difference observed when the waveform is varied. A typical vibrational viscometer is shown in .

Ultrasonic viscometers are most like vibrational viscometers in that they obtain viscosity information by exposing a liquid to an oscillating system. These measurements are continuous and instantaneous. Both ultrasonic and vibrational viscometers are commonly found on liquid production lines and constantly monitor the viscosity.

This page titled 2.6: Viscosity is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.7: Electrochemistry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.07%3A_Electrochemistry
Cyclic voltammetry (CV) is a type of potentiodynamic electrochemical measurement. Generally speaking, the operating process is a potential-controlled reversible experiment: the electric potential is scanned until a final potential is reached, the scan direction is then reversed, and the potential is scanned back to its initial value, as shown in -a. As the voltage applied to the system changes with time, the current changes with time accordingly, as shown in -b. Thus the current-voltage curve, illustrated in -c, can be constructed from the data shown in -a and -b.

Cyclic voltammetry is a very important analytical characterization technique in the field of electrochemistry. Any process that includes electron transfer can be investigated with this characterization, for example the investigation of catalytic reactions, the analysis of the stoichiometry of complex compounds, and the determination of the band gap of photovoltaic materials. This module focuses on the application of CV measurements to the characterization of solar cell materials.

Although CV was first practiced using a hanging mercury drop electrode, based on the work of Nobel Prize winner Heyrovský ), it did not gain widespread use until solid electrodes such as Pt, Au, and carbonaceous electrodes were employed, particularly to study anodic oxidations. A major advance was made when mechanistic diagnostics and accompanying quantitations became accessible through computer simulations. Now, the application of computers and related software packages makes the analysis of data much quicker and easier.

As shown in , the components of a CV system are as follows: In order to better understand the electrodes mentioned above, three kinds of electrodes will be discussed in more detail. Cyclic voltammetry systems employ different types of potential waveforms ) that can be used to satisfy different requirements. Potential waveforms reflect the way the potential is applied to the system. These different types are referred to by characteristic names, for example, cyclic voltammetry and differential pulse voltammetry. The cyclic voltammetry analytical method is the one whose potential waveform is generally an isosceles triangle a).

As mentioned above, there are two main parts of a CV system: the electrochemical cell and the potentiostat (the Epsilon, in this example). shows the schematic drawing of the circuit diagram in the electrochemical cell. In a voltammetric experiment, a potential is applied to the system using the working electrode (W in ) and the reference electrode (R in ), and the current response is measured using the working electrode and a third electrode, the counter electrode (C in ). The typical current-voltage curve for ferricyanide/ferrocyanide, \ref{1}, is shown in .

\[ E_{eq} \ =\ E^{\circ ' } \ +\ ( 0.059/n ) \ log( [ reactant ] / [ product ] ) \label{1} \]

The information we are able to obtain from CV experimental data is the current-voltage curve. From the curve we can determine the redox potential, gain insight into the kinetics of electron-transfer reactions, and detect the presence of reaction intermediates. Despite some limitations, cyclic voltammetry is very well suited to a wide range of applications; moreover, in some areas of research, cyclic voltammetry is one of the standard techniques used for characterization. Due to its characteristic curve shapes, it has been described as ‘electrochemical spectroscopy’.
In addition, the system is quite easy to operate, and sample preparation is relatively simple.

The band gap of a semiconductor is a very important value to determine for photovoltaic materials. shows the relative energy levels involved in light harvesting in an organic solar cell. The energy difference (Eg) between the lowest unoccupied molecular orbital (LUMO) and the highest occupied molecular orbital (HOMO) determines the efficiency. The oxidation and reduction of an organic molecule involve electron transfers ), and CV measurements can be used to determine the potential change during redox; analysis of the data obtained by the CV measurement then yields the electronic band gap.

Graphene nanoribbons (GNRs) are long, narrow sheets of graphene formed from the unzipping of carbon nanotubes ). GNRs can be either semiconducting or semi-metallic, depending on their width, and they represent a particularly versatile variety of graphene. The high surface area, high aspect ratio, and interesting electronic properties of GNRs render them promising candidates for energy-storage applications. Graphene nanoribbons can be oxidized to give oxidized graphene nanoribbons (XGNRs), which are readily soluble in water. Cyclic voltammetry is an effective method to characterize the band gap of semiconductor materials. To test the band gap of oxidized graphene nanoribbons (XGNRs), the operating parameters can be set as follows: To make sure that the results are accurate, two samples can be tested under the same conditions to see whether the redox peaks are at the same position. The amount of XGNRs will vary from sample to sample, and thus the peak heights will also vary. Typical curves obtained from the oxidation reaction a) and reduction reaction b) are shown in and , respectively. From the curves shown in and , the following conclusions can be obtained: In conclusion, CV is an efficient method with many applications, and in the field of solar cells it provides band gap information for research.

Proton exchange membrane fuel cells (PEMFCs) are one promising alternative to traditional combustion engines. This method takes advantage of the exothermic hydrogen oxidation reaction in order to generate energy and water (Table \(\PageIndex{1}\) ). The basic PEMFC consists of an anode and a cathode separated by a proton exchange membrane ). This membrane is a key component of the fuel cell because, for the redox couple reactions to occur successfully, protons must be able to pass from the anode to the cathode. The membrane in a PEMFC is usually composed of Nafion, which is a perfluorinated sulfonic acid polymer, and exclusively allows protons to pass through. As a result, electrons and protons travel from the anode to the cathode through an external circuit and through the proton exchange membrane, respectively, to complete the circuit and form water.

PEMFCs present many advantages compared to traditional combustion engines. They are more efficient and have a greater energy density than traditional fossil fuels. Additionally, the fuel cell itself is very simple with few or no moving parts, which makes it long-lasting, reliable, and very quiet. Most importantly, however, the operation of a PEMFC results in zero emissions, as the only byproduct is water (Table \(\PageIndex{2}\) ). However, the use of PEMFCs has been limited because of the slow reaction rate for the oxygen reduction half-reaction (ORR).
Reaction rates, k°, for reduction-oxidation reactions such as these tend to be on the order of 10-10 – 10-9 cm/s, where 10-9 is the faster and 10-10 the slower rate. Compared to the hydrogen oxidation half-reaction (HOR), which has a reaction rate of k° ≈ 1x10-9 cm/s, the reaction rate for the ORR is only k° ~ 1x10-10 cm/s. Thus, the ORR is the kinetically rate-limiting half-reaction, and its reaction rate must be increased for PEMFCs to be a viable alternative to combustion engines. Because cyclic voltammetry can be used to examine the kinetics of the ORR, it is a critical technique in evaluating potential solutions to this problem.

Cyclic voltammetry is a key electrochemical technique that, among its other uses, can be employed to examine the kinetics of oxidation-reduction reactions in electrochemical systems. Specifically, data collected with cyclic voltammetry can be used to determine the rate of reaction. In its simplest form, this technique requires a simple three-electrode cell and a potentiostat . A potential applied to the working electrode is varied linearly with time and the response in the current is measured . Typically the potential is cycled between two values, once in the forward direction and once in the reverse direction. For example, in , the potential is cycled between 0.8 V and -0.2 V, with the forward scan moving from positive to negative potential and the reverse scan moving from negative to positive potential. Various parameters can be adjusted, including the scan rate, the number of scan cycles, and the direction of the potential scan, i.e., whether the forward scan moves from positive to negative voltages or vice versa. For publication, data is typically collected at a scan rate of 20 mV/s with at least 3 scan cycles.

From a cyclic voltammetry experiment, a graph called a voltammogram is obtained. Because both the oxidation and reduction half-reactions occur at the working electrode surface, steep changes in the current are observed when either of these half-reactions occurs. A typical voltammogram features two peaks, where one peak corresponds to the oxidation half-reaction and the other to the reduction half-reaction. In an oxidation half-reaction in an electrochemical cell, electrons flow from the species in solution to the electrode, resulting in an anodic current, ia. Frequently, this oxidation peak appears when scanning from negative to positive potentials ). In a reduction half-reaction in an electrochemical cell, electrons flow from the electrode to the species in solution, resulting in a cathodic current, ic. This type of current is most often observed when scanning from positive to negative potentials. When the starting reactant is completely oxidized or completely reduced, the peak anodic current, ipa, and peak cathodic current, ipc, respectively, are reached. Then, the current decays as the oxidized or reduced species leaves the electrode surface. The shape of these anodic and cathodic peaks can be modeled with the Nernst equation, \ref{2}, where n is the number of electrons transferred and E˚’ (the formal reduction potential) = (Epa + Epc)/2.

\[ E_{eq}\ =\ E^{\circ '} \ +\ (0.059/n)\ log\ ( [ reactant ] / [ product ] ) \label{2} \]

Several key pieces of information can be obtained through examination of the voltammogram, including ipa, ipc, and the anodic and cathodic peak potentials. ipa and ipc both serve as important measures of catalytic activity: the larger the peak currents, the greater the activity of the catalyst.
Values for ipa and ipc can be obtained through one of two methods: physical examination of the graph or the Randles–Sevcik equation. To determine the peak currents directly from the graph, a vertical line dropped from the peak is measured against an extrapolated baseline current. In contrast, the Randles–Sevcik equation uses information about the electrode and the experimental parameters to calculate the peak current, \ref{3}, where A = electrode area; D = diffusion coefficient; C = concentration; ν = scan rate.

\[ i_{p} \ =\ (2.69 \times 10^{5})n^{3/2}AD^{1/2}C\nu ^{1/2} \label{3} \]

The anodic peak potential, Epa, and cathodic peak potential, Epc, can also be obtained from the voltammogram by determining the potentials at which ipa and ipc, respectively, occur. These values are an indicator of the relative magnitude of the reaction rate. If the exchange of electrons between the oxidizing and reducing agents is fast, they form an electrochemically reversible couple. These redox couples fulfill the relationship ΔEp = Epa – Epc ≡ 0.059/n. In contrast, a nonreversible couple will have a slow exchange of electrons and ΔEp > 0.059/n. However, it is important to note that ΔEp is dependent on scan rate.

The Tafel and Butler-Volmer equations allow for the calculation of the reaction rate from the current-potential data generated by the voltammogram. In these analyses, the rate of the reaction can be expressed as two values: k° and io. k˚, the standard rate constant, is a measure of how fast the system reaches equilibrium: the larger the value of k°, the faster the reaction. The exchange current density, io, is the current flow at the surface of the electrode at equilibrium: the larger the value of io, the faster the reaction. While both io and k° can be used, io is more frequently used because it is directly related to the overpotential through the current-overpotential and Butler-Volmer equations. When the reaction is at equilibrium, k° and io are related by \ref{4}, where CO,eq and CR,eq are the equilibrium concentrations of the oxidized and reduced species, respectively, and a is the symmetry factor.

\[ i_{O} \ =\ nFk^{\circ }C_{O, eq} ^{1-a} C_{R, eq} ^{a} \label{4} \]

In its simplest form, the Tafel equation is expressed as \ref{5}, where a and b can be a variety of constants; any equation which has the form of \ref{5} is considered a Tafel equation.

\[ E-E^{\circ} \ =\ a\ +\ b\ log(i) \label{5} \]

For example, the relationship between current, potential, the concentrations of reactants and products, and k˚ can be expressed as \ref{6}, where CO(0,t) and CR(0,t) are the concentrations of the oxidized and reduced species, respectively, at a specific reaction time, F = Faraday constant, R = gas constant, and T = temperature.

\[ C_{O}(0,t)\ -\ C_{R}(0,t)e^{ [nF/RT] (E-E^{\circ } ) } \ =\ [i/nFk^{\circ } ][e^{ [anF/RT](E-E^{\circ } ) } ] \label{6} \]

At very large overpotentials, this equation reduces to a Tafel equation, \ref{7}, where a = -[RT/(1-a)nF]ln(io) and b = [RT/(1-a)nF].

\[ E-E^{\circ } \ =\ [RT/(1-a)nF] ln(i)\ -\ [RT/(1-a)nF]ln(i_{0}) \label{7} \]

The linear relationship between E-E˚ and log(i) can be exploited to determine io through the formation of a Tafel plot ), log(i) versus E-E˚. The resulting anodic and cathodic branches of the graph have slopes of [(1-a)nF/2.3RT] and [-anF/2.3RT], respectively. An extrapolation of these two branches results in a y-intercept equal to log(io).
Thus, this plot directly relates potential and current data collected by cyclic voltammetry to io. While the Butler-Volmer equation resembles the Tafel equation, and in some cases can even be reduced to the Tafel formulation, it uniquely provides a direct relationship between io and the overpotential, η. Without simplification, the Butler-Volmer equation is known as the current-overpotential equation, \ref{8}.

\[ i/i_{O}\ =\ [C_{O}(0,t)/C_{O,eq}]e^{ [anF/RT] (E-E^{\circ } ) } \ -\ [ C_{R}(0,t)/C_{R,eq} ] e^{ [ (1-a)nF/RT] (E-E^{\circ } ) } \label{8} \]

If the solution is well stirred, the bulk and surface concentrations can be assumed to be equal, and \ref{8} reduces to the Butler-Volmer equation, \ref{9}.

\[ i\ =\ i_{O}[ e^{ [ anF/RT] (E-E^{\circ } ) } - e^{ [ (1-a)nF/RT] (E-E^{\circ }) } ] \label{9} \]

While the issue of a slow ORR reaction rate has been addressed in many ways, it is most often overcome with the use of catalysts. Traditionally, platinum catalysts have demonstrated the best performance: at 30 °C, the ORR io on a Pt catalyst is 2.8 x 10-7 A/cm2, compared to the limiting case of the ORR where io = 1 x 10-10 A/cm2. Pt is particularly effective as a catalyst for the ORR in PEMFCs because its binding energy for both O and OH is the closest to ideal of all the bulk metals, its activity is the highest of all the bulk metals, its selectivity for O2 adsorption is close to 100%, and it is extremely stable under a variety of acidic and basic conditions as well as at high operating voltages .

Nonprecious metal catalysts (NPMCs) show great potential to reduce the cost of the catalyst without sacrificing catalytic activity. The best NPMCs currently in development have comparable or even better ORR activity and stability than platinum-based catalysts in alkaline electrolytes; in acidic electrolytes, however, NPMCs perform significantly worse than platinum-based catalysts. In particular, transition metal-nitrogen-carbon composite catalysts (M-N-C) are the most promising type of NPMC. The highest-performing members of this group catalyze the ORR at potentials within 60 mV of the highest-performing platinum catalysts ). Additionally, these catalysts have excellent stability: after 700 hours at 0.4 V, they do not show any performance degradation. In a comparison of high-performing PANI-Co-C and PANI-Fe-C (PANI = polyaniline) catalysts, Zelenay and coworkers used cyclic voltammetry to compare the activity and performance of these two catalysts in H2SO4. The Co-PANI-C catalyst was found to have no reduction-oxidation features on its voltammogram, whereas Fe-PANI-C was found to have two redox peaks at ~0.64 V ). These Fe-PANI-C peaks have a full width at half maximum of ~100 mV, which is indicative of the reversible one-electron Fe3+/Fe2+ reduction-oxidation (theoretical FWHM = 96 mV). Zelenay and coworkers also determined the exchange current density using the Tafel analysis and found that Fe-PANI-C has a significantly greater io (io = 4 x 10-8 A/cm2) compared to Co-PANI-C (io = 5 x 10-10 A/cm2). These differences not only demonstrate the higher ORR activity of Fe-PANI-C when compared to Co-PANI-C, but also suggest that the ORR-active sites and reaction mechanisms are different for these two catalysts.
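As a rough illustration of the Tafel extraction of io described above (the same kind of analysis used to obtain the io values quoted for the PANI catalysts), the sketch below fits the linear Tafel region of a synthetic anodic current-overpotential data set and extrapolates to zero overpotential. The data, the assumed transfer coefficient, and the function name are illustrative assumptions; a real analysis would use the measured polarization data.

```python
import numpy as np

F, R, T = 96485.0, 8.314, 298.15   # C/mol, J/(mol K), K

def tafel_exchange_current(overpotential_V, current_A_cm2):
    """Fit the linear Tafel region, eta = a + b*log10(i).
    Extrapolating to eta = 0 gives log10(i0), i.e. i0 = 10**(-a/b)."""
    b, a = np.polyfit(np.log10(current_A_cm2), overpotential_V, 1)
    return 10 ** (-a / b), b        # exchange current density, Tafel slope (V/decade)

# Synthetic anodic branch for i0 = 1e-7 A/cm^2, alpha = 0.5, n = 1
i0_true, alpha, n = 1e-7, 0.5, 1
eta = np.linspace(0.15, 0.40, 30)                     # large overpotentials (V)
i = i0_true * np.exp((1 - alpha) * n * F * eta / (R * T))

i0_fit, slope = tafel_exchange_current(eta, i)
print(f"fitted i0 = {i0_fit:.2e} A/cm^2, Tafel slope = {slope * 1000:.0f} mV/decade")
```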
While the structure of Fe-PANI-C has been examined ), the structure of Co-PANI-C is still being investigated. While the majority of M-N-C catalysts show some ORR activity, the magnitude of this activity is highly dependent upon a variety of factors; cyclic voltammetry is critical in the examination of the relationships between each factor and the catalytic activity. For example, the activity of M-N-Cs is highly dependent upon the synthetic procedure. In their in-depth examination of Fe-PANI-C catalysts, Zelenay and coworkers optimized the synthetic procedure for this catalyst by examining three synthetic steps: the first heating treatment, the acid-leaching step, and the second heating treatment. Their synthetic procedure involved the formation of a PANI-Fe-carbon black suspension that was vacuum-dried onto a carbon support. Then, the intact catalyst underwent a one-hour heating treatment followed by acid leaching and a three-hour heating treatment. The heating treatments were performed at 900 ˚C, which was previously determined to be the optimal temperature to achieve maximum ORR activity ).

To determine the effects of the synthetic steps on the intact catalyst, the Fe-PANI-C catalysts were analyzed by cyclic voltammetry after the first heat treatment (HT1), after the acid leaching (AL), and after the second heat treatment (HT2). Compared to HT1, both the AL and HT2 steps showed increases in the catalytic activity. Additionally, HT2 was found to increase the catalytic activity even more than AL ). Based on these data, Zelenay and coworkers concluded that HT1 likely creates active sites in the catalytic surface, while the AL step removes impurities that block the surface pores, exposing more active sites. However, the acid-leaching step is also known to oxidize some of the catalytic area; thus, the additional increase in activity after HT2 is likely a result of “repairing” the catalytic surface oxidation. With further advancements in catalytic research, PEMFCs will become a viable and advantageous technology for the replacement of combustion engines. The analysis of catalytic activity and reaction rate that cyclic voltammetry provides is critical in comparing novel catalysts to the current highest-performing catalyst: Pt.

A chemical reaction that involves a change in the charge of a chemical species is called an electrochemical reaction. As the name suggests, these reactions involve electron transfer between chemicals. Many of these reactions occur spontaneously when the various chemicals come in contact with one another. In order to force a nonspontaneous electrochemical reaction to occur, a driving force needs to be provided. This is because every chemical species has a relative reduction potential; these values provide information on the ability of the chemical to accept extra electrons. Conversely, we can think of relative oxidation potentials, which indicate the ability of a chemical to give away electrons. It is important to note that these values are relative and need to be defined against a reference reaction. A list of standard reduction potentials (standard indicating measurement against the normal hydrogen electrode, as seen in ) for common electrochemical half-reactions is given in Table \(\PageIndex{3}\). Nonspontaneous electrochemical systems, often called electrolytic cells, as mentioned previously, require a driving force to occur.
This driving force is an applied voltage, which forces the reduction of the chemical that is less likely to gain an electron. A schematic of an electrochemical cell is seen in . Any electrochemical cell must have two electrodes – a cathode, where the reduction half-reaction takes place, and an anode, where the oxidation half-reaction occurs. Examples of half-reactions can be seen in Table \(\PageIndex{3}\). The two electrodes are electrically connected in two ways – the electrolyte solution and the external wire. The electrolyte solution typically includes a small amount of the electroactive analyte (the chemical species that will actually participate in electron transfer) and a large amount of supporting electrolyte (the chemical species that assist in the movement of charge, but are not actually involved in electron transfer). The external wire provides a path for the electrons to travel from the oxidation half-reaction to the reduction half-reaction. As mentioned previously, when an electrolytic (nonspontaneous) reaction is being forced to occur, a voltage needs to be applied. This requires the wires to be connected to a potentiostat. As its name suggests, a potentiostat controls voltage (i.e., “potentio” = potential measured in volts). The components of an electrochemical cell and their functions are also given in Table \(\PageIndex{4}\).

Chronocoulometry, as indicated by the name, is a technique in which the charge is measured (i.e., “coulometry”) as a function of time (i.e., “chrono”). There are various types of coulometry. The one discussed here is potentiostatic coulometry, in which the potential (or voltage) is set and, as a result, charge flows through the cell. The input and output example graphs can be seen in . The input is a potential step that spans the reduction potential of the electroactive species. If this potential step is applied to an electrochemical cell that does not contain an electroactive species, only capacitive current will flow ), in which the ions migrate in such a way that charges are aligned (positive next to negative), but no charge is transferred. Once an electroactive species is introduced into the system, however, faradaic current begins to flow. This current is a result of the electron transfer between the electrode and the electroactive species.

Electroplating is an electrochemical process that utilizes techniques such as chronocoulometry to electrodeposit a charged species from solution as a neutral species on the surface of another material; these species are typically metals. The science of electroplating dates back to the early 1800s, when Luigi Valentino Brugnatelli ) electroplated gold from solution onto silver metal. By the mid-1800s, the process of electroplating was patented by cousins George and Henry Elkington ). The Elkingtons brought electroplated goods to the masses by producing consumer products such as artificial jewelry and other commemorative items ).

Recent scientific studies have taken an interest in electroplating. Trejo and coworkers have demonstrated that a quartz crystal microbalance can be used to measure the change in mass over time during electrodeposition via chronocoulometry. a shows the charge transferred at various potential steps. b shows the change in mass as a function of potential step.
It is clear that the magnitude of the potential step is directly related to the amount of charge transferred and consequently the mass of the electroactive species deposited.The effect of electroplating via chronocoulometry on the localized surface plasmon resonance (LSPR) has been studied on metallic nanoparticles. An LSPR is the collective oscillation of electrons as induced by an electric field ). In various studies by Mulvaney and coworkers, a clear effect on the LSPR frequency was seen as potentials were applied ). In initial studies, no evidence of electroplating was reported. In more recent studies by the same group, it was shown that nanoparticles could be electroplated using chronocoulometry . Such developments can lead to an expansion of the applications of both electroplating and plasmonics.This page titled 2.7: Electrochemistry is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.8: Thermal Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.08%3A_Thermal_Analysis
Thermogravimetric analysis (TGA) and the associated differential thermal analysis (DTA) are widely used for the characterization of both as-synthesized and side-wall functionalized single walled carbon nanotubes (SWNTs). Under oxygen, SWNTs will pyrolyze, leaving any inorganic residue behind. In contrast, in an inert atmosphere, since most functional groups are labile or decompose upon heating while SWNTs are stable up to 1200 °C, any weight loss before 800 °C is used to determine the functionalization ratio of side-wall functionalized SWNTs. The following properties of SWNTs can be determined using TGA: Quantitative determination of these properties is used to define the purity of SWNTs and the extent of their functionalization.

The main function of TGA is to monitor the thermal stability of a material by recording the change in mass of the sample with respect to temperature. shows a simple diagram of the inside of a typical TGA. Inside the TGA there are two pans, a reference pan and a sample pan. The pan material can be either aluminium or platinum. The type of pan used depends on the maximum temperature of a given run. As platinum melts at 1760 °C and aluminium melts at 660 °C, platinum pans are chosen when the maximum temperature exceeds 660 °C. Under each pan there is a thermocouple which reads the temperature of the pan. Before the start of each run, each pan is balanced on a balance arm. The balance arms should be calibrated to compensate for the differential thermal expansion between the arms. If the arms are not calibrated, the instrument will only record the temperature at which an event occurred and not the change in mass at a certain time. To calibrate the system, the empty pans are placed on the balance arms and the pans are weighed and zeroed.

As well as recording the change in mass, the heat flow into the sample pan (differential scanning calorimetry, DSC) and the difference in temperature between the sample and reference pans (differential thermal analysis, DTA) can also be measured. DSC is quantitative and is a measure of the total energy of the system; it is used to monitor the energy released and absorbed during a chemical reaction as the temperature changes. The DTA shows if and how the sample phase changed: if the DTA is constant, there was no phase change. shows a DTA with typical examples of an exotherm and an endotherm. When the sample melts, the DTA dips, which signifies an endotherm. When the sample is melting it requires energy from the system, so the temperature of the sample pan decreases compared with the temperature of the reference pan. When the sample has melted, the temperature of the sample pan increases as the sample releases energy. Finally, the temperatures of the reference and sample pans equilibrate, resulting in a constant DTA. When the sample evaporates, there is a peak in the DTA. This exotherm can be explained in the same way as the endotherm. Typically the sample mass should be between 0.1 and 10 mg, and the heating rate should be 3 to 5 °C/min.

SWNTs are typically synthesized using metal catalysts. Those prepared using the HiPco method contain residual Fe catalyst. The metal (i.e., Fe) is usually oxidized upon exposure to air to the appropriate oxide (i.e., Fe2O3). While it is sometimes unimportant that traces of metal oxide are present during subsequent applications, it is often necessary to quantify their presence.
This is particularly true if the SWNTs are to be used for cell studies, since it has been shown that the catalyst residue is often responsible for observed cellular toxicity.

In order to calculate the mass of catalyst residue, the SWNTs are pyrolyzed under air or O2, and the residue is assumed to be the oxide of the metal catalyst. Water can be added to the raw SWNTs, which enhances the low-temperature catalytic oxidation of carbon. A typical TGA plot of a sample of raw HiPco SWNTs is shown in . The weight gain (of ca. 5%) at 300 °C is due to the formation of metal oxide from the incompletely oxidized catalyst. To determine the mass of iron catalyst impurity in the SWNT, the residual mass must be calculated. The residual mass is the mass that is left in the sample pan at the end of the experiment. From this TGA diagram, it is seen that 70% of the total mass is lost by 400 °C. This mass loss is attributed to the removal of carbon, and the residual mass is 30%. Given that the residue consists of both the oxide and incompletely oxidized metal, the original total mass of residual catalyst in raw HiPco SWNTs is ca. 25% (a short numerical sketch of this arithmetic, together with the functionalization-ratio estimate discussed below, follows this passage).

A major limitation to using SWNTs in practical applications is their poor solubility; SWNTs have little to no solubility in most solvents due to aggregation of the tubes. Aggregation/roping of nanotubes occurs as a result of the high van der Waals binding energy of ca. 500 eV per μm of tube contact. The van der Waals force between the tubes is so great that it takes tremendous energy to pry them apart, making it very difficult to combine nanotubes with other materials, such as in composite applications. The functionalization of nanotubes, i.e., the attachment of “chemical functional groups,” provides a path to overcome these barriers. Functionalization can improve solubility as well as processability, and has been used to align the properties of nanotubes to those of other materials. In this regard, covalent functionalization provides a higher degree of fine-tuning of the chemical and physical properties of SWNTs than non-covalent functionalization.

Functionalized nanotubes can be characterized by a variety of techniques, such as atomic force microscopy (AFM), transmission electron microscopy (TEM), UV-vis spectroscopy, and Raman spectroscopy; however, quantification of the extent of functionalization is important and can be determined using TGA. Because any sample of functionalized SWNTs will have individual tubes of different lengths (and diameters), it is impossible to determine the number of substituents per SWNT. Instead, the extent of functionalization is expressed as the number of substituents per SWNT carbon atom (CSWNT), or more often as CSWNT/substituent, since this is then represented as a number greater than 1. A typical TGA trace for a functionalized SWNT is shown in ; in this case it is polyethyleneimine (PEI) functionalized SWNTs prepared by the reaction of fluorinated SWNTs (F-SWNTs) with PEI in the presence of a base catalyst. In the present case the molecular weight of the PEI is 600 g/mol. When the sample is heated, the PEI thermally decomposes, leaving behind the unfunctionalized SWNTs. The initial mass loss below 100 °C is due to residual water and ethanol used to wash the sample. In the following example the total mass of the sample is 25 mg.

Solid-state 13C NMR of PEI-SWNTs shows the presence of carboxylate substituents that can be attributed to carbamate formation as a consequence of the reversible CO2 absorption by the primary amine substituents of the PEI.
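To make the two quantitative steps above concrete, the sketch below works through (a) converting the residual oxide mass into an iron content and (b) estimating the CSWNT/substituent ratio from the organic mass loss. This is a minimal illustration rather than the authors' own analysis: the helper functions are ours, the residue is assumed to be entirely Fe2O3 (which gives a lower-bound iron content, since the text notes part of the catalyst may be incompletely oxidized), and the 40% organic mass loss used in the second example is a hypothetical number, not one taken from the plot.

```python
# Minimal sketch (not from the original text): quantifying SWNT TGA data.
# Assumptions: the residue is pure Fe2O3, and the organic mass loss in inert
# gas is due entirely to the substituent.

M_FE = 55.845       # g/mol, iron
M_FE2O3 = 159.69    # g/mol, iron(III) oxide
M_C = 12.011        # g/mol, carbon

def iron_content_from_residue(residue_frac):
    """Convert a residual mass fraction (assumed Fe2O3) into an iron mass fraction."""
    return residue_frac * (2 * M_FE / M_FE2O3)

def c_per_substituent(organic_loss_frac, substituent_mw):
    """Estimate C_SWNT per substituent from the fractional organic mass loss."""
    mol_substituent = organic_loss_frac / substituent_mw   # per gram of sample
    mol_carbon = (1.0 - organic_loss_frac) / M_C           # remaining SWNT carbon
    return mol_carbon / mol_substituent

# Example numbers: the 30% residue quoted above, and a *hypothetical* 40%
# organic mass loss for a PEI (600 g/mol) functionalized sample.
print(f"Fe content (lower bound): {iron_content_from_residue(0.30):.1%}")
print(f"C_SWNT per substituent:   {c_per_substituent(0.40, 600.0):.0f}")
```

In practice, the mass-loss and residue percentages would be read directly from the inert-atmosphere and oxidative TGA traces, respectively. With that aside, we return to the CO2 absorption behavior just described.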
Desorption of CO2 is accomplished by heating under argon at 75 °C. The quantity of CO2 absorbed per PEI-SWNT unit may be determined by initially exposing the PEI-SWNT to a CO2 atmosphere to maximize absorption. The gas flow is then switched to either Ar or N2 and the sample heated to liberate the absorbed CO2 without decomposing the PEI or the SWNTs. An example of the appropriate TGA plot is shown in . The sample was heated to 75 °C under Ar, and an initial mass loss due to moisture and/or atmospherically absorbed CO2 is seen. In the temperature range of 25 °C to 75 °C the flow gas was switched from the inert gas to CO2. In this region an increase in mass is seen; the increase is due to CO2 absorption by the PEI (10,000 Da)-SWNT. Switching the carrier gas back to Ar resulted in desorption of the CO2. The total normalized mass of CO2 absorbed by the PEI-SWNT can be calculated from these mass changes.

A binary compound of one or more oxygen atoms with at least one metal atom that forms a structure ≤100 nm in size is classified as a metal oxide (MOx) nanoparticle. MOx nanoparticles have exceptional physical and chemical properties (especially if they are smaller than 10 nm) that are strongly related to their dimensions and to their morphology. These enhanced features are due to the increased surface-to-volume ratio, which has a strong impact on the measured binding energies. Based on theoretical models, the binding or cohesive energy is inversely related to particle size with a linear relationship, \ref{1} .

\[ E_{NP} = E_{bulk} \cdot \left[ 1 - c \cdot r^{-1} \right] \label{1} \]

where ENP and Ebulk are the binding energies of the nanoparticle and the bulk material, respectively, c is a material constant, and r is the radius of the cluster. As seen from \ref{1} , nanoparticles have lower binding energies than the bulk material, which means a lower electron cloud density and therefore more mobile electrons. This is one of the features that has been identified as contributing to a series of physical and chemical properties.

To date, numerous synthetic methods have been developed, the most common of which are presented in Table \(\PageIndex{1}\). These methods have been successfully applied for the synthesis of a variety of materials with 0-D to 3-D complex structures. Among them, the solvothermal methods are by far the most popular ones due to their simplicity. Between the two classes of solvothermal methods, slow decomposition methods, usually called thermal decomposition methods, are preferred over the hot injection methods since they are less complicated, less dangerous and avoid the use of additional solvents.

A general schematic diagram of the stages involved in nanoparticle formation is shown in . As seen, the first step is M-atom generation by dissociation of the metal precursor. The next step is M-complex formation, which takes place before the actual particle assembly stage. Between this step and the final particle formation, oxidation of the activated complex occurs upon interaction with an oxidant. The x-axis is a function of temperature or time, or both, depending on the synthesis procedure.

In all cases, the particles synthesized consist of MOx nanoparticle structures stabilized by one or more types of ligand(s), as seen in . The ligands are usually long-chain organic molecules that have one or more functional groups.
These molecules protect the nanoparticles from attracting each other under van der Waals forces and therefore prevent them from aggregating. Even though often not referred to specifically, all particles synthesized are stabilized by organic (hydrophilic, hydrophobic, or amphoteric) ligands. The detection and understanding of the structure of these ligands can be of critical importance for understanding and controlling the properties of the synthesized nanoparticles.

In this work, we refer to MOx nanoparticles synthesized via slow decomposition of a metal complex. In Table \(\PageIndex{2}\), a number of different MOx nanoparticles are presented, synthesized via metal complex dissociation. Metal–MOx and mixed MOx nanoparticles are not discussed here. A significant number of metal oxides synthesized using slow decomposition are reported in the literature. If we use the periodic table to map the different MOx nanoparticles, we notice that most of the alkali and transition metals generate MOx nanoparticles, while only a few of the poor metals seem to do so using this synthetic route. Moreover, two of the rare earth metals (Ce and Sm) have been reported to successfully give metal oxide nanoparticles via slow decomposition.

Among the different characterization techniques used for defining these structures, transmission electron microscopy (TEM) holds the lion’s share. Nevertheless, most of the modern characterization methods are more important when it comes to understanding the properties of the nanoparticles. X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), nuclear magnetic resonance (NMR), IR spectroscopy, Raman spectroscopy, and thermogravimetric analysis (TGA) methods are systematically used for characterization.

The synthesis of WO3-x nanorods is based on the method published by Lee et al. A slurry mixture of Me3NO∙2H2O, oleylamine, and W(CO)6 was heated up to 250 °C at a rate of 3 °C/min. The mixture was aged at this temperature for 3 hours before cooling down to room temperature. Multiple color variations were observed between 100 - 250 °C, with the final product having a dark blue color. Tungsten oxide nanorods (W18O49, identified by XRD) with a diameter of 7 ± 2 nm and a length of 50 ± 2 nm were obtained after centrifugation of the product solution. A TEM image of the W18O49 nanorods is shown in .

Thermogravimetric analysis (TGA) is a technique widely used for determining the organic and inorganic content of various materials. Its basic principle of operation is the high-precision measurement of weight gain/loss with increasing temperature under inert or reactive atmospheres. Each weight change corresponds to physical (crystallization, phase transformation) or chemical (oxidation, reduction, reaction) processes that take place as the temperature is increased. The sample is placed into a platinum or alumina pan and, along with an empty or standard pan, is placed onto two high-precision balances inside a high-temperature oven. A method for pretreating the samples is selected and the procedure is initiated. Differential scanning calorimetry (DSC) is a technique that usually accompanies TGA and is used for calculating enthalpy changes or heat capacity changes associated with phase transitions and/or ligand cleavage.

The TGA/DSC plot acquired for the ligand decomposition of the WO3-x nanorods is presented in . The sample was heated at a constant rate under a N2 atmosphere, up to 195 °C to remove moisture and then up to 700 °C to remove the oleylamine ligands.
It is important to use an inert gas for such a study to avoid any premature oxidation and/or capping agent combustion. The 26.5% weight loss is due to oleylamine evaporation, which corresponds to about 0.004 moles per gram of sample. After isothermal heating at 700 °C for 25 min, the flow was switched to air to oxidize the ligand-free WO3-x to WO3. From the DSC curve, several changes in the weight-corrected heat flow were noted. The heat flow increase during the WO3-x to WO3 oxidation is proportional to the crystal-phase defects (or W atoms of oxidation state +5) and can be used for qualitative comparisons between different WOx nanoparticles. The detailed information about the procedure used to acquire the TGA/DSC plot shown in is as follows.

Fourier transform infrared spectroscopy (FTIR) is the most popular spectroscopic method used for characterizing organic and inorganic compounds. The basic modification of an FTIR relative to a regular IR instrument is a device called an interferometer, which generates a signal that allows very fast IR spectrum acquisition. To do so, the generated interferogram has to be “expanded” using a Fourier transformation to generate a complete IR frequency spectrum. In the case of FTIR transmission studies, the intensity of the transmitted signal is measured and the IR fingerprint is generated, \ref{2} .

\[ T = \frac{I}{I_{b}} = e^{-c \varepsilon l} \label{2} \]

where I is the intensity of the sample, Ib is the intensity of the background, c is the concentration of the compound, ε is the molar extinction coefficient, and l is the distance that light travels through the material. A transformation from transmission to absorption spectra is usually performed, and the actual concentration of the component can be calculated by applying the Beer-Lambert law, \ref{3}

\[ A = -\ln(T) = c \varepsilon l \label{3} \]

A qualitative IR-band map is presented in . The absorption bands between 4000 and 1600 cm-1 represent the group frequency region and are used to identify the stretching vibrations of different bonds. At lower frequencies (from 1600 to 400 cm-1), vibrations due to bond bending occur upon IR excitation and therefore are usually not taken into account.

TGA/DSC is a powerful tool for identifying the different compounds evolved during controlled pyrolysis and therefore provides qualitative and quantitative information about the volatile components of the sample. In metal oxide nanoparticle synthesis, TGA/DSC-FTIR studies can provide such qualitative and quantitative information about the volatile compounds associated with the nanoparticles.

The TGA–FTIR results presented below were acquired using a Q600 Simultaneous TGA/DSC (SDT) instrument online with a Nicolet 5700 FTIR spectrometer. This system has digital mass flow control and two gas inlets, giving the capability to switch the reacting gas during each run. It allows simultaneous weight change and differential heat flow measurements up to 1500 °C, while at the same time the outflow line is connected to the FTIR for gas-phase compound identification. Gram–Schmidt thermographs were usually constructed to present the species evolution with time in three dimensions.

Selected IR spectra are presented in . Four regions with intense peaks are observed. The bands between 4000 and 3550 cm-1 are due to O-H bond stretching assigned to H2O, which is always present, and to N-H stretching assigned to the amine group of oleylamine.
The bands between 2400 and 2250 cm-1 are due to O=C=O stretching, those between 1900 and 1400 cm-1 are mainly due to C=O stretching, and those between 800 and 400 cm-1 cannot be resolved, as explained previously. The peak intensity evolution with time can be more easily observed in and . As seen, CO2 evolution increases significantly with time, especially after switching the flow from N2 to air. H2O seems to be present in the outflow stream up to 700 °C, while the majority of the N-H amine peaks seem to disappear at about 75 min. C=N compounds are not expected to be present in the stream, which leaves the bands between 1900 and 1400 cm-1 assigned to C=C and C=O stretching vibrations. Unsaturated olefins resulting from cracking of the oleylamine molecule are possible at elevated temperatures, as is the presence of CO, especially under a N2 atmosphere.

From the above compound identification, we can summarize and propose the following points for TGA-FTIR. First, more complex ligands, containing aromatic rings and perhaps other functional groups, may provide more insight into the ligand-to-MOx interaction. Second, the presence of CO and CO2 even under N2 flow means that complete O2 removal from the TGA and the FTIR cannot be achieved under these conditions. Even though the system was equilibrated for more than an hour, traces of O2 remain, which introduce errors into the calculations.

Metal compounds and complexes are invaluable precursors for the chemical vapor deposition (CVD) of metal and non-metal thin films. In general, the precursor compounds are chosen on the basis of their relative volatility and their ability to decompose to the desired material under a suitable temperature regime. Unfortunately, many readily obtainable (commercially available) compounds are not of sufficient volatility to make them suitable for CVD applications. Thus, a prediction of the volatility of a metal-organic compound as a function of its ligand identity and molecular structure would be desirable in order to determine the suitability of such compounds as CVD precursors. Equally important would be a method to determine the vapor pressure of a potential CVD precursor as well as its optimum temperature of sublimation.

For organic compounds, a rough proportionality has been observed between a compound’s melting point and its sublimation enthalpy; however, significant deviations are observed for inorganic compounds. Enthalpies of sublimation for metal-organic compounds have previously been determined through a variety of methods, most commonly from vapor pressure measurements using complex experimental systems such as Knudsen effusion, temperature drop microcalorimetry and, more recently, differential scanning calorimetry (DSC). However, the measured values are highly dependent on the experimental procedure utilized. For example, the reported sublimation enthalpy of Al(acac)3 (where M = Al, n = 3) varies from 47.3 to 126 kJ/mol. Thermogravimetric analysis offers a simple and reproducible method for the determination of the vapor pressure of a potential CVD precursor as well as its enthalpy of sublimation.

The enthalpy of sublimation is a quantitative measure of the volatility of a particular solid. This information is useful when considering the feasibility of a particular precursor for CVD applications.
An ideal sublimation process involves no compound decomposition and only results in a solid-gas phase change, i.e., \ref{4}

\[ [M(L)_{n}]_{(solid)} \rightarrow [M(L)_{n}]_{(vapor)} \label{4} \]

Since phase changes are thermodynamic processes following zero-order kinetics, the evaporation rate, or rate of mass loss by sublimation (msub), is constant at a given temperature (T), \ref{5} . Therefore, the msub values may be directly determined from the linear mass loss of the TGA data in the isothermal regions.

\[ m_{sub} = \frac{\Delta [\text{mass}]}{\Delta t} \label{5} \]

The thermogravimetric and differential thermal analysis of the compound under study is performed to determine the temperature of sublimation and thermal events such as melting. A typical TG/DTA plot for a gallium chalcogenide cubane compound is shown in .

In a typical experiment, 5 - 10 mg of sample is used with a heating rate of ca. 5 °C/min, under either a 200 - 300 mL/min inert (N2 or Ar) gas flow or a dynamic vacuum (ca. 0.2 Torr if using a typical vacuum pump). The argon flow rate was set to 90.0 mL/min and was carefully monitored to ensure a steady flow rate during runs and an identical flow rate from one set of data to the next.

Once the temperature range is defined, the TGA is run with a preprogrammed temperature profile, as shown in . It has been found that sufficient data can be obtained if each isothermal mass loss is monitored over a period of between 7 and 10 minutes before moving to the next temperature plateau. In all cases it is important to confirm that the mass loss at a given temperature is linear. If it is not, this can be due to either (a) temperature stabilization not having occurred, in which case longer times should be spent at each isotherm, or (b) decomposition occurring along with sublimation, in which case lower temperature ranges must be used. The slope of each mass drop is measured and used to calculate sublimation enthalpies, as discussed below.

As an illustrative example, the data for the mass loss of Cr(acac)3 (where M = Cr, n = 3) at three isothermal regions under a constant argon flow are displayed in . Each isothermal data set should exhibit a linear relation. As expected for an endothermic phase change, the linear slope, equal to msub, increases with increasing temperature.

Samples of iron acetylacetonate, Fe(acac)3 (where M = Fe, n = 3), may be used as a calibration standard through ΔHsub determinations before each day of use. If the measured value of the sublimation enthalpy for Fe(acac)3 is found to differ from the literature value by more than 5%, the sample is re-analyzed and the flow rates are optimized until an appropriate value is obtained. Only after such a calibration is optimized should other complexes be analyzed.
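As an illustration of how the isothermal mass-loss slopes described above are turned into msub values via \ref{5}, the short sketch below fits a straight line to each isothermal segment of a TGA trace. It is a minimal example, not the authors' analysis script: the segment data are synthetic placeholders standing in for arrays that would normally be sliced out of the exported instrument file.

```python
import numpy as np

# Synthetic isothermal segments: temperature (K) -> mass (mg) sampled over time (min).
# In practice these arrays would be sliced out of the exported TGA file.
t = np.linspace(0.0, 10.0, 50)
segments = {
    383.0: 10.00 - 0.0021 * t,
    393.0: 9.95 - 0.0043 * t,
    403.0: 9.88 - 0.0082 * t,
}

m_sub = {}
for T_K, mass_mg in segments.items():
    # Linear fit of mass vs. time; the negative of the slope is m_sub (eq. 5).
    slope, intercept = np.polyfit(t, mass_mg, 1)
    m_sub[T_K] = -slope
    print(f"T = {T_K:.0f} K: m_sub = {-slope:.4f} mg/min")

# A poor (non-linear) fit for any segment would signal incomplete temperature
# stabilization or concurrent decomposition, the two failure modes noted above.
```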
It is important to note that while small amounts (< 10%) of involatile impurities will not interfere with the ΔHsub analysis, competitively volatile impurities will produce higher apparent sublimation rates. There are also various experimental factors that must be controlled in order to obtain meaningful (useful) msub data from TGA data.

The basis of analyzing isothermal TGA data involves using the Clausius-Clapeyron relation between vapor pressure (p) and temperature (T), \ref{6} , where ∆Hsub is the enthalpy of sublimation and R is the gas constant (8.314 J/K·mol).

\[ \frac{d \ln(p)}{dT} = \frac{\Delta H_{sub}}{RT^{2}} \label{6} \]

Since msub data are obtained from TGA data, it is necessary to utilize the Langmuir equation, \ref{7} , which relates the vapor pressure of a solid to its sublimation rate.

\[ p = \left[ \frac{2\pi RT}{M_{W}} \right]^{0.5} m_{sub} \label{7} \]

After integrating \ref{6} in log form, substituting in \ref{7} , and consolidating, one obtains the useful equality, \ref{8} .

\[ \log(m_{sub} \sqrt{T}) = \frac{-0.0522(\Delta H_{sub})}{T} + \left[ \frac{0.0522(\Delta H_{sub})}{T_{sub}} - \frac{1}{2} \log \left( \frac{1306}{M_{W}} \right) \right] \label{8} \]

Hence, the linear slope of a log(msubT1/2) versus 1/T plot yields ΔHsub. An example of a typical plot and the corresponding ΔHsub value is shown in . In addition, the y intercept of such a plot provides a value for Tsub, the calculated sublimation temperature at atmospheric pressure (a short numerical sketch of this fitting procedure is given below).

Table \(\PageIndex{3}\) lists typical results using the TGA method for a variety of metal β-diketonates, while Table \(\PageIndex{4}\) lists similar values obtained for gallium chalcogenide cubane compounds.

A common method used to enhance precursor volatility, and the corresponding efficacy for CVD applications, is to incorporate partially or fully fluorinated ligands. As may be seen from Table \(\PageIndex{3}\), this substitution does result in a significant decrease in ΔHsub, and thus increased volatility. The observed enhancement in volatility may be rationalized either by increased intermolecular repulsion due to the additional lone pairs, or by the reduced polarizability of fluorine (relative to hydrogen), which causes fluorinated ligands to have weaker intermolecular attractive interactions.

The entropy of sublimation is readily calculated from the ΔHsub and the calculated Tsub data, \ref{9}

\[ \Delta S_{sub} = \frac{\Delta H_{sub}}{T_{sub}} \label{9} \]

Table \(\PageIndex{3}\) and Table \(\PageIndex{4}\) show typical values for metal β-diketonate compounds and gallium chalcogenide cubane compounds, respectively. The range observed for gallium chalcogenide cubane compounds (ΔSsub = 330 ± 20 J/K·mol) is slightly larger than the values reported for the metal β-diketonate compounds (ΔSsub = 130 - 330 J/K·mol) and organic compounds (100 - 200 J/K·mol), as would be expected for a transformation giving translational and internal degrees of freedom. For any particular chalcogenide, i.e., [(R)GaS]4, the lowest ΔSsub values are observed for the Me3C derivatives, and the largest ΔSsub for the Et2MeC derivatives, see Table \(\PageIndex{4}\). This is in line with the relative increase in the degrees of freedom for the alkyl groups in the absence of crystal packing forces.

While the sublimation temperature is an important parameter for determining the suitability of a potential precursor compound for CVD, it is often preferable to express a compound's volatility in terms of its vapor pressure.
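As promised above, here is a minimal sketch of how \ref{7} and the linearization behind \ref{8} might be applied to a set of msub values to obtain ΔHsub and a vapor pressure. It is a hedged illustration, not the published analysis code: the msub values and molecular weight are invented, and the calculation works in SI units (extracting ΔHsub directly from the slope as −slope × 2.303R) rather than reproducing the mixed-unit numerical constants that appear in \ref{8}.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Hypothetical per-area sublimation rates from the isothermal fits (kg s^-1 m^-2),
# together with their temperatures (K); invented numbers for illustration only.
T = np.array([383.0, 393.0, 403.0])
m_sub = np.array([2.1e-6, 4.3e-6, 8.2e-6])

# Linearization of the combined Clausius-Clapeyron/Langmuir expression (eq. 8):
# log10(m_sub * sqrt(T)) is linear in 1/T with slope -dH_sub / (2.303 R).
y = np.log10(m_sub * np.sqrt(T))
slope, intercept = np.polyfit(1.0 / T, y, 1)
dH_sub = -slope * 2.303 * R            # J/mol
print(f"dH_sub = {dH_sub / 1000:.1f} kJ/mol")

# Langmuir equation (eq. 7) in SI units: p in Pa for a given molecular weight (kg/mol).
MW = 0.350  # kg/mol, hypothetical precursor
p_Pa = np.sqrt(2 * np.pi * R * T / MW) * m_sub
print("p (Torr):", p_Pa / 133.322)
```

This, of course, raises the question of how such vapor pressures are obtained for solids in the first place.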
However, while it is relatively straightforward to determine the vapor pressure of a liquid or gas, measurements on solids are difficult (e.g., requiring use of the isoteniscopic method) and few laboratories are equipped to perform such experiments. Given that TGA apparatus are increasingly accessible, it would therefore be desirable to have a simple method for vapor pressure determination that can be accomplished on a TGA.

Substitution of \ref{5} into \ref{8} allows for the calculation of the vapor pressure (p) as a function of temperature (T). For example, the calculated temperature dependence of the vapor pressure for [(Me3C)GaS]4 is shown in . The calculated vapor pressures at 150 °C for metal β-diketonate compounds and gallium chalcogenide cubane compounds are given in Table \(\PageIndex{3}\) and Table \(\PageIndex{4}\). The TGA approach shows reasonable agreement with previous measurements. For example, the value calculated for Fe(acac)3 (2.78 Torr at 113 °C) is slightly higher than that measured directly by the isoteniscopic method (0.53 Torr at 113 °C); however, it should be noted that measurements using the sublimation bulb method gave much lower values (8 × 10-3 Torr at 113 °C). The TGA method offers a suitable alternative to conventional (direct) measurements of vapor pressure.

Differential scanning calorimetry (DSC) is a technique used to measure the difference in the heat flow rate of a sample and a reference over a controlled temperature range. These measurements are used to create phase diagrams and gather thermoanalytical information such as transition temperatures and enthalpies.

DSC was developed in 1962 by Perkin-Elmer employees Emmett Watson and Michael O’Neill and was introduced at the Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy. The equipment for this technique became available to purchase in 1963 and has evolved to control temperatures more accurately and take measurements more precisely, ensuring repeatability and high sensitivity.

Phase transitions refer to the transformation from one state of matter to another. Solids, liquids, and gases are changed to other states as the thermodynamic system is altered, thereby affecting the sample and its properties. Measuring these transitions and determining the properties of the sample is important in many industrial settings and can be used to ensure purity and determine composition (such as with polymer ratios). Phase diagrams can be used to clearly demonstrate the transitions in graphical form, helping visualize the transition points and the different states as the thermodynamic system is changed.

Prior to DSC, differential thermal analysis (DTA) was used to gather information about the transition states of materials. In DTA, the sample and reference are heated simultaneously with the same amount of heat and the temperature of each is monitored independently. The difference between the sample temperature and the reference temperature gives information about the exothermic or endothermic transitions occurring in the sample. This strategy was used as the foundation for DSC, which sought to measure the difference in energy needed to keep the temperatures the same, instead of measuring the difference in temperature resulting from the same amount of energy. Instead of measuring temperature changes as heat is applied, as in DTA, DSC measures the amount of heat that is needed to increase the temperatures of the sample and reference across a temperature gradient.
The sample and reference are kept at the same temperature as the temperature is changed across the gradient, and the differing amounts of heat required to keep the two temperatures synchronized are measured. As the sample undergoes phase transitions, more or less heat is needed, which allows phase diagrams to be created from the data. Additionally, specific heat, glass transition temperature, crystallization temperature, melting temperature, and oxidative/thermal stability, among other properties, can be measured using DSC.

DSC is often used in industrial manufacturing to ensure sample purity and confirm compositional analysis. It is also used in materials research, where it provides information about the properties and composition of unknown materials. DSC has also been used in the food and pharmaceutical industries, providing characterization and enabling the fine-tuning of certain properties. The stability of proteins and folding/unfolding information can also be measured with DSC experiments.

The sample and reference cells (also known as pans), each enclosing their respective materials, are contained in an insulated adiabatic chamber. The cells can be made of a variety of materials, such as aluminum, copper, gold, and platinum; the choice is dictated by the necessary upper temperature limit. A variable heating element around each cell transfers heat to the sample, causing both cells’ temperatures to rise in coordination with each other. A temperature monitor measures the temperature of each cell, and a microcontroller controls the variable heating elements and reports the differential power required to heat the sample versus the reference. A typical setup, including a computer running the control software, is shown in .

With advancements in DSC equipment, several different modes of operation now exist that enhance the applications of DSC. Scanning mode typically refers to conventional DSC, which uses a linear increase or decrease in temperature. An example of an additional mode often found in newer DSC equipment is an isothermal scan mode, which keeps the temperature constant while the differential power is measured. This allows for stability studies at constant temperature, which is particularly useful in shelf-life studies for pharmaceutical drugs.

As with practically all laboratory equipment, calibration is required. Calibration substances, typically pure metals such as indium or lead, are chosen that have clearly defined transitions to ensure that the measured transitions correlate with the literature values.

Sample preparation mostly consists of determining the optimal weight to analyze. There needs to be enough of the sample to accurately represent the material, but the change in heat flow should typically be between 0.1 - 10 mW. The sample should be kept as thin as possible and cover as much of the base of the cell as possible. It is typically better to cut a slice of the sample rather than crush it into a thin layer. The correct reference material also needs to be determined in order to obtain useful data.

DSC curves (e.g., ) typically consist of the heat flow plotted versus the temperature. These curves can be used to calculate the enthalpies of transitions (ΔH), \ref{10} , by integrating the peak of the state transition, where K is the calorimetric constant and A is the area under the curve (a short numerical sketch of this integration is given below).

\[ \Delta H = KA \label{10} \]

Common error sources apply, including user and balance errors and improper calibration.
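As flagged above, here is a brief numerical sketch of the peak integration behind \ref{10}: it integrates a baseline-corrected peak over time and scales the area by a calorimetric constant and the sample mass. The Gaussian "peak", the value of K, and the sample mass are all invented for illustration; a real analysis would rely on the instrument software's baseline handling and calibration.

```python
import numpy as np

# Synthetic baseline-corrected DSC peak: heat flow (mW) vs. temperature (°C),
# swept at a constant heating rate. All numbers are purely illustrative.
rate = 10.0                                           # °C/min
T = np.linspace(140.0, 180.0, 400)                    # °C
heat_flow = 12.0 * np.exp(-((T - 160.0) / 3.0) ** 2)  # mW, a made-up melting peak

# Convert the temperature axis to time and integrate the peak (trapezoid rule).
time_s = (T - T[0]) / rate * 60.0                     # s
area_mJ = np.sum(0.5 * (heat_flow[1:] + heat_flow[:-1]) * np.diff(time_s))  # mW*s = mJ

K = 1.0          # calorimetric constant from calibration (taken as 1 here)
mass_mg = 5.0    # sample mass
delta_H = K * area_mJ / mass_mg                       # J/g
print(f"Transition enthalpy ~ {delta_H:.1f} J/g")
```

With that aside, it is worth returning to the practical error sources mentioned above.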
Incorrect choice of reference material and improper quantity of sample are frequent errors. Additionally, contamination and the way the sample is loaded into the cell affect the results.

Differential scanning calorimetry (DSC), at the most fundamental level, is a thermal analysis technique used to track changes in the heat capacity of a substance. To identify this change in heat capacity, DSC measures heat flow as a function of temperature and time within a controlled atmosphere. The measurements provide a quantitative and qualitative look into the physical and chemical alterations of a substance related to endothermic or exothermic events.

The discussion here is focused on the analysis of polymers; therefore, it is important to have an understanding of polymer properties and how heat capacity is measured within a polymer.

A polymer is, essentially, a chemical compound whose molecular structure is a composition of many monomer units bonded together. The physical properties of a polymer and, in turn, its thermal properties are determined by this ordered arrangement of the various monomer units that compose the polymer. The ability to correctly and effectively interpret differential scanning calorimetry data for any one polymer stems from an understanding of the polymer’s composition. As such, some of the more essential aspects of polymers and their structures are briefly addressed below.

One aspect of the ordered arrangement of a polymer is its degree of polymerization, or, more simply, the number of repeating units within a polymer chain. The degree of polymerization plays a role in determining the molecular weight of the polymer. The molecular weight of the polymer, in turn, plays a role in determining various thermal properties of the polymer, such as the observed melting temperature.

Related to the degree of polymerization is a polymer’s dispersity, i.e., the uniformity of size among the chains that compose the polymer. The more uniform a series of molecules, the more monodisperse the polymer; the more non-uniform a series of molecules, the more polydisperse the polymer. Increases in initial transition temperatures follow an increase in polydispersity. This increase is due to higher intermolecular forces and polymer flexibility in comparison to more uniform molecules.

Also relevant to a polymer’s overall composition is the presence of cross-linking between chains. The ability for rotational motion within a polymer decreases as more chains become cross-linked, meaning initial transition temperatures will increase due to the greater amount of energy needed to overcome this restriction. In turn, if a polymer is composed of stiff functional groups, such as carbonyl groups, the flexibility of the polymer will drastically decrease, leading to higher transitional temperatures, as more energy will be required to break these bonds. The same is true if the backbone of a polymer is composed of stiff molecules, like aromatic rings, as this also causes the flexibility of the polymer to decrease. However, if the backbone or internal structure of the polymer is composed of flexible groups, such as aliphatic chains, then the packing of the polymer decreases and its flexibility increases. Thus, transitional temperatures will be lower, as less energy is needed to break apart these more flexible polymers.

Lastly, the actual bond structure (i.e., single, double, triple) and chemical properties of the monomer units will affect the transitional temperatures.
For example, molecules more predisposed towards strong intermolecular forces, such as those with greater dipole-dipole interactions, will require higher transitional temperatures in order to provide enough energy to break these interactions.

In terms of the relationship between heat capacity and polymers: the heat capacity is understood to be the amount of energy a unit or system can absorb before its temperature is raised by one degree; further, in all polymers, there is an increase in heat capacity with an increase in temperature. This is due to the fact that, as polymers are heated, the molecules of the polymer undergo greater levels of rotation and vibration which, in turn, contribute to an increase in the internal energy of the system and thus an increase in the heat capacity of the polymer.

Knowing the composition of a polymer makes it easier not only to pre-emptively hypothesize the results of any DSC analysis but also to troubleshoot why DSC data do not seem to corroborate the apparent properties of a polymer. Note, too, that there are many variations in DSC techniques and types as they relate to the characterization of polymers. These differences are discussed below.

The composition of a prototypical, unmodified DSC includes two pans. One is an empty reference plate and the other contains the polymer sample. Within the DSC system is also a thermoelectric disk. Calorimetric measurements are then taken by heating both the sample and the empty reference plate at a controlled rate, say 10 °C/min, through the thermoelectric disk. A purge gas is admitted through an orifice in the system and is preheated by circulation through a heating block before entering the system. Thermocouples within the thermoelectric disk then register the temperature difference between the two plates. Once a temperature difference between the two plates is measured, the DSC system will alter the applied heat to one of the pans so as to keep the two pans at the same temperature. A cross-section of a common heat flux DSC instrument is shown in .

The resulting plot is one in which the heat flow is understood to be a function of temperature and time. As such, the slope at any given point is proportional to the heat capacity of the sample. The plot as a whole, however, is representative of thermal events within the polymer. The orientation of peaks or stepwise movements within the plot, therefore, lends itself to interpretation in terms of thermal events.

To interpret these events, it is important to define the thermodynamic system of the DSC instrument. For most heat flux systems, the thermodynamic system is understood to be only the sample. This means that when, for example, an exothermic event occurs, heat from the polymer is released to the outside environment and a positive change is measured on the plot. As such, all exothermic events will be positive shifts within the plot while all endothermic events will be negative shifts within the plot. However, this convention can be flipped within the DSC system, so be sure to pay attention to the orientation of your plot as “exo up” or “exo down.” See for an example of a standard DSC plot of the polymer poly(ethylene terephthalate) (PET). By understanding this relationship within the DSC system, the ability to interpret thermal events, such as the ones described below, becomes all the more approachable.

As previously stated, a typical plot created via DSC will be a measure of heat flow vs. temperature.
If the polymer undergoes no thermal processes, the plot of heat flow vs. temperature will have zero slope. If this is the case, then the heat capacity of the polymer is proportional to the distance between the zero-sloped line and the x-axis. However, in most instances, the heat capacity is taken from the slope of the resulting heat flow vs. temperature plot. Note that any thermal alteration to a polymer will result in a change in the polymer’s heat capacity; therefore, all DSC plots with a non-zero slope indicate that some thermal event must have occurred.

However, it is also possible to directly measure the heat capacity of a polymer as it undergoes some phase change. To do so, a heat capacity vs. temperature plot is created. In doing so, it becomes easier to zero in on and analyze a weak thermal event in a reproducible manner. To measure heat capacity as a function of increasing temperature, it is necessary to divide all values of a standard DSC plot by the measured heating rate (a short sketch of this conversion, and of reading a transition from the resulting curve, follows this passage).

For example, say a polymer has undergone a subtle thermal event at a relatively low temperature. To confirm that a thermal event is occurring, zero in on the temperature range in which the event was measured to have occurred and create a heat capacity vs. temperature plot. The thermal event becomes immediately identifiable by the presence of a change in the polymer’s heat capacity, as shown in .

As a polymer is continually heated within the DSC system, it may reach the glass transition: a temperature range over which a polymer can undergo a reversible transition between a brittle and a viscous state. The temperature at which this reversible transition can occur is understood to be the glass transition temperature (Tg); however, note that the transition does not occur suddenly at one temperature but, instead, takes place slowly across a range of temperatures.

Once a polymer is heated to the glass transition temperature, it will enter a molten state. Upon cooling the polymer, it loses its elastic properties and instead becomes brittle, like glass, due to a decrease in chain mobility. Should the polymer continue to be heated above the glass transition temperature, it will become soft due to the increased heat energy inducing different forms of translational and segmental motion within the polymer, promoting chain mobility. This allows the polymer to be deformed or molded without breaking.

Upon reaching the glass transition range, the heat capacity of the polymer will change, typically becoming higher. In turn, this will produce a change in the DSC plot: the system will begin heating the sample pan at a different rate than the reference pan to accommodate this change in the polymer’s heat capacity. An example of the glass transition as measured by DSC is shown in . The glass transition has been highlighted, and the glass transition temperature is understood to be the mid-point of the transitional range.

While the DSC instrument will capture a glass transition, the glass transition temperature cannot, in actuality, be exactly defined with a standard DSC. The glass transition is a property that is completely dependent on the extent to which the polymer is heated or cooled. As such, the glass transition is dependent on the applied heating or cooling rate of the DSC system. Therefore, the glass transition of the same polymer can have different values when measured on separate occasions.
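As noted above, here is a small sketch of the heat-capacity conversion and of reading a glass transition as the midpoint of the Cp step. The synthetic trace, the tanh-shaped step, and all of the numbers are invented placeholders, not instrument output or the authors' procedure.

```python
import numpy as np

# Synthetic DSC trace around a glass transition (all numbers illustrative).
rate_K_per_s = 10.0 / 60.0                    # 10 °C/min heating rate
mass_g = 0.005                                # 5 mg sample
T = np.linspace(50.0, 110.0, 600)             # °C
heat_flow_mW = 4.0 + 1.5 * 0.5 * (1 + np.tanh((T - 80.0) / 2.0))  # step near 80 °C

# Heat capacity: heat flow divided by heating rate and sample mass.
cp = (heat_flow_mW * 1e-3) / rate_K_per_s / mass_g    # J/(g K)

# Midpoint (half-height) estimate of Tg from the Cp step.
cp_low, cp_high = cp[:50].mean(), cp[-50:].mean()
midpoint = 0.5 * (cp_low + cp_high)
Tg = T[np.argmin(np.abs(cp - midpoint))]
print(f"Estimated Tg (midpoint) ~ {Tg:.1f} °C")
```

Whatever value such a calculation returns is still tied to the heating rate that produced the data, which is exactly the dependence discussed next.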
For example, if the applied cooling rate is lower during a second trial, then the measured glass transition temperature will also be lower.

However, having a general knowledge of the glass transition temperature makes it possible to hypothesize about the polymer’s chain length and structure. For example, the chain length of a polymer will affect the number of van der Waals or chain-entanglement interactions that occur. These interactions will, in turn, determine just how resistant the polymer is to increasing heat. Therefore, the temperature at which Tg occurs is correlated with the magnitude of the chain interactions. In turn, if the glass transition of a polymer is consistently shown to occur at lower temperatures, it may be possible to infer that the polymer has flexible functional groups that promote chain mobility.

Should a polymer sample continue to be heated beyond the glass transition temperature range, it becomes possible to observe crystallization of the polymer sample. Crystallization is understood to be the process by which polymer chains form ordered arrangements with one another, thereby creating crystalline structures.

Essentially, before the glass transition range, the polymer does not have enough energy from the applied heat to induce mobility within the polymer chains; however, as heat is continually added, the polymer chains begin to have greater and greater mobility. The chains eventually undergo translational, rotational, and segmental motion as well as stretching, disentangling, and unfolding. Finally, a peak temperature is reached and enough heat energy has been applied to the polymer that the chains are mobile enough to move into very ordered parallel, linear arrangements. At this point, crystallization begins. The temperature at which crystallization begins is the crystallization temperature (Tc).

As the polymer forms crystalline arrangements, it will release heat, since intermolecular interactions between chains are being formed. Because heat is being released, the process is exothermic and the DSC system will lower the amount of heat being supplied to the sample plate relative to the reference plate so as to maintain the same temperature in the two plates. As a result, a positive amount of energy is released to the environment and an increase in heat flow is measured in an “exo up” DSC system, as seen in . The maximum point on the curve is known to be the Tc of the polymer, while the area under the curve is the latent energy of crystallization, i.e., the change in the heat content of the system associated with the amount of heat energy released by the polymer as it undergoes crystallization.

The degree to which crystallization can be measured by the DSC is dependent not only on the measurement conditions but also on the polymer itself. For example, in the case of a polymer with very random ordering, i.e., an amorphous polymer, crystallization will not occur at all.

Knowing the crystallization temperature of the polymer makes it possible to hypothesize about the polymer’s chain structure, average molecular weight, tensile strength, impact strength, resistance to solvents, etc.
For example, if the polymer tends to have a lower crystallization temperature and a small latent heat of crystallization, it becomes possible to assume that the polymer may already have a chain structure that is highly linear, since not much energy is needed to induce linear crystalline arrangements.

In turn, by obtaining crystallization data via DSC, it becomes possible to determine the percentage of crystalline structures within the polymer, or the degree of crystallinity. To do so, compare the latent heat of crystallization, as determined by the area under the crystallization curve, to the latent heat of a standard sample of the same polymer with a known degree of crystallinity (a brief numerical sketch of this comparison is given at the end of this passage). Knowledge of the polymer sample’s degree of crystallinity also provides an avenue for hypothesizing about the composition of the polymer. For example, a very high degree of crystallinity may suggest that the polymer contains small, brittle molecules that are very ordered.

As the applied heat pushes the temperature of the system beyond Tc, the polymer begins to approach the thermal transition associated with melting. In the melting phase, the applied heat provides enough energy to break apart the intermolecular bonds holding together the crystalline structure, undoing the polymer chains’ ordered arrangements. As this occurs, the temperature of the sample plate does not change, as the applied heat is no longer being used to raise the temperature but instead to break apart the ordered arrangements.

As the sample melts, the temperature slowly increases, as less and less of the applied heat is needed to break apart crystalline structures. Once all the polymer chains in the sample are able to move around freely, the temperature of the sample is said to have reached the melting temperature (Tm). Upon reaching the melting temperature, the applied heat begins exclusively raising the temperature of the sample; however, the heat capacity of the polymer will have increased upon transitioning from the solid crystalline phase to the melt phase, meaning the temperature will increase more slowly than before.

Since, during the endothermic melting process of the polymer, most of the applied heat is being absorbed by the polymer, the DSC system must substantially increase the amount of heat applied to the sample plate so as to maintain the same temperature in the sample and reference plates. Once the melting temperature is reached, however, the applied heat of the sample plate decreases to match the applied heat of the reference plate. As such, since heat is being absorbed from the environment, the resulting “exo up” DSC plot will show a negative peak, as seen in , where the lowest point is understood to be the melt-phase temperature. The area under the curve is, in turn, understood to be the latent heat of melting, or, more precisely, the change in the heat content of the system associated with the amount of heat energy absorbed by the polymer in order to melt.

Once again, knowing the melting range of the polymer allows insight to be gained into the polymer’s average molecular weight, composition, and other properties.
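As flagged above, a minimal sketch of the degree-of-crystallinity comparison: it simply ratios the measured latent heat against that of a reference of known crystallinity. The numbers, and the use of a fully crystalline reference value, are illustrative assumptions rather than values from the text.

```python
def degree_of_crystallinity(dH_sample_J_per_g, dH_reference_J_per_g,
                            reference_crystallinity=1.0):
    """Estimate % crystallinity by comparing the measured latent heat (peak area)
    with that of a reference sample of known crystallinity."""
    return 100.0 * (dH_sample_J_per_g / dH_reference_J_per_g) * reference_crystallinity

# Illustrative example: a measured latent heat of 45 J/g against a hypothetical
# 100%-crystalline reference value of 140 J/g.
print(f"{degree_of_crystallinity(45.0, 140.0):.0f}% crystalline")
```

With that aside, we return to what the melting range itself reveals about the polymer.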
For example, the greater the molecular weight or the stronger the intramolecular attraction between functional groups within cross-linked polymer chains, the more heat energy will be needed to induce melting in the polymer.

While standard DSC is useful for characterizing polymers across a broad temperature range in a relatively quick manner and has user-friendly software, it still has a series of limitations, the main one being that it is highly operator dependent. These limitations can, at times, reduce the accuracy of the measurements of Tg, Tc, and Tm described in the previous section. For example, when using a synthesized polymer that is composed of multiple blends of different monomer compounds, it can become difficult to interpret the various transitions of the polymer due to overlap. In turn, some transitional events are completely dependent on what the user decides to input for the heating or cooling rate.

To resolve some of the limitations associated with standard DSC, there exists modulated DSC (MDSC). MDSC not only uses a linear heating rate like standard DSC, but also a sinusoidal, or modulated, heating rate. In doing so, it is as though the MDSC is performing two simultaneous experiments on the sample.

What is meant by a modulated heating rate is that the MDSC system will vary the heating rate of the sample over a small range across some modulation period. However, while the rate of temperature change is sinusoidal, the temperature is still ultimately increasing across time, as indicated in . In turn, also shows the sinusoidal heating rate as a function of time overlaying the linear heating rate of standard DSC; here the linear heating rate is 2 °C/min and the modulated heating rate varies from roughly 0.1 °C/min to 3.8 °C/min over time.

By providing two heating rates, a linear and a modulated one, MDSC is able to measure more accurately how heating rates affect the rate of heat flow within a polymer sample. As such, MDSC offers a means to eliminate the applied-heating-rate aspects of operator dependency.

In turn, the MDSC instrument also performs mathematical processing that separates the standard DSC plot into reversing and non-reversing components. The reversing signal is representative of properties that respond to temperature modulation and heating rate, such as the glass transition and melting. The non-reversing component, on the other hand, is representative of kinetic, time-dependent processes such as decomposition, crystallization, and curing. An example of such a plot using PET is provided in .

The mathematics behind MDSC is most simply represented by the formula

\[ \frac{dH}{dt} = C_{p}\frac{dT}{dt} + f(T,t) \]

where dH/dt is the total change in heat flow that would be derived from a standard DSC, Cp is the heat capacity derived from the modulated heating rate, dT/dt represents both the linear and modulated heating rate, and f(T,t) represents kinetic, time-dependent events, i.e., the non-reversing signal. Combining Cp and dT/dt as the product Cp(dT/dt) produces the reversing signal. The non-reversing signal is, therefore, found by simply subtracting the reversing signal from the total heat flow signal, i.e.,

\[ f(T,t) = \frac{dH}{dt} - C_{p}\frac{dT}{dt} \]
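A minimal numerical sketch of this deconvolution, using only the relationship just given: the reversing signal is the measured heat capacity multiplied by the average heating rate, and the non-reversing signal is whatever remains of the total heat flow. The arrays, the crude Cp step, and the added "crystallization" dip are invented placeholders rather than real MDSC output.

```python
import numpy as np

# Illustrative MDSC-style data (per gram of sample): total heat flow (W/g),
# apparent heat capacity from the modulation (J/(g K)), average heating rate (K/s).
heating_rate = 2.0 / 60.0                     # 2 °C/min
T = np.linspace(60.0, 260.0, 500)             # °C
cp = 1.2 + 0.4 * (T > 80)                     # crude Cp step at a "Tg"
total_heat_flow = cp * heating_rate - 0.02 * np.exp(-((T - 130.0) / 5.0) ** 2)

reversing = cp * heating_rate                 # Cp * dT/dt
non_reversing = total_heat_flow - reversing   # f(T, t): kinetic events only

# The non-reversing trace now isolates the exothermic feature near 130 °C,
# while the Cp (Tg) step remains in the reversing trace.
print(f"Largest kinetic feature at ~{T[np.argmin(non_reversing)]:.0f} °C")
```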
As such, MDSC is capable of independently measuring not only the total heat flow but also its heating-rate-dependent and kinetic components, meaning MDSC can break down complex or small transitions into their singular components with improved sensitivity, allowing for more accurate analysis. Below are some cases in which MDSC proves useful.

Using a standard DSC, it can be difficult to ascertain the accuracy of measured transitions that are relatively weak, such as Tg, since these transitions can be overlapped by stronger, kinetic transitions. This is quite a problem, as missing a weak transition could cause a polymer blend to be misinterpreted as a uniform sample. To resolve this, it is useful to split the plot into its reversing component, i.e., the portion that contains heat-dependent properties like Tg, and its non-reversing, kinetic component.

For example, shown in is the MDSC of an unknown polymer blend which, upon analysis, is found to be composed of PET, amorphous polycarbonate (PC), and high-density polyethylene (HDPE). Looking at the reversing signal, the Tg of polycarbonate is around 140 °C and the Tg of PET is around 75 °C. As seen in the total heat flow signal, which is representative of a standard DSC plot, the Tg of PC would have been more difficult to analyze and, as such, may have been incorrectly analyzed.

Further, there are instances in which a polymer or, more likely, a polymer blend will produce two different sets of crystalline structures. With two crystalline structures, the resulting melting peak will be poorly defined and, thus, difficult to analyze via standard DSC. Using MDSC, however, it becomes possible to isolate the reversing signal, which contains the melting curve. Through isolation of the reversing signal, it becomes clear that there is an overlap of two melting peaks, such that the MDSC system reveals two melting points. For example, as seen in , the analysis of a poly(lactic acid) polymer (PLA) with 10 wt% of a plasticizer (P600) reveals two melting peaks in the reversing signal that are not visible in the total heat flow. The presence of two melting peaks could, in turn, suggest the formation of two crystalline structures within the polymer sample. Other interpretations are, of course, possible through analysis of the reversing signal.

In many instances, polymers may be left to sit in refrigeration or stored at temperatures below their respective glass transition temperatures. By leaving a polymer under such conditions, the polymer is liable to undergo physical aging. Typically, the more flexible the chains of a polymer are, the more likely they are to undergo time-related changes in storage. That is to say, the polymer will begin to undergo molecular relaxation such that the chains will form very dense regions as they conglomerate together. As the polymer ages, it will tend towards embrittlement and develop internal stresses. As such, it is very important to be aware of whether the polymer being studied has gone through aging while in storage.

If a polymer has undergone physical aging, it will develop a new endothermic peak when undergoing thermal analysis. This occurs because, as the polymer is being heated, the polymer chains absorb heat, increase in mobility, and move to a more relaxed condition as time goes on, transforming back to pre-aged conditions. In turn, an endothermic shift, associated with this heat absorption, will occur just before the Tg step change.
This peak is known as the enthalpy of relaxation (ΔHR). Since the Tg and ΔHR are relatively close to one another energy-wise, they will tend to overlap, making it difficult to distinguish the two from one another. However, ΔHR is a kinetics-dependent thermal shift while Tg is a heating-rate-dependent thermal shift; therefore, the two can be separated into non-reversing and reversing plots via MDSC and be independently analyzed. An example of an MDSC plot of a polymer blend of PET, PC, and HDPE, in which the enthalpy of relaxation of PET is visible in the dashed non-reversing signal around 75 °C, is shown in . In turn, within the reversing signal, the glass transition of PET is visible around 75 °C as well.

While MDSC is a strong step toward eliminating operator error, it is possible to achieve an even higher level of precision and accuracy when analyzing a polymer. To do so, the DSC system must expose the sample to quasi-isothermal conditions. Under quasi-isothermal conditions, the polymer sample is held at a specific temperature for extended periods of time with no applied heating rate. With the heating rate being effectively zero, the conditions are isothermal. The temperature of the sample may change, but the change will be derived solely from a kinetic transition that has occurred within the polymer. Once a kinetic transition has occurred within the polymer, it will absorb or release some heat, which will raise or lower the temperature of the system without the application of any external heat.

Under these conditions, issues created by operator-to-operator variation of the applied heating rate are no longer a large concern. Further, by subjecting a polymer sample to quasi-isothermal conditions, it becomes possible to obtain improved and more accurate measurements of heat-dependent thermal events, such as the events typically found in the reversing signal, as a function of time.

As mentioned earlier, the glass transition is volatile in the sense that it is highly dependent on the heating and cooling rate of the DSC system as applied by the operator. A minor change in the heating or cooling rate between two experimental measurements of the same polymer sample can result in fairly different measured glass transitions, even though the sample itself has not been altered.

Remember also that the glass transition is a measure of the changing Cp of the polymer sample as it crosses certain heat-energy thresholds. Therefore, it should be possible to capture a more accurate and precise glass transition under quasi-isothermal conditions, since these conditions produce highly accurate Cp measurements as a function of time. By applying quasi-isothermal conditions, the polymer’s Cp can be measured in fixed-temperature steps within the apparent glass transition range as measured via standard DSC. In measuring the polymer across a set of quasi-isothermal steps, it becomes possible to obtain changing Cp values that, in turn, are nearly reflective of the exact glass transition range of the polymer.

In , the glass transition of polystyrene is shown to vary depending on the heating or cooling rate of the DSC; however, applying quasi-isothermal conditions and measuring the heat capacity at temperature steps produces a very accurate glass transition that can be used as a standard for comparison.

Magnetic materials attract the attention of researchers and engineers because of their potential for application in magnetic and electronic devices such as navigational equipment, computers, and even high-speed transportation.
Perhaps more valuable still, however, is the insight they provide into fundamental physics. Magnetic materials provide an opportunity for studying exotic quantum mechanical phenomena such as quantum criticality, superconductivity, and heavy fermionic behavior intrinsic to these materials. A battery of characterization techniques exists for measuring the physical properties of these materials, among them a method for measuring the specific heat of a material throughout a large range of temperatures. Specific heat measurements are an important means of determining the transition temperature of magnetic materials, the temperature below which magnetic ordering occurs. Additionally, the temperature dependence of the specific heat is characteristic of the behavior of electrons within the material and can be used to classify materials into different categories. The molar specific heat of a material is defined as the amount of energy required to raise the temperature of 1 mole of the material by 1 K. This value is calculated theoretically by taking the partial derivative of the internal energy with respect to temperature. It is not a constant, as it is typically treated in high-school science courses: it depends on the temperature of the material. Moreover, the temperature dependence itself also changes based on the type of material. There are three broad families of solid-state materials defined by their specific heat behaviors; each of these families is discussed in the following sections. Insulators have the specific heat with the simplest dependence on temperature. According to the Debye theory of specific heat, which models materials as phonons (lattice vibrational modes) in a potential well, the internal energy of an insulating system is given by \ref{11}, where TD is the Debye temperature, defined as the temperature associated with the energy of the highest allowed phonon mode of the material. In the limit that T << TD, the energy expression reduces to \ref{12}.\[ U\ =\frac{9Nk_{B}T^{4} }{T^{3}_{D}} \int ^{T_{D}/T}_{0} \frac{x^{3}}{e^{x}-1} dx \label{11} \]\[ U\ =\frac{3 \pi ^{4} N k_{B} T^{4}}{5T^{3}_{D} } \label{12} \]For most magnetic materials, the Debye temperature is several orders of magnitude higher than the temperature at which magnetic ordering occurs, making this a valid approximation of the internal energy. The specific heat derived from this expression is given by \ref{13}.\[ C_{\nu }\ =\frac{\partial U}{\partial T} =\frac{12 \pi ^{4} Nk_{B} }{5T^{3}_{D}} T^{3} = \beta T^{3} \label{13} \]The behavior described by the Debye theory accurately matches experimental measurements of specific heat for insulators at low temperatures. Normal insulators, then, have a T3 dependence in the specific heat that is dominated by contributions from phonon excitations. Essentially all energy absorbed by insulating materials is stored in the vibrational modes of the solid lattice. At very low temperatures this contribution is very small, and insulators display a high sensitivity to changes in heat energy. While the Debye theory of specific heat accurately describes the behavior of insulators, it does not adequately describe the temperature dependence of the specific heat for metallic materials at low temperatures, where contributions from delocalized conduction electrons become significant.
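To make the T3 dependence concrete, the short sketch below (an illustration added here, with a made-up Debye temperature and arbitrary sample temperatures rather than values from the original module) evaluates the low-temperature Debye specific heat of \ref{13} for one mole of a hypothetical insulator.

import numpy as np

N_A = 6.02214076e23   # atoms per mole
k_B = 1.380649e-23    # Boltzmann constant, J/K
T_D = 300.0           # hypothetical Debye temperature, K

# beta = 12*pi^4*N*k_B / (5*T_D^3) for one mole (N = N_A)
beta = 12 * np.pi**4 * N_A * k_B / (5 * T_D**3)   # J mol^-1 K^-4

for T in (2.0, 5.0, 10.0):          # temperatures well below T_D
    C_v = beta * T**3               # low-temperature Debye specific heat
    print(f"T = {T:4.1f} K   C_v = {C_v:.3e} J/(mol K)")

Doubling the temperature in this regime raises the specific heat by a factor of eight, which is the practical signature of the phonon-dominated T3 law.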
The predictions made by the Debye model are corrected in the Einstein-Debye model of specific heat, where an additional term describing the contributions from the electrons (as modeled by a free electron gas) is added to the phonon contribution. The internal energy of a free electron gas is given by \ref{14}, where g(Ef) is the density of states at the Fermi level, which is material dependent. The partial derivative of this expression with respect to temperature yields the specific heat of the electron gas, \ref{15}.\[ U = \frac{\pi ^{2}}{6}(k_{B}T)^{2}g(E_{f})+U_{0} \label{14} \]\[ C_{\nu }= \frac{ \pi^{2}} {3} k^{2}_{B}g(E_{f})T= \gamma T \label{15} \]Combining this expression with the phonon contribution to specific heat gives the expression predicted by the Einstein-Debye model, \ref{16}.\[ C_{\nu }= \frac{\pi^{2}}{3} k^{2}_{B} g(E_{f})T\ + \frac{12 \pi^{4}Nk_{B}}{5T^{3}_{D}}T^{3} = \gamma T\ +\ \beta T^{3} \label{16} \]This is the general expression for the specific heat of a Fermi liquid, a variation on the Fermi gas in which fermions (typically electrons) are allowed to interact with each other and form quasiparticles: weakly bound and often short-lived composites of more fundamental particles, such as electron-hole pairs or the Cooper pairs of BCS superconductor theory. Most metallic materials follow this behavior and are thus classified as Fermi liquids. This is easily confirmed by measuring the heat capacity as a function of temperature and linearizing the results by plotting C/T vs. T2. The slope of this graph equals the coefficient β, and the y-intercept is equal to γ. The ability to obtain these coefficients is important for gaining understanding of some unique physical phenomena. For example, the compound YbRh2Si2 is a heavy fermionic material, a material with charge carriers that have an "effective" mass much greater than the normal mass of an electron. The increased mass is due to coupling of magnetic moments between conduction electrons and localized magnetic ions. The coefficient γ is related to the density of states at the Fermi level, which is dependent on the carrier mass. Determination of this coefficient via specific heat measurements provides a way to determine the effective carrier mass and the coupling strength of the quasiparticles. Additionally, knowledge of Fermi-liquid behavior provides insight for application development. The temperature dependence of the specific heat shows that the phonon contribution dominates at higher temperatures, where the behavior of metals and insulators is very similar. At low temperatures, the electronic term is dominant, and metals can absorb more heat without a significant change in temperature. As will be discussed briefly later, this property of metals is utilized in low-temperature refrigeration systems for heat storage at low temperatures. While most metals fall under the category of Fermi liquids, there are some that show a different dependence on temperature. Naturally, these are classified as non-Fermi liquids. Often, deviation from Fermi-liquid behavior is an indicator of some of the interesting physical phenomena that currently garner the attention of many condensed matter researchers. For instance, non-Fermi-liquid behavior has been observed near quantum critical points. Classically, fluctuations in physical properties such as magnetic susceptibility and resistivity occur near critical points, which include phase changes or magnetic ordering transitions.
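The C/T versus T2 linearization just described lends itself to a simple numerical illustration. The sketch below generates synthetic heat-capacity data from made-up γ and β values (it is not a measurement from this module) and recovers the two coefficients with a straight-line fit.

import numpy as np

rng = np.random.default_rng(0)

gamma_true = 5.0e-3   # hypothetical electronic coefficient, J mol^-1 K^-2
beta_true = 2.0e-4    # hypothetical phonon coefficient, J mol^-1 K^-4

T = np.linspace(2.0, 10.0, 30)                     # K
C = gamma_true * T + beta_true * T**3              # Fermi-liquid form
C *= 1 + 0.01 * rng.standard_normal(T.size)        # add 1% "measurement" noise

# Linear fit of C/T against T^2: slope = beta, intercept = gamma
beta_fit, gamma_fit = np.polyfit(T**2, C / T, 1)
print(f"gamma = {gamma_fit:.3e} J/(mol K^2)")
print(f"beta  = {beta_fit:.3e} J/(mol K^4)")

In practice the fit is restricted to the low-temperature region where the γT + βT3 form actually holds.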
Normally, such critical-point fluctuations are suppressed at low temperatures: at absolute zero, classical systems collapse into the lowest energy state and remain stable. However, when the critical transition temperature is lowered to absolute zero by the application of pressure, doping, or magnetic field, the fluctuations are instead enhanced as the temperature approaches absolute zero, propagating throughout the whole of the material. As this is not classically allowed, this behavior indicates a quantum mechanical effect at play that is currently not well understood. The transition point is then called a quantum critical point. Non-Fermi-liquid behavior, as identified by deviations from the expected specific heat, is therefore used to identify materials that can provide an experimental basis for development of a theory that describes the physics of quantum criticality. While analysis of the temperature dependence of specific heat is a vital tool for studying the strange physical behaviors of quantum mechanics in solid-state materials, these are studied by only a small subsection of the physics community. The utility of specific heat measurements is not limited to a few niche subjects, however. Possibly the most important use for specific heat measurements is the determination of critical transition temperatures. For any sort of physical state transition (phase transitions, magnetic ordering, transitions to superconducting states) a sharp increase in the specific heat occurs during the transition. This increase in specific heat is the reason why, for example, water does not change temperature as it changes from a liquid to a solid. These increases are quite obvious in plots of the specific heat vs. temperature. Such transition-associated peaks are called Schottky anomalies, as normal specific heat behavior is not followed near the transition temperature. For the purposes of this chapter, the following sections focus on specific heat measurements as they relate to magnetic ordering transitions and describe the practical aspects of measuring the specific heat of these materials. Specific heat is measured using a calorimeter. The design of basic calorimeters for use over a short range of temperatures is relatively simple. They consist of a sample with a known mass and an unknown specific heat, an energy source which provides heat energy to the sample, a heat reservoir (of known mass and specific heat) that absorbs heat from the sample, insulation to provide adiabatic conditions inside the calorimeter, and probes for measuring the temperature of the sample and the reservoir. The sample is heated with a pulse to a temperature higher than that of the heat reservoir; the sample temperature then decreases as energy is absorbed by the reservoir until thermal equilibrium is established. The total energy change is calculated from the specific heat and temperature change of the reservoir, and the specific heat of the sample is obtained by dividing this total energy change by the product of the mass of the sample and the temperature change of the sample. However, this method of measurement produces an average value of the specific heat over the range of the change in temperature of the sample and therefore is insufficient for producing accurate measurements of the specific heat as a function of temperature.
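As a purely hypothetical worked example of that bookkeeping (the masses, the temperature changes, and the copper specific heat of roughly 0.385 J g-1 K-1 are illustrative assumptions, not values from the module): if a 100 g copper reservoir warms by 0.10 K while a 10 g sample cools by 2.0 K, then

\[ Q = m_{r}c_{r}\Delta T_{r} = (100\ \text{g})(0.385\ \text{J g}^{-1}\ \text{K}^{-1})(0.10\ \text{K}) \approx 3.9\ \text{J} \]

\[ c_{s} = \frac{Q}{m_{s}\Delta T_{s}} = \frac{3.9\ \text{J}}{(10\ \text{g})(2.0\ \text{K})} \approx 0.19\ \text{J g}^{-1}\ \text{K}^{-1} \]

The result is necessarily an average over the full 2.0 K excursion of the sample, which is exactly the limitation noted above.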
The solution, then, is to minimize the temperature change by reducing the amount of heat added to the system; yet this presents another obstacle to making measurements since, in general, the temperature change of the reservoir is much smaller than that of the sample. If the change in temperature of the sample is minimized, the temperature change of the reservoir becomes too small to measure with precision. A more direct method of measurement, then, is required. Fortunately, such a method exists: it is known as the thermal relaxation method. This method involves measurement of the specific heat without the need for precise knowledge of temperature changes in the reservoir. In this method, solid samples are affixed to a platform. Both the specific heat of the sample and that of the platform contribute to the measured specific heat; therefore, the contribution from the platform must be subtracted. This contribution is determined by measuring the specific heat without a sample present. Both the sample and the platform are in thermal contact with a heat reservoir at low temperature. A heat pulse is delivered to the sample to produce a minimal increase in the temperature of the sample. The temperature is then measured vs. time as it decays back to the temperature of the reservoir, as shown in \(\PageIndex{44}\). The temperature of the sample decays according to \ref{17}, where T0 is the temperature of the heat reservoir and ΔT is the temperature difference between the initial sample temperature and the reservoir temperature. The decay time constant τ is directly related to the specific heat of the sample by \ref{18}, where K is the thermal conductance of the thermal link between the sample and the heat reservoir. In order for this to be valid, however, the thermal conductance must be sufficiently large that the energy transfer from the heated sample to the reservoir can be treated as a single process. If the thermal conduction is poor, a two-τ behavior arises, corresponding to two separate processes with different time constants: slow heat transfer from the sample to the platform, and fast transfer from the platform to the reservoir. In such cases the measured relaxation curve reflects both time constants.\[ T = \Delta T e^{-t/ \tau}\ +\ T_{0} \label{17} \]\[ \tau \ =\ C_{p}/K \label{18} \]The two-τ effect is generally undesirable for making measurements. It can be avoided by reducing the thermal conductance between the sample and the platform, effectively making the contribution from the heat transfer from the sample to the platform insignificant compared to the transfer from the platform to the reservoir; however, if the conductance between the sample and the platform is too low, the time required to reach thermal equilibrium becomes excessively long, translating into very long measurement times. It is necessary, then, to optimize the conductance to compensate for both of these issues, which essentially places a limitation on the temperature range over which these effects are insignificant. In order to measure at different temperatures, the temperature of the heat reservoir is increased stepwise from the lowest temperature until the desired temperature range is covered. At each step, the temperature is allowed to equilibrate, and a data point is measured. Thermal relaxation calorimeters use advanced technology to make precise measurements of the specific heat using components made of highly specialized materials.
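To illustrate how τ, and from it Cp, might be extracted in practice, the sketch below fits the single-τ decay of \ref{17} to a synthetic relaxation curve; the pulse size, time constant, noise level, and link conductance K are placeholder values, not specifications of any real calorimeter.

import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, dT, tau, T0):
    # single-tau decay of equation (17): T(t) = dT*exp(-t/tau) + T0
    return dT * np.exp(-t / tau) + T0

# Synthetic "measured" decay: 2.000 K reservoir, 20 mK pulse, tau = 5 s
t = np.linspace(0.0, 30.0, 200)                                   # s
T = relaxation(t, 0.020, 5.0, 2.000)
T += 2e-4 * np.random.default_rng(1).standard_normal(t.size)      # thermometer noise

popt, pcov = curve_fit(relaxation, t, T, p0=(0.01, 1.0, 2.0))
dT_fit, tau_fit, T0_fit = popt

K = 1.0e-7                 # hypothetical thermal-link conductance, W/K
C_p = tau_fit * K          # equation (18): C_p = tau * K
print(f"tau = {tau_fit:.2f} s   C_p = {C_p:.2e} J/K")

A real measurement repeats this fit at every reservoir temperature step to build up Cp as a function of temperature.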
Among the specialized components of such a calorimeter, the sample platform is made of synthetic sapphire, which is used as a standard material; the grease applied to the sample to provide even thermal contact with the platform is a special hydrocarbon-based material that can withstand millikelvin temperatures without creeping, cracking, or releasing vapor; and the resistance thermometers used for ultralow temperatures are often made of treated graphite or germanium. The culmination of years of materials science research and careful engineering has produced instrumentation capable of precise measurements at temperatures down to the millikelvin level. There are four main systems that function to provide the proper conditions for measurement: the reservoir temperature control, the sample temperature control, the magnetic field control, and the pressure control system. The essential components of these systems are discussed in more detail in the following sections, with special emphasis on the cooling systems that allow these extremely low temperatures to be achieved. The first of these is responsible for maintaining the low baseline temperature to which the sample temperature relaxes. This is typically accomplished with the use of liquid helium cryostats or, in more recent years, so-called "cryogen-free" pulse tube coolers. A cryostat is simply a bath of cryogenic fluid that is kept in thermal contact with the sample. The fluid bath may be static or may be pumped through a circulation system for better cooling. The cryostat must also be thermally insulated from the external environment in order to maintain low temperatures. Insulation is provided by a metallic vacuum dewar: the vacuum virtually eliminates conductive or convective heat transfer from the environment, and the reflective metallic outer sleeve acts as a radiation shield. For the low temperatures required to observe some magnetic transitions, liquid helium is generally required. 4He liquefies at 4.2 K, and the rarer (and much more expensive) isotope, 3He, liquefies at 1.8 K. For temperatures lower than 1.8 K, modern instruments employ evaporative attachments such as a 1-K pot, a 3He refrigerator, or a dilution refrigerator. The 1-K pot is so named because it can achieve temperatures down to 1 K. It consists of a small vessel filled with liquid 4He under reduced pressure; heat is absorbed as the liquid evaporates and is carried away by the vapor. The 3He refrigerator utilizes a 1-K pot for liquefaction of 3He, and evaporation of the 3He then provides cooling to the sample. 3He refrigerators can provide temperatures as low as 200 mK. The dilution refrigerator works on a similar principle, but the working fluid is a mixture of 3He and 4He; phase separation of the 3He from the mixture provides further heat absorption as the 3He evaporates. Dilution refrigerators can achieve temperatures as low as 0.002 K (that's cold!). Evaporative refrigerators work only on a small area in thermal contact with the sample, rather than delivering cooling power to the entire volume of the cryostat bath. Cryostat baths provide very high cooling power for very efficient cooling; however, they come with a major drawback: the cost of helium is prohibitively high. The helium vapor that boils off as it provides cooling to the sample must leave the system in order to carry the heat away and must therefore be replaced. Even when the instrument is not in use, there is some loss of helium due to the imperfect nature of the insulating dewars.
In order to get the most use out of the helium, then, cryostat systems must always be in use. In addition, rather than allowing expensive helium to simply escape, recovery systems for the helium exhaust must be installed in order to operate in a cost-effective manner, though these systems are not 100% efficient, and the cost of operation and maintenance of recovery systems is not small either. "Cryogen-free" coolers provide an alternative to cryostats in order to avoid the costs associated with helium usage and recovery. One example of a cryogen-free cooler is the Gifford-McMahon type pulse tube. In this type of cooler, helium gas is driven through the regenerator by a compressor. As a small volume element of the gas passes through the regenerator, it drops in temperature as it deposits heat into the regenerator. The regenerator must have a high specific heat in order to effectively absorb energy from the helium gas. For higher-temperature pulse tube coolers, the regenerator is often made of copper mesh; however, at very low temperatures helium has a higher specific heat than most metals, so regenerators for this temperature range are often made of porous rare earth ceramics with magnetic transitions in the low-temperature range. The increase in specific heat near the Schottky anomaly for these materials provides the necessary capacity for heat absorption. As the gas enters the tube at a temperature TL, it is compressed, raising its temperature in accordance with the ideal gas law. At this point, the gas is at a temperature higher than TH, and excess heat is exhausted through the heat exchanger marked X3 until the temperature is in equilibrium with TH. When the rotary valve in the compressor turns, the expansion cycle begins, and the gas cools as it expands adiabatically to a temperature below TL. It then absorbs heat from the sample through the heat exchanger X2; this step provides the cooling power in pulse tube coolers. Afterward, the gas travels back through the regenerator at a cold temperature, reabsorbs the heat that was initially stored during compression, and regains its original temperature through the heat exchanger X1, completing the temperature cycle experienced by a volume element of the working gas as it moves through the pulse tube. Pulse tube coolers are not truly "cryogen-free" as they are advertised, but they are preferable to cryostats because there is no net loss of helium from the system. However, pulse tubes are not a perfect solution. They have very low efficiency over large changes in temperature and at very low temperatures, as given by \ref{19}.\[ \zeta \ =\ 1\ -\ \frac{\Delta T}{T_{H}} \label{19} \]As a result, pulse tube coolers consume a lot of electricity to provide the necessary cooling and may take a long time to achieve the desired temperature. Over large temperature ranges, such as the 4-300 K range typically used in specific heat measurements, pulse tubes can be used in stages, with one providing pre-cooling for the next, to increase the cooling power and provide a shorter cooling time, though this tends to increase the energy consumption.
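To put \ref{19} in perspective, consider an illustrative single-stage estimate (an assumption added here, not a manufacturer specification) in which the warm end sits at TH = 300 K and the load is held at 4 K, so that ΔT = 296 K:

\[ \zeta \ =\ 1\ -\ \frac{296\ \text{K}}{300\ \text{K}} \approx 0.013 \]

An efficiency on the order of 1% is why staged pre-cooling, long cool-down times, and substantial electricity consumption are accepted trade-offs for avoiding liquid helium.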
The cost of running a pulse tube system is still generally less than that of a cryostat, however, and unlike cryostats, pulse tube systems do not have to be used constantly in order to remain cost-effective. While the cooling system works more or less independently, the other systems (the sample temperature control, the magnetic field control, and the pressure control systems) work together to create the proper conditions for measurement of the sample. The sample temperature control system provides the heat pulse used to increase the temperature of the sample before relaxation occurs. The components of this system are incorporated into the sapphire sample platform (sample platform with the important components of the sample temperature control system; reused with permission from R. J. Schutz, Rev. Sci. Instrum., 1974, 45, 548; copyright AIP Publishing). The sample is affixed to the platform over the thermometer with a small amount of grease, which also provides thermal conductance between the heating element and the sample. The heat pulse is delivered to the sample by running a small current pulse through the heating element, and the response is measured by a resistance thermometer. The resistance thermometer is made of specially treated carbon or germanium, which have standardized resistances for given temperatures; the thermometer is calibrated to these standards to provide accurate temperature readings throughout the range of temperatures used for specific heat measurements. A conductive wire provides the thermal connection between the sample platform and the heat reservoir. This wire must provide high conductivity to ensure that the heat transfer from the sample to the platform is the dominant process and to prevent significant two-τ behavior. Sample preparation is also governed by the temperature control system. The sample must be in good thermal contact with the platform; therefore, a sample with a flat face is preferable. The volume of the sample cannot be too large, either, or the heating element will not be able to heat the sample uniformly. A temperature gradient throughout the sample skews the measurement of the temperature made by the thermometer. Moreover, it is impossible to assign a 1:1 correspondence between the specific heat and temperature if the specific heat values do not correspond to a single temperature. For the best measurements, heat capacity samples must be cut from large single crystals or polycrystalline solids using a hard diamond saw to prevent contamination of the sample with foreign material. The magnetic field control system provides magnetic fields ranging from 0 to >15 T. As was mentioned previously, strong magnetic fields can suppress the transition to magnetically ordered states to lower temperatures, which is important for studying quantum critical behaviors. The magnetic field control consists of a high-current solenoid and regulating electronics to ensure stable current and field outputs. The pressure control system regulates the pressure in the sample chamber, which is physically separated from the bath by a wall that allows thermal transfer only. While the sample is installed in the chamber, the vacuum system must be able to maintain low pressures (approximately \(10^{-5}\) torr) to ensure that no gas is present. If the vacuum system fails, water from any air present in the system can condense inside the sample chamber, including on the sample platform, which alters the thermal conductance and throws off measurement of the specific heat.
Moreover, as the temperature in the chamber drops, any water present can freeze and expand in the chamber, which can cause significant damage to the instrument itself. Through the application of specialized materials and technology, measurements of the specific heat have become both highly accurate and very precise. As our measurement capabilities expand toward the 0 K limit, exciting prospects arise for completing our understanding, discovering new phenomena, and developing important applications of novel magnetic materials. Specific heat measurements, then, are a vital tool for studying magnetic materials, whether as a means of exploring the strange phenomena of quantum physics such as quantum criticality or heavy fermions, or simply as a routine method of characterizing physical transitions between different states. This page titled 2.8: Thermal Analysis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.9: Electrical Permittivity Characterization of Aqueous Solutions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.09%3A_Electrical_Permittivity_Characterization_of_Aqueous_Solutions
Permittivity (in the framework of electromagnetics) is a fundamental material property that describes how a material will affect, and be affected by, a time-varying electromagnetic field. Permittivity is often treated as a complex function of the applied electromagnetic field, as complex numbers allow for the expression of both magnitude and phase. The fundamental equation for the complex permittivity of a substance (εs) is given by \ref{1}, where ε' and ε'' are the real and imaginary components, respectively, and ω is the angular frequency (rad/s), which is easily converted to frequency (hertz, Hz) using \ref{2}.\[ \varepsilon _{s} = \varepsilon ' ( \omega )\ -\ i\varepsilon ''(\omega ) \label{1} \]\[ \omega \ =\ 2\pi f \label{2} \]Specifically, the real and imaginary parameters defined within the complex permittivity equation describe how a material will store electromagnetic energy and dissipate that energy as heat. The processes that influence the response of a material to a time-varying electromagnetic field are frequency dependent and are generally classified as either ionic, dipolar, vibrational, or electronic in nature, each dominating over a characteristic range of frequencies. Ionic processes refer to the general case of a charged ion moving back and forth in response to a time-varying electric field, whilst dipolar processes correspond to the 'flipping' and 'twisting' of molecules that have a permanent electric dipole moment, such as a water molecule in a microwave oven. Examples of vibrational processes include molecular vibrations (e.g., symmetric and asymmetric) and associated vibration-rotation states that are infrared (IR) active. Electronic processes include optical and ultraviolet (UV) absorption and scattering phenomena seen across the UV-visible range. The most common relationship scientists have with permittivity is through the concept of relative permittivity: the permittivity of a material relative to vacuum permittivity. Also known as the dielectric constant, the relative permittivity (εr) is given by \ref{3}, where εs is the permittivity of the substance and ε0 is the permittivity of a vacuum (ε0 = \(8.85 \times 10^{-12}\) F/m). Although relative permittivity is in fact dynamic and a function of frequency, dielectric constants are most often quoted for low-frequency electric fields where the electric field is essentially static in nature. Table \(\PageIndex{1}\) depicts the dielectric constants for a range of materials.\[ \varepsilon _{r} \ =\ \varepsilon_{s} / \varepsilon_{0} \label{3} \]Dielectric constants may be useful for generic applications in which the high-frequency response can be neglected, although applications such as radio communications, microwave design, and optical system design call for a more rigorous and comprehensive analysis. This is especially true for electrical devices such as capacitors, which are circuit elements that store and discharge electrical charge in both a static and a time-varying manner. Capacitors can be thought of as two parallel plate electrodes separated by a finite distance that 'sandwich' together a piece of material with characteristic permittivity values. The capacitance is a function of the permittivity of the material between the plates, which in turn is dependent on frequency.
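As a rough illustration of that dependence, the sketch below evaluates the ideal parallel-plate relation C = εrε0A/d for a few representative low-frequency dielectric constants; the parallel-plate formula, the geometry, and the εr values are textbook-level assumptions added here for illustration and are not taken from Table \(\PageIndex{1}\).

eps_0 = 8.85e-12      # vacuum permittivity, F/m
A = 1.0e-4            # plate area, m^2 (1 cm^2)
d = 1.0e-4            # plate separation, m (0.1 mm)

# (name, approximate low-frequency dielectric constant) -- illustrative values
for name, eps_r in (("air", 1.0006), ("PTFE", 2.1), ("water", 80.0)):
    C = eps_r * eps_0 * A / d          # ideal parallel-plate capacitance
    print(f"{name:6s} eps_r = {eps_r:7.4f}   C = {C * 1e12:7.2f} pF")

Swapping the dielectric from air to water raises the capacitance by nearly two orders of magnitude, which is why the permittivity of the filling material dominates capacitor behavior.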
Hence, for capacitors incorporated into circuit designs for radio communication applications across the spectrum 8.3 kHz – 300 GHz, the frequency response is important, as it determines the capacitor's ability to charge and discharge as well as the thermal response from electric fields dissipating their power as heat through the material. Evaluating the electrical characteristics of materials is becoming increasingly popular, especially in the field of electronics, where miniaturization technologies often require the use of materials with high dielectric constants. The composition and chemical variations of materials such as solids and liquids produce characteristic responses, which are directly proportional to the amounts and types of chemical species added to the material. The examples given herein are related to aqueous suspensions whereby the electrical permittivity can be easily modulated via the addition of sodium chloride (NaCl). A common and reliable method for measuring the dielectric properties of liquid samples is to use an impedance analyzer in conjunction with a dielectric probe. The impedance analyzer directly measures the complex impedance of the sample under test, which is then converted to permittivity using the system software. There are many methods used for measuring impedance, each of which has its own inherent advantages, disadvantages, and factors associated with that particular method. Such factors include frequency range, measurement accuracy, and ease of operation. Common impedance measurements include the bridge method, resonant method, current-voltage (I-V) method, network analysis method, auto-balancing bridge method, and radiofrequency (RF) I-V method. The RF I-V method used herein has several advantages over the previously mentioned methods, such as extended frequency coverage, better accuracy, and a wider measured impedance range. The principle of the RF I-V method is based on the linear relationship of the voltage-current ratio to impedance, as given by Ohm's law (V = IZ, where V is voltage, I is current, and Z is impedance). This results in the impedance measurement sensitivity being constant regardless of the measured impedance. A full description of this method involves circuit theory and is outside the scope of this module (see "Impedance Measurement Handbook" for full details), but in brief, for a low-impedance sample the impedance of the sample (Zx) is given by \ref{4}, while for a high-impedance sample it is given by \ref{5}.\[ Z_{x} \ =\ V/I \ =\frac{2R}{ \frac{V_{2}}{V_{1}}\ -\ 1} \label{4} \]\[ Z_{x} \ =\ V/I \ =\frac{R}{2}[\frac{V_{1}}{V_{2}} -\ 1] \label{5} \]The instrumentation and methods described herein consist of an Agilent E4991A impedance analyzer connected to an Agilent 85070E dielectric probe kit. The impedance analyzer directly measures the complex impedance of the sample under test by measuring either the frequency-dependent voltage or current across the sample. These values are then converted to permittivity values using the system software. In order to acquire the electrical permittivity of aqueous solutions, the impedance analyzer and dielectric probe must first be calibrated. In the first instance, the impedance analyzer unit is calibrated under open-circuit, short-circuit, 50 ohm load, and low-loss capacitance conditions by attaching the relevant probes.
The dielectric probe is then attached to the system and re-calibrated in open air, with an attached short-circuit probe, and finally with 500 μl of highly purified deionized water (with a resistivity of 18.2 MΩ/cm at 25 °C). The water is then removed and the system is ready for acquiring data. In order to maintain accurate calibration, only the purest deionized water with a resistivity of 18.2 MΩ/cm at 25 °C should be used. To perform an analysis, simply load the dielectric probe with 500 μl of the sample and click on the 'acquire data' tab in the software. The system will perform a scan across the frequency range 200 MHz – 3 GHz and acquire the real and imaginary parts of the complex permittivity. The interval at which data points are taken, as well as the scale (i.e., log or linear), can also be altered in the software if necessary. To analyze another sample, remove the liquid and gently dry the dielectric probe with a paper towel. An open-air refresh calibration should then be performed (by pressing the relevant button in the software), as this prevents errors and instrument drift from sample to sample. To analyze a normal saline (0.9% NaCl w/v) solution, dissolve 8.99 g of NaCl in 1 litre of DI water (18.2 MΩ/cm at 25 °C) to create a 154 mM NaCl solution (equivalent to a 0.9% NaCl w/v solution). Load 500 μl of the sample on the dielectric probe and acquire a new data set as mentioned previously. Users should consult the "Agilent Installation and Quick Start Guide" manual for full specifics regarding impedance analyzer and dielectric probe calibration settings. The data files extracted from the impedance analyzer and dielectric probe setup previously described can be opened using any standard data processing software such as Microsoft Excel. The data will appear in three columns, labeled frequency (Hz), ε', and ε'' (representing the real and imaginary components of the permittivity, respectively). Any graphing software can be used to create simple graphs of the complex permittivity versus frequency; in the example described here, Prism was used to graph the real and imaginary permittivities versus frequency (200 MHz – 3 GHz) for the water and saline samples. For this frequency range no error correction is needed. For the analysis of frequencies below 200 MHz down to 10 MHz, which can be achieved using the impedance analyzer and dielectric probe configuration, error correction algorithms are needed to take into account electrode polarization effects that skew and distort the data. Gach et al. cover these necessary algorithms, which can be used if needed. This page titled 2.9: Electrical Permittivity Characterization of Aqueous Solutions is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.10: Dynamic Mechanical Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.10%3A_Dynamic_Mechanical_Analysis
Dynamic mechanical analysis (DMA), also known as forced oscillatory measurements and dynamic rheology, is a basic tool used to measure the viscoelastic properties of materials (particularly polymers). To do so, the DMA instrument applies an oscillating force to a material and measures its response; from such experiments, the viscosity (the tendency to flow) and stiffness of the sample can be calculated. These viscoelastic properties can be related to temperature, time, or frequency. As a result, DMA can also provide information on the transitions of materials and characterize bulk properties that are important to material performance. DMA can be applied to determine the glass transition of polymers or the response of a material to the application and removal of a load, as a few common examples. The usefulness of DMA comes from its ability to mimic operating conditions of the material, which allows researchers to predict how the material will perform. Oscillatory experiments have appeared in the published literature since the early 1900s and began with rudimentary experimental setups to analyze the deformation of metals. In an initial study, the material in question was hung from a support, and torsional strain was applied using a turntable. Early instruments of the 1950s from manufacturers Weissenberg and Rheovibron exclusively measured torsional stress, where force is applied in a twisting motion. Due to its usefulness in determining polymer molecular structure and stiffness, DMA became more popular in parallel with the increasing research on polymers. The method became integral in the analysis of polymer properties by 1961. In 1966, the revolutionary torsional braid analysis was developed; because this technique used a fine glass substrate imbued with the material of analysis, scientists were no longer limited to materials that could provide their own support. Using torsional braid analysis, the transition temperatures of polymers could be determined through temperature programming. Within two decades, commercial instruments became more accessible, and the technique became less specialized. In the early 1980s, one of the first DMAs using axial geometries (linear rather than torsional force) was introduced. Since the 1980s, DMA has become much more user-friendly, faster, and less costly due to competition between vendors. Additionally, developments in computer technology have allowed easier and more efficient data processing. Today, DMA is offered by most vendors, and the modern instrument is detailed in the Instrumentation section. DMA is based on two important concepts: stress and strain. Stress (σ) provides a measure of force (F) applied to area (A), \ref{1}.\[ \sigma \ =\ F/A \label{1} \]Applying stress to a material causes strain (γ), the deformation of the sample. Strain can be calculated by dividing the change in sample dimensions (∆Y) by the sample's original dimensions (Y) (\ref{2}). This value is often given as a percentage of strain.\[ \gamma \ =\ \Delta Y/Y \label{2} \]The modulus (E), a measure of stiffness, can be calculated from the slope of the stress-strain plot, as shown in \ref{3}. This modulus is dependent on temperature and applied stress. The change of this modulus as a function of a specified variable is key to DMA and the determination of viscoelastic properties.
Viscoelastic materials such as polymers display both elastic properties characteristic of solid materials and viscous properties characteristic of liquids; as a result, the viscoelastic properties are often a compromise between the two extremes. Ideal elastic properties can be related to Hooke's spring, while viscous behavior is often modeled using a dashpot, or a motion-resisting damper.\[ E \ =\ \sigma / \gamma \label{3} \]Creep-recovery testing is not a true dynamic analysis because the applied stress or strain is held constant; however, most modern DMA instruments have the ability to run this analysis. Creep-recovery tests the deformation of a material that occurs when a load is applied and removed. In the "creep" portion of this analysis, the material is placed under immediate, constant stress until the sample equilibrates. "Recovery" then measures the stress relaxation after the stress is removed. The stress and strain are measured as functions of time. From this method of analysis, equilibrium values for viscosity, modulus, and compliance (the willingness of materials to deform; the inverse of modulus) can be determined; however, such calculations are beyond the scope of this review. Creep-recovery tests are useful in testing materials under anticipated operation conditions and long test times. As an example, multiple creep-recovery cycles can be applied to a sample to determine the behavior and change in properties of a material after several cycles of stress. DMA instruments apply a sinusoidally oscillating stress to samples and cause a sinusoidal deformation. The relationship between the oscillating stress and strain becomes important in determining the viscoelastic properties of the material. To begin, the stress applied can be described by a sine function, where σ0 is the maximum stress applied, ω is the frequency of the applied stress, and t is time. Stress and strain can be expressed as in \ref{4}.\[ \sigma \ = \ \sigma_{0} \sin(\omega t + \delta);\ \gamma=\gamma_{0} \sin(\omega t) \label{4} \]The strain of a system undergoing sinusoidally oscillating stress is also sinusoidal, but the phase difference between strain and stress is entirely dependent on the balance between the viscous and elastic properties of the material in question. For ideal elastic systems, the strain and stress are completely in phase, and the phase angle (δ) is equal to 0. For purely viscous systems, the applied stress leads the strain by 90°. The phase angle of viscoelastic materials lies somewhere in between. In essence, the phase angle between the stress and strain tells us a great deal about the viscoelasticity of the material. For one, a small phase angle indicates that the material is highly elastic; a large phase angle indicates that the material is highly viscous. Furthermore, separating the properties of modulus, viscosity, compliance, or strain into two separate terms allows the analysis of the elasticity or the viscosity of a material. The elastic response of the material is analogous to storage of energy in a spring, while the viscosity of the material can be thought of as the source of energy loss. A few key viscoelastic terms can be calculated from dynamic analysis; their equations and significance are detailed in Table \(\PageIndex{1}\). A temperature sweep is the most common DMA test used on solid materials. In this experiment, the frequency and amplitude of the oscillating stress are held constant while the temperature is increased.
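From the measured stress amplitude, strain amplitude, and phase angle, the elastic and viscous contributions can be separated using the standard definitions E* = σ0/γ0, E' = E*cos δ, E'' = E*sin δ, and tan δ = E''/E'. The sketch below applies these relations to purely hypothetical numbers; it is an added illustration, not data from a real DMA run.

import math

sigma_0 = 1.0e6            # stress amplitude, Pa (hypothetical)
gamma_0 = 0.01             # strain amplitude, dimensionless (hypothetical)
delta = math.radians(15)   # phase angle of a fairly elastic material

E_star = sigma_0 / gamma_0            # dynamic modulus magnitude
E_storage = E_star * math.cos(delta)  # storage modulus E'
E_loss = E_star * math.sin(delta)     # loss modulus E''

print(f"E*  = {E_star:.3e} Pa")
print(f"E'  = {E_storage:.3e} Pa   E'' = {E_loss:.3e} Pa")
print(f"tan(delta) = {E_loss / E_storage:.3f}")

A phase angle closer to 90° would shift the balance toward the loss modulus, signaling predominantly viscous behavior.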
During a temperature sweep, the temperature can be raised in a stepwise fashion, where the sample temperature is increased by larger intervals (e.g., 5 °C) and allowed to equilibrate before measurements are taken. Continuous heating routines can also be used (1-2 °C/minute). Typically, the results of temperature sweeps are displayed as storage and loss moduli as well as tan delta as a function of temperature. For polymers, these results are highly indicative of polymer structure. An example of a thermal sweep of a polymer is detailed later in this module. In time scans, the temperature of the sample is held constant, and properties are measured as functions of time, gas changes, or other parameters. This experiment is commonly used when studying the curing of thermosets, materials that change chemically upon heating. Data are presented graphically using modulus as a function of time; curing profiles can be derived from this information. Frequency scans test a range of frequencies at a constant temperature to analyze the effect of changes in frequency on temperature-driven changes in a material. This type of experiment is typically run on fluids or polymer melts. The results of frequency scans are displayed as modulus and viscosity as functions of log frequency. The most common instrument for DMA is the forced resonance analyzer, which is ideal for measuring material response to temperature sweeps. The analyzer controls deformation, temperature, sample geometry, and sample environment. The important components of the DMA include the motor and driveshaft used to apply stress as well as the linear variable differential transformer (LVDT) used to measure linear displacement. The carriage contains the sample and is typically enveloped by a furnace and heat sink. The DMA should be selected to suit the material at hand. The DMA can be either stress or strain controlled: strain-controlled analyzers move the probe a set distance and measure the stress that results, while stress-controlled analyzers apply a set force and allow the deformation of the sample to follow. Although the two techniques are nearly equivalent when the stress-strain plot is linear, stress-controlled analyzers provide more accurate results. DMA analyzers can also apply stress or strain in two manners: axial and torsional deformation. Axial deformation applies a linear force to the sample and is typically used for solid and semisolid materials to test flex, tensile strength, and compression. Torsional analyzers apply force in a twisting motion; this type of analysis is used for liquids and polymer melts but can also be applied to solids. Although both types of analyzers have a wide analysis range and can be used for similar samples, the axial instrument should not be used for fluid samples with viscosities below 500 Pa·s, and torsional analyzers cannot handle materials with high modulus. Different fixtures can be used to hold the samples in place and should be chosen according to the type of samples analyzed. The sample geometry affects both stress and strain and must be factored into the modulus calculations through a geometry factor. The fixture systems are specific to the type of stress application. Axial analyzers have a greater number of fixture options; one of the most commonly used fixtures is the extension/tensile geometry used for thin films or fibers. In this method, the sample is held both vertically and lengthwise by top and bottom clamps, and stress is applied upwards. For torsional analyzers, the simplest geometry is the use of parallel plates.
The plates are separated by a distance determined by the viscosity of the sample. Because the movement of the sample depends on its radius from the center of the plate, the applied stress is uneven; the measured strain is therefore an average value. As the temperature of a polymer increases, the material goes through a number of minor transitions (Tγ and Tβ) due to expansion; at these transitions, the modulus also undergoes changes. The glass transition of polymers (Tg) occurs with the abrupt change of physical properties within 140-160 °C; at some temperature within this range, the storage (elastic) modulus of the polymer drops dramatically. As the temperature rises above the glass transition point, the material loses its structure and becomes rubbery before finally melting. The glass transition temperature can be determined using either the storage modulus, the complex modulus, or tan δ (vs. temperature), depending on context and instrument; because these methods result in a range of values, the method of calculation should be noted. When using the storage modulus, the temperature at which E' begins to decline is used as the Tg. Tan δ and the loss modulus E'' show peaks at the glass transition; either onset or peak values can be used in determining Tg. Dynamic mechanical analysis is an essential analytical technique for determining the viscoelastic properties of polymers. Unlike many comparable methods, DMA can provide information on major and minor transitions of materials; it is also more sensitive to changes after the glass transition temperature of polymers. Due to its use of oscillating stress, this method is able to quickly scan and calculate the modulus for a range of temperatures. As a result, it is the only technique that can determine the basic structure of a polymer system while providing data on the modulus as a function of temperature. Finally, the environment of DMA tests can be controlled to mimic real-world operating conditions, so this analytical method is able to accurately predict the performance of materials in use. DMA does possess limitations that lead to calculation inaccuracies. The modulus value is very dependent on sample dimensions, which means large inaccuracies are introduced if the dimensional measurements of samples are slightly inaccurate. Additionally, overcoming the inertia of the instrument used to apply oscillating stress converts mechanical energy to heat and changes the temperature of the sample. Since maintaining exact temperatures is important in temperature scans, this also introduces inaccuracies. Because data processing of DMA is largely automated, the final source of measurement uncertainty comes from computer error. This page titled 2.10: Dynamic Mechanical Analysis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.11: Finding a Representative Lithology
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/02%3A_Physical_and_Thermal_Analysis/2.11%3A_Finding_a_Representative_Lithology
Sample sediments are typically sent in a large plastic bag inside a brown paper bag labeled with the company or organization name, the drill site name and number, and the depth at which the sediment was taken (in meters). The first step in determining a lithology is to prepare a sample from your bulk sediment. To do this, you will need to crush some of the bulk rocks of your sediment into finer grains. You will need a hard surface, a hammer or mallet, and your sediment. An improvised container, such as a cardboard box, may be useful for containing fragments that try to escape the hard surface during vigorous hammering. Remove the plastic sediment bag from the brown mailer bag. Empty approximately 10-20 g of bulk sediment onto the hard surface. Repeatedly strike the larger rock-sized portions of the sediment until the larger units are broken into grains that are approximately the size of a grain of rice. Some samples will give off oily or noxious odors when crushed. This is because of trapped hydrocarbons or sulfurous compounds and is normal. The next step in the process, washing, will take care of these impurities and the smell. Once the sample has been appropriately crushed on the macro scale, uniformity in grain size at the micro scale can be achieved through the use of a pulverizing micro mill machine such as the Planetary Mills Pulverisette 7. To use the mill, load your crushed sample into the milling cup along with milling stones of 15 mm diameter. Set your rotational speed and time using the machine interface. A speed of 500-600 rpm and a mill time of 3-5 minutes is suggested. Using higher speeds or longer times can result in loss of sample as dust. Load the milling cup into the mill and press start; make sure to lower the mill hood. Once the mill has completed its cycle, retrieve the sample and empty it into a plastic cup labelled with the drill site name and depth in order to prepare it for washing. Be sure to wash and dry the mill cup and mill stones between samples if multiple samples are being tested. If your sample is dirty, as in contaminated with hydrocarbons such as crude oil, it will need to be washed. To wash your sample you will need your sample cup, a washbasin, a spoon, a 150-300 µm sieve, household dish detergent, and a porcelain ramekin if a drying oven is available. Take your sample cup to the wash basin and fill the cup halfway with water, adding a squirt of dish detergent. Vigorously stir the cup with the spoon for 20 seconds, ensuring each grain is coated with the detergent water. Pour your sample into the sieve and turn on the faucet. Run water over the sample to allow the detergent and dust particles to wash through the sieve. Continue to wash the sample this way until all the detergent is washed from the sample. Once clean, empty the sieve onto a surface and leave the sample to dry overnight, or empty it into a ramekin if a drying oven is available. Place the ramekin into a drying oven set to at least 100 °C for a minimum of 2 hours to allow thorough drying. Once dry, the sample is ready to be picked. Picking the sample is arguably the most important step in determining the lithology. During this step you will create sample uniformity by eliminating random minerals, macro contaminants such as wood, and dropstones that fell into your sediment depth when the sediment was drilled. You will also be able to make a general judgment as to the lithology after picking, though further analysis is needed if chemical composition is desired. Remove the sample from the drying oven.
Take a piece of weighing paper and weigh out 5-10 g of sample. Use a light microscope to determine whether most of the sample is silt, clay, silty clay, or sand. To prepare your sample for X-ray fluorescence (XRF) analysis you will need to prepare a sample pellet. To pellet your sample you will need a mortar and pestle, a pellet binder such as Cerox, a spatula to remove binder, a micro scale, a pellet press with housing, and a pellet tin cup. Measure out and pour 2-4 g of sample into your mortar. Measure out and add 50% of your sample weight of pellet binder. For example, if your sample weight was 2 g, add 1 g of binder. Grind the sample into a fine, uniform powder, ensuring that all of the binder is thoroughly mixed with the sample. Drop the tin foil cup into the press housing. Pour the sample into the tin foil cup, and then gently tap the housing against a hard surface two to three times to ensure the sample settles into the tin. Place the top press disk into the channel. Place the press housing into the press, oriented directly under the pressing arm. Crank the lever on the press until the pressure gauge reads 15 tons. Wait for one minute, then twist the pressure release valve and remove the press housing from the press. Reverse the press and apply the removal cap to the bottom of the press. Place the housing into the press bottom side up and manually apply pressure by turning the crank on top of the press until the sample pops out of the housing. Retrieve the pelleted sample; it is now ready for X-ray fluorescence (XRF) analysis. Place the sample pellet into the XRF, close the XRF hood, and obtain the spectrum from the associated computer. The XRF spectrum is a plot of energy and intensity. The software equipped with the XRF will be pre-programmed to recognize the characteristic energies associated with the X-ray emissions of the elements. The XRF functions by shooting a beam of high-energy photons that are absorbed by the atoms of the sample. Inner-shell electrons of the sample atoms are ejected. This leaves the atom in an excited state, with a vacancy in the inner shell. Outer-shell electrons then fall into the vacancy, emitting photons with energy equal to the energy difference between these two energy levels. Each element has a unique set of energy levels; therefore, each element emits a pattern of X-rays characteristic of that element. The intensity of these characteristic X-rays increases with the concentration of the corresponding element, leading to higher counts and higher peaks on the spectrum. This page titled 2.11: Finding a Representative Lithology is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.1: Principles of Gas Chromatography
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/03%3A_Principles_of_Gas_Chromatography/3.01%3A_Principles_of_Gas_Chromatography
Archer J. P. Martin and Anthony T. James introduced liquid-gas partition chromatography in 1950 at the meeting of the Biochemical Society held in London, a few months before submitting three fundamental papers to the Biochemical Journal. It was this work that provided the foundation for the development of gas chromatography. In fact, Martin envisioned gas chromatography almost ten years before, while working with R. L. M. Synge on partition chromatography. Martin and Synge, who were awarded the Nobel Prize in Chemistry in 1952, had suggested as early as 1941 that separation of volatile compounds could be achieved by using a vapor as the mobile phase instead of a liquid. Gas chromatography quickly gained general acceptance because it was introduced at a time when improved analytical controls were required in the petrochemical industries, and new techniques were needed in order to overcome the limitations of old laboratory methods. Nowadays, gas chromatography is a mature technique, widely used worldwide for the analysis of almost every type of organic compound, even those that are not volatile in their original state but can be converted to volatile derivatives. Gas chromatography is a separation technique in which the components of a sample partition between two phases: the stationary phase and the mobile gas phase. According to the state of the stationary phase, gas chromatography can be classified into gas-solid chromatography (GSC), where the stationary phase is a solid, and gas-liquid chromatography (GLC), which uses a liquid as the stationary phase. GLC is to a great extent more widely used than GSC. During a GC separation, the sample is vaporized and carried by the mobile gas phase (i.e., the carrier gas) through the column. Separation of the different components is achieved based on their relative vapor pressures and affinities for the stationary phase. The affinity of a substance towards the stationary phase can be described in chemical terms as an equilibrium constant called the distribution constant Kc, also known as the partition coefficient, \ref{1}, where [A]s is the concentration of compound A in the stationary phase and [A]m is the concentration of compound A in the mobile phase.\[ K_{c} = [A]_{s}/[A]_{m} \label{1} \]The distribution constant (Kc) controls the movement of the different compounds through the column; therefore, differences in the distribution constant allow for the chromatographic separation. Kc is temperature dependent, and also depends on the chemical nature of the stationary phase. Thus, temperature or a different stationary phase can be used as a way to improve the separation of different compounds through the column. As an example, consider a chromatographic analysis of residual methanol in biodiesel, which is one of the required properties that must be measured to ensure the quality of the product at the time and place of delivery. Chromatogram (a) shows a standard solution of methanol with 2-propanol as the internal standard. From this chromatogram it can be seen that methanol has a higher affinity for the mobile phase (lower Kc) than 2-propanol (iso-propanol), and therefore elutes first. Chromatograms (b) and (c) show two samples of biodiesel, one with methanol (b) and another with no methanol detected (c). The internal standard was added to both samples for quantitation purposes.
A schematic diagram of the components of a typical gas chromatograph is shown in the accompanying figure, together with a photograph of a typical gas chromatograph coupled to a mass spectrometer (GC/MS). The role of the carrier gas - the GC mobile phase - is to carry the sample molecules along the column while they are not dissolved in or adsorbed on the stationary phase. The carrier gas is inert and does not interact with the sample, and thus the selectivity of a GC separation can be attributed to the stationary phase alone. However, the choice of carrier gas is important to maintain high efficiency. The effect of different carrier gases on column efficiency is represented by the van Deemter equation (packed columns) and the Golay equation (capillary columns). The van Deemter equation, \ref{2} , describes the three main effects that contribute to band broadening in packed columns and, as a consequence, to a reduced efficiency in the separation process.\[ HETP\ =\ A+\frac{B}{u} + Cu \label{2} \]These three factors are: eddy diffusion (the A term), longitudinal diffusion (the B term), and resistance to mass transfer (the C term). The broadening is described in terms of the height equivalent to a theoretical plate, HETP, as a function of the average linear gas velocity, u. A small HETP value indicates a narrow peak and a higher efficiency. Since capillary columns do not have any packing, the Golay equation, \ref{3} , does not have an A-term. The Golay equation has two C-terms, one for mass transfer in the stationary phase (Cs) and one for mass transfer in the mobile phase (CM).\[ HETP\ =\ \frac{B}{u} \ +\ (C_{s}\ +\ C_{M})u \label{3} \](A short numerical illustration of these plate-height equations is given at the end of this section.) High purity hydrogen, helium and nitrogen are commonly used for gas chromatography. Also, depending on the type of detector used, different gases are preferred. The injection port is the place where the sample is volatilized and quantitatively introduced into the carrier gas stream. Usually a syringe is used for injecting the sample into the injection port. Samples can be injected manually or automatically with mechanical devices that are often placed on top of the gas chromatograph: the auto-samplers. The gas chromatographic column may be considered the heart of the GC system, where the separation of sample components takes place. Columns are classified as either packed or capillary columns. A general comparison of packed and capillary columns is shown in Table \(\PageIndex{1}\). Images of packed columns are shown in the accompanying figures. Since most common applications employed nowadays use capillary columns, we will focus on this type of column. To define a capillary column, four parameters must be specified: the column length, the internal diameter, the stationary phase, and the film thickness. The detector senses a physicochemical property of the analyte and provides a response which is amplified and converted into an electronic signal to produce a chromatogram. Most of the detectors used in GC were invented specifically for this technique, except for the thermal conductivity detector (TCD) and the mass spectrometer. In total, approximately 60 detectors have been used in GC. Detectors that exhibit an enhanced response to certain analyte types are known as "selective detectors". During the last 10 years there has been an increasing use of GC in combination with mass spectrometry (MS). The mass spectrometer has become a standard detector that allows for lower detection limits and does not require the separation of all components present in the sample. Mass spectrometry is one of the types of detection that provides the most information with only micrograms of sample. Qualitative identification of unknown compounds as well as quantitative analysis of samples is possible using GC-MS.
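Before the GC-MS coupling is described in more detail below, the plate-height equations above can be made concrete with a short numerical sketch in Python. The A, B, and C coefficients used here are invented for illustration (they are not taken from the text); the optimum velocity follows from setting the derivative of the van Deemter expression to zero, giving u_opt = sqrt(B/C).

import math

# Assumed van Deemter coefficients for a hypothetical packed column:
# A (eddy diffusion, cm), B (longitudinal diffusion, cm^2/s), C (mass transfer, s).
A, B, C = 0.1, 0.4, 0.01

def hetp(u):
    """Height equivalent to a theoretical plate (cm) at linear velocity u (cm/s)."""
    return A + B / u + C * u

# The minimum of A + B/u + C*u occurs where -B/u**2 + C = 0.
u_opt = math.sqrt(B / C)
print(f"optimum velocity = {u_opt:.2f} cm/s, minimum HETP = {hetp(u_opt):.3f} cm")

# Scanning a range of velocities reproduces the characteristic van Deemter curve.
for u in (1, 2, 5, 10, 20, 40):
    print(f"u = {u:5.1f} cm/s -> HETP = {hetp(u):.3f} cm")

Setting A to zero and splitting C into Cs + CM turns the same sketch into the Golay form used for capillary columns.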
When GC is coupled to a mass spectrometer, the compounds that elute from the GC column are ionized by using electrons (EI, electron ionization) or a chemical reagent (CI, chemical ionization). Charged fragments are focused and accelerated into a mass analyzer: typically a quadrupole mass analyzer. Fragments with different mass to charge ratios will generate different signals, so any compound that produces ions within the mass range of the mass analyzer will be detected. Detection limits of 1-10 ng or even lower values (e.g., 10 pg) can be achieved selecting the appropriate scanning mode.Gas chromatography is primarily used for the analysis of thermally stable volatile compounds. However, when dealing with non-volatile samples, chemical reactions can be performed on the sample to increase the volatility of the compounds. Compounds that contain functional groups such as OH, NH, CO2H, and SH are difficult to analyze by GC because they are not sufficiently volatile, can be too strongly attracted to the stationary phase or are thermally unstable. Most common derivatization reactions used for GC can be divided into three types:Samples are derivatized before being analyzed to:GC is the premier analytical technique for the separation of volatile compounds. Several features such as speed of analysis, ease of operation, excellent quantitative results, and moderate costs had helped GC to become one of the most popular techniques worldwide.Unlike gas chromatography, which is unsuitable for nonvolatile and thermally fragile molecules, liquid chromatography can safely separate a very wide range of organic compounds, from small-molecule drug metabolites to peptides and proteins.This page titled 3.1: Principles of Gas Chromatography is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
538
3.2: High Performance Liquid chromatography
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/03%3A_Principles_of_Gas_Chromatography/3.02%3A_High_Performance_Liquid_chromatography
High-performance liquid chromatography (HPLC) is a technique in analytical chemistry used to separate the components in a mixture, and to identify and quantify each component. It was initially discovered as an analytical technique in the early twentieth century and was first used to separate colored compounds. The word chromatography means color writing. It was the botanist M. S. Tswett ) who invented this method in around 1900 to study leaf pigments (mainly chlorophyll). He separated the pigments based on their interaction with a stationary phase. In 1906 Tswett published two fundamental papers describing the various aspects of liquid-adsorption chromatography in detail. He also pointed out that in spite of its name, other substances also could be separated by chromatography. The modern high performance liquid chromatography has developed from this separation; the separation efficiency, versatility and speed have been improved significantly.The molecular species subjected to separation exist in a sample that is made of analytes and matrix. The analytes are the molecular species of interest, and the matrix is the rest of the components in the sample. For chromatographic separation, the sample is introduced in a flowing mobile phase that passes a stationary phase. Mobile phase is a moving liquid, and is characterized by its composition, solubility, UV transparency, viscosity, and miscibility with other solvents. Stationary phase is a stationary medium, which can be a stagnant bulk liquid, a liquid layer on the solid phase, or an interfacial layer between liquid and solid. In HPLC, the stationary phase is typically in the form of a column packed with very small porous particles and the liquid mobile phase is moved through the column by a pump. The development of HPLC is mainly the development of the new columns, which requires new particles, new stationary phases (particle coatings), and improved procedures for packing the column. A picture of modern HPLC is shown in .The major components of a HPLC are shown in . The role of a pump is to force a liquid (mobile phase) through at a specific flow rate (milliliters per minute). The injector serves to introduce the liquid sample into the flow stream of the mobile phase. Column is the most central and important component of HPLC, and the column’s stationary phase separates the sample components of interest using various physical and chemical parameters. The detector is to detect the individual molecules that elute from the column. The computer usually functions as the data system, and the computer not only controls all the modules of the HPLC instrument but it takes the signal from the detector and uses it to determine the retention time, the sample components, and quantitative analysis.Different separation mechanisms were used based on different property of the stationary phase of the column. The major types include normal phase chromatography, reverse phase chromatography, ion exchange, size exclusion chromatography, and affinity chromatography.In this method the columns are packed with polar, inorganic particles and a nonpolar mobile phase is used to run through the stationary phase (Table \(\PageIndex{1}\) ). Normal phase chromatography is mainly used for purification of crude samples, separation of very polar samples, or analytical separations by thin layer chromatography. 
One problem when using this method is that, water is a strong solvent for the normal-phase chromatography, traces of water in the mobile phase can markedly affect sample retention, and after changing the mobile phase, the column equilibration is very slow.In reverse-phase (RP) chromatography the stationary phase has a hydrophobic character, while the mobile phase has a polar character. This is the reverse of the normal-phase chromatography (Table \(\PageIndex{2}\) ). The interactions in RP-HPLC are considered to be the hydrophobic forces, and these forces are caused by the energies resulting from the disturbance of the dipolar structure of the solvent. The separation is typically based on the partition of the analyte between the stationary phase and the mobile phase. The solute molecules are in equilibrium between the hydrophobic stationary phase and partially polar mobile phase. The more hydrophobic molecule has a longer retention time while the ionized organic compounds, inorganic ions and polar metal molecules show little or no retention time.The ion exchange mechanism is based on electrostatic interactions between hydrated ions from a sample and oppositely charged functional groups on the stationary phase. Two types of mechanisms are used for the separation: in one mechanism, the elution uses a mobile phase that contains competing ions that would replace the analyte ions and push them off the column; another mechanism is to add a complexing reagent in the mobile phase and to change the sample species from their initial form. This modification on the molecules will lead them to elution. In addition to the exchange of ions, ion-exchange stationary phases are able to retain specific neutral molecules. This process is related to the retention based on the formation of complexes, and specific ions such as transition metals can be retained on a cation-exchange resin and can still accept lone-pair electrons from donor ligands. Thus neutral ligand molecules can be retained on resins treated with the transitional metal ions.The modern ion exchange is capable of quantitative applications at rather low solute concentrations, and can be used in the analysis of aqueous samples for common inorganic anions (range 10 μg/L to 10 mg/L). Metal cations and inorganic anions are all separated predominantly by ionic interactions with the ion exchange resin. One of the largest industrial users of ion exchange is the food and beverage sector to determine the nitrogen-, sulfur-, and phosphorous- containing species as well as the halide ions. Also, ion exchange can be used to determine the dissolved inorganic and organic ions in natural and treated waters.It is a chromatographic method that separate the molecules in the solutions based on the size (hydrodynamic volume). This column is often used for the separation of macromolecules and of macromolecules from small molecules. After the analyte is injected into the column, molecules smaller than he pore size of the stationary phase enter the porous particles during the separation and flow through he intricate channels of the stationary phase. Thus smaller components have a longer path to traverse and elute from the column later than the larger ones. Since the molecular volume is related to molecular weight, it is expected that retention volume will depend to some degree on the molecular weight of the polymeric materials. 
The relation between the retention time and the molecular weight is shown in the accompanying figure. Usually the type of HPLC separation method to use depends on the chemical nature and physicochemical parameters of the samples; a flow chart for the preliminary selection of a separation method according to the properties of the analyte is also provided. Detectors that are commonly used for liquid chromatography include ultraviolet-visible absorbance detectors, refractive index detectors, fluorescence detectors, and mass spectrometry. Regardless of the class, an LC detector should ideally have a sensitivity of about \(10^{-12}\) to \(10^{-11}\) g/mL and a linear dynamic range of five or six orders of magnitude. The principal characteristics of the detectors to be evaluated include dynamic range, response index or linearity, linear dynamic range, detector response, detector sensitivity, etc. Among these detectors, the most economical and popular methods are UV and refractive index (RI) detectors. They have rather broad selectivity and reasonable detection limits most of the time. The RI detector was the first detector available for commercial use. This method is particularly useful in HPLC separation according to size, and the measurement is directly proportional to the concentration of polymer and practically independent of the molecular weight. The sensitivity of RI is \(10^{-6}\) g/mL, the linear dynamic range is from \(10^{-6}\) to \(10^{-4}\) g/mL, and the response index is between 0.97 and 1.03. UV detectors respond only to those substances that absorb UV light at the wavelength of the source light. A great many compounds absorb light in the UV range (180-350 nm), including substances having one or more double bonds and substances having unshared electrons. The relationship between the intensity of UV light transmitted through the cell and the solute concentration is given by Beer's law, \ref{1} and \ref{2} .\[ I_{T} \ =\ I_{0} e^{-kcl} \label{1} \]\[ \ln(I_{T})\ =\ \ln(I_{0})\ -\ kcl \label{2} \]where I0 is the intensity of the light entering the cell, IT is the intensity of the light transmitted through the cell, l is the path length of the cell, c is the concentration of the solute, and k is the molar absorption coefficient of the solute. UV detectors include the fixed wavelength UV detector and the multi-wavelength UV detector. The fixed wavelength UV detector has a sensitivity of \(5 \times 10^{-8}\) g/mL, a linear dynamic range between \(5 \times 10^{-8}\) and \(5 \times 10^{-4}\) g/mL, and a response index between 0.98 and 1.02. The multi-wavelength UV detector has a sensitivity of \(10^{-7}\) g/mL, a linear dynamic range between \(5 \times 10^{-7}\) and \(5 \times 10^{-4}\) g/mL, and a response index from 0.97 to 1.03. UV detectors can be used effectively for reverse-phase separations and ion exchange chromatography. UV detectors have high sensitivity, are economically affordable, and are easy to operate. Thus the UV detector is the most common choice of detector for HPLC. Another method, mass spectrometry, has certain advantages over other techniques. Mass spectra can be obtained rapidly; only a small amount (sub-μg) of sample is required for analysis, and the data provided by the spectra is very informative of the molecular structure. Mass spectrometry also has strong advantages of specificity and sensitivity compared with other detectors. The combination of HPLC-MS is oriented towards the specific detection and potential identification of chemicals in the presence of other chemicals. However, it is difficult to interface liquid chromatography to a mass spectrometer, because all the solvents need to be removed first.
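Before the LC-MS interfaces are taken up in the next paragraph, the Beer's law relations above can be turned into a short worked example in Python. The molar absorption coefficient, path length, and intensities used here are assumed values chosen only to show the arithmetic, not data from the text.

import math

# Assumed flow-cell and analyte parameters (illustrative only).
epsilon = 1.5e4       # molar absorption coefficient, L mol^-1 cm^-1 (base-10 form)
path_length = 1.0     # flow-cell path length, cm
I0, IT = 100.0, 42.0  # incident and transmitted intensities, arbitrary units

# Beer's law: IT = I0 * exp(-k*c*l), i.e. ln(I0/IT) = k*c*l.
# In the equivalent base-10 form, absorbance A = log10(I0/IT) = epsilon*c*l.
absorbance = math.log10(I0 / IT)
concentration = absorbance / (epsilon * path_length)

print(f"absorbance    = {absorbance:.3f}")
print(f"concentration = {concentration:.2e} mol/L")

This is essentially the conversion a UV detector performs to relate the transmitted intensity to the concentration trace recorded in the chromatogram.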
The most commonly used interfaces include electrospray ionization, atmospheric pressure photoionization, and thermospray ionization. The flow rate describes how fast the mobile phase travels through the column, and is often used for calculation of the consumption of the mobile phase in a given time interval. Two flow rates are distinguished: the volumetric flow rate U and the linear flow rate u. These two flow rates are related by \ref{3} , where A is the cross-sectional area of the channel available for flow, given by \ref{4} .\[ U = Au \label{3} \]\[ A\ =\ (1/4) \pi \varepsilon d^{2} \label{4} \]The retention time (tR) can be defined as the time from the injection of the sample to the time of compound elution, and it is taken at the apex of the peak that belongs to the specific molecular species. The retention time is determined by several factors, including the structure of the specific molecule, the flow rate of the mobile phase, and the column dimensions. The dead time t0 is defined as the time for a non-retained molecular species to elute from the column. The retention volume (VR) is defined as the volume of the mobile phase flowing from the injection time until the corresponding retention time of a molecular species; the two are related by \ref{5} . The retention volume related to the dead time is known as the dead volume V0.\[ V_{R} \ =\ U t_{R} \label{5} \]The migration rate can be defined as the velocity at which the species moves through the column, and the migration rate (uR) is inversely proportional to the retention time. Since only the fraction of molecules that are present in the mobile phase is moving, the migration rate is given by \ref{6} .\[ u_{R} \ =\ u*V_{mo}/(V_{mo}+V_{st}) \label{6} \]The capacity factor (k) is the ratio of the reduced retention time to the dead time, \ref{7} .\[ k \ =\ (t_{R} - t_{0})/t_{0} \ =\ (V_{R} - V_{0})/V_{0} \label{7} \]In the separation, the molecules running through the column can also be considered as being in a continuous equilibrium between the mobile phase and the stationary phase. This equilibrium is governed by an equilibrium constant K, defined as \ref{8} , in which Cmo is the molar concentration of the molecules in the mobile phase, and Cst is the molar concentration of the molecules in the stationary phase. The equilibrium constant K can also be written as \ref{9} .\[ K\ =\ C_{st}/C_{mo} \label{8} \]\[ K\ =\ k(V_{0}/V_{st}) \label{9} \]The most important aspect of HPLC is its high separation capacity, which enables the batch analysis of multiple components. Even if the sample consists of a mixture, HPLC allows the target components to be separated, detected, and quantified. Also, under appropriate conditions, it is possible to attain a high level of reproducibility with a coefficient of variation not exceeding 1%. It also offers high sensitivity with low sample consumption. HPLC has one advantage over GC in that analysis is possible for any sample that can be stably dissolved in the eluent and does not need to be vaporized. For this reason, HPLC is used much more frequently than GC in the fields of biochemistry and pharmaceuticals. This page titled 3.2: High Performance Liquid chromatography is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
539
3.3: Basic Principles of Supercritical Fluid Chromatography and Supercrtical Fluid Extraction
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/03%3A_Principles_of_Gas_Chromatography/3.03%3A_Basic_Principles_of_Supercritical_Fluid_Chromatography_and_Supercrtical_Fluid_Extraction
The discovery of supercritical fluids led to novel analytical applications in the fields of chromatography and extraction known as supercritical fluid chromatography (SFC) and supercritical fluid extraction (SFE). Supercritical fluid chromatography is accepted as a column chromatography methods along with gas chromatography (GC) and high-performance liquid chromatography (HPLC). Due to to the properties of supercritical fluids, SFC combines each of the advantages of both GC and HPLC in one method. In addition, supercritical fluid extraction is an advanced analytical technique.A supercritical fluid is the phase of a material at critical temperature and critical pressure of the material. Critical temperature is the temperature at which a gas cannot become liquid as long as there is no extra pressure; and, critical pressure is the minimum amount of pressure to liquefy a gas at its critical temperature. Supercritical fluids combine useful properties of gas and liquid phases, as it can behave like both a gas and a liquid in terms of different aspects. A supercritical fluid provides a gas-like characteristic when it fills a container and it takes the shape of the container. The motion of the molecules are quite similar to gas molecules. On the other hand, a supercritical fluid behaves like a liquid because its density property is near liquid and, thus, a supercritical fluid shows a similarity to the dissolving effect of a liquid.The characteristic properties of a supercritical fluid are density, diffusivity and viscosity. Supercritical values for these features take place between liquids and gases. Table \(\PageIndex{1}\) demonstrates numerical values of properties for gas, supercritical fluid and liquid.The formation of a supercritical fluid is the result of a dynamic equilibrium. When a material is heated to its specific critical temperature in a closed system, at constant pressure, a dynamic equilibrium is generated. This equilibrium includes the same number of molecules coming out of liquid phase to gas phase by gaining energy and going in to liquid phase from gas phase by losing energy. At this particular point, the phase curve between liquid and gas phases disappears and supercritical material appears.In order to understand the definition of SF better, a simple phase diagram can be used. displays an ideal phase diagram. For a pure material, a phase diagram shows the fields where the material is in the form of solid, liquid, and gas in terms of different temperature and pressure values. Curves, where two phases (solid-gas, solid-liquid and liquid-gas) exist together, defines the boundaries of the phase regions. These curves, for example, include sublimation for solid-gas boundary, melting for solid-liquid boundary, and vaporization for liquid-gas boundary. Other than these binary existence curves, there is a point where all three phases are present together in equilibrium; the triple point (TP).There is another characteristic point in the phase diagram, the critical point (CP). This point is obtained at critical temperature (Tc) and critical pressure (Pc). After the CP, no matter how much pressure or temperature is increased, the material cannot transform from gas to liquid or from liquid to gas phase. This form is the supercritical fluid form. Increasing temperature cannot result in turning to gas, and increasing pressure cannot result in turning to liquid at this point. 
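As a simple illustration of the critical point just described, the short Python sketch below classifies a temperature/pressure state of CO2 against its critical values (roughly Tc = 31 °C and Pc = 72.8 atm, the figures quoted later in this chapter). It is only a coarse check of whether a state point lies in the supercritical region, not a full phase-diagram calculation.

# Approximate critical constants of CO2.
TC_CELSIUS = 31.1   # critical temperature, °C
PC_ATM = 72.8       # critical pressure, atm

def co2_region(temperature_c, pressure_atm):
    """Coarse classification of a CO2 state point relative to the critical point."""
    if temperature_c >= TC_CELSIUS and pressure_atm >= PC_ATM:
        return "supercritical fluid"
    # Below the critical point the state could be gas, liquid, or solid;
    # deciding between those would need the actual phase-boundary curves.
    return "sub-critical (gas, liquid, or solid, depending on the boundaries)"

for T, P in [(25.0, 60.0), (40.0, 100.0), (35.0, 50.0)]:
    print(f"T = {T:5.1f} °C, P = {P:6.1f} atm -> {co2_region(T, P)}")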
In the phase diagram, the field above Tc and Pc values is defined as the supercritical region.In theory, the supercritical region can be reached in two ways:The critical point is characteristic for each material, resulting from the characteristic Tc and Pc values for each substance.As mentioned above, SF shares some common features with both gases and liquids. This enables us to take advantage of a correct combination of the properties.Density characteristic of a supercritical fluid is between that of a gas and a liquid, but closer to that of a liquid. In the supercritical region, density of a supercritical fluid increases with increased pressure (at constant temperature). When pressure is constant, density of the material decreases with increasing temperature. The dissolving effect of a supercritical fluid is dependent on its density value. Supercritical fluids are also better carriers than gases thanks to their higher density. Therefore, density is an essential parameter for analytical techniques using supercritical fluids as solvents.Diffusivity of a supercritical fluid can be 100 x that of a liquid and 1/1,000 to 1/10,000 x less than a gas. Because supercritical fluids have more diffusivity than a liquid, it stands to reason a solute can show better diffusivity in a supercritical fluid than in a liquid. Diffusivity is parallel with temperature and contrary with pressure. Increasing pressure affects supercritical fluid molecules to become closer to each other and decreases diffusivity in the material. The greater diffusivity gives supercritical fluids the chance to be faster carriers for analytical applications. Hence, supercritical fluids play an important role for chromatography and extraction methods.Viscosity for a supercritical fluid is almost the same as a gas, being approximately 1/10 of that of a liquid. Thus, supercritical fluids are less resistant than liquids towards components flowing through. The viscosity of supercritical fluids is also distinguished from that of liquids in that temperature has a little effect on liquid viscosity, where it can dramatically influence supercritical fluid viscosity.These properties of viscosity, diffusivity, and density are related to each other. The change in temperature and pressure can affect all of them in different combinations. For instance, increasing pressure causes a rise for viscosity and rising viscosity results in declining diffusivity.Just like supercritical fluids combine the benefits of liquids and gases, SFC bring the advantages and strong aspects of HPLC and GC together. SFC can be more advantageous than HPLC and GC when compounds which decompose at high temperatures with GC and do not have functional groups to be detected by HPLC detection systems are analyzed.There are three major qualities for column chromatographies:Generally, HPLC has better selectivity that SFC owing to changeable mobile phases (especially during a particular experimental run) and a wide range of stationary phases. Although SFC does not have the selectivity of HPLC, it has good quality in terms of sensitivity and efficiency. SFC enables change of some properties during the chromatographic process. This tuning ability allows the optimization of the analysis. Also, SFC has a broader range of detectors than HPLC. SFC surpasses GC for the analysis of easily decomposable substances; these materials can be used with SFC due to its ability to work with lower temperatures than GC.As it can be seen in SFC has a similar setup to an HPLC instrument. 
They use similar stationary phases with similar column types. However, there are some differences. Temperature is critical for supercritical fluids, so there should be a heat control tool in the system similar to that of GC. Also, there should be a pressure control mechanism, a restrictor, because pressure is another essential parameter in order for supercritical fluid materials to be kept at the required level. A microprocessor mechanism is placed in the instrument for SFC. This unit collects data for pressure, oven temperature, and detector performance to control the related pieces of the instrument.SFC columns are similar to HPLC columns in terms of coating materials. Open-tubular columns and packed columns are the two most common types used in SFC. Open-tubular ones are preferred and they have similarities to HPLC fused-silica columns. This type of column contains an internal coating of a cross-linked siloxane material as a stationary phase. The thickness of the coating can be 0.05-1.0 μm. The length of the column can range from of 10 to 20 m.There is a wide variety of materials used as mobile phase in SFC. The mobile phase can be selected from the solvent groups of inorganic solvents, hydrocarbons, alcohols, ethers, halides; or can be acetone, acetonitrile, pyridine, etc. The most common supercritical fluid which is used in SFC is carbon dioxide because its critical temperature and pressure are easy to reach. Additionally, carbon dioxide is low-cost, easy to obtain, inert towards UV, non-poisonous and a good solvent for non-polar molecules. Other than carbon dioxide, ethane, n-butane, N2O, dichlorodifluoromethane, diethyl ether, ammonia, tetrahydrofuran can be used. Table \(\PageIndex{2}\) shows select solvents and their Tc and Pc values.One of the biggest advantage of SFC over HPLC is the range of detectors. Flame ionization detector (FID), which is normally present in GC setup, can also be applied to SFC. Such a detector can contribute to the quality of analyses of SFC since FID is a highly sensitive detector. SFC can also be coupled with a mass spectrometer, an UV-visible spectrometer, or an IR spectrometer more easily than can be done with an HPLC. Some other detectors which are used with HPLC can be attached to SFC such as fluorescence emission spectrometer or thermionic detectors.The physical properties of supercritical fluids between liquids and gases enables the SFC technique to combine with the best aspects of HPLC and GC, as lower viscosity of supercritical fluids makes SFC a faster method than HPLC. Lower viscosity leads to high flow speed for the mobile phase.Thanks to the critical pressure of supercritical fluids, some fragile materials that are sensitive to high temperature can be analyzed through SFC. These materials can be compounds which decompose at high temperatures or materials which have low vapor pressure/volatility such as polymers and large biological molecules. High pressure conditions provide a chance to work with lower temperature than normally needed. Hence, the temperature-sensitive components can be analyzed via SFC. In addition, the diffusion of the components flowing through a supercritical fluid is higher than observed in HPLC due to the higher diffusivity of supercritical fluids over traditional liquids mobile phases. This results in better distribution into the mobile phase and better separation.The applications of SFC range from food to environmental to pharmaceutical industries. 
In this manner, pesticides, herbicides, polymers, explosives and fossil fuels are all classes of compounds that can be analyzed. SFC can be used to analyze a wide variety of drug compounds such as antibiotics, prostaglandins, steroids, taxol, vitamins, barbiturates, non-steroidal anti-inflammatory agents, etc. Chiral separations can be performed for many pharmaceutical compounds. SFC is dominantly used for non-polar compounds because of the low efficiency of carbon dioxide, which is the most common supercritical fluid mobile phase, for dissolving polar solutes. SFC is used in the petroleum industry for the determination of total aromatic content analysis as well as other hydrocarbon separations.The unique physical properties of supercritical fluids, having values for density, diffusivity and viscosity values between liquids and gases, enables supercritical fluid extraction to be used for the extraction processes which cannot be done by liquids due to their high density and low diffusivity and by gases due to their inadequate density in order to extract and carry the components out.Complicated mixtures containing many components should be subject to an extraction process before they are separated via chromatography. An ideal extraction procedure should be fast, simple, and inexpensive. In addition, sample loss or decomposition should not be experienced at the end of the extraction. Following extraction, there should be a quantitative collection of each component. Ideally, the amount of unwanted materials coming from the extraction should be kept to a minimum and be easily disposable; the waste should not be harmful for environment. Unfortunately, traditional extraction methods often do not meet these requirements. In this regard, SFE has several advantages in comparison with traditional techniques.The extraction speed is dependent on the viscosity and diffusivity of the mobile phase. With a low viscosity and high diffusivity, the component which is to be extracted can pass through the mobile phase easily. The higher diffusivity and lower viscosity of supercritical fluids, as compared to regular extraction liquids, help the components to be extracted faster than other techniques. Thus, an extraction process can take just 10-60 minutes with SFE, while it would take hours or even days with classical methods.The dissolving efficiency of a supercritical fluid can be altered by temperature and pressure. In contrast, liquids are not affected by temperature and pressure changes as much. Therefore, SFE has the potential to be optimized to provide a better dissolving capacity.In classical methods, heating is required to get rid of the extraction liquid. However, this step causes the temperature-sensitive materials to decompose. For SFE, when the critical pressure is removed, a supercritical fluid transforms to gas phase. Because supercritical fluid solvents are chemically inert, harmless and inexpensive; they can be released to atmosphere without leaving any waste. Through this, extracted components can be obtained much more easily and sample loss is minimized.The necessary apparatus for a SFE setup is simple. 
depicts the basic elements of a SFE instrument, which is composed of a reservoir of supercritical fluid, a pressure tuning injection unit, two pumps (to take the components in the mobile phase in and to send them out of the extraction cell), and a collection chamber.There are two principle modes to run the instrument:In dynamic extraction, the second pump sending the materials out to the collection chamber is always open during the extraction process. Thus, the mobile phase reaches the extraction cell and extracts components in order to take them out consistently.In the static extraction experiment, there are two distinct steps in the process:In order to choose the mobile phase for SFE, parameters taken into consideration include the polarity and solubility of the samples in the mobile phase. Carbon dioxide is the most common mobile phase for SFE. It has a capability to dissolve non-polar materials like alkanes. For semi-polar compounds (such as polycyclic aromatic hydrocarbons, aldehydes, esters, alcohols, etc.) carbon dioxide can be used as a single component mobile phase. However, for compounds which have polar characteristic, supercritical carbon dioxide must be modified by addition of polar solvents like methanol (CH3OH). These extra solvents can be introduced into the system through a separate injection pump.There are two modes in terms of collecting and detecting the components:Off-line extraction is done by taking the mobile phase out with the extracted components and directing them towards the collection chamber. At this point, supercritical fluid phase is evaporated and released to atmosphere and the components are captured in a solution or a convenient adsorption surface. Then the extracted fragments are processed and prepared for a separation method. This extra manipulation step between extractor and chromatography instrument can cause errors. The on-line method is more sensitive because it directly transfers all extracted materials to a separation unit, mostly a chromatography instrument, without taking them out of the mobile phase. In this extraction/detection type, there is no extra sample preparation after extraction for separation process. This minimizes the errors coming from manipulation steps. Additionally, sample loss does not occur and sensitivity increases.SFE can be applied to a broad range of materials such as polymers, oils and lipids, carbonhydrates, pesticides, organic pollutants, volatile toxins, polyaromatic hydrocarbons, biomolecules, foods, flavors, pharmaceutical metabolites, explosives, and organometallics, etc. Common industrial applications include the pharmaceutical and biochemical industry, the polymer industry, industrial synthesis and extraction, natural product chemistry, and the food industry.Examples of materials analyzed in environmental applications: oils and fats, pesticides, alkanes, organic pollutants, volatile toxins, herbicides, nicotin, phenanthrene, fatty acids, aromatic surfactants in samples from clay to petroleum waste, from soil to river sediments. In food analyses: caffeine, peroxides, oils, acids, cholesterol, etc. are extracted from samples such as coffee, olive oil, lemon, cereals, wheat, potatoes and dog feed. Through industrial applications, the extracted materials vary from additives to different oligomers, and from petroleum fractions to stabilizers. Samples analyzed are plastics, PVC, paper, wood etc. 
Drug metabolites, enzymes, steroids are extracted from plasma, urine, serum or animal tissues in biochemical applications.Supercritical fluid chromatography and supercritical fluid extraction are techniques that take advantage of the unique properties of supercritical fluids. As such, they provide advantages over other related methods in both chromatography and extraction. Sometimes they are used as alternative analytical techniques, while other times they are used as complementary partners for binary systems. Both SFC and SFE demonstrate their versatility through the wide array of applications in many distinct domains in an advantageous way.This page titled 3.3: Basic Principles of Supercritical Fluid Chromatography and Supercrtical Fluid Extraction is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
540
3.4: Supercritical Fluid Chromatography
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/03%3A_Principles_of_Gas_Chromatography/3.04%3A_Supercritical_Fluid_Chromatography
A popular and powerful tool in the chemical world, chromatography separates mixtures based on chemical properties, even mixtures that were previously thought inseparable. It combines a multitude of pieces, concepts, and chemicals to form an instrument suited to a specific separation. One form of chromatography that is often overlooked is supercritical fluid chromatography. Supercritical fluid chromatography (SFC) begins its history in 1962 under the name "high pressure gas chromatography". It started off slowly and was quickly overshadowed by the development of high performance liquid chromatography (HPLC) and the already developed gas chromatography. SFC was not a popular method of chromatography until the late 1980s, when more publications began exemplifying its uses and techniques. SFC was first reported by Klesper et al. They succeeded in separating thermally labile porphyrin mixtures on a polyethylene glycol stationary phase with two mobile phases: dichlorodifluoromethane (CCl2F2) and monochlorodifluoromethane (CHClF2). Their results proved that supercritical fluids, with their low viscosity but high diffusivity, function well as a mobile phase. After Klesper's paper detailing his separation procedure, subsequent scientists aimed to find the perfect mobile phase and the possible uses for SFC. Using gases such as He, N2, CO2, and NH3, they examined purines, nucleotides, steroids, sugars, terpenes, amino acids, proteins, and many more substances for their retention behavior. They discovered that CO2 was an ideal supercritical fluid due to its low critical temperature of 31 °C and relatively low critical pressure of 72.8 atm. Extra advantages of CO2 included it being cheap, non-flammable, and non-toxic. CO2 is now the standard mobile phase for SFC. In the development of SFC over the years, the technique underwent multiple trial-and-error phases. Open tubular capillary column SFC had the advantage of independently and cooperatively changing all three parameters (pressure, temperature, and modifier content) to a certain extent. Like any chromatography method, however, it had its drawbacks. Changing the pressure, the most important parameter, often required changing the flow velocity due to the constant diameter of the capillaries. Additionally, CO2, the ideal mobile phase, is non-polar, and its polarity could not be altered easily or with a gradient. Over the years, many uses were discovered for SFC. It was identified as a useful tool in the separation of chiral compounds, drugs, natural products, and organometallics (see below for more detail). Most SFC setups currently involve a silica (or silica + modifier) packed column with a CO2 (or CO2 + modifier) mobile phase. Mass spectrometry is the most common tool used to analyze the separated samples. As mentioned previously, the advantage of supercritical fluids is the combination of useful properties from two phases: liquids and gases. Supercritical fluids are gas-like in the way they expand to fill a given volume, and the motion of the particles is close to that of a gas. On the side of liquid properties, supercritical fluids have densities near those of liquids and thus dissolve and interact with other particles, as you would expect of a liquid. To visualize phase changes in relation to pressure and temperature, phase diagrams are used; these show the stark differences between the phases in relation to the surrounding conditions. There exist two ambiguous points on the diagram.
One of these is the point at which all three lines intersect: the triple point. This is the temperature and pressure at which all three states can exist in a dynamic equilibrium. The second ambiguous point comes at the end of the liquid/gas line, where it just ends. At this temperature and pressure, the pure substance has reached a point where it will no longer exist as just one phase or the other: it exists as a hybrid phase – a liquid and gas dynamic equilibrium.As a result of the dynamic liquid-gas equilibrium, supercritical fluids possess three unique qualities: increased density (on the scale of a liquid), increased diffusivity (similar to that of a gas), and lowered viscosity (on the scale of a gas). Table \(\PageIndex{1}\) shows the similarities in each of these properties. Remember, each of these explains a part of why SFC is an advantageous method of chemical separation.How are these properties useful? An ideal mobile phase and solvent will do three things well: interact with other particles, carry the sample through the column, and quickly (but accurately) elute it.Density, as a concept, is simple: the denser something is, the more likely that it will interact with particles it moves through. Affected by an increase in pressure (given constant temperature), density is largely affected by a substance entering the supercritical fluid zone. Supercritical fluids are characterized with densities comparable to those of liquids, meaning they have a better dissolving effect and act as a better carrier gas. High densities among supercritical fluids are imperative for both their effect as solvents and their effect as carrier gases.Diffusivity refers to how fast the substance can spread among a volume. With increased pressure comes decreased diffusivity (an inverse relationship) but with increased temperature comes increased diffusivity (a direct relationship related to their kinetic energy). Because supercritical fluids have diffusivity values between a gas and liquid, they carry the advantage of a liquid’s density, but the diffusivity closer to that of a gas. Because of this, they can quickly carry and elute a sample, making for an efficient mobile phase.Finally, dynamic viscosity can be viewed as the resistance to other components flowing through, or intercalating themselves, in the supercritical fluid. Dynamic viscosity is hardly affected by temperature or pressure for liquids, whereas it can be greatly affected for supercritical fluids. With the ability to alter dynamic viscosity through temperature and pressure, the operator can determine how resistant their supercritical fluid should be.Because of its widespread use in SFC, it’s important to discuss what makes CO2 an ideal supercritical fluid. One of the biggest limitations to most mobile phases in SFC is getting them to reach the critical point. This means extremely high temperatures and pressures, which is not easily attainable. The best gases for this are ones that can achieve a critical point at relatively low temperatures and pressures.As seen from , CO2 has a critical temperature of approximately 31 °C and a critical pressure of around 73 atm. These are both relatively low numbers and are thus ideal for SFC. Of course, with every upside there exists a downside. In this case, CO2 lacks polarity, which makes it difficult to use its mobile phase properties to elute polar samples. 
This is readily fixed with a modifier, which will be discussed later.SFC has a similar instrument setup to most other chromatography machines, notably HPLC. The functions of the parts are very similar, but it is important to understand them for the purposes of understanding the technique. shows a schematic representation of a typical apparatus.There are two main types of columns used with SFC: open tubular and packed, as seen below. The columns themselves are near identical to HPLC columns in terms of material and coatings. Open tubular columns are most used and are coated with a cross-linked silica material (powdered quartz, SiO2) for a stationary phase. Column lengths range, but usually fall between 10 and 20 meters and are coated with less than 1 µm of silica stationary phase. demonstrates the differences in the packing of the two columns.Injectors act as the main site for the insertion of samples. There are many different kinds of injectors that depend on a multitude of factors. For packed columns, the sample must be small and the exact amount depends on the column diameter. For open tubular columns, larger volumes can be used. In both cases, there are specific injectors that are used depending on how the sample needs to be placed in the instrument. A loop injector is used mainly for preliminary testing. The sample is fed into a chamber that is then flushed with the supercritical fluid and pushed down the column. It uses a low-pressure pump before proceeding with the full elution at higher pressures. An inline injector allows for easy control of sample volume. A high-pressure pump forces the (specifically measured) sample into a stream of eluent, which proceeds to carry the sample through the column. This method allows for specific dilutions and greater flexibility. For samples requiring no dilution or immediate interaction with the eluent, an in-column injector is useful. This allows the sample to be transferred directly into the packed column and the mobile phase to then pass through the column.The existence of a supercritical fluid, as discussed previously, depends on high temperatures and high pressures. The pump is responsible for delivering the high pressures. By pressurizing the gas (or liquid), it can cause the substance to become dense enough to exhibit signs of the desired supercritical fluid. Because pressure couples with heat to create the supercritical fluid, the two are usually very close together on the instrument.The oven, as referenced before, exists to heat the mobile phase to its desired temperature. In the case of SFC, the desired temperature is always the critical temperature of the supercritical fluid. These ovens are precisely controlled and standard across SFC, HPLC, and GC.So far, there has been one largely overlooked component of the SFC machine: the detector. Technically not a part of the chromatographic separation process, the detector still plays an important role: identifying the components of the solution. While the SFC aims to separate components with good resolution (high purity, no other components mixed in), the detector aims to define what each of these components is made of.The two detectors most often found on SFC instruments are either flame ionization detectors (FID) or mass spectrometers (MS):Generally speaking, samples need little preparation. The only major requirement is that it dissolves in a solvent less polar than methanol: it must have a dielectric constant lower than 33, since CO2 has a low polarity and cannot easily elute polar samples. 
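Since the one quantitative sample-preparation requirement stated here is a solvent dielectric constant below 33, a tiny screening sketch in Python is given below. The dielectric constants are approximate literature values (assumed, not taken from the text) and the threshold of 33 comes directly from the paragraph above.

# Approximate dielectric constants of common sample solvents near room temperature.
dielectric_constants = {
    "hexane": 1.9,
    "tetrahydrofuran": 7.6,
    "dichloromethane": 8.9,
    "ethanol": 24.5,
    "methanol": 33.0,      # the stated boundary case
    "acetonitrile": 37.5,
    "water": 80.1,
}

THRESHOLD = 33  # samples must dissolve in a solvent less polar than methanol

for solvent, eps in dielectric_constants.items():
    verdict = "suitable" if eps < THRESHOLD else "too polar"
    print(f"{solvent:16s} dielectric constant = {eps:5.1f} -> {verdict}")

The low polarity of CO2 itself, mentioned just above, is addressed next.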
To combat this, modifiers are added to the mobile phase.The stationary phase is a neutral compound that acts as a source of “friction” for certain molecules in the sample as they slide through the column. Silica attracts polar molecules and thus the molecules attach strongly, holding until enough of the mobile phase has passed through to attract them away. The combination of the properties in the stationary phase and the mobile phase help determine the resolution and speed of the experiment.The mobile phase (the supercritical fluid) pushes the sample through the column and elutes separate, pure, samples. This is where the supercritical fluid’s properties of high density, high diffusivity, and low viscosity come into play. With these three properties, the mobile phase is able to adequately interact with the sample, quickly push through it, and strongly plow through the sample to separate it out. The mobile phase also partly determines how it separates out: it will first carry out similar molecules, ones with similar polarities, and follow gradually with molecules with larger polarities.Modifiers are added to the mobile phase to play with its properties. As mentioned a few times previously, CO2supercritical fluid lacks polarity. In order to add polarity to the fluid (without causing reactivity), a polar modifier will often be added. Modifiers usually raise the critical pressure and temperature of the mobile phase a little, but in return add polarity to the phase and result in a fully resolved sample. Unfortunately, with too much modifier, higher temperatures and pressures are needed and reactivity increases (which is dangerous and bad for the operator). Modifiers, such as ethanol or methanol, are used in small amounts as needed for the mobile phase in order to create a more polar fluid.Clearly, SFC possesses some extraordinary potential as far as chromatography techniques go. It has some incredible capabilities that allow efficient and accurate resolution of mixtures. Below is a summary of its advantages and disadvantages stacked against other conventional (competing) chromatography methods.While the use of SFC has been mainly organic-oriented, there are still a few ways that inorganic compound mixtures are separated using the method. The two main ones, separation of chiral compounds (mainly metal-ligand complexes) and organometallics are discussed here.For chiral molecules, the procedures and choice of column in SFC are very similar to those used in HPLC. Packed with cellulose type chiral stationary phase (or some other chiral stationary phase), the sample flows through the chiral compound and only molecules with a matching chirality will stick to the column. By running a pure CO2 supercritical fluid mobile phase, the non-sticking enantiomer will elute first, followed eventually (but slowly) with the other one.In the field of inorganic chemistry, a racemic mixture of Co(acac)3, both isomers shown in has been resolved using a cellulose-based chiral stationary phase. The SFC method was one of the best and most efficient instruments in analyzing the chiral compound. While SFC easily separates coordinate covalent compounds, it is not necessary to use such an extensive instrument to separate mixtures of it since there are many simpler techniques.Many d-block organometallics are highly reactive and easily decompose in air. SFC offers a way to chromatograph mixtures of large, unusual organometallic compounds. 
Large cobalt and rhodium based organometallic compound mixtures have been separated using SFC without exposing the compounds to air. By using a stationary phase of siloxanes, oxygen-linked silicon particles with different substituents attached, the organometallics were resolved based on size and charge. Thanks to the non-polar, highly diffusive, and low viscosity properties of a 100% CO2 supercritical fluid, the mixture was resolved and analyzed with a flame ionization detector. It was determined that the method was sensitive enough to detect impurities of 1%. Because the efficiency of SFC is so impressive, the potential for it in the organometallic field is huge. Identifying impurities down to 1% shows promise for not only preliminary data in experiments, but quality control as well. While it may have its drawbacks, SFC remains an underused resource in chromatography. The advantages of using supercritical fluids as mobile phases demonstrate how resolution can be increased without sacrificing time or increasing column length. Nonetheless, it is still a well-utilized resource in the organic, biomedical, and pharmaceutical industries. SFC shows promise as a reliable way of separating and analyzing mixtures. This page titled 3.4: Supercritical Fluid Chromatography is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
541
3.5: Ion Chromatography
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/03%3A_Principles_of_Gas_Chromatography/3.05%3A_Ion_Chromatography
Ion chromatography is a method of separating ions based on their distinct retention rates in a given solid phase packing material. Given different retention rates for two anions or two cations, the elution time of each ion will differ, allowing for detection and separation of one ion before the other. Detection methods are divided between electrochemical methods and spectroscopic methods. This guide will cover the principles of retention rates for anions and cations, as well as describing the various types of solid-state packing materials and eluents that can be used. The retention model for anion chromatography can be split into two distinct models, one describing eluents with a single anion, and the other describing eluents with complexing agents present. Given an eluent anion or an analyte anion, two phases are observed, the stationary phase (denoted by S) and the mobile phase (denoted by M). As such, there is an equilibrium between the two phases for both the eluent anions and the analyte anions that can be described by Equation \ref{1}.\[ y*[A^{x-}_{M}]\ +\ x*[E^{y-}_{S}]\ \Leftrightarrow \ y*[A^{x-}_{S}]\ +\ x*[E^{y-}_{M}] \label{1} \]This yields an equilibrium constant as given in Equation \ref{2}.\[ K_{A,E} = \frac{ [A^{x-}_{S}]^{y} [E^{y-}_{M}]^{x} \gamma ^{y} _{A^{x-}_{S} } \gamma ^{x} _{E^{y-}_{M}} }{ [A^{x-}_{M}] ^{y} [E^{y-}_{S}]^{x} \gamma ^{y} _{A^{x-}_{M}} \gamma ^{x} _{E^{y-}_{S}}} \label{2} \]Given that the activities of the two ions cannot be determined in the stationary or mobile phases, the activity coefficients are set to 1. Two new quantities are then introduced. The first is the distribution coefficient, DA, which is the ratio of the analyte concentration in the stationary phase to that in the mobile phase, Equation \ref{3}. The second is the retention factor, \(k_{A}^{1}\), which is the distribution coefficient times the ratio of the volumes of the two phases, Equation \ref{4}.\[ D_{A} \ =\ \frac{[A_{S}]}{[A_{M}]} \label{3} \]\[k_{A}^{1} \ = \ D_{A} * \frac{V_{S}}{V_{M}} \label{4} \]Substituting the two quantities from Equation \ref{3} and Equation \ref{4} into Equation \ref{2}, the equilibrium constant can be written as Equation \ref{5}.\[ K_{A,E} \ = (k_{A}^{1} \frac{V_{M}}{V_{S}})^{y} * (\frac{[E_{M}^{y-} ]}{[E^{y-}_{S}]})^{x} \label{5} \]Given that there is usually a large difference in concentration between the eluent and the analyte (the eluent typically being orders of magnitude more concentrated), Equation \ref{5} can be re-written under the assumption that all of the solid phase packing material's functional groups are taken up by Ey-. As such, the stationary-phase Ey- concentration can be substituted with the exchange capacity Q divided by the charge y of Ey-. This yields Equation \ref{6}.\[ K_{A,E} \ = (k_{A}^{1} \frac{V_{M}}{V_{S}})^{y} * (\frac{Q}{y})^{-x} [E_{M}^{y-}]^{x} \label{6} \]Solving Equation \ref{6} for the retention factor shows the relationship between the retention factor and parameters such as the eluent concentration and the exchange capacity, which allows the parameters of the ion chromatography experiment to be manipulated and the retention factors to be predicted.
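The solved form of that rearrangement is not written out in the text; a minimal reconstruction, assuming Equation \ref{6} is simply solved for \(k_{A}^{1}\) by taking logarithms, is

\[ \log k_{A}^{1} \ =\ \frac{1}{y} \log K_{A,E} \ +\ \log \frac{V_{S}}{V_{M}} \ +\ \frac{x}{y} \log \frac{Q}{y} \ -\ \frac{x}{y} \log [E_{M}^{y-}] \]

Under this assumption, a plot of \(\log k_{A}^{1}\) against \(\log [E_{M}^{y-}]\) should be linear with a slope of \(-x/y\), which is the practical handle for tuning retention through the eluent concentration.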
Equation \ref{7} only applies when a single analyte is present, but a relationship for the selectivity between two analytes [A] and [B] can easily be determined. First the equilibrium between the two analytes is determined, as in Equation \ref{8}.\[ z[A^{x-}_{M}] \ +\ x[B^{z-}_{S}] \Leftrightarrow z[A^{x-}_{S}] \ +\ x[B^{z-}_{M}] \label{8} \]The corresponding equilibrium constant can be written as Equation \ref{9} (ignoring activity).\[ K_{A,B} \ = \frac{[A^{x-}_{S}]^{z} [B^{z-}_{M}]^{x}}{[A^{x-}_{M}]^{z} [B^{z-}_{S}]^{x}} \label{9} \]The selectivity can then be determined to be Equation \ref{10}.\[ \alpha _{A,B} \ = \frac{[A^{x-}_{S}][B^{z-}_{M}]}{[A^{x-}_{M}][B^{z-}_{S}]} \label{10} \]Equation \ref{10} can then be expressed in logarithmic form as the following two equations:\[ \log \alpha _{A,B} = \frac{1}{z} \log K_{A,B} \ + \frac{x-z}{z} \log \frac{ k_{A}^{1} V_{M}}{V_{S}} \label{11} \]\[ \log \alpha _{A,B} = \frac{1}{x} \log K_{A,B} \ + \frac{x-z}{z} \log \frac{ k_{A}^{1} V_{M}}{V_{S}} \label{12} \]When the two charges are the same, it can be seen that the selectivity is a function only of the equilibrium constant and the charges. When the two charges are different, the selectivity also depends on the retention factor and the phase volume ratio. In situations with a polyatomic eluent, three models are used to account for the multiple anions in the eluent. The first is the dominant equilibrium model, in which one anion is so dominant in concentration that the other eluent anions are ignored; the dominant equilibrium model works best for multivalent analytes. The second is the effective charge model, in which an effective charge of the eluent anions is found and a relationship similar to the single-eluent case above is written in terms of that effective charge; the effective charge model works best with monovalent analytes. The third is the multiple eluent species model, where Equation \ref{13} describes the retention factor:\[ \log k_{A}^{1} \ =\ C_{3} - \left(\frac{X_{1}}{a} + \frac{X_{2}}{b} + \frac{X_{3}}{c}\right) -\ \log C_{P} \label{13} \]C3 is a constant that includes the phase volume ratio between the stationary and mobile phases, the equilibrium constant, and the exchange capacity. CP is the total concentration of the eluent species. X1, X2, and X3 correspond to the shares of a particular eluent anion in the retention of the analyte. For eluents with a single cation and analytes that are alkaline earth metals, heavy metals or transition metals, a complexing agent is used to bind with the metal during chromatography. This introduces the quantity αM to the retention rate calculations, where αM is the ratio of the free metal ion concentration to the total concentration of the metal. Following a similar derivation to the single anion case, Equation \ref{14} is found.\[ K_{A,E} = \ \left(\frac{ k_{A}^{1}}{ \alpha _{M} \phi } \right)^{y} \left(\frac{Q}{y}\right)^{-x} [E ^{y+} _{M} ]^{x} \label{14} \]Solving for the retention factor, Equation \ref{15} is found.\[ k_{A}^{1} = \alpha _{M} \phi \ K_{A,E} ^{\frac{1}{y} } \left(\frac{Q}{y}\right)^{\frac{x}{y} } [E_{M}^{y+}]^{- \frac{x}{y} } \label{15} \]From this expression, the retention rate of the cation can be determined from the eluent concentration and the ratio of free metal ions to the total concentration of the metal, which itself depends on the equilibrium of the metal ion with the complexing agent. The solid phase packing material used in the chromatography column is important to the exchange capacity for the anion or cation. There are many types of packing material, but all share a functional group that can bind either the anion or the cation complex.
The functional group is mounted on a polymer surface or sphere, providing a large surface area for interaction. The primary functional group used for anion chromatography is the ammonium group. Amine groups are mounted on the polymer surface, and the pH is lowered to produce ammonium groups. As such, the exchange capacity depends on the pH of the eluent. To reduce the pH dependency, the protons on the ammonium are successively replaced with alkyl groups until all the protons are replaced; the functional group is then still positively charged, but pH independent. The two packing materials used in almost all anion chromatography are trimethylamine (NMe3, ) and dimethylethanolamine ). Cation chromatography allows for the use of both organic polymer based and silica gel based packing material. Among the silica gel based packing materials, the most common is a polymer-coated silica gel. The silicate is coated in polymer, which is held together by cross-linking of the polymer. Polybutadiene maleic acid ) is then used to create a weakly acidic material, allowing the analyte to diffuse through the polymer and exchange. Silica gel based packing material is limited by the pH dependent solubility of the silica gel and the pH dependent linking of the silica gel and the functionalized polymer. However, silica gel based packing material is suitable for separation of alkali metals and alkaline earth metals. Organic polymer based packing material is not limited by pH like the silica gel materials are, but is not suitable for separation of alkali metals and alkaline earth metals. The most common functional group is the sulfonic acid group ), attached with a spacer between the polymer and the sulfonic acid group. Photometric detection in the UV region of the spectrum is a common method of detection in ion chromatography. Photometric methods limit the eluent possibilities, as the analyte must have a unique absorbance wavelength to be detectable. Cations that do not have a unique absorbance wavelength (i.e., where the eluent and other contaminants have similar UV-visible spectra) can be complexed to form UV-visible compounds. This allows detection of the cation without interference from the eluent. Coupling the chromatography with various types of spectroscopy, such as mass spectrometry or IR spectroscopy, can be a useful method of detection. Inductively coupled plasma atomic emission spectroscopy is a commonly used method. Direct conductivity methods take advantage of the change in conductivity that an analyte produces in the eluent, which can be modeled by Equation \ref{16}, where the equivalent conductivity is defined as in Equation \ref{17}.\[ \Delta K \ =\frac{(\Lambda _{A} \ -\ \Lambda _{g} ) C_{s}}{1000} \label{16} \]\[ \Lambda \ =\frac{L}{A R} \ \frac{1}{C} \label{17} \]Here ΛA and Λg are the equivalent conductivities of the analyte and eluent ions, respectively, Cs is the analyte concentration, L is the distance between two electrodes of area A, R is the measured resistance, and C is the concentration of the ion.
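As a rough numerical illustration of Equation \ref{16}, the sketch below estimates the conductivity change produced by a hypothetical chloride analyte in a dilute, low-conductivity eluent; the equivalent conductivity values and the analyte concentration are assumed, illustrative numbers, not data from the text.

```python
# Illustrative sketch of Equation (16): conductivity change produced by an analyte band.
# The equivalent conductivities (S cm^2 equiv^-1) and the analyte concentration
# (equiv L^-1) below are assumed example values, not measured data.

def conductivity_change(lambda_analyte, lambda_eluent, c_analyte_equiv_per_L):
    """Return delta-K in S/cm using delta_K = (Lambda_A - Lambda_g) * C_s / 1000."""
    return (lambda_analyte - lambda_eluent) * c_analyte_equiv_per_L / 1000.0

# Example: chloride (~76 S cm^2 equiv^-1) eluting in a low-conductivity eluent (~30)
delta_k = conductivity_change(lambda_analyte=76.0, lambda_eluent=30.0,
                              c_analyte_equiv_per_L=1e-4)
print(f"Estimated conductivity change: {delta_k:.2e} S/cm")
```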
The conductivity can be plotted over time, and the peaks that appear represent different ions coming through the column, as described by Equation \ref{18}.\[ K_{peak} \ =\ (\Lambda _{A} \ -\ \Lambda _{g}) C_{A} \label{18} \]The values of the equivalent conductivities of the analyte and of common eluent ions can be found in Table \(\PageIndex{1}\). The choice of eluent depends on many factors, namely, pH, buffer capacity, the concentration of the eluent, and the nature of the eluent's reaction with the column and the packing material. In non-suppressed anion chromatography, where the eluent and analyte are not altered between the column and the detector, there is a wide range of eluents to be used. In the non-suppressed case, the only issue that could arise is if the eluent impaired the detection ability (absorbing in a similar place in a UV spectrum as the analyte, for instance). As such, there are a number of commonly used eluents. Aromatic carboxylic acids are used in conductivity detection because of their low self-conductivity. Aliphatic carboxylic acids are used for UV/visible detection because they are UV transparent. Inorganic acids can only be used in photometric detection. In suppressed anion chromatography, where the eluent and analyte are treated between the column and detection, fewer eluents can be used. The suppressor modifies the eluent and the analyte, reducing the self-conductivity of the eluent and possibly increasing the self-conductivity of the analyte. Only alkali hydroxides and carbonates, borates, hydrogen carbonates, and amino acids can be used as eluents. The primary eluents used in cation chromatography of alkali metals and ammoniums are mineral acids such as HNO3. When the cation is multivalent, organic bases such as ethylenediamine ) serve as the main eluents. If both alkali metals and alkaline earth metals are present, hydrochloric acid or 2,3-diaminopropionic acid ) is used in combination with a pH variation. If the chromatography is unsuppressed, the direct conductivity measurement of the analyte will show up as a negative peak due to the high conductivity of the H+ in the eluent, but simple inversion of the data can be used to rectify this discrepancy. If transition metals or H+ are the analytes in question, complexing carboxylic acids are used to suppress the charge of the analyte and to create photometrically detectable complexes, forgoing the need for direct conductivity as the detection method. This page titled 3.5: Ion Chromatography is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.6: Capillary Electrophoresis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/03%3A_Principles_of_Gas_Chromatography/3.06%3A_Capillary_Electrophoresis
Capillary electrophoresis (CE) encompasses a family of electrokinetic separation techniques that use an applied electric field to separate analytes based on their charge and size. The basic principle is that of electrophoresis, which is the motion of particles relative to a fluid (electrolyte) under the influence of an electric field. The founding father of electrophoresis, Arne W. K. Tiselius ), first used electrophoresis to separate proteins, and he went on to win a Nobel Prize in Chemistry in 1948 for his work on both electrophoresis and adsorption analysis. However, it was Stellan Hjerten ), who worked under Arne W. K. Tiselius, who pioneered work in CE in 1967, although CE was not well recognized until 1980, when James W. Jorgenson ) and Krynn D. Lukacs published a series of papers describing this new technique. The main components of CE are shown in . The electric circuit of the CE is the heart of the instrument. The samples that are studied in CE are mainly liquid samples. A typical capillary column has an inner diameter of 50 μm and a length of 25 cm. Because the column can only contain a minimal amount of running buffer, only small sample volumes can be tested (nL to μL). The samples are introduced mainly by two injection methods: hydrodynamic and electrokinetic injection. The two methods are displayed in Table \(\PageIndex{1}\). A disadvantage of electrokinetic injection is that the composition of the injected sample may not be the same as the composition of the original sample. This is because the injection method is dependent on the electrophoretic and electroosmotic mobility of the species in the sample. However, both injection methods depend on the temperature and the viscosity of the solution. Hence, it is important to control both parameters when a reproducible volume of sample injections is desired. It is advisable to use internal standards instead of external standards when performing quantitative analysis on the samples, as it is hard to control both the temperature and viscosity of the solution. After the samples have been injected, the capillary column is used as the main medium to separate the components. The capillary column used in CE shares the same characteristics as the capillary column used in gas chromatography (GC); however, several characteristics of the CE column are critical to the separation. The solvent buffer carries the sample through the column. It is crucial to employ a good buffer, as a successful CE experiment hinges upon this. CE is based on the separation of charges in an electric field. Therefore, the buffer should either sustain the pre-existing charge on the analyte or enable the analyte to obtain a charge, and it is important to consider the pH of the buffer before using it. The applied voltage is important in the separation of the analytes as it drives the movement of the analyte. It is important that it is not too high, as it may become a safety concern. Analytes that have been separated after applying the voltage can be detected by many detection methods. The most common method is UV-visible absorbance. The detection takes place across the capillary, with a small portion of the capillary acting as the detection cell. The on-tube detection cell is usually made optically transparent by scraping off the polyimide coating and coating it with another optically transparent material so that the capillary does not break easily. For species that do not have a chromophore, a chromophore can be added to the buffer solution.
When the analyte passes by, there is a decrease in signal, and this decrease corresponds to the amount of analyte present. Other common detection techniques employable in CE are fluorescence and mass spectrometry (MS). In CE, the sample is introduced into the capillary by the above-mentioned methods. A high voltage is then applied, causing the ions of the sample to migrate towards the electrode in the destination reservoir, in this case the cathode. The migration and separation of sample components are determined by two factors: electrophoretic mobility and electroosmotic mobility. The electrophoretic mobility, \(μ_{ep}\), is inherently dependent on the properties of the solute and the medium in which the solute is moving. Essentially, it is a constant value that can be calculated from Equation \ref{1}, where \(q\) is the solute's charge, \(η\) is the buffer viscosity and \(r\) is the solute radius.\[ \mu _{ep} = \dfrac{q}{6\pi \eta r} \label{1} \]The electrophoretic velocity, \(v_{ep}\), is dependent on the electrophoretic mobility and the applied electric field, \(E\) (Equation \ref{2}).\[ \nu _{ep} = \mu _{ep} E \label{2} \]Thus, solutes with a larger charge-to-size ratio have a larger electrophoretic mobility and velocity. Cations and anions move in opposing directions, corresponding to the sign of the electrophoretic mobility, which is a result of their charge; neutral species, which have no charge, have no electrophoretic mobility. The second factor that controls the migration of the solute is the electroosmotic flow. With zero charge, it would be expected that neutral species remain stationary. However, under normal conditions, the buffer solution moves towards the cathode as well. The cause of the electroosmotic flow is the electric double layer that develops at the silica-solution interface. At pH greater than 3, the abundant silanol (-OH) groups present on the inner surface of the silica capillary de-protonate to form negatively charged silanate ions (-SiO-). The cations present in the buffer solution will be attracted to the silanate ions and some of them will bind strongly, forming a fixed layer. The formation of the fixed layer only partially neutralizes the negative charge on the capillary walls. Hence, more cations than anions will be present in the layer adjacent to the fixed layer, forming the diffuse layer. The combination of the fixed layer and diffuse layer is known as the double layer, as shown in . The cations present in the diffuse layer will migrate towards the cathode, and as these cations are solvated the solution will also flow with them, producing the electroosmotic flow. The anions present in the diffuse layer are solvated and will move towards the anode. However, as there are more cations than anions, the cations push the anions along with them in the direction of the cathode. Hence, the electroosmotic flow moves in the direction of the cathode. The electroosmotic mobility, μeof, is described by Equation \ref{3}, where ζ is the zeta potential, ε is the buffer dielectric constant and η is the buffer viscosity. The electroosmotic velocity, veof, the rate at which the buffer moves through the capillary, is given by Equation \ref{4}.\[ \mu _{eof} \ =\ \frac{\zeta \varepsilon }{4\pi \eta } \label{3} \]\[ \nu _{eof}\ =\ \mu _{eof}E \label{4} \]The zeta potential, ζ, also known as the electrokinetic potential, is the electric potential at the interface of the double layer.
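As a rough numerical illustration of the mobility relations above (Equations \ref{1} and \ref{2}), the sketch below estimates the electrophoretic mobility and velocity of a small, singly charged ion and its apparent velocity when carried along by the electroosmotic flow; the charge, radius, viscosity, field strength, and electroosmotic mobility are all assumed, order-of-magnitude values.

```python
import math

# Illustrative sketch of Equations (1) and (2): electrophoretic mobility and velocity
# for a small, singly charged ion. All values are assumed, order-of-magnitude numbers.

q   = 1.602e-19   # ion charge, C (one elementary charge)
eta = 1.0e-3      # buffer viscosity, Pa*s (water-like)
r   = 0.5e-9      # hydrodynamic radius of the solute, m
E   = 6.0e4       # applied field, V/m (e.g. ~30 kV over a 50 cm capillary)

mu_ep = q / (6 * math.pi * eta * r)    # Equation (1)
v_ep  = mu_ep * E                      # Equation (2)

# Assumed electroosmotic mobility; in practice it is usually larger than mu_ep,
# so even anions are swept toward the cathode.
mu_eof = 5.0e-8                        # m^2 V^-1 s^-1 (assumed)
v_app  = (mu_ep + mu_eof) * E          # apparent velocity of a cation

print(f"mu_ep ~ {mu_ep:.2e} m^2/(V s)")
print(f"v_ep  ~ {v_ep:.2e} m/s")
print(f"v_app ~ {v_app:.2e} m/s")
```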
In our case, the zeta potential is the potential of the diffuse layer at a finite distance from the capillary wall. The zeta potential is mainly affected by, and directly proportional to, two factors. The electroosmotic flow of the buffer is generally greater than the electrophoretic flow of the analytes; hence, even the anions move to the cathode, as illustrated in . Small, highly charged cations are the first to elute, before larger cations with lower charge. These are followed by the neutral species, which elute as one band in the middle. Larger anions with low charge elute next and, lastly, highly charged small anions have the longest elution time. This is clearly portrayed in the electropherogram in . There are several components that can be varied to optimize the electropherogram obtained from CE; hence, for any given setup certain parameters should be known. To shorten the analysis time, a higher voltage or a shorter capillary tube can be used. However, it is important to note that the voltage cannot be arbitrarily high as it will lead to Joule heating. Another possibility is to increase μeof by increasing the pH or decreasing the ionic strength of the buffer; the migration time is given by Equation \ref{5}, where l is the length of capillary from the inlet to the detector and L is the total capillary length.\[ t_{mn} \ =\ \frac{lL}{(\mu _{ep} \ +\ \mu_{eof}) V } \label{5} \]In chromatography, the efficiency is given by the number of theoretical plates, N. In CE, there exists a similar parameter, Equation \ref{6}, where D is the solute's diffusion coefficient. Efficiency increases with the applied voltage: because the solute spends less time in the capillary, there is less time for it to diffuse. Generally, for CE, N will be very large.\[ N\ =\frac{l^{2}}{2Dt_{mn}} = \frac{\mu _{tot} V l}{2DL} \label{6} \]The resolution between two peaks, R, is defined by Equation \ref{7}, where Δv is the difference in velocity of the two solutes and ṽ is the average velocity of the two solutes.\[ R= \frac{\sqrt{N} }{4} \times \frac{\Delta v}{ \tilde{\nu } } \label{7} \]Substituting the expression for N gives Equation \ref{8}.\[ R\ = 0.177(\mu _{ep,1} \ -\ \mu _{ep,2}) \sqrt{ \frac{V}{D(\mu _{av} + \mu _{eof})} } \label{8} \]Therefore, increasing the applied voltage, V, will increase the resolution. However, it is not very effective, as a 4-fold increase in applied voltage gives only a 2-fold increase in resolution. An increase in N, the number of theoretical plates, also results in better resolution. In chromatography, the selectivity, α, is defined as the ratio of the two retention factors of the solutes. This is the same for CE, Equation \ref{9}, where t2 and t1 are the retention times for the two solutes, defined such that α is greater than 1.\[ \alpha =\frac{t_{2}}{t_{1}} \label{9} \]Selectivity can be improved by adjusting the pH of the buffer solution; the purpose is to change the charge of the species being eluted. CE, unlike high-performance liquid chromatography (HPLC), accommodates many samples and tends to have better resolution and efficiency. A comparison between the two methods is given in Table \(\PageIndex{2}\). CE allows the separation of charged particles, and it is mainly compared to ion chromatography. However, no separation takes place for neutral species in CE. Thus, a modified CE technique named micellar electrokinetic chromatography (MEKC) can be used to separate neutral species based on their size and their affinity for the micelle. In MEKC, a surfactant is added to the buffer solution at a concentration at which micelles form.
An example of a surfactant is sodium dodecyl sulfate (SDS), as seen in . Neutral molecules are in dynamic equilibrium between the bulk solution and the interior of the micelle. In the absence of the micelle the neutral species would reach the detector at t0, but in the presence of the micelle it reaches the detector at tmc, where tmc is greater than t0. The longer the neutral molecule remains in the micelle, the longer its migration time. Thus small, non-polar neutral species that favor interaction with the interior of the micelle take a longer time to reach the detector than large, polar species. Anionic, cationic and zwitterionic surfactants can be added to change the partition coefficient of the neutral species. Cationic surfactants result in positive micelles that move in the direction of the electroosmotic flow, which enables them to move faster towards the cathode. However, due to the fast migration, it is possible that insufficient time is given for the neutral species to interact with the micelle, resulting in poor separation. Thus, all factors must be considered before choosing the right surfactant to be used. The mechanism of separation in MEKC is the same as in liquid chromatography: both are dependent on the partition coefficient of the species between the mobile phase and stationary phase. The main difference lies in the pseudo-stationary phase in MEKC, the micelles. The micelle, which can be considered the stationary phase in MEKC, moves at a slower rate than the mobile ions. Quantum dots (QD) are semiconductor nanocrystals that lie in the size range of 1-10 nm, and they have different electrophoretic mobilities due to their varying sizes and surface charges. CE can be used to separate and characterize such species, and a method to characterize and separate CdSe QD in aqueous medium has been developed. The QDs were synthesized with an outer layer of trioctylphosphine (TOP, ) and trioctylphosphine oxide (TOPO, ), making the surface of the QD hydrophobic. The background electrolyte solution used was SDS, in order to make the QDs soluble in water and form a QD-TOPO/TOP-SDS complex. Different sizes of CdSe were used and the separation was with respect to the charge-to-mass ratio of the complexes. It was concluded from the study that the QDs with the largest CdSe cores (i.e., the largest charge-to-mass ratio) eluted last. The electropherogram from the study is shown in , from which it is visible that good separation had taken place by using CE. Laser-induced fluorescence detection was used, the buffer system was SDS, and the pH of the system was fixed at 6.5. The pH is highly important in this case as the stability of the system and the separation are dependent on it. This page titled 3.6: Capillary Electrophoresis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.1: Magnetism
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.01%3A_Magnetism
The magnetic moment of a material arises from the incomplete cancellation of the atomic magnetic moments in that material. Electron spin and orbital motion both have magnetic moments associated with them ), but in most atoms the electronic moments cancel each other out overall in the material ); this is called diamagnetism. If the cancellation of the moments is incomplete then the atom has a net magnetic moment. There are many subclasses of magnetic ordering, such as para-, superpara-, ferro-, antiferro- or ferrimagnetism, which can be displayed in a material and which usually depend upon the strength and type of magnetic interactions and on external parameters such as temperature, crystal structure, atomic content, and the magnetic environment in which a material is placed. The magnetic moments of atoms, molecules or formula units are often quoted in terms of the Bohr magneton, which is equal to the magnetic moment due to electron spin, Equation \ref{1}.\[ \mu _{B} \ =\ \frac{eh}{4\pi m} \ = \ 9.274 \times 10^{-24}\ J/T \label{1} \]The magnetism of a material, the extent to which a material is magnetic, is not a static quantity, but varies with the environment in which the material is placed. It is similar to the temperature of a material. For example, if a material is placed in an oven it will heat up to a temperature similar to that of the oven. However, the speed of heating of that material, and also that of cooling, is determined by the atomic structure of the material. The magnetization of a material is similar. When a material is placed in a magnetic field it may become magnetized to an extent and retain that magnetization after it is removed from the field. The extent of magnetization, the type of magnetization and the length of time that a material remains magnetized depend again on the atomic makeup of the material. Measuring a material's magnetism can be done on a micro or macro scale. Magnetism is measured by two parameters, direction and strength; thus magnetization is a vector quantity. The simplest form of a magnetometer is a compass, which measures the direction of a magnetic field. However, more sophisticated instruments have been developed which give a greater insight into a material's magnetism. So what exactly are you reading when you observe the output from a magnetometer? The magnetism of a sample is called the magnetic moment of that sample and will be called that from now on. The single value of magnetic moment for the sample is a combination of the magnetic moments of the atoms within the sample ( ), the type and level of magnetic ordering, and the physical dimensions of the sample itself. The "intensity of magnetization", M, is a measure of the magnetization of a body. It is defined as the magnetic moment per unit volume, or\[ M \ =\ m/V \label{2} \]with units of A/m (emu/cm3 in cgs notation). A material contains many atoms and their arrangement affects the magnetization of that material. In (a) a magnetic moment m is contained in unit volume. This has a magnetization of m A/m. (b) shows two such units, with the moments aligned parallel. The vector sum of moments is 2m in this case, but as both the moment and the volume are doubled, M remains the same. In (c) the moments are aligned antiparallel. The vector sum of moments is now 0 and hence the magnetization is 0 A/m. Scenarios (b) and (c) are a simple representation of ferro- and antiferromagnetic ordering, and a short numerical sketch of these two scenarios is given below.
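The sketch below simply evaluates Equation \ref{2} for the parallel and antiparallel arrangements just described; the moment magnitude and unit volume are arbitrary illustrative values, chosen only to show that parallel alignment leaves M unchanged while antiparallel alignment cancels it.

```python
# Sketch of Equation (2), M = m/V, for the parallel and antiparallel scenarios above.
# The moment magnitude and unit volume are arbitrary illustrative values.

m = 1.0e-3   # magnetic moment of one unit, A m^2 (assumed)
V = 1.0e-6   # volume of one unit, m^3 (assumed)

# (a) one moment in one unit volume
M_single = m / V

# (b) two parallel moments in twice the volume: vector sum 2m, volume 2V
M_parallel = (m + m) / (2 * V)

# (c) two antiparallel moments in twice the volume: vector sum 0
M_antiparallel = (m - m) / (2 * V)

print(f"M (single)       = {M_single:.1e} A/m")
print(f"M (parallel)     = {M_parallel:.1e} A/m")    # same as the single case
print(f"M (antiparallel) = {M_antiparallel:.1e} A/m")  # zero
```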
Hence we would expect a large magnetization in a ferromagnetic material such as pure iron and a small magnetization in an antiferromagnet such as α-Fe2O3. When a material is passed through a magnetic field it is affected in two ways. The concept of magnetic moment is the starting point when discussing the behavior of magnetic materials within a field. If you place a bar magnet in a field it will experience a torque or moment tending to align its axis in the direction of the field. A compass needle behaves in the same way. This torque increases with the strength of the poles and their distance apart. So the value of magnetic moment tells you, in effect, 'how big a magnet' you have. If you place a material in a weak magnetic field, the magnetic field may not overcome the binding energies that keep the material in a non-magnetic state. This is because it is energetically more favorable for the material to stay exactly the same. However, if the strength of the field is increased, and with it the torque acting on the smaller moments in the material, it may become energetically preferable for the material to become magnetic. The reasons that the material becomes magnetic depend on factors such as the crystal structure, the temperature of the material, and the strength of the field that it is in. A simple explanation is that as the field strength increases it becomes more favorable for the small moments to align themselves along the applied field, instead of being opposed to the system. For this to occur the material must rearrange its magnetic makeup at the atomic level to lower the energy of the system and restore a balance. It is important to remember that the magnetic susceptibility takes into account how a material changes on the atomic level when it is placed in a magnetic field; the moment that we are measuring with our magnetometer is the total moment of that sample.\[ \chi \ =\ \frac{M}{H} \label{3} \]where χ is the susceptibility, M the magnetization, and H the applied field. Magnetic permeability is the ability of a material to conduct a magnetic flux. In the same way that materials conduct or resist electricity, materials also conduct or resist a magnetic flux or the flow of magnetic lines of force ). Ferromagnetic materials are usually highly permeable to magnetic fields. Just as electrical conductivity is defined as the ratio of the current density to the electric field strength, so the magnetic permeability, μ, of a particular material is defined as the ratio of flux density to magnetic field strength. However, unlike electrical conductivity, magnetic permeability is nonlinear.\[ \mu \ =\ B/H \label{4} \]Permeability, where μ is written without a subscript, is known as absolute permeability. Instead a variant is often used, called relative permeability.\[ \mu \ =\ \mu _{0} \times \mu _{r} \label{5} \]Relative permeability is a variation upon 'straight' or absolute permeability, μ, but is more useful as it makes clearer how the presence of a particular material affects the relationship between flux density and field strength. (A short numerical sketch relating susceptibility, relative permeability, and flux density is given below.)
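The sketch below ties Equations \ref{3}, \ref{4} and \ref{5} together numerically; the magnetization and applied field values are assumed, and the SI identity μr = 1 + χ for a linear material is a standard relation used here as an assumption rather than something stated in the text.

```python
import math

# Sketch relating susceptibility (Equation 3) and permeability (Equations 4 and 5)
# in SI units. The magnetization and applied field below are assumed values,
# and mu_r = 1 + chi assumes a simple linear material.

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

M = 2.0e3    # magnetization, A/m (assumed)
H = 1.0e4    # applied field, A/m (assumed)

chi  = M / H                  # Equation (3): dimensionless susceptibility
mu_r = 1.0 + chi              # standard SI identity for a linear material
B    = MU_0 * mu_r * H        # flux density, T, using mu = mu_0 * mu_r (Equation 5)

print(f"chi  = {chi:.2f}")
print(f"mu_r = {mu_r:.2f}")
print(f"B    = {B:.3e} T")
```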
The term 'relative' arises because this permeability is defined in relation to the permeability of a vacuum, μ0.\[ \mu _{r} \ =\ \mu / \mu_{0} \label{6} \]For example, if you use a material for which μr = 3 then you know that the flux density will be three times as great as it would be if we just applied the same field strength to a vacuum. Initial permeability describes the relative permeability of a material at low values of B (below 0.1 T). The maximum value for μ in a material is frequently a factor of between 2 and 5 or more above its initial value. Low flux has the advantage that every ferrite can be measured at that density without risk of saturation. This consistency means that comparison between different ferrites is easy. Also, if you measure the inductance with a normal component bridge then you are doing so with respect to the initial permeability. The permeability of a vacuum has a finite value - about 1.257 × 10^-6 H/m - and is denoted by the symbol μ0. Note that this value is constant with field strength and temperature. Contrast this with the situation in ferromagnetic materials where μ is strongly dependent upon both. Also, for practical purposes, most non-ferromagnetic substances (such as wood, plastic, glass, bone, copper, aluminum, air and water) have permeability almost equal to μ0; that is, their relative permeability is 1.0. The permeability, μ, is the variation of magnetic induction with applied field,\[ \mu \ =\ B/H \label{7} \]A single measurement of a sample's magnetization is relatively easy to obtain, especially with modern technology. Often it is simply a case of loading the sample into the magnetometer in the correct manner and performing a single measurement. This value is, however, the sum total of the sample, any substrate or backing, and the sample mount. A sample substrate can produce a substantial contribution to the sample total. A diamagnetic substrate has no moment under zero applied field and thus no effect on the measurement of magnetization. Under applied fields its contribution is linear and temperature independent. The diamagnetic contribution can be calculated from knowledge of the volume and properties of the substrate and subtracted as a constant linear term to produce the signal from the sample alone. The diamagnetic background can also be seen clearly at high fields where the sample has reached saturation: the sample saturates but the linear background from the substrate continues to increase with field. The gradient of this background can be recorded and subtracted from the readings if the substrate properties are not known accurately. When a material exhibits hysteresis, it means that the material responds to a force and has a history of that force contained within it. Consider if you press on something until it depresses. When you release that pressure, if the material remains depressed and doesn't spring back then it is said to exhibit some type of hysteresis. It remembers a history of what happened to it, and may exhibit that history in some way. Consider a piece of iron that is brought into a magnetic field: it retains some magnetization, even after the external magnetic field is removed. Once magnetized, the iron will stay magnetized indefinitely. To demagnetize the iron, it is necessary to apply a magnetic field in the opposite direction. This is the basis of memory in a hard disk drive. The response of a material to an applied field and its magnetic hysteresis is an essential tool of magnetometry.
Paramagnetic and diamagnetic materials can easily be recognized, soft and hard ferromagnetic materials give different types of hysteresis curves, and from these curves values such as saturation magnetization, remnant magnetization and coercivity are readily observed. More detailed curves can give indications of the type of magnetic interactions within the sample. The intensity of magnetization depends upon both the magnetic moments in the sample and the way that they are oriented with respect to each other, known as the magnetic ordering. Diamagnetic materials, which have no atomic magnetic moments, have no magnetization in zero field. When a field is applied a small, negative moment is induced on the diamagnetic atoms proportional to the applied field strength. As the field is reduced the induced moment is reduced. In a paramagnet the atoms have a net magnetic moment but are oriented randomly throughout the sample due to thermal agitation, giving zero magnetization. As a field is applied the moments tend towards alignment along the field, giving a net magnetization which increases with applied field as the moments become more ordered. As the field is reduced the moments become disordered again by their thermal agitation. The figure shows the linear response M vs. H where μH << kT. The hysteresis curves for a ferromagnetic material are more complex than those for diamagnets or paramagnets. The diagram below shows the main features of such a curve for a simple ferromagnet. In the virgin material (point 0) there is no magnetization. The process of magnetization, leading from point 0 to saturation at M = Ms, is outlined below. Although the material is ordered ferromagnetically, it consists of a number of ordered domains arranged randomly, giving no net magnetization. This is shown in (a) below with two domains whose individual saturation moments, Ms, lie antiparallel to each other. As the magnetic field, H, is applied, (b), those domains which are more energetically favorable increase in size at the expense of those whose moment lies more antiparallel to H. There is now a net magnetization, M. Eventually a field is reached where all of the material is a single domain with a moment aligned parallel, or close to parallel, with H. The magnetization is now M = Ms cos Θ where Θ is the angle between Ms along the easy magnetic axis and H. Finally Ms is rotated parallel to H and the ferromagnet is saturated with a magnetization M = Ms. The process of domain wall motion affects the shape of the virgin curve. There are two qualitatively different modes of behavior known as nucleation and pinning, shown in as curves 1 and 2, respectively. In a nucleation-type magnet saturation is reached quickly at a field much lower than the coercive field. This shows that the domain walls are easily moved and are not pinned significantly. Once the domain structure has been removed the formation of reversed domains becomes difficult, giving high coercivity. In a pinning-type magnet fields close to the coercive field are necessary to reach saturation magnetization. Here the domain walls are substantially pinned and this mechanism also gives high coercivity. As the applied field is reduced to 0 after the sample has reached saturation, the sample can still possess a remnant magnetization, Mr. The magnitude of this remnant magnetization is a product of the saturation magnetization, the number and orientation of easy axes and the type of anisotropy symmetry.
If the axis of anisotropy or magnetic easy axis is perfectly aligned with the field then Mr = Ms, and if perpendicular Mr = 0. At saturation the angular distribution of domain magnetizations is closely aligned to H. As the field is removed they turn to the nearest easy magnetic axis. In a cubic crystal with a positive anisotropy constant, K1, the easy directions are <100>. At remanence the domain magnetizations will lie along one of the three <100> directions. The maximum deviation from H occurs when H is along the <111> axis, giving a cone of distribution of 55° around the axis. Averaging the saturation magnetization over this angle gives a remnant magnetization of 0.832 Ms. The coercive field, Hc, is the field at which the remnant magnetization is reduced to zero. This can vary from a few A/m for soft magnets to 10^7 A/m for hard magnets. It is the point of magnetization reversal in the sample, where the barrier between the two states of magnetization is reduced to zero by the applied field, allowing the system to make a Barkhausen jump to a lower energy. It is a general indicator of the energy gradients in the sample which oppose large changes of magnetization. The reversal of magnetization can come about as a rotation of the magnetization in a large volume or through the movement of domain walls under the pressure of the applied field. In general, materials with few or no domains have a high coercivity whilst those with many domains have a low coercivity. However, domain wall pinning by physical defects such as vacancies, dislocations and grain boundaries can increase the coercivity. The loop illustrated in is indicative of a simple bi-stable system. There are two energy minima: one with magnetization in the positive direction, and another in the negative direction. The depth of these minima is influenced by the material and its geometry and is a further parameter in the strength of the coercive field. Another is the angle, ΘH, between the anisotropy axis and the applied field. The figure above shows how the shape of the hysteresis loop and the magnitude of Hc vary with ΘH. This effect shows the importance of how samples with strong anisotropy are mounted in a magnetometer when comparing loops. A hysteresis curve gives information about a magnetic system by varying the applied field, but important information can also be gleaned by varying the temperature. As well as indicating transition temperatures, all of the main groups of magnetic ordering have characteristic temperature/magnetization curves. These are summarized in and . At all temperatures a diamagnet displays only any magnetization induced by the applied field and a small, negative susceptibility. The curve shown for a paramagnet ) is for one obeying the Curie law,\[ \chi \ =\ \frac{C}{T} \label{8} \]and so intercepts the axis at T = 0. This is a subset of the Curie-Weiss law,\[ \chi \ =\ \frac{C}{T- \Theta } \label{9} \]where Θ is a specific temperature for a particular substance (equal to 0 for paramagnets). Above TN and TC both antiferromagnets and ferromagnets behave as paramagnets, with 1/χ linearly proportional to temperature. They can be distinguished by their intercept on the temperature axis, T = Θ. Ferromagnets have a large, positive Θ, indicative of their strong interactions; a short numerical sketch of these 1/χ intercepts is given below.
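The sketch below illustrates the Curie-Weiss behavior of Equation \ref{9}: plotting 1/χ against T gives a straight line whose intercept on the temperature axis recovers Θ and so distinguishes the three cases. The Curie constant and the Θ values used are arbitrary illustrative numbers, not data from the text.

```python
# Sketch of the Curie-Weiss law (Equation 9): chi = C / (T - Theta).
# Plotting 1/chi versus T gives a line that intercepts the T axis at Theta.
# C and the Theta values below are arbitrary illustrative numbers.

def inverse_chi(T, C, theta):
    return (T - theta) / C

C = 1.0
cases = {"paramagnet": 0.0, "ferromagnet": +100.0, "antiferromagnet": -100.0}

for name, theta in cases.items():
    # evaluate 1/chi at two temperatures well above any ordering temperature
    inv_300 = inverse_chi(300.0, C, theta)
    inv_400 = inverse_chi(400.0, C, theta)
    slope = (inv_400 - inv_300) / 100.0          # equals 1/C in every case
    intercept_T = 300.0 - inv_300 / slope        # recovers Theta
    print(f"{name:16s} slope = {slope:.3f}, T-axis intercept = {intercept_T:+.1f} K")
```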
For paramagnets Θ = 0, and antiferromagnets have a negative Θ. The net magnetic moment per atom can be calculated from the gradient of the straight line graph of 1/χ versus temperature for a paramagnetic ion, rearranging Curie's law to give Equation \ref{10},\[ \mu \ = \sqrt{ \frac{3Ak}{Nx} } \label{10} \]where A is the atomic mass, k is Boltzmann's constant, N is the number of atoms per unit volume and x is the gradient. Ferromagnets below TC display spontaneous magnetization. Their susceptibility above TC in the paramagnetic region is given by the Curie-Weiss law (Equation \ref{9}), in which the Curie constant contains the gyromagnetic constant, g. In the ferromagnetic phase, with T less than TC, the magnetization M(T) can be simplified to a power law; for example, the magnetization as a function of temperature can be given by Equation \ref{11},\[ M(T) \approx (T_{C} \ -\ T) ^{\beta} \label{11} \]where the term β is typically in the region of 0.33 for magnetic ordering in three dimensions. The susceptibility of an antiferromagnet increases to a maximum at TN as temperature is reduced, then decreases again below TN. In the presence of crystal anisotropy in the system this change in susceptibility depends on the orientation of the spin axes: χ∥ decreases with temperature whilst χ⊥ is constant. These can be expressed as Equation \ref{12},\[ \chi_{\perp} = \frac{C}{2 \Theta} \label{12} \]where C is the Curie constant and Θ is the total change in angle of the two sublattice magnetizations away from the spin axis, and Equation \ref{13},\[ \chi_{\parallel} \ =\ \frac{2n_{g} \mu ^{2} _{H} B'(J,a' _{0} ) }{2kT\ +\ n_{g} \mu ^{2} _{H} \gamma \rho B'(J,a' _{0}) } \label{13} \]where ng is the number of magnetic atoms per gramme, B' is the derivative of the Brillouin function with respect to its argument a', evaluated at a'0, μH is the magnetic moment per atom and γ is the molecular field coefficient. One of the most sensitive forms of magnetometry is SQUID magnetometry. This technique uses a combination of superconducting materials and Josephson junctions to measure magnetic fields with resolutions of ~10^-14 kG or better. In the following pages we will describe how a SQUID actually works. In superconductors the resistanceless current is carried by pairs of electrons, known as Cooper pairs. Each electron has a quantized wavelength, and in a Cooper pair each electron's wave couples with that of its partner over large distances. This phenomenon is a result of the very low temperatures at which many materials will superconduct. What exactly is superconductivity? When a material is at very low temperatures, its crystal lattice behaves differently than when it is at higher temperatures. Usually at higher temperatures a material will have large vibrations, called phonons, in the crystal lattice. These vibrations scatter electrons as they pass through the lattice ), and this is the basis for poor conductance. With a superconductor the material is designed to have very small vibrations, and these vibrations are lessened even more by cooling the material to extremely low temperatures. With no vibrations there is no scattering of the electrons, and this allows the material to superconduct. The origin of a Cooper pair is that as an electron passes through the crystal lattice at superconducting temperatures, its negative charge pulls on the positive charge of the nuclei in the lattice through Coulombic interactions, producing a ripple. An electron traveling in the opposite direction is attracted by this ripple.
This is the origin of the coupling in a Cooper pair ). A passing electron attracts the lattice, causing a slight ripple toward its path, and another electron passing in the opposite direction is attracted to that displacement ). Each pair can be treated as a single particle with a whole-number spin, not the half spin of a single electron. This is important because electrons are fermions, governed by the Pauli exclusion principle, which states that two particles with the same half-integer spin cannot occupy the same quantum state. A Cooper pair, by contrast, behaves as a boson, the opposite of a fermion, and this allows the Cooper pairs to condense into one wave packet. Each Cooper pair has a mass and charge twice that of a single electron, and its velocity is that of the center of mass of the pair. This coupling can only happen in extremely cold conditions; otherwise thermal vibrations become greater than the force that an electron can exert on the lattice, and scattering occurs. Each pair can be represented by a wavefunction, ψP, where P is the net momentum of the pair whose center of mass is at r. However, all the Cooper pairs in a superconductor can be described by a single wavefunction, because in the absence of a current all the pairs have the same phase - they are said to be "phase coherent". This electron-pair wave retains its phase coherence over long distances, and essentially produces a standing wave over the device circuit. In a SQUID there are two paths which form a circle and are made with the same standing wave ). The wave is split in two, sent off along different paths, and then recombined to record an interference pattern by adding the difference between the two. This allows measurement of any phase difference between the two components: if there is no interference they will be exactly the same, but a difference in their path lengths, or some interaction that one of the waves encounters, such as a magnetic field, will result in a phase difference at the end of the two paths. A good example to use is of two water waves emanating from the same point. They will stay in phase if they travel the same distance, but will fall out of phase if one of them has to deviate around an obstruction such as a rock. Measuring the phase difference between the two waves then provides information about the obstruction. Another implication of this long range coherence is the ability to calculate the phase and amplitude at any point on the wave's path from knowledge of its phase and amplitude at any single point, combined with its wavelength and frequency. The wavefunction of the electron-pair wave
can be rewritten in the form of a one-dimensional wave as\[ \psi_{P} \ =\ \psi \ \sin \left[ 2 \pi \left( \frac{ x }{ \lambda } \ -\ \nu t \right) \right] \label{14} \]If we take the wave frequency, ν, as being related to the kinetic energy of the Cooper pair, and the wavelength, λ, as being related to the momentum of the pair by the relation λ = h/p, then it is possible to evaluate the phase difference between two points in a current-carrying superconductor. If a resistanceless current flows between points X and Y on a superconductor there will be a phase difference between these points that is constant in time. The parameters of a standing wave are dependent on a current passing through the circuit; they are also strongly affected by an applied magnetic field. In the presence of a magnetic field the momentum, p, of a particle with charge q becomes mv + qA, where A is the magnetic vector potential. For electron-pairs in an applied field the momentum P is now equal to 2mv + 2eA. In an applied magnetic field the phase difference between points X and Y is now a combination of that due to the supercurrent and that due to the applied field. One effect of the long range phase coherence is the quantization of magnetic flux in a superconducting ring. This can either be a ring, or a superconductor surrounding a non-superconducting region. Such an arrangement can be seen in , where region N has a flux density B within it due to supercurrents flowing around it in the superconducting region S. In the closed path XYZ encircling the non-superconducting region there will be a phase difference of the electron-pair wave between any two points, such as X and Y, on the curve due to the field and the circulating current. If the superelectrons are represented by a single wave then at any point on XYZX it can only have one value of phase and amplitude. Due to the long range coherence the phase is single valued, also described as quantized, meaning that around the circumference of the ring Δφ must equal 2πn, where n is any integer. Because the wave has only a single value, the fluxoid can only exist in quantized units. This quantum is termed the fluxon, Φ0, given by Equation \ref{15}.\[ \Phi_{0} = \dfrac{h}{2e} = 2.07 \times 10^{-15}\ Wb \label{15} \]If two superconducting regions are kept totally isolated from each other the phases of the electron-pairs in the two regions will be unrelated. If the two regions are brought together then as they come close electron-pairs will be able to tunnel across the gap and the two electron-pair waves will become coupled. As the separation decreases, the strength of the coupling increases. The tunneling of the electron-pairs across the gap carries with it a superconducting current, as predicted by B. D. Josephson, and is called "Josephson tunneling", with the junction between the two superconductors called a "Josephson junction" ). The Josephson tunneling junction is a special case of a more general type of weak link between two superconductors. Other forms include constrictions and point contacts, but the general form is a region between two superconductors which has a much lower critical current and through which a magnetic field can penetrate. A superconducting quantum interference device (SQUID) uses the properties of electron-pair wave coherence and Josephson junctions to detect very small magnetic fields. The central element of a SQUID is a ring of superconducting material with one or more weak links called Josephson junctions. An example is shown in the figure below.
It has weak-links at points W and X whose critical current, ic, is much less than the critical current of the main ring. This produces a very low current density, making the momentum of the electron-pairs small. The wavelength of the electron-pairs is thus very long, leading to little difference in phase between any parts of the ring. If a magnetic field, Ba, is applied perpendicular to the plane of the ring, a phase difference is produced in the electron-pair wave along the path XYW and WZX. One of the features of a superconducting loop is that the magnetic flux, Φ, passing through it, which is the product of the magnetic field and the area of the loop, is quantized in units of Φ0 = h/(2e), where h is Planck's constant, 2e is the charge of the Cooper pair of electrons, and Φ0 has a value of 2 × 10^-15 tesla m2. If there are no obstacles in the loop, then the superconducting current will compensate for the presence of an arbitrary magnetic field so that the total flux through the loop (due to the external field plus the field generated by the current) is a multiple of Φ0. Josephson predicted that a superconducting current can be sustained in the loop, even if its path is interrupted by an insulating barrier or a normal metal. The SQUID has two such barriers or 'Josephson junctions'. Both junctions introduce the same phase difference when the magnetic flux through the loop is 0, Φ0, 2Φ0 and so on, which results in constructive interference, and they introduce opposite phase difference when the flux is Φ0/2, 3Φ0/2 and so on, which leads to destructive interference. This interference causes the critical current density, which is the maximum current that the device can carry without dissipation, to vary. The critical current is so sensitive to the magnetic flux through the superconducting loop that even tiny magnetic moments can be measured. The critical current is usually obtained by measuring the voltage drop across the junction as a function of the total current through the device. Commercial SQUIDs transform the modulation in the critical current to a voltage modulation, which is much easier to measure. An applied magnetic field produces a phase change around the ring, which in this case is given by Equation \ref{16},\[ \Delta \Phi (B) \ =\ 2 \pi \frac{ \Phi _{a} }{ \Phi _{0} } \label{16} \]where Φa is the flux produced in the ring by the applied magnetic field. The magnitude of the critical measuring current is dependent upon the critical current of the weak-links and the requirement that the phase change around the ring be an integral multiple of 2π. For the whole ring to be superconducting the following condition must be met:\[ \alpha \ +\ \beta \ +\ 2 \pi \frac{ \Phi _{a} }{ \Phi _{0} } \ =\ 2 \pi n \label{17} \]where α and β are the phase changes produced by currents across the weak-links, 2πΦa/Φ0 is the phase change due to the applied magnetic field, and n is an integer. When the measuring current is applied, α and β are no longer equal, although their sum must remain constant. The phase changes can be written as Equation \ref{18},\[ \alpha = \pi \left[ n - \frac{ \Phi _{a} }{ \Phi _{0} } \right] \ -\ \delta , \ \ \ \beta \ = \ \pi \left[n \ - \frac{ \Phi _{a} }{ \Phi _{0} } \right] + \delta \label{18} \]where δ is related to the measuring current I. Using the relation between current and phase from the above equation
and rearranging to eliminate i, we obtain an expression for I, Equation \ref{19}.\[ I \ =\ 2i_{c} \cos \left( \pi \frac{ \Phi _{a} }{ \Phi _{0} } \right) \sin \delta \label{19} \]As sin δ cannot be greater than unity, we can obtain the critical measuring current, Ic, from the above as Equation \ref{20},\[ I_{c} \ =\ 2i_{c} \left| \cos \left( \pi \frac{ \Phi _{a} }{ \Phi _{0} } \right) \right| \label{20} \]which gives a periodic dependence on the magnitude of the applied magnetic field, with a maximum when the flux is an integer number of fluxons and a minimum at half-integer values, as shown in the figure below. SQUIDs offer the ability to measure at sensitivities unachievable by other magnetic sensing methodologies. However, their sensitivity requires proper attention to cryogenics and environmental noise. SQUIDs should only be used when no other sensor is adequate for the task. There are many exotic uses for SQUIDs; however, we are concerned here only with laboratory applications. In most physical and chemical laboratories a device called an MPMS ) is used to measure the magnetic moment of a sample by reading the output of the SQUID detector. In an MPMS the sample moves upward through the electronic pick-up coils, called gradiometers. One upward movement is one whole scan. Multiple scans are used and added together to improve measurement resolution. After collecting the raw voltages, the magnetic moment of the sample is computed. The MPMS measures the moment of a sample by moving it through a liquid helium cooled, superconducting sensing coil. Many different measurements can be carried out using an MPMS; however, we will discuss just a few. DC magnetization is the magnetic moment per unit volume (M) of a sample. If the sample doesn't have a permanent magnetic moment, a field is applied to induce one. The sample is then stepped through a superconducting detection array and the SQUID's output voltage is processed and the sample moment computed. Systems can be configured to measure hysteresis loops, relaxation times, magnetic field, and temperature dependence of the magnetic moment. A DC field can be used to magnetize samples. Typically, the field is fixed and the sample is moved into the detection coil's region of sensitivity. The change in detected magnetization is directly proportional to the magnetic moment of the sample. Commonly referred to as SQUID magnetometers, these systems are properly called SQUID susceptometers ). They have a homogeneous superconducting magnet to create a very uniform field over the entire sample measuring region and the superconducting pickup loops. The magnet induces a moment, allowing a measurement of magnetic susceptibility. The superconducting detection loop array is rigidly mounted in the center of the magnet. This array is configured as a gradient coil to reject external noise sources. The detection coil geometry determines what mathematical algorithm is used to calculate the net magnetization. An important feature of SQUIDs is that the induced current is independent of the rate of flux change. This provides uniform response at all frequencies (i.e., true dc response) and allows the sample to be moved slowly without degrading performance. As the sample passes through a coil, it changes the flux in that coil by an amount proportional to the magnetic moment M of the sample. The peak-to-peak signal from a complete cycle is thus proportional to twice M. The SQUID sensor, shielded inside a niobium can, is located where the fringe fields generated by the magnet are less than 10 mT.
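Returning to the modulation relation above, the sketch below evaluates Equation \ref{20} at several applied fluxes, with the flux quantum computed from h/2e as in Equation \ref{15}; the junction critical current ic is an assumed value used only for illustration.

```python
import math

# Sketch of Equations (15) and (20): the flux quantum and the periodic modulation of
# the SQUID critical current with applied flux. The junction critical current is assumed.

h = 6.62607015e-34      # Planck constant, J s
e = 1.602176634e-19     # elementary charge, C

phi_0 = h / (2 * e)     # flux quantum, Wb (~2.07e-15 Wb, Equation 15)
i_c   = 5.0e-6          # critical current of one weak link, A (assumed)

def critical_current(phi_a):
    """Equation (20): I_c = 2 i_c |cos(pi * phi_a / phi_0)|."""
    return 2 * i_c * abs(math.cos(math.pi * phi_a / phi_0))

print(f"flux quantum = {phi_0:.3e} Wb")
for n_phi in (0.0, 0.25, 0.5, 1.0):           # applied flux in units of phi_0
    print(f"Phi_a = {n_phi:4.2f} Phi_0 -> I_c = {critical_current(n_phi * phi_0):.2e} A")
```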
The detection coil circuitry is typically constructed using NbTi ). This allows measurements in applied fields of 9 T while maintaining sensitivities of 10−8 emu. Thermal insulation not shown is placed between the detection coils and the sample tube to allow the sample temperature to be varied.The use of a variable temperature insert can allow measurements to be made over a wide range 1.8–400 K. Typically, the sample temperature is controlled by helium gas flowing slowly past the sample. The temperature of this gas is regulated using a heater located below the sample measuring region and a thermometer located above the sample region. This arrangement ensures that the entire region has reached thermal equilibrium prior to data acquisition. The helium gas is obtained from normal evaporation in the Dewar, and its flow rate is controlled by a precision regulating valve.The magnetic moment calibration for the SQUID is determined by measuring a palladium standard over a range of magnetic fields and then by adjusting to obtain the correct moment for the standard. The palladium standard samples are effectively point sources with an accuracy of approximately 0.1%.The type, size and geometry of a sample is usually sufficient to determine the method you use to attach it to the sample. However mostly for MPMS measurements a plastic straw is used. This is due to the straw having minimal magnetic susceptibility.However there are a few important considerations for the sample holder design when mounting a sample for measurement in a magnetometer. The sample holder can be a major contributor to the background signal. Its contribution can be minimized by choosing materials with low magnetic susceptibility and by keeping the mass to a minimum such as a plastic straw mentioned above.The materials used to hold a sample must perform well over the temperature range to be used. In a MPMS, the geometric arrangement of the background and sample is critical when their magnetic susceptibilities will be of similar magnitude. Thus, the sample holder should optimize the sample’s positioning in the magnetometer. A sample should be mounted rigidly in order to avoid excess sample motion during measurement. A sample holder should also allow easy access for mounting the sample, and its background contribution should be easy to measure. This advisory introduces some mounting methods and discusses some of the more important considerations when mounting samples for the MPMS magnetometer. Keep in mind that these are only recommendations, not guaranteed procedures. The researcher is responsible for assuring that the methods and materials used will meet experimental requirements.For many types of samples, mounting to a platform is the most convenient method. The platform’s mass and susceptibility should be as small as possible in order to minimize its background contribution and signal distortion.A plastic disc about 2 mm thick with an outside diameter equivalent to the pliable plastic tube’s diameter (a clear drinking straw is suitable) is inserted and twisted into place. The platform should be fairly rigid. Mount samples onto this platform with glue. Place a second disc, with a diameter slightly less than the inside diameter of the tube and with the same mass, on top of the sample to help provide the desired symmetry. Pour powdered samples onto the platform and place a second disc on top. The powders will be able to align with the field. 
Make sure the sample tube is capped and ventilated.

One of the lowest mass sample platforms can be made by threading a cross of white cotton thread (colored dyes can be magnetic). Using a needle made of a nonmagnetic metal, or at least one that has been carefully cleaned, thread some white cotton sewing thread through the tube walls and tie a secure knot so that the thread platform is rigid. Glue a sample to this platform or use the platform as a support for a sample in a container. Use an additional thread cross on top to hold the container in place.

Gelatin capsules can be very useful for containing and mounting samples. Many aspects of using gelatin capsules have been mentioned in the section, Containing the Sample. It is best if the sample is mounted near the capsule's center, or if it completely fills the capsule. Use extra capsule parts to produce mirror symmetry. The thread cross is an excellent way of holding a capsule in place.

Another method of sample mounting is attaching the sample to a thread that runs through the sample tube. The thread can be attached to the sample holder at the ends of the sample tube with tape, for example. This method can be very useful with flat samples, such as those on substrates, particularly when the field is in the plane of the film. Be sure to close the sample tube with caps.

The sample must be centered in the SQUID pickup coils to ensure that all coils sense the magnetic moment of the sample. If the sample is not centered, the coils read only part of the magnetic moment. During a centering measurement the MPMS scans the entire length of the sample's vertical travel path, and the MPMS reads the maximum number of data points. During centering there are a number of terms which need to be understood. As soon as a centering measurement is initiated, the sample transport moves upward, carrying the sample through the pickup coils. While the sample moves through the coils, the MPMS measures the SQUID's response to the magnetic moment of the sample and saves all the data from the centering measurement. After a centering plot is performed, the plot is examined to determine whether the sample is centered in the SQUID pickup coils. The sample is centered when the center of the large, middle curve is within 5 cm of the half-way point of the scan length.

The shape of the plot is a function of the geometry of the coils. The coils are wound in a way which strongly rejects interference from nearby magnetic sources and lets the MPMS function without a superconducting shield around the pickup coils. To minimize background noise and stray field effects, the MPMS magnetometer pick-up coil takes the form of a second-order gradiometer. An important feature of this gradiometer is that moving a long, homogeneous sample through it produces no signal as long as the sample extends well beyond the ends of the coil during measurement. As a sample holder is moved through the gradiometer pickup coil, changes in thickness, mass, density, or magnetic susceptibility produce a signal. Ideally, only the sample to be measured produces this change. A homogeneous sample that extends well beyond the pick-up coils does not produce a signal, yet a small sample does produce a signal, so there must be a crossover between these two limits. The sample length (along the field direction) should not exceed 10 mm. In order to obtain the most accurate measurements, it is important to keep the sample susceptibility constant over its length; otherwise distortions in the SQUID signal (deviations from a dipole signal) can result.
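The dipole-like shape of the centering scan follows directly from the second-order gradiometer geometry just described, and the instrument software fits that shape to locate the sample and extract its moment. The sketch below is a simplified, stand-alone illustration of the same idea using an idealized point-dipole response; the coil radius, coil spacing, and synthetic data are assumed values for illustration only, not the vendor's algorithm or calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 0.97   # assumed pickup-coil radius, cm (illustrative)
L = 1.52   # assumed gradiometer half-baseline, cm (illustrative)

def gradiometer_response(z, amplitude, z0, offset):
    """Idealized second-order gradiometer response to a point dipole located at z0."""
    def lobe(dz):
        return (R**2 + dz**2) ** -1.5
    return amplitude * (2 * lobe(z - z0) - lobe(z - z0 + L) - lobe(z - z0 - L)) + offset

# synthetic centering scan: dipole offset 0.4 cm from the scan midpoint, plus noise
z = np.linspace(-3.0, 3.0, 121)
rng = np.random.default_rng(0)
v = gradiometer_response(z, 1.0, 0.4, 0.0) + rng.normal(0.0, 0.01, z.size)

# fit the model to recover the amplitude (proportional to the moment) and position
popt, _ = curve_fit(gradiometer_response, z, v, p0=(1.0, 0.0, 0.0))
print(f"fitted sample position: {popt[1]:.2f} cm from the scan midpoint")
```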
It is also important to keep the sample close to the magnetometer centerline to get the most accurate measurements. When the sample holder background contribution is similar in magnitude to the sample signal, the relative positions of the sample and the materials producing the background are important. If there is a spatial offset between the two along the magnet axis, the signal produced by the combined sample and background can be highly distorted and will not be characteristic of the dipole moment being measured.Even if the signal looks good at one temperature, a problem can occur if either of the contributions are temperature dependent.Careful sample positioning and a sample holder with a center, or plane, of symmetry at the sample (i.e. materials distributed symmetrically about the sample, or along the principal axis for a symmetry plane) helps eliminate problems associated with spatial offsets.Keep the sample space of the MPMS magnetometer clean and free of contamination with foreign materials. Avoid accidental sample loss into the sample space by properly containing the sample in an appropriate sample holder. In all cases it is important to close the sample holder tube with caps in order to contain a sample that might become unmounted. This helps avoid sample loss and subsequent damage during the otherwise unnecessary recovery procedure. Position caps well out of the sample-measuring region and introduce proper venting.Work area cleanliness and avoiding sample contamination are very important concerns. There are many possible sources of contamination in a laboratory. Use diamond tools when cutting hard materials. Avoid carbide tools because of potential contamination by the cobalt binder found in many carbide materials. The best tools for preparing samples and sample holders are made of plastic, titanium, brass, and beryllium copper (which also has a small amount of cobalt). Tools labeled non-magnetic can actually be made of steel and often be made "magnetic" from exposure to magnetic fields. However, the main concern from these "non-magnetic" tools is contamination by the iron and other ferrous metals in the tool. It is important to have a clean white-papered workspace and a set of tools dedicated to mounting your own samples. In many cases, the materials and tools used can be washed in dilute acid to remove ferrous metal impurities. Follow any acid washes with careful rinsing with deionized water.Powdered samples pose a special contamination threat, and special precautions must be taken to contain them. If the sample is highly magnetic, it is often advantageous to embed it in a low susceptibility epoxy matrix like Duco cement. This is usually done by mixing a small amount of diluted glue with the powder in a suitable container such as a gelatin capsule. Potting the sample in this way can keep the sample from shifting or aligning with the magnetic field. In the case of weaker magnetic samples, measure the mass of the glue after drying and making a background measurement. If the powdered sample is not potted, seal it into a container, and watch it carefully as it is cycled in the airlock chamber.The sample space of the MPMS has a helium atmosphere maintained at low pressure of a few torr. An airlock chamber is provided to avoid contamination of the sample space with air when introducing samples into the sample space. By pushing the purge button, the airlock is cycled between vacuum and helium gas three times, then pumped down to its working pressure. 
During the cycling, it is possible for samples to be displaced in their holders, sealed capsules to explode, and sample holders to be deformed. Many of these problems can be avoided if the sample holder is properly ventilated. This requires placing holes in the sample holder, out of the measuring region that will allow any closed spaces to be opened to the interlock chamber.When working with highly air-sensitive samples or liquid samples it is best to first seal the sample into a glass tube. NMR and EPR tubes make good sample holders since they are usually made of a high-quality, low-susceptibility glass or fused silica. When the sample has a high susceptibility, the tube with the sample can be placed onto a platform like those described earlier. When dealing with a low susceptibility sample, it is useful to rest the bottom of the sample tube on a length of the same type of glass tubing. By producing near mirror symmetry, this method gives a nearly constant background with position and provides an easy method for background measurement (i.e., measure the empty tube first, then measure with a sample). Be sure that the tube ends are well out of the measuring region.When going to low temperatures, check to make sure that the sample tube will not break due to differential thermal expansion. Samples that will go above room temperature should be sealed with a reduced pressure in the tube and be checked by taking the sample to the maximum experimental temperature prior to loading it into the magnetometer. These checks are especially important when the sample may be corrosive, reactive, or valuable.This application note describes potential sources for oxygen contamination in the sample chamber and discusses its possible effects. Molecular oxygen, which undergoes an antiferromagnetic transition at about 43 K, is strongly paramagnetic above this temperature. The MPMS system can easily detect the presence of a small amount of condensed oxygen on the sample, which when in the sample chamber can interfere significantly with sensitive magnetic measurements. Oxygen contamination in the sample chamber is usually the result of leaks in the system due to faulty seals, improper operation of the airlock valve, outgassing from the sample, or cold samples being loaded.This page titled 4.1: Magnetism is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.2: IR Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.02%3A_IR_Spectroscopy
Infrared spectroscopy is based on molecular vibrations caused by the oscillation of molecular dipoles. Bonds have characteristic vibrations depending on the atoms in the bond, the number of bonds and the orientation of those bonds with respect to the rest of the molecule. Thus, different molecules have specific spectra that can be collected for use in distinguishing products or identifying an unknown substance (to an extent.)Collecting spectra through this method goes about one of three general ways. Nujol mulls and pressed pellets are typically used for collecting spectra of solids, while thin-film cells are used for solution-phase IR spectroscopy. Spectra of gases can also be obtained but will not be discussed in this guide.While it is all well and wonderful that substances can be characterized in this fashion one still has to be able to hold the substances inside of the instrument and properly prepare the samples. In an infrared spectrometer )the sample to be analyzed is held in front of an infrared laser beam, in order to do this, the sample must be contained in something, consequently this means that the very container the sample is in will absorb some of the infrared beam.This is made somewhat complicated by the fact that all materials have some sort of vibration associated with them. Thus, if the sample holder has an optical window made of something that absorbs near where your sample does, the sample might not be distinguishable from the optical window of the sample holder. The range that is not blocked by a strong absorbance is known as a window (not to be confused with the optical materials of the cell.)Windows are an important factor to consider when choosing the method to perform an analysis, as seen in Table \(\PageIndex{1}\) there are a number of different materials each with their own characteristic absorption spectra and chemical properties. Keep these factors in mind when performing analyses and precious sample will be saved. For most organic compounds NaCl works well though it is susceptible to attack from moisture. For metal coordination complexes KBr, or CsI typically work well due to their large windows. If money is not a problem then diamond or sapphire can be used for plates.Proper handling of these plates will ensure they have a long, useful life. Here follows a few simple pointers on how to handle plates:That said, these simple guidelines will likely reduce most damage that can occur to a plate by simply holding it other faults such as dropping the plate from a sufficient height can result in more serious damage.A common method of preparing solid samples for IR analysis is mulling. The principle here is by grinding the particles to below the wavelength of incident radiation that will be passing through there should be limited scattering. To suspend those tiny particles, an oil, often referred to as Nujol is used. IR-transparent salt plates are used to hold the sample in front of the beam in order to acquire data. To prepare a sample for IR analysis using a salt plate, first decide what segment of the frequency band should be studied, refer to Table \(\PageIndex{1}\) for the materials best suited for the sample. shows the materials needed for preparing a mull.Preparing the mull is performed by taking a small portion of sample and adding approximately 10% of the sample volume worth of the oil and grinding this in an agate mortar and pestle as demonstrated in . 
The resulting mull should be transparent with no visible particles.Another method involves dissolving the solid in a solvent and allowing it to dry in the agate pestle. If using this method ensure that all of the solvent has evaporated since the solvent bands will appear in the spectrum. Some gentle heating may assist this process. This method creates very fine particles that are of a relatively consistent size. After addition of the oil further mixing (or grinding) may be necessary.Plates should be stored in a desiccator to prevent erosion by atmospheric moisture and should appear roughly transparent. Some materials such as silicon will not, however. Gently rinse the plates with hexanes to wash any residual material off of the plates. Removing the plates from the desiccator and cleaning them should follow the preparation of the mull in order to maintain the integrity of the salt plates. Of course, if the plate is not soluble in water then it is still a good idea just to prevent the threat of mechanical trauma or a stray jet of acetone from a wash bottle.Once the mull has been prepared, add a drop to one IR plate ), place the second plate on top of the drop and give it a quarter turn in order to evenly coat the plate surface as seen in . Place it into the spectrometer and acquire the desired data.Always handle with gloves and preferably away from any sinks, faucets, or other sources of running or spraying water.Spectra acquired by this method will have strong C-H absorption bands throughout several ranges 3,000 – 2,800 cm-1 and 1,500 – 1,300 cm-1 and may obscure signal.Cleaning the plate is performed as previously mentioned with hexanes or chloroform can easily be performed by rinsing and leaving them to dry in the hood. Place the salt plates back into the desiccator as soon as reasonably possible to prevent damage. It is highly advisable to polish the plates after use, no scratches, fogging, or pits should be visible on the face of the plate. Chips, so long as they don’t cross the center of the plate are survivable but not desired. The samples of damaged salt plates in show common problems associated with use or potentially mishandling. Clouding, and to an extent, scratches can be polished out with an iron rouge. Areas where the crystal lattice is disturbed below the surface are impossible to fix and chips cannot be reattached.FIgure \(\PageIndex{6}\) A series of plates indicating various forms of physical damage with a comparison to a good plate (Copyright: Colorado University-Boulder).In an alternate method, this technique is along the same lines of the nujol mull except instead of the suspending medium being mineral oil, the suspending medium is a salt. The solid is ground into a fine powder with an agate mortar and pestle with an amount of the suspending salt. Preparing pellets with diamond for the suspending agent is somewhat illadvised considering the great hardness of the substance. Generally speaking, an amount of KBr or CsI is used for this method since they are both soft salts. Two approaches can be used to prepare pellets, one is somewhat more expensive but both usually yield decent results.The first method is the use of a press. The salt is placed into a cylindrical holder and pressed together with a ram such as the one seen in ). Afterwards, the pellet, in the holder, is placed into the instrument and spectra acquired.An alternate, and cheaper method requires the use of a large hex nut with a 0.5 inch inner diameter, two bolts, and two wrenches such as the kit seen in . 
Step-by-step instructions for loading and using the press follows:Some pellet presses also have a vacuum barb such as the one seen in . If your pellet press has one of these, consider using it as it will help remove air from the salt pellet as it is pressed. This ensures a more uniform pellet and removes absorbances in the collected spectrum due to air trapped in the pellet.Solution cells ) are a handy way of acquiring infrared spectra of compounds in solution and is particularly handy for monitoring reactions.A thin-film cell consists of two salt plates with a very thin space in between them ). Two channels allow liquid to be injected and then subsequently removed. The windows on these cells can be made from a variety of IR optical materials. One particularly useful one for water-based solutions is CaF2 as it is not soluble in water.Cleaning these cells can be performed by removing the solution, flushing with fresh solvent and gently removing the solvent by syringe. Do not blow air or nitrogen through the ports as this can cause mechanical deformation in the salt window if the pressure is high enough.One of the other aspects to solution-phase IR is that the solvent utilized in the cell has a characteristic absorption spectra. In some cases this can be alleviated by replacing the solvent with its deuterated sibling. The benefit here is that C-H bonds are now C-D bonds and have lower vibrational frequencies. Compiled in is a set of common solvents.This effect has numerous benefits and is often applied to determining what vibrations correspond to what bond in a given molecular sample. This is often accomplished by using isotopically labeled “heavy” reagents such as ones that contain 2H, 15N, 18O, or 13C.There are numerous problems that can arise from improperly prepared samples, this section will go through some of the common problems and how to correct them. For this demonstration, spectra of ferrocene will be used. The molecular structure and a photograph of the brightly colored organometallic compound are shown in and . illustrates what a good sample of ferrocene looks like prepared in a KBr pellet. The peaks are well defined and sharp. No peak is flattened at 0% transmittance and Christiansen scattering is not evident in the baseline. illustrates a sample with some peaks with intensities that are saturated and lose resolution making peak-picking difficult. In order to correct for this problem, scrape some of the sample off of the salt plate with a rubber spatula and reseat the opposite plate. By applying a thinner layer of sample one can improve the resolution of strongly absorbing vibrations. illustrates a sample in which too much mineral oil was added to the mull so that the C-H bonds are far more intense than the actual sample. This can be remedied by removing the sample from the plate, grinding more sample and adding a smaller amount of the mull to the plate. Another possible way of doing this is if the sample is insoluble in hexanes, add a little to the mull and wick away the hexane-oil mixture to leave a dry solid sample. Apply a small portion of oil and replate. illustrates the result of particles being too large and scattering light. 
To remedy this, remove the mull and grind further, or else use the solvent deposition technique described earlier.

The infrared (IR) range of the electromagnetic spectrum is usually divided into three regions: the near-, mid-, and far-infrared. In the classical theory of light–matter interaction, a molecule can interact with an electromagnetic field and absorb a photon of a certain frequency only if the transient dipole of a molecular functional group oscillates at that frequency; correspondingly, the transition dipole moment must be non-zero. Some vibrations are therefore IR inactive: the stretching motion of a homonuclear diatomic molecule such as N2 does not affect the molecule's dipole moment and cannot absorb IR radiation.

A molecule can vibrate in many ways, and each way is called a "vibrational mode". If a molecule has N atoms, a linear molecule has 3N-5 vibrational modes whereas a nonlinear molecule has 3N-6 vibrational modes. Take H2O for example; a single molecule of H2O has an O-H bending mode a), an antisymmetric stretching mode b), and a symmetric stretching mode c).

A diatomic molecule undergoing harmonic vibration has the energy given by \ref{1}, with n = 0, 1, 2, .... The motion of the atoms is governed by the restoring force in \ref{2}, where k is the force constant. The vibrational frequency is given by \ref{3}, in which μ is the reduced mass (also written mred), determined from the masses m1 and m2 of the two atoms by \ref{4}.

\[ E_{n} \ =\ \left( n\ +\ \frac{1}{2} \right) h \nu \label{1} \]

\[ F \ =\ -kx \label{2} \]

\[ \omega \ =\ (k/ \mu )^{1/2} \label{3} \]

\[ m_{red} \ =\ \mu \ =\ \frac{m_{1} m_{2}}{m_{1}\ +\ m_{2} } \label{4} \]

In an IR spectrum, absorption information is generally presented in the form of both wavenumber and absorption intensity or percent transmittance, with wavenumber (cm-1) as the x-axis and absorption intensity or percent transmittance as the y-axis.

Transmittance, "T", is the ratio of radiant power transmitted by the sample (I) to the radiant power incident on the sample (I0). Absorbance (A) is the logarithm to the base 10 of the reciprocal of the transmittance (T). The absorption intensity of a molecular vibration can be determined by the Lambert-Beer law, \ref{5}. Transmittance ranges from 0 to 100% and provides clear contrast between the intensities of strong and weak bands, while absorbance ranges from infinity to zero. The absorption of a molecule is determined by several factors: in the absorption equation, ε is the molar extinction coefficient, which is related to the behavior of the molecule itself, mainly the transition dipole moment; c is the concentration of the sample; and l is the sample path length. The line width is determined by the interaction with the surroundings.

\[ A\ =\ \log(1/T) \ =\ -\log(I/I_{0} )\ =\ \varepsilon c l \label{5} \]

As shown in , a Fourier transform infrared (FTIR) spectrometer consists of four main parts: the IR source, the interferometer, the sample compartment, and the detector. It is well known that all molecular species have distinct absorption regions in the IR spectrum. Table \(\PageIndex{7}\) shows the absorption frequencies of common types of functional groups. For systematic evaluation, the IR spectrum is commonly divided into sub-regions.

Metal electrons fill into the molecular orbitals of ligands (CN, CO, etc.) to form complex compounds.
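Before examining the bonding in metal carbonyl complexes in more detail, the relations in \ref{3}, \ref{4}, and \ref{5} can be checked with a short numerical sketch. The force constant used for H-Cl (≈ 516 N/m) is a typical literature value quoted here only for illustration; the calculation also shows the isotope shift obtained on replacing H with D, which is the effect exploited with deuterated solvents above, and converts a percent transmittance into an absorbance.

```python
import numpy as np

C = 2.998e10        # speed of light, cm/s
AMU = 1.6605e-27    # atomic mass unit, kg

def reduced_mass(m1, m2):
    """Eq. (4): reduced mass in kg from atomic masses given in amu."""
    return (m1 * m2) / (m1 + m2) * AMU

def harmonic_wavenumber(k, m1, m2):
    """Eqs. (3)-(4): harmonic vibrational wavenumber in cm^-1 for force constant k in N/m."""
    omega = np.sqrt(k / reduced_mass(m1, m2))   # angular frequency, rad/s
    return omega / (2 * np.pi * C)

k_HCl = 516.0  # assumed force constant for the H-Cl bond, N/m (illustrative literature value)
print(harmonic_wavenumber(k_HCl, 1.008, 34.97))   # ~2990 cm^-1 for H-35Cl
print(harmonic_wavenumber(k_HCl, 2.014, 34.97))   # ~2140 cm^-1 for D-35Cl (shift ~ 1/sqrt(2))

# Eq. (5): absorbance from a measured percent transmittance
T_percent = 25.0
print(np.log10(100.0 / T_percent))                # A ~ 0.60
```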
As shown in , a simple molecular orbital diagram for CO can be used to explain the binding mechanism.The CO and metal can bind with three ways:Herein, we mainly consider two properties: ligand stretch frequency and their absorption intensity. Take the ligand CO for example again. The frequency shift of the carbonyl peaks in the IR mainly depends on the bonding mode of the CO (terminal or bridging) and electron density on the metal. The intensity and peak numbers of the carbonyl bands depends on some factors: CO ligands numbers, geometry of the metal ligand complex and fermi resonance.As shown in Table \(\PageIndex{8}\), a greater charge on the metal center result in the CO stretches vibration frequency decreasing. For example, [Ag(CO)]+show higher frequency of CO than free CO, which indicates a strengthening of the CO bond. σ donation removes electron density from the nonbonding HOMO of CO. From the CO bond strength. Therefore, the effect of charge and electronegativity depends on the amount of metal to CO π-back bonding and the CO IR stretching frequency.If the electron density on a metal center is increasing, more π-back bonding to the CO ligand(s) will also increase, as shown in Table \(\PageIndex{9}\). It means more electron density would enter into the empty carbonyl π* orbital and weaken the C-O bond. Therefore, it makes the M-CO bond strength increasing and more double-bond-like (M=C=O).Some cases, as shown in Table \(\PageIndex{9}\), different ligands would bind with same metal at the same metal-ligand complex. For example, if different electron density groups bind with Mo(CO)3 as the same form, as shown in , the CO vibrational frequencies would depend on the ligand donation effect. Compared with the PPh3 group, CO stretching frequency which the complex binds the PF3 group (2090, 2055 cm-1) is higher. It indicates that the absolute amount of electron density on that metal may have certain effect on the ability of the ligands on a metal to donate electron density to the metal center. Hence, it may be explained by the Ligand donation effect. Ligands that are trans to a carbonyl can have a large effect on the ability of the CO ligand to effectively π-backbond to the metal. For example, two trans π-backbonding ligands will partially compete for the same d-orbital electron density, weakening each other’s net M-L π-backbonding. If the trans ligand is a π-donating ligand, the free metal to CO π-backbonding can increase the M-CO bond strength (more M=C=O character). It is well known that pyridine and amines are not those strong π-donors. However, they are even worse π-backbonding ligands. So the CO is actually easy for π-back donation without any competition. Therefore, it naturally reduces the CO IR stretching frequencies in metal carbonyl complexes for the ligand donation effect.Some cases, metal-ligand complex can form not only terminal but also bridging geometry. As shown in , in the compound Fe2(CO)7(dipy), CO can act as a bridging ligand. Evidence for a bridging mode of coordination can be easily obtained through IR spectroscopy. All the metal atoms bridged by a carbonyl can donate electron density into the π* orbital of the CO and weaken the CO bond, lowering vibration frequency of CO. 
In this example, the CO frequency in terminal is around 2080 cm-1, and in bridge, it shifts to around 1850 cm-1.The dynamics of molecular functional group plays an important role during a chemical process, chemical bond forming and breaking, energy transfer and other dynamics happens within picoseconds domain. It is very difficult to study such fast processes directly, for decades scientists can only learn from theoretical calculations, lacking experimental methods.However, with the development of ultrashort pulsed laser enable experimental study of molecular functional group dynamics. With ultrafast laser technologies, people develop a series of measuring methods, among which, pump-probe technique is widely used to study the molecular functional group dynamics. Here we concentrate on how to use pump-probe experiment to measure functional group vibrational lifetime. The principle, experimental setup and data analysis will be introduced.For every function group within a molecule, such as the C≡N triple bond in phenyl selenocyanate (C6H5SeCN) or the C-D single bond in deuterated chloroform (DCCl3), they have an individual infrared vibrational mode and associated energy levels. For a typical 3-level system , both the 0 to 1 and the 1 to 2 transition are near the probe pulse frequency (they don't necessarily need to have exactly the same frequency).In a pump-probe experiment, we use the geometry as is shown in . Two synchronized laser beams, one of which is called pump beam (Epu) while the other probe beam (Epr). There is a delay in time between each pulse. The laser pulses hit the sample, the intensity of ultrafast laser (fs or ps) is strong enough to generated 3rd order polarization and produce 3rd order optical response signal which is use to give dynamics information of molecular function groups. For the total response signals we have \label{6} , where µ10 µ21 are transition dipole moment and E0, E1, and E2 are the energies of the three levels, and t3 is the time delay between pump and probe beam. The delay t3 is varied and the response signal intensity is measured. The functional group vibration life time is determined from the data.\[ S \ =\ 4 \mu _{10} ^{4} e^{ -i(E_{1} - E_{0} ) t3/h - \Gamma t3} \label{6} \]The optical layout of a typical pump-probe setup is schematically displayed in . In the setup, the output of the oscillator (500 mW at 77 MHz repetition rate, 40 nm bandwidth centered at 800 nm) is split into two beams (1:4 power ratio). Of this, 20% of the power is to seed a femtosecond (fs) amplifier whose output is 40 fs pulses centered at 800 nm with power of ~3.4 W at 1 KHz repetition rate. The rest (80%) of the seed goes through a bandpass filter centered at 797.5nm with a width of 0.40 nm to seed a picosecond (ps) amplifier. The power of the stretched seed before entering the ps amplifier cavity is only ~3 mW. The output of the ps amplifier is 1ps pulses centered at 800 nm with a bandwidth ~0.6 nm. The power of the ps amplifier output is ~3 W. The fs amplifier is then to pump an optical parametric amplifier (OPA) which produces ~100 fs IR pulses with bandwidth of ~200 cm-1 that is tunable from 900 to 4000 cm-1. The power of the fs IR pulses is 7~40 mW, depending on the frequencies. The ps amplifier is to pump a ps OPA which produces ~900 fs IR pulses with bandwidth of ~21 cm-1, tunable from 900 - 4000 cm-1. The power of the fs IR pulses is 10 ~ 40 mW, depending on frequencies.In a typical pump-probe setup, the ps IR beam is collimated and used as the pump beam. 
Approximately 1% of the fs IR OPA output is used as the probe beam whose intensity is further modified by a polarizer placed before the sample. Another polarizer is placed after the sample and before the spectrograph to select different polarizations of the signal. The signal is then sent into a spectrograph to resolve frequency, and detected with a mercury cadmium telluride (MCT) dual array detector. Use of a pump pulse (femtosecond, wide band) and a probe pulse (picoseconds, narrow band), scanning the delay time and reading the data from the spectrometer, will give the lifetime of the functional group. The wide band pump and spectrometer described here is for collecting multiple group of pump-probe combination.For a typical pump-probe curve shown in life time t is defined as the corresponding time value to the half intensity as time zero.Table \(\PageIndex{10}\) shows the pump-probe data of the C≡N triple bond in a series of aromatic cyano compounds: n-propyl cyanide (C3H7CN), ethyl thiocyanate (C2H5SCN), and ethyl selenocyanate (C2H5SeCN) for which the νC≡N for each compound (measured in CCl4 solution) is 2252 cm-1), 2156 cm-1, and ~2155 cm-1, respectively.A plot of intensity versus time for the data from TABLE is shown . From these curves the C≡N stretch lifetimes can be determined for C3H7CN, C2H5SCN, and C2H5SeCN as ~5.5 ps, ~84 ps, and ~282 ps, respectively.From what is shown above, the pump-probe method is used in detecting C≡N vibrational lifetimes in different chemicals. One measurement only takes several second to get all the data and the lifetime, showing that pump-probe method is a powerful way to measure functional group vibrational lifetime.Attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR) is a physical method of compositional analysis that builds upon traditional transmission FTIR spectroscopy to minimize sample preparation and optimize reproducibility. Condensed phase samples of relatively low refractive index are placed in close contact with a crystal of high refractive index and the infrared (IR) absorption spectrum of the sample can be collected. Based on total internal reflection, the absorption spectra of ATR resemble those of transmission FTIR. To learn more about transmission IR spectroscopy (FTIR) please refer to the section further up this page titled Fourier Transform Infrared Spectroscopy of Metal Ligand Complexes.First publicly proposed in 1959 by Jacques Fahrenfort from the Royal Dutch Shell laboratories in Amsterdam, ATR IR spectroscopy was described as a technique to effectively measure weakly absorbing condensed phase materials. In Fahrenfort's first article describing the technique, published in 1961, he used a hemicylindrical ATR crystal (see Experimental Conditions) to produce single-reflection ATR ). ATR IR spectroscopy was slow to become accepted as a method of characterization due to concerns about its quantitative effectiveness and reproducibility. The main concern being the sample and ATR crystal contact necessary to achieve decent spectral contrast. In the late 1980’s FTIR spectrometers began improving due to an increased dynamic range, signal to noise ratio, and faster computers. As a result ATR-FTIR also started gaining traction as an efficient spectroscopic technique. 
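Before moving on to the details of ATR, it is worth making the lifetime extraction described in the pump-probe discussion above concrete. In practice the decay of the probe signal with pump-probe delay is often fit to a single exponential rather than read off at the half-intensity point; the sketch below does this for synthetic data. The 80 ps lifetime, amplitudes, and noise level are invented purely for illustration and are not the values reported in the table above.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, lifetime, offset):
    """Single-exponential decay of the pump-probe signal versus delay time t (ps)."""
    return amplitude * np.exp(-t / lifetime) + offset

# synthetic pump-probe trace with an assumed 80 ps vibrational lifetime plus noise
t = np.linspace(0.0, 400.0, 81)   # delay, ps
rng = np.random.default_rng(1)
signal = decay(t, 1.0, 80.0, 0.02) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(decay, t, signal, p0=(1.0, 50.0, 0.0))
print(f"fitted vibrational lifetime: {popt[1]:.0f} ps")
```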
These days ATR accessories are often manufactured to work in conjunction with most FTIR spectrometers, as can be seen in .

For additional information on light waves and their properties please refer to the module on Vertical Scanning Interferometry (VSI) in chapter 10.1.

When light propagates across an interface between two materials with different indices of refraction, the angle of refraction is given by Snell's law. When the light travels in the denser medium (index n2, defined below) toward the less dense medium (index n1), there is a critical angle of incidence, \ref{7}, above which none of the incident light is transmitted.

\[ \varphi _{c} \ =\ \sin ^{-1} \left( \frac{n_{1}}{n_{2}} \right) \label{7} \]

The reflectance of the interface is then total, and whenever light is incident from a higher refractive index medium onto a lower refractive index medium, the reflection is deemed internal (as opposed to external in the opposite scenario). Total internal reflectance experiences no losses, i.e., there is no transmitted light. Supercritical internal reflection refers to angles of incidence above the critical angle of incidence, allowing total internal reflectance. It is in this angular regime where only incident and reflected waves will be present. The transmitted wave is confined to the interface, where its amplitude is at a maximum, and damps exponentially into the lower refractive index medium as a function of distance. This wave is referred to as the evanescent wave, and it extends only a very short distance beyond the interface.

To apply total internal reflection to the experimental setup in ATR, consider n2 to be the internal reflectance element or ATR crystal (the blue trapezoid in ), where n2 is the material with the higher index of refraction. This should be a material that is fully transparent to the incident infrared radiation to give a real value for the refractive index. The ATR crystal must also have a high index of refraction to allow total internal reflection with many samples that have an index of refraction n1, where n1 < n2.

We can consider the sample to be absorbing in the infrared. Electromagnetic energy will pass through the crystal/sample interface and propagate into the sample via the evanescent wave. This energy loss must be compensated by the incident IR light; thus, total reflectance no longer occurs and the reflection inside the crystal is attenuated. If a sample does not absorb, the reflectance at the interface shows no attenuation. Therefore if the IR light at a particular frequency does not reach the detector, the sample must have absorbed it.

The penetration depth of the evanescent wave within the sample is on the order of 1 µm. The expression for the penetration depth is given in \ref{8} and depends upon the wavelength and angle of the incident light as well as the refractive indices of the ATR crystal and sample. The effective path length is the product of the depth of penetration of the evanescent wave and the number of points at which the IR light reflects at the interface between the crystal and sample.
This path length is equivalent to the path length of a sample in a traditional transmission FTIR setup.

\[ d_{p} \ =\ \frac{ \lambda }{2 \pi n_{2} \left( \sin ^{2} \theta \ -\ \left( \frac{n_{1}}{n_{2}} \right)^{2} \right)^{1/2}} \label{8} \]

where θ is the angle of incidence at the crystal-sample interface.

Typically an ATR attachment can be used with a traditional FTIR, where the beam of incident IR light enters a horizontally positioned crystal with a high refractive index, in the range of 1.5 to 4, as can be seen in Table \(\PageIndex{11}\). Samples will typically consist of organic compounds, inorganic compounds, and polymers, which have refractive indices below 2 that can readily be found in a database.

Multiple reflection ATR was initially more popular than single reflection ATR because of the weak absorbances associated with single reflection ATR. More reflections increased the interaction of the evanescent wave with the sample, which was believed to increase the signal to noise ratio of the spectrum. When IR spectrometers developed better spectral contrast, single reflection ATR became more popular. The number of reflections and the spectral contrast increase with the length of the crystal and decrease with the angle of incidence as well as the thickness. Within multiple reflection crystals some of the light is transmitted and some is reflected as the light exits the crystal, resulting in some of the light going back through the crystal for a round trip. Therefore, light exiting the ATR crystal contains components that experienced different numbers of reflections at the crystal-sample interface.

It was more common in earlier instruments to allow selection of the incident angle, sometimes offering a choice between 30°, 45°, and 60°. In all cases, for total internal reflection to hold, the angle of incidence must exceed the critical angle and ideally complement the angle of the crystal edge so that the light enters at a normal angle of incidence. These days 45° is the standard angle on most ATR-FTIR setups.

For the most part ATR crystals have a trapezoidal shape, as shown in . This shape facilitates sample preparation and handling on the crystal surface by enabling the optical setup to be placed below the crystal. However, different crystal shapes ) may be used for particular purposes, whether to achieve multiple reflections or to reduce the spot size. For example, a hemispherical crystal may be used in a microsampling experiment in which the beam diameter can be reduced at no expense to the light intensity. This allows appropriate measurement of a small sample without compromising the quality of the resulting spectral features.

Crystal-sample contact: because the path length of the evanescent wave is confined to the interface between the ATR crystal and sample, the sample should make firm contact with the ATR crystal ). The sample sits atop the crystal, and intimate contact can be ensured by applying pressure above the sample. However, one must be mindful of the ATR crystal's hardness; too much pressure may distort the crystal and affect the reproducibility of the resulting spectrum.

The wavelength effect expressed in \ref{8} shows an increase in penetration depth at increased wavelength; in terms of wavenumbers the relationship becomes inverse.
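A quick numerical evaluation of \ref{8} makes this wavenumber dependence concrete. The sketch below assumes a 45° angle of incidence, a sample refractive index of about 1.5, and nominal crystal indices for ZnSe (≈ 2.4) and germanium (≈ 4.0); these are representative values for illustration, not measured data.

```python
import numpy as np

def penetration_depth_um(wavenumber_cm1, n_crystal, n_sample, angle_deg=45.0):
    """Evanescent-wave penetration depth from Eq. (8), returned in micrometres."""
    wavelength_um = 1e4 / wavenumber_cm1                      # lambda in um
    theta = np.radians(angle_deg)
    root = np.sqrt(np.sin(theta) ** 2 - (n_sample / n_crystal) ** 2)
    return wavelength_um / (2 * np.pi * n_crystal * root)

for crystal, n in (("ZnSe", 2.4), ("Ge", 4.0)):
    dp_4000 = penetration_depth_um(4000, n, 1.5)
    dp_400 = penetration_depth_um(400, n, 1.5)
    print(f"{crystal}: d_p ~ {dp_4000:.2f} um at 4000 cm^-1, ~ {dp_400:.1f} um at 400 cm^-1")
```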
At 4000 cm-1 penetration of the sample is 10x less than penetration at 400 cm-1 meaning the intensity of the peaks may appear higher at lower wavenumbers in the absorbance spectrum compared to the spectral features in a transmission FTIR spectrum (if an automated correction to the ATR setup is not already in place).ATR functions effectively on the condition that the refractive index of the crystal is of a higher refractive index than the sample. Several crystals are available for use and it is important to select an appropriate option for any given experiment (Table \(\PageIndex{11}\) ).When selecting a material, it is important to consider reactivity, temperature, toxicity, solubility, and hardness.The first ATR crystals in use were KRS-5, a mixture of thallium bromide and iodide, and silver halides. These materials are not listed in the table because they are not in use any longer. While cost-effective, they are not practical due to their light sensitivity, softness, and relatively low refractive indices. In addition KRS-5 is terribly toxic and dissolves on contact with many solvents, including water.At present diamond is a favorable option for its hardness, inertness and wide spectral range, but may not be a financially viable option for some experiments. ZnSe and germanium are the most common crystal materials. ZnSe is reasonably priced, has significant mechanical strength and a long endurance. However, the surface will become etched with exposure to chemicals on either extreme of the pH scale. With a strong acid ZnSe will react to form toxic hydrogen selenide gas. ZnSe is also prone to oxidation and care must be taken to avoid the formation of an IR absorbing layer of SeO2. Germanium has a higher refractive index, which reduces the depth of penetration to 1 µm and may be preferable to ZnSe in applications involving intense sample absorptions or for use with samples that produce strong background absorptions. Sapphire is physically robust with a wide spectral range, but has a relatively low refractive index in terms of ATR crystals, meaning it may not be able to test as many samples as another crystal might.The versatility of ATR is reflected in the various forms and phases that a sample can assume. Solid samples need not be compressed into a pellet, dispersed into a mull or dissolve in a solution. A ground solid sample is simply pressed to the surface of the ATR crystal. For hard samples that may present a challenge to grind into a fine solid, the total area in contact with the crystal may be compromised unless small ATR crystals with exceptional durability are used (e.g., 2 mm diamond). Loss of contact with the crystal would result in decreased signal intensity because the evanescent wave may not penetrate the sample effectively. The inherently short path length of ATR due to the short penetration depth (0.5-5 µm) enables surface-modified solid samples to be readily characterized with ATR.Powdered samples are often tedious to prepare for analysis with transmission spectroscopy because they typically require being made into a KBr pellet to and ensuring the powdered sample is ground up sufficiently to reduce scattering. However, powdered samples require no sample preparation when taking the ATR spectra. This is advantageous in terms of time and effort, but also means the sample can easily be recovered after analysis.The advantage of using ATR to analyze liquid samples becomes apparent when short effective path lengths are required. 
The spectral reproducibility of liquid samples is certain as long as the entire length of the crystal is in contact with the liquid sample, ensuring the evanescent wave is interacting with the sample at the points of reflection, and the thickness of the liquid sample exceeds the penetration depth. A small path length may be necessary for aqueous solutions in order to reduce the absorbance of water.ATR-FTIR has been used in fields spanning forensic analysis to pharmaceutical applications and even art preservation. Due to its ease of use and accessibility ATR can be used to determine the purity of a compound. With only a minimal amount of sample this researcher is able to collect a quick analysis of her sample and determine whether it has been adequately purified or requires further processing. As can be seen in , the sample size is minute and requires no preparation. The sample is placed in close contact with the ATR crystal by turning a knob that will apply pressure to the sample ).ATR has an added advantage in that it inherently encloses the optical path of the IR beam. In a transmission FTIR, atmospheric compounds are constantly exposed to the IR beam and can present significant interference with the sample measurement. Of course the transmission FTIR can be purged in a dry environment, but sample measurement may become cumbersome. In an ATR measurement, however, light from the spectrometer is constantly in contact with the sample and exposure to the environment is reduced to a minimum.One exciting application of ATR is in the study of classical works of art. In the study of fragments of a piece of artwork, where samples are scarce and one-of-a-kind, ATR is a suitable method of characterization because it requires only a small sample size. Determining the compounds present in art enables proper preservation and historical insight into the pieces.In a study examining several paint samples from a various origins, a micro-ATR was employed for analysis. This study used a silicon crystal with a refractive index of 2.4 and a reduced beam size. Going beyond a simple surface analysis, this study explored the localization of various organic and inorganic compounds in the samples by performing a stratigraphic analysis. The researchers did so by embedding the samples in both KBr and a polyester resins. Two embedding techniques were compared to observe cross-sections of the samples. The mapping of the samples took approximately 1-3 hours which may seem quite laborious to some, but considering the precious nature of the sample, the wait time was acceptable to the researchers.The optical microscope picture ( ) shows a sample of a blue painted area from the robe of a 14th century Italian polychrome statue of a Madonna. The spectra shown in were acquired from the different layers pictured in the box marked in . All spectra were collected from the cross-sectioned sample and the false-color map on each spectrum indicates the location of each of these compounds within the embedded sample. The spectra correspond to the inorganic compounds listed in Table \(\PageIndex{12}\), which also highlights characteristic vibrational bands.The deep blue layer 3 corresponds to azurite and the light blue paint layer 2 to a mixture of silicate based blue pigments and white lead. Although beyond the ATR crystal’s spatial resolution limit of 20 µm, the absorption of bole was detected by the characteristic triple absorption bands of 3697, 3651, and 3619 cm-1 as seen in spectrum d of . 
The white layer 0 was identified as gypsum.To identify the binding material, the KBr embedded sample proved to be more effective than the polyester resin. This was due in part to the overwhelming IR absorbance of gypsum in the same spectral range (1700-1600 cm-1) as a characteristic stretch of the binding as well as some contaminant absorption due to the polyester embedding resin.To spatially locate specific pigments and binding media, ATR mapping was performed on the area highlighted with a box in . The false color images alongside each spectrum in indicate the relative presence of the compound corresponding to each spectrum in the boxed area. ATR mapping was achieved by taking 108 spectra across the 220x160 µm area and selecting for each identified compound by its characteristic vibrational band.This page titled 4.2: IR Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.3: Raman Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.03%3A_Raman_Spectroscopy
Raman spectroscopy is a powerful tool for determining chemical species. As with other spectroscopic techniques, Raman spectroscopy detects certain interactions of light with matter. In particular, this technique exploits the existence of Stokes and Anti-Stokes scattering to examine molecular structure. When radiation in the near infrared (NIR) or visible range interacts with a molecule, several types of scattering can occur. Three of these can be seen in the energy diagram in .In all three types of scattering, an incident photon of energy hν raises the molecule from a vibrational state to one of the infinite number of virtual states located between the ground and first electronic states. The type of scattering observed is dependent on how the molecule relaxes after excitation.Rayleigh ScatteringStokes ScatteringAnti-Stokes ScatteringRayleigh scattering is by far the most common transition, due to the fact that no change has to occur in the vibrational state of the molecule. The anti-Stokes transition is the least common, as it requires the molecule to be in a vibrationally excited state before the photon is incident upon it. Due to the lack of intensity of the anti-Stokes signal and filtering requirements that eliminate photons with incident energy and higher, generally only Stokes scattering is used in Raman measurements. The relative intensities of Rayleigh, Stokes and anti-Stokes scattering can be seen in .Raman spectroscopy observes the change in energy between the incident and scattered photons associated with the Stokes and anti-Stokes transitions. This is typically measured as the change in the wavenumber (cm-1), from the incident light source. Because Raman measures the change in wavenumber, measurements can be taken using a source at any wavelength; however, near infrared and visible radiation are commonly used. Photons with ultraviolet wavelengths could work as well, but tend to cause photodecomposition of the sample.Raman spectroscopy sounds very much like infrared (IR) spectroscopy; however, IR examines the wavenumber at which a functional group has a vibrational mode, while Raman observes the shift in vibration from an incident source. The Raman frequency shift is identical to the IR peak frequency for a given molecule or functional group. As mentioned above, this shift is independent of the excitation wavelength, giving versatility to the design and applicability of Raman instruments.The cause of the vibration is also mechanistically different between IR and Raman. This is because the two operate on different sets of selection rules. IR absorption requires a dipole moment or change in charge distribution to be associated with the vibrational mode. Only then can photons of the same energy as the vibrational state of molecule interact. A schematic of this can be seen in .Raman signals, on the other hand, due to scattering, occur because of a molecule’s polarizability, illustrated in . Many molecules that are inactive or weak in the IR will have intense Raman signals. This results in often complementary techniques.Raman activity depends on the polarizability of a bond. This is a measure of the deformability of a bond in an electric field. This factor essentially depends on how easy it is for the electrons in the bond to be displaced, inducing a temporary dipole. When there is a large concentration of loosely held electrons in a bond, the polarizability is also large, and the group or molecule will have an intense Raman signal. 
Because of this, Raman is typically more sensitive to the molecular framework of a molecule rather than a specific functional group as in IR. This should not be confused with the polarity of a molecule, which is a measure of the separation of electric charge within a molecule. Polar molecules often have very weak Raman signals due to the fact that electronegative atoms hold electrons so closely.Raman spectroscopy can provide information about both inorganic and organic chemical species. Many electron atoms, such as metals in coordination compounds, tend to have many loosely bound electrons, and therefore tend to be Raman active. Raman can provide information on the metal ligand bond, leading to knowledge of the composition, structure, and stability of these complexes. This can be particularly useful in metal compounds that have low vibrational absorption frequencies in the IR. Raman is also very useful for determining functional groups and fingerprints of organic molecules. Often, Raman vibrations are highly characteristic to a specific molecule, due to vibrations of a molecule as a whole, not in localized groups. The groups that do appear in Raman spectra have vibrations that are largely localized within the group, and often have multiple bonds involved.Raman measurements provide useful characterization of many materials. However, the Raman signal is inherently weak (less than 0.001% of the source intensity), restricting the usefulness of this analytical tool. Placing the molecule of interest near a metal surface can dramatically increase the Raman signal. This is the basis of surface-enhanced Raman spectroscopy (SERS). There are several factors leading to the increase in Raman signal intensity near a metal surfaceThe ever-rising interest in nanotechnology involves the synthesis and application of materials with a very high surface area to volume ratio. This places increasing importance on understanding the chemistry occurring at a surface, particularly the surface of a nanoparticle. Slight modifications of the nanoparticle or its surrounding environment can greatly affect many properties including the solubility, biological toxicity, and reactivity of the nanomaterial. Noble metal nanomaterials are of particular interest due to their unique optical properties and biological inertness.One tool employed to understand the surface chemistry of noble metal nanomaterial, particularly those composed of gold or silver is surface-enhanced Raman spectroscopy (SERS). Replacing a metal surface with a metal nanoparticle increases the available surface area for the adsorption of molecules. Compared to a flat metal surface, a similar sample size using nanoparticles will have a dramatically stronger signal, since signal intensity is directly related to the concentration of the molecule of interest. Due to the shape and size of the structure, the electrons in the nanoparticle oscillate collectively when exposed to incident electromagnetic radiation. This is called the localized surface plasmon resonance (LSPR) of the nanoparticle. The LSPR of the nanoparticles boosts the Raman signal intensity dramatically for molecules of interest near the surface of the nanoparticle. 
In order to maximize this effect, a nanoparticle should be selected with its resonant wavelength falling in the middle of the incident and scattered wavelengths.The overall intensity enhancement of SERS can be as large as a factor of 106, with the surface plasmon resonance responsible for roughly four orders of magnitude of this signal increase. The other two orders of magnitude have been attributed to chemical enhancement mechanisms arising charge interactions between the metal particle and the adsorbate or from resonances in the adsorbate alone, as discussed above.Traditionally, SERS uses nanoparticles made of conductive materials, such as gold, to learn more about a particular molecule. However, of interest in many growing fields that incorporate nanotechnology is the structure and functionalization of a nanoparticle stabilized by some surfactant or capping agent. In this case, SERS can provide valuable information regarding the stability and surface structure of the nanoparticle. Another use of nanoparticles in SERS is to provide information about a ligand’s structure and the nature of ligand binding. In many applications it is important to know whether a molecule is bound to the surface of the nanoparticle or simply electrostatically interacting with it.The standard Raman instrument is composed of three major components. First, the instrument must have an illumination system. This is usually composed of one or more lasers. The major restriction for the illumination system is that the incident frequency of light must not be absorbed by the sample or solvent. The next major component is the sample illumination system. This can vary widely based on the specifics of the instrument, including whether the system is a standard macro-Raman or has micro-Raman capabilities. The sample illumination system will determine the phase of material under investigation. The final necessary piece of a Raman system is the spectrometer. This is usually placed 90° away from the incident illumination and may include a series of filters or a monochromator. An example of a macro-Raman and micro-Raman setup can be and . A macro-Raman spectrometer has a spatial resolution anywhere from 100 μm to one millimeter while a micro-Raman spectrometer uses a microscope to magnify its spatial resolution.Carbon nanotubes (CNTs) have proven to be a unique system for the application of Raman spectroscopy, and at the same time Raman spectroscopy has provided an exceedingly powerful tool useful in the study of the vibrational properties and electronic structures of CNTs. Raman spectroscopy has been successfully applied for studying CNTs at single nanotube level.The large van der Waals interactions between the CNTs lead to an agglomeration of the tubes in the form of bundles or ropes. This problem can be solved by wrapping the tubes in a surfactant or functionalizing the SWNTs by attaching appropriate chemical moieties to the sidewalls of the tube. Functionalization causes a local change in the hybridization from sp2 to sp3 of the side-wall carbon atoms, and Raman spectroscopy can be used to determine this change. In addition information on length, diameter, electronic type (metallic or semiconducting), and whether nanotubes are separated or in bundle can be obtained by the use of Raman spectroscopy. 
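The guideline stated earlier, that a SERS substrate should have its localized surface plasmon resonance roughly midway between the incident and scattered wavelengths, is easy to turn into a quick estimate. The sketch below converts a Raman shift into the absolute wavelength of the Stokes-scattered light for a chosen excitation line and reports the midpoint; the 633 nm excitation and 1600 cm-1 shift are arbitrary example values.

```python
def stokes_wavelength_nm(excitation_nm, shift_cm1):
    """Absolute wavelength of Stokes-scattered light for a given Raman shift."""
    nu_exc = 1e7 / excitation_nm           # excitation line in cm^-1
    return 1e7 / (nu_exc - shift_cm1)      # Stokes photon has lower energy

excitation = 633.0   # nm, example He-Ne excitation line (assumed for illustration)
shift = 1600.0       # cm^-1, example Raman shift

scattered = stokes_wavelength_nm(excitation, shift)
target_lspr = 0.5 * (excitation + scattered)
print(f"Stokes line near {scattered:.0f} nm; aim for an LSPR near {target_lspr:.0f} nm")
```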
Recent progress in understanding the Raman spectra of single walled carbon nanotubes (SWNT) has stimulated Raman studies of the more complicated multi-wall carbon nanotubes (MWNT), but unfortunately quantitative determination of the latter is not possible at the present state of the art.

Raman spectroscopy is a single resonance process, i.e., the signals are greatly enhanced if either the incoming laser energy (Elaser) or the scattered radiation matches an allowed electronic transition in the sample. For this process to occur, the phonon modes are assumed to occur at the center of the Brillouin zone (q = 0). Owing to their one-dimensional nature, the π-electronic density of states of perfect, infinite SWNTs forms sharp singularities, known as van Hove singularities (vHs), which are energetically symmetrical with respect to the Fermi level (Ef) of the individual SWNTs. The allowed optical transitions occur between matching vHs of the valence and conduction bands of the SWNTs, i.e., from the first valence band vHs to the first conduction band vHs (E11) or from the second vHs of the valence band to the second vHs of the conduction band (E22). Since the quantum state of an electron (k) remains the same during the transition, this is referred to as the k-selection rule.

The electronic properties, and therefore the individual transition energies, of SWNTs are given by their structure, i.e., by the chiral vector that determines the way the SWNT is rolled up to form a cylinder. shows a SWNT having vector R making an angle θ, known as the chiral angle, with the so-called zigzag or r1 direction.

Raman spectroscopy of an ensemble of many SWNTs having different chiral vectors is sensitive to the subset of tubes for which the condition of an allowed transition is fulfilled. A ‘Kataura-Plot’ gives the allowed electronic transition energies of individual SWNTs as a function of diameter d, hence information on which tubes are resonant for a given excitation wavelength can be inferred. Since electronic transition energies vary roughly as 1/d, the question of whether a given laser energy probes predominantly semiconducting or metallic tubes depends on the mean diameter and diameter distribution in the SWNT ensemble. However, the transition energies that apply to an isolated SWNT do not necessarily hold for an ensemble of interacting SWNTs, owing to the mutual van der Waals interactions. shows a typical Raman spectrum from 100 to 3000 cm-1 taken of SWNTs produced by catalytic decomposition of carbon monoxide (HiPco process). The two dominant Raman features are the radial breathing mode (RBM) at low frequencies and the tangential (G-band) multifeature at higher frequencies. Other weak features, such as the disorder-induced D-band and the G’-band (an overtone mode), are also shown.

Of all the Raman modes observed in the spectra of SWNTs, the radial breathing modes are unique to SWNTs. They appear in the range 150 cm-1 < ωRBM < 300 cm-1 from the elastically scattered laser line and correspond to the vibration of the C atoms in the radial direction, as if the tube were breathing ).
An important point about these modes is the fact that the energy (or wavenumber) of these vibrational modes depends on the diameter (d) of the SWNTs, and not on the way the SWNT is rolled up to form a cylinder, i.e., they do not depend on the θ of the tube.

These features are very useful for characterizing nanotube diameters through the relation ωRBM = A/d + B, where A and B are constants whose variations are often attributed to environmental effects, i.e., whether the SWNTs are present as individual tubes wrapped in a surfactant, isolated on a substrate surface, or in the form of bundles. For typical SWNT bundles in the diameter range d = 1.5 ± 0.2 nm, A = 234 cm-1 nm and B = 10 cm-1 (where B is an upshift coming from tube-tube interactions). For isolated SWNTs on an oxidized Si substrate, A = 248 cm-1 nm and B = 0. As can be seen from , the relation ωRBM = A/d + B holds true for the usual diameter range, i.e., when d lies between 1 and 2 nm. However, for d less than 1 nm, nanotube lattice distortions lead to a chirality dependence of ωRBM, and for large diameter tubes, when d is more than 2 nm, the intensity of the RBM feature is weak and hardly observable.

Hence, a single Raman measurement gives an idea of the tubes that are in resonance with the laser line, but does not give a complete characterization of the diameter distribution of the sample. However, by taking Raman spectra using many laser lines, a good characterization of the diameter distribution in the sample can be obtained. Also, the natural line widths observed for isolated SWNTs are ΓRBM = 3 cm-1; as the tube diameter is increased, broadening of ΓRBM is observed. It has been observed that for d > 2 nm, ΓRBM > 20 cm-1. For SWNT bundles, the line width does not reflect ΓRBM; it rather reflects an ensemble of tubes in resonance with the laser energy.

Functionalization of SWNTs leads to variations of the relative intensities of the RBM compared to the starting material (unfunctionalized SWNTs). Owing to the diameter dependence of the RBM frequency and the resonant nature of the Raman scattering process, chemical reactions that are sensitive to the diameter as well as to the electronic structure (i.e., metallic or semiconducting) of the SWNTs can be sorted out. The difference in Raman spectra is usually inferred by thermal defunctionalization, where the functional groups are removed by annealing. The use of annealing for defunctionalizing SWNTs is based on the fact that annealing restores the Raman intensities, in contrast to other treatments where a complete disintegration of the SWNTs occurs. shows the Raman spectra of the pristine, functionalized and annealed SWNTs. It can be observed that the absolute intensities of the radial breathing modes are drastically reduced after functionalization. This decrease can be attributed to the vHs, which themselves are a consequence of the translational symmetry of the SWNTs. Since the translational symmetry of the SWNTs is broken as a result of the irregular distribution of the sp3 sites due to functionalization, these vHs are broadened and strongly reduced in intensity. As a result, the resonant Raman cross section of all modes is strongly reduced as well.

For an ensemble of functionalized SWNTs, a decrease in high wavenumber RBM intensities has been observed, which leads to the inference that destruction of small diameter SWNTs takes place.
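The diameter relation above lends itself to a quick calculation. The following minimal sketch, in Python, converts RBM frequencies to estimated tube diameters using ωRBM = A/d + B with the constants quoted above for bundled and for isolated SWNTs; the example RBM frequencies themselves are hypothetical values chosen only for illustration.

```python
# Estimate SWNT diameters from radial breathing mode (RBM) frequencies using
# the relation omega_RBM = A/d + B discussed above. The constants are the
# values quoted in the text for bundled tubes (A = 234 cm^-1 nm, B = 10 cm^-1)
# and for isolated tubes on oxidized Si (A = 248 cm^-1 nm, B = 0).

def rbm_to_diameter(omega_rbm_cm1, A=234.0, B=10.0):
    """Return the estimated tube diameter (nm) for an RBM frequency (cm^-1)."""
    if omega_rbm_cm1 <= B:
        raise ValueError("RBM frequency must exceed the environmental offset B")
    return A / (omega_rbm_cm1 - B)

if __name__ == "__main__":
    # Hypothetical RBM peak positions (cm^-1), for illustration only
    for omega in (160.0, 200.0, 265.0):
        d_bundle = rbm_to_diameter(omega)                  # bundled SWNTs
        d_isolated = rbm_to_diameter(omega, A=248.0, B=0)  # isolated SWNTs
        print(f"omega = {omega:5.1f} cm^-1 -> d = {d_bundle:.2f} nm (bundle), "
              f"{d_isolated:.2f} nm (isolated)")
```

Under these assumptions, an RBM at 200 cm-1 corresponds to a diameter of roughly 1.2 nm for bundled tubes, comfortably inside the 1-2 nm range where the relation is stated to hold.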
Also, after prolonged treatment with nitric acid and subsequent annealing in oxygen or vacuum, diameter enlargement of SWNTs is observed from the disappearance of RBMs from small diameter SWNTs and the appearance of new RBMs characteristic of SWNTs with larger diameters. In addition, laser irradiation seems to preferentially damage small diameter SWNTs. In all cases, the decrease of RBM intensities is attributed either to the complete disintegration of SWNTs or to a reduction in resonance enhancement of selectively functionalized SWNTs. However, a change in RBM intensities can also have other causes. One is doping-induced bleaching of electronic transitions in SWNTs. When a dopant is added, a previously unoccupied electronic state can be filled or an occupied state emptied, as a result of which Ef in the SWNTs is shifted. If this shift is large enough and the conduction band vHs corresponding to the respective Eii transition that is excited by the laser light gets occupied (n-type doping), or the valence band vHs is emptied (p-type doping), the resonant enhancement is lost as the electronic transitions are quenched.

Sample morphology has also been seen to affect the RBMs. The same unfunctionalized sample in different aggregation states gives rise to different spectra. This is because the transition energy, Eii, depends on the aggregation state of the SWNTs.

The tangential modes are the most intensive high-energy modes of SWNTs and form the so-called G-band, which is typically observed at around 1600 cm-1. For this mode, the atomic displacements occur along the circumferential direction ). Spectra in this frequency region can be used for SWNT characterization, independent of the RBM observation. This multi-peak feature can, for example, also be used for diameter characterization, although the information provided is less accurate than that from the RBM feature, and it gives information about the metallic character of the SWNTs in resonance with the laser line.

The tangential modes are useful in distinguishing semiconducting from metallic SWNTs. The difference is evident in the G- feature ( and \(\PageIndex{14}\)), which broadens and becomes asymmetric for metallic SWNTs in comparison with the Lorentzian lineshape for semiconducting tubes; this broadening is related to the presence of free electrons in nanotubes with metallic character. This broadened G- feature is usually fit using a Breit-Wigner-Fano (BWF) line that accounts for the coupling of a discrete phonon with a continuum related to conduction electrons. This BWF line is observed in many graphite-like materials with metallic character, such as n-doped graphite intercalation compounds (GIC) and n-doped fullerenes, as well as metallic SWNTs. The intensity of this G- mode depends on the size and number of metallic SWNTs in a bundle ).

Chemical treatments are found to affect the lineshape of the tangential modes. Selective functionalization of SWNTs, or a change in the ratio of metallic to semiconducting SWNTs due to selective etching, is responsible for such a change. According to , an increase or decrease of the BWF lineshape is observed depending on the laser wavelength. At λexc = 633 nm, the preferentially functionalized small diameter SWNTs are semiconducting; therefore the G-band shows a decrease in the BWF asymmetry. However, the situation is reversed at 514 nm, where small metallic tubes are probed.
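To make the lineshape distinction concrete, the sketch below compares a symmetric Lorentzian, as used for the G-band of semiconducting tubes, with one common parameterization of the Breit-Wigner-Fano line used for the broadened, asymmetric G- feature of metallic tubes. The peak positions, widths, and coupling parameter are illustrative values only, not fits to any real spectrum.

```python
import numpy as np

# Sketch of the two lineshapes discussed above: a symmetric Lorentzian for the
# G-band of semiconducting SWNTs, and a Breit-Wigner-Fano (BWF) line, in one
# common parameterization, for the asymmetric G- feature of metallic SWNTs.
# All parameter values are illustrative.

def lorentzian(w, w0, gamma, i0=1.0):
    """Symmetric Lorentzian centered at w0 with width parameter gamma (cm^-1)."""
    x = (w - w0) / gamma
    return i0 / (1.0 + x**2)

def bwf(w, w0, gamma, q, i0=1.0):
    """Breit-Wigner-Fano line; 1/q sets the asymmetry (q < 0 for metallic SWNTs)."""
    x = (w - w0) / gamma
    return i0 * (1.0 + x / q) ** 2 / (1.0 + x**2)

if __name__ == "__main__":
    shifts = np.linspace(1450, 1650, 5)  # a few Raman shifts (cm^-1), illustrative
    for w in shifts:
        print(f"{w:7.1f} cm^-1  Lorentzian: {lorentzian(w, 1591, 8):.3f}  "
              f"BWF: {bwf(w, 1540, 30, q=-4.0):.3f}")
```

The asymmetry of the BWF profile comes from the interference term (1 + x/q)^2, which suppresses intensity on one side of the peak and enhances it on the other, consistent with the coupling of the discrete phonon to the conduction-electron continuum described above.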
BWF resonance intensity of small bundles increases with bundle thickness, so care should be taken that an effect ascribed directly to functionalization of the SWNTs is not in fact caused by the exfoliation of previously bundled SWNTs.

The D (disorder) band is one of the most discussed modes for the characterization of functionalized SWNTs and is observed at 1300-1400 cm-1. The D-band is observed not only for functionalized SWNTs but also for unfunctionalized SWNTs. Of a large number of Raman spectra from isolated SWNTs, about 50% exhibit observable D-band signals of weak intensity ). A large D-peak compared with the G-peak usually means a bad resonance condition, which indicates the presence of amorphous carbon.

The appearance of the D-peak can be attributed to the breakdown of the k-selection rule. It also depends on the laser energy and the diameter of the SWNTs. This behavior is interpreted as a double resonance effect, where not only one of the direct, k-conserving electronic transitions, but also the emission of a phonon, is a resonant process. In contrast to single resonant Raman scattering, where only phonons around the center of the Brillouin zone (q = 0) are excited, the phonons that provoke the D-band exhibit a non-negligible q vector. This explains the double resonance theory for the D-band in Raman spectroscopy. In a few cases, the overtone of the D-band, known as the G’-band (or D*-band), is observed at 2600-2800 cm-1; it does not require defect scattering, as the two phonons with q and –q are excited. This mode is therefore observed independent of the defect concentration.

The presence of the D-band cannot be correlated with the presence of particular defects (such as hetero-atoms, vacancies, heptagon-pentagon pairs, kinks, or impurities). Since the D-peak appears due to the presence of defects, an increase in the intensity of the band is taken as a fingerprint for successful functionalization. However, whether the D-band intensity is a measure of the degree of functionalization is still uncertain, so it is not correct to correlate D-peak intensity or D-peak area with the degree of functionalization. From , it can be observed that for lower degrees of functionalization the intensity of the D-band scales linearly with defect density. As the degree of functionalization is further increased, both the D- and G-band areas decrease, which is explained by the loss of resonance enhancement due to functionalization. Also, normalization of the D-peak intensity to the G-band, in order to correct for changes in resonance intensities, likewise leads to a decrease for higher densities of functional groups.

Though Raman spectroscopy provides an exceedingly important tool for the characterization of SWNTs, it suffers from a few serious limitations. One of the main limitations is that it does not provide any information about the extent of functionalization in the SWNTs. The presence of the D-band indicates sidewall disorder; however, it cannot differentiate between the number of substituents and their distribution. These limitations can be illustrated by the following examples. Purified HiPco tubes may be fluorinated at 150 °C to give F-SWNTs with a C:F ratio of approximately 2.4:1.
The Raman spectrum (using 780 nm excitation) of F-SWNTs shows, in addition to the tangential mode at ~1587 cm-1, an intense broad D (disorder) mode at ~1295 cm-1, consistent with sidewall functionalization. Irrespective of the arrangement of the fluorine substituents, thermolysis of F-SWNTs results in the loss of fluorine and the re-formation of unfunctionalized SWNTs, along with their cleavage into shorter length tubes. As can be seen from , the intensity of the D-band decreases as the thermolysis temperature increases. This is consistent with the loss of F-substituents. The G-band shows a concomitant sharpening and increase in intensity.

As discussed above, the presence of a significant D mode has been the primary method for determining the presence of sidewall functionalization. It has been commonly accepted that the relative intensity of the D mode versus the tangential G mode is a quantitative measure of the level of substitution. However, as discussed below, the G:D ratio is also dependent on the distribution of substituents. Using Raman spectroscopy in combination with XPS analysis of F-SWNTs that have been subjected to thermolysis at different temperatures, a measure of the accuracy of Raman as a quantitative tool for determining substituent concentration can be obtained. As can be seen from , there is essentially no change in the G:D band ratio despite a doubling of the amount of functional groups. Thus, at low levels of functionalization the use of Raman spectroscopy to quantify the presence of fluorine substituents is clearly suspect.

On the basis of the above data it can be concluded that Raman spectroscopy does not provide an accurate quantification of small differences at low levels of functionalization, whereas when a comparison between samples with high levels of functionalization, or with large differences in degree of functionalization, is required, Raman spectroscopy provides good quantification.

Fluorinated nanotubes may be readily functionalized by reaction with the appropriate amine in the presence of base according to the scheme shown in . When the Raman spectra of the functionalized SWNTs are taken ), it is found that the relative intensity of the disorder D-band at ~1290 cm-1 versus the tangential G-band (1500 - 1600 cm-1) is much higher for thiophene-SWNTs than for thiol-SWNTs. If the relative intensity of the D mode is the measure of the level of substitution, it can be concluded that there are more thiophene groups than thiol groups present per C. However, from the TGA weight loss data the SWNT-C:substituent ratios are calculated to be 19:1 and 17.5:1. Thus, contrary to the Raman data, the TGA data suggest that the number of substituents per C (in the SWNT) is actually similar for both substituents. This result would suggest that Raman spectroscopy can be unsuccessful in correctly providing information about the number of substituents on the SWNTs. Subsequent imaging of the functionalized SWNTs by STM showed that the distribution of the functional groups was the difference between the thiol- and thiophene-functionalized SWNTs ). Thus, the relative ratio of the D- and G-bands is a measure of both the concentration and the distribution of functional groups on SWNTs.

Most of the characteristic differences that distinguish the Raman spectra of SWNTs from the spectra of graphite are not so evident for MWNTs. This is because the outer diameter of MWNTs is very large and the ensemble of CNTs in them varies from small to very large.
For example, the RBM Raman feature associated with a small diameter inner tube (less than 2 nm) can sometimes be observed when a good resonance condition is established, but since the RBM signal from large diameter tubes is usually too weak to be observable, and the ensemble average of inner tube diameters broadens the signal, a good signal is generally not observed. However, when hydrogen gas is used in the arc discharge method, a thin innermost nanotube of about 1 nm diameter can be obtained within a MWNT, which gives strong RBM peaks in the Raman spectra.

Whereas the G+ - G- splitting is large for small diameter SWNTs, the corresponding splitting of the G-band in MWNTs is both small in intensity and smeared out due to the effect of the diameter distribution. Therefore the G-band feature predominantly exhibits a weakly asymmetric characteristic lineshape, with a peak appearing close to the graphite frequency of 1582 cm-1. However, for isolated MWNTs prepared in the presence of hydrogen gas using the arc discharge method, it is possible to observe multiple G-band splitting effects even more clearly than for SWNTs, because environmental effects become relatively small for the innermost nanotube in a MWNT relative to the interactions occurring between SWNTs and different environments. The Raman spectroscopy of MWNTs has not been well investigated up to now, and new directions in this field are yet to be explored.

This page titled 4.3: Raman Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.4: UV-Visible Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.04%3A_UV-Visible_Spectroscopy
Ultraviolet-visible (UV-vis) spectroscopy is used to obtain the absorbance spectra of a compound in solution or as a solid. What is actually being observed spectroscopically is the absorbance of light energy or electromagnetic radiation, which excites electrons from the ground state to the first singlet excited state of the compound or material. The UV-vis region of energy for the electromagnetic spectrum covers 1.5 - 6.2 eV, which relates to a wavelength range of 800 - 200 nm. The Beer-Lambert Law, Equation \ref{1}, is the principle behind absorbance spectroscopy. For a single wavelength, A is absorbance (unitless, usually seen as arb. units or arbitrary units), ε is the molar absorptivity of the compound or molecule in solution (M-1 cm-1), b is the path length of the cuvette or sample holder (usually 1 cm), and c is the concentration of the solution (M).

\[ A\ =\ \varepsilon b c \label{1} \]

UV-vis instruments all have a light source (usually a deuterium or tungsten lamp), a sample holder, and a detector, but some have a filter for selecting one wavelength at a time. The single beam instrument ) has a filter or a monochromator between the source and the sample to analyze one wavelength at a time. The double beam instrument ) has a single source and a monochromator, followed by a splitter and a series of mirrors to direct the beam to a reference sample and to the sample to be analyzed, which allows for more accurate readings. In contrast, the simultaneous instrument ) does not have a monochromator between the sample and the source; instead, it has a diode array detector that allows the instrument to simultaneously detect the absorbance at all wavelengths. The simultaneous instrument is usually much faster and more efficient, but all of these types of spectrometers work well.

UV-vis spectroscopic data can give qualitative and quantitative information about a given compound or molecule. Irrespective of whether quantitative or qualitative information is required, it is important to use a reference cell to zero the instrument for the solvent the compound is in. For quantitative information on the compound, calibrating the instrument using known concentrations of the compound in question, in a solution with the same solvent as the unknown sample, is required. If the information needed is just proof that a compound is in the sample being analyzed, a calibration curve will not be necessary; however, if a degradation study or reaction is being performed and the concentration of the compound in solution is required, a calibration curve is needed.

To make a calibration curve, at least three concentrations of the compound will be needed, but five concentrations would be ideal for a more accurate curve. The concentrations should start at just above the estimated concentration of the unknown sample and should go down to about an order of magnitude lower than the highest concentration. The calibration solutions should be spaced relatively equally apart, and they should be made as accurately as possible using digital pipettes and volumetric flasks instead of graduated cylinders and beakers. An example of absorbance spectra of calibration solutions of Rose Bengal (4,5,6,7-tetrachloro-2',4',5',7'-tetraiodofluorescein, ) can be seen in . To make a calibration curve, the absorbance of each of the spectral curves at the most strongly absorbing wavelength is plotted in a graph of absorbance versus concentration, similar to that in .
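A calibration of this kind reduces to a linear least-squares fit of absorbance against concentration. The sketch below uses hypothetical standards (not real Rose Bengal data) to fit the Beer-Lambert relation, report the correlation coefficient, and back-calculate an unknown concentration from its measured absorbance.

```python
import numpy as np

# Sketch of a Beer-Lambert calibration: fit A = (eps*b)*c by linear least
# squares to standards of known concentration, then invert the fit to estimate
# the concentration of an unknown. All numbers are hypothetical illustrations.

conc = np.array([1.0e-6, 2.5e-6, 5.0e-6, 7.5e-6, 1.0e-5])   # M, standards
absorb = np.array([0.052, 0.128, 0.255, 0.381, 0.510])      # A at lambda_max

slope, intercept = np.polyfit(conc, absorb, 1)               # slope = eps*b
r = np.corrcoef(conc, absorb)[0, 1]                          # correlation coefficient

A_unknown = 0.300                                            # measured absorbance
c_unknown = (A_unknown - intercept) / slope                  # back-calculated, M

print(f"slope (eps*b) = {slope:.3e} M^-1, intercept = {intercept:.3e}")
print(f"correlation coefficient r = {r:.4f}")
print(f"unknown concentration ~ {c_unknown:.2e} M")
```

The correlation coefficient printed here is the quantity used below to judge whether the calibration is acceptable.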
The correlation coefficient of an acceptable calibration is 0.9 or better. If the correlation coefficient is lower than that, try making the solutions again, as the problem may be human error. However, if after making the solutions a few times the calibration is still poor, something may be wrong with the instrument; for example, the lamps may be going bad.

UV-vis spectroscopy works well on liquids and solutions, but if the sample is more of a suspension of solid particles in liquid, the sample will scatter the light more than absorb it and the data will be very skewed. Most UV-vis instruments can analyze solid samples or suspensions with a diffraction apparatus ), but this is not common. UV-vis instruments generally analyze liquids and solutions most efficiently.

A blank reference of the solvent to be used (water, hexanes, etc.) will be needed at the very beginning of the analysis, and if concentration analysis needs to be performed, calibration solutions need to be made accurately. If the solutions are not made accurately enough, the actual concentration of the sample in question will not be accurately determined.

Every solvent has a UV-vis absorbance cutoff wavelength. The solvent cutoff is the wavelength below which the solvent itself absorbs all of the light. So when choosing a solvent, be aware of its absorbance cutoff and where the compound under investigation is thought to absorb. If they are close, choose a different solvent. Table \(\PageIndex{1}\) provides an example of solvent cutoffs.

The material the cuvette (the sample holder) is made from will also have a UV-vis absorbance cutoff. Glass will absorb all of the light higher in energy starting at about 300 nm, so if the sample absorbs in the UV, a quartz cuvette will be more practical as the absorbance cutoff is around 160 nm for quartz (Table \(\PageIndex{2}\)).

To obtain reliable data, the peak absorbance of a given compound needs to be at least three times higher in intensity than the background noise of the instrument. Obviously, using higher concentrations of the compound in solution can combat this. Also, if the sample is very small and diluting it would not give an acceptable signal, there are cuvettes that hold smaller sample sizes than the 2.5 mL of a standard cuvette. Some cuvettes are made to hold only 100 μL, which allows a small sample to be analyzed without having to dilute it to a larger volume, which would lower the signal-to-noise ratio.

This page titled 4.4: UV-Visible Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.5: Photoluminescence, Phosphorescence, and Fluorescence Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.05%3A_Photoluminescence_Phosphorescence_and_Fluorescence_Spectroscopy
Photoluminescence spectroscopy is a contactless, nondestructive method of probing the electronic structure of materials. Light is directed onto a sample, where it is absorbed and imparts excess energy into the material in a process called photo-excitation. One way this excess energy can be dissipated by the sample is through the emission of light, or luminescence. In the case of photo-excitation, this luminescence is called photoluminescence.

Photo-excitation causes electrons within a material to move into permissible excited states. When these electrons return to their equilibrium states, the excess energy is released and may include the emission of light (a radiative process) or may not (a nonradiative process). The energy of the emitted light (photoluminescence) relates to the difference in energy levels between the two electron states involved in the transition between the excited state and the equilibrium state. The quantity of the emitted light is related to the relative contribution of the radiative process.

In most photoluminescent systems, chromophore aggregation generally quenches light emission via aggregation-caused quenching (ACQ). This means that it is necessary to use and study fluorophores in dilute solutions or as isolated molecules. This in turn results in poor sensitivity of devices employing fluorescence, e.g., biosensors and bioassays. However, there have recently been examples reported in which luminogen aggregation played a constructive, instead of destructive, role in the light-emitting process. This aggregation-induced emission (AIE) is of great potential significance, in particular with regard to solid state devices. Photoluminescence spectroscopy provides a good method for the study of the luminescent properties of a fluorophore.

Fluorescence and phosphorescence come at lower energy than absorption (the excitation energy). As shown in , in absorption, wavelength λ0 corresponds to a transition from the ground vibrational level of S0 to the lowest vibrational level of S1. After absorption, the vibrationally excited S1 molecule relaxes back to the lowest vibrational level of S1 prior to emitting any radiation. The highest energy transition comes at wavelength λ0, with a series of peaks following at longer wavelength. The absorption and emission spectra will have an approximate mirror image relation if the spacings between vibrational levels are roughly equal and if the transition probabilities are similar. The λ0 transitions in do not exactly overlap. As shown in , a molecule absorbing radiation is initially in its electronic ground state, S0. This molecule possesses a certain geometry and solvation. As the electronic transition is faster than the vibrational motion of atoms or the translational motion of solvent molecules, when radiation is first absorbed, the excited S1 molecule still possesses its S0 geometry and solvation. Shortly after excitation, the geometry and solvation change to their most favorable values for the S1 state. This rearrangement lowers the energy of the excited molecule. When an S1 molecule fluoresces, it returns to the S0 state with S1 geometry and solvation. This unstable configuration must have a higher energy than that of an S0 molecule with S0 geometry and solvation. The net effect in is that the λ0 emission energy is less than the λ0 excitation energy.

A schematic of an emission experiment is given in .
An excitation wavelength is selected by one monochromator, and luminescence is observed through a second monochromator, usually positioned at 90° to the incident light to minimize the intensity of scattered light reaching the detector. If the excitation wavelength is fixed and the emitted radiation is scanned, an emission spectrum is produced.

Ultraviolet-visible (UV-vis) spectroscopy or ultraviolet-visible spectrophotometry refers to absorption spectroscopy or reflectance spectroscopy in the ultraviolet-visible spectral region. The absorption or reflectance in the visible range directly affects the perceived color of the chemicals involved. The UV-vis spectrum is an absorbance versus wavelength graph and measures transitions from the ground state to the excited state, while photoluminescence deals with transitions from the excited state to the ground state.

An excitation spectrum is a graph of emission intensity versus excitation wavelength. An excitation spectrum looks very much like an absorption spectrum. The greater the absorbance is at the excitation wavelength, the more molecules are promoted to the excited state and the more emission will be observed.

By running a UV-vis absorption spectrum, the wavelength at which the molecule absorbs the most energy, and is therefore excited to the largest extent, can be obtained. Using this value as the excitation wavelength can thus provide a more intense emission at a red-shifted wavelength, which is usually within twice the excitation wavelength.

Aggregation-caused quenching (ACQ) of light emission is a general phenomenon for many aromatic compounds: fluorescence is weakened with an increase in solution concentration and even in the condensed phase. Such an effect, however, comes into play in the solid state, which has prevented many lead luminogens identified by the laboratory solution-screening process from finding real-world applications in an engineering-robust form.

Aggregation-induced emission (AIE), on the other hand, is a novel phenomenon in which aggregation plays a constructive, instead of destructive, role in the light-emitting process, which is exactly opposite to the ACQ effect.

From the photoluminescence spectra of hexaphenylsilole (HPS, ) shown in , it can be seen that as the water (bad solvent) fraction increases, the emission intensity of HPS increases. For the BODIPY derivative in , the PL intensity peaks at 0 water content, which results from intramolecular rotation or twisting, known as twisted intramolecular charge transfer (TICT).

The emission color of an AIE luminogen is scarcely affected by solvent polarity, whereas that of a TICT luminogen typically shifts bathochromically with increasing solvent polarity. In , however, different patterns of emission are observed under different excitation wavelengths. At the excitation wavelength of 372 nm, which corresponds to the BODIPY group, the emission intensity increases as the water fraction increases. However, it decreases at the excitation wavelength of 530 nm, which corresponds to the TPE group. The presence of two emissions in this compound is due to the presence of two independent groups in the compound with AIE and ACQ properties, respectively. shows the photoluminescence spectra of a BODIPY-TPE derivative at different concentrations. At the excitation wavelength of 329 nm, as the molarity increases, the emission intensity decreases.
Such compounds, whose PL emission intensity is enhanced at low concentration, can serve as good chemo-sensors for detecting the presence of compounds in low quantities.

Apart from the detection of light emission patterns, photoluminescence spectroscopy is of great significance in other fields of analysis, especially for semiconductors.

The band gap is the energy difference between states in the conduction and valence bands and corresponds to the radiative transition in semiconductors. The spectral distribution of PL from a semiconductor can be analyzed to nondestructively determine the electronic band gap. This provides a means to quantify the elemental composition of compound semiconductors and is a vitally important material parameter influencing solar cell device efficiency.

Radiative transitions in semiconductors also involve localized defect levels. The photoluminescence energy associated with these levels can be used to identify specific defects, and the amount of photoluminescence can be used to determine their concentration. The PL spectrum at low sample temperatures often reveals spectral peaks associated with impurities contained within the host material. Fourier transform photoluminescence microspectroscopy, which is of high sensitivity, provides the potential to identify extremely low concentrations of intentional and unintentional impurities that can strongly affect material quality and device performance.

The return to equilibrium, known as “recombination”, can involve both radiative and nonradiative processes. The quantity of PL emitted from a material is directly related to the relative amounts of the radiative and nonradiative recombination rates. Nonradiative rates are typically associated with impurities, and the amount of photoluminescence and its dependence on the level of photo-excitation and temperature are directly related to the dominant recombination process. Thus, analysis of photoluminescence can qualitatively monitor changes in material quality as a function of growth and processing conditions and help in understanding the underlying physics of the recombination mechanism.

The widely used conventional methods, such as XRD, IR and Raman spectroscopy, are very often not sensitive enough for supported oxide catalysts with low metal oxide concentrations. Photoluminescence, however, is very sensitive to surface effects or adsorbed species of semiconductor particles and thus can be used as a probe of electron-hole surface processes.

Very low concentrations of optical centers can be detected using photoluminescence, but it is not generally a quantitative technique. The main scientific limitation of photoluminescence is that many optical centers may have multiple excited states, which are not populated at low temperature. The disappearance of the luminescence signal is another limitation of photoluminescence spectroscopy. For example, in the characterization of photoluminescence centers in silicon, no sharp-line photoluminescence from 969 meV centers was observed when they had captured self-interstitials.

Luminescence is a process involving the emission of light from any substance, and occurs from electronically excited states of that substance. Normally, luminescence is divided into two categories, fluorescence and phosphorescence, depending on the nature of the excited state. Fluorescence is the emission of electromagnetic radiation (light) by a substance that has absorbed radiation of a different wavelength. Phosphorescence is a specific type of photoluminescence related to fluorescence.
Unlike fluorescence, a phosphorescent material does not immediately re-emit the radiation it absorbs.

The process of fluorescence absorption and emission is easily illustrated by the Jablonski diagram. A classic Jablonski diagram is shown in , where Sn represents the nth electronic state. There are different vibrational and rotational states in every electronic state. After light absorption, a fluorophore is excited to a higher electronic and vibrational state from the ground state (here rotational states are not considered for simplicity). By internal conversion of energy, these excited molecules relax to lower vibrational states in S1 ) and then return to the ground state by emitting fluorescence. In fact, excited molecules always return to higher vibrational states in S0, followed by thermal relaxation down to the ground vibrational state of S0. It is also possible for some molecules to undergo an intersystem crossing process to T2 states ). After internal conversion and relaxation to T1, these molecules can emit phosphorescence and return to the ground state.

The Stokes shift, the excited state lifetime, and the quantum yield are the three most important characteristics of fluorescence emission. The Stokes shift is the difference between the positions of the band maxima of the absorption and emission spectra of the same electronic transition. According to the mechanism discussed above, emission must occur at lower energy, or longer wavelength, than the absorbed light. The quantum yield is a measure of the intensity of fluorescence, defined as the ratio of emitted photons to absorbed photons. The excited state lifetime is a measure of the decay time of the fluorescence.

Most spectrofluorometers can record both excitation and emission spectra. An emission spectrum is the wavelength distribution of an emission measured at a single constant excitation wavelength. In comparison, an excitation spectrum is measured at a single emission wavelength by scanning the excitation wavelength.

Specific light sources are chosen depending on the application.

Arc and Incandescent Xenon Lamps
The high-pressure xenon (Xe) arc is at present the most versatile light source for steady-state fluorometers. It can provide a steady light output from 250 - 700 nm ), with only some sharp lines near 450 and 800 nm. The reason that xenon arc lamps emit continuous light is the recombination of electrons with ionized Xe atoms. These ions are produced by collisions between Xe atoms and electrons. The sharp lines near 450 nm are due to excited Xe atoms that are not ionized. During a fluorescence experiment, some distortion of the excitation spectra can be observed, especially for absorbance in the visible and ultraviolet regions. Any distortion displayed in the peaks is the result of the wavelength-dependent output of Xe lamps. Therefore, mathematical and physical corrections need to be applied.

High Pressure Mercury Lamps
Compared with xenon lamps, Hg lamps have higher intensities. As shown in , the intensity of Hg lamps is concentrated in a series of lines, so they are a potentially better excitation light source if matched to certain fluorophores.

Xe-Hg Arc Lamps
High-pressure xenon-mercury lamps have been produced. They have much higher intensity in the ultraviolet region than normal Xe lamps. Also, the introduction of Xe into Hg lamps broadens the sharp-line output of Hg lamps. Although the output wavelengths are still dominated by the Hg lines, these lines are broadened and fit various fluorophores better.
The Xe-Hg lamp output depends on the operating temperature.

Low Pressure Hg and Hg-Ar Lamps
Due to their very sharp line spectra, these lamps are primarily useful for calibration purposes. The combination of Hg and Ar improves the output range, from 200 - 1000 nm.

Other Light Sources
There are many other light sources for experimental and industrial applications, such as pulsed xenon lamps, quartz-tungsten halogen (QTH) lamps, LED light sources, etc.

Most of the light sources used provide only polychromatic or white light. However, what is needed for experiments is light of a selected wavelength with a bandwidth of about 10 nm. Monochromators help to achieve this aim. Prisms and diffraction gratings are the two main kinds of monochromators used, although diffraction gratings are the most useful, especially in spectrofluorometers.

Dispersion, efficiency, stray light level, and resolution are important parameters for monochromators. Dispersion is mainly determined by the slit width and is expressed in nm/mm. A low stray light level is preferred. Stray light is defined as light transmitted by the monochromator at wavelengths outside the chosen range. Also, a high efficiency is required to increase the ability to detect low light levels. Resolution depends on the slit width. There are normally two slits, entrance and exit, in a fluorometer. The light intensity that passes through the slits is proportional to the square of the slit width. Larger slits give larger signal levels but lower resolution, and vice versa. Therefore, it is important to balance signal intensity and resolution with the slit width.

Optical filters are used in addition to monochromators because the light passing through a monochromator is rarely ideal; optical filters are needed to further purify the light source. If the basic excitation and emission properties of a particular system under study are known, then selectivity is better achieved using optical filters than monochromators. Two kinds of optical filter are commonly employed: colored filters and thin-film filters.

Colored Filters
Colored filters are the most traditional filters, used before thin-film filters were developed. They can be divided into two categories: monochromatic filters and long-pass filters. The first passes only a small range of light (about 10 - 25 nm) centered at a particular chosen wavelength. In contrast, long-pass filters transmit all wavelengths above a particular wavelength. In using these bandpass filters, special attention must be paid to the possibility of emission from the filter itself, because many filters are made of luminescent materials that are easily excited by UV light. In order to avoid this problem, it is better to set up the filter further away from the sample.

Thin-film Filters
The transmission curves of colored glass filters are not suitable for some applications, and as such they are gradually being replaced by thin-film filters. Almost any desired transmission curve can be obtained using a thin-film filter.

The standard detector used in many spectrofluorometers is the InGaAs array, which can provide rapid and robust spectral characterization in the near-IR. Liquid-nitrogen cooling is applied to decrease the background noise. Normally, detectors are connected to a controller that can transfer a digital signal to and from the computer.

At present, a wide range of fluorophores have been developed as fluorescence probes in bio-systems. They are widely used for clinical diagnosis, bio-tracking, and labeling.
The advance of fluorometers has been accompanied by developments in fluorophore chemistry. Thousands of fluorophores have been synthesized, but herein four categories of fluorophores will be discussed with regard to their spectral properties and applications.

Tryptophan (trp), tyrosine (tyr), and phenylalanine (phe) are three natural amino acids with strong fluorescence ). In tryptophan, the indole group absorbs excitation light in the UV region and emits fluorescence. Green fluorescent protein (GFP) is another natural fluorophore. GFP is composed of 238 amino acids ), and it exhibits a characteristic bright green fluorescence when excited. It is mainly extracted from the bioluminescent jellyfish Aequorea victoria and is employed as a signal reporter in molecular biology.

Extrinsic Fluorophores
Most bio-molecules are nonfluorescent; therefore, it is necessary to attach fluorophores to enable labeling or tracking of the biomolecules. DNA, for example, is a biomolecule without fluorescence. The Rhodamine ) and BODIPY ) families are two kinds of well-developed organic fluorophores. They have been extensively employed in the design of molecular probes due to their excellent photophysical properties.

With the development of fluorophores, red and near-infrared (NIR) dyes have attracted increasing attention since they can improve the sensitivity of fluorescence detection. In biological systems, autofluorescence lowers the signal-to-noise (S/N) ratio and limits the sensitivity. As the excitation wavelength becomes longer, autofluorescence decreases accordingly, and therefore the signal-to-noise ratio increases. Cyanines are one such group of long-wavelength dyes, e.g., Cy-3, Cy-5 and Cy-7 ), which have emission at 555, 655 and 755 nm, respectively.

Almost all of the fluorophores mentioned above are organic fluorophores with relatively short lifetimes of 1-10 ns. However, there are also a few long-lifetime organic fluorophores, such as pyrene and coronene, with lifetimes near 400 ns and 200 ns, respectively ). A long lifetime is one of the important properties of a fluorophore: with its help, the autofluorescence in biological systems can be adequately removed, improving detectability over the background. Although their emission is phosphorescence, transition metal complexes are a significant class of long-lifetime fluorophores. Ruthenium (II), iridium (III), rhenium (I), and osmium (II) are the most popular transition metals that can combine with one to three diimine ligands to form luminescent metal complexes. For example, iridium forms a cationic complex with two phenylpyridine ligands and one diimine ligand ). This complex has an excellent quantum yield and a relatively long lifetime.

With advances in fluorometers and fluorophores, fluorescence has become a dominant technology in the medical field, including clinical diagnosis and flow cytometry. Herein, the application of fluorescence to DNA and RNA detection is discussed. The low concentration of DNA and RNA sequences in cells means that high probe sensitivity is required, while the existence of various DNA and RNA sequences with similar structures requires high selectivity. Hence, fluorophores were introduced as the signal group in probes, because fluorescence spectroscopy is among the most sensitive technologies available. The general design of a DNA or RNA probe involves using an antisense hybridization oligonucleotide to monitor a target DNA sequence.
When the oligonucleotide hybridizes with the target DNA, the signal groups (the fluorophores) emit the designed fluorescence. Using fluorescence spectroscopy, the signal fluorescence can be detected, which helps to locate the target DNA sequence. The selectivity inherent in the hybridization between two complementary DNA/RNA sequences gives this kind of DNA probe extremely high selectivity. A molecular beacon is one kind of DNA probe. This simple but novel design was reported by Tyagi and Kramer in 1996 ) and has gradually developed into one of the most common DNA/RNA probes.

Generally speaking, a molecular beacon is composed of three parts: an oligonucleotide, a fluorophore, and a quencher at different ends. In the absence of the target DNA, the molecular beacon is folded like a hairpin due to the interaction between the two complementary series of nucleotides at opposite ends of the oligonucleotide. At this time, the fluorescence is quenched by the nearby quencher. However, in the presence of the target, the probe region of the MB will hybridize to the target DNA, opening the folded MB and separating the fluorophore and quencher. Therefore, the fluorescent signal can be detected, which indicates the existence of a particular DNA.

Fluorescence correlation spectroscopy (FCS) is an experimental technique that measures fluctuations in fluorescence intensity caused by the Brownian motion of particles. Fluorescence is a form of luminescence that involves the emission of light by a substance that has absorbed light or other electromagnetic radiation. Brownian motion is the random motion of particles suspended in a fluid that results from collisions with other molecules or atoms in the fluid. The initial experimental data are presented as intensity over time, but statistical analysis of the fluctuations makes it possible to determine various physical and photo-physical properties of molecules and systems. When combined with analysis models, FCS can be used to find diffusion coefficients, hydrodynamic radii, average concentrations, kinetic chemical reaction rates, and singlet-triplet state dynamics. Singlet and triplet states are related to electron spin. Electrons can have a spin of (+1/2) or (-1/2). For a system that exists in the singlet state, all spins are paired and the total spin for the system is ((-1/2) + (1/2)) or 0. When a system is in the triplet state, there exist two unpaired electrons with a total spin state of 1.

The first scientists to be credited with the application of fluorescence to signal-correlation techniques were Douglas Magde, Elliot L. Elson, and Watt W. Webb; therefore, they are commonly referred to as the inventors of FCS. The technique was originally used to measure the diffusion and binding of ethidium bromide ) onto double-stranded DNA. Initially, the technique required high concentrations of fluorescent molecules and was very insensitive. Starting in 1993, large improvements in technology and the development of confocal microscopy and two-photon microscopy were made, allowing for great improvements in the signal-to-noise ratio and the ability to do single-molecule detection. Recently, the applications of FCS have been extended to include the use of Förster Resonance Energy Transfer (FRET), the cross-correlation between two fluorescent channels instead of autocorrelation, and the use of laser scanning. Today, FCS is mostly used for biology and biophysics.

A basic FCS setup ) consists of a laser line that is reflected into a microscope objective by a dichroic mirror.
The laser beam is focused on a sample that contains very dilute amounts of fluorescent particles, so that only a few particles pass through the observed space at any given time. When particles cross the focal volume (the observed space) they fluoresce. This light is collected by the objective and passes through the dichroic mirror (collected light is red-shifted relative to excitation light), reaching the detector. It is essential to use a detector with high quantum efficiency (the percentage of photons hitting the detector that produce charge carriers). Common types of detectors are a photomultiplier tube (rarely used due to low quantum yield), an avalanche photodiode, and a superconducting nanowire single-photon detector. The detector produces an electronic signal that can be stored as intensity over time or can be immediately autocorrelated. It is common to use two detectors and cross-correlate their outputs, leading to a cross-correlation function that is similar to the autocorrelation function but is free from after-pulsing (when a photon produces two electronic pulses). As mentioned earlier, when combined with analysis models, FCS data can be used to find diffusion coefficients, hydrodynamic radii, average concentrations, kinetic chemical reaction rates, and singlet-triplet dynamics.

When particles pass through the observed volume and fluoresce, they can be described mathematically as point spread functions, with the point source of the light being the center of the particle. A point spread function (PSF) is commonly described as an ellipsoid with dimensions in the hundreds of nanometers range (although this is not always the case, depending on the particle). With respect to confocal microscopy, the PSF is approximated well by a Gaussian, \ref{1}, where I0 is the peak intensity, r and z are the radial and axial positions, and wxy and wz are the radial and axial radii (with wz > wxy).

\[ PSF(r,z) \ =\ I_{0}\, e^{-2r^{2}/w^{2}_{xy}}\, e^{-2z^{2}/w^{2}_{z}} \label{1} \]

This Gaussian is assumed in the autocorrelation, with changes being applied to the equation when necessary (as in the case of a triplet state, chemical relaxation, etc.). For a Gaussian PSF, the autocorrelation function is given by \ref{2}, where \ref{3} is the stochastic displacement in space of a fluorophore after time τ.

\[ G(\tau )\ =\ \frac{1}{\langle N \rangle } \left\langle \exp \left(- \frac{\Delta X(\tau)^{2} \ +\ \Delta Y(\tau )^{2}}{w^{2}_{xy}}\ -\ \frac{\Delta Z(\tau )^{2}}{w^{2}_{z}}\right) \right\rangle \label{2} \]

\[ \Delta \vec{R} (\tau )\ =\ (\Delta X(\tau ),\ \Delta Y(\tau ),\ \Delta Z(\tau )) \label{3} \]

The expression is valid if the average number of particles, N, is low and if dark states can be ignored. Because of this, FCS observes a small number of molecules (nanomolar and picomolar concentrations) in a small volume (~1 μm3) and does not require physical separation processes, as information is determined using optics. After applying the chosen autocorrelation function, it becomes much easier to analyze the data and extract the desired information ). FCS is often seen in the context of microscopy, being used in confocal microscopy and two-photon excitation microscopy. In both techniques, light is focused on a sample and fluorescence intensity fluctuations are measured and analyzed using temporal autocorrelation.
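As a concrete illustration of the temporal autocorrelation step, the sketch below computes a normalized intensity autocorrelation, G(tau) = <dI(t) dI(t+tau)> / <I>^2, from an intensity trace. The trace here is synthetic noise standing in for real detector output; fitting the resulting curve with a diffusion model would be the next step and is not shown.

```python
import numpy as np

# Minimal sketch of the temporal autocorrelation used in FCS: given a recorded
# intensity trace I(t), compute G(tau) = <dI(t) dI(t+tau)> / <I>^2 for a range
# of lags. The "trace" below is synthetic, purely for illustration.

def autocorrelation(intensity, max_lag):
    """Return G(tau) for tau = 1..max_lag samples of an intensity trace."""
    i = np.asarray(intensity, dtype=float)
    mean = i.mean()
    d = i - mean
    g = []
    for lag in range(1, max_lag + 1):
        g.append(np.mean(d[:-lag] * d[lag:]) / mean**2)
    return np.array(g)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic intensity trace: Poisson counts around a slowly varying mean,
    # mimicking fluctuations as particles enter and leave the focal volume.
    baseline = 50 + 10 * np.sin(np.linspace(0, 40 * np.pi, 10000))
    trace = rng.poisson(baseline)
    G = autocorrelation(trace, max_lag=20)
    for lag, g in enumerate(G[:5], start=1):
        print(f"G(tau = {lag} samples) = {g:.4f}")
```

In a real measurement the lag axis would be in physical time units set by the detector binning, and the decay of G(tau) would be fit to extract the diffusion time and average particle number.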
The magnitude of the fluorescence intensity and the amount of fluctuation are related to the number of individual particles; there is an optimum measurement time when the particles are entering or exiting the observation volume. When too many particles occupy the observed space, the overall fluctuations are small relative to the total signal and are difficult to resolve. On the other hand, if the time between molecules passing through the observed space is too long, running an experiment could take an unreasonable amount of time.

One of the applications of FCS is that it can be used to analyze the concentration of fluorescent molecules in solution. Here, FCS is used to analyze a very small space containing a small number of molecules, and the motion of the fluorescent particles is observed. The fluorescence intensity fluctuates based on the number of particles present; therefore analysis can give the average number of particles present, the average diffusion time, the concentration, and the particle size. This is useful because it can be done in vivo, allowing for the practical study of various parts of the cell. FCS is also a common technique in photo-physics, as it can be used to study triplet state formation and photo-bleaching. State formation refers to the transition between a singlet and a triplet state, while photo-bleaching is when a fluorophore is photo-chemically altered such that it permanently loses its ability to fluoresce. By far, the most popular application of FCS is its use in studying molecular binding and unbinding. Often, it is not a particular molecule that is of interest but, rather, the interaction of that molecule in a system. By dye-labeling a particular molecule in a system, FCS can be used to determine the kinetics of binding and unbinding (particularly useful in the study of assays).

When a material that has been irradiated emits light, it can do so either via incandescence, in which all atoms in the material emit light, or via luminescence, in which only certain atoms emit light. There are two types of luminescence: fluorescence and phosphorescence. Phosphorescence occurs when excited electrons of a different multiplicity from those in their ground state return to their ground state via emission of a photon. It is a longer-lasting and less common type of luminescence, as it is a spin-forbidden process, but it finds applications across numerous different fields. This module will cover the physical basis of phosphorescence, as well as instrumentation, sample preparation, limitations, and practical applications relating to molecular phosphorescence spectroscopy.
Phosphorescence is the emission of energy in the form of a photon after an electron has been excited due to radiation. In order to understand the cause of this emission, it is first important to consider the molecular electronic state of the sample. In the singlet molecular electronic state, all electron spins are paired, meaning that their spins are antiparallel to one another. When one paired electron is excited to a higher-energy state, it can either occupy an excited singlet state or an excited triplet state. In an excited singlet state, the excited electron remains paired with the electron in the ground state. In the excited triplet state, however, the electron becomes unpaired with the electron in the ground state and adopts a parallel spin. When this spin conversion happens, the electron in the excited triplet state is said to be of a different multiplicity from the electron in the ground state. Phosphorescence occurs when electrons from the excited triplet state return to the ground singlet state, \ref{4} - \ref{6}, where E represents an electron in the singlet ground state, E* represents the electron in the singlet excited state, and T* represents the electron in the triplet excited state.

\[ E\ +\ h\nu \rightarrow E^* \label{4} \]

\[ E^* \rightarrow T^* \label{5} \]

\[ T^* \rightarrow \ E\ +\ h\nu ' \label{6} \]

Electrons in the triplet excited state are spin-prohibited from returning to the singlet state because their spins are parallel to those in the ground state. In order to return to the ground state, they must undergo a spin conversion, which is not very probable, especially considering that there are many other means of releasing excess energy. Because of the need for an internal spin conversion, phosphorescence lifetimes are much longer than those of other kinds of luminescence, lasting from 10^-4 to 10^4 seconds.

Historically, phosphorescence and fluorescence were distinguished by the amount of time that luminescence remained after the radiation source was removed. Fluorescence was defined as short-lived luminescence (< 10^-5 s) because of the ease of transition between the excited and ground singlet states, whereas phosphorescence was defined as longer-lived luminescence. However, basing the difference between the two forms of luminescence purely on time proved to be a very unreliable metric. Fluorescence is now defined as occurring when decaying electrons have the same multiplicity as those of their ground state.

Because phosphorescence is unlikely and produces relatively weak emissions, samples for molecular phosphorescence spectroscopy must be very carefully prepared in order to maximize the observed phosphorescence. The most common method of phosphorescence sample preparation is to dissolve the sample in a solvent that will form a clear and colorless solid when cooled to 77 K, the temperature of liquid nitrogen. Cryogenic conditions are usually used because, at low temperatures, there is little background interference from processes other than phosphorescence that contribute to loss of absorbed energy. Additionally, there is little interference from the solvent itself under cryogenic conditions. The solvent choice is especially important; in order to form a clear, colorless solid, the solvent must be of ultra-high purity. The polarity of the phosphorescent sample motivates the solvent choice. Common solvents include ethanol for polar samples and EPA (a mixture of diethyl ether, isopentane, and ethanol in a 5:5:2 ratio) for non-polar samples. Once a disk has been formed from the sample and solvent, it can be analyzed using a phosphoroscope.

While using a rigid medium is still the predominant choice for measuring phosphorescence, there have been recent advances in room temperature spectroscopy, which allows samples to be measured at warmer temperatures. Similar to sample preparation using a rigid medium, the most important aspect is to maximize the recorded phosphorescence by avoiding other forms of emission. Current methods that allow good room-temperature detection of phosphorescence include adsorbing the sample onto an external support and putting the sample into a molecular enclosure, both of which protect the triplet state involved in phosphorescence.

Phosphorescence is recorded by two distinct methods, the distinguishing feature between the two being whether the light source is steady or pulsed.
When the light source is steady, a phosphoroscope, an attachment to a fluorescence spectrometer, is used. The phosphoroscope was experimentally devised by Alexandre-Edmond Becquerel, a pioneer in the field of luminescence, in 1857.

There are two different kinds of phosphoroscopes: rotating disk phosphoroscopes and rotating can phosphoroscopes. A rotating disk phosphoroscope comprises two rotating disks with holes, in the middle of which is placed the sample to be tested. After a light beam penetrates one of the disks, the sample is electronically excited by the light energy and can phosphoresce; a photomultiplier records the intensity of the phosphorescence. Changing the speed of the disks' rotation allows a decay curve to be created, which tells the user how long the phosphorescence lasts.

The second type of phosphoroscope, the rotating can phosphoroscope, employs a rotating cylinder with a window to allow passage of light. The sample is placed on the outside edge of the can and, when light from the source is allowed to pass through the window, the sample is electronically excited and phosphoresces, and the intensity is again detected via photomultiplier. One major advantage of the rotating can phosphoroscope over the rotating disk phosphoroscope is that, at high speeds, it can minimize other types of interference such as fluorescence and Raman and Rayleigh scattering, the inelastic and elastic scattering of photons, respectively.

The more modern, advanced measurement of phosphorescence uses pulsed-source time-resolved spectrometry and can be performed on a luminescence spectrometer. A luminescence spectrometer has modes for both fluorescence and phosphorescence, and the spectrometer can measure the emission intensity with respect to either the wavelength of the emitted light or time. The spectrometer employs a gated photomultiplier to measure the intensity of the phosphorescence. After the initial burst of radiation from the light source, the gate blocks further light, and the photomultiplier measures both the peak intensity of the phosphorescence as well as its decay, as shown in .

The lifetime of the phosphorescence can be calculated from the slope of the decay of the signal after the peak intensity (a simple decay-fitting sketch is given below). The lifetime depends on many factors, including the wavelength of the incident radiation as well as properties arising from the sample and the solvent used. Although background fluorescence as well as Raman and Rayleigh scattering are still present in pulsed-source time-resolved spectrometry, they are easily detected and removed from intensity versus time plots, allowing for the pure measurement of phosphorescence.

The biggest single limitation of molecular phosphorescence spectroscopy is the need for cryogenic conditions. This is a direct result of the unfavorable transition from an excited triplet state to a ground singlet state, which is unlikely and therefore produces low-intensity, long-lasting emission that is difficult to detect. Because cooling phosphorescent samples reduces the chance of competing deactivation processes, it is vital for current forms of phosphorescence spectroscopy, but this makes it somewhat impractical in settings outside of a specialized laboratory.
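The lifetime extraction described above amounts to fitting a single-exponential decay, I(t) = I0·exp(−t/τ), to the intensity recorded after the gate opens. A minimal sketch is given below; the decay trace is synthetic, and scipy.optimize.curve_fit is only one of several reasonable ways to perform such a fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical decay trace recorded after the gate opens (time in s, intensity in counts)
t = np.linspace(0, 5e-3, 50)                      # 0-5 ms observation window
true_tau = 1.2e-3                                 # "unknown" lifetime used to simulate data
intensity = 1000 * np.exp(-t / true_tau) + np.random.normal(0, 5, t.size)

def decay(t, I0, tau):
    """Single-exponential phosphorescence decay."""
    return I0 * np.exp(-t / tau)

(I0_fit, tau_fit), _ = curve_fit(decay, t, intensity, p0=(1000, 1e-3))
print(f"fitted lifetime: {tau_fit * 1e3:.2f} ms")  # ~1.2 ms for this synthetic trace
```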
However, the emergence and development of room temperature spectroscopy methods give rise to a whole new set of applications and make phosphorescence spectroscopy a more viable method.Currently, phosphorescent materials have a variety of uses, and molecular phosphorescence spectrometry is applicable across many industries. Phosphorescent materials find use in radar screens, glow-in-the-dark toys, and in pigments, some of which are used to make highway signs visible to drivers. Molecular phosphorescence spectroscopy is currently in use in the pharmaceutical industry, where its high selectivity and lack of need for extensive separation or purification steps make it useful. It also shows potential in forensic analysis because of the low sample volume requirement.This page titled 4.5: Photoluminescence, Phosphorescence, and Fluorescence Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.6: Mössbauer Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.06%3A_Mossbauer_Spectroscopy
In 1957 Rudolf Mössbauer achieved the first experimental observation of the resonant absorption and recoil-free emission of nuclear γ-rays in solids during his graduate work at the Institute for Physics of the Max Planck Institute for Medical Research in Heidelberg, Germany. Mössbauer received the 1961 Nobel Prize in Physics for his research in resonant absorption of γ-radiation and the discovery of recoil-free emission, a phenomenon that is named after him. The Mössbauer effect is the basis of Mössbauer spectroscopy.

The Mössbauer effect can be described very simply by looking at the energy involved in the absorption or emission of a γ-ray by a nucleus. When a free nucleus absorbs or emits a γ-ray, it must recoil to conserve momentum, so in terms of energy:

\[ E_{ \gamma - ray} \ = \ E_{\text{nuclear transition}}\ -\ E_{\text{recoil}} \label{1} \]

When the nucleus is in a solid matrix, the recoil energy goes to zero because the effective recoiling mass (essentially that of the whole lattice) is very large and momentum can be conserved with negligible movement of the nucleus. So, for nuclei in a solid matrix:

\[ E_{\gamma - ray} \ =\ E_{\text{nuclear transition}} \label{2} \]

This is the Mössbauer effect, which results in the resonant absorption/emission of γ-rays and gives us a means to probe the hyperfine interactions of an atom's nucleus and its surroundings.

A Mössbauer spectrometer system consists of a γ-ray source that is oscillated toward and away from the sample by a "Mössbauer drive", a collimator to filter the γ-rays, the sample, and a detector. shows the two basic setups for a Mössbauer spectrometer. The Mössbauer drive oscillates the source so that the incident γ-rays hitting the absorber have a range of energies due to the Doppler effect. The energy scale for Mössbauer spectra (x-axis) is generally given in terms of the velocity of the source in mm/s. The source shown (57Co) is used to probe 57Fe in iron-containing samples because 57Co decays to 57Fe, emitting a γ-ray of the right energy to be absorbed by 57Fe. To analyze other Mössbauer isotopes, other suitable sources are used. Fe is the most common element examined with Mössbauer spectroscopy because its 57Fe isotope is abundant enough (2.2%), has a low-energy γ-ray, and has a long-lived excited nuclear state, which are the requirements for an observable Mössbauer spectrum. Other elements that have isotopes with the required parameters for Mössbauer probing are listed in Table \(\PageIndex{1}\).

The primary characteristics looked at in Mössbauer spectra are the isomer shift (IS), quadrupole splitting (QS), and magnetic splitting (MS, or hyperfine splitting). These characteristics are effects caused by interactions of the absorbing nucleus with its environment.

Isomer shift is due to slightly different nuclear energy levels in the source and absorber arising from differences in the s-electron environment of the source and absorber. The oxidation state of an absorber nucleus is one characteristic that can be determined from the IS of a spectrum. For example, due to greater d-electron screening, Fe2+ has less s-electron density at its nucleus than Fe3+, which results in a greater positive IS for Fe2+.
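Because the energy axis of a Mössbauer spectrum is expressed as a source velocity, it can be helpful to see how mm/s maps onto an absolute energy through the first-order Doppler relation ΔE = (v/c)Eγ. The sketch below assumes the 14.4 keV 57Fe transition; the velocity value is arbitrary.

```python
# Doppler shift of the 14.4 keV 57Fe gamma-ray as a function of source velocity
E_GAMMA_EV = 14.4e3        # 57Fe Mossbauer transition energy, eV
C = 2.998e8                # speed of light, m/s

def doppler_shift_ev(velocity_mm_s: float) -> float:
    """Energy shift (eV) corresponding to a source velocity given in mm/s."""
    return (velocity_mm_s * 1e-3 / C) * E_GAMMA_EV

print(doppler_shift_ev(1.0))   # ~4.8e-8 eV, i.e. tens of neV per mm/s
```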
For absorbers with nuclear angular momentum quantum number I > ½, the non-spherical charge distribution results in quadrupole splitting of the energy states. For example, Fe with a transition from I = 1/2 to 3/2 will exhibit doublets of individual peaks in the Mössbauer spectrum due to quadrupole splitting of the nuclear states, as shown in red in .

In the presence of a magnetic field, the interaction of the nuclear spin moments with the magnetic field removes all of the degeneracy of the energy levels, resulting in the splitting of energy levels with nuclear spin I into 2I + 1 sublevels. Using Fe as an example again, magnetic splitting will result in a sextet, as shown in green in . Notice that there are 8 possible transitions shown, but only 6 occur. Due to the selection rule |ΔmI| = 0, 1, the transitions represented as black arrows do not occur.

Numerous schemes have been devised to synthesize magnetite nanoparticles (nMag). The different methods of nMag synthesis can be generally grouped as aqueous or non-aqueous according to the solvents used. Two of the most widely used and explored methods for nMag synthesis are the aqueous co-precipitation method and the non-aqueous thermal decomposition method.

The co-precipitation method of nMag synthesis consists of precipitation of Fe3O4 (nMag) by addition of a strong base to a solution of Fe2+ and Fe3+ salts in water. This method is very simple, inexpensive, and produces highly crystalline nMag. The general size of nMag produced by co-precipitation is in the 15 to 50 nm range and can be controlled by the reaction conditions; however, a large size distribution of nanoparticles is produced by this method. Aggregation of particles is also observed with aqueous methods.

The thermal decomposition method consists of the high-temperature thermal decomposition of an iron-oleate complex derived from an iron precursor in the presence of surfactant in a high boiling point organic solvent under an inert atmosphere. Many different solvents and surfactants are used in the many variations of this synthetic method. However, in almost every case, thermal decomposition of an iron-oleate complex forms highly crystalline nMag in the 5 to 40 nm range with a very small size distribution. The size of nMag produced is a function of reaction temperature, the iron to surfactant ratio, and the reaction time, and various methods achieve good size control by manipulation of these parameters. The nMag synthesized by organic methods is soluble in organic solvents because the nMag is stabilized by a surfactant surface coating, with the polar head group of the surfactant attached to and the hydrophobic tail extending away from the nMag ). An example of a thermal decomposition method is shown in .

Due to the potential applications of magnetite nanoparticles (Fe3O4, nMag), many methods have been devised for its synthesis. However, stoichiometric Fe3O4 is not always achieved by different synthetic methods. B-site vacancies introduced into the cubic inverse spinel crystal structure of nMag result in nonstoichiometric iron oxide of the formula (Fe3+)A(Fe(1-3x)2+ Fe(1+2x)3+Øx)BO4, where Ø represents a B-site vacancy. The magnetic susceptibility, which is key to most nMag applications, decreases with increased B-site vacancy; hence the extent of B-site vacancy is important.
The very high sensitivity of the Mössbauer spectrum to the oxidation state and site occupancy of Fe3+ in cubic inverse spinel iron oxides makes Mössbauer spectroscopy valuable for addressing the issues of whether the product of a synthetic method is actually nMag and the extent of B-site vacancy.

As with most analyses, using multiple instrumental methods in conjunction is often helpful. This is exemplified by the use of XRD along with Mössbauer spectroscopy in the following analysis. shows the XRD results and Mössbauer spectra of "magnetite" samples prepared by Fe2+/Fe3+ co-precipitation (Mt025), hematite reduction by hydrogen (MtH2) and hematite reduction with coal (MtC). The XRD analysis shows MtH2 and Mt025 exhibiting only magnetite peaks, while MtC shows the presence of magnetite, maghemite, and hematite. This information becomes very useful when fitting peaks to the Mössbauer spectra because it gives a chemical basis for the peak-fitting parameters and helps to fit the peaks correctly.

Because the iron occupies two local environments, the A-site and the B-site, and two species (Fe2+ and Fe3+) occupy the B-site, one might expect the spectrum to be a combination of three sub-spectra; however, delocalization of electrons, or electron hopping, between Fe2+ and Fe3+ in the B-site causes the nuclei to sense an average valence in the B-site, and thus the spectra are fitted with two curves accordingly. This is most easily seen in the Mt025 spectrum. The two fitted curves correspond to Fe3+ in the A-site and mixed-valence Fe2.5+ in the B-site. The isomer shift of the fitted curves can be used to determine which curve corresponds to which valence. The isomer shift of the top fitted curve is reported to be 0.661 and that of the bottom fitted curve 0.274 (both relative to αFe); thus the top fitted curve corresponds to the less s-electron-dense Fe2.5+. The magnetic splitting is quite apparent. In each of the spectra, six peaks are present due to magnetic splitting of the nuclear energy states, as explained previously. Quadrupole splitting is not so apparent, but it is actually present in the spectra. The three peaks to the left of the center of a spectrum should be spaced the same as those to the right due to magnetic splitting alone, since the energy-level spacing between sublevels is equal. This is not the case in the above spectra, because the higher-energy I = 3/2 sublevels are split unevenly due to combined magnetic and quadrupole splitting interactions.

Once the peaks have been fitted appropriately, determination of the extent of B-site vacancy in (Fe3+)A(Fe(1-3x)2+ Fe(1+2x)3+Øx)BO4 is a relatively simple matter. All one has to do to determine the number of vacancies (x) is solve the equation:

\[ \frac{RA_{B}}{RA_{A}} = \frac{2-6x}{1+5x} \label{3} \]

where RAB or RAA is the relative area,

\[ \frac{Area\ of\ A\ or\ B\ site\ curve}{Area\ of\ both\ curves} \label{4} \]

of the curve for the B or A site, respectively.

The reasoning for this equation is as follows. Taking into account that the mixed-valence Fe2.5+ curve is a result of paired interaction between Fe2+ and Fe3+, the nonstoichiometric chemical formula is (Fe3+)A(Fe(1-3x)2+Fe(1+2x)3+Øx)BO4. The relative intensity (or relative area) of the Fe-A and Fe-B curves is very sensitive to stoichiometry because vacancies in the B-site reduce the Fe-B curve and increase the Fe-A curve intensities. This is due to the 5x unpaired Fe3+ adding to the intensity of the Fe-A curve rather than the Fe-B curve. Since the relative area is directly proportional to the number of Fe nuclei contributing to the spectrum, the ratio of the relative areas is equal to the stoichiometric ratio of Fe2.5+ to Fe3+, which yields the above formula.
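Solving \ref{3} for x from the fitted relative areas is simple enough to script. A minimal sketch is shown below; it uses the relation above and the MtH2 area ratio quoted in the worked example that follows, and the function name and printout are illustrative only.

```python
def b_site_vacancy(ra_b_over_ra_a: float) -> float:
    """Solve RA_B/RA_A = (2 - 6x)/(1 + 5x) for the vacancy parameter x."""
    r = ra_b_over_ra_a
    return (2 - r) / (6 + 5 * r)

x = b_site_vacancy(1.89)          # area ratio reported for the MtH2 sample
print(f"x = {x:.4f}")             # ~0.0071, i.e. nearly stoichiometric Fe3O4
print(f"Fe(2.5+) pairs: {2 - 6 * x:.3f}, unpaired Fe(3+): {5 * x:.3f}, vacancies: {x:.3f}")
```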
Example calculation: for MtH2, RAB/RAA = 1.89. Starting from

\[ \frac{RA_{B}}{RA_{A}} = \frac{2-6x}{1+5x} \label{5} \]

and solving for x yields

\[ x=\frac{2-\frac{RA_{B}}{RA_{A}}}{5 \frac{RA_{B}}{RA_{A}}\ +\ 6} \label{6} \]

so x ≈ 0.007. Plugging x into the nonstoichiometric iron oxide formula yields (Fe3+)A(Fe0.9792+ Fe1.0143+ Ø0.007)BO4, which is very close to stoichiometric; equivalently, about 1.957 Fe per formula unit contribute to the mixed-valence Fe2.5+ sub-spectrum and about 0.036 to the Fe3+ sub-spectrum.

Magnetite (Fe3O4) nanoparticles (n-Mag) are nanometer sized, superparamagnetic, have high saturation magnetization, high magnetic susceptibility, and low toxicity. These properties could be utilized for many possible applications; hence, n-Mag has attracted much attention in the scientific community. Some of the potential applications include drug delivery, hyperthermia agents, MRI contrast agents, cell labeling, and cell separation, to name a few.

The crystal structure of n-Mag is cubic inverse spinel, with Fe3+ cations occupying the interstitial tetrahedral sites (A) and Fe3+ along with Fe2+ occupying the interstitial octahedral sites (B) of an FCC lattice of O2-. Including the site occupation and charge of Fe, the n-Mag chemical formula can be written (Fe3+)A(Fe2+Fe3+)BO4. Non-stoichiometric iron oxide results from B-site vacancies in the crystal structure. To maintain balanced charge and take into account the degree of B-site vacancy, the iron oxide formula is written (Fe3+)A(Fe(1-3x)2+ Fe(1+2x)3+Øx)BO4, where Ø represents a B-site vacancy. The extent of B-site vacancy has a significant effect on the magnetic properties of iron oxide, and in the synthesis of n-Mag stoichiometric iron oxide is not guaranteed; therefore, B-site vacancy warrants attention in iron oxide characterization, and can be addressed using Mössbauer spectroscopy.

This page titled 4.6: Mössbauer Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.7: NMR Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.07%3A_NMR_Spectroscopy
Nuclear magnetic resonance spectroscopy (NMR) is a widely used and powerful method that takes advantage of the magnetic properties of certain nuclei. The basic principle behind NMR is that some nuclei exist in specific nuclear spin states when exposed to an external magnetic field. NMR observes transitions between these spin states that are specific to the particular nuclei in question, as well as that nuclei's chemical environment. However, this only applies to nuclei whose spin, I, is not equal to 0, so nuclei where I = 0 are ‘invisible’ to NMR spectroscopy. These properties have led to NMR being used to identify molecular structures, monitor reactions, study metabolism in cells, and is used in medicine, biochemistry, physics, industry, and almost every imaginable branch of science.The chemical theory that underlies NMR spectroscopy depends on the intrinsic spin of the nucleus involved, described by the quantum number S. Nuclei with a non-zero spin are always associated with a non-zero magnetic moment, as described by Equation \ref{1}, where μ is the magnetic moment, \(S\) is the spin, and γ is always non-zero. It is this magnetic moment that allows for NMR to be used; therefore nuclei whose quantum spin is zero cannot be measured using NMR. Almost all isotopes that have both an even number of protons and neutrons have no magnetic moment, and cannot be measured using NMR.\[ \mu =\ \gamma \cdot S \label{1} \]In the presence of an external magnetic field (B) for a nuclei with a spin I = 1/2, there are two spin states present of +1/2 and -1/2. The difference in energy between these two states at a specific external magnetic field (Bx) are given by Equation \ref{2}, and are shown in where E is energy, I is the spin of the nuclei, and μ is the magnetic moment of the specific nuclei being analyzed. The difference in energy shown is always extremely small, so for NMR strong magnetic fields are required to further separate the two energy states. At the applied magnetic fields used for NMR, most magnetic resonance frequencies tend to fall in the radio frequency range.\[ E\ =\ \mu \cdot B_{x} / I \label{2} \]The reason NMR can differentiate between different elements and isotopes is due to the fact that each specific nuclide will only absorb at a very specific frequency. This specificity means that NMR can generally detect one isotope at a time, and this results in different types of NMR: such as 1H NMR, 13C NMR, and 31P NMR, to name only a few.The subsequent absorbed frequency of any type of nuclei is not always constant, since electrons surrounding a nucleus can result in an effect called nuclear shielding, where the magnetic field at the nucleus is changed (usually lowered) because of the surrounding electron environment. This differentiation of a particular nucleus based upon its electronic (chemical) environment allows NMR be used to identify structure. Since nuclei of the same type in different electron environments will be more or less shielded than another, the difference in their environment (as observed by a difference in the surrounding magnetic field) is defined as the chemical shift.An example of an NMR spectrometer is given in . NMR spectroscopy works by varying the machine’s emitted frequency over a small range while the sample is inside a constant magnetic field. Most of the magnets used in NMR machines to create the magnetic field range from 6 to 24 T. 
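As a point of reference, the resonance frequency scales linearly with the applied field through ν = γB0/2π. The short sketch below uses the 1H gyromagnetic ratio (γ/2π ≈ 42.58 MHz T⁻¹); the field strengths are simply representative magnet values, not ones taken from the text.

```python
# 1H resonance (Larmor) frequency at representative NMR field strengths
GAMMA_OVER_2PI_1H = 42.577   # MHz per tesla

def larmor_mhz(b0_tesla: float) -> float:
    """1H resonance frequency (MHz) at a given field B0."""
    return GAMMA_OVER_2PI_1H * b0_tesla

for b0 in (7.05, 9.4, 14.1, 23.5):
    print(f"{b0:>5.2f} T  ->  {larmor_mhz(b0):6.1f} MHz")
# 7.05 T ~ 300 MHz, 9.4 T ~ 400 MHz, 14.1 T ~ 600 MHz, 23.5 T ~ 1000 MHz
```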
The sample is placed within the magnet and surrounded by superconducting coils, and is then subjected to a frequency from the radio wave source. A detector then interprets the results and sends them to the main console.

The different local chemical environments surrounding any particular nuclei cause them to resonate at slightly different frequencies. This is a result of a nucleus being more or less shielded than another. This is called the chemical shift (δ). One factor that affects chemical shift is the changing of electron density around a nucleus, such as a bond to an electronegative group. Hydrogen bonding also changes the electron density in 1H NMR, causing a larger shift. These frequency shifts are minuscule in comparison to the fundamental NMR frequency differences, on a scale of Hz as compared to MHz. For this reason chemical shifts (δ) are described by the unit ppm on an NMR spectrum, \ref{3}, where Href = the resonance frequency of the reference, Hsub = the resonance frequency of the substance, and Hmachine = the operating frequency of the spectrometer.

\[ \delta \ =\ (\frac{H_{sub}-H_{ref}}{H_{machine}})\ \times 10^{6} \label{3} \]

Since the chemical shift (δ in ppm) is reported as a relative difference from some reference frequency, a reference compound is required. In 1H and 13C NMR, for example, tetramethylsilane (TMS, Si(CH3)4) is used as the reference. Chemical shifts can be used to identify structural properties in a molecule based on our understanding of different chemical environments. Some examples of where different chemical environments fall on a 1H NMR spectrum are given in Table \(\PageIndex{1}\).

In , a 1H NMR spectrum of ethanol, we can see a clear example of chemical shift. There are three sets of peaks that represent the six hydrogens of ethanol (C2H6O). The presence of three sets of peaks means that there are three different chemical environments that the hydrogens can be found in: the terminal methyl (CH3) carbon's three hydrogens, the two hydrogens on the methylene (CH2) carbon adjacent to the oxygen, and the single hydrogen on the oxygen of the alcohol group (OH). Once we cover spin-spin coupling, we will have the tools available to match these groups of hydrogens to their respective peaks.

Another useful property that allows NMR spectra to give structural information is called spin-spin coupling, which is caused by spin coupling between NMR active nuclei that are not chemically identical. Different spin states interact through chemical bonds in a molecule to give rise to this coupling, which occurs when a nucleus being examined is disturbed or influenced by a nearby nuclear spin. In NMR spectra, this effect is shown through peak splitting that can give direct information concerning the connectivity of atoms in a molecule. Nuclei which share the same chemical shift do not split each other's signals.

In general, neighboring NMR active nuclei three or fewer bonds away lead to this splitting. The splitting is described by the relationship where n neighboring nuclei result in n + 1 peaks, and the area distribution can be seen in Pascal's triangle ). However, being adjacent to a strongly electronegative group such as oxygen can prevent spin-spin coupling. For example, a doublet has two peaks with an intensity ratio of 1:1, while a quartet has four peaks with relative intensities 1:3:3:1.
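The n + 1 rule and the Pascal's-triangle intensities described above are easy to generate programmatically; a minimal sketch (the binomial coefficients give the relative peak areas for equivalent I = 1/2 neighbors):

```python
from math import comb

def multiplet(n_neighbors):
    """Relative intensities of the n + 1 peaks produced by n equivalent I = 1/2 neighbors."""
    return [comb(n_neighbors, k) for k in range(n_neighbors + 1)]

for n in range(4):
    peaks = multiplet(n)
    print(f"{n} neighbors -> {len(peaks)} peaks, intensities {peaks}")
# 0 -> singlet [1]; 1 -> doublet [1, 1]; 2 -> triplet [1, 2, 1]; 3 -> quartet [1, 3, 3, 1]
```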
The magnitude of the observed spin splitting depends on many factors and is given by the coupling constant J, which is in units of Hz. Referring again to , we have a good example of how spin-spin coupling manifests itself in an NMR spectrum. In the spectrum we have three sets of peaks: a quartet, a triplet, and a singlet. If we start with the terminal carbon's hydrogens in ethanol, using the n + 1 rule we see that they have two hydrogens within three bonds (i.e., H-C-C-H), leading us to identify the triplet as the peaks for the terminal carbon's hydrogens. Looking next at the two central hydrogens, they have four NMR active nuclei within three bonds (i.e., H-C-C-H), but there is no quintet on the spectrum as might be expected. This can be explained by the fact that the single hydrogen bonded to the oxygen does not participate in spin-spin coupling, so it must be the singlet and the two central hydrogens form the quartet. We have now interpreted the NMR spectrum of ethanol by identifying which nuclei correspond to each peak.

Mainly useful for proton NMR, the size of the peaks in the NMR spectrum can give information concerning the number of nuclei that gave rise to each peak. This is done by measuring each peak's area using integration. Yet even without using integration the size of different peaks can still give relative information about the number of nuclei. For example, a singlet associated with three hydrogen atoms would be about 3 times larger than a singlet associated with a single hydrogen atom. This can also be seen in the example in . If we integrated the area under each peak, we would find that the ratios of the areas of the quartet, singlet, and triplet are approximately 2:1:3, respectively.

Despite all of its upsides, there are several limitations that can make NMR analysis difficult or impossible in certain situations. One such issue is that the desired isotope of an element that is needed for NMR analysis may have little or no natural abundance. For example, the natural abundance of 13C, the active isotope for carbon NMR, is about 1.1%, which still works well for analysis. However, in the case of oxygen the active isotope for NMR is 17O, which is only 0.035% naturally abundant. This means that there are certain elements that can essentially never be measured through NMR.

Another problem is that some elements have an extremely low magnetic moment, μ. The sensitivity of NMR machines is based on the magnetic moment of the specific element, but if the magnetic moment is too low it can be very difficult to obtain an NMR spectrum with enough peak intensity to properly analyze.

Nuclear magnetic resonance (NMR) signals arise when nuclei absorb a certain radio frequency and are excited from one spin state to another. The exact frequency of electromagnetic radiation that the nucleus absorbs depends on the magnetic environment around the nucleus. This magnetic environment is controlled mostly by the applied field, but is also affected by the magnetic moments of nearby nuclei. Nuclei can be in one of many spin states, giving rise to several possible magnetic environments for the observed nucleus to resonate in. This causes the NMR signal for a nucleus to show up as a multiplet rather than a single peak.

When nuclei have a spin of I = 1/2 (as with protons), they can have two possible magnetic moments and thus split a single expected NMR signal into two signals.
When more than one nucleus affects the magnetic environment of the nucleus being examined, complex multiplets form as each nucleus splits the signal into two additional peaks. If those nuclei are magnetically equivalent to each other, then some of the signals overlap to form peaks with different relative intensities. The multiplet pattern can be predicted by Pascal's triangle ), looking at the nth row, where n = the number of nuclei equivalent to each other but not equivalent to the one being examined. In this case, the number of peaks in the multiplet is equal to n + 1.

When there is more than one type of nucleus splitting an NMR signal, then the signal changes from a multiplet to a group of multiplets ). This is caused by the different coupling constants associated with different types of nuclei. Each nucleus splits the NMR signal by a different width, so the peaks no longer overlap to form peaks with different relative intensities.

When nuclei have I > 1/2, they have more than two possible magnetic moments and thus split NMR signals into more than two peaks. The number of peaks expected is 2I + 1, corresponding to the number of possible orientations of the magnetic moment. In reality, however, some of these peaks may be obscured due to quadrupolar relaxation. As a result, most NMR focuses on I = 1/2 nuclei such as 1H, 13C, and 31P.

Multiplets are centered around the chemical shift expected for a nucleus had its signal not been split. The total area of a multiplet corresponds to the number of nuclei resonating at the given frequency. Looking at actual molecules raises questions about which nuclei can cause splitting to occur. First of all, it is important to realize that only nuclei with I ≠ 0 will show up in an NMR spectrum. When I = 0, there is only one possible spin state and obviously the nucleus cannot flip between states. Since the NMR signal is based on the absorption of radio frequency as a nucleus transitions from one spin state to another, I = 0 nuclei do not show up on NMR. In addition, they do not cause splitting of other NMR signals because they only have one possible magnetic moment. This greatly simplifies NMR spectra, in particular those of organic and organometallic compounds, since the majority of carbon atoms are 12C, which has I = 0.

For a nucleus to cause splitting, it must be close enough to the nucleus being observed to affect its magnetic environment. The splitting technically occurs through bonds, not through space, so as a general rule, only nuclei separated by three or fewer bonds can split each other. However, even if a nucleus is close enough to another, it may not cause splitting. For splitting to occur, the nuclei must also be non-equivalent. To see how these factors affect real NMR spectra, consider the spectrum of chloroethane ).

Notice that there are two groups of peaks in the spectrum of chloroethane, a triplet and a quartet. These arise from the two different types of I ≠ 0 nuclei in the molecule, the protons on the methyl and methylene groups. The multiplet corresponding to the CH3 protons has a relative integration (peak area) of three (one for each proton) and is split by the two methylene protons (n = 2), which results in n + 1 = 3 peaks, i.e., a triplet. The multiplet corresponding to the CH2 protons has an integration of two (one for each proton) and is split by the three methyl protons (n = 3), which results in n + 1 = 4 peaks, i.e., a quartet.
Each group of nuclei splits the other, so in this way, they are coupled.

The difference (in Hz) between the peaks of a multiplet is called the coupling constant. It is particular to the types of nuclei that give rise to the multiplet, and is independent of the field strength of the NMR instrument used. For this reason, the coupling constant is given in Hz, not ppm. The coupling constants for many common pairs of nuclei are known (Table \(\PageIndex{3}\)), and this can help when interpreting spectra.

Coupling constants are sometimes written nJ to denote the number of bonds (n) between the coupled nuclei. Alternatively, they are written as J(H-H) or JHH to indicate the coupling is between two hydrogen atoms. Thus, a coupling constant between a phosphorus atom and a hydrogen would be written as J(P-H) or JPH. Coupling constants are calculated empirically by measuring the distance between the peaks of a multiplet, and are expressed in Hz.

Coupling constants may be calculated from spectra using frequency or chemical shift data. Consider the spectrum of chloroethane shown in and the frequencies of the peaks (collected on a 60 MHz spectrometer) given in Table \(\PageIndex{4}\).

To determine the coupling constant for a multiplet (in this case, the quartet in ), the difference in frequency (ν) between each pair of adjacent peaks is calculated, and the average of these values provides the coupling constant in Hz. For example, using the data from Table \(\PageIndex{4}\):

Frequency of peak c - frequency of peak d = 212.71 Hz - 205.65 Hz = 7.06 Hz

Frequency of peak b - frequency of peak c = 219.77 Hz – 212.71 Hz = 7.06 Hz

Frequency of peak a - frequency of peak b = 226.83 Hz – 219.77 Hz = 7.06 Hz

Average: 7.06 Hz

∴ J(H-H) = 7.06 Hz

In this case the difference in frequency between each set of peaks is the same and therefore an average determination is not strictly necessary; in fact, for first-order spectra they should be the same. However, in some cases the peak-picking programs used will result in small variations, and thus it is necessary to take the trouble to calculate a true average.

To determine the coupling constant of the same multiplet using chemical shift data (δ), calculate the difference in ppm between each pair of adjacent peaks and average the values. Then multiply the average difference by the spectrometer frequency (in this case 60 MHz) in order to convert the value from ppm to Hz:

Chemical shift of peak c - chemical shift of peak d = 3.5452 ppm – 3.4275 ppm = 0.1177 ppm

Chemical shift of peak b - chemical shift of peak c = 3.6628 ppm – 3.5452 ppm = 0.1176 ppm

Chemical shift of peak a - chemical shift of peak b = 3.7805 ppm – 3.6628 ppm = 0.1177 ppm

Average: 0.1176 ppm

Average difference in ppm x frequency of the NMR spectrometer = 0.1176 ppm x 60 MHz = 7.056 Hz

∴ J(H-H) = 7.06 Hz

Calculate the coupling constant for the triplet in the spectrum of chloroethane ) using the data from Table \(\PageIndex{5}\).

Using frequency data:

Frequency of peak f - frequency of peak g = 74.82 Hz – 67.76 Hz = 7.06 Hz

Frequency of peak e - frequency of peak f = 81.88 Hz – 74.82 Hz = 7.06 Hz

Average = 7.06 Hz

∴ J(H-H) = 7.06 Hz

Alternatively, using chemical shift data:

Chemical shift of peak f - chemical shift of peak g = 1.2470 ppm – 1.1293 ppm = 0.1177 ppm

Chemical shift of peak e - chemical shift of peak f = 1.3646 ppm – 1.2470 ppm = 0.1176 ppm

Average = 0.11765 ppm

0.11765 ppm x 60 MHz = 7.059 Hz

∴ J(H-H) = 7.06 Hz

Notice that the coupling constant for this multiplet is the same as that in the example.
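The bookkeeping in the example and exercise above is easily condensed into a short script. The sketch below reuses the quartet positions quoted in the text and assumes the same 60 MHz spectrometer; both routes return approximately 7.06 Hz.

```python
# Coupling constant from adjacent peak positions of a first-order multiplet
quartet_hz = [226.83, 219.77, 212.71, 205.65]      # peaks a-d of the CH2 quartet, Hz
quartet_ppm = [3.7805, 3.6628, 3.5452, 3.4275]     # the same peaks in ppm
SPECTROMETER_MHZ = 60                               # 60 MHz instrument

def coupling_from_hz(peaks_hz):
    """Average spacing (Hz) between adjacent peaks."""
    gaps = [a - b for a, b in zip(peaks_hz, peaks_hz[1:])]
    return sum(gaps) / len(gaps)

def coupling_from_ppm(peaks_ppm, spectrometer_mhz):
    """Average spacing in ppm converted to Hz (1 ppm corresponds to 60 Hz on a 60 MHz instrument)."""
    return coupling_from_hz(peaks_ppm) * spectrometer_mhz

print(coupling_from_hz(quartet_hz))                       # ~7.06 Hz
print(coupling_from_ppm(quartet_ppm, SPECTROMETER_MHZ))   # ~7.06 Hz
```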
This is to be expected since the two multiplets are coupled with each other.When coupled nuclei have similar chemical shifts (more specifically, when Δν is similar in magnitude to J), second-order coupling or strong coupling can occur. In its most basic form, second-order coupling results in “roofing” ). The coupled multiplets point to or lean toward each other, and the effect becomes more noticeable as Δν decreases. The multiplets also become off-centered with second-order coupling. The midpoint between the peaks no longer corresponds exactly to the chemical shift.In more drastic cases of strong coupling (when Δν ≈ J), multiplets can merge to create deceptively simple patterns. Or, if more than two spins are involved, entirely new peaks can appear, making it difficult to interpret the spectrum manually. Second-order coupling can often be converted into first-order coupling by using a spectrometer with a higher field strength. This works by altering the Δν (which is dependent on the field strength), while J (which is independent of the field strength) stays the same.Phosphorus-31 nuclear magnetic resonance (31P NMR) is conceptually the same as proton (1H) NMR. The 31P nucleus is useful in NMR spectroscopy due to its relatively high gyromagnetic ratio (17.235 MHzT-1). For comparison, the gyromagnetic ratios of 1H and 13C are (42.576 MHz T-1) and (10.705 MHz T-1), respectively. Furthermore, 31P has a 100% natural isotopic abundance. Like the 1H nucleus, the 31P nucleus has a nuclear spin of 1/2 which makes spectra relatively easy to interpret. 31P NMR is an excellent technique for studying phosphorus containing compounds, such as organic compounds and metal coordination complexes.There are certain significant differences between 1H and 31P NMR. While 1H NMR spectra is referenced to tetramethylsilane [Si(CH3)4], the chemical shifts in 31P NMR are typically reported relative to 85% phosphoric acid (δ = 0 ppm), which is used as an external standard due to its reactivity. However, trimethyl phosphite, P(OCH3)3, is also used since unlike phosphoric acid its shift (δ = 140 ppm) is not dependent on concentration or pH. As in 1H NMR, positive chemical shifts correspond to a downfield shift from the standard. However, prior to the mid-1970s, the convention was the opposite. As a result, older texts and papers report shifts using the opposite sign. Chemical shifts in 31P NMR commonly depend on the concentration of the sample, the solvent used, and the presence of other compounds. This is because the external standard does not take into account the bulk properties of the sample. As a result, reported chemical shifts for the same compound could vary by 1 ppm or more, especially for phosphate groups (P=O). 31P NMR spectra are often recorded with all proton signals decoupled, i.e., 31P-{1H}, as is done with 13C NMR. This gives rise to single, sharp signals per unique 31P nucleus. Herein, we will consider both coupled and decoupled spectra.As in 1H NMR, phosphorus signals occur at different frequencies depending on the electron environment of each phosphorus nucleus . In this section we will study a few examples of phosphorus compounds with varying chemical shifts and coupling to other nuclei.Consider the structure of 2,6,7-trioxa-1,4-diphosphabicyclo[2.2.2]octane [Pα(OCH2)3Pβ] shown in . The subscripts α and β are simply used to differentiate the two phosphorus nuclei. 
According to Table 1, we expect the shift of Pα to be downfield of the phosphoric acid standard, roughly around 125 ppm to 140 ppm, and the shift of Pβ to be upfield of the standard, between -5 ppm and -70 ppm. In the decoupled spectrum shown in , we can assign the phosphorus shift at 90.0 ppm to Pα and the shift at -67.0 ppm to Pβ. shows the coupling of the phosphorus signals to the protons in the compound. We expect a stronger coupling for Pβ because there are only two bonds separating Pβ from H, whereas three bonds separate Pα from H (JPCH > JPOCH). Indeed, JPCH = 8.9 Hz and JPOCH = 2.6 Hz, corroborating our peak assignments above. Finally, shows the 1H spectrum of Pα(OCH2)3Pβ ), which shows a doublet of doublets for the proton signal due to coupling to the two phosphorus nuclei.

As suggested by the data in , we can predict and observe changes in phosphorus chemical shift by changing the coordination of P. Thus, for the series of compounds with the structure shown in , the different chemical shifts corresponding to different phosphorus compounds are shown in Table \(\PageIndex{3}\).

19F NMR is very similar to 31P NMR in that 19F has spin 1/2 and is a 100% abundant isotope. As a result, 19F NMR is a great technique for fluorine-containing compounds and allows observation of P-F coupling. The coupled 31P and 19F NMR spectra of ethoxybis(trifluoromethyl)phosphine, P(CF3)2(OCH2CH3), are shown in . It is worth noting the splitting due to JPCF = 86.6 Hz.

Consider the structure of dimethyl phosphonate, OPH(OCH3)2, shown in . As the phosphorus nucleus is coupled to a hydrogen nucleus bound directly to it, that is, a coupling separated by a single bond, we expect JPH to be very high. Indeed, the separation is so large (715 Hz) that one could easily mistake the split peak for two peaks corresponding to two different phosphorus nuclei.

This strong coupling could also lead us astray when we consider the 1H NMR spectrum of dimethyl phosphonate ). Here we observe two very small peaks corresponding to the phosphine proton. The peaks are separated by such a large distance and are so small relative to the methoxy doublet (ratio of 1:1:12) that it would be easy to confuse them for an impurity. To assign the small doublet, we could decouple the phosphorus signal at 11 ppm, which will cause this peak to collapse into a singlet.

Unlike 13C NMR, which requires high sample concentrations due to the low isotopic abundance of 13C, 31P sample preparation is very similar to 1H sample preparation. As in other NMR experiments, a 31P NMR sample must be free of particulate matter. A reasonable concentration is 2-10 mg of sample dissolved in 0.6-1.0 mL of solvent. If needed, the solution can be filtered through a small plug of glass fiber. Note that any undissolved solid will not be analyzed in the NMR experiment. Unlike 1H NMR, however, the sample does not need to be dissolved in a deuterated solvent, since common solvents do not have 31P nuclei to contribute to the spectra. This is true, of course, only if a 1H NMR spectrum is not to be obtained from this sample. Being able to use non-deuterated solvents offers many advantages to 31P NMR, such as the simplicity of assaying purity and monitoring reactions, which will be discussed later.

Instrument operation will vary according to the instrumentation and software available. However, there are a few important aspects of instrument operation relevant to 31P NMR. The instrument probe, which excites nuclear spins and detects chemical shifts, must be set up appropriately for a 31P NMR experiment.
For an instrument with a multinuclear probe, it is a simple matter to access the NMR software and make the switch to a 31P experiment. This will select the appropriate frequency for 31P. For an instrument which has separate probes for different nuclei, it is imperative that one be trained by an expert user in changing the probes on the spectrometer.

Before running the NMR experiment, consider whether the 31P spectrum should include coupling to protons. Note that 31P spectra are typically reported with all protons decoupled, i.e., 31P-{1H}. This is usually the default setting for a 31P NMR experiment. To change the coupling setting, follow the instructions specific to your NMR instrument software.

As mentioned previously, chemical shifts in 31P NMR are reported relative to 85% phosphoric acid. This must be an external standard due to the high reactivity of phosphoric acid. One method for standardizing an experiment uses a coaxial tube inserted into the sample NMR tube ). The 85% H3PO4 signal will appear as part of the sample NMR spectrum and can thus be set to 0 ppm.

Another way to reference an NMR spectrum is to use an 85% H3PO4 standard sample. These can be prepared in the laboratory or purchased commercially. To allow for long-term use, these samples are typically vacuum sealed, as opposed to capped the way NMR samples typically are. The procedure for using a separate reference is as follows.

31P NMR spectroscopy gives rise to single sharp peaks that facilitate differentiating phosphorus-containing species, such as starting materials from products. For this reason, 31P NMR is a quick and simple technique for assaying sample purity. Beware, however, that a "clean" 31P spectrum does not necessarily suggest a pure compound, only a mixture free of phosphorus-containing contaminants.

31P NMR can also be used to determine the optical purity of a chiral sample. Adding an enantiomerically pure chiral agent to the mixture to form two different diastereomers will give rise to two unique chemical shifts in the 31P spectrum. The ratio of these peaks can then be compared to determine the optical purity.

As suggested in the previous section, 31P NMR can be used to monitor a reaction involving phosphorus compounds. Consider the reaction between a slight excess of organic diphosphine ligand and a nickel bis-cyclooctadiene, . The reaction can be followed by 31P NMR by simply taking a small aliquot from the reaction mixture and adding it to an NMR tube, filtering as needed. The sample is then used to acquire a 31P NMR spectrum and the procedure can be repeated at different reaction times. The data acquired for these experiments is found in . The change in 31P peak intensities can be used to monitor the reaction, which begins with a single signal at -4.40 ppm, corresponding to the free diphosphine ligand. After an hour, a new signal appears at 41.05 ppm, corresponding to the diphosphine nickel complex. The downfield peak grows relative to the upfield peak as the reaction proceeds. No change is observed between four and five hours, suggesting the conclusion of the reaction.

There are a number of advantages to using 31P rather than 1H NMR for reaction monitoring when it is available. That said, 31P NMR does not eliminate the need for 1H NMR characterization, as impurities lacking phosphorus will not appear in a 31P experiment.
However, at the completion of the reaction, both the crude and purified products can be easily analyzed by both 1H and 31P NMR spectroscopy.

One can measure the amount of epoxide on nanomaterials such as carbon nanotubes and fullerenes by monitoring a reaction involving phosphorus compounds in a similar manner to that described above. This technique uses the catalytic reaction of methyltrioxorhenium ). An epoxide reacts with methyltrioxorhenium to form a five-membered ring. In the presence of triphenylphosphine (PPh3), the catalyst is regenerated, forming an alkene and triphenylphosphine oxide (OPPh3). The same reaction can be applied to carbon nanostructures and used to quantify the amount of epoxide on the nanomaterial. illustrates the quantification of epoxide on a carbon nanotube.

Because the amount of initial PPh3 used in the reaction is known, the relative amounts of PPh3 and OPPh3 can be used to stoichiometrically determine the amount of epoxide on the nanotube. 31P NMR spectroscopy is used to determine the relative amounts of PPh3 and OPPh3 ). The integration of the two 31P signals is used to quantify the amount of epoxide on the nanotube according to \ref{4}.

\[ Moles\ of\ Epoxide\ =\ \frac{area\ of\ OPPh_{3}\ peak}{area\ of\ PPh_{3}\ peak} \times \ moles\ PPh_{3} \label{4} \]

Thus, from a known quantity of PPh3, one can find the amount of OPPh3 formed and relate it stoichiometrically to the amount of epoxide on the nanotube. Not only does this experiment allow for such quantification, it is also unaffected by the presence of the many different species present in the experiment. This is because the compounds of interest, PPh3 and OPPh3, are the only ones that are characterized by 31P NMR spectroscopy.

31P NMR spectroscopy is a simple technique that can be used alongside 1H NMR to characterize phosphorus-containing compounds. When used on its own, the biggest difference from 1H NMR is that there is no need to utilize deuterated solvents. This advantage leads to many different applications of 31P NMR, such as assaying purity and monitoring reactions.

Nuclear magnetic resonance (NMR) spectroscopy is a very useful tool used widely in modern organic chemistry. It exploits the differences in the magnetic properties of different nuclei in a molecule to yield information about the chemical environment of the nuclei, and subsequently the molecule, in question. NMR data are often easier for chemists to interpret than, say, the more cryptic data obtained from ultraviolet or infrared spectra, because the chemical shifts that are characteristic of different chemical environments and the multiplicity of the peaks fit well with our conception of the way molecules are structured.

Using NMR spectroscopy, we can differentiate between constitutional isomers, stereoisomers, and enantiomers. The latter two of these three classifications require close examination of the differences in NMR spectra associated with changes in chemical environment due to symmetry differences; however, the differentiation of constitutional isomers can be easily obtained.

Nuclei possess both charge and spin, or angular momentum, and from basic physics we know that a spinning charge generates a magnetic moment. The specific nature of this magnetic moment is the main concern of NMR spectroscopy.

For proton NMR, the local chemical environment makes different protons in a molecule resonate at different frequencies.
This difference in resonance frequencies can be converted into a chemical shift (δ) for each nucleus being studied. Because each chemical environment results in a different chemical shift, one can easily assign peaks in the NMR data to specific functional groups based upon precedent. Precedents for chemical shifts can be found in any number of basic NMR texts. For example, shows the spectra of ethyl formate and benzyl acetate. In the lower spectrum, benzyl acetate, notice peaks at δ = 1.3, 4.2, and 8.0 ppm characteristic of the primary, secondary, and aromatic protons, respectively, present in the molecule. In the spectrum of ethyl formate b), notice that the number of peaks is the same as that of benzyl acetate a); however, the multiplicity of the peaks and their shifts are very different.

The difference between these two spectra is due to spin-spin coupling between protons on adjacent carbons (vicinal coupling). Spin-spin coupling is the result of magnetic interaction between individual protons transmitted by the bonding electrons between the protons. This spin-spin coupling results in the peak splitting we see in the NMR data. One of the benefits of NMR spectroscopy is its sensitivity to very slight changes in chemical environment.

Based on their definition, diastereomers are stereoisomers that are not mirror images of each other and are not superimposable. In general, diastereomers have differing reactivity and physical properties. One common example is the difference between threose and erythrose . As one can see from , these chemicals are very similar, each having the molecular formula C4H8O4. One may wonder: how are these slight differences in chemical structure represented in NMR? To answer this question, we must look at the Newman projections for a molecule of the general structure . One can easily notice that the two protons represented are always located in different chemical environments. This is true because the R group makes the proton resonance frequencies v1(I) ≠ v2(III), v2(I) ≠ v1(II), and v2(II) ≠ v1(III). Thus, diastereomers have different vicinal proton-proton couplings and the resulting chemical shifts can be used to identify the isomeric makeup of the sample.

Enantiomers are compounds with a chiral center; in other words, they are non-superimposable mirror images. Unlike diastereomers, the only difference between enantiomers is their interaction with polarized light. Unfortunately, this indistinguishability extends to the NMR spectra of racemates. Thus, in order to differentiate between enantiomers, we must make use of an optically active agent, also called a chiral derivatizing agent (CDA). The first CDA was α-methoxy-α-(trifluoromethyl)phenylacetic acid (MTPA, also known as Mosher's acid) ).

Now, many CDAs exist and are readily available. It should also be noted that CDA development is a current area of active research. In simple terms, one can think of the CDA turning an enantiomeric mixture into a mixture of diastereomeric complexes, producing doublets where each half of the doublet corresponds to one diastereomer, which we already know how to analyze. The resultant peak splitting in the NMR spectra due to the diastereomeric interaction can easily determine optical purity. In order to do this, one may simply integrate the peaks corresponding to the different enantiomers, thus yielding the optical purity of incompletely resolved racemates.
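In practice, the integration step just described reduces to comparing the two diastereomeric peak areas; a convenient figure of merit is the enantiomeric excess. A minimal sketch with hypothetical integrals is shown below.

```python
def enantiomeric_excess(area_major: float, area_minor: float) -> float:
    """Enantiomeric excess (%) from the integrals of the two diastereomeric signals."""
    return 100 * (area_major - area_minor) / (area_major + area_minor)

# Hypothetical integrals of the two split peaks after adding the CDA
print(enantiomeric_excess(92.0, 8.0))   # 84% ee
```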
One thing of note when performing this experiment is that this interaction between the enantiomeric compounds and the solvent, and thus the magnitude of the splitting, depends upon the asymmetry or chirality of the solvent, the intermolecular interaction between the compound and the solvent, and thus the temperature. Thus, it is helpful to compare the spectra of the enantiomer-CDA mixture with that of the pure enantiomer so that changes in chemical shift can be easily noted.NMR stands for nuclear magnetic resonance and functions as a powerful tool for chemical characterization. Even though NMR is used mainly for liquids and solutions, technology has progressed to where NMR of solids can be obtained with ease. Aptly named as solid state NMR, the expansion of usable phases has invariably increased our ability to identify chemical compounds. The reason behind difficulties using the solid state lie in the fact that solids are never uniform. When put through a standard NMR, line broadening interactions cannot be removed by rapid molecular motions, which results in unwieldy wide lines which provide little to no useful information. The difference is so staggering that lines broaden by hundreds to thousands of hertz as opposed to less than 0.1 Hz in solution when using an I = 1/2 spin nucleus.A process known as magic angle spinning (MAS), where the sample is tilted at a specific angle, is used in order to overcome line broadening interactions and achieve usable peak resolutions. In order to understand solid state NMR, its history, operating chemical and mathematical principles, and distinctions from gas phase/solution NMR will be explained.The first notable contribution to what we know today as NMR was Wolfgang Pauli’s ) prediction of nuclear spin in 1926. In 1932 Otto Stern ) used molecular beams and detected nuclear magnetic moments.Four years later, Gorter performed the first NMR experiment with lithium fluoride (LiF) and hydrated potassium alum (K[Al(SO4)2]•12H2O) at low temperatures. Unfortunately, he was unable to characterize the molecules and the first successful NMR for a solution of water was taken in 1945 by Felix Bloch ). In the same year, Edward Mills Purcell ) managed the first successful NMR for the solid paraffin. Continuing their research, Bloch obtained the 1H NMR of ethanol and Purcell obtained that of paraffin in 1949. In the same year, the chemical significance of chemical shifts was discovered. Finally, high resolution solid state NMR was made possible in 1958 by the discovery of magic angle spinning.NMR spectroscopy works by measuring the nuclear shielding, which can also be seen as the electron density, of a particular element. Nuclear shielding is affected by the chemical environment, as different neighboring atoms will have different effects on nuclear shielding, as electronegative atoms will tend to decrease shielding and vice versa. NMR requires the elements analyzed to have a spin state greater than zero. Commonly used elements are 1H, 13C, and 29Si. Once inside the NMR machine, the presence of a magnetic field splits the spin states ).From we see that a spin state of 1/2 is split into two spin states. As spin state value increases, so does the number of spin states. A spin of 1 will have three spin states, 3/2 will have four spin states, and so on. However, higher spin states increases the difficulty to accurately read NMR results due to confounding peaks and decreased resolution, so spin states of ½ are generally preferred. 
The energy E, or radiofrequency, shown in can be described by \ref{5}, where μ is the magnetic moment, a property intrinsic to each particular element. This constant can be derived from \ref{6}, where γ is the gyromagnetic ratio, another element-dependent quantity, h is Planck's constant, and I is the spin.

\[ E\ =\ \mu B_{0}H_{0} \label{5} \]

\[ \mu \ =\ \gamma h (I(I + 1))^{1/2} \label{6} \]

In \ref{5}, E can be substituted with hν, leading to \ref{7}, which can be solved for the NMR resonance frequency (ν).

\[ h \nu \ =\ \mu B_{0}H_{0} \label{7} \]

Using the frequency (ν), the expected chemical shift δ may be computed using \ref{8}.

\[ \delta \ =\ \frac{(\nu _{observed} - \nu _{reference})}{\nu _{spectrometer}} \label{8} \]

Delta (δ) is reported in ppm and gives the distance of a signal from a set reference. Delta is directly related to the chemical environment of the particular atom. At low field, or high delta, an atom is in an environment which induces less shielding than at high field, or low delta.

An NMR spectrometer can be divided into three main components: the workstation computer where one operates the instrument, the NMR spectrometer console, and the NMR magnet. A standard sample is inserted through the bore tube and pneumatically lowered into the magnet and NMR probe ).

The first layer inside the NMR is the liquid nitrogen jacket. Normally, this space is filled with liquid nitrogen at 77 K. The liquid nitrogen reservoir space is mostly above the magnet so that it can act as a less expensive refrigerant to block infrared radiation from reaching the liquid helium jacket.

The layer following the liquid nitrogen jacket is a 20 K radiation shield made of aluminum wrapped with alternating layers of aluminum foil and open-weave gauze. Its purpose is to block infrared radiation which the 77 K liquid nitrogen vessel was unable to eliminate, which increases the ability of the liquid helium to remain in the liquid phase due to its very low boiling point. The liquid helium vessel itself, the next layer, is made of stainless steel wrapped in a single layer of aluminum foil, acting once again as an infrared radiation shield. It is about 1.6 mm thick and kept at 4.2 K.

Inside the vessel and around the magnet is the aluminum baffle, which acts as another degree of infrared radiation protection as well as a layer of protection for the superconducting magnet from liquid helium reservoir fluctuations, especially during liquid helium refills. The significance is that superconducting magnets at low fields are not fully submerged in liquid helium, but higher-field superconducting magnets must keep the superconducting solenoid fully immersed in liquid helium. The vapor above the liquid itself is actually enough to maintain superconductivity of most magnets, but if the solenoid reaches a temperature above 10 K, the magnet quenches. During a quench, the solenoid exceeds its critical temperature for superconductivity and becomes resistive, generating heat. This heat, in turn, boils off the liquid helium. Therefore, a small opening at the very base of the baffle exists as a path for the liquid helium to reach the magnet surface so that during refills the magnet is protected from accidental quenching.

The most notable difference between solid samples and solution/gas samples in NMR spectroscopy is that molecules in solution rotate rapidly while those in a solid are fixed in a lattice.
An NMR instrument can be divided into three main components: the workstation computer where one operates the instrument, the NMR spectrometer console, and the NMR magnet. A standard sample is inserted through the bore tube and pneumatically lowered into the magnet and NMR probe.

The first layer inside the NMR magnet is the liquid nitrogen jacket. Normally, this space is filled with liquid nitrogen at 77 K. The liquid nitrogen reservoir space is mostly above the magnet so that it can act as a less expensive refrigerant to block infrared radiation from reaching the liquid helium jacket. The layer following the liquid nitrogen jacket is a 20 K radiation shield made of aluminum wrapped with alternating layers of aluminum foil and open-weave gauze. Its purpose is to block infrared radiation which the 77 K liquid nitrogen vessel was unable to eliminate, which increases the ability of the liquid helium to remain in the liquid phase given its very low boiling point. The liquid helium vessel itself, the next layer, is made of stainless steel wrapped in a single layer of aluminum foil, acting once again as an infrared radiation shield. It is about 1.6 mm thick and kept at 4.2 K.

Inside the vessel and around the magnet is the aluminum baffle, which acts as another degree of infrared radiation protection as well as a layer of protection for the superconducting magnet from liquid helium reservoir fluctuations, especially during liquid helium refills. The significance is that superconducting magnets at low fields are not fully submerged in liquid helium, but higher-field superconducting magnets must keep the superconducting solenoid fully immersed in liquid helium. The vapor above the liquid itself is actually enough to maintain superconductivity of most magnets, but if it reaches a temperature above 10 K, the magnet quenches. During a quench, the solenoid exceeds its critical temperature for superconductivity and becomes resistive, generating heat. This heat, in turn, boils off the liquid helium. Therefore, a small opening at the very base of the baffle exists as a path for the liquid helium to reach the magnet surface so that during refills the magnet is protected from accidental quenching.

The most notable difference between solid samples and solution or gas samples in NMR spectroscopy is that molecules in solution rotate rapidly while those in a solid are fixed in a lattice. Different peak readings will be produced depending on how the molecules are oriented in the magnetic field, because chemical shielding depends upon the orientation of a molecule, causing chemical shift anisotropy. Therefore, the effect of chemical shielding also depends upon the orientation of the molecule with respect to the spectrometer. These orientation-dependent contributions are averaged out in gases and solutions because of their randomized molecular movement, but they become a serious issue for the fixed molecules observed in solid samples. If the chemical shielding cannot be determined accurately, neither can the chemical shifts (δ).

Another issue with solid samples is dipolar interactions, which can be very large in solids and can produce linewidths of tens to hundreds of kilohertz. Dipolar interactions are tensor quantities, whose values depend on the orientation and placement of a molecule with respect to its surroundings. Once again the issue goes back to the lattice structure of solids, in which the molecules occupy fixed positions. Even though the molecules are fixed, this does not mean that the nuclei are evenly spaced: closer nuclei display greater dipolar interactions and vice versa, creating the noise seen in NMR spectra not adapted for solid samples. Dipolar interactions are averaged out in solution because of randomized movement, and spin-state repulsions are likewise averaged out by the molecular motion of solutions and gases. In the solid state, however, these interactions are not averaged and become a third source of line broadening.

In order to counteract chemical shift anisotropy and dipolar interactions, magic angle spinning was developed. As \ref{9} and \ref{10}, which describe the dipolar splitting and the chemical shift anisotropy respectively, make evident, both depend on the geometric factor (3cos2θ - 1).

\[ Dipolar\ splitting \ =\ C(\mu _{0}/8 \pi )(\gamma _{a} \gamma _{x} / r^{2}_{ax})(3 cos^{2} \theta _{iz} - 1) \label{9} \]

\[ \sigma _{zz} \ =\ \bar{\sigma } + 1/3 \Sigma \sigma_{ii} (3 cos^{2} \theta _{iz} - 1) \label{10} \]

If this factor is brought to 0, then the line broadening due to chemical shift anisotropy and dipolar interactions disappears. Therefore, solid samples are rotated at an angle of 54.74°, effectively allowing solid samples to behave similarly to solutions and gases in NMR spectroscopy. Standard spinning rates range from 12 kHz to an upper limit of 35 kHz, where higher spinning rates are necessary to remove stronger interactions.
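A quick numerical check of the geometric factor shared by \ref{9} and \ref{10} shows why 54.74° is called the magic angle; a minimal sketch:

```python
import math

def geometric_factor(theta_deg):
    """The orientation term (3cos^2(theta) - 1) shared by Equations 9 and 10."""
    theta = math.radians(theta_deg)
    return 3 * math.cos(theta) ** 2 - 1

for angle in (0.0, 30.0, 54.74, 90.0):
    print(f"theta = {angle:6.2f} deg -> factor = {geometric_factor(angle):+.4f}")

# The exact magic angle is arccos(1/sqrt(3)) ~= 54.7356 deg, where the factor vanishes.
print(f"magic angle = {math.degrees(math.acos(1 / math.sqrt(3))):.4f} deg")
```

Spinning the sample about an axis at this angle therefore drives both orientation-dependent broadening terms toward zero, which is the basis of MAS.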
Solid state NMR was developed out of necessity, to understand and classify compounds that do not work well in solution, such as powders and complex proteins, and to study crystals too small for other characterization methods. Solid state NMR gives information about the local environment of silicon, aluminum, phosphorus, etc. in structures, and is therefore an important tool for determining the structure of molecular sieves. The issue frequently encountered is that crystals large enough for X-ray crystallography cannot be grown, so NMR is used instead, since it determines the local environments of these elements. Additionally, by using 13C and 15N, solid state NMR helps in the study of amyloid fibrils, the filamentous insoluble protein aggregates related to neurodegenerative diseases such as Alzheimer's disease, type II diabetes, Huntington's disease, and prion diseases.

There are several types of carbon nanomaterial. Members of this family are graphene, single-walled carbon nanotubes (SWNT), multi-walled carbon nanotubes (MWNT), and fullerenes such as C60. Nanomaterials have been subjected to various modifications and functionalizations, and it has been of interest to develop methods that can observe these changes. Herein we discuss selected applications of 13C NMR in studying graphene and SWNTs. In addition, a discussion of how 13C NMR can be used to analyze a thin film of amorphous carbon during a low-temperature annealing process will be presented.

Since carbon is found in any organic molecule, an NMR method that can analyze carbon could be very helpful; unfortunately, the major isotope, 12C, is not NMR active. Fortunately, 13C, with a natural abundance of 1.1%, is NMR active. This low natural abundance, along with the lower gyromagnetic ratio of 13C, decreases the sensitivity. Because of this lower sensitivity, obtaining a 13C NMR spectrum with a given signal-to-noise ratio requires averaging many more scans than would be needed to reach the same signal-to-noise ratio in a 1H NMR spectrum. Although it has a lower sensitivity, 13C NMR is still widely used, as it discloses valuable information.

Peaks in a 1H NMR spectrum are split into n + 1 peaks, where n is the number of hydrogen atoms on the adjacent carbon atom. The splitting pattern in 13C NMR is different. First of all, C-C splitting is not observed, because the probability of having two adjacent 13C atoms is about 0.01%. The observed splitting patterns, which are due to the hydrogen atoms on the same carbon atom rather than on the adjacent carbon atom, are governed by the same n + 1 rule.

In 1H NMR, the integrals of the peaks are used for quantitative analysis, whereas this is problematic in 13C NMR. Relaxation of carbon atoms is slow compared to that of hydrogen atoms, and it also depends on the order of the carbon (i.e., 1°, 2°, etc.). This causes the peak heights to be unrelated to the quantity of the corresponding carbon atoms.

Another difference between 13C NMR and 1H NMR is the chemical shift range. The range of chemical shifts in a typical NMR spectrum reflects the difference between the minimum and maximum amounts of electron density around that specific nucleus. Since hydrogen is surrounded by fewer electrons in comparison to carbon, the maximum change in the electron density for hydrogen is less than that for carbon. Thus, the range of chemical shifts in 1H NMR is narrower than that of 13C NMR.

13C NMR spectra can also be recorded for solid samples. The peaks for solid samples are very broad because the sample, being solid, cannot have all of its anisotropic, or orientation-dependent, interactions canceled by rapid random tumbling. However, it is still possible to do high resolution solid state NMR by spinning the sample at 54.74° with respect to the applied magnetic field, which is called the magic angle. In other words, the sample can be spun to artificially cancel the orientation-dependent interactions. In general, the spinning frequency has a considerable effect on the spectrum.

Single-walled carbon nanotubes contain sp2 carbons. Derivatives of SWNTs contain sp3 carbons in addition. There are several factors that affect the 13C NMR spectrum of a SWNT sample, three of which will be reviewed in this module: 13C percentage, diameter of the nanotube, and functionalization. For sp2 carbons, there is a slight dependence of the 13C NMR peaks on the percentage of 13C in the sample.
Samples with lower 13C percentage are slighted shifted downfield (higher ppm). Data are shown in Table \(\PageIndex{4}\). Please note that these peaks are for the sp2 carbons.The peak position for SWNTs also depends on the diameter of the nanotubes. It has been reported that the chemical shift for sp2 carbons decreases as the diameter of the nanotubes increases. shows this correlation. Since the peak position depends on the diameter of nanotubes, the peak broadening can be related to the diameter distribution. In other words, the narrower the peak is, the smaller the diameter distribution of SWNTs is. This correlation is shown in . Solid stated 13C NMR can also be used to analyze functionalized nanotubes. As a result of functionalizing SWNTs with groups containing a carbonyl group, a slight shift toward higher fields (lower ppm) for the sp2carbons is observed. This shift is explained by the perturbation applied to the electronic structure of the whole nanotube as a result of the modifications on only a fraction of the nanotube. At the same time, a new peak emerges at around 172 ppm, which is assigned to the carboxyl group of the substituent. The peak intensities could also be used to quantify the level of functionalization. shows these changes, in which the substituents are –(CH2)3COOH, –(CH2)2COOH, and –(CH2)2CONH(CH2)2NH2 for the spectra b, c, and d, respectively. Note that the bond between the nanotube and the substituent is a C-C bond. Due to low sensitivity, the peak for the sp3 carbons of the nanotube, which does not have a high quantity, is not detected. There is a small peak around 35 ppm in , can be assigned to the aliphatic carbons of the substituent.For substituents containing aliphatic carbons, a new peak around 35 ppm emerges, as was shown in , which is due to the aliphatic carbons. Since the quantity for the substituent carbons is low, the peak cannot be detected. Small substituents on the sidewall of SWNTs can be chemically modified to contain more carbons, so the signal due to those carbons could be detected. This idea, as a strategy for enhancing the signal from the substituents, can be used to analyze certain types of sidewall modifications. For example, when Gly (–NH2CH2CO2H) was added to F-SWNTs (fluorinated SWNTs) to substitute the fluorine atoms, the 13C NMR spectrum for the Gly-SWNTs was showing one peak for the sp2 carbons. When the aliphatic substituent was changed to 6-aminohexanoic acid with five aliphatic carbons, the peak was detectable, and using 11-aminoundecanoic acid (ten aliphatic carbons) the peak intensity was in the order of the size of the peak for sp2 carbons. In order to use 13C NMR to enhance the substituent peak (for modification quantification purposes as an example), Gly-SWNTs was treated with 1-dodecanol to modify Gly to an amino ester. This modification resulted in enhancing the aliphatic carbon peak at around 30 ppm. Similar to the results in , a peak at around 170 emerged which was assigned to the carbonyl carbon. The sp3 carbon of the SWNTs, which was attached to nitrogen, produced a small peak at around 80 ppm, which is detected in a cross-polarization magic angle spinning (CP-MAS) experiment.F-SWNTs (fluorinated SWNTs) are reported to have a peak at around 90 ppm for the sp3 carbon of nanotube that is attached to the fluorine. The results of this part are summarized in (approximate values).The peak intensities that are weak in depend on the level of functionalization and for highly functionalized SWNTs, those peaks are not weak. 
The peak intensity for aliphatic carbons can be enhanced as the substituents get modified by attaching to other molecules with aliphatic carbons. Thus, the peak intensities can be used to quantify the level of functionalization.Graphene is a single layer of sp2 carbons, which exhibits a benzene-like structure. Functionalization of graphene sheets results in converting some of the sp2 carbons to sp3. The peak for the sp2 carbons of graphene shows a peak at around 140 ppm. It has been reported that fluorinated graphene produces an sp3peak at around 82 ppm. It has also been reported for graphite oxide (GO), which contains –OH and epoxy substituents, to have peaks at around 60 and 70 ppm for the epoxy and the –OH substituents, respectively. There are chances for similar peaks to appear for graphene oxide. Table \(\PageIndex{6}\) summarizes these results.13C NMR spectroscopy has been used to study the effects of low-temperature annealing (at 650 °C) on thin films of amorphous carbon. The thin films were synthesized from a 13C enriched carbon source (99%). There were two peaks in the 13C NMR spectrum at about 69 and 142 ppm which were assigned to sp3 and sp2carbons, respectively . The intensity of each peak was used to find the percentage of each type of hybridization in the whole sample, and the broadening of the peaks was used to estimate the distribution of different types of carbons in the sample. It was found that while the composition of the sample didn’t change during the annealing process (peak intensities didn’t change, see b), the full width at half maximum (FWHM) did change a). The latter suggested that the structure became more ordered, i.e., the distribution of sp2 and sp3carbons within the sample became more homogeneous. Thus, it was concluded that the sample turned into a more homogenous one in terms of the distribution of carbons with different hybridization, while the fraction of sp2 and sp3 carbons remained unchanged.Aside from the reported results from the paper, it can be concluded that 13C NMR is a good technique to study annealing, and possibly other similar processes, in real time, if the kinetics of the process is slow enough. For these purposes, the peak intensity and FWHM can be used to find or estimate the fraction and distribution of each type of carbon respectively.13C NMR can reveal important information about the structure of SWNTs and graphene. 13C NMR chemical shifts and FWHM can be used to estimate the diameter size and diameter distribution. Though there are some limitations, it can be used to contain some information about the substituent type, as well as be used to quantify the level of functionalization. Modifications on the substituent can result in enhancing the substituent signal. Similar type of information can be achieved for graphene. It can also be employed to track changes during annealing and possibly during other modifications with similar time scales. Due to low natural abundance of 13C it might be necessary to synthesize 13C-enhanced samples in order to obtain suitable spectra with a sufficient signal-to-noise ratio. Similar principles could be used to follow the annealing process of carbon nano materials. C60will not be discussed herein.Nuclear magnetic resonance spectroscopy (NMR) is the most powerful tool for organic and organometallic compound determination. Even structures can be determined just using this technique. 
In general, NMR gives information about the number of magnetically distinct atoms of the specific nucleus under study, as well as information regarding the nature of the immediate environment surrounding each nucleus. Because hydrogen and carbon are the major components of organic and organometallic compounds, proton (1H) NMR and carbon-13 (13C) NMR are the most useful nuclei to observe.

Not all protons experience resonance at the same frequency in 1H NMR, and thus it is possible to differentiate between them. The diversity is due to the existence of a different electronic environment around chemically different nuclei. Under an external magnetic field (B0), the electrons in the valence shell are affected; they start to circulate, generating a magnetic field that is opposite to the applied magnetic field. This effect is called diamagnetic shielding or diamagnetic anisotropy. The greater the electron density around one specific nucleus, the greater will be the induced field that opposes the applied field, and this will result in a different resonance frequency.

The identification of protons sounds simple; however, the NMR technique has a relatively low sensitivity of proton chemical shifts to changes in the chemical and stereochemical environment, and as a consequence the resonances of chemically similar protons overlap. Several methods have been used to resolve this problem, such as the use of higher-frequency spectrometers or the use of shift reagents such as aromatic solvents or lanthanide complexes. The main issue with high-frequency spectrometers is that they are very expensive, which reduces the number of institutions that can have access to them. In contrast, shift reagents work by reducing the equivalence of nuclei by altering their magnetic environment, and they can be used on any NMR instrument. The simplest shift reagent is a change of solvent; however, the problems with some solvents are that they can react with the compound under study, and that they usually alter the magnetic environment of only a small part of the molecule. Consequently, although there are several methods, most of the work has been done with lanthanide complexes.

The first significant induced chemical shift using paramagnetic ions was reported in 1969 by Conrad Hinckley, who used the bispyridine adduct of tris(2,2,6,6-tetramethylheptane-3,5-dionato)europium(III) (Eu(tmhd)3), better known as Eu(dpm)3, where dpm is the abbreviation of dipivaloylmethanato. Hinckley used Eu(tmhd)3 on the 1H NMR spectrum of cholesterol, observing induced shifts ranging from 347 to 2 Hz. The development of this new chemical method to improve the resolution of the NMR spectrum was the stepping-stone for the work of Jeremy Sanders and Dudley Williams. They observed a significant increase in the magnitude of the induced shift after using just the lanthanide chelate without the pyridine ligands, suggesting that the pyridine donor ligands are in competition for the active sites of the lanthanide complex. The efficiency of Eu(tmhd)3 as a shift reagent was published by Sanders and Williams in 1970, where they showed a significant difference in the 1H NMR spectrum of n-pentanol recorded with the shift reagent. Analyzing the spectra, it is easy to see that with the use of Eu(tmhd)3 there is no longer any overlap between peaks; instead, the multiplets of each proton are perfectly clear. After these two publications, the potential of lanthanides as shift reagents for NMR studies became a popular topic.
Other example is the fluorinate version of Eu(dpm)3; (tris(7,7,-dimethyl-1,1,2,2,2,3,3-heptafluoroocta-7,7-dimethyl-4,6-dionato)europium(III), best known as Eu(fod)3, which was synthesized in 1971 by Rondeau and Sievers. This LSR presents better solubility and greater Lewis acid character, the chemical structure is show in .Lanthanide atoms are Lewis acids, and because of that, they have the ability to cause chemical shift by the interaction with the basic sites in the molecules. Lanthanide metals are especially effective over other metals because there is a significant delocalization of the unpaired f electrons onto the substrate as a consequence of unpaired electrons in the f shell of the lanthanide. The lanthanide metal in the complexes interacts with the relatively basic lone pair of electrons of aldehydes, alcohols, ketones, amines and other functional groups within the molecule that have a relative basic lone pair of electrons, resulting in a NMR spectral simplification.There are two possible mechanisms by which a shift can occur: shifts by contact and shifts by pseudocontact. The first one is a result of the transfer of electron spin density via covalent bond formation from the lanthanide metal ion to the associated nuclei. While the magnetic effects of the unpaired electron magnetic moment causes the pseudocontact shift. Lanthanide complexes give shifts primarily by the pseudocontact mechanism. Under this mechanism, there are several factors that influence the shift of a specific NMR peak. The principal factor is the distance between the metal ion and the proton; the shorter the distance, the greater the shift obtained. On the other hand, the direction of the shift depends on the lanthanide complex used. The complexes that produce a shift to a lower field (downfield) are the ones containing erbium, europium, thulium and ytterbium, while complexes with cerium, neodymium, holmium, praseodymium, samarium and terbium, shift resonances to higher field. shows the difference betwen an NMR spectrum without the use of shift reagent versus the same spectrum in the present of a europium complex (downfield shift) and a praseodymium complex (high-field shift).Linewidth broadening is not desired because of loss of resolution, and lanthanide complexes unfortunately contribute extremely to this effect when they are used in high concentrations due to their mechanism that shortens the relaxation times (T2), which in turn increases the bandwidth. However europium and praseodymium are an extraordinary exception giving a very low shift broadening, 0.003 and 0.005 Hz/Hz respectively. Europium specially is the most used lanthanide as shift reagent because of its inefficient nuclear spin-lattice ratio properties. It has low angular momentum quantum numbers and a diamagnetic 7F0 ground state. These two properties contribute to a very small separation of the highest and lowest occupied metal orbitals leading to an inefficient relaxation and a very little broadening in the NMR spectra. The excited 7F1 state will then contribute to the pseudocontact shift.We have mentioned above that lanthanide complexes have a mechanism that influences relaxation times, and this is certainly because paramagnetic ions have an influence in both: chemical shifts and relaxation rates. The relaxation times are of great significant because they depend on the width of a specific resonance (peak). 
Changes in relaxation time can also be related to the geometry of the complex.

The easiest and most practical way to measure the lanthanide-induced shift (LIS, Δνi) is to add aliquots of the lanthanide shift reagent (LSR) to the sample containing the compound of interest (the substrate) and to take an NMR spectrum after each addition. Because the shift of each proton moves to lower or higher field after each addition of the LSR, the LIS can be measured. With the data collected, a plot of the LIS against the LSR:substrate ratio will generate a straight line whose slope is representative of the compound being studied. The identification of the compound by the use of chiral lanthanide shift reagents can be so precise that it is possible to estimate the composition of enantiomers in the solution under study.

Now, what is the mechanism actually operating between the LSR and the compound under study? The LSR is a metal complex with six coordination sites. In the presence of a substrate that contains heteroatoms with Lewis-base character, the LSR expands its coordination sphere in solution in order to accept additional ligands. An equilibrium mixture is formed between the substrate and the LSR. \ref{11} and \ref{12} show the equilibria, where L is the LSR, S is the substrate, and LS is the complex formed in solution.

\[ L\ +\ S \mathrel{\mathop{\rightleftarrows}^{\mathrm{K_{1}}}} \ [LS] \label{11} \]

\[ [LS] \ +\ S \mathrel{\mathop{\rightleftarrows}^{\mathrm{K_{2}}}} [LS_{2}] \label{12} \]

The abundance of these species depends on K1 and K2, which are the binding constants. A binding constant is a special case of an equilibrium constant that refers to the binding and unbinding of two species. In most cases K2 is assumed to be negligible, and therefore only the first complex, [LS], is assumed to form. The equilibrium between L + S and LS in solution is faster than the NMR timescale; consequently, a single average signal will be recorded for each nucleus.

Besides the great potential of lanthanide shift reagents to improve the resolution of NMR spectra, these complexes have also been used to identify enantiomeric mixtures in solution. To make this possible the substrate must meet certain requirements. The first is that the organic compounds in the enantiomeric mixture must have a hard organic base as a functional group; the shift reagents are not effective with most soft bases. Though hundreds of chelates have been synthesized since Eu(dcm)3, it remains the LSR that has proved most effective for the resolution of enantiotopic resonances. Basically, if an NMR spectrum is taken of an enantiomeric mixture, a large variety of peaks will appear, and the hard part is to identify which of those peaks corresponds to which specific enantiomer. The differences in chemical shifts observed for enantiomeric mixtures in solution containing an LSR may arise from at least two sources: the equilibrium constants for the formation of the possible diastereomeric complexes between the substrate and the LSR, and the geometries of these complexes, which might be distinct. The enantiomeric shift differences are sometimes defined as ΔΔδ.
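The LIS titration described above amounts to fitting a straight line of induced shift against the LSR:substrate ratio; a minimal sketch with made-up data (the ratios and shifts below are purely illustrative):

```python
import numpy as np

# Hypothetical titration data: molar ratio [LSR]/[substrate] versus the induced
# shift (ppm) of one substrate proton, read from a spectrum taken after each aliquot.
ratio = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
lis_ppm = np.array([0.00, 0.42, 0.85, 1.24, 1.70, 2.09])

# Linear fit: the slope characterizes the substrate-LSR interaction.
slope, intercept = np.polyfit(ratio, lis_ppm, 1)
print(f"slope ~ {slope:.2f} ppm per unit LSR:substrate ratio")
```

With a chiral LSR, repeating the fit for each enantiomer and comparing the results is what quantifies the ΔΔδ differences discussed above.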
In solution, the exchange between substrate coordinated to the europium ion and free substrate is very fast. To be sure that the europium complexes are binding with one or two substrate molecules, an excess of substrate is usually added.

Magnetic resonance imaging (MRI) (also known as nuclear magnetic resonance imaging (NMRI) or magnetic resonance tomography (MRT)) is a powerful noninvasive diagnostic technique in which an applied magnetic field (B0) interacts with the spin angular momentum of the nuclei in the tissue. Spin angular momentum depends on the number of protons and neutrons of the nucleus; nuclei with even numbers of both protons and neutrons have no net spin, are insensitive to the magnetic field, and so cannot be viewed by MRI.

In the absence of an external magnetic field, each nucleus can be considered as an arrow pointing in an arbitrary direction; once the magnetic field is applied, the nuclei become oriented along it. In order to orient the nuclei in a specific direction, energy must be supplied, and energy is emitted when they return to their original position. These transitions are characterized by an angular velocity known as the Larmor frequency, given by \ref{13}, where ω is the Larmor frequency, γ is the gyromagnetic ratio, and B0 is the magnetic field.

\[ \omega \ =\ \gamma B_{0} \label{13} \]

It is not easy to detect the energy involved in such a transition, which is why high-resolution spectrometers are required; the most powerful MRI magnets developed to date are close to 9 Tesla, with masses approaching forty-five tons. Unfortunately, such an instrument is expensive to purchase and operate, which is why new techniques are needed so that more of the available MRI spectrometers can be used for imaging. Fortunately, the presence of a huge number of nuclei in the analyzed sample or body can provide some information.

Each nucleus possesses microscopic magnetic spin components along x, y, and z. For randomly distributed atoms with varying x and y spin components, summation over the x and y planes gives zero, but in the case of z, summation of the magnetic spins does not lead to cancellation. According to Curie's law, \ref{14}, where Mz is the resulting magnetization along the z axis, C is a material-specific Curie constant, B0 is the magnetic field, and T is the absolute temperature, the magnetization along the z axis is proportional to the magnetic field applied from outside.

\[ M_{z}\ =\ CB_{0}/T \label{14} \]

Basically, excitation happens by passing a current through a coil, which changes the magnetization along the x, y, and z axes; it is the way of transferring magnetization from the z axis to the x and y axes. Once the external current supply is turned off, the magnetization eventually decays: the magnetization returns from the x and y axes to the z axis, where it eventually equilibrates and the device can no longer detect a signal. The energy emitted by the excited spins induces a new current inside the same coil, which is recorded by the detector; hence the same coil can be used both as the detector and as the source of the magnetic field. This process is called relaxation. The return of magnetization to the z axis is called spin-lattice relaxation, or T1 relaxation (the time required for the magnetization to realign along the z axis), and the eventual decay of the magnetization along the x and y axes to zero is called spin-spin relaxation, or T2 relaxation.
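Putting numbers into the Larmor relation \ref{13} (and dividing by 2π to convert the angular frequency into Hz) shows how strongly the resonance frequency depends on the field; a minimal sketch using the well-known 1H gyromagnetic ratio, with Curie's law \ref{14} included only schematically (arbitrary, purely illustrative Curie constant):

```python
import math

GAMMA_1H = 2.675e8  # gyromagnetic ratio of 1H, rad s^-1 T^-1 (well-established value)

def larmor_frequency_mhz(b0_tesla, gamma=GAMMA_1H):
    """nu = gamma * B0 / (2*pi): Equation 13 converted from rad/s to MHz."""
    return gamma * b0_tesla / (2 * math.pi) / 1e6

for b0 in (1.5, 4.0, 9.0):  # representative clinical and research-grade field strengths
    print(f"B0 = {b0:3.1f} T -> 1H Larmor frequency ~ {larmor_frequency_mhz(b0):6.1f} MHz")

def curie_magnetization(b0_tesla, temperature_k, curie_constant=1.0):
    """Mz = C * B0 / T (Equation 14); the constant here is arbitrary, for illustration only."""
    return curie_constant * b0_tesla / temperature_k
```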
In MRI, image contrast is determined according to T1, T2, or the proton density parameter; therefore, three different kinds of images can be obtained. By changing the intervals between the radiofrequency (RF) 90° pulses and RF 180° pulses, the desired type of image can be obtained.

There are a few computational techniques available to improve the contrast of an image, namely repetitive scans and various mathematical computations. Repetitive scans take a long time and therefore cannot routinely be applied in MRI, while mathematical computations on their own do not provide the desired results. For that reason, in order to obtain high-resolution images, contrast agents (CAs) are an important part of medical imaging.

There are different types of contrast agents available on the market, which reduce T1 or T2 and are differentiated according to their relaxivity1 (r1) and relaxivity2 (r2) values. The relaxivity (ri) can be described as the 1/Ti (s-1) of the water molecules per mM concentration of CA. Contrast agents are paramagnetic and can interact with the dipole moments of water molecules, causing fluctuations in the molecules; this theory is known as Solomon-Bloembergen-Morgan (SBM) theory. Efficient contrast agents include derivatives of gadolinium (e.g., gadobenic acid and gadoxetic acid), iron (e.g., superparamagnetic iron oxide and ultrasmall superparamagnetic iron oxide), and manganese (e.g., manganese dipyridoxal diphosphate). Fundamentally, the role of a contrast agent can be played by any paramagnetic species.

There are two main modes of interaction between contrast agents and water molecules. One is direct interaction, which is called inner sphere relaxation; the other mechanism operates in the absence of direct interaction with the water molecule and is called outer sphere relaxation. If water molecules are in the first coordination sphere of the metal ion, we consider them the inner sphere, and if protons diffuse in randomly from outside, we define the process as outer sphere relaxation. Another contribution comes from water molecules that have already been affected, which transfer their relaxation to protons in close proximity; this contribution is called second sphere relaxation and is usually neglected or counted as part of the outer sphere. In inner sphere proton relaxivity there are two main mechanisms involved in relaxation: one is the dipole-dipole interaction between the metal and the proton, and the other is the scalar mechanism. The dipole-dipole interaction affects the electron spin vectors, and the scalar mechanism usually controls water exchange. The effect of contrast agents on T1 relaxation is much larger than on T2, since T1 is much longer for tissues than T2.

Determination of relaxivity has become very easy with the advancement of NMR and computer technology, where one needs only to load the sample and read the values from the screen. Nevertheless, let us consider in more detail what precautions should be taken during sample preparation and data acquisition. The sample to be analyzed is dissolved in water or another solvent; generally water is used, since contrast agents for medical MRI are used in aqueous media. The amount of solution used is determined according to the internal standard volume, which is used for calibration of the instrument and is usually provided by the company producing the instrument. A suitable sample holder is an NMR tube. It is important to degas the solvent prior to the measurements by bubbling a gas through it (nitrogen or argon works well), so that no traces of oxygen remain in solution, since oxygen is paramagnetic. Before collecting data, it is best to keep the sample in the instrument compartment for a few minutes so that the temperatures of the magnet and the solution equilibrate.

The relaxivity (ri) is then calculated according to \ref{15}, where Ti is the relaxation time in the presence of CA, Tid is the relaxation time in the absence of CA, and [CA] is the concentration of the paramagnetic CA (mM).

\[ r_{i} \ =\ (1/T_{i} \ -\ 1/T_{id})/[CA] \label{15} \]

Having the relaxivity values allows for a comparison of a particular compound to other known contrast agents.
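As a worked example of \ref{15}, suppose (hypothetically) that 1.0 mM of a candidate contrast agent shortens the T1 of water from 3.0 s to 0.25 s; a minimal sketch of the arithmetic:

```python
def relaxivity(t_i_s, t_id_s, ca_mM):
    """r_i = (1/T_i - 1/T_id) / [CA], Equation 15, in units of s^-1 mM^-1."""
    return (1.0 / t_i_s - 1.0 / t_id_s) / ca_mM

# Hypothetical numbers: T1d = 3.0 s without the agent, T1 = 0.25 s with 1.0 mM of agent.
r1 = relaxivity(t_i_s=0.25, t_id_s=3.0, ca_mM=1.0)
print(f"r1 ~ {r1:.2f} s^-1 mM^-1")  # ~ 3.67
```

The same function applies to r2 by substituting the measured T2 values.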
Jean Jeener from the Université Libre de Bruxelles first proposed 2D NMR in 1971. In 1975 Walter P. Aue, Enrico Bartholdi, and Richard R. Ernst first used Jeener's ideas of 2D NMR to produce 2D spectra, which they published in their paper "Two-dimensional spectroscopy, application to nuclear magnetic resonance". Since this first publication, 2D NMR has increasingly been utilized for structure determination and elucidation of natural products, protein structure, polymers, and inorganic compounds. With the improvement of computer hardware and stronger magnets, newly developed 2D NMR techniques can easily become routine procedures. In 1991 Richard R. Ernst won the Nobel Prize in Chemistry for his contributions to Fourier Transform NMR. Looking back on the development of NMR techniques, it is amazing that 2D NMR took so long to be developed, considering the large number of similarities that it has with the simpler 1D experiments.

2D NMR was developed in order to address two major issues with 1D NMR. The first issue is the limited scope of a 1D spectrum. A 2D NMR spectrum can be used to resolve peaks in a 1D spectrum and remove any overlap present. With a 1D spectrum, this is typically performed using an NMR with higher field strength, but there is a limit to the resolution of peaks that can be obtained. This is especially important for large molecules that result in numerous peaks, as well as for molecules that have similar structural motifs in the same molecule. The second major issue addressed is the need for more information. This could include structural or stereochemical information. Usually, to overcome this problem, 1D NMR spectra are obtained by studying specific nuclei present in the molecule (for example, this could include fluorine or phosphorus). Of course, this approach is limited to nuclei that have spin states other than zero, and it requires the use of specialized NMR probes.

2D NMR can address both of these issues in several different ways. The following four techniques are just a few of the methods that can be used for this task. J-resolved spectroscopy is used to resolve highly overlapping resonances, usually seen as complex multiplet splitting patterns. Homonuclear correlation spectroscopy can identify spin-coupled pairs of nuclei that overlap in 1D spectra. Heteronuclear shift-correlation spectroscopy can identify all directly bonded carbon-proton pairs, or other combinations of nuclei pairs. Lastly, Nuclear Overhauser Effect (NOE) interactions can be used to obtain information about through-space interactions (rather than through-bond). This technique is often used to determine stereochemistry or protein/peptide interactions.

The concept of 2D NMR can be considered as an extension of the concept of 1D NMR, and as such there are many similarities between the two. Since the acquisition of a 2D spectrum is almost always preceded by the acquisition of a 1D spectrum, the standard used for reference (TMS) and the solvent used (typically CDCl3 or another deuterated solvent) are the same for both experiments. Furthermore, 2D NMR is most often used to reveal any obscurity in a 1D spectrum (whether that is peak overlap, splitting overlap, or something else), so the nuclei studied are the same. Most often these are 1H and 13C, but other nuclei could also be studied.
Since 2D NMR is a more complicated experiment than 1D NMR, there are also some differences between the two. One of the differences is in the complexity of the data obtained. A 2D spectrum often results from a change in pulse time; therefore, it is important to set up the experiment correctly in order to obtain meaningful information. Another difference arises from the fact that one spectrum is 1D while the other is 2D; as such, interpreting a 2D spectrum requires a much greater understanding of the experiment parameters. For example, one 2D experiment might investigate the specific coupling of two protons or carbons, rather than focusing on the molecule as a whole (which is generally the target of a 1D experiment). The specific pulse sequence used is often very helpful in interpreting the information obtained. The software used for 1D spectra is not always compatible with 2D spectra, because a 2D spectrum requires more complex processing, and the 2D spectra generated often look quite different from 1D spectra. Software commonly used to interpret 2D spectra includes Sparky and Bruker's TopSpin. Lastly, the NMR instrument used to obtain a 2D spectrum typically generates a much larger magnetic field (700-1000 MHz). Due to the increased cost of buying and maintaining such an instrument, 2D NMR is usually reserved for rather complex molecules.

One of the central ideas that is associated with 2D NMR is the rotating frame, because it helps to visualize the changes that take place in dimensions. Our ordinary "laboratory" frame consists of three axes (the Cartesian x, y, and z). This frame can be visualized if one pictures the corner of a room. The intersections of the floor and the walls are the x and the y dimensions, while the intersection of the walls is the z axis. This is usually considered the "fixed frame." When an NMR experiment is carried out, the frame still consists of the Cartesian coordinate system, but the x and y coordinates rotate around the z axis. The speed with which the x-y coordinate system rotates is directly dependent on the frequency of the NMR instrument.

When any NMR experiment is carried out, a majority of the spin states of the nucleus of interest line up with one of these three coordinates (which we can pick to be z). Once an equilibrium of this alignment is achieved, a magnetic pulse can be exerted at a certain angle to the z axis (usually 90° or 180°), which temporarily disrupts the equilibrium alignment of the nuclei. As the pulse is removed, the nuclei are allowed to relax back to this equilibrium alignment with the magnetic field of the instrument. When this relaxation takes place, the progression of the nuclei back to the equilibrium orientation is detected by a computer as a free induction decay (FID). When a sample has different nuclei, or the same nucleus in different environments, a different FID can be recorded for each individual relaxation to the equilibrium position. The FIDs of all of the individual nuclei can be recorded and superimposed. The complex FID signal obtained can be converted to a recording of the NMR spectrum by a Fourier transform (FT). The FT is a mathematical operation that can be described by \ref{16}, where ω is the angular frequency.

\[ z(t)\ =\ \sum^{\infty }_{k\ =\ -\infty} c_{k}e^{ik\omega t} \label{16} \]

This concept of the FT is similar for both 1D and 2D NMR. In 2D NMR a FID is obtained in one dimension first; then, through the application of a pulse, a FID can be obtained in a second dimension. Both FIDs can be converted to a series of NMR spectra through a Fourier transform, resulting in a spectrum that can be interpreted. The coupling of the two FIDs in 2D NMR usually reveals a lot more information about the specific connectivity between two atoms.
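The FID-to-spectrum conversion summarized by \ref{16} can be illustrated with a synthetic signal: a single damped cosine standing in for one nucleus precessing at 100 Hz, transformed with a discrete Fourier transform (a simplified stand-in for the processing a real spectrometer performs):

```python
import numpy as np

# Synthetic FID: one resonance at 100 Hz decaying with an effective T2 of 0.1 s,
# sampled for 1 s at 1000 points per second.
sample_rate = 1000.0
t = np.arange(0, 1.0, 1.0 / sample_rate)
fid = np.cos(2 * np.pi * 100.0 * t) * np.exp(-t / 0.1)

# Discrete Fourier transform: time-domain FID -> frequency-domain spectrum.
spectrum = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(len(fid), d=1.0 / sample_rate)

print(f"peak found at ~{freqs[np.argmax(spectrum)]:.0f} Hz")  # ~ 100 Hz
```

A 2D experiment repeats this transform along a second, incremented time axis (t1), which is what generates the two frequency dimensions described next.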
There are four general stages, or time periods, present in any 2D NMR experiment: preparation, evolution, mixing, and detection. The preparation period defines the system at the first time phase. The evolution period allows the nuclei to precess (or move relative to the magnetic field). The mixing period introduces a change in the way the spectra are obtained. The detection period records the FID. In obtaining a spectrum, the pulse sequence is the most important factor that determines what data will be obtained. In general, 2D experiments are a combination of 1D experiments collected by varying the timing and pulsing.

Preparation is the first step in any 2D NMR experiment. It is a way to start all experiments from the same state. This state is typically either thermal equilibrium, obeying Boltzmann statistics, or a state where the spins of one nucleus are randomized in orientation and the spins of another nucleus are in thermal equilibrium. At the end of the preparation period, the magnetizations are usually placed perpendicular, or at a specific angle, to the magnetic field axis. This phase creates magnetizations in the x-y plane.

During evolution, the nuclei are allowed to precess around the direction of the magnetic field. This concept is very similar to the precession of a top in the gravitational field of the Earth. In this phase of the experiment, the rates at which different nuclei precess determine how the nuclei are reacting based on their environment.
The magnetizations that are created at the end of the preparation step are allowed to evolve or change for a certain amount of time (t1) in the environment defined by the magnetic and radio frequency (RF) fields. In this phase, the chemical shifts of the nuclei are measured similarly to a 1D experiment, by letting the nucleus magnetization rotate in the x-y plane. This experiment is carried out a large number of times, and then the recorded FID is used to determine the chemical shifts.Once the evolution period is over, the nuclear magnetization is distributed among the spins. The spins are allowed to communicate for a fixed period of time. This typically occurs using either magnetic pulses and/or variation in the time periods. The magnetic pulses typically consist of a change in the rotating frame of reference relative to the original "fixed frame" that was introduced in the preparation period, as seen in . Experiments that only use time periods are often tailored to look at the effect of the RF field intensity. Using either the bonds connecting the different nuclei (J-coupling) or using the small space between them (NOE interaction), the magnetization is allowed to move from one nucleus to another. Depending on the exact experiment performed, these changes in magnetizations are going to differ based on what information is desired. This is the step in the experiment that determines exactly what new information would be obtained by the experiment. Depending on which chemical interactions require suppression and which need to be intensified to reveal new information, the specific "mixing technique" can be adjusted for the experiment.This is always the last period of the experiment, and it is the recording of the FID of the second nucleus studied. This phase records the second acquisition time (t2) resulting in a spectrum, similar to the first spectrum, but typically with differences in intensity and phase. These differences can give us information about the exact chemical and magnetic environment of the nuclei that are present. The two different Fourier transforms are used to generate the 2D spectrum, which consists of two frequency dimensions. These two frequencies are independent of each other, but when plotted on a single spectrum the frequency of the signal obtained in time t1 has been converted in another coherence affected by the frequency in time t2. While the first dimension represents the chemical shifts of the nucleus in question, the second dimension reveals new information. The overall spectrum, , is the result of a matrix in the two frequency domains obtained during the experiment.As mentioned earlier, the pulse sequence and the mixing period are some of the most important factors that determine the type of spectrum that will be identified. Depending on whether the magnetization is transferred through a J-coupling or NOE interaction, different information and spectra can be obtained. Furthermore, depending on the experimental setup, the mixing period could transfer magnetization either through a single J-coupling or through several J-couplings for nuclei that are connected together. Similarly NOE interactions can also be controlled to specific distances. Two types of NOE interactions can be observed, positive and negative. When the rate at which fluctuation occurs in the transverse plane of a fluctuating magnetic field matches the frequency of double quantum transition, a positive NOE is observed. 
When the fluctuation is slower, a negative NOE is produced.

Sample preparation for 2D NMR is essentially the same as that for 1D NMR. Particular caution should be exercised to use clean and dry sample tubes and to use only deuterated solvents. The amount of sample used should be anywhere between 15 and 25 mg, although with sufficient time even smaller quantities may be used. The filling height of the solvent should be about 4 cm. The solution must be clear and homogeneous; any particulate matter needs to be filtered off prior to obtaining the spectra.

The acquisition of a 2D spectrum will vary from instrument to instrument, but the process is virtually identical to obtaining a 13C spectrum. It is important to obtain a 1D spectrum (especially 1H) before proceeding to obtain a 2D spectrum. The acquisition range should be adjusted based on the 1D spectrum to minimize instrument time. Depending on the specific type of 2D experiment (such as COSY or NOESY), several parameters need to be adjusted. The following six steps can be followed to obtain almost any 2D NMR spectrum. The parameters listed in Table \(\PageIndex{7}\) should be given special attention, as they can significantly affect the quality of the spectra obtained.

After a 2D spectrum has successfully been obtained, depending on the type of spectrum (COSY, NOESY, INEPT), it might need to be phased. Phasing is the adjustment of the spectrum so that all of the peaks across the spectrum are in the absorptive mode (pointing either up or down). With 2D spectra, phasing is done in both frequency dimensions. This can either be done automatically by a software program (for simple 2D spectra with no clustered signals) or manually by the user (for more complex 2D spectra). Sometimes, phasing can be done with the program that is used to obtain the spectrum. Afterwards the spectrum can either be printed out or further analyzed. One example of further analysis is integrating parts of the spectrum. This can give the user meaningful information about the relative ratio of different types of nuclei (and even quantify the ratio between two diastereomeric molecules).

Two-dimensional NMR is increasingly becoming a routine method for analyzing complex molecules, whether they are inorganic compounds, organic natural products, proteins, or polymers. A basic understanding of 2D NMR can make it significantly easier to analyze complex molecules and provide further confirmation for results obtained by other methods. The variation in pulse sequences provides chemists with the opportunity to analyze a large diversity of compounds. The increase in the magnetic strength of NMR machines has allowed 2D NMR to be used more often, even for simpler molecules. Furthermore, higher-dimensional techniques have also been introduced, and they are slowly being integrated into the repertoire of chemists. These are essentially simple extensions of the ideas of 2D NMR.

Since the advent of NMR, synthetic chemists have had an excellent way to characterize their synthetic products. With the arrival of multidimensional NMR in the realm of analytical techniques, scientists have been able to study larger and more complicated molecules much more easily than before, due to the great amount of information 2D and 3D NMR experiments can offer.
With 2D NMR, overlapping multiplets and other complex splitting patterns seen in 1D NMR can be easily deciphered, since instead of one frequency domain, two frequency domains are plotted and the couplings are plotted with respect to each other, which makes it easier to determine molecular connectivity.Spectra are obtained using a specific sequence of radiofrequency (RF) pulses that are administered to the sample, which can vary in the angle at which the pulse is given and/or the number of pulses. shows a schematic diagram for a generic pulse sequence in a 2D NMR experiment. First, a pulse is administered to the sample in what is referred to as the preparation period. This period could be anything from a single pulse to a complex pattern of pulses. The preparation period is followed by a “wait” time (also known as the evolution time), t1, during which no data is observed. The evolution time also can be varied to suit the needs of the specific experiment. A second pulse is administered next during what is known as the mixing period, where the coherence at the end of t1 is converted into an observable signal, which is recorded during the observation time, t2. shows a schematic diagram of how data is converted from the time domain (depicted in the free induction decay, or FID) to a frequency domain. The process of this transformation using Fourier Transform (FT) is the same as it is in 1D NMR, except here, it is done twice (or three times when conducting a 3D NMR experiment).In 1D NMR, spectra are plotted with frequency (in ppm or Hz, although most commonly ppm) on the horizontal axis and with intensity on the vertical axis. However, in 2D NMR spectra, there are two frequency domains being plotted, each on the vertical and horizontal axes. Intensity, therefore, can be shown as a 3D plot or topographically, much like a contour map, with more contour lines representing greater intensities, as shown in a. Since it is difficult to read a spectrum in a 3D plot, all spectra are plotted as contour plots. Furthermore, since resolution in a 2D NMR spectrum is not needed as much as in a 1D spectrum, data acquisition times are often short.2D NMR is very advantageous for many different applications, though it is mainly used for determining structure and stereochemistry of large molecules such as polymers and biological macromolecules, that usually exhibit higher order splitting effects and have small, overlapping coupling constants between nuclei. Further, some 2D NMR experiments can be used to elucidate the components of a complex mixture. This module aims to describe some of the common two-dimensional NMR experiments used to determine qualitative information about molecular structure.COSY (COrrelation SpectroscopY) was one of the first and most popular 2D NMR experiments to be developed. It is a homonuclear experiment that allows one to correlate different signals in the spectrum to each other. In a COSY spectrum (see b), the chemical shift values of the sample’s 1D NMR spectrum are plotted along both the vertical and horizontal axes (some 2D spectra will actually reproduce the 1D spectra along the axes, along with the frequency scale in ppm, while others may simply show the scale). This allows for a collection of peaks to appear down the diagonal of the spectrum known as diagonal peaks (shown in b, highlighted by the red dotted line). These diagonal peaks are simply the peaks that appear in the normal 1D spectrum, because they show nuclei that couple to themselves. 
The other type of peaks appears symmetric across the diagonal and is known as cross peaks. These peaks show which groups in the molecule that have different chemical shifts are coupled to each other by producing a signal at the intersection of the two frequency values.One can then determine the structure of a sample by examining what chemical shift values the cross peaks occur at in a spectrum. Since the cross peaks are symmetric across the diagonal peaks, one can easily identify which cross peaks are real (if a certain peak has a counterpart on the other side of the diagonal) and which are digital artifacts of the experiment. The smallest coupling that can be detected using COSY is dependent on the linewidth of the spectrum and the signal-to-noise ratio; a maximum signal-to-noise ratio and a minimum linewidth will allow for very small coupling constants to be detected.Although COSY is very useful, it does have its disadvantages. First of all, because the anti-phase structure of the cross peaks, which causes the spectral lines to cancel one another out, and the in-phase structure of the diagonal peaks, which causes reinforcement among the peaks, there is a significant difference in intensity between the diagonal and cross peaks. This difference in intensity makes identifying small cross peaks difficult, especially if they lie near the diagonal. Another problem is that when processing the data for a COSY spectrum, the broad lineshapes associated with the experiment can make high-resolution work difficult.In one of the more popular COSY variations known as DQF COSY (Double-Quantum Filtered COSY), the pulse sequence is altered so that all of the signals are passed through a double-quantum coherence filter, which suppresses signals with no coupling (i.e. singlets) and allows cross peaks close to the diagonal to be clearly visible by making the spectral lines much sharper. Since most singlet peaks are due to the solvent, DQF COSY is useful to suppress those unwanted peaks.ECOSY (Exclusive COrrelation SpectroscopY) is another derivative of COSY that was made to detect small J-couplings, predominantly among multiplets, usually when J ≤ 3 Hz. Also referred to as long-range COSY, this technique involves adding a delay of about 100-400 ms to the pulse sequence. However, there is more relaxation that is occurring during this delay, which causes a loss of magnetization, and therefore a loss of signal intensity. This experiment would be advantageous for one who would like to further investigate whether or not a certain coupling exists that did not appear in the regular COSY spectrum.GS-COSY (Gradient Selective COSY) is a very applied offshoot of COSY since it eliminates the need for what is known as phase cycling. Phase cycling is a method in which the phase of the pulses is varied in such a way to eliminate unwanted signals in the spectrum, due to the multiple ways which magnetization can be aligned or transferred, or even due to instrument hardware. In practical terms, this means that by eliminating phase cycling, GS-COSY can produce a cleaner spectrum (less digital artifacts) in much less time than can normal COSY.Another variation of COSY is COSY-45, which administers a pulse at 45° to the sample, unlike DQF COSY which administers a pulse perpendicular to the sample. This technique is useful because one can elucidate the sign of the coupling constant by looking at the shape of the peak and in which direction it is oriented. 
Knowing the sign of the coupling constant can be useful in discriminating between vicinal and geminal couplings. However, COSY-45 is less sensitive than other COSY experiments that use a 90° RF pulse.TOCSY (TOtal Correlation SpectroscopY) is very similar to COSY in that it is a homonuclear correlation technique. It differs from COSY in that it not only shows nuclei that are directly coupled to each other, but also signals that are due to nuclei that are in the same spin system, as shown in below. This technique is useful for interpreting large, interconnected networks of spin couplings. The pulse sequence is arranged in such a way to allow for isotropic mixing during the sequence that transfers magnetization across a network of atoms coupled to each other. An alternative technique to 2D TOCSY is selective 1D TOCSY, which can excite certain regions of the spectrum by using shaped pulses. By specifying particular chemical shift values and setting a desired excitation width, one can greatly simplify the 1D experiment. Selective 1D TOCSY is particularly useful for analyzing polysaccharides, since each sugar subunit is an isolated spin system, which can produce its own subspectrum, as long as there is at least one resolved multiplet. Furthermore, each 2D spectrum can be acquired with the same resolution as a normal 1D spectrum, which allows for an accurate measurement of multiplet splittings, especially when signals from different coupled networks overlap with one another.HETCOR (Heteronuclear Correlation) refers to a 2D NMR experiment that correlates couplings between different nuclei (usually 1H and a heteroatom, such as 13C or 15N). Heteronuclear experiments can easily be extended into three or more dimensions, which can be thought of as experiments that correlate couplings between three or more different nuclei. Because there are two different frequency domains, there are no diagonal peaks like there are in COSY or TOCSY. Recently, inverse-detected HETCOR experiments have become extremely useful and commonplace, and it will be those experiments that will be covered here. Inverse-detection refers to detecting the nucleus with the higher gyromagnetic ratio, which offers higher sensitivity. It is ideal to determine which nucleus has the highest gyromagnetic ratio for detection and set the probe to be the most sensitive to this nucleus. In HETCOR, the nucleus that was detected first in a 1H -13C experiment was 13C, whereas now 1H is detected first in inverse-detection experiments, since protons are inherently more sensitive. Today, regular HETCOR experiments are not usually in common laboratory practice.The HMQC (Heteronuclear Multiple-Quantum Coherence) experiment acquires a spectrum (see a) by transferring the proton magnetization by way of 1JCH to a heteronucleus, for example, 13C. The 13C atom then experiences its chemical shift in the t1 time period of the pulse sequence. The magnetization then returns to the 1H for detection. HMQC detects 1JCH coupling and can also be used to differentiate between geminal and vicinal proton couplings just as in COSY-45. HMQC is very widely used and offers very good sensitivity at much shorter acquisition times than HETCOR (about 30 min as opposed to several hours with HETCOR).However, because it shows the 1H -1H couplings in addition to 1H -13C couplings and because the cross peaks appear as multiplets, HMQC suffers when it comes to resolution in the 13C peaks. 
The HSQC (Heteronuclear Single-Quantum Coherence) experiment can assist, as it can suppress the 1H -1H couplings and collapse the multiplets seen in the cross peaks into singlets, which greatly enhances resolution (an example of an HSQC is shown in b). shows a side-by-side comparison of spectra from HMQC and HSQC experiments, in which some of the peaks in the HMQC spectrum are more resolved in the HSQC spectrum. However, HSQC administers more pulses than HMQC, which causes miss-settings and inhomogeneity between the RF pulses, which in turn leads to loss of sensitivity. In HMBC (Heteronuclear Multiple Bond Coherence) experiments, two and three bond couplings can be detected. This technique is particularly useful for putting smaller proposed fragments of a molecule together to elucidate the larger overall structure. HMBC, on the other hand, cannot distinguish between 2JCH and 3JCH coupling constants. An example spectrum is shown in d.NOESY (Nuclear Overhauser Effect SpectroscopY) is an NMR experiment that can detect couplings between nuclei through spatial proximity (< 5 Å apart) rather than coupling through covalent bonds. The Nuclear Overhauser Effect (NOE) is the change in the intensity of the resonance of a nucleus upon irradiation of a nearby nucleus (about 2.5-3.5 Å apart). For example, when an RF pulse specifically irradiates a proton, its spin population is equalized and it can transfer its spin polarization to another proton and alter its spin population. The overall effect is dependent on a distance of r-6. NOESY uses a mixing time without pulses to accumulate NOEs and its counterpart ROESY (Rotating frame nuclear Overhauser Effect SpectroscopY) uses a series of pulses to accumulate NOEs. In NOESY, NOEs are positive when generated from small molecules, are negative when generated from large molecules (or molecules dissolved in a viscous solvent to restrict molecular tumbling), and are quite small (near zero) for medium-sized molecules. On the contrary, ROESY peaks are always positive, regardless of molecular weight. Both experiments are useful for determine proximity of nuclei in large biomolecules, especially proteins, where two atoms may be nearby in space, but not necessarily through covalent connectivity. Isomers, such as ortho-, meta-, and para-substituted aromatic rings, as well as stereochemistry, can also be distinguished through the use of an NOE experiment. Although NOESY and ROESY can generate COSY and TOCSY artifacts, respectively, those unwanted signals could be minimized by variations in the pulse sequences. Example NOESY and ROESY spectra are shown in .Much of the interpretation one needs to do with 2D NMR begins with focusing on the cross peaks and matching them according to frequency, much like playing a game of Battleship®. The 1D spectrum usually will be plotted along the axes, so one can match which couplings in one spectrum correlate to which splitting patterns in the other spectrum using the cross peaks on the 2D spectrum (see ). Also, multiple 2D NMR experiments are used to elucidate the structure of a single molecule, combining different information from the various sources. For example, one can combine homonuclear and heteronuclear experiments and piece together the information from the two techniques, with a process known as Parallel Acquisition NMR Spectroscopy or PANSY. 
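As a toy illustration of the "Battleship"-style matching just described, the short Python sketch below pairs cross-peak coordinates from a hypothetical COSY peak list with assigned 1D chemical shifts; the shift values, labels, and matching tolerance are invented for illustration only and are not taken from any real spectrum.

```python
# A toy illustration of matching 2D cross peaks to 1D chemical shifts.
# The peak list and shifts below are hypothetical, chosen only to show the bookkeeping.

# 1D proton chemical shifts (ppm) assigned to labelled protons
shifts_1d = {"Ha": 1.2, "Hb": 2.4, "Hc": 3.7, "Hd": 7.2}

# Cross-peak coordinates picked from a hypothetical COSY spectrum (F1 ppm, F2 ppm)
cross_peaks = [(1.2, 2.4), (2.4, 1.2), (3.7, 2.4), (2.4, 3.7)]

TOL = 0.05  # matching tolerance in ppm

def assign(ppm):
    """Return the label whose 1D shift lies within TOL of the given ppm value."""
    for label, shift in shifts_1d.items():
        if abs(shift - ppm) <= TOL:
            return label
    return None

# Keep each coupling once; the symmetry partner across the diagonal confirms it is real.
couplings = set()
for f1, f2 in cross_peaks:
    a, b = assign(f1), assign(f2)
    if a and b and a != b:
        couplings.add(tuple(sorted((a, b))))

print(sorted(couplings))   # [('Ha', 'Hb'), ('Hb', 'Hc')] -> Ha-Hb and Hb-Hc are coupled
```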
In the 1990s, co-variance processing came onto the scene, which allowed scientists to process information from two separate experiments, without having to run both experiments at the same time, which made for shorter data acquisition time. Currently, the software for co-variance processing is available from various NMR manufacturers. There are many possible ways to interpret 2D NMR spectra, though one common method is to label the cross peaks and make connections between the signals as they become apparent. Prof. James Nowick at UC Irvine describes his method of choice for putting the pieces together when determining the structure of a sample; the lecture in which he describes this method is posted in the links above. In this video, he provides a stepwise method to deciphering a spectrum.Within NMR spectroscopy, there are a vast variety of different methods to acquire data on molecular structure. In 1D and 2D experiments, one can simply adjust the appearance of the spectrum by changing any one of the many parameters that are set when running a sample, such as number of scans, relaxation delay times, the amount of pulses at various angles, etc. Many 3D and 4D NMR experiments are actually simply multiple 2D NMR pulse sequences run in sequence, which generates more correlation between different nuclei in a spin system. With 3D NMR experiments, three nuclei, for example 1H, 13C, and 15N can be studied together and their connectivity can be elucidated. These techniques become invaluable when working with biological molecules with complex 3D structures, such as proteins and polysaccharides, to analyze their structures in solution. These techniques, coupled with ultra-fast data acquisition, leads to monitoring complex chemical reactions and/or non-covalent interactions in real time. Through the use of these and other techniques, one can begin to supplement a characterization “toolbox” in order to continue solving complex chemical problems.Paramagnetic chemical exchange saturation transfer (PARACEST) is a powerful analytical tool that can elucidate many physical properties of molecules and systems of interest both in vivo and in vitro through specific paramagnetic agents. Although a relatively new imaging technique, applications for PARACEST imaging are growing as new imaging agents are being developed with enhanced exchange properties. Current applications revolve around using these PARACEST agents for MRI imaging to enhance contrast. However, the fundamentals of PARACEST can be used to measure properties such as temperature, pH, and concentration of molecules and systems as we will discuss. PARACEST was developed in response to several imaging limitations presented by diamagnetic agents. PARACEST spectral data can be easily obtained using NMR Spectroscopy while imaging can be typically achieved with widely available clinical 1.5/4 T MRI scanners.Chemical exchange saturation transfer (CEST) is a phenomenon that has been around since the 1960s. It was first discovered by Forsén, pictured below in , and Hoffman in 1963 and was termed magnetization transfer NMR. This technique was limited in its applications to studying rapid chemical exchange reactions. However in 2000, Balaban, pictured below in , revisited this topic and discovered the application of this phenomenon for imaging purposes. He termed the phenomenon chemical exchange saturation transfer. 
From this seminal finding, Balaban elucidated techniques to modulate MRI contrasts to reflect the exchange for imaging purposes.CEST imaging focuses on N-H, O-H, or S-H exchangeable protons. Observing these exchanges in diamagnetic molecules can be very challenging. Several models have been developed to overcome the challenges associated with imaging with clinical scanners. The focus of recent research has been to develop paramagnetic chemical exchange saturation transfer (PARACEST) agents. Typical PARACEST complexes are based on lanthanide atoms. Historically, these molecules were thought to be useless for chemical exchange due to their very fast water exchanges rates. However, recent works by Silvio Aime and Dean Sherry have shown modified lanthanide complexes that have very slow exchange rates that make it ideal for CEST imaging. In addition to slow exchange rates, these molecules have vastly different resonance frequencies which contributes to their enhanced contrast.Chemical exchange is defined as the process of proton exchange with surrounding bulk water. Exchange can occur with non-water exchange sites but it has been shown that its’ contribution is negligible. As stated before, CEST imaging focuses on N-H, O-H, or S-H exchangeable protons. Molecularly every exchange proton has a very specific saturation frequency. Applying a radio-frequency pulse that is the same as the proton’s saturation frequency results in a net loss of longitudinal magnetization. Longitudinal magnetization exists by virtue of being in a magnet. All protons in a solution line up with the magnetic field either in a parallel or antiparallel manner. There is a net longitudinal magnetization at equilibrium as the antiparallel state is higher in energy. A 90° RF pulse sequence causes many of the parallel protons to move to the higher energy antiparallel state causing zero longitudinal magnetization. This nonequilibrium state is termed as saturation, where the same amount of nuclear spins is aligned against and with the magnetic field. These saturated protons are exchangeable and the surrounding bulk water participates in this exchange called chemical exchange saturation transfer.This exchange can be visualized through spectral data. The saturated proton exchange with the surrounding bulk water causes the spectral signal from the bulk water to decrease due to decreased net longitudinal magnetization. This decrease can then be quantified and used to measure a wide variety of properties of a molecule or a solution. In the next sub-section, we will explore the quantification in more detail to provide a stronger conceptual understanding.Derivations of the chemical exchange saturation transfer mathematical models arise fundamentally from an understanding of the Boltzmann equation, \ref{17}. The Boltzmann equation mathematically defines the distribution of spins of a molecule placed in a magnetic field. There are many complex models that are used to provide a better understanding of the phenomenon. However, we will stick with a two-system model to simplify the mathematics to focus on conceptual understanding. In this model, there are two systems: bulk water (alpha) and an agent pool (beta). When the agent pool is saturated with a radiofrequency pulse, we make two important assumptions. 
The first is that all of the exchangeable protons are fully saturated, and the second is that the saturation process does not affect the bulk water protons, which retain their characteristic longitudinal magnetization.\[ \frac{N_{high\ energy}}{N_{low\ energy}}\ =\ exp( \frac{-\Delta E}{kT}) \label{17} \] To quantify the proton exchange we shall first define the equilibrium proton populations. The Boltzmann equation gives the distribution of the spin states at equilibrium, which is proportional to the proton concentration. As such, we shall label the equilibrium states of the two systems as \( M_{\alpha }^{0} \) and \( M_{\beta }^{0} \). Following saturation of the agent pool, protons of the bulk pool exchange with the saturated agent pool at a rate \( k_{\alpha } \), so the decrease in longitudinal (Z) magnetization is given by \( k_{\alpha } M^{Z}_{\alpha } \). Another effect that needs to be considered is the inherent relaxation of the protons, which increases the Z magnetization back toward its equilibrium level, \( M_{\alpha }^{0} \); this is estimated by \ref{18}, where \( T_{1 \alpha } \) is the longitudinal relaxation time for bulk water. Setting the two rates equal, to represent the steady state, gives \ref{19}, which can be rearranged to yield the generalized chemical exchange equation \ref{20}, where \( \tau _{\alpha } \ = k_{\alpha }^{-1} \) is the lifetime of a proton in the bulk pool, c is the concentration of protons in the respective system, and [n] is the number of exchangeable protons per CEST molecule. The lower the ratio Z, the more prominent the CEST effect. A plot of this equation over a range of saturation frequencies is called a Z-spectrum, also known as a CEST spectrum, shown in . This spectrum is then used to create CEST images.\[ \frac{M^{0}_{\alpha } - M^{Z}_{\alpha }}{T_{1\alpha }} \label{18} \]\[ k_{\alpha }M^{Z}_{\alpha } \ =\ \frac{M^{0}_{\alpha } - M^{Z}_{\alpha }}{T_{1\alpha }} \label{19} \]\[ Z\ = \frac{M^{Z}_{\alpha }}{M^{0}_{\alpha }} = \frac{1}{1\ +\ \frac{C_{\beta }[n]}{C_{\alpha }} \frac{T_{1\alpha }}{\tau _{\alpha }}} \label{20} \] A CEST agent must have several properties to maximize the CEST effect. The maximum CEST effect is observed when the residence lifetime of bulk water ( \(\tau _{\alpha } \) ) is as short as possible, which indirectly means that an effective CEST agent has a high exchange rate, \( k_{\alpha } \). Furthermore, the maximum effect is noted when the CEST agent concentration is high. In addition to these two properties, we need to consider the fact that the two-system model's assumptions are almost never true: the system is often less than fully saturated, resulting in a decrease in the observed CEST effect. As a result, we need to consider the power of the saturation pulses, B1. The relationship between \(\tau _{\alpha } \) and B1 is shown in \ref{21} below. An increase in saturation pulse power results in an increased CEST effect; however, we cannot apply too much B1 owing to in vivo limitations. Furthermore, the ideal \(\tau _{\alpha } \) can be calculated using this relationship.\[ \tau \ =\ \frac{1}{2\pi B_{1}} \label{21} \]
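To make \ref{20} concrete, the following minimal Python sketch evaluates Z for a few bulk-water lifetimes; the concentrations, proton count, and T1 value are illustrative assumptions rather than measured data. As expected from the discussion above, a shorter residence lifetime gives a larger CEST effect.

```python
# A minimal numerical sketch of the two-pool CEST expression (Eq. 20):
# Z = 1 / (1 + (c_beta * n / c_alpha) * (T1_alpha / tau_alpha)).
# All parameter values below are illustrative assumptions, not measured data.

c_alpha  = 110.0    # bulk water proton concentration, ~2 x 55 M
c_beta   = 20e-3    # CEST agent concentration, 20 mM (assumed)
n        = 4        # exchangeable protons per agent molecule (assumed)
T1_alpha = 3.0      # longitudinal relaxation time of bulk water, s (assumed)

def z_value(tau_alpha):
    """Steady-state bulk-water signal ratio Z for a bulk-water lifetime tau_alpha (s)."""
    return 1.0 / (1.0 + (c_beta * n / c_alpha) * (T1_alpha / tau_alpha))

for tau in (1.0, 0.1, 0.01):   # progressively faster effective exchange
    z = z_value(tau)
    print(f"tau_alpha = {tau:5.2f} s  ->  Z = {z:.3f}  (CEST effect = {1 - z:.1%})")
```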
Finally, another limitation needs to be considered, one that is inherent only to diamagnetic CEST and that provides an important distinction between CEST and PARACEST, as we will soon discuss. We assumed in the two-system model that saturation with a radiofrequency pulse did not affect the surrounding bulk water Z-magnetization. However, this is a large generalization that can only be made for PARACEST agents, as we shall soon see. Diamagnetic species, whether endogenous or exogenous, have a chemical shift difference (Δω) between the exchangeable –NH or –OH groups and the bulk water of less than 5 ppm. This small shift difference is a major limitation: selective saturation often leads to partial saturation of the bulk water protons. This is an even more important consideration in vivo, where the water peak is very broad. As such, we need to maximize the shift difference between the bulk water and the contrast agent. PARACEST addresses the two complications that arise with CEST. Application of a radiofrequency pulse close to the bulk water signal will result in some off-resonance saturation of the water; this essentially limits the usable saturation power, and it is that power which enhances the CEST effect. Furthermore, the slow-exchange condition (the exchange rate must be less than the saturation frequency difference, Δω) means that a very slow exchange rate is required for diamagnetic CEST agents of this sort. Both problems can be alleviated by using an agent that has a larger chemical shift separation, such as a paramagnetic species. The figure shows the broad Δω of a Eu3+ complex; the Eu3+ complex shifts the exchangeable-proton resonance, leading to a larger saturation frequency difference that can easily be detected. The red spectral line represents EuDOTA-(glycine ethyl ester)4 and the blue spectral line represents barbituric acid. Adapted from A. D. Sherry and M. Woods, Annu. Rev. Biomed. Eng., 2008, 10, 391. Based on the criteria established in \ref{22}, we see that only Eu3+, Tb3+, Dy3+, and Ho3+ are effective lanthanide CEST agents at the most common MRI field strength (1.5 T). However, at stronger field strengths Table \(\PageIndex{8}\) suggests greater CEST efficiency. With the exception of Sm3+, all other lanthanide complexes have shifts far from the water peak, providing the large Δω that is desired of CEST agents. This table should be considered before the design of a PARACEST experiment. Furthermore, the table alludes to the relationship between the power of the saturation pulse and the observed CEST effect: referring to \ref{23}, we see that an increased saturation pulse power gives an increased CEST effect. In fact, varying the B1 level changes the saturation offset, and the higher the B1 field, the higher the signal intensity at the saturation offset. As such, it is important to select a proper saturation pulse before experimentation.
\[ \Delta \omega \cdot \tau _{\alpha } \ =\ \frac{1}{2\pi B_{1}} \label{22} \] \[ \tau _{\alpha } \ =\ \frac{1}{2\pi B_{1}} \label{23} \] Two types of experiments can be run to quantify PARACEST. The first produces quantifiable Z-spectral data and is typically run on 400 MHz spectrometers with a B1 power between 200-1000 kHz and an irradiation time between 2 and 6 seconds, depending on the lanthanide complex. Imaging experiments are typically performed on either clinical scanners or small-bore MRI scanners at room temperature using a custom surface coil. Imaging experiments usually follow a defined sequence of acquisition steps, and the CEST contrast is quantified according to \ref{24}.\[ \frac{S_{sat(-\Delta \omega)} \ -\ S_{sat(\Delta \omega)}}{S_{0}} \label{24} \] PARACEST imaging has shown promise as a noninvasive technique for temperature mapping. Sherry et al. show a temperature dependence of the resonance frequency of a lanthanide-bound water molecule, establishing a linear correspondence over the range 20-50 °C. Furthermore, they demonstrate a feasible analysis technique to locate the chemical shift (δ) of the lanthanide in images with high spatial resolution: by constructing a plot of pixel intensity versus frequency offset, they can identify the temperature at each pixel and hence create a temperature map, as shown in . Divalent zinc is an integral transition metal that is prominent in many aqueous solutions and plays an important role in physiological systems. The ability to detect changes in the concentration of zinc ions provides valuable information about a system. Developing ligands that coordinate specific ions and enhance water-exchange characteristics can amplify the CEST profile. In this work, the authors develop a Eu(dotampy) sensor, shown in , for Zn ions. The authors theorize that the sensor coordinates zinc through its four pyridine donors in a square-antiprismatic manner, as determined by NMR spectroscopy (by observing water exchange rates) and by base catalysis (by observing CEST sensitivity); they were unable to analyze the coordination by X-ray crystallography. Following determination of successful CEST profiles, the authors mapped in vitro samples of varying Zn concentrations and were able to correlate image voxel intensity with Zn concentration, as shown in . Furthermore, they successfully demonstrated the specificity of the sensor for Zn over magnesium and calcium. This application is promising as a potential detection method for Zn ions in solutions with concentrations ranging from 5 nM to 0.12 μM. This page titled 4.7: NMR Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.8: EPR Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.08%3A_EPR_Spectroscopy
Electron paramagnetic resonance spectroscopy (EPR) is a powerful tool for investigating paramagnetic species, including organic radicals, inorganic radicals, and triplet states. The basic principles behind EPR are very similar to those of the more ubiquitous nuclear magnetic resonance spectroscopy (NMR), except that EPR focuses on the interaction of an external magnetic field with the unpaired electron(s) in a molecule, rather than with the nuclei of individual atoms. EPR has been used to investigate kinetics, mechanisms, and structures of paramagnetic species and, along with general chemistry and physics, has applications in biochemistry, polymer science, and the geosciences. The degeneracy of the electron spin states is lifted when an unpaired electron is placed in a magnetic field, creating two spin states, ms = ± ½, where ms = − ½, the lower energy state, is aligned with the magnetic field. The spin state of the electron can flip when electromagnetic radiation is applied; in the case of electron spin transitions, this corresponds to radiation in the microwave range. The energy difference between the two spin states is given by the equation\[ \Delta E \ =\ E_{+} - E_{-} = h \nu = g \beta B \label{1} \]where h is Planck's constant (6.626 x 10-34 J s), ν is the frequency of the radiation, β is the Bohr magneton (9.274 x 10-24 J T-1), B is the strength of the magnetic field in tesla, and g is known as the g-factor. The g-factor is a unitless measure of the intrinsic magnetic moment of the electron, and its value for a free electron is 2.0023. The value of g can vary, however, and can be calculated by rearrangement of the above equation, i.e.,\[ g = \dfrac{h \nu}{\beta B }\label{2} \]using the magnetic field and the frequency of the spectrometer. Since h, ν, and β do not change during an experiment, g values decrease as B increases. The concept of g can be roughly equated to that of chemical shift in NMR. EPR spectroscopy can be carried out by either 1) varying the magnetic field and holding the frequency constant or 2) varying the frequency and holding the magnetic field constant (as is the case for NMR spectroscopy). Commercial EPR spectrometers typically vary the magnetic field and hold the frequency constant, the opposite of NMR spectrometers. The majority of EPR spectrometers operate in the range of 8-10 GHz (X-band), though there are spectrometers which work at lower and higher frequencies: 1-2 GHz (L-band) and 2-4 GHz (S-band), 35 GHz (Q-band) and 95 GHz (W-band). EPR spectrometers work by generating microwaves from a source (typically a klystron), sending them through an attenuator, and passing them on to the sample, which is located in a microwave cavity ). Microwaves reflected back from the cavity are routed to the detector diode, and the signal comes out as a decrease in current at the detector, analogous to absorption of microwaves by the sample. Samples for EPR can be gases, single crystals, solutions, powders, and frozen solutions. For solutions, solvents with high dielectric constants are not advisable, as they will absorb microwaves. For frozen solutions, solvents that form a glass when frozen are preferable; good glasses are formed from solvents with low symmetry and solvents that do not hydrogen bond. Drago provides an extensive list of solvents that form good glasses. EPR spectra are generally presented as the first derivative of the absorption spectrum for ease of interpretation. An example is given in . Magnetic field strength is generally reported in units of gauss (G) or millitesla (mT).
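As a quick numerical check of the resonance condition above, g = hν/(βB), the short Python sketch below computes g from an assumed X-band frequency and resonance field; the specific frequency and field values are illustrative, not taken from a measured spectrum.

```python
# A quick numerical check of the resonance condition g = h*nu / (beta * B)
# for a hypothetical X-band measurement (values chosen for illustration).

h = 6.626e-34        # Planck constant, J s
beta = 9.274e-24     # Bohr magneton, J T^-1

def g_factor(freq_hz, field_mT):
    """g-factor from the microwave frequency (Hz) and resonance field (mT)."""
    return h * freq_hz / (beta * field_mT * 1e-3)

# e.g. a resonance observed at 339.0 mT with a 9.500 GHz source
print(round(g_factor(9.500e9, 339.0), 4))   # ~2.002, close to the free-electron value
```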
Often EPR spectra are very complicated, and analysis of spectra through the use of computer programs is usual. There are computer programs that will predict the EPR spectra of compounds with the input of a few parameters. Hyperfine coupling in EPR is analogous to spin-spin coupling in NMR. There are two kinds of hyperfine coupling: 1) coupling of the electron magnetic moment to the magnetic moment of its own nucleus, and 2) coupling of the electron to a nucleus of a different atom, called super hyperfine splitting. Both types of hyperfine coupling cause a splitting of the spectral lines, with intensities following Pascal's triangle for I = 1/2 nuclei, similar to J-coupling in NMR. A simulated spectrum of the methyl radical is shown in . The line is split equally by the three hydrogens, giving rise to four lines of intensity 1:3:3:1 with hyperfine coupling constant a. The hyperfine splitting constant, known as a, can be determined by measuring the distance between each of the hyperfine lines. This value can be converted into Hz (A) using the g value in the equation:\[ hA\ =\ g \beta a \label{3} \]In the specific case of Cu(II), the nuclear spin of Cu is I = 3/2, so the hyperfine splitting would result in four lines of intensity 1:1:1:1. Similarly, super hyperfine splitting of Cu(II) ligated to four symmetric I = 1 nuclei, such as 14N, would yield nine lines with intensities 1:4:10:16:19:16:10:4:1.
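The multiplet intensities quoted above can be generated by repeatedly convolving a flat (2I + 1)-line splitting pattern, once per equivalent nucleus. The short sketch below (a counting exercise, not tied to any spectrometer software) reproduces both the 1:3:3:1 methyl-radical pattern and the nine-line pattern for four equivalent I = 1 nuclei.

```python
# Hyperfine multiplet patterns for n equivalent nuclei of spin I, built by
# repeatedly convolving a flat (2I+1)-line pattern.

def hyperfine_pattern(n, two_I_plus_1):
    """Relative intensities of the 2nI+1 hyperfine lines for n equivalent nuclei."""
    pattern = [1]
    for _ in range(n):
        new = [0] * (len(pattern) + two_I_plus_1 - 1)
        for i, p in enumerate(pattern):
            for j in range(two_I_plus_1):
                new[i + j] += p
        pattern = new
    return pattern

print(hyperfine_pattern(3, 2))  # 3 x I=1/2 (methyl radical): [1, 3, 3, 1]
print(hyperfine_pattern(4, 3))  # 4 x I=1 (e.g. four 14N):    [1, 4, 10, 16, 19, 16, 10, 4, 1]
```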
The g factor of many paramagnetic species, including Cu(II), is anisotropic, meaning that it depends on its orientation in the magnetic field. The g factor for anisotropic species breaks down generally into three values of g following a Cartesian coordinate system which is symmetric along the diagonal: gx, gy, and gz. There are four limits to this system; condition (ii) corresponds to Cu(II) in a square planar geometry with the unpaired electron in the dx2-y2 orbital. Where hyperfine splitting is also involved, g is defined as the weighted average of the lines. Copper compounds play a valuable role in both synthetic and biological chemistry. Copper catalyzes a vast array of reactions, primarily oxidation-reduction reactions which make use of the Cu(I)/Cu(II) redox cycle. Copper is found in the active site of many enzymes and proteins, including the oxygen-carrying proteins called hemocyanins. Common oxidation states of copper include the less stable copper(I) state, Cu+, and the more stable copper(II) state, Cu2+. Copper(I) has a d10 electronic configuration with no unpaired electrons, making it undetectable by EPR. The d9 configuration of Cu2+ means that its compounds are paramagnetic, making EPR of Cu(II)-containing species a useful tool for both structural and mechanistic studies. Two literature examples of how EPR can provide insight into the mechanisms of reactivity of Cu(II) are discussed herein. Copper(II) centers typically have tetrahedral, or axially elongated octahedral, geometry. Their spectra are anisotropic and generally give signals of the axial or orthorhombic type. From the EPR spectra of copper(II) compounds, the coordination geometry can be determined. An example of a typical powder Cu(II) spectrum is shown in . The spectrum shows four absorption-like peaks corresponding to g‖, indicating coordination to four identical atoms, most likely nitrogen. There is also an asymmetric derivative peak corresponding to g⊥ at higher field, indicating elongation along the z axis. The reactivity and mechanism of Cu(II)-peroxy systems was investigated by studying the decomposition of the Cu(II) complex 1 with EPR as well as UV-Vis and Raman spectroscopy. The structure ) and EPR spectrum of 1 are given. It was postulated that decomposition of 1 may go through the intermediates LCu(II)OOH, LCu(II)OO•, or LCu(II)O•, where L = ligand. To determine the intermediate, a common radical trap, 5,5-dimethyl-1-pyrroline-N-oxide (DMPO), was added. A 1:1 complex of intermediate and DMPO was isolated and given the possible structure 2 , which is shown along with its EPR spectrum ). The EPR data show similar though distinct spectra for Cu(II) in each compound, indicating a similar coordination environment – elongated axial – and most likely a LCu(II)O• intermediate. The mechanism of oxidizing alcohols to aldehydes using a Cu(II) catalyst, TEMPO, and O2 was investigated using EPR. A proposed mechanism is given in . EPR studies were conducted during the reaction by taking aliquots at various time points and immediately freezing the samples for EPR analysis. The resulting spectra are shown in . The initial EPR spectrum (a) shows a Cu(II) signal with g‖ = 2.26, g⊥ = 2.06, A‖ = 520 MHz, and A⊥ < 50 MHz. After 4 hours, the signal for Cu(II) is no longer present in the reaction mixture, and the TEMPO signal has decreased significantly, suggesting that all of the Cu(II) has been reduced to Cu(I) and the majority of the TEMPO has been oxidized. After 8 hours, the signals for both Cu(II) and TEMPO have returned, indicating regeneration of both species. In this way, the EPR evidence supports the proposed mechanism. Electron nuclear double resonance (ENDOR) uses magnetic resonance to simplify the electron paramagnetic resonance (EPR) spectra of paramagnetic species (those which contain unpaired electrons). It is a powerful and advanced technique that works by probing the environment of these species. ENDOR was invented in 1956 by George Feher ). A transition metal's electron spin can interact with the nuclear spins of its ligands through dipolar contact interactions. This causes shifts in the nuclear magnetic resonance (NMR) lines of the ligand nuclei. The NMR technique uses these dipolar interactions, as they correspond to the nuclear spin's position relative to the metal atom, to give information about the nuclear coordinates. However, a paramagnetic species (one that contains unpaired electrons) complicates the NMR spectrum by broadening the lines considerably. EPR is a technique used to study paramagnetic compounds, but it too has limitations: it offers low resolution, with line broadening and line splitting. This is partly due to the coupling of the electron spins to surrounding nuclear spins. These couplings are, however, important for understanding a paramagnetic compound and for determining the coordinates of its ligands. While neither NMR nor EPR alone can readily resolve these coupling interactions, the two techniques can be used simultaneously, which is the concept behind ENDOR. An ENDOR experiment is a double resonance experiment in which NMR resonances are detected using intensity changes of an EPR line that is irradiated simultaneously.
An important difference is that in ENDOR the NMR transitions are detected indirectly, through their effect on the EPR signal, rather than by direct radiofrequency absorption, which results in an enhancement of the sensitivity by several orders of magnitude. The ENDOR technique thus involves monitoring the effect of a simultaneously driven NMR transition on an EPR transition, which allows for the detection of the NMR absorption with much greater sensitivity. In order to illustrate the ENDOR system, a two-spin system is used. This involves a magnetic field (B0) interacting with one electron (S = 1/2) and one proton (I = 1/2). The Hamiltonian for the two-spin system is given by \ref{4}. The equation contains three terms: the electron Zeeman interaction (EZ), the nuclear Zeeman interaction (NZ), and the hyperfine interaction (HFS). The EZ term describes the interaction between the spin of the electron and the applied magnetic field. The NZ term describes the interaction of the proton's magnetic moment with the magnetic field. The HFS term is the coupling between the spin of the electron and the nuclear spin of the proton. ENDOR spectra contain information on all three terms of the Hamiltonian.\[ H_{0} \ =\ H_{EZ}\ +\ H_{NZ}\ +\ H_{HFS} \label{4} \]\ref{4} can be further expanded to \ref{5}. gn is the nuclear g-factor, which characterizes the magnetic moment of the nucleus. S and I are the vector operators for the spins of the electron and nucleus, respectively. μB is the Bohr magneton (9.274 x 10-24 J T-1). μn is the nuclear magneton (5.05 x 10-27 J T-1). h is the Planck constant (6.626 x 10-34 J s). g and A are the g and hyperfine tensors. \ref{5} becomes \ref{6} by assuming only isotropic interactions and the magnetic field aligned along the Z-axis. In \ref{6}, g is the isotropic g-factor and a is the isotropic hyperfine constant.\[ H\ =\ \mu_{B}B_{0}gS\ -\ g_{n}\mu _{n}B_{0}I \ +\ hSAI \label{5} \]\[ H\ =\ g\mu_{B}B_{0}S_{Z} - g_{n}\mu _{n} B_{0} I_{Z} \ +\ haSI \label{6} \]The energy levels for the two-spin system can be calculated, ignoring second-order terms in the high-field approximation, from \ref{7}. This equation can be used to express the four possible energy levels of the two-spin system (S = 1/2, I = 1/2), given in \ref{8} - \ref{11}.\[E(M_{S},M_{I}) = g \mu _{B} B_{0} M_{S} - g_{n} \mu _{n} B_{0} M_{I} \ +\ haM_{S}M_{I} \label{7} \]\[E_{a}\ =\ -1/2g\mu _{B} B_{0} - 1/2g_{n} \mu _{n} B_{0} - 1/4ha \label{8} \]\[E_{b}\ =\ +1/2g\mu _{B} B_{0} - 1/2g_{n} \mu _{n} B_{0} + 1/4ha \label{9} \]\[E_{c}\ =\ +1/2g\mu _{B} B_{0} + 1/2g_{n} \mu _{n} B_{0} - 1/4ha \label{10} \]\[E_{d}\ =\ -1/2g\mu _{B} B_{0} + 1/2g_{n} \mu _{n} B_{0} + 1/4ha \label{11} \]We can apply the EPR selection rules to these energy levels (ΔMI = 0 and ΔMS = ±1) to find the two possible resonance transitions that can occur, shown in \ref{12} and \ref{13}. These equations can be further simplified by expressing them in frequency units, where νe = gμBB0/h, to derive \ref{14}, which defines the EPR transitions ). In the spectrum this would give two absorption peaks that are separated by the isotropic hyperfine splitting, a ).\[ \Delta E_{cd}\ =\ E_{c}\ -\ E_{d} \ =\ g \mu_{B} B - 1/2ha \label{12} \]\[ \Delta E_{ab}\ =\ E_{b}\ -\ E_{a} \ =\ g \mu_{B} B + 1/2ha \label{13} \]\[V_{EPR} \ =\ v_{e} \pm a/2 \label{14} \]
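To make these expressions concrete, the following Python sketch evaluates the four energy levels of \ref{7} and the resulting transition frequencies; the field, g-factor, and hyperfine constant are illustrative assumptions. The EPR lines follow the ΔMS = ±1, ΔMI = 0 rule of \ref{14}, while the NMR (ENDOR) lines follow ΔMS = 0, ΔMI = ±1 applied to the same energy levels.

```python
# Numerical sketch of the two-spin (S = 1/2, I = 1/2) energy levels of Eqs. 8-11
# and the resulting EPR and NMR (ENDOR) transition frequencies. The field and
# hyperfine constant below are illustrative assumptions for a proton coupling.

h    = 6.626e-34    # Planck constant, J s
mu_B = 9.274e-24    # Bohr magneton, J T^-1
mu_n = 5.051e-27    # nuclear magneton, J T^-1
g    = 2.0023       # electron g-factor (free-electron value)
g_n  = 5.586        # proton nuclear g-factor
B0   = 0.35         # magnetic field, T (X-band regime, assumed)
a    = 20e6         # isotropic hyperfine coupling, Hz (assumed)

def level(ms, mi):
    """Energy of the |mS, mI> state (Eq. 7), in joules."""
    return g * mu_B * B0 * ms - g_n * mu_n * B0 * mi + h * a * ms * mi

# EPR transitions: delta mS = +/-1, delta mI = 0  ->  nu_e +/- a/2 (Eq. 14)
nu_epr = [(level(+0.5, mi) - level(-0.5, mi)) / h for mi in (+0.5, -0.5)]
# NMR (ENDOR) transitions: delta mS = 0, delta mI = +/-1  ->  |nu_n +/- a/2|
nu_nmr = [abs(level(ms, +0.5) - level(ms, -0.5)) / h for ms in (+0.5, -0.5)]

print([f"{f/1e9:.4f} GHz" for f in nu_epr])   # two EPR lines split by a
print([f"{f/1e6:.3f} MHz" for f in nu_nmr])   # two ENDOR lines centred near nu_n
```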
ENDOR has advantages for both organic and inorganic paramagnetic species, as it is helpful in characterizing their structure in both solution and the solid state. First, it enhances the resolution gained for organic radicals in solution. In ENDOR, each group of equivalent nuclei contributes only two lines to the spectrum, and nonequivalent nuclei cause only an additive increase, as opposed to a multiplicative increase as in EPR. For example, the radical cation of 9,10-dimethylanthracene ) would produce 175 lines in an EPR spectrum because the spectrum includes three sets of inequivalent protons, whereas ENDOR produces only three pairs of lines (one for each set of equivalent nuclei), which can be used to find the hyperfine couplings. This is also shown in . ENDOR can also be used to obtain structural information from the powder EPR spectra of metal complexes. ENDOR spectroscopy can be used to obtain the electron-nuclear hyperfine interaction tensor, which is the most sensitive probe for structure determination. A magnetic field that assumes all possible orientations with respect to the molecular frame is applied to the randomly oriented molecules; the resulting resonances are superimposed on each other and make up the powder EPR spectrum. ENDOR measurements are made at a selected field position in the EPR spectrum, which contains only the subset of molecules whose orientations contribute to the EPR intensity at the chosen value of the observing field. By selecting EPR turning points at magnetic field values that correspond to defined molecular orientations, a "single-crystal-like" ENDOR spectrum is obtained. This is also called an "orientation-selective" ENDOR experiment, in which simulation of the data can be used to obtain the principal components of the magnetic tensors for each interacting nucleus. This information can then be used to provide structural information about the distance and spatial orientation of the remote nucleus. This is especially valuable because a three-dimensional structure can be obtained for a paramagnetic system for which a single crystal cannot be prepared. This page titled 4.8: EPR Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.9: X-ray Photoelectron Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.09%3A_X-ray_Photoelectron_Spectroscopy
X-ray photoelectron spectroscopy (XPS), also called electron spectroscopy for chemical analysis (ESCA), is a method used to determine the elemental composition of a material’s surface. It can be further applied to determine the chemical or electronic state of these elements.The photoelectric effect is the ejection of electrons from the surface of a material upon exposure to electromagnetic radiation of sufficient energy. Electrons emitted have characteristic kinetic energies proportional to the energy of the radiation, according to \ref{1}, where KE is the kinetic energy of the electron, h is Planck’s constant, ν is the frequency of the incident radiation, Eb is the ionization, or binding, energy, and φ is the work function. The work function is a constant which is dependent upon the spectrometer.\[ KE\ =\ h \nu \ -\ E_{b}\ -\ \varphi \label{1} \]In photoelectron spectroscopy, high energy radiation is used to expel core electrons from a sample. The kinetic energies of the resulting core electrons are measured. Using the equation with the kinetic energy and known frequency of radiation, the binding energy of the ejected electron may be determined. By Koopman’s theorem, which states that ionization energy is equivalent to the negative of the orbital energy, the energy of the orbital from which the electron originated is determined. These orbital energies are characteristic of the element and its state.As a surface technique, samples are particularly susceptible to contamination. Furthermore, XPS samples must be prepared carefully, as any loose or volatile material could contaminate the instrument because of the ultra-high vacuum conditions. A common method of XPS sample preparation is embedding the solid sample into a graphite tape. Samples are usually placed on 1 x 1 cm or 3 x 3 cm sheets.Monochromatic aluminum (hν = 1486.6 eV) or magnesium (hν = 1253.6 eV) Kα X-rays are used to eject core electrons from the sample. The photoelectrons ejected from the material are detected and their energies measured. Ultra-high vacuum conditions are used in order to minimize gas collisions interfering with the electrons before they reach the detector.XPS analyzes material between depths of 1 and 10 nm, which is equivalent to several atomic layers, and across a width of about 10 µm. Since XPS is a surface technique, the orientation of the material affects the spectrum collected.X-ray photoelectron (XP) spectra provide the relative frequencies of binding energies of electrons detected, measured in electron-volts (eV). Detectors have accuracies on the order of ±0.1 eV. The binding energies are used to identify the elements to which the peaks correspond. XPS data is given in a plot of intensity versus binding energy. Intensity may be measured in counts per unit time (such as counts per second, denoted c/s). Often, intensity is reported as arbitrary units (arb. units), since only relative intensities provide relevant information. Comparing the areas under the peaks gives relative percentages of the elements detected in the sample. Initially, a survey XP spectrum is obtained, which shows all of the detectable elements present in the sample. Elements with low detection or with abundances near the detection limit of the spectrometer may be missed with the survey scan. shows a sample survey XP scan of fluorinated double-walled carbon nanotubes (DWNTs).Subsequently, high resolution scans of the peaks can be obtained to give more information. 
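Before turning to chemical-state effects, a short numerical sketch of the photoelectric relation above may help: given the photon energy, the measured kinetic energy, and the spectrometer work function, the binding energy follows directly. The kinetic energy and work function values used below are illustrative assumptions, not instrument values.

```python
# A short numerical sketch of the photoelectric relation E_b = h*nu - KE - phi,
# using the Al K-alpha line (1486.6 eV). The kinetic energy and work function
# below are illustrative assumptions.

H_NU_AL_KALPHA = 1486.6   # eV, Al K-alpha photon energy
WORK_FUNCTION  = 4.5      # eV, spectrometer work function (assumed)

def binding_energy(kinetic_energy_eV, h_nu=H_NU_AL_KALPHA, phi=WORK_FUNCTION):
    """Binding energy (eV) of the ejected core electron."""
    return h_nu - kinetic_energy_eV - phi

# e.g. a photoelectron detected at 1197.5 eV kinetic energy
print(binding_energy(1197.5))   # ~284.6 eV, in the range expected for C 1s
```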
Elements of the same kind in different states and environments have slightly different characteristic binding energies. Computer software is used to fit peaks within the elemental peak which represent different states of the same element, commonly called deconvolution of the elemental peak. and show high resolutions scans of C1s and F1s peaks, respectively, from , along with the peak designations.Both hydrogen and helium cannot be detected using XPS. For this reason, XPS can provide only relative, rather than absolute, ratios of elements in a sample. Also, elements with relatively low atomic percentages close to that of the detection limit or low detection by XPS may not be seen in the spectrum. Furthermore, each peak represents a distribution of observed binding energies of ejected electrons based on the depth of the atom from which they originate, as well as the state of the atom. Electrons from atoms deeper in the sample must travel through the above layers before being liberated and detected, which reduces their kinetic energies and thus increases their apparent binding energies. The width of the peaks in the spectrum consequently depends on the thickness of the sample and the depth to which the XPS can detect; therefore, the values obtained vary slightly depending on the depth of the atom. Additionally, the depth to which XPS can analyze depends on the element being detected.High resolution scans of a peak can be used to distinguish among species of the same element. However, the identification of different species is discretionary. Computer programs are used to deconvolute the elemental peak. The peaks may then be assigned to particular species, but the peaks may not correspond with species in the sample. As such, the data obtained must be used cautiously, and care should be taken to avoid over-analyzing data.Despite the aforementioned limitations, XPS is a powerful surface technique that can be used to accurately detect the presence and relative quantities of elements in a sample. Further analysis can provide information about the state and environment of atoms in the sample, which can be used to infer information about the surface structure of the material. This is particularly useful for carbon nanomaterials, in which surface structure and composition greatly influence the properties of the material. There is much research interest in modifying carbon nanomaterials to modulate their properties for use in many different applications.Carbon nanomaterials present certain issues in regard to sample preparation. The use of graphite tape is a poor option for carbon nanomaterials because the spectra will show peaks from the graphite tape, adding to the carbon peak and potentially skewing or overwhelming the data. Instead, a thin indium foil (between 0.1 and 0.5 mm thick) is used as the sample substrate. The sample is simply pressed onto a piece of the foil.The XP survey scan is an effective way to determine the identity of elements present on the surface of a material, as well as the approximate relative ratios of the elements detected. This has important implications for carbon nanomaterials, in which surface composition is of greatest importance in their uses. XPS may be used to determine the purity of a material. For example, nanodiamond powder is a created by detonation, which can leave nitrogenous groups and various oxygen containing groups attached to the surface. 
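Relative atomic percentages such as those quoted for the nanodiamond example below are typically obtained by dividing each peak area by an element-specific relative sensitivity factor (RSF) before normalizing. The following minimal sketch illustrates only the arithmetic; the peak areas and RSF values are assumed for illustration, not taken from tabulated instrument data.

```python
# A minimal sketch of estimating relative atomic percentages from survey-scan
# peak areas: divide each area by an assumed relative sensitivity factor (RSF),
# then normalize. Areas and RSFs below are illustrative assumptions.

peak_areas = {"C1s": 50000.0, "O1s": 9500.0, "N1s": 2200.0}   # arb. units (assumed)
rsf        = {"C1s": 1.00,    "O1s": 2.93,   "N1s": 1.80}     # assumed RSFs

corrected = {k: peak_areas[k] / rsf[k] for k in peak_areas}
total = sum(corrected.values())
atomic_percent = {k: 100.0 * v / total for k, v in corrected.items()}

for element, pct in atomic_percent.items():
    print(f"{element}: {pct:.1f} at.%")
```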
The figure shows a survey scan of a nanodiamond thin film, with relative atomic percentages of carbon, oxygen, and nitrogen of 91.25%, 6.25%, and 1.7%, respectively. Based on the XPS data, the nanodiamond material is approximately 91.25% pure. XPS is a useful method to verify the efficacy of a purification process. For example, high-pressure CO conversion single-walled nanotubes (HiPco SWNTs) are made using iron as a catalyst; the figure shows the Fe2p XP spectra for pristine and purified HiPco SWNTs. For this application, XPS is often done in conjunction with thermogravimetric analysis (TGA), which measures the weight lost from a sample at increasing temperatures. TGA data serve to corroborate the changes observed with the XPS data by comparing the percentage of weight loss around the region of the impurity suspected based on the XP spectra. The TGA data support the reduction in iron content with purification suggested by the XP spectra above, for the weight loss at temperatures consistent with iron loss decreases from 27% in pristine SWNTs to 18% in purified SWNTs. Additionally, XPS can provide information about the nature of the impurity. In , the Fe2p spectrum for pristine HiPco SWNTs shows two peaks characteristic of metallic iron at 707 and 720 eV. In contrast, the Fe2p spectrum for purified HiPco SWNTs shows two peaks at 711 and 724 eV, which are characteristic of either Fe2O3 or Fe3O4. In general, the atomic percentage of carbon obtained from the XPS spectrum is a measure of the purity of the carbon nanomaterials. XP spectra give evidence of functionalization and can provide insight into the identity of the functional groups. Carbon nanomaterials provide a versatile surface which can be functionalized to modulate their properties. For example, the sodium salt of phenyl-sulfonated SWNTs is water soluble. In the XP survey scan of the phenyl-sulfonated SWNTs, there is evidence of functionalization owing to the appearance of the S2p peak. The figure shows the survey XP spectrum of phenyl-sulfonated SWNTs. The survey XP spectrum of the sodium salt shows a Na1s peak, and the high resolution scans of Na1s and S2p show that the relative atomic percentages of Na1s and S2p are nearly equal , which supports the formation of the sodium salt. High resolution scans of each of the element peaks of interest can be obtained to give more information about the material. This is a way to determine with high accuracy the presence of elements as well as the relative ratios of elements present in the sample. Such scans can be used to distinguish species of the same element in different chemical states and environments, such as through bonding and hybridization, present in the material. The distinct peaks may have binding energies that differ slightly from that of the convoluted elemental peak. Assignment of peaks can be done using XPS databases, such as that produced by NIST. The ratios of the intensities of these peaks can be used to determine the percentage of atoms in a particular state. Discrimination between, and identification of, elements in different states and environments is a strength of XPS that is of particular interest for carbon nanomaterials.
Hybridization
The hybridization of carbons influences the properties of a carbon nanomaterial and has implications for its structure. XPS can be used to determine the hybridization of carbons on the surface of a material, such as graphite and nanodiamond. Graphite is a carbon material consisting of sp2 carbons.
Thus, theoretically, the XPS of pure graphite would show a single C1s peak with a binding energy characteristic of sp2 carbon (around 284.2 eV). On the other hand, nanodiamond consists of sp3-bonded carbons, so the XPS of nanodiamond should show a single C1s peak with a binding energy characteristic of sp3 carbon (around 286 eV). The ratio of the sp2 and sp3 peaks in the C1s spectrum gives the ratio of sp2 and sp3 carbons in the nanomaterial. Changes in this ratio can be followed and compared by collecting C1s spectra. For example, laser treatment of graphite creates diamond-like material with more sp3 character when a higher laser power is used. This can be observed in , in which the C1s peak is broadened and shifted to higher binding energies as increased laser power is applied. Alternatively, annealing nanodiamond thin films at very high temperatures creates graphitic layers on the nanodiamond surface, increasing the sp2 content. The extent of graphitization increases with the temperature at which the sample is annealed, as shown in .
Reaction Completion
Comparing the relative intensities of various C1s peaks can be powerful in verifying that a reaction has occurred. Fluorinated carbon materials are often used as precursors to a broad range of variously functionalized materials. Reaction of fluorinated SWNTs (F-SWNTs) with polyethyleneimine (PEI) leads to a decrease in the covalent carbon-fluorine C1s peak, as well as the evolution of the amine C1s peak. These changes are observed in the C1s spectra of the two samples ).
Nature and Extent of Functionalization
XPS can also be applied to determine the nature and extent of functionalization. In general, binding energy increases with decreasing electron density about the atom. Species in more positive oxidation states have higher binding energies, while more reduced species experience a greater degree of shielding, thus increasing the ease of electron removal. The method of fluorination of carbon materials, and factors such as the temperature and length of fluorination, affect the extent of fluoride addition as well as the types of carbon-fluorine bonds present. A survey scan can be used to determine the amount of fluorine compared to carbon, and high resolution scans of the C1s and F1s peaks can also give information about the proportion and types of bonds. A shift in the peaks, as well as changes in peak width and intensity, can be observed in the spectra as an indication of the fluorination of graphite. The figure shows the C1s and F1s spectra of samples containing varying ratios of carbon to fluorine. Furthermore, different carbon-fluorine bonds show characteristic peaks in high resolution C1s and F1s spectra. The carbon-fluorine interactions in a material can range from ionic to covalent; covalent carbon-fluorine bonds show higher core electron binding energies than bonds more ionic in character. The method of fluorination affects the nature of the fluorine bonds. Graphite intercalation compounds are characterized by ionic carbon-fluorine bonding. The figure shows the F1s spectra for two fluorinated exfoliated graphite samples prepared by different methods. Also, the peaks for carbons attached to a single fluorine atom, carbons attached to two fluorine atoms, and carbons adjacent to fluorinated carbons have characteristic binding energies.
These peaks are seen in that C1s spectra of F- and PEI-SWNTs shown in .Table \(\PageIndex{1}\) lists various bonds and functionalities and the corresponding C1s binding energies, which may be useful in assigning peaks in a C1s spectrum, and consequently in characterizing the surface of a material.X-ray photoelectron spectroscopy is a facile and effective method for determining the elemental composition of a material’s surface. As a quantitative method, it gives the relative ratios of detectable elements on the surface of the material. Additional analysis can be done to further elucidate the surface structure. Hybridization, bonding, functionalities, and reaction progress are among the characteristics that can be inferred using XPS. The application of XPS to carbon nanomaterials provides much information about the material, particularly the first few atomic layers, which are most important for the properties and uses of carbon nanomaterials.This page titled 4.9: X-ray Photoelectron Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.10: ESI-QTOF-MS Coupled to HPLC and its Application for Food Safety
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.10%3A_ESI-QTOF-MS_Coupled_to_HPLC_and_its_Application_for_Food_Safety
High-performance liquid chromatography (HPLC) is a very powerful separation method widely used in environmental science, the pharmaceutical industry, biological and chemical research, and other fields. Generally, it can be used to purify, identify, and/or quantify one or several components in a mixture simultaneously. Mass spectrometry (MS) is a detection technique that measures the mass-to-charge ratio of ionic species. The procedure consists of several steps. First, a sample is injected into the instrument and then evaporated. Second, species in the sample are charged by an ionization method, such as electron ionization (EI), electrospray ionization (ESI), chemical ionization (CI), or matrix-assisted laser desorption/ionization (MALDI). Finally, the ionic species are analyzed according to their mass-to-charge ratio (m/z) in an analyzer, such as a quadrupole, time-of-flight (TOF) analyzer, ion trap, or Fourier transform ion cyclotron resonance analyzer. Mass spectrometric identification is widely used together with chromatographic separation. The most common combinations are gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-mass spectrometry (LC-MS). Because of its high sensitivity, selectivity, and relatively low price, GC-MS has very wide applications in drug detection, environmental analysis, and so forth. For organic chemistry research groups, it is also a convenient, daily-use instrument. However, GC-MS is ineffective if the molecules have high boiling points and/or decompose at high temperature. In this module, we will mainly discuss liquid chromatography coupled to electrospray ionization quadrupole time-of-flight mass spectrometry (LC/ESI-QTOF-MS). As mentioned above, LC has an efficient separation capacity, while MS has high sensitivity and a strong ability for structural characterization. Furthermore, TOF-MS has several distinctive properties on top of regular MS, including fast acquisition rates, high accuracy in mass measurements, and a large mass range. The combination of LC and ESI-TOF-MS gives us a powerful tool for the quantitative and qualitative analysis of molecules in complex matrices by reducing matrix interferences. It may play an important role in the area of food safety. Generally, LC-MS has four components: an autosampler, the HPLC, the ionization source, and the mass spectrometer, as shown in . Here we need to pay attention to the interface between the HPLC and the MS so that they are compatible and can be connected. There are dedicated separation columns for HPLC-MS, whose inner diameter (I.D.) is usually 2.0 mm, and the flow rate, 0.05-0.2 mL/min, is slower than in typical HPLC. For the mobile phase, a combination of water with methanol and/or acetonitrile is used. Because nonvolatile ions will suppress the signals in MS, any mobile-phase modifier should be volatile, such as HCO2H, CH3CO2H, [NH4][HCO2], or [NH4][CH3CO2]. As the interface between HPLC and MS, the ionization source is also important. There are many types, of which ESI and atmospheric pressure chemical ionization (APCI) are the most common; both operate at atmospheric pressure, high voltage, and high temperature. In ESI, the column eluent is nebulized in a high-voltage field (3-5 kV), producing very small charged droplets; individual ions formed in this process then pass into the mass spectrometer. There are many types of mass spectrometers which can be coupled with HPLC.
One of the most widely used MS systems is the single quadrupole mass spectrometer, which is not very expensive, shown in . This system has two modes. One mode is total ion monitoring (TIM) mode, which provides the total ion chromatograph; the other is selected ion monitoring (SIM) mode, in which the user can choose to monitor specific ions, and the latter's sensitivity is much higher than the former's. Further, the mass resolution of the single quadrupole mass spectrometer is 1 Da, and its largest detection mass range is 30-3000 Da. The second MS system is the triple quadrupole MS-MS system, shown in . Using this system, one can select certain ions, called parent ions, and fragment them by collision in a second stage to obtain fragment ions, called daughter ions. In other words, there are two stages of selection for the target molecules, which greatly reduces the matrix effect. This system is very useful in the analysis of biological samples, because biological samples always have very complex matrices; however, the mass resolution is still 1 Da. The third system is the time-of-flight (TOF) MS, shown in , which can provide a higher mass resolution spectrum, to 3-4 decimal places of Da. Furthermore, it can detect a very large range of masses at a very fast speed: the largest detection mass range is 20-10000 Da. However, the price of this kind of MS is very high. The last technique is a hybrid mass spectrometer, Q-TOF MS, which combines a single quadrupole MS and a TOF MS. Using this MS, we can obtain high-resolution chromatographs and we can also use the MS-MS capability to identify the target molecules. Quinolones are a family of common antibacterial veterinary medicines that inhibit DNA gyrase in bacterial cells. However, residues of quinolones in edible animal products may be directly toxic or give rise to resistant pathogens in humans. Therefore, sensitive methods are required to monitor such residues possibly present in different animal-derived foods, such as eggs, chicken, milk, and fish. The molecular structures of eight quinolones, ciprofloxacin (CIP), danofloxacin methanesulphonate (DAN), enrofloxacin (ENR), difloxacin (DIF), sarafloxacin (SARA), oxolinic acid (OXO), flumequine (FLU), and ofloxacin (OFL), are shown in . LC-MS is a common detection approach in the field of food safety, but because of the complex matrix of the samples, it is always difficult to detect target molecules of low concentration using single quadrupole MS. The following gives an example of the application of LC/ESI-QTOF-MS. The separation used a quaternary pump system, a Q-TOF-MS system, and a C18 column (250 mm × 2.0 mm I.D., 5 µm) with a flow rate of 0.2 mL/min and a mobile phase comprising 0.3% formic acid solution and acetonitrile. The gradient profile for the mobile phase is shown in Table \(\PageIndex{1}\). Since the quinolones carry a positive charge under acidic pH conditions, all mass spectra were acquired in positive ion mode, summing 30,000 single spectra in the mass range of 100-500 Da. The optimal ionization source working parameters were as follows: capillary voltage 4.5 kV; ion energy of the quadrupole 5 eV/z; dry temperature 200 °C; nebulizer 1.2 bar; dry gas 6.0 L/min. During the experiments, HCO2Na (62 Da) was used to externally calibrate the instrument. Because of the high mass accuracy of the TOF mass spectrometer, matrix effects can be greatly reduced. Three different chromatographs are shown in . The top one is the total ion chromatograph with a mass window of 400 Da.
It’s impossible to distinguish the target molecules in this chromatograph. The middle one is at one Da resolution, which is the resolution of single quadrupole mass spectrometer. In this chromatograph, some of the molecules can be identified. But noise intensity is still very high and there are several peaks of impurities with similar mass-to-charge ratios in the chromatograph. The bottom one is at 0.01 Da resolution. It clearly shows the peaks of eight quinolones with very high signal to noise ratio. In other words, due to the fast acquisition rates and high mass accuracy, LC/TOF-MS can significantly reduce the matrix effects.The quadrupole MS can be used to further confirm the target molecules. shows the chromatograms obtained in the confirmation of CIP (17.1 ng/g) in a positive milk sample and ENR (7.5 ng/g) in a positive fish sample. The chromatographs of parent ions are shown on the left side. On the right side, they are the characteristic daughter ion mass spectra of CIP and ENR.Some of the drawbacks of LC/Q-TOF-MS are its high costs of purchase and maintenance. It is hard to apply this method in daily detection in the area of environmental protection and food safety.In order to reduce the matrix effect and improve the detection sensitivity, people may use some sample preparation methods, such as liquid-liquid extraction (LLE), solid-phase extraction (SPE), distillation. But these methods would consume large amount of samples, organic solvent, time and efforts. Nowadays, there appear some new sample preparation methods. For example, people may use online microdialysis, supercritical fluid extraction (SFE) and pressurized liquid extraction. In the method mentioned in the Application part, we use online in-tube solid-phase microextraction (SPME), which is an excellent sample preparation technique with the features of small sample volume, simplicity solventless extraction and easy automation.This page titled 4.10: ESI-QTOF-MS Coupled to HPLC and its Application for Food Safety is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.