4.11: Mass Spectrometry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/04%3A_Chemical_Speciation/4.11%3A_Mass_Spectrometry
Mass spectrometry (MS) is a powerful characterization technique used for the identification of a wide variety of chemical compounds. At its simplest, MS is merely a tool for determining the molecular weight of the chemical species in a sample. However, with the high resolution obtainable from modern instruments, it is possible to distinguish isomers, isotopes, and even compounds with nominally identical molecular weights. Libraries of mass spectra have been compiled which allow rapid identification of most known compounds, including proteins as large as 100 kDa (100,000 amu).

Mass spectrometers separate compounds based on a property known as the mass-to-charge ratio (m/z). The sample to be identified is first ionized and then passed through some form of electric or magnetic field. Based on parameters such as how long the ion takes to travel a certain distance or the amount of deflection caused by the field, a mass can be calculated for the ion. As will be discussed later, there is a wide variety of techniques for ionizing and detecting compounds.

Limitations of MS generally stem from compounds that are not easily ionizable, or which decompose upon ionization. Geometric isomers can generally be distinguished easily, but differences in chirality are not easily resolved. Complications can also arise from samples which are not easily dissolved in common solvents.

In electron impact (EI) ionization, a vaporized sample is passed through a beam of electrons. The high-energy (typically 70 eV) beam strips electrons from the sample molecules, leaving a positively charged radical species. This molecular ion is typically unstable and undergoes decomposition or rearrangement to produce fragment ions. Because of this, electron impact is classified as a "hard" ionization technique. With regard to metal-containing compounds, fragments in EI will almost always contain the metal atom (i.e., [MLn]+• fragments to [MLn-1]+ + L•, not MLn-1• + L+).
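To make the resolution point above concrete: N2, CO, and C2H4 all have a nominal mass of 28 but distinct exact masses, which is why high-resolution instruments can tell them apart. A minimal sketch, using standard monoisotopic atomic masses hard-coded here for illustration:

```python
# Monoisotopic atomic masses in daltons (standard reference values).
MASSES = {"H": 1.007825, "C": 12.000000, "N": 14.003074, "O": 15.994915}

def exact_mass(formula):
    """Sum monoisotopic masses for a composition given as {element: count}."""
    return sum(MASSES[el] * n for el, n in formula.items())

# Three species with the same nominal mass (28) but different exact masses.
n2   = exact_mass({"N": 2})            # ~28.0061
co   = exact_mass({"C": 1, "O": 1})    # ~27.9949
c2h4 = exact_mass({"C": 2, "H": 4})    # ~28.0313

print(round(n2, 4), round(co, 4), round(c2h4, 4))  # 28.0061 27.9949 28.0313
```

A spectrometer that resolves a few millidaltons at m/z 28 can therefore distinguish all three species, while a nominal-mass measurement cannot.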
One of the main limitations of EI is that the sample must be volatile and thermally stable.

In chemical ionization (CI), the sample is introduced to a chamber filled with excess reagent gas (such as methane). The reagent gas is ionized by electrons, forming a plasma with species such as CH5+, which react with the sample to form the pseudomolecular ion [M+H]+. Because CI does not involve radical reactions, fragmentation of the sample is generally much lower than in EI. CI can also be operated in negative mode (to generate anions) by using different reagent gases. For example, a mixture of CH4 and N2O will generate hydroxide ions, which can abstract protons to yield the [M-H]- species. A related technique, atmospheric pressure chemical ionization (APCI), delivers the sample as a neutral spray, which is then ionized by corona discharge, producing ions in a similar manner to that described above. APCI is particularly suited to low molecular weight, nonpolar species that cannot be easily analyzed by other common techniques such as ESI.

Field ionization and field desorption are two closely related techniques which use quantum tunneling of electrons to generate ions. Typically, a highly positive potential is applied to an electrode with a sharp point, resulting in a high potential gradient at the tip. As the sample reaches this field, electron tunneling occurs to generate the cation, which is repelled into the mass analyzer. Field ionization utilizes gaseous samples, whereas in field desorption the sample is adsorbed directly onto the electrode. Both of these techniques are soft, resulting in low-energy ions which do not easily fragment.

In electrospray ionization (ESI), a highly charged aerosol is generated from a sample in solution. As the droplets shrink due to evaporation, the charge density increases until a Coulombic explosion occurs, producing daughter droplets that repeat the process until individual sample ions are generated. One of the limitations of ESI is the requirement that the sample be soluble.
ESI is best applied to charged, polar, or basic compounds.

Laser desorption ionization generates ions by ablation from a surface using a pulsed laser. This technique is greatly improved by the addition of a matrix co-crystallized with the sample (matrix-assisted laser desorption ionization, MALDI). As the sample is irradiated, a plume of desorbed molecules is generated. It is believed that ionization occurs in this plume through a variety of chemical and physical interactions between the sample and the matrix. One of the major advantages of MALDI is that it produces singly charged ions almost exclusively and can be used to volatilize extremely high molecular weight species such as polymers and proteins. A related technique, desorption ionization on silicon (DIOS), also uses laser desorption, but the sample is immobilized on a porous silicon surface with no matrix. This allows the study of low molecular weight compounds which may be obscured by matrix peaks in conventional MALDI.

In inductively coupled plasma mass spectrometry (ICP-MS), a plasma torch generated by electromagnetic induction is used to ionize samples. Because the effective temperature of the plasma is about 10,000 °C, samples are broken down to ions of their constituent elements. Thus, all chemical information is lost, and the technique is best suited to elemental analysis. ICP-MS is typically used for the analysis of trace elements.

Fast atom bombardment (FAB) and secondary ion mass spectrometry (SIMS) both involve sputtering a sample to generate individual ions; FAB utilizes a stream of inert gas atoms (argon or xenon), whereas SIMS uses ions such as Cs+. Ionization occurs by charge transfer between the ions and the sample or by protonation from the matrix material. Both solid and liquid samples may be analyzed. A unique aspect of these techniques for the analysis of solids is the ability to do depth profiling, a consequence of the destructive nature of the ionization.

Depending on the information desired from a mass spectrometry analysis, different ionization techniques may be preferred.
For example, a hard ionization method such as electron impact may be used for a complex molecule in order to determine its component parts by fragmentation. On the other hand, a high molecular weight sample of polymer or protein may require an ionization method such as MALDI in order to be volatilized. Often, samples may be easily analyzed using multiple ionization methods, and the choice simplifies to the most convenient method. For example, electrospray ionization may be easily coupled to liquid chromatography systems, as no additional sample preparation is required. Table \(\PageIndex{1}\) provides a quick guide to the ionization techniques typically applied to various types of samples.

In a sector mass analyzer, a magnetic or electric field is used to deflect ions into curved trajectories depending on their m/z ratio, with heavier ions experiencing less deflection. Ions are brought into focus at the detector slit by varying the field strength; a mass spectrum is generated by scanning field strengths linearly or exponentially. Sector mass analyzers have high resolution and sensitivity and can detect high mass ranges, but they are expensive, require large amounts of space, and are incompatible with the most popular ionization techniques, MALDI and ESI.

In a time-of-flight (TOF) analyzer, the amount of time required for an ion to travel a known distance is measured. A pulse of ions is accelerated through an electric field such that all ions have identical kinetic energies. As a result, their velocity is directly dependent on their mass. Extremely high vacuum conditions are required to extend the mean free path of the ions and avoid collisions. TOF mass analyzers are the fastest, have unlimited mass ranges, and allow simultaneous detection of all species, but they are best coupled with pulsed ionization sources such as MALDI.

In a quadrupole analyzer, ions are passed through four parallel rods which apply a varying voltage and radiofrequency potential. As the field changes, ions respond by undergoing complex trajectories.
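The time-of-flight relation described above (equal kinetic energies, so arrival time depends only on m/z) can be sketched numerically. The accelerating voltage and flight-tube length below are illustrative values, not tied to any particular instrument:

```python
import math

DA = 1.660539e-27   # kg per dalton
E  = 1.602177e-19   # elementary charge, C

def flight_time(mz, voltage=20_000.0, length=1.0):
    """Arrival time (s) for a singly charged ion of the given m/z after
    acceleration through `voltage` volts over a field-free path of
    `length` metres.  From z*e*V = (1/2) m v^2  =>  t = L*sqrt(m/(2 z e V))."""
    m = mz * DA                          # ion mass for z = 1
    v = math.sqrt(2 * E * voltage / m)   # velocity after acceleration
    return length / v

# Heavier ions arrive later: t scales with sqrt(m/z).
t100, t400 = flight_time(100), flight_time(400)
print(t100, t400, t400 / t100)   # ratio ~2, i.e. sqrt(400/100)
```

With these illustrative parameters an m/z 100 ion arrives in roughly 5 µs, which is why TOF analyzers pair naturally with pulsed sources: the whole spectrum is collected within microseconds of each laser shot.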
Depending on the applied voltage and RF frequencies, only ions of a certain m/z ratio will have stable trajectories and pass through the analyzer; all other ions are lost by collision with the rods. Quadrupole analyzers are relatively inexpensive, but have limited resolution and a low mass range.

Ion traps operate on the same principle as the quadrupole, but confine the ions in space. The electrodes can be manipulated to selectively eject ions of desired m/z ratios, allowing for mass analysis. Ion traps are uniquely suited to repeated cycles of mass spectrometry because of their ability to retain ions of desired m/z ratios. Selected fragments can be further fragmented by collision-induced dissociation with helium gas. Ion traps are compact, relatively inexpensive, and can be adapted to many hybrid instruments.

Mass spectrometry is a powerful tool for the identification of compounds and is frequently combined with separation techniques such as liquid or gas chromatography for rapid identification of the compounds within a mixture. Typically, liquid chromatography systems are paired with ESI-quadrupole mass spectrometers to take advantage of the solvated sample. GC-MS systems usually employ electron impact ionization and quadrupole or ion trap mass analyzers to take advantage of the gas-phase molecules and the fragmentation libraries associated with EI for rapid identification.

Mass spectrometers are also often coupled in tandem to form MS-MS systems. Typically the first spectrometer utilizes a hard ionization technique to fragment the sample. The fragments are passed on to a second mass analyzer, where they may be further fragmented and analyzed. This technique is particularly important for studying large, complex molecules such as proteins.

Fast atom bombardment (FAB) is an ionization technique for mass spectrometry closely related to secondary ion mass spectrometry (SIMS).
Before the appearance of this technique, there were only limited ways to obtain the mass spectrum of an intact oligopeptide, which is not easily vaporized. Prior to 1970, electron ionization (EI) and chemical ionization (CI) were widely used, but those methods require destructive vaporization of the sample. Desorption of ions by nuclear fission fragments overcame this problem, though the need for specialized equipment and for the fission of 252Cf limited the generality of the approach. FAB became prevalent because it solved these underlying problems: a beam of fast atoms or ions with high kinetic energy is bombarded onto a sample held in a matrix.

FAB thus utilizes the bombardment of accelerated atom or ion beams; ionized sample is emitted upon collision of the beam with the sample in the matrix. In this section, the details of each step are discussed.

Although ions can be accelerated by an electric field relatively easily, that is not the case for neutral atoms. Therefore, in FAB the conversion of a neutral atom into an ion is essential for generating the accelerated species; a fast atom such as xenon is produced in three steps. In the same way as an atom beam, a fast ion beam can also be used. Although cesium ions (Cs+), which are cheaper and heavier than xenon, are often employed, they have the drawback that the mass spectrometer can be contaminated by the ions.

The fast atoms or ions are then bombarded onto the sample in a matrix, a type of solvent with a high boiling point, resulting in momentum transfer and vaporization of the sample. The beam used for the bombardment is called the primary beam of atoms or ions, while the secondary beam corresponds to the sputtered ions and neutrals. The ionized sample is directed by ion optics into the mass analyzer for detection.

One of the crucial characteristics of FAB is its use of a liquid matrix; for example, the long-lived signal in FAB is attributable to the matrix.
Because of the high vacuum conditions, the usual laboratory solvents such as water and other common organic solvents are precluded in FAB; therefore, a solvent with a high boiling point, called the matrix, must be employed. Table \(\PageIndex{1}\) shows examples of matrices.

An image of a typical instrument for fast atom bombardment mass spectrometry is shown in the figure. The spectrum obtained by FAB carries information on the structure and bonding of the compound in addition to its mass. Three spectra are discussed here as examples.

A typical FAB mass spectrum of glycerol alone shows a signal at m/z 93, corresponding to protonated glycerol, with a small satellite peak derived from the carbon isotope 13C. At the same time, signals for clusters of protonated glycerol are also often observed at m/z 185, 277, and 369. As seen in this example, signals from aggregation of the sample can also be detected, and these provide further information about the sample.

The positive FAB spectrum of a sulfonated azo compound X (Mw = 409) shows signals at m/z 432 and 410, corresponding to adducts with sodium and a proton, respectively. Because of the presence of several relatively weak bonds, a number of fragmentations were observed. For example, signals at m/z 352 and 330 resulted from cleavage of the aryl-sulfonate bond. Nitrogen-nitrogen bond cleavage in the azo moiety also occurred, producing fragment signals at m/z 267 and 268. Furthermore, considering the favorable formation of a nitrogen-nitrogen triple bond from the azo moiety, the aryl-nitrogen bond can also be cleaved; the corresponding fragments were detected at m/z 253 and 252. As these examples show, fragmentation can be used to obtain information about the structure and bonding of the compound of interest.

The mass spectrum of the protonated molecule (MH+ = m/z 1052) of bradykinin potentiator C provides a third example.
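The glycerol cluster signals noted above follow simple arithmetic: each [nM+H]+ peak is n times the nominal mass of glycerol (92) plus one proton. A quick check, nominal masses only:

```python
GLYCEROL = 92  # nominal mass of glycerol, C3H8O3

def proton_bound_cluster(n, monomer=GLYCEROL):
    """Nominal m/z of the singly protonated n-mer cluster [nM + H]+."""
    return n * monomer + 1

peaks = [proton_bound_cluster(n) for n in range(1, 5)]
print(peaks)  # [93, 185, 277, 369] -- matching the FAB spectrum described above
```

The same pattern (a series spaced by the monomer mass) is a general signature of matrix or analyte clustering in soft ionization spectra.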
In this case fragmentation occurs between certain amino acids, providing information on the peptide sequence. For example, the signal at m/z 884 corresponds to the fragment resulting from scission of the Gly-Leu bond. It should be noted that fragmentation at a given position does not always proceed by a single type of bond cleavage; fragmentation at the Gly-Pro bond is a good example, as two types of fragments (m/z 533 and 520) are observed. Thus, the fragmentation pattern can reveal the sequence of a peptide.

Secondary ion mass spectrometry (SIMS) is an analytical method which has very low detection limits, is capable of analysis over a broad dynamic range, and has high sensitivity and high mass resolution. In this technique, primary ions are used to sputter a solid (and sometimes a liquid) surface of any composition. This causes the emission of electrons, ions, and neutral species, the so-called secondary particles, from the surface. The secondary ions are then analyzed by a mass spectrometer. Depending on the operating mode selected, SIMS can be used for surface composition and chemical structure analysis, depth profiling, and imaging.

Of all the secondary particles that are sputtered from the sample surface, only about 1 in every 1,000 is emitted as an ion. Because only the ions may be detected by mass spectrometry, an understanding of how these secondary ions form is important.

Sputtering can be defined as the emission of atoms, molecules, or ions from a target surface as a result of particle bombardment of the surface. This phenomenon has been described by two different sets of models.

The first approach to describing sputtering, called linear collision cascade theory, compares the atoms to billiard balls and assumes that atomic collisions are completely elastic. Although there are a few different types of sputtering defined by this model, the type most important to SIMS is slow collisional sputtering.
In this type of sputtering, the primary ion collides with the surface of the target and causes a cascade of random collisions between the atoms in the target. Eventually, these random collisions result in the emission of an atom from the target surface. This model does not take into account the locations of atoms; it only requires that the energy of the incoming ion be higher than the energy required to sublimate atoms from the target surface.

Despite the fact that this method makes oversimplifications regarding atomic interactions and structure, its predicted sputter yields are actually fairly close to the experimental data for elements such as Cu, Zn, Ag, and Au, which have high sputter yields. However, for low sputter yield elements, the model predicts three times more sputtered ions than are actually observed.

The second method of describing sputtering uses computer-generated three-dimensional models of the atoms and molecules in the sample to predict the effect of particle bombardment. All models in this category describe the target solid in terms of its constituent atoms and molecules and their interactions with one another. However, these models only take into account atomic forces (not electronic forces) and describe atomic behavior using classical mechanics (not quantum mechanics).

The ionization models of sputtering can be divided into two categories: theories that predict that ions form outside the target, and theories that predict that they are generated inside the target. In the theories that describe ionization outside the target, the primary particle strikes the target, causing the emission of an excited atom or molecule. This particle relaxes by emitting an Auger electron, thus becoming an ion. Because no simple mathematical equation has been derived for this theory, it is of little practical use.
For this reason, ionization-inside-the-target models are used more often. Additionally, it has been shown that ionization occurs more often inside the target. Although there are many models that describe ionization within the target, two representative models are the bond-breaking model and the local thermal equilibrium theory.

In the bond-breaking model, the primary particle strikes the target and causes the heterolytic cleavage of a bond in the target, so either an anion or a cation is emitted directly from the target surface. This model is important to mention because it has useful implications: stated simply, the yield of positive ions can be increased by the presence of electronegative atoms in the target, in the primary ion beam, or in the sample chamber in general. The reverse is also true: the negative ion yield may be increased by the presence of electropositive atoms.

The local thermal equilibrium theory can be described as an expansion of the bond-breaking model. Here, the increase in the yield of positive ions when the target is in the presence of electronegative atoms is said to be the result of the high potential barrier of the metal oxide which is formed. This results in a low probability of the secondary ion being neutralized by an electron, giving a high positive ion yield.

The primary ions in a SIMS instrument (labeled "Primary ion source" in the instrument schematic) are generated by one of three types of ion guns. The first type, called an electron bombardment plasma source, uses accelerated electrons (produced from a heated filament) to bombard an anode. If the energy of these electrons is two to three times higher than the ionization energy of the atom, ionization occurs. Once a certain number of ions and electrons are obtained, a plasma forms. An extractor is then used to form a focused ion beam from the plasma.

In the second type of source, called the liquid metal source, a liquid metal film flows over a blunt needle.
When this film is subjected to a strong electric field, electrons are ejected from the atoms in the liquid metal, leaving them ionized. An extractor then directs the ions out of the ion gun.

The last source is called a surface ionization source. Here, atoms of low ionization energy are adsorbed onto a high work function metal, which allows the transfer of electrons from the surface atoms to the metal. When the temperature is increased, more atoms (or ions) leave the surface than adsorb onto it, and the proportion of ions among the departing species grows. Eventually, nearly all of the atoms that leave the surface are ionized and can be used as an ion beam.

The type of source used depends on the type of SIMS experiment to be run as well as the composition of the sample to be analyzed. A comparison of the three sources is given in Table \(\PageIndex{2}\). Of the three, the electron bombardment plasma source has the largest spot size. Thus, this source has a large-diameter beam and does not have the best spatial resolution; for this reason, it is commonly used for bulk analysis such as depth profiling. The liquid metal source is advantageous for imaging SIMS because it has high spatial resolution (a small spot size). Lastly, the surface ionization source works well for dynamic SIMS (see below) because its very small energy spread allows for a uniform etch rate.

In addition to the ion gun type, the identity of the primary ion is also important. O2+ and Cs+ are commonly used because they enhance the positive and negative secondary ion yields, respectively. However, use of an inert gas plasma source is advantageous because it allows for surface studies without reacting with the surface itself. Using the O2+ plasma source allows for an increased output of positively charged secondary ions, but it will alter the surface being studied.
Also, a heavy primary ion allows for better depth resolution because it does not penetrate as far into the sample as a light ion.

The sputter rate, or the number of secondary ions removed from the sample surface by the bombardment of one primary ion, depends both on the properties of the target and on the parameters of the primary beam.

Many target factors affect the sputter rate; two examples are the crystal structure and the topography of the target. Specifically, hexagonal close-packed crystals and rough surfaces give the highest sputter yields. There are many other properties of the target which affect sputtering, but they will not be discussed here.

As was discussed earlier, different primary ion sources are used for different SIMS applications. In addition to the source used, the manner in which the source is used is also important. First, the sputter rate can be increased by increasing the energy of the beam; for example, using a beam with an energy greater than 10 keV gives a maximum of 10 sputtered particles per primary ion impact. Second, increasing the primary ion mass will also increase the secondary ion yield. Lastly, the angle of incidence is important: it has been found that a maximum sputter rate can be achieved if the angle of impact is 70° relative to the surface normal.

The detector which measures the amount and type of secondary ions sputtered from the sample surface is a mass spectrometer; see the instrument diagram for its position relative to the other components. The type of analysis to be done determines which type of spectrometer is used. Both dynamic and static SIMS usually use a magnetic sector mass analyzer because of its high mass resolution. Static SIMS (as well as imaging SIMS) may also use a time-of-flight system, which allows for high transmission.
A description of how each of these mass spectrometers works and how the ions are detected can be found elsewhere. SIMS can be used to analyze the surface, and the region down to about 30 µm below the surface, of almost any solid sample and some liquid samples. Depending on the type of SIMS analysis chosen, it is possible to obtain both qualitative and quantitative data about the sample.

There are three main types of SIMS experiments: dynamic SIMS, static SIMS, and imaging SIMS.

In dynamic SIMS analysis, the target is sputtered at a high rate. This allows for bulk analysis when the mass spectrometer is scanned over all mass ranges to obtain a mass spectrum and multiple measurements are taken in different areas of the sample. If the mass spectrometer is instead set to rapidly analyze individual masses sequentially as the target is eroded, it is possible to determine the depth at which specific atoms are located, up to 30 µm below the sample surface. This type of analysis is called a depth profile. Depth profiling is very useful because it is a quantitative method: it allows for the calculation of concentration as a function of depth, so long as ion-implanted standards are used and the crater depth is measured. More information on ion implants is given below.

SIMS may also be used to obtain an image in a way similar to SEM, while giving better sensitivity than SEM. Here, a finely focused ion beam (rather than an electron beam, as in SEM) is raster-scanned over the target surface, and the resulting secondary ions are analyzed at each point. Using the identity of the ions at each analyzed spot, an image may be assembled based on the distributions of these ions.

In static SIMS, the surface of the sample is eroded very slowly, so that the ions which are emitted come from areas which have not already been altered by the primary ion.
By doing this, it is possible to identify the atoms, and some of the molecules, just on the surface of the sample.

An example that shows the usefulness of SIMS is the analysis of fingerprints. Many other forms of analysis, such as GC-MS, have been employed to characterize the chemical composition of fingerprints. This is important in forensics to determine fingerprint degradation, to detect explosives or narcotics, and to help determine the age of the person who left the print by analyzing differences in sebaceous secretions. Compared to GC-MS, SIMS is a better choice of analysis because it is less destructive: to perform GC-MS, the fingerprint must be dissolved, whereas SIMS is a solid-state method. Also, because SIMS only erodes through a few monolayers, the fingerprint can be kept for future analysis and for record-keeping. Additionally, SIMS depth profiling allows the researcher to determine the order in which substances were touched. Lastly, an image of the fingerprint can be obtained using imaging SIMS analysis.

As with any other instrumental analysis, SIMS does require some sample preparation. First, rough samples may require polishing, because the uneven texture will otherwise be maintained as the surface is sputtered. Because surface atoms are the analyte in imaging and static SIMS, polishing is not required for those modes; however, it is required for depth profiling. Without polishing, layers beneath the surface of the sample will appear in the spectrum to be mixed with the upper layer.

However, polishing before analysis does not necessarily guarantee even sputtering, because different crystal orientations sputter at different rates. If the sample is polycrystalline or has grain boundaries (often a problem with metal samples), the sample may develop small cones where the sputtering is occurring, leading to an inaccurate depth profile.
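The time-to-depth calibration that underlies a depth profile can be sketched simply: if the final crater depth is measured after sputtering, and the erosion rate is assumed constant (which the uneven-sputtering problems above can violate), each analysis time maps linearly to a depth. The numbers below are illustrative only:

```python
def depth_at(t, crater_depth_um, total_time_s):
    """Depth in micrometres at sputter time t (seconds), assuming a constant
    erosion rate calibrated from the measured final crater depth."""
    rate = crater_depth_um / total_time_s   # µm sputtered per second
    return rate * t

# E.g. a 3 µm crater measured after 600 s of sputtering: a signal recorded
# at t = 150 s is assigned to a depth of 0.75 µm.
print(round(depth_at(150, crater_depth_um=3.0, total_time_s=600), 6))  # 0.75
```

When the erosion rate is not constant (cone formation, changing matrix), this linear map breaks down, which is exactly why polishing and matrix-matched standards matter for accurate profiles.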
Analyzing insulators using SIMS also requires special sample preparation, as a result of electrical charge buildup on the surface (since the insulator has no conductive path through which to dissipate the charge). This is a problem because it distorts the observed spectra. To prevent surface charging, it is common practice to coat the sample with a conductive layer such as gold.

Once the sample has been prepared for analysis, it must be mounted to the sample holder. There are a few methods of doing this. One way is to place the sample on a spring-loaded sample holder which pushes the sample against a mask. This method is advantageous because the researcher does not have to adjust the sample height for different samples (see below for why sample height is important). However, because the mask is on top of the sample, it is possible to accidentally sputter the mask. Another method is to simply glue the sample to a backing plate using silver epoxy; this requires drying under a heat lamp to ensure that all volatiles are evaporated from the glue before analysis. Alternatively, the sample can be pressed into a soft metal like indium. The last two methods are especially useful for mounting insulating samples, since they provide a conductive path to help prevent charge buildup.

When loading the mounted sample into the instrument, it is important that the sample height relative to the instrument lens is correct. If the sample is either too close or too far away, the secondary ions will either not be detected, or they will be detected at the edge of the crater being produced by the primary ions. Ideally, the secondary ions that are analyzed should be those resulting from the center of the primary beam, where the energy and intensity are most uniform.

In order to do quantitative analysis using SIMS, it is necessary to use calibration standards, since the ionization rate depends on both the atom (or molecule) and the matrix.
These standards are usually in the form of ion implants, which can be deposited in the sample using an implanter or using the primary ion beam of the SIMS instrument (if the primary ion source is mass filtered). By comparing the known concentration of implanted ions to the number of sputtered implant ions, it is possible to calculate the relative sensitivity factor (RSF) value for the implant ion in the particular sample. By comparing this RSF value to the value in a standard RSF table, and adjusting all the table RSF values by the difference between them, it is possible to calculate the concentrations of other atoms in the sample.

When choosing an isotope to use for ion implantation, it is important to take possible mass interferences into consideration. For example, the combined mass of 11B and 16O equals the nominal mass of 27Al, so these species will interfere with each other's ion intensities in the spectra. One must therefore choose an ion implant that does not share a mass with any other species of interest in the sample. The depth at which the implant is deposited is also important: the implanted ions must lie below the equilibration depth, above which chaotic sputtering occurs until a sputter equilibrium is reached. However, care should be taken to ensure that the implanted ions do not pass beyond the layer of interest in the sample; if the matrix changes, the implanted ions will no longer sputter at the same rate, making the calculated concentrations inaccurate.

In SIMS, matrix effects are common and originate from changes in the ionization efficiency (the number of ionized species relative to the total number of sputtered species) and the sputtering yield. One of the main causes of matrix effects is the primary beam. As was discussed earlier, electronegative primary ions increase the number of positively charged secondary ions, while electropositive primary ions increase the number of negatively charged secondary ions.
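The RSF quantification described above reduces to one relation, commonly written as C_i = RSF × (I_i / I_m), where I_i and I_m are the impurity and matrix secondary-ion count rates and the RSF carries units of atoms/cm³. The numerical values below are illustrative, not taken from a real RSF table:

```python
def impurity_concentration(rsf, impurity_counts, matrix_counts):
    """Impurity concentration (atoms/cm^3) from the relative sensitivity
    factor relation:  C_i = RSF * (I_i / I_m)."""
    return rsf * impurity_counts / matrix_counts

# Illustrative values: an RSF of 1e22 atoms/cm^3, 500 impurity counts/s
# measured against 1e6 matrix counts/s.
print(impurity_concentration(1e22, 500, 1e6))  # ~5e18 atoms/cm^3
```

Because the measured intensity enters only as a ratio against the matrix signal, this form partially cancels instrumental drift, but it still assumes the RSF was calibrated in the same matrix, which is why matrix effects (discussed here) undermine quantification.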
Matrix effects can also be caused by species present in the sample. The consequences of these matrix effects depend on the identity of the affecting species and the composition of the sample. To correct for matrix effects, it is necessary to use standards and compare the results with RSFs (see above).

For most atoms, SIMS can accurately detect down to a concentration of 1 ppm; for some atoms, a detection limit of 10 ppb may be achieved. The detection limit of this instrument is set by the count rate (how many ions may be counted per second) rather than by a limitation due to the mass of the ion, so the detection limit can be lowered by sputtering the sample at a higher rate.

The sensitivity of SIMS analysis depends on the element of interest, the matrix the element is in, and the primary ion used. The sensitivity of SIMS toward a particular ion may easily be determined by looking at an RSF table. For example, an RSF table for an oxygen primary ion and positive secondary ions shows that the alkali metals have the highest sensitivity (they have low RSF values). This makes sense, since these atoms have the lowest ionization energies and are the easiest to ionize. Similarly, the RSF table for a cesium primary ion beam and negative secondary ions shows that the halogens have the highest sensitivity. Again, this makes sense, since the halogens have the highest electron affinities and accept electrons easily.

Three types of spectra can be obtained from a SIMS analysis: a mass spectrum from static SIMS, a depth profile or mass spectrum from dynamic SIMS, and, not surprisingly, an image from imaging SIMS.

As with a typical mass spectrum, the mass-to-charge ratio (m/z) is plotted against the ion intensity. However, because SIMS is capable of a dynamic range of nine orders of magnitude, the intensity of a SIMS mass spectrum is displayed on a logarithmic scale.
From this data, it is possible to observe isotopic data as well as molecular ion data and their relative abundances on the sample surface. A depth profile displays the intensity of one or more ions with respect to depth (or, equivalently, time). Caution should be taken when interpreting this data: if ions are collected off the wall of the crater rather than from the bottom, it will appear that the layer in question runs deeper in the sample than it actually does.

As alluded to in previous sections, laser desorption (LD) was originally developed to produce ions in the gas phase. This is accomplished by pulsing a laser on the sample surface to ablate material, causing ionization and vaporization of sample particles. However, the probability of attaining a valuable mass spectrum is highly dependent on the properties of the analyte. Furthermore, the masses observed in the spectrum were products of molecular fragmentation if the molecular weight was above 500 Da. Clearly, this was not optimal instrumentation for analyzing large biomolecules and bioinorganic compounds that do not ionize well, and samples were degraded during the process. Matrix-assisted laser desorption ionization (MALDI) was developed and alleviated many of the issues associated with LD techniques. The MALDI technique allows proteins with masses up to 300,000 Da to be detected. This is important to bioinorganic chemistry when visualizing products resulting from catalytic reactions, metalloenzyme modifications, and other applications.

MALDI decreases the amount of damage to the sample by protecting the individual analytes within a matrix (more information on matrices below). The matrix itself absorbs much of the energy introduced by the laser during the pulsing action, and the energy absorbed by the matrix is subsequently transferred to the analyte. Once energized, the analyte is ionized and released into a plume of ions containing common cations (Na+, K+, etc.), matrix ions, and analyte ions.
These ions then enter the flight tube, where they are sent to the detector. Different instrumental modes adjust for differences in ion flight time. The MALDI technique is also more sensitive and universal, since readjustment to match the absorption frequency is not necessary due to the matrix absorption. Many of the commonly used matrices have similar wavelength absorptions (Table \(\PageIndex{3}\)), e.g., in the UV at 337 nm and 353 nm.

The process of MALDI takes place in two steps: sample preparation, followed by sample ablation. The sample for analysis is combined with a matrix (a solvent containing small organic molecules that have a strong absorbance at the laser wavelength) and added to the MALDI plate. The sample is then dried to the surface of the plate before it is analyzed, resulting in the matrix doped with the analyte of interest as a "solid solution". A typical example is the loading of a peptide in water into an α-cyano-4-hydroxycinnamic acid matrix. Prior to insertion of the plate into the MALDI instrument, the samples must be fully dried. The MALDI plate with the dry samples is placed on a carrier and inserted into the vacuum chamber. After the chamber is evacuated, it is ready for the sample ablation step.

After the sample is loaded into the instrument, the instrument camera activates to show a live feed from inside the chamber. The live feed allows the operator to view the location where the spectrum is being acquired. This becomes especially important when the operator manually fires the laser pulses. Once the sample is in the vacuum chamber, there are several options for taking a mass spectrum, two of which are described here: the axial and reflectron modes.

Axial Mode

In the axial (or linear) mode, only a very short ion pulse is required before the ions go down the flight tube and hit the detector. This mode is often used when exact accuracy is not required, since the mass accuracy has an error of about +/- 2-5%.
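The time-of-flight separation used in these modes follows from simple kinematics: an ion of mass m and charge z accelerated through a potential U acquires velocity v = sqrt(2·z·e·U/m), so its flight time over a tube of length L is t = L·sqrt(m/(2·z·e·U)). The sketch below illustrates this; the accelerating voltage and tube length are assumed, illustrative instrument parameters, not values from a specific spectrometer.

```python
# Illustrative flight-time calculation for a linear (axial) TOF analyzer:
# t = L * sqrt(m / (2 * z * e * U)). Heavier ions travel more slowly and
# arrive later, which is the basis of TOF mass separation.
import math

E_CHARGE = 1.602e-19   # C, elementary charge
AMU = 1.661e-27        # kg, atomic mass unit

def flight_time(mass_amu, charge=1, accel_voltage=20e3, tube_length=1.0):
    """Flight time (s) of an ion through a field-free tube.
    accel_voltage (V) and tube_length (m) are assumed example values."""
    m = mass_amu * AMU
    v = math.sqrt(2 * charge * E_CHARGE * accel_voltage / m)
    return tube_length / v

# Peptide- to protein-sized ions: flight time scales with sqrt(mass).
for mass in (1000, 10000, 100000):
    print(f"{mass:>7} Da -> {flight_time(mass) * 1e6:8.1f} us")
```

Because t scales with the square root of m/z, a spread in initial velocities translates directly into a spread of arrival times, which is the resolution limitation the reflectron mode is designed to correct.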
Sources of these errors are found in the arrival times of different ions through the flight tube to the detector. Errors in the arrival time are caused by differences in the initial velocity with which the ions travel, based on their size. The larger ions have a lower initial velocity, so they reach the detector after a longer period of time, which decreases the mass detection resolution.

Reflectron Mode

In the reflectron ("ion mirror") mode, ions are refocused before they hit the detector. The reflectron itself is actually a set of ring electrodes that create an electric field that is constant near the end of the flight tube. This causes the ions to slow and reverse direction towards a separate detector. Smaller ions are then brought closer to larger ions before the group of ions hits the detector. This improves detection resolution and decreases the accuracy error to +/- 0.5%.

While MALDI is used extensively in analyzing proteins and peptides, it is also used to analyze nanomaterials. The following example describes the analysis of fullerene analogues synthesized for a high performance conversion system for solar power. The fullerene C60 is a spherical carbon molecule consisting of 60 sp2 carbon atoms, the properties of which may be altered through functionalization. A series of tert-butyl-4-C61-benzoate (t-BCB) functionalized fullerenes were synthesized and isolated. MALDI was not used extensively as a method for observing activity, but instead was used as a confirmatory technique to determine the presence of the desired product. Three fullerene derivatives were synthesized, and the identity and number of functional groups were determined using MALDI.

Surface-assisted laser desorption/ionization mass spectrometry, known as SALDI-MS, is a soft mass spectrometry technique capable of analyzing all kinds of small organic molecules, polymers, and large biomolecules.
The essential principle of this method is similar to matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS, see above), but the organic matrix commonly used in MALDI is replaced by the surface of certain substrates, usually inorganic compounds. This makes SALDI a matrix-free ionization technique that avoids the interference of matrix molecules. SALDI is considered to be a three-step process. Since the bulk of the energy input goes to the substrate instead of the sample molecules, it is thought to be a soft ionization technique useful in chemistry and chemical biology.

The most important characteristic of the substrate in SALDI is a large surface area. In the past 30 years, efforts have been made to explore novel substrate materials that increase the sensitivity and selectivity in SALDI-MS. Depending on the substrate compounds being used, the interaction between the substrate materials and sample molecules can be covalent, non-covalent (such as the hydrophobic effect), bio-specific (such as the recognition between biotin and avidin, or between antigens and antibodies), or electrostatic. With the unique characteristics stated above, SALDI is able to combine the advantages of both hard and soft ionization techniques. On one hand, low molecular weight (LMW) molecules can be analyzed and identified in SALDI-MS, which resembles the function of most hard ionization techniques. On the other hand, molecular or quasi-molecular ions dominate the spectra, as is commonly seen in spectra obtained by soft ionization techniques.

The SALDI technique actually emerged from its well-known rival technique, MALDI. The development of soft ionization techniques, which mainly included MALDI and ESI, enabled chemists and chemical biologists to analyze large polymers and biomolecules using mass spectrometry.
This should be attributed to the soft ionization process, which prevents the large degree of fragmentation that would otherwise complicate the spectra, so that the resultant ions are dominantly molecular or quasi-molecular ions. In other words, tolerance of impurities is increased since the spectra become highly simplified. While this was effective in determining the molecular weight of the analytes, the matrix peaks would also appear in the low mass range, which seriously interfered with the analysis of LMW analytes. As a result, the SALDI method emerged to resolve the problem by replacing the matrix with a surface that was essentially stationary.

The original idea of SALDI was raised by Tanaka in 1988. Ultra-fine cobalt powders with an average diameter of about 300 Å, mixed into the sample, were responsible for "rapid heating" due to their high photo-absorption and low heat capacity. With a large surface area, the cobalt powders were able to conduct heat to large numbers of surrounding glycerol and analyte molecules, which indeed resulted in a thermal desorption/ionization mechanism. The upper mass limit was increased up to 100 kDa, as demonstrated for the analysis of lysozyme from chicken egg white.

The low mass range was not paid much attention at the beginning, and the concept of "surface-assisted" was not proposed until Sunner and co-workers reported their study on graphite SALDI in 1995; that was the first time the term "SALDI" was used by chemists. They obtained mass spectra of both proteins and LMW analytes by irradiating mixtures of 2-150 μm graphite particles and solutions of analytes in glycerol. Although fragmentation of the LMW glycerol molecules was relatively complicated, it was still considered a significant improvement in ionizing small molecules by soft ionization methods. Despite the breakthrough mentioned above, SALDI did not at first widely interest chemists.
Beyond its drawbacks in upper mass limit for the analysis of large molecules, its sensitivity was far from satisfactory compared to hard ionization techniques for testing LMW molecules. This situation changed once nanomaterials were introduced as substrates, especially with the successful development of desorption/ionization on porous silicon (DIOS). In fact, the majority of research on SALDI-MS has focused on exploiting novel nanomaterial substrates, aiming at further broadening the mass range, improving the reproducibility, enhancing the sensitivity, and extending the categories of compounds that can be analyzed. So far, a variety of nanomaterials have been utilized in SALDI-MS, including carbon-based, metal-based, and semiconductor-based nanomaterials.

As a soft ionization technique, SALDI is expected to produce molecular or quasi-molecular ions in the final mass spectra. This requires the ionization process to be both effective and controllable: sufficient sample molecules must be ionized while further fragmentation is mostly avoided. While this original goal has been successfully accomplished for years, the detailed study of the desorption and ionization mechanism is still one of the most popular and controversial research areas of SALDI at present. It is mostly agreed that the substrate material plays a significant role in both activating and protecting the analyte molecules. Energy input from the pulsed laser is largely absorbed by the substrate material, which is possibly followed by complicated energy transfer from the substrate material to the adsorbed analyte molecules.
As a result, both thermal and non-thermal desorption can be triggered, and for different modes of SALDI experiments, the specific desorption and ionization process differs greatly. The mechanism for a porous silicon surface as a SALDI substrate has been widely studied. In general, the process can be subdivided into several steps. When no associated proton donor is present in the vicinity of the analyte molecules, desorption might occur without ionization; subsequently, the desorbed analyte molecule is ionized in the gas phase by collision with incoming ions.

Since it is the active surface responsible for adsorption, desorption, and ionization of analyte molecules that defines the technique, the surface chemistry of the substrate material is undoubtedly crucial for SALDI performance. However, it is rather difficult to draw a general conclusion, because the affinity between different classes of substrates and analytes is considerably varied. Basically, the interaction between those two components affects the trapping and releasing of the analyte molecules, as well as the electronic surface state of the substrate and the energy transfer efficiency.

Another important aspect is the physical properties of the substrate, which can alter the desorption and ionization process directly, especially for the thermally activated pathway. This is closely related to the rapid temperature increase on the substrate surface. These properties include the optical absorption coefficient, heat capacity, and heat conductivity (or heat diffusion rate). First, a higher optical absorption coefficient enables the substrate to absorb more light and generate more heat when a certain amount of energy is provided by the laser source. Moreover, a lower heat capacity usually leads to a larger temperature increase for the same amount of heat. In addition, a lower heat conductivity helps the substrate maintain a high temperature, which will further result in a higher temperature peak.
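These thermal considerations can be illustrated with a rough order-of-magnitude estimate: if heat loss during the pulse is neglected, the temperature rise is simply the absorbed pulse energy divided by the heat capacity of the heated volume. Everything numerical below (pulse energy, absorptance, heated volume) is an assumed, illustrative value, not data from a specific SALDI experiment.

```python
# Back-of-the-envelope estimate of the peak temperature rise of a SALDI
# substrate under a single laser pulse, showing why high optical
# absorption and low heat capacity favor thermal desorption.
# Heat loss during the pulse is neglected; all inputs are illustrative.

def temperature_rise(pulse_energy_j, absorptance, heated_mass_kg, c_p):
    """dT = absorbed energy / (mass * specific heat capacity)."""
    return pulse_energy_j * absorptance / (heated_mass_kg * c_p)

pulse_energy = 10e-6        # J, assumed N2-laser pulse energy
absorptance = 0.8           # assumed fraction of light absorbed
c_p_silicon = 700.0         # J/(kg*K), specific heat of silicon
density_silicon = 2330.0    # kg/m^3

# Assume the pulse heats roughly a 100 um x 100 um x 1 um surface volume:
heated_volume = 100e-6 * 100e-6 * 1e-6  # m^3
mass = heated_volume * density_silicon

dT = temperature_rise(pulse_energy, absorptance, mass, c_p_silicon)
print(f"Estimated temperature rise: {dT:.0f} K")
```

Halving the heat capacity or doubling the absorptance doubles the estimated temperature jump, which is the qualitative trend described in the text.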
Therefore, thermal desorption and ionization can occur more rapidly and effectively. The instrumentation for SALDI is to a large extent similar to that of MALDI. It contains a laser source generating the pulsed laser that excites the sample mixture, and a sample stage holding the mixture of substrate material and analytes. The mass analyzer and ion detector are usually on the other side, letting the ions pass through to be separated and detected based on their different m/z values. Recent progress has incorporated a direct analysis in real time (DART) ion source into the SALDI-MS system, making it possible to perform the analysis under ambient conditions.

Porous silicon with a large surface area can be used to trap certain analyte molecules for a matrix-free desorption and ionization process. More interestingly, a large ultraviolet absorption coefficient was found for this porous material, which also improves the ionization performance. It has been reported that using porous silicon as the substrate in SALDI-MS made it possible to work at femtomole and attomole levels of analytes, including peptides, caffeine, an antiviral drug molecule (WIN), reserpine, and N-octyl-β-D-glucopyranoside. Compared to conventional MALDI-MS, DIOS-MS (the specific type of SALDI in this research) successfully eliminated the matrix interference and displayed a much higher quasi-molecular peak (MH+). What is more, chemical modification of the porous silicon was able to further optimize the ionization characteristics.

Graphene is a popular carbon nanomaterial discovered in 2004. It has a large surface area that can effectively attach the analyte molecules. On the other hand, the efficiency of desorption/ionization for analytes on a layer of graphene is enhanced by its simple monolayer structure and unique electronic properties.
Polar compounds including amino acids, polyamines, anticancer drugs, and nucleosides can be successfully analyzed. In addition, nonpolar molecules can be analyzed with high resolution and sensitivity due to the hydrophobic nature of graphene itself. Compared with a conventional matrix, graphene exhibits a high desorption/ionization efficiency for nonpolar compounds. The graphene substrate functions as a substrate to trap analytes, and it transfers energy to the analytes upon laser irradiation, which allows the analytes to be readily desorbed/ionized and the interference of a matrix to be eliminated. It has been demonstrated that the use of graphene as a substrate material avoids the fragmentation of analytes and provides good reproducibility and a high salt tolerance, underscoring the potential application of graphene as a matrix for MALDI-MS analysis of practical samples in complex sample matrixes. It has also been shown that the use of graphene as an adsorbent for the solid-phase extraction of squalene can greatly improve the detection limit.

Gas-phase SALDI-MS analysis has a relatively high ionization efficiency, which leads to a high sensitivity. In 2009, gas chromatography (GC) was first used with SALDI-MS, where the SALDI substrate was amorphous silicon and the analytes were N-alkylated phenylethylamines. Detection limits were in the range of attomoles, but improvements are expected in the future. The combination with GC is expected to expand the use of SALDI-MS even further, so that SALDI can be applied to the separation and identification of more complex samples.

In the study of electrochemistry, it had always been a challenge to obtain immediate and continuous detection of electrochemical products, due to their limited formation on the surface of the electrode, until the discovery of differential electrochemical mass spectrometry.
Scientists initially tested the idea by combining a porous membrane with mass spectrometry for product analysis in the study of oxygen generation from HClO4 using a porous electrode in 1971. In 1984, a similar experiment was performed using a porous Teflon membrane with 100 μm of lacquer at the surface between the electrolyte and the vacuum system. Compared to the earlier experiment, this one demonstrated a vacuum system with an improved time response that showed nearly immediate detection of volatile electrochemical reaction products, with a sensitivity high enough to detect as little as "one monolayer" at the electrode. In summary, the 1984 experiment demonstrated not only continuous sample detection in mass spectrometry but also the rates of formation, which distinguished it from the technique performed in 1971. Hence, this method was called differential electrochemical mass spectrometry (DEMS). Over the past couple of decades, the technique has evolved from using a classic electrode to a rotating disc electrode (RDE), which provides a more homogeneous and faster transport of reacting species to the surface of the electrode.

Described in basic terms, differential electrochemical mass spectrometry is a characterization technique that analyzes specimens using both an electrochemical half-cell experiment and mass spectrometry. It uses a non-wetting membrane to separate the aqueous electrolyte from the volatile species, which permeate through the membrane and are ionized and detected in the mass spectrometer using a continuous, two-stage vacuum system. This analytical method can detect gaseous or volatile electrochemical reactants, reaction products, and even reaction intermediates.
The entire assembly of the instrument consists of three major components: an electrochemical half-cell, a PTFE (polytetrafluoroethylene) membrane interface, and a quadrupole mass spectrometer (QMS), which is part of the vacuum system. In this section, each component will be explained and its functionality explored, with additional information provided at the end of the section.

The PTFE membrane is a micro-porous membrane that separates the aqueous electrolyte from the volatile species, which are drawn into the high-vacuum portion. Using the high vacuum suction, the gaseous or volatile species are allowed to permeate through the membrane by the differential pressure, leaving the aqueous material on the surface due to the hydrophobic nature of the membrane. The selection of the membrane material is very important to maintain both the hydrophobicity and the proper diffusion of volatile species. The species permeating to the QMS are monitored and measured, and the kinetics of formation are determined at the end. Depending on the operating conditions, different vacuum pumps might be required.

The first major component of the DEMS instrument is the electrochemical cell. Many different designs have been developed over the past several decades, depending on the type of electrochemical reaction and on the types and sizes of the electrodes; however, only the classic cell will be discussed in this chapter. The DEMS method was first demonstrated using this classical cell. In a conventional setup, the powdered electrode material is deposited on the porous membrane to form the working electrode.
In the demonstration by Wolber and Heitbaum, the electrode was prepared by depositing small Pt particles onto the membrane by painting a lacquer. Later experiments evolved to use a sputtered electro-catalyst layer for a more homogeneous surface. The aqueous cell electrolyte is shielded with an upside-down glass body with a vertical tunnel opening to the PTFE membrane. The working electrode material lies above the PTFE membrane, where it is supported mechanically by a stainless steel frit inside a vacuum flange. Both the working electrode material and the PTFE membrane are compressed between vacuum castings and a PTFE spacer, a ring that prevents the electrolyte from leaking. The counter electrode (CE) and reference electrode (RE), made from platinum wire, are placed on top of the working electrode material to create the electrical contact.

One of the main advantages of the classical design is its fast response time, with a high efficiency of "0.5 for lacquer and 0.9 with the sputter electrode". However, this method poses certain difficulties. First, the electrolyte species must be adsorbed on the working electrode before they permeate through the membrane; because of the limited adsorption rate, the concentration at the surface of the electrode will be lower than in the bulk. Second, the aqueous volatile species must be adsorbed onto the working electrode and then evaporate through the membrane, so the difference between the rates of adsorption and evaporation creates a shift in equilibrium. Third, this method is limited in the types of material that can be deposited on the surface, such as single crystals or even some polycrystalline electrode surfaces. Lastly, the way the RE is positioned could potentially introduce impurities into the system, which would interfere with the experiment.

The PTFE membrane is placed between the aqueous electrolyte cell and the high vacuum system on the other end.
It acts as a barrier that prevents the aqueous electrolyte from passing through, while its selectivity allows the vaporized electrochemical species to be transported to the high vacuum side, a process similar to vacuum membrane distillation. In order to prevent the aqueous solution from penetrating through the membrane, the surface of the membrane must be hydrophobic, a material property that repels water or aqueous fluids. Therefore, at each pore location there is a vapor-liquid interface where the liquid remains on the surface while the vapor penetrates into the membrane. The transport of material in the vapor phase is then driven by the pressure difference created by the vacuum on the other end of the membrane. The pore size is therefore crucial in controlling the hydrophobic behavior and the transfer rate through the membrane. When the pore size is less than 0.8 μm, the membrane remains non-wetting; this number is determined from the surface tension of the liquid, the contact angle, and the applied pressure. Consequently, a membrane with relatively small pores and a large pore distribution is desired. In general, the pores of the membrane materials used are "typically 0.02 μm in size with thickness between 50 and 110 μm". In terms of materials, alternatives such as polypropylene and polyvinylidene fluoride (PVDF) have been tested; however, PTFE as a membrane has demonstrated better durability and chemical resistance in the electrochemical environment. PTFE is therefore the better candidate for this application, and it is usually laminated onto polypropylene for enhanced mechanical properties. Despite the hydrophobic nature of PTFE, a significant amount of aqueous material still penetrates through the membrane due to the large pressure drop.
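The pore-size criterion above can be made quantitative with the Young-Laplace liquid entry pressure, P_entry = -4γ·cosθ/d: the aqueous electrolyte is held out of a pore of diameter d as long as the applied pressure difference stays below P_entry. The sketch below evaluates this for the pore diameters quoted in the text, and also shows the pump-throughput balance (membrane area = pumping speed × pressure / required flux) used for sizing the vacuum system. The surface tension and PTFE contact angle are assumed textbook values, not measurements from this section.

```python
# Two back-of-the-envelope calculations for the DEMS membrane interface.
import math

def entry_pressure(gamma, contact_angle_deg, pore_diameter_m):
    """Young-Laplace liquid entry pressure (Pa) of a hydrophobic pore:
    P_entry = -4 * gamma * cos(theta) / d (positive for theta > 90 deg)."""
    return -4 * gamma * math.cos(math.radians(contact_angle_deg)) / pore_diameter_m

gamma_water = 0.072   # N/m, surface tension of water (typical value)
theta_ptfe = 110.0    # deg, approximate water contact angle on PTFE
for d in (0.02e-6, 0.8e-6):  # pore diameters quoted in the text
    p = entry_pressure(gamma_water, theta_ptfe, d)
    print(f"d = {d * 1e6:.2f} um -> P_entry = {p / 1e5:.1f} bar")

# Pump sizing: throughput Q = S * p must carry the required gas flux,
# so the allowable membrane area is Q / flux.
S = 300.0     # L/s, pumping speed
p_op = 1e-5   # mbar, operating pressure
flux = 0.09   # mbar*L/(s*cm^2), required gas flux
area = S * p_op / flux
print(f"Allowable membrane area: {area:.3f} cm^2")
```

Smaller pores raise the entry pressure sharply (it scales as 1/d), which is why sub-micron pores keep the electrolyte out even under the vacuum-induced pressure drop.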
Therefore, the correct sizing of the vacuum pumps is crucial to maintain the flux of gas transported to the mass spectrometer at the desired pressure. More information regarding the vacuum system is given below. In addition, a capillary has been used in place of the membrane; however, this method will not be discussed here.

A correctly sized vacuum system ensures that the maximum amount of vapor material is transported across the membrane. When the pressure drop is not adequate, part of the vapor material may remain on the aqueous side. However, when the pressure drop is too large, too much aqueous electrolyte will be pulled from the liquid-vapor interface, increasing the load on the vacuum pumps; improperly sized pumps can suffer reduced efficiency and a shorter lifetime if the problem is not corrected immediately. In addition, for the mass spectrometer to operate properly, the gas flux must be maintained at a certain level. The vacuum pumps should therefore provide a steady gas flux of around 0.09 mbar·L/(s·cm2), consisting mostly of the gaseous or volatile species (along with other species) that are sent to the mass spectrometer for analysis. Furthermore, due to the limited pumping speed of a single vacuum pump, a vacuum system with two or more pumps may be needed. For example, if a flux of 0.09 mbar·L/(s·cm2) is required and a pump with a speed of 300 L/s operates at 10-5 mbar, the acceptable membrane geometrical area is 0.033 cm2; to increase the membrane area, additional pumps are required to achieve the same gas flux.

Several other analytical techniques, such as cyclic voltammetry, potential step, and galvanic step, can be combined with the DEMS experiment. Cyclic voltammetry can provide both quantitative and qualitative results using the potential dependence.
As a result, both the ion current of the species of interest and the faradaic electrode current (the current generated by the reduction or oxidation of some chemical substance at an electrode) are recorded when combining cyclic voltammetry and DEMS.

The lack of commercialization of this technique has limited it to academic research. The largest field of application of DEMS is electro-catalytic reactions. It is also used in fuel cell research, detoxification reactions, electrochemical gas sensors, and more fundamental research such as the decomposition of ionic liquids.

The ethanol oxidation reaction was studied using alkaline membrane electrode assemblies (MEAs), constructed using a nanoparticle Pt catalyst and an alkaline polymeric membrane. DEMS was used to study the mechanism of the ethanol oxidation reaction on the Pt-based catalysts. The relevant products of the oxidation reaction are carbon dioxide, acetaldehyde, and acetic acid. However, carbon dioxide and acetaldehyde have the same molecular weight, 44 g/mol. One approach is to monitor the major fragments, where the doubly ionized CO22+ at m/z = 22 and COH+ at m/z = 29 were used. Differential electrochemical mass spectrometry can detect volatile products of the electrochemical reaction; however, detection can vary with solubility or boiling point. CO2 is very volatile, but also soluble in water; if KOH is present, DEMS will not detect any CO2 traces. Therefore, all extra alkaline impurities should be removed before measurements are taken. The electrochemical characteristics can also be measured under various conditions. In addition, the CCE (CO2 current efficiency) was measured under different potentials.
Using the CCE, the study concluded that ethanol undergoes more complete oxidation with the alkaline MEA than with the acidic MEA.

Ionic liquids (ILs) have several properties, such as high ionic conductivity, low vapor pressure, and high thermal and electrochemical stability, which make them great candidates for battery electrolytes. It is therefore important to have a better understanding of the stability of the reaction and of the products formed during decomposition. DEMS is a powerful method that can provide online detection of the volatile products; however, it runs into problems with the high viscosity of ILs and their low permeability due to the size of the molecules. Researchers therefore modified the traditional DEMS setup: the modified method makes use of the low vapor pressure of ILs by placing the electrochemical cell directly into the vacuum system. This experiment shows that the technique can be designed for very specific applications and can be modified easily.

The DEMS technique can provide on-line detection of the products of electrochemical reactions both analytically and kinetically. In addition, the results are delivered with high sensitivity, and both products and by-products can be detected as long as they are volatile. It can be easily assembled in the laboratory environment. Over the past several decades, this technique has undergone advanced development and has delivered good results for many applications such as fuel cells and gas sensors. However, the technique has its limitations. Many factors need to be considered when designing the system, such as the half-cell electrochemical reaction and the adsorption rate. Due to these constraints, the type of membrane should be selected and the pumps sized accordingly. Therefore, this characterization method is not one-size-fits-all and must be modified based on the experimental parameters.
Therefore, the next step of development for DEMS is not only to improve its functions, but also for it to be utilized beyond the academic laboratory.

This page titled 4.11: Mass Spectrometry is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
5.1: Dynamic Headspace Gas Chromatography Analysis
Gas chromatography (GC) is a very commonly used chromatographic technique in analytical chemistry for separating and analyzing compounds that are gaseous or can be vaporized without decomposition. Because of its simplicity, sensitivity, and effectiveness in separating the components of mixtures, gas chromatography is an important tool in chemistry. It is widely used for quantitative and qualitative analysis of mixtures, for the purification of compounds, and for the determination of such thermochemical constants as heats of solution and vaporization, vapor pressure, and activity coefficients. Compounds are separated due to differences in their partition coefficients between the stationary phase and the mobile gas phase in the column.

A gas chromatograph consists of a carrier gas system, a sampling system, a separation system, a detection system, and a data recording system. The carrier gas system consists of the carrier gas sources, purification, and gas flow control. The carrier gas must be chemically inert. Commonly used gases include nitrogen, helium, argon, and carbon dioxide. The choice of carrier gas often depends upon the type of detector used. A molecular sieve is often contained in the carrier gas system to remove water and other impurities.

An auto-sampling system consists of an autosampler and a vaporization chamber. The sample to be analyzed is loaded at the injection port via a hypodermic syringe, and it is volatilized as the injection port is heated. Typically, samples of one microliter or less are injected onto the column. These volumes can be further reduced by using what is called a split injection system, in which a controlled fraction of the injected sample is carried away by a gas stream before entering the column.

The separation system consists of the columns and a temperature-controlled oven. The column is where the components of the sample are separated, and it is the crucial part of a GC system.
The column is essentially a tube containing the stationary phase; different stationary phases have different partition coefficients with the analytes, and so determine the quality of the separation. There are two general types of column: packed and capillary (also known as open tubular).

The purpose of a detector is to monitor the carrier gas as it emerges from the column and to generate a signal in response to variations in its composition due to eluted components. Because it converts a physical property into a recordable electrical signal, the detector is another crucial part of the GC. Detectors for GC must respond rapidly to minute concentrations of solutes as they exit the column, i.e., they are required to have a fast response and a high sensitivity. Other desirable properties of a detector are: linear response, good stability, ease of operation, and uniform response to a wide variety of chemical species or, alternatively, predictable and selective response to one or more classes of solutes.

GC systems originally used paper chart recorders, but modern systems typically use an online computer, which tracks and records the electrical signals of the separated peaks. The data can later be analyzed by software to provide information about the gas mixture.

An ideal separation is judged by the resolution, efficiency, and symmetry of the desired peaks. Resolution can be simply expressed as the distance on the output trace between two peaks. The highest possible resolution is the goal when developing a separation method. Resolution is defined by the R value, \ref{1}, which can be expressed mathematically, \ref{2}, where k is capacity, α is selectivity, and N is the number of theoretical plates. An R value of 1.5 is defined as the minimum required for baseline separation, i.e., two adjacent peaks separated at the baseline.
Separation for different R values is illustrated in the figure.

\[ R \ =\ capacity \times selectivity \times efficiency \label{1} \]

\[ R\ =\ \frac{k}{1+k} \times \frac{\alpha - 1}{\alpha} \times \frac{N^{0.5}}{4} \label{2} \]

Capacity (k′) is known as the retention factor. It is a measure of retention by the stationary phase. It is calculated from \ref{3}, where tr is the retention time of the analyte (the substance to be analyzed), and tm is the retention time of an unretained compound.

\[ k'\ =\ (t_{r}-t_{m})/t_{m} \label{3} \]

Selectivity is related to α, the separation factor. The value of α should be large enough to give baseline resolution, but minimized to prevent waste.

Narrow peaks have high efficiency and are desired. Efficiency is expressed in "theoretical plates" (N), which are often used to describe column performance. N is defined as a function of the retention time (tr) and the full peak width at half maximum (Wb1/2), \ref{4}.

\[ N \ =\ 5.545(t_{r}/W_{b1/2})^{2} \label{4} \]

The symmetry of a peak is judged by the values of the two half-peak widths, a and b. When a = b, a peak is called symmetric, which is desired. Unsymmetrical peaks are often described as "tailing" or "fronting". Taken together, high resolution, high efficiency, and good symmetry are the attributes of an ideal separation.

In its simplest form gas chromatography is a process whereby a sample is vaporized and injected onto the chromatographic column, where it is separated into its many components. The elution is brought about by the flow of carrier gas. The carrier gas serves as the mobile phase that elutes the components of a mixture from a column containing an immobilized stationary phase. In contrast to most other types of chromatography, the mobile phase does not interact with molecules of the analytes. Carrier gases, the mobile phase of GC, include helium, hydrogen, and nitrogen, which are chemically inert. The stationary phase in gas-solid chromatography is a solid that has a large surface area at which adsorption of the analyte species (solutes) takes place.
In gas-liquid chromatography, the stationary phase is a liquid that is immobilized on the surface of a solid support by adsorption or by chemical bonding.

Gas chromatographic separation occurs because of differences in the positions of the adsorption equilibria between the gaseous components of the sample and the stationary phase. In GC the distribution ratio (the ratio of the concentration of analyte in the stationary and mobile phases) depends on the component's vapor pressure, the thermodynamic properties of the bulk component band, and its affinity for the stationary phase. The equilibrium is temperature dependent; hence the importance of selecting the stationary phase of the column, and of column temperature programming, in optimizing a separation.

Helium, nitrogen, argon, hydrogen, and air are typically used carrier gases. Which one is used is usually determined by the detector; for example, discharge ionization detection (DID) requires helium as the carrier gas. When analyzing gas samples, however, the carrier is sometimes selected based on the sample's matrix; for example, when analyzing a mixture in argon, an argon carrier is preferred, because the argon in the sample does not show up on the chromatogram. Safety and availability are other factors; for example, hydrogen is flammable, and high-purity helium can be difficult to obtain in some areas of the world.

The carrier gas flow rate affects the analysis in the same way that temperature does. The higher the flow rate, the faster the analysis, but the lower the separation between analytes. Furthermore, the peak shape is also affected by the flow rate: the slower the flow rate, the greater the axial and radial diffusion, and the broader and more asymmetric the peak.
Selecting the flow rate is therefore the same compromise between the level of separation and the length of analysis as selecting the column temperature. Table \(\PageIndex{1}\) shows commonly used stationary phases for various applications.

For precise work, the column temperature must be controlled to within tenths of a degree. The optimum column temperature depends upon the boiling point of the sample. As a rule of thumb, a temperature slightly above the average boiling point of the sample results in an elution time of 2 - 30 minutes. Lower temperatures give good resolution but increase elution times. If a sample has a wide boiling range, then temperature programming can be useful: the column temperature is increased (either continuously or in steps) as the separation proceeds. Temperature also affects the peak shape, much as flow rate does: the higher the temperature, the more intense the diffusion and the poorer the peak shape. Thus a compromise has to be made between quality of separation, retention time, and peak shape.

A number of detectors are used in gas chromatography. The most common are the flame ionization detector (FID) and the thermal conductivity detector (TCD). Both are sensitive to a wide range of components, and both work over a wide range of concentrations. While TCDs are essentially universal and can be used to detect any component other than the carrier gas (as long as their thermal conductivities are different from that of the carrier gas at the detector temperature), FIDs are sensitive primarily to hydrocarbons, and are more sensitive to them than the TCD. However, an FID cannot detect water. Both detectors are also quite robust.
Since the TCD is non-destructive, it can be operated in series before an FID (destructive), thus providing complementary detection of the same analytes. For halides, nitrates, nitriles, peroxides, anhydrides, and organometallics, the electron capture detector (ECD) is a very sensitive choice, capable of detecting as little as 50 fg of those analytes. Different types of detectors are listed below in Table \(\PageIndex{2}\), along with their properties.

Most consumer products and biological samples are composed of a wide variety of compounds that differ in molecular weight, polarity, and volatility. For complex samples like these, headspace sampling is the fastest and cleanest method for analyzing volatile organic compounds. A headspace sample is normally prepared in a vial containing the sample, the dilution solvent, a matrix modifier, and the headspace. Volatile components from complex sample mixtures can be extracted from non-volatile sample components and isolated in the headspace or vapor portion of a sample vial. An aliquot of the vapor in the headspace is delivered to a GC system for separation of all of the volatile components.

The gas phase (G in the figure) is commonly referred to as the headspace and lies above the condensed sample phase. The sample phase (S in the figure) contains the compound(s) of interest and is usually in the form of a liquid or solid in combination with a dilution solvent or a matrix modifier. Once the sample phase is introduced into the vial and the vial is sealed, volatile components diffuse into the gas phase until the headspace has reached a state of equilibrium, as depicted by the arrows. The sample is then taken from the headspace.

Samples must be prepared to maximize the concentration of the volatile components in the headspace, and to minimize unwanted contamination from other compounds in the sample matrix.
To help determine the concentration of an analyte in the headspace, you will need to calculate the partition coefficient (K), which is defined by \ref{5}, where Cs is the concentration of analyte in the sample phase and Cg is the concentration of analyte in the gas phase. Compounds that have low K values will tend to partition more readily into the gas phase, and have relatively high responses and low limits of detection. K can be lowered by changing the temperature at which the vial is equilibrated or by changing the composition of the sample matrix.

\[ K\ =\ C_{s}/C_{g} \label{5} \]

The phase ratio (β) is defined as the relative volume of the headspace compared to the volume of the sample in the vial, \ref{6}, where Vs is the volume of the sample phase and Vg is the volume of the gas phase. Lower values of β (i.e., larger sample sizes) will yield higher responses for volatile compounds. However, decreasing the β value will not always yield the increase in response needed to improve sensitivity. When β is decreased by increasing the sample size, compounds with high K values partition less into the headspace compared to compounds with low K values, and yield correspondingly smaller changes in Cg. Samples that contain compounds with high K values need to be optimized to provide the lowest K value before changes are made in the phase ratio.

\[ \beta \ =\ V_{g}/V_{s} \label{6} \]

This page titled 5.1: Dynamic Headspace Gas Chromatography Analysis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
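The two quantities defined above combine into a useful working relation: a mass balance over the sealed vial (C0Vs = CsVs + CgVg), together with \ref{5} and \ref{6}, gives the headspace concentration Cg = C0/(K + β). The short Python sketch below illustrates this; the function name and all numerical values are hypothetical.

```python
# Mass balance over a sealed vial: C0*Vs = Cs*Vs + Cg*Vg.
# With K = Cs/Cg and beta = Vg/Vs, this rearranges to Cg = C0 / (K + beta).

def headspace_conc(c0, K, v_gas, v_sample):
    """Equilibrium headspace concentration for initial sample concentration c0."""
    beta = v_gas / v_sample
    return c0 / (K + beta)

# Hypothetical 20 mL vial with a 10 mL sample (beta = 1):
low_K = headspace_conc(1.0, 4.0, 10.0, 10.0)      # volatile, low-K analyte
high_K = headspace_conc(1.0, 1000.0, 10.0, 10.0)  # analyte that stays in solution
print(low_K, high_K)  # the low-K analyte gives the far larger headspace response
```

This makes concrete why low-K compounds have high responses and low limits of detection: almost the entire change in β or temperature shows up in Cg for them, while high-K compounds barely respond.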
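Returning to the figures of merit defined earlier in this section (\ref{1}-\ref{4}), they are straightforward to compute from retention data. The sketch below uses invented retention times and peak widths purely for illustration.

```python
# Worked example of the GC figures of merit defined in this section.
# All retention data are invented for illustration.

def capacity_factor(t_r, t_m):
    """k' = (t_r - t_m) / t_m  (eq. 3)."""
    return (t_r - t_m) / t_m

def plates(t_r, w_half):
    """N = 5.545 * (t_r / W_b1/2)**2  (eq. 4)."""
    return 5.545 * (t_r / w_half) ** 2

def resolution(k, alpha, n):
    """R = [k/(1+k)] * [(alpha-1)/alpha] * sqrt(N)/4  (eq. 2)."""
    return (k / (1 + k)) * ((alpha - 1) / alpha) * n ** 0.5 / 4

# Hypothetical pair of peaks: unretained time 1.0 min,
# retention times 5.0 and 5.5 min, half-height width 0.1 min.
k1 = capacity_factor(5.0, 1.0)   # 4.0
k2 = capacity_factor(5.5, 1.0)   # 4.5
alpha = k2 / k1                  # selectivity, 1.125
N = plates(5.5, 0.1)
R = resolution(k2, alpha, N)
print(round(N), round(R, 2))     # R exceeds 1.5, i.e., baseline separation
```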
5.2: Gas Chromatography Analysis of the Hydrodechlorination Reaction of Trichloroethene
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/05%3A_Reactions_Kinetics_and_Pathways/5.02%3A_Gas_Chromatography_Analysis_of_the_Hydrodechlorination_Reaction_of_Trichloroethene
Trichloroethene (TCE) is a widespread environmental contaminant and a member of the class of compounds known as dense non-aqueous phase liquids (DNAPLs). Pd/Al2O3 catalysts have shown activity for the hydrodechlorination (HDC) of chlorinated compounds.

To quantify the reaction rate, a 250 mL screw-cap bottle with 77 mL of headspace gas was used as the batch reactor. TCE (3 μL) was added to 173 mL of DI water that had been purged with hydrogen gas for 15 min, together with 0.2 μL of pentane as an internal standard. Dynamic headspace analysis using GC was applied. The experimental conditions are summarized in Table \(\PageIndex{1}\).

A first-order reaction is assumed in the HDC of TCE, \ref{1}, where kmeas is defined by \ref{2}, Ccat is the concentration of Pd metal within the reactor, and kcat is the rate constant with units of L/gPd/min.

\[ -dC_{TCE}/dt\ =\ k_{meas} \times C_{TCE} \label{1} \]

\[ k_{meas} \ =\ k_{cat} \times C_{cat} \label{2} \]

The GC methods used are listed in Table \(\PageIndex{3}\). Since pentane is introduced as the inert internal standard, the relative concentration of TCE in the system can be expressed as the ratio of the peak area of TCE to that of pentane in the GC trace, \ref{3}.

\[ C_{TCE}\ =\ (peak\ area\ of\ TCE)/(peak\ area\ of\ pentane) \label{3} \]

The major analytes (TCE, pentane, and ethane) are very well separated from each other, allowing for quantitative analysis.
The peak areas associated with these compounds are integrated automatically by the computer, and are listed in Table \(\PageIndex{4}\) with respect to time. The TCE concentration is normalized with respect to the peak area of pentane and then to the initial TCE concentration, and the natural logarithm of this normalized concentration is calculated, as shown in Table \(\PageIndex{3}\).

A plot of normalized TCE concentration against time shows the concentration profile of TCE during the reaction, while the slope of the logarithmic plot provides the reaction rate constant (\ref{1}). The linearity of the logarithmic plot, i.e., the goodness of the first-order assumption, is maintained throughout the reaction. Thus, the kinetic model is validated. Furthermore, the reaction rate constant can be calculated from the slope of the fitted line, i.e., kmeas = 0.0414 min-1. From this kcat can be obtained, \ref{4}.

\[ k_{cat}\ =\ k_{meas}/C_{Pd}\ =\ \frac{0.0414\ min^{-1}}{(5 \times 10^{-4}\ g/0.173\ L)}\ =\ 14.32\ L/g_{Pd}\ min \label{4} \]

This page titled 5.2: Gas Chromatography Analysis of the Hydrodechlorination Reaction of Trichloroethene is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
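The rate-constant arithmetic above can be reproduced numerically. The sketch below fits ln(C/C0) versus time by least squares to recover kmeas, then computes kcat from \ref{4}. Because the raw peak-area table is not reproduced here, the concentration profile is synthetic, generated from kmeas = 0.0414 min-1; only the final arithmetic (kmeas and CPd = 5 × 10-4 g / 0.173 L) comes from the text.

```python
# First-order fit for the HDC of TCE: the slope of ln(C/C0) vs t gives -k_meas,
# then k_cat = k_meas / C_Pd (eqs. 2 and 4). Profile data are synthetic.
import math

def kmeas_from_profile(times, rel_conc):
    """Least-squares slope of ln(C/C0) against time; returns k_meas (> 0)."""
    y = [math.log(c / rel_conc[0]) for c in rel_conc]
    n = len(times)
    tbar, ybar = sum(times) / n, sum(y) / n
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) / \
            sum((t - tbar) ** 2 for t in times)
    return -slope

# Synthetic profile generated with k = 0.0414 1/min, for illustration only.
t = [0, 5, 10, 15, 20]                     # min
c = [math.exp(-0.0414 * ti) for ti in t]   # normalized TCE concentration
k_meas = kmeas_from_profile(t, c)
k_cat = k_meas / (5e-4 / 0.173)            # L/gPd/min
print(round(k_meas, 4), round(k_cat, 2))   # should recover ~0.0414 and ~14.32
```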
5.3: Temperature-Programmed Desorption Mass Spectroscopy Applied in Surface Chemistry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/05%3A_Reactions_Kinetics_and_Pathways/5.03%3A_Temperature-Programmed_Desorption_Mass_Spectroscopy_Applied_in_Surface_Chemistry
The temperature-programmed desorption (TPD) technique is often used to monitor surface interactions between adsorbed molecules and a substrate surface. Utilizing the temperature dependence, it is possible to discriminate between processes with different activation parameters, such as activation energy, rate constant, reaction order, and Arrhenius pre-exponential factor. In order to provide an example of the set-up and results from a TPD experiment, we are going to use an ultra-high vacuum (UHV) chamber equipped with a quadrupole mass spectrometer to exemplify a typical surface gas-solid interaction and estimate several important kinetic parameters.

When setting up an apparatus for a typical surface TPD experiment, we should first think about how to generate an extremely clean environment for the solid substrate and gas adsorbents. Ultra-high vacuum (UHV) is the most basic requirement for surface chemistry experiments. UHV is defined as a vacuum regime below 10-9 Torr. At such a low pressure the mean free path of a gas molecule is approximately 40 km, which means gas molecules will collide and react with the sample substrate in the UHV chamber many times before colliding with each other, ensuring that all interactions take place on the substrate surface.

UHV chambers generally require unusual construction materials, and the entire system must be baked at ~180 °C for several hours to remove moisture and other trace gases adsorbed on the walls of the chamber in order to reach the ultra-high vacuum regime. Also, outgassing from the substrate surface and other bulk materials should be minimized by careful selection of materials with low vapor pressures, such as stainless steel, for everything inside the UHV chamber. Thus bulk metal crystals are chosen as substrates to study interactions between gas adsorbates and the crystal surface itself.
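The "~40 km" figure quoted above can be checked with the kinetic-theory expression λ = kBT/(√2 π d² p). The sketch below assumes a molecular diameter of ~3.7 Å (roughly that of N2, an assumed value); with that assumption it gives a mean free path of a few tens of kilometres at 10-9 Torr.

```python
# Kinetic-theory estimate of the mean free path at UHV:
# lambda = kB*T / (sqrt(2) * pi * d^2 * p).
import math

def mean_free_path(p_torr, T=298.0, d=3.7e-10):
    """Mean free path in metres; d is an assumed molecular diameter (m)."""
    p = p_torr * 133.322           # Torr -> Pa
    kB = 1.380649e-23              # Boltzmann constant, J/K
    return kB * T / (math.sqrt(2) * math.pi * d ** 2 * p)

print(f"{mean_free_path(1e-9) / 1000:.0f} km")  # ~50 km at 1e-9 Torr
```

The result scales inversely with pressure, so at 10-6 Torr the same estimate drops to tens of metres, already longer than any chamber dimension.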
A schematic of a TPD system is shown in the figure, along with a typical TPD instrument equipped with a quadrupole MS spectrometer and a reflection absorption infrared spectrometer (RAIRS).

There is no single pump that can operate all the way from atmospheric pressure to UHV. Instead, a series of different pumps is used, according to the appropriate pressure range for each pump. UHV pressures are measured with an ion gauge, either a hot filament or an inverted magnetron type. Finally, special seals and gaskets must be used between components in a UHV system to prevent even trace leakage. Nearly all such seals are all-metal, with knife edges on both sides cutting into a soft (e.g., copper) gasket. This all-metal seal can maintain system pressures down to ~10-12 Torr.

A UHV manipulator (or sample holder) allows an object inside the vacuum chamber to be mechanically positioned. It may provide rotary motion, linear motion, or a combination of both. The manipulator may include features allowing additional control and testing of a sample, such as the ability to apply heat, cooling, voltage, or a magnetic field. Sample heating can be accomplished by thermal radiation: a filament is mounted close to the sample and resistively heated to high temperature. To reduce the complexity of the interaction between substrate and adsorbates, surface chemistry labs often carry out TPD experiments on a substrate with a single-crystal surface instead of polycrystalline or amorphous substrates.

Before the selected gas molecules are dosed into the chamber for adsorption, the substrate (metal crystal) needs to be cleaned by argon plasma sputtering, followed by annealing at high temperature for surface reconstruction. After these pretreatments, the system is again cooled down to very low temperature (liquid N2 temperature), which facilitates the adsorption of gas molecules on the substrate surface.
Adsorption is a process in which a molecule becomes adsorbed onto the surface of another phase. It is distinguished from absorption, which describes uptake into the bulk of a solid or liquid phase. After adsorption, the adsorbates are released back into the gas phase by programmed heating of the sample holder. A mass spectrometer is set up to collect the desorbed gas molecules, and the correlation between desorption temperature and the fragmentation of the desorbed molecules provides important information. The figure shows a typical TPD experiment, carried out by adsorbing CO onto a Pd surface, followed by programmed heating to desorb the CO adsorbates.

The Langmuir isotherm describes the dependence of the surface coverage of an adsorbed gas on the pressure of the gas above the surface at a fixed temperature. The Langmuir isotherm is the simplest model, but it provides useful insight into the pressure dependence of the extent of surface adsorption. It was Irving Langmuir who first studied the adsorption process quantitatively. In his proposed model, he supposed that molecules can adsorb only at specific sites on the surface, and that once a site is occupied by one molecule, it cannot adsorb a second molecule. The adsorption process can be represented as \ref{1}, where A is the adsorbing molecule, S is the surface site, and A─S stands for an A molecule bound to the surface site.

\[ A\ +\ S \rightarrow A\ -\ S \label{1} \]

In a similar way, the reverse desorption process can be represented as \ref{2}.

\[A\ -\ S \rightarrow A\ +\ S \label{2} \]

According to the Langmuir model, the adsorption rate should be proportional to ka[A](1-θ), where θ is the fraction of the surface sites covered by adsorbate A. The desorption rate is then proportional to kdθ. Here ka and kd are the rate constants for adsorption and desorption.
At equilibrium, the rates of these two processes are equal, \ref{3} and \ref{4}. Defining the equilibrium constant K by \ref{5} gives \ref{6}; replacing [A] by the gas partial pressure P gives \ref{7}.

\[ k_{a} [A](1-\theta )\ =\ k_{d} \theta \label{3} \]

\[ \frac{\theta }{1\ -\ \theta }\ =\ \frac{k_{a}}{k_{d}}[A] \label{4} \]

\[ K\ =\ \frac{k_{a}}{k_{d}} \label{5} \]

\[ \theta \ =\ \frac{K[A]}{1+K[A]} \label{6} \]

\[ \theta \ =\ \frac{KP}{1+KP} \label{7} \]

From the equation above, if [A] or P is low enough that K[A] or KP << 1, then θ ≈ K[A] or KP, which means that the surface coverage should increase linearly with [A] or P. On the contrary, if [A] or P is large enough that K[A] or KP >> 1, then θ ≈ 1. This behavior is shown in the plot of θ versus [A] or P in the figure.

Here we are going to show how to use the TPD technique to estimate the desorption energy and reaction order, as well as the Arrhenius pre-exponential factor. Let us assume that molecules are irreversibly adsorbed on the surface at some low temperature T0. The leak valve is closed, the valve to the pump is opened, and the "density" of product molecules is monitored with a mass spectrometer as the crystal is heated under a programmed temperature, \ref{8}, where β is the heating rate (~10 °C/s).

\[ T\ =\ T_{0}\ +\ \beta t \label{8} \]

We know the desorption rate depends strongly on temperature, so when the temperature of the crystal reaches a high enough value that the desorption rate is appreciable, the mass spectrometer will begin to record a rise in density. At higher temperatures, the surface finally becomes depleted of desorbing molecules and the mass spectrometer signal decreases. From the shape and position of the peak in the mass signal, we can learn the activation energy for desorption and the Arrhenius pre-exponential factor.

Consider a first-order desorption process, \ref{9}, with a rate constant kd given by \ref{10}, where A is the Arrhenius pre-exponential factor.
If θ is taken to be the number of surface adsorbates per unit area, the desorption rate is given by \ref{11}.

\[ A\ -\ S \rightarrow \ A\ +\ S \label{9} \]

\[ k_{d}\ =\ Ae^{- \Delta E_{a}/RT} \label{10} \]

\[ \frac{-d\theta}{dt}\ =\ k_{d} \theta \ =\ \theta Ae^{- \Delta E_{a}/RT} \label{11} \]

The heating rate β relates the time to the temperature of the crystal surface T, \ref{12} and \ref{13}.

\[ T\ =\ T_{0} \ +\ \beta t \label{12} \]

\[ \frac{d}{dt} \ = \ \beta \frac{d}{dT} \label{13} \]

Combining \ref{14} and \ref{15} gives \ref{16}. A plot of –dθ/dT versus T is shown in the figure.

\[ \frac{-d\theta}{dt}\ =\ -\beta \frac{d\theta }{dT} \label{14} \]

\[ \frac{-d\theta}{dt}\ =\ k_{d}\theta \ =\ \theta A e^{- \Delta E_{a}/RT} \label{15} \]

\[ \frac{-d\theta}{dT}\ =\ \frac{\theta A}{\beta }e^{- \Delta E_{a}/RT} \label{16} \]

We notice that the peak maximum Tm stays constant with increasing θ, i.e., the value of Tm does not depend on the initial coverage for first-order desorption. If, instead, different desorption activation energies Ea are considered, the corresponding Tm values increase with increasing Ea.

At the peak of the mass signal, the increase in the desorption rate is matched by the decrease in surface concentration per unit area, so that the change of dθ/dT with T is zero: \ref{17} and \ref{18}.
Carrying out the differentiation in \ref{18} gives \ref{19}; substituting \ref{20} then gives \ref{21}, and taking logarithms yields \ref{22}.

\[ \frac{-d\theta }{dT}\ =\ \frac{\theta A}{\beta }e^{- \Delta E_{a}/RT} \label{17} \]

\[ \frac{d}{dT} [\frac{\theta A}{\beta }e^{- \Delta E_{a}/RT}]\ =\ 0 \label{18} \]

\[ \frac{\Delta E_{a}}{RT^{2}_{M}} = -\frac{1}{\theta}\frac{d\theta }{dT} \label{19} \]

\[ \frac{-d\theta }{dT}\ =\ \frac{\theta A}{\beta} e^{- \Delta E_{a}/RT_{M}} \label{20} \]

\[ \frac{\Delta E_{a}}{RT^{2}_{M}}\ =\ \frac{A}{\beta }e^{- \Delta E_{a}/RT_{M}} \label{21} \]

\[ 2\ln T_{M}\ -\ \ln \beta \ =\ \frac{\Delta E_{a}}{RT_{M}} + \ln \frac{\Delta E_{a}}{RA} \label{22} \]

This tells us that if different heating rates β are used and the left-hand side of \ref{22} is plotted as a function of 1/TM, a straight line should be obtained whose slope is ΔEa/R and whose intercept is ln(ΔEa/RA). In this way we obtain both the activation energy for desorption ΔEa and the Arrhenius pre-exponential factor A.

Now let us consider a second-order desorption process, \ref{23}, with a rate constant kd. The desorption kinetics are given by \ref{24}. The result differs from the first-order case, whose Tm value does not depend upon the initial coverage: here the peak temperature Tm decreases with increasing initial surface coverage.

\[ 2(A\ -\ S) \rightarrow A_{2}\ +\ 2S \label{23} \]

\[ -\frac{d\theta }{dT} =\ \frac{A \theta^{2}}{\beta} e^{- \Delta E_{a}/RT} \label{24} \]

Zero-order desorption kinetics follow \ref{25}. The desorption rate for a zero-order process does not depend on coverage, and it increases exponentially with T. The desorption rate then drops rapidly once all molecules have desorbed, and the peak temperature Tm moves to higher T with increasing coverage θ.
\[ - \frac{d\theta }{dT} \ =\ \frac{A}{\beta} e^{- \Delta E_{a}/RT} \label{25} \]

Typical TPD spectra of D2 from Rh at different exposures in Langmuirs (1 L = 10-6 Torr·s) are shown in the figure. The desorption peaks (curves g to n) show two distinct desorption regions. The higher-temperature region can be ascribed to chemisorbed D2 on the Rh surface: chemisorbed molecules need more energy to overcome their activation barrier for desorption. The lower-temperature desorption region is then due to physisorbed D2, with a much lower desorption activation energy than chemisorbed D2. According to the TPD theory outlined above, the peak maximum shifts to lower temperature with increasing initial coverage, indicating a second-order process. Given the heating rate β and each Tm at the corresponding initial surface coverage θ, we can calculate the desorption activation energy Ea and the Arrhenius pre-exponential factor A.

Temperature-programmed desorption is an easy and straightforward technique that is especially useful for investigating gas-solid interactions. By changing one parameter, such as coverage or heating rate, and running a series of TPD experiments, it is possible to obtain several important kinetic parameters (desorption activation energy, reaction order, pre-exponential factor, etc.). Based on this information, the mechanism of the gas-solid interaction can be further deduced.

This page titled 5.3: Temperature-Programmed Desorption Mass Spectroscopy Applied in Surface Chemistry is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
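The heating-rate-variation analysis of \ref{22} can be sketched numerically: pick an assumed ΔEa and A, solve \ref{21} for the peak temperature TM at several heating rates, then recover ΔEa and A from a straight-line fit of 2 ln TM − ln β against 1/TM. All parameter values below (ΔEa = 100 kJ/mol, A = 10¹³ s⁻¹) are illustrative.

```python
# Recovering Ea and A from first-order TPD peak maxima at several heating
# rates, via: 2*ln(Tm) - ln(beta) = Ea/(R*Tm) + ln(Ea/(R*A))  (eq. 22).
import math

R = 8.314        # J/(mol K)
Ea_true = 100e3  # J/mol, assumed
A_true = 1e13    # 1/s, assumed

def peak_temperature(beta, lo=100.0, hi=1000.0):
    """Bisect Ea/(R*T^2) - (A/beta)*exp(-Ea/(R*T)) = 0 (eq. 21) for Tm."""
    f = lambda T: Ea_true / (R * T**2) - (A_true / beta) * math.exp(-Ea_true / (R * T))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

betas = [0.5, 1, 2, 5, 10, 20]          # heating rates, K/s
Tm = [peak_temperature(b) for b in betas]
x = [1.0 / t for t in Tm]
y = [2 * math.log(t) - math.log(b) for t, b in zip(Tm, betas)]

# Least-squares line: slope = Ea/R, intercept = ln(Ea/(R*A)).
n = len(x)
xb, yb = sum(x) / n, sum(y) / n
slope = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
        sum((xi - xb) ** 2 for xi in x)
intercept = yb - slope * xb
Ea_fit = slope * R
A_fit = Ea_fit / (R * math.exp(intercept))
print(Ea_fit / 1000, A_fit)   # close to the assumed 100 kJ/mol and 1e13 1/s
```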
6.1: NMR of Dynamic Systems- An Overview
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/06%3A_Dynamic_Processes/6.01%3A_NMR_of_Dynamic_Systems-_An_Overview
The study of conformational and chemical equilibrium is an important part of understanding chemical species in solution, and NMR is one of the most useful and easiest-to-use tools for such work.

Chemical equilibrium is defined as the state in which both reactants and products (of a chemical reaction) are present at concentrations which have no further tendency to change with time. Such a state results when the forward reaction proceeds at the same rate (i.e., ka) as the reverse reaction (i.e., kd). The reaction rates of the forward and reverse reactions are generally not zero but, being equal, there are no net changes in the concentrations of reactant and product. This process is called dynamic equilibrium.

Conformational isomerism is a form of stereoisomerism in which the isomers can be interconverted exclusively by rotations about formally single bonds. Conformational isomers are distinct from the other classes of stereoisomers, for which interconversion necessarily involves breaking and reforming of chemical bonds. The rotational barrier, or barrier to rotation, is the activation energy required to interconvert rotamers. The equilibrium population of different conformers follows a Boltzmann distribution.

Consider the simple system shown in the figure as an example of how to study conformational equilibrium. In this system, the two methyl groups (one red, the other blue) exchange with each other through rotation about the C-N bond.
When the rotation is fast (faster than the NMR timescale of about 10-5 s), NMR can no longer distinguish the two methyl groups, resulting in a single averaged peak in the NMR spectrum (the red spectrum in the figure). Conversely, when the rotation is slowed by cooling (to -50 °C), the two conformations have lifetimes long enough that both are observable in the NMR spectrum (the dark blue spectrum in the figure). The gradual change of the NMR spectrum with decreasing temperature is clearly seen in the figure.

Based upon the above, it should be clear that the presence of an averaged peak or of separate peaks can be used as an indicator of the speed of the rotation. As such, this technique is useful in probing systems such as molecular motors. One of the most fundamental problems is to confirm that the motor is really rotating, while another is to determine the rotation speed of the motor. In this area, dynamic NMR measurement is an ideal technique. For example, consider the molecular motor shown in the figure. This molecular motor is composed of two rigid conjugated parts, which are not in the same plane. Rotation about the C-N bond changes the conformation of the molecule, which is shown by the variation of the peaks of the two methyl groups in the NMR spectrum. To control the rotation speed of this particular molecular motor, the researchers added additional functionality. When the nitrogen in the aromatic ring is not protonated, the repulsion between the nitrogen and oxygen atoms is larger, which prohibits the rotation of the five-membered ring and separates the peaks of the two methyl groups from each other. However, when the nitrogen is protonated, the rotation barrier decreases greatly because of the formation of a more stable coplanar transition state during the rotation process.
Therefore, the speed of rotation of the rotor dramatically increases, making the two methyl groups indistinguishable by NMR spectroscopy and giving a single averaged peak. The variation of the NMR spectrum with the addition of acid is shown in the figure, which visually confirms that the rotation speed changes.

This page titled 6.1: NMR of Dynamic Systems- An Overview is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
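A common way to put numbers on this picture, though not derived in this section, is the standard two-site equal-population exchange result: at the coalescence temperature, where the two separate peaks merge into one broad peak, the exchange rate is kc = πΔν/√2, with Δν the slow-exchange separation of the two peaks in Hz. The numerical values below are hypothetical.

```python
# Exchange rate at coalescence for two equally populated exchanging sites:
# k_c = pi * delta_nu / sqrt(2)   (standard two-site exchange result).
import math

def k_coalescence(delta_nu_hz):
    """Rate constant (1/s) at the coalescence temperature."""
    return math.pi * delta_nu_hz / math.sqrt(2)

# Hypothetical: methyl peaks separated by 0.2 ppm on a 400 MHz spectrometer.
delta_nu = 0.2 * 400.0   # Hz
print(round(k_coalescence(delta_nu), 1), "s^-1")
```

This gives a quick estimate of the rotation rate at the temperature (or degree of protonation) where the two methyl peaks just merge.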
6.2: Determination of Energetics of Fluxional Molecules by NMR
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/06%3A_Dynamic_Processes/6.02%3A_Determination_of_Energetics_of_Fluxional_Molecules_by_NMR
It does not take an extensive knowledge of chemistry to understand that as-drawn chemical structures do not give an entirely correct picture of molecules. Unlike drawings, molecules are not stationary objects in solution, the gas phase, or even in the solid state. Bonds can rotate, bend, and stretch, and the molecule can even undergo conformational changes. Rotation, bending, and stretching do not typically interfere with characterization techniques, but conformational changes occasionally complicate analyses, especially nuclear magnetic resonance (NMR). For the present discussion, a fluxional molecule can be defined as one that undergoes an intramolecular reversible interchange between two or more conformations. Fluxionality is specified as intramolecular to differentiate it from intermolecular processes such as ligand exchange and complexation. An irreversible interchange is more of a chemical reaction than a form of fluxionality. Most of the following examples alternate between two conformations, but more complex fluxionality is possible. Additionally, this module will focus on inorganic compounds. In this module, examples of fluxional molecules, NMR procedures, calculations of the energetics of fluxional molecules, and the limitations of the approach will be covered. Octahedral trischelate complexes are susceptible to Bailar twists, in which the complex distorts into a trigonal prismatic intermediate before reverting to its original octahedral geometry. If the chelates are not symmetric, a Δ enantiomer will be inverted to a Λ enantiomer. For example, note how in , with the GaL3 complex of 2,3-dihydroxy-N,N′-diisopropylterephthalamide, the end product has the chelate ligands spiraling in the opposite direction around the metal center. D3h compounds can also experience fluxionality in the form of a Berry pseudorotation (depicted in ), in which the complex distorts into a C4v intermediate and returns to trigonal bipyramidal geometry, exchanging two equatorial and two axial groups. 
Phosphorus pentafluoride is one of the simplest examples of this effect. In its 19F NMR spectrum, only one peak representing five fluorines is present at 266 ppm, even at low temperatures. This is due to interconversion faster than the NMR timescale. Perhaps one of the best examples of a fluxional metal complex is (η5-C5H5)Fe(CO)2(η1-C5H5) . Not only does it have a rotating η5 cyclopentadienyl ring, it also has an alternating η1 cyclopentadienyl ring (Cp). This can be seen in its NMR spectra in . The signal for five protons corresponds to the metallocene Cp ring (5.6 ppm). Notice how that peak remains a sharp singlet despite the large temperature range sampled by the spectra. Another noteworthy aspect is how the multiplets corresponding to the other Cp ring broaden and eventually condense into one sharp singlet. Sample preparation is essentially the same as for routine NMR. The compound of interest will need to be dissolved in an NMR-compatible solvent (CDCl3 is a common example) and transferred into an NMR tube. Approximately 600 μL of solution is needed, with only micrograms of compound. Compounds should be at least 99% pure in order to ease peak assignments and analysis. Because each spectrometer has its own protocol for shimming and optimization, having the supervision of a trained specialist is strongly advised. Additionally, using an NMR spectrometer with temperature control is essential. The basic goal of this experiment is to find three temperature regimes: slow interchange, fast interchange, and coalescence. Thus, many spectra will need to be obtained at different temperatures in order to determine the energetics of the fluxionality. The process will be much swifter if the lower temperature range (in which the fluctuation is much slower than the spectrometer timescale) is known. A spectrum should be taken in this range. 
Spectra at higher temperatures should then be taken, preferably in regular increments (for instance, 10 K), until the peaks of interest condense into a sharp singlet at high temperature. A spectrum at the coalescence temperature should also be taken, particularly if the results are to be published. This procedure should then be repeated in reverse; that is, spectra should be taken from high temperature to low temperature. This ensures that no thermal reaction has taken place and that no hysteresis is observed. With the data (spectra) in hand, the energetics can now be determined. For intramolecular processes that exchange two chemically equivalent nuclei, the NMR spectrum is a function of the difference in their resonance frequencies (Δv) and the rate of exchange (k). Slow interchange occurs when Δv >> k, and two separate peaks are observed. When Δv << k, fast interchange is said to occur, and one sharp peak is observed. At intermediate temperatures, the peaks are broadened and overlap one another. When they completely merge into one peak, the coalescence temperature, Tc, is said to be reached. In the case of coalescence of an equal doublet (for instance, one proton exchanging with one proton), coalescence occurs when Δv0t = √2/(2π), where Δv0 is the difference in chemical shift at slow interchange and where t is defined by \ref{1}, in which ta and tb are the respective lifetimes of species a and b. This condition only occurs when ta = tb, and as a result, k = 1/(2t).\[ \frac{1}{t} \ =\ \frac{1}{t_{a}}\ +\ \frac{1}{t_{b}} \label{1} \]For reference, the exact lineshape function (assuming two equivalent groups being exchanged) is given by the Bloch equation, \ref{2}, where g is the intensity at frequency v, and where K is a normalization constant.\[ g(v)= \frac{Kt(v_{a} - v_{b})^{2}}{[0.5(v_{a} + v_{b})- v ]^{2}+4\pi^{2}t^{2}(v_{a}-v)^{2}(v _{b} - v)^{2}} \label{2} \]At low temperature (slow exchange), the spectrum has two peaks and Δv >> 1/t. 
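The qualitative behavior of this lineshape can be checked numerically. The following Python sketch evaluates the two-site, equal-population exchange lineshape in its standard Gutowsky–Holm form (numerator proportional to (va − vb)²); the frequencies (0 and 100 Hz) and lifetimes are purely illustrative values, not taken from any particular experiment.

```python
import math

def lineshape(nu, nu_a, nu_b, tau, K=1.0):
    """Two-site equal-population exchange lineshape g(nu).

    nu_a, nu_b: resonance frequencies (Hz) of the two exchanging sites
    tau:        exchange lifetime (s); small tau means fast exchange
    K:          normalization constant
    """
    num = K * tau * (nu_a - nu_b) ** 2
    den = (0.5 * (nu_a + nu_b) - nu) ** 2 \
        + 4 * math.pi ** 2 * tau ** 2 * (nu_a - nu) ** 2 * (nu_b - nu) ** 2
    return num / den

nu_a, nu_b = 0.0, 100.0   # Hz (illustrative)

slow = 0.1     # s: slow exchange, two resolved peaks at nu_a and nu_b
fast = 1.0e-4  # s: fast exchange, one averaged peak at the midpoint

print(lineshape(0.0, nu_a, nu_b, slow) > lineshape(50.0, nu_a, nu_b, slow))   # True
print(lineshape(50.0, nu_a, nu_b, fast) > lineshape(0.0, nu_a, nu_b, fast))   # True
```

Scanning `tau` between these two limits reproduces the broadening and coalescence behavior described above.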
As a result, \ref{2} reduces to \ref{3}, where T2a is the spin-spin relaxation time. The linewidth of the peak for species a is defined by \ref{4}.\[ g(v)_{a}=g(v)_{b}=\frac{KT_{2a}}{1+T^{2}_{2a}(v_{a}-v)^{2}} \label{3} \]\[ (\Delta v_{a})_{1/2} = \frac{1}{\pi}(\frac{1}{T_{2a}}+\frac{1}{t_{a}} ) \label{4} \]Because the spin-spin relaxation time is difficult to determine, especially in inhomogeneous environments, rate constants at higher temperatures (but before coalescence) are preferable and more reliable. The rate constant k can then be determined by comparing the linewidth of a peak with no exchange (low temperature) with the linewidth of the peak with little exchange using \ref{5}, where subscript e refers to the peak in the slightly higher temperature spectrum and subscript 0 refers to the peak in the no exchange spectrum.\[ k = \frac{\pi }{ \sqrt{2}}[(\Delta v_{e})_{1/2}- (\Delta v_{0})_{1/2}] \label{5} \]Additionally, k can be determined from the difference in frequency (chemical shift) using \ref{6}, where Δv0 is the chemical shift difference in Hz at the no exchange temperature and Δve is the chemical shift difference at the exchange temperature.\[ k= \frac{\pi}{\sqrt{2} }(\Delta v^{2}_{0} - \Delta v^{2}_{e})^{1/2} \label{6} \]The intensity ratio method, \ref{7}, can be used to determine the rate constant for spectra whose peaks have begun to merge, where r is the ratio between the maximum intensity and the minimum intensity of the merging peaks, Imax/Imin.\[ k = \frac{\pi \Delta v_{0}}{\sqrt{2}} (r+(r^{2} - r)^{1/2})^{-1/2} \label{7} \]As mentioned earlier, the coalescence temperature, Tc, is the temperature at which the two peaks corresponding to the interchanging groups merge into one broad peak, and \ref{10} may be used to calculate the rate at coalescence.\[ k\ = \frac{\pi \Delta v_{0} }{\sqrt{2}} \label{10} \]Beyond the coalescence temperature, interchange is so rapid (k >> Δv) that the spectrometer registers the two groups as equivalent and records one peak. At temperatures greater than that of coalescence, the lineshape equation reduces to \ref{11}.\[ g(v)\ = \frac{KT_{2}}{[1 \ +\ \pi T_{2}(v_{a} \ +\ v_{b} \ -\ 2v)^{2}]} \label{11} \]As mentioned earlier, determination of T2 is very time consuming and often unreliable due to inhomogeneity of the sample and of the magnetic field. The following approximation (\ref{12}) applies just above coalescence, where the merged signal has not yet fully narrowed.\[ k\ = \frac{ 0.5\pi \Delta v^{2} }{(\Delta v_{e})_{1/2} - (\Delta v_{0} )_{1/2} } \label{12} \]Now that the rate constants have been extracted from the spectra, the energetic parameters may be calculated. For a rough measure of the activation parameters, only the spectra at no exchange and at coalescence are needed. The coalescence temperature is determined from the NMR experiment, and the rate of exchange at coalescence is given by \ref{10}. The activation parameters can then be determined from the Eyring equation (\ref{13}), where kB is the Boltzmann constant, and where ΔH‡ - TΔS‡ = ΔG‡.\[ ln(\frac{k}{T}) = -\frac{\Delta H^{\ddagger } }{RT} + \frac{\Delta S^{\ddagger }}{R} + ln(\frac{k_{B}}{h}) \label{13} \]For more accurate calculations of the energetics, the rates at different temperatures need to be obtained. 
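As a worked example of the rate at coalescence (\ref{10}) and the Eyring relation, the short Python sketch below computes k for a hypothetical no-exchange peak separation of 100 Hz and the corresponding ΔG‡ at an assumed coalescence temperature of 298 K; both input values are illustrative, not from any specific compound.

```python
import math

R   = 8.314           # gas constant, J/(mol K)
k_B = 1.380649e-23    # Boltzmann constant, J/K
h   = 6.62607015e-34  # Planck constant, J s

delta_nu0 = 100.0     # Hz, peak separation with no exchange (illustrative)
T_c       = 298.0     # K, assumed coalescence temperature (illustrative)

# Rate constant at coalescence: k = pi * delta_nu0 / sqrt(2)
k_c = math.pi * delta_nu0 / math.sqrt(2)

# Eyring: k = (k_B T / h) exp(-dG/(R T))  =>  dG = R T ln(k_B T / (h k))
dG = R * T_c * math.log(k_B * T_c / (h * k_c))

print(round(k_c, 1))        # 222.1  (s^-1)
print(round(dG / 1000, 1))  # 59.6   (kJ/mol)
```

Larger peak separations or higher coalescence temperatures give correspondingly larger ΔG‡ values.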
A plot of ln(k/T) versus 1/T (where T is the temperature at which the spectrum was taken) will yield ΔH‡ (from the slope), ΔS‡ (from the intercept), and hence ΔG‡. For a pictorial representation of these concepts, see . For unequal doublets (for instance, two protons exchanging with one proton), a different treatment is needed. The difference in population can be defined through \ref{14}, where Pi is the concentration (integration) of species i and X = 2πΔv0t. Values for Δv0t are given in .\[ \Delta P = P_{a} - P_{b} = [ \frac{X^{2} - 2}{3} ]^{3/2} (\frac{1}{X}) \label{14} \]The rates of conversion for the two species, ka and kb, satisfy kaPa = kbPb (equilibrium), and because ka = 1/ta and kb = 1/tb, the rate constant follows \ref{15}.\[ k_{i} = \frac{1}{2t}(1- \Delta P) \label{15} \]From Eyring's expressions, the Gibbs free energy of activation for each species can be obtained through \ref{16} and \ref{17}.\[ \Delta G^{\ddagger }_{a} = \ RT_{c}\ ln(\frac{k_{B}T_{c}}{h\pi \Delta v_{0}} \times \frac{X}{1-\Delta P} ) \label{16} \]\[ \Delta G^{\ddagger }_{b} = \ RT_{c}\ ln(\frac{k_{B}T_{c}}{h\pi \Delta v_{0}} \times \frac{X}{1+\Delta P} ) \label{17} \]Taking the difference of \ref{16} and \ref{17} gives the difference in energy between species a and b (\ref{18}).\[ \Delta \Delta G^{\ddagger } = RT_{c}\ ln(\frac{P_{a}}{P_{b}})=RT_{c}\ ln(\frac{1+\Delta P}{1-\Delta P}) \label{18} \]Converting constants will yield the following activation energies in calories per mole (\ref{19} and \ref{20}).\[ \Delta G^{\ddagger }_{a} = 4.57T_{c}[10.62\ +\ log(\frac{X}{2\pi (1-\Delta P)}) +\ log(T_{c}/\Delta v)] \label{19} \]\[ \Delta G^{\ddagger }_{b} = 4.57T_{c}[10.62\ +\ log(\frac{X}{2\pi (1+\Delta P)}) +\ log(T_{c}/\Delta v)] \label{20} \]To obtain the free energies of activation, values of log(X/(2π(1 ± ΔP))) need to be plotted against ΔP (the values of Tc and Δv0 being predetermined). This unequal doublet energetics approximation only gives ΔG‡ at one temperature, and a more rigorous theoretical treatment is needed to give information about ΔS‡ and 
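The Eyring-plot analysis described above can be sketched in a few lines of Python. Here synthetic rate constants are generated from assumed activation parameters (ΔH‡ = 65 kJ/mol, ΔS‡ = 8 J/mol·K, purely illustrative), and a linear fit of ln(k/T) against 1/T recovers them from the slope and intercept.

```python
import math
import numpy as np

R = 8.314  # J/(mol K)
ln_kB_over_h = math.log(1.380649e-23 / 6.62607015e-34)  # ln(k_B / h)

dH, dS = 65_000.0, 8.0  # assumed activation parameters (illustrative)

# Synthetic Eyring data: ln(k/T) = -dH/(R T) + dS/R + ln(k_B/h)
T = np.arange(240.0, 301.0, 10.0)               # temperatures, K
y = -dH / (R * T) + dS / R + ln_kB_over_h       # ln(k/T)

# Linear regression of ln(k/T) versus 1/T
slope, intercept = np.polyfit(1.0 / T, y, 1)

dH_fit = -slope * R                       # enthalpy of activation, from slope
dS_fit = (intercept - ln_kB_over_h) * R   # entropy of activation, from intercept
dG_298 = dH_fit - 298.0 * dS_fit          # Gibbs energy of activation at 298 K

print(round(dH_fit))  # 65000
print(round(dS_fit))  # 8
```

With real data, the experimentally determined rate constants k would replace the synthetic `y` values, and scatter in the fit reflects the measurement uncertainty.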
ΔH‡. Normally, ligands such as dipyrido(2,3-a;3′,2′-j)phenazine (dpop’) are tridentate when complexed to transition metal centers. However, dpop’ binds to rhenium in a bidentate manner, with the outer nitrogens alternating between being coordinated and uncoordinated. See for the structure of Re(CO)3(dpop')Cl. This fluxionality results in the exchange of the aromatic protons on the dpop’ ligand, which can be observed via 1H NMR. Because of the complex nature of the coalescence of doublets, the rate constants at different temperatures were determined via computer simulation (DNMR3, a plugin of Topspin). These spectra are shown in . The activation parameters can then be obtained by plotting ln(k/T) versus 1/T (see for the Eyring plot). ΔS‡ can be extracted from the y-intercept, and ΔH‡ can be obtained from the slope of the plot. For this example, ΔH‡, ΔS‡, and ΔG‡ were determined to be 64.9 kJ/mol, 7.88 J/mol·K, and 62.4 kJ/mol, respectively. Though NMR is a powerful technique for determining the energetics of fluxional molecules, it does have one major limitation. If the fluctuation is too rapid for the NMR timescale (< 1 ms), or if the conformational change is so slow that the coalescence temperature is not observed, the energetics cannot be calculated. In other words, spectra at coalescence and at no exchange both need to be observable. One is also limited by the capabilities of the available spectrometer. The energetics of very fast fluxionality (metallocenes, PF5, etc.) and very slow fluxionality may not be determinable. Also note that this method does not prove any fluxionality or any mechanism thereof; it only gives a value for the activation energy of the process. As a side note, sometimes the coalescence of NMR peaks is not due to fluxionality, but rather to temperature-dependent chemical shifts.This page titled 6.2: Determination of Energetics of Fluxional Molecules by NMR is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. 
Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
6.3: Rolling Molecules on Surfaces Under STM Imaging
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/06%3A_Dynamic_Processes/6.03%3A_Rolling_Molecules_on_Surfaces_Under_STM_Imaging
As single molecule imaging methods such as scanning tunneling microscopy (STM), atomic force microscopy (AFM), and transmission electron microscopy (TEM) have developed over the past decades, scientists have gained powerful tools to explore molecular structures and behaviors in previously unknown areas. Among these imaging methods, STM is probably the most suitable one for observing detail at the molecular level. STM can operate in a wide range of conditions, provides very high resolution, and is able to manipulate molecular motions with the tip. An interesting early example came from IBM in 1990, in which the STM was used to position individual atoms for the first time, spelling out "I-B-M" in xenon atoms. This work revealed that observation and control of single atoms and of molecular motions on surfaces were possible. The IBM work, and subsequent experiments, relied on the fact that the STM tip always exerts a finite force, containing both van der Waals and electrostatic contributions, on an adsorbate atom; this force can be utilized for manipulation. By adjusting the position and the voltage of the tip, the interactions between the tip and the target molecule can be changed. Therefore, applying or releasing force on a single atom, and thereby making it move, is possible. The actual positioning experiment was carried out as follows. The nickel metal substrate was prepared by cycles of argon-ion sputtering, followed by annealing in a partial pressure of oxygen to remove surface carbon and other impurities. After the cleaning process, the sample was cooled to 4 K and imaged with the STM to ensure the quality of the surface. The nickel sample was then doped with xenon. An image of the doped sample was taken under constant-current scanning conditions. Each xenon atom appears as a randomly located, 1.6 Å high bump on the surface a). 
Under the imaging conditions (tip bias = 0.010 V, tunneling current 10-9 A), the interaction of the xenon with the tip is too weak to perturb the position of the xenon atom. To move an atom, the STM tip was placed on top of the atom and the procedure depicted in was performed to move it to its target. Repeating this process again and again allowed the researchers to build the structure they desired b and c. All motions on surfaces at the single molecule level can be described by the following modes (or combinations thereof): sliding, hopping, rolling, and pivoting. Although the power of STM imaging has been demonstrated, imaging of molecules themselves is still often a difficult task. The successful imaging in the IBM work was attributed to the selection of a heavy atom; other synthetic organic molecules without heavy atoms are much more difficult to image under STM. Determining the mechanism of molecular motion is another challenge. Besides the imaging methods themselves, auxiliary methods such as DFT calculations and imaging of properly designed molecules are required to determine the mechanism by which a particular molecule moves across a surface. Herein, we are particularly interested in surface-rolling molecules, i.e., those that are designed to roll on a surface. It is straightforward to imagine that if we want to construct (and image) surface-rolling molecules, we must think of making highly symmetrical structures. In addition, the magnitude of the interactions between the molecules and the surfaces has to be adequate; otherwise the molecules will be more susceptible to sliding, hopping, or sticking on the surfaces, instead of rolling. As a result, only very few molecules are known to roll and be detected on surfaces. As described above, rolling motions are most likely to be observed for molecules having a high degree of symmetry and suitable interactions between themselves and the surface. 
C60 is not only a highly symmetrical molecule but is also readily imageable under STM due to its size. These properties together make C60 and its derivatives highly suitable for studying surface-rolling motion. The STM imaging of C60 was first carried out at King's College London. Similar to the atom positioning experiment by IBM, STM tip manipulation was also utilized to achieve C60 displacement. The tip trajectory suggested that a rolling motion accounted for the displacement of C60 on the surface. In order to confirm this hypothesis, the researchers also employed ab initio density functional theory (DFT) calculations with a rolling-model boundary condition ). The calculated results supported the experimental observations. The results provided insights into the dynamical response of covalently bound molecules to manipulation. The sequential breaking and reforming of highly directional covalent bonds resulted in a dynamical molecular response in which bond breaking, rotation, and translation are intimately coupled in a rolling motion, rather than a sliding or hopping motion. A triptycene-wheeled dimeric molecule was also synthesized for studying rolling motion under STM. Unlike the ball-like C60 molecule, this "tripod-like" triptycene wheel also demonstrated a rolling motion on the surface. The two triptycene units were connected via a dialkynyl axle, which provided both the desired orientation of the molecule on the surface and a directional preference for the rolling motion. STM control and imaging were demonstrated, including the mechanism . Another application of STM at the single molecule level is the imaging of the single molecule nanocar by the Tour group at Rice University. The concept of a nanocar initially employed the free rotation of a C-C single bond between a spherical C60 molecule and an alkyne, . Based on this concept, an “axle” can be designed onto which C60 “wheels” are mounted, connected with a “chassis” to construct the “nanocar”. 
Nanocars with this design are expected to have directional movement perpendicular to the axle. Unfortunately, the first generation nanocar (named the “nanotruck” ) encountered some difficulties in STM imaging due to its chemical instability and insolubility. Therefore, a new design of nanocar based on OPE was synthesized . The newly designed nanocar was studied with STM. When the nanocar was heated to ~200 °C, noticeable displacements of the nanocar were observed in selected images from a 10 min STM experiment . The observation that the nanocar moved only at high temperature was attributed to a relatively strong adhesion force between the fullerene wheels and the underlying gold. The series of images showed both pivotal and translational motions on the surfaces. Although literature studies suggested that the C60 molecule rolls on the surface, from the nanocar movement studies alone it is still not possible to conclude that the nanocar moves on the surface exclusively via a rolling mechanism. Hopping, sliding, and other modes of motion could also be responsible for the movement of the nanocar, since the experiment was carried out at high temperature, making the C60 molecules energetic enough to overcome their interactions with the surface. To tackle the question of the mode of translation, a trimeric “nano-tricycle” was synthesized. If the movement of the fullerene-wheeled nanocar were based on a hopping or sliding mechanism, the trimer should give observable translational motions like the four-wheeled nanocar; however, if rolling is the operative motion, then the nano-tricycle should rotate on an axis but not translate across the surface. The imaging experiment on the trimer at ~200 °C ) yielded very small and insignificant translational displacements in comparison to the 4-wheeled nanocar ). The trimeric 3-wheeled nanocar did show some pivoting motions in the images. 
This motion type can be attributed to the directional preferences of the wheels mounted on the trimer, causing the car to rotate. All the experimental results suggested that a C60-based nanocar moves via a rolling motion rather than hopping or sliding. In addition, the fact that the thermally driven nanocar moves only at high temperature also suggests that the four C60 wheels have very strong interactions with the surface.This page titled 6.3: Rolling Molecules on Surfaces Under STM Imaging is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.1: Crystal Structure
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/07%3A_Molecular_and_Solid_State_Structure/7.01%3A_Crystal_Structure
In any sort of discussion of crystalline materials, it is useful to begin with a discussion of crystallography: the study of the formation, structure, and properties of crystals. A crystal structure is defined as the particular repeating arrangement of atoms (molecules or ions) throughout a crystal. Structure refers to the internal arrangement of particles and not the external appearance of the crystal. However, these are not entirely independent, since the external appearance of a crystal is often related to the internal arrangement. For example, crystals of cubic rock salt (NaCl) are physically cubic in appearance. Only a few of the possible crystal structures are of concern with respect to simple inorganic salts, and these will be discussed in detail; however, it is first important to understand the nomenclature of crystallography. The Bravais lattice is the basic building block from which all crystals can be constructed. The concept originated as a topological problem of finding the number of different ways to arrange points in space where each point would have an identical “atmosphere”. That is, each point would be surrounded by an identical set of points as any other point, so that all points would be indistinguishable from each other. The mathematician Auguste Bravais discovered that there were 14 different collections of such groups of points, which are known as Bravais lattices. These lattices fall into seven different “crystal systems”, as differentiated by the relationship between the angles between sides of the “unit cell” and the distances between points in the unit cell. The unit cell is the smallest group of atoms, ions or molecules that, when repeated at regular intervals in three dimensions, will produce the lattice of a crystal system. The “lattice parameter” is the length between two points on the corners of a unit cell. Each of the various lattice parameters is designated by the letters a, b, and c. 
If two sides are equal, such as in a tetragonal lattice, then the lengths of the two lattice parameters are designated a and c, with b omitted. The angles are designated by the Greek letters α, β, and γ, such that an angle with a specific Greek letter is not subtended by the axis with its Roman equivalent. For example, α is the included angle between the b and c axes.Table \(\PageIndex{1}\) shows the various crystal systems, while shows the 14 Bravais lattices. It is important to distinguish the characteristics of each of the individual systems. An example of a material that takes on each of the Bravais lattices is shown in Table \(\PageIndex{2}\).The cubic lattice is the most symmetrical of the systems. All the angles are equal to 90°, and all the sides are of the same length (a = b = c). Only the length of one of the sides (a) is required to describe this system completely. In addition to simple cubic, the cubic lattice also includes body-centered cubic and face-centered cubic . Body-centered cubic results from the presence of an atom (or ion) in the center of a cube, in addition to the atoms (ions) positioned at the vertices of the cube. In a similar manner, a face-centered cubic requires, in addition to the atoms (ions) positioned at the vertices of the cube, the presence of atoms (ions) in the center of each of the cube's faces.The tetragonal lattice has all of its angles equal to 90°, and has two out of the three sides of equal length (a = b). The system also includes body-centered tetragonal .In an orthorhombic lattice all of the angles are equal to 90°, while all of its sides are of unequal length. The system needs only to be described by three lattice parameters. This system also includes body-centered orthorhombic, base-centered orthorhombic, and face-centered orthorhombic . 
A base-centered lattice has, in addition to the atoms (ions) positioned at the vertices of the orthorhombic lattice, atoms (ions) positioned on just two opposing faces.The rhombohedral lattice is also known as trigonal; it has no angles equal to 90°, but all sides are of equal length (a = b = c), thus requiring only one lattice parameter, and all three angles are equal (α = β = γ).A hexagonal crystal structure has two angles equal to 90°, with the other angle (γ) equal to 120°. For this to happen, the two sides surrounding the 120° angle must be equal (a = b), while the third side (c) is at 90° to the other sides and can be of any length.The monoclinic lattice has no sides of equal length, but two of the angles are equal to 90°, with the other angle (usually defined as β) being something other than 90°. It is a tilted parallelogram prism with rectangular bases. This system also includes base-centered monoclinic ). In the triclinic lattice none of the sides of the unit cell are equal, and none of the angles within the unit cell are equal to 90°. The triclinic lattice is chosen such that all the internal angles are either acute or obtuse. This crystal system has the lowest symmetry and must be described by 3 lattice parameters (a, b, and c) and the 3 angles (α, β, and γ).The structure of a crystal is defined with respect to a unit cell. As the entire crystal consists of repeating unit cells, this definition is sufficient to represent the entire crystal. Within the unit cell, the atomic arrangement is expressed using coordinates. There are two systems of coordinates commonly in use, which can cause some confusion. Both use a corner of the unit cell as their origin. The first, less-commonly seen system is that of Cartesian or orthogonal coordinates (X, Y, Z). These usually have the units of Angstroms and relate to the distance in each direction between the origin of the cell and the atom. 
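The lattice-parameter relationships described above can be summarized in a short Python sketch that classifies a unit cell into its crystal system from (a, b, c, α, β, γ). The function name and tolerance handling are illustrative conventions; the tests follow the definitions given in the text (e.g., monoclinic with only β ≠ 90°).

```python
def crystal_system(a, b, c, alpha, beta, gamma, tol=1e-6):
    """Classify a unit cell (lengths in any consistent unit, angles in degrees)."""
    eq = lambda x, y: abs(x - y) < tol
    if eq(alpha, 90) and eq(beta, 90) and eq(gamma, 90):
        if eq(a, b) and eq(b, c):
            return "cubic"            # a = b = c, all angles 90 deg
        if eq(a, b):
            return "tetragonal"       # a = b, c different
        return "orthorhombic"         # all sides unequal
    if eq(alpha, 90) and eq(beta, 90) and eq(gamma, 120) and eq(a, b):
        return "hexagonal"            # a = b, gamma = 120 deg
    if eq(a, b) and eq(b, c) and eq(alpha, beta) and eq(beta, gamma):
        return "rhombohedral"         # a = b = c, alpha = beta = gamma != 90 deg
    if eq(alpha, 90) and eq(gamma, 90):
        return "monoclinic"           # only beta differs from 90 deg
    return "triclinic"                # no required equalities

print(crystal_system(5.64, 5.64, 5.64, 90, 90, 90))   # cubic (e.g., rock salt)
print(crystal_system(3.25, 3.25, 5.21, 90, 90, 120))  # hexagonal
```

The order of the checks matters: the most constrained systems must be tested first, with triclinic as the fall-through case.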
These coordinates may be manipulated in the same fashion as those used with two- or three-dimensional graphs. It is very simple, therefore, to calculate inter-atomic distances and angles given the Cartesian coordinates of the atoms. Unfortunately, the repeating nature of a crystal cannot be expressed easily using such coordinates. For example, consider a cubic cell of dimension 3.52 Å. Suppose that this cell contains an atom that has the coordinates (1.5, 2.1, 2.4). That is, the atom is 1.5 Å away from the origin in the x direction (which coincides with the a cell axis), 2.1 Å in the y (which coincides with the b cell axis), and 2.4 Å in the z (which coincides with the c cell axis). There will be an equivalent atom in the next unit cell along the x-direction, which will have the coordinates (1.5 + 3.52, 2.1, 2.4) or (5.02, 2.1, 2.4). This was a rather simple calculation, as the cell has very high symmetry and so the cell axes, a, b and c, coincide with the Cartesian axes, X, Y and Z. However, consider lower symmetry cells such as triclinic or monoclinic, in which the cell axes are not mutually orthogonal. In such cases, expressing the repeating nature of the crystal is much more difficult to accomplish.Accordingly, atomic coordinates are usually expressed in terms of fractional coordinates, (x, y, z). This coordinate system is coincident with the cell axes (a, b, c) and relates to the position of the atom in terms of the fraction along each axis. Consider the atom in the cubic cell discussed above. The atom was 1.5 Å in the a direction away from the origin. As the a axis is 3.52 Å long, the atom is (1.5/3.52) or 0.43 of the axis away from the origin. Similarly, it is (2.1/3.52) or 0.60 of the b axis and (2.4/3.52) or 0.68 of the c axis. The fractional coordinates of this atom are, therefore, (0.43, 0.60, 0.68). The coordinates of the equivalent atom in the next cell over in the a direction, however, are easily calculated, as this atom is simply 1 unit cell away in a. 
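The worked example above (a 3.52 Å cubic cell with an atom at Cartesian (1.5, 2.1, 2.4) Å) can be reproduced with a few lines of Python. This sketch handles only orthogonal cells, where the conversion is a simple division by each cell length; a non-orthogonal (triclinic or monoclinic) cell would require the full inverse cell matrix.

```python
def cartesian_to_fractional(xyz, cell):
    """Convert Cartesian coordinates (Angstroms) to fractional coordinates
    for an ORTHOGONAL cell with edge lengths cell = (a, b, c)."""
    return tuple(q / L for q, L in zip(xyz, cell))

cell = (3.52, 3.52, 3.52)  # cubic cell from the example in the text
frac = cartesian_to_fractional((1.5, 2.1, 2.4), cell)
print(tuple(round(f, 2) for f in frac))  # (0.43, 0.6, 0.68)

# The equivalent atom one cell over in the a direction: just add 1 to x
neighbor = (frac[0] + 1, frac[1], frac[2])
print(round(neighbor[0], 2))             # 1.43
```

This illustrates why fractional coordinates are convenient: translating to a neighboring cell is an integer shift regardless of cell shape.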
Thus, all one has to do is add 1 to the x coordinate: (1.43, 0.60, 0.68). Such transformations can be performed regardless of the shape of the unit cell. Fractional coordinates, therefore, are used to retain and manipulate crystal information.The designation of the individual vectors within any given crystal lattice is accomplished by the use of whole number multipliers of the lattice parameter of the point at which the vector exits the unit cell. The vector is indicated by the notation [hkl], where h, k, and l are reciprocals of the point at which the vector exits the unit cell. The origin of all vectors is defined as [000]. For example, the direction along the a-axis according to this scheme would be [100], because this has a component only in the a-direction and no component along either the b or c axial direction. A vector diagonally along the face defined by the a and b axes would be [110], while going from one corner of the unit cell to the opposite corner would be in the [111] direction. shows some examples of the various directions in the unit cell. The crystal direction notation is made up of the lowest combination of integers and represents unit distances rather than actual distances. A [222] direction is identical to a [111], so [111] is used. Fractions are not used. For example, a vector that intercepts the center of the top face of the unit cell has the coordinates x = 1/2, y = 1/2, z = 1. All have to be converted to the lowest combination of integers (whole numbers); i.e., [112] in . Finally, all parallel vectors have the same crystal direction, e.g., the four vertical edges of the cell shown in all have the crystal direction [hkl] = [001].Crystal directions may be grouped in families. To avoid confusion there exists a convention in the choice of brackets surrounding the three numbers to differentiate a crystal direction from a family of directions. For a direction, square brackets [hkl] are used to indicate an individual direction. 
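The reduction of a lattice vector to the lowest combination of integers can be sketched in Python using exact fractions; the example below reproduces the case discussed in the text, where the vector to the center of the top face, (1/2, 1/2, 1), becomes the [112] direction (the function name is an illustrative choice).

```python
from fractions import Fraction
from math import gcd, lcm

def crystal_direction(x, y, z):
    """Reduce a lattice vector to the lowest whole-number direction [hkl]."""
    fr = [Fraction(v).limit_denominator(1000) for v in (x, y, z)]
    m = lcm(*(f.denominator for f in fr))   # clear any fractions
    ints = [int(f * m) for f in fr]
    g = gcd(*(abs(i) for i in ints)) or 1   # reduce to lowest terms
    return tuple(i // g for i in ints)

print(crystal_direction(0.5, 0.5, 1))  # (1, 1, 2): the [112] direction
print(crystal_direction(2, 2, 2))      # (1, 1, 1): [222] reduces to [111]
```

Note that parallel vectors of any length reduce to the same direction, consistent with the convention described above.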
Angle brackets <hkl> indicate a family of directions. A family of directions includes any directions that are equivalent in length and types of atoms encountered. For example, in a cubic lattice, the [100], [010], and [001] directions all belong to the <100> family because they are equivalent. If the cubic lattice were rotated 90°, the a, b, and c directions would remain indistinguishable, and there would be no way of telling on which crystallographic positions the atoms are situated, so the family of directions is the same. In a hexagonal crystal, however, this is not the case, so the [100] and [010] directions would both be <100> directions, but the [001] direction would be distinct. Finally, negative directions are identified with a bar over the negative number instead of a minus sign.

Planes in a crystal can be specified using a notation called Miller indices. The Miller index is indicated by the notation (hkl), where h, k, and l are the reciprocals of the plane's intercepts with the x, y, and z axes. To obtain the Miller indices of a given plane requires the following steps:

1. Determine the intercepts of the plane with the three cell axes, in units of the lattice parameters.
2. Take the reciprocal of each intercept; a plane parallel to an axis intercepts it at infinity, giving a reciprocal of zero.
3. Clear any fractions to obtain the smallest set of whole numbers.

For example, the face of a lattice that does not intersect the y or z axis would be the (100) plane, while a plane along the body diagonal would be the (111) plane. As with crystal directions, Miller indices may be grouped in families. Individual Miller indices are given in parentheses (hkl), while braces {hkl} are placed around the indices of a family of planes. For example, (100), (010), and (001) are all in the {100} family of planes for a cubic lattice.

Crystal structures may be described in a number of ways. The most common manner is to refer to the size and shape of the unit cell and the positions of the atoms (or ions) within the cell. However, this information is sometimes insufficient to allow for an understanding of the true structure in three dimensions.
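The intercept-reciprocal procedure for Miller indices can be sketched in a few lines of Python (a hedged illustration; `miller_indices` is our own name, not a standard library function):

```python
from fractions import Fraction
from functools import reduce
from math import gcd, lcm, inf

def miller_indices(intercepts):
    """Reciprocals of the axis intercepts, cleared to smallest whole numbers.
    Use math.inf for an axis the plane never intersects (reciprocal -> 0)."""
    recips = [Fraction(0) if x == inf else 1 / Fraction(x) for x in intercepts]
    m = reduce(lcm, (r.denominator for r in recips))        # clear fractions
    ints = [int(r * m) for r in recips]
    g = reduce(gcd, (abs(n) for n in ints)) or 1            # reduce to lowest terms
    return tuple(n // g for n in ints)

print(miller_indices((1, inf, inf)))                  # (1, 0, 0)
print(miller_indices((1, 1, 1)))                      # (1, 1, 1)
print(miller_indices((Fraction(1, 2), 1, inf)))       # (2, 1, 0)
```

The first call reproduces the (100) face that does not intersect the y or z axis; the last shows fractions being cleared to whole numbers.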
Consideration of several unit cells, the arrangement of the atoms with respect to each other, the number of other atoms they are in contact with, and the distances to neighboring atoms often will provide a better understanding. A number of methods are available to describe extended solid-state structures. The most applicable with regard to elemental and compound semiconductors, metals, and the majority of insulators is the close packing approach.

Many crystal structures can be described using the concept of close packing. This concept requires that the atoms (ions) are arranged so as to have the maximum density. In order to understand close packing in three dimensions, the most efficient way for equal sized spheres to be packed in two dimensions must be considered first. In the most efficient two-dimensional packing, each sphere is surrounded by, and is in contact with, six other spheres. Contact with six other spheres is the maximum possible if the spheres are the same size, although lower density packing is possible. Close packed layers are formed by repetition to an infinite sheet; within these close packed layers, three close packed rows are present. The most efficient way for equal sized spheres to be packed in three dimensions is to stack close packed layers on top of each other to give a close packed structure. There are two simple ways in which this can be done, resulting in either a hexagonal or cubic close packed structure.

If two close packed layers A and B are placed in contact with each other so as to maximize the density, then the spheres of layer B will rest in the hollows (vacancies) between three of the spheres in layer A.
Atoms in the second layer, B, may occupy one of two possible sets of positions, but not both together or a mixture of each. If a third layer is placed on top of layer B such that it exactly covers layer A, subsequent placement of layers will result in the following sequence ...ABABAB.... This is known as hexagonal close packing or hcp.

The hexagonal close packed cell is a derivative of the hexagonal Bravais lattice system with the addition of an atom inside the unit cell at the coordinates (1/3,2/3,1/2). The basal plane of the unit cell coincides with the close packed layers; in other words, the close packed layers make up the {001} family of crystal planes.

The "packing fraction" in a hexagonal close packed cell is 74.05%; that is, 74.05% of the total volume is occupied. The packing fraction or density is derived by assuming that each atom is a hard sphere in contact with its nearest neighbors. Determination of the packing fraction is accomplished by calculating the number of whole spheres per unit cell (2 in hcp), the volume occupied by these spheres, and a comparison with the total volume of a unit cell. The number gives an idea of how "open" or filled a structure is. By comparison, the packing fraction for the body-centered cubic (bcc) cell is 68%, and for the diamond cubic cell (an important semiconductor structure to be described later) it is 34%.

In a similar manner to the generation of the hexagonal close packed structure, two close packed layers are stacked; however, the third layer (C) is placed such that it does not exactly cover layer A, instead sitting in a second set of hollows in layer B. Upon repetition the packing sequence will be ...ABCABCABC.... This is known as cubic close packing or ccp.

The unit cell of the cubic close packed structure is actually that of a face-centered cubic (fcc) Bravais lattice. In the fcc lattice the close packed layers constitute the {111} planes. As with the hcp lattice, the packing fraction in a cubic close packed (fcc) cell is 74.05%.
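The packing fractions quoted above (74.05% for fcc/hcp, 68% for bcc, 34% for diamond cubic) follow directly from hard-sphere geometry, as this short sketch verifies (the touching conditions assumed for each lattice are noted in the comments):

```python
from math import pi, sqrt

def packing_fraction(n_spheres, radius, a=1.0):
    """Fraction of the cell volume occupied by n hard spheres of given radius."""
    return n_spheres * (4 / 3) * pi * radius**3 / a**3

# Touching conditions (cell edge a = 1): fcc spheres touch along the face
# diagonal (4r = a*sqrt(2)); bcc along the body diagonal (4r = a*sqrt(3));
# diamond cubic at the nearest-neighbour distance a*sqrt(3)/4 (r = a*sqrt(3)/8).
fcc     = packing_fraction(4, sqrt(2) / 4)   # also the hcp value
bcc     = packing_fraction(2, sqrt(3) / 4)
diamond = packing_fraction(8, sqrt(3) / 8)

print(f"{fcc:.2%} {bcc:.2%} {diamond:.2%}")   # 74.05% 68.02% 34.01%
```

Note that hcp gives the same 74.05% as fcc, since both are close packed arrangements of identical spheres.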
Since face centered cubic or fcc is more commonly used than cubic close packed (ccp) in describing the structures, the former will be used throughout this text.

The coordination number of an atom or ion within an extended structure is defined as the number of nearest neighbor atoms (ions of opposite charge) that are in contact with it. A slightly different definition is often used for atoms within individual molecules: the number of donor atoms associated with the central atom or ion. However, this distinction is rather artificial, and both can be employed.

The coordination numbers for metal atoms in a molecule or complex are commonly 4, 5, and 6, but all values from 2 to 9 are known and a few examples of higher coordination numbers have been reported. In contrast, common coordination numbers in the solid state are 3, 4, 6, 8, and 12. For example, the atom in the center of a body-centered cubic lattice has a coordination number of 8, because it touches the eight atoms at the corners of the unit cell, while an atom in a simple cubic structure would have a coordination number of 6. In both fcc and hcp lattices each of the atoms has a coordination number of 12.

As was mentioned above, the packing fraction in both fcc and hcp cells is 74.05%, leaving 25.95% of the volume unfilled. The unfilled lattice sites (interstices) between the atoms in a cell are called interstitial sites or vacancies. The shape and relative size of these sites is important in controlling the position of additional atoms. In both fcc and hcp cells most of the unfilled space lies within two different types of site, known as octahedral sites and tetrahedral sites. The difference between the two lies in their "coordination number", or the number of atoms surrounding each site. Tetrahedral sites (vacancies) are surrounded by four atoms arranged at the corners of a tetrahedron. Similarly, octahedral sites are surrounded by six atoms which make up the apices of an octahedron.
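The solid-state coordination numbers quoted above (6 for simple cubic, 8 for bcc, 12 for fcc) can be verified by brute-force neighbor counting. A minimal sketch, with each lattice described by its conventional basis points in a unit cell of edge 1:

```python
from itertools import product

def neighbours(basis):
    """Count lattice points at the shortest non-zero distance from the origin,
    building a 5x5x5 block of unit cells (edge a = 1) around it."""
    pts = {(i + dx, j + dy, k + dz)
           for i, j, k in product(range(-2, 3), repeat=3)
           for dx, dy, dz in basis}
    d2 = sorted({x*x + y*y + z*z for x, y, z in pts if (x, y, z) != (0.0, 0.0, 0.0)})
    shortest = d2[0]
    return sum(1 for x, y, z in pts
               if (x, y, z) != (0.0, 0.0, 0.0)
               and abs(x*x + y*y + z*z - shortest) < 1e-9)

sc  = [(0.0, 0.0, 0.0)]
bcc = sc + [(0.5, 0.5, 0.5)]
fcc = sc + [(0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]
print(neighbours(sc), neighbours(bcc), neighbours(fcc))   # 6 8 12
```

The hcp lattice gives the same result of 12 as fcc, since both are close packed.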
For a given close packed lattice an octahedral vacancy will be larger than a tetrahedral vacancy.

Within a face centered cubic lattice, the eight tetrahedral sites are positioned within the cell, at the general fractional coordinate of (n/4,n/4,n/4) where n = 1 or 3, e.g., (1/4,1/4,1/4), (1/4,1/4,3/4), etc. The octahedral sites are located at the center of the unit cell (1/2,1/2,1/2), as well as at each of the edges of the cell, e.g., (1/2,0,0). In the hexagonal close packed system, the tetrahedral sites are at (0,0,3/8) and (1/3,2/3,7/8), and the octahedral sites are at (1/3,1/3,1/4) and all symmetry equivalent positions.

The majority of crystalline materials do not have a structure that fits into the one atom per site simple Bravais lattice. A number of other important crystal structures are found; however, only a few of these occur for the elemental and compound semiconductors, and the majority of those are derived from fcc or hcp lattices. Each structural type is generally defined by an archetype, a material (often a naturally occurring mineral) which has the structure in question and to which all similar materials are related. With regard to commonly used elemental and compound semiconductors the important structures are diamond, zinc blende, wurtzite, and to a lesser extent chalcopyrite. However, rock salt, β-tin, cinnabar and cesium chloride are observed as high pressure or high temperature phases and are therefore also discussed. The following provides a summary of these structures. Details of the full range of solid-state structures are given elsewhere.

The diamond cubic structure consists of two interpenetrating face-centered cubic lattices, with one offset 1/4 of a cube along the cube diagonal. It may also be described as a face centered cubic lattice in which half of the tetrahedral sites are filled while all the octahedral sites remain vacant.
Each of the atoms (e.g., C) is four coordinate, and the shortest interatomic distance (C-C) may be determined from the unit cell parameter (a).

\[ C-C\ =\ a \frac{\sqrt{3} }{4} \approx \ 0.433\ a \label{1} \]

Zinc blende is a binary phase (ME) named after its archetype, a common mineral form of zinc sulfide (ZnS). As with the diamond lattice, zinc blende consists of two interpenetrating fcc lattices. However, in zinc blende one lattice consists of one type of atom (Zn in ZnS), and the other lattice is of the second type of atom (S in ZnS). It may also be described as a face centered cubic lattice of S atoms in which half of the tetrahedral sites are filled with Zn atoms. All the atoms in a zinc blende structure are 4-coordinate. A number of inter-atomic distances may be calculated for any material with a zinc blende unit cell using the lattice parameter (a).

\[ Zn-S\ =\ a \frac{\sqrt{3} }{4} \approx \ 0.433\ a \label{2} \]

\[ Zn-Zn \ =\ S-S\ = \frac{a}{\sqrt{2}} \approx 0.707\ a \label{3} \]

The mineral chalcopyrite CuFeS2 is the archetype of the chalcopyrite structure. The structure is tetragonal (a = b ≠ c, α = β = γ = 90°), and is essentially a superlattice of the zinc blende structure. Thus, it is easiest to imagine that the chalcopyrite lattice is made up of a lattice of sulfur atoms in which the tetrahedral sites are filled in layers, ...FeCuCuFe..., etc. In such an idealized structure c = 2a; however, this is not true of all materials with chalcopyrite structures.

As its name implies the archetypal rock salt structure is NaCl (table salt). In common with the zinc blende structure, rock salt consists of two interpenetrating face-centered cubic lattices. However, the second lattice is offset 1/2a along the unit cell axis.
It may also be described as a face centered cubic lattice in which all of the octahedral sites are filled while all the tetrahedral sites remain vacant, and thus each of the atoms in the rock salt structure is 6-coordinate. A number of inter-atomic distances may be calculated for any material with a rock salt structure using the lattice parameter (a).

\[ Na-Cl\ =\ \frac{a}{2}\ =\ 0.5\ a \label{4} \]

\[ Na-Na \ =\ Cl-Cl \ =\ \frac{a}{\sqrt{2}} \approx 0.707\ a \label{5} \]

Cinnabar, named after the archetype mercury sulfide, HgS, is a distorted rock salt structure in which the resulting cell is rhombohedral (trigonal), with each atom having a coordination number of six.

Wurtzite is a hexagonal form of zinc sulfide. It is identical to zinc blende in the number and types of atoms, but it is built from two interpenetrating hcp lattices, as opposed to the fcc lattices in zinc blende. As with zinc blende, all the atoms in a wurtzite structure are 4-coordinate. A number of inter-atomic distances may be calculated for any material with a wurtzite cell using the lattice parameter (a).

\[ Zn-S\ =\ a \sqrt{3/8} \ =\ 0.612\ a\ = \frac{3 c}{8} \ =\ 0.375\ c \label{6} \]

\[ Zn-Zn \ =\ S-S\ =\ a\ =\ \frac{c}{1.632}\ \approx \ 0.613\ c \label{7} \]

However, it should be noted that these formulae do not necessarily apply when the ratio c/a is different from the ideal value of 1.632.

The cesium chloride structure is found in materials with large cations and relatively small anions. It has a simple (primitive) cubic cell with a chloride ion at the corners of the cube and the cesium ion at the body center. The coordination numbers of both Cs+ and Cl- are thus 8, with the inter-atomic distances determined from the cell lattice constant (a).

\[ Cs-Cl\ =\ \frac{a \sqrt{3} }{2} \approx 0.866\ a \label{8} \]

\[ Cs-Cs \ =\ Cl-Cl\ = a \label{9} \]

The room temperature allotrope of tin is β-tin or white tin.
It has a tetragonal structure, in which each tin atom has four nearest neighbors (Sn-Sn = 3.016 Å) arranged in a very flattened tetrahedron, and two next nearest neighbors (Sn-Sn = 3.175 Å). The overall structure of β-tin consists of fused hexagons, each being linked to its neighbor via a four-membered Sn4 ring.

Up to this point we have only been concerned with ideal structures for crystalline solids, in which each atom occupies a designated point in the crystal lattice. Unfortunately, defects ordinarily exist in equilibrium between the crystal lattice and its environment. These defects are of two general types: point defects and extended defects. As their names imply, point defects are associated with a single crystal lattice site, while extended defects occur over a greater range.

Point defects have a significant effect on the properties of a semiconductor, so it is important to understand the classes of point defects and the characteristics of each type. Native point defects may be divided into two general classes: defects with the wrong number of atoms (deficiency or surplus), and defects where the identity of the atoms is incorrect.

An interstitial impurity occurs when an extra atom is positioned in a lattice site that should be vacant in an ideal structure. Since all the adjacent lattice sites are filled, the additional atom will have to squeeze itself into the interstitial site, resulting in distortion of the lattice and alteration in the local electronic behavior of the structure. Small atoms, such as carbon, will prefer to occupy these interstitial sites. Interstitial impurities readily diffuse through the lattice via interstitial diffusion, which can result in a change of the properties of a material as a function of time. Oxygen impurities in silicon generally are located as interstitials. The converse of an interstitial impurity is when there are not enough atoms in a particular area of the lattice.
These are called vacancies. Vacancies exist in any material above absolute zero and increase in concentration with temperature. In the case of compound semiconductors, vacancies can be either cation vacancies or anion vacancies, depending on what type of atom is "missing".

Substitution of various atoms into the normal lattice structure is common, and is used to change the electronic properties of both compound and elemental semiconductors. Any impurity element that is incorporated during crystal growth can occupy a lattice site. Depending on the impurity, substitution defects can greatly distort the lattice and/or alter the electronic structure. In general, cations will try to occupy cation lattice sites, and anions will occupy anion sites. For example, a zinc impurity in GaAs will occupy a gallium site, if possible, while sulfur, selenium, and tellurium atoms would all try to substitute for arsenic. Some impurities will occupy either site indiscriminately, e.g., Si and Sn occupy both Ga and As sites in GaAs.

Antisite defects are a particular form of substitution defect, and are unique to compound semiconductors. An antisite defect occurs when a cation is misplaced on an anion lattice site or vice versa. Depending on the arrangement these are designated as either AB antisite defects or BA antisite defects. For example, if an arsenic atom is on a gallium lattice site the defect would be an AsGa defect. Antisite defects involve fitting atoms of a different size than the rest of the lattice into a lattice site, and therefore this often results in a localized distortion of the lattice. In addition, cations and anions will have a different number of electrons in their valence shells, so this substitution will alter the local electron concentration and the electronic properties of this area of the semiconductor.

Extended defects may be created either during crystal growth or as a consequence of stress in the crystal lattice.
The plastic deformation of crystalline solids does not occur such that all bonds along a plane are broken and reformed simultaneously. Instead, the deformation occurs through a dislocation in the crystal lattice. Dislocations are particularly important in epitaxy, the growth of a crystalline film on a crystalline substrate whose structure and lattice parameters closely match those of the film. For example, aluminum arsenide (zinc blende, a = 5.660 Å) is readily grown on gallium arsenide (zinc blende, a = 5.653 Å). Alternatively, epitaxial crystal growth can occur where there exists a simple relationship between the structures of the substrate and crystal layer, such as is observed between Al2O3 on Si. Whichever route is chosen, a close match in the lattice parameters is required; otherwise, the strains induced by the lattice mismatch result in distortion of the film and formation of dislocations. If the mismatch is significant, epitaxial growth is not energetically favorable, causing a textured or polycrystalline untextured film to be grown. As a general rule of thumb, epitaxy can be achieved if the lattice parameters of the two materials are within about 5% of each other. For good quality epitaxy, this should be less than 1%. The larger the mismatch, the larger the strain in the film. As the film gets thicker, it will try to relieve the strain, which could include the loss of epitaxy or the growth of dislocations. In general the <100> directions of a film are parallel to the <100> directions of the substrate, although in some cases, such as Fe on MgO, a different film direction is parallel to the substrate <100>. The epitaxial relationship is specified by giving first the plane in the film that is parallel to the substrate.

This page titled 7.1: Crystal Structure is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.2: Structures of Element and Compound Semiconductors
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/07%3A_Molecular_and_Solid_State_Structure/7.02%3A_Structures_of_Element_and_Compound_Semiconductors
A single crystal of either an elemental (e.g., silicon) or compound (e.g., gallium arsenide) semiconductor forms the basis of almost all semiconductor devices. The ability to control the electronic and opto-electronic properties of these materials is based on an understanding of their structure. In addition, the metals and many of the insulators employed within a microelectronic device are also crystalline.

Each of the semiconducting phases of the group IV elements, C (diamond), Si, Ge, and α-Sn, adopts the diamond cubic structure. Their lattice constants (a, Å) and densities (ρ, g/cm3) are given in Table \(\PageIndex{1}\). As would be expected, the lattice parameters increase in the order C < Si < Ge < α-Sn. Silicon and germanium form a continuous series of solid solutions with gradually varying parameters. It is worth noting the high degree of accuracy with which the lattice parameters are known for high purity crystals of these elements. In addition, it is important to note the temperature at which structural measurements are made, since the lattice parameters are temperature dependent. The lattice constant (a), in Å, for high purity silicon may be calculated for any temperature (T) over the temperature range 293 - 1073 K by the formula shown below.

\[ a_{T}\ =\ 5.4304\ +\ 1.8138 \times 10^{-5}\ (T- 298.15\ K)\ +\ 1.542 \times 10^{-9}\ (T-298.15\ K)^{2} \label{1} \]

Even though the diamond cubic forms of Si and Ge are the only forms of direct interest to semiconductor devices, each exists in numerous crystalline high pressure and meta-stable forms. These are described, along with their interconversions, in Table \(\PageIndex{2}\).

The stable phases for the arsenides, phosphides and antimonides of aluminum, gallium and indium all exhibit zinc blende structures. In contrast, the nitrides are found as wurtzite structures. The structure, lattice parameters, and densities of the III-V compounds are given in Table \(\PageIndex{3}\).
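A sketch evaluating the silicon expansion formula above (assuming, as is standard for such fits, that the final term is quadratic in T − 298.15 K; the function name is our own):

```python
def si_lattice_parameter(T):
    """Lattice constant of high-purity Si in Å, valid for 293 K <= T <= 1073 K.
    Quadratic fit about the 298.15 K reference temperature."""
    dT = T - 298.15
    return 5.4304 + 1.8138e-5 * dT + 1.542e-9 * dT**2

print(round(si_lattice_parameter(298.15), 4))   # 5.4304 at the reference temperature
print(round(si_lattice_parameter(1000.0), 4))   # 5.4439 – thermal expansion is small
```

Even at 1000 K the expansion is only about 0.25% of the room-temperature value, which is why precise structural work must record the measurement temperature.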
It is worth noting that, contrary to expectation, the lattice parameter of the gallium compounds is smaller than that of their aluminum homologs; for GaAs a = 5.653 Å, while for AlAs a = 5.660 Å. As with the group IV elements the lattice parameters are highly temperature dependent; however, additional variation arises from any deviation from absolute stoichiometry.

The similarity of these structures allows a wide range of solid solutions to be formed between III-V compounds in almost any combination. Two classes of ternary alloys are formed: IIIxIII1-xV (e.g., AlxGa1-xAs) and IIIV1-xVx (e.g., GaAs1-xPx). Quaternary alloys of the type IIIxIII1-xVyV1-y allow for the growth of materials with similar lattice parameters but a broad range of band gaps. A very important ternary alloy, especially in optoelectronic applications, is AlxGa1-xAs, whose lattice parameter (a) is directly related to the composition (x).

\[ a\ =\ 5.6533\ +\ 0.0078\ x \nonumber \]

Not all of the III-V compounds have well characterized high-pressure phases; however, in each case where a high-pressure phase is observed the coordination number of both the group III and group V element increases from four to six. Thus, AlP undergoes a zinc blende to rock salt transformation above 170 kbar, while AlSb and GaAs form orthorhombic distorted rock salt structures above 77 and 172 kbar, respectively. An orthorhombic structure is proposed for the high-pressure form of InP (>133 kbar). Indium arsenide (InAs) undergoes two phase transformations: the zinc blende structure is converted to a rock salt structure above 77 kbar, which in turn forms a β-tin structure above 170 kbar.

The structures of the II-VI compound semiconductors are less predictable than those of the III-V compounds (above), and while a zinc blende structure exists for almost all of the compounds there is a stronger tendency towards the hexagonal wurtzite form.
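Using only values quoted in this chapter (a = 5.653 Å for GaAs, 5.660 Å for AlAs, and the zinc blende nearest-neighbour relation a√3/4), one can check both the Ga-As bond length and the GaAs/AlAs lattice mismatch that makes their epitaxy so favorable:

```python
from math import sqrt

a_GaAs, a_AlAs = 5.653, 5.660          # zinc blende lattice parameters, Å (quoted above)

bond_GaAs = a_GaAs * sqrt(3) / 4       # zinc blende nearest-neighbour distance
mismatch = abs(a_AlAs - a_GaAs) / a_GaAs

print(round(bond_GaAs, 3))             # 2.448 Å Ga-As bond length
print(f"{mismatch:.3%}")               # 0.124% – well below the ~1% target for good epitaxy
```

The sub-0.2% mismatch is why AlxGa1-xAs layers of any composition can be grown on GaAs substrates essentially strain-free.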
In several cases the zinc blende structure is observed under ambient conditions, but may be converted to the wurtzite form upon heating. In general, the wurtzite form predominates with the smaller anions (e.g., oxides), while zinc blende becomes the more stable phase for the larger anions (e.g., tellurides). One exception is mercury sulfide (HgS), which is the archetype for the trigonal cinnabar phase.

Table \(\PageIndex{5}\) lists the stable phase of the chalcogenides of zinc, cadmium and mercury, along with their high temperature phases where applicable. Solid solutions of the II-VI compounds are not as easily formed as for the III-V compounds; however, two important examples are ZnSxSe1-x and CdxHg1-xTe. The zinc chalcogenides all transform to a cesium chloride structure under high pressures, while the cadmium compounds all form rock salt high-pressure phases. Mercury selenide (HgSe) and mercury telluride (HgTe) convert to the mercury sulfide archetype structure, cinnabar, at high pressure.

Nearly all I-III-VI2 compounds at room temperature adopt the chalcopyrite structure. The cell constants and densities are given in Table \(\PageIndex{6}\). Although there are few reports of high temperature or high-pressure phases, AgInS2 has been shown to exist as a high temperature orthorhombic polymorph (a = 6.954, b = 8.264, and c = 6.683 Å), and AgInTe2 forms a cubic phase at high pressures.

Of the I-III-VI2 compounds, the copper indium chalcogenides (CuInE2) are certainly the most studied for their application in solar cells. One of the advantages of the copper indium chalcogenide compounds is the formation of solid solutions (alloys) of the formula CuInE2-xE'x, where the composition variable (x) varies from 0 to 2. The CuInS2-xSex and CuInSe2-xTex systems have also been examined, as has the CuGayIn1-yS2-xSex quaternary system.
As would be expected from a consideration of the relative ionic radii of the chalcogenides, the lattice parameters of the CuInS2-xSex alloy should increase with increased selenium content. Vegard's law requires the lattice constant of a solid solution of two semiconductors to vary linearly with composition (e.g., as is observed for AlxGa1-xAs); however, the variation of the tetragonal lattice constants (a and c) with composition for CuInS2-xSex is best described by the parabolic relationships.

\[ a\ =\ 5.532\ +\ 0.0801x\ +\ 0.026 x^{2} \nonumber \]

\[ c\ =\ 11.156\ +\ 0.1204x\ +\ 0.0611 x^{2} \nonumber \]

A similar relationship is observed for the CuInSe2-xTex alloys.

\[ a\ =\ 5.783\ +\ 0.1560 x\ +\ 0.0212x^{2} \nonumber \]

\[ c\ =\ 11.628\ +\ 0.3340x\ +\ 0.0277x^{2} \nonumber \]

The large difference in ionic radii between S and Te (0.37 Å) prevents formation of solid solutions in the CuInS2-xTex system; however, the single alloy CuInS1.5Te0.5 has been reported.

Once single crystals of high purity silicon or gallium arsenide are produced, they are cut into wafers such that the exposed face of these wafers is either the crystallographic {100} or {111} plane. The relative structures of these surfaces are important with respect to oxidation, etching and thin film growth. These processes are orientation-sensitive; that is, they depend on the direction in which the crystal slice is cut.

The principal planes in a crystal may be differentiated in a number of ways; however, the atom and/or bond densities are useful in predicting much of the chemistry of semiconductor surfaces. Since both silicon and gallium arsenide are fcc structures, and the {100} and {111} are the only technologically relevant surfaces, discussion will be limited to the fcc {100} and {111} surfaces.

The atom density of a surface may be defined as the number of atoms per unit area. For the fcc {111} plane this is most easily calculated by overlaying a hexagon on the close packed surface, centered on one atom and with an atom at each of the six vertices.
Given that the intra-planar inter-atomic distance may be defined as a function of the lattice parameter, the area of this hexagon may be readily calculated. For example, in the case of silicon the hexagon has an area of 38.30 Å2. The number of atoms within the hexagon is three: the atom in the center plus 1/3 of each of the six atoms at the vertices of the hexagon (each of the atoms at the hexagon's vertices is shared by three other adjacent hexagons). Thus, the atom density of the {111} plane is calculated to be 0.0783 Å-2. Similarly, the atom density of the {100} plane may be calculated. The {100} plane consists of a square array in which the crystal directions within the plane are oriented at 90° to each other. Since the square is coincident with one of the faces of the unit cell, the area of the square may be readily calculated. For example, in the case of silicon the square has an area of 29.49 Å2. The number of atoms within the square is 2: the atom in the center plus 1/4 of each of the four atoms at the vertices of the square (each of the atoms at the corners of the square is shared by four other adjacent squares). Thus, the atom density of the {100} plane is calculated to be 0.0678 Å-2. While these values for the atom density are specific for silicon, their ratio is constant for all diamond cubic and zinc blende structures: {100}:{111} = 1:1.155. In general, the fewer the dangling bonds, the more stable a surface structure.

An atom inside a crystal of any material will have a coordination number (n) determined by the structure of the material. For example, all atoms within the bulk of a silicon crystal will be in a tetrahedral four-coordinate environment (n = 4). However, at the surface of a crystal the atoms will not make their full complement of bonds. Each atom will therefore have fewer nearest neighbors than an atom within the bulk of the material. The missing bonds are commonly called dangling bonds.
While this description is not particularly accurate it is, however, widely employed and as such will be used herein. The number of dangling bonds may be defined as the difference between the ideal coordination number (determined by the bulk crystal structure) and the actual coordination number observed at the surface. Consider a section of the {111} surface of a diamond cubic lattice viewed perpendicular to the {111} plane. The atoms within the bulk have a coordination number of four. In contrast, the atoms at the surface are each bonded to just three other atoms, and thus each surface atom has one dangling bond. At the {100} surface, viewed perpendicular to the {100} plane, each surface atom is coordinated to only two other atoms, leaving two dangling bonds per atom. It should be noted that the same numbers of dangling bonds are found for the {111} and {100} planes of a zinc blende lattice. The ratio of dangling bonds for the {100} and {111} planes of all diamond cubic and zinc blende structures is {100}:{111} = 2:1. Furthermore, since the atom densities of each plane are known, the ratio of the dangling bond densities is determined to be: {100}:{111} = 1:0.577.

For silicon, the {111} planes are more closely packed than the {100} planes. As a result, growth of a silicon crystal is slowest in the <111> direction, since it requires laying down a close packed atomic layer upon another layer in its closest packed form. As a consequence <111> Si is the easiest to grow, and is therefore the least expensive.
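The surface densities quoted above can be reproduced from the lattice parameter alone. A minimal sketch (taking a = 5.431 Å for silicon, an assumed standard value consistent with the 29.49 Å² and 38.30 Å² areas in the text):

```python
from math import sqrt

a = 5.431                           # Si lattice parameter, Å (assumed value)
rho_100 = 2 / a**2                  # 2 atoms per cell face of area a^2
rho_111 = 4 / (sqrt(3) * a**2)      # close packed {111} plane of an fcc lattice

print(round(rho_100, 4))            # 0.0678 atoms/Å^2
print(round(rho_111, 4))            # 0.0783 atoms/Å^2
print(round(rho_111 / rho_100, 3))  # 1.155 – the {100}:{111} = 1:1.155 atom density ratio

# Dangling bonds: 2 per {100} surface atom, 1 per {111} surface atom
print(round(rho_111 / (2 * rho_100), 3))  # 0.577 – the {100}:{111} = 1:0.577 ratio
```

Because both ratios depend only on geometry, the same 1:1.155 and 1:0.577 values hold for any diamond cubic or zinc blende material, not just silicon.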
As a consequence of having only one dangling bond (requiring three bonds to be broken in order to remove an atom), etching silicon is slowest in the <111> direction. The electronic properties of a silicon wafer are also related to the number of dangling bonds.

Silicon microcircuits are generally formed on a single crystal wafer that is diced after fabrication by either sawing part way through the wafer thickness or scoring (scribing) the surface, and then physically breaking. The physical breakage of the wafer occurs along the natural cleavage planes, which in the case of silicon are the {111} planes.

The zinc blende lattice observed for gallium arsenide results in additional considerations over those of silicon. Although the {100} plane of GaAs is structurally similar to that of silicon, two possibilities exist: a face consisting of either all gallium atoms or all arsenic atoms. In either case the surface atoms have two dangling bonds, and the properties of the face are independent of whether the face is gallium or arsenic.

The {111} plane also has the possibility of consisting of all gallium or all arsenic. However, unlike the {100} planes, there is a significant difference between the two possibilities. The gallium arsenide structure may be represented by two interpenetrating fcc lattices, with the [111] axis vertical. Although the structure consists of alternate layers of gallium and arsenic stacked along this axis, the distance between the successive layers alternates between large and small. Assigning arsenic as the parent lattice, the order of the layers in one direction along the axis is As-Ga-As-Ga-As-Ga, while in the opposite direction the layers are ordered Ga-As-Ga-As-Ga-As. In silicon these two directions are of course identical. A {111} surface of a gallium arsenide crystal would be either arsenic, with three dangling bonds, or gallium, with one dangling bond. Clearly, the latter is energetically more favorable; thus, such a termination is called the Ga face.
Conversely, the opposite {111} surface would be either gallium, with three dangling bonds, or arsenic, with one dangling bond. Again, the latter is energetically more favorable, and this face is therefore called the As face. The As face is chemically distinct from the Ga face due to the difference in the number of electrons at the surface. As a consequence, the As face etches more rapidly than the Ga face. In addition, surface evaporation below 770 °C occurs more rapidly at the As face. This page titled 7.2: Structures of Element and Compound Semiconductors is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.3: X-ray Crystallography
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/07%3A_Molecular_and_Solid_State_Structure/7.03%3A_X-ray_Crystallography
The birth of X-ray crystallography is considered by many to be marked by the formulation of the law of constant angles by Nicolaus Steno in 1669. Although Steno is well known for his numerous principles regarding all areas of life, this particular law dealing with geometric shapes and crystal lattices is familiar ground to all chemists. It simply states that the angles between corresponding faces on crystals are the same for all specimens of the same mineral. The significance of this for chemistry is that, given this fact, crystalline solids will be easily identifiable once a database has been established. Much like solving a puzzle, crystal structures of heterogeneous compounds could be solved very methodically by comparison of chemical composition and their interactions. Although Steno was given credit for the notion of crystallography, the man who provided the tools necessary to bring crystallography into the scientific arena was Wilhelm Röntgen, who in 1895 successfully pioneered a new form of photography, one that could allegedly penetrate through paper, wood, and human flesh; owing to a lack of knowledge of the specific workings of this new discovery, the scientific community conveniently labeled the new radiation X-rays. This event set off a chain reaction of experiments and studies, not all performed by physicists. Within a single month, medical doctors were using X-rays to pinpoint foreign objects in the human body, such as bullets and kidney stones. The credit for the actual discovery of X-ray diffraction goes to Max von Laue, to whom the Nobel Prize in Physics in 1914 was awarded for the discovery of the diffraction of X-rays. Legend has it that the notion that eventually led to a Nobel Prize was born in a garden in Munich, while von Laue was pondering the problem of passing waves of electromagnetic radiation through a specific crystalline arrangement of atoms.
Because of the relatively large wavelength of visible light, von Laue was forced to turn his attention to another part of the electromagnetic spectrum, to where shorter wavelengths resided. Only a few decades earlier, Röntgen had publicly announced the discovery of X-rays, which supposedly had a wavelength shorter than that of visible light. Having this information, von Laue entrusted the task of performing the experimental work to two technicians, Walter Friedrich and Paul Knipping. The setup consisted of an X-ray source, which beamed radiation directly into a copper sulfate crystal housed in a lead box. Film was lined against the sides and back of the box, so as to capture the X-ray beam and its diffraction pattern. Development of the film showed a dark circle in the center of the film, surrounded by several extremely well-defined circles, which had formed as a result of the diffraction of the X-ray beam by the ordered geometric arrangement of copper sulfate. Max von Laue then proceeded to work out the mathematical formulas involved in the observed diffraction pattern, for which he was awarded the Nobel Prize in Physics in 1914. The simplest definition of diffraction is the irregularities caused when waves encounter an object. Diffraction is a phenomenon that exists commonly in everyday activities, but is often disregarded and taken for granted. For example, when looking at the information side of a compact disc, a rainbow pattern will often appear when it catches light at a certain angle. This is caused by visible light striking the grooves of the disc, thus producing a rainbow effect, as interpreted by the observers' eyes. Another example is the formation of seemingly concentric rings around an astronomical object of significant luminosity when observed through clouds. The particles that make up the clouds diffract light from the astronomical object around its edges, causing the illusion of rings of light around the source.
It is easy to forget that diffraction is a phenomenon that applies to all forms of waves, not just electromagnetic radiation. Due to the large variety of possible types of diffraction, many terms have been coined to differentiate between specific types. The type of diffraction most relevant to X-ray crystallography is known as Bragg diffraction, which is defined as the scattering of waves from a crystalline structure. Formulated by William Lawrence Bragg, the equation of Bragg's law relates wavelength to angle of incidence and lattice spacing, \ref{1}, where n is a numeric constant known as the order of the diffracted beam, λ is the wavelength of the beam, d denotes the distance between lattice planes, and θ represents the angle of the diffracted wave. The conditions given by this equation must be fulfilled if diffraction is to occur.\[ n\lambda \ =\ 2d\ sin(\theta ) \label{1} \]Because of the nature of diffraction, waves will experience either constructive or destructive interference with other waves. In the same way, when an X-ray beam is diffracted off a crystal, some parts of the diffracted beam will appear more intense, while other parts will appear weaker. This depends mostly on the wavelength of the incident beam and the spacing between the crystal lattice planes of the sample. Information about the lattice structure is obtained by varying beam wavelengths, incident angles, and crystal orientation. Much like solving a puzzle, a three dimensional structure of the crystalline solid can be constructed by observing changes in data with variation of the aforementioned variables. At the heart of any XRD machine is the X-ray source. Modern day machines generally rely on copper metal as the element of choice for producing X-rays, although there are variations among different manufacturers.
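Bragg's law as given in \ref{1} can be applied numerically. The sketch below is generic and not tied to any particular instrument; the 1.54059 Å Cu-Kα wavelength is the value used elsewhere in this chapter. Note that when sin θ would exceed 1, no diffraction angle exists, which is why a given wavelength supports only a finite number of diffraction orders.

```python
import math

def bragg_angle(d, wavelength, n=1):
    """Solve n*lambda = 2*d*sin(theta) for theta, in degrees.

    d and wavelength share the same length unit (e.g. angstroms).
    Returns None when sin(theta) would exceed 1, i.e. no diffraction.
    """
    s = n * wavelength / (2 * d)
    if s > 1:
        return None
    return math.degrees(math.asin(s))

# First-order diffraction from planes 3.2571 angstroms apart, Cu K-alpha:
theta = bragg_angle(d=3.2571, wavelength=1.54059)
print(f"theta = {theta:.2f} degrees")  # about 13.68 degrees (2-theta ~ 27.36)
```

The chosen d spacing matches the NaCl worked example later in this section, so the returned angle agrees with the 13.68° quoted there.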
Because diffraction patterns are recorded over an extended period of time during sample analysis, it is very important that beam intensity remain constant throughout the entire analysis, or else faulty data will be procured. In light of this, even before an X-ray beam is generated, current must pass through a voltage regulator, which will guarantee a steady stream of voltage to the X-ray source. Another crucial component in the analysis of crystalline materials via X-rays is the detector. When XRD was first developed, film was the most commonly used method for recording diffraction patterns. The most obvious disadvantage to using film is the fact that it has to be replaced every time a new specimen is introduced, making data collection a time-consuming process. Furthermore, film can only be used once, increasing the cost of operating diffraction analysis. Since the origins of XRD, detection methods have progressed to the point where modern XRD machines are equipped with semiconductor detectors, which produce pulses proportional to the energy absorbed. With these modern detectors, there are two general ways in which a diffraction pattern may be obtained. The first is called continuous scan, and it is exactly what the name implies. The detector is set in a circular motion around the sample, while a beam of X-rays is constantly shot into the sample. Pulses of energy are plotted with respect to diffraction angle, which ensures all diffracted X-rays are recorded. The second and more widely used method is known as step scan. Step scanning bears similarity to continuous scan, except it is highly computerized and much more efficient. Instead of moving the detector in a circle around the entire sample, step scanning involves collecting data at one fixed angle at a time, thus the name. Within these detection parameters, the types of detectors can themselves be varied.
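The step scan mode described above can be illustrated schematically. Everything here is a toy model: real diffractometer control is vendor-specific, and the single-peak intensity function is a made-up stand-in for an actual detector readout.

```python
import math

def fake_intensity(two_theta):
    """Hypothetical detector response: one Gaussian peak at 2-theta = 27.4
    degrees on a flat background (a stand-in for a real sample)."""
    return 100.0 * math.exp(-((two_theta - 27.4) ** 2) / 0.1) + 5.0

def step_scan(start, stop, step):
    """Collect one intensity reading per fixed angle, as in a step scan."""
    pattern = []
    n_steps = int(round((stop - start) / step))
    for i in range(n_steps + 1):
        angle = start + i * step
        pattern.append((round(angle, 3), fake_intensity(angle)))
    return pattern

pattern = step_scan(20.0, 40.0, 0.05)            # 2-theta from 20 to 40 degrees
peak_angle, _ = max(pattern, key=lambda p: p[1])
print(f"Strongest reflection near 2-theta = {peak_angle} degrees")
```

Plotting the collected (angle, intensity) pairs would reproduce the intensity-versus-2θ diffractogram described in the text.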
A more common type of detector, known as the charge-coupled device (CCD) detector, can be found in many XRD machines, due to its fast data collection capability. A CCD detector is comprised of numerous radiation-sensitive grids, each linked to sensors that measure changes in electromagnetic radiation. Another commonly seen type of detector is a simple scintillation counter, which counts the intensity of X-rays that it encounters as it moves along a rotation axis. A comparable analogy for the differences between the two detectors would be that the CCD detector is able to see in two dimensions, while scintillation counters are only able to see in one dimension. Aside from the above two components, there are many other variables involved in sample analysis by an XRD machine. As mentioned earlier, a steady incident beam is extremely important for good data collection. To further ensure this, there will often be what is known as a Söller slit or collimator found in many XRD machines. A Söller slit collimates the direction of the X-ray beam. In the collimated X-ray beam the rays are parallel, and therefore will spread minimally as they propagate. Without a collimator, X-rays from all directions will be recorded; for example, a ray that has passed through the top of the specimen but happens to be traveling in a downwards direction may be recorded at the bottom of the plate. The resultant image will be so blurred and indistinct as to be useless. Some machines have a Söller slit between the sample and the detector, which drastically reduces the amount of background noise, especially when analyzing iron samples with a copper X-ray source. Some single-crystal XRD machines also feature a cooling gas line, which allows the user to bring the temperature of a sample considerably below room temperature.
Doing so allows for studies in which the sample is kept in a state of extremely low energy, negating much of the vibrational motion that might interfere with consistent data collection of diffraction patterns. Furthermore, information can be collected on the effects of temperature on a crystal structure. A hook-shaped beamstop located between the beam emitter and detector serves the purpose of blocking X-rays that were not diffracted from being seen by the detector, drastically reducing the amount of unnecessary noise that would otherwise obscure data analysis. Over time, XRD analysis has evolved from a very narrow and specific field to something that encompasses a much wider branch of the scientific arena. In its early stages, XRD was (with the exception of the simplest structures) confined to single crystal analysis, as detection methods had not advanced to a point where more complicated procedures could be performed. After many years of discovery and refining, however, technology has progressed to where crystalline properties (structure) of solids can be gleaned directly from a powder sample, thus offering information for samples that cannot be obtained as a single crystal. One area in which this is particularly useful is pharmaceuticals, since many of the compounds studied are not available in single crystal form, only as a powder. Even though single crystal diffraction and powder diffraction essentially generate the same data, due to the powdered nature of the latter sample, diffraction lines will often overlap and interfere with data collection. This is especially apparent when the diffraction angle 2θ is high; the patterns that emerge can be almost unidentifiable because of the mutual disruption of the individual diffraction patterns.
For this particular reason, a new approach to interpreting powder diffraction data has been created, and there are two main methods for interpreting such data. Another important aspect of being able to study compounds in powder form for the pharmaceutical researcher is the ability to identify structures in their natural state. A vast majority of drugs in this day and age are delivered in powdered form, either as a pill or a capsule. Crystallization processes may often alter the chemical composition of the molecule (e.g., by the inclusion of solvent molecules), thus marring the data if analysis is confined to single crystals. Furthermore, when the sample is in powdered form, there are other variables that can be adjusted to see real-time effects on the molecule. Temperature, pressure, and humidity are all factors that can be changed in situ to glean data on how a drug might respond to changes in those particular variables. Powder X-ray diffraction (XRD) was developed in 1916 by Debye and Scherrer as a technique that could be applied where traditional single-crystal diffraction cannot be performed. This includes cases where the sample cannot be prepared as a single crystal of sufficient size and quality. Powder samples are easier to prepare, and the technique is especially useful for pharmaceuticals research. Diffraction occurs when a wave meets a set of regularly spaced scattering objects and its wavelength and the distance between the scattering objects are of the same order of magnitude. This makes X-rays suitable for crystallography, as X-ray wavelengths and crystal lattice parameters are both on the scale of angstroms (Å).
Crystal diffraction can be described by Bragg diffraction, \ref{2}, where λ is the wavelength of the incident monochromatic X-ray, d is the distance between parallel crystal planes, and θ the angle between the beam and the plane.\[ \lambda \ =\ 2d\ sin \theta \label{2} \]For constructive interference to occur between two waves, the path length difference between the waves must be an integral multiple of their wavelength. This path length difference is represented by 2d sinθ . Because sinθ cannot be greater than 1, the wavelength of the X-ray limits the number of diffraction peaks that can appear.Most diffractometers use Cu or Mo as an X-ray source, and specifically the Kα radiation of wavelengths of 1.54059 Å and 0.70932 Å, respectively. A stream of electrons is accelerated towards the metal target anode from a tungsten cathode, with a potential difference of about 30-50 kV. As this generates a lot of heat, the target anode must be cooled to prevent melting.Detection of the diffracted beam can be done in many ways, and one common system is the gas proportional counter (GPC). The detector is filled with an inert gas such as argon, and electron-ion pairs are created when X-rays pass through it. An applied potential difference separates the pairs and generates secondary ionizations through an avalanche effect. The amplification of the signal is necessary as the intensity of the diffracted beam is very low compared to the incident beam. The current detected is then proportional to the intensity of the diffracted beam. A GPC has a very low noise background, which makes it widely used in labs.The particle size distribution should be even to ensure that the diffraction pattern is not dominated by a few large particles near the surface. This can be done by grinding the sample to reduce the average particle size to <10µm. However, if particle sizes are too small, this can lead to broadening of peaks. 
This is due to both lattice damage and the reduction of the number of planes that cause destructive interference. The diffraction pattern is actually made up of the angles that did not suffer from destructive interference, due to the special relationship described by Bragg's law. If destructive interference is reduced close to these special angles, the peak is broadened and becomes less distinct. Some crystals, such as calcite (CaCO3), have preferred orientations and will change their orientation when pressure is applied. This leads to differences in the diffraction pattern of 'loose' and pressed samples. Thus, it is important to avoid even touching 'loose' powders to prevent errors when collecting data. The sample powder is loaded onto a sample dish for mounting in the diffractometer, where rotating arms containing the X-ray source and detector scan the sample at different incident angles. The sample dish is rotated horizontally during scanning to ensure that the powder is exposed evenly to the X-rays. In a sample X-ray diffraction spectrum of germanium, each peak can be identified by the plane that caused that diffraction. Germanium has a diamond cubic crystal lattice, named after the prototypical example of this crystal structure. The crystal structure determines which crystal planes cause diffraction and the angles at which they occur. The angles are reported as 2θ, as that is the angle measured between the two arms of the diffractometer, i.e., the angle between the incident and the diffracted beam. There are three basic cubic crystal lattices: the simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC). These structures are simple enough to have their diffraction spectra analyzed without the aid of software. Each of these structures has specific rules on which of their planes can produce diffraction, based on their Miller indices (hkl). The order in which these peaks appear depends on the sum h² + k² + l².
These are shown in Table \(\PageIndex{1}\). The value of d for each of these planes can be calculated using \ref{3}, where a is the lattice parameter of the crystal. The lattice constant, or lattice parameter, refers to the constant distance between unit cells in a crystal lattice.\[ \frac{1}{d^{2}} \ =\ \frac{h^{2}+k^{2}+l^{2}}{a^{2}} \label{3} \]As the diamond cubic structure of Ge can be complicated, a simpler worked example for sample diffraction of NaCl with Cu-Kα radiation is shown below. Given the values of 2θ that result in diffraction, Table \(\PageIndex{2}\) can be constructed. The values of these ratios can then be inspected to see if they correspond to an expected series of hkl values. In this case, the last column gives a list of integers, which corresponds to the h² + k² + l² values of FCC lattice diffraction. Hence, NaCl has an FCC structure. The lattice parameter of NaCl can now be calculated from this data. The first peak occurs at θ = 13.68°. Given that the wavelength of the Cu-Kα radiation is 1.54059 Å, Bragg's Equation \ref{4} can be applied as follows:\[ 1.54059 \ =\ 2d\ sin\ 13.68 \label{4} \]\[ d\ =\ 3.2571\ Å \label{5} \]Since the first peak corresponds to the {111} plane, the distance between two parallel {111} planes is 3.2571 Å. The lattice parameter can now be worked out using \ref{6}.\[ 1/3.2571^{2}\ =\ (1^{2}+1^{2}+1^{2})/a^{2} \label{6} \]\[ a\ =\ 5.6414\ Å \label{7} \]The powder XRD spectrum of Ag nanoparticles, collected using Cu-Kα radiation of 1.54059 Å, can be analyzed in the same way to determine its crystal structure and lattice parameter from the labeled peaks. Applying the Bragg Equation \ref{8},\[ 1.54059\ =\ 2d\ sin\ 19.03 \label{8} \]\[ d\ =\ 2.3624\ Å \label{9} \]Calculate the lattice parameter using \ref{10},\[ 1/2.3624^{2}\ =\ (1^{2}+1^{2}+1^{2})/a^{2} \label{10} \]\[ a\ =\ 4.0918\ Å \label{11} \]The ratios of the peak positions again give a list of integers corresponding to the h² + k² + l² values of FCC lattice diffraction.
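The indexing arithmetic in the worked example above (Bragg's law to get d, then the cubic relation 1/d² = (h²+k²+l²)/a² to get a) is easily automated. The sketch below reuses the NaCl and Ag peak positions quoted in the text; treating both first peaks as {111} reflections follows the examples above.

```python
import math

WAVELENGTH = 1.54059  # Cu K-alpha, angstroms

def cubic_lattice_parameter(two_theta, hkl, wavelength=WAVELENGTH):
    """Lattice parameter a of a cubic crystal from one indexed reflection.

    Bragg's law:  lambda = 2 d sin(theta)
    Cubic cell:   1/d^2 = (h^2 + k^2 + l^2) / a^2
    """
    theta = math.radians(two_theta / 2.0)
    d = wavelength / (2.0 * math.sin(theta))
    h, k, l = hkl
    return d * math.sqrt(h * h + k * k + l * l)

# NaCl {111} reflection at theta = 13.68 deg (2-theta = 27.36):
a_nacl = cubic_lattice_parameter(27.36, (1, 1, 1))
# Ag {111} reflection at theta = 19.03 deg (2-theta = 38.06):
a_ag = cubic_lattice_parameter(38.06, (1, 1, 1))
print(f"a(NaCl) = {a_nacl:.4f} A, a(Ag) = {a_ag:.4f} A")
```

This reproduces the a ≈ 5.641 Å and a ≈ 4.092 Å values derived above.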
Hence, the Ag nanoparticles have an FCC structure. As seen above, each crystal will give a pattern of diffraction peaks based on its lattice type and parameter. These fingerprint patterns are compiled into databases such as the one maintained by the Joint Committee on Powder Diffraction Standards (JCPDS). Thus, the XRD spectra of samples can be matched against those stored in the database to determine their composition easily and rapidly. Powder XRD is also able to follow solid state reactions such as the titanium dioxide (TiO2) anatase to rutile transition. A diffractometer equipped with a heatable sample chamber can take diffractograms at different temperatures to see how the reaction progresses, and the change in diffraction peaks during this transition can be followed in the resulting series of spectra. XRD allows for quick composition determination of unknown samples and gives information on crystal structure. Powder XRD is a useful application of X-ray diffraction, due to the ease of sample preparation compared to single-crystal diffraction. Its application to solid state reaction monitoring can also provide information on phase stability and transformation. Described simply, single-crystal X-ray diffraction (XRD) is a technique in which a crystal of a sample under study is bombarded with an X-ray beam from many different angles, and the resulting diffraction patterns are measured and recorded. By aggregating the diffraction patterns and converting them via Fourier transform to an electron density map, a unit cell can be constructed which indicates the average atomic positions, bond lengths, and relative orientations of the molecules within the crystal. As an analogy to describe the underlying principles of diffraction, imagine shining a laser onto a wall through a fine sieve. Instead of observing a single dot of light on the wall, a diffraction pattern will be observed, consisting of regularly arranged spots of light, each with a definite position and intensity.
The spacing of these spots is inversely related to the grating in the sieve— the finer the sieve, the farther apart the spots are, and the coarser the sieve, the closer together the spots are. Individual objects can also diffract radiation if it is of the appropriate wavelength, but a diffraction pattern is usually not seen because its intensity is too weak. The difference with a sieve is that it consists of a grid made of regularly spaced, repeating wires. This periodicity greatly magnifies the diffraction effect because of constructive interference. As the light rays combine amplitudes, the resulting intensity of light seen on the wall is much greater because intensity is proportional to the square of the light’s amplitude.To apply this analogy to single-crystal XRD, we must simply scale it down. Now the sieve is replaced by a crystal and the laser (visible light) is replaced by an X-ray beam. Although the crystal appears solid and not grid-like, the molecules or atoms contained within the crystal are arranged periodically, thus producing the same intensity-magnifying effect as with the sieve. Because X-rays have wavelengths that are on the same scale as the distance between atoms, they can be diffracted by their interactions with the crystal lattice.These interactions are dictated by Bragg's law, which says that constructive interference occurs only when \ref{12} is satisfied; where n is an integer, λ is the wavelength of light, d is the distance between parallel planes in the crystal lattice, and θ is the angle of incidence between the X-ray beam and the diffracting planes (see ). A complication arises, however, because crystals are periodic in all three dimensions, while the sieve repeats in only two dimensions. As a result, crystals have many different diffraction planes extending in certain orientations based on the crystal’s symmetry group. 
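The intensity-magnifying effect of periodic repetition described above can be checked numerically. For N identical, equally spaced coherent scatterers, the scattered amplitudes add as complex numbers; at the constructive-interference condition they are all in phase, so the intensity (amplitude squared) grows as N². This is a generic interference sketch, not a model of any specific instrument.

```python
import cmath
import math

def intensity(n_scatterers, phase_step):
    """Intensity from n equally spaced unit scatterers with a fixed phase
    difference between neighbours: |sum of complex amplitudes|^2."""
    amplitude = sum(cmath.exp(1j * k * phase_step) for k in range(n_scatterers))
    return abs(amplitude) ** 2

# At the constructive-interference condition (phase step = 2*pi), the
# intensity scales as N^2:
for n in (2, 10, 100):
    print(n, round(intensity(n, 2 * math.pi)))
# Slightly off the condition, the amplitudes largely cancel for large N:
print(round(intensity(100, 2 * math.pi + 0.5), 2))
```

The sharp contrast between the on-condition and off-condition values is why periodic crystals produce intense, well-defined reflections while a single molecule does not.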
For this reason, it is necessary to observe diffraction patterns from many different angles and orientations of the crystal to obtain a complete picture of the reciprocal lattice. The reciprocal lattice of a lattice (Bravais lattice) is the lattice in which the Fourier transform of the spatial wavefunction of the original lattice (or direct lattice) is represented. The reciprocal lattice of a reciprocal lattice is the original lattice.\[ n \lambda \ =\ 2d\ sin \theta \label{12} \]The reciprocal lattice is related to the crystal lattice just as the sieve is related to the diffraction pattern: they are inverses of each other. Each point in real space has a corresponding point in reciprocal space and they are related by 1/d; that is, any vector in real space multiplied by its corresponding vector in reciprocal space gives a product of unity. The angles between corresponding pairs of vectors remain unchanged. Real space is the domain of the physical crystal, i.e., it includes the crystal lattice formed by the physical atoms within the crystal. Reciprocal space is, simply put, the Fourier transform of real space; practically, we see that diffraction patterns resulting from different orientations of the sample crystal in the X-ray beam are actually two-dimensional projections of the reciprocal lattice. Thus by collecting diffraction patterns from all orientations of the crystal, it is possible to construct a three-dimensional version of the reciprocal lattice and then perform a Fourier transform to model the real crystal lattice. Two common types of X-ray diffraction are powder XRD and single-crystal XRD, both of which have particular benefits and limitations.
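The "product of unity" relationship between real and reciprocal space described above can be written out explicitly. For direct lattice vectors a1, a2, a3, the crystallographic reciprocal basis is b1 = (a2 × a3)/V and cyclic permutations, with V = a1 · (a2 × a3), so that ai · bj = 1 when i = j and 0 otherwise. The sketch below uses this 1/d convention (no factor of 2π), matching the text; the cubic cell size is the NaCl lattice parameter worked out earlier.

```python
def cross(u, v):
    """Cross product of two 3-vectors (tuples)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reciprocal_lattice(a1, a2, a3):
    """Crystallographic reciprocal basis: a_i . b_j = 1 if i == j, else 0."""
    volume = dot(a1, cross(a2, a3))
    b1 = tuple(c / volume for c in cross(a2, a3))
    b2 = tuple(c / volume for c in cross(a3, a1))
    b3 = tuple(c / volume for c in cross(a1, a2))
    return b1, b2, b3

# Simple cubic cell with a = 5.6414 angstroms (the NaCl lattice parameter):
a = 5.6414
b1, b2, b3 = reciprocal_lattice((a, 0, 0), (0, a, 0), (0, 0, a))
print(b1)  # (1/a, 0.0, 0.0): each reciprocal vector has length 1/a
```

For the cubic cell, each reciprocal vector simply has length 1/a, which is why larger unit cells produce more closely spaced diffraction spots.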
While powder XRD has a much simpler sample preparation, it can be difficult to obtain structural data from a powder because the sample molecules are randomly oriented in space; without the periodicity of a single crystal lattice, the signal-to-noise ratio is greatly decreased and it becomes difficult to separate reflections coming from the different orientations of the molecule. The advantage of powder XRD is that it can be used to quickly and accurately identify a known substance, or to verify that two unknown samples are the same material. Single-crystal XRD is much more time- and data-intensive, but in many fields it is essential for structural determination of small molecules and macromolecules in the solid state. Because of the periodicity inherent in crystals, small signals from individual reflections are magnified via constructive interference. This can be used to determine exact spatial positions of atoms in molecules and can yield bond distances and conformational information. The difficulty of single-crystal XRD is that single crystals may be hard to obtain, and the instrument itself may be cost-prohibitive. In a typical single-crystal diffraction pattern, the dots correspond to Bragg reflections and together form a single view of the molecule's reciprocal space. In powder XRD, random orientation of the crystals means reflections from all of them are seen at once, producing the observed diffraction rings that correspond to particular vectors in the material's reciprocal lattice. In a single-crystal X-ray diffraction experiment, the reciprocal space of a crystal is constructed by measuring the angles and intensities of reflections in observed diffraction patterns.
These data are then used to create an electron density map of the molecule which can be refined to determine the average bond lengths and positions of atoms in the crystal. The basic setup for single-crystal XRD consists of an X-ray source, a collimator to focus the beam, a goniometer to hold and rotate the crystal, and a detector to measure and record the reflections. Instruments typically contain a beamstop to halt the primary X-ray beam from hitting the detector, and a camera to help with positioning the crystal. Many also contain an outlet connected to a cold gas supply (such as liquid nitrogen) in order to cool the sample crystal and reduce its vibrational motion as data is being collected. Despite advances in instrumentation and computer programs that make data collection and solving crystal structures significantly faster and easier, it can still be a challenge to obtain crystals suitable for analysis. Ideal crystals are single, not twinned, clear, and of sufficient size to be mounted within the X-ray beam (usually 0.1-0.3 mm in each direction). They also have clean faces and smooth edges. Crystal twinning occurs when two or more crystals share lattice points in a symmetrical manner. This usually results in complex diffraction patterns which are difficult to analyze and from which it is difficult to construct a reciprocal lattice. Crystal formation can be affected by temperature, pressure, solvent choice, saturation, nucleation, and substrate. Slow crystal growth tends to be best, as rapid growth creates more imperfections in the crystal lattice and may even lead to a precipitate or gel.
Similarly, too many nucleation sites (points at which crystal growth begins) can lead to many small crystals instead of a few, well-defined ones. There are a number of basic methods for growing crystals suitable for single-crystal XRD; these are only the most common ways that crystals are grown. Particularly for macromolecules, it may be necessary to test hundreds of crystallization conditions before a suitable crystal is obtained. There now exist automated techniques utilizing robots to grow crystals, both for obtaining large numbers of single crystals and for performing specialized techniques (such as drawing a crystal out of solution) that would otherwise be too time-consuming to be of practical use. Some organic molecules display a series of intermediate transition states between the solid and isotropic liquid states as their temperature is raised. These intermediate phases have properties in between those of the crystalline solid and the corresponding isotropic liquid state, and hence they are called liquid crystalline phases. Another name is mesomorphic phases, where mesomorphic means 'of intermediate form'. According to the physicist de Gennes, a liquid crystal is 'an intermediate phase, which has liquid like order in at least one direction and possesses a degree of anisotropy'. It should be noted that all liquid crystalline phases are formed by anisotropic molecules (either elongated or disk-like), but not all anisotropic molecules form liquid crystalline phases. Anisotropic objects can possess different types of ordering, giving rise to different types of liquid crystalline phases. The word nematic comes from the Greek for thread, and refers to the thread-like defects commonly observed in polarizing optical microscopy of these molecules. They have no positional order, only orientational order; i.e., the molecules all point in the same direction. The common direction of the molecules is denoted by the symbol n, commonly referred to as the 'director'.
The director n is bidirectional, meaning that the states n and -n are indistinguishable. All the smectic phases are layered structures that usually occur at slightly lower temperatures than nematic phases. There are many variations of smectic phases. Cholesteric phases are sometimes also referred to as chiral nematic phases because they are similar to nematic phases in many regards. Many derivatives of cholesterol exhibit this type of phase. They are generally formed by chiral molecules or by doping the nematic host matrix with chiral molecules. Adding chirality causes helical distortion in the system, which makes the director, n, rotate continuously in space in the shape of a helix with a specific pitch. The magnitude of the pitch in a cholesteric phase is a strong function of temperature. In columnar phases the liquid crystal molecules are disk-shaped, as opposed to the rod-like molecules of nematic and smectic phases. These disk-shaped molecules stack themselves in columns and form two-dimensional crystalline arrays. This type of two-dimensional ordering leads to new mesophases. X-ray diffraction (XRD) is one of the fundamental experimental techniques used to analyze the atomic arrangement of materials. The basic principle behind X-ray diffraction is Bragg's law. According to this law, X-rays that are reflected from adjacent crystal planes will undergo constructive interference only when the path difference between them is an integer multiple of the X-ray's wavelength, \ref{13}, where n is an integer, d is the spacing between the adjacent crystal planes, θ is the angle between the incident X-ray beam and the scattering plane, and λ is the wavelength of the incident X-ray.\[ 2d\ sin \theta \ =\ n \lambda \label{13} \]Now the atomic arrangement of molecules can range from extremely ordered (single crystals) to random (liquids). Correspondingly, the scattered X-rays form specific diffraction patterns particular to that sample.
shows the difference between X-rays scattered from a single crystal and a polycrystalline (powder) sample. In the case of a single crystal the diffracted rays point in discrete directions ), while for a polycrystalline sample the diffracted rays form a series of diffraction cones ).

A two-dimensional (2D) XRD system is a diffraction system with the capability of simultaneously collecting and analyzing the X-ray diffraction pattern in two dimensions. A typical 2D XRD setup consists of five major components ): For laboratory-scale X-ray generators, X-rays are emitted by bombarding metal targets with high-velocity electrons accelerated by a strong electric field in the range 20–60 kV. Different metal targets that can be used are chromium (Cr), cobalt (Co), copper (Cu), molybdenum (Mo) and iron (Fe); the most commonly used are Cu and Mo. Synchrotrons are even higher-energy radiation sources. They can be tuned to generate a specific wavelength, and they have much brighter luminosity for better resolution. Available synchrotron facilities in the US are: The X-ray optics consist of the X-ray tube, monochromator, pinhole collimator and beam stop. A monochromator is used to remove unwanted X-ray radiation coming from the X-ray tube. Diffraction from a single crystal can be used to select a specific wavelength of radiation; typical materials used are pyrolytic graphite and silicon. Monochromatic X-ray beams have three components: parallel, convergent and divergent X-rays. The function of the pinhole collimator is to filter the incident X-ray beam and allow passage of only the parallel X-rays. A 2D X-ray detector can be either a film or a digital detector, and its function is to measure the intensity of X-rays diffracted from a sample as a function of position, time, and energy.

2D diffraction data contains much more information than a diffraction pattern acquired using a 1D detector. shows the diffraction pattern from a polycrystalline sample.
For illustration purposes only, two diffraction cones are shown in the schematic. In the case of 1D X-ray diffraction, the measurement area is confined within a plane labeled the diffractometer plane. The 1D detector is mounted along the detection circle, and variations of the diffraction pattern in the z direction are not considered. The diffraction pattern collected is an average over a range defined by the beam size in the z direction, and the measured pattern is a plot of X-ray intensity at different 2θ angles. For 2D X-ray diffraction, the measurement area is not limited to the diffractometer plane. Instead, a large portion of the diffraction rings is measured simultaneously, depending on the detector size and its position relative to the sample.

One such advantage is the measurement of the percent crystallinity of a material. Determination of material crystallinity is required both for research and for quality control. Scattering from amorphous materials produces a diffuse intensity ring, while polycrystalline samples produce sharp and well-defined rings or spots. The ability to distinguish between amorphous and crystalline scattering is the key to determining percent crystallinity accurately. Since most crystalline samples have preferred orientation, depending on how the sample is oriented it is possible to measure a different peak, or no peak at all, using a conventional diffraction system. In contrast, sample orientation has no effect on the full-circle integrated diffraction measurement made using a 2D detector. A 2D XRD system can therefore measure percent crystallinity more accurately.

As mentioned in the introduction section, a liquid crystal is an intermediate state between the solid and liquid phases. At temperatures above the liquid crystal phase transition temperature ), liquid crystals become an isotropic liquid, i.e., there is an absence of long-range positional or orientational order among the molecules. Since an isotropic state cannot be aligned, its diffraction pattern consists of weak, diffuse rings .
The reason we see any diffraction pattern in the isotropic state is that in classical liquids there exists short-range positional order. The ring corresponds to a spacing of 4.5 Å and mostly appears at 20.5°; it represents the distance between the molecules along their widths.

Nematic liquid crystalline phases have long-range orientational order but no positional order. An unaligned sample of a nematic liquid crystal has a diffraction pattern similar to that of the isotropic state, but instead of a diffuse ring it has a sharper intensity distribution. For an aligned sample of a nematic liquid crystal, the X-ray diffraction pattern exhibits two sets of diffuse arcs b). The diffuse arc at the larger radius (P1, 4.5 Å) represents the distance between molecules along their widths. In the presence of an external magnetic field, samples with positive diamagnetic anisotropy align parallel to the field, with P1 oriented perpendicular to the field, while samples with negative diamagnetic anisotropy align perpendicular to the field, with P1 parallel to the field. The intensity distribution within these arcs represents the extent of alignment within the sample, generally denoted by S.

The diamagnetic anisotropy of all liquid crystals with an aromatic ring is positive, and on the order of 10-7. The value decreases with the substitution of each aromatic ring by a cyclohexane or other aliphatic group. A negative diamagnetic anisotropy is observed for purely cycloaliphatic LCs.

When a smectic phase is cooled down slowly in the presence of the external field, two sets of diffuse peaks are seen in the diffraction pattern c). The diffuse peaks at small angles condense into sharp quasi-Bragg peaks. The peak intensity distribution at large angles is not very sharp because the molecules within the smectic planes are randomly arranged. In the case of smectic C phases, the smectic layer normal and the director are no longer collinear; the angle between them is denoted θ d).
This tilt can easily be seen in the diffraction pattern, as the diffuse peaks at smaller and larger angles are no longer orthogonal to each other.

In general, X-ray scattering measurements of liquid crystal samples are considered more difficult to perform than those of crystalline samples. The following steps should be performed for diffraction measurement of liquid crystal samples: Identification of the phase of a liquid crystal sample is critical in predicting its physical properties. A simple 2D X-ray diffraction pattern can tell a lot in this regard ). It is also critical to determine the orientational order of a liquid crystal, which characterizes the extent of sample alignment.

For simplicity, the rest of the discussion focuses on nematic liquid crystal phases. In an unaligned sample, there is no specific macroscopic order in the system. Within micrometer-sized domains, the molecules are all oriented in a specific direction, called the local director. Because there is no positional order in nematic liquid crystals, this local director varies in space and assumes all possible orientations. In contrast, in a perfectly aligned sample of a nematic liquid crystal, all the local directors are oriented in the same direction. The alignment of molecules along one preferred direction in liquid crystals makes their physical properties, such as refractive index, viscosity, and diamagnetic susceptibility, directionally dependent.

When a liquid crystal sample is oriented using external fields, the local directors preferentially align globally along the field direction. This globally preferred direction is referred to as the director and is denoted by the unit vector n.
The extent of alignment within a liquid crystal sample is typically denoted by the order parameter, S, as defined by \ref{14}, where θ is the angle between the long axis of a molecule and the preferred direction, n, and the angle brackets denote an average over all molecules.

\[ S\ =\ \langle \frac{3\cos^{2} \theta \ -\ 1}{2} \rangle \label{14} \]

For isotropic samples the value of S is zero, and for perfectly aligned samples it is 1. shows the structure of the most extensively studied nematic liquid crystal molecule, 4-cyano-4'-pentylbiphenyl, commonly known as 5CB. To prepare a polydomain sample, 5CB was drawn into a glass capillary via capillary forces ). shows the 2D X-ray diffraction of the as-prepared polydomain sample. To prepare a monodomain sample, a glass capillary filled with 5CB was heated to 40 °C (i.e., above the nematic-isotropic transition temperature of 5CB, ~35 °C) and then cooled slowly in the presence of a 1 Tesla magnetic field ). This gives a uniformly aligned sample with the nematic director n oriented along the magnetic field. shows the 2D X-ray diffraction measurement of a monodomain 5CB liquid crystal sample collected using a Rigaku Raxis-IV++; it consists of two diffuse arcs (as mentioned before). shows the intensity distribution of a diffuse arc as a function of Θ; the calculated order parameter value, S, is -0.48.

Through the course of our structural characterization of various tetrafluoroborate salts, the complex cation has nominally been the primary subject of interest; however, we observed that the tetrafluoroborate (BF4-) anions were commonly disordered (13 out of 23 structures investigated). Furthermore, a search of the Cambridge Structural Database as of 14th December 2010 yielded 8,370 structures in which the tetrafluoroborate anion is present; of these, 1,044 (12.5%) were refined as having some kind of disorder associated with the BF4- anion.
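Returning to the order parameter of \ref{14}: it is an average of (3cos²θ − 1)/2 over all molecules, so it can be sketched numerically. In this minimal illustration the angular distributions are invented; the only facts used are the limiting values S = 1 (perfect alignment) and S = 0 (isotropic) quoted above:

```python
import math
import random

def order_parameter(angles_rad):
    """S = <(3 cos^2(theta) - 1)/2>, averaged over the molecules."""
    return sum((3.0 * math.cos(t) ** 2 - 1.0) / 2.0
               for t in angles_rad) / len(angles_rad)

# Perfectly aligned sample: every long axis parallel to the director -> S = 1
aligned = [0.0] * 1000

# Isotropic sample: directions uniform on the sphere means cos(theta) is
# uniform on [-1, 1]; the average then tends to S = 0
random.seed(0)
isotropic = [math.acos(random.uniform(-1.0, 1.0)) for _ in range(100000)]

S_aligned = order_parameter(aligned)   # exactly 1.0
S_iso = order_parameter(isotropic)     # close to 0
```

A fully perpendicular arrangement (θ = 90° for every molecule) gives the other limiting value, S = −1/2, which is why negative order parameters such as the −0.48 quoted above are physically meaningful.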
Several different methods have been reported for the treatment of these disorders, but the majority were refined as a non-crystallographic rotation about the axis of one of the B-F bonds.

Unfortunately, the very property that makes fluoro-anions such good candidates for non-coordinating counter-ions (i.e., weak intermolecular forces) also facilitates the presence of disorder in crystal structures. In other words, the appearance of disorder is intensified by the presence of a weakly coordinating spherical anion (e.g., BF4- or PF6-), which lacks the strong intermolecular interactions needed to keep a regular, repeating anion orientation throughout the crystal lattice. Essentially, these weakly coordinating anions are loosely defined electron-rich spheres. All considered, it seems that fluoro-anions in general have a propensity to exhibit apparently large atomic displacement parameters (ADPs), and thus are appropriately refined as having fractional site occupancies.

In crystallography, the observed atomic displacement parameters are an average over the millions of unit cells throughout the entire volume of the crystal, and over the thermally induced motion during the time used for data collection. Disorder of atoms or molecules in a given structure can manifest as flat or non-spherical atomic displacement parameters in the crystal structure. Such cases of disorder are usually the result of either thermally induced motion during data collection (i.e., dynamic disorder) or static disorder of the atoms/molecules throughout the lattice. The latter is defined as the situation in which certain atoms, or groups of atoms, occupy slightly different orientations from molecule to molecule over the (relatively speaking) large volume covered by the crystal lattice. This static displacement of atoms can simulate the effect of thermal vibration on the scattering power of the "average" atom.
Consequently, differentiation between thermal motion and static disorder can be ambiguous, unless data collection is performed at low temperature (which would negate much of the thermal motion observed at room temperature).

In most cases, this disorder is easily resolved as some non-crystallographic symmetry element acting locally on the weakly coordinating anion. The atomic site occupancies can be refined using the FVAR instruction on the different parts (see PART 1 and PART 2 in ) of the disorder, having site occupancy factors (s.o.f.) of x and 1-x, respectively. This is accomplished by replacing 11.000 (on the F-atom lines in the "NAME.INS" file) with 21.000 or -21.000 for each of the different parts of the disorder. For instance, the "NAME.INS" file would look something like that shown in . Note that for more heavily disordered structures, i.e., those with more than two disordered parts, the SUMP command can be used to constrain the s.o.f. values of parts 2, 3, 4, etc., such that their combined sum equals 1.0; these are designated in FVAR as the second, third, and fourth terms.

In small molecule refinement, the case will inevitably arise in which some kind of restraints or constraints must be used to achieve convergence of the data. A restraint is any additional information concerning a given structural feature, e.g., limits on the possible values of parameters, that is added into the refinement, thereby increasing the number of observations against which the parameters are refined. For example, aromatic systems are essentially flat, so for refinement purposes a troublesome ring system could be restrained to lie in one plane. Restraints are not exact, i.e., they are tied to a probability distribution, whereas constraints are exact mathematical conditions.
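As a concrete illustration of the FVAR/PART bookkeeping described above, a hypothetical fragment of a SHELX-style "NAME.INS" file for a two-part BF4- rotational disorder might look like the following. The atom names, coordinates, displacement parameters, and the 0.75 starting occupancy are all invented for illustration; only the 11.000 / 21.000 / -21.000 convention and the PART instructions come from the text:

```
FVAR   0.35000   0.75000
B1    3   0.253110  0.412250  0.131400  11.00000  0.04500
F1    4   0.301200  0.498700  0.165300  11.00000  0.06100
PART  1
F2A   4   0.184500  0.392100  0.210800  21.00000  0.07200
F3A   4   0.214700  0.455600  0.045200  21.00000  0.06800
F4A   4   0.330900  0.346800  0.102500  21.00000  0.07500
PART  2
F2B   4   0.160300  0.430900  0.180100 -21.00000  0.08100
F3B   4   0.246800  0.480200  0.030700 -21.00000  0.07700
F4B   4   0.351200  0.371500  0.131900 -21.00000  0.08400
PART  0
```

Here B1 and the axial fluorine F1 are fully occupied (s.o.f. code 11.000), while the three remaining fluorines are split over PART 1 (tied to the second free variable, x) and PART 2 (tied to 1-x); refining the second FVAR term refines the disorder ratio directly.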
Restraints can be regarded as falling into one of several general types: The most common case of disorder is a rotation about an axis, the simplest of which involves a non-crystallographic symmetry-related rotation about the vector made by one of the B-F bonds; this operation leads to three of the four F-atoms having two site occupancies ). This disorder is also seen for tBu and CF3 groups, and due to the C3 symmetry of the C(CH3)3, CF3 and BF3 moieties actually results in a near C2 rotation.

In a typical example, the BF4- anion present in the crystal structure of [H(Mes-dpa)]BF4 ) was found to have a 75:25 site occupancy disorder for three of the four fluorine atoms ). The disorder is a rotation about the axis of the B-F bond. For initial refinement cycles, similar distance restraints (SADI) were placed on all B-F and F-F distances, in addition to similar ADP restraints (SIMU) and rigid bond restraints (DELU) for all F atoms. Restraints were lifted for the final refinement cycles. A similar disorder refinement was required for [H(2-iPrPh-dpa)]BF4 (45:55), while refinement of the disorder in [Cu(2-iPrPh-dpa)(styrene)]BF4 (65:35) was performed with only SADI and DELU restraints, which were lifted in the final refinement cycles.

In the complex [Ag(H-dpa)(styrene)]BF4, use of the free variable (FVAR) led to refinement of disordered fluorine atoms F(2A)-F(4A) and F(2B)-F(4B) as having a 75:25 site-occupancy disorder ). For initial refinement cycles, all B-F bond lengths were given similar distance restraints (SADI). Similar distance restraints (SADI) were also placed on F…F distances for each part, i.e., F(2A)…F(3A) = F(2B)…F(3B), etc. Additionally, similar ADP restraints (SIMU) and rigid bond restraints (DELU) were placed on all F atoms.
All restraints, with the exception of SIMU, were lifted for the final refinement cycles.

The second type of disorder is closely related to the first, the only difference being that the rotational axis is tilted slightly off the B-F bond vector, resulting in all four F-atoms having two site occupancies ). Tilt angles range from 6.5° to 42°. The disordered BF4- anion present in the crystal structure of [Cu(Ph-dpa)(styrene)]BF4 was refined as having fractional site occupancies for all four fluorine atoms about a rotation slightly tilted off the B-F(2A) bond. It should be noted, however, that while the U(eq) values determined for the data collected at low temperature are roughly half those found at room temperature, as is evident from the sizes and shapes of the fluorine atoms in , the site occupancies refined to 50:50 in each case, and there was no improvement in the resolution of the disorder.

An extreme example of off-axis rotation is observed where refinement requires more than two site occupancies ), with as many as thirteen different fluorine atom locations on a single boron atom. Although a wide range of tilt angles is possible, in some systems the angle is constrained by the presence of hydrogen bonding. For example, the BF4- anion present in [Cu(Mes-dpa)(μ-OH)(H2O)]2[BF4]2 was found to have a 60:40 site occupancy disorder of the four fluorine atoms, and while the disorder is a C2-rotation slightly tilted off the axis of the B-F(1A) bond, the angle is restricted by the presence of two B-F…O interactions for one of the isomers ).

An example that does adhere to global symmetry elements is seen in the BF4- anion of [Cu{2,6-iPr2C6H3N(quin)2}2]BF4.MeOH ), which exhibits a hydrogen-bonding interaction with a disordered methanol solvent molecule. The structure of R-N(quin)2 is shown in b. By crystallographic symmetry, the carbon atom from methanol and the boron atom from the BF4- anion lie on a C2-axis.
Fluorine atoms [F-F], the methanol oxygen atom, and the hydrogen atoms attached to the methanol O(1S) and C(1S) atoms were refined as having a 50:50 site occupancy disorder ).

Multiple disorders can be observed within a single crystal unit cell. For example, the two BF4- anions in [Cu(Mes-dpa)(styrene)]BF4 both exhibited 50:50 site occupancy disorders: the first is a C2-rotation tilted off one of the B-F bonds, while the second is disordered about an inversion center located on the boron atom. Refinement of the latter was carried out similarly to the aforementioned cases, with the exception that fixed distance restraints for non-bonded atoms (DANG) were left in place for the disordered fluorine atoms attached to B ). Another instance in which the BF4- anion is disordered about a crystallographic symmetry element is that of [Cu(H-dpa)(1,5-cyclooctadiene)]BF4. In this instance fluorine atoms F through F are present in the asymmetric unit of the complex. Disordered atoms F(1A)-F(4A) were refined with 50% site occupancies, as B lies on a mirror plane ). For initial refinement cycles, similar distance restraints (SADI) were placed on all B-F and F-F distances, in addition to similar ADP restraints (SIMU) and rigid bond restraints (DELU) for all F atoms. Restraints were lifted for the final refinement cycles, in which the boron atom lies on a crystallographic mirror plane and all four fluorine atoms are reflected across it.

It has also been observed that the BF4- anion can exhibit site occupancy disorder of the boron atom and one of the fluorine atoms across an NCS mirror plane defined by the plane of the other three fluorine atoms ), requiring the entire anion (including the boron atom) to be modeled as disordered. The extreme case of disorder involves refinement of the entire anion, with all boron and fluorine atoms occupying more than two sites ).
In fact, some disorders of the latter types must be refined isotropically, or as a last resort not at all, to prevent one or more atoms from becoming non-positive definite.

This page titled 7.3: X-ray Crystallography is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.4: Low Energy Electron Diffraction
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/07%3A_Molecular_and_Solid_State_Structure/7.04%3A_Low_Energy_Electron_Diffraction
Low energy electron diffraction (LEED) is a very powerful technique that allows for the characterization of the surface of materials. Its high surface sensitivity is due to the use of electrons with energies between 20-200 eV, which have wavelengths of 2.7 – 0.87 Å (comparable to the atomic spacing). Therefore, the electrons can easily be elastically scattered by the atoms in the first few layers of the sample. Features such as the limited penetration of low-energy electrons have positioned LEED as one of the most common techniques in surface science for the determination of the symmetry of the unit cell (qualitative analysis) and the positions of the atoms in the crystal surface (quantitative analysis).

In 1924 Louis de Broglie postulated that all forms of matter, such as electrons, have a wave-particle nature. Three years after this postulate, the American physicists Clinton J. Davisson and Lester H. Germer ) proved experimentally the wave nature of electrons at Bell Labs in New York. At that time, they were investigating the distribution-in-angle of elastically scattered electrons (electrons that have suffered no loss of kinetic energy) from the face of polycrystalline nickel, a material composed of many randomly oriented crystals.

The experiment consisted of a beam of electrons from a heated tungsten filament directed against the polycrystalline nickel, and an electron detector mounted on an arc to observe the electrons at different angles. During the experiment, air entered the vacuum chamber where the nickel was held, producing an oxide layer on its surface. Davisson and Germer reduced the nickel oxide by heating the sample at high temperature. They did not realize that this thermal treatment changed the polycrystalline nickel to a nearly monocrystalline nickel, a material composed of a few large, commonly oriented crystals.
When they repeated the experiment, it was a great surprise that the distribution-in-angle of the scattered electrons manifested sharp peaks at certain angles. They soon realized that these peaks were interference patterns and, in analogy to X-ray diffraction, that the arrangement of the atoms, and not the structure of the atoms, was responsible for the pattern of the scattered electrons.

The results of Davisson and Germer were soon corroborated by George Paget Thomson, J. J. Thomson's son. In 1937, both Davisson and Thomson were awarded the Nobel Prize in Physics for their experimental discovery of the diffraction of electrons by crystals. It is noteworthy that 31 years after J. J. Thomson showed that the electron is a particle, his son showed that it is also a wave.

Although low-energy electron diffraction was discovered in 1927, it became popular only in the early 1960s, when advances in electronics and ultra-high vacuum technology made LEED instruments commercially available. At the beginning, the technique was only used for qualitative characterization of surface ordering. Years later, advances in computational technology allowed the use of LEED for quantitative analysis of the positions of atoms within a surface. This information is hidden in the energetic dependence of the diffraction spot intensities, which can be used to construct a LEED I-V curve.

Electrons can be considered as a stream of waves that hit a surface and are diffracted by regions of high electron density (the atoms). Electrons in the range of 20 to 200 eV can penetrate the sample only about 10 Å without losing energy. For this reason, LEED is especially sensitive to surfaces, unlike X-ray diffraction, which gives information about the bulk structure of a crystal due to the much larger mean free path of X-rays (around micrometers).
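The quoted 2.7 – 0.87 Å wavelength range can be checked directly from the de Broglie relation λ = h/p with p = √(2mE) for a non-relativistic electron. A quick sketch:

```python
import math

H = 6.62607e-34      # Planck's constant, J s
M_E = 9.10938e-31    # electron rest mass, kg
EV = 1.60218e-19     # 1 eV in joules

def electron_wavelength_angstrom(energy_ev):
    """de Broglie wavelength of a non-relativistic electron, in Angstrom."""
    p = math.sqrt(2.0 * M_E * energy_ev * EV)   # momentum from E = p^2 / (2m)
    return (H / p) * 1e10

lam_20 = electron_wavelength_angstrom(20.0)    # ~2.7 Angstrom
lam_200 = electron_wavelength_angstrom(200.0)  # ~0.87 Angstrom
```

Both values fall squarely in the range of interatomic spacings, which is precisely why this energy window is used for surface diffraction.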
Table \(\PageIndex{1}\) compares general aspects of both techniques. Like X-ray diffraction, electron diffraction also follows Bragg's law, see , where λ is the wavelength, a is the atomic spacing, d is the spacing of the crystal layers, θ is the angle between the incident beam and the reflected beam, and n is an integer. For constructive interference between two waves, the path length difference (2a sinθ / 2d sinθ) must be an integral multiple of the wavelength.

In LEED, the diffracted beams impact a fluorescent screen and form a pattern of light spots a), which is a to-scale version of the reciprocal lattice of the unit cell. The reciprocal lattice is a set of imaginary points in which the direction of a vector from one point to another is the direction of a normal to one plane of atoms in the unit cell (real space). Because an electron beam penetrates only a few 2D atomic layers, b), the reciprocal lattice seen by LEED consists of continuous rods and discrete points per atomic layer, see c. In this way, LEED patterns can give information about the size and shape of the real-space unit cell, but nothing about the positions of the atoms. To gain information about atomic positions, analysis of the spot intensities is required. For further information about the reciprocal lattice and crystals, refer to Crystal Structure and An Introduction to Single-Crystal X-Ray Crystallography.

Thanks to the hemispherical geometry of the fluorescent screen of LEED, we can observe the reciprocal lattice without distortion. It is important to take into account that the separation of the points in the reciprocal lattice and the real interplanar distance are inversely proportional, which means that if the atoms are more widely spaced, the spots in the pattern get closer together, and vice versa.
In the case of superlattices, periodic structures composed of layers of two materials, new points arise in addition to the original diffraction pattern.

The typical diagram of a LEED system is shown in . This system directs an electron beam at the surface of the sample; the beam comes from an electron gun behind a transparent hemispherical fluorescent screen. The electron gun consists of a heated cathode and a set of focusing lenses that emit electrons at low energies. The electrons collide with the sample and diffract in different directions depending on the surface. Once diffracted, they are directed to the fluorescent screen. Before colliding with the screen, they must pass through four different grids (known as retarding grids), which contain a central hole through which the electron gun is inserted. The first grid, nearest the sample, is connected to earth ground. A negative potential is applied to the second and third grids, which act as suppressor grids in that they repel all electrons coming from non-elastic diffractions. These grids perform as filters that only allow the highest-energy electrons to pass through; the electrons with lower energies are blocked in order to prevent a poorly resolved image. The fourth grid shields the phosphor screen, which is positively biased, from the negative grids. The remaining electrons collide with the luminescent screen, creating a phosphor glow (left side of ), where the light intensity depends on the electron intensity.

Conventional LEED systems require a method of data acquisition. In the past, the general method for analyzing the diffraction pattern was to manually take several dozen pictures. After the development of computers, the photographs were scanned and digitalized for further analysis with computational software.
Years later, the charge-coupled device (CCD) camera was incorporated, allowing rapid acquisition, the possibility of averaging frames during the acquisition in order to improve the signal, and the immediate digitalization and channeling of the LEED pattern. In the case of the IV curves, the intensities of the spots are extracted using special algorithms. shows a commercial LEED spectrometer with the CCD camera, which has to be in an ultra-high vacuum vessel.

We have previously discussed the discovery of LEED and its principles, along with the experimental setup of a LEED system. It was also mentioned that LEED provides qualitative and quantitative surface analysis. In the following section, we will discuss the most common applications of LEED and the information that one can obtain with this technique.

One of the principal applications of LEED is the study of adsorbates on catalysts, due to its high surface sensitivity. As an example, a shows the surface of a Cu single crystal, the pristine material. This surface was cleaned carefully by various cycles of sputtering with argon ions, followed by annealing. The LEED pattern of Cu presents four well-defined spots corresponding to its cubic unit cell. b shows the LEED pattern after the growth of graphene on the surface of Cu at 800 °C; we can observe the four spots that correspond to the surface of Cu and a ring just outside these spots, which corresponds to the domains of graphene with four different primary rotational alignments with respect to the Cu substrate lattice, see . When the growth temperature of graphene is increased to 900 °C, we observe a ring of twelve spots (as seen in c), which indicates that the graphene has a much higher degree of rotational order.
Only two domains are observed, each with one of its lattice vectors aligned to one of the Cu surface lattice vectors; given that graphene has a hexagonal geometry, only one vector can coincide with the cubic lattice of Cu.

One possible explanation for the twelve spots observed at 900 ˚C is that when the temperature is increased, the four different domains observed at 800 ˚C may possess enough energy to adopt the two orientations in which their vectors align with the surface lattice vector of Cu. In addition, at 900 ˚C a decrease in the size and intensity of the Cu spots is observed, indicating a larger coverage of the copper surface by the domains of graphene.

When oxygen is chemisorbed on the surface of Cu, the new spots correspond to oxygen, a. Once graphene is allowed to grow on the oxygen-covered surface at 900 ˚C, the LEED pattern turns out differently: the twelve spots corresponding to graphene domains are not observed, due to nucleation of graphene domains in multiple orientations in the presence of oxygen, b.

A way to study the disorder of the adsorbed layers is through the LEED-IV curves, see . In this case, the intensities are plotted against the angle of the electron beam. The spectrum of Cu, with only four sharp peaks, shows a very organized surface. In the case of the graphene sample grown over the copper surface, twelve peaks are shown, which correspond to the main twelve spots of the LEED pattern. These peaks are sharp, which indicates a high level of order. For the sample of graphene grown over copper with oxygen, the twelve peaks widen, which is an effect of the increase of disorder in the layers.

As previously mentioned, LEED-IV curves may give us exact information about the positions of the atoms in a crystal. These curves relate the variation of the intensities of the diffracted electron beams (spots) to the energy of the electron beam.
The process of structure determination by this technique works by 'trial and error' and consists of three main parts: the measurement of the intensity spectra, calculations for various models of atomic positions, and the search for the best-fit structure, which is determined by an R-factor.

The first step consists of obtaining the experimental LEED pattern and all the electron beam intensities for every spot of the reciprocal lattice in the pattern. Theoretical LEED-IV curves are then calculated for a large number of geometrical models, and these are compared with the experimental curves. The agreement is quantified by means of a reliability factor, or R-factor. The closer this value is to zero, the better the agreement between experimental and theoretical curves. In this way, the level of precision of the crystalline structure will depend on the smallest R-factor that can be achieved.

Pure metals with clean surfaces allow R-factor values of around 0.1. When moving to more complex structures, these values increase. The main reason for this gradually worse agreement between theoretical and experimental LEED-IV curves lies in the approximations of conventional LEED theory, which treats the atoms as perfect spheres with a constant scattering potential in between. This description results in an inaccurate scattering potential for more open surfaces and organic molecules. In consequence, a precision of 1-2 pm can be achieved for atoms in metal surfaces, whereas the positions of atoms within organic molecules are typically determined to within ±10-20 pm. The values of the R-factor are usually between 0.2 and 0.5, where 0.2 represents a good agreement, 0.35 a mediocre agreement, and 0.5 a poor agreement. shows an example of a typical LEED-IV curve for Ir, which has a quasi-hexagonal unit cell. One can observe the parameters used to calculate the theoretical LEED-IV curve and the best-fit curve obtained experimentally, which has an R-factor value of 0.144.
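To make the idea of a reliability factor concrete, here is a deliberately simplified sketch: the normalized squared deviation between an experimental IV curve and a theoretical one after least-squares scaling. This is not the Pendry R-factor actually used in LEED practice (which compares logarithmic derivatives of the curves), and the curves below are invented data:

```python
def simple_r_factor(i_exp, i_theo):
    """Crude R-factor: R = sum((I_exp - c*I_theo)^2) / sum(I_exp^2),
    with the scale factor c chosen by least squares. R = 0 is perfect agreement.
    """
    c = sum(e * t for e, t in zip(i_exp, i_theo)) / sum(t * t for t in i_theo)
    num = sum((e - c * t) ** 2 for e, t in zip(i_exp, i_theo))
    den = sum(e * e for e in i_exp)
    return num / den

# Invented example curves: intensity vs. beam energy, arbitrary units
exp_curve = [1.0, 2.5, 4.0, 2.0, 1.2]
good_model = [2.0, 5.0, 8.0, 4.0, 2.4]   # exp curve scaled by 2 -> R = 0
bad_model = [4.0, 1.0, 1.5, 3.0, 0.5]    # wrong peak positions -> large R

r_good = simple_r_factor(exp_curve, good_model)
r_bad = simple_r_factor(exp_curve, bad_model)
```

The scaling step reflects the fact that only the shape of the IV curve, not its absolute intensity, carries the structural information; the model with the smallest R is taken as the best-fit structure.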
The model used is also shown. This page titled 7.4: Low Energy Electron Diffraction is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.5: Neutron Diffraction
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/07%3A_Molecular_and_Solid_State_Structure/7.05%3A_Neutron_Diffraction
The first neutron diffraction experiment was carried out in 1945 by Ernest O. Wollan using the Graphite Reactor at Oak Ridge. Along with Clifford Shull, he outlined the principles of the technique. However, the concept that neutrons would diffract like X-rays was first proposed by Dana Mitchell and Philip Powers, who proposed that neutrons have a wave-like nature described by the de Broglie equation, \ref{1}, where \(λ\) is the wavelength of the source, usually measured in Å, \(h\) is Planck’s constant, \(v\) is the velocity of the neutron, and \(m\) is the mass of the neutron.\[ \lambda \ =\ h/mv \label{1} \]The great majority of materials studied by diffraction methods are crystalline. X-rays were the first type of source used with crystals to determine their structural characteristics. Crystals are nominally perfect structures, although some show defects. They are composed of atoms, ions or molecules arranged in a uniform repeating pattern. The basic concept to understand about crystals is that they consist of an array of points, called lattice points, and a motif, which represents the body of the crystal. Crystals are built up of a series of unit cells; a unit cell is the repeating portion of the crystal. A unit cell can be categorized as primitive, having only one lattice point: the lattice points lie only at the corners of the cell, and each corner point is shared among the eight unit cells that meet there. In a non-primitive cell there are also points at the corners, but in addition there are lattice points on the faces or in the interior of the cell, which are similarly shared with other cells.
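Returning to the de Broglie relation \ref{1}, a quick numeric check shows that thermal neutrons have wavelengths comparable to interatomic spacings; the 2200 m/s velocity below is the conventional thermal-neutron reference speed, and the constants are standard values.

```python
# Numeric check of the de Broglie relation λ = h / (m v) for a thermal neutron.
H = 6.626e-34          # Planck's constant, J·s
M_NEUTRON = 1.675e-27  # neutron mass, kg

def neutron_wavelength(v):
    """Wavelength in Å of a neutron travelling at v m/s."""
    return H / (M_NEUTRON * v) * 1e10  # convert m to Å

# Thermal neutrons (v ≈ 2200 m/s) have λ ≈ 1.8 Å, comparable to atomic spacings.
print(round(neutron_wavelength(2200), 2))  # 1.8
```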
The simple cell is the only primitive cubic cell; the known non-primitive cubic cells are face-centered, base-centered and body-centered. Crystals can be categorized by the arrangement of their lattice points, which generates different types of shapes. There are seven known crystal systems: cubic, tetragonal, orthorhombic, rhombohedral, hexagonal, monoclinic and triclinic. These differ in their angles and in whether their axes are equal or different in length, and each system has its own Bravais lattices. Bragg’s law was first derived by the physicist Sir W. H. Bragg and his son W. L. Bragg in 1913. It is used to determine the spacing of planes and the angles formed between these planes and the incident beam applied to the crystal examined. Intense scattered X-rays are produced when X-rays of a fixed wavelength are directed at a crystal. These scattered X-rays interfere constructively when the difference in their travel paths equals an integral number of wavelengths. Since crystals have repeating patterns, diffraction can be viewed in terms of reflection from the planes of the crystal. The incident beam, the diffracted beam and the normal to the diffracting plane must lie in the same geometric plane. The incident beam makes an angle \(θ\) with the crystal plane and is reflected at the same angle, so the angle between the incident and diffracted beams is \(2θ\). shows a schematic representation of how the incident beam hits the plane of the crystal and is reflected. Bragg’s law is expressed mathematically in \ref{2}:\[ n\lambda = 2d \sin \theta \label{2} \]where \(n\) is the integer order of reflection, \(λ\) is the wavelength, and \(d\) is the plane spacing. Bragg’s law is essential in determining the structure of an unknown crystal: usually the wavelength is known and the angle of the incident beam can be measured.
With these two values known, the plane spacing of the layers of atoms or ions can be obtained, and all the collected reflections can be used to determine the structure of the unknown crystalline material. Bragg’s law applies equally to neutron diffraction; the same relationship is used, the only difference being that neutrons, rather than X-rays, are directed at the crystal. Neutrons have long been used for the determination of crystalline structures. The study of materials by neutron radiation has many advantages over the more commonly used X-rays and electrons. Neutrons are scattered by the nuclei of the atoms, whereas X-rays are scattered by their electrons. This produces several differences between the techniques: the scattering of X-rays depends strongly on the atomic number of the atoms, whereas neutron scattering depends on the properties of the nucleus. This leads to a more accurate identification of the unknown sample when a neutron source is used, since the nucleus of every atom, and even of different isotopes of the same element, scatters differently. These characteristics make neutron diffraction a great technique for the identification of materials with similar elemental composition. In contrast, X-rays will not give an exact solution for materials with similar characteristics: since the diffraction will be similar for adjacent atoms, further analysis is needed to determine the structure of the unknown. Also, if the sample contains light elements such as hydrogen, it is almost impossible to determine their exact locations by X-ray diffraction or any other technique, whereas neutron diffraction can give the number of light atoms present and their exact positions in the structure. Neutrons were first discovered by James Chadwick in 1932, when he showed that there were uncharged particles in the radiation he was using.
These particles had a mass similar to that of protons but did not share their other characteristics. Chadwick was following some of the predictions of Rutherford, who first worked in this then-unknown field. Later, Elsasser proposed the first neutron diffraction experiment in 1936, and Halban and Preiswerk were responsible for actually carrying it out. This was first done with powders; later Mitchell and Powers developed and demonstrated the single-crystal method. All experiments in the early years were performed using radium and beryllium sources, whose neutron flux was not sufficient for the characterization of materials. Neutron reactors eventually had to be constructed to increase the flux of neutrons and allow complete characterization of the material being examined. Between the mid and late 1940s neutron sources began to appear in countries such as Canada, the UK and elsewhere in Europe. In 1951 Shull and Wollan presented a paper discussing the scattering lengths of 60 elements and isotopes, which opened the field of neutron diffraction to the broad range of structural information it can provide. The first neutrons for early experiments came from radium and beryllium sources; the problem, as already mentioned, was that the flux was not sufficient for large experiments such as the determination of the structure of an unknown material. Nuclear reactors started to emerge in the early 1950s and had a great impact on the scientific field. In the 1960s neutron reactors were constructed according to the flux required for the production of neutron beams. In the USA the first constructed was the High Flux Beam Reactor (HFBR). This was followed by one at Oak Ridge Laboratory (HFIR), which was also intended for isotope production, and a couple of years later by the ILL.
This last reactor, built through a collaboration between Germany and France, is the most powerful so far; no more powerful reactor has been constructed since. These nuclear reactors greatly increased the neutron flux. It has been suggested that the best route to still greater flux is to pursue other approaches to neutron production, such as accelerator-driven sources. These could greatly increase the flux of neutrons, and in addition would make other kinds of experiments possible. The key process in these devices is spallation, which increases the number of neutrons ejected per incident proton while the energy released is minimal. Currently there are several such sources around the world, and investigations continue into the best approach to neutron production. Although neutrons are excellent particles for determining the complete structures of materials, they have some disadvantages. They are scattered relatively weakly, especially by soft materials. This is a significant concern because weak scattering can lead to misinterpretation in the analysis of the structure of the material. Neutrons can penetrate deep below the surface of the material being examined, primarily because their interaction with the material is a nuclear interaction with the nuclei of its atoms. This interaction is much stronger than that of electrons, which is only electrostatic. The interaction between the electrons of the material and the magnetic moment of the neutron must also be considered. All of these interactions are of great advantage for structure determination, since neutrons interact with every single nucleus in the material. The problem comes at the detection stage: because neutrons are uncharged, they are difficult to detect directly.
For this reason, neutrons must undergo a nuclear reaction to generate charged particles (ions). Some of the reactions usually used for the detection of neutrons are:\[ n\ +\ ^{3}He \rightarrow \ ^{3}H\ +\ ^{1}H\ +\ 0.764 MeV \label{3} \]\[ n\ +\ ^{10}B \rightarrow \ ^{7}Li\ +\ ^{4}He\ +\ \gamma \ +\ 2.3 MeV \label{4} \]\[ n\ +\ ^{6}Li \rightarrow \ ^{4}He\ +\ ^{3}H\ +\ 4.79 MeV \label{5} \]The first two reactions apply when the detection is performed in a gas, whereas the third is carried out in a solid. Each of these reactions has a large cross section, which makes them ideal for neutron capture. Neutron detection depends strongly on the velocity of the particles: as velocity increases, shorter wavelengths are produced and detection becomes less efficient. The detection events need to occur as close together as possible to give an accurate signal, and the signal must be quickly transduced so that the detector is ready for the next measurement. In gas detectors the cylinder is filled with either 3He or BF3; the electrons produced by secondary ionization are collected at the positively charged anode wire. One disadvantage of this detector is that a precisely defined thickness cannot be attained, since it is very difficult to fix the thickness of a gas. In contrast, in scintillator detectors, since detection occurs in a solid, any thickness can be obtained; the thinner the solid, the more efficient the detection becomes. Usually the absorber is 6Li and the substrate that detects the products is a phosphor, which exhibits luminescence. The emission of light from the phosphor results from its excitation as the product ions pass through the scintillator.
The signal produced is then collected and transduced into an electrical signal to register that a neutron has been detected. One of the greatest features of neutron scattering is that neutrons are scattered by every atomic nucleus in the material, whereas in X-ray studies scattering is from the electron density. In addition, neutrons can be scattered by the magnetic moments of the atoms. The intensity of the scattered neutrons depends on the wavelength at which they are emitted from the source. shows how a neutron is scattered by the target when the incident beam hits it. The incident beam encounters the target, and the scattered wave produced by the collision is detected at a position defined by the angles θ, ϕ, which define the solid angle dΩ. In this scenario it is assumed that no energy is transferred between the nuclei of the atoms and the ejected neutron, which leads to elastic scattering. When calculating diffracted intensities, the cross-sectional area must be separated into scattering and absorption contributions. Over a moderately large range of energies the scattering cross section is constant, but it varies widely close to a nuclear resonance. When the applied energies are below the resonance, the scattering length and scattering cross section can shift to negative values, depending on the structure being examined; this means there is a phase shift in the scattering, so the scattered wave is not 180° out of phase. When the energies are higher than the resonance, the cross section becomes asymptotic to the area of the nucleus, as expected for spherical structures. There is also resonance scattering when different isotopes are present, because each produces different nuclear energy levels. The atoms in any material are arranged in their own particular way, and the scattered neutrons are accordingly either coherent or incoherent.
It is convenient to determine the differential scattering cross section, given by \ref{6}, where \(\bar{b}\) is the mean scattering length of the atoms, \(k\) is the scattering vector, \(r_{n}\) is the position vector of the nth atom, and \(N\) is the total number of atoms in the structure. This equation can be separated into two parts, corresponding to the coherent and incoherent scattering, as labeled below. Usually the scattering is predominantly coherent, which simplifies the solution of the cross section, but when there is a spread of the scattering lengths about their mean, the additional (incoherent) term must be considered. Incoherent scattering is usually due to the isotopes and nuclear spins of the atoms in the structure.\[ d\sigma /d\Omega \ =\ |\bar{b}|^{2}\ |\Sigma _{n} e^{(ik \cdot r_{n})}\ |^{2}\ +\ N(\overline{b^{2}}\ -\ \bar{b}^{2}) \label{6} \]Coherent term: \[ |\bar{b}|^{2}\ |\Sigma _{n} e^{(ik \cdot r_{n})}\ |^{2} \nonumber \]Incoherent term: \[ N(\overline{b^{2}}\ -\ \bar{b}^{2}) \nonumber \]The ability to distinguish atoms of similar atomic number, or isotopes of the same element, is proportional to the squares of their corresponding scattering lengths. The coherent scattering lengths of many atoms are already known, which makes it straightforward to identify the structure of a sample by neutrons. Neutrons can also locate ions of light elements, down to very low atomic numbers such as hydrogen. The negative scattering length of hydrogen increases the contrast, leading to better identification of it, although hydrogen also has a very large incoherent scattering cross section, which removes intensity from the diffracted beam. As previously mentioned, one of the greatest features of neutron diffraction is that neutrons, because of their magnetic moment, can interact with either the orbital or the spin magnetic moment of the material examined.
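Before turning to magnetic scattering, the coherent/incoherent split in \ref{6} can be illustrated numerically for hydrogen. The two spin-state scattering lengths used below (triplet b+ ≈ 10.85 fm with weight 3/4, singlet b− ≈ −47.5 fm with weight 1/4) are the values found in standard neutron data tables; averaging them reproduces hydrogen's small coherent and very large incoherent cross sections.

```python
import math

# Coherent and incoherent cross sections from the spread of scattering
# lengths, following Eq. (6): sigma_coh ∝ b_mean², sigma_inc ∝ <b²> − b_mean².
lengths = [10.85, -47.5]   # 1H spin-state scattering lengths, fm
weights = [0.75, 0.25]     # statistical weights of triplet and singlet states

b_mean = sum(w * b for w, b in zip(weights, lengths))
b2_mean = sum(w * b * b for w, b in zip(weights, lengths))

sigma_coh = 4 * math.pi * b_mean ** 2              # fm^2
sigma_inc = 4 * math.pi * (b2_mean - b_mean ** 2)  # fm^2

# 1 barn = 100 fm^2
print(round(sigma_coh / 100, 2))  # ≈ 1.76 barn
print(round(sigma_inc / 100, 1))  # ≈ 80.2 barn: hydrogen's large incoherent background
```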
Not every element in the periodic table exhibits a magnetic moment: only those with unpaired electron spins do. When neutrons hit the solid, scattering is produced from the magnetic moment vector as well as from the scattering vector of the neutron itself. shows the different vectors produced when the incident beam hits the solid. When considering the coherent magnetic diffraction peaks, the magnetic contribution to the differential cross section is \(p^{2}q^{2}\) for an unpolarized incident beam. The magnetic structure amplitude is then given by \ref{9}, where \(q_{n}\) is the magnetic interaction vector, \(p_{n}\) is the magnetic scattering length, and the remaining terms give the positions of the atoms in the unit cell. When the term \(F_{mag}\) is squared, the result is the intensity of the magnetic contribution from the peak analyzed. This equation applies only to those elements whose atoms possess a magnetic moment.\[ F_{\text{mag}}\ =\ \Sigma p_{n}q_{n} e^{2\pi i(hx_{n}\ +\ ky_{n}\ +\ lz_{n})} \label{9} \]Magnetic diffraction becomes very important because of its d-spacing dependence. Since the electrons produce the dominant effect in magnetic scattering, the forward scattering is stronger than the backward scattering. As in X-ray diffraction, interference between the atoms can develop, so a structure factor must also be considered; these interference effects arise from the wide spread of the electron distribution relative to the wavelength of the thermal neutrons. This factor falls off more quickly than in X-rays because the beam interacts only with the outer electrons of the atoms. In neutron diffraction there is no unique protocol of factors to be considered, such as temperature, electric field and pressure, to name a few.
The parameters are assigned depending on the type of material and the data sought. Temperatures as high as 1800 K, or as low as 4 K, can be reached. Special equipment is needed to reach these extreme temperatures; for example, a He refrigerator is commonly used for very low temperatures. For high temperatures, vacuum furnaces are used with a cylindrical heating element of vanadium (V), niobium (Nb), tantalum (Ta) or tungsten (W), attached to copper bars which hold the sample. shows the design of the vacuum furnaces used for the analysis. The metal that works best in the desired temperature range is chosen as the heating element. Vanadium is the metal most commonly used because it minimizes unwanted contributions such as coherent scattering; with this metal such scattering is almost completely suppressed. Another important consideration with these furnaces is that the material being examined must not decompose under vacuum: the crystal needs to be as stable as possible while it is being analyzed. Samples that cannot withstand a vacuum environment are heated in the presence of a gas such as nitrogen or argon. Sample preparation for neutron diffraction usually requires large crystals, rather than the small ones sufficient for X-ray studies; this is one of the main disadvantages of the technique. Most experiments are carried out using a four-circle diffractometer, mainly because many experiments are performed at very low temperatures, where a He refrigerator is needed to obtain good spectra. First, the crystal being analyzed is mounted on a quartz slide, which needs to be a couple of millimeters in size.
It is then inserted into the sample holder, which is chosen according to the temperature range required. Neutrons can also be used to analyze powder samples; these are prepared by grinding the sample into a very fine powder, which is then placed on the quartz slide as for single crystals. The main concern with this method is that grinding the sample into a powder can alter its structure. Neutron diffraction is a great technique for the complete characterization of molecules containing light elements, and is also very useful for those containing different isotopes. Because neutrons interact with the nuclei of the atoms rather than with the outer electrons, as X-rays do, the data obtained are more reliable. In addition, because of the magnetic moment of the neutron, magnetic compounds can be characterized. There are several disadvantages as well; one of the most critical is that a substantial amount of sample is needed for analysis by this technique, and great amounts of energy are needed to produce large fluxes of neutrons. Several powerful neutron sources have been developed to allow the study of larger molecules with smaller quantities of sample, but there is still a need for devices that can produce greater flux in order to analyze more sophisticated samples. Neutron diffraction has been widely studied because it works in tandem with X-ray studies for the characterization of crystalline samples. The value of the technique would be greatly increased if some of its disadvantages were solved; for example, molecules that exhibit intermolecular forces could be characterized, because neutrons can precisely locate the hydrogen atoms in a sample.
Neutrons give a better picture of the chemical interactions present within a molecule, whereas X-rays help to give an idea of the macromolecular structure of the samples being examined. This page titled 7.5: Neutron Diffraction is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.6: XAFS
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/07%3A_Molecular_and_Solid_State_Structure/7.06%3A_XAFS
X-ray absorption fine structure (XAFS) spectroscopy includes both X-ray absorption near edge structure (XANES) and extended X-ray absorption fine structure (EXAFS) spectroscopies. The two techniques differ in the region of the spectrum analyzed, as shown in , and in the information each provides. The complete XAFS spectrum is collected across an energy range from around 200 eV before the absorption edge of interest to about 1000 eV after it ). The absorption edge is defined as the X-ray energy at which the absorption coefficient increases sharply; this energy is equal to the energy required to excite an electron to an unoccupied orbital. X-ray absorption near edge structure (XANES) is used to determine the valence state and coordination geometry, whereas extended X-ray absorption fine structure (EXAFS) is used to determine the local molecular structure of a particular element in a sample. XANES is the part of the absorption spectrum closest to the absorption edge, covering approximately -50 eV to +200 eV relative to the edge energy ). Because the shape of the absorption edge is related to the density of states available for the excitation of the photoelectron, the binding geometry and the oxidation state of the atom affect the XANES part of the absorption spectrum. Before the absorption edge there is a linear, smooth region. The edge itself appears as a step, which may carry additional features such as isolated peaks, shoulders, or a white line, a strong peak sitting on the edge. These features give information about the atom. For example, the presence of a white line indicates that after the electron is released, the atomic states of the element are confined by the potential it feels; this sharp peak would be smoothed if the atom could enter into any kind of resonance. The position of the absorption edge also provides important information.
Atoms in a higher oxidation state have fewer electrons than protons, so the energy states of the remaining electrons are lowered slightly, which shifts the absorption edge up to several eV toward higher X-ray energy. The EXAFS part of the spectrum is the oscillatory part of the absorption coefficient, extending to around 1000 eV above the absorption edge. This region is used to determine the molecular bonding environment of the element: EXAFS gives the types and numbers of atoms coordinating a specific atom and their interatomic distances. The atoms at the same radial distance from a given atom form a shell, and the number of atoms in the shell is the coordination number (e.g., ). The EXAFS signal arises from the scattering of the photoelectron generated at the central atom; the phase of the signal is determined by the distance and the path the photoelectron travels. A simple scheme of the different paths is shown by . In the case of two shells around the central atom, there is a degeneracy of four for the path from the central atom to the first shell, a degeneracy of four for the path from the central atom to the second shell, and a degeneracy of eight for the path from the central atom to the first shell, then to the second shell, and back to the central atom. The analysis of EXAFS spectra is accomplished using Fourier transformation to fit the data to the EXAFS equation. The EXAFS equation is a sum of the contributions from all scattering paths of the photoelectron \ref{1}, where each path is given by \ref{2}.\[ \chi (k)\ =\ \sum_{i} \chi _{i}(k) \label{1} \]\[ \chi _{i} (k) \equiv \frac{(N_{i}S_{0}^{2})F_{eff_{i}}(k)}{kR^{2}_{i}} \sin[2kR_{i}\ +\ \phi _{i}(k)] e^{-2\sigma ^{2}_{i} k^{2}} e^{\frac{-2R_{i}}{\lambda (k)}} \label{2} \]The terms \(F_{eff_{i}}(k)\), \(\phi _{i}(k)\), and \(\lambda _{i}(k)\) are the effective scattering amplitude of the photoelectron, the phase shift of the photoelectron, and the mean free path of the photoelectron, respectively.
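A single term of the EXAFS sum, \ref{2}, can be sketched directly. The parameter values below are hypothetical placeholders chosen only to show the structure of the equation; in practice Feff(k), φ(k) and λ(k) come from tabulated scattering functions (e.g., FEFF calculations) and the structural parameters N, R and σ² are fitted to the data.

```python
import math

# One term of the EXAFS sum, Eq. (2), with illustrative parameters:
# N coordination number, S02 amplitude reduction, R half path length (Å),
# sigma2 Debye-Waller factor (Å²), lam mean free path (Å).
def exafs_path(k, N=6, S02=0.9, R=2.0, sigma2=0.003,
               F_eff=1.0, phi=0.0, lam=8.0):
    """chi_i(k) for a single scattering path; k in 1/Å."""
    amp = N * S02 * F_eff / (k * R ** 2)
    return (amp * math.sin(2 * k * R + phi)
            * math.exp(-2 * sigma2 * k ** 2)   # thermal/static disorder damping
            * math.exp(-2 * R / lam))          # finite photoelectron lifetime

# The total chi(k), Eq. (1), is the sum of such terms over all paths.
ks = [3 + 0.5 * i for i in range(20)]
chi = [exafs_path(k) for k in ks]
print(len(chi))  # 20 sampled points of an oscillatory, damped signal
```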
The term \(R_{i}\) is the half path length of the photoelectron (the distance between the central atom and a coordinating atom for a single-scattering event), and \(k^{2}\) is given by \ref{3}. The remaining variables are frequently determined by modeling the EXAFS spectrum.\[ k^{2}\ = \frac{2m_{e}(E-E_{0}\ +\ \Delta E_{0})}{\hbar ^{2}} \label{3} \]The adsorption of arsenic species onto iron oxide offers an example of the information that can be obtained by EXAFS. Because of the huge impact that arsenic in water can have on society, there has been a great deal of research into the adsorption of arsenic on many kinds of materials, in particular nanomaterials. Some of the most promising materials for this kind of application are iron oxides, and the mechanism of arsenic coordination onto the surfaces of these materials has lately been studied using X-ray absorption spectroscopy. There are several ways that arsenate (AsO43−, ) can be adsorbed onto the surfaces. shows the three ways that Sherman proposes arsenate can be adsorbed onto goethite (α-FeOOH): bidentate corner-sharing (2C), bidentate edge-sharing (2E) and monodentate corner-sharing (1V) configurations. shows that bidentate corner-sharing (2C) is the configuration that corresponds with the calculated parameters, not only for goethite but for several iron oxides. Several studies have confirmed that bidentate corner-sharing (2C) is present in arsenate adsorption, and that a similar complex, tridentate corner-sharing (3C), forms on arsenite adsorption onto most iron oxides, as shown in . Table \(\PageIndex{1}\) shows the coordination numbers and distances reported in the literature for As(III) and As(V) onto goethite.This page titled 7.6: XAFS is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R.
Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.7: Circular Dichroism Spectroscopy and its Application for Determination of Secondary Structure of Optically Active Species
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/07%3A_Molecular_and_Solid_State_Structure/7.07%3A_Circular_Dichroism_Spectroscopy_and_its_Application_for_Determination_of_Secondary_Structure_of_Optically_Active_Species
Circular dichroism (CD) spectroscopy is one of the few structure-assessment methods that can be used as an alternative to, and an amplification of, many conventional analysis techniques, with advantages such as rapid data collection and ease of use. Since most of the effort and time spent in advancing the chemical sciences is devoted to the elucidation and analysis of the structure and composition of synthesized molecules or isolated natural products, rather than their preparation, one should be aware of all the relevant techniques available and know which instrument can be employed as an alternative to any other technique. The aim of this module is to introduce the CD technique and discuss what kind of information can be collected using CD. Additionally, the advantages of CD compared to other analysis techniques, and its limitations, will be shown. Since CD spectroscopy can analyze only optically active species, it is convenient to start the module with a brief introduction to optical activity. In nature almost every life form is handed, meaning that there is a certain degree of asymmetry, just as in our hands: one cannot superimpose the right hand on the left, because they are non-identical mirror images of one another. So it is with chiral (handed) molecules: they exist as enantiomers, which are mirror images of each other ). One interesting phenomenon related to chiral molecules is their ability to rotate the plane of polarized light. This optical activity is used to determine the specific rotation, \( [\alpha ]^{T}_{\lambda} \), of a pure enantiomer, a feature used in polarimetry to find the enantiomeric excess (ee) present in a sample. Circular dichroism (CD) spectroscopy is a powerful yet straightforward technique for examining different aspects of optically active organic and inorganic molecules. Circular dichroism has applications in a variety of modern research fields ranging from biochemistry to inorganic chemistry.
Such widespread use of the technique arises from its essential property of providing structural information that cannot be acquired by other means. Another laudable feature of CD is that it is a quick, easy technique that makes analysis a matter of minutes. Nevertheless, just like all methods, CD has a number of limitations, which will be discussed while comparing CD to other analysis techniques. CD spectroscopy and related techniques were once considered esoteric analysis methods needed by, and accessible to, only a small group of professionals. To make the reader more familiar with the technique, the principle of operation of CD, its several types, and related techniques will first be shown; afterwards, sample preparation and instrument use will be covered for a protein secondary-structure case study. Depending on the light source used for the generation of circularly polarized light, there are:In the CD spectrometer the sample is placed in a cuvette and a beam of light is passed through it. The light (in the present context all electromagnetic waves will be referred to as light) coming from the source is subjected to circular polarization, meaning that its plane of polarization is made to rotate either clockwise (right circular polarization) or anti-clockwise (left circular polarization) with time while propagating, see . The sample is first irradiated with left circularly polarized light, and the absorption is determined by \ref{1}. A second irradiation is performed with right circularly polarized light. Due to the intrinsic asymmetry of chiral molecules, they interact with circularly polarized light differently according to the direction of rotation: there will be a tendency to absorb more strongly for one of the rotation directions.
The difference between the absorption of left and right circularly polarized light is the quantity measured, obtained from \ref{2}, where εL and εR are the molar extinction coefficients for left and right circularly polarized light, c is the molar concentration, and l is the path length of the cuvette (in cm). The difference in absorption can be related to the difference in extinction, Δε, by \ref{3}.\[ A\ = \varepsilon c l \label{1} \]\[ \Delta A\ =\ A_{L}-A_{R}\ =\ (\varepsilon _{L}\ -\ \varepsilon _{R} ) c l \label{2} \]\[ \Delta \varepsilon \ =\ \varepsilon _{L} \ -\ \varepsilon _{R} \label{3} \]For historical reasons, CD is usually reported not as a difference in absorption or extinction coefficients but as the degree of ellipticity, [θ]. The relationship between [θ] and Δε is given by \ref{4}.\[ [\theta ]\ =\ 3298 \Delta \varepsilon \label{4} \]Since the absorption is monitored over a range of wavelengths, the output is a plot of [θ] versus wavelength or Δε versus wavelength. The figure shows the CD spectrum of Δ–[Co(en)3]Cl3. Magnetic circular dichroism (MCD) is a sister technique to CD, but with several distinctions. MCD is a powerful method for studying the magnetic properties of materials and has recently been employed for the analysis of an iron-nitrogen compound, the strongest magnet known. Moreover, MCD and its variation, variable-temperature MCD, are complementary techniques to Mossbauer spectroscopy and electron paramagnetic resonance (EPR) spectroscopy. Hence, these techniques can usefully amplify the chapter about Mossbauer and EPR spectroscopy. Linear dichroism (LD) is another closely related technique to CD, in which the difference between the absorbance of perpendicularly and parallel polarized light is measured. In this technique the plane of polarization of the light does not rotate. LD is used to determine the orientation of absorbing parts in space. Just like any other instrument, CD has its strengths and limits.
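The relations in \ref{1}–\ref{4} are simple enough to sketch numerically. The values below are hypothetical illustration numbers, not data from the text:

```python
# Sketch of the CD absorbance/ellipticity relations (hypothetical values).

def delta_epsilon(delta_A, c, l):
    """Delta-epsilon (M^-1 cm^-1) from Delta-A = (eps_L - eps_R) * c * l."""
    return del_A if False else delta_A / (c * l)

def molar_ellipticity(d_eps):
    """[theta] = 3298 * Delta-epsilon (deg cm^2 dmol^-1)."""
    return 3298 * d_eps

delta_A = 3.298e-4   # measured A_L - A_R (dimensionless)
c = 1.0e-4           # molar concentration (mol/L)
l = 1.0              # path length (cm)

d_eps = delta_epsilon(delta_A, c, l)
print(d_eps)                     # ≈ 3.298 M^-1 cm^-1
print(molar_ellipticity(d_eps))  # ≈ 1.09e4 deg cm^2/dmol
```

Note the linear chain: a measured ΔA, together with the known concentration and path length, yields Δε, which converts to [θ] by a constant factor.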
The comparison between CD and NMR shown in Table \(\PageIndex{1}\) gives a good sense of the capabilities of CD. One effective way to demonstrate the capabilities of CD spectroscopy is to cover the protein secondary structure study case, since CD spectroscopy is a well-established technique for the elucidation of the secondary structure of proteins, as well as other macromolecules. Using CD one can estimate the degree of conformational order (what percent of the sample protein is in the α-helix and/or β-sheet conformation). There are several key points for the visual estimation of secondary structure by looking at a CD spectrum. Since the CD spectra of proteins uniquely represent their conformation, CD can be used to monitor structural changes (due to complex formation, folding/unfolding, denaturation because of a rise in temperature, denaturants, changes in amino acid sequence/mutation, etc.) in dynamic systems and to study protein kinetics. In other words, CD can be used to perform stability investigations and interaction modeling. The figure shows a typical CD instrument. Most proteins and peptides require buffers in order to prevent denaturation. Care should be taken to avoid using any optically active buffers. Clear solutions are required. CD is taken in high-transparency quartz cuvettes to ensure the least interference. Cuvettes are available with path lengths ranging from 0.01 cm to 1 cm. Depending on the UV activity of the buffers used, one should choose a cuvette with a path length (the distance the beam of light passes through the sample) that compensates for the UV absorbance of the buffer. Solutions should be prepared according to the cuvette that will be used, see Table \(\PageIndex{2}\). Besides, just like the salts used to prepare pellets in FT-IR, the buffers in CD will show cutoffs at a certain point in the low-wavelength region, meaning that the buffers start to absorb below a certain wavelength. The cutoff values for most common buffers are known and can be found from the manufacturer.
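The cuvette-selection guidance above follows directly from Beer's law: absorbance scales linearly with both path length and concentration, so a shorter cuvette with a proportionally more concentrated sample keeps the sample signal constant while cutting the buffer's contribution. A minimal sketch (the extinction coefficient and buffer absorbance are hypothetical):

```python
# Beer's law trade-off between path length and concentration.
# All numbers are hypothetical illustration values.

def absorbance(epsilon, c, l):
    """A = epsilon * c * l, for epsilon (M^-1 cm^-1), c (mol/L), l (cm)."""
    return epsilon * c * l

eps_sample = 3.0e4      # hypothetical protein extinction coefficient
buffer_A_per_cm = 0.50  # hypothetical buffer absorbance per cm of path

for l, c in [(1.0, 1.0e-5), (0.1, 1.0e-4)]:
    sample_A = absorbance(eps_sample, c, l)
    buffer_A = buffer_A_per_cm * l
    print(l, sample_A, buffer_A)
# The sample absorbance is 0.3 in both cases, while the buffer
# contribution drops tenfold (0.5 -> 0.05) in the short cuvette.
```

This is why Table \(\PageIndex{2}\)-style preparation guides pair each path length with its own target concentration.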
Oxygen absorbs light below 200 nm. Therefore, in order to remove this interference, buffers should be prepared from distilled water, or the water should be degassed before use. Another important point is to accurately determine the concentration of the sample, because the concentration must be known for CD data analysis. The concentration of the sample can be determined from extinction coefficients, if these are reported in the literature; for protein samples, quantitative amino acid analysis can also be used. Many CD instruments come bundled with a sample-compartment temperature control unit. This is very handy when doing stability and unfolding/denaturation studies of proteins. Check to make sure the heat sink is filled with water. Turn the temperature control unit on and set it to the chosen temperature. The UV source in a CD instrument is a very powerful lamp that can generate large amounts of ozone in its chamber. Ozone significantly reduces the life of the lamp. Therefore, oxygen should be removed before turning on the main lamp (otherwise it will be converted to ozone near the lamp). For this purpose nitrogen gas is constantly flushed into the lamp compartment. Let the nitrogen flush for at least 15 min before turning on the lamp. After saving the data, the spectra of both the sample and the blank are smoothed using built-in commands of the controller software. The smoothed baseline is subtracted from the smoothed spectrum of the sample. The next step is to use software bundles which have algorithms for estimating the secondary structure of proteins. Input the data into the software package of choice and process it. The output from the algorithms will be the percentage of a particular secondary structure conformation in the sample. The table lists commonly used methods and compares them for several proteins. The estimated secondary structure is compared to X-ray data, and one can see that it is best to use several methods for the best accuracy. What advantages does CD have over other analysis methods?
CD spectroscopy is an excellent, rapid method for assessing the secondary structure of proteins and performing studies of dynamic systems like the folding and binding of proteins. It is worth noting that CD does not provide information about the position of the subunits with a specific conformation. However, CD outrivals other techniques in rapidly assessing the structure of unknown protein samples and in monitoring structural changes of known proteins caused by ligation and complex formation, temperature change, mutations, or denaturants. CD is also widely used to compare fused proteins with their wild-type counterparts, because CD spectra can tell whether the fused protein retained the structure of the wild type or underwent changes. This page titled 7.7: Circular Dichroism Spectroscopy and its Application for Determination of Secondary Structure of Optically Active Species is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.8: Protein Analysis using Electrospray Ionization Mass Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/07%3A_Molecular_and_Solid_State_Structure/7.08%3A_Protein_Analysis_using_Electrospray_Ionization_Mass_Spectroscopy
Electrospray ionization-mass spectrometry (ESI-MS) is an analytical method that focuses on macromolecular structural determination. The unique component of ESI-MS is the electrospray ionization. The development of electrospraying, the process of charging a liquid into a fine aerosol, was completed in the 1960s, when Malcolm Dole demonstrated the ability of chemical species to be separated through electrospray techniques. With this important turn of events, the combination of ESI and MS was feasible, and it was later developed by John B. Fenn as a functional analytical method that could provide beneficial information about the structure and size of a protein. Fenn shared the Nobel Prize in 2002 with Koichi Tanaka and Kurt Wuthrich for the development of ESI-MS. ESI-MS is the process through which proteins, or macromolecules, in the liquid phase are charged and fragmented into smaller aerosol droplets. These aerosol droplets lose their solvent and propel the charged fragments into the gas phase in several components that vary by charge. These components can then be detected by a mass spectrometer. The recent boom and development of ESI-MS is attributed to its benefits in characterizing and analyzing macromolecules, specifically biologically important macromolecules such as proteins. ESI-MS is a process that requires the sample to be in liquid solution, so that tiny droplets may be ionized and analyzed individually by a mass spectrometer. As implied by the name, the data produced from this technique is a mass spectrometry spectrum. Without delving too deeply into the topic of mass spectrometry, which is out of the true scope of this module, a slight explanation will be provided here. The mass spectrometer separates particles based on the field created by a quadrupole. The strength of the interaction varies with the charge the particles carry.
The amount of deflection or strength of interaction is determined at the ion detector and quantified into a mass/charge (m/z) ratio. With this information, the determination of chemical composition or peptide structure can easily be managed, as explained in greater detail in the following section. Interpreting the mass spectrometry data involves understanding the m/z ratio. The key to interpreting the spectrum is that the peaks correspond to portions of the whole molecule. That is to say, hypothetically, if you put a human body in the mass spectrometer, one peak would coincide with one arm, another peak would coincide with the arm and the abdomen, etc. The general idea behind these peaks is that an overlay would paint the entire picture, or in the case of the hypothetical example, provide the image of the human body. The m/z ratio defines these portions based on the charges carried by them; thus the terminology of the mass/charge ratio. The more charges a portion of the macromolecule or protein holds, the smaller the m/z ratio will be and the farther left it will appear on the spectrum. The fundamental concept behind interpretation is that the peaks are interrelated, and thus calculations may be carried out to provide relevant information about the protein or macromolecule being analyzed. As mentioned above, the pertinent information to be obtained from the ESI-MS data is extrapolated from the understanding that the peaks are interrelated. The steps for calculating the data are as follows:1. Determine which two neighboring peaks will be analyzed from the mass spectrum; here, the m/z = 5 and m/z = 10 peaks. 2. Establish the first peak (the one farthest left) as the z + 1 peak (i.e., the peak at m/z = 5).3. Establish the adjacent peak to the right of the first peak as the z peak (i.e., the peak at m/z = 10).4. Establish the peak ratios, \ref{1} and \ref{2}.\[ \frac{m+1}{z+1} =\ 5 \label{1} \]\[ \frac{m}{z} = 10 \label{2} \]5. 
Solve the ratios for m: \ref{3} and \ref{4}.\[ m\ =\ 5z\ +\ 4 \label{3} \]\[ m\ =\ 10z \label{4} \]6. Set the two expressions for m equal: \ref{5}.\[ 5z\ +\ 4\ =\ 10z \label{5} \]7. Solve for z: \ref{6}.\[ z\ = 4/5 \label{6} \]8. Find z + 1: \ref{7}.\[ z\ +\ 1\ =\ 9/5 \label{7} \]9. Find the average molecular mass by subtracting 1 from each peak's m/z value and multiplying by the corresponding charge: \ref{8} and \ref{9}. Hence, the average mass = 7.2.\[ (10\ -\ 1)(4/5)\ =\ 7.2 \label{8} \]\[ (5\ -\ 1)(9/5)\ =\ 7.2 \label{9} \]Samples for ESI-MS must be in a liquid state. This requirement provides the necessary medium to easily charge the macromolecules or proteins into a fine aerosol state that can be easily fragmented to provide the desired outcomes. The benefit of this technique is that solid proteins that were once difficult to analyze, like metallothionein, can be dissolved in an appropriate solvent that allows analysis through ESI-MS. Because the sample is delivered into the system as a liquid, the capillary can easily charge the solution to begin fragmentation of the protein into smaller fractions. The maximum charge of the capillary is approximately 4 kV. However, this amount of charge is not necessary for every macromolecule. The appropriate charge depends on the size and characteristics of the solvent and of each individual macromolecule. This has removed the molecular weight limit that once held for simple mass spectrometry analysis of proteins. Large proteins and macromolecules can now easily be detected and analyzed through ESI-MS due to the facility with which the molecules fragment. A related technique that was developed at approximately the same time as ESI-MS is matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS).
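The numbered peak-pair arithmetic worked through above generalizes to any two adjacent peaks: if the left peak (charge z + 1) appears at m/z = p_left and the right peak (charge z) at m/z = p_right, then z = (p_left − 1)/(p_right − p_left), and the average mass follows from either peak. A small sketch (the function name is our own, and each charge is assumed to be carried by one proton of mass ~1, as in the text's example):

```python
# Charge deconvolution from two adjacent ESI-MS peaks, following the
# numbered steps above. Assumes one proton (mass ~1) per charge.

def deconvolute(p_left, p_right):
    """p_left: m/z of the left (z+1) peak; p_right: m/z of the right (z) peak.
    Returns (z, average_mass)."""
    z = (p_left - 1) / (p_right - p_left)
    mass = (p_right - 1) * z
    return z, mass

z, mass = deconvolute(5, 10)
print(z, mass)  # z = 0.8 and mass ≈ 7.2, matching the worked example above
```

The fractional charge falls out of the text's deliberately simple numbers; for a real protein spectrum z comes out as an integer and the same two-peak algebra applies.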
This technique, which was developed in the late 1980s as well, serves the same fundamental purpose: allowing analysis of large macromolecules via mass spectrometry through an alternative route of generating the necessary gas phase for analysis. In MALDI-MS, a matrix, usually composed of crystallized 3,5-dimethoxy-4-hydroxycinnamic acid, water, and an organic solvent, is used to mix the analyte, and a laser is used to charge the matrix. The matrix then co-crystallizes the analyte, and pulses of the laser are used to cause desorption of the matrix and some of the analyte crystals with it, leading to ionization of the crystals and the phase change into the gaseous state. The analytes are then read by the tandem mass spectrometer. Table \(\PageIndex{1}\) directly compares some attributes of ESI-MS and MALDI-MS. It should be noted that there are several variations of both ESI-MS and MALDI-MS, with the methods of data collection varying and the piggy-backing of several other methods (liquid chromatography, capillary electrophoresis, inductively coupled plasma mass spectrometry, etc.), yet all of them have the same fundamental principles as these two basic methods. ESI-MS has proven useful in the determination of tertiary structure and molecular weight calculations of large macromolecules. However, there are still several problems associated with the technique and macromolecule analysis. One problem is the isolation of the desired protein for analysis. If the protein cannot be extracted from the cell (this is usually done through gel electrophoresis), there is a limiting factor in what proteins can be analyzed. Cytochrome c is an example of a protein that can be isolated and analyzed, but it provides an interesting illustration of how the analytical technique falls short of a completely effective protein analysis.
The problem with cytochrome c is that even if the protein is in its native conformation, it can still show different charge distributions. This occurs due to the availability of basic sites for protonation that are consistently exposed to the solvent. Any slight change to the native conformation may cause basic sites, such as those in cytochrome c, to be blocked, causing different m/z ratios to be seen. Another interesting limitation is seen when inorganic elements are analyzed using ESI-MS, such as in metallothioneins, proteins that contain zinc. Metallothioneins have several isoforms that show no consistent trend in ESI-MS data between the varied isoforms. The marked differences occur because the metallation of each isoform is different, which causes the electrospraying, and as a result the protonation of the protein, to differ. Thus, incorporation of metal atoms in proteins can have various effects on ESI-MS data due to the unexpected interactions between the metal center and the protein itself. This page titled 7.8: Protein Analysis using Electrospray Ionization Mass Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.9: The Analysis of Liquid Crystal Phases using Polarized Optical Microscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/07%3A_Molecular_and_Solid_State_Structure/7.09%3A_The_Analysis_of_Liquid_Crystal_Phases_using_Polarized_Optical_Microscopy
Liquid crystals are a state of matter with properties between those of a solid crystal and a common liquid. There are basically three different types of liquid crystal phases, of which thermotropic LCs are the most widely used; thermotropic LCs can in turn be divided into five categories. Thermotropic LCs are very sensitive to temperature. If the temperature is too high, thermal motion will destroy the ordering of the LC and push it into a liquid phase. If the temperature is too low, thermal motion is restricted, and the material becomes a crystalline phase. The existence of a liquid crystal phase can be detected using polarized optical microscopy, since a liquid crystal phase exhibits a unique texture under the microscope. The contrasting areas in the texture correspond to domains where the LCs are oriented in different directions. Polarized optical microscopy is typically used to detect the existence of liquid crystal phases in a solution. The principle rests on the polarization of light. A polarizer is a filter that only permits light oriented in a specific direction relative to its polarizing direction to pass through. There are two polarizers in a polarizing optical microscope (POM), and they are designed to be oriented at a right angle to each other, which is termed crossed polars. The fundamentals of crossed polars are illustrated in the figure: the polarizing direction of the first polarizer is oriented vertically to the incident beam, so only waves with a vertical direction can pass through it. The passed wave is subsequently blocked by the second polarizer, since this polarizer is oriented horizontally to the incident wave. A birefringent, or doubly-refracting, sample has a unique property: it produces two individual wave components from a single wave passing through it; these two components are termed the ordinary and extraordinary waves.
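The crossed-polar geometry described above can be expressed with Malus's law, I = I₀ cos²θ, where θ is the angle between the light's polarization plane and the polarizer's transmission axis: at 90° (crossed polars) the transmitted intensity drops essentially to zero, unless a birefringent specimen in between rotates the polarization. A minimal sketch (the intensity values are illustrative):

```python
import math

# Malus's law: intensity transmitted by a polarizer whose transmission
# axis makes angle theta with the light's polarization plane.

def malus(I0, theta_deg):
    return I0 * math.cos(math.radians(theta_deg)) ** 2

I0 = 1.0
print(malus(I0, 0))    # parallel polarizers: full transmission (1.0)
print(malus(I0, 90))   # crossed polars: ~0 (dark field, no specimen)
print(malus(I0, 45))   # polarization rotated 45 deg by a specimen: ~0.5
```

This is why the field is dark under crossed polars and only birefringent domains, which rotate the polarization between the two filters, appear bright.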
The figure is an illustration of a typical construction of a Nicol polarizing prism; as we can see, the non-polarized white light is split into two rays as it passes through the prism. The one that travels out of the prism is called the ordinary ray, and the other one is called the extraordinary ray. So if we have a birefringent specimen located between the polarizer and analyzer, the initial light will be separated into two waves when it passes through the specimen. After exiting the specimen, the light components are out of phase, but they are recombined with constructive and destructive interference when they pass through the analyzer. The combined wave is now elliptically or circularly polarized; image contrast arises from the interaction of plane-polarized light with a birefringent specimen, so some amount of the wave passes through the analyzer and gives a bright domain on the specimen. The most common application of LCs is in liquid crystal displays (LCDs). The figure is a simple demonstration of how an LCD works in a digital calculator. There are two crossed polarizers in this system, and a liquid crystal (cholesteric spiral pattern) sandwich with positive and negative charging is located between these two polarizers. When the liquid crystal is charged, waves pass through without changing orientation. When the liquid crystal is not charged, waves are rotated 90° as they pass through the LC, so they can pass through the second polarizer. There are seven separately charged electrodes in the LCD, so the LCD can exhibit the different numbers from 0 to 9 by adjusting the electrodes. For example, when the upper right and lower left electrodes are charged, we can get a 2 on the display. The first-order retardation plate is frequently utilized to determine the optical sign of a birefringent specimen in polarized light microscopy. The optical sign can be positive or negative; it is positive if the ordinary wavefront is faster than the extraordinary wavefront (see a).
When a first-order retardation plate is added, the structure of the cell becomes much more apparent compared with the one without the retardation plate (b). The figure shows images of liquid crystal phases from different specimens; first-order retardation plates are utilized in all of these images. Clear contrast is detected in each image, corresponding to the existence of a liquid crystal phase within the specimen. The effect of the angle between the horizontal direction and the polarizer transmission axis on the appearance of the liquid crystal phase may also be analyzed. The figure shows images of an ascorbic acid sample under crossed-polar mode. When the polarizer rotates from 0° to 90°, large variations in the shape of the bright domains and in the domain colors appear due to the change in wave vibration directions. By rotating the polarizer, we can gain a comprehensive understanding of the overall texture. This page titled 7.9: The Analysis of Liquid Crystal Phases using Polarized Optical Microscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.1: Microparticle Characterization via Confocal Microscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/08%3A_Structure_at_the_Nano_Scale/8.01%3A_Microparticle_Characterization_via_Confocal_Microscopy
Confocal microscopy was invented by Marvin Minsky in 1955, and subsequently patented in 1961. Minsky was trying to study neural networks to understand how brains learn, and needed a way to image these connections in their natural state (in three dimensions). The utility of his microscope was not fully realized until technology could catch up. In 1973 Egger published the first recognizable images of cells, and the first commercial microscopes were produced in 1987. In the 1990s confocal microscopy became near-routine due to advances in laser technology, fiber optics, photodetectors, thin-film dielectric coatings, computer processors, data storage, displays, and fluorophores. Today, confocal microscopy is widely used in the life sciences to study cells and tissues. Fluorescence is the emission of a secondary photon upon absorption of a photon of shorter wavelength (higher energy). Most molecules at normal temperatures are in the lowest energy state, the so-called 'ground state'. Occasionally, a molecule may absorb a photon and increase its energy to the excited state. From here it can very quickly transfer some of that energy to other molecules through collisions; however, if it cannot transfer enough energy it spontaneously emits a photon with a longer wavelength (lower energy). This is fluorescence. In fluorescence microscopy, fluorescent molecules are designed to attach to specific parts of a sample, thus identifying them when imaged. Multiple fluorophores can be used to simultaneously identify different parts of a sample. There are two options when using multiple fluorophores: fluorophores can be chosen that respond to different wavelengths of a multi-line laser, or fluorophores can be chosen that respond to the same excitation wavelength but emit at different wavelengths. In order to increase the signal, more fluorophores can be attached to a sample.
However, there is a limit, as high fluorophore concentrations result in the fluorophores quenching each other, and too many fluorophores near the surface of the sample may absorb enough light to limit the light available to the rest of the sample. While the intensity of the incident radiation can be increased, fluorophores may become saturated if the intensity is too high. Photobleaching is another consideration in fluorescence microscopy. Fluorophores irreversibly fade when exposed to excitation light. This may be due to reaction of the molecules' excited state with oxygen or oxygen radicals. There has been some success in limiting photobleaching by reducing the oxygen available or by using free-radical scavengers. Some fluorophores are more robust than others, so the choice of fluorophore is very important. Fluorophores today are available that emit photons with wavelengths ranging from 400 - 750 nm. A microscope's lenses project the sample plane onto an image plane. An image can be formed at many image planes; however, we only consider one of these planes to be the 'focal plane' (when the sample image is in focus). When a pinhole screen is placed at the image focal point, it allows in-focus light to pass while effectively blocking light from out-of-focus locations. This pinhole is placed at the conjugate image plane to the focal plane, thus the name "confocal". The size of this pinhole determines the depth-of-focus; a bigger pinhole collects light from a larger volume.
The pinhole can only practically be made as small as approximately the radius of the Airy disk, which is the best possible light spot from a circular aperture, because beyond that more signal is blocked, resulting in a decreased signal-to-noise ratio. In optics, the Airy disk and Airy pattern are descriptions of the best focused spot of light that a perfect lens with a circular aperture can make, limited by the diffraction of light. To further reduce the effect of scattering due to light from other parts of the sample, the sample is only illuminated at a tiny point through the use of a pinhole in front of the light source. This greatly reduces the interference of scattered light from other parts of the sample. The combination of a pinhole in front of both the light source and the detector is what makes confocal microscopy unique. A simple confocal microscope generally consists of a laser, pinhole aperture, dichromatic mirror, scanning mirrors, microscope objectives, a photomultiplier tube, and computing software used to reconstruct the image. Because a relatively small volume of the sample is being illuminated at any given time, a very bright light source must be used to produce a detectable signal. Early confocal microscopes used zirconium arc lamps, but recent advances in laser technology have made lasers in the UV-visible and infrared more stable and affordable. A laser allows for a monochromatic (narrow wavelength range) light source that can be used to selectively excite fluorophores to emit photons of a different wavelength. Sometimes filters are used to further screen for single wavelengths. The light passes through a dichromatic (or "dichroic") mirror which allows light with a shorter wavelength (from the laser) to pass but reflects light of a longer wavelength (from the sample) to the detector.
This allows the light to travel the same path through the majority of the instrument, and eliminates signal due to reflection of the incident light. The light is then reflected across a pair of mirrors or crystals, one each for the x and y directions, which enable the beam to scan across the sample. The speed of the scan is usually the limiting factor in the speed of image acquisition. Most confocal microscopes can create an image in 0.1 - 1 second. Usually the sample is raster scanned quickly in the x-direction and slowly in the y-direction (like reading a paragraph left to right). The rastering is controlled by galvanometers that move the mirrors back and forth in a sawtooth motion. The disadvantage of scanning with the light beam is that the angle of light hitting the sample changes. Fortunately, this change is small. Interestingly, Minsky's original design moved the stage instead of the beam, as it was difficult to maintain alignment of the sensitive optics. The field of view can be made much larger by controlling the amplitude of the stage movements. An alternative to light-reflecting mirrors is the acousto-optic deflector (AOD). The AOD allows for fast x-direction scans by creating a diffraction grating from high-frequency standing sound (pressure) waves which locally change the refractive index of a crystal. The disadvantage of AODs is that the amount of deflection depends on the wavelength, so the emission light cannot be descanned (travel back through the same path as the excitation light). The solution to this is to descan only in the y-direction, controlled by the slow galvanometer, and collect the light in a slit instead of a pinhole. This results in reduced optical sectioning and slight distortion due to the loss of radial symmetry, but good images can still be formed.
Keep in mind this is not a problem for reflected light microscopy, which has the same wavelength for incident and reflected light! Another alternative is the Nipkow disk, which has a spiral array of pinholes that create the simultaneous sampling of many points in the sample. A single rotation covers the entire specimen several times over (at 40 revolutions per second, that's over 600 frames per second). This allows descanning, but only about 1% of the excitation light passes through. This is okay for reflected light microscopy, but the signal is relatively weak and the signal-to-noise ratio is low. The pinholes could be made bigger to increase light transmission, but then the optical sectioning is less effective (remember, the depth of field depends on the diameter of the pinhole) and xy resolution is poorer. Highly responsive, efficient fluorophores are needed with this method. Returning to the confocal microscope, light then passes through the objective, which acts as a well-corrected condenser and objective combination. The illuminated fluorophores fluoresce and the emitted light travels up the objective back to the dichromatic mirror. This is known as epifluorescence, when the incident light has the same path as the detected light. Since the emitted light now has a longer wavelength than the incident light, it cannot pass through the dichromatic mirror and is reflected to the detector. When using reflected light, a beamsplitter is used instead of a dichromatic mirror. Fluorescence microscopy, when used properly, can be more sensitive than reflected light microscopy. Though the signal's position is well-defined according to the position of the xy mirrors, the signal from fluorescence is relatively weak after passing through the pinhole, so a photomultiplier tube is used to detect emitted photons.
Detecting all photons without regard to spatial position increases the signal, and the photomultiplier tube further increases the detection signal by propagating an electron cascade resulting from the photoelectric effect (incident photons kicking off electrons). The resulting signal is an analog electrical signal with continuously varying voltage that corresponds to the emission intensity. This is periodically sampled by an analog-to-digital converter. It is important to understand that the image is a reconstruction of many points sampled across the specimen. At any given time the microscope is only looking at a tiny point, and no complete image exists that can be viewed at an instantaneous point in time. Software is used to recombine these points to form an image plane, and to combine image planes to form a 3-D representation of the sample volume. Two-photon microscopy is a technique whereby two beams of lower intensity are directed to intersect at the focal point. Two photons can excite a fluorophore if they hit it at the same time, but alone they do not have enough energy to excite any molecules. The probability of two photons hitting a fluorophore at nearly the exact same time (within less than 10⁻¹⁶ s) is very low, but it is most likely at the focal point. This creates a bright point of light in the sample without the usual cone of light above and below the focal plane, since there are almost no excitations away from the focal point. To increase the chance of absorption, an ultra-fast pulsed laser is used to create quick, intense light pulses. Since the hourglass shape is replaced by a point source, the pinhole near the detector (used to reduce the signal from light originating from outside the focal plane) can be eliminated. This also increases the signal-to-noise ratio (there is very little noise now that the light source is so focused, but the signal is also small).
These lasers have lower average incident power than normal lasers, which helps reduce damage to the surrounding specimen. This technique can image deeper into the specimen (~400 μm), but these lasers are still very expensive, difficult to set up, require a stronger power supply and intensive cooling, and must be aligned on the same optical table because pulses can be distorted in optical fibers. Confocal microscopy is very useful for determining the relative positions of particles in three dimensions . Software allows measurement of distances in the 3D reconstructions so that information about spacing can be ascertained (such as packing density, porosity, long range order or alignment, etc.). Figure \(\PageIndex{8}\) A reconstruction of a colloidal suspension of poly(methyl methacrylate) (PMMA) microparticles approximately 2 microns in diameter. Adapted from Confocal Microscopy of Colloids, Eric Weeks. If imaging in fluorescence mode, remember that the signal will only represent the locations of the individual fluorophores. There is no guarantee that fluorophores will attach completely to the structures of interest or that there will not be stray fluorophores away from those structures. For microparticles it is often possible to attach the fluorophores to the shell of the particle, creating hollow spheres of fluorophores. It is possible to tell whether a sample sphere is hollow or solid, but this depends on the transparency of the material. Dispersions of microparticles have been used to study nucleation and crystal growth, since colloids are much larger than atoms and can be imaged in real time. Crystalline regions are identified from the order of spheres arranged in a lattice, and regions can be distinguished from one another by noting lattice defects. Self-assembly is another application where time-dependent, 3-D studies can help elucidate the assembly process and determine the position of various structures or materials.
Because confocal is popular for biological specimens, the position of nanoparticles such as quantum dots in a cell or tissue can be observed. This can be useful for determining toxicity, drug-delivery effectiveness, diffusion limitations, etc. Advantages of confocal microscopy: less haze and better contrast than ordinary microscopes; 3-D capability; illuminates a small volume; excludes most of the light from the sample not in the focal plane; depth of field may be adjusted with pinhole size; has both reflected light and fluorescence modes; can image living cells and tissues; fluorescence microscopy can identify several different structures simultaneously; accommodates samples with thickness up to 100 μm; can be used with two-photon microscopy; allows for optical sectioning (no artifacts from physical sectioning) in sections 0.5 - 1.5 μm thick. Disadvantages: images are scanned slowly (one complete image every 0.1-1 second); the sample must be raster scanned, so no complete image exists at any given time; there is an inherent resolution limit because of diffraction (based on numerical aperture, ~200 nm); the sample should be relatively transparent for good signal; high fluorophore concentrations can quench the fluorescent signal; fluorophores irreversibly photobleach; lasers are expensive; the angle of the incident light changes slightly, introducing slight distortion. This page titled 8.1: Microparticle Characterization via Confocal Microscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.2: Transmission Electron Microscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/08%3A_Structure_at_the_Nano_Scale/8.02%3A_Transmission_Electron_Microscopy
Transmission electron microscopy (TEM) is a form of microscopy in which a beam of electrons is transmitted through an extremely thin specimen, interacting with the specimen as it passes through. The formation of images in a TEM can be explained by an optical electron beam diagram in . TEMs provide images with significantly higher resolution than visible-light microscopes (VLMs) because of the smaller de Broglie wavelength of electrons, allowing the examination of detail several thousand times finer than the highest resolution obtainable in a VLM. The contrast in a TEM image, by comparison, arises from the absorption of the electrons in the material, which depends primarily on the thickness and composition of the material. When a crystal lattice spacing (d) is investigated with electrons of wavelength λ, diffracted waves will be formed at specific angles 2θ, satisfying the Bragg condition, \ref{1}.\[ 2dsin\theta \ =\ \lambda \label{1} \]The regular arrangement of the diffraction spots, the so-called diffraction pattern (DP), can be observed. When the transmitted and the diffracted beams interfere on the image plane, a magnified image (electron microscope image) appears. The plane where the DP forms is called reciprocal space, while the image plane is called real space. A Fourier transform mathematically relates real space to reciprocal space. By adjusting the lenses (changing their focal lengths), both electron microscope images and DPs can be observed. Thus, both observation modes can be successfully combined in the analysis of the microstructures of materials. For instance, during investigation of DPs, an electron microscope image is first observed. Then, by inserting an aperture (the selected area aperture), adjusting the lenses, and focusing on a specific area of interest, we obtain a DP of that area. This observation mode is called selected area diffraction.
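As a quick numerical illustration of the Bragg condition \ref{1}, the sketch below computes the diffraction angle 2θ for an assumed lattice spacing and electron wavelength; the values (λ ≈ 2.51 pm for 200 keV electrons, d = 0.2 nm) are illustrative assumptions, not taken from the text.

```python
import math

def bragg_angle_deg(d_nm, wavelength_nm):
    """Diffraction angle 2*theta (degrees) from the Bragg condition 2 d sin(theta) = lambda."""
    return 2.0 * math.degrees(math.asin(wavelength_nm / (2.0 * d_nm)))

# Illustrative (assumed) values: lambda ~ 0.00251 nm for 200 keV electrons,
# diffracting from a lattice spacing of d = 0.2 nm.
two_theta = bragg_angle_deg(0.2, 0.00251)
print(f"2*theta = {two_theta:.3f} degrees")
```

Because the electron wavelength is so much smaller than typical lattice spacings, the resulting angles are well under a degree, which is why electron diffraction spots cluster close to the optic axis.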
In order to investigate an electron microscope image, we first observe the DP. Then by passing the transmitted beam or one of the diffracted beams through a selected aperture and changing to the imaging mode, we can obtain an image with enhanced contrast, in which precipitates and lattice defects can easily be identified. The resolution of a TEM can be described in terms of the classic Rayleigh criterion for VLMs, which states that the smallest distance that can be resolved, δ, is given approximately by \ref{2}, where λ is the wavelength of the electrons, µ is the refractive index of the viewing medium, and β is the semi-angle of collection of the magnifying lens.\[ \delta \ = \frac{0.61 \lambda }{\mu \ sin \beta} \label{2} \]According to de Broglie's ideas of wave-particle duality, the particle momentum p is related to its wavelength λ through Planck's constant h, \ref{3}.\[ \lambda = \frac{h}{p} \label{3} \]Momentum is given to the electron by accelerating it through a potential drop, V, giving it a kinetic energy, eV. This potential energy is equal to the kinetic energy of the electron, \ref{4}.\[ eV\ =\ \frac{m_{o} u ^{2}}{2} \label{4} \]Based upon the foregoing, we can equate the momentum (p) to the electron mass (mo) multiplied by the velocity (u) and, substituting for u from \ref{4}, obtain \ref{5}. \[ p\ =\ m_{o} u \ =\ (2m_{o}eV)^{\frac{1}{2}} \label{5} \]These equations define the relationship between the electron wavelength, λ, and the accelerating voltage of the electron microscope, V, \ref{6}. However, we have to consider relativistic effects when the energy of the electron is more than 100 keV, so in order to be exact we must modify \ref{6} to give \ref{7}.
\[ \lambda \ =\frac{h}{(2m_{o}eV)^{\frac{1}{2}} } \label{6} \]\[ \lambda \ =\frac{h}{[2m_{o}eV(1\ +\ \frac{eV}{2m_{o}c^{2}})]^{\frac{1}{2}}} \label{7} \]From \ref{2} and \ref{3}, if a higher resolution is desired, a decrease in the electron wavelength is accomplished by increasing the accelerating voltage of the electron microscope. In other words, the higher the accelerating voltage used, the better the resolution obtained. The scattering of the electron beam through the material under study can produce different angular distributions ) and can be either forward scattering or backscattering. If an electron is scattered by less than 90°, it is forward scattered; otherwise, it is backscattered. If the specimen is thicker, fewer electrons are forward scattered and more are backscattered. Incoherent, backscattered electrons are the only remnants of the incident beam for bulk, non-transparent specimens. The reason that electrons can be scattered through different angles is that an electron can be scattered more than once. Generally, the more scattering events that occur, the greater the scattering angle. All scattering in the TEM specimen is often approximated as a single scattering event, since this is the simplest process. If the specimen is very thin, this assumption is reasonable. If the electron is scattered more than once, it is called 'plural scattering.' It is generally safe to assume single scattering occurs unless the specimen is particularly thick. As the number of scattering events increases, it becomes difficult to predict what will happen to the electron and to interpret the images and DPs. So the principle is 'thinner is better': if we make specimens thin enough that the single-scattering assumption is plausible, TEM research will be much easier. In fact, forward scattering includes the direct beam, most elastic scattering, refraction, diffraction (particularly Bragg diffraction), and inelastic scattering.
Because of forward scattering through the thin specimen, a DP or an image is shown on the viewing screen, and an X-ray spectrum or an electron energy-loss spectrum can be detected outside the TEM column. However, backscattering still cannot be ignored; it is an important imaging mode in the SEM. One significant problem that one might encounter when TEM images are analyzed is that the TEM presents us with 2D images of a 3D specimen, viewed in transmission. This problem can be illustrated by showing a picture of two rhinos side by side such that the head of one appears attached to the rear of the other ). One aspect of this particular drawback is that a single TEM image has no depth sensitivity. There often is information about the top and bottom surfaces of the specimen, but this is not immediately apparent. There has been progress in overcoming this limitation by the development of electron tomography, which uses a sequence of images taken at different angles. In addition, there has been improvement in specimen-holder design to permit full 360° rotation and, in combination with easy data storage and manipulation, nanotechnologists have begun to use this technique to look at complex 3D inorganic structures such as porous materials containing catalyst particles. A detrimental effect of ionizing radiation is that it can damage the specimen, particularly polymers (and most organics) or certain minerals and ceramics. Some aspects of beam damage are made worse at higher voltages. shows an area of a specimen damaged by high-energy electrons. However, the combination of more intense electron sources with more sensitive electron detectors, and the use of computer enhancement of noisy images, can be used to minimize the total energy received by the sample. The specimens under study have to be thin if any information is to be obtained using transmitted electrons in the TEM.
For a sample to be transparent to electrons, it must be thin enough to transmit sufficient electrons such that enough intensity falls on the screen to give an image. This is a function of the electron energy and the average atomic number of the elements in the sample. Typically, for 100 keV electrons, a specimen of aluminum alloy would be thin up to ~1 µm, while steel would be thin only up to about several hundred nanometers. However, thinner is better, and specimens < 100 nm should be used wherever possible. The method used to prepare specimens for TEM depends on what information is required. In order to observe TEM images with high resolution, it is necessary to prepare thin films without introducing contamination or defects. For this purpose, it is important to select an appropriate specimen preparation method for each material, and to find an optimum condition for each method. A specimen can be crushed with an agate mortar and pestle. The flakes obtained are suspended in an organic solvent (e.g., acetone), and dispersed with a sonic bath or simply by stirring with a glass stick. Finally, the solvent containing the specimen flakes is dropped onto a grid. This method is limited to materials which tend to cleave (e.g., mica). Alternatively, a bulk specimen can be sliced into wafer plates of about 0.3 mm thickness with a fine cutter or a multi-wire saw. The wafer is further thinned mechanically down to about 0.1 mm in thickness. Electropolishing is then performed in a specific electrolyte by supplying a direct current with the positive pole at the thin plate and the negative pole at a stainless steel plate. In order to avoid preferential polishing at the edge of the specimen, all the edges are covered with insulating paint. This is called the window method. The electropolishing is finished when there is a small hole in the plate with very thin regions around it ).
This method is mainly used to prepare thin films of metals and alloys. Thinning can also be performed chemically, i.e., by dipping the specimen in a specific solution. As for electropolishing, a thin plate of 0.1-0.2 mm in thickness should be prepared in advance. If a small dimple is made in the center of the plate with a dimple grinder, a hole can be made by etching around the center while keeping the edge of the specimen relatively thick. This method is frequently used for thinning semiconductors such as silicon. As with electropolishing, if the specimen is not washed properly after chemical etching, contamination such as an oxide layer forms on the surface. Specimens of thin films or powders are usually fixed in an acrylic or epoxy resin and trimmed with a glass knife before being sliced with a diamond knife. This process is necessary so that the specimens in the resin can be sliced easily by the diamond knife. Acrylic resins are easily sliced and can be removed with chloroform after slicing. When using an acrylic resin, a gelatin capsule is used as a vessel. Epoxy resin takes less time to solidify than acrylic resins, and it remains strong under electron irradiation. This method has been used for preparing thin sections of biological specimens and sometimes for thin films of inorganic materials which are not too hard to cut. A thin plate (less than 0.1 mm) is prepared from a bulk specimen by using a diamond cutter and by mechanical thinning. Then, a disk 3 mm in diameter is made from the plate using a diamond knife or an ultrasonic cutter, and a dimple is formed in the center of the surface with a dimple grinder. If it is possible to thin the disk directly to 0.03 mm in thickness by mechanical thinning without using a dimple grinder, the disk should be strengthened by covering the edge with a metal ring.
Ar ions are usually used for the sputtering; the incidence angle against the disk specimen and the accelerating voltage are typically set to 10-20° and a few kilovolts, respectively. This method is widely used to obtain thin regions of ceramics and semiconductors in particular, and also for cross sections of various multilayer films. The focused ion beam method was originally developed for the purpose of fixing semiconductor devices. In principle, ion beams are sharply focused on a small area, and the specimen is thinned very rapidly by sputtering. Usually Ga ions are used, with an accelerating voltage of about 30 kV and a current density of about 10 A/cm². The probe size is several tens of nanometers. This method is useful for specimens containing a boundary between different materials, where it may be difficult to homogeneously thin the boundary region by other methods such as ion milling. For vacuum evaporation, the specimen to be studied is set in a tungsten coil or basket. Resistance heating is applied by an electric current passing through the coil or basket, and the specimen is melted, then evaporated (or sublimed), and finally deposited onto a substrate. The deposition process is usually carried out under a pressure of 10⁻³-10⁻⁴ Pa, but in order to avoid surface contamination, a very high vacuum is necessary. A collodion film or cleaved rock salt is used as a substrate. Rock salt is especially useful for forming single crystals with a special orientation relationship between each crystal and the substrate. Salt is easily dissolved in water, and the deposited films can then be fixed on a grid. Recently, as an alternative to resistance heating, electron beam heating or ion beam sputtering has been used to prepare thin films of various alloys. This method is used for preparing homogeneous thin films of metals and alloys, and is also used for coating a specimen with a metal or alloy. The types of TEM specimens that are prepared depend on what information is needed.
For example, a self-supporting specimen is one where the whole specimen consists of one material (which may be a composite). Other specimens are supported on a grid or on a Cu washer with a single slot. Some grids are shown in . Usually the specimen or grid will be 3 mm in diameter. TEM specimen stage designs include airlocks to allow insertion of the specimen holder into the vacuum with minimal increase in pressure in other areas of the microscope. The specimen holders are adapted to hold a standard size of grid upon which the sample is placed, or a standard size of self-supporting specimen. The standard TEM grid is a 3.05 mm diameter ring, with a thickness and mesh size ranging from a few to 100 µm. The sample is placed onto the inner meshed area, which has a diameter of approximately 2.5 mm. The grid materials are usually copper, molybdenum, gold or platinum. This grid is placed into the sample holder, which is paired with the specimen stage. A wide variety of designs of stages and holders exist, depending upon the type of experiment being performed. In addition to 3.05 mm grids, 2.3 mm grids are sometimes, if rarely, used. These grids were particularly used in the mineral sciences, where a large degree of tilt can be required and where specimen material may be extremely rare. Electron-transparent specimens have a thickness around 100 nm, but this value depends on the accelerating voltage. Once inserted into a TEM, the sample is manipulated to allow study of the region of interest. To accommodate this, the TEM stage includes mechanisms for the translation of the sample in the XY plane, for Z height adjustment of the sample holder, and usually at least one rotational degree of freedom.
Most TEMs provide the ability for two orthogonal rotation angles of movement with specialized holder designs called double-tilt sample holders. A TEM stage is required to hold a specimen and be manipulated to bring the region of interest into the path of the electron beam. As the TEM can operate over a wide range of magnifications, the stage must simultaneously be highly resistant to mechanical drift, as low as a few nm/minute, while being able to move several µm/minute, with repositioning accuracy on the order of nanometers. Although TEMs can only provide 2D analysis of a 3D specimen, magnifications of 300,000× can be routinely obtained for many materials, making TEM an ideal method for the study of nanomaterials. In TEM images, darker areas indicate that the sample is thicker or denser in those areas, so we can observe the different components and structures of the specimen through these differences in contrast. For investigating multilayer nanomaterials, a TEM is usually the first choice, because not only does it provide a high resolution image of the nanomaterial, but it can also distinguish each layer within a nanostructured material. TEM has been used to analyze depth-graded W/Si multilayer films. Multilayer films were grown on polished, 100 mm Si wafers by magnetron sputtering in argon gas. The individual tungsten and silicon layer thicknesses in periodic and depth-graded multilayers are adjusted by varying the computer-controlled rotational velocity of the substrate platen. The deposition times required to produce specific layer thicknesses were determined from detailed rate calibrations.
Samples for TEM were prepared by focused ion beam milling at liquid N2 temperature to prevent any beam heating which might result in re-crystallization and/or re-growth of any amorphous or fine-grained polycrystalline layers in the film. TEM measurements were made using a JEOL-4000 high-resolution transmission electron microscope operating at 400 keV; this instrument has a point-to-point resolution of 0.16 nm. Large area cross-sectional images of a depth-graded multilayer film obtained under medium magnification (~100 kX) were acquired at high resolution. A cross-sectional TEM image showed a 150-layer W/Si film with layer thicknesses in the range of 3.33 - 29.6 nm ( shows a part of the layers). The dark layers are tungsten and the light layers are silicon, separated by thin amorphous W-Si interlayers (gray bands). Owing to the high resolution of the TEM and the characteristics of the material, each layer can be distinguished clearly by its different darkness. Not all kinds of multilayer nanomaterials can be observed clearly under TEM. A material consisting of pc-Si:H multilayers was prepared by photo-assisted chemical vapor deposition (photo-CVD) using a low-pressure mercury lamp as a UV light source to dissociate the gases. The pc-Si:H multilayer included low H2-diluted a-Si:H sublayers (SL's) and highly H2-diluted a-Si:H sublayers (SH's). Control of the CVD gas flow (H2|SiH4) under continuous UV irradiation resulted in the deposition of multilayer films layer by layer. For the TEM measurement, a 20 nm thick undiluted a-Si:H film was deposited on a c-Si wafer before the deposition of the multilayer to prevent any epitaxial growth. shows a cross-sectional TEM image of a six-cycled pc-Si:H multilayer specimen. The white dotted lines are used to emphasize the horizontal stripes, which have periodicity in the TEM image. As can be seen, no significant boundaries between SL and SH could be observed because all sublayers were prepared in H2 gas.
In order to get a more accurate thickness for each sublayer, other measurements might be necessary. Transmission electron microscopy (TEM) is a form of microscopy that uses a high-energy electron beam (rather than optical light). A beam of electrons is transmitted through an ultra-thin specimen, interacting with the specimen as it passes through. The image (formed from the interaction of the electrons with the sample) is magnified and focused onto an imaging device, such as a photographic film or a fluorescent screen, or is detected by a CCD camera. In order to let the electrons pass through the specimen, the specimen has to be ultra thin, usually thinner than 10 nm. The resolution of TEM is significantly higher than that of light microscopes. This is because the electron has a much smaller de Broglie wavelength than visible light (wavelength of 400-700 nm). Theoretically, the maximum resolution, d, is limited by λ, the wavelength of the detecting source (light or electrons), and NA, the numerical aperture of the system.\[ d\ = \frac{\lambda }{2n\ sin \alpha} \approx \frac{\lambda }{2NA} \label{8} \]For high-speed electrons (in TEM, the electron velocity is close to the speed of light, c, so the special theory of relativity has to be considered), the wavelength λe is:\[ \lambda _{e} =\ \frac{h}{\sqrt{2m_{0}E(1+E/2m_{0}c^{2})}} \label{9} \]According to this formula, if we increase the energy of the detecting source, its wavelength will decrease, and we can get higher resolution. Today, the energy of the electrons used can easily reach 200 keV, and sometimes as high as 1 MeV, which means the resolution is good enough to investigate structure on the sub-nanometer scale. Because the electrons are focused by several electrostatic and electromagnetic lenses, the image resolution, as with the aberration problems optical cameras usually have, is also limited by aberration, especially the spherical aberration, Cs.
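The wavelength-versus-voltage relation in \ref{9} can be checked numerically. This is a sketch evaluating the relativistic formula with CODATA constants; the printed values agree with standard textbook numbers (e.g., λ ≈ 2.51 pm at 200 kV).

```python
import math

# Physical constants (SI, CODATA values)
h = 6.62607015e-34     # Planck constant, J*s
m0 = 9.1093837015e-31  # electron rest mass, kg
e = 1.602176634e-19    # elementary charge, C
c = 2.99792458e8       # speed of light, m/s

def electron_wavelength_pm(V):
    """Relativistic de Broglie wavelength (pm) for electrons accelerated through V volts."""
    p = math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))
    return h / p * 1e12

for kV in (100, 200, 300, 1000):
    print(f"{kV:5d} kV -> lambda = {electron_wavelength_pm(kV * 1e3):.3f} pm")
```

Doubling the accelerating voltage does not halve the wavelength, because the relativistic correction term grows with eV; this is why resolution gains flatten out at very high voltages.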
Equipped with a new generation of aberration correctors, the transmission electron aberration-corrected microscope (TEAM) can overcome spherical aberration and reach half-angstrom resolution. Although TEAM can easily reach atomic resolution, the first TEM, invented by Ruska in April 1932, could hardly compete with an optical microscope, with a magnification of only 3.6 × 4.8 = 14.4. The primary problem was electron irradiation damage to the sample in the poor vacuum system. After World War II, Ruska resumed his work in developing a high resolution TEM. Finally, this work brought him the Nobel Prize in Physics in 1986. Since then, the general structure of the TEM hasn't changed too much, as shown in . The basic components in a TEM are: the electron gun, the condenser system, the objective lens (the most important lens in a TEM, which determines the final resolution), the diffraction lens, the projective lenses (all lenses are inside the equipment column, between apertures), the image recording system (formerly negative films, now CCD cameras) and the vacuum system. Common carbon allotropes include diamond, graphite, amorphous carbon (a-C), fullerene (also known as the buckyball), the carbon nanotube (CNT, including single wall and multi wall CNTs), and graphene. Most of them are chemically inert and have been found in nature. We can also classify carbon as sp2 carbon (graphite), sp3 carbon (diamond), or hybrids of sp2 and sp3 carbon. As shown in , (a) is the structure of diamond, (b) is the structure of graphite, (c) graphene is a single sheet of graphite, (d) is amorphous carbon, (e) is C60, and (f) is a single wall nanotube. As for carbon nanomaterials, fullerene, CNT and graphene are the three most thoroughly investigated, due to their unique mechanical and electronic properties. Under TEM, these carbon nanomaterials display three different projected images. All carbon nanomaterials can be investigated under TEM.
However, because of their differences in structure and shape, specific regions should be focused on in order to obtain their atomic structure. For C60, which has a diameter of only 1 nm, it is relatively difficult to suspend a sample over a lacey carbon grid (a common kind of TEM grid usually used for nanoparticles). Even if the C60 sits on a thin a-C film, it also presents focus problems, since the surface profile variation might be larger than 1 nm. One way to solve this problem is to encapsulate the C60 into single wall CNTs, a structure known as nano peapods. This method has two benefits. First, the CNT helps focus on the C60: the single wall CNT is aligned over a long distance (relative to C60), so once it is suspended on the lacey carbon film, it is much easier to focus on, and the C60 inside can therefore be caught by minor focus changes. Second, the CNT can protect the C60 from electron irradiation. Intense high energy electrons can permanently change the structure of the CNT; C60, which is more reactive than CNTs, cannot survive exposure to a high dose of fast electrons. In studying CNT cages, C92 is observed as a small circle inside the walls of the CNT. While a majority of the electron energy is absorbed by the CNT, the sample is still not irradiation-proof. Thus, as is seen in , after a 123 s exposure, defects can be generated and two C92 fused into one new larger fullerene. The discovery of C60 was first confirmed by mass spectrometry rather than TEM. When it came to the discovery of CNTs, however, mass spectrometry was no longer useful, because CNTs show no individual peak in mass spectra, since any sample contains a range of CNTs with different lengths and diameters. On the other hand, HRTEM can provide clear image evidence of their existence. An example is shown in . Graphene is a planar fullerene sheet. Until recently, Raman, AFM and optical microscopy (graphene on a 300 nm SiO2 wafer) were the most convenient methods to characterize samples.
However, in order to confirm graphene's atomic structure and determine the difference between mono-layer and bi-layer samples, TEM is still a good option. In , a monolayer of suspended graphene is observed with its atomic structure clearly shown. The inset is the FFT of the TEM image, which can be used as a filter to get an optimized structure image. A high angle annular dark field (HAADF) image usually gives better contrast for the different particles on it. It is also sensitive to changes in thickness, which allows a determination of the number of graphene layers. As with CNTs, a TEM image of graphene is a projected image. Therefore, even with an exact count of edge lines, it is not possible to conclude whether a sample is single layer or multi-layer graphene. If folded graphene has AA stacking (one layer superposed on the other), with a projected direction of , a single image cannot tell the thickness of the graphene. In order to distinguish such a bilayer of graphene from a single layer, a series of tilting experiments must be done. Different stacking structures of graphene are shown in a. Theoretically, graphene has the potential for interesting edge effects. Based upon its sp2 structure, its edge can have either a zigzag or an armchair configuration. Each of these possesses different electronic properties, similar to what is observed for CNTs. For both research and potential applications, it is important to control the growth or cutting of graphene with one specific edge. But before testing its electronic properties, all the edges have to be identified, either by direct imaging with STM or by TEM. Detailed information on graphene edges can be obtained with HRTEM, simulated with the fast Fourier transform (FFT). In b, armchair directions are marked with red arrows. A clear model is shown in . Electron energy loss spectroscopy (EELS) is a technique that measures electronic excitations within solid-state materials.
When an electron beam with a narrow range of kinetic energy is directed at a material, some electrons will be inelastically scattered, resulting in a kinetic energy loss. Electrons can be inelastically scattered by phonon excitations, plasmon excitations, interband transitions, or inner shell ionization. EELS measures the energy loss of these inelastically scattered electrons and can yield information on atomic composition, bonding, the electronic properties of valence and conduction bands, and surface properties. An example of atomic level composition mapping is shown in a. EELS has even been used to measure pressure and temperature within materials. An idealized EEL spectrum is shown in . The most prominent feature of any EEL spectrum is the zero loss peak (ZLP). The ZLP is due to those electrons from the electron beam that do not inelastically scatter and reach the detector with their original kinetic energy, typically 100-300 keV. By definition the ZLP is set to 0 eV for further analysis, and all signals arising from inelastically scattered electrons occur at >0 eV. The second largest feature is often the plasmon resonance - the collective excitation of conduction band electrons within a material. The plasmon resonance and other peaks attributed to weakly bound, or outer shell, electrons occur in the "low-loss" region of the spectrum. The low-loss regime is typically thought of as energy loss <50 eV, but this cut-off from low-loss to high-loss is arbitrary. Shown in the inset of is a core-loss edge and its further fine structure. Inner shell ionizations, represented by the core-loss peaks, are useful in determining elemental compositions, as these peaks can act as fingerprints for specific elements. For example, if there is a peak at 532 eV in an EEL spectrum, there is a high probability that the sample contains a considerable amount of oxygen, because this is known to be the energy needed to remove an inner shell electron from oxygen.
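The "fingerprint" idea can be sketched as a toy lookup: given a measured loss peak, list elements whose K-edge lies nearby. The edge energies below are assumptions taken from standard tables (the 532 eV oxygen K-edge matches the value quoted above); a real analysis would fit edges above a modeled background rather than match single numbers.

```python
# Toy core-loss "fingerprint" lookup (a sketch; K-edge energies in eV are
# assumed from standard tables -- the O K-edge matches the 532 eV quoted above).
K_EDGES_EV = {"C": 284.0, "N": 401.0, "O": 532.0, "Al": 1560.0}

def identify_edge(energy_loss_ev, tolerance_ev=5.0):
    """Return elements whose K-edge lies within tolerance of a measured loss peak."""
    return [el for el, edge in K_EDGES_EV.items()
            if abs(edge - energy_loss_ev) <= tolerance_ev]

print(identify_edge(533.0))  # a peak near 532 eV suggests oxygen
```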
This idea is further explored by looking at sudden changes in the bulk plasmon for aluminum in different chemical environments, as shown in . Of course, there are several other techniques available for probing atomic compositions, many of which are covered in this text. These include energy dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and Auger electron spectroscopy. Please reference these chapters for a thorough introduction to these techniques. As a technique, EELS is most frequently compared to energy dispersive X-ray spectroscopy (EDX), also known as energy dispersive spectroscopy (EDS). Energy dispersive X-ray detectors are commonly found as analytical probes on both scanning and transmission electron microscopes. The popularity of EDS can be understood by recognizing the simplicity of compositional analysis using this technique. However, EELS data can offer complementary compositional analysis while also generally yielding further insight into the solid-state physics and chemistry of a system, at the cost of a steeper learning curve. EDS and EELS spectra are both derived from the electronic excitations of materials; however, EELS probes the initial excitation while EDS looks at X-ray emissions from the decay of this excited state. As a result, EEL spectra investigate energy ranges from 0-3 keV while EDS spectra analyze a wider energy range, from 1-40 keV. The difference in ranges makes EDS particularly well suited for heavy elements, while EELS complements it for measuring elements lighter than Zn. In the early 1940s, James Hillier ) and R.F. Baker were looking to develop a method for pairing the size, shape, and structure information available from electron microscopes with a convenient method for "determining the composition of individual particles in a mixed specimen".
Their instrument, shown in , reported in the Journal of Applied Physics in September 1944, was the first electron-optical instrument used to measure the velocity distribution in an electron beam transmitting through a sample. The instrument was built from a repurposed transmission electron microscope (TEM). It consisted of an electron source and three electromagnetic focusing lenses, standard for TEMs at the time, but also incorporated a magnetic deflecting lens which, when turned on, would redirect the electrons 180° onto a photographic plate. Electrons with varying kinetic energies dispersed across the photographic plate, and the position of each peak could be correlated to its energy loss. In this groundbreaking work, Hillier and Baker were able to find the discrete energy losses corresponding to the K levels of both carbon and oxygen. The vast majority of EEL spectrometers are found as secondary analyzers in transmission electron microscopes. It wasn’t until the 1990s that EELS became a widely used research tool, thanks to advances in electron beam aberration correction and vacuum technologies. Today, EELS is capable of spatial resolutions down to the single atom level, and if the electron beam is monochromated, energy resolution can be as low as 0.01 eV. depicts the typical layout of an EEL spectrometer at the base of a TEM.This page titled 8.2: Transmission Electron Microscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.3: Scanning Tunneling Microscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/08%3A_Structure_at_the_Nano_Scale/8.03%3A_Scanning_Tunneling_Microscopy
Scanning tunneling microscopy (STM) is a powerful instrument that allows one to image the sample surface at the atomic level. As the first generation of scanning probe microscopy (SPM), STM paved the way for the study of nano-science and nano-materials. For the first time, researchers could obtain atom-resolution images of electrically conductive surfaces as well as their local electronic structures. Because of this milestone invention, Gerd Binnig ) and Heinrich Rohrer ) won the Nobel Prize in Physics in 1986.The key physical principle behind STM is the tunneling effect. In terms of their wave nature, the electrons in surface atoms are not as tightly bound to the nuclei as the electrons in atoms of the bulk. More specifically, the electron density is not zero in the space outside the surface, though it decreases exponentially as the distance between the electron and the surface increases a). So, when a metal tip approaches a conductive surface within a very short distance, normally just a few Å, their respective electron clouds start to overlap and generate a tunneling current if a small voltage is applied between them, as shown in b.When we consider the separation between the tip and the surface as an ideal one-dimensional tunneling barrier, the tunneling probability, or the tunneling current I, will depend largely on s, the distance between the tip and surface, \ref{1}, where m is the electron mass, e the electron charge, h the Planck constant, ϕ the averaged work function of the tip and the sample, and V the bias voltage.\[ I \propto e^{-2s\ [2m/h^{2} (<\phi >\ -\ e|V|/2)]^{1/2}} \label{1} \]A simple calculation will show us how strongly the tunneling current is affected by the distance (s). If s is increased by ∆s = 1 Å, the current changes according to \ref{2} and \ref{3}.\[ \Delta I/I\ =\ e^{-2k_{0} \Delta s} \label{2} \]\[ k_{0}\ =\ [2m/h^{2} (<\phi >\ -\ e|V|/2)]^{1/2} \label{3} \]Usually (<ϕ> - e|V|/2) is about 5 eV, which makes k0 about 1 Å-1, and then ∆I/I = 1/8. 
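A quick numerical check of this estimate can be made with standard values for the electron mass and reduced Planck constant. The sketch below evaluates the decay constant and current ratio directly from the formula, neglecting the small e|V|/2 correction to the barrier:

```python
import math

M_E = 9.1093837e-31    # electron mass, kg
HBAR = 1.0545718e-34   # reduced Planck constant, J*s
EV = 1.6021766e-19     # one electron volt, J

def decay_constant(phi_ev):
    """k0 = sqrt(2*m*phi)/hbar, returned in inverse angstroms.
    The e|V|/2 bias correction is neglected for this rough estimate."""
    return math.sqrt(2.0 * M_E * phi_ev * EV) / HBAR * 1e-10

def current_ratio(delta_s_angstrom, phi_ev=5.0):
    """I(s + ds) / I(s) for an increase ds (in angstroms) of the gap."""
    return math.exp(-2.0 * decay_constant(phi_ev) * delta_s_angstrom)
```

For φ ≈ 5 eV this gives k0 ≈ 1.1 Å⁻¹, and a 1 Å increase in s cuts the current to roughly a tenth, consistent with the order-of-magnitude sensitivity quoted here.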
That means, if s changes by 1 Å, the current will change by about one order of magnitude. That’s the reason why we can obtain atomic-level images by measuring the tunneling current between the tip and the sample.In a typical STM operation process, the tip scans across the surface of the sample in the x-y plane; the instrument records the x-y position of the tip, measures the tunneling current, and controls the height of the tip via a feedback circuit. The movements of the tip in the x, y and z directions are all controlled by piezo ceramics, which can be elongated or shortened according to the voltage applied to them.Normally, there are two modes of operation for STM: constant height mode and constant current mode. In constant height mode, the tip stays at a constant height as it scans across the sample, and the tunneling current is measured at different (x, y) positions b). This mode can be applied when the surface of the sample is very smooth. But, if the sample is rough, or has some large particles on the surface, the tip may contact the sample and damage the surface. In this case, the constant current mode is applied. During this scanning process, the tunneling current, and hence the distance between the tip and the sample, is held at a constant target value. If the tunneling current is higher than that target value, it means the height of the sample surface is increasing and the distance between the tip and sample is decreasing. In this situation, the feedback control system will respond quickly and retract the tip. Conversely, if the tunneling current drops below the target value, the feedback control will move the tip closer to the surface. From the output signal of the feedback control, the surface of the sample can be imaged.Both AFM and STM are widely used in nano-science. Owing to their different working principles, however, they have their own advantages and disadvantages when measuring specific properties of a sample (Table \(\PageIndex{1}\)). 
STM requires an electric circuit including the tip and sample to let the tunneling current pass through, which means the sample for STM must be conducting. In the case of AFM, however, the instrument simply measures the deflection of the cantilever caused by the van der Waals forces between the tip and sample; thus, in general, any kind of sample can be used for AFM. But, because of the exponential relation between the tunneling current and distance, STM has a better resolution than AFM. In an STM image one can actually “see” an individual atom, while in AFM this is almost impossible, and the quality of an AFM image is largely dependent on the shape and contact force of the tip. In some cases, the measured signal can be rather complicated to interpret in terms of the morphology or other properties of the sample. On the other hand, STM gives straightforward electronic properties of the sample surface.STM provides a powerful method to examine the surface of conducting and semi-conducting materials. More recently, STM has also been applied to the imaging of insulators and superlattice assemblies, and even to the manipulation of molecules on surfaces. More importantly, STM can provide the surface structure and electronic properties of a surface at atomic resolution, a true breakthrough in the development of nano-science. In this sense, the data collected from STM can reflect the local properties even of single molecules and atoms. With these valuable measurement data, one can gain a deeper understanding of structure-property relations in nanomaterials.An excellent example is the STM imaging of graphene on Ru, as shown in . Clearly seen is the superstructure with a periodicity of ~30 Å, arising from the lattice mismatch of 12 unit cells of graphene on 11 unit cells of the underlying Ru substrate. This so-called moiré structure can also be seen in other systems when the adsorbed layers have strong chemical bonds within the layer and weak interaction with the underlying surface. 
In this case, the periodic superstructure seen in graphene tells us that the formed graphene is well crystallized and expected to be of high quality.Another good example shows that STM measurements can reveal bonding information at the single-molecule level. In thiol- and thiophene-functionalization of single-wall carbon nanotubes (SWNTs), the use of Au nanoparticles as chemical markers for AFM gives misleading results, while STM imaging gives correct information on substituent locations. The AFM image of Au-thiol-SWNT a) shows that most of the sidewalls are unfunctionalized, while Au-thiophene-SWNT c) shows long bands of continuous functionalized regions on the SWNT. This could lead to the estimate that thiophene functionalizes SWNTs more effectively than thiol. Yet, if we look at the STM images b and d), in thiol-SWNTs multiple functional groups are tightly clustered within regions of about 5 - 25 nm, while in thiophene-SWNTs the functionalization is spread out uniformly along the whole length of the SWNT. This information indicates that the functionalization levels of thiol- and thiophene-SWNTs are actually comparable. The difference is that, in thiol-SWNTs, functional groups are grouped together and each group is bonded to a single gold nanoparticle, while in thiophene-SWNTs, every individual functional group is bonded to a nanoparticle.Scanning tunneling microscopy (STM) is a relatively recent imaging technology that has proven very useful for determining the topography of conducting and semiconducting samples with angstrom (Å) level precision. STM was invented by Gerd Binnig ) and Heinrich Rohrer ), who both won the 1986 Nobel Prize in physics for their technological advances.The main component of a scanning tunneling microscope is a rigid metallic probe tip, typically composed of tungsten, connected to a piezodrive containing three perpendicular piezoelectric transducers ). 
The tip is brought within a fraction of a nanometer of an electrically conducting sample. At close distances, the electron clouds of the metal tip overlap with the electron clouds of the surface atoms inset). If a small voltage is applied between the tip and the sample, a tunneling current is generated. The magnitude of this tunneling current is dependent on the bias voltage applied and the distance between the tip and the surface. A current amplifier can convert the generated tunneling current into a voltage. The magnitude of the resulting voltage as compared to the initial voltage can then be used to control the piezodrive, which controls the distance between the tip and the surface (i.e., the z direction). By scanning the tip in the x and y directions, the tunneling current can be measured across the entire sample. The STM system can operate in either of two modes: constant height or constant current.In constant height mode, the tip is fixed in the z direction and the change in tunneling current as the tip moves in the x,y directions is collected and plotted to describe the change in topography of the sample. This method is dangerous for samples with fluctuations in height, as the fixed tip might contact and destroy raised areas of the sample. A common method for non-uniformly smooth samples is constant current mode. In this mode, a target current value, called the set point, is selected and the tunneling current data gathered from the sample is compared to the target value. If the collected voltage deviates from the set point, the tip is moved in the z direction and the voltage is measured again until the target voltage is reached. The change in the z direction required to reach the set point is recorded across the entire sample and plotted as a representation of the topography of the sample. 
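The constant-current feedback loop described above can be sketched as a simple one-dimensional simulation. All names and the proportional-gain scheme are illustrative; real instruments use tuned PI controllers and calibrated piezo electronics:

```python
import math

def tunneling_current(gap_angstrom, k0=1.0):
    """Idealized tunneling current (arbitrary units) for a given tip-sample gap."""
    return math.exp(-2.0 * k0 * gap_angstrom)

def constant_current_scan(profile, set_point=0.135, k0=1.0, gain=0.4, iters=200):
    """Trace a 1-D surface profile (heights in angstroms) in constant-current mode.

    At each pixel the feedback loop nudges the tip height until the measured
    current matches the set point; the recorded tip heights mirror the topography.
    """
    target_gap = -math.log(set_point) / (2.0 * k0)  # gap that yields the set point
    z_tip = profile[0] + target_gap
    trace = []
    for height in profile:
        for _ in range(iters):
            current = tunneling_current(z_tip - height, k0)
            # current above set point -> tip too close -> retract (and vice versa)
            z_tip += gain * math.log(current / set_point) / (2.0 * k0)
        trace.append(z_tip)
    return trace
```

Running this on a step-like profile returns tip heights offset from the surface by a constant gap, which is exactly why the recorded z-motion can be plotted as topography.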
The height data is typically displayed as a gray scale image of the topography of the sample, where lighter areas typically indicate raised sample areas and darker spots indicate depressions. These images are typically colored for better contrast.The standard method of STM, described above, is useful for many substances (including high precision optical components, disk drive surfaces, and buckyballs) and is typically used under ultrahigh vacuum to avoid contamination of the samples from the surrounding systems. Other sample types, such as semiconductor interfaces or biological samples, need some enhancements to the traditional STM apparatus to yield more detailed sample information. Three such modifications, spin-polarized STM (SP-STM), ballistic electron emission microscopy (BEEM) and photon STM (PSTM), are summarized in Table \(\PageIndex{2}\) and are described in detail below.Spin-polarized scanning tunneling microscopy (SP-STM) can be used to provide detailed information on magnetic phenomena at the single-atom scale. This imaging technique is particularly important for accurate measurement of superconductivity and high-density magnetic data storage devices. In addition, SP-STM, while sensitive to the partial magnetic moments of the sample, is not a field-sensitive technique and so can be applied in a variety of different magnetic fields.In SP-STM, the STM tip is coated with a thin layer of magnetic material. As with STM, voltage is then applied between tip and sample, resulting in a tunneling current. Atoms with partial magnetic moments that are aligned in the same direction as the partial magnetic moment of the atom at the very tip of the STM tip show a higher magnitude of tunneling current due to the interactions between the magnetic moments. Likewise, atoms with partial magnetic moments opposite that of the atom at the tip of the STM tip demonstrate a reduced tunneling current ). 
A computer program can then translate the change in tunneling current to a topographical map, showing the spin density on the surface of the sample.The sensitivity to magnetic moments depends greatly upon the direction of the magnetic moment of the tip, which can be controlled by the magnetic properties of the material used to coat the outermost layer of the tungsten STM probe. A wide variety of magnetic materials have been studied as possible coatings, including both ferromagnetic materials, such as a thin coat of iron or of gadolinium, and antiferromagnetic materials such as chromium. Another method that has been used to make a magnetically sensitive probe tip is irradiation of a semiconducting GaAs tip with high energy circularly polarized light. This irradiation causes a splitting of electrons in the GaAs valence band and population of the conduction band with spin-polarized electrons. These spin-polarized electrons then provide partial magnetic moments which in turn influence the tunneling current generated by the sample surface.Sample preparation for SP-STM is essentially the same as for STM. SP-STM has been used to image samples such as thin films and nanoparticle constructs, as well as for determining the magnetic topography of thin metallic sheets such as in . The upper image is a traditional STM image of a thin layer of cobalt, which shows the topography of the sample. The second image is an SP-STM image of the same layer of cobalt, which shows the magnetic domains of the sample. The two images, when combined, provide useful information about the exact location of the partial magnetic moments within the sample.One of the major limitations of SP-STM is that both distance and partial magnetic moment yield the same contrast in an SP-STM image. 
This can be corrected by combination with conventional STM to obtain multi-domain structures and/or topographic information, which can then be overlaid on top of the SP-STM image, correcting for differences in sample height as opposed to magnetization.The properties of the magnetic tip dictate much of the behavior of the technique itself. If the outermost atom of the tip is not properly magnetized, the technique will yield no more information than a traditional STM. The direction of the magnetization vector of the tip is also of great importance. If the magnetization vector of the tip is perpendicular to the magnetization vector of the sample, there will be no spin contrast. It is therefore important to carefully choose the coating applied to the tungsten STM tip in order to align appropriately with the expected magnetic moments of the sample. Also, the coating makes magnetic tips more expensive to produce than standard STM tips. In addition, these tips are often made of mechanically soft materials, causing them to wear quickly and be costly to maintain.Ballistic electron emission microscopy (BEEM) is a technique commonly used to image semiconductor interfaces. Conventional surface probe techniques can provide detailed information on the formation of interfaces, but lack the ability to study fully formed interfaces due to inaccessibility of the surface. BEEM allows one to obtain a quantitative measure of electron transport across fully formed interfaces, something necessary for many industrial applications.BEEM utilizes STM with a three-electrode configuration, as seen in . In this technique, ballistic electrons are first injected from an STM tip into the sample, traditionally composed of at least two layers separated by an interface, which rests on three indium contact pads that provide a connection to a base electrode ). 
As the voltage is applied to the sample, electrons tunnel across the vacuum and through the first layer of the sample, reaching the interface, and then scatter. Depending on the magnitude of the voltage, some percentage of the electrons tunnel through the interface, and can be collected and measured as a current at a collector attached to the other side of the sample. The voltage from the STM tip is then varied, allowing for measurement of the barrier height. The barrier height is defined as the threshold at which electrons will cross the interface and become measurable as a current at the far collector. At a metal/n-type semiconductor interface this is the difference between the conduction band minimum and the Fermi level. At a metal/p-type semiconductor interface this is the difference between the valence band maximum of the semiconductor and the metal Fermi level. If the voltage is less than the barrier height, no electrons will cross the interface and the collector will read zero. If the voltage is greater than the barrier height, useful information can be gathered from the magnitude of the collector current as a function of the applied voltage.Samples are prepared from semiconductor wafers by chemical oxide growth-strip cycles, ending with the growth of a protective oxide layer. Immediately prior to imaging, the sample is spin-etched in an inert environment to remove the oxide layer and then transferred directly to the ultra-high vacuum without air exposure. The BEEM apparatus itself is operated in a glove box under inert atmosphere and shielded from light.Nearly any type of semiconductor interface can be imaged with BEEM. This includes both simple binary interfaces such as Au/n-Si and more chemically complex interfaces such as Au/n-GaAs, as seen in . The expected barrier height matters a great deal in the desired setup of the BEEM apparatus. 
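The threshold behavior described above is often modeled near onset with the Bell-Kaiser form, in which the collector current grows quadratically once the tip bias exceeds the barrier height. A minimal sketch, where the prefactor and the crude threshold-finding helper are both illustrative rather than part of any standard analysis package:

```python
def beem_collector_current(bias_v, barrier_v, scale=1.0):
    """Near-threshold Bell-Kaiser form: zero below the barrier height,
    growing as (V - V_b)^2 above it. `scale` lumps transmission factors."""
    if bias_v <= barrier_v:
        return 0.0
    return scale * (bias_v - barrier_v) ** 2

def estimate_barrier_height(biases, currents, noise_floor=0.0):
    """Crude estimate: the first bias whose collector current rises above
    the noise floor. Real analyses fit the full (V - V_b)^2 onset instead."""
    for v, i in zip(biases, currents):
        if i > noise_floor:
            return v
    return None
```

Sweeping the bias and recording where the collector current departs from zero recovers the barrier height, which is exactly the measurement the three-electrode configuration enables.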
If it is necessary to measure small collector currents, such as with an interface of high barrier height, a high-gain, low-noise current preamplifier can be added to the system. If the interface is of low barrier height, the BEEM apparatus can be operated at very low temperatures, accomplished by immersion of the STM tip in liquid nitrogen and enclosure of the BEEM apparatus in a nitrogen-purged glove box.Photon scanning tunneling microscopy (PSTM) measures light to determine more information about characteristic sample topography. It has primarily been used to measure the electromagnetic interaction of two metallic objects in close proximity to one another, and to image biological samples, both of which are difficult to measure using many other common surface analysis techniques.This technique works by measuring the tunneling of photons to an optical tip. The source of these photons is the evanescent field generated by the total internal reflection (TIR) of a light beam from the surface of the sample ). This field is characteristic of the sample material on the TIR surface, and can be measured by a sharpened optical fiber probe tip where the light intensity is converted to an electrical signal ). Much like conventional STM, the magnitude of this electrical signal is used to adjust the location of the tip in relation to the sample. By mapping these adjustments across the entire sample, the topography can be determined to a very accurate degree, as well as allowing for calculations of polarization, emission direction and emission time.In PSTM, the vertical resolution is governed only by the noise, as opposed to conventional STM where the vertical resolution is limited by the tip dimensions. 
Therefore, this technique provides advantages over more conventional STM apparatus for samples where subwavelength resolution in the vertical dimension is a critical measurement, including fractal metal colloid clusters, nanostructured materials and simple organic molecules.Samples are prepared by placement on a quartz or glass slide coupled to the TIR face of a triangular prism containing a laser beam, making the sample surface into the TIR surface ). The optical fiber probe tips are constructed from UV grade quartz optical fibers by etching in HF acid to have nominal end diameters of 200 nm or less, and resemble either a truncated cone or a paraboloid of revolution ).PSTM shows much promise in the imaging of biological materials due to the increase in vertical resolution and the ability to measure a sample within a liquid environment with a high index TIR substrate and probe tip. This would provide much more detailed information about small organisms than is currently available.The majority of the limitations of this technique come from the materials and construction of the optical fibers and the prism used in sample collection. The sample needs to be kept at low temperatures, typically around 100 K, for the duration of the imaging, and therefore cannot decompose or be otherwise negatively impacted by drastic temperature changes.Scanning tunneling microscopy can provide a great deal of information on the topography of a sample when used without adaptations, but with adaptations, the information gained is nearly limitless. Depending on the likely properties of the sample surface, SP-STM, BEEM and PSTM can provide much more accurate topographical pictures than conventional forms of STM (Table \(\PageIndex{2}\)). 
All of these adaptations to STM have their limitations and all work within relatively specialized categories and subsets of substances, but they are very strong tools that are constantly improving to provide more useful information about materials down to the nanometer scale.STEM-EELS is an abbreviation for scanning transmission electron microscopy (STEM) coupled with electron energy loss spectroscopy (EELS). It works by combining two instruments: an image is obtained through STEM and EELS is applied to detect signals on a specific selected area of the image. Therefore, it can be applied to many research problems, such as characterizing morphology, detecting different elements, and distinguishing different valence states. The first STEM was built by Baron Manfred von Ardenne ) around 1938; since it was just a prototype, it was not as good as transmission electron microscopy (TEM) at that time. Development of STEM was stagnant until the field emission gun was invented by Albert Crewe ) in the 1970s; he also came up with the idea of the annular dark field detector to detect atoms. In 1997, its resolution increased to 1.9 Å, and further increased to 1.36 Å in 2000. 4D STEM-EELS was developed recently; this type of 4D STEM-EELS combines a high brightness STEM with a high acquisition rate EELS detector and a rotation holder. The rotation holder plays quite an important role in achieving this 4D aim, because it makes observation of the sample through 360° possible; the sample can be rotated to acquire its thickness. The high acquisition rate EELS enables this instrument to acquire a spectrum at every pixel in a few minutes.When electrons interact with a sample, the interaction between the two can be classified into two types, namely, elastic and inelastic interactions ). In the elastic case, if electrons do not interact with the sample and pass through it, these electrons contribute to the direct beam. The direct beam can be applied in STEM. 
In the other elastic case, the electrons’ direction of travel in the sample is deflected by the Coulombic force; the strength of the force is determined by the charge and the distance between the electron and the nucleus. In both cases there is no energy transfer from the electrons to the sample, which is why the interaction is called elastic. In an inelastic interaction, energy transfers from the incident electrons to the sample, and the electrons thereby lose energy. The lost energy, and the number of electrons at each energy loss, can be measured; these data yield the electron energy loss spectrum (EELS).In transmission electron microscopy (TEM), a beam of electrons is emitted from a tungsten source and then accelerated by an electromagnetic field. Then, with the aid of the condenser lens, the beam is focused on and passes through the sample. Finally, the electrons are detected by a charge-coupled device (CCD) to produce images, . STEM works differently from TEM: the electron beam is focused on a specific spot of the sample and then raster scans the sample pixel by pixel, while the detector collects the transmitted electrons and visualizes the sample. Moreover, STEM-EELS allows these transmitted electrons to be analyzed: by adding a magnetic prism, the more energy the electrons lose, the more they are deflected. Therefore, STEM-EELS can be used to characterize the chemical properties of thin samples.A brief illustration of STEM-EELS is displayed in . The electron source, usually a tungsten tip located in a strong electric field, provides electrons with high energy. The condenser and objective lenses form the electrons into a fine probe that raster scans the specimen. The diameter of the probe influences STEM’s spatial resolution, which is limited by lens aberrations. 
Lens aberration results from the refraction difference between light rays striking the edge and center of a lens, and it can also happen when rays pass through with different energies. Based on this, an aberration corrector is applied so that the objective aperture can be increased; the incident probe then converges more tightly, increasing the resolution and promoting sensitivity to single atoms. For the annular electron detectors, the installed sequence is a bright field detector, a dark field detector, and a high angle annular dark field detector. The bright field detector detects the direct beam that transmits through the specimen. The annular dark field detector collects the scattered electrons that pass through an annular aperture; the advantage of this is that it does not influence the EELS detection of signals from the direct beam. The high angle annular dark field detector collects electrons that have undergone Rutherford scattering (elastic scattering of charged electrons by the nuclei), and its signal intensity is related to the square of the atomic number (Z); the resulting image is therefore also known as a Z-contrast image. A unique point of STEM image acquisition is that the pixels in the image are obtained point by point by scanning the probe. EELS analysis is based on the energy loss of the transmitted electrons, so the thickness of the specimen influences the detected signal. In other words, if the specimen is too thick, the intensity of the plasmon signal decreases and it may become difficult to distinguish these signals from the background.As shown in , a significant peak appears at zero energy in EEL spectra and is therefore called the zero-loss peak. The zero-loss peak represents the electrons that undergo only elastic scattering during their interaction with the specimen. 
The zero-loss peak can be used to determine the thickness of the specimen according to \ref{4}, where t stands for the thickness, λinel is the inelastic mean free path, It stands for the total intensity of the spectrum, and IZLP is the intensity of the zero-loss peak.\[ t\ =\ \lambda _{inel}\ ln[I_{t}/I_{ZLP}] \label{4} \]The low loss region is also called valence EELS. In this region, valence electrons are excited to the conduction band. Valence EELS can provide information about band structure, bandgap, and optical properties. In the low loss region, the plasmon peak is the most important feature. A plasmon is a collective oscillation of weakly bound electrons. The thickness of the sample influences the plasmon peak: the incident electrons undergo inelastic scattering several times when they interact with a very thick sample, resulting in convoluted plasmon peaks. This is also the reason why STEM-EELS favors thin samples (usually less than 100 nm).The high loss region is characterized by a rapidly increasing intensity followed by a gradual fall, called an ionization edge. The onset of an ionization edge equals the energy that an inner shell electron needs to be excited from the ground state to the lowest unoccupied state. This amount of energy is unique for different shells and elements; thus, this information helps in understanding bonding, valence state, composition, and coordination. Energy resolution affects the signal to background ratio in the low loss region and is used to evaluate an EEL spectrum. 
Energy resolution is based on the full width at half maximum of the zero-loss peak.Background signal in the core-loss region is caused by plasmon peaks and core-loss edges, and can be described by the following power law, \ref{5}, where IBG stands for the background signal, E is the energy loss, A is a scaling constant and r is the slope exponent:\[ I_{BG}\ =\ AE^{-r} \label{5} \]Therefore, when quantifying the spectral data, the background signal can be removed by fitting the pre-edge region with the above-mentioned equation and extrapolating it to the post-edge region.STEM-EELS has advantages over other instruments, such as the acquisition of high resolution images. For example, the operation of TEM on samples sometimes results in blurred images and low contrast because of chromatic aberration. STEM-EELS, equipped with an aberration corrector, helps to reduce the chromatic aberration and obtain high quality images, even at atomic resolution. It is very direct and convenient for understanding surface electron distributions and bonding information. STEM-EELS also has the advantage of controlling the energy spread, so it becomes much easier to study the ionization edges of different materials.Even though STEM-EELS brings a lot of convenience for research at the atomic level, it still has limitations to overcome. One of the main limitations of STEM-EELS is controlling the thickness of the sample. As discussed above, EELS detects the energy loss of electrons when they interact with the specimen, so the thickness of the sample impacts the energy-loss detection. Simply put, if the sample is too thick, most of the electrons will interact with the sample multiple times, and the signal to background ratio and edge visibility will decrease. Thus, it will be hard to tell the chemical state of the element. Another limitation arises because EELS needs to characterize low-energy-loss electrons, for which a high vacuum condition is essential. 
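Both quantitative recipes — the log-ratio thickness estimate of Equation \ref{4} and the power-law background removal of Equation \ref{5} — are easy to sketch. The pre-edge fit below is an ordinary least-squares line in log-log space; the window choices and data in any real analysis would come from the measured spectrum:

```python
import math

def thickness_from_zlp(lambda_inel_nm, total_intensity, zlp_intensity):
    """t = lambda_inel * ln(I_t / I_ZLP), Equation 4."""
    return lambda_inel_nm * math.log(total_intensity / zlp_intensity)

def fit_power_law_background(pre_edge_energies, pre_edge_intensities):
    """Fit I_BG = A * E**(-r), Equation 5, via a linear fit in log-log space."""
    xs = [math.log(e) for e in pre_edge_energies]
    ys = [math.log(i) for i in pre_edge_intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    r = -slope
    a = math.exp(my + r * mx)  # from ln I = ln A - r ln E
    return a, r

def subtract_background(energies, intensities, a, r):
    """Extrapolate the fitted background under the edge and remove it."""
    return [i - a * e ** (-r) for e, i in zip(energies, intensities)]
```

Fitting the pre-edge window and extrapolating under the edge leaves only the edge signal, which is what is then integrated for elemental quantification.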
To achieve such a high vacuum environment, high voltage is necessary. STEM-EELS also requires the sample substrates to be conductive and flat.STEM-EELS can be used to detect the size and distribution of nanoparticles on a surface. For example, CoO on MgO catalyst nanoparticles may be prepared by hydrothermal methods. The size and distribution of the nanoparticles greatly influence the catalytic properties, so the distribution and morphology changes of CoO nanoparticles on MgO are important to understand. Co L3/L2 ratios are uniform at around 2.9, suggesting that Co2+ dominates the electronic state of Co. The results show that the ratios of O:(Co+Mg) and Mg:(Co+Mg) are not consistent, indicating that these three elements are in a random distribution. STEM-EELS mapping results further confirm the non-uniformity of the elemental distribution, consistent with a random distribution of CoO on the MgO surface ). shows the K-edge absorption of carbon, from which transition-state information can be deduced. Typical carbon-based materials show transition-state features in which the 1s to π* and 1s to σ* transitions are located at 285 and 292 eV, respectively. These two transitions correspond to valence band electrons being excited to the conduction band. Epoxy exhibits a sharp peak around 285.3 eV compared to GO and GNPs. Meanwhile, GNPs have the sharpest peak around 292 eV, suggesting that most C atoms in GNPs undergo the 1s to σ* transition. Even though GO is oxidized, part of its carbon still exhibits the 1s to π* transition.The annular dark field (ADF) mode of STEM provides information about the atomic number of the elements in a sample. For example, the ADF image of La1.2Sr1.8Mn2O7 a and b) along direction shows bright spots and dark spots, and even the bright spots (p and r) display different levels of brightness. This phenomenon is caused by the difference in atomic numbers. The bright spots are La and Sr, respectively. 
The dark spots are Mn atoms; O is too light to show in the image. The EELS results show the core-loss edges of La, Mn, and O, but the researchers did not give information on the core-loss edges of Sr; Sr has an N2,3 edge at 29 eV, an L3 edge at 1930 eV, and an L2 edge at 2010 eV. This page titled 8.3: Scanning Tunneling Microscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.4: Magnetic Force Microscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/08%3A_Structure_at_the_Nano_Scale/8.04%3A_Magnetic_Force_Microscopy
Magnetic force microscopy (MFM) is a natural extension of scanning tunneling microscopy (STM), whereby both the physical topography of a sample surface and its magnetic topography may be seen. Scanning tunneling microscopy was developed in 1982 by Gerd Binnig and Heinrich Rohrer, and the two shared the 1986 Nobel prize for their innovation. Binnig later went on to develop the first atomic force microscope (AFM) along with Calvin Quate and Christoph Gerber. Magnetic force microscopy was not far behind, with the first report of its use in 1987 by Yves Martin and H. Kumar Wickramasinghe. An AFM with a magnetic tip was used to perform these early experiments, which proved to be useful in imaging both static and dynamic magnetic fields. MFM, AFM, and STM all have similar instrumental setups, all of which are based on the early scanning tunneling microscopes. In essence, STM scans a very small conductive tip, attached to a piezoelectric cylinder, carefully across a small sample area. The tunneling current between the conducting sample and the tip is measured, and the output is a picture of the sample surface. AFM and MFM are essentially derivatives of STM, which explains why a typical MFM device is very similar to an STM, with a piezoelectric driver and a magnetized tip. One may notice that this MFM instrument very closely resembles an atomic force microscope, and for good reason: the simplest MFM instruments are no more than AFM instruments with a magnetic tip. The differences between AFM and MFM lie in the data collected and its processing. Where AFM gives topographic data through tapping, non-contact, or contact mode, MFM gives both surface topography (tapping) and magnetic topography (non-contact) through a two-scan process known as interleave scanning.
The relationships between basic STM, AFM, and MFM are summarized in Table \(\PageIndex{1}\). Interleave scanning, also known as two-pass scanning, is the process typically used in an MFM experiment. The magnetized tip is first passed across the sample in tapping mode, similar to an AFM experiment, which gives the surface topography of the sample. Then, a second scan is taken in non-contact mode, in which the magnetic force exerted on the tip by the sample is measured. These two types of scans are shown in . In non-contact mode (also called dynamic or AC mode), the magnetic force gradient from the sample affects the resonance frequency of the MFM cantilever, and it can be measured in three different ways. Phase detection: the phase difference between the oscillation of the cantilever and the piezoelectric source is measured. Amplitude detection: changes in the amplitude of the cantilever's oscillation are measured. Frequency modulation: the piezoelectric source's oscillation frequency is changed to maintain a 90° phase lag between the cantilever and the piezoelectric actuator, and the frequency change needed to maintain the lag is measured. Regardless of the method used to determine the magnetic force gradient, an MFM interleave scan always gives the user information about both the surface and the magnetic topography of the sample. A typical sample size is 100 × 100 μm, and the entire sample is scanned by rastering from one line to the next. In this way, the MFM data processor can compose an image of the surface by combining lines of data from either the surface or the magnetic scan. The output of an MFM scan is two images, one showing the surface and the other showing the magnetic qualities of the sample. An idealized example is shown in . Any suitable magnetic material or coating can be used to make an MFM tip. Some of the most commonly used standard tips are coated with FeNi, CoCr, and NiP, while many research applications call for individualized tips such as carbon nanotubes.
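The frequency-modulation scheme can be illustrated with the standard small-gradient approximation for an oscillating cantilever, Δf ≈ -f0·(dF/dz)/(2k). The cantilever parameters below are illustrative, not taken from any specific instrument described here:

```python
def frequency_shift(f0_hz, k_n_per_m, force_gradient_n_per_m):
    """Resonance-frequency shift of a cantilever in a force gradient,
    using the standard small-gradient approximation
    delta_f = -f0 * (dF/dz) / (2k)."""
    return -f0_hz * force_gradient_n_per_m / (2.0 * k_n_per_m)

# Illustrative (assumed) values: a 70 kHz cantilever with k = 3 N/m in an
# attractive magnetic force gradient of 0.01 N/m from the sample
df = frequency_shift(70e3, 3.0, 0.01)
# An attractive gradient lowers the resonance frequency (df < 0)
```

Mapping this frequency (or the corresponding phase or amplitude) shift point by point across the raster scan is what produces the magnetic image.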
The resolution of the final image in MFM depends directly on the size of the tip; therefore, MFM tips must come to a sharp point on the angstrom scale in order to function at high resolution. This makes tips costly, an issue exacerbated by the fact that coatings are often soft or brittle, leading to wear and tear. The best materials for MFM tips therefore depend on the desired resolution and application. For example, a high-coercivity coating such as CoCr may be favored for analyzing bulk or strongly magnetic samples, whereas a low-coercivity material such as FeNi might be preferred for finer and more sensitive applications. The product of an MFM scan is a 2D image of the sample surface, whether the physical or the magnetic topographical image. Importantly, the resolution depends on the size of the probe tip: the smaller the probe, the higher the number of data points per square micrometer, and therefore the higher the resolution of the resulting image. MFM can be extremely useful in determining the properties of new materials, or in analyzing the magnetic landscapes of already known materials. This makes MFM particularly useful for the analysis of hard drives. As people store more and more information on magnetic storage devices, higher storage capacities are needed, along with emergency backup procedures for the data. MFM is an ideal technique for characterizing the fine magnetic surfaces of hard drives in research and development, and it can also image the magnetic surfaces of used hard drives for data recovery in the event of a hard drive malfunction. This is useful both in forensics and in researching new magnetic storage materials. MFM has also found applications on the frontiers of research, most notably in the field of spintronics. In general, spintronics is the study of the spin and magnetic moment of solid-state materials, and the manipulation of these properties to create novel electronic devices.
One example of this is quantum computing, which is promising as a fast and efficient alternative to traditional transistor-based computing. With regard to spintronics, MFM can be used to characterize non-homogeneous magnetic materials and unique samples such as dilute magnetic semiconductors (DMS). This is useful for research in magnetic storage such as MRAM, semiconductors, and magnetoresistive materials. In device manufacturing, the smoothness and/or roughness of the magnetic coatings of hard drive disks is significant to their ability to operate. Smoother coatings provide a low magnetic noise level but stick to read/write heads, whereas rough surfaces have the opposite qualities. Therefore, fine-tuning not only the magnetic properties but also the surface qualities of a given magnetic film is extremely important in the development of new hard drive technology. Magnetic force microscopy allows hard drive manufacturers to analyze disks for magnetic and surface topography, making it easier to control the quality of drives and to determine which materials are suitable for further research. Industrial competition for higher bit density (bits per square millimeter), which means faster processing and increased storage capability, makes MFM very important for characterizing films to very high resolution. Magnetic force microscopy is a powerful surface technique used to deduce both the magnetic and surface topography of a given sample. In general, MFM offers high resolution, which depends on the size of the tip, and straightforward data once processed. The images output by an MFM raster scan are clear and show the structural and magnetic features of a 100 × 100 μm square of the given sample. This information can be used not only to examine surface properties, morphology, and particle size, but also to determine the bit density of hard drives, probe features of magnetic computing materials, and identify exotic magnetic phenomena at the atomic level.
As MFM evolves, ever-finer magnetic tips are being fabricated for more demanding applications, such as the use of carbon nanotubes as tips to give atomic resolution in MFM images. The customizability of magnetic coatings and tips, as well as the reuse of AFM equipment for MFM, makes MFM an important technique in the electronics industry, making it possible to see magnetic domains and structures that would otherwise remain hidden. This page titled 8.4: Magnetic Force Microscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.5: Spectroscopic Characterization of Nanoparticles
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/08%3A_Structure_at_the_Nano_Scale/8.05%3A_Spectroscopic_Characterization_of_Nanoparticles
Quantum dots (QDs) are small semiconductor nanoparticles, generally composed of two elements, that have extremely high quantum efficiencies when light is shone on them. The most common quantum dots are CdSe, PbS, and ZnSe, but there are many other varieties of these particles that contain other elements as well. QDs can also be made of three elements, or of just one element such as silicon. Silicon quantum dots are synthesized in inverse micelles. SiCl4 is reduced using a two-fold excess of LiAlH4. After the silicon has been fully reduced and the excess reducing agent quenched, the particles are capped with hydrogens and are hydrophobic. A platinum-catalyzed ligand exchange of hydrogen for allylamine will produce hydrophilic particles. All the reactions in making these particles are extremely air sensitive, and silica forms readily, so the reactions should be performed in a highly controlled atmosphere, such as a glove box. The particles are then washed in DMF, and finally filtered and stored in deionized water. This keeps the Si QDs pure in water, and the particles are ready for analysis. This technique yields Si QDs of 1 - 2 nm in size. The reported absorption wavelength for 1 - 2 nm Si QDs is 300 nm. With the hydrophobic Si QDs, UV-vis absorbance analysis in toluene does not yield an acceptable spectrum because toluene's UV-vis absorbance cutoff of 287 nm is too close to 300 nm for the peaks to be resolvable. A better hydrophobic solvent would be hexanes. All measurements of these particles require a quartz cuvette, since the glass absorbance cutoff (300 nm) is exactly where the particles would be observed. Hydrophilic substituted particles do not need to be transferred to another solvent because water's absorbance cutoff is much lower. There is usually a slight impurity of DMF in the water due to residue on the particles after drying.
If there is a DMF peak in the spectrum along with the Si QDs, the wavelengths are far enough apart to be resolved. Quantum dots are especially interesting when it comes to UV-vis spectroscopy because the size of the quantum dot can be determined from the position of the absorption peak in the UV-vis spectrum. Quantum dots absorb different wavelengths depending on the size of the particles. Many calibration curves would be needed to determine the exact size and concentration of the quantum dots, but it is entirely possible and very useful to determine size and concentration in this way, since other ways of determining size are much more expensive and involved (electron microscopy is most widely used for this data). An example of silicon quantum dot data can be seen in . The wider the absorbance peak, the less monodisperse the sample. Different-sized (different-excitation) quantum dots can be used for different applications. The absorbance of the QDs can also reveal how monodisperse the sample is; greater monodispersity in a sample is better and more useful for future applications. Silicon quantum dots in particular are currently being researched for making more efficient solar cells. The monodispersity of these quantum dots is particularly important for obtaining optimal absorbance of photons from the sun or another light source. Different-sized quantum dots absorb light differently, and a more exact energy absorption is important for the efficiency of solar cells. UV-vis absorbance is a quick, easy, and cheap way to determine the monodispersity of a silicon quantum dot sample from the width of the absorbance peak. The other information important for future applications is an idea of the size of the quantum dots.
Different-sized QDs absorb at different wavelengths; therefore, specific sizes of Si QDs will be required for the different cells in tandem solar cells. Noble metal nanoparticles have been used for centuries to color stained glass windows, and they provide many opportunities for novel sensing and optical technologies due to their intense scattering (deflection) and absorption of light. One of the most interesting and important properties of noble metal nanoparticles is their localized surface plasmon resonance (LSPR). The LSPR of noble metal nanoparticles arises when photons of a certain frequency induce the collective oscillation of conduction electrons on the nanoparticles' surface. This causes selective photon absorption, efficient scattering, and enhanced electromagnetic field strength around the nanoparticles. More information about the properties and potential applications of noble metal nanoparticles can be found in Silver Nanoparticles: A Case Study in Cutting Edge Research. Noble metal nanoparticles can be synthesized via the reduction of metal salts. Spherical metal nanoparticle "seeds" are first synthesized by reducing metal salts in water with a strong reducing agent such as sodium borohydride. The seeds are then "capped" with a surface group such as citrate to prevent aggregation. After small nanoparticle seeds have been synthesized, the seeds can be grown into nanoparticles of various sizes and shapes. Seeds are added to a solution of additional metal salt and a structure-directing agent, and are then reduced with a weak reducing agent such as ascorbic acid. The structure-directing agent determines the geometry of the nanoparticles produced; for example, cetyltrimethylammonium bromide (CTAB) is often used to produce nanorods. Once synthesized, noble metal nanoparticles can be assembled into various higher-order nanostructures.
Nanoparticle dimers, linear chains of two nanoparticles, can be assembled using a linker molecule that binds the two nanoparticles together. Less-organized nanoparticle assemblies can be formed through the addition of counterions. Counterions react with the surface groups on nanoparticles, stripping the nanoparticles of their protective surface coating and inducing their aggregation. UV-visible absorbance spectroscopy is a powerful tool for detecting noble metal nanoparticles, because the LSPR of metal nanoparticles allows for highly selective absorption of photons. UV-visible absorbance spectroscopy can also be used to detect various factors that affect the LSPR of noble metal nanoparticles. More information about the theory and instrumentation of UV-visible absorbance spectroscopy can be found in the section related to UV-Vis Spectroscopy. Mie theory, a theory that describes the interaction of light with a homogeneous sphere, can be used to predict the UV-visible absorbance spectrum of spherical metallic nanoparticles. One equation that can be obtained using Mie theory is \ref{1}, which describes the extinction, the sum of absorption and scattering of light, of spherical nanoparticles. In \ref{1}, E(λ) is the extinction, NA is the areal density of the nanoparticles, a is the radius of the nanoparticles, εm is the dielectric constant of the environment surrounding the nanoparticles, λ is the wavelength of the incident light, and εr and εi are the real and imaginary parts of the nanoparticles' dielectric function.
From this relation, we can see that the UV-visible absorbance spectrum of a solution of nanoparticles depends on the radius of the nanoparticles, the composition of the nanoparticles, and the environment surrounding the nanoparticles.\[ E(\lambda )\ = \frac{24\pi N_{A}a^{3}\varepsilon _{m}^{3/2}}{\lambda \ln (10)} \left[\frac{\varepsilon _{i}}{(\varepsilon_{r}\ +\ 2\varepsilon _{m})^{2}\ +\ \varepsilon _{i}^{2}}\right] \label{1} \]Mie theory is limited to spherical nanoparticles, but there are other theoretical techniques that can be used to predict the UV-visible spectrum of more complex noble metal nanostructures. These techniques include surface-based methods such as the generalized multipole technique and the T-matrix method, as well as volume-based techniques such as the discrete dipole approximation and the finite difference time domain method. Just as the theoretical techniques described above can use nanoparticle geometry to predict the UV-visible absorbance spectrum of noble metal nanoparticles, the nanoparticles' UV-visible absorbance spectrum can be used to predict their geometry. The UV-visible absorbance spectrum is highly dependent on nanoparticle geometry: the shapes of two spectra can be quite different despite the two types of nanoparticles having similar dimensions and being composed of the same material. The UV-visible absorbance spectrum is also dependent on the aggregation state of the nanoparticles. When nanoparticles are in close proximity to each other, their plasmons couple, which affects their LSPR and thus their absorption of light. Dimerization of nanospheres causes a "red shift," a shift to longer wavelengths, in the UV-visible absorbance spectrum, as well as a slight increase in absorption at higher wavelengths. Unlike dimerization, aggregation of nanoparticles causes a decrease in the intensity of the peak absorbance without shifting the wavelength at which the peak occurs (λmax).
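Equation \ref{1} is straightforward to evaluate numerically. The sketch below uses illustrative dielectric values and an arbitrary particle density (not data from any particular nanoparticle sample) to show that extinction peaks near the resonance condition εr = -2εm, which is the origin of the LSPR absorption band:

```python
import numpy as np

def mie_extinction(wavelength, radius, eps_r, eps_i, eps_m, n_areal=1.0):
    """Quasistatic Mie extinction of small metal spheres, equation (1).

    wavelength and radius must be in the same units; n_areal is the
    areal density N_A of particles (set to 1.0 here, so the output is
    in arbitrary units).
    """
    prefactor = (24 * np.pi * n_areal * radius**3 * eps_m**1.5
                 / (wavelength * np.log(10)))
    return prefactor * eps_i / ((eps_r + 2 * eps_m) ** 2 + eps_i ** 2)

# Extinction is maximized near the resonance condition eps_r = -2*eps_m
# (illustrative values; eps_m = 1.77 is roughly water at optical frequencies)
on_res = mie_extinction(520.0, 20.0, eps_r=-3.54, eps_i=0.3, eps_m=1.77)
off_res = mie_extinction(520.0, 20.0, eps_r=5.0, eps_i=0.3, eps_m=1.77)
```

Because εm appears in the resonance condition, changing the surrounding medium shifts the wavelength at which εr(λ) satisfies it, which is why λmax is sensitive to the nanoparticle environment.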
Information about the calculation of λmax can be found in the earlier section about silver nanoparticles. The increase in nanoparticle aggregation with increased salt concentration is evident from the decreased absorbance peak intensity. The λmax of the UV-visible absorbance spectrum of noble metal nanoparticles is highly dependent on the environment surrounding the nanoparticles. Because of this, shifts in λmax can be used to detect changes in the surface composition of the nanoparticles. One potential application of this phenomenon is using UV-visible absorbance spectroscopy to detect the binding of biomolecules to the surface of noble metal nanoparticles. A red shift in the λmax of the UV-visible absorbance spectrum upon the addition of human serum albumin protein indicates that the protein is binding to the surface of the nanoparticles. Semiconductor materials are generally classified on the basis of the periodic table group that their constituent elements belong to. Thus, Group 12-16 semiconductors, formerly called II-VI semiconductors, are materials whose cations are from Group 12 and whose anions are from Group 16 of the periodic table. Some examples of Group 12-16 semiconductor materials are cadmium selenide (CdSe), zinc sulfide (ZnS), cadmium telluride (CdTe), zinc oxide (ZnO), and mercuric selenide (HgSe), among others. The new IUPAC (International Union of Pure and Applied Chemistry) convention is followed in this document, to avoid any confusion with regard to conventions used earlier. In the old IUPAC convention, Group 12 was known as Group IIB, with the Roman numeral 'II' referring to the number of electrons in the outer electronic shells and 'B' referring to being on the right part of the table. However, in the CAS (Chemical Abstracts Service) convention, the letter 'B' refers to transition elements, as opposed to main group elements, though the Roman numeral has the same meaning.
Similarly, Group 16 was earlier known as Group VI because all the elements in this group have 6 valence shell electrons. From the Greek word nanos, meaning "dwarf," this prefix is used in the metric system to mean 10-9 or one billionth (1/1,000,000,000). Thus a nanometer is 10-9 or one billionth of a meter, a nanojoule is 10-9 or one billionth of a Joule, etc. A nanoparticle is ordinarily defined as any particle with at least one of its dimensions in the 1 - 100 nm range. Nanoscale materials often show behavior that is intermediate between that of a bulk solid and that of an individual molecule or atom. An inorganic nanocrystal can be imagined to be composed of a few atoms or molecules. It thus behaves differently from a single atom; however, it is still smaller than a macroscopic solid, and hence shows different properties. For example, if one compares the chemical reactivity of a bulk solid and a nanoparticle, the latter has a higher reactivity because a significant fraction of its total number of atoms is on the surface of the particle. Properties such as boiling point, melting point, optical properties, chemical stability, and electronic properties are all different in a nanoparticle compared to its bulk counterpart. In the case of Group 12-16 semiconductors, this reduction in size from bulk to the nanoscale results in many size-dependent properties, such as varying band gap energy and optical and electronic properties. In the case of semiconductor nanocrystals, the effect of size on the optical properties of the particles is very interesting. Consider a Group 12-16 semiconductor, cadmium selenide (CdSe). A 2 nm sized CdSe crystal has a blue fluorescence, whereas a larger CdSe nanocrystal of about 6 nm has a dark red fluorescence.
In order to understand the size-dependent optical properties of semiconductor nanoparticles, it is important to know the physics behind what is happening at the nano level. The electronic structure of any material is given by a solution of the Schrödinger equation with boundary conditions that depend on the physical situation. The electronic structure of a semiconductor can be described by the following terms. From the solution of the Schrödinger equation, the electrons in a semiconductor can have only certain allowable energies, which are associated with energy levels. No electrons can exist in between these levels, or in other words, have energies in between the allowed energies. In addition, from Pauli's exclusion principle, only 2 electrons with opposite spin can exist at any one energy level. Thus, the electrons start filling from the lowest energy levels. The greater the number of atoms in a crystal, the smaller the difference between allowable energies becomes, and thus the distance between energy levels decreases. However, this distance can never be zero. For a bulk semiconductor, due to the large number of atoms, the distance between energy levels is very small, and for all practical purposes the energy levels can be described as continuous. Also from the solution of the Schrödinger equation, there is a set of energies that is not allowed, and thus no energy levels can exist in this region. This region is called the band gap and is a quantum mechanical phenomenon. In a bulk semiconductor the bandgap is fixed, whereas in a quantum dot nanoparticle the bandgap varies with the size of the nanoparticle. The conduction band consists of the energy levels from the upper edge of the bandgap and higher. To reach the conduction band, the electrons in the valence band must have enough energy to cross the band gap. Once the electrons are excited, they subsequently relax back to the valence band (either radiatively or non-radiatively), followed by a subsequent emission of radiation.
This property is responsible for most of the applications of quantum dots. When an electron is excited from the valence band to the conduction band, a corresponding hole (the absence of an electron) is formed in the valence band. This electron-hole pair is called an exciton. Excitons have a natural separation distance between the electron and hole that is characteristic of the material. This average distance is called the exciton Bohr radius. In a bulk semiconductor, the size of the crystal is much larger than the exciton Bohr radius, and hence the exciton is free to move throughout the crystal. Before discussing the electronic structure of a quantum dot semiconductor, it is important to understand what a quantum dot nanoparticle is. We saw earlier that a nanoparticle is any particle with one of its dimensions in the 1 - 100 nm range. A quantum dot is a nanoparticle with its diameter on the order of the material's exciton Bohr radius. Quantum dots are typically 2 - 10 nm wide and approximately consist of 10 to 50 atoms. With this understanding of a quantum dot semiconductor, the electronic structure of a quantum dot semiconductor can be described by the following terms. When the size of the semiconductor crystal becomes comparable to or smaller than the exciton Bohr radius, the quantum dots are in a state of quantum confinement. As a result of quantum confinement, the energy levels in a quantum dot are discrete, as opposed to being continuous in a bulk crystal. In materials that have a small number of atoms and are considered quantum confined, the energy levels are separated by an appreciable amount of energy, such that they are not continuous but discrete.
The energy associated with an electron (equivalent to the conduction band energy level) is given by \ref{2}, where h is Planck's constant, me is the effective mass of the electron, d is the diameter of the particle, and n is the quantum number for the conduction band states, which can take the values 1, 2, 3, and so on. Similarly, the energy associated with the hole (equivalent to the valence band energy level) is given by \ref{3}, where mh is the effective mass of the hole and n' is the quantum number for the valence states, which can take the values 1, 2, 3, and so on. The energy increases as one goes higher in the quantum number. Since the electron mass is much smaller than that of the hole, the electron levels are separated more widely than the hole levels.\[ E^{e}\ =\ \frac{h^{2}n^{2}}{8\pi ^{2}m_e d^2 } \label{2} \]\[E^h \ =\ \frac{h^{2}n'^{2}}{8\pi ^{2}m_h d^2 } \label{3} \]As seen from \ref{2} and \ref{3}, the energy levels are affected by the diameter of the semiconductor particles. If the diameter is very small, since the energy depends on the inverse of the diameter squared, the energy levels of the upper edge of the band gap (lowest conduction band level) and the lower edge of the band gap (highest valence band level) change significantly with the diameter of the particle and the effective masses of the electron and the hole, resulting in a size-dependent, tunable band gap. This also results in the discretization of the energy levels. Qualitatively, this can be understood in the following way. In a bulk semiconductor, the addition or removal of an atom is insignificant compared to the size of the bulk semiconductor, which consists of a large number of atoms. The large size of bulk semiconductors makes the change in band gap on the addition of an atom so negligible that the band gap is considered fixed.
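A quick numerical check of \ref{2} and \ref{3} makes the 1/d² scaling concrete. The effective masses below are illustrative CdSe-like values assumed for the sketch, not measured data from this text:

```python
import math

H = 6.626e-34     # Planck's constant (J s)
M0 = 9.109e-31    # free-electron mass (kg)

def level_energy(d, m_eff, n=1):
    """Quantized level from equations (2)/(3): E = h^2 n^2 / (8 pi^2 m d^2)."""
    return H**2 * n**2 / (8 * math.pi**2 * m_eff * d**2)

def confinement_gap_widening(d, me_eff, mh_eff):
    """Extra band-gap energy from confinement: lowest electron level (n = 1)
    plus lowest hole level (n' = 1)."""
    return level_energy(d, me_eff) + level_energy(d, mh_eff)

# Illustrative (assumed) effective masses: m_e* = 0.13 m0, m_h* = 0.45 m0
dE_2nm = confinement_gap_widening(2e-9, 0.13 * M0, 0.45 * M0)
dE_6nm = confinement_gap_widening(6e-9, 0.13 * M0, 0.45 * M0)
# 1/d^2 scaling: shrinking the dot from 6 nm to 2 nm widens the gap 9-fold
```

Note also that the lighter electron contributes most of the widening, consistent with the statement that the electron levels are separated more widely than the hole levels.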
In a quantum dot, the addition of an atom does make a difference, resulting in the tunability of the band gap. Due to the presence of discrete energy levels in a QD, there is a widening of the energy gap between the highest occupied electronic states and the lowest unoccupied states as compared to the bulk material. As a consequence, the optical properties of the semiconductor nanoparticles also become size dependent. The minimum energy required to create an exciton is defined by the band gap of the material, i.e., the energy required to excite an electron from the highest level of the valence energy states to the lowest level of the conduction energy states. For a quantum dot, the bandgap varies with the size of the particle. From \ref{2} and \ref{3}, it can be inferred that the band gap becomes larger as the particle becomes smaller. This means that for a smaller particle, the energy required for an electron to get excited is higher. The relation between energy and wavelength is given by \ref{4}, where h is Planck's constant, c is the speed of light, and λ is the wavelength of light. Therefore, from \ref{4}, to cross a bandgap of greater energy, shorter wavelengths are absorbed, i.e., a blue shift is seen.\[ E\ =\ \frac{hc}{\lambda } \label{4} \]For Group 12-16 semiconductors, the bandgap energy falls in the UV-visible range. That is, ultraviolet or visible light can be used to excite an electron from the ground valence states to the excited conduction states. In a bulk semiconductor the band gap is fixed and the energy states are continuous. This results in a rather uniform absorption spectrum. In the case of Group 12-16 quantum dots, since the bandgap can be changed with the size, these materials can absorb over a range of wavelengths. The peaks seen in the absorption spectrum correspond to the optical transitions between the electron and hole levels.
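The energy-wavelength relation E = hc/λ converts a band gap directly into an absorption-onset wavelength and makes the blue shift explicit. The band-gap values below are illustrative round numbers, not measurements from this text:

```python
H = 6.626e-34    # Planck's constant (J s)
C = 2.998e8      # speed of light (m/s)
EV = 1.602e-19   # joules per electronvolt

def onset_wavelength_nm(bandgap_ev):
    """Absorption-onset wavelength from E = hc/lambda, for a gap in eV."""
    return H * C / (bandgap_ev * EV) * 1e9

bulk_nm = onset_wavelength_nm(1.74)   # bulk-CdSe-like gap -> about 713 nm
dot_nm = onset_wavelength_nm(2.5)     # confined dot, wider gap -> about 496 nm
# The wider gap of the smaller dot pushes the onset to shorter
# wavelengths: a blue shift
```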
The minimum energy, and thus the maximum wavelength, corresponds to the first exciton peak, i.e., the energy for an electron to be excited from the highest valence state to the lowest conduction state. The quantum dot will not absorb wavelengths longer than this wavelength; this is known as the absorption onset. Fluorescence is the emission of electromagnetic radiation in the form of light by a material that has absorbed a photon. When a semiconductor quantum dot (QD) absorbs a photon with energy equal to or greater than its band gap, the electrons in the QD are excited to the conduction state. This excited state is, however, not stable. The electron can relax back to its ground state either by emitting a photon or by losing energy as heat. These processes can be divided into two categories: radiative decay and non-radiative decay. Radiative decay is the loss of energy through the emission of a photon or radiation. Non-radiative decay involves the loss of energy as heat through lattice vibrations, and it usually occurs when the energy difference between the levels is small. Non-radiative decay occurs much faster than radiative decay. Usually the electron relaxes to the ground state through a combination of both radiative and non-radiative decays. The electron moves quickly through the conduction energy levels through small non-radiative decays, and the final transition across the band gap is via a radiative decay. Large non-radiative decays do not occur across the band gap, because the crystal structure cannot withstand large vibrations without breaking the bonds of the crystal. Since some of the energy is lost through non-radiative decay, the energy of the photon emitted through radiative decay is less than the absorbed energy. As a result, the wavelength of the emitted photon (the fluorescence) is longer than the wavelength of the absorbed light. This energy difference is called the Stokes shift.
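The Stokes shift can be computed directly from the absorption and emission maxima using E = hc/λ. The peak positions below are illustrative, not data from a specific sample:

```python
H = 6.626e-34    # Planck's constant (J s)
C = 2.998e8      # speed of light (m/s)
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength in nm (E = hc/lambda)."""
    return H * C / (wavelength_nm * 1e-9) / EV

def stokes_shift_ev(abs_peak_nm, em_peak_nm):
    """Stokes shift: energy difference between absorption and emission maxima."""
    return photon_energy_ev(abs_peak_nm) - photon_energy_ev(em_peak_nm)

# Illustrative peak positions: absorption maximum at 560 nm,
# emission maximum at 580 nm
shift = stokes_shift_ev(560.0, 580.0)   # about 0.076 eV
```

The positive sign of the shift reflects the energy lost non-radiatively before the final radiative transition across the gap.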
Due to this Stokes shift, the emission peak corresponding to the absorption band-edge peak is shifted towards a longer wavelength (lower energy). The intensity of emission versus wavelength is a bell-shaped Gaussian curve. As long as the excitation wavelength is shorter than the absorption onset, the maximum emission wavelength is independent of the excitation wavelength. A combined absorption and emission spectrum for a typical CdSe tetrapod illustrates this. There are various factors that affect the absorption and emission spectra of Group 12-16 semiconductor quantum crystals. Fluorescence is much more sensitive to the background, the environment, the presence of traps, and the surface of the QDs than UV-visible absorption. Some of the major factors influencing the optical properties of quantum nanoparticles include: The size-dependent optical properties of NPs have many applications, from biomedical applications to solar cell technology, from photocatalysis to chemical sensing. Most of these applications use the following unique properties. For applications in the field of nanoelectronics, the sizes of the quantum dots can be tuned to be comparable to the scattering lengths, reducing the scattering rate and hence improving the signal-to-noise ratio. For Group 12-16 QDs to be used in the field of solar cells, the bandgap of the particles can be tuned to absorb energy over a large range of the solar spectrum, resulting in a greater number of excitons and hence more electricity. Since the nanoparticles are so small, most of the atoms are on the surface; thus, the surface-to-volume ratio is very large for quantum dots. In addition to a high surface-to-volume ratio, the Group 12-16 QDs respond to light energy, so quantum dots have very good photocatalytic properties. Quantum dots show fluorescence properties, and emit visible light when excited. This property can be used for applications as biomarkers: these quantum dots can be tagged to drugs to monitor the path of the drugs.
Specially shaped Group 12-16 nanoparticles, such as hollow shells, can be used as drug delivery agents. Another use for the fluorescence properties of Group 12-16 semiconductor QDs is in color-changing paints, which can change color according to the light source used.

Quantum dots (QDs), as a general term, refer to nanocrystals of semiconductor materials in which the size of the particles is comparable to the natural characteristic separation of an electron-hole pair, otherwise known as the exciton Bohr radius of the material. When the size of the semiconductor nanocrystal becomes this small, the electronic structure of the crystal is governed by the laws of quantum physics. Very small Group 12-16 (II-VI) semiconductor nanoparticle quantum dots, on the order of 2 - 10 nm, exhibit significantly different optical and electronic properties from their bulk counterparts. The characterization of the size-dependent optical properties of Group 12-16 semiconductor particles provides a great deal of qualitative and quantitative information about them: size, quantum yield, monodispersity, shape, and the presence of surface defects. A combination of information from both UV-visible absorption and fluorescence completes the analysis of the optical properties.

Absorption spectroscopy, in general, refers to characterization techniques that measure the absorption of radiation by a material as a function of wavelength. Depending on the source of light used, absorption spectroscopy can be broadly divided into infrared and UV-visible spectroscopy. The band gap of Group 12-16 semiconductors is in the UV-visible region; that is, the minimum energy required to excite an electron from the valence states of a Group 12-16 semiconductor QD to its conduction states lies in the UV-visible region.
This is also the reason why most Group 12-16 semiconductor quantum dot solutions are colored. This technique is complementary to fluorescence spectroscopy, in that UV-visible spectroscopy measures electronic transitions from the ground state to the excited state, whereas fluorescence deals with transitions from the excited state to the ground state. In order to characterize the optical properties of a quantum dot, it is important to characterize the sample with both techniques.

In quantum dots, because of the very small number of atoms, the addition or removal of a single atom changes the electronic structure of the quantum dot dramatically. Taking advantage of this property in Group 12-16 semiconductor quantum dots, it is possible to change the band gap of the material simply by changing the size of the quantum dot. A quantum dot can absorb light over a range of wavelengths to excite an electron from the ground state to an excited state, and the minimum energy required to excite an electron depends on the band gap of the quantum dot. Thus, by making accurate measurements of light absorption at different wavelengths in the ultraviolet and visible spectrum, a correlation can be made between the band gap and the size of the quantum dot. Group 12-16 semiconductor quantum dots are of particular interest since their band gaps lie in the visible region of the solar spectrum.

UV-visible absorption spectroscopy is a characterization technique in which the absorbance of the material is studied as a function of wavelength. The visible region of the spectrum covers the wavelength range of 380 nm (violet) to 740 nm (red), and the near ultraviolet region extends to wavelengths of about 200 nm.
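The correspondence between band gap energy and absorption wavelength can be sketched with the conversion λ = hc/E. The function below is illustrative; the bulk band gaps used in the example (CdSe 1.74 eV, CdTe 1.44 eV) are the values quoted elsewhere in the text.

```python
# Sketch: converting a band gap (eV) to the corresponding absorption
# onset wavelength using lambda = h*c / E.
HC_EV_NM = 1239.84  # h*c in eV*nm

def onset_wavelength_nm(band_gap_ev):
    """Absorption onset wavelength (nm) for a given band gap (eV)."""
    return HC_EV_NM / band_gap_ev

lam_cdse = onset_wavelength_nm(1.74)  # ~712 nm for bulk CdSe
lam_cdte = onset_wavelength_nm(1.44)  # ~861 nm for bulk CdTe (infrared)
```

These reproduce the bulk onset wavelengths (712 nm and ~860 nm) cited for CdSe and CdTe later in the section.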
The UV-visible spectrophotometer analyzes over the wavelength range 200 - 900 nm. When Group 12-16 semiconductor nanocrystals are exposed to light having an energy that matches a possible electronic transition, as dictated by the laws of quantum physics, the light is absorbed and an exciton pair is formed. The UV-visible spectrophotometer records the wavelengths at which absorption occurs, along with the intensity of the absorption at each wavelength, producing a graph of the absorbance of the nanocrystal versus wavelength.

A working schematic of the UV-visible spectrophotometer is shown in . Since it is a UV-vis spectrophotometer, the light source ) needs to cover the entire visible and near-ultraviolet region (200 - 900 nm). Since it is not possible to get this range of wavelengths from a single lamp, a combination of a deuterium lamp for the UV region of the spectrum and a tungsten or halogen lamp for the visible region is used. This output is then sent through a diffraction grating as shown in the schematic. The beam of light from the visible and/or UV light source is separated into its component wavelengths (like a very efficient prism) by the diffraction grating ). Following the grating is a slit that sends a monochromatic beam into the next section of the spectrophotometer.

Light from the slit then falls onto a rotating disc ). Each disc consists of different segments: an opaque black section, a transparent section, and a mirrored section. If the light hits the transparent section, it goes straight through the sample cell, is reflected by a mirror, hits the mirrored section of a second rotating disc, and is then collected by the detector. If instead the light hits the mirrored section, it is reflected by a mirror, passes through the reference cell, hits the transparent section of the second rotating disc, and is then collected by the detector.
Finally, if the light hits the black opaque section, it is blocked and no light passes through the instrument, enabling the system to correct for any current generated by the detector in the absence of light.

For liquid samples, a square-cross-section tube sealed at one end is used. The choice of cuvette depends on the following factors: The best cuvettes need to be very clear and have no impurities that might affect the spectroscopic reading. Defects on the cuvette, such as scratches, can scatter light and hence should be avoided. Some cuvettes are clear on only two sides; these can be used in the UV-visible spectrophotometer but cannot be used for fluorescence spectroscopy measurements. For Group 12-16 semiconductor nanoparticles prepared in organic solvents, a quartz cuvette is chosen.

In the sample cell the quantum dots are dispersed in a solvent, whereas in the reference cell the pure solvent is taken. It is important that the sample be very dilute (the maximum first exciton absorbance should not exceed 1 au) and that the solvent is not UV-visible active: the solvent must not have characteristic absorption or emission in the region of interest. Solution phase experiments are preferred, though it is also possible to measure spectra in the solid state using thin films, powders, etc. The instrumentation for solid state UV-visible absorption spectroscopy is slightly different from that for solution phase experiments and is beyond the scope of this discussion.

The detector converts the light into a current signal that is read by a computer: the higher the current signal, the greater the intensity of the light. The computer then calculates the absorbance using \ref{5}, where A denotes absorbance, I is the sample cell intensity, and I0 is the reference cell intensity.

\[ A\ =\ log_{10}(I_{0}/I) \label{5} \]

The following cases are possible: Where I > I0 and A < 0. This usually occurs when the solvent absorbs in the wavelength range.
Preferably the solvent should be changed to get an accurate reading of the actual reference cell intensity. Where I = I0 and A = 0. This occurs when pure solvent is put in both the reference and sample cells. This test should always be done before testing the sample, to check the cleanliness of the cuvettes. When A = 1. This occurs when 90% of the light at a particular wavelength has been absorbed, which means that only 10% reaches the detector. So I0/I becomes 100/10 = 10, and log10 of 10 is 1. When A > 1. This occurs in the extreme case where more than 90% of the light is absorbed. The output is in the form of a plot of absorbance against wavelength, e.g., .

In order to make comparisons between different samples, it is important that all the factors affecting absorbance be held constant except the sample itself. The extent of absorption depends on the number of absorbing nanoparticles, in other words on the concentration of the sample. A reasonably concentrated solution will have a high absorbance, since there are many nanoparticles to interact with the light; an extremely dilute solution will have a very low absorbance. In order to compare two solutions, we must therefore make some allowance for the concentration. Even at the same concentration, if we compare two solutions - one in a rectangular container (e.g., ) through which the light travelled 1 cm, and another through which the light travelled 100 cm - the absorbances would differ. This is because the longer the path the light travels, the more nanocrystals it interacts with, and thus the higher the absorbance.
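A minimal sketch of the absorbance calculation and the limiting cases discussed above:

```python
import math

def absorbance(i_reference, i_sample):
    """A = log10(I0 / I) from the reference- and sample-cell intensities."""
    return math.log10(i_reference / i_sample)

# The limiting cases from the text (intensities in arbitrary units):
a_zero = absorbance(100.0, 100.0)  # pure solvent in both cells -> A = 0
a_one = absorbance(100.0, 10.0)    # 90% of the light absorbed  -> A = 1
a_high = absorbance(100.0, 1.0)    # >90% absorbed              -> A > 1
```

A negative A (sample intensity greater than reference intensity) signals that the solvent itself absorbs in the wavelength range.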
Again, in order to compare two solutions, it is important that we make some allowance for the path length as well. The Beer-Lambert law addresses the effects of both concentration and path length, as shown in \ref{6} and \ref{7}, where A denotes absorbance; ε is the molar absorptivity or molar absorption coefficient; l is the path length of light (in cm); and c is the concentration of the solution (mol/dm3).

\[ log_{10}(I_{0}/I)\ =\ \varepsilon l c \label{6} \]

\[ A\ =\ \varepsilon l c \label{7} \]

From the Beer-Lambert law, the concentration c can be expressed as shown in \ref{8}.

\[ c\ =\ A/l \varepsilon \label{8} \]

Molar absorptivity corrects for the variation in concentration and in the length of solution that the light passes through: it is the value of the absorbance when light passes through 1 cm of a 1 mol/dm3 solution. The linearity of the Beer-Lambert law is limited by chemical and instrumental factors.

The data obtained from the spectrophotometer is a plot of absorbance as a function of wavelength, and both quantitative and qualitative data can be obtained by analyzing this information. The band gap of semiconductor quantum dots can be tuned with the size of the particles. The minimum energy for an electron to be excited from the ground state is the energy needed to cross the band gap; in an absorption spectrum, this is given by the first exciton peak at the maximum wavelength (λmax). The size of quantum dots can be approximated from the wavelength of the first exciton peak: empirical relationships have been determined relating the diameter of the quantum dot to this wavelength. The Group 12-16 semiconductor quantum dots studied in this way are cadmium selenide (CdSe), cadmium telluride (CdTe), and cadmium sulfide (CdS). The empirical relationships are determined by fitting experimental data of absorbance versus wavelength for known sizes of particles.
The empirical equations determined are given for CdTe, CdSe, and CdS in \ref{9}, \ref{10} and \ref{11} respectively, where D is the diameter and λ is the wavelength corresponding to the first exciton peak. For example, if the first exciton peak of a CdSe quantum dot is at 500 nm, the corresponding diameter of the quantum dot is 2.345 nm, and for a wavelength of 609 nm the corresponding diameter is 5.008 nm.

\[ D\ =\ (9.8127\ x\ 10^{-7})\lambda ^{3}\ -\ (1.7147\ x\ 10^{-3})\lambda ^{2}\ +\ (1.0064)\lambda \ -\ 194.84 \label{9} \]

\[ D\ =\ (1.6122\ x\ 10^{-9})\lambda ^{4}\ -\ (2.6575\ x\ 10^{-6})\lambda ^{3}\ +\ (1.6242\ x\ 10^{-3})\lambda ^{2}\ -\ (0.4277)\lambda \ +\ 41.57 \label{10} \]

\[ D\ =\ (-6.6521\ x\ 10^{-8})\lambda ^{3}\ +\ (1.9557\ x\ 10^{-4})\lambda ^{2}\ -\ (9.2352\ x\ 10^{-2})\lambda \ +\ 13.29 \label{11} \]

Using the Beer-Lambert law, it is possible to calculate the concentration of the sample if the molar absorptivity of the sample is known. The molar absorptivity can be calculated by recording the absorbance of a standard solution of 1 mol/dm3 concentration in a standard cuvette where the light travels a constant distance of 1 cm.
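The sizing curves can be evaluated directly. The sketch below uses the published coefficients for these empirical fits (note that the CdSe relation is a quartic in λ) and reproduces the worked CdSe example quoted above:

```python
def cdte_diameter_nm(lam):
    """CdTe: first-exciton-peak wavelength lam (nm) -> diameter (nm)."""
    return 9.8127e-7 * lam**3 - 1.7147e-3 * lam**2 + 1.0064 * lam - 194.84

def cdse_diameter_nm(lam):
    """CdSe sizing curve (quartic empirical fit)."""
    return (1.6122e-9 * lam**4 - 2.6575e-6 * lam**3
            + 1.6242e-3 * lam**2 - 0.4277 * lam + 41.57)

def cds_diameter_nm(lam):
    """CdS sizing curve."""
    return (-6.6521e-8 * lam**3 + 1.9557e-4 * lam**2
            - 9.2352e-2 * lam + 13.29)

# Worked example from the text: a CdSe first exciton peak at 500 nm
# corresponds to a ~2.345 nm dot, and one at 609 nm to a ~5.008 nm dot.
d_small = cdse_diameter_nm(500.0)
d_large = cdse_diameter_nm(609.0)
```

These fits are only valid over the wavelength range of the calibration data; extrapolating far outside it gives unphysical diameters.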
Once the molar absorptivity and the absorbance of the sample are known, with the length the light travels fixed, it is possible to determine the concentration of the sample solution. Empirical equations have been determined by fitting experimental data of the extinction coefficient per mole of Group 12-16 semiconductor quantum dots, measured at the first exciton absorption peak, to the diameter of the quantum dot: \ref{12}, \ref{13}, and \ref{14} for CdTe, CdSe, and CdS, respectively.

\[ \varepsilon \ =\ 10043\ x\ D^{2.12} \label{12} \]

\[ \varepsilon \ =\ 5857\ x\ D^{2.65} \label{13} \]

\[ \varepsilon \ =\ 21536\ x\ D^{2.3} \label{14} \]

The concentration of the quantum dots can then be determined by using the Beer-Lambert law as given by \ref{8}.

Apart from quantitative data, such as the size and concentration of the quantum dots, a great deal of qualitative information can be derived from the absorption spectra. If there is a very narrow size distribution, the first exciton peak will be very sharp ). This is because, with a narrow size distribution, the differences in band gap between different-sized particles are very small, so most of the electrons are excited over a narrow range of wavelengths. In addition, if there is a narrow size distribution, the higher exciton peaks are also seen clearly.

In the case of a spherical quantum dot, the particle is quantum confined in all dimensions ). In the case of a nanorod whose length is not in the quantum regime, the quantum effects are determined by the width of the nanorod; similarly, in tetrapods, or four-legged structures, the quantum effects are determined by the thickness of the arms. During the synthesis of shaped particles, the thickness of the rod or the arm of the tetrapod varies among the different particles much less than the length of the rods or arms does.
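A sketch of the concentration determination, assuming the CdSe extinction-coefficient fit quoted above (ε = 5857 × D^2.65); the absorbance of 0.50 and the 1 cm path length are illustrative values, not measurements:

```python
def cdse_extinction(d_nm):
    """Empirical per-mole extinction coefficient for CdSe dots of diameter d (nm)."""
    return 5857.0 * d_nm**2.65

def concentration_mol_per_l(absorbance, d_nm, path_cm=1.0):
    """Beer-Lambert rearranged: c = A / (eps * l)."""
    return absorbance / (cdse_extinction(d_nm) * path_cm)

# Illustrative: a 2.345 nm CdSe sample with A = 0.50 in a 1 cm cuvette
eps = cdse_extinction(2.345)            # ~5.6e4 L/(mol*cm)
c = concentration_mol_per_l(0.50, 2.345)  # on the order of 1e-5 mol/L
```

In practice the diameter fed into the extinction fit comes from the sizing curves in \ref{9}, \ref{10} and \ref{11}.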
Since the thickness of the rod or tetrapod arm is responsible for the quantum effects, the absorption spectrum of rods and tetrapods has sharper features than that of a quantum dot. Hence, it is qualitatively possible to differentiate between quantum dots and other shaped particles. In the case of CdSe semiconductor quantum dots, it has been shown that the crystal lattice of the quantum dot can be estimated from the absorption spectrum ), and hence whether the structure is zinc blende or wurtzite can be determined.

Cadmium selenide (CdSe) is one of the most popular Group 12-16 semiconductors, mainly because of its band gap energy (1.74 eV, or 712 nm). The nanoparticles of CdSe can thus be engineered to have a range of band gaps throughout the visible range, corresponding to the major part of the energy that comes from the solar spectrum. This property of CdSe, along with its fluorescing properties, is used in a variety of applications such as solar cells and light emitting diodes. Though cadmium and selenium are known carcinogens, the harmful biological effects of CdSe can be overcome by coating the CdSe with a layer of zinc sulfide. Thus CdSe can also be used in biomarkers, drug-delivery agents, paints, and other applications. A typical absorption spectrum of a narrow size distribution wurtzite CdSe quantum dot is shown in . A size-evolving absorption spectrum is shown in . However, a complete analysis of the sample is possible only by also studying the fluorescence properties of CdSe.

Cadmium telluride (CdTe) has a band gap of 1.44 eV (860 nm) and as such absorbs in the infrared region. Like CdSe, the size of CdTe particles can be engineered to give different band edges and thus different absorption spectra as a function of wavelength. A typical CdTe spectrum is shown in .
Due to the small band gap energy of CdTe, it can be used in tandem with CdSe to absorb a greater part of the solar spectrum. Table \(\PageIndex{1}\) shows the bulk band gaps of other Group 12-16 semiconductor systems. The band gap of ZnS falls in the UV region, while those of ZnSe, CdS, and ZnTe fall in the visible region.

It is often desirable to combine two Group 12-16 semiconductor systems in quantum heterostructures of different shapes, such as dots and tetrapods, for applications in solar cells, biomarkers, etc. Some of the most interesting systems are the ZnS shell-CdSe core system, and the CdSe/CdS rods and tetrapods. shows a typical absorption spectrum of a CdSe-ZnS core-shell system. This system is important because the addition of the wide band gap ZnS shell drastically improves the fluorescence properties relative to the bare CdSe core; in addition, with a ZnS shell, CdSe becomes biocompatible. A CdSe-seed, CdS-arm nanorod system is also interesting: combining CdSe and CdS in a single nanostructure creates a material with a mixed dimensionality, where holes are confined to the CdSe while electrons can move freely between the CdSe and CdS phases.

Group 12-16 semiconductor nanocrystals, when exposed to light of a particular energy, absorb light to excite electrons from the ground state to the excited state, resulting in the formation of electron-hole pairs (also known as excitons). The excited electrons relax back to the ground state, mainly through radiative emission of energy in the form of photons. Quantum dots (QDs) refer to nanocrystals of semiconductor materials where the size of the particles is comparable to the natural characteristic separation of an electron-hole pair, otherwise known as the exciton Bohr radius of the material.
In quantum dots, the phenomenon of emission of photons associated with the transition of electrons from the excited state to the ground state is called fluorescence. Emission spectroscopy, in general, refers to a characterization technique that measures the emission of radiation by a material that has been excited. Fluorescence spectroscopy is one type of emission spectroscopy, which records the intensity of light radiated from the material as a function of wavelength. It is a nondestructive characterization technique.

After an electron is excited from the ground state, it needs to relax back to the ground state. This relaxation, or loss of energy, can be achieved by a combination of non-radiative decay (loss of energy through heat) and radiative decay (loss of energy through light). Non-radiative decay by vibrational modes typically occurs between energy levels that are close to each other, while radiative decay by the emission of light occurs when the energy levels are far apart, as in the case of the band gap. This is because loss of energy through vibrational modes across the band gap would result in breaking the bonds of the crystal. This phenomenon is shown in . The band gap of Group 12-16 semiconductors is in the UV-visible region, so the wavelength of the light emitted as a result of radiative decay is also in the visible region, resulting in fascinating fluorescence properties.

A fluorimeter is a device that records the fluorescence intensity as a function of wavelength. The fluorescence quantum yield can then be calculated as the ratio of photons emitted to photons absorbed by the system; it gives the probability of the excited state relaxing via fluorescence rather than by any other non-radiative decay. Photoluminescence is the emission of light from any material due to the loss of energy from the excited state to the ground state. There are two main types of luminescence: fluorescence and phosphorescence.
Fluorescence is a fast decay process, where the emission rate is around 10⁸ s⁻¹ and the lifetime is around 10⁻⁹ - 10⁻⁷ s. Fluorescence occurs when the excited state electron has an opposite spin compared to the ground state electrons. By the laws of quantum mechanics this is an allowed transition, and it occurs rapidly by emission of a photon. Fluorescence disappears as soon as the exciting light source is removed.

Phosphorescence is the emission of light in which the excited state electron has the same spin orientation as the ground state electron. This transition is forbidden, so the emission rates are slow (10³ - 10⁰ s⁻¹). Phosphorescence lifetimes are therefore longer, typically seconds to several minutes, as the excited phosphors slowly return to the ground state. Phosphorescence is still seen even after the exciting light source is removed. Group 12-16 semiconductor quantum dots exhibit fluorescence properties when excited with ultraviolet light.

The working schematic of the fluorometer is shown in . The excitation energy is provided by a light source that can emit wavelengths over the ultraviolet and visible range. Different light sources can be used as excitation sources, such as lasers, xenon arcs, and mercury-vapor lamps; the choice of light source depends on the sample. A laser emits light of high irradiance in a very narrow wavelength interval, which makes a filter unnecessary, but the wavelength of a laser cannot be altered significantly. A mercury-vapor lamp is a discrete line source, while a xenon arc has a continuous emission spectrum between 300 - 800 nm. The diffraction grating splits the incoming light source into its component wavelengths ). The monochromator can then be adjusted to choose which wavelengths to pass through.
Following the primary filter, specific wavelengths of light are irradiated onto the sample. A proportion of the light from the primary filter is absorbed by the sample. After the sample is excited, the fluorescent substance returns to the ground state by emitting a longer wavelength of light in all directions ). Some of this light passes through a secondary filter.

For liquid samples, a square-cross-section tube, sealed at one end and with all four sides clear, is used as the sample cell. The choice of cuvette depends on three factors:

1. Type of Solvent - For aqueous samples, specially designed rectangular quartz, glass, or plastic cuvettes are used. For organic samples, glass and quartz cuvettes are used.

2. Excitation Wavelength - Depending on the size, and thus the band gap, of the Group 12-16 semiconductor nanoparticles, different excitation wavelengths of light are used; depending on the excitation wavelength, different cuvette materials are used (Table \(\PageIndex{2}\)).

3. Cost - Plastic cuvettes are the least expensive and can be discarded after use. Though quartz cuvettes have the greatest utility, they are the most expensive and need to be reused. Generally, disposable plastic cuvettes are used when speed is more important than high accuracy.

The cuvettes have a 1 cm path length for the light ). The best cuvettes need to be very clear and have no impurities that might affect the spectroscopic reading. Defects on the cuvette, such as scratches, can scatter light and hence should be avoided. Since the specifications of a cuvette are the same for both the UV-visible spectrophotometer and the fluorimeter, the same cuvette used to measure absorbance can be used to measure the fluorescence. For Group 12-16 semiconductor nanoparticles prepared in organic solvents, the clear four-sided quartz cuvette is used. The sample solution should be dilute (absorbance < 1 au), so that the very high signal from a concentrated sample does not burn out the detector.
The solvent used to disperse the nanoparticles should not absorb at the excitation wavelength. The secondary filter is placed at a 90° angle ) to the original light path, to minimize the risk of transmitted or reflected incident light reaching the detector. This also minimizes the amount of stray light and results in a better signal-to-noise ratio. From the secondary filter, wavelengths specific to the sample are passed on to the detector.

The detector can be either single-channeled or multichanneled ). A single-channeled detector can detect the intensity of only one wavelength at a time, while a multichanneled detector detects the intensity at all wavelengths simultaneously, making the emission monochromator or filter unnecessary. The different types of detectors have both advantages and disadvantages. The output is in the form of a plot of intensity of emitted light as a function of wavelength, as shown in ).

The data obtained from the fluorimeter is a plot of fluorescence intensity as a function of wavelength, from which quantitative and qualitative data can be obtained. From the fluorescence intensity versus wavelength data, the quantum yield (ΦF) of the sample can be determined. Quantum yield is a measure of the ratio of the photons emitted with respect to the photons absorbed. It is important for applications of Group 12-16 semiconductor quantum dots that use their fluorescence properties, e.g., biomarkers.

The most well-known method for recording quantum yield is the comparative method, which involves the use of well-characterized standard solutions. If a test sample and a standard sample have similar absorbance values at the same excitation wavelength, it can be assumed that the number of photons being absorbed by both samples is the same. This means that a ratio of the integrated fluorescence intensities of the test and standard samples, measured at the same excitation wavelength, will give the ratio of their quantum yields.
Since the quantum yield of the standard solution is known, the quantum yield of the unknown sample can be calculated. A plot of integrated fluorescence intensity versus absorbance at the excitation wavelength is shown in . The slopes of the graphs shown in are proportional to the quantum yields of the different examples. The quantum yield is then calculated using \ref{15}, where the subscript ST denotes the standard sample and X denotes the test sample; QY is the quantum yield and RI is the refractive index of the solvent.

\[ \frac{QY_{X}}{QY_{ST}}\ =\ \frac{slope_{X} (RI_{X})^{2}}{slope_{ST} (RI_{ST})^{2}} \label{15} \]

Take the example of . If the same solvent is used in both the sample and the standard solution, the ratio of the quantum yield of the sample to that of the standard is given by \ref{16}. If the quantum yield of the standard is known to be 0.95, then the quantum yield of the test sample is 0.523, or 52.3%.

\[ \frac{QY_{X}}{QY_{ST}}\ =\ \frac{1.41}{2.56} \label{16} \]

The assumption used in the comparative method is valid only in the linear regime of the Beer-Lambert law, which states that absorbance is directly proportional to the path length of light travelled within the sample and to the concentration of the sample. The factors that affect the quantum yield measurements are the following: The quantum yield of Group 12-16 semiconductor nanoparticles is affected by many factors, such as the following. Apart from the quantum yield information in the relationship between intensity of fluorescence emission and wavelength, other useful qualitative information, such as the size distribution, the shape of the particles, and the presence of surface defects, can be obtained.

As shown in , the plot of intensity versus wavelength is a Gaussian distribution. In , the full width at half maximum (FWHM) is given by the difference between the two extreme values of the wavelength at which the photoluminescence intensity is equal to half its maximum value.
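The comparative method can be sketched as follows, using the slopes and standard quantum yield from the worked example above:

```python
def quantum_yield(slope_x, slope_st, qy_st, ri_x=1.0, ri_st=1.0):
    """Comparative method: QY_X = QY_ST * (slope_X/slope_ST) * (RI_X/RI_ST)^2."""
    return qy_st * (slope_x / slope_st) * (ri_x / ri_st) ** 2

# Example from the text: slopes 1.41 (test) and 2.56 (standard), standard
# QY = 0.95, same solvent in both (so the refractive-index ratio is 1).
qy_sample = quantum_yield(1.41, 2.56, 0.95)  # ~0.523, i.e. 52.3%
```

With different solvents, the squared refractive-index ratio in \ref{15} must be included.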
From the full width at half maximum (FWHM) of the fluorescence intensity Gaussian distribution, it is possible to qualitatively determine the size distribution of the sample. For a Group 12-16 quantum dot sample, if the FWHM is greater than 30 nm the system is very polydisperse and has a large size distribution; for all practical applications it is desirable for the FWHM to be less than 30 nm.

From the FWHM of the emission spectra, it is also possible to qualitatively judge whether the particles are spherical or shaped. During the synthesis of shaped particles, the thickness of the rod or the arm of the tetrapod varies among the different particles much less than the length of the rods or arms does, and it is this thickness that is responsible for the quantum effects in shaped particles. In the case of quantum dots, the particle is quantum confined in all dimensions, so any size distribution arising during synthesis greatly affects the emission spectrum. As a result, the FWHM of rods and tetrapods is much smaller than that of quantum dots; hence, it is qualitatively possible to differentiate between quantum dots and other shaped particles. Another indication of branched structures is a decrease in the intensity of the fluorescence peaks: quantum dots have very high fluorescence values compared to branched particles, since they are quantum confined in all dimensions rather than in just one or two dimensions, as in the case of branched particles.

The emission spectra of all Group 12-16 semiconductor nanoparticles are Gaussian curves, as shown in and . The only difference between them is the band gap energy, and hence each of the Group 12-16 semiconductor nanoparticles fluoresces over a different range of wavelengths. Since its bulk band gap (1.74 eV, 712 nm) falls in the visible region, cadmium selenide (CdSe) is used in various applications such as solar cells, light emitting diodes, etc. Size-evolving emission spectra of cadmium selenide are shown in .
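A rough numerical sketch of reading the FWHM off an emission spectrum; the spectrum below is a synthetic Gaussian (center and width chosen for illustration), for which the exact FWHM is 2·√(2 ln 2)·σ ≈ 2.355 σ:

```python
import math

def fwhm_nm(wavelengths, intensities):
    """Width between the outermost wavelengths at or above half maximum."""
    half = max(intensities) / 2.0
    above = [lam for lam, i in zip(wavelengths, intensities) if i >= half]
    return max(above) - min(above)

# Synthetic Gaussian emission peak centered at 550 nm with sigma = 12 nm
lams = [500.0 + 0.1 * k for k in range(1001)]  # 500-600 nm grid, 0.1 nm steps
sigma, center = 12.0, 550.0
ints = [math.exp(-((l - center) ** 2) / (2 * sigma**2)) for l in lams]

width = fwhm_nm(lams, ints)  # ~28 nm: below the ~30 nm polydispersity threshold
```

Real spectra need background subtraction before this half-maximum search is meaningful.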
Different sized CdSe particles have different colored fluorescence spectra. Since cadmium and selenium are known carcinogens, and nanoparticles are easily absorbed into the human body, there is some concern regarding these particles. However, CdSe coated with ZnS can overcome the harmful biological effects, making cadmium selenide nanoparticles one of the most popular Group 12-16 semiconductor nanoparticles. A combination of the absorbance and emission spectra is shown in for four different sized particles emitting green, yellow, orange, and red fluorescence.

Cadmium telluride (CdTe) has a band gap of 1.44 eV and thus absorbs in the infrared region. The size-evolving CdTe emission spectra are shown in .

Capping a core quantum dot with a semiconductor material of wider band gap than the core reduces nonradiative recombination and results in brighter fluorescence emission. Quantum yields are reduced by the presence of free surface charges, surface defects, and crystal defects, which result in unwanted recombinations. The addition of a shell reduces the nonradiative transitions, so that the majority of the electrons relax radiatively to the valence band; in addition, the shell overcomes some of the surface defects. Thus CdSe-core/ZnS-shell systems exhibit a much higher quantum yield than core-only CdSe quantum dots, as seen in .

In solid state physics, a band gap, also called an energy gap, is an energy range in an ideal solid where no electron states can exist. As shown in , for an insulator or semiconductor the band gap generally refers to the energy difference between the top of the valence band and the bottom of the conduction band. This is equivalent to the energy required to free an outer shell electron from its orbit about the nucleus to become a mobile charge carrier, able to move freely within the solid material. The band gap is a major factor determining the electrical conductivity of a solid.
Substances with large band gaps are generally insulators (i.e., dielectrics), those with smaller band gaps are semiconductors, and conductors either have very small band gaps or no band gap (because the valence and conduction bands overlap, as shown in ). The theory of bands in solids is one of the most important steps in the comprehension of the properties of solid matter. The existence of a forbidden energy gap in semiconductors is an essential concept for explaining the physics of semiconductor devices. For example, the magnitude of the band gap of a solid determines the frequency or wavelength of the light that will be absorbed. Such a value is useful for photocatalysts and for the performance of a dye-sensitized solar cell.

Nanocomposite materials are of interest to researchers the world over for various reasons. One driver for such research is the potential application in next-generation electronic and photonic devices. Particles of nanometer size exhibit unique properties, such as quantum effects, short interface migration distances (and times) for photoinduced holes and electrons in photochemical and photocatalytic systems, and increased sensitivity in thin film sensors.

For a p-n junction, the essential electrical characteristic is that it constitutes a rectifier, which allows the easy flow of charge in one direction but restrains the flow in the opposite direction. The voltage-current characteristic of such a device can be described by the Shockley equation, \ref{17}, in which I0 is the reverse bias saturation current, q the charge of the electron, k Boltzmann's constant, and T the temperature in Kelvin.

\[ I\ =\ I_{0}(e^{qV/kT} - 1) \label{17} \]

When the reverse bias is very large, the current I is saturated and equal to I0. This saturation current is the sum of several different contributions.
These are the diffusion current, the generation current inside the depletion zone, surface leakage effects, and tunneling of carriers between states in the band gap. To a first approximation, under certain conditions, I0 can be interpreted as being due solely to minority carriers accelerated by the depletion-zone field plus the applied potential difference. Therefore it can be shown that, \ref{18}, where A is a constant, Eg the energy gap (slightly temperature dependent), and γ an integer depending on the temperature dependence of the carrier mobility µ.\[ I_{0}\ =\ AT^{(3\ +\ \gamma /2)}e^{-E_{g}(T)/kT} \label{18} \]A more advanced treatment shows that γ is defined by the relation \ref{19}.\[ \mu \ \propto \ T^{\gamma } \label{19} \]After substituting the value of I0 given by \ref{18} into \ref{17}, we take the natural logarithm of both sides and multiply by kT for large forward bias (qV > 3kT); thus, rearranging, we have \ref{20}.\[ qV\ =\ E_{g}(T)\ +\ kT\ \ln (I/A)\ -\ (3+\gamma /2)kT\ \ln T \label{20} \]As ln T can be considered a slowly varying function in the 200 - 400 K interval, for a constant current, I, flowing through the junction a plot of qV versus the temperature should approximate a straight line, and the intercept of this line with the qV axis is the required value of the band gap Eg extrapolated to 0 K. Using \ref{21} instead of qV, we can get a more precise value of Eg.\[ qV_{c}\ =\ qV\ +\ (3+\gamma /2)kT\ \ln T \label{21} \]The value of γ depends on the temperature dependence of the mobility µ, which is a very complex function of the particular material, doping, and processing. In the 200 - 400 K range, one can estimate that the variation ΔEg produced by a change of Δγ in the value of γ is \ref{22}, so a rough value of γ is sufficient for evaluating the correction. By taking the experimental data for the temperature dependence of the mobility µ, a mean value for γ can be found.
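The extrapolation described above can be sketched numerically. The data below are invented for illustration (they are not measured values), and a mobility exponent of γ = 2 is assumed:

```python
import numpy as np

# Hypothetical (T, V) data for a p-n junction at constant forward current;
# the voltages are invented for illustration, not measured values.
k = 8.617e-5                                        # Boltzmann constant, eV/K
gamma = 2                                           # assumed mobility exponent

T = np.array([195.0, 253.0, 273.0, 295.0, 363.0])   # bath temperatures, K
V = np.array([1.28, 1.19, 1.16, 1.13, 1.03])        # junction voltage, V

# Corrected potential, eq. (21): qVc = qV + (3 + gamma/2) k T ln T
qVc = V + (3 + gamma / 2) * k * T * np.log(T)

# Linear extrapolation of qVc versus T to T = 0 K gives Eg(0 K)
slope, intercept = np.polyfit(T, qVc, 1)
print(f"Extrapolated band gap Eg(0 K) ≈ {intercept:.2f} eV")
```

The intercept of the straight-line fit with the qVc axis plays the role of the band gap extrapolated to 0 K.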
Then the band gap energy Eg can be determined.\[ \Delta E_{g}\ =\ 10^{-2}eV\ \Delta \gamma \label{22} \]The electrical circuit required for the measurement is very simple, and the constant current can be provided by a voltage regulator mounted as a constant current source (see ). The potential difference across the junction can be measured with a voltmeter. Five temperature baths were used: around 90 °C with hot water, room temperature water, a water-ice mixture, an ice-salt-water mixture, and a mixture of dry ice and acetone. The result for GaAs is shown in . The plot of corrected qV (qVc) versus temperature gives Eg = 1.56±0.02 eV for GaAs. This may be compared with the literature value of 1.53 eV. The optical method can be described using the measurement of a specific example, e.g., hexagonal boron nitride (h-BN, ). The UV-visible absorption spectrum was recorded to investigate the optical energy gap of the h-BN film based on its optically induced transitions. For this study, a sample of h-BN was first transferred onto an optical quartz plate, and a blank quartz plate was used for the background as the reference substrate. The following Tauc equation was used to determine the optical band gap Eg, \ref{23}, where ε is the optical absorbance, λ is the wavelength and ω = 2π/λ is the angular frequency of the incident radiation.\[ \omega ^{2} \varepsilon \ =\ (\hbar \omega \ -\ E_{g})^{2} \label{23} \]As a shows, the absorption spectrum has one sharp absorption peak at 201 - 204 nm. On the basis of Tauc’s formulation, it is expected that the plot of ε1/2/λ versus 1/λ should be a straight line in the absorption range. Therefore, the intersection point with the x axis is 1/λg (λg is defined as the gap wavelength). The optical band gap can be calculated from Eg = hc/λg. The plot in b shows the ε1/2/λ versus 1/λ curve acquired from the thin h-BN film.
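The final conversion from gap wavelength to band gap, Eg = hc/λg, is a one-line calculation; as a sketch, using the hc product expressed in eV·nm:

```python
# Convert a gap wavelength (x-intercept of a Tauc plot) to an optical band gap.
hc = 1239.84        # Planck constant × speed of light, in eV·nm
lam_g = 223.0       # gap wavelength in nm (the h-BN value discussed in the text)
E_g = hc / lam_g
print(f"Optical band gap ≈ {E_g:.2f} eV")   # ≈ 5.56 eV
```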
For a sample with more than 10 layers, the calculated gap wavelength λg is about 223 nm, which corresponds to an optical band gap of 5.56 eV. Previous theoretical calculations for a single layer of h-BN give a band gap of 6 eV. For h-BN films of 1 layer, 5 layers, and more than 10 layers in thickness, the measured gaps are about 6.0, 5.8, and 5.6 eV, respectively, which is consistent with the theoretical gap value. For thicker samples, the layer-layer interaction increases the dispersion of the electronic bands and tends to reduce the gap. From this example, we can see that the band gap depends on the size of the material; this size dependence is one of the most important features of nanomaterials. A semiconductor is a material that has unique properties in the way it reacts to electrical current. A semiconductor’s ability to conduct an electrical current is intermediate between that of an insulator (such as rubber or glass) and a conductor (such as copper). However, the conductivity of a semiconductor material increases with increasing temperature, a behavior opposite to that of a metal. Semiconductors may also have a lower resistance to the flow of current in one direction than in the other. The properties of semiconductors can best be understood by band theory, where the difference between conductors, semiconductors, and insulators can be understood by increasing separations between a valence band and a conduction band, as shown in . In semiconductors a small energy gap separates the valence band and the conduction band. This energy gap is smaller than that of insulators – which is too large for essentially any electrons from the valence band to enter the conduction band – and larger than that of conductors, where the valence and conduction bands overlap.
At 0 K all of the electrons in a semiconductor lie in the valence band, but at higher temperatures some electrons will have enough energy to be promoted to the conduction band. In addition to the band structure of solids, the concept of carrier generation and recombination is very important to the understanding of semiconducting materials. Carrier generation and recombination is the process by which mobile charge carriers (electrons and electron holes) are created and eliminated. The valence band in semiconductors is normally very full and its electrons immobile, resulting in no flow of electrical current. However, if an electron in the valence band acquires enough energy to reach the conduction band, it can flow freely in the nearly empty conduction band. Furthermore, it will leave behind an electron hole that can flow as current exactly like a physical charged particle. The energy of an electron-electron hole pair is quantified in the form of a neutrally-charged quasiparticle called an exciton. For semiconducting materials, there is a characteristic separation distance between the electron and the hole in an exciton called the exciton Bohr radius. The exciton Bohr radius has large implications for the properties of quantum dots. The process by which electrons gain energy and move from the valence to the conduction band is termed carrier generation, while recombination describes the process by which electrons lose energy and re-occupy the energy state of an electron hole in the valence band. Carrier generation is accompanied by the absorption of radiation, while recombination is accompanied by the emission of radiation. In the 1980s, a new nanoscale (~1-10 nm) semiconducting structure was developed that exhibits properties intermediate between bulk semiconductors and discrete molecules.
These semiconducting nanocrystals, called quantum dots, are small enough to be subject to quantum effects, which gives them interesting properties and the potential to be useful in a wide variety of applications. The most important characteristic of quantum dots (QDs) is that they are highly tunable, meaning that the optoelectronic properties are dependent on the particle's size and shape. As illustrates, the band gap in a QD is inversely related to its size, which produces a blue shift in emitted light as the particle size decreases. The highly tunable nature of QDs results not only from the inverse relationship between band gap size and particle size, but also from the ability to set the size of QDs and make QDs out of a wide variety of materials. The potential to produce QDs with properties tailored to fulfill a specific function has produced an enormous amount of interest in quantum dots (see the section on Optical Properties of Group 12-16 (II-VI) Semiconductor Nanoparticles). As previously mentioned, QDs are small enough that quantum effects influence their properties. At sizes under approximately 10 nm, quantum confinement effects dominate the optoelectronic properties of a material. Quantum confinement results from electrons and electron holes being squeezed into a dimension that approaches a critical quantum measurement, called the exciton Bohr radius. As explained above, the distance between the electron and the hole within an exciton is called the exciton Bohr radius. In bulk semiconductors the exciton can move freely in all directions, but when the size of a semiconductor is reduced to only a few nanometers, quantum confinement effects occur and the band gap properties are changed.
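The exciton Bohr radius itself can be estimated from the hydrogenic model, aB = εr (m0/µ) a0, where a0 is the hydrogen Bohr radius and µ is the reduced exciton mass. The sketch below assumes literature-typical CdSe values for the dielectric constant and effective masses:

```python
# Hydrogenic estimate of the exciton Bohr radius: a_B = eps_r * (m0/mu) * a0.
# The CdSe parameters below are literature-typical values, assumed here for
# illustration only.
a0 = 0.0529          # hydrogen Bohr radius, nm
eps_r = 10.2         # relative dielectric constant of CdSe (assumed)
me, mh = 0.13, 0.45  # electron and hole effective masses, units of m0 (assumed)

mu = 1.0 / (1.0 / me + 1.0 / mh)   # reduced exciton mass, units of m0
a_B = eps_r * a0 / mu
print(f"Exciton Bohr radius ≈ {a_B:.1f} nm")
```

A result of a few nanometers explains why confinement effects appear only for particles in the ~1 - 10 nm range.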
Confinement of the exciton in one dimension produces a quantum well, confinement in two dimensions produces a quantum wire, and confinement in all three dimensions produces a quantum dot. Recombination occurs when an electron from a higher energy level relaxes to a lower energy level and recombines with an electron hole. This process is accompanied by the emission of radiation, which can be measured to give the band gap size of a semiconductor. The energy of the emitted photon in a recombination process of a QD can be modeled as the sum of the band gap energy, the confinement energies of the excited electron and the electron hole, and the bound energy of the exciton, as shown in \ref{24}.\[ E\ =\ E_{bandgap}\ +\ E_{confinement}\ +\ E_{exciton} \label{24} \]The confinement energy can be modeled as a simple particle in a one-dimensional box problem, and the energy levels of the exciton can be represented as the solutions to the equation at the ground level (n = 1) with the mass replaced by the reduced mass. The confinement energy is given by \ref{25}, where ħ is the reduced Planck constant, µ is the reduced mass, and a is the particle radius. me and mh are the effective masses of the electron and the hole, respectively.\[ E_{confinement}\ =\ \frac{\hbar ^{2} \pi ^{2}}{2a^{2}}(\frac{1}{m_{e}}+\frac{1}{m_{h}})=\ \frac{\hbar ^{2}\pi ^{2}}{2\mu a^{2}} \label{25} \]The bound exciton energy can be modeled using the Coulomb interaction between the electron and the positively charged electron hole, as shown in \ref{26}. The negative energy is proportional to the Rydberg energy (Ry) (13.6 eV) and inversely proportional to the square of the size-dependent dielectric constant, εr.
µ and me are the reduced mass and the effective mass of the electron, respectively.\[ E_{exciton}\ =\ -R_{y} \frac{1}{\varepsilon _{r}^{2}}\frac{\mu }{m_{e}}\ =\ -R_{y}^{*} \label{26} \]Using these models and spectroscopic measurements of the emitted photon energy (E), it is possible to measure the band gap of QDs. Photoluminescence (PL) spectroscopy is perhaps the best way to measure the band gap of QDs. PL spectroscopy is a contactless, nondestructive method that is extremely useful in measuring the separation between different energy levels. PL spectroscopy works by directing light onto a sample, where energy is absorbed by electrons in the sample, which are elevated to a higher energy state through a process known as photo-excitation. Photo-excitation produces the electron-electron hole pair. The recombination of the electron-electron hole pair then occurs with the emission of radiation (light). The energy of the emitted light (photoluminescence) relates to the difference in energy levels between the lower (ground) electronic state and the higher (excited) electronic state. This amount of energy is measured by PL spectroscopy to give the band gap size. PL spectroscopy can be divided into two different categories: fluorescence and phosphorescence. It is fluorescent PL spectroscopy that is most relevant to QDs. In fluorescent PL spectroscopy, an electron is raised from the ground state to some elevated excited state. The electron then relaxes (loses energy) to the lowest electronic excited state via a non-radiative process. This non-radiative relaxation can occur by a variety of mechanisms, but QDs typically dissipate this energy via vibrational relaxation. This form of relaxation causes vibrations in the material, which effectively heat the QD without emitting light. The electron then decays from the lowest excited state to the ground state with the emission of light. This means that the energy of the light absorbed is greater than the energy of the light emitted.
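The recombination-energy model of \ref{24} and \ref{25} can be evaluated numerically. The sketch below assumes literature-typical CdSe parameters (bulk gap and effective masses) and neglects the small exciton term:

```python
import math

hbar = 1.0546e-34    # reduced Planck constant, J·s
m0 = 9.109e-31       # electron rest mass, kg
eV = 1.602e-19       # J per eV

# Literature-typical CdSe values, assumed here for illustration
me, mh = 0.13 * m0, 0.45 * m0   # effective masses of electron and hole
Eg_bulk = 1.74                   # bulk CdSe band gap, eV
a = 2.0e-9                       # particle radius, m

mu = 1.0 / (1.0 / me + 1.0 / mh)                      # reduced mass
E_conf = hbar**2 * math.pi**2 / (2 * mu * a**2) / eV  # eq. (25), in eV
E_emit = Eg_bulk + E_conf                             # eq. (24), exciton term neglected
print(f"E_confinement ≈ {E_conf:.2f} eV, emission energy ≈ {E_emit:.2f} eV")
```

A confinement energy approaching 1 eV for a 2 nm radius particle illustrates how strongly the emission shifts to the blue as the dot shrinks.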
The process of fluorescence is schematically summarized in the Jablonski diagram in . A schematic of a basic design for measuring fluorescence is shown in . The requirements for PL spectroscopy are a source of radiation, a means of selecting a narrow band of radiation, and a detector. Unlike optical absorbance spectroscopy, the detector must not be placed along the axis of the sample, but rather at 90° to the source. This is done to minimize the intensity of transmitted source radiation (light scattered by the sample) reaching the detector. shows two different ways of selecting the appropriate wavelength for excitation: a monochromator and a filter. In a fluorimeter, the excitation and emission wavelengths are selected using absorbance or interference filters. In a spectrofluorimeter, the excitation and emission wavelengths are selected by a monochromator. PL spectra can be recorded in two ways: by measuring the intensity of emitted radiation as a function of the excitation wavelength, or by measuring the emitted radiation as a function of the emission wavelength. In an excitation spectrum, a fixed wavelength is used to monitor emission while the excitation wavelength is varied. An excitation spectrum is nearly identical to a sample’s absorbance spectrum. In an emission spectrum, a fixed wavelength is used to excite the sample and the intensity of the emitted radiation is monitored as a function of wavelength. PL spectroscopy data is frequently combined with optical absorbance spectroscopy data to produce a more detailed description of the band gap size of QDs. UV-visible spectroscopy is a specific kind of optical absorbance spectroscopy that measures the transitions from ground state to excited state. This is the opposite of PL spectroscopy, which measures the transitions from excited states to ground states. UV-visible spectroscopy uses light in the visible or ultraviolet range to excite electrons and measures the absorbance of radiation versus wavelength.
A sharp peak in the UV-visible spectrum indicates the wavelength at which the sample best absorbs radiation. As mentioned before, an excitation spectrum is a graph of emission intensity versus excitation wavelength. This spectrum often looks very similar to the absorbance spectrum and in some instances they are exactly the same. However, slight differences in the theory behind these techniques do exist. Broadly speaking, an absorption spectrum measures the wavelengths at which a molecule absorbs light, while an excitation spectrum determines the wavelengths of light necessary to produce emission or fluorescence from the sample, as monitored at a particular wavelength. It is quite possible then for peaks to appear in the absorbance spectrum that would not occur in the PL excitation spectrum. A schematic diagram for a UV-vis spectrometer is shown in . Like PL spectroscopy, the instrument requires a source of radiation, a means of selecting a narrow band of radiation (monochromator), and a detector. Unlike PL spectroscopy, the detector is placed along the same axis as the sample, rather than being directed 90° away from it. A UV-vis spectrum, such as the one shown in , can be used not only to determine the band gap of QDs, but also to determine QD size. Because QDs absorb different wavelengths of light based on the size of the particles, UV-vis (and PL) spectroscopy can provide a convenient and inexpensive way to determine the band gap and/or size of the particle by using the peaks on the spectrum. The highly tunable nature of QDs, as well as their high extinction coefficient, makes QDs well-suited to a large variety of applications and new technologies. QDs may find use as inorganic fluorophores in biological imaging, as tools to improve efficiency in photovoltaic devices, and even as implementations of qubits in quantum computers. Knowing the band gap of QDs is essential to understanding how QDs may be used in these technologies.
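As a sketch of extracting a band gap from such a spectrum, the code below locates the absorption maximum in a hypothetical data set and converts it to energy:

```python
import numpy as np

# Hypothetical UV-vis data (wavelength in nm, absorbance in arbitrary units);
# the values are invented for illustration.
wavelengths = np.array([450, 475, 500, 525, 550, 575, 600])
absorbance  = np.array([0.30, 0.38, 0.52, 0.61, 0.55, 0.31, 0.12])

# The absorption maximum approximates the band gap transition
peak_lambda = wavelengths[np.argmax(absorbance)]
E_g = 1239.84 / peak_lambda     # hc = 1239.84 eV·nm
print(f"Peak at {peak_lambda} nm -> band gap ≈ {E_g:.2f} eV")
```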
PL and optical absorbance spectroscopies provide ideal ways of obtaining this information.This page titled 8.5: Spectroscopic Characterization of Nanoparticles is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.6: Measuring the Specific Surface Area of Nanoparticle Suspensions using NMR
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/08%3A_Structure_at_the_Nano_Scale/8.06%3A_Measuring_the_Specific_Surface_Area_of_Nanoparticle_Suspensions_using_NMR
Surface area is a property of immense importance in the nano-world, especially in the area of heterogeneous catalysis. A solid catalyst works with its active sites binding to the reactants, and hence, for a given active site reactivity, the higher the number of active sites available, the faster the reaction will occur. In heterogeneous catalysis, if the catalyst is in the form of spherical nanoparticles, most of the active sites are believed to be present on the outer surface. Thus it is very important to know the catalyst surface area in order to get a measure of the reaction time. One expresses this in terms of the volume specific surface area, i.e., surface area/volume, although in industry it is quite common to express it as surface area per unit mass of catalyst, e.g., m2/g. Nuclear magnetic resonance (NMR) is the study of the response of atomic nuclei to an external magnetic field. Many nuclei have a net magnetic moment with I ≠ 0, along with an angular momentum in one direction, where I is the spin quantum number of the nucleus. In the presence of an external magnetic field, a nucleus will precess around the field. With all the nuclei precessing around the external magnetic field, a measurable signal is produced. NMR can be used on any nuclei with an odd number of protons or neutrons or both, like the nuclei of hydrogen (1H), carbon (13C), phosphorus (31P), etc. Hydrogen has a relatively large magnetic moment (μ = 14.1 x 10-27 J/T) and hence it is used in NMR logging and NMR rock studies. The hydrogen nucleus consists of a single positively charged proton that can be seen as a loop of current generating a magnetic field. It may be considered as a tiny bar magnet with the magnetic axis along the spin axis itself, as shown in .
In the absence of any external forces, a sample with hydrogen alone will have the individual magnetic moments randomly aligned, as shown in . BET measurements follow the BET (Brunauer-Emmett-Teller) adsorption isotherm of a gas on a solid surface. Adsorption experiments with a gas of known composition can help determine the specific surface area of the solid particle. This technique has long been the main industrial method of surface area analysis. However, BET measurements require a long time for the gas-adsorption step to complete, whereas, as shown in the course of this module, NMR can give results in around 30 minutes, depending on the sample.
BET also requires careful sample preparation with the sample in dry powder form, whereas NMR can accept samples in the liquid state as well. From an atomic standpoint, T1 relaxation occurs when a precessing proton transfers energy to its surroundings as the proton relaxes back from its higher energy state to its lower energy state. With T2 relaxation, apart from this energy transfer there is also dephasing, and hence T2 is less than T1 in general. For solid suspensions, there are three independent relaxation mechanisms involved: bulk relaxation, surface relaxation, and diffusion relaxation. These mechanisms act in parallel so that the net effects are given by \ref{1} and \ref{2}.\[ \frac{1}{T_{2}}=\frac{1}{T_{2, bulk}}\ +\ \frac{1}{T_{2,surface}}+\frac{1}{T_{2,diffusion}} \label{1} \]\[ \frac{1}{T_{1}} = \frac{1}{T_{1, bulk}}\ +\ \frac{1}{T_{1,surface}} \label{2} \]The relative importance of each of these terms depends on the specific scenario. For the case of most solid suspensions in liquid, the diffusion term can be ignored by having a relatively uniform external magnetic field that eliminates magnetic gradients. Theoretical analysis has shown that the surface relaxation terms can be written as \ref{3} and \ref{4}.\[ \frac{1}{T_{1,surface}} = \rho _{1} (\frac{S}{V})_{particle} \label{3} \]\[ \frac{1}{T_{2,surface}} = \rho_{2} (\frac{S}{V})_{particle} \label{4} \]Thus one can use a T1 or T2 relaxation experiment to determine the specific surface area. We shall explain the case of the T2 technique further, as \ref{5}.\[ \frac{1}{T_{2}} = \frac{1}{T_{2,bulk}}+ \rho_{2}(\frac{S}{V})_{particle} \label{5} \]One can determine T2 by spin-echo measurements for a series of samples of known S/V values and prepare a calibration chart as shown in , with the intercept as 1/T2,bulk and the slope as ρ2; one can thus find the specific surface area of an unknown sample of the same material. The sample must be soluble in the solvent.
For proton NMR, about 0.25-1.00 mg/mL is needed, depending on the sensitivity of the instrument. The solvent properties will have an impact on some or all of the spectrum. Solvent viscosity affects the obtainable resolution, while other solvents like water or ethanol have exchangeable protons that will prevent the observation of such exchangeable protons present in the solute itself. Solvents must be chosen such that the temperature dependence of solute solubility is low in the operating temperature range. Solvents containing aromatic groups like benzene can cause shifts in the observed spectrum compared to non-aromatic solvents. NMR tubes are available in a wide range of specifications depending on specific scenarios. The tube tolerances need to be extremely tight when operating with high-strength magnetic fields. The tube needs to be kept extremely clean and free from dust and scratches to obtain good results, irrespective of the quality of the tube. Tubes can be cleaned without scratching by rinsing out the contents and soaking them in a degreasing solution, and by avoiding regular glassware cleaning brushes. After soaking for a while, rinse with distilled water and acetone and dry the tube by blowing filtered nitrogen gas through a pipette or by using a swab of cotton wool. Filter the sample solution by using a Pasteur pipette stuffed with a piece of cotton wool at the neck. Any suspended material like dust can cause changes in the spectrum. When working with dilute aqueous solutions, sweat itself can have a major effect, and so gloves are recommended at all times. Sweat contains mainly water, minerals (sodium 0.9 g/L, potassium 0.2 g/L, calcium 0.015 g/L, magnesium 0.0013 g/L and other trace elements like iron, nickel, zinc, copper, lead and chromium), as well as lactate and urea.
In the presence of a dilute solution of the sample, the proton-containing substances in sweat (e.g., lactate and urea) can produce a large signal that can mask the signal of the sample. The NMR probe is the most critical piece of equipment as it contains the apparatus that must detect the small NMR signals from the sample without adding a lot of noise. The size of the probe is given by the diameter of the NMR tube it can accommodate, with common sizes of 5, 10 and 15 mm. A larger probe can be used in the case of less sensitive samples in order to get as much solute into the active zone as possible. When the sample is available in a smaller quantity, use a smaller tube to get an intrinsically higher sensitivity. A result sheet of a T2 relaxation measurement has the plot of magnetization versus time, which will be linear in a semi-log plot as shown in . Fitting it to the equation, we can find T2, and thus one can prepare a calibration plot of 1/T2 versus S/V of known samples. The following are a few of the limitations of the T2 technique: A study of colloidal silica dispersed in water provides a useful example. shows a representation of an individual silica particle. A series of dispersions in DI water at different concentrations was made and the surface area calculated. The T2 relaxation technique was performed on all of them, with a typical T2 plot shown in ; T2 was recorded as 2117 milliseconds for this sample. A calibration plot was prepared with 1/T2 – 1/T2,bulk as ordinate (the y-axis coordinate) and S/V as abscissa (the x-axis coordinate).
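The two steps just described — fitting the semi-log decay for T2 and inverting the calibration line of \ref{5} — can be sketched numerically with hypothetical data:

```python
import numpy as np

# --- Step 1: extract T2 from a (hypothetical, noiseless) spin-echo decay ---
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])   # time, s
M = 100.0 * np.exp(-t / 2.117)                 # echo amplitudes; T2 = 2.117 s assumed
slope, _ = np.polyfit(t, np.log(M), 1)         # semi-log plot is linear
T2 = -1.0 / slope

# --- Step 2: calibration line 1/T2 = 1/T2,bulk + rho2*(S/V), eq. (5) ---
SV_known = np.array([0.5, 1.0, 2.0, 4.0])       # known S/V values (arb. units)
T2_known = np.array([2.50, 2.10, 1.60, 1.10])   # measured T2 for those samples, s
rho2, inv_T2_bulk = np.polyfit(SV_known, 1.0 / T2_known, 1)

# Invert the calibration for the unknown sample measured in step 1
SV_unknown = (1.0 / T2 - inv_T2_bulk) / rho2
print(f"T2 = {T2:.3f} s -> S/V ≈ {SV_unknown:.2f} (arb. units)")
```

All numerical values here are invented; in practice the calibration samples must be of the same material as the unknown.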
This is called the surface relaxivity plot and is illustrated in . Accordingly, for the colloidal dispersion of silica in DI water, the best fit resulted in \ref{6}, from which one can see that the value of the surface relaxivity, 2.3 x 10-8, is in close accordance with values reported in the literature.\[ \frac{1}{T_{2}}\ -\ \frac{1}{T_{2,bulk}}\ =\ 2.3 \times 10^{-8} (\frac{S}{V})\ -\ 0.0051 \label{6} \]The T2 technique has been used to find the pore-size distribution of water-wet rocks. Information on the pore size distribution helps petroleum engineers model the permeability of rocks from the same area and hence determine the extractable content of fluid within the rocks. The use of NMR for surface area determination has begun to take shape, with a company, Xigo Nanotools, having developed an instrument called the Acorn Area™ to obtain the surface area of a suspension of aluminum oxide. The results obtained from the instrument match closely with results reported by other techniques in the literature. Thus the T2 NMR technique presents a strong case for obtaining the specific surface areas of nanoparticle suspensions. This page titled 8.6: Measuring the Specific Surface Area of Nanoparticle Suspensions using NMR is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.7: Characterization of Graphene by Raman Spectroscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/08%3A_Structure_at_the_Nano_Scale/8.07%3A_Characterization_of_Graphene_by_Raman_Spectroscopy
Graphene is a quasi-two-dimensional material, which comprises layers of carbon atoms arranged in six-membered rings ). Since being discovered by Andre Geim and co-workers at the University of Manchester, graphene has become one of the most exciting topics of research because of its distinctive band structure and physical properties, such as the observation of a quantum Hall effect at room temperature, a tunable band gap, and a high carrier mobility. Graphene can be characterized by many techniques, including atomic force microscopy (AFM), transmission electron microscopy (TEM) and Raman spectroscopy. AFM can be used to determine the number of layers of the graphene, and TEM images can show the structure and morphology of the graphene sheets. In many ways, however, Raman spectroscopy is a much more important tool for the characterization of graphene. First of all, Raman spectroscopy is a simple tool and requires little sample preparation. What’s more, Raman spectroscopy can not only be used to determine the number of layers, but can also identify whether the structure of graphene is perfect, and whether nitrogen, hydrogen or other functionalization is successful. Raman spectroscopy is a useful technique for characterizing sp2 and sp3 hybridized carbon atoms, including those in graphite, fullerenes, carbon nanotubes, and graphene. Single, double, and multi-layer graphenes have also been differentiated by their Raman fingerprints. shows a typical Raman spectrum of N-doped single-layer graphene. The D-mode appears at approximately 1350 cm-1, and the G-mode appears at approximately 1583 cm-1. The other Raman modes are at 1620 cm-1 (D’-mode), 2680 cm-1 (2D-mode), and 2947 cm-1 (D+G-mode). The G-mode is at about 1583 cm-1, and is due to the E2g mode at the Γ-point. The G-band arises from the stretching of the C-C bond in graphitic materials, and is common to all sp2 carbon systems.
The G-band is highly sensitive to strain effects in sp2 systems, and thus can be used to probe modification of the flat surface of graphene. The D-mode is caused by the disordered structure of graphene. The presence of disorder in sp2-hybridized carbon systems results in resonance Raman spectra, and thus makes Raman spectroscopy one of the most sensitive techniques for characterizing disorder in sp2 carbon materials. As is shown by a comparison of and , there is no D peak in the Raman spectrum of graphene with a perfect structure. If there are randomly distributed impurities or surface charges in the graphene, the G-peak can split into two peaks: the G-peak (1583 cm-1) and the D’-peak (1620 cm-1). The main reason is that the localized vibrational modes of the impurities can interact with the extended phonon modes of graphene, resulting in the observed splitting. All kinds of sp2 carbon materials exhibit a strong peak in the range 2500 - 2800 cm-1 in the Raman spectrum. Combined with the G-band, this spectrum is a Raman signature of graphitic sp2 materials and is called the 2D-band. The 2D-band is a second-order two-phonon process and exhibits a strong frequency dependence on the excitation laser energy. What’s more, the 2D band can be used to determine the number of layers of graphene. This is mainly because in multi-layer graphene the shape of the 2D band is quite different from that in single-layer graphene. As shown in , the 2D band in single-layer graphene is much more intense and sharper than the 2D band in multi-layer graphene. This page titled 8.7: Characterization of Graphene by Raman Spectroscopy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.8: Characterization of Covalently Functionalized Single-Walled Carbon Nanotubes
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/08%3A_Structure_at_the_Nano_Scale/8.08%3A_Characterization_of_Covalently_Functionalized_Single-Walled_Carbon_Nanotubes
Characterization of nanoparticles in general, and carbon nanotubes in particular, remains a technical challenge even though the chemistry of covalent functionalization has been studied for more than a decade. Several researchers have noted that the characterization of products represents a constant problem in nanotube chemistry. A systematic tool, or suite of tools, is needed for adequate characterization of chemically functionalized single-walled carbon nanotubes (SWNTs), and is necessary for declaring success or failure in functionalization trials.

So far, a wide range of techniques has been applied to characterize functionalized SWNTs: infrared (IR), Raman, and UV/visible spectroscopies, thermogravimetric analysis (TGA), atomic force microscopy (AFM), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), etc. A summary of the attributes of each characterization method is given in Table \(\PageIndex{1}\).

Thermogravimetric analysis (TGA) is the most widely used method to determine the level of sidewall functionalization, since most functional groups are labile or decompose upon heating while the SWNTs are stable up to 1200 °C under an Ar atmosphere. The weight loss at 800 °C under Ar is often used to determine the functionalization ratio by this indirect method. Unfortunately, quantification can be complicated by the presence of multiple functional groups. Also, TGA does not provide direct evidence for covalent functionalization, since it cannot differentiate between covalent attachment and physical adsorption.

XPS confirms the presence of different elements in functionalized SWNTs. This is useful for identification of heteroatom elements such as F and N; XPS can then be used indirectly for quantification with simple substituent groups. Deconvolution of XPS spectra is useful for studying fine structures on SWNTs.
However, the overlapping of binding energies in the spectrum complicates quantification.

Raman spectroscopy is very informative and important for characterizing functionalized SWNTs. The tangential G mode (ca. 1550 - 1600 cm-1) is characteristic of sp2 carbons in the hexagonal graphene network. The D-band, the so-called disorder mode (found at ca. 1295 cm-1), appears due to disruption of the hexagonal sp2 network of SWNTs. The D-band has been widely used to characterize functionalized SWNTs and to confirm that functionalization is covalent and occurs at the sidewalls. However, the observation of a D-band in Raman spectra can also be related to the presence of defects such as vacancies, 5-7 pairs, or dopants. Thus, using Raman to provide evidence of covalent functionalization must be done with caution. In particular, the use of Raman spectroscopy for determining the degree of functionalization is not reliable.

It has been shown that quantification with Raman is complicated by the distribution of functional groups on the sidewall of SWNTs. For example, if fluorinated SWNTs (F-SWNTs) are functionalized with thiol- or thiophene-terminated moieties, TGA shows that they have similar levels of functionalization, yet their relative D:G intensity ratios in the Raman spectra are quite different. The use of sulfur substituents allows gold nanoparticles of 5 nm diameter to be attached as a "chemical marker" for direct imaging of the distribution of functional groups. AFM and STM suggest that the functional groups of the thiol-SWNTs are grouped together, while the thiophene groups are widely distributed on the sidewall of the SWNTs. Thus the difference is not due to a significant difference in substituent concentration but in substituent distribution, even though Raman shows different D:G ratios.

IR spectroscopy is useful for characterizing functional groups bound to SWNTs. A variety of organic functional groups on the sidewall of SWNTs have been identified by IR, such as COOH(R), -CH2, -CH3, -NH2, -OH, etc.
However, it is difficult to get direct functionalization information from IR spectroscopy. The C-F group has been identified by IR in F-SWNTs, but the C-C, C-N, and C-O groups associated with sidewall functionalization have not been observed in the appropriately functionalized SWNTs.

UV/visible spectroscopy is perhaps the most accessible technique that provides information about the electronic states of SWNTs, and hence functionalization. The absorption spectrum shows bands at ca. 1400 nm and 1800 nm for pristine SWNTs. A complete loss of this structure is observed after chemical alteration of the SWNT sidewalls. However, such information is not quantitative, and it also does not show what type of functional moiety is on the sidewall of the SWNTs.

NMR can be considered a "new" characterization technique as far as SWNTs are concerned. Solution-state NMR is limited for SWNT characterization because low solubility and slow tumbling of the SWNTs result in broad spectra. Despite this issue, solution 1H NMR spectra have been reported for SWNTs functionalized by carbenes, nitrenes, and azomethine ylides, because of the high solubility of the derivatized SWNTs. However, proof of covalent functionalization cannot be obtained from 1H NMR. As an alternative, solid-state 13C NMR has been employed to characterize several functionalized SWNTs, allowing successful observation of sidewall organic functional groups such as carboxylic and alkyl groups. However, there has been a lack of direct evidence of sp3 carbons on the sidewall of SWNTs that would provide proof of covalent functionalization.

Solid-state 13C NMR has been successfully employed in the characterization of F-SWNTs through the direct observation of the sp3 C-F carbons on the sidewall of the SWNTs.
This methodology has been transferred to more complicated systems; it has been found that a longer side chain makes it easier to observe the sp3 C-X sidewall carbons.

Solid-state NMR is a potentially powerful technique for characterizing functionalized SWNTs because molecular dynamics information can also be obtained. The observation that higher side-chain mobility can be achieved with a longer side chain offers a method of exploring functional group conformation. In fact, solid-state NMR has been used to study the molecular mobility of functionalized multi-walled carbon nanotubes.

AFM, TEM, and STM are useful imaging techniques for characterizing functionalized SWNTs. These techniques are routinely used to provide an "image" of an individual nanoparticle, as opposed to an average over all the particles.

AFM shows the morphology of the surface of SWNTs. The height profile in AFM is often used to show the presence of functional groups on the sidewall of SWNTs. Individual SWNTs can be probed by AFM, sometimes providing information on the dispersion and exfoliation of bundles. Measurement of heights along an individual SWNT can be correlated with the substituent group, i.e., the larger the alkyl chain of a sidewall substituent, the greater the measured height. However, AFM does not distinguish whether those functional groups are covalently attached or physically adsorbed on the surface of the SWNTs.

TEM can be used to image SWNTs directly, and at high resolution it clearly shows the sidewall of an individual SWNT. However, the resolution of TEM is not sufficient to directly observe covalent attachment of chemical modification moieties, i.e., to differentiate between sp2 and sp3 carbon atoms. TEM can be used to provide information on the effect of functionalization on the dispersion and exfoliation of ropes. Samples are usually prepared from very dilute concentrations of SWNTs, and the sample needs to be very homogeneous to give reliable data.
As with AFM, TEM shows only a very small portion of the sample; using these techniques to characterize functionalized SWNTs and to evaluate the dispersion of samples in solvents must therefore be done with caution.

STM offers considerable insight into the structure and surface of functionalized SWNTs. STM measures electronic structure, although topographical information can sometimes be indirectly inferred from STM images. STM has been used to characterize F-SWNTs, gold-marked SWNTs, and organically functionalized SWNTs. The distribution of functional groups can be inferred from STM images, since the location of a substituent alters the localized electronic structure of the tube. STM images the position/location of chemical changes to the SWNT structure; the band-like structure of F-SWNTs was first disclosed by STM.

STM has the same problem inherent to AFM and TEM: with a small sample size, the result may not be statistically relevant. Also, the chemical identity of the features on SWNTs cannot be determined by STM; rather, they have to be identified by spectroscopic methods such as IR or NMR. A further difficulty with STM imaging is that the sample has to be conductive, so deposition of the SWNTs onto a gold (or similar) surface is necessary.

This page titled 8.8: Characterization of Covalently Functionalized Single-Walled Carbon Nanotubes is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.9: Characterization of Bionanoparticles by Electrospray-Differential Mobility Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/08%3A_Structure_at_the_Nano_Scale/8.09%3A_Characterization_of_Bionanoparticles_by_Electrospray-Differential_Mobility_Analysis
Electrospray-differential mobility analysis (ES-DMA) is an analytical technique that first uses an electrospray to aerosolize particles and then DMA to characterize their electrical mobility at ambient conditions. This versatile tool can be used to quantitatively characterize biomolecules and nanoparticles from 0.7 to 800 nm. In the 1980s, it was discovered that ES could be used for producing aerosols of biomacromolecules. The predecessor of the DMA was developed by Hewitt in 1957 to analyze the charging of small particles. The modified DMA, which is a type of ion mobility analyzer, was developed by Knutson and Whitby in 1975 and was later commercialized. Among the several designs, the cylindrical DMA has become the standard and has been used for the generation of monodisperse aerosols, as well as for the classification of polydisperse aerosols.

The first integration of ES with DMA occurred in 1996, when this technique was used to determine the size of different globular proteins. DMA was refined over the past decade for use in a wide range of applications for the characterization of polymers, viruses, bacteriophages, and nanoparticle-biomolecule conjugates. Although numerous publications have reported the use of ES-DMA in medicinal and pharmaceutical applications, this module describes the general principles of the technique and its application to the analysis of gold nanoparticles.

ES-DMA consists of an electrospray source (ES) that aerosolizes bionanoparticles and a class of ion mobility analyzer (DMA) that measures their electrical mobility by balancing the electrical and drag forces on the particles. The DMA continuously separates particles based on their charge-to-size ratio. A schematic of the experimental setup for ES-DMA analysis of gold nanoparticles is shown in the figure.

First, the analyte, dissolved in a volatile buffer such as ammonium acetate, [NH4][O2CCH3], is placed inside a pressure chamber.
Then, the solution is delivered to the nozzle through a fused silica capillary to generate multiply charged droplets. ES nebulizers produce droplets of 100-400 nm in diameter, but these are highly charged.

In the next step, the droplets are mixed with air and carbon dioxide (CO2) and passed through the charge reducer, or neutralizer, where the solvent continues to evaporate and the charge distribution decreases. The charge reducer is an ionizing α-radiation source, such as 210Po, that ionizes the carrier gas and reduces the net charge on the particles to a Fuchs-Boltzmann distribution. As a result, the majority of the droplets contain particles with a single net charge, and these pass directly to the DMA. The DMA separates positively or negatively charged particles by applying a negative or positive potential, respectively. The figure shows a single-channel design of a cylindrical DMA, which is composed of two concentric electrodes between which a voltage is applied. The inner electrode is maintained at a controlled voltage from 1 V to 10 kV, whereas the outer electrode is electrically grounded.

In the third step, the aerosol flow (Qa) enters through a slit adjacent to one electrode, and the sheath air (air or N2) flow (Qs) is introduced to separate the aerosol flow from the other electrode. After a voltage is applied between the inner and outer electrodes, an electric field is formed, and charged particles with a specific electrical mobility are attracted to a charged collector rod. The positions of the charged particles along the length of the collector depend on their electrical mobility (Zp), the fluid flow rates, and the DMA geometry.
Particles with a high electrical mobility are collected in the upper part of the rod (particles a and b), while particles with a low electrical mobility are collected in the lower part of the rod (particle d). The electrical mobility selected for particles that traverse the full length of the classifier is given by \ref{1}, where R1 and R2 are the radii of the inner and outer electrodes, L is the length of the collector rod, and V is the applied voltage.

\[ Z_{p} = \frac{(Q_{s}\ +\ Q_{a})\ln(R_{2}/R_{1})}{4\pi L V} \label{1} \]

With the value of the electrical mobility, the particle diameter (dp) can be determined by using Stokes' law as described by \ref{2}, where n is the number of charge units, e is the elementary unit of charge (1.602 x 10-19 C), Cc is the Cunningham slip correction factor, and µ is the gas viscosity. Cc, given by \ref{3}, accounts for the noncontinuum flow effect when dp is similar to or smaller than the mean free path (λ) of the carrier gas.

\[ d_{p} \ =\frac{n e C_{c}}{3\pi \mu Z_{p}} \label{2} \]

\[ C_{c} = 1\ +\ \frac{2\lambda }{d_{p}} \left[1.257\ +\ 0.4e^{-\frac{1.10 d_{p}}{2\lambda }}\right] \label{3} \]

In the last step, the size-selected particles are detected with a condensation particle counter (CPC) or an aerosol electrometer (AE) that determines the particle number concentration. The CPC has lower detection and quantitation limits and is the most sensitive detector available. The AE is used when particle concentrations are high, or when the particles are so small that they cannot be detected by the CPC. The figure shows the operation of the CPC, in which the aerosol is mixed with butanol (C4H9OH) or water vapor (the working fluid), which condenses on the particles to produce supersaturation. Hence, large particles (around 10 μm) are obtained, detected optically, and counted. Since each droplet grows to approximately the same size, the count is not biased. The particle size distribution is obtained by changing the applied voltage.
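Since Cc in \ref{3} depends on dp, converting a measured mobility into a diameter via \ref{2} requires iteration. The following Python sketch illustrates this; the gas properties (μ ≈ 1.81 x 10-5 Pa·s and λ ≈ 68 nm, typical of air at ambient conditions) and the damped fixed-point scheme are illustrative choices, not part of the ES-DMA method itself.

```python
import math

E_CHARGE = 1.602e-19  # elementary unit of charge, C

def slip_correction(dp, mfp=68e-9):
    """Cunningham slip correction factor Cc of Eq. 3.
    dp: particle diameter (m); mfp: mean free path of the carrier gas (m)."""
    return 1 + (2 * mfp / dp) * (1.257 + 0.4 * math.exp(-1.10 * dp / (2 * mfp)))

def mobility_from_diameter(dp, n=1, mu=1.81e-5, mfp=68e-9):
    """Electrical mobility Zp of a particle of diameter dp (Eq. 2 rearranged)."""
    return n * E_CHARGE * slip_correction(dp, mfp) / (3 * math.pi * mu * dp)

def diameter_from_mobility(Zp, n=1, mu=1.81e-5, mfp=68e-9, tol=1e-15):
    """Invert Eq. 2 for dp by damped fixed-point iteration (Cc depends on dp)."""
    dp = 100e-9  # initial guess, m
    for _ in range(200):
        dp_new = n * E_CHARGE * slip_correction(dp, mfp) / (3 * math.pi * mu * Zp)
        if abs(dp_new - dp) < tol:
            break
        dp = 0.5 * (dp + dp_new)  # damping stabilizes the oscillating iteration
    return dp
```

For a singly charged particle the two functions round-trip: converting a 50 nm diameter to a mobility and back recovers 50 nm to well within measurement precision.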
Generally, the performance of the CPC is evaluated in terms of the minimum size that is counted with 50% efficiency.

ES-DMA provides the mobility diameter of particles and their concentration, in number of particles per unit volume of analyzed gas, so that the particle size distribution is obtained, as shown in the figure. Another form of data representation is the differential distribution plot of ΔN/Δlog dp vs. dp. This presentation has a logarithmic size axis, which is usually more convenient because particles are often distributed over a wide range of sizes. To obtain the actual particle size distribution, the raw data acquired with the ES-DMA are corrected for charge, for the transfer function of the DMA, and for the collection efficiency of the CPC.

For the charge correction, a charge reducer or neutralizer is necessary to reduce the problem of multiple charging and simplify the size distribution. The charge reduction depends on the particle size, and multiple charging becomes more likely as the particle size increases. For instance, for 10 nm particles the percentage of singly charged particles is lower than that of neutral particles, and after a negative voltage is applied only the positively charged particles are collected. Conversely, for 100 nm particles the percentage of singly charged particles increases and multiple charges are present; hence, after a negative bias is applied, +1 and +2 particles are collected. More charges on a particle correspond to a higher electrical mobility.

The transfer function of the DMA modifies the input particle size distribution and affects the resolution, as shown in the figure. This transfer function depends on the operating conditions, such as the flow rates and the geometry of the DMA. Furthermore, the transfer function can be broadened by Brownian diffusion, and this effect shapes the actual measured size distribution.
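The ΔN/Δlog dp representation mentioned above amounts to a simple binning calculation; the sketch below assumes the counts have already been corrected as described, and the bin edges and counts in the usage note are made-up illustrative numbers.

```python
import math

def differential_distribution(bin_edges, counts):
    """Convert particle counts per size bin into dN/dlog(dp) values.

    bin_edges: ascending bin boundary diameters (any consistent unit);
    counts: particle counts per bin (len(counts) == len(bin_edges) - 1).
    Returns (geometric bin midpoints, dN/dlogdp values)."""
    mids, values = [], []
    for lo, hi, n in zip(bin_edges, bin_edges[1:], counts):
        dlog = math.log10(hi / lo)       # bin width on the log axis
        mids.append(math.sqrt(lo * hi))  # geometric midpoint of the bin
        values.append(n / dlog)
    return mids, values
```

For example, differential_distribution([10, 100], [500]) places 500 counts at a geometric midpoint of about 31.6, and the dN/dlog dp value equals 500 because that bin spans exactly one decade.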
The theoretical resolution is measured by the ratio of the sheath flow to the aerosol flow under balanced flow conditions (sheath flow equals excess flow, and aerosol flow in equals monodisperse aerosol flow out).

The CPC has a size limit of detection of 2.5 nm, because small particles are difficult to activate at the supersaturation of the working fluid. Therefore, a CPC collection efficiency correction is required, which consists of calibrating the CPC against an electrometer.

Citrate-stabilized gold nanoparticles (AuNPs) with diameters in the range 10-60 nm, and conjugated AuNPs, have been analyzed by ES-DMA. This investigation showed that the formation of salt particles on the surface of the AuNPs can interfere with the mobility analysis because of the reduction in analyte signal. Since sodium citrate is a non-volatile soluble salt, ES produces two types of droplets: one consisting of AuNPs and salt, and the other containing only salt. Thus, samples must be cleaned by centrifugation prior to determining the size of bare AuNPs. The figure presents the size distributions of AuNPs of distinct diameters, with peaks corresponding to salt residues.

The mobility size of the bare AuNPs (dp0) can be obtained by using \ref{4}, where dp,m and ds are the mobility sizes of the AuNPs encrusted with salts and of the salt particle, respectively. However, the presence of a self-assembled monolayer (SAM) produces a difference in electrical mobility between conjugated and bare AuNPs. Hence, the determination of the diameter of the salt-free AuNPs is critical for distinguishing the increment in size after functionalization with the SAM.
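The salt correction of \ref{4} cited above is a volume subtraction (the salt volume of a salt-only particle is removed from the encrusted particle), and the subsequent coating thickness of \ref{5} is a diameter difference. A minimal sketch, with made-up values in nm:

```python
def bare_core_diameter(dpm, ds):
    """Eq. 4: mobility diameter of the bare AuNP, assuming the measured
    particle is the AuNP core plus a salt volume equal to that of a
    salt-only particle of diameter ds (volume additivity)."""
    return (dpm**3 - ds**3) ** (1.0 / 3.0)

def coating_thickness(dp_coated, dp0):
    """Eq. 5: increase in mobility diameter after SAM functionalization."""
    return dp_coated - dp0

# Illustrative numbers: a salt-encrusted particle measured at ~10.7 nm
# together with 6 nm salt-only particles implies a ~10 nm bare core.
```

The cube-root form follows directly from treating the mobility diameters as the diameters of spheres whose volumes add.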
The coating thickness of the SAM, corresponding to the change in particle size (ΔL), is calculated by using \ref{5}, where dp and dp0 are the coated and uncoated particle mobility diameters, respectively.

\[ d_{p0} =\ \sqrt[3]{d_{p,m}^{3}\ -\ d^{3}_{s}} \label{4} \]

\[ \Delta L\ =\ d_{p}\ -\ d_{p0} \label{5} \]

In addition, the change in particle size can be estimated with a simple rigid core-shell model, which gives theoretical values of ΔL1 higher than the experimental ones (ΔL). A modified core-shell model has been proposed in which a size-dependent effect on ΔL2 is observed over a range of particle sizes. AuNPs of 10 nm and 60 nm were coated with MUA, a charged alkanethiol, and the particle size distributions of the bare and coated AuNPs are presented in the figure. The increment in average particle size is 1.2 ± 0.1 nm for the 10 nm AuNPs and 2.0 ± 0.3 nm for the 60 nm AuNPs, so that ΔL depends on particle size.

A tandem technique is ES-DMA-APM, which determines the mass of ligands adsorbed to nanoparticles after size selection with the DMA. APM is an aerosol particle mass analyzer that measures the mass of particles by balancing electrical and centrifugal forces. DMA-APM has been used to analyze the density of carbon nanotubes, the porosity of nanoparticles, and the mass and density differences of metal nanoparticles that undergo oxidation.

This page titled 8.9: Characterization of Bionanoparticles by Electrospray-Differential Mobility Analysis is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
9.1: Interferometry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/09%3A_Surface_Morphology_and_Structure/9.01%3A_Interferometry
The processes which occur at the surfaces of crystals depend on many external and internal factors, such as crystal structure and composition, the conditions of the medium in which the crystal surface exists, and others. The appearance of a crystal surface is the result of the complexity of interactions between the crystal surface and the environment. The mechanisms of surface processes such as dissolution or growth are studied by the physical chemistry of surfaces. There are many computational techniques which allow us to predict the change in surface morphology of different minerals under the influence of different conditions, such as temperature, pressure, pH, and the chemical composition of the solution reacting with the surface. For example, the Monte Carlo method is widely used to simulate the dissolution or growth of crystals. However, theoretical models of surface processes need to be verified by natural observations. We can extract much useful information about surface processes by studying the change of crystal surface structure under the influence of environmental conditions. Changes in surface structure can be studied through observation of the crystal surface topography. The topography can be observed directly, macroscopically, or by using microscopic techniques. Microscopic observation allows us to study even very small changes and to estimate the rate of processes by following the change of crystal surface topography in time.

Much laboratory work has been devoted to the reconstruction of surface changes and the interpretation of the dissolution and precipitation kinetics of crystals. The invention of AFM made it possible to monitor changes of surface structure during dissolution or growth. However, to detect and quantify the results of dissolution or growth, it is necessary to determine surface area changes over a significantly larger field of view than AFM can provide.
More recently, vertical scanning interferometry (VSI) has been developed as a new tool to distinguish and trace the reactive parts of crystal surfaces. VSI and AFM are complementary techniques, well suited in practice to detecting surface changes.

The VSI technique provides a method for quantification of surface topography at the angstrom to nanometer level. Time-dependent VSI measurements can be used to study the surface-normal retreat across crystal and other solid surfaces during dissolution. Therefore, VSI can be used to measure mineral dissolution rates, both directly and indirectly, with high precision. Similarly, VSI can be used to study the kinetics of crystal growth.

Optical interferometry allows us to make extremely accurate measurements and has been used as a laboratory technique for almost a hundred years. Thomas Young observed the interference of light and measured the wavelength of light in an experiment performed around 1801. This experiment gave evidence for Young's arguments for the wave model of light. The discovery of interference laid the basis for the development of interferometry techniques, used successfully in both microscopic and astronomical investigations.

The physical principles of optical interferometry exploit the wave properties of light. Light can be thought of as an electromagnetic wave propagating through space. If we assume that we are dealing with a linearly polarized wave propagating in a vacuum in the z direction, the electric field E can be represented by a sinusoidal function of distance and time,

\[ E(x,y,z,t)\ =\ a \cos[2\pi (vt\ -\ z/\lambda )] \label{1} \]

where a is the amplitude of the light wave, v is the frequency, and λ is its wavelength. The term within the square brackets is called the phase of the wave. Let's rewrite this equation in a more compact form,

\[ E(x,y,z,t)\ =\ a \cos(\omega t\ -\ kz) \label{2} \]

where ω=2πv is the circular frequency and k=2π/λ is the propagation constant.
Let's also transform this second equation into complex exponential form,

\[ E(x,y,z,t)\ =\ Re(a\ e^{i(\omega t\ -\ \phi)})\ =\ Re(A\ e^{i\omega t}) \label{3} \]

where ϕ=2πz/λ and A=a e−iϕ is known as the complex amplitude. If n is the refractive index of a medium in which the light propagates and the light wave traverses a distance d in that medium, the equivalent optical path is

\[ p\ =\ n\ \cdot \ d \label{4} \]

When two light waves are superposed, the resulting intensity at any point depends on whether they reinforce or cancel each other. This is the well-known phenomenon of interference. We will assume that the two waves are propagating in the same direction and are polarized with their field vectors in the same plane. We will also assume that they have the same frequency. The complex amplitude at any point in the interference pattern is then the sum of the complex amplitudes of the two waves, so that we can write

\[ A\ =\ A_{1}\ +\ A_{2} \label{5} \]

where A1=a1exp(−iϕ1) and A2=a2exp(−iϕ2) are the complex amplitudes of the two waves. The resultant intensity is, therefore,

\[ I\ =\ |A|^{2}\ =\ I_{1}\ +\ I_{2}\ +\ 2(I_{1}I_{2})^{1/2} \cos (\Delta \phi) \label{6} \]

where I1 and I2 are the intensities of the two waves acting separately, and Δϕ=ϕ1−ϕ2 is the phase difference between them. If the two waves are derived from a common source, the phase difference corresponds to an optical path difference,

\[ \Delta p\ =\ (\lambda /2 \pi) \Delta \phi \label{7} \]

If Δϕ, the phase difference between the beams, varies linearly across the field of view, the intensity varies cosinusoidally, giving rise to alternating light and dark bands or fringes.
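A quick numerical check of the two-beam intensity of \ref{6}: for equal-intensity beams, the superposition runs from four times the single-beam intensity (in phase) down to zero (out of phase). The function below is a direct transcription of the equation.

```python
import math

def interference_intensity(I1, I2, dphi):
    """Eq. 6: intensity of two superposed beams with phase difference dphi."""
    return I1 + I2 + 2 * math.sqrt(I1 * I2) * math.cos(dphi)

# Equal-intensity beams, I1 = I2 = 1:
# in phase (dphi = 0) the intensity is 4; out of phase (dphi = pi) it is 0.
```

Unequal beam intensities reduce the fringe contrast, since the cosine term can no longer cancel the constant I1 + I2 completely.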
The intensity of an interference pattern has its maximum value,

\[ I_{max}\ =\ I_{1}\ +\ I_{2}\ +\ 2(I_{1}I_{2})^{1/2} \label{8} \]

when Δϕ=2mπ, where m is an integer, and its minimum value, determined by

\[ I_{min}\ =\ I_{1}\ +\ I_{2}\ -\ 2(I_{1}I_{2})^{1/2} \label{9} \]

when Δϕ=(2m+1)π.

The principle of interferometry is widely used in many types of interferometric setups. One of the earliest is the Michelson interferometer. The idea is quite simple: interference fringes are produced by splitting a beam of monochromatic light so that one beam strikes a fixed mirror and the other a movable mirror. An interference pattern results when the reflected beams are brought back together. The Michelson interferometric scheme is shown in the figure. The difference in path lengths between the two beams is 2x, because the beams traverse the designated distances twice. Interference occurs when the path difference is equal to an integer number of wavelengths,

\[ \Delta p\ =\ 2x\ =\ m\lambda \ \ (m= 0, \pm 1, \pm 2 ... ) \label{10} \]

Modern interferometric systems are more complicated. Using special phase-measurement techniques, they are capable of much more accurate height measurements than can be obtained just by looking directly at the interference fringes and measuring how they depart from being straight and equally spaced. A typical interferometric system consists of a light source, a beamsplitter, an objective system, a system for registering the signals and converting them into digital format, and a computer which processes the data. A vertical scanning interferometer contains all of these parts; the figure shows a configuration of a VSI interferometric system.

Many modern interferometric systems use a Mirau objective in their construction. The Mirau objective is based on a Michelson interferometer and consists of a lens, a reference mirror, and a beamsplitter. The idea behind the interfering beams is simple: two beams (red lines) travel along the optical axis.
Then they are reflected from the reference surface and the sample surface, respectively (blue lines). After this, the beams are recombined to interfere with each other. An illumination (light source) system is used to direct light onto the sample surface through a cube beamsplitter and the Mirau objective. The sample surface within the field of view of the objective is uniformly illuminated by beams with different incidence angles. Any point on the sample surface reflects those incident beams as a divergent cone. Similarly, the point on the reference surface symmetric with that on the sample surface also reflects the illuminating beams in the same form.

The Mirau objective directs the beams reflected off the reference and the sample surface onto a CCD (charge-coupled device) sensor through a tube lens. The CCD sensor is an analog shift register that enables the transport of analog signals (electric charges) through successive stages (capacitors), controlled by a clock signal. The resulting interference fringe pattern is detected by the CCD sensor, and the corresponding signal is digitized by a frame grabber for further processing with a computer.

The distance between a minimum and a maximum of the interferogram produced by the two beams reflected from the reference and sample surfaces is known: it is exactly half the wavelength of the light source. Therefore, with a simple interferogram the vertical resolution of the technique would be limited to λ/2. If we used a laser light source with a wavelength of 300 nm, the resolution would be only 150 nm. This is not good enough for a detailed, near-atomic scale investigation of crystal surfaces. Fortunately, the vertical resolution of the technique can be improved significantly by moving either the reference or the sample by a fraction of the wavelength of the light. In this way, several interferograms are produced.
They are then all overlaid, and their phase shifts compared by the computer software.

Most optical testing interferometers now use phase-shifting techniques, not only because of the high resolution but also because phase-shifting is a highly accurate, rapid way of getting the interferogram information into the computer. Use of this technique also makes the inherent noise in the data-taking process very low. As a result, in a good environment, angstrom or sub-angstrom surface height measurements can be performed. As stated above, in phase-shifting interferometry the phase difference between the interfering beams is changed at a constant rate as the detector is read out. Once the phase is determined across the interference field, the corresponding height distribution on the sample surface can be determined. The phase distribution \(φ(x, y)\) is recorded using the CCD camera.

Let's assign \(A(x, y)\), \(B(x, y)\), \(C(x, y)\) and \(D(x, y)\) to the resulting interference light intensities corresponding to phase-shifting steps of 0, π/2, π, and 3π/2. These intensities can be obtained by moving the reference mirror through displacements of λ/8, λ/4, and 3λ/8, respectively. The equations for the resulting intensities are:

\[ A(x,y)\ =\ I_{1}(x,y)\ +\ I_{2}(x,y) \cos(\alpha (x,y)) \label{11} \]

\[ B(x,y)\ =\ I_{1}(x,y)\ -\ I_{2}(x,y) \sin(\alpha (x,y)) \label{12} \]

\[ C(x,y)\ =\ I_{1}(x,y)\ -\ I_{2}(x,y) \cos(\alpha (x,y)) \label{13} \]

\[ D(x,y)\ =\ I_{1}(x,y)\ +\ I_{2}(x,y) \sin(\alpha (x,y)) \label{14} \]

where \(I_1(x,y)\) and \(I_2(x,y)\) are the intensities of the two overlapping beams from two symmetric points on the test surface and the reference, respectively.
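Solving the four equations above for the phase, and converting the phase into a local height, can be sketched as follows. Using atan2 rather than a bare arctangent keeps the correct quadrant of the phase; this is a standard practical choice rather than something specified in the text.

```python
import math

def phase_and_height(A, B, C, D, wavelength):
    """Recover the phase and local surface height from the four intensities
    measured at phase shifts of 0, pi/2, pi and 3pi/2 (Eqs. 11-14).
    D - B = 2*I2*sin(alpha) and A - C = 2*I2*cos(alpha), so their ratio
    gives tan(alpha) independently of the beam intensities."""
    phi = math.atan2(D - B, A - C)         # quadrant-aware arctangent
    h = wavelength / (4 * math.pi) * phi   # phase-to-height conversion
    return phi, h

# Synthetic check: build A..D from Eqs. 11-14 with a known phase alpha
# and confirm the recovered phase matches.
I1, I2, alpha = 2.0, 1.0, 0.7
A = I1 + I2 * math.cos(alpha)
B = I1 - I2 * math.sin(alpha)
C = I1 - I2 * math.cos(alpha)
D = I1 + I2 * math.sin(alpha)
phi, h = phase_and_height(A, B, C, D, 600e-9)
```

Note that the recovered phase is insensitive to the background intensity I1, which cancels in both differences.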
Solving Equations \ref{11} - \ref{14}, the phase map \(φ(x, y)\) of the sample surface is given by the relation:

\[ \phi (x,y)\ =\ \arctan \frac{D(x,y)\ -\ B(x,y)}{A(x,y)\ -\ C(x,y)} \label{15} \]

Once the phase is determined across the interference field, pixel by pixel on a two-dimensional CCD array, the local height distribution/contour, \(h(x, y)\), on the test surface is given by

\[ h(x,y)\ =\ \frac{\lambda}{4\pi } \phi (x,y) \label{16} \]

Normally the resulting fringes can be brought into the form of a linear fringe pattern by adjusting the relative position between the reference mirror and the sample surface. Hence any distorted interference fringe indicates a local profile/contour of the test surface.

It is important to note that the Mirau objective is mounted on a capacitive, closed-loop controlled PZT (piezoelectric actuator) so that phase shifting can be accurately implemented. The PZT is based on the piezoelectric effect, in which an electric potential is generated by applying pressure to a piezoelectric material. Materials of this type are used to convert electrical energy to mechanical energy, and vice versa. The precise motion that results when an electric potential is applied to a piezoelectric material is important for nanopositioning. Actuators using the piezo effect have been commercially available for 35 years, and in that time they have transformed the world of precision positioning and motion control.

The vertical scanning interferometer is also known as a white-light interferometer (WLI), because it uses white light as the light source. With this type of source, a separate fringe system is produced for each wavelength, and the resultant intensity at any point of the examined surface is obtained by summing these individual patterns.
Due to the broad bandwidth of the source, the coherence length L of the source is short:\[ L\ =\ \frac{\lambda ^{2}}{n \Delta \lambda} \label{17} \]where λ is the center wavelength, n is the refractive index of the medium, and ∆λ is the spectral width of the source. Good-contrast fringes are therefore obtained only when the path lengths of the interfering beams are close to each other. If we vary the path length of the beam reflected from the sample, the height of the sample can be determined by finding the position at which the fringe contrast is a maximum; in this case the interference pattern exists only over a very shallow depth of the surface. As we vary the path of the sample-reflected beam, we also move the sample in the vertical direction to find the position at which the maximum fringe intensity is achieved; this position is converted into the height of that point on the sample surface.

The combination of phase-shifting technology with a white-light source provides a very powerful tool for measuring the topography of quite rough surfaces with a precision of up to 1-2 nm. Through a software package for quantitatively evaluating the resulting interferogram, such a system can retrieve the surface profile and topography of the sample objects.

Besides the interferometric methods described above, there are several other microscopic techniques for studying crystal surface topography. The most common are scanning electron microscopy (SEM) and atomic force microscopy (AFM). All these techniques are used to obtain information about the surface structure; however, they differ from each other in the physical principles on which they are based.

SEM allows us to obtain images of surface topography with a resolution much higher than that of conventional light microscopes. It is also able to provide information about other surface characteristics, such as chemical composition and electrical conductivity.
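Returning to the coherence-length relation of Equation \ref{17}: a quick numerical sketch shows why white-light fringes are localized to a depth of only a few microns. The center wavelength and bandwidth below are assumed, generic values for a visible-band source, not figures from the text.

```python
# Coherence length L = lambda^2 / (n * delta_lambda) for a white-light source.
center_wavelength = 600e-9   # m, assumed centre of the visible band
bandwidth = 200e-9           # m, assumed spectral width of the source
n = 1.0                      # refractive index of air

L = center_wavelength**2 / (n * bandwidth)
print(f"coherence length = {L * 1e6:.1f} um")  # prints 1.8 um
```

A coherence length of roughly two microns means high-contrast fringes appear only when the two arm lengths match to within that distance, which is exactly what makes vertical scanning a height probe.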
All these types of data are generated by the interaction of an accelerated electron beam with the sample surface. When electrons strike the sample surface, they lose their energy by repeated random scattering and absorption within an outer layer whose depth varies from 100 nm to 5 µm. The thickness of this outer layer, also known as the interaction layer, depends on the energy of the electrons in the beam and on the composition and density of the sample. The interaction between the electron beam and the surface produces several types of signal. The main type is secondary (inelastically scattered) electrons, produced by the interaction between the beam electrons and weakly bound electrons in the conduction band of the sample. Secondary electrons are ejected from the K-orbitals of atoms within a surface layer only a few nanometers thick; because secondary electrons are low-energy electrons (<50 eV), only those formed within the first few nanometers of the sample surface have enough energy to escape and be detected. Secondary electrons provide the most common signal for investigating surface topography, with a lateral resolution of up to 0.4 - 0.7 nm.

High-energy beam electrons can also be elastically scattered back from the surface. This backscattered-electron signal gives information about the chemical composition of the surface, because the energy of the backscattered electrons depends on the atomic weight of the atoms within the interaction layer. These electrons can also generate secondary electrons, and they can escape from the surface or travel farther into the sample than the secondary electrons do. The SEM image is formed from the intensity of the secondary electron emission from the sample at each x,y data point during the scanning of the surface.

AFM is a very popular tool for studying surface dissolution. An AFM set-up scans a sharp tip, mounted on the end of a flexible cantilever, across a sample surface. The tips typically have an end radius of 2 to 20 nm, depending on tip type.
When the tip touches the surface, the interaction forces cause the cantilever to deflect. The interactions between tip and sample surface involve mechanical contact forces, van der Waals forces, capillary forces, chemical bonding, electrostatic forces, magnetic forces, etc. The deflection of the cantilever is usually measured by reflecting a laser beam off the back of the cantilever into a split photodiode detector; a schematic drawing of the AFM is shown in the figure. The two most commonly used modes of operation are contact mode AFM and tapping mode AFM, which are conducted in air or liquid environments.

In contact mode, the AFM scans the sample while monitoring the change in cantilever deflection with the split photodiode detector. A feedback loop maintains a constant cantilever deflection by moving the scanner vertically to keep the signal constant. The distance the scanner moves vertically at each x,y data point is stored by the computer to form the topographic image of the sample surface. In tapping mode, the AFM oscillates the cantilever at its resonance frequency (typically ~300 kHz) and lightly "taps" the tip on the surface during scanning. The electrostatic forces increase as the tip approaches the sample surface, and therefore the amplitude of the oscillation decreases. The laser-deflection method is used to detect the amplitude of the cantilever oscillation. As in contact mode, a feedback loop maintains a constant oscillation amplitude by moving the scanner vertically at every x,y data point, and recording this movement forms the topographical image. The advantage of tapping mode over contact mode is that it eliminates the lateral, shear forces present in contact mode; this enables tapping mode to image soft, fragile, and adhesive surfaces without damaging them, whereas contact mode can damage such surfaces.

All the techniques described above are widely used in studying surface nano- and micromorphology.
However, each method has its own limitations, and the proper choice of analytical technique depends on the features of the analyzed surface and the primary goals of the research. All of these techniques can image a sample surface with quite good resolution. The lateral resolution of VSI is much lower than that of the other techniques: 150 nm for VSI versus 0.5 nm for AFM and SEM. The vertical resolution of AFM (0.5 Å) is better than that of VSI (1 - 2 nm); however, VSI can measure a large vertical range of heights (1 mm), which makes it possible to study even very rough surfaces. By contrast, AFM allows us to measure only quite smooth surfaces because of its relatively small vertical scan range (7 µm). SEM has lower resolution than AFM because it requires a coating of conductive material with a thickness of several nm.

A significant advantage of VSI is that it can provide a large field of view (845 × 630 µm for a 10x objective) of the tested surface. Recent studies of surface roughness characteristics showed that surface roughness parameters increase with increasing field of view until a critical size of 250,000 µm² is reached. This is larger than the maximum field of view produced by AFM (100 × 100 µm) but can easily be obtained by VSI. SEM is also capable of producing images with a large field of view; however, SEM provides only 2D images from one scan, while AFM and VSI yield 3D images. This makes quantitative analysis of surface topography by SEM more complicated; for example, the topography of membranes must be studied using both cross-section and top-view images.

The limitations of each technique described above are critically important for choosing the appropriate technique to study surface processes. Let us explore the application of these techniques to the study of crystal dissolution. When crystalline matter dissolves, the resulting changes in crystal surface topography can be observed using microscopic techniques.
If we apply an unreactive mask (silicon, for example) to the crystal surface and place the crystalline sample into the experimental reactor, we obtain two types of surface: dissolving, and masked (unreacted). After some period of time the unmasked crystal surface dissolves and its z-level changes. To study these changes ex situ, we can remove the sample from the reaction cell, remove the mask, and measure the average height difference \(\Delta \bar{h}\) between the unreacted and dissolved areas. The average heights of the dissolved and unreacted areas are obtained through digital processing of the microscope data. The velocity of normal surface retreat, vSNR, during the time interval ∆t is defined by \ref{18}:\[ \nu _{SNR}\ =\ \frac{\Delta \bar{h}}{\Delta t} \label{18} \]Dividing this velocity by the molar volume \(\bar{V}\) (cm³/mol) gives a global dissolution rate in the familiar units of moles per unit area per unit time:\[ R\ =\ \frac{\nu _{SNR}}{\bar{V}} \label{19} \]This method allows us to obtain experimental dissolution rates simply by precisely measuring average surface heights. Moreover, using this method we can measure local dissolution rates at etch pits by monitoring changes in the volume and density of etch pits across the surface over time. The VSI technique can perform these measurements because of its large vertical scanning range. To obtain rate values that do not depend on the particular spot of the crystal surface observed, we need to measure sufficiently large areas. The VSI technique provides data from areas large enough to study surfaces with heterogeneous dissolution dynamics and to obtain average dissolution rates.
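Equations \ref{18} and \ref{19} amount to simple arithmetic once the average height difference is in hand. The sketch below uses hypothetical numbers: a made-up height retreat over one hour and a calcite-like molar volume, both assumptions chosen purely for illustration.

```python
# Surface-retreat rate calculation from masked-dissolution VSI data.
# All input values are hypothetical, for illustration only.
delta_h = 250e-7        # cm, assumed average height difference between
                        # masked and dissolved areas (250 nm)
delta_t = 3600.0        # s, assumed duration of the dissolution run
molar_volume = 36.9     # cm^3/mol, calcite-like molar volume (assumed)

v_snr = delta_h / delta_t     # cm/s, normal surface retreat velocity (Eq. 18)
R = v_snr / molar_volume      # mol cm^-2 s^-1, global dissolution rate (Eq. 19)
print(f"v_SNR = {v_snr:.3e} cm/s, R = {R:.3e} mol cm^-2 s^-1")
```

Keeping every length in centimeters and time in seconds is what delivers the rate directly in mol cm⁻² s⁻¹ without any further conversion factors.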
Therefore, VSI makes it possible to measure rates of normal surface retreat during dissolution and to observe the formation, growth, and distribution of etch pits on the surface. However, if the dissolution mechanism is controlled by the dynamics of atomic steps and kink sites within a smooth atomic surface area, observing the dissolution process requires a more precise technique. AFM can provide information about changes in step morphology in situ as dissolution occurs. For example, the immediate response of the dissolving surface to changes in environmental conditions (ion concentrations in solution, pH, etc.) can be studied using AFM.

SEM is also used to examine the micro- and nanotexture of solid surfaces and to study dissolution processes. This method allows us to observe large areas of the crystal surface at high resolution, which makes it possible to measure a wide variety of surfaces. A significant disadvantage of this method is the requirement to coat the examined sample with a conductive substance, which limits the resolution of SEM. Another disadvantage is that the analysis is conducted in vacuum; a more recent technique, environmental SEM (ESEM), overcomes these requirements and makes it possible to examine even liquids and biological materials. A third disadvantage is that SEM produces only 2D images, which creates some difficulty in measuring \(\Delta \bar{h}\) within the dissolving area. One advantage of this technique is that it can measure not only surface topography but also the chemical composition and other characteristics of the surface. This is used to monitor changes in chemical composition during dissolution.

As research interests focus on progressively smaller dimensions, the need for nanoscale characterization techniques has seen a steep rise in demand.
In addition, the wide scope of nanotechnology across all fields of science has extended the application of characterization techniques to a multitude of disciplines. Dual polarization interferometry (DPI) is an example of a technique developed to solve a specific problem but then expanded to characterize fields ranging from surface science to protein studies and crystallography. With a simple optical instrument, DPI can perform label-free sensing of refractive index and layer thickness in real time, providing vital information about a system at the nanoscale, including the elucidation of structure-function relationships.

DPI was conceived in 1996 by Dr. Neville Freeman and Dr. Graham Cross when they recognized a need to measure refractive index and adlayer thickness simultaneously in protein membranes to gain a true understanding of the dynamics of the system. They patented the technique in 1998, and the instrument was commercialized by Farfield Group Ltd. in 2000. Freeman and Cross unveiled the first full publication describing the technique in 2003, in which they measured well-known protein systems and compared their data to X-ray crystallography and neutron reflection data. The first system they studied was sulpho-NHS-LC-biotin coated with streptavidin and a biotinylated peptide capture antibody, and the second system was BS3 coated with anti-HSA; the molecular structures are shown in the figure. Their results showed good agreement with known layer thicknesses, and the method showed clear advantages over neutron reflection and surface plasmon resonance. A schematic and a picture of the instrument used by Freeman and Cross in this publication are shown in the figures.

The optical power of DPI comes from the ability to measure two different interference fringe patterns simultaneously in real time.
Phase changes in these fringe patterns result from changes in refractive index and layer thickness that can be detected by the waveguide interferometer, and resolving these interference patterns yields the refractive index and layer thickness values. A representation of the interferometer is shown in the figure. The interferometer is composed of a simplified slab waveguide, which guides a wave of light in one transverse direction without scattering. A broad laser beam is shone on the side facet of stacked waveguides separated by a cladding layer; the waveguides act as a sensing layer and a reference layer that produce an interference pattern in a decaying (evanescent) electric field.

The layer thickness and refractive index are determined by measuring two phase changes in the system simultaneously, because both transverse-electric and transverse-magnetic polarizations are allowed through the waveguides. Phase changes in each polarization of the light wave are lateral shifts of the wave peak from a given reference peak, caused by the changes in refractive index and layer thickness that result from molecular fluctuations in the sample. Switching between transverse-electric and transverse-magnetic polarizations happens very rapidly (every 2 ms), with the switching performed by a liquid-crystal wave plate. This enables real-time, simultaneous measurement of both parameters.

The first techniques rigorously compared to DPI were neutron reflection (NR) and X-ray diffraction. These studies demonstrated that DPI has a very high precision of 40 pm, which is comparable to NR and superior to X-ray diffraction. Additionally, DPI can provide real-time information under conditions similar to an in vivo environment, and its instrumental requirements are far simpler than those for NR.
However, NR and X-ray diffraction are able to provide structural information that DPI cannot determine. Comparisons between DPI and alternative techniques have been performed since those initial evaluations, including surface plasmon resonance (SPR), atomic force microscopy (AFM), and quartz crystal microbalance with dissipation monitoring (QCM-D).

SPR is well established for characterizing protein adsorption and was in use before DPI was developed. These techniques are very similar in that both use an optical element based on an evanescent field, but they differ greatly in the method of calculating the mass of adsorbed protein. Rigorous testing showed that both give very accurate results, but their strengths differ: because SPR uses spot-testing with an area of 0.26 mm² while DPI averages measurements over the length of the entire 15 mm chip, SPR is recommended for kinetic studies where diffusion is involved. However, DPI shows much greater accuracy than SPR when measuring refractive index and layer thickness.

Atomic force microscopy is a very different analytical technique from DPI because it is a type of microscopy used for high-resolution surface characterization. Hence, AFM and DPI are well suited to be used in conjunction: AFM can provide accurate molecular structures and surface mapping, while DPI provides the layer thickness that AFM cannot determine.

QCM-D is a technique that can be used with DPI to provide complementary data. QCM-D differs from DPI in that it measures both the mass of the solvent and the mass of the adsorbed protein layer; the two techniques can therefore be used together to determine the amount of hydration in the adsorbed layer. QCM-D can also quantify the supramolecular conformation of the adlayer using energy-dissipation calculations, while DPI can detect these conformational changes using birefringence, making the techniques orthogonal.
One way in which DPI is superior to QCM-D is that the latter technique loses accuracy as the film becomes very thin, while DPI retains its accuracy down to the angstrom scale. A tabulated comparison of these techniques and their ability to measure structural detail, in vivo conditions, and real-time data is shown in Table \(\PageIndex{2}\).

DPI has been most heavily applied to protein studies. It has been used to elucidate membrane crystallization, protein orientation in a membrane, and conformational changes. It has also been used to study protein-protein interactions, protein-antibody interactions, and the stoichiometry of binding events. Since its establishment through protein interaction studies, the applications of DPI have expanded to include thin-film studies. DPI was compared with ellipsometry and QCM-D studies to show that it can be applied to heterogeneous thin films, by applying revised analytical formulas to estimate the thickness, refractive index, and extinction coefficient of heterogeneous films that absorb light. A non-uniform density-distribution model was developed and tested on polyethylenimine deposited onto silica and compared to QCM-D measurements. Additionally, this revised model was able to calculate the mass of multiple molecular species in composite films, even when the molecules absorbed different amounts of light; this information is valuable for determining surface composition. The structure of the polyethylenimine used to form an adsorbing film is shown in the figure.

A challenge in measuring the layer thickness of thin films such as polyethylenimine is that DPI's evanescent field produces inaccurate measurements in inhomogeneous films as the film thickness increases; an error of approximately 5% was seen when the layer thickness was increased to 90 nm.
Data from this study determining the densities throughout the polyethylenimine film are shown in the figure. Similar to the thin-film characterization studies, thin layers of adsorbed polymers have also been elucidated using DPI. It has been demonstrated that polyacrylamide forms two different adsorption conformations on resin, which provides useful information about the adsorption behavior of the polymer. This information is industrially important because polyacrylamide is widely used in the oil industry, and the adsorption of polyacrylamide onto resin is known to affect oil/water interfacial stability.

Initial adsorption kinetics and conformations have also been illuminated using DPI for bottlebrush polyelectrolytes, shown in the figure. It was shown that polyelectrolytes with high charge density initially adsorb in layers parallel to the surface, but as they are replaced by low-charge-density species, the alignment changes to a perpendicular arrangement on the surface.

In 2009, Wang et al. showed that DPI could be used for small-molecule sensing. In their first study describing this use of DPI, they used thymine-rich single-stranded DNA to complex Hg2+ ions. When the DNA complexed with Hg2+, it transformed from a random-coil structure to a hairpin structure. This change in structure could be detected by DPI at Hg2+ concentrations smaller than the threshold concentration allowed in drinking water, indicating the sensitivity of this label-free method for Hg2+ detection. High selectivity was indicated by the absence of similar structural changes for Mg2+, Ca2+, Mn2+, Fe3+, Cd2+, Co2+, Ni2+, Zn2+ or Pb2+ ions. A graphical description of this experiment is shown in the figure.

This page titled 9.1: Interferometry is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R.
Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
9.2: Atomic Force Microscopy (AFM)
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/09%3A_Surface_Morphology_and_Structure/9.02%3A_Atomic_Force_Microscopy_(AFM)
Atomic force microscopy (AFM) is a high-resolution form of scanning probe microscopy, also known as scanning force microscopy (SFM). The instrument uses a cantilever with a sharp tip at the end to scan over the sample surface. As the probe scans over the sample surface, attractive or repulsive forces between the tip and sample (usually van der Waals forces, but also others such as electrostatic and hydrophobic/hydrophilic interactions) cause a deflection of the cantilever. The deflection is measured by a laser that is reflected off the cantilever into photodiodes. As one of the photodiodes collects more light, it creates an output signal that is processed to provide information about the vertical bending of the cantilever. This data is then sent to a scanner that controls the height of the probe as it moves across the surface. The variation in height applied by the scanner can then be used to produce a three-dimensional topographical representation of the sample.

The contact mode method applies a constant force for tip-sample interactions by maintaining a constant tip deflection. The tip communicates the nature of the interactions that the probe experiences at the surface via feedback loops, and the scanner moves the entire probe to maintain the original deflection of the cantilever. The constant force is calculated and maintained using Hooke's law, \ref{1}, which relates the force (F), spring constant (k), and cantilever deflection (x). Force constants typically range from 0.01 to 1.0 N/m. Contact mode usually has the fastest scanning times but can deform the sample surface. It is also the only mode that can attain "atomic resolution."\[ F\ =\ -kx \label{1} \]In tapping mode, the cantilever is externally oscillated at its fundamental resonance frequency. A piezoelectric element on top of the cantilever is used to adjust the amplitude of oscillation as the probe scans across the surface.
The deviations in the oscillation frequency or amplitude due to interactions between the probe and surface are measured and provide information about the surface or the types of material present in the sample. This method is gentler than contact AFM, since the tip is not dragged across the surface, but it requires longer scanning times. It also tends to provide higher lateral resolution than contact AFM.

In noncontact mode, the cantilever is oscillated just above its resonance frequency; this frequency decreases as the tip approaches the surface and experiences the forces associated with the material. The average tip-to-sample distance is measured as the oscillation frequency or amplitude is kept constant, which can then be used to image the surface. This method exerts very little force on the sample, which extends the lifetime of the tip. However, it usually does not provide very good resolution unless performed under a strong vacuum.

A common problem in AFM images is the presence of artifacts, which are distortions of the actual topography usually caused by issues with the probe, the scanner, or the image processing. The AFM scans slowly, which makes it more susceptible to external temperature fluctuations, leading to thermal drift; this produces artifacts and inaccurate distances between topographical features. It is also important to consider that the tip is not perfectly sharp and therefore may not provide the best aspect ratio, which leads to a convolution of the true topography: features appear too large or too small because the width of the probe cannot precisely follow the particles and holes on the surface. For this reason, tips with smaller radii of curvature provide better imaging resolution. The tip can also produce false images and poorly contrasted images if it is blunt or broken. The movement of particles on the surface due to the movement of the cantilever can cause noise, which forms streaks or bands in the image.
Artifacts can also arise when the tip is of inadequate proportions compared to the surface being scanned; for this reason it is important to use the ideal probe for the particular application.

The sample size varies with the instrument, but a typical size is 8 mm by 8 mm with a typical height of 1 mm. Solid samples present a problem for AFM because the tip can shift the material as it scans the surface. Solutions or dispersions are best for applying as uniform a layer of material as possible, in order to obtain the most accurate values of particle heights. This is usually done by spin-coating the solution onto freshly cleaved mica, which allows the particles to stick to the surface once it has dried.

AFM is particularly versatile in its applications since it can be used at ambient temperature and in many different environments. It can be used in many different areas to analyze different kinds of samples, such as semiconductors, polymers, nanoparticles, biotechnology, and cells, among others. The most common application of AFM is morphological studies, to gain an understanding of the topography of the sample. Since the material is commonly in solution, AFM can also give the user an idea of the dispersibility of the material as well as the homogeneity of the particles within that dispersion. It can also provide much information about the particles being studied, such as particle size, surface area, electrical properties, and chemical composition. Certain tips are capable of determining the principal mechanical, magnetic, and electrical properties of the material. For example, in magnetic force microscopy (MFM) the probe has a magnetic coating that senses magnetic, electrostatic, and atomic interactions with the surface.
This type of scanning can be performed in static or dynamic mode and depicts the magnetic structure of the surface.

For carbon nanotubes, atomic force microscopy is usually used to study the topographical morphology. By measuring the thickness of the material it is possible to determine whether bundling occurred and to what degree. Other dimensions of the sample can also be measured, such as the length and width of the tubes or bundles. It is also possible to detect impurities, functional groups, or remaining catalyst by studying the images. Various methods of producing nanotubes exist, and each yields a slightly different profile of homogeneity and purity. The impurities can be carbon-coated metal, amorphous carbon, or other allotropes of carbon such as fullerenes and graphite. These facts can be used to compare the purity and homogeneity of samples made by different processes, as well as to monitor these characteristics as different steps or reactions are performed on the material. The distance between the tip and the surface has proven to be an important parameter in noncontact-mode AFM: if the tip moves past the threshold distance, approximately 30 μm, it can move or damage the nanotubes. If this occurs, a useful characterization cannot be performed because of the resulting distortions of the image.

Atomic force microscopy is best applied to aggregates of fullerenes rather than individual ones. While AFM can accurately perform height analysis of individual fullerene molecules, it has poor lateral resolution, and it is difficult to accurately depict the width of an individual molecule. Another common issue with contact AFM and fullerene-deposited films is that the tip shifts clusters of fullerenes, which can lead to discontinuities in sample images.

The following is intended as a guide for use of the Nanoscope AFM system within the Shared Equipment Authority at Rice University.
However, it can be adapted to similar AFM instruments. Please familiarize yourself with the figures; all relevant parts of the AFM setup are shown.

Sign in, then turn on each component of the system. Select the imaging mode using the mode selector switch located on the left-hand side of the atomic force microscope (AFM) base; there are three modes.

Most particulate samples are imaged by immobilizing them onto a mica sheet fixed to a metal puck. Samples that are in a solvent are easily deposited. To make a sample holder, a sheet of mica is punched out and stuck to double-sided carbon tape on a metal puck. To ensure a pristine surface, the mica sheet is cleaved by removing the top sheet with Scotch™ tape to reveal a pristine layer underneath. The sample can be spin-coated onto the mica or air dried.

Once all parameters are set, click engage (the icon with the green down arrow) to start engaging the cantilever to the sample surface and to begin image acquisition. The bottom of the screen should read "tip secured". When the tip reaches the surface it automatically begins imaging.

If the amplitude set point is high, the cantilever moves far away from the surface, since the oscillation is damped as it approaches. While in free oscillation (set the amplitude set point to 3), adjust the drive amplitude so that the output voltage (seen on the scope) is 2 V. Large changes in this value while an experiment is running indicate that something is on the tip. Once the output voltage is at 2 V, bring the amplitude set point back down to a value that puts the z position line white and in the center of the bar in the software (1 V is very close).

Select channel 1 data type - height. Select channel 2 data type - amplitude. Amplitude looks like a 3D image and is an excellent visualization tool for a presentation; however, the real data is the height data.

Bring the tip down (begin with the amplitude set point at 2).
The goal is to tap hard enough to get a good image, but not so hard as to damage the surface or the tip. Set the amplitude set point to three clicks below just-touching by lowering it with three left-arrow clicks on the keyboard. The tip Z-center position scale on the right-hand screen shows the extension of the piezo scanner; when the tip is properly adjusted, expect this value to be near the center.

Select view/scope mode (the scope icon). Check whether trace and retrace are tracking each other. If so, the lines should look the same, although they probably will not overlap each other vertically or horizontally. If they are tracking well, the tip is scanning the sample surface and you may return to view/image mode (the image icon). If they are not tracking well, adjust the scan rate, gains, and/or set point to improve the tracking. If trace and retrace look completely different, you may need to decrease the set point one or two clicks with the left arrow key until they start showing common features in both directions. Then reduce the scan rate: a reasonable value for scan sizes of 1-3 µm would be 2 Hz. Next, try increasing the integral gain. As you increase the integral gain, the tracking should improve, although you will reach a value beyond which the noise will increase as the feedback loop starts to oscillate; if this happens, reduce the gains, and if trace and retrace still do not track satisfactorily, reduce the set point again. Once the tip is tracking the surface, choose view/image mode.

Integral gain controls the amount of integrated error signal used in the feedback calculation. The higher this parameter is set, the better the tip will track the sample topography.
However, if it is set too high, noise due to feedback oscillation will be introduced into the scan. Proportional gain controls the amount of proportional error signal used in the feedback calculation.

Once the amplitude set point is adjusted with the phase data, change channel 2 to amplitude. The data scale can be changed freely; it only affects the display, not the data. In the amplitude image, lowering the voltage increases the contrast.

Move small amounts on the image surface with the X and Y offsets to avoid large, uninteresting objects. For example, setting the Y offset to -2 will remove features at the bottom of the image, thus shifting the image up; changing it to -3 will then move the image one more unit up. Make sure you are using µm and not nm if you expect to see a real change.

To move further, disengage the tip (click the red up-arrow icon so that the tip moves up 25 µm and secures). Move the upper translational stage to keep the tip in view in the light camera, then re-engage the tip. If the shadow in the image is drawn out, lower the amplitude set point even further. The area of the image being drawn is controlled by the frame pull-down menu (and the up and down arrows). Lower the set point and redraw the same neighborhood to see if there is improvement; the proportional and integral gains can also be adjusted. The frame window allows you to restart from the top, bottom, or a particular line.

Another way to adjust the amplitude set point is to click on signal scope and ensure that trace and retrace overlap. To stop Y rastering, disable the slow scan axis.

To take a better image, increase the number of lines (512 is the maximum), decrease the speed (1 Hz), and lower the amplitude set point. The resolution is about 10 nm in the X and Y directions due to the size of the tip; the resolution in the Z direction is less than 1 nm. Changing the scan size allows you to zoom in on features.
You can zoom in on a center point by using the zoom-in box (left-click to toggle between box position and size), or you can manually enter a scan size on the left-hand screen.

Click on capture (the camera icon) to grab images. To speed things up, restart the scan at an edge to grab a new image after making any changes in the scan and feedback parameters. When parameters are changed, the capture option will toggle to "next". There is a forced-capture option, which allows you to collect an image even if parameters were switched during the capture; it is not completely reliable.

To change the file name, select capture filename under the capture menu. The file will be saved in the capture directory, d:\capture. To save the picture, select TIFF export under the utility pull-down menu. The zip drive is G:.

Analysis involves flattening the image and measuring various particle dimensions; click the spectrum button. Select the height data (under the image pull-down menu, select left or right image). New icons appear in the "analysis" menu.

To remove the bands (striping) in the image, select the rolling pin. The order of flattening is the order of the baseline correction: a raw offset is order 0, and a straight sloping line is order 1. Typically a second-order correction is chosen to remove "scanner bow", the dark troughs on the image plane. To remove more shadows, draw exclusion boxes over large objects and then re-flatten. Be sure to save the file under a new name; the default is to overwrite it.

In section analysis, use the multiple-cursor option to measure a particle in all dimensions. Select fixed cursor. You can save pictures of this information, but measurements must be written down!
There is also a particle analysis menu.

Disengage the cantilever and make sure that it is in secure mode before you move it to other spots or change to another sample. Loosen the clamp to remove the tip and holder. Remove the tip and return it to the gel sticky tape using the fine tweezers. Recover the sample with tweezers. Close the program and log out of the instrument. After the experiment, turn off the monitor and the power of the light source; leave the controller on. Sign out in the log book.

Atomic force microscopy (AFM) has become a powerful tool to investigate 2D materials such as graphene, both for nano-scale imaging and for the measurement and analysis of frictional properties. The basic structure and function of the typical Nanoscope AFM system is discussed in the practical guide above. A schematic of the contact mode of AFM is shown in . As the tip scans the surface of the sample, the cantilever deflects by Δz, which is a function of the position of the tip. If we know the mechanical (spring) constant of the cantilever, C, the interaction force, or normal load, between the tip and the sample can be calculated by \ref{2}, where C is determined by the material and intrinsic properties of the tip and cantilever. As shown in a, the back side of the cantilever is treated as a mirror to reflect the laser, so a change in tip position changes the path of the laser beam, which is then detected by the quadrant detector.

\[ F\ =\ C \cdot \Delta z \label{2} \]

The topography, height profile, phase, and lateral force channels can all be acquired in contact-mode AFM. Compared with tapping mode, the lateral force channel, i.e., the friction, is particularly important. The signal acquired directly is the current change caused by the lateral force between the sample and the tip, so its unit is usually nA.
To calculate the real friction force in newtons (N) or nano-newtons (nN), this current signal must be multiplied by a friction coefficient, which is determined by the intrinsic properties of the material that makes up the tip.

A typical AFM is shown in b. The sample stage is inside the bottom chamber. Gas can be blown into the chamber, or a vacuum pumped, as needed for testing under different ambients; this is especially important when measuring the frictional properties of materials.

For sample preparation, fixing the sample on mica, as described earlier in the guide, is appropriate for synthesized chemical powders. Graphene, however, can simply be placed on any flat substrate, such as mica, SiC, sapphire, or silica; the solid-state sample on its substrate is placed onto the sample stage, and further work can be conducted.

For data collection, the topography and height profile are acquired using the same method as in tapping mode. However, two additional pieces of information are necessary in order to determine the frictional properties of the material. The first is the normal load, described by \ref{2}; however, what we directly control is the set point, a current, which is proportional to the normal load of the tip on the sample. A vertical force coefficient (CVF) is therefore needed to convert the set point into the normal load applied to the material, as illustrated in \ref{3}.

\[ F\ =\ I_{setpoint} \cdot C_{VF} \label{3} \]

The vertical force coefficient is obtained from \ref{4}, where K is the stiffness of the cantilever; K can be obtained through a vibrational model of the cantilever and is usually provided with commercial AFM tips.
L is the optical coefficient of the cantilever; it can be acquired by calibrating the force-displacement curve of the tip, as shown in . L is then given by the slope of process 1 or 6 in .

\[ C_{VF} \ =\ \frac{K}{L} \label{4} \]

 is a typical friction image, composed of n × n scan lines; each point is the friction force value at that point. We take the average friction for the area of interest, then multiply this current signal by the lateral force coefficient to obtain the actual friction force.

During collection of the original lateral force (friction) data, the friction information for every line in the image is actually composed of two data lines: trace and retrace (see ). The average of the results for trace (black line) and retrace (red line) is taken as the friction signal at each point on the line. That is, the actual friction is determined from \ref{5}, where Iforward and Ibackward are the data points derived from the trace and retrace of the friction image, and CLF is the lateral force coefficient.

\[ F_{f}\ =\ \frac{I_{forward}\ -\ I_{backward}}{2} \cdot C_{LF} \label{5} \]

There are several ways to compare the details of frictional properties at the nanoscale. is an example comparing the friction on the sample (in this case, few-layer graphene) with the friction on the substrate (SiO2). Qualitatively, we can easily see that the friction on the graphene is much smaller than that on the SiO2 substrate; graphene is a good lubricant with low friction, and the original data confirm this. shows multiple layers of graphene on mica. By selecting a cross-section line and comparing both the height profile and the friction profile, we obtain information on how the friction relates to the structure along that section.
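The conversions in \ref{3}, \ref{4}, and \ref{5} can be sketched in a few lines of code. All numbers below (the stiffness K, the optical slope L, the lateral force coefficient, and the current readings) are hypothetical placeholders; in practice they come from the tip data sheet and the force-displacement calibration described above.

```python
def vertical_force_coefficient(K, L):
    """C_VF = K / L (Eq. 4): cantilever stiffness over the optical-lever slope."""
    return K / L

def normal_load(I_setpoint, C_VF):
    """F = I_setpoint * C_VF (Eq. 3): convert the current set point to a normal load."""
    return I_setpoint * C_VF

def friction_force(I_forward, I_backward, C_LF):
    """F_f = (I_forward - I_backward) / 2 * C_LF (Eq. 5):
    half the trace/retrace difference at one pixel, scaled to a force."""
    return (I_forward - I_backward) / 2.0 * C_LF

# Hypothetical calibration and signal values, for illustration only:
C_VF = vertical_force_coefficient(K=40.0, L=2.0e7)   # stiffness and calibration slope
F_N = normal_load(I_setpoint=5.0, C_VF=C_VF)         # current set point -> load
F_f = friction_force(I_forward=3.0, I_backward=1.0, C_LF=1.0e-9)  # nA -> N scale
```

Averaging `friction_force` over every pixel of the trace/retrace pair for a region gives the mean friction discussed above.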
The friction-distance curve is thus an important route for data analysis. We can also take the average friction signal over an area and compare it from region to region. shows a region of graphene with layer numbers from 1 to 4; a and b are the topography and friction images, respectively. By comparing the average friction from area to area, we can clearly see that the friction on graphene decreases as the number of layers increases. Through c and d we can see this change in average friction from 1 to 4 layers of graphene. For a more general statistical comparison, normalizing the average friction signal and comparing the normalized values is more straightforward.

Another way to compare frictional properties is to apply different normal loads, observe how the friction changes, and extract the friction-normal load curve. This is important because too much normal load can easily break or wear the material. Examples and details are discussed below.

As the tip approaches graphene and the normal load is applied (increasing normal load, the loading process) and the tip is then withdrawn gradually (decreasing normal load, the unloading process), the friction on graphene exhibits hysteresis: a large increase of the friction while the tip is dragged off. This process can be analyzed from the friction-normal load curve, as shown in . It was thought that this effect might be due to the details of the interaction in the contact area between the tip and graphene. However, if the test is repeated under a different ambient, for example with nitrogen blown into the chamber during testing, the hysteresis disappears. Friction hysteresis on the surface of graphene/Cu. Adapted from P. Egberts, G. H. Han, X. Z. Liu, A. T. C. Johnson, and R. W. Carpick, ACS Nano, 2014, 8, 5012.
Copyright: American Chemical Society. In order to explore the mechanism of this phenomenon, a series of friction tests were performed under different conditions. A key factor is the humidity of the testing environment. is a typical friction measurement on monolayer and 3-layer graphene on SiOx. The friction hysteresis is very different under dry nitrogen gas (0.1% humidity) and under ambient conditions (24% humidity), as seen in .

Simulations of this system suggest that the friction hysteresis on the surface of graphene is due to water interacting with the graphene surface; the contact angle of the tip/water-molecule-graphene interface is the key component. Further study suggests that once graphene samples are exposed to air for a long period of time (several days), the chemical bonding at the surface can change due to water molecules in the air, so the frictional properties at the nanoscale can become very different.

The bond between the material under investigation and the substrate can also be vital for friction behavior at the nanoscale. Studies over the years suggest that the friction of graphene decreases as the number of layers increases. This holds for suspended graphene (with nothing to support it) and for graphene on most substrates (such as SiOx, Cu foil, and so on). However, if the graphene is supported by a freshly cleaved mica surface, there is no difference in the frictional properties of different-layer graphene; this is due to the large surface dissipation energy, which fixes the graphene very firmly to the mica. On the other hand, the surface of mica is also hydrophilic, which leads to water distribution on the mica surface and water intercalation in the graphene-mica bond.
Through friction measurements of graphene on mica, we can analyze this system quantitatively, as shown in . This case study is just one example showing that contact-mode atomic force microscopy, or friction force microscopy, is a powerful tool for investigating the frictional properties of materials, both in scientific research and in the chemical industry. The most important lesson for researchers is that, in analyzing any literature data, it is important to know the relative-humidity conditions of the particular experiment, so that various experiments may be compared directly.

This page titled 9.2: Atomic Force Microscopy (AFM) is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
9.3: SEM and its Applications for Polymer Science
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/09%3A_Surface_Morphology_and_Structure/9.03%3A_SEM_and_its_Applications_for_Polymer_Science
The scanning electron microscope (SEM) is a very useful imaging technique that utilizes a beam of electrons to acquire high-magnification images of specimens. Very similar to the transmission electron microscope (TEM), the SEM maps reflected electrons and allows imaging of thick (~mm) samples, whereas the TEM requires extremely thin specimens; however, the SEM has lower magnification. Although both SEM and TEM use an electron beam, the image is formed very differently in each, and users should be aware of when each microscope is advantageous.

All microscopes serve to enlarge the size of an object and allow people to view smaller regions within the sample. Microscopes form optical images, and although instruments like the SEM have extremely high magnifications, the physics of the image formation is very basic. The simplest magnification lens can be seen in . The formula for magnification is shown in \ref{1}, where M is the magnification, f is the focal length, u is the distance between object and lens, and v is the distance from lens to image.

\[ M\ =\ \frac{f}{u-f}\ = \frac{v-f}{f} \label{1} \]

Multistage microscopes can amplify the magnification of the original object even more, as shown in . For a two-lens system the total magnification is the product of the magnifications of the individual lenses, \ref{2}.

\[ M\ =\ \frac{(v_{1}\ -\ f_{1})(v_{2}\ -\ f_{2})}{f_{1}f_{2}} \label{2} \]

In reality, the objects we wish to magnify need to be illuminated. Whether or not the sample is thin enough to transmit light divides microscopy into two arenas: SEM is used for samples that do not transmit light, whereas the TEM requires transparent samples. Due to the many frequencies of light from the introduced source, a condenser system is added to control the brightness and narrow the range of viewing to reduce aberrations, which distort the magnified image.

Microscope images can be formed instantaneously (as in the optical microscope or TEM) or by rastering (scanning) a beam across the sample and forming the image point-by-point. The latter is how SEM images are formed.
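As a quick numerical check of \ref{1} and \ref{2} (the focal lengths and distances below are arbitrary example values, not parameters of any particular microscope):

```python
def magnification(f, u):
    """Single-lens magnification, M = f / (u - f) (Eq. 1)."""
    return f / (u - f)

def image_distance(f, u):
    """Image distance v from the thin-lens equation 1/u + 1/v = 1/f."""
    return 1.0 / (1.0 / f - 1.0 / u)

# Example: a lens with f = 10 mm and the object at u = 12 mm.
f, u = 10.0, 12.0
v = image_distance(f, u)            # 60 mm
M = magnification(f, u)             # 5x
assert abs(M - (v - f) / f) < 1e-9  # the two forms of Eq. 1 agree

# Two stages multiply (Eq. 2): M = (v1 - f1)(v2 - f2) / (f1 * f2)
M_total = magnification(10.0, 12.0) * magnification(5.0, 6.0)  # 5 * 5 = 25
```

This makes concrete why stacking lenses is so effective: each stage contributes its magnification multiplicatively.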
It is important to understand the basic principles behind SEM that define the properties and limitations of the image.

The resolution of a microscope is defined as the smallest distance between two features that can be uniquely identified (also called resolving power). There are many limits to the maximum resolution of the SEM and other microscopes, such as imperfect lenses and diffraction effects. Each single beam of light, once passed through a lens, forms a series of cones called an Airy ring (see ). For a given wavelength of light, the central spot size is inversely proportional to the aperture size (i.e., a large aperture yields a small spot size), and high resolution demands a small spot size.

Aberrations distort the image, and we try to minimize their effect as much as possible. Chromatic aberrations are caused by the multiple wavelengths present in white light. Spherical aberrations are formed by focusing inside and outside the ideal focal length and are caused by imperfections within the objective lenses. Astigmatism is caused by further distortions in the lens. All aberrations decrease the overall resolution of the microscope.

Electrons are charged particles and can interact with air molecules; therefore the SEM and TEM instruments require extremely high vacuum (10-7 atm) to obtain images. High vacuum ensures that very few air molecules are in the electron beam column. If the electron beam interacts with an air molecule, the air becomes ionized and damages the beam filament, which is very costly to repair. The charge of the electron allows scanning and also gives an inherently very small deflection angle off the source of the beam.

The electrons are generated with a thermionic filament. A tungsten (W) or LaB6 filament is chosen based on the needs of the user. LaB6 is much more expensive, and tungsten filaments meet the needs of the average user.
The microscope can also be operated with a field-emission source (a tungsten filament). To accurately interpret electron microscopy images, the user must be familiar with how high-energy electrons can interact with the sample and how these interactions affect the image. The probability that a particular electron will be scattered in a certain way is described either by the cross section, σ, or by the mean free path, λ, which is the average distance an electron travels before being scattered.

Elastic scatter, or Rutherford scattering, is defined as a process which deflects an electron but does not decrease its energy. The wavelength of the scattered electron can be detected and is proportional to the atomic number. Elastically scattered electrons have significantly more energy than other types and provide mass-contrast imaging. The mean free path, λ, is larger for smaller atoms, meaning that the electron travels farther.

Any process that causes the incoming electron to lose a detectable amount of energy is considered inelastic scattering. The two most common types of inelastic scatter are phonon scattering and plasmon scattering. Phonon scattering occurs when a primary electron loses energy by exciting a phonon (an atomic vibration in a solid), heating the sample a small amount. A plasmon is an oscillation of the bulk electrons in the conduction band of a metal. Plasmon scattering occurs when an electron interacts with the sample and produces plasmons, with a typical energy loss of 5-30 eV and a small λ.

A secondary effect is a term describing any event which may be detected outside the specimen, and is essentially how images are formed. To form an image, the electron must interact with the sample in one of the aforementioned ways, escape from the sample, and be detected. Secondary electrons (SE) are the most common electrons used for imaging due to their high abundance; they are defined, rather arbitrarily, as electrons with less than 50 eV of energy after exiting the sample.
Backscattered electrons (BSE) leave the sample quickly and retain a high amount of energy; however, the yield of BSE is much lower. Backscattered electrons are used in many different imaging modes. Refer to for a diagram of interaction depths corresponding to the various electron interactions.

The SEM is made of several main components: electron gun, condenser lens, scan coils, detectors, specimen, and lenses (see ). Today portable SEMs are available, but the typical instrument is about 6 feet tall and consists of the microscope column and the control console.

A special feature of the SEM and TEM is the depth of focus, dv/du, the range of positions (depths) at which the image can be viewed in good focus, see \ref{3}. This allows the user to see more than a single plane of a specified height in focus, essentially allowing a degree of three-dimensional imaging.

\[ \frac{dv}{du}\ =\ \frac{-v^{2}}{u^{2}}\ =\ -M^{2} \label{3} \]

The secondary electron detector (SED) is the main source of SEM images, since a large majority of the electrons emitted from the sample have less than 50 eV of energy. These electrons form textural images but cannot determine composition. The SEM may also be equipped with a backscattered electron detector (BSED), which collects the higher-energy BSEs. Backscattered electrons are very sensitive to atomic number and can give qualitative information about the nuclei present (i.e., how much Fe is in the sample). Topographic images are taken by tilting the specimen 20-40° toward the detector; with the sample tilted, electrons are more likely to scatter off the top of the sample rather than interact within it, thus yielding information about the surface.

The most effective SEM sample will be at least as thick as the interaction volume, depending on the imaging technique used (typically at least 2 µm). For the best contrast, the sample must be conductive, or it can be sputter-coated with a metal (such as Au, Pt, W, or Ti).
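The depth-of-focus relation \ref{3} above can be verified numerically from the thin-lens equation; the focal length and object distance below are arbitrary example values.

```python
def image_distance(f, u):
    """Image distance v from the thin-lens equation 1/u + 1/v = 1/f."""
    return 1.0 / (1.0 / f - 1.0 / u)

f, u = 10.0, 12.0            # mm; arbitrary example values
v = image_distance(f, u)     # 60 mm
M = v / u                    # lateral magnification, 5x here

# Central-difference estimate of dv/du:
h = 1e-6
dv_du = (image_distance(f, u + h) - image_distance(f, u - h)) / (2 * h)

# Eq. 3: dv/du = -v^2/u^2 = -M^2  (here -25)
assert abs(dv_du + v**2 / u**2) < 1e-3
assert abs(dv_du + M**2) < 1e-3
```

The quadratic dependence on M is why high-magnification instruments are so sensitive to focus depth.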
Metals and other materials that are naturally conductive do not need to be coated and require very little sample preparation.

As previously discussed, to view features that are smaller than the wavelength of light, an electron microscope must be used. The electron beam requires extremely high vacuum to protect the filament, and the electrons must be able to interact adequately with the sample. Polymers are typically long chains of repeating units composed primarily of "lighter" (low atomic number) elements such as carbon, hydrogen, nitrogen, and oxygen. These lighter elements have fewer interactions with the electron beam, which yields poor contrast, so often a stain or coating is required to view polymer samples. SEM imaging requires a conductive surface, so a large majority of polymer samples are sputter-coated with metals, such as gold.

The decision to view a polymer sample with an SEM (versus a TEM, for example) should be based on the feature size you expect the sample to have. Generally, if you expect the polymer sample to have features, or even individual molecules, over 100 nm in size, you can safely choose SEM to view your sample. For much smaller features the TEM may yield better results, but it requires much different sample preparation than is described here.

A sputter coater may be purchased that deposits single layers of gold, gold-palladium, tungsten, chromium, platinum, titanium, or other metals in a very controlled thickness pattern. It is possible, and desirable, to coat only a few nanometers of metal onto the sample surface.

Many polymer films are deposited via a spin coater, which spins a substrate (often ITO glass) while drops of polymer solution are dispersed to an even thickness on top of the substrate.

Another option for polymer sample preparation is staining the sample.
Stains act in different ways, but typical stains for polymers are osmium tetroxide (OsO4), ruthenium tetroxide (RuO4), phosphotungstic acid (H3PW12O40), hydrazine (N2H4), and silver sulfide (Ag2S).

This page titled 9.3: SEM and its Applications for Polymer Science is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
9.4: Catalyst Characterization Using Thermal Conductivity Detector
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/09%3A_Surface_Morphology_and_Structure/9.04%3A_Catalyst_Characterization_Using_Thermal_Conductivity_Detector
A catalyst is a "substance that accelerates the rate of chemical reactions without being consumed". Some reactions, such as the hydrodechlorination of TCE, \ref{1}, do not occur spontaneously, but can occur in the presence of a catalyst.

\[ C_{2}Cl_{3}H\ +\ 4H_{2} \underset{Pd}\rightarrow C_{2}H_{6}\ +\ 3HCl \label{1} \]

Metal dispersion is a common term within the catalyst industry. The term refers to the fraction of the metal that is active for a specific reaction. Let's assume a catalyst material has a composition of 1 wt% palladium and 99% alumina (Al2O3). Even though the catalyst material contains 1 wt% of palladium, not all of the palladium is active: the material might be oxidized due to air exposure, or some of the material may not be exposed at the surface and hence cannot participate in the reaction. For this reason it is important to characterize the material.

In order for Pd to react according to \ref{1}, it needs to be in the metallic form; any oxidized palladium will be inactive. Thus, it is important to determine the oxidation state of the Pd atoms on the surface of the material. This can be accomplished using an experiment called temperature-programmed reduction (TPR). Subsequently, the percentage of active palladium can be determined by hydrogen chemisorption. The percentage of active metal is an important parameter when comparing the performance of multiple catalysts; usually the rate of reaction is normalized by the amount of active catalyst.

Thermal conductivity is the ability of a chemical species to conduct heat. Each gas has a different thermal conductivity; in SI units, thermal conductivity is measured in W/m·K. Table \(\PageIndex{1}\) shows the thermal conductivity of some common gases. The thermal conductivity detector (TCD) is part of a typical commercial instrument such as a Micromeritics AutoChem 2920.
This instrument is an automated analyzer with the ability to perform chemical adsorption and temperature-programmed reactions on a catalyst, catalyst support, or other materials.

TPR determines the number of reducible species on a catalyst and the temperature at which each of these species is reduced. For example, palladium is ordinarily found as Pd or Pd(II), i.e., oxidation states 0 and +2. Pd(II) can be reduced at very low temperatures (5-10 °C) to Pd following \ref{2}.

\[ PdO\ +\ H_{2} \rightarrow Pd\ +\ H_{2}O \label{2} \]

A 128.9 mg sample of 1 wt% Pd/Al2O3 is used for the experiment. Since we want to study the oxidation state of the commercial catalyst, no pre-treatment of the sample is needed. A 10% hydrogen-argon mixture is used as the analysis and reference gas: argon has a low thermal conductivity, while hydrogen has a much higher thermal conductivity. All gases flow at 50 cm3/min. The TPR experiment starts at an initial temperature of 200 K, with a temperature ramp of 10 K/min and a final temperature of 400 K. The H2/Ar mixture flows through the sample and past the detector in the analysis port, while in the reference port the mixture does not come into contact with the sample. When the analysis gas starts flowing over the sample, a baseline reading is established by the detector; the baseline is established at the initial temperature to ensure there is no reduction. While this gas is flowing, the temperature of the sample is increased linearly with time and the consumption of hydrogen is recorded. Hydrogen atoms react with oxygen atoms to form H2O.

Water molecules are removed from the gas stream using a cold trap. As a result, the amount of hydrogen in the argon/hydrogen gas mixture decreases, and the thermal conductivity of the mixture decreases with it. The change is compared to the reference gas and yields the hydrogen uptake volume.
 is a typical TPR profile for PdO.

Once the catalyst (1 wt% Pd/Al2O3) has been completely reduced, the user can determine how much palladium is active. A pulse chemisorption experiment determines the active surface area, the percentage of metal dispersion, and the particle size. Pulses of hydrogen are introduced into the sample tube in order to interact with the sample. In each pulse, hydrogen undergoes dissociative adsorption onto palladium active sites until all palladium atoms have reacted. After all active sites have reacted, the hydrogen pulses emerge unchanged from the sample tube. The amount of hydrogen chemisorbed is calculated as the total amount of hydrogen injected minus the total amount eluted from the system.

The sample from the previous experiment (TPR) is used for this experiment. Ultra-high-purity argon is used to purge the sample at a flow rate of 40 cm3/min. The sample is heated to 200 °C in order to remove all chemisorbed hydrogen atoms from the Pd surface, then cooled down to 40 °C. Argon is used as the carrier gas at a flow of 40 cm3/min. The filament temperature is 175 °C and the detector temperature is 110 °C. The injection loop has a volume of 0.03610 cm3 STP. As shown in , hydrogen pulses are injected into the flow stream and carried by argon into contact with the sample, where they react. Note that the first pulse of hydrogen is almost completely adsorbed by the sample; the second and third pulses show how the sample becomes saturated. The positive reading of the TCD detector is consistent with our assumptions: since hydrogen has a higher thermal conductivity than argon, as it flows through the detector it tends to cool the filaments, and the detector then applies a positive voltage to the filaments in order to maintain a constant temperature.

Table \(\PageIndex{1}\) shows the integration of the peaks from .
This integration is performed by automated software provided with the instrument. Note that the first pulse was completely consumed by the sample; it was injected between 0 and 5 minutes. From we observe that during the first four pulses, hydrogen is consumed by the sample. After the fourth pulse, the sample appears to stop consuming hydrogen. The experiment continues for a total of seven pulses, at which point the software determines that no consumption is occurring and stops the experiment. Pulse eight is denominated the "saturation peak", meaning the pulse at which no hydrogen was consumed.

Using \ref{3}, the change in area (Δarean) is calculated for each pulse peak area (arean) relative to the saturation pulse area (areasaturation = 0.010580979). Each of these changes in area is proportional to the amount of hydrogen consumed by the sample in that pulse. Table \(\PageIndex{2}\) shows the calculated changes in area.

\[ \Delta Area_{n}\ =\ Area_{saturation}\ -\ Area_{n} \label{3} \]

The Δarean values are then converted into hydrogen gas consumption using \ref{4}, where Fc is the area-to-volume conversion factor for hydrogen and SW is the weight of the sample. Fc is equal to 2.6465 cm3/peak area. Table \(\PageIndex{3}\) shows the results for the volume adsorbed and the cumulative volume adsorbed. Using the data in Table \(\PageIndex{3}\), a series of calculations can now be performed in order to better understand the catalyst properties.

\[ V_{adsorbed}\ =\ \frac{\Delta Area_{n} \times F_{c}}{SW} \label{4} \]

The gram molecular weight is the weighted average of the number of moles of each active metal in the catalyst. Since this is a monometallic catalyst, the gram molecular weight is equal to the molecular weight of palladium (106.42 g/mol). GMWCalc is calculated using \ref{5}, where FN is the fraction of sample weight for metal N and Watomic N is the gram molecular weight of metal N (g/g-mole).
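Equations \ref{3} and \ref{4} amount to a one-line conversion per pulse. In the sketch below, the saturation area, Fc, and sample weight are the values quoted in the text, while the individual pulse areas are hypothetical stand-ins for the instrument output.

```python
area_saturation = 0.010580979   # saturation pulse peak area (from the text)
Fc = 2.6465                     # area-to-volume factor, cm3 STP per unit area (from the text)
SW = 0.1289                     # sample weight, g (from the text)

# Hypothetical pulse areas: pulse 1 fully consumed (area 0), later
# pulses approach the saturation area as the Pd sites fill up.
pulse_areas = [0.0, 0.0031, 0.0072, 0.0096, 0.010580979]

delta_areas = [area_saturation - a for a in pulse_areas]   # Eq. 3
v_adsorbed = [dA * Fc / SW for dA in delta_areas]          # Eq. 4, cm3 STP per pulse
v_cumulative = sum(v_adsorbed)                             # running total for Table 3
```

The first entry (fully consumed pulse) converts to about 0.217 cm3 STP with these constants; the final entry is zero by construction, matching the saturation condition.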
\ref{6} shows the calculation for this experiment.\[ GMW_{Calc}\ =\ \frac{1}{(\frac{F_{1}}{W_{atomic\ 1}})\ +\ (\frac{F_{2}}{W_{atomic\ 2}})\ +\ ...\ +\ (\frac{F_{N}}{W_{atomic\ N}})} \label{5} \]\[ GMW_{Calc}\ =\ \frac{1}{(\frac{F_{1}}{W_{atomic\ Pd}})}\ =\ \frac{W_{atomic\ Pd}}{F_{1}}\ =\ \frac{106.42 \frac{g}{g-mole}}{1}\ =\ 106.42 \frac{g}{g-mole} \label{6} \]The metal dispersion is calculated using \ref{7}, where PD is the percent metal dispersion, Vs is the volume adsorbed (cm3 at STP), SFCalc is the calculated stoichiometry factor (equal to 2 for a palladium-hydrogen system), SW is the sample weight and GMWCalc is the calculated gram molecular weight of the sample (g/g-mole). Therefore, in \ref{8} we obtain a metal dispersion of 6.03%.\[ PD\ =\ 100\ \times \ (\frac{V_{s} \times SF_{Calc}}{SW \times 22414}) \times GMW_{Calc} \label{7} \]\[ PD\ =\ 100\ \times \ (\frac{0.8296556 [cm^{3}]\ \times \ 2}{0.1289 [g]\ \times 22414 [\frac{cm^{3}}{mol}]})\ \times \ 106.42 [\frac{g}{g-mol}]\ =\ 6.03\% \label{8} \]The metallic surface area per gram of metal is calculated using \ref{9}, where SAMetallic is the metallic surface area (m2/g of metal), SWMetal is the active metal weight, SFCalc is the calculated stoichiometric factor and SAPd is the cross sectional area of one palladium atom (nm2). 
Thus, in \ref{10} we obtain a metallic surface area of 2420.99 m2/g-metal.\[ SA_{Metallic}\ =\ (\frac{V_{S}}{SW_{Metal}\ \times \ 22414})\ \times \ (SF_{Calc})\ \times \ (6.022\ \times \ 10^{23})\ \times \ SA_{Pd} \label{9} \]\[ SA_{Metallic}\ =\ (\frac{0.8296556\ [cm^{3}]}{0.001289\ [g_{metal}]\ \times \ 22414\ [\frac{cm^{3}}{mol}]})\ \times \ (2)\ \times \ (6.022\ \times \ 10^{23}\ [\frac{atoms}{mol}])\ \times \ 0.07\ [\frac{nm^{2}}{atom}]\ =\ 2420.99\ [\frac{m^{2}}{g-metal}] \label{10} \]The active particle size is estimated using \ref{11}, where DCalc is the palladium metal density (g/cm3), SWMetal is the active metal weight, GMWCalc is the calculated gram molecular weight (g/g-mole), and SAMetallic is the metallic surface area (m2/g of metal). As seen in \ref{12} we obtain an active particle size of 2.88 nm.\[ APS\ =\ \frac{6}{D_{Calc}\ \times \ (\frac{SW_{Metal}}{GMW_{Calc}})\ \times \ (6.022\ \times \ 10^{23})\ \times \ SA_{Metallic}} \label{11} \]\[ APS\ =\ \frac{600}{(1.202\ \times \ 10^{-20} [\frac{g_{Pd}}{nm^{3}}])\ \times \ (\frac{0.001289\ [g]}{106.42\ [\frac{g_{Pd}}{mol}]})\ \times \ (6.022\ \times \ 10^{23}\ [\frac{atoms}{mol}])\ \times \ (2420.99\ [\frac{m^{2}}{g-Pd}])} \ =\ 2.88\ nm \label{12} \]In a commercial instrument, a summary report will be provided which summarizes the properties of our catalytic material. All the equations used in this example were taken from the AutoChem 2920 User's Manual.This page titled 9.4: Catalyst Characterization Using Thermal Conductivity Detector is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
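The sequence of calculations above (equations \ref{7} through \ref{12}) can be sketched as a short script. The inputs are the values from the worked example; small differences from the quoted results (6.03%, 2.88 nm) can arise from rounding in the original tables.

```python
# Pulse-chemisorption summary calculations, following equations 7-12 above.
# Inputs are taken from the worked example in the text.

V_s = 0.8296556     # total H2 volume adsorbed, cm3 at STP
SW = 0.1289         # sample weight, g
F_Pd = 0.01         # 1 wt% Pd -> metal fraction of sample weight
SF = 2              # stoichiometry factor for the Pd-H system
GMW = 106.42        # gram molecular weight of Pd, g/g-mole
SA_Pd = 0.07        # cross-sectional area of one Pd atom, nm2
D_Pd = 1.202e-20    # Pd density, g/nm3
N_A = 6.022e23      # Avogadro's number, atoms/mol
V_m = 22414         # molar gas volume at STP, cm3/mol

SW_metal = SW * F_Pd                      # active metal weight, g

# Percent metal dispersion (eq. 7)
PD = 100 * (V_s * SF / (SW * V_m)) * GMW

# Metallic surface area, m2 per g of metal (eq. 9); 1 nm2 = 1e-18 m2
SA_metallic = (V_s / (SW_metal * V_m)) * SF * N_A * SA_Pd * 1e-18

# Active particle size, nm (eq. 11, using the same unit factors as eq. 12)
APS = 600 / (D_Pd * (SW_metal / GMW) * N_A * SA_metallic)

print(f"PD = {PD:.2f}%  SA = {SA_metallic:.0f} m2/g  APS = {APS:.2f} nm")
```

The script reproduces the metallic surface area of roughly 2421 m2/g-metal and a particle size near 2.9 nm; the dispersion comes out close to, but not exactly, the 6.03% quoted, consistent with rounding in the tabulated inputs.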
9.5: Nanoparticle Deposition Studies Using a Quartz Crystal Microbalance
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Physical_Methods_in_Chemistry_and_Nano_Science_(Barron)/09%3A_Surface_Morphology_and_Structure/9.05%3A_Nanoparticle_Deposition_Studies_Using_a_Quartz_Crystal_Microbalance
The working principle of a quartz crystal microbalance with dissipation (QCM-D) module is the utilization of the resonance properties of some piezoelectric materials. A piezoelectric material is a material that exhibits an electrical field when a mechanical strain is applied. The converse is also observed: an applied electrical field produces a mechanical strain in the material. The material used is α-SiO2, which produces a very stable and constant frequency. The direction and magnitude of the mechanical strain depend directly on the direction of the applied electrical field and the inherent physical properties of the crystal.A special crystal cut is used, called the AT-cut, which is obtained as wafers of the crystal of about 0.1 to 0.3 mm in width and 1 cm in diameter. The AT-cut is obtained when the wafer is cut at 35.25° to the main crystallographic axis of SiO2. This special cut allows only one vibration mode, the shear mode, to be accessed and thus exploited for analytical purposes. When an electrical field is applied to the crystal wafer via metal electrodes that are vapor-deposited on the surface, a mechanical shear is produced and maintained as long as the electrical field is applied. Since this electric field can be controlled by opening and closing an electrical circuit, a resonance is formed within the crystal. Since the frequency of the resonance depends on the characteristics of the crystal, an increase in mass, for example when a sample is loaded onto the sensor, changes the resonance frequency. This relation, \ref{1}, was obtained by Sauerbrey in 1959, where Δm (ng.cm-2) is the areal mass, C (17.7 ng.cm-2.Hz-1) is the vibrational constant (accounting for shear, effective area, etc.), n is the resonant overtone, and Δf (Hz) is the change in frequency. 
The dependence of the change in frequency can be related directly to the change in mass deposited on the sensor only when three conditions are met and assumed:\[ \Delta m\ =\ -C\frac{1}{n}\Delta f \label{1} \]An important incorporation in recent equipment is the use of the dissipation factor. The inclusion of the dissipation factor takes into account the damping of the oscillation as it travels through the newly deposited mass. In a rigid layer the oscillation travels through the newly formed mass without interruption, so the dissipation is not important. On the other hand, when the deposited material has a soft consistency, the dissipation of the oscillation is increased. This effect can be monitored and related directly to the nature of the mass deposited.The applications of QCM-D range from the deposition of nanoparticles onto a surface to the interaction of proteins with certain substrates. It can even monitor the products bacteria release when fed different molecules, since the flexibility of what can be deposited on the sensors includes nanoparticles, special functionalizations, or even cells and bacteria.In order to use QCM-D for studying the interaction of nanoparticles with a specific surface, several steps must be followed. For demonstration purposes, the following procedure will describe the use of a Q-Sense E4 with autosampler from Biolin Scientific. A summary is shown below as a quick guide to follow, but further details will be explained:The decision of which sensor surface to use is the most important decision to make for each study. Biolin has a large library of available coatings, ranging from different compositions of pure elements and oxides to specific binding proteins. It is important to take into account the different chemistries of the sensors and the results we are looking for. 
For example, studying a protein with high sulfur content on a gold sensor can lead to false deposition results, as gold and sulfur have a high affinity to form bonds. For the purpose of this example, a gold-coated sensor will be used in the remainder of the discussion.Since QCM-D relies on the amount of mass that is deposited onto the surface of the sensor, a thorough cleaning is needed to ensure there are no contaminants on the surface that can lead to errors in the measurement. The procedure the manufacturer established to clean a gold sensor is as follows:Once the sensors are clean, extreme caution should be taken to avoid contamination of the surface. The sensors can be loaded in the flow chamber of the equipment, making sure that the T-mark of the sensor matches the T-mark of the chamber in order to ensure the electrodes are in constant contact. The correct position is shown in . As the upper range of mass that can be detected is merely micrograms, solutions must be prepared accordingly. For a typical run, a buffer solution in which the deposition will be studied is needed, as well as the sample itself and a 2% solution of sodium dodecyl sulfate [CH3(CH2)10CH2OSO3Na, SDS]. For this example we will be using nanoparticles of magnetic iron oxide (nMag) coated with PAMS, and, as a buffer, 8% NaCl in DI water.Due to the sensitivity of the equipment, it is important to rinse and clean the tubing before loading any sample or performing any experiments. To rinse the tubing and the chambers, use a 2% solution of SDS. For this purpose, a cycle in the autosampler is programmed with the steps shown in Table \(\PageIndex{1}\).Once the equipment is cleaned, it is ready to perform an experiment; a second program in the autosampler is loaded with the parameters shown in Table \(\PageIndex{2}\).The purpose of flowing the buffer in the beginning is to provide a background signal to take into account when running the samples. 
Usually a small quantity of the sample is loaded onto the sensor at a very slow flow rate in order to let the deposition take place.Example data obtained with the above parameters are shown in . The blue squares depict the change in frequency. As the experiment continues, the frequency decreases as more mass is deposited. On the other hand, the dissipation, shown as the red squares, increases, reflecting both the growing height of the layer and a certain loss of rigidity at the top of the sensor. To illustrate the different steps of the experiment, each section has been color coded. The blue part of the data corresponds to the flow of the buffer, while the yellow part corresponds to the deposition equilibrium of the nanoparticles onto the gold surface. After a certain length of time, equilibrium is reached and there is no further change. Once equilibrium shows no change for about five minutes, it is safe to say the deposition will not change.As preventive care for the equipment, the same cleaning procedure should be followed as was done before loading the sample. Use of a 2% solution of SDS helps to ensure the equipment remains as clean as possible.Once the data have been obtained, QTools (software available in the software suite of the equipment) can be used to convert the change in frequency to areal mass via the Sauerbrey equation, \ref{1}. The corresponding graph of areal mass shows how the mass increases as the nMag is deposited on the surface of the sensor. The blue section again illustrates the part of the experiment where only buffer was being flowed into the chamber. The yellow part illustrates the deposition, while the green part shows no change in the mass after a period of time, which indicates the deposition is finished. 
The conversion from areal mass to mass is a simple process, as gold sensors come with a defined area of 1 cm2, but a more careful measurement should be made when using functionalized sensors.It is important to take into account the limitations of the Sauerbrey equation, because the equation assumes a uniform layer on top of the surface of the sensor. Deviations due to clusters of material deposited in one place or the formation of partial multilayers on the sensor cannot be calculated through this model. Further characterization of the surface should be done to have a more accurate model of the phenomena.This page titled 9.5: Nanoparticle Deposition Studies Using a Quartz Crystal Microbalance is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Pavan M. V. Raja & Andrew R. Barron (OpenStax CNX) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
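The Sauerbrey conversion performed by QTools can be sketched directly from \ref{1}. The frequency shift and overtone below are illustrative values, not data from this experiment; C = 17.7 ng cm-2 Hz-1 as quoted above.

```python
# Sauerbrey conversion (eq. 1): areal mass from a measured frequency shift.
C = 17.7  # ng cm^-2 Hz^-1, vibrational constant quoted in the text

def areal_mass(delta_f, n):
    """Areal mass change (ng/cm2) for a shift delta_f (Hz) of overtone n."""
    return -C * delta_f / n

# Illustrative: a 60 Hz decrease of the 3rd overtone during deposition
dm = areal_mass(-60.0, 3)   # 354.0 ng/cm2 deposited
mass_ng = dm * 1.0          # gold sensor area of 1 cm2 -> total mass in ng
```

Note the sign convention: a frequency decrease (negative Δf) yields a positive deposited mass, matching the trend in the experimental data above.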
InfoPage
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/00%3A_Front_matter/02%3A_InfoPage
This text is disseminated via the Open Education Resource (OER) LibreTexts Project and like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such effects.Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts’ web based origins allow powerful integration of advanced features and new technologies to support learning. The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advance to basic level) and horizontally (across different fields) integrated.The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant No. 
1246120, 1525057, and 1413739.Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor the US Department of Education.Have questions or comments? For information about adoptions or adaptions contact More information on our activities can be found via Facebook , Twitter , or our blog .This text was compiled on 07/05/2023
1.1: Solubility
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/1%3A_Chemical_Principles/1.1%3A_Solubility
A solution is a homogeneous mixture of two or more substances.Water is one of the most important solvents because it is present all around us: it covers more than 70% of the earth and makes up more than 60% of our body mass. Water is a polar molecule, having a partially negative end on the oxygen atom and partially positive ends on the hydrogen atoms, that can dissolve most polar and ionic compounds. In ionic compounds, cations are held by anions through electrostatic interactions. When an ionic compound dissolves in water, it dissociates into cations and anions, each surrounded by a layer of water molecules held by ion-dipole interactions. The water molecules around the ions form ion-dipole interactions by orienting their partially negative end towards cations and their partially positive end towards anions. The energy needed to break the ion-ion interactions in the ionic compound is partially compensated by the energy released by establishing the ion-dipole interactions. The energy gained from ion-dipole interactions and nature's tendency to disperse are the driving forces responsible for the dissolution of ionic compounds.Solubility is the ability of a substance to form a solution with another substance.The solubility of a solute in a specific solvent is quantitatively expressed as the concentration of the solute in the saturated solution. Usually, the solubility is tabulated in units of grams of solute per 100 mL of solvent (g/100 mL). The solubility of ionic compounds in water varies over a wide range; all ionic compounds dissolve to some extent.For practical purposes, a substance is considered insoluble when its solubility is less than 0.1 g per 100 mL of solvent.For example, lead(II) iodide ( \(\ce{PbI2}\) ) and silver chloride ( \(\ce{AgCl}\) ) are insoluble in water because the solubility of \(\ce{PbI2}\) is 0.0016 mol/L of solution and the solubility of \(\ce{AgCl}\) is about 1.3 x 10-5 mol/L of solution. 
Potassium iodide (\(\ce{KI}\)) and \(\ce{Pb(NO3)2}\) are soluble in water. When aqueous solutions of \(\ce{KI}\) and \(\ce{Pb(NO3)2}\) are mixed, the insoluble combination of ions, i.e., \(\ce{PbI2}\) in this case, precipitates, as illustrated in .There are no fail-proof guidelines for predicting the solubility of ionic compounds in water. However, the following guidelines can predict the solubility of most ionic compounds.Precipitation reactions are a class of chemical reactions in which two solutions are mixed and a solid product, called a precipitate, separates out. Precipitation reactions that happen upon mixing solutions of ionic compounds in water can be predicted as illustrated in . The first step is to list the soluble ionic compounds and then cross-combine the cations of one with the anions of the other to make the potential products. If any of the potential products is an insoluble ionic compound, it will precipitate out. For example, when \(\ce{NaOH}\) solution is mixed with \(\ce{MgCl2}\) solution, the cross-combination \(\ce{Mg(OH)2}\) forms an insoluble compound, so it will precipitate out. shows precipitates of some insoluble ionic compounds formed by mixing aqueous solutions of appropriate soluble ionic compounds.This page titled 1.1: Solubility is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
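The 0.1 g per 100 mL cutoff can be checked against the molar solubilities quoted above. The molar masses used here are standard values, not given in the text.

```python
# Checking the "insoluble below 0.1 g per 100 mL" rule against the molar
# solubilities quoted above. Molar masses (g/mol) are standard values.

M_PbI2 = 461.0    # 207.2 + 2 * 126.9
M_AgCl = 143.32   # 107.87 + 35.45

def g_per_100mL(molar_solubility, molar_mass):
    """Convert a solubility in mol/L to g per 100 mL of solution."""
    return molar_solubility * molar_mass / 10

s_PbI2 = g_per_100mL(0.0016, M_PbI2)    # ~0.074 g/100 mL
s_AgCl = g_per_100mL(1.3e-5, M_AgCl)    # ~0.00019 g/100 mL

# Both fall below the 0.1 g/100 mL cutoff, so both count as insoluble
assert s_PbI2 < 0.1 and s_AgCl < 0.1
```

Note that \(\ce{PbI2}\) sits just below the cutoff while \(\ce{AgCl}\) is far below it, which is why the two behave so differently in the selective dissolution steps described later.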
1.2: Solubility equilibria
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/1%3A_Chemical_Principles/1.2%3A_Solubility_equilibria
All ionic compounds dissolve in water to some extent. Ionic compounds are strong electrolytes, i.e., they dissociate completely into ions upon dissolution. When the amount of ionic compound added to the mixture is more than the solubility limit, the excess undissolved solute (solid) exists in equilibrium with its dissolved aqueous ions. For example, the following equation represents the equilibrium between solid \(\ce{AgCl(s)}\) and its dissolved \(\ce{Ag^{+}(aq)}\) and \(\ce{Cl^{-}(aq)}\) ions, where the subscript (s) means solid, i.e., the undissolved fraction of the compound, and (aq) means aqueous or dissolved in water.\[\ce{AgCl(s) ⇌ Ag+(aq) + Cl^{-}(aq)}\nonumber \]Like any other chemical equilibrium, this equilibrium has an equilibrium constant (Keq):\[K_{eq} = [\ce{Ag^{+}}][\ce{Cl^{-}}]\nonumber \]Note that solid or pure liquid species do not appear in the equilibrium constant expression because the concentration in a solid or pure liquid remains constant. This equilibrium constant has a separate name, the solubility product constant (\(K_{sp}\)), based on the fact that it is a product of the molar concentrations of the dissolved ions, each raised to the power equal to its coefficient in the chemical equation, e.g.,\[K_{sp} = \ce{[Ag^{+}][Cl^{-}]} = 1.8 \times 10^{-10}\nonumber \]The solubility product constant (\(K_{sp}\)) is the equilibrium constant for an ionic compound dissolving in an aqueous solution.Similarly, the dissolution equilibrium for \(\ce{PbCl2}\) can be shown as:\[\ce{PbCl2(s) <=> Pb2+(aq) + 2Cl-(aq)} \nonumber\]with\[K_{sp} = \ce{[Pb^{2+}][Cl^{-}]^2} = 1.6 \times 10^{-5} \nonumber\]And the dissolution equilibrium for \(\ce{Hg2Cl2}\) is similar:\[\ce{Hg2Cl2(s) ⇌ Hg22+(aq) + 2Cl-(aq) } \nonumber\]with\[K_{sp} = \ce{[Hg2^{2+}][Cl^{-}]^2} = 1.3 \times 10^{-18} \nonumber\]Selective precipitation is a process involving adding a reagent that precipitates one of the dissolved cations or a particular group of dissolved cations but not the 
others. According to solubility rule #5, both \(\ce{Cu^{2+}}\) and \(\ce{Ni^{2+}}\) form insoluble salts with \(\ce{S^{2-}}\). However, the solubilities of \(\ce{CuS}\) and \(\ce{NiS}\) differ enough that if an appropriate concentration of \(\ce{S^{2-}}\) is maintained, \(\ce{CuS}\) can be precipitated leaving \(\ce{Ni^{2+}}\) dissolved. The following calculations based on the \(K_{sp}\) values prove it.\[\ce{CuS(s) <=> Cu^{2+}(aq) + S^{2-}(aq)},\quad K_{sp} = \ce{[Cu^{2+}][S^{2-}]} = 8.7\times 10^{-36}\nonumber\]\[\ce{NiS(s) <=> Ni^{2+}(aq) + S^{2-}(aq)},\quad K_{sp} = \ce{[Ni^{2+}][S^{2-}]} = 1.8\times 10^{-21}\nonumber\]The molar concentration of sulfide ions [\(\ce{S^{2-}}\)], in moles/liter, in a saturated solution of the ionic compound can be calculated by rearranging the respective \(K_{sp}\) expression, e.g., for a \(\ce{CuS}\) solution, \(K_{sp} = \ce{[Cu^{2+}][S^{2-}]}\) rearranges to:\[\ce{[S^{2-}]} = \frac{K_{sp}}{\ce{[Cu^{2+}]}}\nonumber\]Assuming [\(\ce{Cu^{2+}}\)] is 0.1 M, plugging the values into the above equation allows calculating the molar concentration of \(\ce{S^{2-}}\) in the saturated solution of \(\ce{CuS}\):\[\ce{[S^{2-}]} = \frac{K_{sp}}{\ce{[Cu^{2+}]}} = \frac{8.7\times10^{-36}}{0.1} = 8.7\times10^{-35}\text {~M}\nonumber\]Similar calculations show that the molar concentration of \(\ce{S^{2-}}\) in a solution of 0.1 M \(\ce{Ni^{2+}}\) saturated with \(\ce{NiS}\) is 1.8 x 10-20 M. If the \(\ce{S^{2-}}\) concentration is kept above 8.7 x 10-35 M but below 1.8 x 10-20 M, \(\ce{CuS}\) will selectively precipitate, leaving \(\ce{Ni^{2+}}\) dissolved in the solution.Another example is the selective precipitation of lead, silver, and mercury by adding \(\ce{HCl}\) to the solution. According to rule #3 of the solubility of ionic compounds, chloride \(\ce{Cl^-}\) forms soluble salts with most cations except lead (\(\ce{Pb^{2+}}\)), mercury (\(\ce{Hg_2^{2+}}\)), and silver (\(\ce{Ag^{+}}\)). 
Adding \(\ce{HCl}\) as a source of \(\ce{Cl^-}\) in the solution will selectively precipitate lead (\(\ce{Pb^{2+}}\)), mercury (\(\ce{Hg_2^{2+}}\)), and silver (\(\ce{Ag^{+}}\)), leaving other cations dissolved in the solution.This page titled 1.2: Solubility equilibria is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
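The sulfide "window" worked out above can be sketched in a few lines: \(\ce{CuS}\) starts precipitating at a far lower sulfide level than \(\ce{NiS}\).

```python
# The selective-precipitation window for S2-, from the Ksp values above.
Ksp_CuS = 8.7e-36
Ksp_NiS = 1.8e-21
cation = 0.1  # mol/L of Cu2+ or Ni2+ in solution

S_min_CuS = Ksp_CuS / cation   # 8.7e-35 M: above this, CuS precipitates
S_min_NiS = Ksp_NiS / cation   # 1.8e-20 M: above this, NiS also precipitates

# Any [S2-] between the two limits precipitates CuS but leaves Ni2+ dissolved
window = (S_min_CuS, S_min_NiS)
```

The enormous gap between the two limits (about fourteen orders of magnitude) is what makes this separation so robust in practice.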
1.3: Varying solubility of ionic compounds
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/1%3A_Chemical_Principles/1.3%3A_Varying_solubility_of_ionic_compounds
Ionic compounds dissociate into ions when they dissolve in water. An equilibrium is established between ions in water and the undissolved compound. The solubility of the ionic compounds can be varied by stressing the equilibrium through changes in the concentration of the ions.Le Chatelier's principle can be stated as “when a system at equilibrium is subjected to a change in concentration, temperature, volume, or pressure, the system will change to a new equilibrium, such that the applied change is partially counteracted.”If the ions in the solubility equilibrium are increased or decreased by another reaction going on in parallel, the equilibrium will counteract by decreasing or increasing the solubility of the compound. This use of Le Chatelier's principle to vary the solubility of sparingly soluble ionic compounds is explained with examples in the following.Consider dissolution of a sparingly soluble ionic compound \(\ce{CaF2}\) in water:\[\ce{CaF2(s) <=> Ca^{2+}(aq) + 2F^{-}(aq)},\quad K_{sp} = \ce{[Ca^{2+}][F^{-}]^2} = 1.5\times 10^{-10}\nonumber\]The solubility (S) can be expressed in the units of mol/L or molarity (M). Similarly, the concentration of any species in square brackets, as [\(\ce{Ca^{2+}}\)] in the above-mentioned \(\ce{K_{sp}}\) expression, is also in the units of mol/L or M.\(\ce{NaF}\) is a water-soluble ionic compound that has \(\ce{F^-}\) in common with the above equilibrium. The addition of \(\ce{NaF}\) into the mixture will increase the concentration of \(\ce{F^-}\) causing a decrease in the solubility of \(\ce{CaF2}\) because the solubility equilibrium will move in the reverse direction to counteract the rise in the concentration of the common ion. 
This is called the common ion effect.The common ion effect refers to the decrease in the solubility of a sparingly soluble ionic compound by adding a soluble ionic compound that has an ion in common with the sparingly soluble ionic compound.A quantitative estimate of this common ion effect is given with the help of the following calculations. If the solubility of \(\ce{CaF2}\) in pure water is S mol/L, then [ \(\ce{Ca^{2+}}\)] = S, and [ \(\ce{F^-}\)] = 2S. Plugging these values into the \(\ce{K_{sp}}\) expression and rearranging shows that the solubility of \(\ce{CaF2}\) in pure water is 3.3 x 10-4 M:\[K_{sp} = \ce{[Ca^{2+}][F^{-}]^2}\nonumber\]\[1.5 \times 10^{-10} = S(2S)^2\nonumber\]\[S=\sqrt[3]{1.5 \times 10^{-10} / 4}=3.3 \times 10^{-4} \mathrm{~M}\nonumber\]If the \(\ce{F^-}\) concentration is raised to 0.1 M by dissolving \(\ce{NaF}\) in the solution, then the molar solubility of \(\ce{CaF2}\) changes to a new value Si, [ \(\ce{Ca^{2+}}\)] = Si, and [ \(\ce{F^-}\)] = (0.1 + Si) = 0.1 (Si is dropped because it is negligible compared to 0.1). Plugging these values into the \(\ce{K_{sp}}\) expression and rearranging shows that the new solubility (Si) of \(\ce{CaF2}\) is 1.5 x 10-8 M:\[K_{sp} = \ce{[Ca^{2+}][F^{-}]^2}\nonumber\]\[1.5 \times 10^{-10} = S_{i}(0.1)^2\nonumber\]\[S_{i} = \frac{1.5 \times 10^{-10}}{(0.1)^2} = 1.5\times10^{-8} \mathrm{~M}\nonumber\]This means the solubility of \(\ce{CaF2}\) is decreased by more than twenty thousand times by the common ion effect described above.Generally, the solubility of sparingly soluble ionic compounds decreases by adding a common ion to the equilibrium mixture.An example of the common ion effect is in the separation of \(\ce{PbCl2}\) from \(\ce{AgCl}\) and \(\ce{Hg2Cl2}\) precipitates. \(\ce{PbCl2}\) is the most soluble in hot water among these three sparingly soluble compounds. So, \(\ce{PbCl2}\) is selectively dissolved in hot water and separated. 
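The common-ion calculation for \(\ce{CaF2}\) above can be reproduced in a few lines; note the cube root in the pure-water case, since [\(\ce{F^-}\)] = 2S.

```python
# Common-ion effect on CaF2: solubility in pure water versus in 0.1 M F-.
Ksp = 1.5e-10

S_pure = (Ksp / 4) ** (1 / 3)   # ~3.3e-4 M in pure water (Ksp = 4S^3)
S_common = Ksp / 0.1**2         # ~1.5e-8 M once 0.1 M F- is added

ratio = S_pure / S_common       # a decrease of more than 20,000-fold
```

The ratio confirms the "more than twenty thousand times" figure quoted in the text.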
The solution is then cooled to room temperature and \(\ce{HCl}\) is added to it as a source of the common ion \(\ce{Cl^-}\) to enforce re-precipitation of \(\ce{PbCl2}\):\[\ce{Pb^{2+}(aq) + 2Cl^{-}(aq) <=> PbCl2(s)}\nonumber\]The pH is related to the concentrations of \(\ce{H3O^+}\) and \(\ce{OH^-}\) in the solution. Increasing the pH increases the \(\ce{OH^-}\) and decreases the \(\ce{H3O^+}\) concentration in the solution, and decreasing the pH has the opposite effect. If one of the ions in the solubility equilibrium of a sparingly soluble ionic compound is an acid or a base, its concentration will change with changes in the pH. This is because acids will neutralize with \(\ce{OH^-}\) at high pH and bases will neutralize with \(\ce{H3O^+}\) at low pH. For example, consider the dissolution of \(\ce{Mg(OH)2}\) in pure water.\[\ce{Mg(OH)2(s) <=> Mg^{2+}(aq) + 2OH^{-}(aq)},\quad K_{sp} = \ce{[Mg^{2+}][OH^{-}]^2} = 2.1\times 10^{-13}\nonumber\]Making the solution acidic, i.e., decreasing the pH, adds more \(\ce{H3O^+}\) ions, which remove \(\ce{OH^-}\) by the following neutralization reaction.\[\ce{H3O^{+}(aq) + OH^{-}(aq) <=> 2H2O(l)}\nonumber\]According to Le Chatelier's principle, the system moves in the forward direction to make up for the loss of \(\ce{OH^-}\). 
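The pH dependence of \(\ce{Mg(OH)2}\) solubility can be sketched under the simplifying assumption that the pH (and hence [\(\ce{OH^-}\)]) is held fixed by the added acid or base; the pH values chosen here are illustrative, not from the text.

```python
# Solubility of Mg(OH)2 when [OH-] is fixed by the pH: S = Ksp / [OH-]^2.
Ksp = 2.1e-13

def solubility(pH):
    OH = 10 ** (pH - 14)   # [OH-] set by the buffered pH (Kw = 1e-14)
    return Ksp / OH**2

S_pH10 = solubility(10)    # 2.1e-5 M
S_pH12 = solubility(12)    # 2.1e-9 M: 10,000x less soluble
```

Each unit drop in pH increases the solubility a hundredfold, which is why \(\ce{Mg(OH)2}\) dissolves readily in acidic water.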
In other words, \(\ce{Mg(OH)2}\) is insoluble in neutral or alkaline water and becomes soluble in acidic water.Generally, the solubility of an ionic compound containing basic anion increases by decreasing pH, i.e., in an acidic medium.In a qualitative analysis of cations, dissociation of \(\ce{H2S}\) is used as a source of \(\ce{S^{2-}}\) ions:\[\ce{H2S(g) + 2H2O(l) <=> 2H3O^+(aq) + S^{2-}(aq)}\nonumber\]The reaction is pH-dependent, i.e., the extent of dissociation of \(\ce{H2S}\) can be decreased by adding \(\ce{HCl}\) as a source of common ion \(\ce{H3O^+}\) or increased by adding a base as a source of \(\ce{OH^-}\) that removes \(\ce{H3O^+}\) from the products:\[\ce{OH^{-}(aq) + H3O^{+}(aq) <=> 2H2O(l)}\nonumber\]Generally, the solubility of weak acids can be increased by increasing the pH and decreased by decreasing the pH. The opposite is true for the weak bases.Transition metal ions, like \(\ce{Ag^+}\), \(\ce{Cu^{2+}}\), \(\ce{Ni^{2+}}\), etc. tend to be strong Lewis acids, i.e., they can accept a lone pair of electrons from Lewis bases. Neutral or anionic species with a lone pair of electrons, like \(\ce{H2{\!\overset{\Large{\cdot\cdot}}{O}}\!:}\), \(\ce{:\!{NH3}}\), \(\ce{:\!\overset{-}{C}N\!:}\), \(\ce{:\!\overset{\Large{\cdot\cdot}}{\underset{\Large{\cdot\cdot}}{Cl}}\!:^{-}}\), etc. can act as Lewis bases in these reactions. The bond formed by the donation of a lone pair of electrons of a Lewis base to a Lewis acid is called a coordinate covalent bond. The neutral compound or ion that results from the Lewis acid-base reaction is called a coordination complex or a complex ion. For example, silver ion dissolved in water is often written as Ag+(aq), but, in reality, it exists as complex ion \(\ce{Ag(H2O)2^+}\) in which \(\ce{Ag^+}\) accepts lone pair of electrons from oxygen atoms in water molecules. 
The transition metal ion in a coordination complex or complex ion, e.g., \(\ce{Ag^+}\) in \(\ce{Ag(H2O)2^+}\), is called the central metal ion, and the Lewis base, like \(\ce{H2{\!\overset{\Large{\cdot\cdot}}{O}}\!:}\) in \(\ce{Ag(H2O)2^+}\), is called a ligand. The strength of a ligand is the ability of the ligand to donate its lone pair of electrons to a central metal ion. If a stronger ligand is added to the solution, it displaces a weaker ligand. For example, if \(\ce{:\!{NH3}}\) is dissolved in the solution containing \(\ce{Ag(H2O)2^+}\), the \(\ce{:\!{NH3}}\) displaces \(\ce{H2{\!\overset{\Large{\cdot\cdot}}{O}}\!:}\) from the complex ion:\[\ce{Ag(H2O)2^{+}(aq) + 2NH3(aq) <=> Ag(NH3)2^{+}(aq) + 2H2O(aq)}\nonumber\]The lone pairs on the ligands are omitted from the equation above and from the following equations. Water is usually omitted from the equation for simplicity, which reduces the above reaction to the following:\[\ce{Ag^{+}(aq) + 2NH3(aq) <=> Ag(NH3)2^{+}(aq)}\quad K_f = 1.7\times10^7\nonumber\]The equilibrium constant for the formation of a complex ion is called the formation constant (\(\ce{K_{f}}\)), e.g., in the case of the above reaction:\[K_f = \frac{\ce{[Ag(NH3)2^{+}]}}{\ce{[Ag^+]\times[NH3]^2}} = 1.7\times10^7\nonumber\]The large value of \(\ce{K_{f}}\) in the above reaction shows that the reaction is highly favored in the forward direction. 
If ammonia is present in water, it increases the solubility of \(\ce{AgCl}\) by removing the \(\ce{Ag^+}\) ion from the products, just like acid (\(\ce{H3O^+}\)) increases the solubility of \(\ce{Mg(OH)2}\) by removing \(\ce{OH^-}\) from the products:\[\ce{AgCl(s) <<=> Ag^{+}(aq) + Cl^{-}(aq)}\quad K_{sp} = 1.8\times10^{-10}\nonumber\]\[\ce{Ag^{+}(aq) + 2NH3(aq) <=>> Ag(NH3)2^{+}(aq)}\quad K_f = 1.7\times10^7\nonumber\]\[\text{Adding above reactions:}~\ce{AgCl(s) + 2NH3(aq) <=> Ag(NH3)2^{+}(aq) + Cl^{-}(aq)}\quad K = 3.0\times10^{-3}\nonumber\]The equilibrium constant for the dissolution of \(\ce{AgCl}(s)\) changes from 1.8 x 10-10 in pure water to 3.0 x 10-3 in water containing dissolved ammonia, i.e., a 17-million-fold increase. It makes insoluble \(\ce{AgCl}(s)\) quite soluble. This reaction is used to separate silver ions from mercury ions in a mixture of \(\ce{AgCl}\) and \(\ce{Hg2Cl2}\) precipitates.Generally, the solubility of metal compounds containing metals capable of coordination complex formation increases by adding a strong ligand to the solution.The chemical equations can be manipulated like algebraic equations, i.e., they can be multiplied or divided by a constant, added, and subtracted, as demonstrated in the example of the silver ammonia complex formation reactions shown above. 
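The rule that equilibrium constants multiply when reactions are added reproduces the quoted overall constant:

```python
# Combining the two equilibria above: when reactions are added, K = Ksp * Kf.
Ksp_AgCl = 1.8e-10       # AgCl(s) <=> Ag+ + Cl-
Kf_AgNH3 = 1.7e7         # Ag+ + 2NH3 <=> Ag(NH3)2+

K_overall = Ksp_AgCl * Kf_AgNH3      # ~3.1e-3, the value quoted in the text
enhancement = K_overall / Ksp_AgCl   # = Kf, the ~17-million-fold increase
```

The enhancement factor is simply Kf itself, which is why a strong ligand so dramatically increases the solubility of the salt.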
Note that a species on the right side of one equation cancels the same species on the left side of another equation, just as in algebraic equations; e.g., \(\ce{Ag^{+}}\) is canceled in the final equation. When two equilibrium reactions are added, their equilibrium constants are multiplied to get the equilibrium constant of the overall reaction, i.e., \(K = K_{sp}\times{K_f}\) in the above reactions. There are three major types of chemical reactions: precipitation reactions, acid-base reactions, and redox reactions. Precipitation reactions of ionic compounds are double replacement reactions where the cation of one compound combines with the anion of another and vice versa, such that one of the new combinations is an insoluble salt. For example, when silver nitrate (\(\ce{AgNO3}\)) solution is mixed with sodium chloride (\(\ce{NaCl}\)) solution, the insoluble compound silver chloride (\(\ce{AgCl}\)) precipitates out of the solution:\[\ce{AgNO3(aq) + NaCl(aq) -> AgCl(s)(v) + NaNO3(aq)}\nonumber\]Acid-base reactions are reactions involving the transfer of a proton. For example, \(\ce{H2S}\) dissociates in water by donating its protons to water molecules:\[\ce{H2S(g) + 2H2O(l) <=> 2H3O^{+}(aq) + S^{2-}(aq)}\nonumber\]Redox reactions are reactions involving the transfer of electrons. For example, when sodium metal (\(\ce{Na}\)) reacts with chlorine gas (\(\ce{Cl2}\)), sodium loses electrons and becomes the \(\ce{Na^+}\) cation and chlorine gains electrons and becomes the \(\ce{Cl^-}\) anion; these combine to form \(\ce{NaCl}\) salt:\[\ce{2Na(s) + Cl2(g) -> 2NaCl(s)}\nonumber\]An example of a redox reaction in the qualitative analysis of cations is the dissolution of \(\ce{NiS}\) precipitate by adding an oxidizing acid, \(\ce{HNO3}\).
The \(\ce{S^{2-}}\) is a weak base that can be removed from the products by adding a strong acid like \(\ce{HCl}\):\[\ce{S^{2-}(aq) + 2H3O^{+}(aq) <=>> H2S(aq) + 2H2O(l)}\nonumber\]Therefore, the addition of \(\ce{HCl}\) is sufficient to dissolve \(\ce{FeS}\) precipitate by removal of \(\ce{S^{2-}}\) from the products:\[\ce{FeS(s) + 2H3O^{+}(aq) <=>> Fe^{2+}(aq) + H2S(aq) + 2H2O(l)}\nonumber\]However, the addition of \(\ce{HCl}\) does not remove \(\ce{S^{2-}}\) sufficiently to dissolve the relatively less soluble \(\ce{NiS}\) precipitate. Nitric acid (\(\ce{HNO3}\)), a source of the oxidizing agent \(\ce{NO3^{-}}\), is needed to remove \(\ce{S^{2-}}\) to a greater extent for dissolving \(\ce{NiS}\):\[\ce{3S^{2-}(aq) + 2NO3^{-}(aq) + 8H3O^{+}(aq) -> 3S(s, yellow)(v) + 2NO(g)(^) + 12H2O(l)}\nonumber\]In this reaction, sulfur is oxidized from an oxidation state of -2 in \(\ce{S^{2-}}\) to an oxidation state of zero in \(\ce{S}\), and nitrogen is reduced from an oxidation state of +5 in \(\ce{NO3^{-}}\) to an oxidation state of +2 in \(\ce{NO}\).This page titled 1.3: Varying solubility of ionic compounds is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
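The oxidation-state bookkeeping for the nitric acid reaction above can be verified with a few lines of arithmetic; a balanced redox equation must transfer equal numbers of electrons and carry the same net charge on both sides.

```python
# Electron and charge bookkeeping for the redox step in the text:
#   3 S^2- + 2 NO3^- + 8 H3O+ -> 3 S + 2 NO + 12 H2O
# Sulfur goes from -2 to 0 (oxidation); nitrogen from +5 to +2 (reduction).
electrons_lost = 3 * (0 - (-2))    # 3 S atoms each lose 2 electrons
electrons_gained = 2 * (5 - 2)     # 2 N atoms each gain 3 electrons
assert electrons_lost == electrons_gained  # balanced electron transfer

# The net charge must also balance on both sides:
left_charge = 3 * (-2) + 2 * (-1) + 8 * (+1)   # reactant ions
right_charge = 0                               # all products are neutral
assert left_charge == right_charge

print("electrons transferred:", electrons_lost)  # 6
```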
1.4: pH Buffers
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/1%3A_Chemical_Principles/1.4%3A_pH_Buffers
Controlling pH is critically important in the qualitative analysis of cations. Often, pH needs to be maintained in a narrow range in the analysis of cations. A pH buffer is an aqueous solution consisting of a weak acid and its conjugate base or vice versa, which minimizes pH change when a small amount of a strong acid or a strong base is added to it. For example, the addition of 0.020 mol \(\ce{HCl}\) into 1 L of water changes the pH from 7 to 1.7, i.e., about an 80% change in pH. Similarly, the addition of 0.020 mol \(\ce{NaOH}\) to the same water changes the pH from 7 to 12.3, i.e., again, about an 80% change in pH. In contrast to pure water, 1 L of a buffer solution containing 0.50 mol of a weak acid, acetic acid (\(\ce{CH3COOH}\)), and 0.50 mol of its conjugate base \(\ce{CH3COO^-}\) changes pH from 4.74 to 4.70 upon the addition of the same 0.020 mol \(\ce{HCl}\) and from 4.74 to 4.77 upon the addition of 0.020 mol \(\ce{NaOH}\), i.e., about a 1% change in pH, as illustrated in Fig. 1.7.1. The buffer contains a weak acid and its conjugate base in equilibrium. For example, an acetic acid/sodium acetate buffer has the following equilibrium:\[\ce{CH3COOH + H2O <<=> H3O^{+} + CH3COO^{-}}\nonumber\]The molar concentration of hydronium ions [\(\ce{H3O^+}\)] defines the pH of the solution, i.e., \(\mathrm{pH}=-\log \left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\). The conjugate base consumes any strong acid added to the mixture:\[\ce{HA + CH3COO^{-} -> CH3COOH + A^{-}}\nonumber\]where \(\ce{HA}\) is any strong acid and \(\ce{A^-}\) is its conjugate base. The concentration of \(\ce{CH3COOH}\) increases and that of \(\ce{CH3COO^-}\) decreases, but the pH decreases little because [\(\ce{H3O^+}\)] is almost unaffected. Similarly, the weak acid consumes any strong base added:\[\ce{MOH + CH3COOH -> CH3COO^{-} + M^{+} + H2O}\nonumber\]where \(\ce{MOH}\) is any strong base and \(\ce{M^+}\) is its cation.
The concentration of \(\ce{CH3COOH}\) decreases and \(\ce{CH3COO^-}\) increases, but pH increases little because [\(\ce{H3O^+}\)] is almost not affected. Buffers are employed on several occasions during the qualitative analysis of cations.This page titled 1.4: pH Buffers is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
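The buffer numbers quoted in this section can be reproduced with the standard Henderson–Hasselbalch relation (not named in the text, but equivalent to its equilibrium expression): pH = pKa + log10(base/acid). The pKa of 4.74 for acetic acid is the value implied by the text's starting pH.

```python
import math

def buffer_pH(pKa, mol_acid, mol_base):
    """Henderson-Hasselbalch: pH = pKa + log10([base]/[acid])."""
    return pKa + math.log10(mol_base / mol_acid)

pKa = 4.74  # acetic acid, implied by the text's starting pH of 4.74

print(f"{buffer_pH(pKa, 0.50, 0.50):.2f}")  # 4.74 before any addition
# 0.020 mol HCl converts that much acetate into acetic acid:
print(f"{buffer_pH(pKa, 0.52, 0.48):.2f}")  # close to the text's 4.70
# 0.020 mol NaOH does the reverse:
print(f"{buffer_pH(pKa, 0.48, 0.52):.2f}")  # close to the text's 4.77
```

The same 0.020 mol of acid dropped into unbuffered water gives pH = -log10(0.020) = 1.7, the 5-unit swing the text contrasts against.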
1.5: Separation of cations in groups
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/1%3A_Chemical_Principles/1.5%3A_Separation_of_cations_in_groups
Qualitative analysis of cations commonly found in water solution is usually done in three stages. For the 1st stage, i.e., separation of cations into groups, a suitable reagent is selected that selectively precipitates certain ions, leaving the rest of the ions in the solution. The reagents are added in an order such that the most selective reagent, the one that precipitates out the least number of ions, is added first. The fourteen common cations found in water that are selected in these exercises are separated into five groups. Group I comprises lead(II) (\(\ce{Pb^{2+}}\)), mercury(I) (\(\ce{Hg2^{2+}}\)), and silver(I) (\(\ce{Ag^{+}}\)), which are selectively precipitated as chlorides by adding 6M \(\ce{HCl}\) to the mixture.\[\ce{HCl + H2O(l) -> H3O^{+}(aq) + Cl^{-}(aq)}\nonumber\]\(\ce{HCl}\) solution is selected as the reagent for group I based on the facts that: i) it is a source of the chloride (\(\ce{Cl^{-}}\)) ion, which is the most selective reagent, making insoluble salts with only \(\ce{Pb^{2+}}\), \(\ce{Hg2^{2+}}\), and \(\ce{Ag^{+}}\) (recall soluble ions rule#3 described in section 1.1), and ii) it leaves behind \(\ce{H3O^{+}}\), which makes the solution acidic, which is beneficial for the separation of cations of the next group. Group II comprises tin(IV) (\(\ce{Sn^{4+}}\)), cadmium(II) (\(\ce{Cd^{2+}}\)), copper(II) (\(\ce{Cu^{2+}}\)), and bismuth(III) (\(\ce{Bi^{3+}}\)), which are selectively precipitated as sulfides by adding \(\ce{H2S}\) reagent in an acidic medium.
\(\ce{H2S}\) is a source of the sulfide (\(\ce{S^{2-}}\)) ion in water:\[\ce{H2S(aq) + 2H2O(l) <=> 2H3O^{+}(aq) + S^{2-}(aq)}\nonumber\]The \(\ce{S^{2-}}\) ion makes insoluble salts with many cations as stated by insoluble ions rule#1 in section 1.1, i.e., “Hydroxide (\(\ce{OH^{-}}\)) and sulfides (\(\ce{S^{2-}}\)) are insoluble except when the cation is a heavy alkaline earth metal ion: \(\ce{Ca^{2+}}\), \(\ce{Ba^{2+}}\), and \(\ce{Sr^{2+}}\), or an alkali metal ion, or ammonia.” \(\ce{H2S}\) in an acidic medium is selected as the source of \(\ce{S^{2-}}\), the reagent for selective precipitation of group II, because the concentration of \(\ce{S^{2-}}\) can be controlled by adjusting the pH. An acidic medium has a higher [\(\ce{H3O^{+}}\)], which decreases [\(\ce{S^{2-}}\)] due to the common ion effect of the \(\ce{H3O^{+}}\) ion. Therefore, among the insoluble sulfide salts, only the group II cations, having very low solubility, are selectively precipitated. Group III comprises chromium(III) (\(\ce{Cr^{3+}}\)), iron(II) (\(\ce{Fe^{2+}}\)), iron(III) (\(\ce{Fe^{3+}}\)), and nickel(II) (\(\ce{Ni^{2+}}\)), selectively precipitated as insoluble hydroxides and sulfides by adding \(\ce{H2S}\) in an alkaline medium with pH maintained at ~9 by \(\ce{NH3}\)/\(\ce{NH4^{+}}\) buffer. \(\ce{H2S}\) in an alkaline medium is the reagent for the selective precipitation of group III cations. When the pH is set at 9 by the \(\ce{NH3}\)/\(\ce{NH4^{+}}\) buffer, the \(\ce{OH^{-}}\) concentration is high enough to precipitate group III cations as insoluble hydroxides, except for nickel, which forms a soluble coordination complex ion with ammonia.
When \(\ce{H2S}\) is added in an alkaline medium, it produces a higher concentration of \(\ce{S^{2-}}\) due to the removal of \(\ce{H3O^{+}}\) from its equilibrium by reaction with \(\ce{OH^{-}}\):\[\ce{H3O^{+}(aq) + OH^{-}(aq)-> 2H2O(l)} \nonumber\]All of the group III cations are converted to insoluble sulfides except chromium. Group IV comprises calcium (\(\ce{Ca^{2+}}\)) and barium (\(\ce{Ba^{2+}}\)), selectively precipitated as insoluble carbonates by adding ammonium carbonate (\(\ce{(NH4)2CO3}\)) as a source of the carbonate (\(\ce{CO3^{2-}}\)) ion:\[\ce{(NH4)2CO3(s) <=> 2NH4^{+}(aq) + CO3^{2-}(aq)}\nonumber\]The \(\ce{CO3^{2-}}\) ion makes insoluble salts with many cations as stated by insoluble ions rule#2 in section 1.1, i.e., “Carbonates (\(\ce{CO3^{2-}}\)), phosphates (\(\ce{PO4^{3-}}\)), and oxide (\(\ce{O^{2-}}\)) are insoluble except when the cation is an alkali metal ion or ammonia.” All other ions have already been precipitated at this stage in groups I, II, and III except group IV cations and alkali metal ions, which makes the \(\ce{CO3^{2-}}\) ion a selective reagent for group IV cations at this stage. Group V comprises the alkali metal ions, i.e., sodium (\(\ce{Na^{+}}\)) and potassium (\(\ce{K^{+}}\)), in the mixture of ions selected. According to soluble ions rule#1 in section 1.1, alkali metal and ammonium ions form soluble salts. So, group V cations remain in solution after groups I, II, III, and IV cations are removed as insoluble chlorides, sulfides in acidic medium, hydroxides and sulfides in basic medium, and carbonates, respectively. The separation of cations in groups, along with the separation of ions within a group and their confirmation tests, are described in detail in later chapters. The flow chart shown below summarizes the separation of common cations in water into the five groups. This page titled 1.5: Separation of cations in groups is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
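The pH-controlled sulfide concentration that separates group II from group III can be sketched numerically. The dissociation constants below are common textbook values and are assumptions here (the second dissociation constant of \(\ce{H2S}\) in particular varies widely between sources), as is the 0.10 M saturated \(\ce{H2S}\) concentration.

```python
# How pH controls [S2-] from saturated H2S, per the common-ion argument
# in the text. Assumed textbook-style constants:
Ka1, Ka2 = 1.0e-7, 1.3e-13   # H2S dissociation constants (Ka2 varies a lot)
H2S = 0.10                   # M, assumed saturated solution

def sulfide(pH):
    """[S2-] = Ka1*Ka2*[H2S] / [H3O+]^2 from the combined dissociation."""
    H = 10 ** (-pH)
    return Ka1 * Ka2 * H2S / H**2

print(f"acidic (pH 0.5): [S2-] ~ {sulfide(0.5):.1e} M")
print(f"basic  (pH 9.0): [S2-] ~ {sulfide(9.0):.1e} M")
# Roughly 1e17 times more sulfide is available at pH 9, so only the least
# soluble sulfides (group II) precipitate in acid, while group III needs
# the alkaline medium.
```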
2.1: Precipitation
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/2%3A_Experimental_techniques/2.1%3A_Precipitation
The chemical reactions in these exercises are performed in a test tube. Test tubes come in different sizes. These experiments are designed for test tubes of 9 mL capacity. The reactant is in a test tube and the reagent (2nd reactant) is added drop by drop from a reagent bottle using a dropper, while the reaction mixture is being stirred. Use a clean glass rod to stir the reaction mixture. Stirring is necessary as the reactants must mix before they can react. The figures illustrate the test tubes and reagent bottles commonly used, a precipitation reaction, and the difference between solution, suspension, supernatant, and precipitate. A precipitation reaction must be tested for completeness, as, otherwise, the residual reactant will interfere with the other tests to be performed using the supernatant. One more drop of reagent is added to the clear supernatant, and if no more precipitate forms, the precipitation is complete. Otherwise, repeat the centrifugation and check again. This page titled 2.1: Precipitation is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
2.2: Water bath
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/2%3A_Experimental_techniques/2.2%3A_Water_bath
Often a reaction mixture needs to be heated for a certain time for the reaction to happen. Heating directly over a Bunsen burner or on a hot plate is not uniform and is associated with fire hazards. Heating the reaction mixture indirectly in a water bath achieves uniform heating with less fire hazard. A water bath for qualitative analysis of cations is usually a 200 mL capacity beaker filled with distilled or deionized water up to about the 150 mL mark and placed on a hot plate for heating. Ramp up the temperature control knob of the hot plate to the maximum in the beginning until the water starts boiling. Then set the temperature to 350 oC to keep it gently boiling, as illustrated in the figure. Add water when the water level drops to between one-half and one-quarter, ramp up the temperature again, and then re-set the temperature at 350 oC once it starts boiling again. Hold the hot test tube with a test tube holder while stirring with a clean glass rod or while moving it to the centrifuge. Never hold a hot test tube with a bare hand. Always point the mouth of the hot test tube away from you and away from any other person around. Hot test tubes and the hot liquid in the test tube can cause burns. This page titled 2.2: Water bath is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
2.3: Centrifugation
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/2%3A_Experimental_techniques/2.3%3A_Centrifugation
The solid product, i.e., the precipitate, is forced to form a sediment or pellet at the bottom of the test tube under the action of centrifugal force, as illustrated in the figure. A laboratory centrifuge machine contains a fast rotor with compartments to house the test tubes, as shown in the figure. The test tube compartments are arranged in a circle. Three similar test tubes with the same volume of liquid can also be placed at the corners of a triangle around the axis of the rotor to balance the weight. Close the lid and start the machine. If the weight is not balanced, the centrifuge machine will vibrate, shake, and may start moving or fly off, causing damage, when switched on. This page titled 2.3: Centrifugation is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
2.4: Separation of the precipitate
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/2%3A_Experimental_techniques/2.4%3A_Separation_of_the_precipitate
After centrifugation, a clear liquid, called the supernatant, floats over the sediment or precipitate. The figure shows the separation of the supernatant from the precipitate by decantation and by aspiration. Sometimes the precipitate is not fully packed after centrifugation and tends to go into the supernatant during the decantation or aspiration process. In these situations, a cotton-plug technique is used, i.e., a small tuft of cotton is twisted between the fingers to make it pointy at one end. The pointy end is then plugged into the tip of a Pasteur pipette to act as a filter during aspiration. The loose precipitate is filtered by the cotton plug during aspiration, as illustrated in the figure. The cotton plug is removed and the clear supernatant is then transferred to a clean test tube for further analysis. The precipitate is usually washed by re-suspending it, by stirring with a clean glass rod, in a solvent that does not re-dissolve the product but dissolves any impurity in it, as shown in the figure. The suspension is centrifuged or gravity filtered, and the supernatant or filtrate of the washing step is discarded, as it is just the washing liquid with some impurities in it. Sometimes a precipitate in a suspension is separated by gravity filtration. A gravity filtration setup consists of a funnel placed in a test tube or an Erlenmeyer flask and a filter paper placed in the funnel, as illustrated in the figure. The suspension is poured onto the filter paper. The solution that passes through the filter paper and is collected in the test tube or Erlenmeyer flask is called the filtrate. The precipitate that is retained on the filter paper is called the residue. The precipitate is washed by adding a washing solution drop by drop while gently stirring the residue with a clean glass rod. This page titled 2.4: Separation of the precipitate is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
2.5: pH measurement
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/2%3A_Experimental_techniques/2.5%3A_pH_measurement
The pH is usually measured in laboratories with a digital pH meter. The electrode of the pH meter is first calibrated with solutions of known pH values, and then the electrode is dipped in the test solution to read its pH value. pH papers are a cheaper alternative often used for pH measurement in qualitative analyses of cations, giving quick results, as illustrated in the figure. If the purpose is to monitor when the solution turns from acidic to alkaline or vice versa, a litmus paper is used. A red litmus paper stays red in an acidic solution and turns blue in a basic solution. A blue litmus paper turns red in an acidic solution and stays blue in a basic solution. If the purpose is to determine an approximate pH value, a universal pH indicator paper is used. The test solution is applied to the end of a pH paper strip with a glass rod, and the pH is read by matching the color of the test paper soaked in the test solution with the color chart on the pH paper box. A common mistake is dipping a pH paper in the test solution and withdrawing it immediately to read the color change. This should be avoided as it may leave contaminants in the solution. Further, the test solution is at the bottom of the test tube, requiring a long paper strip and making it difficult to avoid touching the sides of the test tube above the liquid. A better approach is to cut a piece of pH paper about 2 cm long, touch one end with the wet glass rod that was used to stir the test solution, and then read the color change in the pH paper by matching it to the color chart. This page titled 2.5: pH measurement is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
2.6: Flame test
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/2%3A_Experimental_techniques/2.6%3A_Flame_test
A flame test is a complex phenomenon that is not fully explained. In simple words, when a solution of metal salts, e.g., an aqueous solution of metal chlorides, is injected into a flame, some of the metal ions may gain electrons and become neutral metal atoms. Electrons in the atom can be promoted from the ground state to a higher-energy excited state by the strong heat of the flame. The excited electrons ultimately return to the ground state either in one go or in several steps by jumping to lower allowed energy states. When the excited electrons jump from higher to lower allowed energy states, they emit electromagnetic radiation of a specific wavelength corresponding to the energy gap between the states. Some of these radiations may fall in the visible part of the electromagnetic spectrum. The color we see is a combination of all the colors in the emission spectrum, as illustrated in Fig. 2.7.1. The exact gap between the energy levels allowed for electrons varies from one metal to another. Therefore, different metals have different patterns of spectral lines in their emission spectra, and if some of these spectral lines fall in the visible range, they impart different colors to the flame. For example, the ground-state electron configuration of the sodium atom is \(1s^2 2s^2 2p^6 3s^1\). When the sodium atom is in the hot flame, some of the electrons can jump to any of the higher-energy allowed states, such as 3p, 4s, etc. The familiar intense yellow flame of sodium is a result of excited electrons jumping back from the \(3p^1\) excited state to the ground-state \(3s^1\) level. Often metal chloride salts are used for the flame tests as they are water-soluble and easier to vaporize in the flame from the solution. Metal chloride salts are first dissolved in water. Other metal salts are first treated with 6M \(\ce{HCl}\) to dissolve them as metal chlorides and then used for the flame test. An inert platinum wire is dipped in the test solution.
Usually, the wire has a small loop at the end to make a film of the solution that evaporates in the flame. Air and fuel supply to the flame are adjusted to produce a non-luminous flame. The wire carrying the salt solution is touched on the outer edge of the flame, somewhere in the middle of the vertical axis of the flame, and the color imparted to the flame is observed. Nichrome wire is a cheaper alternative to platinum wire, though nichrome may slightly alter the flame color. A wooden splint or a wooden cotton-tipped applicator are other cheaper alternatives. The wooden splint or cotton swab applicator is first soaked in deionized or distilled water overnight so that the cotton or wood may not burn when placed in the flame for a short time. The salt solution is then applied to the wooden splint end or to the cotton swab and exposed to the flame. Wooden splints and cotton-tipped applicators are disposable, i.e., they are discarded after one flame test. Platinum wire can be reused after washing. The wire is dipped in 6M \(\ce{HCl}\) and then heated in a flame to red-hot. The process is repeated until the wire does not alter the color of the flame. Then it can be re-used. Nichrome wire can be washed the same way. However, an easier alternative is to cut off the loop of the wire and make a new loop on the fresh end portion, then use the wire for the next flame test. One figure shows that flame tests using calcium chloride work equally well with nichrome wire, a cotton-tipped applicator, and a wooden splint. Another shows the flame colors of some metal chloride salt solutions exposed to the flame on a cotton swab applicator. This page titled 2.6: Flame test is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
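The link between the sodium energy gap and the yellow flame color described above can be made quantitative with \(E = hc/\lambda\). The 589 nm wavelength of the sodium D line is a standard reference value, not a number from this text.

```python
# Relate the sodium D line (589 nm, the familiar yellow 3p -> 3s emission)
# to the energy gap between the two states: E = h*c / wavelength.
h = 6.626e-34    # J*s, Planck constant
c = 2.998e8      # m/s, speed of light
eV = 1.602e-19   # J per electron-volt

wavelength = 589e-9  # m, sodium D line (standard reference value)
E = h * c / wavelength
print(f"3p -> 3s gap ~ {E / eV:.2f} eV")  # ~2.1 eV
```

Running the same arithmetic in reverse (gap in eV to wavelength) predicts which spectral lines of a metal fall in the visible range (~400-700 nm) and hence color the flame.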
2.7: Common qualitative analysis reagents, their effects, and hazards
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/2%3A_Experimental_techniques/2.7%3A_Common_qualitative_analysis_reagents%2C_their_effects%2C_and_hazards
Reagent: 6M Ammonia (\(\ce{NH4OH}\) or \(\ce{NH3}\)). Effects: increases [\(\ce{NH3}\)], increases [\(\ce{OH^-}\)], decreases [\(\ce{H3O^+}\)], precipitates insoluble hydroxides, forms \(\ce{NH3}\) complexes. Hazards: toxic, corrosive, and irritant.
Reagent: 6M Hydrochloric acid (\(\ce{HCl}\)). Effects: increases [\(\ce{H3O^+}\)], increases [\(\ce{Cl^-}\)], decreases [\(\ce{OH^-}\)], dissolves insoluble carbonates, chromates, hydroxides, and some sulfates, destroys hydroxo and \(\ce{NH3}\) complexes, and precipitates insoluble chlorides. Hazards: toxic and corrosive.
Reagent: 3% Hydrogen peroxide (\(\ce{H2O2}\)). Effects: oxidizing agent in acidic medium, reducing agent in basic medium. Hazards: corrosive.
Reagent: 6M Nitric acid (\(\ce{HNO3}\)). Effects: increases [\(\ce{H3O^+}\)], decreases [\(\ce{OH^-}\)], dissolves insoluble carbonates, chromates, and hydroxides, dissolves insoluble sulfides by oxidizing the sulfide ion, destroys hydroxo and ammonia complexes, good oxidizing agent when hot. Hazards: toxic, corrosive, and a strong oxidant.
Reagent: 3M Potassium hydroxide (\(\ce{KOH}\)). Effects: increases [\(\ce{OH^-}\)], decreases [\(\ce{H3O^+}\)], forms hydroxo complexes, precipitates insoluble hydroxides. Hazards: toxic and corrosive.
Reagent: 1M Thioacetamide (\(\ce{CH3C(S)NH2}\)). Effects: produces \(\ce{H2S}\), i.e., a source of the sulfide ion (\(\ce{S^{2-}}\)), precipitates insoluble sulfides. Hazards: toxic and carcinogenic.
This page titled 2.7: Common qualitative analysis reagents, their effects, and hazards is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
3.1: Separation of group I cations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/3%3A_Group_I_cations/3.1%3A_Separation_of_group_I_cations
Selective precipitation of group I, i.e., lead(II) (\(\ce{Pb^{2+}}\)), mercury(I) (\(\ce{Hg2^{2+}}\)), and silver(I) (\(\ce{Ag^{+}}\)), is based on soluble ions rule#3 in the solubility guidelines in section 1.1, which states "Salts of chloride (\(\ce{Cl^{-}}\)), bromide (\(\ce{Br^{-}}\)), and iodide (\(\ce{I^{-}}\)) are soluble, except when the cation is lead (\(\ce{Pb^{2+}}\)), mercury (\(\ce{Hg2^{2+}}\)), or silver (\(\ce{Ag^{+}}\))." The best source of \(\ce{Cl^{-}}\) for precipitating group I cations from a test solution is \(\ce{HCl}\), because it is a strong acid that completely dissociates in water producing \(\ce{Cl^{-}}\) and \(\ce{H3O^{+}}\) ions, neither of which gets involved in any undesired reactions under these conditions. The \(\ce{K_{sp}}\) expression is used to calculate the [\(\ce{Cl^{-}}\)] that will be sufficient to precipitate group I cations. The molar concentration of chloride ions, i.e., [\(\ce{Cl^{-}}\)] in moles/liter, in a saturated solution of the ionic compound can be calculated by rearranging the respective \(\ce{K_{sp}}\) expression. For example, for \(\ce{AgCl}\) solution, \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Ag}^{+}\right]\left[\mathrm{Cl}^{-}\right]\) rearranges to:\[\left[\mathrm{Cl}^{-}\right]=K_{s p} /\left[\mathrm{Ag}^{+}\right]\nonumber\]and for \(\ce{PbCl2}\) solution, \(K_{sp}=\left[\mathrm{Pb}^{2+}\right]\left[\mathrm{Cl}^{-}\right]^{2}\) rearranges to:\[\left[C l^{-}\right]=\sqrt{K_{s p} /\left[P b^{2+}\right]}\nonumber\]The concentration of ions in the unknown sample is ~0.1 M. Plugging the 0.1M value for \(\ce{Pb^{2+}}\) into the above equation shows that [\(\ce{Cl^{-}}\)] in a saturated solution having 0.1M \(\ce{Pb^{2+}}\) is \(1.3 \times 10^{-2}\) M:\[\left[C l^{-}\right]=\sqrt{K_{s p} /\left[P b^{2+}\right]}=\sqrt{1.6 \times 10^{-5} / 0.1}=1.3 \times 10^{-2} \mathrm{M}\nonumber\]It means a \(\ce{Cl^{-}}\) concentration up to \(1.3 \times 10^{-2}\) M will not cause precipitation from a 0.1M \(\ce{Pb^{2+}}\) solution.
Increasing \(\ce{Cl^{-}}\) above 0.013M will remove \(\ce{Pb^{2+}}\) from the solution as a \(\ce{PbCl2}\) precipitate. If 99.9% removal is desired, then \(1.0 \times 10^{-4}\) M \(\ce{Pb^{2+}}\) will be left in the solution and the [\(\ce{Cl^{-}}\)] has to be raised to 0.40 M:\[\left[C l^{-}\right]=\sqrt{K_{s p} /\left[P b^{2+}\right]}=\sqrt{1.6 \times 10^{-5} / 1.0 \times 10^{-4}}=0.40 \mathrm{M}\nonumber\]The solubility of \(\ce{Hg2Cl2}\) and \(\ce{AgCl}\) is less than that of \(\ce{PbCl2}\). So, a 0.40M \(\ce{Cl^{-}}\) will remove more than 99.9% of \(\ce{Hg2^{2+}}\) and \(\ce{Ag^{+}}\) from the solution. A sample of 20 drops of an aqueous solution is about 1 mL. In these experiments, ~15 drops of the test solution are collected in a test tube and 3 to 4 drops of 6M \(\ce{HCl}\) are mixed with the solution. This results in about 0.9 mL of total solution containing 1 to 1.3 M \(\ce{Cl^{-}}\), which is more than twice the concentration needed to precipitate out 99.9% of the group I cations. A concentrated reagent (6M \(\ce{HCl}\)) is used to minimize the dilution of the test sample, because the solution is centrifuged and the supernatant that is separated by decantation is used to analyze the remaining cations. A 12M \(\ce{HCl}\) is available, but it is not used because it is a more hazardous reagent, being a more concentrated strong acid, and also because if the \(\ce{Cl^{-}}\) concentration is raised to 5M or higher in the test solution, it can re-dissolve \(\ce{AgCl}\) by forming the water-soluble [\(\ce{AgCl2}\)]- complex ion. The addition of \(\ce{HCl}\) causes precipitation of the group I cations as a milky white suspension, as shown in the figure and in the chemical reaction equations below. The precipitates can be separated by gravity filtration, but more effective separation can be achieved by subjecting the suspension to a centrifuge in a test tube.
Centrifugation forces the suspended solid to settle and pack at the bottom of the test tube, from which the clear solution, called the supernatant, can be poured out, a process called decantation. The precipitate is resuspended in pure water by stirring with a clean glass rod, centrifuged, and decanted again to wash out any residual impurities. The washed precipitate is used to separate and confirm the group I cations, and the supernatant is saved for analysis of group 2, 3, 4, and 5 cations.\[\ce{ Pb^{2+}(aq) + 2Cl^{-}(aq) <=> PbCl2(s)(v)}\nonumber\]\[\ce{Hg2^{2+}(aq) + 2Cl^{-}(aq) <=> Hg2Cl2(s)(v)}\nonumber\]\[\ce{ Ag^{+}(aq) + Cl^{-}(aq) <=> AgCl(s)(v)}\nonumber\]This page titled 3.1: Separation of group I cations is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
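The chloride calculations in this section can be reproduced directly from the \(\ce{K_{sp}}\) expression for \(\ce{PbCl2}\):

```python
import math

# Reproduce the chloride calculations from the text (Ksp of PbCl2 = 1.6e-5).
Ksp_PbCl2 = 1.6e-5

# [Cl-] at which 0.1 M Pb2+ just begins to precipitate:
cl_start = math.sqrt(Ksp_PbCl2 / 0.1)
print(f"precipitation starts above [Cl-] = {cl_start:.1e} M")  # 1.3e-2 M

# [Cl-] needed to leave only 0.1% of the Pb2+ (1.0e-4 M) in solution:
cl_999 = math.sqrt(Ksp_PbCl2 / 1.0e-4)
print(f"99.9% removal needs [Cl-] = {cl_999:.2f} M")  # 0.40 M

# 3 drops (~0.15 mL) of 6 M HCl diluted into ~0.9 mL of total solution:
cl_actual = 0.15 * 6 / 0.9
print(f"actual [Cl-] ~ {cl_actual:.1f} M")  # ~1 M, comfortably above 0.40 M
```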
3.2: Separation and confirmation of individual ions in group I precipitates
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/3%3A_Group_I_cations/3.2%3A_Separation_and_confirmation_of_Pb2__ion
The solubility of \(\ce{PbCl2}\) in water at 20 oC is about 1.1 g/100 mL, which is significantly higher than \(1.9 \times 10^{-4}\) g/100 mL for \(\ce{AgCl}\) and \(3.2 \times 10^{-5}\) g/100 mL for \(\ce{Hg2Cl2}\). Further, the solubility of \(\ce{PbCl2}\) increases about three-fold to about 3.2 g/100 mL in boiling water at 100 oC, while the solubilities of \(\ce{AgCl}\) and \(\ce{Hg2Cl2}\) remain negligible. The 15-drop sample that is used to precipitate out group I cations corresponds to about 0.75 mL which, given the molar mass of \(\ce{PbCl2}\) (278.1 g/mol) and a concentration of each ion of ~0.1M, contains about 0.02 g of \(\ce{PbCl2}\) precipitate. This 0.02 g of \(\ce{PbCl2}\) requires ~0.6 mL of heated water for dissolution. The precipitate is re-suspended in ~2 mL of water and heated in a boiling water bath to selectively dissolve \(\ce{PbCl2}\), leaving any \(\ce{AgCl}\) and \(\ce{Hg2Cl2}\) almost undissolved, as shown in the figure.\[\ce{ PbCl2 (s) <=>[Hot~water] Pb^{2+}(aq) + 2Cl^{-}(aq)}\nonumber\]The heated suspension is filtered using a heated gravity filtration setup to separate the residue, comprising \(\ce{AgCl}\) and \(\ce{Hg2Cl2}\), from the filtrate containing dissolved \(\ce{PbCl2}\). The solubility of \(\ce{PbCl2}\) is about three times less at room temperature than in boiling water. Therefore, the 2 mL filtrate is cooled to room temperature to crystallize out \(\ce{PbCl2}\):\[\ce{Pb^{2+}(aq) + 2Cl^{-}(aq) <=>[Cold~water] PbCl2(s)}\nonumber\]If \(\ce{PbCl2}\) crystals are observed in the filtrate upon cooling to room temperature, it is a confirmation of \(\ce{Pb^{2+}}\) in the test solution. If the \(\ce{PbCl2}\) concentration is low in the filtrate, the crystals may not form upon cooling. A few drops of 5M \(\ce{HCl}\) are mixed with the filtrate to force the crystal formation based on the common ion effect of \(\ce{Cl^-}\) in the reactants.
The formation of \(\ce{PbCl2}\) crystals confirms \(\ce{Pb^{2+}}\), as shown in the figure, and no crystal formation at this stage confirms that \(\ce{Pb^{2+}}\) was absent in the test solution. The residue left after filtering out \(\ce{Pb^{2+}}\) in hot water is washed further with 10 mL of hot water to wash out residual \(\ce{PbCl2}\). Then 2 mL of 6M aqueous \(\ce{NH3}\) solution is passed through the residue drop by drop. Aqueous \(\ce{NH3}\) dissolves \(\ce{AgCl}\) precipitate by forming the water-soluble complex ion \(\ce{[Ag(NH3)2(aq)]^+}\) through the following series of reactions:\[\ce{AgCl(s) <=> Ag^{+}(aq) + Cl^{-}(aq)}\quad K_{sp} = 1.8\times10^{-10}\nonumber\]\[\ce{Ag^{+}(aq) + 2NH3(aq) <=> Ag(NH3)2^{+}(aq)}\quad K_f = 1.7\times10^7\nonumber\]\[\text{Overall reaction:}~\ce{AgCl(s) + 2NH3(aq) <=> Ag(NH3)2^{+}(aq) + Cl^{-}(aq)}\quad K = 3.0\times10^{-3}\nonumber\]The 2 mL filtrate is collected in a separate test tube for confirmation of the \(\ce{Ag^+}\) ion. Although \(\ce{Hg2Cl2}\) precipitate is insoluble in water, it does slightly dissociate, like all ionic compounds. The \(\ce{Hg2^{2+}}\) ions undergo an auto-oxidation or disproportionation reaction producing black \(\ce{Hg}\) liquid and \(\ce{Hg^{2+}}\) ions. The \(\ce{Hg^{2+}}\) ions react with \(\ce{NH3}\) and \(\ce{Cl^-}\), forming white water-insoluble \(\ce{HgNH2Cl}\) precipitate through the following series of reactions:\[\ce{Hg2Cl2(s) <=> Hg2^{2+}(aq) + 2Cl^{-}(aq)}\nonumber\]\[\ce{Hg2^{2+}(aq) <=> Hg(l) + Hg^{2+}(aq)}\nonumber\]\[\ce{Hg^{2+}(aq) + 2NH3(aq) + Cl^{-}(aq) <=> HgNH2Cl(s) + NH4^{+}(aq)}\nonumber\]\[\text{Overall reaction:}\ce{~Hg2Cl2(s, white) + 2NH3(aq) <=> HgNH2Cl(s, white) + NH4^{+}(aq) + Cl^{-}(aq) + Hg(l, black)}\nonumber\]A mixture of white solid \(\ce{HgNH2Cl}\) and black liquid \(\ce{Hg}\) appears gray in color. Turning of the white \(\ce{Hg2Cl2}\) precipitate to a grayish color upon addition of drops of 6M \(\ce{NH3}\) solution confirms that \(\ce{Hg2^{2+}}\) ions are present in the test solution, as shown in the figure.
If the white precipitate redissolves leaving behind no grayish residue, it means the precipitate was \(\ce{AgCl}\) and \(\ce{Hg2^{2+}}\) was absent in the test solution.Although the water-soluble complex ion \(\ce{[Ag(NH3)2(aq)]^+}\) is quite stable, it does slightly decompose into \(\ce{Ag^+}\) and \(\ce{NH3(aq)}\). The excess \(\ce{NH3}\) added to dissolve the \(\ce{AgCl}\) precipitate, and that produced by dissociation of \(\ce{[Ag(NH3)2(aq)]^+}\), is removed by making the solution acidic with 6M \(\ce{HNO3}\). The \(\ce{Cl^-}\) formed from the dissolution of the \(\ce{AgCl}\) precipitate in the earlier reactions is still present in the medium. Decomposition of \(\ce{[Ag(NH3)2(aq)]^+}\) in the acidic medium produces enough \(\ce{Ag^+}\) ions to re-form the white \(\ce{AgCl}\) precipitate by the following series of equilibrium reactions.\[\ce{[Ag(NH3)2]^{+}(aq) <=> Ag^{+}(aq) + 2NH3(aq)}\nonumber\]\[\ce{2NH3(aq) + 2H3O^{+}(aq) <=> 2NH4^{+}(aq) + 2H2O(l)}\nonumber\]\[\ce{Ag^{+}(aq) + Cl^{-}(aq) <=> AgCl(s, white)}\nonumber\]\[\text{Overall reaction:}\ce{~[Ag(NH3)2]^{+}(aq) + 2H3O^{+}(aq) + Cl^{-}(aq) <=> AgCl(s, white) + 2NH4^{+}(aq) + 2H2O(l)}\nonumber\]The formation of white \(\ce{AgCl}\) precipitate at this stage in the acidified filtrate confirms the \(\ce{Ag^+}\) ion was present in the test solution, as shown in the figure, and its absence confirms that the \(\ce{Ag^+}\) ion was not present in the test solution.This page titled 3.2: Separation and confirmation of individual ions in group I precipitates is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
4.1: Precipitation of group II cations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/4%3A_Group_II_cations/4.1%3A_Precipitation_of_group_II_cations
The solubility guideline #1 of insoluble ions states "Hydroxides (\(\ce{OH^{-}}\)) and sulfides (\(\ce{S^{2-}}\)) are insoluble except when the cation is an alkali metal, ammonium, or a heavy alkaline earth metal ion, i.e., \(\ce{Ca^{2+}}\), \(\ce{Ba^{2+}}\), and \(\ce{Sr^{2+}}\)". Chromium(III) sulfide is also on the exceptions list, as it is unstable in water. Evidently, the number of insoluble sulfides and hydroxides is large. The solution is made acidic to decrease [\(\ce{OH^{-}}\)] below the level that can cause precipitation of any ion. The [\(\ce{S^{2-}}\)] also remains low due to the common ion effect of \(\ce{H3O^{+}}\) in the acidic medium, as explained in the next section. Therefore, among the insoluble sulfides, only those that have very low solubility limits are selectively precipitated. Among the cations selected in this study that are left in the solution after group I cations have been separated, these are \(\ce{Bi^{3+}}\), \(\ce{Cd^{2+}}\), \(\ce{Cu^{2+}}\), and \(\ce{Sn^{4+}}\). Group II comprises \(\ce{Bi^{3+}}\), \(\ce{Cd^{2+}}\), \(\ce{Cu^{2+}}\), and \(\ce{Sn^{4+}}\).Among the ions in the initial solution after removal of group I cations, the following ions form insoluble sulfides: \(\ce{Bi^{3+}}\), \(\ce{Cd^{2+}}\), \(\ce{Cu^{2+}}\), \(\ce{Fe^{2+}}\), \(\ce{Fe^{3+}}\), \(\ce{Ni^{2+}}\), and \(\ce{Sn^{4+}}\). Among these, \(\ce{Bi^{3+}}\), \(\ce{Cd^{2+}}\), \(\ce{Cu^{2+}}\), and \(\ce{Sn^{4+}}\) are in group II, which form very insoluble sulfides, while \(\ce{Cr^{3+}}\), \(\ce{Fe^{2+}}\), \(\ce{Fe^{3+}}\), and \(\ce{Ni^{2+}}\) are in group III, which form insoluble hydroxides and sulfides in basic medium, as reflected by their solubility product constants (\(K_{sp}\)) listed in Table 1. The minimum concentration of \(\ce{S^{2-}}\) needed to start precipitation of each cation can be calculated from the \(K_{sp}\) expressions as shown in Table 1.
It can be observed from Table 1 that there is a huge difference between the minimum \(\ce{S^{2-}}\) concentration needed to precipitate \(\ce{Ni^{2+}}\) (1.8 x 10-20 M), the least soluble sulfide of group III, and that needed to precipitate \(\ce{Cd^{2+}}\) (7.8 x 10-26 M), the most soluble sulfide of group II. If [\(\ce{S^{2-}}\)] is kept above 7.8 x 10-26 M but below 1.8 x 10-20 M, group II cations will selectively precipitate while group III cations and the rest of the cations will remain dissolved.

| Ion | Sulfide | \(K_{sp}\) at 25 oC | Minimum [\(\ce{S^{2-}}\)] needed to precipitate |
|---|---|---|---|
| \(\ce{Fe^{2+}}\) | \(\ce{FeS}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Fe}^{2+}\right]\left[\mathrm{S}^{2-}\right]=4.9 \times 10^{-18}\) | \(\left[\mathrm{S}^{2-}\right]=\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Fe}^{2+}\right]=4.9 \times 10^{-17}\) |
| \(\ce{Ni^{2+}}\) | \(\ce{NiS}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Ni}^{2+}\right]\left[\mathrm{S}^{2-}\right]=1.8 \times 10^{-21}\) | \(\left[\mathrm{S}^{2-}\right]=\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Ni}^{2+}\right]=1.8 \times 10^{-20}\) |
| \(\ce{Cd^{2+}}\) | \(\ce{CdS}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Cd}^{2+}\right]\left[\mathrm{S}^{2-}\right]=7.8 \times 10^{-27}\) | \(\left[\mathrm{S}^{2-}\right]=\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Cd}^{2+}\right]=7.8 \times 10^{-26}\) |
| \(\ce{Bi^{3+}}\) | \(\ce{Bi2S3}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Bi}^{3+}\right]^{2}\left[\mathrm{S}^{2-}\right]^{3}=6.8 \times 10^{-97}\) | \(\left[\mathrm{S}^{2-}\right]=\sqrt[3]{\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Bi}^{3+}\right]^{2}}=4.1 \times 10^{-32}\) |
| \(\ce{Sn^{4+}}\) | \(\ce{SnS2}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Sn}^{4+}\right]\left[\mathrm{S}^{2-}\right]^{2}=1.0 \times 10^{-70}\) | \(\left[\mathrm{S}^{2-}\right]=\sqrt{\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Sn}^{4+}\right]}=3.2 \times 10^{-35}\) |
| \(\ce{Cu^{2+}}\) | \(\ce{CuS}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Cu}^{2+}\right]\left[\mathrm{S}^{2-}\right]=8.7 \times 10^{-36}\) | \(\left[\mathrm{S}^{2-}\right]=\mathrm{K}_{\mathrm{sp}} /\left[\mathrm{Cu}^{2+}\right]=8.7 \times 10^{-35}\) |

The source of \(\ce{S^{2-}}\) is
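The last column of Table 1 can be reproduced with a few lines of code (a sketch assuming 0.1 M of each cation, as in the text):

```python
# Minimum [S2-] (mol/L) needed to start precipitating each 0.1 M cation,
# from the Ksp expressions in Table 1.
CATION_CONC = 0.1  # M, assumed for every cation as in the text

s_min = {
    "FeS":   4.9e-18 / CATION_CONC,                  # Ksp = [Fe2+][S2-]
    "NiS":   1.8e-21 / CATION_CONC,                  # Ksp = [Ni2+][S2-]
    "CdS":   7.8e-27 / CATION_CONC,                  # Ksp = [Cd2+][S2-]
    "SnS2":  (1.0e-70 / CATION_CONC) ** 0.5,         # Ksp = [Sn4+][S2-]^2
    "Bi2S3": (6.8e-97 / CATION_CONC**2) ** (1 / 3),  # Ksp = [Bi3+]^2[S2-]^3
}

for sulfide, s in s_min.items():
    print(f"{sulfide:6s} {s:.1e} M")
```

The group II sulfides all precipitate at [\(\ce{S^{2-}}\)] far below the 1.8 x 10-20 M threshold of \(\ce{NiS}\), which is what makes the selective precipitation window possible.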
\(\ce{H2S}\) gas, a weak diprotic acid that dissociates in water by the following equilibrium reactions:\[\ce{H2S(g) + H2O(l) <=> H3O^{+}(aq) + HS^{-}(aq)}\quad \mathrm{K}_{\mathrm{a} 1}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{HS}^{-}\right] /\left[\mathrm{H}_{2} \mathrm{~S}\right]=1.0 \times 10^{-7}\nonumber\]\[\ce{HS^{-}(aq) + H2O(l) <=> H3O^{+}(aq) + S^{2-}(aq)}\quad \mathrm{K}_{\mathrm{a} 2}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]\left[\mathrm{S}^{2-}\right] /\left[\mathrm{HS}^{-}\right]=1.3 \times 10^{-13}\nonumber\]\[\text{Overall reaction: }\ce{H2S(g) + 2H2O(l) <=> 2H3O^{+}(aq) + S^{2-}(aq)}\quad\mathrm{K}_{\mathrm{a}}=\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}\left[\mathrm{~S}^{2-}\right] /\left[\mathrm{H}_{2} \mathrm{~S}\right]=1.3 \times 10^{-20}\nonumber\]The extent of \(\ce{H2S}\) dissociation, and, consequently, the concentration of \(\ce{S^{2-}}\) produced, is dependent on \(\ce{H3O^{+}}\):\[K_a = \frac{\ce{[H3O^{+}]^{2}[S^{2-}]}}{\ce{[H2S]}}\quad\quad\text{ rearranges to: }\quad\quad\ce{[S^{2-}]} = \frac{K_{a}\ce{[H2S]}}{\ce{[H3O^{+}]^{2}}}\nonumber\]It is obvious from the above formula that \(\ce{[S^{2-}]}\) is dependent on \(\ce{[H3O^{+}]}\), which is related to pH (\(pH = \text{Log}\frac{1}{\ce{[H3O^{+}]}} = \text{-Log}\ce{[H3O^{+}]}\)). Therefore, \(\ce{[S^{2-}]}\) can be controlled by adjusting the pH.\(\ce{H2S}\) is a toxic gas. To minimize exposure, \(\ce{H2S}\) is produced in-situ by decomposition of thioacetamide (\(\ce{CH3CSNH2}\)) in water:\[\ce{CH3CSNH2(aq) + 2H2O <=> CH3COO^{-} + NH4^{+}(aq) + H2S(aq)}\nonumber\]The decomposition of thioacetamide is an endothermic reaction, which, according to Le Chatelier's principle, moves in the forward direction upon heating.
An aqueous solution of thioacetamide is heated in a boiling water bath in a fume hood, producing ~0.01M \(\ce{H2S}\) solution.Rearranging the acid dissociation constant expression of \(\ce{H2S}\) and plugging 0.01M \(\ce{H2S}\) into the rearranged formula allows calculating the \(\ce{S^{2-}}\) concentration at various concentrations of \(\ce{H3O^{+}}\), i.e., at various pH values:\[\ce{[S^{2-}]} = \frac{K_{a}\ce{[H2S]}}{\ce{[H3O^{+}]^{2}}} = \frac{1.3\times10^{-20}\times0.01}{\ce{[H3O^{+}]^{2}}} = \frac{1.3\times10^{-22}}{\ce{[H3O^{+}]^{2}}}\nonumber\]It shows that the \(\ce{S^{2-}}\) concentration can be varied by varying [\(\ce{H3O^{+}}\)], i.e., by varying pH. At pH 1 and 0, [\(\ce{H3O^{+}}\)] is 0.10 M and 1.0 M, respectively, which produces [\(\ce{S^{2-}}\)] in the range of 1.3 x 10-20 M to 1.3 x 10-22 M:\[\ce{[S^{2-}]} = \frac{1.3\times10^{-22}}{(0.10)^{2}} = 1.3\times10^{-20} ~M\quad\quad\text{ and }\quad\quad\ce{[S^{2-}]} = \frac{1.3\times10^{-22}}{(1.0)^{2}} = 1.3\times10^{-22}~M\nonumber\]This range of [\(\ce{S^{2-}}\)] is less than the minimum needed to precipitate \(\ce{Ni^{2+}}\), the least soluble sulfide of group III, but more than the minimum needed to precipitate \(\ce{Cd^{2+}}\), the most soluble sulfide of group II. If the pH of the test solution is maintained between 0 and 1, group II cations will precipitate and group III and higher group cations will remain dissolved. At pH 0.5, [\(\ce{H3O^{+}}\)] is \(10^{-0.5} \approx 0.32\) M and [\(\ce{S^{2-}}\)] is 1.3 x 10-21 M, which will precipitate more than 99.99% of the \(\ce{Cd^{2+}}\):\[\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Cd}^{2+}\right]\left[\mathrm{S}^{2-}\right]=7.8 \times 10^{-27}\quad\quad\text{gives:}\quad\quad\ce{[Cd^{2+}]} = \frac{7.8\times 10^{-27}}{\ce{[S^{2-}]}} = \frac{7.8\times 10^{-27}}{1.3\times10^{-21}} = 6.0\times10^{-6}~M,\nonumber\]which is ~0.006% of the initial [\(\ce{Cd^{2+}}\)].The supernatant after removal of group I chlorides is usually within the pH range of 0.5 ±0.3, which is the appropriate pH for precipitation of group II cations under the conditions of this study.
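The pH window can be verified numerically; a sketch using the overall \(K_a\) and the ~0.01 M \(\ce{H2S}\) concentration quoted above:

```python
# [S2-] as a function of pH for a ~0.01 M H2S solution, using the overall
# Ka = 1.3e-20 quoted in the text.
K_A_H2S = 1.3e-20
H2S_CONC = 0.01  # M, from thioacetamide decomposition

def sulfide_conc(pH):
    """Equilibrium [S2-] in mol/L at a given pH."""
    h3o = 10.0 ** (-pH)
    return K_A_H2S * H2S_CONC / h3o ** 2

s_at_pH1 = sulfide_conc(1.0)   # ~1.3e-20 M, just below the NiS threshold
s_at_pH0 = sulfide_conc(0.0)   # ~1.3e-22 M

# Residual Cd2+ at the pH-1 end of the window (Ksp of CdS from Table 1)
cd_residual = 7.8e-27 / s_at_pH1   # M
print(f"{s_at_pH1:.1e} {s_at_pH0:.1e} {cd_residual:.1e}")
```

Even at the pH 1 end of the window, the residual \(\ce{Cd^{2+}}\) (~6 x 10-7 M) is below 0.001% of the initial 0.1 M, so precipitation of the group II cations is essentially complete anywhere in the pH 0 to 1 range.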
If the pH of the test sample is outside this range, the pH can be increased to ~0.5 by adding 0.5M \(\ce{NH3(aq)}\) drop by drop under stirring. Determine the pH by using a pH paper after each drop of 0.5M \(\ce{NH3(aq)}\) is added and thoroughly mixed. Keep in mind that \(\ce{NH3}\) solution in water is also labeled as \(\ce{NH4OH}\). Similarly, the pH can be decreased to ~0.5 by adding 0.5M \(\ce{HCl(aq)}\) drop by drop under stirring. Determine the pH by using a pH paper after each drop of 0.5M \(\ce{HCl(aq)}\) is added and thoroughly mixed.Thioacetamide reagent is added to the test solution at pH ~0.5 and heated in a boiling water bath to precipitate out group II cations.The precipitates include \(\ce{SnS2}\) (yellow), \(\ce{CdS}\) (yellow-orange), \(\ce{CuS}\) (black-brown), and \(\ce{Bi2S3}\) (black), formed by the following precipitation reactions:\[\ce{SnCl6^{2-}(aq) + 2S^{2-}(aq) <=> 6Cl^{-}(aq) + SnS2(s, yellow)}\nonumber\]\[\ce{Cd^{2+}(aq) + S^{2-}(aq) <=> CdS(s, yellow-orange)}\nonumber\]\[\ce{Cu^{2+}(aq) + S^{2-}(aq) <=> CuS(s, black-brown)}\nonumber\]\[\ce{2Bi^{3+}(aq) + 3S^{2-}(aq) <=> Bi2S3(s, black)}\nonumber\]The overall color of the combined precipitate may vary depending on its composition. Black dominates, i.e., if all the precipitates are present, the color of the mixture will be black, as shown in the figure.The solution is cooled to room temperature by using a room-temperature water bath. Cooling helps the precipitation of \(\ce{CdS}\). A drop of 0.5 M \(\ce{NH3(aq)}\) is added while stirring, which promotes the precipitation of \(\ce{CdS}\) and \(\ce{SnS2}\), as both tend to stay dissolved in a supersaturated solution. The mixture is centrifuged and decanted to separate the supernatant, which is used for the analysis of group III and higher group cations.
The precipitate is washed with 0.1M \(\ce{NH4Cl}\) solution and the washed precipitate is used to separate and confirm individual cations of group II.This page titled 4.1: Precipitation of group II cations is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
4.2: Separation and confirmation of individual ions in group II precipitates
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/4%3A_Group_II_cations/4.2%3A_Separation_and_confirmation_of_individual_ions_in_group_II_precipitates
Among the sulfides of group II, only \(\ce{SnS2}\) is amphoteric and reacts with \(\ce{OH^{-}}\) ions in an alkaline medium to produce \(\ce{[Sn(OH)6]^{2-}}\), a coordination complex anion, and the thiostannate ion \(\ce{[SnS3]^{2-}}\), both of which are water-soluble. 3M \(\ce{KOH}\) is mixed with the precipitates of group II ions and the mixture is heated to dissolve the \(\ce{SnS2}\) through the following equilibrium reaction:\[\ce{3SnS2(s, yellow) + 6OH^{-}(aq) <=> [Sn(OH)6]^{2-}(aq) + 2[SnS3]^{2-}(aq)}\nonumber\]The hot solution is centrifuged and decanted to separate the supernatant, which contains dissolved \(\ce{[Sn(OH)6]^{2-}}\) and \(\ce{[SnS3]^{2-}}\), from the precipitate, which contains the sulfides of the rest of the group II cations, as shown in the figure. A better approach is to separate the supernatant by aspiration using the cotton-plug technique to avoid carrying precipitate into the supernatant.The above reaction is reversible, which means that removing \(\ce{OH^{-}}\) from the supernatant by an acid-base neutralization reaction moves the equilibrium in the reverse direction, re-forming the yellow \(\ce{SnS2}\) precipitate as shown in the figure.\[\ce{[Sn(OH)6]^{2-}(aq) + 2[SnS3]^{2-}(aq) <=> 3SnS2(s, yellow) + 6OH^{-}(aq)}\nonumber\]\[\ce{6HCl(aq) + 6OH^{-}(aq) <=> 6H2O(l) + 6Cl^{-}(aq)}\nonumber\]\[\text{Overall reaction: }\ce{~6HCl(aq) + [Sn(OH)6]^{2-}(aq) + 2[SnS3]^{2-}(aq) <=> 3SnS2(s, yellow) + 6H2O(l) + 6Cl^{-}(aq)}\nonumber\]Some of the sulfide may be lost due to air oxidation of \(\ce{H2S}\) by the following reaction:\[\ce{2H2S(aq) + O2(g) <=> 2S(s, whitish-yellow) + 2H2O(l)}\nonumber\]To compensate for the loss of sulfide, 1M thioacetamide solution is also added along with the 6M \(\ce{HCl}\) to the supernatant, and the mixture is heated to re-form the yellow \(\ce{SnS2}\) precipitate that confirms the presence of \(\ce{Sn^{4+}}\) in the test solution. Note that both \(\ce{S}\) and \(\ce{SnS2}\) are yellow solids.
If 3M \(\ce{KOH}\) solution is added to the mixture to turn it alkaline again, the \(\ce{SnS2}\) precipitate will re-dissolve, confirming \(\ce{Sn^{4+}}\) is present in the test solution. The \(\ce{S}\) particles will not re-dissolve.\(\ce{CdS}\) is the most soluble sulfide among the group II sulfide precipitates. According to Le Chatelier's principle, the removal of products (\(\ce{Cd^{2+}}\) and \(\ce{S^{2-}}\) of the dissolution reaction in this case) drives the reaction forward. \(\ce{CdS}\) can be redissolved by adding 1M \(\ce{HCl}\) to the precipitates after the removal of \(\ce{Sn^{4+}}\). Dissociation of \(\ce{HCl}\) produces \(\ce{H3O^{+}}\) in water, which removes \(\ce{S^{2-}}\) by forming \(\ce{H2S}\), a weak acid. At the same time, \(\ce{Cl^-}\) removes \(\ce{Cd^{2+}}\) by forming the soluble coordination complex anion \(\ce{[CdCl4]^{2-}}\), which is quite stable with \(K_f = 6.3\times10^{2}\):\[\ce{CdS(s, yellow-orange) <=> Cd^{2+}(aq) + S^{2-}(aq)}\nonumber\]\[\ce{4HCl(aq) + 4H2O(l) <=> 4H3O^{+}(aq) + 4Cl^{-}(aq)}\nonumber\]\[\ce{S^{2-}(aq) + 2H3O^{+}(aq) <=> H2S(aq) + 2H2O(l)}\nonumber\]\[\ce{Cd^{2+}(aq) + 4Cl^{-}(aq) <=> [CdCl4]^{2-}(aq)}\nonumber\]\[\text{Overall reaction:} \ce{~CdS(s, yellow-orange) + 4HCl(aq) + 2H2O(l) <=> [CdCl4]^{2-}(aq) + 2H3O^{+}(aq) +H2S(aq)}\nonumber\]The dissolution of \(\ce{CdS}\) is facilitated by heating the reaction mixture. The sulfides of the rest of the group II cations, i.e., \(\ce{CuS}\) and \(\ce{Bi2S3}\), are very insoluble and do not dissolve under these conditions. The solution is centrifuged and decanted or aspirated to separate the supernatant, which contains \(\ce{[CdCl4]^{2-}}\), from the precipitate, which contains \(\ce{CuS}\) and/or \(\ce{Bi2S3}\) if \(\ce{Cu^{2+}}\) and/or \(\ce{Bi^{3+}}\) are present.
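As an illustration, the overall constant for the \(\ce{CdS}\) dissolution can be estimated by combining constants quoted elsewhere in this text (the \(K_{sp}\) of \(\ce{CdS}\) from Table 1, the overall \(K_a\) of \(\ce{H2S}\), and the \(K_f\) of \(\ce{[CdCl4]^{2-}}\) above); this combined value is a sketch, not a figure given by the source:

```python
# Overall K for: CdS(s) + 4Cl- + 2H3O+ <=> [CdCl4]2- + H2S + 2H2O,
# assembled from constants quoted elsewhere in the text (an estimate,
# not a value given by the source).
K_SP_CDS = 7.8e-27    # CdS <=> Cd2+ + S2-
K_F_CDCL4 = 6.3e2     # Cd2+ + 4Cl- <=> [CdCl4]2-
K_A_H2S = 1.3e-20     # H2S + 2H2O <=> 2H3O+ + S2- (overall)

# Dividing by Ka corresponds to running the H2S dissociation in reverse
# (S2- + 2H3O+ -> H2S + 2H2O), which is how the acid removes S2-.
K_overall = K_SP_CDS * K_F_CDCL4 / K_A_H2S
print(f"K = {K_overall:.1e}")
```

K comes out around 4 x 10-4: small, but the large excess of \(\ce{HCl}\) and the heating (which drives off \(\ce{H2S}\)) push the dissolution forward, consistent with the Le Chatelier argument above.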
The precipitate tends to go into the supernatant, so the cotton-plug technique is needed to prevent precipitate from going into the supernatant during the separation, as shown in the figure.All the reactions responsible for the dissolution of \(\ce{CdS}\) are reversible. The addition of \(\ce{HCl}\) dissolves \(\ce{CdS}\) by moving the equilibrium forward, and the removal of \(\ce{HCl}\) moves the equilibrium in the reverse direction to re-form the yellow \(\ce{CdS}\) precipitate. Ammonia (\(\ce{NH3}\)) is a base that removes \(\ce{HCl}\):\[\ce{HCl(aq) + NH3(aq) <=> NH4Cl(aq)}\nonumber\]6M \(\ce{NH3}\) solution is added drop by drop under stirring and tested with red litmus paper till the solution turns alkaline. If a yellow precipitate forms, it is \(\ce{CdS}\), confirming \(\ce{Cd^{2+}}\) was present in the test solution:\[\ce{[CdCl4]^{2-} + 2H3O^{+} + H2S(aq) + 4NH3(aq) <=> CdS(s, yellow-orange)(v) + 4NH4^{+}(aq) + 4Cl^{-} + 2H2O(l)}\nonumber\]If no precipitate forms, add 1M thioacetamide and heat to make up for any loss of \(\ce{S^{2-}}\) in the solution. If a yellow precipitate then forms, it is \(\ce{CdS}\), confirming \(\ce{Cd^{2+}}\) is present in the test solution, as shown in the figure.After removal of \(\ce{Sn^{4+}}\) and \(\ce{Cd^{2+}}\), if there is a precipitate left, it could be \(\ce{CuS}\) and/or \(\ce{Bi2S3}\), which are the least soluble sulfides in group II.
To dissolve \(\ce{CuS}\) and \(\ce{Bi2S3}\), the \(\ce{S^{2-}}\) in the products needs to be removed to a greater extent than in the case of \(\ce{CdS}\) re-dissolution.Nitric acid provides \(\ce{NO3^{-}}\), a strong oxidizing agent that can remove \(\ce{S^{2-}}\) sufficiently to drive the equilibria forward and dissolve \(\ce{CuS}\) and \(\ce{Bi2S3}\).\[\ce{Bi2S3(s, black) <=> 2Bi^{3+}(aq) + 3S^{2-}(aq)}\nonumber\]\[\ce{3S^{2-}(aq) + 2NO3^{-}(aq) + 8H3O^{+}(aq) <=> 3S(s, yellow)(v) + 2NO(g)(^) + 12H2O(l)}\nonumber\]\[\text{Overall reaction:}\ce{~Bi2S3(s, black) + 2NO3^{-}(aq) + 8H3O^{+}(aq) <=> 3S(s, yellow)(v) + 2Bi^{3+}(aq) + 2NO(g)(^) + 12H2O(l)}\nonumber\]\[\ce{3CuS(s, black-brown) <=> 3Cu^{2+}(aq) + 3S^{2-}(aq)}\nonumber\]\[\ce{3S^{2-}(aq) + 2NO3^{-}(aq) + 8H3O^{+}(aq) <=> 3S(s, yellow)(v) + 2NO(g)(^) + 12H2O(l)}\nonumber\]\[\text{Overall reaction:}\ce{~3CuS(s, black-brown) + 2NO3^{-}(aq) + 8H3O^{+}(aq) <=> 3S(s, yellow)(v) + 3Cu^{2+}(aq) + 2NO(g)(^) + 12H2O(l)}\nonumber\]The mixture is heated to enhance the above reactions. The \(\ce{S^{2-}}\) is oxidized to solid, light-yellow sulfur particles. Brown fumes are observed over the solution as a result of air oxidation of the nitric oxide (\(\ce{NO}\)) that evaporates out of the solution, as shown in the figure:\[\ce{2NO(g) + O2(g) <=> 2NO2(g, red-brown)}\nonumber\]Removal of \(\ce{NO}\) and \(\ce{S^{2-}}\) from the products drives the reaction in the forward direction based on Le Chatelier's principle.The solid sulfur precipitate is removed by centrifugation followed by decantation. The supernatant is acidic and appears light blue if copper ions are present, as shown in the figure. If the solution is made alkaline, \(\ce{Cu^{2+}}\) and \(\ce{Bi^{3+}}\) form solid hydroxides.
However, aqueous ammonia (\(\ce{NH3}\)) selectively precipitates out \(\ce{Bi(OH)3}\), while keeping copper dissolved as the coordination complex ion \(\ce{[Cu(NH3)4]^{2+}}\):\[\ce{Bi^{3+}(aq) + 3NH3(aq) + 3H2O(l) <=> Bi(OH)3(s, white)(v) + 3NH4^{+}(aq)}\quad K = 3.3\times10^{39}\nonumber\]\[\ce{Cu^{2+}(aq) + 4NH3(aq) <=> [Cu(NH3)4]^{2+}(aq, blue)}\quad K = 3.8\times10^{12}\nonumber\]The solution is made alkaline by adding 6M \(\ce{NH3}\) drop by drop and tested using red litmus paper. Excess \(\ce{NH3}\) solution is added to make sure that any residual \(\ce{Cd^{2+}}\) is also kept dissolved, as \(\ce{[Cd(NH3)4]^{2+}}\). If the supernatant turns blue upon making it alkaline with ammonia, it confirms \(\ce{Cu^{2+}}\) is present in the test sample, as shown in the figure. The presence of residual \(\ce{Cd^{2+}}\) does not interfere because it forms a colorless \(\ce{[Cd(NH3)4]^{2+}}\) ion.The mixture is centrifuged and decanted to separate the white precipitate of \(\ce{Bi(OH)3}\); but, if the ammonia addition was not sufficient, white \(\ce{Cd(OH)2}\) may also form from any residual \(\ce{Cd^{2+}}\) ions:\[\ce{Cd^{2+}(aq) + 2NH3(aq) + 2H2O(l)<=> Cd(OH)2(s, white)(v) + 2NH4^{+}}\nonumber\]The precipitate is resuspended in 6M \(\ce{NH3}\) to redissolve \(\ce{Cd(OH)2}\), if any is present. \(\ce{Bi(OH)3}\) precipitate does not dissolve in 6M \(\ce{NH3}\). If the white precipitate persists after washing with 6M \(\ce{NH3}\), it confirms \(\ce{Bi^{3+}}\) is present in the test solution, as shown in the figure.This page titled 4.2: Separation and confirmation of individual ions in group II precipitates is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
4.3: Procedure, flowchart, and datasheets for separation and confirmation of group II cations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/4%3A_Group_II_cations/4.3%3A_Procedure%2C_flowchart%2C_and_datasheets_for_separation_and_confirmation_of_group_II_cations
| Chemical | Hazard |
|---|---|
| 0.1M ammonium chloride (\(\ce{NH4Cl}\)) | Toxic and irritant |
| 0.1M bismuth nitrate in 0.3M \(\ce{HNO3}\) | Toxic, irritant, and oxidant |
| 0.1M cadmium chloride in 0.3M \(\ce{HNO3}\) | Toxic and suspected carcinogen |
| 0.1M copper(II) nitrate in 0.3M \(\ce{HNO3}\) | Toxic, irritant, and oxidant |
| 0.1M tin(IV) chloride in 0.3M \(\ce{HNO3}\) | Corrosive and irritant |

This page titled 4.3: Procedure, flowchart, and datasheets for separation and confirmation of group II cations is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
5.1: Separation of group III cations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/5%3A_Group_III_cations/5.1%3A_Separation_of_group_III_cations
Group II cations form sulfides that have very low solubility. After group II cations are removed under a low concentration of \(\ce{S^{2-}}\) in an acidic medium, the solution is made alkaline. Remember that, like sulfides, hydroxides are also insoluble according to insoluble ions rule #1 of the solubility guidelines described in chapter 1: "Hydroxides (\(\ce{OH^{-}}\)) and sulfides (\(\ce{S^{2-}}\)) are insoluble except when the cation is a heavy alkaline earth metal ion, i.e., \(\ce{Ca^{2+}}\), \(\ce{Ba^{2+}}\), and \(\ce{Sr^{2+}}\), an alkali metal ion, or the ammonium ion."Table 1 lists the solubility product constants of the hydroxides of group III & IV cations at 25 oC, along with the minimum hydroxide (\(\ce{OH^{-}}\)) concentration, and the corresponding pH, needed to start precipitation from the 0.1M cation solutions that may be present in the test solution at this stage. It can be observed that the ions listed in Table 1 will not precipitate as hydroxides during the precipitation of group II cations under the acidic pH range of 0.5 to 1.\(\ce{Fe^{3+}}\) forms the most insoluble hydroxide, but it is reduced to \(\ce{Fe^{2+}}\) by \(\ce{H2S}\) during precipitation of group II cations:\[\ce{2Fe^{3+}(aq) + S^{2-}(aq) <=> 2Fe^{2+}(aq) + S(s)}\nonumber\]\(\ce{Fe^{3+}}\) may be present only if precipitation of group III starts from a fresh sample that has not been subjected to group II separation.It can be observed from Table 1 that if the pH of the sample solution is increased to a range of 7 to 10, \(\ce{Fe^{3+}}\), \(\ce{Cr^{3+}}\), \(\ce{Ni^{2+}}\), and \(\ce{Fe^{2+}}\) will precipitate as \(\ce{Fe(OH)3(s, rusty)}\), \(\ce{Cr(OH)3(s, gray-green)}\), \(\ce{Ni(OH)2(s, green)}\), and \(\ce{Fe(OH)2(s, green)}\), leaving behind in the solution the rest of the ions that may still be present at this stage.
Group III comprises \(\ce{Fe^{3+}}\), \(\ce{Cr^{3+}}\), \(\ce{Ni^{2+}}\), and \(\ce{Fe^{2+}}\) ions.

| Ion | Salt | \(K_{sp}\) at 25 oC | Minimum [\(\ce{OH^{-}}\)] and pH needed to precipitate |
|---|---|---|---|
| \(\ce{Fe^{3+}}\) | \(\ce{Fe(OH)3}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Fe}^{3+}\right]\left[\mathrm{OH}^{-}\right]^{3}=2.8 \times 10^{-39}\) | \(\left[\mathrm{OH}^{-}\right]=3.0 \times 10^{-13}~M=\mathrm{pH}~1.5\) |
| \(\ce{Cr^{3+}}\) | \(\ce{Cr(OH)3}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Cr}^{3+}\right]\left[\mathrm{OH}^{-}\right]^{3}=1.0 \times 10^{-30}\) | \(\left[\mathrm{OH}^{-}\right]=2.2 \times 10^{-10}~M=\mathrm{pH}~4.3\) |
| \(\ce{Ni^{2+}}\) | \(\ce{Ni(OH)2}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Ni}^{2+}\right]\left[\mathrm{OH}^{-}\right]^{2}=5.5 \times 10^{-16}\) | \(\left[\mathrm{OH}^{-}\right]=7.4 \times 10^{-8}~M=\mathrm{pH}~6.9\) |
| \(\ce{Fe^{2+}}\) | \(\ce{Fe(OH)2}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Fe}^{2+}\right]\left[\mathrm{OH}^{-}\right]^{2}=4.9 \times 10^{-17}\) | \(\left[\mathrm{OH}^{-}\right]=2.2 \times 10^{-8}~M=\mathrm{pH}~6.3\) |
| \(\ce{Ca^{2+}}\) | \(\ce{Ca(OH)2}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Ca}^{2+}\right]\left[\mathrm{OH}^{-}\right]^{2}=5.0 \times 10^{-6}\) | \(\left[\mathrm{OH}^{-}\right]=7.1 \times 10^{-3}~M=\mathrm{pH}~11.8\) |
| \(\ce{Ba^{2+}}\) | \(\ce{Ba(OH)2}\) | \(\mathrm{K}_{\mathrm{sp}}=\left[\mathrm{Ba}^{2+}\right]\left[\mathrm{OH}^{-}\right]^{2}=2.6 \times 10^{-4}\) | \(\left[\mathrm{OH}^{-}\right]=5.1 \times 10^{-2}~M=\mathrm{pH}~12.7\) |

Buffers, which resist change in pH, are employed in such situations, where the pH needs to be maintained in a narrow range. Buffers are a mixture of a weak acid and its conjugate base or a mixture of a weak base and its conjugate acid. Ammonia (\(\ce{NH3}\)) is a weak base, and the ammonium ion (\(\ce{NH4^{+}}\)) is its conjugate acid.The \(\ce{NH3}\)/\(\ce{NH4^{+}}\) pair is a suitable buffer that can maintain a pH of around 9. The buffer is prepared by adding 2 drops of 6M \(\ce{HCl}\) into 15 drops of the sample and then adding 6M \(\ce{NH3}\) drop by drop to neutralize the acid.
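The threshold entries in Table 1 follow directly from the \(K_{sp}\) expressions; for the two trivalent ions (a sketch assuming 0.1 M cation, as in the text):

```python
import math

# Minimum [OH-] and pH to start precipitating a 0.1 M M3+ ion as M(OH)3,
# from Ksp = [M3+][OH-]^3.
CATION_CONC = 0.1  # M

def oh_min_and_ph(ksp):
    """Return (minimum [OH-] in mol/L, corresponding pH)."""
    oh = (ksp / CATION_CONC) ** (1 / 3)
    pH = 14 + math.log10(oh)   # pH = 14 - pOH
    return oh, pH

oh_fe3, ph_fe3 = oh_min_and_ph(2.8e-39)   # Fe(OH)3: ~3.0e-13 M, pH ~1.5
oh_cr3, ph_cr3 = oh_min_and_ph(1.0e-30)   # Cr(OH)3: ~2.2e-10 M, pH ~4.3
print(f"Fe3+: {oh_fe3:.1e} M, pH {ph_fe3:.1f}; Cr3+: {oh_cr3:.1e} M, pH {ph_cr3:.1f}")
```

Both thresholds sit well below the pH 7 to 10 range used for group III precipitation, while the \(\ce{Ca^{2+}}\) and \(\ce{Ba^{2+}}\) thresholds sit well above it, which is what keeps those ions in solution.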
\[\ce{HCl(aq) + H2O(l) -> H3O^{+}(aq) + Cl^{-}(aq)}\nonumber\]\[\ce{NH3(aq) + H3O^{+}(aq) -> NH4^{+}(aq) + H2O(l)}\nonumber\]\[\text{Overall reaction:} \ce{~HCl(aq) + NH3(aq) -> NH4^{+}(aq) + Cl^{-}(aq)}\nonumber\]Then 5 more drops of 6M \(\ce{NH3}\) are added after the \(\ce{HCl}\) has been neutralized to make a mixture of \(\ce{NH3}\) and \(\ce{NH4^{+}}\) that maintains pH ~9 and [\(\ce{OH^{-}}\)] at around 1 x 10-5 M.The group III cations, except \(\ce{Ni^{2+}}\), precipitate at this stage as hydroxides, as shown in the figure:\[\ce{Fe^{3+}(aq) + 3OH^{-}(aq) -> Fe(OH)3(s, reddish-brown ~or ~rusty)(v),}\nonumber\]\[\ce{Cr^{3+}(aq) + 3OH^{-}(aq) -> Cr(OH)3(s, gray-green)(v),}\nonumber\]\[\ce{Fe^{2+}(aq) + 2OH^{-}(aq) -> Fe(OH)2(s, green)(v).}\nonumber\]The concentration of \(\ce{Fe^{2+}}\), i.e., the ion with the most soluble hydroxide among the group III cations, is reduced by more than 99.99%, i.e., from 0.1M to 4.9 x 10-7 M, when the pH is increased to 9 and the \(\ce{OH^{-}}\) concentration is increased to 1 x 10-5 M:\[\mathrm{Fe}^{2+}=\frac{\mathrm{K}_{\mathrm{sp}}}{\left[\mathrm{OH}^{-}\right]^{2}}=\frac{4.9 \times 10^{-17}}{\left(1 \times 10^{-5}\right)^{2}}=4.9 \times 10^{-7} \mathrm{~M}\nonumber\]Nickel ion is not precipitated at this stage as it forms the soluble coordination cation \(\ce{[Ni(NH3)6]^{2+}}\) with ammonia:\[\ce{Ni^{2+}(aq, green) + 6NH3(aq) <=> [Ni(NH3)6]^{2+}(aq, blue)}\nonumber\]Therefore, \(\ce{S^{2-}}\) is introduced by adding thioacetamide and heating the mixture in a boiling water bath.
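The residual \(\ce{Fe^{2+}}\) figure quoted above can be checked directly (using the \(K_{sp}\) of \(\ce{Fe(OH)2}\) from Table 1):

```python
# Residual Fe2+ left in solution once the NH3/NH4+ buffer holds
# [OH-] at ~1e-5 M (pH ~9), using Ksp of Fe(OH)2 from Table 1.
K_SP_FEOH2 = 4.9e-17
OH_CONC = 1e-5       # M at pH ~9
INITIAL_FE2 = 0.1    # M

fe2_residual = K_SP_FEOH2 / OH_CONC ** 2          # ~4.9e-7 M
percent_left = fe2_residual / INITIAL_FE2 * 100   # ~0.0005 %
print(f"{fe2_residual:.1e} M, {percent_left:.4f} % left in solution")
```

So even for the most soluble group III hydroxide, more than 99.99% of the ion is precipitated at pH ~9.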
Decomposition of thioacetamide produces ~0.01M \(\ce{H2S}\):\[\ce{CH3CSNH2(aq) + 2H2O(l) <=> CH3COO^{-}(aq) + NH4^{+}(aq) + H2S(aq)}\nonumber\]At pH ~9, where [\(\ce{H3O^{+}}\)] is only ~1 x 10-9 M, the dissociation of \(\ce{H2S}\) shifts far to the right, so the ~0.01M \(\ce{H2S}\) produces a much higher [\(\ce{S^{2-}}\)] (~1.3 x 10-4 M) than in the acidic medium used for group II:\[\ce{H2S(aq) + 2H2O(l) <=>2H3O^{+}(aq) + S^{2-}(aq)}\quad K_a = \frac{\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{2}\left[\mathrm{~S}^{2-}\right]}{\left[\mathrm{H}_{2} \mathrm{~S}\right]}=1.3 \times 10^{-20}\nonumber\]The ammonia complex of nickel, i.e., \(\ce{[Ni(NH3)6]^{2+}}\), precipitates out as \(\ce{NiS}\), and, at the same time, \(\ce{Fe(OH)3}\) and \(\ce{Fe(OH)2}\) also convert to \(\ce{Fe2S3}\) and \(\ce{FeS}\):\[\ce{Ni(NH3)6^{2+}(aq, blue) + S^{2-}(aq) <=> NiS(s, black) + 6NH3(aq)}\nonumber\]\[\ce{2Fe(OH)3(s, reddish-brown) + 3S^{2-}(aq) <=> Fe2S3(s, yellow-green) + 6OH^{-}(aq)}\nonumber\]\[\ce{Fe(OH)2(s, green) + S^{2-}(aq) <=> FeS(s, black) + 2OH^{-}(aq)}\nonumber\]Chromium remains as \(\ce{Cr(OH)3}\) precipitate because chromium sulfide is unstable in water.Group III precipitates, i.e., \(\ce{Cr(OH)3(s, gray-green)}\), \(\ce{NiS(s, black)}\), \(\ce{Fe2S3(s, yellow-green)}\), and \(\ce{FeS(s, black)}\), are separated from the mixture, and the rest of the ions, i.e., \(\ce{Ca^{2+}}\), \(\ce{Ba^{2+}}\), \(\ce{Na^{+}}\), \(\ce{K^{+}}\), etc., remain dissolved in the supernatant, as shown in the figure. The color of the precipitate does not give a clear indication of which ions are present at this stage, as several species of different colors may be mixed.This page titled 5.1: Separation of group III cations is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
5.2: Separation and confirmation of individual ions in group III precipitates
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/5%3A_Group_III_cations/5.2%3A_Separation_and_confirmation_if_individual_ions_in_group_III_precipitates
Acids like \(\ce{HCl}\) dissolve the precipitates of group III cations, i.e., \(\ce{Cr(OH)3(s, gray-green)}\), \(\ce{Fe2S3(s, yellow-green)}\), and \(\ce{FeS(s, black)}\), by the following series of reactions: \[\ce{Cr(OH)3(s, gray-green) <=> Cr^{3+}(aq) + 3OH^{-}(aq)}\nonumber\]\[\ce{FeS(s, black) <=> Fe^{2+}(aq) + S^{2-}(aq)}\nonumber\]\[\ce{Fe2S3(s, yellow-green) <=> 2Fe^{3+}(aq) + 3S^{2-}(aq)}\nonumber\]\[\ce{2Fe^{3+}(aq) + 2S^{2-}(aq) + 2H3O^{+} <=> 2Fe^{2+}(aq) + H2S(aq) + S(s) + 2H2O(l)}\nonumber\]\[\ce{3OH^{-}(aq) + 3H3O^{+} <=> 6H2O(l)}\nonumber\]\[\ce{2S^{2-}(aq) + 4H3O^{+} <=> 2H2S(aq) + 4H2O(l)}\nonumber\]\[\text{Overall reaction:}\ce{~Cr(OH)3(s, gray-green) + FeS(s, black) + Fe2S3(s, yellow-green) + 9H3O^{+} <=> Cr^{3+}(aq) + 3Fe^{2+}(aq) + 3H2S(aq) + S(s) + 12H2O(l)}\nonumber\]Removal of the basic \(\ce{OH^{-}}\) and \(\ce{S^{2-}}\) ions from the products by acid-base neutralization drives these reactions in the forward direction. \(\ce{Fe^{3+}}\) is reduced to \(\ce{Fe^{2+}}\) by \(\ce{S^{2-}}\) under the acidic conditions.The solubility of \(\ce{NiS}\) is very low, and it does not dissolve in a non-oxidizing acid like \(\ce{HCl}\). Therefore, the supernatant separated at this stage contains \(\ce{Cr^{3+}}\) and \(\ce{Fe^{2+}}\), and the precipitate, if present, is \(\ce{NiS}\), as shown in the figure.Aqua regia, i.e., a mixture of \(\ce{HCl}\) and \(\ce{HNO3}\), can dissolve the \(\ce{NiS}\) precipitate by removing \(\ce{Ni^{2+}}\) as the soluble coordination anion \(\ce{[NiCl4]^{2-}}\) and, at the same time, removing \(\ce{S^{2-}}\) by oxidizing it, using \(\ce{NO3^{-}}\) as the oxidizing agent in the acidic medium.\[\ce{NiS(s, black) <=> Ni^{2+}(aq) + S^{2-}(aq)}\nonumber\]\[\ce{Ni^{2+}(aq) + 4Cl^{-}(aq) <=> [NiCl4]^{2-}(aq)}\nonumber\]\[\ce{3S^{2-}(aq) + 2NO3^{-}(aq) + 8H3O^{+}(aq) <=> 3S(s)(v) + 2NO(g)(^) + 12H2O(l)}\nonumber\]Nitric oxide (\(\ce{NO}\)) evaporates from the liquid mixture, further driving the equilibrium in the forward direction.
Most of the \(\ce{NO}\) is oxidized to nitrogen dioxide (\(\ce{NO2}\)), which forms brown fumes over the liquid mixture, as shown in the figure:\[\ce{2NO(g) + O2(g) <=> 2NO2(g, red-brown)}\nonumber\]The S precipitates are removed by centrifugation and decantation. The \(\ce{[NiCl4]^{2-}}\) coordination anion is converted to the \(\ce{[Ni(NH3)6]^{2+}}\) coordination cation by making the solution alkaline by ammonia addition:\[\ce{[NiCl4]^{2-}(aq) + 6NH3(aq) <=> [Ni(NH3)6]^{2+}(aq, blue) + 4Cl^{-}(aq)}\nonumber\]Dimethylglyoxime, \(\ce{(CH3)2C2(NOH)2}\), is a ligand that is capable of forming two coordinate covalent bonds with transition metal ions. Ligands like \(\ce{Cl^{-}}\), \(\ce{NH3}\), \(\ce{H2O}\), etc., that form one coordinate covalent bond with transition metals are called monodentate, and ligands like dimethylglyoxime that form two coordinate covalent bonds are called bidentate. Ligands that can form two or more coordinate covalent bonds are called chelates or chelating agents. Coordination complexes with chelates are usually more stable, i.e., have higher formation constants, than those with monodentate ligands.The addition of dimethylglyoxime to the liquid mixture containing \(\ce{[Ni(NH3)6]^{2+}}\) in an alkaline medium forms an insoluble coordination compound, \(\ce{NiC8H14N4O4}\), that separates as a red precipitate, as shown in the figure.The structure of the dimethylglyoxime chelating agent and its coordination complex with nickel is illustrated in the figure below.The formation of a red precipitate upon the addition of dimethylglyoxime at this stage confirms the presence of nickel ion in the test sample.The supernatant containing \(\ce{Fe^{2+}}\) and \(\ce{Cr^{3+}}\) ions is separated from the \(\ce{NiS}\) precipitate after the addition of \(\ce{HCl}\) to the precipitates of group III cations. The supernatant is made alkaline to pH 9 to 10 by adding ammonia solution. A pH paper is used to determine the pH.
Hydrogen peroxide (\(\ce{H2O2}\)) is added as an oxidizing agent to the alkaline solution. \(\ce{Fe^{2+}}\) is oxidized to \(\ce{Fe^{3+}}\), which precipitates out as the rusty-brown solid \(\ce{Fe(OH)3}\), and \(\ce{Cr^{3+}}\) is oxidized to the soluble chromate ion (\(\ce{CrO4^{2-}}\)) under this condition:\[\ce{2Fe^{2+}(aq) + H2O2(aq) <=> 2Fe^{3+}(aq) + 2OH^{-}(aq)}\nonumber\]\[\ce{Fe^{3+}(aq) + 3OH^{-}(aq) <=> Fe(OH)3(s, rusty-brown)(v)}\nonumber\]\[\ce{2Cr^{3+}(aq) + 3H2O2(aq) + 10OH^{-}(aq) <=> 2CrO4^{2-}(aq) + 8H2O(l)}\nonumber\]The mixture is centrifuged to separate the supernatant containing \(\ce{CrO4^{2-}}\) ions from the rusty-brown \(\ce{Fe(OH)3}\) precipitate, as shown in .The \(\ce{Fe(OH)3}\) precipitate is dissolved in \(\ce{HCl}\) solution:\[\ce{Fe(OH)3(s, rusty-brown) <=> Fe^{3+}(aq) + 3OH^{-}(aq)}\nonumber\]\[\ce{3OH^{-}(aq) + 3H3O^{+}(aq) <=> 6H2O(l)}\nonumber\]Thiocyanate (\(\ce{SCN^{-}}\)) is a ligand that forms the deep-red coordination complex ion \(\ce{[FeSCN]^{2+}}\) by reacting with \(\ce{Fe^{3+}}\), as shown in .\[\ce{Fe^{3+}(aq) + SCN^{-}(aq) <=> [FeSCN]^{2+}(aq, deep-red)}\nonumber\]The supernatant turning deep red upon the addition of thiocyanate confirms that iron ions are present in the test sample. The supernatant obtained after removal of the \(\ce{Fe(OH)3}\) precipitate contains \(\ce{CrO4^{2-}}\) ions in an alkaline medium. The solution is made acidic by the addition of nitric acid, whereupon \(\ce{CrO4^{2-}}\) converts to the dichromate ion (\(\ce{Cr2O7^{2-}}\)):\[\ce{2CrO4^{2-}(aq) + 2H3O^{+}(aq) <=> Cr2O7^{2-}(aq) + 3H2O(l)} \nonumber\]\(\ce{H2O2}\) is a reducing agent in an acidic medium.
\(\ce{H2O2}\) is added to the acidic mixture to reduce \(\ce{Cr2O7^{2-}}\) to \(\ce{Cr^{3+}}\) through the following reactions:\[\ce{2Cr2O7^{2-}(aq) + 8H2O2(aq) + 4H3O^{+}(aq) <=> 4CrO5(aq, dark-blue) + 14H2O(l)} \nonumber\]\[\ce{4CrO5(aq) + 12H3O^{+}(aq) <=> 4Cr^{3+}(aq, light-blue) + 7O2(g)(^) + 18H2O(l)} \nonumber\]Oxygen evolves from the mixture and can be observed as gas bubbles in the solution. The \(\ce{CrO5}\) intermediate is dark blue; one of its oxygens is in the -2 oxidation state and the other four are in the -1 (peroxide) oxidation state. \(\ce{CrO5}\) is unstable in solution and decomposes to \(\ce{Cr^{3+}}\), which is light blue. Residual \(\ce{H2O2}\) is destroyed by heating the mixture in a boiling water bath, which can be observed through oxygen gas bubbling out. Keep in mind that the destruction of \(\ce{H2O2}\) is significantly slower in an acidic medium than in an alkaline one, so this step may take longer. The solution is then changed from acidic to alkaline by adding 6 M \(\ce{NaOH}\) to the mixture, and \(\ce{Cr^{3+}}\) precipitates out as the gray-green solid \(\ce{Cr(OH)3}\):\[\ce{Cr^{3+}(aq) + 3OH^{-}(aq) <=> Cr(OH)3(s, gray-green)(v)} \nonumber\]The formation of a gray-green precipitate at this stage confirms \(\ce{Cr^{3+}}\) is present in the test sample, as shown in .This page titled 5.2: Separation and confirmation of individual ions in group III precipitates is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
6.1: Separating group IV cations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/6%3A_Group_IV_and_Group_V_cations/6.1%3A_Separating_group_IV_cations
After removing the insoluble chlorides as group I and the insoluble sulfides as groups II and III, the cations that may still be present in the solution from the initial mixture include \(\ce{Ca^{2+}}\), \(\ce{Ba^{2+}}\), \(\ce{Na^{+}}\), and \(\ce{K^{+}}\). Group IV comprises \(\ce{Ca^{2+}}\) and \(\ce{Ba^{2+}}\), which are separated from the other two ions based on insoluble ions rule #2 described in chapter 1, which states “Carbonates (\(\ce{CO3^{2-}}\)), phosphates (\(\ce{PO4^{3-}}\)), and oxide (\(\ce{O^{2-}}\)) are insoluble with the exception of alkali metals and ammonia.” The carbonate ion is introduced as ammonium carbonate (\(\ce{(NH4)2CO3}\)):\[\ce{(NH4)2CO3(s) -> 2NH4^{+}(aq) + CO3^{2-}(aq)}\nonumber\]The addition of \(\ce{(NH4)2CO3}\) solution causes precipitation of \(\ce{Ca^{2+}}\) and \(\ce{Ba^{2+}}\) as the white precipitates \(\ce{CaCO3}\) and \(\ce{BaCO3}\), as shown in :\[\ce{Ca^{2+}(aq) + CO3^{2-}(aq) <=> CaCO3(s, white)(v)}\quad K_{sp} = 6\times10^{-9}\nonumber\]\[\ce{Ba^{2+}(aq) + CO3^{2-}(aq) <=> BaCO3(s, white)(v)}\quad K_{sp} = 3\times10^{-9}\nonumber\]The precipitates of group IV cations are separated by centrifugation and decantation. The precipitate is used to separate and confirm group IV cations, and the supernatant is kept for the analysis of group V cations.This page titled 6.1: Separating group IV cations is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
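The carbonate precipitation step above can be checked with a quick ion-product calculation: a salt precipitates whenever the ion product \(Q = [\ce{M^{2+}}][\ce{CO3^{2-}}]\) exceeds its \(K_{sp}\). A small sketch, using the two \(K_{sp}\) values quoted above and illustrative (not prescribed) concentrations:

```python
# Ion-product test for carbonate precipitation.
# Ksp values are those quoted in the text; concentrations are illustrative.
KSP = {"CaCO3": 6e-9, "BaCO3": 3e-9}

def will_precipitate(salt, metal_conc, carbonate_conc):
    """True if the ion product Q = [M2+][CO3^2-] exceeds Ksp for the salt."""
    q = metal_conc * carbonate_conc
    return q > KSP[salt]

# A modest carbonate addition (0.01 M) to 0.01 M metal ion gives Q = 1e-4,
# several orders of magnitude above both Ksp values, so both carbonates
# precipitate essentially completely.
both = [will_precipitate(s, 0.01, 0.01) for s in ("CaCO3", "BaCO3")]
```

Because the two \(K_{sp}\) values are so close, carbonate cannot distinguish \(\ce{Ca^{2+}}\) from \(\ce{Ba^{2+}}\); that separation is done later with chromate.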
6.2: Separation and confirmation of individual ions in group IV precipitates and group V mixture
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/6%3A_Group_IV_and_Group_V_cations/6.2%3A_Separation_and_confirmation_of_individual_ions_in_group_IV_precipitates_and_group_V_mixture
The precipitates of group IV cations, i.e., \(\ce{CaCO3}\) and \(\ce{BaCO3}\), are soluble in an acidic medium. In these experiments, acetic acid (\(\ce{CH3COOH}\)) is used to make the solution acidic, which results in the dissolution of \(\ce{CaCO3}\) and \(\ce{BaCO3}\):\[\ce{4CH3COOH(aq) + 4H2O(l) <=> 4CH3COO^{-}(aq) + 4H3O^{+}(aq)}\nonumber\]\[\ce{CaCO3(s, white) <=> Ca^{2+}(aq) + CO3^{2-}(aq)}\nonumber\]\[\ce{BaCO3(s, white) <=> Ba^{2+}(aq) + CO3^{2-}(aq)}\nonumber\]\[\ce{2CO3^{2-}(aq) + 4H3O^{+}(aq) <=> 2H2CO3(aq) +4H2O(l)}\nonumber\]\[\ce{2H2CO3(aq) <=> 2H2O(l) + 2CO2(g)(^)}\nonumber\]\[\text{Overall reaction: }\ce{~4CH3COOH(aq) + CaCO3(s, white) + BaCO3(s, white) <=> 4CH3COO^{-}(aq) + Ca^{2+} + Ba^{2+} + 2H2O(l) + 2CO2(g)(^)}\nonumber\]The \(\ce{CO3^{2-}}\) ion is a weak base that reacts with \(\ce{H3O^{+}}\) and forms carbonic acid (\(\ce{H2CO3}\)). Carbonic acid is unstable in water and decomposes into carbon dioxide and water. Carbon dioxide leaves the solution, which drives the reactions forward, as shown in .The acetate ion (\(\ce{CH3COO^{-}}\)) produced in the above reactions is the conjugate base of the weak acid acetic acid (\(\ce{CH3COOH}\)). More acetic acid is added to the solution to make a \(\ce{CH3COOH}\)/\(\ce{CH3COO^{-}}\) buffer that can maintain the \(pH\) at ~5. Potassium chromate (\(\ce{K2CrO4}\)) solution is added at this stage, which introduces the chromate ion \(\ce{CrO4^{2-}}\):\[\ce{K2CrO4(s) <=> 2K^{+}(aq) + CrO4^{2-}(aq)}\nonumber\]Although both calcium and barium ions form insoluble salts with the chromate ion (\(\ce{CaCrO4}\), \(K_{sp}\) = 7.1 x 10-4, and \(\ce{BaCrO4}\), \(K_{sp}\) = 1.8 x 10-10), \(\ce{BaCrO4}\) is far less soluble and can be selectively precipitated by controlling the \(\ce{CrO4^{2-}}\) concentration. 
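The chromate concentration window that precipitates \(\ce{Ba^{2+}}\) while leaving \(\ce{Ca^{2+}}\) in solution can be estimated directly from the two \(K_{sp}\) values just quoted. A sketch, assuming for illustration that each cation is present at 0.01 M:

```python
# Chromate window for selective BaCrO4 precipitation.
# Ksp values are those quoted in the text; the 0.01 M cation
# concentration is an assumed, illustrative value.
KSP_BACRO4 = 1.8e-10
KSP_CACRO4 = 7.1e-4
CATION = 0.01  # M of each cation, assumed

lower = KSP_BACRO4 / CATION  # above this [CrO4^2-], BaCrO4 precipitates
upper = KSP_CACRO4 / CATION  # below this [CrO4^2-], CaCrO4 stays dissolved

# Any chromate concentration between ~1.8e-8 M and ~7.1e-2 M removes Ba2+
# while leaving Ca2+ in solution -- a window of about six orders of
# magnitude, which the pH ~5 acetate buffer keeps the system inside.
window_span = upper / lower
```

This is why the buffered pH matters: it fixes the chromate/dichromate equilibrium so that \([\ce{CrO4^{2-}}]\) stays inside this window.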
The chromate ion is involved in the following \(pH\)-dependent equilibrium:\[\ce{2CrO4^{2-}(aq) + 2H3O^{+}(aq) <=> Cr2O7^{2-}(aq) + 3H2O(l)}\quad K = 4.0\times10^{14}\nonumber\]At \(pH\) ~5 in a \(\ce{CH3COOH}\)/\(\ce{CH3COO^{-}}\) buffer, the concentration of \(\ce{CrO4^{2-}}\) is high enough to selectively precipitate barium ions, leaving calcium ions in the solution:\[\ce{Ba^{2+}(aq) + CrO4^{2-}(aq) <=> BaCrO4(s, light~yellow)(v)}\nonumber\]The mixture is centrifuged and decanted to separate the \(\ce{BaCrO4}\) precipitate from the supernatant containing \(\ce{Ca^{2+}}\) ions, as shown in . Although the formation of a light yellow precipitate (\(\ce{BaCrO4}\)) at this stage is a strong indication that \(\ce{Ba^{2+}}\) is present in the test sample, \(\ce{Ca^{2+}}\) may also form a light yellow precipitate (\(\ce{CaCrO4}\)), particularly if the pH is higher than the recommended value of 5. Group IV and V cations are most often confirmed by the flame test; the flame test results of group IV cations are shown in . The presence of barium is further confirmed by a flame test. For this purpose, the \(\ce{BaCrO4}\) precipitate is treated with 12 M \(\ce{HCl}\). The concentrated \(\ce{HCl}\) removes \(\ce{CrO4^{2-}}\) by converting it to dichromate (\(\ce{Cr2O7^{2-}}\)), resulting in the dissolution of \(\ce{BaCrO4}\):\[\ce{2BaCrO4(s, light~yellow) <=> 2Ba^{2+}(aq) + 2CrO4^{2-}(aq)}\nonumber\]\[\ce{2CrO4^{2-}(aq) + 2H3O^{+}(aq) <=> Cr2O7^{2-}(aq) + 3H2O(l)}\nonumber\]A flame test is applied to the solution. \(\ce{Ba^{2+}}\) imparts a characteristic yellow-green color to the flame. 
If the yellow-green color is observed in the flame test, it confirms \(\ce{Ba^{2+}}\) is present in the test sample. The \(\ce{Ca^{2+}}\) present in the supernatant is precipitated by adding the oxalate ion (\(\ce{C2O4^{2-}}\)):\[\ce{Ca^{2+}(aq) + C2O4^{2-}(aq) <=> CaC2O4(s, white)(v)}\nonumber\]The formation of a white precipitate, i.e., \(\ce{CaC2O4}\) shown in , is a strong indication that \(\ce{Ca^{2+}}\) is present in the test sample. However, if \(\ce{Ba^{2+}}\) was not fully separated earlier, it also forms a white precipitate, \(\ce{BaC2O4}\). The presence of \(\ce{Ca^{2+}}\) is further verified by the flame test. For this purpose, the precipitate is dissolved in 6 M \(\ce{HCl}\):\[\ce{CaC2O4(s, white) <=> Ca^{2+}(aq) + C2O4^{2-}(aq)}\nonumber\]\[\ce{C2O4^{2-}(aq) + 2H3O^{+}(aq) <=> H2C2O4(aq) + 2H2O(l)}\nonumber\]A strong acid like \(\ce{HCl}\) increases the \(\ce{H3O^{+}}\) ion concentration, which drives the above reaction forward, in accordance with Le Chatelier's principle. The flame test is applied to the solution. If \(\ce{Ca^{2+}}\) is present in the solution, it imparts a characteristic brick-red color to the flame, as shown in Fig. 6.2.3. Observation of the brick-red color in the flame test confirms the presence of \(\ce{Ca^{2+}}\) in the test sample. The flame color changes to light green when seen through cobalt blue glass. Group V cations, i.e., alkali metal cations such as \(\ce{Na^{+}}\) and \(\ce{K^{+}}\), form soluble ionic compounds. Separation of alkali metal cations by selective precipitation is not possible using commonly available reagents, so group V cations are not separated in these analyses. However, alkali metal cations impart characteristic colors to the flame that help in their confirmation, as shown in . The supernatant after separating the group IV precipitate is concentrated by heating the solution to evaporate the solvent. 
A flame test is applied to the concentrated solution. Lithium imparts a carmine-red, sodium an intense yellow, and potassium a lilac color to the flame.This page titled 6.2: Separation and confirmation of individual ions in group IV precipitates and group V mixture is shared under a Public Domain license and was authored, remixed, and/or curated by Muhammad Arif Malik.
Note for instructors and acknowledgments
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)/00%3A_Front_matter/Note_for_instructors_and_acknowledgments
The author thanks the following persons for their help in preparing this book: Suggested plan for executing the experiments: The whole set of experiments is prepared for a duration of almost one semester. The first lab meeting may be reserved to teach the chemistry principles behind the experiments, which are in chapter 1. The second lab meeting may be used to introduce the basic experimental techniques in chapter 2, including a demonstration of the techniques by the instructor. Students' learning may be assessed through quizzes. Then the students may start to practice the analyses, one group of cations at a time. Students may be asked to complete the first column, "net ionic equations, and observations from the expected reaction", of the datasheet of the relevant group as a pre-lab, coupled with a pre-lab quiz to force the students to understand the experiments before performing them. The post-lab assignment may be filling in the second column, "actual observations and conclusions", of the relevant datasheet at the completion of the lab activity. Start with the analysis of a known sample containing all ions belonging to the group, followed by a sample containing at least one ion from the group (unknown to students). Group I analyses can be completed in one lab meeting of about three hours' duration. Groups II and III need two lab meetings each. Further, one ion may be removed from groups II and III to save some time; e.g., chromium in group III and tin in group II may be removed, if needed, as they are relatively difficult for students to identify and the concepts taught through them are repeated in the analyses of some other ions in the same group. Instructors may choose to demonstrate the flame tests in groups IV and V, leaving the analyses of calcium and barium without the flame test for students, which can be easily completed in one lab meeting. 
This approach also minimizes the fire hazard associated with the flame test. As a lab exam, a sample containing at least one ion from each of groups I, II, III, and IV (unknown to students) may be assigned to each student for analysis over three to four lab meetings. Students may be asked to fill in column 1 of each datasheet again, this time as a pre-lab exam assignment, which may be complemented with a pre-lab exam quiz. The post-lab exam assignment may be filling in the second column of the four datasheets as each step of the analysis completes. The data gathered in analytical labs need to be disseminated in the form of a lab report. So, the students may be asked to prepare a lab report based on their findings, systematically explaining the analysis of each ion, supported with reaction equations, what they were expecting to observe, and what their conclusions are, supported by the evidence gathered, and concluding with a summary of the ions identified in the unknown sample.Preparation of the cation solutions: \(\ce{Ba(NO3)2}\), \(\ce{Ca(NO3)2.4H2O}\), \(\ce{Cd(NO3)2.4H2O}\), \(\ce{Cu(NO3)2.3H2O}\), and \(\ce{Ni(NO3)2.6H2O}\) are dissolved in distilled water; \(\ce{AgNO3}\), \(\ce{Cr(NO3)3.9H2O}\), \(\ce{Fe(NO3)3.9H2O}\), and \(\ce{Cu(NO3)2.6H2O}\) are dissolved in 0.1 M \(\ce{HNO3}\); \(\ce{Bi(NO3)3.5H2O}\) is dissolved in 3 M \(\ce{HNO3}\); and \(\ce{SnCl4.4H2O}\) is dissolved in 2.5 M \(\ce{HCl}\). Prepare a 0.5 M stock solution of each ion needed, and then mix the appropriate amount of each with 0.1 M \(\ce{HNO3}\) or distilled water, depending on the solubilities of these ions, to make a solution that is 0.1 M with respect to each ion in it. Some cations do not dissolve well initially; heat and stir the solutions until the cations are fully dissolved.
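The dilution arithmetic behind the 0.5 M stock to 0.1 M working solution follows \(C_1V_1 = C_2V_2\). A small sketch, using a hypothetical 100 mL batch (the batch volume is an assumption for illustration, not part of the preparation notes):

```python
# C1*V1 = C2*V2 dilution sketch for the multi-cation test solution.
# Stock (0.5 M) and target (0.1 M) concentrations are from the preparation
# notes; the 100 mL final volume is a hypothetical batch size.
STOCK = 0.5     # M, stock concentration of each ion
TARGET = 0.1    # M, desired concentration of each ion in the final mix
TOTAL_ML = 100.0  # hypothetical final volume

def stock_volume_ml(total_ml=TOTAL_ML, stock=STOCK, target=TARGET):
    """Volume of one 0.5 M stock so the final mix is 0.1 M in that ion."""
    return target * total_ml / stock

v = stock_volume_ml()  # 20 mL of each stock per 100 mL of final solution
```

Note that with 20 mL per ion, at most five ions fit in a 100 mL batch before no volume remains for the diluent (0.1 M \(\ce{HNO3}\) or water).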
Goals and Objectives
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Microscopy/Scanning_Probe_Microscopy/01_Goals_and_Objectives
The basic theory and applications of scanning probe microscopy (SPM) will be presented. Emphasis will be placed on SPM characterization methods. Upon completion of this module, you will understand the basic principles, operation, and applications of SPM.This page titled Goals and Objectives is shared under a CC BY-NC-SA 2.5 license and was authored, remixed, and/or curated by Heather A. Bullen & Robert A. Wilson via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
History
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Microscopy/Scanning_Probe_Microscopy/02_History
Scanning Tunneling Microscope (STM): developed in 1982 by Binnig, Rohrer, Gerber, and Weibel at IBM in Zurich, Switzerland.
Atomic Force Microscope (AFM): developed in 1986 by Binnig, Quate, and Gerber as a collaboration between IBM and Stanford University.
During the 20th century, the world of atomic and subatomic particles opened new avenues of research. Studying and manipulating material on an atomic scale required the development of new instrumentation. Physicist Richard Feynman said in his now-famous lecture in 1959: “if you want to make atomic-level manipulations, first you must be able to see what’s going on.”1 Until the 1980s, researchers lacked any method for studying surfaces on the atomic scale. It was known that the arrangement of atoms on the surface differed from the bulk, but investigators had no way to determine how they were different. The scanning tunneling microscope (STM) was developed in the early 1980s by Binnig, Rohrer, and co-workers.2 The STM provides a 3D profile of the surface on a nanoscale by tunneling electrons between a sharp conductive probe (an etched tungsten wire) and a conductive surface. The flow of electrons is very sensitive to the probe-sample distance (1-2 nm). As the probe moves across surface features, the probe position is adjusted to keep the current flow constant. From this, a topographic image of the surface can be obtained on an atomic scale.
Note: A more detailed explanation is found in SPM Basic Theory.
In 1986, Binnig and Rohrer received the Nobel Prize in Physics for their work on the STM. They shared this award with German scientist Ernst Ruska, designer of the first electron microscope. The STM that Binnig and Rohrer had built was actually based upon the field ion microscope invented by Erwin Wilhelm Müller.3 A precursor instrument, the topografiner, was invented by Russell Young and colleagues between 1965 and 1971 at the National Bureau of Standards (NBS).4 This instrument was the fundamental tool in the development of nanotechnology. 
It opened the door for the ability to control, see, measure, and manipulate matter on the atomic scale.
Drawbacks: Although the STM was considered a fundamental advancement for scientific research, it has limited applications, as it only works for conducting or semi-conducting samples (needed for the tunneling of electrons). In 1986, Binnig, Quate, and Gerber extended the field of application to non-conducting samples (biological materials, insulators, etc.) by developing the atomic force microscope (AFM).5 The AFM provides a 3D profile of the surface on a nanoscale by measuring forces between a sharp probe (<10 nm) and the surface at very short distances (0.2-10 nm probe-sample separation). The probe is supported on a flexible cantilever.
Note: A more detailed explanation is found in AFM Basic Theory.
The STM and AFM may be applied to samples in very different environments: these microscopes work under vacuum, in air, and in liquids (with specific modifications).This page titled History is shared under a CC BY-NC-SA 2.5 license and was authored, remixed, and/or curated by Heather A. Bullen & Robert A. Wilson via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
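The claim above that the tunneling current is "very sensitive" to the probe-sample distance can be made quantitative. Roughly, \(I \propto e^{-2\kappa d}\); the decay constant \(\kappa \approx 1\,\text{\AA}^{-1}\) used below is an assumed, typical order of magnitude for metal work functions of a few eV, not a measured value.

```python
import math

# Order-of-magnitude sketch of STM distance sensitivity: I ~ exp(-2*kappa*d).
# kappa ~ 1 per angstrom is an assumed typical decay constant; this is an
# illustration of the exponential dependence, not a measured parameter.
KAPPA = 1.0  # 1/angstrom, assumed

def relative_current(d_angstrom):
    """Tunneling current relative to d = 0, up to a constant prefactor."""
    return math.exp(-2.0 * KAPPA * d_angstrom)

# Retracting the tip by a single angstrom cuts the current by a factor of
# e^2 (~7.4x), which is why constant-current feedback can resolve
# sub-angstrom topography.
drop_per_angstrom = relative_current(1.0) / relative_current(2.0)
```

This exponential dependence is what the constant-current feedback loop exploits: holding \(I\) fixed holds \(d\) fixed to a small fraction of an angstrom.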
Additional SPM Methods
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Microscopy/Scanning_Probe_Microscopy/04_Additional_SPM_Methods
Lateral Force Microscopy
Lateral Force Microscopy (LFM) is conducted when imaging in contact mode. During scanning in contact mode, the cantilever not only bends vertically with respect to the surface as a result of repulsive van der Waals interactions, but also undergoes torsional (lateral) deformation. LFM measures the torsional bending (or twisting) of the cantilever, which depends on the frictional force acting on the tip. As a result, this method is also known as friction force microscopy (FFM).
Chemical Force Microscopy
Magnetic Force Microscopy
Phase Imaging
This page titled Additional SPM Methods is shared under a CC BY-NC-SA 2.5 license and was authored, remixed, and/or curated by Heather A. Bullen & Robert A. Wilson via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1: Introduction to Chemiluminescence
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Chemiluminescence/1%3A_Introduction_to_Chemiluminescence
1.1: Electronic transitions and luminescence
Luminescence is the emission of light due to transitions of electrons from molecular orbitals of higher energy to those of lower energy, usually the ground state or the lowest unoccupied molecular orbitals. Such transitions are referred to as relaxations.
1.2: Chemiluminescence Spectroscopy
The importance of chemiluminescence spectroscopy lies more in elucidating the mechanisms of chemiluminescence reactions than in analytical applications. In particular, spectroscopic investigations have been found useful for the identification of the emitter species in particular chemiluminescence reactions.
Thumbnail: Chemiluminescence after a reaction of hydrogen peroxide and luminol. This is an image from video youtu.be/8_82cNtZSQE. (CC BY-SA 4.0; Tavo Romann).
1: Introduction to Chemiluminescence is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
2: Chemiluminescence Reagents
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Chemiluminescence/2%3A_Chemiluminescence_Reagents
2.1: Luminol
Luminol is the common name for 5-amino-2,3-dihydro-1,4-phthalazinedione (often called 3-aminophthalhydrazide). Oxidation of luminol produces excited 3-aminophthalate, which on relaxation emits light (λmax = 425 nm) with a quantum yield of 1%. Alternatively, luminol chemiluminescence may be triggered electrochemically.
2.2: Lophine and pyrogallol
Lophine and pyrogallol are the earliest-known chemiluminescence reagents. Lophine exhibits lemon-yellow chemiluminescence in solution and is one of the few long-lasting chemiluminescent molecules. It forms dimers that have piezochromic and photochromic properties. It has been proposed as an analytical reagent for trace metal ion detection.
2.3: Luciferins
Luciferases are enzymes that catalyse light-emitting reactions in living organisms - bioluminescence. They occur in several species of firefly and in many species of bacterium. Firefly luciferases are extracted by differential centrifugation and purified by gel filtration. Luciferins are substrates of luciferases. Firefly luciferin emits at 562 nm on reaction with oxygen, catalysed by luciferase in the presence of adenosine triphosphate (ATP) and magnesium ions.
2.4: Lucigenin and coelenterazine
Lucigenin is used in a wide variety of assays, especially those involving enzymatic production of hydrogen peroxide, and as a label in immunoassays. It reacts with various reductants, including those present in normal human blood, such as glutathione, uric acid, glucuronic acid, creatinine, ascorbic acid and creatine. The chemiluminescence intensity for a mixture of these analytes is equal to the sum of the intensities measured separately for each analyte present.
2.5: Dioxetanes and oxalates
Peroxy-oxalate chemiluminescence (PO-CL) was first reported in 1963 as a very weak bluish-white emission from oxalyl chloride, Cl-CO.CO-Cl, on oxidation by hydrogen peroxide; a similar blue emission occurs from related oxalyl peroxides. Much more intense emission is obtained in the reaction between aryl oxalates and hydrogen peroxide in the presence of a fluorophore; it is this version of the reaction that is analytically useful.
2.6: Organic peroxides and lipid peroxidation
2.7: Manganese
Manganese(VII) in the form of potassium permanganate has been used as a chemiluminescence reagent for several decades. A broad band of red light is emitted on reaction with over 270 compounds in acidic solution. Among the organic analytes are morphine and a wide range of other pharmaceuticals, phenolic substances, amines and hydrazines, in addition to well-known reductants such as ascorbic acid and uric acid.
2.8: Cerium
Cerium(IV)-based chemiluminescence systems involve the reduction of cerium(IV), which suggests that the emitter is a cerium(III) species. The chemiluminescence reaction is carried out in an acidic medium (generally sulfuric acid) and has been applied for the determination of substances of biological interest.
2.9: Ruthenium
The chemiluminescence involving tris(2,2'-bipyridyl)ruthenium(II), [Ru(bpy)3]2+, is most interesting. It involves the oxidation of [Ru(bpy)3]2+ to [Ru(bpy)3]3+, which is followed by reduction with an analyte species to produce an emission of light.
2.10: Oxygen radicals
2.11: Sulfites and persulfates
2.12: Hypohalites and halates
2: Chemiluminescence Reagents is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
3: Enhancement of Chemiluminescence
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Chemiluminescence/3%3A_Enhancement_of_Chemiluminescence
3.1: Micellar Enhancement of Chemiluminescence
Well-defined mechanistic principles have emerged to rationalize micellar enhancement of chemiluminescence. These operate in the microenvironment (i.e., polarity, viscosity and/or acidity, etc.), in the chemical and photophysical pathway, and in the solubilization, concentration and organization of the solute/reactant. We shall now use these principles as a framework for discussing this work, and it will become clear that they are highly inter-related rather than mutually exclusive.
3.2: Dye Enhancement of Chemiluminescence
Chemiluminescence is often very weak, and to use it, or even to investigate it, it is necessary to enhance it. One way to do this is to use fluorescent dyes, so it is necessary to find a link between the properties of the dye and the degree of enhancement achieved. One key property is the fluorescence quantum yield of the dye; this must be greater than the chemiluminescence quantum yield of the original emitter.
3.3: Enhancement of Chemiluminescence by Ultrasound
A novel ultrasonic flow injection chemiluminescence (FI-CL) manifold for determining hydrogen peroxide (H2O2) has been designed. Chemiluminescence obtained from the luminol-H2O2-cobalt(II) reaction was enhanced by applying 120 W of ultrasound for a period of 4 s to the reaction coil in the FI-CL system, and this enhancement was verified by comparison with an identical manifold without ultrasound.
3: Enhancement of Chemiluminescence is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
4: Instrumentation
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Chemiluminescence/4%3A_Instrumentation
4.1: Detection of chemiluminescence
4.2: Flow Injection Analysis (FIA)
4.3: Sequential Injection Analysis (SIA): lab on a valve
4.4: Lab on a Chip
4.5: Chemiluminescence Sensors
Chemiluminescence has the advantage of lower background emission than fluorescence, avoiding noise caused by light scattering. However, because chemiluminescence reagents are irreversibly consumed, chemiluminescence sensors have shorter lifetimes than fluorescence sensors, and their signals have a tendency to drift downwards due to consumption, migration and breakdown of reagents.
4.6: Chemiluminescence Imaging
Chemiluminescence imaging combines the sensitive detection of chemiluminescence with the ability to locate and quantify the light emission; above all, it provides massively parallel determinations of the analyte. A digital image is made up of thousands of pixels, each generated by an independent sensor, detecting and measuring the light that falls on it. This enables simultaneous measurement of multiple samples or analytes for high-throughput screening.
4.7: Electrochemiluminescence
Electrochemiluminescence is chemiluminescence arising as a result of electrochemical reactions. It includes electrochemical initiation of ordinary chemiluminescent reactions, electrochemical modification of an analyte enabling it to take part in a chemiluminescent reaction, or electron transfer reactions between radicals or ions generated at electrodes.
4.8: Photo-induced chemiluminescence
Photo-induced chemiluminescence (PICL) involves irradiating an analyte with ultra-violet light in order to convert it to a photoproduct of different chemiluminescence behaviour, usually substantially increased emission. Such reactions form the basis of highly sensitive and selective analytical techniques.
4.9: Chemiluminescence detection in gas chromatography
4.10: Chemiluminescence detection in high performance liquid chromatography
4.11: Chemiluminescence Detection in Capillary Electrophoresis
4: Instrumentation is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Accuracy of Spectrophotometer Readings
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Accuracy_of_Spectrophotometer_Readings
The needle deflection or the number shown on the digital display of a spectrophotometer is proportional to the transmittance of the solution. How do errors in transmittance readings affect the accuracy of solution concentration values? The concentration as a function of the transmittance is given by the equation\[c(T) = - \dfrac{\log T}{ \epsilon \,b}\]Let \(c_o\) be the true concentration and \(T_o\) the corresponding transmittance, i.e. \(c_o = c(T_o)\). Suppose that the actual transmittance measured is \(T_o + \Delta T\), corresponding to the concentration\[c_o + \Delta c = c(T_o + \Delta T).\]The error in the transmittance is \(\Delta T\) and that of the concentration is \(\Delta c\). By using a Taylor series expansion, and discarding all terms higher than \(\Delta T\) to the first power, it is possible to show that:\[\Delta c = - \dfrac{\Delta T}{ 2.303\, \epsilon \,b\,T}\]Dividing the second equation by the first gives us:\[ \dfrac{\Delta c}{c} = \dfrac{\Delta T}{ 2.303\, T \log T} = \dfrac{\Delta T}{ T \,\ln T}\]Values of \(-(T \ln T)^{-1}\) as a function of \(T\) or \(A\) (\(A = -\log T\)) are tabulated below. Below the tabulation one finds a plot of \(-(T \ln T)^{-1}\) versus \(T\).The relative error in the concentration, for a given \(\Delta T\), has its smallest value when \(T = 1/e = 0.368\), or when \(A = 0.434\). The minimum is not sharp, and good results can be expected in a transmittance range from 0.2 to 0.6, or an absorbance range from 0.7 to 0.2. An inspection of the graph below indicates that transmittance values of 0.1 and 0.8 are the outside limits between which one can expect to obtain reasonably accurate results. These transmittance values correspond to an absorbance range of 1.0 to 0.1 absorbance units. This is the rationale for limiting your calibration curve to that absorbance range.Graph of \(-(T \ln T)^{-1}\) vs. \(T\)This page titled Accuracy of Spectrophotometer Readings is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Oliver Seely.
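The error-magnification factor \(-(T \ln T)^{-1}\) derived above is easy to evaluate directly. The sketch below reproduces the minimum at \(T = 1/e\) (\(A \approx 0.434\)) quoted in the text:

```python
import math

# Relative concentration error per unit transmittance error:
# |dc/c| = dT * [-(T ln T)^-1], from the derivation in the text.
def error_factor(T):
    """-(T ln T)^-1 for 0 < T < 1: multiply by the absolute transmittance
    error dT to estimate the relative concentration error dc/c."""
    return -1.0 / (T * math.log(T))

t_min = 1.0 / math.e          # transmittance at the error minimum
a_min = -math.log10(t_min)    # corresponding absorbance, ~0.434

# The minimum is shallow: at T = 0.2 and T = 0.6 the factor is still within
# about 20% of its optimum value e ~ 2.718, while at the outside limits
# T = 0.1 and T = 0.8 it has grown appreciably.
factors = {T: error_factor(T) for T in (0.1, 0.2, t_min, 0.6, 0.8)}
```

Evaluating `error_factor` across this range shows exactly the shallow bowl described in the text, with the error climbing steeply only outside the 0.1-0.8 transmittance window.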
Atomic Emission Spectroscopy (AES)
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Atomic_Emission_Spectroscopy_(AES)
This module provides an introduction to Atomic Emission Spectroscopy (AES). AES is a broad area that includes several analytical chemistry techniques focused on elemental analysis, the identification, quantification, and (sometimes) speciation of the elemental makeup of a sample. AES can be an extremely useful tool and is utilized in academic and industrial settings within biological and chemical sciences. It is mainly used in quantitative analysis, but can be used in qualitative work as well. This page titled Atomic Emission Spectroscopy (AES) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Alexander Scheeline & Thomas M. Spudich via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
Atomic Force Microscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Microscopy/Atomic_Force_Microscopy
Atomic force microscopy utilizes a microscale probe to produce three-dimensional images of surfaces at sub-nanometer scales. The atomic force microscope obtains images by measuring the attractive and repulsive forces acting on a microscale probe interacting with the surface of a sample. Ideally, the interaction occurs at an atomically fine probe tip that is attracted and repelled by the atoms of the surface, giving atomically resolved surface images.

The atomic force microscope (AFM) probe is mounted on a flexible cantilever that is manipulated by a vertical piezoelectric element into interacting with the sample. The piezoelectric expands and exerts a force on the cantilever proportional to the applied voltage. This force is balanced by the forces acting on the probe through its interaction with the surface and by the strain on the cantilever.

The flexible cantilever bends in proportion to the force acting upon it, in accordance with Hooke's law. By measuring the reflection of a laser source off the cantilever, it is possible to determine the degree of bending and, through a feedback loop, control the force exerted by the cantilever. By using the strain as a restoring force, a piezoelectric element can drive the probe as a mechanical oscillator with a calculable resonant frequency, allowing for tapping-mode microscopy. The laser acts on a photodiode array to measure the deflection both horizontally and vertically. From the deflection it is possible to calculate the force acting on the probe in both directions, and from the force applied by the vertical piezoelectric together with the force acting on the probe, a measure of the relative height of the probe can be obtained. As the probe encounters a feature, it rises with the feature, causing a deflection measured by the photodiode and a change in the force on the vertical piezoelectric.
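The deflection-to-force conversion just described can be sketched numerically. A minimal illustration, where the spring constant, deflection sensitivity, and effective mass are arbitrary illustrative values, not parameters of any particular instrument:

```python
import math

SPRING_CONSTANT = 0.2            # cantilever spring constant k in N/m (illustrative)
DEFLECTION_SENSITIVITY = 50e-9   # metres of tip deflection per volt of photodiode signal (assumed)

def tip_force(photodiode_volts):
    """Hooke's law, F = k * x, with the deflection x inferred from the
    photodiode signal via the assumed sensitivity."""
    deflection = photodiode_volts * DEFLECTION_SENSITIVITY
    return SPRING_CONSTANT * deflection

def resonant_frequency(k, effective_mass):
    """Resonant frequency of the cantilever treated as a simple harmonic
    oscillator, f = (1 / 2*pi) * sqrt(k / m), as used in tapping mode."""
    return math.sqrt(k / effective_mass) / (2 * math.pi)

print(tip_force(0.5))                              # force in newtons for a 0.5 V deflection signal
print(resonant_frequency(SPRING_CONSTANT, 5e-12))  # drive frequency in Hz for an assumed effective mass
```

Real instruments calibrate the deflection sensitivity and spring constant per cantilever; the point here is only that both imaging quantities follow from Hooke's law and the oscillator relation.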
Using feedback from the photodiode array, the potential may be adjusted to minimize the deflection; knowing the expansion rate of the vertical piezoelectric then allows a direct computation of the height (this is the z-sensor). From the total deflection, and thus the strain on the cantilever, it is also possible to obtain the relative height of the feature, given a known spring constant of the cantilever.

The probe is scanned across the surface, with either the probe or the sample being moved by piezoelectric elements. This allows the interaction forces to be measured across the entire sample, so that the surface can be rendered as a three-dimensional image. The force exerted by the probe has the potential to alter the surface by etching or simply by moving loosely bound surface features. As such, this microscopy technique can potentially be used to write (etch) as well as read atomic-scale surface features. The strain on the probe tip may cause deformation that leads to a loss in resolution due to flattening, or to the generation of artifacts due to secondary tips.

The probe tip is idealized as an atomically perfect spherical surface with a nanoscopic radius of curvature, leading to a single point of contact between the probe tip and the surface. Tips, however, may have multiple points of contact, leading to image artifacts such as doubled images or shadowing. Alternatively, tips may be flattened or even indented, causing lower resolution as smaller surface features are passed over. Significant error may also arise from the expansion of the piezoelectric materials as they become heated. This problem is typically mitigated by attempting to maintain an isothermal environment.
Drift may be measured and accounted for by repeated measurement of a known point and normalizing the data to that known height.

There are multiple methods of imaging a surface using an atomic force microscope; these imaging modes make distinct use of the probe and its interaction with the surface to obtain data. If a constant potential is maintained for the vertical piezoelectric, the probe maintains continuous contact with the surface. It is then possible to use the deflection and z-sensor to yield accurate height information for surface features. When the probe is in contact with the sample, the resistance acting on the probe's horizontal motion causes it to be strained horizontally. This strain is proportional to the resistance (friction), allowing a direct measurement of the friction between the sample and the probe surface.

Also known as non-contact mode, another method utilizes the attractive forces acting on the probe. To measure these, the probe is pulled away from the sample by the vertical piezoelectric with a set force; as a feature is encountered, the attractive force acting on the probe increases, causing it to deflect downward. The downward deflection is counteracted by the vertical piezoelectric in a manner similar to contact mode, reaching the same equilibrium of forces acting on the cantilever.

By using a piezoelectric driver it is possible to operate the cantilever as a harmonic oscillator with a resonant frequency determined by the known spring constant and the effective mass of the cantilever. The probe then oscillates with an amplitude proportional to the controlled driving force, at a frequency set by that resonance.
When the probe tip contacts the surface, the effective restoring force increases, increasing the frequency. The total change in frequency is related to the feature's height, and the vertical piezoelectric can then be used to raise the cantilever and restore the frequency to its original value, giving additional data from the z-sensor regarding the feature's height. Additionally, as the probe contacts the surface it acts as a driving force, deforming the surface, which is restored by the internal stress (which acts on the probe to repel it). This phenomenon depends on the Young's modulus of the sample and will cause a phase shift between the oscillation of the piezoelectric driver of the probe and that of the probe itself.

Use of specialized probes allows a further expansion of atomic force microscopy's role in nanoscience. By altering the probe design it is possible to directly obtain data about the surface's interaction with particular functional groups, alter the surface by etching or by causing chemical change, or deposit substrates onto a surface, all at the nanoscopic scale. Functionalizing a probe can be accomplished by binding a protein or functional groups to the probe tip surface. The probe then takes a direct measure of the interaction between the surface and the functional group. This technique is particularly useful for biological applications, e.g. the affinity of a protein for the binding site of a membrane.

The strain exerted by the tip on the surface can be used to manipulate or alter surface features, allowing mechanical etching of the system. Using specialized thermal tips, it also becomes possible to heat the surface. Either technique can be used to precisely carve into, or otherwise alter, a surface at the scales necessary for many nanotechnological advances. Using a special tip similar to a fountain-pen nib, it is possible to deposit units of a substrate onto a surface. Precise deposition allows for building very precise surface structures, e.g.
protein binding sites, onto an atomically flat surface.

AFM has been demonstrated as a potential means of data storage by the IBM Corporation. By using a heated tip it is possible to alter a polymer surface through a reversible polymerization reaction. The indentation created may be read in contact or tapping mode, allowing written data to be read back. Data may be erased by using the thermal tip to cause polymerization on the surface, sealing the indentation.

Atomic Force Microscopy is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Balancing Redox Reactions - Examples
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Balancing_Redox_reactions/Balancing_Redox_Reactions_-_Examples
Oxidation-Reduction or "redox" reactions occur when elements in a chemical reaction gain or lose electrons, causing an increase or decrease in oxidation numbers. The Half Equation Method is used to balance these reactions.In a redox reaction, one or more elements become oxidized, and one or more elements become reduced. Oxidation is the loss of electrons whereas reduction is the gain of electrons. An easy way to remember this is to think of the charges: an element's charge is reduced if it gains electrons (a mnemonic to remember the difference is LEO = Lose Electrons: Oxidation & GER = Gain Electrons: Reduction). Redox reactions usually occur in one of two environments: acidic or basic. In order to balance redox equations, an understanding of oxidation states is necessary. The steps are shown in the following example: Example \(\PageIndex{1A}\): In Acidic Aqueous SolutionBalance this reaction\[\ce{MnO4^{-} + I^{-} -> I2 + Mn^{2+}} \nonumber\]SolutionSteps to balance:Step 1: Separate the half-reactions that undergo oxidation and reduction.Oxidation: \[\ce{ I^{-} -> I2} \nonumber \]This is the oxidation half because the oxidation state changes from -1 on the left side to 0 on the right side. This indicates a loss of electrons.Reduction: \[ \ce{ MnO4^{-} -> Mn^{2+}} \nonumber\]This is the reduction half because the oxidation state changes from +7 on the left side to +2 on the right side.
This indicates a gain of electrons.Step 2: In order to balance this half reaction we must start by balancing all atoms other than any Hydrogen or Oxygen atoms.Oxidation: \[\ce{2I^{-} -> I2} \nonumber\]In order to balance the oxidation half of the reaction you must first add a 2 in front of the \(\ce{I}\) on the left hand side so there is an equal number of atoms on both sides.Reduction: \[ \ce{MnO4^{-} -> Mn^{2+}} \nonumber\]For the reduction half of the reaction, you can notice that all atoms other than Hydrogen and Oxygen are already balanced because there is one manganese atom on both sides of the half reaction.Step 3: Balance Oxygen atoms by adding \(\ce{H2O}\) to the side of the equation that needs Oxygen. Once you have completed this step add \(\ce{H^{+}}\) to the side of the equation that lacks the \(\ce{H}\) atoms necessary to be balanced.Oxidation: \[\ce{2 I^{-} -> I2} \nonumber\]Because there are no Oxygen or Hydrogen atoms in this half of the reaction, it is not required to perform any kind of balancing.Reduction: \[ \ce{ MnO4^{-} -> Mn^{2+} + 4 H2O} \nonumber\]The first part of step 3 is to add 4 \(\ce{H2O}\) molecules in order to balance the 4 Oxygen atoms of \(\ce{MnO4^{-}}\) on the other side.Reduction: \[ \ce{MnO4^{-} + 8 H^{+} -> Mn^{2+} + 4 H2O} \nonumber\]Now that the Oxygen atoms have been balanced you can see that there are 8 \( \ce{H}\) atoms on the right hand side of the equation and none on the left.
Therefore, you must add 8 \(\ce{H^{+}}\) ions to the left hand side of the equation to make it balanced.Step 4: Now that the two half reactions have been balanced correctly, one must balance the charges in each half reaction so that the reduction and oxidation halves of the reaction transfer the same number of electrons.Oxidation: \[ \ce{2 I^{-} -> I2 + 2e^{-} } \nonumber\]Because there are two \(\ce{I}\)'s on the left hand side of the equation, each with a charge of -1, the left hand side has an overall charge of -2. The \(\ce{I2}\) on the right side has a charge of 0. Therefore, to balance the charges of this reaction we must add 2 electrons to the right side of the equation so that both sides of the equation have equal charges of -2.Reduction: \[ \ce{5 e^{-} + 8 H^{+} + MnO4^{-} -> Mn^{2+} + 4 H2O} \nonumber\]Looking at the left hand side of the equation (before the electrons are added), there are 8 Hydrogen ions with a +1 charge and a \(\ce{MnO4^{-}}\) ion with a charge of -1, giving the left hand side an overall charge of +7. The right hand side has a \(\ce{Mn}\) ion with a charge of +2 and 4 water molecules with charges of 0. Therefore, the overall charge of the right side is +2.
We must add 5 electrons to the left side of the equation to make sure that both sides of the equation have equal charges of +2.Step 5: Multiply each half-reaction by the factor needed to give both the same number of electrons (the least common multiple), so that the electrons cancel out.Oxidation: \( 10I^- \rightarrow 5I_2 +10e^- \)We multiply this half reaction by 5 to obtain the result above.Reduction: \(10e^- + 16H^+ + 2MnO_4^- \rightarrow 2Mn^{2+} + 8H_2O \)We multiply the reduction half of the reaction by 2 and arrive at the answer above.By multiplying the oxidation half by 5 and the reduction half by 2 we are able to observe that both half-reactions have 10 electrons and are therefore able to cancel each other out.Step 6: Add the two half reactions in order to obtain the overall equation by canceling out the electrons and any \(\ce{H2O}\) and \(\ce{H^{+}}\) ions that exist on both sides of the equation.Overall: \[\ce{10 I^{-} + 16 H^{+} + 2 MnO4^{-} -> 5 I2 + 2 Mn^{2+} + 8 H_2O} \nonumber\]In this problem, nothing other than the electrons exists on both sides of the equation, so only they cancel. Finally, double check your work to make sure that the mass and charge are both balanced. To double check this equation you can notice that everything is balanced because both sides of the equation have an overall charge of +4.Example \(\PageIndex{1B}\): In Basic Aqueous SolutionsThe balancing procedure in basic solution differs slightly because \(\ce{OH^{-}}\) ions must be used instead of \(\ce{H^{+}}\) ions when balancing hydrogen atoms. To give the previous reaction under basic conditions, sixteen \(\ce{OH^{-}}\) ions can be added to both sides.
\[\ce{10I^{-} (aq) + 2MnO4^{-} (aq) + 16H^{+} (aq) + 16OH^{-} (aq) -> 5I2 (s) + 2Mn^{2+} (aq) + 8H2O (l) + 16OH^{-} (aq)} \nonumber\]On the left side the \(\ce{OH^{-}}\) and the \(\ce{H^{+}}\) ions will react to form water, which will cancel out with some of the \(\ce{H2O}\) on the right:\[\ce{10I^{-} (aq) + 2MnO4^{-} (aq) + 16H2O (l) -> 5I2 (s) + 2Mn^{2+} (aq) + 8H2O (l) + 16OH^{-} (aq)} \nonumber\]Eight water molecules can be canceled, leaving eight on the reactant side:\[\ce{10I^{-} (aq) + 2MnO4^{-} (aq) + 8H2O (l) -> 5I2 (s) + 2Mn^{2+} (aq) + 16OH^{-} (aq)} \nonumber\]This is the balanced reaction in basic solution.Example \(\PageIndex{2}\)Balance the following in an acidic solution.\[\ce{SO3^{2-} (aq) + MnO4^{-} (aq) \rightarrow SO4^{2-} (aq) + Mn^{2+} (aq)} \nonumber\]SolutionTo balance a redox reaction, first separate the equation into two half-reactions, one for oxidation and one for reduction, then balance each.Step 1: Split into two half-reactions:\[\begin{align*} &\text{Oxidation}: \quad \ce{SO3^{2-}(aq)} \rightarrow \ce{SO4^{2-}(aq)} \\[4pt] &\text{Reduction}: \quad \ce{MnO4^{-}(aq)} \rightarrow \ce{Mn^{2+}(aq)} \end{align*}\]Step 2: Balance each of the half-reactions, starting with elements other than \(\ce{O}\) and \(\ce{H}\). The \(\ce{S}\) and \(\ce{Mn}\) atoms are already balanced.Balancing \(\ce{O}\) atoms\[\begin{align*} &\text{Oxidation}: \quad \ce{SO3^{2-}(aq)} + \ce{H2O(l)} \rightarrow \ce{SO4^{2-} (aq)} \\[4pt] &\text{Reduction}: \quad \ce{MnO4^{-} (aq)} \rightarrow \ce{Mn^{2+}(aq)} + \ce{4H2O(l)} \end{align*}\]Then balance out \(\ce{H}\) atoms on each side\[\begin{align*} &\text{Oxidation}: \quad \ce{SO3^{2-}(aq)} + \ce{H2O(l)} \rightarrow \ce{SO4^{2-} (aq)} + \ce{2H^{+}(aq)}\\[4pt] &\text{Reduction}: \quad \ce{MnO4^{-} (aq)} + 8 \ce{H^{+}} \rightarrow \ce{Mn^{2+}(aq)} + \ce{4H2O(l)} \end{align*}\]Step 3: Balance the charges of the half reactions by adding electrons\[\begin{align*} &\text{Oxidation}: \quad \ce{SO3^{2-}(aq)} + \ce{H2O(l)} \rightarrow \ce{SO4^{2-} (aq)} + \ce{2H^{+}(aq)} +
\ce{2e^{-}}\\[4pt] &\text{Reduction}: \quad \ce{MnO4^{-} (aq)} + 8 \ce{H^{+}} + \ce{5e^{-}} \rightarrow \ce{Mn^{2+}(aq)} + \ce{4H2O(l)} \end{align*}\]Step 4: Obtain the overall redox equation by combining the half-reactions: multiply the oxidation half by the number of electrons in the reduction half, and the reduction half by the number of electrons in the oxidation half, so that the electrons cancel.\[\begin{align*} &\text{Oxidation}: \quad 5 \times \left[\ce{SO3^{2-}(aq)} + \ce{H2O(l)} \rightarrow \ce{SO4^{2-} (aq)} + \ce{2H^{+}(aq)} + \ce{2e^{-}} \right] \\[4pt] &\text{Reduction}: \quad 2 \times \left[ \ce{MnO4^{-} (aq)} + 8 \ce{H^{+}} + \ce{5e^{-}} \rightarrow \ce{Mn^{2+}(aq)} + \ce{4H2O(l)}\right] \end{align*}\]Overall Reaction:\[\begin{align*} &\text{Oxidation}: \quad \ce{5SO3^{2-}(aq)} + \ce{5H2O(l)} \rightarrow \ce{5 SO4^{2-} (aq)} + \ce{10 H^{+}(aq)} + \ce{10e^{-}} \\[4pt] &\text{Reduction}: \quad \ce{2 MnO4^{-} (aq)} + \ce{16H^{+}} + \ce{10e^{-}} \rightarrow \ce{2Mn^{2+}(aq)} + \ce{8H2O(l)} \\[4pt] \hline &\text{total}: \quad \ce{5SO3^{2-}(aq)} + \ce{5H2O(l)} + \ce{2 MnO4^{-} (aq)} + \ce{16H^{+}} + \ce{10e^{-}} \rightarrow \ce{5 SO4^{2-} (aq)} + \ce{10 H^{+}(aq)} + \ce{2Mn^{2+}(aq)} + \ce{8H2O(l)} + \ce{10e^{-}} \end{align*}\]Step 5: Simplify and cancel out similar terms on both sides\[\ce{5SO3^{2-}(aq)} + \cancel{\ce{5H2O(l)}} + \ce{2 MnO4^{-} (aq)} + \ce{\cancelto{6}{16}H^{+}} + \cancel{\ce{10e^{-}}} \rightarrow \ce{5 SO4^{2-} (aq)} + \cancel{\ce{10 H^{+}(aq)}} + \ce{2Mn^{2+}(aq)} + \ce{\cancelto{3}{8}H2O(l)} + \cancel{\ce{10e^{-}}} \nonumber\]To get\[\ce{5SO3^{2-}(aq)} + \ce{2 MnO4^{-} (aq)} + \ce{6H^{+}} \rightarrow \ce{5 SO4^{2-} (aq)} + \ce{2Mn^{2+}(aq)} + \ce{3H2O(l)} \nonumber\]Example \(\PageIndex{3}\):Balance this reaction in both acidic and basic aqueous solutions\[\ce{MnO4^{-}(aq) + SO3^{2-}(aq) -> MnO2(s) + SO4^{2-}(aq)} \nonumber\]SolutionFirst, they are separated into the half-equations: \[\ce{MnO4^{-}(aq) -> MnO2(s)} \nonumber\]This is the reduction half-reaction
because oxygen is LOST, and\[\ce{SO3^{2-}(aq) -> SO4^{2-}(aq)} \nonumber\]is the oxidation, because oxygen is GAINED.Now, to balance the oxygen atoms, we must add two water molecules to the right side of the first equation, and one water molecule to the left side of the second equation:\[\ce{MnO4^{-}(aq) -> MnO2(s) + 2H2O(l)} \nonumber \]\[\ce{H2O(l) + SO3^{2-}(aq) -> SO4^{2-}(aq)} \nonumber\]To balance the hydrogen atoms (those in the original equation as well as those added in the last step), we must add four H+ ions to the left side of the first equation, and two H+ ions to the right side of the second equation.\[\ce{4H^{+} + MnO4^{-}(aq) -> MnO2(s) + 2H2O(l)} \nonumber\]\[\ce{H2O(l) + SO3^{2-}(aq) -> SO4^{2-}(aq) + 2H^{+}} \nonumber\]Now we must balance the charges. In the first equation, the charge is +3 on the left and 0 on the right, so we must add three electrons to the left side to make the charges the same. In the second equation, the charge is -2 on the left and 0 on the right, so we must add two electrons to the right.\[\ce{3e- + 4H^{+} + MnO4^{-}(aq) -> MnO2(s) + 2H2O(l)} \nonumber \]\[\ce{H2O(l) + SO3^{2-}(aq) -> SO4^{2-}(aq) + 2H^{+} + 2e^{-}} \nonumber \]Now we must make the electrons equal each other, so we multiply each equation by the appropriate number to get the common multiple (in this case, by 2 for the first equation, and by 3 for the second).\[\ce{2(3e^{-} + 4H^{+} + MnO4^{-}(aq) -> MnO2(s) + 2H2O(l))} \nonumber \]\[\ce{3(H2O(l) + SO3^{2-}(aq) -> SO4^{2-}(aq) + 2H^{+} + 2e^{-})} \nonumber \]With the result:\[\ce{6e^{-} + 8H^{+} + 2MnO4^{-}(aq) -> 2MnO2(s) + 4H2O(l)} \nonumber \]\[\ce{3H2O(l) + 3SO3^{2-}(aq) -> 3SO4^{2-}(aq) + 6H^{+} + 6e^{-}} \nonumber \]Now we cancel and add the equations together. We can cancel the 6e- because they are on both sides. We can get rid of the 6H+ on both sides as well, turning the 8H+ in the first equation to \(\ce{2H^{+}}\).
The same method gets rid of the \(\ce{3H2O(l)}\) on the bottom, leaving us with just one \(\ce{H2O(l)}\) on the top. In the end, the overall reaction should have no electrons remaining. Now we can write one balanced equation:\[\ce{2MnO4^{-}(aq) + 2H^{+} + 3SO3^{2-}(aq) -> H2O(l) + 2MnO2(s) + 3SO4^{2-}(aq)} \nonumber \]The equation is now balanced in an acidic environment. To balance in a basic environment add \(\ce{OH^{-}}\) to each side to neutralize the \(\ce{H^{+}}\) into water molecules:\[\ce{2MnO4^{-}(aq) + 2H2O + 3SO3^{2-}(aq) -> H2O(l) + 2MnO2(s) + 3SO4^{2-}(aq) + 2OH^{-}} \nonumber \]and then cancel the water molecules\[\ce{2MnO4^{-}(aq) + H2O + 3SO3^{2-}(aq) -> 2MnO2(s) + 3SO4^{2-}(aq) + 2OH^{-}} \nonumber\]The equation is now balanced in a basic environment.Example \(\PageIndex{4}\)Balance this reaction in acidic solution\[\ce{Fe(OH)3 + OCl^{-} \rightarrow FeO4^{2-} + Cl^{-}} \nonumber \]SolutionSteps 1–3 give the two balanced half-reactions; in Step 4 they are scaled so the electrons match and then combined.Overall Equation:\[\begin{align*} 3 \times \big[ \ce{ 2H^{+} + OCl^{-} + 2e^{-}} &\ce{ -> Cl^{-} + H2O} \big] \\[4pt] \ce{ 6H^{+} + 3OCl^{-} + 6e^{-}} &\ce{ -> 3Cl^{-} + 3H2O} \end{align*}\]and\[\begin{align*} 2 \times \big[ \ce{Fe(OH)3 + H2O} & \ce{-> FeO4^{2-} + 3e- + 5H^{+}} \big] \\[4pt] \ce{2Fe(OH)3 + 2H2O} & \ce{-> 2FeO4^{2-} + 6e- + 10H^{+}} \end{align*}\]Adding these together results in\[\ce{6H^{+} + 3OCl^{-} + 6e^{-} + 2Fe(OH)3 + 2H2O -> 3Cl^{-} + 3H2O + 2FeO4^{2-} + 6e^{-} + 10H^{+} }\nonumber\]Step 5: Simplify:\[\ce{3OCl^{-} + 2Fe(OH)3 \rightarrow 3Cl^{-} + H2O + 2FeO4^{2-} + 4H^{+}} \nonumber\]Example \(\PageIndex{5}\)Balance this equation in acidic aqueous solution\[\ce{VO4^{3-} + Fe^{2+} -> VO^{2+} + Fe^{3+}} \nonumber\]SolutionSteps 1–3 give the balanced half-reactions; because each involves one electron, no scaling is needed in Step 4.Overall Reaction:\[\ce{Fe^{2+} -> Fe^{3+} + e^{-}} \nonumber\]+\[\ce{6H^{+} + VO4^{3-} + e^{-} -> VO^{2+} + 3H2O} \nonumber\]____________________________\[\ce{Fe^{2+} + 6H^{+} + VO4^{3-}} + \cancel{e^{-}} \ce{ \rightarrow Fe^{3+}} + \cancel{e^{-}} \ce{ + VO^{2+} + 3H2O}
\nonumber\]Step 5: Simplify:\[\ce{Fe^{2+} + 6H^{+} + VO4^{3-} \rightarrow Fe^{3+} + VO^{2+} + 3H2O} \nonumber\]Exercise \(\PageIndex{1}\)Balance the following equation in both acidic and basic environments:\[\ce{Cr2O7^{2-}(aq) + C2H5OH(l) -> Cr^{3+}(aq) + CO2(g)} \nonumber\]In acidic aqueous solution: \[\ce{2Cr2O7^{2-}(aq) + 16H^{+}(aq) + C2H5OH(l) -> 4Cr^{3+}(aq) + 2CO2(g) + 11H2O(l)} \nonumber\]In basic aqueous solution: \[\ce{2Cr2O7^{2-}(aq) + 5H2O(l) + C2H5OH(l) -> 4Cr^{3+}(aq) + 2CO2(g) + 16OH^{-}(aq) } \nonumber\]Exercise \(\PageIndex{2}\)Balance the following equation in both acidic and basic environments:\[\ce{Fe^{2+}(aq) + MnO4^{-}(aq) -> Fe^{3+}(aq) + Mn^{2+}(aq)} \nonumber\]In acidic aqueous solution: \[\ce{ MnO4^{-}(aq) + 5Fe^{2+}(aq) + 8H^{+}(aq) -> Mn^{2+}(aq) + 5Fe^{3+}(aq) + 4H2O(l)} \nonumber\]In basic aqueous solution: \[\ce{MnO4^{-}(aq) + 5Fe^{2+}(aq) + 4H2O(l) -> Mn^{2+}(aq) + 5Fe^{3+}(aq) + 8OH^{-}(aq) } \nonumber\]In a redox reaction, also known as an oxidation-reduction reaction, oxidation and reduction must occur simultaneously. In the oxidation half of the reaction, a species loses electrons; in the reduction half, a species gains electrons. These reactions can take place in either acidic or basic solutions.Balancing Redox Reactions - Examples is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
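Each worked example above ends with a check that mass and charge balance. That bookkeeping can be automated; the sketch below uses a dictionary-based formula representation that is just an illustrative convention, applied to the answer of Example 1A:

```python
from collections import Counter

def side_totals(species):
    """Sum atom counts and total charge over one side of an equation.
    Each species is (formula_dict, charge, coefficient)."""
    atoms, charge = Counter(), 0
    for formula, q, coeff in species:
        for element, n in formula.items():
            atoms[element] += coeff * n
        charge += coeff * q
    return atoms, charge

# 10 I- + 16 H+ + 2 MnO4-  ->  5 I2 + 2 Mn2+ + 8 H2O
left = [({"I": 1}, -1, 10), ({"H": 1}, +1, 16), ({"Mn": 1, "O": 4}, -1, 2)]
right = [({"I": 2}, 0, 5), ({"Mn": 1}, +2, 2), ({"H": 2, "O": 1}, 0, 8)]

atoms_l, charge_l = side_totals(left)
atoms_r, charge_r = side_totals(right)
print(atoms_l == atoms_r, charge_l == charge_r)  # True True: atoms and charge (+4) both balance
```

The same check applied to an unbalanced candidate equation immediately flags which side is off.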
Balancing Redox Reactions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Balancing_Redox_reactions
Oxidation-Reduction Reactions, or redox reactions, are reactions in which one reactant is oxidized and one reactant is reduced simultaneously. This module demonstrates how to balance various redox equations.The first step in balancing any redox reaction is determining whether or not it is even an oxidation-reduction reaction. This requires that one (and typically more than one) species change oxidation states during the reaction. To maintain charge neutrality in the sample, the redox reaction will entail both a reduction component and an oxidation component. These are often separated into two independent hypothetical half-reactions to aid in understanding the reaction. This requires identifying which element is oxidized and which element is reduced. For example, consider this reaction:\[\ce{ Cu (s) + 2 Ag^+ (aq) \rightarrow Cu^{2+} (aq) + 2 Ag (s)} \nonumber \]The first step in determining whether the reaction is a redox reaction is to split the equation into two hypothetical half-reactions. Let's start with the half-reaction involving the copper atoms:\[\ce{ Cu (s) \rightarrow Cu^{2+}(aq)} \nonumber \]The oxidation state of copper on the left side is 0 because it is an element on its own. The oxidation state of copper on the right hand side of the equation is +2. The copper in this half-reaction is oxidized, as the oxidation state increases from 0 in \(\ce{Cu}\) to +2 in \(\ce{Cu^{2+}}\). Now consider the silver atoms:\[\ce{ 2 Ag^+ (aq) \rightarrow 2 Ag (s)} \nonumber \]In this half-reaction, the oxidation state of silver on the left side is +1. The oxidation state of silver on the right is 0 because it is a pure element. Because the oxidation state of silver decreases from +1 to 0, this is the reduction half-reaction.Consequently, this reaction is a redox reaction, as both reduction and oxidation half-reactions occur (via the transfer of electrons, which are not explicitly shown in the half-reactions above).
Once confirmed, it is often necessary to balance the reaction (though the copper–silver reaction above is already balanced); the procedure depends on whether the reaction takes place in neutral, acidic, or basic conditions.Balancing redox reactions is slightly more complex than balancing standard reactions, but still follows a relatively simple set of rules. One major difference is the necessity to know the half-reactions of the involved reactants; a half-reaction table is very useful for this. Half-reactions are often useful in that two half-reactions can be added to get a total net equation. Although the half-reactions must be known to complete a redox reaction, it is often possible to figure them out without having to use a half-reaction table. This is demonstrated in the acidic and basic solution examples. Besides the general rules for neutral conditions, additional rules must be applied for aqueous reactions in acidic or basic conditions.One method used to balance redox reactions is called the Half-Equation Method. In this method, the equation is separated into two half-equations; one for oxidation and one for reduction.Half-Equation Method to Balance Redox Reactions in Acidic Aqueous SolutionsEach reaction is balanced by adjusting coefficients and adding \(\ce{H2O}\), \(\ce{H^{+}}\), and \(\ce{e^{-}}\), in that order. The equation can then be checked to make sure that it is balanced.Half-Equation Method to Balance Redox Reactions in Basic Aqueous SolutionsIf the reaction is being balanced in a basic solution, the above steps are modified with the addition of one step between #3 and #4: (3b) Add the appropriate number of \(\ce{OH^{-}}\) ions to neutralize all \(\ce{H^{+}}\) and convert them into water molecules. The equation can then be checked to make sure that it is balanced.The first step to balance any redox reaction is to separate the reaction into half-reactions.
The substance being reduced will have electrons as reactants, and the oxidized substance will have electrons as products. (Usually all reactions are written as reduction reactions in half-reaction tables. To switch to oxidation, the whole equation is reversed and the voltage is multiplied by -1.) Sometimes it is necessary to determine which half-reaction will be oxidized and which will be reduced. In this case, whichever half-reaction has a higher reduction potential will be reduced and the other oxidized.Example \(\PageIndex{1}\): Balancing in a Neutral SolutionBalance the following reaction\[\ce{Cu^+(aq) + Fe(s) \rightarrow Fe^{3+} (aq) + Cu (s)} \nonumber\]SolutionStep 1: Separate the half-reactions. By searching for the reduction potential, one can find two separate reactions:\[\ce{Cu^+ (aq) + e^{-} \rightarrow Cu(s)} \nonumber\]and\[\ce{Fe^{3+} (aq) + 3e^{-} \rightarrow Fe(s)} \nonumber\]The copper reaction has a higher potential and thus is being reduced. Iron is being oxidized so the half-reaction should be flipped. This yields:\[\ce{Cu^+ (aq) + e^{-} \rightarrow Cu(s)} \nonumber\]and\[\ce{Fe (s) \rightarrow Fe^{3+}(aq) + 3e^{-}} \nonumber\]Step 2: Balance the electrons in the equations. In this case, the electrons are simply balanced by multiplying the entire \(Cu^+(aq) + e^{-} \rightarrow Cu(s)\) half-reaction by 3 and leaving the other half-reaction as it is. This gives:\[\ce{3Cu^+(aq) + 3e^{-} \rightarrow 3Cu(s)} \nonumber\]and\[\ce{Fe(s) \rightarrow Fe^{3+}(aq) + 3e^{-}} \nonumber\]Step 3: Adding the equations gives:\[\ce{3Cu^+(aq) + 3e^{-} + Fe(s) \rightarrow 3Cu(s) + Fe^{3+}(aq) + 3e^{-}} \nonumber\]The electrons cancel out and the balanced equation is left.\[\ce{3Cu^{+}(aq) + Fe(s) \rightarrow 3Cu(s) + Fe^{3+}(aq)} \nonumber\]Acidic conditions imply a solution with an excess of \(\ce{H^{+}}\) ions. The balancing starts by separating the reaction into half-reactions.
However, instead of immediately balancing the electrons, balance all the elements in the half-reactions that are not hydrogen and oxygen. Then, add \(\ce{H2O}\) molecules to balance any oxygen atoms. Next, balance the hydrogen atoms by adding protons (\(\ce{H^{+}}\)). Now, balance the charge by adding electrons and scale the electrons (multiply by the lowest common multiple) so that they will cancel out when added together. Finally, add the two half-reactions and cancel out common terms.Example \(\PageIndex{2}\): Balancing in an Acidic SolutionBalance the following redox reaction in acidic conditions.\[\ce{Cr_2O_7^{2-} (aq) + HNO_2 (aq) \rightarrow Cr^{3+}(aq) + NO_3^{-}(aq) } \nonumber\]SolutionStep 1: Separate the half-reactions. The table provided does not have acidic or basic half-reactions, so just write out what is known.\[\ce{Cr_2O_7^{2-}(aq) \rightarrow Cr^{3+}(aq) } \nonumber\]\[\ce{HNO_2 (aq) \rightarrow NO_3^{-}(aq)} \nonumber\]Step 2: Balance elements other than O and H. In this example, only chromium needs to be balanced. This gives:\[\ce{Cr_2O_7^{2-}(aq) \rightarrow 2Cr^{3+}(aq)} \nonumber\]and\[\ce{HNO_2(aq) \rightarrow NO_3^{-}(aq)} \nonumber\]Step 3: Add H2O to balance oxygen. The chromium reaction needs to be balanced by adding 7 \(\ce{H2O}\) molecules. The other reaction also needs to be balanced by adding one water molecule. This yields:\[\ce{Cr_2O_7^{2-} (aq) \rightarrow 2Cr^{3+} (aq) + 7H_2O(l)} \nonumber\]and\[\ce{HNO_2(aq) + H_2O(l) \rightarrow NO_3^{-}(aq) } \nonumber\]Step 4: Balance hydrogen by adding protons (H+). 14 protons need to be added to the left side of the chromium reaction to balance the 14 (2 per water molecule * 7 water molecules) hydrogens. 3 protons need to be added to the right side of the other reaction.\[\ce{14H^+(aq) + Cr_2O_7^{2-}(aq) \rightarrow 2Cr^{3+} (aq) + 7H_2O(l)} \nonumber\]and\[\ce{HNO_2 (aq) + H2O (l) \rightarrow 3H^+(aq) + NO_3^{-}(aq)} \nonumber\]Step 5: Balance the charge of each equation with electrons.
The chromium reaction has (14+) + (2-) = 12+ on the left side and (2 * 3+) = 6+ on the right side. To balance, add 6 electrons (each with a charge of -1) to the left side:\[\ce{6e^{-} + 14H^+(aq) + Cr_2O_7^{2-}(aq) \rightarrow 2Cr^{3+}(aq) + 7H_2O(l)} \nonumber\]For the other reaction, there is no charge on the left and a (3+) + (-1) = 2+ charge on the right. So add 2 electrons to the right side:\[\ce{HNO_2(aq) + H_2O(l) \rightarrow 3H^+(aq) + NO_3^{-}(aq) + 2e^{-}} \nonumber\]Step 6: Scale the reactions so that the electrons are equal. The chromium reaction has 6e- and the other reaction has 2e-, so it should be multiplied by 3. This gives:\[\ce{6e^{-} + 14H^+(aq) + Cr_2O_7^{2-} (aq) \rightarrow 2Cr^{3+} (aq) + 7H_2O(l).} \nonumber\]and\[ \begin{align*} 3 \times \big[ \ce{HNO2 (aq) + H2O(l)} &\rightarrow \ce{3H^{+}(aq) + NO3^{-} (aq) + 2e^{-}} \big] \\[4pt] \ce{3HNO2 (aq) + 3H_2O (l)} &\rightarrow \ce{9H^{+}(aq) + 3NO_3^{-}(aq) + 6e^{-}} \end{align*} \]Step 7: Add the reactions and cancel out common terms.\[\begin{align*} \ce{3HNO_2 (aq) + 3H_2O (l)} &\rightarrow \ce{9H^+(aq) + 3NO_3^{-}(aq) + 6e^{-} } \\[4pt] \ce{6e^{-} + 14H^+(aq) + Cr_2O_7^{2-}(aq)} &\rightarrow \ce{2Cr^{3+}(aq) + 7H_2O(l)} \\[4pt] \hline \ce{3HNO_2 (aq)} + \cancel{\ce{3H_2O (l)}} + \cancel{6e^{-}} + \ce{14H^+(aq) + Cr_2O_7^{2-} (aq)} &\rightarrow \ce{9H^+(aq) + 3NO_3^{-}(aq)} + \cancel{6e^{-}} + \ce{2Cr^{3+}(aq)} + \cancelto{4}{7}\ce{H_2O(l)} \end{align*}\]The electrons cancel out as well as 3 water molecules and 9 protons. This leaves the balanced net reaction of:\[\ce{3HNO_2(aq) + 5H^+(aq) + Cr_2O_7^{2-} (aq) \rightarrow 3NO_3^{-}(aq) + 2Cr^{3+}(aq) + 4H_2O(l)} \nonumber\]Bases dissociate into \(\ce{OH^{-}}\) ions in solution; hence, balancing redox reactions in basic conditions requires \(\ce{OH^{-}}\). Follow the same steps as for acidic conditions. The only difference is adding hydroxide ions to each side of the net reaction to balance any \(\ce{H^{+}}\).
\(\ce{OH^{-}}\) and \(\ce{H^{+}}\) ions on the same side of a reaction should be added together to form water. Again, any common terms can be canceled out.

Example \(\PageIndex{1}\): Balancing in Basic Solution

Balance the following redox reaction in basic conditions.\[\ce{ Ag(s) + Zn^{2+}(aq) \rightarrow Ag_2O(aq) + Zn(s)} \nonumber\]

Solution

Go through all the same steps as if it were in acidic conditions.

Step 1: Separate the half-reactions.\[\ce{Ag (s) \rightarrow Ag_2O (aq)} \nonumber\]\[\ce{Zn^{2+} (aq) \rightarrow Zn (s)} \nonumber\]

Step 2: Balance elements other than O and H.\[\ce{ 2Ag (s) \rightarrow Ag_2O (aq)} \nonumber\]\[\ce{Zn^{2+} (aq) \rightarrow Zn (s)} \nonumber\]

Step 3: Add H2O to balance oxygen.\[\ce{H_2O(l) + 2Ag(s) \rightarrow Ag_2O(aq)} \nonumber\]\[\ce{Zn^{2+}(aq) \rightarrow Zn(s)} \nonumber\]

Step 4: Balance hydrogen with protons.\[\ce{H_2O (l) + 2Ag (s) \rightarrow Ag_2O (aq) + 2H^+ (aq)} \nonumber\]\[\ce{Zn^{2+} (aq) \rightarrow Zn (s)} \nonumber\]

Step 5: Balance the charge with e-.\[\ce{H_2O (l) + 2Ag (s) \rightarrow Ag_2O (aq) + 2H^+ (aq) + 2e^{-}} \nonumber\]\[\ce{Zn^{2+} (aq) + 2e^{-} \rightarrow Zn (s)} \nonumber\]

Step 6: Scale the reactions so that they have an equal amount of electrons. In this case, it is already done.

Step 7: Add the reactions and cancel the electrons.\[\ce{H_2O(l) + 2Ag(s) + Zn^{2+}(aq) \rightarrow Zn(s) + Ag_2O(aq) + 2H^+(aq)} \nonumber\]

Step 8: Add OH- to balance H+.
There are 2 net protons in this equation, so add 2 OH- ions to each side.\[\ce{H_2O(l) + 2Ag(s) + Zn^{2+}(aq) + 2OH^{-}(aq) \rightarrow Zn(s) + Ag_2O(aq) + 2H^+(aq) + 2OH^{-}(aq)} \nonumber\]

Step 9: Combine OH- ions and H+ ions that are present on the same side to form water.\[\ce{\cancel{H2O(l)} + 2Ag(s) + Zn^{2+}(aq) + 2OH^{-}(aq) -> Zn(s) + Ag_2O(aq) + \cancel{2}H_2O(l)} \nonumber\]

Step 10: Cancel common terms.\[\ce{2Ag(s) + Zn^{2+}(aq) + 2OH^{-} (aq) \rightarrow Zn(s) + Ag_2O(aq) + H_2O(l)} \nonumber\]

Balancing Redox Reactions is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Ann Nguyen & Luvleen Brar.
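As a closing sanity check (not part of the original text), a few lines of Python confirm that both net equations derived above are balanced in atoms and charge:

```python
# Check that the two net reactions derived above balance in atoms and charge.
from collections import Counter

def totals(side):
    """side: list of (coefficient, element-count dict, charge); returns (atom totals, net charge)."""
    atoms, charge = Counter(), 0
    for coeff, counts, q in side:
        for element, n in counts.items():
            atoms[element] += coeff * n
        charge += coeff * q
    return atoms, charge

# Acidic example: 3HNO2 + 5H+ + Cr2O7^2- -> 3NO3- + 2Cr3+ + 4H2O
acid_lhs = [(3, {"H": 1, "N": 1, "O": 2}, 0), (5, {"H": 1}, 1), (1, {"Cr": 2, "O": 7}, -2)]
acid_rhs = [(3, {"N": 1, "O": 3}, -1), (2, {"Cr": 1}, 3), (4, {"H": 2, "O": 1}, 0)]
assert totals(acid_lhs) == totals(acid_rhs)   # atoms match; net charge is +3 on both sides

# Basic example: 2Ag + Zn2+ + 2OH- -> Zn + Ag2O + H2O
base_lhs = [(2, {"Ag": 1}, 0), (1, {"Zn": 1}, 2), (2, {"O": 1, "H": 1}, -1)]
base_rhs = [(1, {"Zn": 1}, 0), (1, {"Ag": 2, "O": 1}, 0), (1, {"H": 2, "O": 1}, 0)]
assert totals(base_lhs) == totals(base_rhs)   # atoms match; net charge is 0 on both sides
```

The same bookkeeping works for any balanced equation: both sides must give identical atom counts and the same net charge.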
Electrochemistry Basics
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Basics_of_Electrochemistry
Electrochemistry is the study of chemical processes that cause electrons to move. This movement of electrons is called electricity, which can be generated by the movement of electrons from one element to another in a reaction known as an oxidation-reduction ("redox") reaction. A redox reaction is a reaction that involves a change in the oxidation state of one or more elements. When a substance loses an electron, its oxidation state increases; thus, it is oxidized. When a substance gains an electron, its oxidation state decreases; thus, it is reduced. For example, the redox reaction\[ \ce{H_2 + F_2 → 2 HF} \nonumber\]can be rewritten as follows:\[ \ce{H_2 \rightarrow 2H^+ + 2e^{-}} \nonumber\]\[\ce{F_2 + 2e^{-} \rightarrow 2F^{-}} \nonumber\]\[ \ce{H_2 + F_2 \rightarrow 2H^{+} + 2F^{-}} \nonumber\]Oxidation is the loss of electrons, whereas reduction refers to the acquisition of electrons, as illustrated in the respective reactions above. The species being oxidized is also known as the reducing agent or reductant, and the species being reduced is called the oxidizing agent or oxidant. In this case, H2 is being oxidized (and is the reducing agent), while F2 is being reduced (and is the oxidizing agent).

"OIL RIG" is a useful mnemonic for remembering the definitions of oxidation and reduction: Oxidation Is Losing electrons; Reduction Is Gaining electrons.

Example \(\PageIndex{1}\): Iron-Vanadium Couple

Given the redox reaction\[ \ce{Fe^{3+} + V^{2+}→ Fe^{2+} + V^{3+}} \nonumber\]which species is oxidized? Which is reduced? Identify the reducing agent and the oxidizing agent.

Solution

\(\ce{Fe^{3+}}\) is reduced to \(\ce{Fe^{2+}}\) and \(\ce{V^{2+}}\) is oxidized to \(\ce{V^{3+}}\). This is because the oxidized species loses electrons, and the reduced species gains electrons. Iron gains an electron\[\ce{Fe^{3+}→ Fe^{2+}} \nonumber\]and vanadium loses an electron\[\ce{V^{2+}→ V^{3+}}. 
\nonumber\]Thus, \(\ce{Fe^{3+}}\) is the oxidizing agent and \(\ce{V^{2+}}\) is the reducing agent.

Rules for Assigning Oxidation States

Exercise \(\PageIndex{1}\)

What is the oxidation state of magnesium in \(\ce{MgF2}\)?

Using rules 5 and 7: \(\ce{MgF2}\) has a total charge of 0, and Total Charge = (+2) + (-1 * 2) = 0, so magnesium is +2.

Exercise \(\PageIndex{2}\)

What is the oxidation state of hydrogen in \(\ce{H2O}\)?

Using rules 6 and 8: \(\ce{H2O}\) has a total charge of 0, and Total Charge = (+1 * 2) + (-2) = 0, so hydrogen is +1.

Method 1: Oxidation Number Method

Example \(\PageIndex{2}\): Manganese

Balance the following reaction in an acidic aqueous solution:\[2 \times \left(\ce{5e^{-} + 8H^+ + MnO_4^{-} \rightarrow Mn^{2+} + 4H_2O}\right) \nonumber\]\[5 \times \left( \ce{H_2C_2O_4 \rightarrow 2CO_2 + 2H^{+} + 2e^{-}}\right) \nonumber\]Combining and canceling gives the following:\[\cancel{\ce{10e^{-}}} + \ce{16H^{+} + 2MnO_4^{1-} + 5H_2C_2O_4 \rightarrow 2Mn^{2+} + 8H_2O + 10 CO_2 + 10H^{+}} + \cancel{\ce{10e^{-}}} \nonumber\]

Answer\[\ce{6H^{+} + 2MnO_4^{1-} + 5H_2C_2O_4 \rightarrow 2Mn^{2+} + 8H_2O + 10 CO_2} \nonumber\]

In 1793, Alessandro Volta discovered that electricity could be produced by placing different metals on the opposite sides of a wet paper or cloth. He made his first battery by placing Ag and Zn on the opposite sides of a cloth moistened with a salt or weak-acid solution. Therefore, these batteries acquired the name voltaic cells. Voltaic (galvanic) cells are electrochemical cells that contain a spontaneous reaction, and always have a positive voltage. The electrical energy released during the reaction can be used to do work. A voltaic cell consists of two compartments called half-cells. The half-cell where oxidation occurs is called the anode. The other half-cell, where reduction occurs, is called the cathode. The electrons in voltaic cells flow from the negative electrode to the positive electrode, from anode to cathode (see figure below). (Note: the electrodes are the sites of the oxidation and reduction reactions.)
The following acronym is useful in keeping this information straight: "Red Cat and An Ox" (Reduction at the Cathode; the Anode is for Oxidation).

For an oxidation-reduction reaction to occur, the two substances in each respective half-cell are connected by a closed circuit such that electrons can flow from the reducing agent to the oxidizing agent. A salt bridge is also required to maintain electrical neutrality and allow the reaction to continue. The figure above shows that \(\ce{Zn(s)}\) is continuously oxidized, producing aqueous \(\ce{Zn^{2+}}\):\[\ce{Zn(s) \rightarrow Zn^{2+}(aq) + 2e^{-}}\]Conversely, at the cathode, \(\ce{Cu^{2+}}\) is reduced and continuously deposits onto the copper bar:\[\ce{Cu^{2+} (aq) + 2e^{-} \rightarrow Cu(s)}\]As a result, the solution containing \(\ce{Zn(s)}\) becomes more positively charged as the solution containing \(\ce{Cu(s)}\) becomes more negatively charged. For the voltaic cell to work, the solutions in the two half-cells must remain electrically neutral. Therefore, a salt bridge containing KNO3 is added to keep the solutions neutral by adding NO3-, an anion, to the anode solution and \(\ce{K^{+}}\), a cation, to the cathode solution. As oxidation and reduction proceed, ions from the salt bridge migrate to prevent charge buildup in the cell compartments.

The cell diagram (or cell notation) is a shorthand notation to represent the redox reactions of an electrical cell. For the cell described, the cell diagram is as follows:\[\ce{Zn(s) | Zn^{2+} (aq) || Cu^{2+} (aq) | Cu(s)}\]Two ions present in the same half-cell are listed together, separated by a comma, as in:\[ \ce{Fe^{2+} (aq), Fe^{3+} (aq) || Ag^{+} (aq) | Ag(s)}\]

A voltaic cell works by the different reactivities of metal ions and does not require an external battery source. (Image taken at Hope College as part of their General Chemistry Lab curriculum.) The figure above shows a set of electrochemical half-cells that can be used to measure various voltages within galvanic cells. The cells shown are made of agar saturated with KCl solution so as to act as a salt bridge.
The zinc electrode in the middle can be used as a reference while the various concentrations of copper (labeled 1, 2, 3, 4 and 5) can be tested to form a calibration curve. The potential of the unknown can then be used to determine the concentration of an unknown copper solution. This application of the Nernst equation allows for rapid data collection without the need for a complicated salt bridge apparatus.

Example \(\PageIndex{3}\): Copper-Silver Reaction

Write the cell diagram for this reaction:

Solution\[\ce{Cu(s) | Cu^{2+}(aq) || Ag^{+}(aq) | Ag(s)} \nonumber\]

Example \(\PageIndex{4}\): Aluminum-Tin Reaction

Write cell reactions for this cell diagram:

Oxidation: {Al(s) → Al3+(aq) + 3e-} x 2

The oxidation of Zn(s) into Zn2+ and the reduction of Cu2+ to Cu(s) occur spontaneously. In other words, the redox reaction between Zn and Cu2+ is spontaneous. This is due to the difference in potential energy between the two substances. The difference in potential energy between the anode and cathode dictates the direction of electronic movement. Electrons move from areas of higher potential energy to areas of lower potential energy. In this case, the anode has a higher potential energy; electrons therefore move from anode to cathode. The potential difference between the two electrodes is measured in units of volts. One volt (V) is the potential difference required to impart 1 joule (J) of energy to a charge of 1 coulomb (C).

For a voltaic cell, this potential difference is called the cell potential (or EMF for electromotive force, although it is not really a force), which is denoted Ecell. For a spontaneous reaction, Ecell is positive and ΔG (the Gibbs free energy change, used to determine whether a reaction occurs spontaneously) is negative.
Merging electrochemistry with thermodynamics gives this formula:\[\Delta G = -n F E_{cell} \]Cell potential is different for each voltaic cell; its value depends upon the concentrations of specific reactants and products as well as the temperature of the reaction. For the standard cell potential, the temperature of the reaction is assumed to be 25 °C, the concentration of the reactants and products is 1 M, and the reaction occurs at 1 atm pressure. The standard cell potential is denoted Eocell, and can be expressed as the sum of the oxidation potential of the anode and the reduction potential of the cathode. For voltaic cells:\[E^o_{cell}=E^o_{cathode}-E^o_{Anode} \label{Celldef}\]

Warning: to use Equation \ref{Celldef}, the cell potentials must be in reduction form.

Since standard potentials are given in the form of the standard reduction potential for each half-reaction, to calculate the standard cell potential \(E^o_{cell}\), the substance being oxidized must be identified; then the standard reduction potential of the oxidation reaction is subtracted from the standard reduction potential of the reduction reaction.

Example \(\PageIndex{5}\)

What is the cell potential for the following reaction?\[\ce{Zn(s) + Cu^{2+} (aq) \rightarrow Zn^{2+}(aq) + Cu(s)} \nonumber\]

Solution

\(\ce{Zn(s)}\) is being oxidized, and \(\ce{Cu^{2+}}\) is being reduced. The potentials for the two half-reactions are given in the reduction form:\[\ce{Zn(s) \rightarrow Zn^{2+}(aq) + 2e^{-}} \nonumber\]\[\ce{Cu^{2+} (aq) + 2e^{-} \rightarrow Cu (s)} \nonumber\]The cell potentials indicate which reaction takes place at the anode and which at the cathode.
The half-reaction with the more positive reduction potential occurs at the cathode. To calculate \(E^o_{cell}\), subtract the \(E^o\) of the oxidation half-reaction from the \(E^o\) of the reduction half-reaction, using Equation \ref{Celldef}:\[E^o_{cell}=E^o_{cathode}-E^o_{anode} \nonumber\]

Oxidation half-reaction: Eo = -0.763 V\[\ce{Zn(s) \rightarrow Zn^{2+}(aq) + 2e^{-}} \nonumber\]Reduction half-reaction: Eo = +0.340 V\[\ce{Cu^{2+}(aq) + 2e^{-} \rightarrow Cu(s)} \nonumber \]Net reaction:\[\ce{Zn(s) + Cu^{2+}(aq) \rightarrow Zn^{2+}(aq) + Cu(s)} \nonumber\]Therefore:\[E^o_{cell}=E^o_{cathode}-E^o_{anode}=0.340\; V - (-0.763\; V)= +1.103 \; V \nonumber\]

Exercise \(\PageIndex{6}\)

Calculate Eocell for the following redox reaction under standard conditions:\[\ce{2Al (s) + 3Sn^{2+} (aq) \rightarrow 2Al^{3+} (aq) + 3Sn(s)} \nonumber\]

Oxidation: \[\{\ce{Al(s) → Al^{3+} (aq) +3e^{-}}\} \times 2 \,\,\, -E^o = +1.676\,V \nonumber\]Reduction: \[\{\ce{Sn^{2+}(aq) + 2e^{-} → Sn(s)}\} \times 3 \,\,\, E^o = -0.137\,V \nonumber\]Net reaction: \[\ce{2Al(s) + 3Sn^{2+}(aq) → 2Al^{3+}(aq) + 3Sn(s)} \nonumber\]Then, using Equation \ref{Celldef}:\[\begin{align*} E^o_{cell} &= -0.137\,V - (-1.676\,V) \\[4pt] &= +1.539 \,V \end{align*}\]

Standard reduction potential is an intensive property, meaning that changing the stoichiometric coefficient in a half-reaction does not affect the value of the standard potential.
For example:

Oxidation: {Al(s) → Al3+(aq) + 3e-} x 2 still has Eo = -1.676 V

Reduction: {Sn2+(aq) + 2e- → Sn(s)} x 3 still has Eo = -0.137 V

Even though the stoichiometric coefficients are multiplied, the standard potentials do not change.

Example \(\PageIndex{7}\): Iron/Vanadium Chemistry

Calculate the cell potential in the following redox reaction under standard conditions:\[\ce{ Fe^{3+} (aq) + V^{2+} (aq) \rightarrow Fe^{2+} (aq) + V^{3+}(aq)} \nonumber\]

Solution

Consult the table of standard reduction potentials (Table P1) for each half-reaction:\[Fe^{3+}_{(aq)}+e^- \rightarrow Fe^{2+}_{(aq)} \;\;\;\; \text{with } E^o=0.771\; V \nonumber\]\[V^{2+}_{(aq)} \rightarrow V^{3+}_{(aq)} + e^- \;\;\;\; \text{with } E^o=-0.255\; V \nonumber\]The cell potential is\[E^o_{cell}=E^o_{cathode}-E^o_{anode}=0.771\; V -(-0.255\; V)=1.026 \; V \nonumber\]

Electrochemistry Basics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
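A minimal numeric sketch (not from the source) ties together Equation \ref{Celldef} and \(\Delta G = -nFE_{cell}\), using the reduction potentials quoted in the examples above:

```python
# Standard reduction potentials (V vs. SHE) quoted in the examples above.
E_RED = {
    "Cu2+/Cu": 0.340,
    "Zn2+/Zn": -0.763,
    "Fe3+/Fe2+": 0.771,
    "V3+/V2+": -0.255,
    "Al3+/Al": -1.676,
    "Sn2+/Sn": -0.137,
}
F = 96485  # Faraday constant, C per mole of electrons

def e_cell(cathode, anode):
    """E°cell = E°cathode - E°anode (both entered as reduction potentials)."""
    return E_RED[cathode] - E_RED[anode]

def delta_G(n, E):
    """ΔG = -nFE, in J per mole of reaction; negative means spontaneous."""
    return -n * F * E

print(round(e_cell("Cu2+/Cu", "Zn2+/Zn"), 3))    # 1.103 V (Zn/Cu cell)
print(round(e_cell("Fe3+/Fe2+", "V3+/V2+"), 3))  # 1.026 V (Example 7)
print(round(e_cell("Sn2+/Sn", "Al3+/Al"), 3))    # 1.539 V (Exercise 6)
print(round(delta_G(2, 1.103) / 1000, 1))        # -212.8 kJ/mol for the Zn/Cu cell
```

Note the intensive-property point from the text: `e_cell` never multiplies a potential by a stoichiometric coefficient; only `n` in `delta_G` reflects the amount of reaction.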
Batteries: Electricity though chemical reactions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Exemplars/Batteries%3A_Electricity_though_chemical_reactions
Batteries consist of one or more electrochemical cells that store chemical energy for later conversion to electrical energy. Batteries are used in many day-to-day devices such as cellular phones, laptop computers, clocks, and cars. Batteries are composed of at least one electrochemical cell which is used for the storage and generation of electricity. Though a variety of electrochemical cells exist, batteries generally consist of at least one voltaic cell. Voltaic cells are also sometimes referred to as galvanic cells. Chemical reactions and the generation of electrical energy are spontaneous within a voltaic cell, as opposed to the reactions in electrolytic cells and fuel cells.

It was while conducting experiments on electricity in 1749 that Benjamin Franklin first coined the term "battery" to describe linked capacitors. However, his battery was not the first battery, just the first ever referred to as such. Rather, it is believed that the Baghdad Batteries, discovered in 1936 and over 2,000 years old, were some of the first batteries ever made, though their exact purpose is still debated. Luigi Galvani (for whom the galvanic cell is named) first described "animal electricity" in 1780 when he passed an electrical current through a frog. Though he was not aware of it at the time, this was a form of a battery. His contemporary Alessandro Volta (for whom the voltaic cell and voltaic pile are named) was convinced that the "animal electricity" was not coming from the frog, but something else entirely. In 1800, he produced the first real battery: the voltaic pile. In 1836, John Frederic Daniell created the Daniell cell when researching ways to overcome some of the problems associated with Volta's voltaic pile. This discovery was followed by developments of the Grove cell by William Robert Grove in 1844; the first rechargeable battery, made of a lead-acid cell, in 1859 by Gaston Plante; the gravity cell by Callaud in the 1860s; and the Leclanche cell by Georges Leclanche in 1866.
Until this point, all batteries were wet cells. Then in 1887 Carl Gassner created the first dry cell battery, made of a zinc-carbon cell. The nickel-cadmium battery was introduced in 1899 by Waldemar Jungner along with the nickel-iron battery. However, Jungner failed to patent the nickel-iron battery, and in 1903 Thomas Edison patented a slightly modified design for himself. A major breakthrough came in 1955 when Lewis Urry, an employee of what is now known as Energizer, introduced the common alkaline battery. The 1970s led to the nickel-hydrogen battery and the 1980s to the nickel metal-hydride battery. Lithium batteries were first created as early as 1912; however, the most successful type, the lithium ion polymer battery used in most portable electronics today, was not released until 1996.

Voltaic cells are composed of two half-cell reactions (oxidation-reduction) linked together via a semipermeable membrane (generally a salt bath) and a wire. Each side of the cell contains a metal that acts as an electrode. One of the electrodes is termed the cathode, and the other is termed the anode. Reduction occurs at the cathode, which gains electrons; the species reduced there acts as the oxidizing agent for the anode. Oxidation occurs at the anode, which loses electrons; the species oxidized there acts as the reducing agent for the cathode. The two electrodes are each submerged in an electrolyte, a compound that consists of ions. The electrolyte allows ions to migrate between the two half-cells, maintaining charge balance and thereby facilitating the transfer of electrons through the wire. This movement of electrons is what produces energy and is used to power the battery. The cell is separated into two compartments because the chemical reaction is spontaneous. If the reaction were to occur without this separation, energy in the form of heat would be released and the battery would not be effective.

A Zinc-Copper voltaic cell.
The voltaic cell is providing the electricity needed to power the light bulb.

Primary versus secondary batteries: primary batteries (left) are non-rechargeable and disposable; secondary batteries (right) are rechargeable, like this cellular phone battery.

Primary batteries are non-rechargeable and disposable. The electrochemical reactions in these batteries are non-reversible. The materials in the electrodes are completely utilized and therefore cannot regenerate electricity. Primary batteries are often used when long periods of storage are required, as they have a much lower discharge rate than secondary batteries. Use of primary batteries is exemplified by smoke detectors, flashlights, and most remote controls.

Secondary batteries are rechargeable. These batteries undergo electrochemical reactions that can be readily reversed. The chemical reactions that occur in secondary batteries are reversible because the components that react are not completely used up. Rechargeable batteries need an external electrical source to recharge them after they have expended their energy. Use of secondary batteries is exemplified by car batteries and portable electronic devices.

Wet cell batteries contain a liquid electrolyte. They can be either primary or secondary batteries. Due to the liquid nature of wet cells, insulator sheets are used to separate the anode and the cathode. Types of wet cells include Daniell cells, Leclanche cells (whose chemistry was later adapted for dry cells), Bunsen cells, Weston cells, chromic acid cells, and Grove cells. The lead-acid cells in automobile batteries are wet cells.

A lead-acid battery in an automobile.

In dry cell batteries, no free liquid is present. Instead, the electrolyte is a paste, just moist enough to allow current flow. This allows the dry cell battery to be operated in any position without worrying about spilling its contents.
This is why dry cell batteries are commonly used in products which are frequently moved around and inverted, such as portable electronic devices. Dry cell batteries can be either primary or secondary batteries. The most common dry cell battery is the Leclanche cell.

The capacity of a battery depends directly on the quantity of electrode and electrolyte material inside the cell. Primary batteries can lose around 8% to 20% of their charge over the course of a year without any use. This is caused by side chemical reactions that do not produce current. The rate of side reactions can be slowed by lowering the temperature. Warmer temperatures can also lower the performance of the battery by speeding up the side chemical reactions. Primary batteries become polarized with use. This is when hydrogen accumulates at the cathode, reducing the battery's effectiveness. Depolarizers can be used to remove this buildup of hydrogen. Secondary batteries self-discharge even more rapidly. They usually lose about 10% of their charge each month. Rechargeable batteries gradually lose capacity after every recharge cycle due to deterioration. This is caused by active materials falling off the electrodes or electrolytes moving away from the electrodes.

Peukert's law can be used to approximate relationships between current, capacity, and discharge time. This is represented by the equation\[t = \dfrac{Q_p}{I^k}\]where I is the current, k is a constant of about 1.3, t is the time the battery can sustain the current, and Qp is the capacity when discharged at a rate of 1 amp.

There is a significant correlation between a cell's current and voltage. Current, as the name implies, is the flow of electrical charge. Voltage is the electrical potential difference that drives that flow through the system. The waterwheel analogy below illustrates the difference between current and voltage: water is flowing out of a hose and onto a waterwheel, turning it.
Current can be thought of as the amount of water flowing through the hose. Voltage can be thought of as the pressure or strength of the water flowing through the hose. The first hose does not have much water flowing through it and also lacks pressure, and is consequently unable to turn the waterwheel very effectively. The second hose has a significant amount of water flowing through it, so it has a large amount of current. The third hose does not have as much water flowing through it, but does have something blocking much of the hose. This increases the pressure of the water flowing out of the hose, giving it a large voltage and allowing the water to hit the waterwheel with more force than the first hose.

Standard reduction potential, Eo, is a measurement of voltage. Standard reduction potential can be calculated with the knowledge that it is the difference in energy potentials between the cathode and the anode: Eocell = Eocathode − Eoanode. For standard conditions, the electrode potentials for the half-cells can be determined by using a table of standard electrode potentials. For nonstandard conditions, determining the electrode potential for the cathode and the anode is not as simple as looking at a table. Instead, the Nernst equation must be used to determine E for each half-cell:\[E = E^o - \dfrac{RT}{nF}\ln Q\]where R is the universal gas constant (8.314 J K-1 mol-1), T is the temperature in Kelvin, n is the number of moles of electrons transferred in the half-reaction, F is the Faraday constant (9.648 x 104 C mol-1), and Q is the reaction quotient.

Batteries vary both in size and voltage due to the chemical properties and contents within the cell. However, batteries of different sizes may have the same voltage. The reason for this phenomenon is that the standard cell potential does not depend on the size of a battery but rather on its internal contents. Therefore, batteries of different sizes can have the same voltage.
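The Nernst correction can be sketched numerically with the constants quoted above; the example cell and concentration ratio below are illustrative assumptions, not values from the text:

```python
import math

R = 8.314    # universal gas constant, J K^-1 mol^-1
F = 9.648e4  # Faraday constant, C mol^-1

def nernst(E_standard, n, Q, T=298.15):
    """Potential at nonstandard concentrations: E = E° - (RT/nF) ln Q."""
    return E_standard - (R * T / (n * F)) * math.log(Q)

# Assumed example: a Zn/Cu cell (E° = 1.103 V, n = 2) with [Zn2+]/[Cu2+] = 10.
print(round(nernst(1.103, n=2, Q=10), 3))  # 1.073 V: slightly below the standard value
```

At `Q = 1` the logarithm vanishes and the equation reduces to the standard potential, as expected.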
Additionally, there are ways in which batteries can amplify their voltage and current. When batteries are connected in series, their voltages add; when batteries are connected in parallel, the available current increases.

Four batteries of different sizes, all rated at 1.5 V.

Batteries can explode through misuse or malfunction. By attempting to overcharge a rechargeable battery or charging it at an excessive rate, gases can build up in the battery and potentially cause a rupture. A short circuit can also lead to an explosion. A battery placed in a fire can also lead to an explosion as steam builds up inside the battery. Leakage is also a concern, because chemicals inside batteries can be dangerous and damaging. Leakage emitted from the batteries can ruin the device they are housed in, and is dangerous to handle.

There are numerous environmental concerns with the widespread use of batteries. The production of batteries consumes many resources and involves the handling of many dangerous chemicals. Used batteries are often improperly disposed of and contribute to electronic waste. The materials inside batteries can potentially be toxic pollutants, making improper disposal especially dangerous. Through electronic recycling programs, toxic metals such as lead and mercury are kept from entering and harming the environment. Swallowing batteries is harmful and can lead to death.

Any liquid or moist object that has enough ions to be electrically conductive can be used to make a battery. It is even possible to generate small amounts of electricity by inserting electrodes of different metals into potatoes, lemons, bananas, or carbonated cola. A voltaic pile can be created using two coins and a paper dipped in salt water. Stacking multiple coins in series results in an increase in voltage.

3.
Determine the standard electrode potential of a voltaic cell within a Leclanche (dry) cell with half-cell voltages of 0.875 V at the graphite cathode and 0.253 V at the zinc anode.

4. Determine the standard electrode potential with given half-cell voltages of 0.987 V at the cathode and 0.632 V at the anode.

5. Explain why rechargeable batteries might be advantageous over disposable batteries.

Answers:

3. E0cell = E0(cathode) - E0(anode) = 0.875 V - 0.253 V = 0.622 V

4. E0cell = E0(cathode) - E0(anode) = 0.987 V - 0.632 V = 0.355 V

5. Even though disposable batteries are cheaper initially and easier to make, the longer lifespan of rechargeable batteries is often more efficient and useful. Rechargeable batteries mean less waste, as fewer batteries need to be made and fewer are disposed of in landfills or through recycling programs. Rechargeable batteries are also more convenient, as changing batteries is no longer required. This is especially beneficial in portable electronic devices. Also, because the components in a secondary cell are reusable, rechargeable batteries will generally cost less than disposable batteries in the long run.

Batteries: Electricity though chemical reactions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
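Peukert's law from the capacity discussion above can be sketched in a few lines of Python; the k ≈ 1.3 constant and the 1 A reference rate follow the text, while the 100 Ah capacity is an illustrative assumption:

```python
def discharge_time(Q_p, I, k=1.3):
    """Peukert's law, t = Q_p / I**k.
    Q_p: capacity (amp-hours) when discharged at 1 A; I: current in amps;
    k: Peukert constant (~1.3 for many cells). Returns time in hours."""
    return Q_p / I ** k

print(discharge_time(100, 1))             # 100.0 h at the 1 A reference rate
print(round(discharge_time(100, 10), 2))  # 5.01 h, not 10 h: heavy draw wastes capacity
```

The exponent k > 1 is what makes the relationship nonlinear: drawing ten times the current yields far less than a tenth of the runtime.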
Capillary Electrophoresis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Capillary_Electrophoresis
Capillary electrophoresis is an analytical technique that separates ions based on their electrophoretic mobility with the use of an applied voltage. The electrophoretic mobility is dependent upon the charge of the molecule, the viscosity of the solution, and the ion's radius. The rate at which the particle moves is directly proportional to the applied electric field: the greater the field strength, the faster the mobility. Neutral species are not affected; only ions move with the electric field. If two ions are the same size, the one with the greater charge will move faster. For ions of the same charge, the smaller particle has less friction and an overall faster migration rate. Capillary electrophoresis is used predominantly because it gives faster results and provides high-resolution separation. It is a useful technique because there is a large range of detection methods available.1

Endeavors in capillary electrophoresis (CE) began as early as the late 1800s. Experiments began with the use of glass U tubes and trials of both gel and free solutions.1 In 1930, Arne Tiselius first showed the capability of electrophoresis in an experiment that showed the separation of proteins in free solutions.2 His work had gone unnoticed until Hjerten introduced the use of capillaries in the 1960s. However, their establishments were not widely recognized until Jorgenson and Lukacs published papers showing the ability of capillary electrophoresis to perform separations that seemed unachievable. Employing a capillary in electrophoresis solved some common problems in traditional electrophoresis. For example, the thin dimensions of the capillaries greatly increased the surface-to-volume ratio, which eliminated overheating by high voltages.
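The surface-to-volume argument can be made concrete: for a cylindrical channel the ratio of wall area to volume is 2/r, so shrinking the bore directly improves heat dissipation. The radii below are illustrative assumptions, not values from the text:

```python
# Surface-to-volume ratio of a cylindrical channel (side wall only): SA/V = 2/r.
def surface_to_volume(radius_m):
    return 2 / radius_m

capillary = surface_to_volume(25e-6)  # an assumed 50 um-bore capillary
u_tube = surface_to_volume(2.5e-3)    # an assumed 5 mm-bore glass U tube
print(round(capillary / u_tube))      # 100: ~100x more wall area per unit volume
```

With roughly 100 times more wall area per unit of Joule-heated volume, the capillary sheds heat far faster, which is why much higher voltages become practical.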
The increased efficiency and the amazing separating capabilities of capillary electrophoresis spurred a growing interest among the scientific community to execute further developments in the technique.

A typical capillary electrophoresis system consists of a high-voltage power supply, a sample introduction system, a capillary tube, a detector and an output device. Some instruments include a temperature control device to ensure reproducible results, because the separation of the sample depends on the electrophoretic mobility, and the viscosity of the solutions decreases as the column temperature rises.3 Each side of the high-voltage power supply is connected to an electrode. These electrodes help to induce an electric field to initiate the migration of the sample from the anode to the cathode through the capillary tube. The capillary is made of fused silica and is sometimes coated with polyimide.3 Each side of the capillary tube is dipped in a vial containing the electrode and an electrolytic solution, or aqueous buffer. Before the sample is introduced to the column, the capillary must be flushed with the desired buffer solution. There is usually a small window near the cathodic end of the capillary which allows UV-VIS light to pass through the analyte so that its absorbance can be measured; a photomultiplier tube at the cathodic end of the capillary detects the transmitted light. Alternatively, the capillary outlet can be coupled to a mass spectrometer, providing information about the mass-to-charge ratio of the ionic species.

Electrophoresis is the process in which sample ions move under the influence of an applied voltage. The ion undergoes a force that is equal to the product of the net charge and the electric field strength. It is also affected by a drag force that is equal to the product of \(f\), the translational friction coefficient, and the velocity.
This leads to the expression for electrophoretic mobility:\[ \mu_{EP} = \dfrac{q}{f} = \dfrac{q}{6\pi \eta r} \label{1}\]where \(f\) for a spherical particle is given by Stokes' law; η is the viscosity of the solvent, and \(r\) is the hydrodynamic radius of the ion. The rate at which these ions migrate is dictated by the charge-to-mass ratio. The actual velocity of the ions is directly proportional to E, the magnitude of the electrical field, and can be determined by the following equation4:\[ v = \mu_{EP} E \label{2}\]This relationship shows that a greater voltage quickens the migration of the ionic species.

The electroosmotic flow (EOF) is caused by applying high voltage to an electrolyte-filled capillary.4 This flow occurs when the buffer running through the silica capillary has a pH greater than 3 and the SiOH groups lose a proton to become SiO- ions. The capillary wall then has a negative charge, which develops a double layer of cations attracted to it. The inner cation layer is stationary, while the outer layer is free to move along the capillary. The applied electric field causes the free cations to move toward the cathode, creating a powerful bulk flow. The rate of the electroosmotic flow is governed by the following equation:\[ v_{EOF} = \dfrac{\epsilon \zeta}{4\pi\eta} E \label{3}\]where ε is the dielectric constant of the solution, η is the viscosity of the solution, E is the field strength, and ζ is the zeta potential. Because the electroosmotic flow is stronger than the electrophoretic migration of the anions toward the anode, negatively charged particles, which are naturally attracted to the positively charged anode, are carried to the cathode and separate out as well.
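The mobility and velocity expressions above (Equations 1 and 2) can be evaluated for a typical small ion; the charge, viscosity, radius, and field strength below are illustrative assumptions, not values from the text:

```python
import math

q = 1.602e-19   # net charge of a singly charged ion, C
eta = 8.9e-4    # viscosity of water near 25 C, Pa s
r = 2.0e-10     # assumed hydrodynamic radius, m

mu = q / (6 * math.pi * eta * r)  # Equation 1: electrophoretic mobility, m^2 V^-1 s^-1
E = 2.5e4                         # assumed field: 25 kV across a 1 m capillary, V/m
v = mu * E                        # Equation 2: migration velocity, m/s

print(f"mu = {mu:.2e}, v = {v:.2e}")  # mobility ~5e-8 m^2/(V s); velocity ~1 mm/s
```

The resulting order of magnitude (tens of nm^2 per V·s for mobility, about a millimeter per second for velocity) is what makes separations on a meter-long capillary finish in minutes.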
The EOF works best with a large zeta potential between the cation layers, a large diffuse layer of cations to drag more molecules toward the cathode, low resistance from the surrounding solution, and a buffer of pH 9 so that all the SiOH groups are ionized.1

There are six types of capillary electroseparation available: capillary zone electrophoresis (CZE), capillary gel electrophoresis (CGE), micellar electrokinetic capillary chromatography (MEKC), capillary electrochromatography (CEC), capillary isoelectric focusing (CIEF), and capillary isotachophoresis (CITP). They can be classified into continuous and discontinuous systems. A continuous system has a background electrolyte acting throughout the capillary as a buffer; it can be broken down into kinetic (constant electrolyte composition) and steady-state (varying electrolyte composition) processes. A discontinuous system keeps the sample in distinct zones separated by two different electrolytes.6

Capillary zone electrophoresis (CZE), also known as free-solution capillary electrophoresis, is the most commonly used of the six methods. A mixture in solution can be separated into its individual components quickly and easily. The separation is based on differences in electrophoretic mobility, which is directly proportional to the charge on the molecule and inversely proportional to the viscosity of the solvent and the radius of the ion. The velocity at which an ion moves is directly proportional to the electrophoretic mobility and the magnitude of the electric field.1

The fused silica capillaries have silanol groups that become ionized in the buffer. The negatively charged SiO- ions attract positively charged cations, which form two layers: a stationary and a diffuse cation layer. In the presence of an applied electric field, the diffuse layer migrates toward the negatively charged cathode, creating an electroosmotic flow (\(\mu_{EOF}\)) that drags bulk solvent along with it.
Anions in solution are attracted to the positively charged anode but get swept to the cathode as well. Cations with the largest charge-to-size ratios separate out first, followed by cations with smaller ratios, neutral species, anions with smaller charge-to-size ratios, and finally anions with the greatest ratios. The electroosmotic velocity can be adjusted by altering the pH, the viscosity of the solvent, the ionic strength, the voltage, and the dielectric constant of the buffer.1

CGE separates on the basis of differences in solute size as the particles migrate through the gel. Gels are useful because they minimize the solute diffusion that causes zone broadening, prevent the capillary walls from adsorbing the solute, and limit heat transfer by slowing down the molecules. A commonly used gel apparatus for the separation of proteins is capillary SDS-PAGE. It is a highly sensitive system and only requires a small amount of sample.1

MEKC is a separation technique based on solutes partitioning between micelles and the solvent. Micelles are aggregates of surfactant molecules that form when a surfactant is added to a solution above the critical micelle concentration. The aggregates have polar, negatively charged surfaces and are naturally attracted to the positively charged anode. Because of the electroosmotic flow toward the cathode, the micelles are pulled to the cathode as well, but at a slower rate. Hydrophobic molecules spend the majority of their time in the micelle, while hydrophilic molecules migrate more quickly through the solvent. When micelles are not present, neutral molecules migrate with the electroosmotic flow and no separation occurs. The presence of micelles defines a retention time \(t_0\), at which a solute with little micelle interaction elutes, and a retention time \(t_{mc}\), at which a strongly interacting solute elutes. Neutral molecules are separated at a time between \(t_0\) and \(t_{mc}\).
Factors that affect the electroosmotic flow in MEKC are pH, surfactant concentration, additives, and polymer coatings of the capillary wall.1

In CEC, the separation mechanism relies on a packed column similar to that of chromatography. The mobile liquid passes over the silica wall and the packed particles, and an electroosmotic flow arises because of the charges on the stationary surface. CEC is similar to CZE in that both have a plug-type flow, in contrast to the pumped parabolic flow of pressure-driven chromatography, which increases band broadening.1

CIEF is a technique commonly used to separate peptides and proteins. These molecules are zwitterionic compounds because they contain both positive and negative charges. The charge depends on the functional groups attached to the main chain and the pH of the surrounding environment. In addition, each molecule has a specific isoelectric point (pI). When the surrounding pH is equal to this pI, the molecule carries no net charge. To be clear, the pI is not the pH at which a protein has all bases deprotonated and all acids protonated, but rather the value at which the positive and negative charges cancel out to zero. At a pH below the pI the molecule is positive, and at a pH above the pI it is negative. Because the charge changes with pH, a pH gradient can be used to separate the molecules in a mixture. During a CIEF separation, the capillary is filled with the sample in solution and typically no EOF is used (the EOF is suppressed by using a coated capillary). When the voltage is applied, the ions migrate to the region where they become neutral (pH = pI). The anodic end of the capillary sits in acidic solution (low pH), while the cathodic end sits in basic solution (high pH). Compounds of equal isoelectric point are "focused" into sharp segments and remain in their specific zones, which allows for their distinct detection.6

For a simple amino acid with two ionizable groups with pKa values \(pK_1\) and \(pK_2\), the pI is their average, \(pI = (pK_1 + pK_2)/2\); more generally, the pI lies at the average of the two pKa values that bracket the neutral form of the molecule.
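The pI reasoning above can be checked numerically: the sketch below finds the pH of zero net charge by bisection over Henderson-Hasselbalch fractional charges. The glycine-like pKa values (2.34 and 9.60) are standard textbook numbers assumed for illustration, not data from this article.

```python
def net_charge(pH, acidic_pkas, basic_pkas):
    """Sum of fractional charges: acidic groups contribute 0 to -1,
    basic groups contribute +1 to 0, per Henderson-Hasselbalch."""
    neg = sum(-1.0 / (1.0 + 10 ** (pka - pH)) for pka in acidic_pkas)
    pos = sum(1.0 / (1.0 + 10 ** (pH - pka)) for pka in basic_pkas)
    return pos + neg

def isoelectric_point(acidic_pkas, basic_pkas, lo=0.0, hi=14.0, tol=1e-6):
    """Bisection: net charge decreases monotonically with pH,
    so find where it crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_charge(mid, acidic_pkas, basic_pkas) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

pI = isoelectric_point([2.34], [9.60])
print(f"estimated pI = {pI:.2f}")  # close to (2.34 + 9.60)/2 = 5.97
```

For this symmetric two-group case the numerical answer coincides with the simple average of the two pKa values; for a protein with many side chains the same search still works, which the averaging shortcut does not.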
Most proteins have many ionizable side chains in addition to their amino- and carboxy-terminal groups. The pI is different for each protein, and it can be calculated theoretically from the Henderson-Hasselbalch approximation if the amino acid composition of the protein is known. To determine a protein's pI experimentally, two-dimensional electrophoresis (2-DE) can be used. The proteins of a cell lysate are applied to an immobilized pH gradient strip; upon electrophoresis, the proteins migrate to their pI within the strip. The second dimension of 2-DE separates the proteins by molecular weight using an SDS gel.

CITP is the only one of the six methods used in a discontinuous system. The analyte migrates in consecutive zones, and each zone length can be measured to find the quantity of sample present.1

Capillary Electrophoresis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Juliet Precissi.
Case Study: Battery Types
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Exemplars/Case_Study%3A_Battery_Types
Ranging from the very crude to the highly sophisticated, batteries come in a plethora of varieties. Batteries, in short, are electrochemical cells that produce a current of electricity via chemical reactions; more specifically, batteries produce electrical energy from oxidation-reduction reactions. A collection of electrochemical cells wired in series is properly called a battery: a flashlight battery is really a single electrochemical cell, while a car battery is a true battery, since it is six electrochemical cells in series.

Electrochemical cells have been in use longer than was once thought. Discovered in Khujut Rabu in modern-day Iraq and dating from the Parthian (250 B.C.-A.D. 224) and Sassanid (A.D. 224-600) periods, the Baghdad Battery is the first known battery in the world. Consisting of a copper cylinder, an iron rod, an asphalt stopper, and a small earthenware jar, the Baghdad Battery was filled with an unknown electrolytic solution and may have been used for electroplating. About 2000 years later the Voltaic Pile, a stack of individual cells of zinc and copper disks immersed in sulfuric acid, was created by the Italian Count Alessandro Volta and effectively replaced the Leyden jar, an instrument that stored static electricity for later use. Volta's battery is considered the first electrochemical cell, and its reaction is as follows:

oxidation half-reaction: \(Zn \rightarrow Zn^{2+} + 2e^-\)

reduction half-reaction: \(2H^+ + 2e^- \rightarrow H_2\)

Because zinc sits higher in the electrochemical series, the zinc anode is oxidized while protons from the sulfuric acid are reduced to hydrogen gas. The copper cathode remains unchanged and acts only as an electrode for the reaction.
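The standard potential of Volta's zinc/hydrogen couple above can be estimated from tabulated half-cell potentials. The values E°(Zn2+/Zn) = -0.76 V and E°(H+/H2) = 0.00 V used below are standard-table numbers assumed for illustration, not figures stated in the text.

```python
# Standard reduction potentials (assumed textbook-table values, in volts)
E_red_cathode = 0.00    # 2H+ + 2e- -> H2
E_red_anode = -0.76     # Zn2+ + 2e- -> Zn

# E_cell = E(cathode) - E(anode); a positive value means the
# discharge reaction is spontaneous, as in Volta's pile.
E_cell = E_red_cathode - E_red_anode
print(f"E_cell = {E_cell:.2f} V")
```

The positive sign confirms that zinc oxidation plus proton reduction runs spontaneously, which is why the pile delivered current without any external driving voltage.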
Because the Voltaic pile was unsafe to use and its power diminished over time, it was abandoned.

Electrochemical cells typically consist of an anode (the negative electrode, where oxidation occurs), a cathode (the positive electrode, where reduction occurs), and an electrolyte (the medium conducting anions and cations within the reaction), all contained within a cell. Electrons flow in a closed circuit from the anode to the cathode. Depending on the configuration of the cell and the electrolyte used, a salt bridge may be necessary to conduct ions from one half-cell to the other, because a charge imbalance is created as electrons move from one electrode to the other; the difference created would otherwise keep electrons from flowing any further. Because a salt bridge permits the flux of ions, it maintains the charge balance between the half-cells while keeping them separate.

The two main categories of batteries are primary and secondary. Essentially, primary cells are batteries that cannot be recharged, while secondary cells are rechargeable. The distinction begs the question of why primary cells are still in use today; the reason is that primary cells have lower self-discharge rates, meaning that they can be stored for longer periods of time than rechargeable batteries and maintain nearly the same capacity as before. Reserve and backup batteries present a unique example of this advantage of primary cells. In reserve, or stand-by, batteries, the components containing the active chemicals are separated until the battery is needed, thus greatly decreasing self-discharge; an excellent example is the water-activated battery. As opposed to inert reserve batteries, backup batteries are already activated and functional but produce no current until the main power supply fails.

Biobatteries are devices that generate electric energy via the digestion of carbohydrates, fats, and proteins by enzymes.
The most common biobatteries are the lemon or potato battery and the frog or ox-head battery, better described as a "muscular pile". In a lemon cell, the energy for the battery is produced not by the lemon but by the metal electrodes. Usually zinc and copper electrodes are inserted into a lemon (the electrolyte being citric acid) and connected by a circuit. The zinc is oxidized in the lemon in order to reach a preferred lower energy level, and the electrons discharged provide the energy. Using zinc and copper electrodes, a lemon can produce about 0.9 volts. While not technically a biobattery, an Earth battery consists of two different electrodes that are either buried underground or immersed in natural bodies of water and that tap into telluric currents to produce electric energy.

During the 1860s, the Frenchman Georges Leclanché developed the Leclanché cell, known today as the dry-cell battery. A dry-cell battery is a battery with a paste electrolyte (as opposed to a wet-cell battery, which has a liquid electrolyte) in the middle of its cylinder, with metal electrodes attached; the electrodes are surrounded by the moist electrolyte paste, which is enclosed in a metal cylinder. A dry-cell battery is a primary cell that cannot be reused. In order to function, each dry-cell battery has a cathode and an anode. Some examples of everyday objects that use dry-cell batteries are remote controls, clocks, and calculators. The most commonly used voltage for dry-cell batteries is 1.5 volts; the sizes of dry-cell batteries vary, but size does not change the voltage of the battery.

Zinc-carbon cells were the first really portable energy source. These cells have a short lifetime, and the zinc casing becomes porous as the zinc is converted to zinc chloride. The substances that leak out of the cell are corrosive to metal and can terminally destroy electronic equipment or flashlights.
Zinc-carbon cells produce 1.5 volts. For a dry-cell battery to operate, oxidation occurs at the zinc anode and reduction takes place at the cathode, which is most commonly carbon graphite. Once reactants have been turned into products, the dry-cell battery produces electricity. For example, in a dry-cell battery the \(Zn^{2+}\) formed by oxidation reacts with \(NH_3\) and chloride ions to form a chloride salt, ensuring that too much \(NH_3\) does not accumulate and block the current at the cathode:

\[Zn^{2+}_{(aq)} + 2NH_{3\;(g)} + 2Cl^-_{(aq)} \rightarrow [Zn(NH_3)_2]Cl_{2\; (s)}\]

How does the reaction work? As the zinc anode is oxidized, it releases electrons that are captured by reducing manganese from an oxidation state of +4 to +3. The electrons produced by the zinc then travel through the circuit to the cathode, where they form its product.

Recently, the most popular dry cell has been the alkaline-cell battery. In the zinc-carbon battery shown above, the zinc does not dissolve easily in basic solutions. Though a zinc-carbon battery is fairly cheap to construct, the alkaline-cell battery is favored because it lasts much longer. Instead of using \(NH_4Cl\) as the electrolyte, the alkaline-cell battery uses \(NaOH\) or \(KOH\). The reaction proceeds the same way, with zinc oxidized and reacting with \(OH^-\) instead:

\[Zn^{2+}_{(aq)} + 2OH^-_{(aq)} \rightarrow Zn(OH)_{2\; (s)}\]

Once the chemicals in a dry-cell battery can no longer react together, the battery is dead and cannot be recharged. Alkaline electrochemical cells have a much longer lifetime, but the zinc case still becomes porous as the cell is discharged, and the substances inside the cell are still corrosive. Alkaline cells produce 1.54 volts.

Mercury batteries are small, circular metal batteries that were used in watches. Mercury cells offer a long lifetime in a small size, but the mercury produced as the cell discharges is very toxic.
This mercury is released into the atmosphere if the cells are incinerated in the trash; about 90% of the 1.4 million pounds of mercury in our garbage comes from mercury cells. Mercury cells produce only 1.3 volts.

\[HgO + Zn + H_2O \rightarrow Hg + Zn(OH)_2\]

Mercury batteries utilize either pure mercuric oxide or a mix of mercuric oxide with manganese dioxide as the cathode. The anode is made of zinc and is separated from the cathode by a piece of paper or other porous material soaked in the electrolyte, which is generally sodium or potassium hydroxide. In the past, these batteries were widely used because of their long shelf life of about 10 years and their stable, steady voltage output; they also had the highest capacity per size. They were popular for button-type battery applications such as watches and hearing aids. However, the environmental impact of the mercury present in the batteries became an issue, and mercury batteries were discontinued from public sale.

The lead-acid battery used in cars and trucks consists of six electrochemical cells joined in series; each cell in a lead-acid battery produces 2 volts. The electrodes are composed of lead and are immersed in sulfuric acid. The negative electrodes are spongy lead metal, and the positive electrodes are lead impregnated with lead oxide. As the battery is discharged, metallic lead is oxidized to lead sulfate at the negative electrodes and lead oxide is reduced to lead sulfate at the positive electrodes. When a lead-acid battery is recharged by an alternator, electrons are forced to flow in the opposite direction, which reverses the reaction.

\[Pb + PbO_2 + 2 H_2SO_4 \rightarrow 2 PbSO_4 + 2 H_2O\]

Nickel-cadmium cells can also be regenerated by reversing the flow of electrons in a battery charger. In these cells, cadmium is oxidized to cadmium hydroxide and nickel is reduced.
Nickel-cadmium cells generate 1.46 volts.

\[Cd + NiO_2 + 2 H_2O \rightarrow Cd(OH)_2 + Ni(OH)_2\]

A nickel-metal hydride battery is a secondary cell very similar to the nickel-cadmium cell, except that it uses a hydrogen-absorbing alloy in place of cadmium. The nickel-metal hydride battery has 2-3 times the capacity of a nickel-cadmium cell.

Case Study: Battery Types is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
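The series arithmetic described above for the lead-acid battery (six 2 V cells giving the familiar 12 V) can be sketched as:

```python
def battery_voltage(cell_voltage, n_cells_in_series):
    """Voltages of electrochemical cells wired in series add."""
    return cell_voltage * n_cells_in_series

# Six lead-acid cells at 2 V each, as described in the text
lead_acid = battery_voltage(2.0, 6)
print(f"lead-acid battery: {lead_acid:.0f} V")  # 12 V
```

The same rule explains why a single 1.5 V flashlight cell is not properly a "battery": with n = 1 there is nothing in series to add.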
Case Study: Fuel Cells
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Exemplars/Case_Study%3A_Fuel_Cells
Fuel cells generate electricity from an electrochemical reaction in which oxygen and a fuel combine to form water. Fuel cells work by converting the chemical energy found in the fuel, e.g. hydrogen gas, into electrical energy (electricity). In a fuel cell, the fuel is oxidized at the anode, yielding electrons that flow through an external circuit and do electrical work before the oxidant (often oxygen) is reduced by these electrons at the cathode.

\[O_{2(g)} + \underset{\text{Fuel}}{2H_{2(g)}} \rightarrow 2H_2O_{(l)}\]

The resulting electricity can be used in a variety of ways, including powering motor vehicles, electrical devices, and airplanes. Fuel cells also produce heat as a by-product, which is finding increasing use in heating homes, especially in Japan. With its multifunctional products and by-products, the fuel cell is rapidly becoming the hot-ticket alternative source of energy as we move into a greener and more progressive era.

A hydrogen fuel cell enables hydrogen to be combined electrochemically with oxygen to produce electricity, water, and heat. One fuel cell alone produces only a small amount of power; however, grouping individual fuel cells together creates a fuel cell stack (as shown below in the diagram). When delivered to a fuel cell engine, fuel cell stacks create enough energy to power buses and other vehicles, and have even been used to power spacecraft.

Each fuel cell in the stack contains two electrodes: a positive cathode (where reduction occurs) and a negative anode (where oxidation occurs). The energy-producing reactions take place at the surfaces of the electrodes. Each individual pair of electrodes is separated by an electrolyte (in either solid or liquid form), which carries electrically charged particles (ions) between the electrodes.
The rate of a given reaction can be increased with the help of a catalyst such as platinum or nickel. The power (\(P\)) produced by a fuel cell equals the voltage (\(V\)) multiplied by the current (\(I\)) at which it is operated, measured in watts (W):

\[P= IV\]

A number of factors reduce the efficiency of fuel cells; these can be broadly categorized as reversible, irreversible, and fuel-utilization losses. 'Reversible' losses correspond to the deviation between the standard electrode potential of the full electrochemical cell and the actual operating open-circuit voltage (OCV). 'Irreversible' losses describe the contributions of the various fuel-cell components to the loss of voltage relative to the OCV as the current drawn from the cell or stack is increased. These include an activation barrier that must be overcome at low currents, which can represent a loss of ~200 mV: the reactions require energy to overcome a threshold before proceeding, despite a thermodynamic driving force. This is especially relevant for the reduction of oxygen at the cathode. Further losses are experienced in the intermediate regime as a result of ohmic resistance (both ionic and electronic), and at high current loads due to mass-transport limitations. Particularly in the case of PEMFCs, waste water and unspent fuel flow at rates that exceed the cell's physical capabilities, and not all fuel that enters at the inlet is utilized; this also reduces the overall efficiency. Since the power a fuel cell generates at a given current is directly proportional to its voltage, its efficiency is almost proportional to its voltage.

Molten Carbonate Fuel Cells (MCFC): MCFCs use molten carbonate salts as their electrolyte. This fuel cell has a high electrical efficiency of 60%. These cells operate at about 600 degrees Celsius.
The generated power varies, and some units have been built with outputs as high as 100 MW. Because of the high operating temperatures, these cells are not generally used in the home.

Zinc Air Fuel Cells (ZAFC): In this type of fuel cell there is a gas diffusion electrode and a zinc anode separated by electrolyte, with a mechanical separator. Oxygen is reduced to hydroxide, which combines with oxidized zinc, generating electrons in the process.

Phosphoric Acid Fuel Cells (PAFC): This fuel cell's anode and cathode are made of platinum particles on a carbon and silicon carbide matrix that supports the phosphoric acid electrolyte. PAFCs are commonly used in large commercial vehicles, e.g. buses, and were the first fuel cells to be commercialized.

Proton Exchange Membrane Fuel Cells (PEMFC): This fuel cell uses a polymeric membrane as the electrolyte along with platinum-activated, carbon-based electrodes. PEMFCs can function at relatively low temperatures and are therefore commonly used in scenarios requiring short start-up times, including transport and portable applications.

Direct Methanol Fuel Cells (DMFC): This fuel cell is similar to the proton exchange membrane fuel cell; however, instead of gaseous hydrogen, liquid methanol is used as the fuel.

Solid Oxide Fuel Cells (SOFC): SOFCs can operate from ~550 °C to 1000 °C because they use a solid ceramic electrolyte between their electrodes. Like MCFCs, SOFCs can perform at an efficiency of around 60%. These fuel cells are often used for generating heat and electricity in industry and for providing auxiliary power in motor vehicles.

Fuel cells create little to no environmentally damaging emissions: generally, the only byproduct of a hydrogen fuel cell, typically found in automobiles, is water. Fuel cells are being used all over the globe. Here are some examples:

1.
Answer: A) Solid Oxide Fuel Cell (SOFC). Explanation: As explained by the table in the 'Different Types of Fuel Cells' section, the Solid Oxide Fuel Cell (SOFC) is the most efficient, with an efficiency of 50-65%.

2. Answer: Water OR Heat. Explanation: Both water and heat are by-products of an active hydrogen fuel cell.

3. Answer: United Technologies Corporation. Explanation: United Technologies Corporation was the first company to commercialize fuel cells.

4. Answer: It serves as a bridge between anode and cathode. Explanation: An electrolyte serves as a bridge connecting the anode and cathode of a fuel cell. Because of the ionic nature of salts in solution, when a voltage is applied these ions align either at the cathode or at the anode according to their charge, creating a bridge that the current can cross.

5. Answer: No (very little) carbon-dioxide emissions OR Very high efficiency. Explanation: The very efficient nature of fuel cells relative to standard combustion engines and similar devices is highly valued in today's society because of the rising cost of energy. The relatively low emissions produced by fuel cells are also beneficial to the environment.

Case Study: Fuel Cells is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
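The P = IV relation and the ~200 mV activation loss quoted in this article can be combined into a rough single-cell operating estimate. The OCV, ohmic drop, and current below are hypothetical illustration values; only the activation figure comes from the text.

```python
def fuel_cell_power(current_A, voltage_V):
    """P = I * V, in watts."""
    return current_A * voltage_V

ocv = 1.0               # open-circuit voltage, V (illustrative)
activation_loss = 0.2   # ~200 mV activation barrier (from the text)
ohmic_loss = 0.1        # hypothetical ohmic (iR) drop at this current, V

# Operating voltage = OCV minus the irreversible losses
v_operating = ocv - activation_loss - ohmic_loss
p = fuel_cell_power(50.0, v_operating)  # hypothetical 50 A draw
print(f"operating voltage = {v_operating:.2f} V, power = {p:.1f} W")
```

Because every loss term subtracts directly from the voltage, and power scales with voltage at fixed current, this also illustrates the text's point that cell efficiency tracks operating voltage.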
Case Study: Industrial Electrolysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Exemplars/Case_Study%3A_Industrial_Electrolysis
Electrolysis reactions are a basic foundation of modern industry. Various elements, chemical compounds, and organic compounds, including aluminum, chlorine, and NaOH, are produced only by electrolysis. Electrolysis is the process by which an electric current drives an otherwise non-spontaneous reaction.

The electrorefining process refines metals or compounds at high purity for a low cost; the pure metal can coat an otherwise worthless object. Let's consider the electrorefining of copper. At the anode there is an impure piece of copper containing other metals such as Ag, Au, Pt, Sn, Bi, Sb, As, Fe, Ni, Co, and Zn. The copper in this impure ore is oxidized to Cu2+ at the anode and moves through an aqueous sulfuric acid-copper(II) sulfate solution toward the cathode, where Cu2+ is reduced to Cu. The whole process takes place at a fairly low voltage (about 0.15 to 0.30 V), so Ag, Au, and Pt are not oxidized at the anode, their standard oxidation potentials being -0.800, -1.36, and -1.20 V respectively; these unoxidized impurities form the anode mud, a sludge at the bottom of the tank that can be recovered and used in other processes. Unlike Ag, Au, and Pt, the Sb, Bi, and Sn impurities in the ore are indeed oxidized at the anode, but they precipitate as hydroxides and oxides. Finally, Fe, Ni, Co, and Zn are oxidized as well, but they remain dissolved in the water. The only solid left, therefore, is the pure copper plate at the cathode, which has a purity level of about 99.999%. The image below outlines the fate of the main components of an impure copper ore.

Electrosynthesis is the method of producing substances through electrolysis reactions. This is useful when reaction conditions must be carefully controlled. One example of electrosynthesis is that of MnO2, manganese dioxide.
MnO2 occurs naturally in the form of the mineral pyrolusite, but this mineral is not easily used due to its particle size and lattice structure. However, MnO2 can be obtained another way, through the electrolysis of MnSO4 in a sulfuric acid solution. \(\begin{align} &\textrm{Oxidation: } &&\mathrm{Mn^{2+} + 2H_2O \rightarrow MnO_2 + 4H^+ + 2e^-} &&\mathrm{\hspace{12px}E^0_{MnO_2/H_2}=-1.23}\\ &\textrm{Reduction: } &&\mathrm{2e^- + 2H^+ \rightarrow H_2} &&\mathrm{{-E}^0_{H^+/H_2}= -0}\\ &\textrm{Overall: } &&\mathrm{Mn^{2+} + 2H_2O \rightarrow MnO_2 + 2H^+ +H_2} &&\mathrm{\hspace{12px}E^0_{MnO_2/H_2} -E^0_{H^+/H_2}= -1.23 - 0= -1.23} \end{align}\)

The commercial process for organic chemicals currently practiced on a scale comparable to that of inorganic chemicals and metals is the electrohydrodimerization of acrylonitrile to adiponitrile.

\(\begin{align} &\textrm{Anode: } &&\mathrm{H_2O \rightarrow 2H^+ + \dfrac{1}{2} O_2 + 2e^-}\\ &\textrm{Cathode: } &&\ce{2CH2=CHCN + 2H2O + 2e- \rightarrow NC(CH2)4CN + 2OH-}\\ &\textrm{Overall: } &&\textrm{(acrylonitrile) }\ce{2CH2=CHCN + H2O \rightarrow \dfrac{1}{2} O2 + NC(CH2)4CN}\:\textrm{(adiponitrile)} \end{align}\)

The importance of adiponitrile is that it can be readily converted to other useful compounds.

The chlor-alkali process is the electrolysis of sodium chloride (NaCl) at an industrial level. We will begin by discussing the equation for the chlor-alkali process, followed by three different implementations of the process: the diaphragm cell, the mercury cell, and the membrane cell. We begin by determining the reactions that occur during the electrolysis of NaCl. Because the NaCl is in an aqueous solution, we also have to consider the electrolysis of water at both the anode and the cathode.
Therefore, there are two possible reduction reactions and two possible oxidation reactions.

Reduction:\begin{align} & \mathrm{Na^+_{\large{(aq)}} + e^- \rightarrow Na_{\large{(s)}}} && \mathrm{E^0_{Na^+/Na}= -2.71\: V \tag{1}}\\ & \mathrm{2 H_2O_{\large{(l)}} + 2e^- \rightarrow H_{2\large{(g)}} +2OH^-_{\large{(aq)}}} && \mathrm{E^0_{H_2O/H_2}= -.83\: V \tag{2}} \end{align}

Oxidation:\(\begin{align} &\mathrm{2Cl^-_{\large{(aq)}} \rightarrow Cl_2 + 2e^-} &&\mathrm{-E^0_{Cl_2/Cl^-}= -(1.36\: V) \tag{3}}\\ &\mathrm{2H_2O_{\large{(l)}} \rightarrow O_2 + 4H^+ + 4e^-} &&\mathrm{-E^0_{O_2/H_2O}= -(1.23\: V) \tag{4}} \end{align}\)

As we can see, owing to its much more negative electrode potential, the reduction of sodium ions is far less likely to occur than the reduction of water, so we can assume that in the electrolysis of NaCl the reduction that occurs is reaction (2). We should therefore determine which oxidation reaction occurs. Let's say we have reaction (2) as the reduction and reaction (3) as the oxidation.
We would get:\(\begin{align} &\textrm{Reduction: } &&\mathrm{2 H_2O_{\large{(l)}} + 2e^- \rightarrow H_{2\large{(g)}} +2OH^-_{\large{(aq)}}} &&\mathrm{\hspace{12px}E^0_{H_2O/H_2}= -.83\: V \tag{2}}\\ &\textrm{Oxidation: } &&\mathrm{2Cl^-_{\large{(aq)}} \rightarrow Cl_2 + 2e^-} &&\mathrm{-E^0_{Cl_2/Cl^-}= -(1.36\: V) \tag{3}} \\ &\textrm{Overall: } &&\mathrm{2 H_2O_{\large{(l)}} + 2Cl^-_{\large{(aq)}}} &&\mathrm{\hspace{12px}E^0_{H_2O/H_2} - E^0_{Cl_2/Cl^-}\tag{5}}\\ & &&\mathrm{\hspace{10px}\rightarrow H_{2\large{(g)}} +2OH^-_{\large{(aq)}} +Cl_2} &&\mathrm{\hspace{22px} = -.83 + (-1.36)= -2.19\: V} \end{align}\)

On the other hand, we could also have reaction (2) with reaction (4):

\(\begin{align} &\textrm{Reduction: } &&\mathrm{2\,[2 H_2O_{\large{(l)}} + 2e^- \rightarrow H_{2\large{(g)}} +2OH^-_{\large{(aq)}}]} &&\mathrm{\hspace{12px}E^0_{H_2O/H_2}=-.83\: V \tag{2}}\\ &\textrm{Oxidation: } &&\mathrm{2H_2O_{\large{(l)}} \rightarrow O_2 + 4H^+ + 4e^-} &&\mathrm{-E^0_{O_2/H_2O}= -(1.23\: V) \tag{4}}\\ &\textrm{Overall: } &&\mathrm{2H_2O_{\large{(l)}} \rightarrow 2H_{2\large{(g)}} + O_{2\large{(g)}}} &&\mathrm{\hspace{12px}E^0_{H_2O/H_2} - E^0_{O_2/H_2O}\tag{6}}\\ & && && \mathrm{\hspace{22px}= -.83\: V -(1.23\: V)= -2.06\: V} \end{align}\)

At first glance it would appear that reaction (6) should occur, owing to its smaller (less negative) cell potential. However, O2 actually has a fairly large overpotential, so Cl2 is more likely to form, making reaction (5) the most probable outcome of the electrolysis of NaCl. Depending on the method used, several different products can be produced through the chlor-alkali process; the value of these products is what makes the process so important. The name comes from its two main products, chlorine and the alkali sodium hydroxide (NaOH). One of the main uses of the chlor-alkali process is therefore the production of NaOH.
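The overpotential argument above can be made numeric. The sketch below uses the potentials quoted in this section together with an assumed O2 overpotential of 0.5 V (a typical textbook magnitude, not a figure given in the text) to show why chloride oxidation wins in practice.

```python
# Potentials from the text (all in volts)
E_red_water = -0.83   # 2H2O + 2e- -> H2 + 2OH-   (reduction, Eq. 2)
E_ox_cl = -1.36       # 2Cl- -> Cl2 + 2e-          (as -E0, Eq. 3)
E_ox_o2 = -1.23       # 2H2O -> O2 + 4H+ + 4e-     (as -E0, Eq. 4)

# Assumed kinetic penalty for O2 evolution (hypothetical 0.5 V)
o2_overpotential = 0.5

cell_cl2 = E_red_water + E_ox_cl                            # reaction (5)
cell_o2_effective = E_red_water + E_ox_o2 - o2_overpotential

print(f"Cl2 pathway: {cell_cl2:.2f} V")
print(f"O2 pathway with overpotential: {cell_o2_effective:.2f} V")
# The pathway needing the smaller applied voltage (less negative
# value) dominates: chlorine evolution.
```

On thermodynamics alone the O2 pathway (-2.06 V) beats the Cl2 pathway (-2.19 V), but once the kinetic overpotential is charged against it, the effective O2 requirement drops below the Cl2 value, matching the text's conclusion.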
As described earlier, the equation for the chlor-alkali process, that is, the electrolysis of NaCl, is as follows:

\(\begin{align} &\textrm{Reduction: } &&\mathrm{2 H_2O_{\large{(l)}} + 2e^- \rightarrow H_{2\large{(g)}} +2OH^-_{\large{(aq)}}} &&\mathrm{\hspace{12px}E^0_{H_2O/H_2}= -.83\: V}\\ &\textrm{Oxidation: } &&\mathrm{2Cl^-_{\large{(aq)}} \rightarrow Cl_2 + 2e^-} &&\mathrm{-E^0_{Cl_2/Cl^-}= -(1.36\: V)}\\ &\textrm{Overall: } &&\mathrm{2Cl^- + 2H_2O_{\large{(l)}} \rightarrow 2 OH^- + H_{2\large{(g)}} +Cl_{2\large{(g)}}} &&\mathrm{\hspace{12px}E^0_{H_2O/H_2} - E^0_{Cl_2/Cl^-}}\\ & && &&\hspace{22px}\mathrm{= -.83\: V + (-1.36\: V)= -2.19\: V} \end{align}\)

To improve the purity of the NaOH even further, a mercury cell can be used as the site of electrolysis instead of a diaphragm cell. Figure 2 shows the anode side and the cathode side of such a cell. The mercury cell has several drawbacks, however.

One final way to make even purer NaOH is to use a membrane cell. It is preferred over the diaphragm-cell and mercury-cell methods because it uses the least electric energy and produces the highest-quality NaOH; for instance, it can produce NaOH with a chloride-ion contamination of only 50 parts per million. An ion-permeable membrane is used to separate the anode and the cathode.

Case Study: Industrial Electrolysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
637
Cell Diagrams
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Voltaic_Cells/Cell_Diagrams
Cell notation is shorthand that expresses a certain reaction in an electrochemical cell.

Learning Objectives

Recall that standard cell potentials can be calculated from the potentials E° for both the oxidation and reduction reactions. A positive cell potential indicates that the reaction proceeds spontaneously in the direction in which the reaction is written. Conversely, a reaction with a negative cell potential proceeds spontaneously in the reverse direction. Cell notations are a shorthand description of voltaic or galvanic (spontaneous) cells. The reaction conditions (pressure, temperature, concentration, etc.), the anode, the cathode, and the electrode components are all described in this unique shorthand. Recall that oxidation takes place at the anode and reduction takes place at the cathode. When the anode and cathode are connected by a wire, electrons flow from anode to cathode.

A typical arrangement of half-cells linked to form a galvanic cell.

Using this arrangement of components, let's put a cell together. One beaker contains 0.15 M Cd(NO3)2 and a Cd metal electrode. The other beaker contains 0.20 M AgNO3 and a Ag metal electrode. The net ionic equation for the reaction is written:

\(\ce{Cd(s) + 2Ag^{+}(aq) \rightarrow Cd^{2+}(aq) + 2Ag(s)}\)

In the reaction, the silver ion is reduced by gaining an electron, and solid Ag is the cathode.
The cadmium is oxidized by losing electrons, and solid Cd is the anode.

The anode reaction is: \(\ce{Cd(s) \rightarrow Cd^{2+}(aq) + 2e^-}\)

The cathode reaction is: \(\ce{Ag^{+}(aq) + e^- \rightarrow Ag(s)}\)

Using these rules, the notation for the cell we put together is:

Cd (s) | Cd2+ (aq, 0.15 M) || Ag+ (aq, 0.20 M) | Ag (s)

Review questions:

1. Given the following reaction, provide the appropriate electrochemical cell notation, assuming all solutions are at 1.0 M, 1.0 atm, and 298 K: ZnSO4(aq) + Mn(s) → Zn(s) + MnSO4(aq). Options: Zn (s) | Zn2+ || Mn2+ | Mn (s); Mn (s) | Mn2+ || Zn2+ | Zn (s); Zn2+ | Zn (s) || Mn (s) | Mn2+; or Mn2+ | Mn (s) || Zn (s) | Zn2+.

2. Which of the following conditions are considered standard when writing electrochemical cell notation? (a) 1 liter of solution volume; (b) 1 atmosphere of pressure; (c) 1.00 molar solution concentration; (d) 298 kelvin temperature. Options: a, b and d; a, c and d; a, b and c; or b, c and d.

3. What is the cell notation for a voltaic cell with the following equation? Pb2+(aq) + Cd(s) → Pb(s) + Cd2+(aq). Options: Pb | Pb2⁺ || Cd2⁺ | Cd; Pb2⁺ | Pb || Cd | Cd2⁺; Cd | Cd2⁺ || Pb2⁺ | Pb; or Cd | Pb2⁺ || Pb | Cd2⁺.

Source: Boundless. “Electrochemical Cell Notation.” Boundless Chemistry. Boundless, 21 Jul. 2015. Retrieved 11 Apr. 2016 from www.boundless.com/chemistry/...tion-513-3688/

Cell Diagrams is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
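The notation rules — anode on the left, cathode on the right, single bars between phases, a double bar for the salt bridge — can be captured in a small helper. This is a sketch only; the function name and its defaults are made up for illustration:

```python
def cell_notation(anode, anode_ion, cathode_ion, cathode,
                  anode_conc="1.0 M", cathode_conc="1.0 M"):
    """Build galvanic-cell notation: anode | anode ion || cathode ion | cathode.
    Single bars separate phases; the double bar marks the salt bridge."""
    return (f"{anode} (s) | {anode_ion} (aq, {anode_conc}) || "
            f"{cathode_ion} (aq, {cathode_conc}) | {cathode} (s)")

print(cell_notation("Cd", "Cd2+", "Ag+", "Ag",
                    anode_conc="0.15 M", cathode_conc="0.20 M"))
# Cd (s) | Cd2+ (aq, 0.15 M) || Ag+ (aq, 0.20 M) | Ag (s)
```

The printed string reproduces the Cd/Ag cell notation built in the text above.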
638
Cell EMF
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Basics_of_Electrochemistry/Electrochemistry/Cell_EMF
Learning Objectives

The electromotive force (EMF) is the maximum potential difference between the two electrodes of a galvanic or voltaic cell. This quantity is related to the tendency of an element, a compound, or an ion to acquire (i.e., gain) or release (lose) electrons. For example, the maximum potential between \(\ce{Zn}\) and \(\ce{Cu}\) of the well-known cell

\(\ce{Zn_{\large{(s)}}\, |\, Zn^2+\: (1\: M)\, ||\, Cu^2+\: (1\: M)\, |\, Cu_{\large{(s)}}}\)

has been measured to be 1.100 V. A concentration of 1 M in an ideal solution is defined as the standard condition, and 1.100 V is thus the standard electromotive force, \(\Delta E^\circ\), or standard cell potential, for the \(\ce{Zn-Cu}\) galvanic cell.

The standard cell potential, \(\Delta E^\circ\), of a galvanic cell can be evaluated from the standard reduction potentials, E°, of the two half-cells. The reduction potentials are measured against the standard hydrogen electrode (SHE):

\(\mathrm{Pt_{\large{(s)}} \,|\, H_{2\: \large{(g,\: 1.0\: atm)}} \,|\, H^+\: (1.0\: M)}\)

Its reduction potential (or oxidation potential) is defined to be exactly zero. The reduction potentials of all other half-cells, measured in volts against the SHE, are the difference in electrical potential energy per coulomb of charge. Note that the unit of energy is J = coulomb × volt, and the Gibbs free energy G is the product of charge q and potential difference E:

G (in J) = q (in C) × E (in V)

for electric energy calculations.

A galvanic cell consists of two half-cells. The convention in writing such a cell is to put the (reduction) cathode on the right-hand side and the (oxidation) anode on the left-hand side. For example, the cell

\(\ce{Pt\, |\, H2\, |\, H+\, ||\, Zn^2+\, |\, Zn}\)

consists of the oxidation and reduction reactions

\(\ce{H2 \rightarrow 2H+ + 2e^-}\) (oxidation)

\(\ce{Zn^2+ + 2e^- \rightarrow Zn}\) (reduction)

If the concentrations of \(\ce{H+}\) and \(\ce{Zn^2+}\) ions are 1.0 M and the pressure of \(\ce{H2}\) is 1.0 atm, the voltage difference between the two electrodes would be -0.763 V (the \(\ce{Zn}\) electrode being the negative electrode).
The conditions specified above are called the standard conditions, and the EMF so obtained is the standard reduction potential. Note that the above cell is written in reverse order compared to that given in many textbooks, but this arrangement gives the standard reduction potentials directly, because the \(\ce{Zn}\) half-cell is a reduction half-cell. The negative voltage indicates that the reverse chemical reaction is spontaneous. This corresponds to the fact that \(\ce{Zn}\) metal reacts with an acid to produce \(\ce{H2}\) gas.

As another example, the cell

\(\ce{Pt\, |\, H2\, |\, H+\, ||\, Cu^2+\, |\, Cu}\)

consists of an oxidation and a reduction reaction

\(\ce{H2 \rightarrow 2H+ + 2e^-}\) (oxidation)

\(\ce{Cu^2+ + 2e^- \rightarrow Cu}\) (reduction)

and the standard cell potential is 0.337 V. The positive potential indicates a spontaneous reaction,

\(\ce{Cu^2+ + H2 \rightarrow Cu + 2 H+}\)

but the potential is so small that the reaction is too slow to be observed.

Example 1

What is the potential for the following cell?

\(\mathrm{Zn\, |\, Zn^{2+}\:(1.0\: M)\, ||\, Cu^{2+}\:(1.0\: M)\, |\, Cu}\)

Solution

From a table of standard reduction potentials we have the following values:

\(\ce{Cu^2+ + 2 e^- \rightarrow Cu} \hspace{15px} E^\circ = 0.337 \tag{1}\)

\(\ce{Zn \rightarrow Zn^2+ + 2 e^-} \hspace{15px} E^* = 0.763 \tag{2}\)

Add (1) and (2) to yield

\(\ce{Zn + Cu^2+ \rightarrow Zn^2+ + Cu} \hspace{15px} \Delta E^\circ = E^\circ + E^* = \textrm{1.100 V}\)

Note that E* is the standard oxidation potential and E° is the standard reduction potential, with E* = -E°.
The standard cell potential is represented by \(\Delta E^\circ\).

DISCUSSION: The positive potential confirms your observation that zinc metal reacts with cupric ions in solution to produce copper metal.

Example 2

What is the potential for the following cell?

\(\mathrm{Ag\, |\, Ag^+\:(1.0\: M)\, ||\, Li^+\:(1.0\: M)\, |\, Li}\)

Solution

From the table of standard reduction potentials, you find

\(\ce{Li+ + e^- \rightarrow Li} \hspace{15px} E^\circ = -3.045 \tag{3}\)

\(\ce{Ag \rightarrow Ag+ + e^-}\hspace{15px} E^* = -0.799 \tag{4}\)

According to the convention of the cell, the reduction reaction is on the right, and the half-cell on the left-hand side is an oxidation process. Thus, you add (3) and (4) to obtain

\(\ce{Li+ + Ag \rightarrow Ag+ + Li} \hspace{15px} \Delta E^\circ = \textrm{-3.844 V}\)

DISCUSSION: The negative potential indicates that the reverse reaction should be spontaneous. Some calculators use a lithium battery; the atomic weight of \(\ce{Li}\) (6.94) is much lower than that of \(\ce{Zn}\) (65.4).

As one more example, for the cell

\(\ce{Pt \,|\, H2 \,|\, H+ \,||\, Cl2 \,|\, Cl- \,|\, Pt}\)

the standard cell potential follows from the half-reactions:

\(\begin{align} \ce{Cl2 + 2 e^- \rightarrow 2 Cl-} &\hspace{15px}E^\circ = 1.36\\ \mathrm{H_2 \rightarrow 2 H^+ + 2 e^-} &\hspace{15px }E^* = 0.00\\ \overline{\hspace{140px}}&\overline{\hspace{100px}}\\ \ce{Cl2 + H2 \rightarrow 2 HCl} \hspace{15px} &\hspace{15px}\Delta E^\circ = 1.36\: \textrm{V} \end{align}\)

Chung (Peter) Chieh (Professor Emeritus, Chemistry, University of Waterloo)

Cell EMF is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
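Both worked examples follow the same arithmetic: \(\Delta E^\circ\) is the reduction potential E° of the right-hand half-cell plus the oxidation potential E* (= -E°) of the left-hand one. A minimal sketch (the function name is ours, values from the examples above):

```python
def standard_cell_potential(E_reduction, E_star_oxidation):
    """Delta E° = E°(reduction half-cell) + E*(oxidation half-cell),
    where E* = -E° of the oxidized couple."""
    return round(E_reduction + E_star_oxidation, 3)

# Example 1: Zn | Zn2+ || Cu2+ | Cu  (Cu2+ reduced, Zn oxidized)
print(standard_cell_potential(0.337, 0.763))    # 1.1 -> spontaneous
# Example 2: Ag | Ag+ || Li+ | Li  (Li+ reduced, Ag oxidized)
print(standard_cell_potential(-3.045, -0.799))  # -3.844 -> non-spontaneous
```

A positive result means the cell reaction as written is spontaneous; a negative result means the reverse reaction is.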
639
Characteristic Reactions of Select Metal Ions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Qualitative_Analysis/Characteristic_Reactions_of_Select_Metal_Ions
Characteristic Reactions of Antimony Ions (Sb³⁺): Antimony is brittle and silvery. Not very active, but reacts with oxygen, sulfur, and chlorine at high temperatures.

Characteristic Reactions of Aluminum Ions (Al³⁺): Silvery, rather soft. Very active, but protected by an oxide coating.

Characteristic Reactions of Ammonium Ion (NH₄⁺): Ammonium ion is formed by the reaction between acids and aqueous ammonia: NH₃(aq) + H⁺(aq) → NH₄⁺(aq). The ammonium ion behaves chemically like the ions of the alkali metals, particularly potassium ion, which is almost the same size. All ammonium salts are white and soluble.

Characteristic Reactions of Arsenic Ions (As³⁺): Arsenic solid is a gray, very brittle substance; it sublimes at 615 °C. Combines readily with sulfur and oxygen at high temperatures.

Characteristic Reactions of Barium (Ba²⁺): Silvery metal. Extremely active; reacts quickly with oxygen in air and with most non-metals.

Characteristic Reactions of Bismuth (Bi³⁺): Bismuth is hard and brittle, with a reddish cast. Rather inactive, but will dissolve in nitric acid or hot sulfuric acid.

Characteristic Reactions of Cadmium Ions (Cd²⁺): Cadmium is a silvery, crystalline metal, resembling zinc. Moderately active. Cd²⁺ is colorless in solution and forms complex ions readily.

Characteristic Reactions of Calcium Ions (Ca²⁺): Calcium is a rather soft, very active metal. Very similar to barium in its chemical properties.

Characteristic Reactions of Chromium Ions (Cr³⁺): Chromium is a silvery, rather brittle metal. Similar to aluminum, but exhibits several oxidation states.

Characteristic Reactions of Cobalt Ions (Co²⁺): Cobalt is a steel-gray, hard, tough metal. Dissolves easily in nitric acid and also in dilute hydrochloric and sulfuric acids.

Characteristic Reactions of Copper Ions (Cu²⁺): Reddish-yellow, rather inactive metal. Dissolves readily in nitric acid and in hot, concentrated sulfuric acid.

Characteristic Reactions of Iron (Fe³⁺): Iron is a gray, moderately active metal.

Characteristic Reactions of Lead Ions (Pb²⁺): Lead is a soft metal having little tensile strength, and it is the densest of the common metals excepting gold and mercury. It has a metallic luster when freshly cut but quickly acquires a dull color when exposed to moist air.

Characteristic Reactions of Magnesium Ions (Mg²⁺): Magnesium is a silvery metal that is quite active, reacting slowly with boiling (but not cold) water to give hydrogen and the rather insoluble magnesium hydroxide. It combines easily with oxygen and at high temperatures reacts with such nonmetals as the halogens, sulfur, and even nitrogen.

Characteristic Reactions of Manganese Ions (Mn²⁺): Manganese is a gray or reddish-white metal. Very hard and brittle. Very similar to iron in activity. Dissolves readily in dilute acids.

Characteristic Reactions of Mercury Ions (Hg²⁺ and Hg₂²⁺): Mercury is one of the few liquid elements. It dissolves in oxidizing acids, producing either Hg²⁺ or Hg₂²⁺, depending on which reagent is in excess. The metal is also soluble in aqua regia (a mixture of hydrochloric and nitric acids).

Characteristic Reactions of Nickel Ions (Ni²⁺): Nickel is a silvery-gray metal. Not oxidized by air under ordinary conditions. Easily dissolved in dilute nitric acid.

Characteristic Reactions of Silver Ions (Ag⁺): Silver is an inactive metal. It will react with hot concentrated H2SO4, with HNO3, and with aqua regia.

Characteristic Reactions of Strontium Ions (Sr²⁺): Strontium is an active metal, very similar to barium and calcium.

Characteristic Reactions of Tin Ions (Sn²⁺, Sn⁴⁺): Metallic tin is soft and malleable. It slowly dissolves in dilute nonoxidizing acids or more readily in hot concentrated HCl. It reacts with HNO3 to form metastannic acid, H2SnO3, a white substance insoluble in alkalies or acids. In neutral or only slightly acidic solutions, zinc displaces tin from its compounds, forming the metal.

Characteristic Reactions of Zinc Ions (Zn²⁺): Zinc is a bluish-gray metal. Quite active; burns readily in air to form white ZnO and combines with many nonmetals.

Thumbnail: Lead(II) iodide precipitates when solutions of potassium iodide and lead(II) nitrate are combined. (CC BY-SA 3.0; PRHaney).

This page titled Characteristic Reactions of Select Metal Ions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by James P. Birk.
640
Chromatography
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Chromatography
Chromatography is a method by which a mixture is separated by distributing its components between two phases. The stationary phase remains fixed in place while the mobile phase carries the components of the mixture through the medium being used. The stationary phase acts as a constraint on many of the components in a mixture, slowing them so that they move more slowly than the mobile phase. The movement of the components in the mobile phase is controlled by the strength of their interactions with the mobile and/or stationary phases. Because of differences in factors such as the solubility of certain components in the mobile phase and the strength of their affinities for the stationary phase, some components move faster than others, thus facilitating the separation of the components within the mixture.

Chromatographic Columns: Chromatography is an analytical technique that separates components in a mixture. Chromatographic columns are part of the instrumentation used in chromatography. Five chromatographic methods that use columns are gas chromatography (GC), liquid chromatography (LC), ion exchange chromatography (IEC), size exclusion chromatography (SEC), and chiral chromatography. The basic principles of chromatography can be applied to all five methods.

Chromatography: Chromatography is a method by which a mixture is separated by distributing its components between two phases. The stationary phase remains fixed in place while the mobile phase carries the components of the mixture through the medium being used. The stationary phase acts as a constraint on many of the components in a mixture, slowing them so that they move more slowly than the mobile phase.

Gas Chromatography: Gas chromatography is a term used to describe the group of analytical separation techniques used to analyze volatile substances in the gas phase. In gas chromatography, the components of a sample are dissolved in a solvent and vaporized in order to separate the analytes by distributing the sample between two phases: a stationary phase and a mobile phase. The mobile phase is a chemically inert gas that serves to carry the molecules of the analyte through the heated column.

High Performance Liquid Chromatography: High performance liquid chromatography (HPLC) is an analytical technique used for the separation of compounds soluble in a particular solvent.

Liquid Chromatography: Liquid chromatography is a technique used to separate a sample into its individual parts. This separation occurs based on the interactions of the sample with the mobile and stationary phases. Because there are many stationary/mobile phase combinations that can be employed when separating a mixture, there are several different types of chromatography that are classified based on the physical states of those phases. Liquid-solid column chromatography is the most popular chromatography technique.

Thumbnail: Two-dimensional chromatograph GCxGC-TOFMS at Chemical Faculty of GUT Gdańsk, Poland, 2016. (CC BY-SA 4.0; LukaszKatlewa).

Chromatography is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
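The idea that a stronger interaction with the stationary phase means slower migration is commonly quantified with the retention factor k: a component elutes at \(t_R = t_M(1 + k)\), where \(t_M\) is the time an unretained component takes to traverse the column. A minimal sketch with made-up values (the times and component names are illustrative only):

```python
def retention_time(t_mobile, k):
    """t_R = t_M * (1 + k), where k is the retention factor: the ratio of
    time a component spends held by the stationary phase to the time it
    spends moving with the mobile phase. k = 0 means no retention."""
    return t_mobile * (1 + k)

t_M = 2.0  # minutes for an unretained component (hypothetical column)
for name, k in [("unretained", 0.0), ("weakly retained", 1.5),
                ("strongly retained", 4.0)]:
    print(name, retention_time(t_M, k), "min")
# larger k (stronger affinity for the stationary phase) -> later elution
```

The spread in elution times produced by different k values is exactly what separates the mixture into distinct bands.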
641
Commercial Galvanic Cells
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Exemplars/Commercial_Galvanic_Cells
Because galvanic cells can be self-contained and portable, they can be used as batteries and fuel cells. A battery (storage cell) is a galvanic cell (or a series of galvanic cells) that contains all the reactants needed to produce electricity. In contrast, a fuel cell is a galvanic cell that requires a constant external supply of one or more reactants to generate electricity. In this section, we describe the chemistry behind some of the more common types of batteries and fuel cells.

There are two basic kinds of batteries: disposable, or primary, batteries, in which the electrode reactions are effectively irreversible and which cannot be recharged; and rechargeable, or secondary, batteries, which form an insoluble product that adheres to the electrodes. These batteries can be recharged by applying an electrical potential in the reverse direction. The recharging process temporarily converts a rechargeable battery from a galvanic cell to an electrolytic cell.

Batteries are cleverly engineered devices that are based on the same fundamental laws as galvanic cells. The major difference between batteries and the galvanic cells we have previously described is that commercial batteries use solids or pastes rather than solutions as reactants to maximize the electrical output per unit mass. The use of highly concentrated or solid reactants has another beneficial effect: the concentrations of the reactants and the products do not change greatly as the battery is discharged; consequently, the output voltage remains remarkably constant during the discharge process. This behavior is in contrast to that of the Zn/Cu cell, whose output decreases logarithmically as the reaction proceeds. When a battery consists of more than one galvanic cell, the cells are usually connected in series—that is, with the positive (+) terminal of one cell connected to the negative (−) terminal of the next, and so forth.
The overall voltage of the battery is therefore the sum of the voltages of the individual cells. The major difference between batteries and galvanic cells is that commercial batteries typically use solids or pastes rather than solutions as reactants to maximize the electrical output per unit mass. An obvious exception is the standard car battery, which uses solution-phase chemistry.

The dry cell, by far the most common type of battery, is used in flashlights, electronic devices such as the Walkman and Game Boy, and many other devices. Although the dry cell was patented in 1866 by the French chemist Georges Leclanché and more than 5 billion such cells are sold every year, the details of its electrode chemistry are still not completely understood. In spite of its name, the Leclanché dry cell is actually a “wet cell”: the electrolyte is an acidic water-based paste containing \(MnO_2\), \(NH_4Cl\), \(ZnCl_2\), graphite, and starch (part (a) in the figure). The half-reactions at the anode and the cathode can be summarized as follows:\[\ce{2MnO2(s) + 2NH^{+}4(aq) + 2e^{−} -> Mn2O3(s) + 2NH3(aq) + H2O(l)} \nonumber \]\[\ce{Zn(s) -> Zn^{2+}(aq) + 2e^{−}} \nonumber \]The \(\ce{Zn^{2+}}\) ions formed by the oxidation of \(\ce{Zn(s)}\) at the anode react with \(\ce{NH_3}\) formed at the cathode and \(\ce{Cl^{−}}\) ions present in solution, so the overall cell reaction is as follows:\[\ce{2MnO2(s) + 2NH4Cl(aq) + Zn(s) -> Mn2O3(s) + Zn(NH3)2Cl2(s) + H2O(l)} \label{Eq3} \]The dry cell produces about 1.55 V and is inexpensive to manufacture. It is not, however, very efficient in producing electrical energy because only the relatively small fraction of the \(\ce{MnO2}\) that is near the cathode is actually reduced and only a small fraction of the zinc anode is actually consumed as the cell discharges.
In addition, dry cells have a limited shelf life because the \(\ce{Zn}\) anode reacts spontaneously with \(\ce{NH4Cl}\) in the electrolyte, causing the case to corrode and allowing the contents to leak out.

The alkaline battery is essentially a Leclanché cell adapted to operate under alkaline, or basic, conditions. The half-reactions that occur in an alkaline battery are as follows:\[\ce{2MnO2(s) + H2O(l) + 2e^{−} -> Mn2O3(s) + 2OH^{−}(aq)} \nonumber \]\[\ce{Zn(s) + 2OH^{−}(aq) -> ZnO(s) + H2O(l) + 2e^{−}} \nonumber \]\[\ce{Zn(s) + 2MnO2(s) -> ZnO(s) + Mn2O3(s)} \nonumber \]This battery also produces about 1.5 V, but it has a longer shelf life and more constant output voltage as the cell is discharged than the Leclanché dry cell. Although the alkaline battery is more expensive to produce than the Leclanché dry cell, the improved performance makes this battery more cost-effective.

Although some of the small button batteries used to power watches, calculators, and cameras are miniature alkaline cells, most are based on a completely different chemistry. In these “button” batteries, the anode is a zinc–mercury amalgam rather than pure zinc, and the cathode uses either \(\ce{HgO}\) or \(\ce{Ag2O}\) as the oxidant rather than \(\ce{MnO2}\) (part (b) in the figure). The overall cell reactions and approximate outputs for these two types of button batteries are as follows (two half-reactions occur at the anode, but only the overall reactions are shown here):\[\ce{Zn(s) + HgO(s) -> ZnO(s) + Hg(l)} \nonumber \] (mercury cell, about 1.35 V)\[\ce{Zn(s) + Ag2O(s) -> ZnO(s) + 2Ag(s)} \nonumber \] (silver cell, about 1.55 V)The major advantages of the mercury and silver cells are their reliability and their high output-to-mass ratio. These factors make them ideal for applications where small size is crucial, as in cameras and hearing aids. The disadvantages are the expense and the environmental problems caused by the disposal of heavy metals, such as \(\ce{Hg}\) and \(\ce{Ag}\).

None of the batteries described above is actually “dry.” They all contain small amounts of liquid water, which adds significant mass and causes potential corrosion problems.
Consequently, substantial effort has been expended to develop water-free batteries. One of the few commercially successful water-free batteries is the lithium–iodine battery. The anode is lithium metal, and the cathode is a solid complex of \(I_2\). Separating them is a layer of solid \(LiI\), which acts as the electrolyte by allowing the diffusion of Li+ ions. The electrode reactions are as follows:\[I_{2(s)} + 2e^− \rightarrow {2I^-}_{(LiI)}\label{Eq11} \]\[2Li_{(s)} \rightarrow 2Li^+_{(LiI)} + 2e^− \label{Eq12} \]\[2Li_{(s)}+ I_{2(s)} \rightarrow 2LiI_{(s)} \label{Eq12a} \]with \(E_{cell} = 3.5 \, V\)As shown in part (c) in , a typical lithium–iodine battery consists of two cells separated by a nickel metal mesh that collects charge from the anode. Because of the high internal resistance caused by the solid electrolyte, only a low current can be drawn. Nonetheless, such batteries have proven to be long-lived (up to 10 yr) and reliable. They are therefore used in applications where frequent replacement is difficult or undesirable, such as in cardiac pacemakers and other medical implants and in computers for memory protection. These batteries are also used in security transmitters and smoke alarms. Other batteries based on lithium anodes and solid electrolytes are under development, using \(TiS_2\), for example, for the cathode.Dry cells, button batteries, and lithium–iodine batteries are disposable and cannot be recharged once they are discharged. Rechargeable batteries, in contrast, offer significant economic and environmental advantages because they can be recharged and discharged numerous times. As a result, manufacturing and disposal costs drop dramatically for a given number of hours of battery usage. 
Two common rechargeable batteries are the nickel–cadmium battery and the lead–acid battery, which we describe next.

The nickel–cadmium, or NiCad, battery is used in small electrical appliances and devices like drills, portable vacuum cleaners, and AM/FM digital tuners. It is a water-based cell with a cadmium anode and a highly oxidized nickel cathode that is usually described as the nickel(III) oxo-hydroxide, NiO(OH). As shown in the figure, the design maximizes the surface area of the electrodes and minimizes the distance between them, which decreases internal resistance and makes a rather high discharge current possible.

The electrode reactions during the discharge of a \(NiCad\) battery are as follows:\[2NiO(OH)_{(s)} + 2H_2O_{(l)} + 2e^− \rightarrow 2Ni(OH)_{2(s)} + 2OH^-_{(aq)} \label{Eq13} \]\[Cd_{(s)} + 2OH^-_{(aq)} \rightarrow Cd(OH)_{2(s)} + 2e^- \label{Eq14} \]\[Cd_{(s)} + 2NiO(OH)_{(s)} + 2H_2O_{(l)} \rightarrow Cd(OH)_{2(s)} + 2Ni(OH)_{2(s)} \label{Eq15} \]\(E_{cell} = 1.4 V\)

Because the products of the discharge half-reactions are solids that adhere to the electrodes [Cd(OH)2 and Ni(OH)2], the overall reaction is readily reversed when the cell is recharged. Although NiCad cells are lightweight, rechargeable, and high capacity, they have certain disadvantages. For example, they tend to lose capacity quickly if not allowed to discharge fully before recharging, they do not store well for long periods when fully charged, and they present significant environmental and disposal problems because of the toxicity of cadmium.

A variation on the NiCad battery is the nickel–metal hydride battery (NiMH) used in hybrid automobiles, wireless communication devices, and mobile computing.
The overall chemical equation for this type of battery is as follows:\[NiO(OH)_{(s)} + MH \rightarrow Ni(OH)_{2(s)} + M_{(s)} \label{Eq16} \]The NiMH battery has a 30%–40% improvement in capacity over the NiCad battery; it is more environmentally friendly, so storage, transportation, and disposal are not subject to environmental control; and it is not as sensitive to recharging memory. It is, however, subject to a 50% greater self-discharge rate, a limited service life, and higher maintenance, and it is more expensive than the NiCad battery.

Directive 2006/66/EC of the European Union prohibits the placing on the market of portable batteries that contain more than 0.002% of cadmium by weight. The aim of this directive was to improve “the environmental performance of batteries and accumulators.”

The lead–acid battery is used to provide the starting power in virtually every automobile and marine engine on the market. Marine and car batteries typically consist of multiple cells connected in series. The total voltage generated by the battery is the potential per cell (E°cell) times the number of cells.

As shown in the figure, the anode of each cell in a lead storage battery is a plate or grid of spongy lead metal, and the cathode is a similar grid containing powdered lead dioxide (\(PbO_2\)). The electrolyte is usually an approximately 37% solution (by mass) of sulfuric acid in water, with a density of 1.28 g/mL (about 4.5 M \(H_2SO_4\)). Because the redox active species are solids, there is no need to separate the electrodes.
The electrode reactions in each cell during discharge are as follows:\[PbO_{2(s)} + HSO^−_{4(aq)} + 3H^+_{(aq)} + 2e^− \rightarrow PbSO_{4(s)} + 2H_2O_{(l)} \label{Eq17} \]with \(E^°_{cathode} = 1.685 \; V\)\[Pb_{(s)} + HSO^−_{4(aq)} \rightarrow PbSO_{4(s) }+ H^+_{(aq)} + 2e^−\label{Eq18} \]with \(E^°_{anode} = −0.356 \; V\)\[Pb_{(s)} + PbO_{2(s)} + 2HSO^−_{4(aq)} + 2H^+_{(aq)} \rightarrow 2PbSO_{4(s)} + 2H_2O_{(l)} \label{Eq19} \]and \(E^°_{cell} = 2.041 \; V\)

As the cell is discharged, a powder of \(PbSO_4\) forms on the electrodes. Moreover, sulfuric acid is consumed and water is produced, decreasing the density of the electrolyte and providing a convenient way of monitoring the status of a battery by simply measuring the density of the electrolyte. This is often done with the use of a hydrometer. A hydrometer can be used to test the specific gravity of each cell as a measure of its state of charge (www.youtube.com/watch?v=SRcOqfL6GqQ).

When an external voltage in excess of 2.04 V per cell is applied to a lead–acid battery, the electrode reactions reverse, and \(PbSO_4\) is converted back to metallic lead and \(PbO_2\). If the battery is recharged too vigorously, however, electrolysis of water can occur:\[ 2H_2O_{(l)} \rightarrow 2H_{2(g)} +O_{2 (g)} \label{EqX} \]This results in the evolution of potentially explosive hydrogen gas. The gas bubbles formed in this way can dislodge some of the \(PbSO_4\) or \(PbO_2\) particles from the grids, allowing them to fall to the bottom of the cell, where they can build up and cause an internal short circuit. Thus the recharging process must be carefully monitored to optimize the life of the battery. With proper care, however, a lead–acid battery can be discharged and recharged thousands of times.
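The per-cell potential above, and the familiar 12 V rating of a six-cell car battery, follow directly from the half-cell values quoted in the text (a quick numeric check; the variable names are ours):

```python
E_cathode = 1.685   # V, PbO2 reduction half-reaction
E_anode = -0.356    # V, Pb couple expressed as a reduction potential

# E°(cell) = E°(cathode) - E°(anode)
E_cell = round(E_cathode - E_anode, 3)
print(E_cell)                 # 2.041 V per cell
print(round(6 * E_cell, 2))   # 12.25 V for six cells in series
```

Six cells in series give just over 12 V, which is why automotive lead–acid batteries are nominally "12-volt" batteries.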
In automobiles, the alternator supplies the electric current that causes the discharge reaction to reverse.

A fuel cell is a galvanic cell that requires a constant external supply of reactants because the products of the reaction are continuously removed. Unlike a battery, it does not store chemical or electrical energy; a fuel cell allows electrical energy to be extracted directly from a chemical reaction. In principle, this should be a more efficient process than, for example, burning the fuel to drive an internal combustion engine that turns a generator, which is typically less than 40% efficient, and in fact, the efficiency of a fuel cell is generally between 40% and 60%. Unfortunately, significant cost and reliability problems have hindered the wide-scale adoption of fuel cells. In practice, their use has been restricted to applications in which mass may be a significant cost factor, such as US manned space vehicles. These space vehicles use a hydrogen/oxygen fuel cell that requires a continuous input of H2(g) and O2(g), as illustrated in the figure. The electrode reactions are as follows:\[O_{2(g)} + 4H^+ + 4e^− \rightarrow 2H_2O_{(g)} \label{Eq20} \]\[2H_{2(g)} \rightarrow 4H^+ + 4e^− \label{Eq21} \]\[2H_{2(g)} + O_{2(g)} \rightarrow 2H_2O_{(g)} \label{Eq22} \]The overall reaction represents an essentially pollution-free conversion of hydrogen and oxygen to water, which in space vehicles is then collected and used. Although this type of fuel cell should produce 1.23 V under standard conditions, in practice the device achieves only about 0.9 V. One of the major barriers to achieving greater efficiency is the fact that the four-electron reduction of \(O_2 (g)\) at the cathode is intrinsically rather slow, which limits the current that can be achieved.
All major automobile manufacturers have major research programs involving fuel cells: one of the most important goals is the development of a better catalyst for the reduction of \(O_2 (g)\).Commercial batteries are galvanic cells that use solids or pastes as reactants to maximize the electrical output per unit mass. A battery is a contained unit that produces electricity, whereas a fuel cell is a galvanic cell that requires a constant external supply of one or more reactants to generate electricity. One type of battery is the Leclanché dry cell, which contains an electrolyte in an acidic water-based paste. This battery is called an alkaline battery when adapted to operate under alkaline conditions. Button batteries have a high output-to-mass ratio; lithium–iodine batteries consist of a solid electrolyte; the nickel–cadmium (NiCad) battery is rechargeable; and the lead–acid battery, which is also rechargeable, does not require the electrodes to be in separate compartments. A fuel cell requires an external supply of reactants as the products of the reaction are continuously removed. In a fuel cell, energy is not stored; electrical energy is provided by a chemical reaction.Commercial Galvanic Cells is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
642
Comparing Strengths of Oxidants and Reductants
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Comparing_Strengths_of_Oxidants_and_Reductants
We can measure the standard potentials for a wide variety of chemical substances, some of which are listed in Table P2. These data allow us to compare the oxidative and reductive strengths of a variety of substances. The half-reaction for the standard hydrogen electrode (SHE) lies more than halfway down the list in Table \(\PageIndex{1}\). All reactants that lie below the SHE in the table are stronger oxidants than H+, and all those that lie above the SHE are weaker. The strongest oxidant in the table is F2, with a standard electrode potential of 2.87 V. This high value is consistent with the high electronegativity of fluorine and tells us that fluorine has a stronger tendency to accept electrons (it is a stronger oxidant) than any other element.Not all oxidizers and reducers are created equal. The standard reduction potentials in Table \(\PageIndex{1}\) can be interpreted as a ranking of substances according to their oxidizing and reducing power. Strong oxidizing agents are typically compounds with elements in high oxidation states or with high electronegativity, which gain electrons in the redox reaction. Examples of strong oxidizers include hydrogen peroxide, permanganate, and osmium tetroxide. Reducing agents are typically electropositive elements such as hydrogen, lithium, sodium, iron, and aluminum, which lose electrons in redox reactions. Hydrides (compounds that contain hydrogen in the formal −1 oxidation state), such as sodium hydride, sodium borohydride, and lithium aluminum hydride, are often used as reducing agents in organic and organometallic reactions.Similarly, all species in Table \(\PageIndex{1}\) that lie above H2 are stronger reductants than H2, and those that lie below H2 are weaker. The strongest reductant in the table is thus metallic lithium, with a standard electrode potential of −3.04 V. This fact might be surprising because cesium, not lithium, is the least electronegative element. 
The apparent anomaly can be explained by the fact that electrode potentials are measured in aqueous solution, where intermolecular interactions are important, whereas ionization potentials and electron affinities are measured in the gas phase. Due to its small size, the Li+ ion is stabilized in aqueous solution by strong electrostatic interactions with the negative dipole end of water molecules. These interactions result in a significantly greater ΔHhydration for Li+ compared with Cs+. Lithium metal is therefore the strongest reductant (most easily oxidized) of the alkali metals in aqueous solution.The standard reduction potentials can be interpreted as a ranking of substances according to their oxidizing and reducing power. Species in Table \(\PageIndex{1}\) that lie above H2 are stronger reducing agents (more easily oxidized) than H2. Species that lie below \(\ce{H2}\) are stronger oxidizing agents.Because the half-reactions shown in Table \(\PageIndex{1}\) are arranged in order of their E° values, we can use the table to quickly predict the relative strengths of various oxidants and reductants. Any species on the left side of a half-reaction will spontaneously oxidize any species on the right side of another half-reaction that lies below it in the table. Conversely, any species on the right side of a half-reaction will spontaneously reduce any species on the left side of another half-reaction that lies above it in the table. We can use these generalizations to predict the spontaneity of a wide variety of redox reactions (E°cell > 0), as illustrated in Example \(\PageIndex{1}\).Example \(\PageIndex{1}\): Silver SulfideThe black tarnish that forms on silver objects is primarily \(\ce{Ag2S}\). 
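Because the table is simply an ordering by E°, identifying the strongest oxidant and reductant amounts to a max/min over the potentials. A minimal sketch using a handful of common textbook E° values for illustration (not the full Table P2):

```python
# Sketch: rank a few redox couples by standard reduction potential.
# The E° values below are common textbook numbers, given for illustration.
couples = {
    "F2/F-":    2.87,
    "Cu2+/Cu":  0.34,
    "2H+/H2":   0.00,   # SHE, the reference point
    "Zn2+/Zn": -0.76,
    "Li+/Li":  -3.04,
}

# Higher E°: the oxidized form of the couple is a stronger oxidant.
strongest_oxidant = max(couples, key=couples.get)
# Lower E°: the reduced form of the couple is a stronger reductant.
strongest_reductant = min(couples, key=couples.get)

print(strongest_oxidant)    # F2/F-
print(strongest_reductant)  # Li+/Li
```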
The half-reaction for reversing the tarnishing process is as follows:\[\ce{Ag2S(s) + 2e^{−} → 2Ag(s) + S^{2−} (aq)} \quad E°=−0.69\, V \nonumber\]Given: reduction half-reaction, standard electrode potential, and list of possible reductantsAsked for: reductants for \(\ce{Ag2S}\), strongest reductant, and potential reducing agent for removing tarnishStrategy:A From their positions in Table \(\PageIndex{1}\), decide which species can reduce \(\ce{Ag2S}\). Determine which species is the strongest reductant.B Use Table \(\PageIndex{1}\) to identify a reductant for \(\ce{Ag2S}\) that is a common household product.SolutionWe can solve the problem in one of two ways: compare the relative positions of the four possible reductants with that of the Ag2S/Ag couple in Table \(\PageIndex{1}\) or compare E° for each species with E° for the Ag2S/Ag couple (−0.69 V).Example \(\PageIndex{2}\)Use the data in Table \(\PageIndex{1}\) to determine whether each reaction is likely to occur spontaneously under standard conditions:Given: redox reaction and list of standard electrode potentials (Table \(\PageIndex{1}\))Asked for: reaction spontaneityStrategy:SolutionB Adding the two half-reactions gives the overall reaction: \[\begin{align*}\textrm{cathode} &: \ce{Be^{2+}(aq)} +\ce{2e^-} \rightarrow \ce{Be(s)} \\ \textrm{anode} &: \ce{Sn(s)} \rightarrow \ce{Sn^{2+}(aq)} +\ce{2e^-} \\ \hline \textrm{overall} &: \ce{Sn(s)} + \ce{Be^{2+}(aq)} \rightarrow \ce{Sn^{2+}(aq)} + \ce{Be(s)} \end{align*}\] with \[\begin{align*} E^\circ_{\textrm{cathode}} &=\textrm{-1.99 V} \\[4pt] E^\circ_{\textrm{anode}} &=\textrm{-0.14 V} \\[4pt] E^\circ_{\textrm{cell}} &=E^\circ_{\textrm{cathode}}-E^\circ_{\textrm{anode}} \\[4pt] &=-\textrm{1.85 V} \end{align*}\]The standard cell potential is quite negative, so the reaction will not occur spontaneously as written. That is, metallic tin cannot reduce Be2+ to beryllium metal under standard conditions. 
Instead, the reverse process, the reduction of stannous ions (Sn2+) by metallic beryllium, which has a positive value of E°cell, will occur spontaneously.B The two half-reactions and their corresponding potentials are as follows: \[\begin{align*}\textrm{cathode} &: \ce{MnO_2(s)}+\ce{4H^+(aq)}+\ce{2e^-} \rightarrow\ce{Mn^{2+}(aq)}+\ce{2H_2O(l)} \\ \textrm{anode} &: \ce{H_2O_2(aq)}\rightarrow\ce{O_2(g)}+\ce{2H^+(aq)}+\ce{2e^-} \\\hline \textrm{overall} &: \ce{MnO_2(s)}+\ce{H_2O_2(aq)}+\ce{2H^+(aq)}\rightarrow\ce{O_2(g)}+\ce{Mn^{2+}(aq)}+\ce{2H_2O(l)} \end{align*}\] with \[\begin{align*} E^\circ_{\textrm{cathode}} &=\textrm{1.23 V} \\[4pt] E^\circ_{\textrm{anode}} &=\textrm{0.70 V} \\[4pt] E^\circ_{\textrm{cell}} &=E^\circ_{\textrm{cathode}}-E^\circ_{\textrm{anode}} \\[4pt] &=+\textrm{0.53 V} \end{align*}\]The standard potential for the reaction is positive, indicating that under standard conditions, it will occur spontaneously as written. Hydrogen peroxide will reduce \(\ce{MnO2}\), and oxygen gas will evolve from the solution.Exercise \(\PageIndex{2}\)Use the data in Table \(\PageIndex{1}\) to determine whether each reaction is likely to occur spontaneously under standard conditions:spontaneous with \(E^o_{cell} = 1.61\, V - 1.396\, V = 0.214\, V\)nonspontaneous with \(E^°_{cell} = −0.20\, V\)Although the sign of \(E^o_{cell}\) tells us whether a particular redox reaction will occur spontaneously under standard conditions, it does not tell us to what extent the reaction proceeds, and it does not tell us what will happen under nonstandard conditions. To answer these questions requires a more quantitative understanding of the relationship between electrochemical cell potential and chemical thermodynamics.The relative strengths of various oxidants and reductants can be predicted using \(E^o\) values. The oxidative and reductive strengths of a variety of substances can be compared using standard electrode potentials. 
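The spontaneity rule used in these examples, E°cell = E°cathode − E°anode > 0, is easy to check numerically. A short sketch reproducing the two calculations (potentials taken from the worked examples; the helper name is illustrative):

```python
# Sketch: predict redox spontaneity from standard electrode potentials.
def cell_potential(e_cathode, e_anode):
    """E°cell = E°cathode - E°anode; positive means spontaneous as written."""
    return e_cathode - e_anode

# Sn(s) + Be2+ -> Sn2+ + Be(s): Be2+/Be is the cathode, Sn2+/Sn the anode.
e1 = cell_potential(-1.99, -0.14)
print(f"{e1:+.2f} V, spontaneous: {e1 > 0}")  # -1.85 V, not spontaneous

# MnO2 + H2O2 + 2H+ -> O2 + Mn2+ + 2H2O
e2 = cell_potential(1.23, 0.70)
print(f"{e2:+.2f} V, spontaneous: {e2 > 0}")  # positive, so spontaneous
```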
Apparent anomalies can be explained by the fact that electrode potentials are measured in aqueous solution, which allows for strong intermolecular electrostatic interactions, and not in the gas phase.Comparing Strengths of Oxidants and Reductants is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
643
Concentration Cell
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Voltaic_Cells/Electrochemical_Cells_under_Nonstandard_Conditions/Concentration_Cell
A concentration cell is a galvanic cell that is comprised of two half-cells with the same electrodes, but differing in ion concentrations. A concentration cell acts to dilute the more concentrated solution and concentrate the more dilute solution, creating a voltage as the cell reaches an equilibrium. This is achieved by transferring the electrons from the half-cell with the lower concentration to the half-cell with the higher concentration.The standard electrode potential, commonly written as Eocell, of a concentration cell is equal to zero because the electrodes are identical. But, because the ion concentrations are different, there is a potential difference between the two half-cells. One can find this potential difference via the Nernst equation,\[ E_{cell} = E^\circ_{cell} - \dfrac{0.0592}{n}\log Q \]at 25 °C. The E stands for the voltage that can be measured using a voltmeter (make sure if the voltmeter measures it in millivolts that you convert the number before using it in the equation). Note that the Nernst equation indicates that cell potential is dependent on concentration, which results directly from the dependence of free energy on concentration. Remember that to find Q you use this equation:\[ \ce{$aA$ + $bB$ <=> $cC$ + $dD$} \]\[ Q = \frac{[C]^{c}[D]^{d}}{[A]^{a}[B]^{b}} \]When Q = 1, meaning that the concentrations of the products and reactants are the same, the log term equals zero. When this occurs, the Ecell is equal to the Eocell.Another way to use the Eocell, or to find it, is using the equation below.\[ E^\circ_{cell} = E^\circ_{cathode} - E^\circ_{anode}\]Fig.1 An example of a concentration cellThese concepts are useful for understanding the electron transfer and what occurs in half-cells.The two compartments of a cell must be separated so they do not mix, but cannot be completely separated with no way for ions to be transferred. 
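For a concentration cell the Nernst equation reduces to E = −(0.0592/n) log Q, since E°cell = 0. A minimal sketch, using an illustrative pair of copper(II) concentrations (not from the text):

```python
import math

# Sketch: cell potential of a concentration cell at 25 °C.
# E°cell = 0 because the electrodes are identical; Q is the dilute (anode)
# concentration over the concentrated (cathode) one.
def concentration_cell_potential(c_dilute, c_concentrated, n):
    """E = -(0.0592/n) * log10(Q) with Q = c_dilute / c_concentrated."""
    q = c_dilute / c_concentrated
    return -(0.0592 / n) * math.log10(q)

# Illustrative example: Cu2+ half-cells at 0.01 M and 1.0 M, n = 2.
print(f"{concentration_cell_potential(0.01, 1.0, 2):.4f} V")  # 0.0592 V
```

A hundredfold concentration ratio with n = 2 gives exactly one "Nernst slope" of 59.2 mV, a convenient sanity check.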
A wire cannot be used to connect the two compartments because it would react with the ions that flow from one side to another. Because of this, a salt bridge is an important part of a concentration cell. It solves the major problem of charge beginning to build up in the right beaker. This buildup is due to electrons moving from the left side, or left beaker, to the right side, or right beaker. The salt bridge itself can be in a few different forms, such as a salt solution in a U-tube or a porous barrier (direct contact). It evens the charge by moving ions to the left side, or left beaker. In the written expression which shows what is occurring in specific reactions, the salt bridge is represented by the double lines. An example of this would be:\[ \text{Zn(s)} | \text{Zn}^{2+} (1~\text{M}) || \text{Cu}^{2+} (1~\text{M}) | \text{Cu} \]The double lines between the Zn2+(1 M) and the Cu2+(1 M) signify the salt bridge. The single lines, however, do not represent bridges; they represent phase boundaries, for example, between solid zinc and aqueous zinc ions. If there is a comma where you would expect to see a single line, this is not incorrect. It simply means that no phase boundary is crossed.In this type of reaction, there are two electrodes which are involved. These are known as the anode and the cathode, or the left and right side, respectively. The anode is the side which is losing electrons (oxidation) while the cathode is the side which is gaining electrons (reduction).A voltmeter (not to be confused with a voltameter, an instrument that measures electric charge) is used to measure the cell potential that is passed between the two sides. It is typically located in between the two cells. This cell potential (also known as an electromotive force) occurs due to the flow of electrons. The value it shows can be negative or positive depending on the direction in which the electrons are flowing. 
If the potential is positive, the transfer of electrons is spontaneous and the reverse reaction is nonspontaneous. Conversely, if the value of the potential is negative, the transfer of electrons is nonspontaneous and the reverse reaction is spontaneous. The voltmeter measures this potential in volts or millivolts.The tendency of electrons to flow from one chemical to another is known as electrochemistry. This is what occurs in a concentration cell. The electrons flow from the left side (or left beaker) to the right side (or right beaker). Because the left side is losing electrons and the right is gaining them, the left side is called the oxidation side and the right side is the reduction side. Although you could switch the two to be on opposite sides, this is the general way in which the setup is done. The oxidation side is called the anode and the reduction side is the cathode. It is the flow of the electrons that causes one side to be oxidized and the other to be reduced.Corrosion can occur in a concentration cell when the metal being used is in contact with different concentrations, causing parts of the metal to have a different electric potential from other parts. One element that is often linked to this corrosion is oxygen. Corrosion occurs in the areas in which there is a low oxygen concentration.This can be somewhat prevented through sealing off the cell and keeping it clean, but even this cannot prevent corrosion from occurring at some point.Corrosion is most frequently a problem when the cell is in contact with soil. Because the variations that occur within soil are much greater than the variations that occur within a fluid, contact with soil often causes corrosion for the cell.A pH meter is a specific type of concentration cell that uses the basic setup of a concentration cell to determine the pH, or the acidity/basicity, of a specific solution. It is comprised of two electrodes and a voltmeter. 
One of the electrodes, the glass electrode, has two components: a metal wire (commonly silver coated with silver chloride) and a separate semi-porous glass part filled with a potassium chloride solution with a pH of 7 surrounding the AgCl. The other electrode is called the reference electrode, which contains a potassium chloride solution surrounding a silver chloride wire. The purpose of this second electrode is to act as a comparison for the solution being tested. When the glass electrode comes into contact with a solution of different pH, an electric potential is created due to the reaction of the hydrogen ions with the metal ions. This potential is then measured by the voltmeter, which is connected to the electrode. The higher the voltage, the more hydrogen ions the solution contains, which means the solution is more acidic.Fig.2 An example of a pH meter1.) For the concentration cell below, determine the flow of electrons.\[ \text{Fe} | \text{Fe}^{2+}(0.01~\text{M}) || \text{Fe}^{2+}(0.1~\text{M})| \text{Fe}\]Solution: The cells will reach equilibrium if electrons are transferred from the left side to the right side. As a result, Fe2+ will be formed in the left compartment and metal iron will be deposited on the right electrode.2.) Calculate the cell potential for a concentration cell with two silver electrodes in contact with silver ion solutions of concentrations 0.2 M and 3.0 M.Solution:Reaction:\[ \ce{Ag^{+} + e^- -> Ag(s)}\]Cell Diagram:\[ \text{Ag(s)}|\text{Ag}^{+}(0.2~\text{M})||\text{Ag}^{+}(3.0~\text{M})|\text{Ag(s)}\]Nernst Equation:\[ E = E^\circ - \dfrac{0.0592}{1}\log \dfrac{0.2}{3.0}\]**Eo= 0 for concentration cellsE = 0.0696 V3.) 
Calculate the concentration of the unknown, given the equation below and a cell potential of 0.26 V.\[ \text{Ag}|\text{Ag}^+(x~\text{M})||\text{Ag}^+(1.0~\text{M})|\text{Ag} \]\[ E = E^\circ - \dfrac{0.0592}{1} \log \dfrac{x}{1.0}\]\[0.26 = 0 - 0.0592 \log \dfrac{x}{1.0}\]\[4.392 = -\log(x) + \log(1.0)\]\[\log(x) = \log(1.0) - 4.392\]x = 4.06 × 10^−5 MConcentration Cell is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
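The worked problem above can be re-checked by solving the Nernst equation for the unknown concentration (n = 1 for the silver couple, E°cell = 0 for a concentration cell). A short sketch:

```python
# Sketch: solve the concentration-cell Nernst equation for the unknown
# Ag+ concentration, given the measured cell potential.
E_cell = 0.26   # V, measured
n = 1           # electrons transferred for Ag+/Ag
# 0.26 = -(0.0592/n) * log10(x / 1.0)  =>  log10(x) = -E_cell * n / 0.0592
log_x = -E_cell * n / 0.0592
x = 10 ** log_x
print(f"log10(x) = {log_x:.3f}, x = {x:.2e} M")  # log10(x) = -4.392, x = 4.06e-05 M
```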
644
Confirmatory Tests
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Qualitative_Analysis/Confirmatory_Tests
Confirmatory tests should be performed on separate solutions of some of your ions, in order to see what these tests look like before using them on an unknown. Generally a confirmatory test is used only after other reactions have been used to isolate the ion. When working with stock solutions of an ion, dilute 1 drop with 9 drops of water to simulate the concentration that would exist in an unknown. In a mixture or a solution obtained from an unknown or known mixture, the dilution is not necessary since the ions have already been diluted compared to the stock solutions. In the tests described here, it is assumed that 10 drops of solution will be used. If you change the amount of solution of the ion being tested, you must also adjust the amounts of the reagents to be added.Even if hydrogen sulfide was not used for separation of ions, it may be useful for confirmatory tests. The most convenient and safe source of \(\ce{H2S}\) is thioacetamide. When heated, aqueous solutions of thioacetamide hydrolyze to produce \(\ce{H2S}\), whether in acid solution, in basic solution using ammonia, or in basic solution using strong base.To 10 drops of solution, add 6 M \(\ce{NH3(aq)}\) until neutral. Make the solution acidic by adding one or more drops of 6 M \(\ce{HCl}\). Add 1 mL of thioacetamide and stir well. Heat the test tube in the boiling water bath for 5 minutes. If antimony is present, a red-orange precipitate of antimony sulfide should form. This same test will also work with arsenic(III), tin(II), and tin(IV). These precipitates are yellow, brown, and yellow, respectively. Three related procedures can be used.a. Follow the procedure described for antimony(III). The precipitate should be yellow.b. Follow the procedure described for antimony(III), but first make the solution basic with aqueous ammonia. If a precipitate forms on addition of ammonia, continue to add ammonia until the precipitate dissolves, before adding the thioacetamide.c. Add ammonia, as described in procedure b. 
Then add 10 drops of water and 10 drops of 6 M \(\ce{NaOH}\). A white precipitate should form. If it does not form, either increase the amount of \(\ce{NaOH}\), or do not add the 10 drops of water. Centrifuge and discard the solution. Wash the precipitate twice with a mixture of 1 mL of water and 1 mL of 6 M \(\ce{NH3(aq)}\). Dissolve the precipitate by adding 6 M \(\ce{HCl}\) drop by drop until no precipitate remains. Add 6 M \(\ce{NaOH}\) until the solution is just basic. A white precipitate of \(\ce{Cd(OH)2}\) will form. Then add 1 mL of 1 M thioacetamide and heat the mixture in a boiling water bath for 5 minutes. A yellow precipitate of \(\ce{CdS}\) will form. Follow the procedure described for antimony(III). The precipitate should be black.Try to dissolve the precipitate in 1 mL of 12 M \(\ce{HCl}\) with heating. If it does not dissolve in \(\ce{HCl}\), try the same procedure with 1 mL of 6 M (dilute) \(\ce{HNO3}\). If it still does not dissolve, then try to dissolve it in a mixture of 1 mL of 6 M \(\ce{HCl}\) and 1 mL of 6 M \(\ce{HNO3}\), heating for 2 minutes in a water bath. Most of the black precipitate should dissolve. Mercury(II) sulfide is the least soluble of the metal sulfides.To 10 drops of solution, add 2 drops of aluminon. Add 6 M \(\ce{NH3(aq)}\) dropwise until the solution is basic to litmus. White \(\ce{Al(OH)3}\) should form and adsorb the aluminon, which colors it red. The solution should become colorless. Place 10 drops of solution in a 30 mL beaker. Place a moistened piece of red litmus paper on the bottom of a small watch glass. Add 1 mL of 6 M \(\ce{NaOH}\) to the sample in the beaker. Cover the beaker with the watch glass. Gently heat the solution to near the boiling point. Do not allow the solution to splatter onto the litmus paper. The paper should turn blue from ammonia fumes. Two procedures can be used.a. To 10 drops of solution, add a freshly prepared sodium stannite solution dropwise. 
A black precipitate should form.Sodium Stannite: The sodium stannite solution is prepared by diluting 4 drops of 0.25 M \(\ce{SnCl2}\) with 2 mL water and adding 6 M \(\ce{NaOH}\) dropwise, stirring well after each drop, until a permanent precipitate forms. Then add excess NaOH to dissolve this precipitate.b. To 10 drops of solution, add 2 drops of 6 M \(\ce{HCl}\). Then add water dropwise until a white precipitate forms. The precipitate may not be very pronounced, but may instead show up as a turbidity of the solution. Two procedures may be used.a. To 10 drops of solution, add aqueous ammonia to make the solution basic. Then add \(\ce{(NH4)2C2O4}\) (ammonium oxalate) solution dropwise. A white precipitate should form.b. Perform a flame test. The flame should turn orange-red.To 10 drops of solution, add 1 mL of 3% \(\ce{H2O2}\). Then add 6 M \(\ce{NaOH}\) dropwise until the solution is basic. Heat in a boiling water bath for a few minutes. A yellow solution of \(\ce{CrO4^{2-}}\) should form. To 10 drops of solution, add 5 drops of 0.5 M \(\ce{KNCS}\). To this mixture, add an equal volume of acetone and mix. A blue color indicates the formation of \(\ce{[Co(NCS)4]^{2-}}\). To 10 drops of solution, add 0.5 M \(\ce{K4Fe(CN)6}\) dropwise until a red-brown precipitate forms. Two procedures can be used.a. To 10 drops of solution, add 0.5 M \(\ce{K4Fe(CN)6}\) dropwise until a dark blue precipitate forms. b. To 10 drops of solution, add 1 or 2 drops of 0.5 M \(\ce{KNCS}\). The solution should become deep red due to formation of \(\ce{FeNCS^{2+}}\). To 10 drops of solution, add 2 drops of magnesium reagent, 4-(p-nitrophenylazo)resorcinol. Then add 6 M \(\ce{NaOH}\) dropwise. A "blue lake" -- a precipitate of \(\ce{Mg(OH)2}\) with adsorbed magnesium reagent -- forms. To 10 drops of solution, add 1 mL of 6 M \(\ce{HNO3}\). Add a spatula-tip quantity of solid sodium bismuthate (\(\ce{NaBiO3}\)) and stir well. There should be a slight excess of solid bismuthate. 
Wait about 1 minute and then centrifuge the mixture. The solution should be purple due to the presence of \(\ce{MnO4^{-}}\). To 10 drops of solution, add 6 M \(\ce{HCl}\) to form a white precipitate. Centrifuge and discard the centrifugate. Add 6 M \(\ce{NH3(aq)}\) to the precipitate. The color of the precipitate should change to gray or black due to formation of mercury metal. To 10 drops of solution, add 1 or more drops of 0.25 M \(\ce{SnCl2}\). A grayish precipitate should form. The precipitate might be white or black instead of gray.To 10 drops of solution, add 6 M \(\ce{NH3(aq)}\) until the solution is basic. Add 2 or 3 drops of dimethylglyoxime reagent (DMG). A rose-red precipitate of \(\ce{Ni(DMG)2}\) should form.To 10 drops of solution, add 6 M \(\ce{HCl}\) dropwise, with shaking, until precipitation is complete. Centrifuge and decant. Discard the centrifugate. Suspend the silver chloride precipitate in 1 mL of water and add 6 M \(\ce{NH3(aq)}\) dropwise until the precipitate dissolves. Acidify the solution with 6 M \(\ce{HNO3}\) and the white precipitate should reappear.To 10 drops of the solution, add 5 drops of ethanol (\(\ce{C2H5OH}\)). Then add 3 M (6 N) \(\ce{H2SO4}\) dropwise until precipitation is complete. Heat the sample in a water bath for a few minutes. Test again for complete precipitation by adding 1 more drop of sulfuric acid. Centrifuge and decant the supernatant. If any further tests will be carried out on the supernatant liquid, heat it in a boiling water bath to expel the \(\ce{C2H5OH}\), which could interfere with further tests. The white precipitate of \(\ce{SrSO4}\) confirms the presence of strontium.To 10 drops of solution, add 1 or 2 drops of 0.2 M \(\ce{HgCl2}\). A white precipitate should form, but it could be gray or black instead.To 10 drops of solution, add 6 M \(\ce{NH3(aq)}\) to give a neutral pH. Then make the solution slightly acidic to litmus paper with 6 M \(\ce{HCl}\). 
Add 1 or 2 drops of 0.5 M \(\ce{K4Fe(CN)6}\) and stir. A gray-white precipitate of \(\ce{K2Zn[Fe(CN)6]2}\) is formed. This page titled Confirmatory Tests is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by James P. Birk.
645
Corrosion Basics
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Exemplars/Corrosion/Corrosion_Basics
Corrosion is a process through which metals in manufactured states return to their natural oxidation states. This process is a reduction-oxidation reaction in which the metal is being oxidized by its surroundings, often the oxygen in air. This reaction is both spontaneous and electrochemically favored. Corrosion is essentially the creation of voltaic, or galvanic, cells where the metal in question acts as an anode and generally deteriorates or loses functional stability.Corrosion Basics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
648
Countercurrent Separations
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Countercurrent_Separations
In 1949, Lyman Craig introduced an improved method for separating analytes with similar distribution ratios.1 The technique, which is known as a countercurrent liquid–liquid extraction, is outlined in , and L2 is the lower phase introduced at step 2 (during the third extraction). Finally, the partitioning of analyte in any extraction tube results in a fraction p remaining in the upper phase, and a fraction q remaining in the lower phase. Values of q are calculated using equation A16.1, which is identical to equation 7.26 in Chapter 7.\[(q_{aq})_1 = \dfrac{(\text{moles aq})_1 }{(\text{moles aq})_0 }= \dfrac{V_{aq}}{(DV_{org} + V_{aq})} \tag{A16.1}\]The fraction p, of course is equal to 1 – q. Typically Vaq and Vorg are equal in a countercurrent extraction, although this is not a requirement.Let’s assume that the analyte we wish to isolate is present in an aqueous phase of 1 M HCl, and that the organic phase is benzene. Because benzene has the smaller density, it is the upper phase, and 1 M HCl is the lower phase. To begin the countercurrent extraction we place the aqueous sample containing the analyte in tube 0 along with an equal volume of benzene. As shown in . This, too, is identical to a simple liquid-liquid extraction. Here is where the power of the countercurrent extraction begins—instead of setting aside the phase U0, we place it in tube 1 along with a portion of analyte-free aqueous 1 M HCl as phase L1 (see . Tube 0 now contains a fraction q of the analyte, and tube 1 contains a fraction p of the analyte. Completing the extraction in tube 0 results in a fraction p of its contents remaining in the upper phase, and a fraction q remaining in the lower phase. Thus, phases U1 and L0 now contain, respectively, fractions pq and q2 of the original amount of analyte. Following the same logic, it is easy to show that the phases U0 and L1 in tube 1 contain, respectively, fractions p2 and pq of analyte. This completes step 1 of the extraction (see . 
As the extraction continues, the upper and lower phases of tube 1 contain fractions pq^2 and 2pq^2 of the analyte, respectively; thus, the total fraction of analyte in the tube is 3pq^2. Table A16.1 summarizes this process. In general, the fraction of analyte in tube r after extraction step n is given by the binomial distribution\[f(r, n) = \dfrac{n!}{(n−r)!r!} p^rq^{n−r} \tag{A16.2}\]where f(r, n) is the fraction of analyte present in tube r at step n of the countercurrent extraction, with the upper phase containing a fraction p×f(r, n) of analyte and the lower phase containing a fraction q×f(r, n) of the analyte.Example \(\PageIndex{A1}\): Consider a countercurrent extraction of two analytes, A and B, with distribution ratios of 5 and 0.5, respectively, using equal volumes of the two phases. The fractions remaining in the lower phase areqA = 1 / (DA + 1) = 1 / (5 + 1) = 0.167qB = 1 / (DB + 1) = 1 / (0.5 + 1) = 0.667Because we know that p + q = 1, we also know that pA is 0.833 and that pB is 0.333. For analyte A, the fractions in tubes 5, 10, 15, 20, 25, and 30 after the 30th step aref = (30! / ((30−5)!5!))(0.833)^5(0.167)^25 = 2.1×10^−15 ≈ 0f = (30! / ((30−10)!10!))(0.833)^10(0.167)^20 = 1.4×10^−9 ≈ 0f = (30! / ((30−15)!15!))(0.833)^15(0.167)^15 = 2.2×10^−5 ≈ 0f = (30! / ((30−20)!20!))(0.833)^20(0.167)^10 = 0.013f = (30! / ((30−25)!25!))(0.833)^25(0.167)^5 = 0.192f = (30! / ((30−30)!30!))(0.833)^30(0.167)^0 = 0.004The fraction of analyte B in tubes 5, 10, 15, 20, 25, and 30 is calculated in the same way, yielding respective values of 0.023, 0.153, 0.025, 0, 0, and 0. For a binomial distribution the mean is µ = np and the standard deviation is σ = √(npq). Furthermore, if both np and nq are greater than 5, the binomial distribution is closely approximated by the normal distribution and we can use the properties of a normal distribution to determine the location of the analyte and its recovery.2Example \(\PageIndex{A2}\): Two analytes, A and B, with distribution ratios of 9 and 4, respectively, are separated using a countercurrent extraction in which the volumes of the upper and lower phases are equal. After 100 steps, determine the 99% confidence interval for the location of each analyte.SolutionThe fraction, q, of each analyte remaining in the lower phase is calculated using equation A16.1. 
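Equation A16.2 is straightforward to evaluate directly. The sketch below reproduces the tube fractions from the first worked example, with the p and q values given there (analyte A: p = 0.833, q = 0.167; analyte B: p = 0.333, q = 0.667):

```python
from math import comb

# Sketch: equation A16.2 — the fraction of analyte found in tube r after
# n steps of a countercurrent extraction.
def fraction(r, n, p, q):
    """Binomial fraction of analyte in tube r at step n."""
    return comb(n, r) * p**r * q**(n - r)

# Analyte A after 30 steps, matching the worked values:
for r in (20, 25, 30):
    print(r, f"{fraction(r, 30, 0.833, 0.167):.3f}")
# 20 0.013 / 25 0.192 / 30 0.004 — analyte A clusters near tube 25
```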
Because the volumes of the lower and upper phases are equal, we find thatqA = 1 / (DA + 1) = 1 / (9 + 1) = 0.10qB = 1 / (DB + 1) = 1 / (4 + 1) = 0.20Because we know that p + q = 1, we also know that pA is 0.90 and pB is 0.80. After 100 steps, the mean and the standard deviation for the distribution of analytes A and B areµA = npA = (100)(0.90) = 90 and σA = √(npAqA) = √((100)(0.90)(0.10)) = 3µB = npB = (100)(0.80) = 80 and σB = √(npBqB) = √((100)(0.80)(0.20)) = 4Given that npA, npB, nqA, and nqB are all greater than 5, we can assume that the distribution of analytes follows a normal distribution and that the confidence interval for the tubes containing each analyte isr = µ ± zσwhere r is the tube’s number and the value of z is determined by the desired significance level. For a 99% confidence interval the value of z is 2.58 (Appendix 4); thus,rA = 90 ± (2.58)(3) = 90 ± 8rB = 80 ± (2.58)(4) = 80 ± 10Because the two confidence intervals overlap, a complete separation of the two analytes is not possible using a 100 step countercurrent extraction. The complete distribution of the analytes is shown in the accompanying figure.Example \(\PageIndex{A3}\): For the countercurrent extraction in Example A16.2, calculate the recovery and separation factor for analyte A if the contents of tubes 85–99 are pooled together.SolutionFrom Example A16.2 we know that after 100 steps of the countercurrent extraction, analyte A is normally distributed about tube 90 with a standard deviation of 3. To determine the fraction of analyte A in tubes 85–99, we use the single-sided normal distribution in Appendix 3 to determine the fraction of analyte in tubes 0–84, and in tube 100. The fraction of analyte A in tube 100 is determined by calculating the deviation z = (r − µ) / σ = (99 − 90) / 3 = 3 and using the table in Appendix 3 to determine the corresponding fraction. For z = 3 this corresponds to 0.135% of analyte A. 
To determine the fraction of analyte A in tubes 0–84 we again calculate the deviation z = (r − µ) / σ = (85 − 90) / 3 = −1.67From Appendix 3 we find that 4.75% of analyte A is present in tubes 0–84. Analyte A’s recovery, therefore, is100% – 4.75% – 0.135% ≈ 95%To calculate the separation factor we determine the recovery of analyte B in tubes 85–99 using the same general approach as for analyte A, finding that approximately 89.4% of analyte B remains in tubes 0–84 and that essentially no analyte B is in tube 100. The recovery for B, therefore, is100% – 89.4% – 0% ≈ 10.6%and the separation factor isSB,A = RB / RA = 10.6 / 95 = 0.112ReferencesDavid Harvey (DePauw University)Countercurrent Separations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
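The recovery calculation can be reproduced with the normal CDF in place of the table in Appendix 3. A sketch using the means and standard deviations from Example A16.2:

```python
from math import erf, sqrt

# Sketch: recovery of analytes A and B when tubes 85-99 are pooled,
# using the normal approximation (A: mean 90, sd 3; B: mean 80, sd 4).
def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu_A, sd_A = 90, 3
below_A = phi((85 - mu_A) / sd_A)       # fraction in tubes 0-84, ~4.8%
above_A = 1 - phi((99 - mu_A) / sd_A)   # fraction in tube 100, ~0.1%
recovery_A = 1 - below_A - above_A
print(f"Recovery of A: {recovery_A:.1%}")  # about 95%

mu_B, sd_B = 80, 4
recovery_B = 1 - phi((85 - mu_B) / sd_B) - (1 - phi((99 - mu_B) / sd_B))
print(f"Recovery of B: {recovery_B:.1%}")  # about 10.6%
print(f"Separation factor: {recovery_B / recovery_A:.3f}")  # ~0.111
```

The small difference from the worked value (0.111 vs 0.112) comes from rounding in the table lookups.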
649
Courseware
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Sciences_Digital_Library/Courseware
Analytical Electrochemistry: The Basic Concepts; Chemical Equilibrium; Teaching Instrumental Analysis without a Textbook; Introduction to XRF - An Analytical Perspective; Introduction to X-ray Diffraction (XRD); Separation Science; CHEM355: A Flipped Analytical Chemistry Course (Fitzgerald); Introduction to Signals and Noise; Analytical Electrochemistry: Potentiometry. This page titled Courseware is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Contributor.
650
Cyclic Voltammetry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Cyclic_Voltammetry
Cyclic Voltammetry (CV) is an electrochemical technique which measures the current that develops in an electrochemical cell under conditions where voltage is in excess of that predicted by the Nernst equation. CV is performed by cycling the potential of a working electrode and measuring the resulting current. The potential of the working electrode is measured against a reference electrode which maintains a constant potential, and the resulting applied potential produces an excitation signal such as that of figure 1.² In the forward scan of figure 1, the potential first scans negatively, starting from a greater potential (a) and ending at a lower potential (d). The potential extremum (d) is called the switching potential, and is the point where the voltage is sufficient to have caused an oxidation or reduction of an analyte. The reverse scan occurs from (d) to (g), and is where the potential scans positively. The figure shows a typical reduction occurring from (a) to (d) and an oxidation occurring from (d) to (g). It is important to note that some analytes undergo oxidation first, in which case the potential would first scan positively. This cycle can be repeated, and the scan rate can be varied. The slope of the excitation signal gives the scan rate used. A cyclic voltammogram is obtained by measuring the current at the working electrode during the potential scans.² The voltammogram shows the current response for a single-electron reduction and oxidation. Consider the following reversible reaction: \[M^+ + e^- \rightleftharpoons M \] The forward scan runs from the initial potential (a) to the switching potential (d). In this region the potential is scanned negatively to cause a reduction. The resulting current is called cathodic current (ipc). The corresponding peak potential occurs at (c), and is called the cathodic peak potential (Epc). The Epc is reached when all of the substrate at the surface of the electrode has been reduced.
After the switching potential has been reached (d), the potential scans positively from (d) to (g). This results in anodic current (ipa) as oxidation occurs. The peak potential at (f) is called the anodic peak potential (Epa), and is reached when all of the substrate at the surface of the electrode has been oxidized. Electrode potential (\(E\)):\[ E = E_i + vt \tag{1}\]where \(E_i\) is the initial potential, \(v\) is the scan rate, and \(t\) is the elapsed time. When the direction of the potential sweep is switched, the equation becomes\[ E = E_s - vt \tag{2}\]where \(E_s\) is the potential at the switching point. Electron stoichiometry (\(n\)):\[E_p - E_{p/2} = \dfrac{0.0565}{n} \tag{3} \]where \(E_p\) is the peak potential, \(E_{p/2}\) is the potential at half the peak current, and \(n\) is the number of electrons transferred. Formal Reduction Potential (E°’) is the mean of the \(E_{pc}\) and \(E_{pa}\) values:\[E°’ = \dfrac{E_{pa} + E_{pc}}{2}.\]In an unstirred solution, mass transport of the analyte to the electrode surface occurs by diffusion alone.¹ Fick’s Law for mass transfer diffusion relates the distance from the electrode (x), time (t), and the reactant concentration (CA) to the diffusion coefficient (DA). \[ \dfrac{\partial c_A}{\partial t} = D_A \dfrac{\partial^2c_A}{\partial x^2} \tag{4} \]During a reduction, current increases until it reaches a peak: when all M+ exposed to the surface of the electrode has been reduced to M. At this point additional M+ to be reduced can travel by diffusion alone to the surface of the electrode, and as the concentration of M increases, the distance M+ has to travel also increases. During this process the current, which has peaked, begins to decline as smaller and smaller amounts of M+ approach the electrode. It is not practical to obtain limiting currents ipa and ipc in a system in which the solution has not been stirred because the currents continually decrease with time.¹ In a stirred solution, a Nernst diffusion layer ~10⁻² cm thick lies adjacent to the electrode surface.
Beyond this region is a laminar flow region, followed by a turbulent flow region which contains the bulk solution.¹ Because diffusion is limited to the narrow Nernst diffusion region, the reacting analytes cannot diffuse into the bulk solution, and therefore Nernstian equilibrium is maintained and diffusion-controlled currents can be obtained. In this case, Fick’s Law for mass transfer diffusion can be simplified to give the peak current\[ i_p = (2.69 \times 10^5) \; n^{3/2} \; S \; D_A^{1/2} \; v^{1/2} \; C_A^* \tag{5} \]Here, (n) is equal to the number of electrons gained in the reduction, (S) is the surface area of the working electrode in cm², (DA) is the diffusion coefficient, (v) is the sweep rate, and (CA*) is the molar concentration of A in the bulk solution. A CV system consists of an electrolysis cell, a potentiostat, a current-to-voltage converter, and a data acquisition system. The electrolysis cell consists of a working electrode, counter electrode, reference electrode, and electrolytic solution. The working electrode’s potential is varied linearly with time, while the reference electrode maintains a constant potential. The counter electrode conducts electricity from the signal source to the working electrode. The purpose of the electrolytic solution is to provide ions to the electrodes during oxidation and reduction. A potentiostat is an electronic device which uses a dc power source to produce a potential which can be maintained and accurately determined, while allowing small currents to be drawn into the system without changing the voltage. The current-to-voltage converter measures the resulting current, and the data acquisition system produces the resulting voltammogram. Cyclic Voltammetry can be used to study qualitative information about electrochemical processes under various conditions, such as the presence of intermediates in oxidation-reduction reactions and the reversibility of a reaction.
CV can also be used to determine the electron stoichiometry of a system, the diffusion coefficient of an analyte, and the formal reduction potential, which can be used as an identification tool. In addition, because concentration is proportional to current in a reversible, Nernstian system, concentration of an unknown solution can be determined by generating a calibration curve of current vs. concentration.Cyclic Voltammetry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
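As an illustrative sketch (not part of the original page), Equations 3 and 5 can be evaluated numerically. The function names are hypothetical, and `peak_current` assumes the unit convention commonly used with this form of the Randles-Sevcik expression (S in cm², DA in cm²/s, v in V/s, CA* in mol/cm³, ip in amperes):

```python
def electron_count(Ep, Ep_half):
    """Estimate n from Eq. 3: Ep - Ep/2 = 0.0565 / n (potentials in volts)."""
    return 0.0565 / abs(Ep - Ep_half)

def peak_current(n, S, D_A, v, C_A):
    """Randles-Sevcik peak current (Eq. 5), in amperes.

    n   -- electrons transferred
    S   -- electrode surface area (cm^2)
    D_A -- diffusion coefficient (cm^2/s)
    v   -- scan rate (V/s)
    C_A -- bulk concentration (mol/cm^3)
    """
    return 2.69e5 * n**1.5 * S * D_A**0.5 * v**0.5 * C_A

# A peak-to-half-peak separation of 56.5 mV suggests a one-electron process
n = electron_count(-0.2000, -0.1435)
print(round(n))

# Hypothetical one-electron couple: 0.1 cm^2 electrode, D = 1e-5 cm^2/s,
# 100 mV/s scan rate, 1 mM analyte (1e-6 mol/cm^3)
ip = peak_current(1, 0.1, 1e-5, 0.100, 1e-6)
print(f"{ip * 1e6:.1f} microamps")
```

Because ip scales with v^(1/2), plotting measured peak currents against the square root of the scan rate gives a straight line for a diffusion-controlled, reversible couple.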
651
Data Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Data_Analysis/Data_Analysis
Any quantitative measurement of a property requires the placing of a numerical value on that property and also a statement of the units in which the measurement is made (cm, g, mL, etc.). The number of digits used to designate the numerical value is referred to as the number of significant figures or digits, and these depend upon the precision of the measuring device. Valuable information may be lost if digits that are significant are omitted. It is equally wrong to record too many digits, since this implies greater precision than really exists. Thus, significant figures are those digits that give meaningful but not misleading information. Only the last digit contains an uncertainty, which is due to the precision of the measurement. Therefore, when a measurement is made and the precision of the measurement is considered, all digits thought to be reasonably reliable are significant. For example: Zeroes may or may not be significant. The following rules should be helpful: It is important to realize that significant digits are taken to be all digits that are certain plus one digit, namely the last one, which has an uncertainty of plus or minus one in that place. The left-most digit in a number is said to be the most-significant digit (msd) and the right-most digit is the least-significant digit (lsd). For another discussion of this topic see pages 39-40 in SHW. When adding or subtracting significant figures, the answer is expressed only as far as the last complete column of digits. Here are some examples:

15.42 + 0.307 = 15.73
3.43 + 8.6 = 12.0
27.0 − 0.364 = 26.6

It is often stated that the number of significant digits in the answer should be the same as the number of significant digits in the datum which has the smallest number of significant digits. For example, for the result of the following division, 9.8/9.41 = 1.0414, the result, according to the above rule, should be rounded to two significant digits since the datum with the fewest significant digits, namely 9.8, has only two digits.
This rule, which is often quoted and one that many students find familiar and simple, suffers from a serious defect. The relative uncertainty of the two pieces of data is quite different. For 9.8 it is 1/98 ≈ 0.01, while for 9.41 it is 1/941 ≈ 0.001. Clearly, the answer should not show a relative uncertainty smaller than the largest relative uncertainty in the data. Conversely, the answer should not be given in such a manner that its relative uncertainty is larger than warranted by the data. In the example given, the application of the common rule would indicate that the answer should have two significant digits, i.e. it should be 1.0. The relative uncertainty then would be 1/10 = 0.1, which is far larger than 0.01. For this reason it appears that a more sophisticated rule, which considers the relative uncertainties of both data and answer, is needed. A relatively simple rule which does this can be derived from the following considerations. For single and chained multiplications, and to a good approximation for divisions, the uncertainty in A is related to the uncertainty in D by:\[ \Delta A = \dfrac{\Delta D}{D} \times A\]The relative uncertainty in D is equal to the relative uncertainty in A. The improved product-quotient rule, based on the preceding analysis, is given below. In the above example 9.8 is clearly the datum with the fewest number of digits. One therefore divides 1.0414 by 98, obtaining 0.01063. The most significant digit in this answer is in the hundredth's place. The result of dividing 9.8 by 9.41 should therefore be expressed with the least significant digit in the hundredth's place, i.e. 1.04. Note that the relative uncertainty of this result is 1/104 ≈ 0.01, which is precisely the relative uncertainty in 9.8.
\[\dfrac{\Delta A}{A} = a \, \dfrac{\Delta D}{D}\]

For example, let \(A = (0.0768)^{1/4} = 0.52643...\). Then

\[ \Delta A = 0.5264 \times 0.25 \times \dfrac{0.0001}{0.0768} = 0.00017\]

Since the most significant digit in the latter answer appears in the fourth decimal place, the correct number of significant figures in A is four, i.e. A = 0.5264.

Given [H+] = 1.8 × 10⁻⁴ we can calculate the pH from the definition of the quantity, i.e. pH = −log[H+]. How many significant figures should the pH show? A logarithm consists of two parts. The digits to the left of the decimal point are known as the characteristic. The characteristic is not a significant digit since it only indicates the magnitude of the number. The digits to the right of the decimal point are the mantissa, and they represent the accuracy to which a result is known. This then suggests the following rules:

When we use significant figures in numerical operations, we often obtain answers with more digits than are justified. We must then round the answers to the correct number of significant digits by dropping extraneous digits. Use the following rules for rounding purposes:

1. If the digit to be dropped is 0, 1, 2, 3 or 4, drop it and leave the last remaining digit as it is.
2. If the digit to be dropped is 5, 6, 7, 8 or 9, increase the last remaining digit by 1.

The above rules can be summarized as follows: if the first (leftmost) digit to be dropped is 5-9, round up; otherwise truncate. It is important to realize that rounding must be postponed until the calculation is complete, i.e. do not round intermediate results.

Experimental determination of any quantity is subject to error because it is impossible to carry out any measurement with absolute certainty. The extent of error in any determination is a function of the quality of the instrument or the measuring device and the skill and experience of the experimenter.
Thus, discussion of errors is an essential part of experimental work in any quantitative science. The types of errors encountered in making measurements are classified into three groups: Students can recognize the occurrence of careless or random errors by deviations of the separate determinations from each other. This is called the precision of the measurement. The existence of systematic errors is realized when the experimental results are compared with the true value. This is the accuracy of the result. A further discussion of these terms is given below. Accuracy of a measurement refers to the nearness of a numerical value to the correct or accepted value. It is often expressed in terms of the relative percent error. It is evaluated only when there is an independent determination that is accepted as the true value. In those cases where the true value is not known, it is possible to substitute for the "true" value the mean of the replicate determinations in order to calculate the relative percent error. Precision of a measurement refers to the reproducibility of the results, i.e. the agreement between values in replicated trials. Chemical analyses are usually performed in triplicate. It is unsafe to use only two trials because in case of a deviation one has no idea which of the two values is more reliable. It is generally too laborious to use more than three samples. The precision of a set of measurements is usually expressed in terms of the standard deviation. A measure that is somewhat easier to understand, and for small data sets just as meaningful, is the average deviation. The steps required to calculate this average deviation are summarized below. The result of the analysis can then be expressed as the "mean ± average deviation". This procedure may be illustrated with the following data. Assume that you wanted to calculate the average mileage per gallon of gasoline of your car.
Results of three different trials carried out under similar driving conditions gave the following miles per gallon: 20.8, 20.4 and 21.2. The arithmetic mean can be calculated as (20.8 + 20.4 + 21.2)/3 = 20.8. The deviation from the mean in each case is |20.8 − 20.8| = 0.0, |20.4 − 20.8| = 0.4, and |21.2 − 20.8| = 0.4. The average deviation from the mean is then calculated as (0.0 + 0.4 + 0.4)/3 = 0.3, to one significant figure. Therefore, the experimental value should be reported as 20.8 ± 0.3 miles per gallon. When a set of data contains an outlying result that appears to deviate excessively from the average or median, the decision must be made to either retain or reject this particular measurement. The rejection of a piece of data is a serious matter which should never be made on anything but the most objective criteria available, certainly never on the basis of hunches or personal prejudice. Even a choice of criteria for the rejection of a suspected result has its perils. If one demands overwhelming odds in favor of rejection and thereby makes it difficult ever to reject a questionable measurement, one runs the risk of retaining results that are spurious. On the other hand, if an unrealistically high estimate of the precision of a set of measurements is assumed, a valuable result might be discarded. Most unfortunately, there is no simple rule to give one guidance. The Q-Test has some usefulness if there is a single measurement which one suspects might deviate inordinately from the rest of the measurements:\[Q_{exp} = \dfrac{d}{w} = \dfrac{|x_q - x_{nn}|}{|x_1 - x_n|}\] In a set of n measurements, if one observes a questionable value (an outlier), xq, the absolute value of the difference between that value and its nearest neighbor, xnn, divided by the absolute value of the difference between the highest and lowest value in the set is the experimental quotient Q, or Qexp. If Qexp exceeds a given "critical" Q (Qcrit) for a given level of confidence, then one might decide to reject that value at the given level of confidence.
A table of values of Qcrit is given below. In light of the foregoing, a number of recommendations for the treatment of data sets containing a suspect result can be made. This page titled Data Analysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Oliver Seely.
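The numerical procedures described in this section (the improved product-quotient rounding rule, the average deviation, and the Q-test) can be sketched in code; this is a minimal illustration with hypothetical helper names, not part of the original page:

```python
import math

def round_by_uncertainty(value, delta):
    # Improved product-quotient rule: round `value` so its least significant
    # digit lines up with the most significant digit of its uncertainty `delta`
    place = math.floor(math.log10(abs(delta)))
    return round(value, -place)

def average_deviation(data):
    # Mean and average deviation from the mean
    mean = sum(data) / len(data)
    avg_dev = sum(abs(x - mean) for x in data) / len(data)
    return mean, avg_dev

def q_exp(data):
    # Experimental Q: |x_q - x_nn| / (x_max - x_min), taking the suspect
    # value to be whichever extreme lies farther from its nearest neighbor
    s = sorted(data)
    d = max(s[1] - s[0], s[-1] - s[-2])
    w = s[-1] - s[0]
    return d / w

# 9.8 / 9.41 with uncertainty (0.1/9.8) * 1.0414, as in the text
quotient = 9.8 / 9.41
print(round_by_uncertainty(quotient, (0.1 / 9.8) * quotient))

# Mileage example from the text (reported as 20.8 +/- 0.3)
print(average_deviation([20.8, 20.4, 21.2]))

# Q-test on a set with a suspect high value (hypothetical fourth trial)
print(round(q_exp([20.8, 20.4, 21.2, 23.0]), 2))
```

The computed Qexp would then be compared against Qcrit from the table at the chosen confidence level before deciding whether to reject the outlier.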
652
Definitions of Oxidation and Reduction
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Definitions_of_Oxidation_and_Reduction
This page discusses the various definitions of oxidation and reduction (redox) in terms of the transfer of oxygen, hydrogen, and electrons. It also explains the terms oxidizing agent and reducing agent. The terms oxidation and reduction can be defined in terms of the addition or removal of oxygen from a compound. While this is not the most robust definition, as discussed below, it is the easiest to remember.

Oxidation and Reduction with respect to Oxygen Transfer

For example, in the extraction of iron from its ore: Because both reduction and oxidation are occurring simultaneously, this is known as a redox reaction. An oxidizing agent is a substance which oxidizes something else. In the above example, the iron(III) oxide is the oxidizing agent. A reducing agent reduces something else. In the equation, the carbon monoxide is the reducing agent. These are old definitions which are no longer used, except occasionally in organic chemistry.

Oxidation and Reduction with respect to Hydrogen Transfer

Notice that these are exactly the opposite of the oxygen definitions (#1). For example, ethanol can be oxidized to ethanal: An oxidizing agent is required to remove the hydrogen from the ethanol. A commonly used oxidizing agent is potassium dichromate(VI) solution acidified with dilute sulfuric acid. Ethanal can also be reduced back to ethanol by adding hydrogen. A possible reducing agent is sodium tetrahydridoborate, NaBH4. Again the equation is too complicated to consider at this point. More precise definitions of oxidizing and reducing agents are:

Oxidation and Reduction with respect to Electron Transfer

Remembering these definitions is essential, and easily done using this convenient acronym:

Example 1

The equation below shows an obvious example of oxygen transfer in a simple redox reaction:\[ \ce{CuO + Mg \rightarrow Cu + MgO} \nonumber\]Copper(II) oxide and magnesium oxide are both ionic compounds.
If the above is written as an ionic equation, it becomes apparent that the oxide ions are spectator ions. Omitting them gives: In the above reaction, magnesium reduces the copper(II) ion by transferring electrons to the ion and neutralizing its charge. Therefore, magnesium is a reducing agent. Another way of putting this is that the copper(II) ion is removing electrons from the magnesium to create a magnesium ion. The copper(II) ion is acting as an oxidizing agent. Confusion can result from trying to learn both the definitions of oxidation and reduction in terms of electron transfer and the definitions of oxidizing and reducing agents in the same terms. The following thought pattern can be helpful: Here is another mental exercise: This page titled Definitions of Oxidation and Reduction is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jim Clark.
653
Density and Percent Compositions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Density_and_Percent_Compositions
Which one weighs more, a kilogram of feathers or a kilogram of bricks? Though many people will say that a kilogram of bricks is heavier, they actually weigh the same! However, many people are caught up by the concept of density, which causes them to answer the question incorrectly. A kilogram of feathers clearly takes up more space, but this is because it is less "dense." But what is density, and how can we determine it? Density (\(\rho\)) is a physical property found by dividing the mass of an object by its volume. For a given substance at fixed temperature and pressure, density is constant regardless of the sample size. For example, the density of a pure sample of tungsten is always 19.25 grams per cubic centimeter. This means that whether you have one gram or one kilogram of the sample, the density will never vary. The equation is as follows:\[ Density = \dfrac{Mass}{Volume} \]or just\[\rho = \dfrac{m}{v} \label{dens}\]Based on Equation \(\ref{dens}\), it's clear that density can, and does, vary from element to element and substance to substance due to differences in the relation of mass and volume. Let us break it down one step further. What are mass and volume? We cannot understand density until we know its parts: mass and volume. The following two sections will teach you all the information you need to know about volume and mass to properly solve and manipulate the density equation. Mass concerns the quantity of matter in an object. The SI unit for mass is the kilogram (kg), although grams (g) are commonly used in the laboratory to measure smaller quantities. Often, people mistake weight for mass. Weight concerns the force exerted on an object as a function of mass and gravity. This can be written as \[\text{Weight} = \text{mass} \times \text{gravity} = mg\]Hence, weight changes due to variations in gravity and acceleration. For example, the mass of a 1 kg cube will continue to be 1 kg whether it is on the top of a mountain, the bottom of the sea, or on the moon, but its weight will differ.
Another important difference between mass and weight is how they are measured. Weight is measured with a scale, while mass must be measured with a balance. Just as people confuse mass and weight, they also confuse scales and balances. A balance counteracts the effects of gravity while a scale incorporates it. There are two types of balances found in the laboratory: electronic and manual. With a manual balance, you find the unknown mass of an object by adjusting or comparing known masses until equilibrium is reached. Volume describes the quantity of three-dimensional space that an object occupies. The SI unit for volume is meters cubed (m3), but milliliters (mL), centimeters cubed (cm3), and liters (L) are more common in the laboratory. There are many equations to find volume. Here are just a few of the easy ones: Volume = (length)3 or (length)(width)(height) or (base area)(height). We know all of density's components, so let's take a closer look at density itself. The unit most widely used to express density is g/cm3 or g/mL, though the SI unit for density is technically kg/m3. Grams per centimeter cubed is equivalent to grams per milliliter (g/cm3 = g/mL). To solve for density, simply follow the equation \(\rho = m/v\). For example, if you had a metal cube with mass 7.0 g and volume 5.0 cm3, the density would be \[\rho = \dfrac{7\,g}{5\,cm^3}= 1.4\, g/cm^3\]Sometimes, you have to convert units to get the correct units for density, such as mg to g or in3 to cm3. Density can be used to help identify an unknown element. Of course, you have to know the density of an element with respect to other elements. Below is a table listing the density of a few elements from the Periodic Table at standard conditions for temperature and pressure, or STP, corresponding to a temperature of 273 K (0° Celsius) and 1 atmosphere of pressure. As can be seen from the table, the most dense element is Osmium (Os) with a density of 22.6 g/cm3.
The least dense element is Hydrogen (H) with a density of 0.09 g/cm3. Density generally decreases with increasing temperature and likewise increases with decreasing temperatures. This is because volume differs according to temperature. Volume increases with increasing temperature. If you are curious as to why the density of a pure substance could vary with temperature, check out the ChemWiki page on van der Waals interactions. Below is a table showing the density of pure water with differing temperatures. As can be seen from Table \(\PageIndex{2}\), the density of water decreases with increasing temperature. Liquid water also shows an exception to this rule from 0 degrees Celsius to 4 degrees Celsius, where it increases in density instead of decreasing as expected. Looking at the table, you can also see that ice is less dense than water. This is unusual, as solids are generally denser than their liquid counterparts. Ice is less dense than water due to hydrogen bonding. In the water molecule, the hydrogen bonds are strong and compact. As the water freezes into the hexagonal crystals of ice, these hydrogen bonds are forced farther apart and the volume increases. With this volume increase comes a decrease in density. This explains why ice floats to the top of a cup of water: the ice is less dense. Even though the rule of density and temperature has its exceptions, it is still useful. For example, it explains how hot air balloons work. Density increases with increasing pressure because volume decreases as pressure increases. And since density = mass/volume, the lower the volume, the higher the density. This is why all density values in the Periodic Table are recorded at STP, as mentioned in the section "Density and the Periodic Table." The decrease in volume as related to pressure is explained in Boyle's Law: \(P_1V_1 = P_2V_2\) where P = pressure and V = volume. This idea is explained in the figure below.
More about Boyle's Law, as well as the other gas laws, can be found here.

Archimedes' Principle

The Greek scientist Archimedes made a significant discovery in 212 B.C. The story goes that Archimedes was asked to find out for the King whether his goldsmith was cheating him by replacing the gold in the crown with silver, a cheaper metal. Archimedes did not know how to find the volume of an irregularly shaped object such as the crown, even though he knew he could distinguish between elements by their density. While meditating on this puzzle in a bath, Archimedes recognized that when he entered the bath, the water rose. He then realized that he could use a similar process to determine the density of the crown! He then supposedly ran through the streets naked shouting "Eureka," which means "I have found it!" in Greek. Archimedes then tested the king's crown by taking a genuine gold crown of equal mass and comparing the densities of the two. The king's crown displaced more water than the gold crown of the same mass, meaning that the king's crown had a greater volume and thus had a smaller density than the real gold crown. The king's "gold" crown, therefore, was not made of pure gold. Of course, this tale is disputed today because Archimedes was not precise in all his measurements, which would make it hard to determine accurately the differences between the two crowns. Archimedes' Principle states that if an object has a greater density than the liquid that it is placed into, it will sink and displace a volume of liquid equal to its own. If it has a smaller density, it will float and displace a mass of liquid equal to its own. If the density is equal, it will not sink or float. This principle also explains why balloons filled with helium float. Balloons, as we learned in the section concerning density and temperature, float because they are less dense than the surrounding air. Helium is less dense than the atmospheric air, so it rises.
Archimedes' Principle can also be used to explain why boats float. Boats, including all the air space within their hulls, are far less dense than water. Boats made of steel can float because they displace their mass in water without submerging all the way. Table \(\PageIndex{3}\) below gives the densities of a few liquids to put things into perspective. Percent composition is very simple. Percent composition tells you by mass what percent of each element is present in a compound. A chemical compound is the combination of two or more elements. If you are studying a chemical compound, you may want to find the percent composition of a certain element within that chemical compound. The equation for percent composition is (mass of element/molecular mass) x 100.

Steps to calculating the percent composition of the elements in a compound

Tips for solving:

Example \(\PageIndex{1}\): Phosphorus Pentachloride

What is the percent composition of phosphorus and chlorine in \(PCl_5\)?

Solution

Find the molar mass of all the elements in the compound in grams per mole. Find the molecular mass of the entire compound. Divide the component's molar mass by the entire molecular mass. Therefore, \(PCl_5\) is 14.87% phosphorus and 85.13% chlorine by mass.

Example \(\PageIndex{2}\): HCl

What is the percent composition of each element in hydrochloric acid (HCl)?

Solution

First find the molar mass of hydrogen.\[H = 1.00794 \,g/mol\]Now find the molecular mass of the HCl molecule:\[1.00794\,g/mol + 35.4527\,g/mol = 36.46064\,g/mol\]Follow steps 3 and 4:\[ \left(\dfrac{1.00794\,g}{36.46064\,g}\right) \times 100\% = 2.76\% \]Now just subtract to find the percent by mass of chlorine in the compound:\[100\%-2.76\% = 97.24\%\]Therefore, \(HCl\) is 2.76% hydrogen and 97.24% chlorine by mass. Percent composition plays an important role in everyday life. It is more than just the amount of chlorine in your swimming pool because it concerns everything from the money in your pocket to your health and how you live.
The next two sections describe percent composition as it relates to you. The nutrition label found on the container of every bit of processed food sold by the local grocery store employs the idea of percent composition. On all nutrition labels, a known serving size is broken down into five categories: Total Fat, Cholesterol, Sodium, Total Carbohydrate, and Protein. These categories are broken down into further subcategories, including Saturated Fat and Dietary Fiber. The mass for each category, except Protein, is then converted to percent of Daily Value. Only two subcategories, Saturated Fat and Dietary Fiber, are converted to percent of Daily Value. The Daily Value is based on the mass of each category recommended per day per person for a 2000 calorie diet. The mass of protein is not converted to percent because there is no recommended daily value for protein. Following is a picture outlining these ideas. For example, if you wanted to know the percent by mass of the daily value for sodium you are eating when you eat one serving of the food with this nutrition label, then go to the category marked sodium. Look across the same row and read the percent written. If you eat one serving of this food, then you will have consumed about 9% of your daily recommended value for sodium. To find the percent mass of fat in the whole food, you could divide 3.5 grams by 15 grams, and see that this snack is 23.33% fat.

Penny: The Lucky Copper Coin

The penny should be called "the lucky copper-coated coin." The penny has not been made of solid copper since part of 1857. After 1857, the US government started adding other cheaper metals to the mix. The penny, being only one cent, is literally not worth its weight in copper. People could melt copper pennies and sell the copper for more than the pennies were worth. After 1857, nickel was mixed with the more expensive copper. After 1864, the penny was made of bronze. Bronze is 95% copper and 5% zinc and tin.
For one year, 1943, the penny had no copper in it due to the expense of World War II; it was just zinc-coated steel. From 1944 until 1982, the penny went through periods where it was brass or bronze. Today, the penny in America is 2.5% copper and 97.5% zinc. The copper coats the outside of the penny while the inner portion is zinc. For comparison's sake, the penny in Canada is 94% steel, 1.5% nickel, and 4.5% copper. The percent composition of a penny may actually affect health, particularly the health of small children and pets. Since the newer pennies are made mainly of zinc instead of copper, they are a danger to a child's health if ingested. Zinc is very susceptible to acid. If the thin copper coating is scratched and the hydrochloric acid present in the stomach comes into contact with the zinc core, it could cause ulcers, anemia, kidney and liver damage, or even death in severe cases. Three important factors in penny ingestion are time, the pH of the stomach, and the number of pennies ingested. Of course, the more pennies swallowed, the greater the danger of an overdose of zinc. The more acidic the environment, the more zinc will be released in less time. This zinc is then absorbed and sent to the liver, where it begins to cause damage. In this kind of situation, time is of the essence: the faster the penny is removed, the less zinc is absorbed. If the penny or pennies are not removed, organ failure and death can occur. Below is a picture of a scratched penny before and after it had been submerged in lemon juice. Lemon juice has a pH of 1.5-2.5, similar to that of the normal human stomach after food has been consumed. Time elapsed: 36 hours. As you can see, the copper is largely unharmed by the lemon juice. That is why pennies made before 1982 with mainly copper (except the 1943 penny) are relatively safe to swallow. Chances are they would pass through the digestive system naturally before any damage could be done.
Yet it is clear that the zinc was partially dissolved, even though it was in the lemon juice for only a limited amount of time. Therefore, the percent composition of post-1982 pennies makes them hazardous to your health and the health of your pets if ingested. Density and percent composition are important concepts in chemistry. Each has basic components as well as broad applications. The components of density are mass and volume, both of which can be more confusing than at first glance. An application of the concept of density is determining the volume of an irregular shape using a known mass and density. Determining percent composition requires knowing the mass of the entire object or molecule and the mass of its components. In the laboratory, density can be used to identify an element, while percent composition is used to determine the amount, by mass, of each element present in a chemical compound. In daily life, density explains everything from why boats float to why air bubbles try to escape from soda. It even affects your health, because bone density is very important. Similarly, percent composition is commonly used to make animal feed and compounds such as the baking soda found in your kitchen. These problems are meant to be easy in the beginning and then gradually become more challenging. Unless otherwise stated, answers should be in g/mL or the equivalent g/cm3. 11) Below is a model of a pyramid found at an archeological dig, made of an unknown substance. It is too large to find the volume by submerging it in water. Also, the scientists refuse to remove a piece to test because this pyramid is a part of history. Its height is 150.0 m. The length of its base is 75.0 m and the width is 50.0 m. The mass of this pyramid is \(5.50 \times 10^5\) kg. What is the density? These problems will follow the same pattern of difficulty as those of density. Density and Percent Compositions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
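The pyramid problem can be sketched numerically, assuming the model is a rectangular pyramid so that \(V = \frac{1}{3} \times \text{length} \times \text{width} \times \text{height}\). Note the given data yield an answer naturally in kg/m³; a conversion to g/mL is included:

```python
# Density of the pyramid model: mass / volume.
length_m, width_m, height_m = 75.0, 50.0, 150.0
mass_kg = 5.50e5

# Rectangular pyramid volume: one third of base area times height.
volume_m3 = length_m * width_m * height_m / 3
density_kg_m3 = mass_kg / volume_m3   # about 2.93 kg/m^3
density_g_mL = density_kg_m3 / 1000   # same value expressed in g/mL (g/cm^3)
```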
Diffraction Scattering Techniques
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Diffraction_Scattering_Techniques
When an X-ray is shined on a crystal, it diffracts in a pattern characteristic of the structure.

Bragg's Law
The structures of crystals and molecules are often identified using X-ray diffraction studies, which are explained by Bragg's Law. The law describes the relationship between an X-ray beam shot into a crystal and its reflection off the crystal surface.

Powder X-ray Diffraction
When an X-ray is shined on a crystal, it diffracts in a pattern characteristic of the structure. In powder X-ray diffraction, the diffraction pattern is obtained from a powder of the material, rather than an individual crystal. Powder diffraction is often easier and more convenient than single crystal diffraction since it does not require individual crystals be made. Powder X-ray diffraction (XRD) also obtains a diffraction pattern for the bulk material of a crystalline solid, rather than of a single crystal.

X-ray Crystallography
X-ray Crystallography is a scientific method used to determine the arrangement of atoms of a crystalline solid in three-dimensional space. This technique takes advantage of the interatomic spacing of most crystalline solids by employing them as a diffraction grating for X-ray light, which has wavelengths on the order of 1 angstrom (\(10^{-8}\) cm).

X-ray Diffraction
The construction of a simple powder diffractometer was first described by Hull in 1917, shortly after the discovery of X-rays by Wilhelm Conrad Röntgen in 1895. A diffractometer measures the angles at which X-rays are reflected and thus obtains the structural information they contain. Nowadays the resolution of this technique has improved significantly, and it is widely used as a tool to analyze the phase information and solve crystal structures of solid-state materials.

Thumbnail: Photo of an X-Ray Diffraction machine. Photo from the Australian Microscopy & Microanalysis Research Facility Website.
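Bragg's Law, \(n\lambda = 2d\sin\theta\), can be turned into a small calculator. The Cu K-alpha wavelength and the 2.0-angstrom d-spacing below are illustrative assumptions, not values from the text:

```python
import math

# Bragg's Law: n * wavelength = 2 * d * sin(theta).
# Solving for theta gives the angle at which a reflection appears.
def bragg_angle_deg(wavelength_A, d_spacing_A, order=1):
    sin_theta = order * wavelength_A / (2 * d_spacing_A)
    if not 0 < sin_theta <= 1:
        raise ValueError("no diffraction peak for these parameters")
    return math.degrees(math.asin(sin_theta))

# Assumed example: Cu K-alpha radiation (1.5406 A) on planes 2.0 A apart.
theta = bragg_angle_deg(wavelength_A=1.5406, d_spacing_A=2.0)  # about 22.7 degrees
```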
Diffraction Scattering Techniques is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Dimensional Analysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Data_Analysis/Dimensional_Analysis
Dimensional analysis is among the most valuable tools physical scientists use. Simply put, it is the conversion between an amount in one unit and the corresponding amount in a desired unit using various conversion factors. This is valuable because certain measurements are more accurate or easier to find than others. The use of units in a calculation to ensure that we obtain the final proper units is called dimensional analysis. For example, if we observe experimentally that an object's potential energy is related to its mass, its height from the ground, and to a gravitational force, then when multiplied, the units of mass, height, and the force of gravity must give us units corresponding to those of energy. Energy is typically measured in joules, calories, or electron volts (eV), defined by the following expressions: Performing dimensional analysis begins with finding the appropriate conversion factors. Then, you simply multiply the values together such that the units cancel by having equal units in the numerator and the denominator. To understand this process, let us walk through a few examples. Example \(\PageIndex{1}\): Imagine that a chemist wants to measure out 0.214 mL of benzene, but lacks the equipment to accurately measure such a small volume. The chemist, however, is equipped with an analytical balance capable of measuring to \(\pm 0.0001 \;g\). Looking in a reference table, the chemist learns the density of benzene (\(\rho=0.8765 \;g/mL\)). How many grams of benzene should the chemist use? Solution\[0.214 \; \cancel{mL} \left( \dfrac{0.8765\; g}{1\;\cancel{mL}}\right)= 0.187571\; g \nonumber\]Notice that the mL are being divided by mL, an equivalent unit. We can cancel these out, which leaves us with 0.187571 g. However, this is not our final answer, since this result has too many significant figures and must be rounded to three significant figures.
This is because 0.214 mL has three significant figures and the conversion factor has four significant figures. Since the digit after the third significant figure is 5, we must round the preceding 7 up to 8. Hence, the chemist should weigh out 0.188 g of benzene to have 0.214 mL of benzene. Example \(\PageIndex{2}\): To illustrate the use of dimensional analysis to solve energy problems, let us calculate the kinetic energy in joules of a 320 g object traveling at 123 cm/s. Solution: To obtain an answer in joules, we must convert grams to kilograms and centimeters to meters. Using Equation 5.4, the calculation may be set up as follows:\[ \begin{align*} KE &=\dfrac{1}{2}mv^2=\dfrac{1}{2}(\cancel{g}) \left(\dfrac{kg}{\cancel{g}}\right) \left[\left(\dfrac{\cancel{cm}}{s}\right)\left(\dfrac{m}{\cancel{cm}}\right) \right]^2 = \dfrac{kg⋅m^2}{s^2} \\[4pt] &=\dfrac{1}{2}\, 320\; \cancel{g} \left( \dfrac{1\; kg}{1000\;\cancel{g}}\right) \left[\left(\dfrac{123\;\cancel{cm}}{1 \;s}\right) \left(\dfrac{1 \;m}{100\; \cancel{cm}}\right) \right]^2=\dfrac{0.320\; kg}{2}\left[\dfrac{1.23\; m}{s}\right]^2 \\[4pt] &=\dfrac{1}{2}\, 0.320\; kg \left(1.513\; \dfrac{m^2}{s^2}\right)= 0.242\; \dfrac{kg⋅m^2}{s^2} = 0.242\; J \end{align*}\]Alternatively, the conversions may be carried out in a stepwise manner: Step 1: convert \(g\) to \(kg\)\[320\; \cancel{g} \left( \dfrac{1\; kg}{1000\;\cancel{g}}\right) = 0.320 \; kg \nonumber\]Step 2: convert \(cm/s\) to \(m/s\)\[\dfrac{123\;\cancel{cm}}{1\; s} \left(\dfrac{1 \;m}{100\; \cancel{cm}}\right) = 1.23\; m/s \nonumber \]Now the natural units for calculating joules are used to get the final result:\[ KE=\dfrac{1}{2}\, 0.320\; kg \left(1.23 \;\dfrac{m}{s}\right)^2=\dfrac{1}{2}\, 0.320\; kg \left(1.513\; \dfrac{m^2}{s^2}\right)= 0.242\; \dfrac{kg⋅m^2}{s^2}= 0.242\; J \nonumber\]Of course, steps 1 and 2 can be done in the opposite order with no effect on the final results.
However, this second method involves an additional step. Example \(\PageIndex{3}\): Now suppose you wish to report the number of kilocalories of energy contained in a 7.00 oz piece of chocolate in units of kilojoules per gram. Solution: To obtain an answer in kilojoules, we must convert 7.00 oz to grams and kilocalories to kilojoules. Food reported to contain a value in Calories actually contains that same value in kilocalories. If the chocolate wrapper lists the caloric content as 120 Calories, the chocolate contains 120 kcal of energy. If we choose to use multiple steps to obtain our answer, we can begin with the conversion of kilocalories to kilojoules:\[120 \cancel{kcal} \left(\dfrac{1000 \;\cancel{cal}}{\cancel{kcal}}\right)\left(\dfrac{4.184 \;\cancel{J}}{1 \cancel{cal}}\right)\left(\dfrac{1 \;kJ}{1000 \cancel{J}}\right)= 502\; kJ \nonumber\]We next convert the 7.00 oz of chocolate to grams:\[7.00\;\cancel{oz} \left(\dfrac{28.35\; g}{1\; \cancel{oz}}\right)= 198\; g \nonumber\]The number of kilojoules per gram is therefore\[\dfrac{ 502 \;kJ}{198\; g}= 2.54\; kJ/g \nonumber\]Alternatively, we could solve the problem in one step with all the conversions included:\[\left(\dfrac{120\; \cancel{kcal}}{7.00\; \cancel{oz}}\right)\left(\dfrac{1000 \;\cancel{cal}}{1 \;\cancel{kcal}}\right)\left(\dfrac{4.184 \;\cancel{J}}{1 \; \cancel{cal}}\right)\left(\dfrac{1 \;kJ}{1000 \;\cancel{J}}\right)\left(\dfrac{1 \;\cancel{oz}}{28.35\; g}\right)= 2.53 \; kJ/g \nonumber\]The small discrepancy between the two answers is attributable to rounding to the correct number of significant figures at each step when carrying out the calculation in a stepwise manner. Recall that all digits in the calculator should be carried forward when carrying out a calculation using multiple steps. In this problem, we first converted kilocalories to kilojoules and then converted ounces to grams.
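The three worked examples above can be reproduced in a few lines; this sketch carries full precision throughout and leaves significant-figure rounding to the reader:

```python
# Example 1: mass of benzene from volume and density.
volume_mL = 0.214
density_g_per_mL = 0.8765
mass_g = volume_mL * density_g_per_mL       # 0.187571 g, reported as 0.188 g

# Example 2: kinetic energy, converting to kg and m/s first so KE lands in joules.
mass_kg = 320 / 1000                        # 320 g   -> 0.320 kg
speed_m_s = 123 / 100                       # 123 cm/s -> 1.23 m/s
ke_joules = 0.5 * mass_kg * speed_m_s ** 2  # about 0.242 J

# Example 3: chocolate energy density, with no intermediate rounding.
kilojoules = 120.0 * 4.184                  # kcal -> kJ: 502.08 kJ
grams = 7.00 * 28.35                        # oz -> g: 198.45 g
kj_per_gram = kilojoules / grams            # about 2.53 kJ/g
```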
Skill Builder ES2 allows you to practice making multiple conversions between units in a single step.Dimensional Analysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Dynamic Light Scattering
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Dynamic_Light_Scattering
Dynamic Light Scattering (DLS), also called Photon Correlation Spectroscopy, is a spectroscopic technique used in Chemistry, Biochemistry, and Physics primarily to characterize the hydrodynamic radius of polymers, proteins, and colloids in solution. DLS is a useful technique for determining the size distribution of nanoparticles in a suspension and detecting small amounts of high-mass species in protein samples. In a typical DLS experiment, a solution or suspension of analyte is irradiated with monochromatic laser light, and fluctuations in the intensity of the scattered light are measured as a function of time. The intensity data are then collected using an autocorrelator to determine the size distribution of particles or molecules in a sample. In general, when a sample of particles with diameter much smaller than the wavelength of light is irradiated with light, each particle will scatter the incident light in all directions. This is called Rayleigh scattering. If that scattered light is projected as an image onto a screen, it will generate a "speckle" pattern like the one seen here.3 (A typical speckle pattern.) The dark areas in the speckle pattern represent regions where the scattered light from the particles arrives out of phase, interfering destructively, and the bright areas represent regions where the scattered light arrives in phase, interfering constructively. In practice, particle samples are typically not stationary because they are suspended in a solution, and as a result they are moving randomly due to collisions with solvent molecules.
This type of motion is called Brownian motion, and it is vital for DLS analysis because it allows the use of the Stokes-Einstein equation to relate the motion of a particle in solution to its hydrodynamic radius.1\[D=\frac{kT}{6\pi \eta a}\]In the Stokes-Einstein equation, D is the diffusion coefficient of the particle, k is the Boltzmann constant, T is the temperature, η is the viscosity of the solution, and a is the hydrodynamic radius of the particle. The diffusion coefficient (D) in the Stokes-Einstein relation is inversely proportional to the radius of the particle (a), which shows that for a system undergoing Brownian motion, small particles should diffuse faster than large ones. This is a key concept in DLS analysis. In a sample of particles experiencing Brownian motion, the distance between particles is constantly changing, and this results in a Doppler shift between the frequency of the incoming light and the frequency of the scattered light. Since the distance between particles affects the phase overlap of the scattered light, the brightness of the spots on the speckle pattern will fluctuate in intensity as the particles change position with respect to each other. The rate of these intensity fluctuations depends on how fast the particles are moving and will therefore be slower for large particles and faster for small particles. This means that the fluctuating scattered light data contain information about the size of the particles.2 In a typical DLS experiment, a suspension of analyte such as nanoparticles or polymer molecules is irradiated with monochromatic light from a laser while the intensity of the scattered light is measured. The detector is typically a photomultiplier positioned at 90° to the light source, and it is used to collect light scattered from the sample.
Collimating lenses are used to focus the laser light to the center of the sample holder and to prevent saturation of the photomultiplier tube.1 Ideally, the sample itself should be free of unwanted particles that could contribute to light scattering. For this reason, dispersions are often filtered or purified before being measured. Samples are also diluted to low concentrations in order to prevent the particles from interacting with each other and disrupting Brownian motion. Since the fluctuating intensity data contain a wide spectrum of Doppler-shifted frequencies, they are not usually measured directly; instead they are compiled for processing using a device called a digital correlator. The function of the correlator in a DLS system is essentially to compare the intensity of two signals over a short period of time \(\tau\) (nanoseconds to microseconds) and to calculate the extent of similarity between the two signals using the correlation function. The intensity correlation function is defined mathematically as\[G_{2}(\tau )=\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}I(t)I(t+\tau )\,dt\]and is related to the electric field correlation function (\(G_1\)) by the Siegert relationship\[G_{2}(\tau )=1+\beta \left |G_{1}(\tau ) \right |^{2}\]where β is an experimental factor that is related to the angle of scattering in the DLS setup being used. Consider the fluctuating intensities of the speckle pattern mentioned earlier. If the intensity signal at a location on a speckle pattern is compared to itself with no change in time (t), then the correlator will measure a perfect correlation and assign a value of 1. However, if the same intensity signal is compared with another signal a short time later (t+Δt), then the correlation has diminished and the correlator will assign a value less than 1.
With most speckle patterns the signal correlation drops to zero after 1-10 milliseconds, so Δt, the time scale of the measurements, must be on a faster time scale of nanoseconds to microseconds.1 Since the intensity fluctuations being measured are directly related to the movement of particles in solution, it is useful to recall the Stokes-Einstein relation above, which shows that smaller particles move more quickly through solution than larger particles. This means that the intensity signal for smaller particles should fluctuate more rapidly than for larger particles, and as a result the correlation decreases at a faster rate, as seen in the figure below. (Exponential decay of the correlation function.) The correlation function for a system experiencing Brownian motion, \(G(t)\), decays exponentially with decay constant \(\Gamma\):\[G(t)=e^{-\Gamma t}\]\(\Gamma\) is related to the diffusion coefficient of the particle by\[\Gamma=Dq^{2}\]where\[q=\frac{4\pi n }{\lambda}\sin\left(\frac{\Theta }{2}\right)\]Dynamic Light Scattering is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Ben Nail.
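Putting the last three relations together, a measured decay constant \(\Gamma\) gives \(D = \Gamma/q^2\), and Stokes-Einstein then gives the hydrodynamic radius. All numerical values below (633 nm He-Ne laser, water at 25 °C, 90° detection angle, a fitted Γ of 1000 s⁻¹) are illustrative assumptions, not values from the text:

```python
import math

# Assumed instrument and sample parameters (not from the text):
wavelength = 633e-9       # m, He-Ne laser
n_refr = 1.33             # refractive index of water
theta = math.radians(90)  # scattering angle
T = 298.0                 # K
eta = 8.9e-4              # Pa*s, viscosity of water near 25 C
k_B = 1.380649e-23        # J/K, Boltzmann constant
gamma = 1000.0            # 1/s, decay constant fitted from G(t) = exp(-gamma*t)

# Scattering vector: q = (4*pi*n/lambda) * sin(theta/2)
q = 4 * math.pi * n_refr / wavelength * math.sin(theta / 2)

# Gamma = D * q^2  =>  D = Gamma / q^2
D = gamma / q**2

# Stokes-Einstein: a = kT / (6*pi*eta*D); about 85 nm for these inputs
radius_m = k_B * T / (6 * math.pi * eta * D)
```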
Electrochemical Cell Conventions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Voltaic_Cells/Electrochemical_Cell_Conventions
Using chemical reactions to produce electricity is now a priority for many researchers. Being able to adequately use chemical reactions as a source of power would greatly help our environmental pollution problems. In this section of electrochemistry, we will learn how to use chemical reactions to produce this clean electricity, and even how to use electricity to drive chemical reactions. In order to induce a flow of electric charges, we place a strip of metal (the electrode) in a solution containing ions of the same metal in the aqueous state. The combination of an electrode and its solution is called a half cell. Within the half cell, metal ions from the solution can gain electrons from the electrode and become metal atoms, or the metal atoms from the electrode can lose electrons and become metal ions in the solution. We use two different half cells to measure how readily electrons can flow from one electrode to another, and the device used for measurement is called a voltmeter. The voltmeter measures the cell potential, denoted by \(E_{cell}\) (in units of volts, 1 V = 1 J/C), which is the potential difference between the two half cells. The salt bridge allows ions to flow from one half cell to another but prevents the mixing of the solutions. As indicated in the diagram, the anode is the electrode where oxidation occurs; \(\ce{Cu}\) loses two electrons to form \(\ce{Cu^{2+}}\). The cathode is the electrode where reduction occurs; \(\ce{Ag^{+} (aq)}\) gains an electron to become \(\ce{Ag(s)}\). As a convenient substitution for the drawing, we use a cell diagram to show the parts of an electrochemical cell. For the example above, the cell notation is:\[\underbrace{\ce{Cu(s) | Cu^{2+} (aq)}}_{\text{oxidation half-reaction}} \,||\, \underbrace{\ce{Ag^{+} (aq)| Ag(s)}}_{\text{reduction half-reaction}} \nonumber\]where we place the anode on the left and the cathode on the right, "\(|\)" represents the boundary between two phases, and "\(||\)" represents the salt bridge.
There are two types of electrochemical cells: A Galvanic cell (also called a Voltaic cell) uses a spontaneous redox reaction to create a flow of electrical charges, or electricity. Non-rechargeable batteries are examples of Galvanic cells. An Electrolytic cell is a kind of cell that requires an outside electrical source to drive a non-spontaneous redox reaction. Rechargeable batteries act as Electrolytic cells when they are being recharged. Similarities between Galvanic and Electrolytic Cells: Both Galvanic and Electrolytic cells contain: Electrochemical cells use a vast amount of terminology. Here is a brief definition of some of the more common terms: A galvanic cell produces an electrical charge from the flow of electrons. The electrons move due to the redox reaction. As we can see in the cell diagram, \(\ce{Cu(s)}\) is oxidized to \(\ce{Cu^{2+}(aq)}\), while \(\ce{Ag^{+}(aq)}\) is reduced to \(\ce{Ag(s)}\). The cell notation for this cell is:\[\underbrace{\ce{Cu (s) | Cu^{2+} (aq)}}_{\text{oxidation half-reaction}} \,||\, \underbrace{\ce{Ag^{+} (aq)| Ag(s)}}_{\text{reduction half-reaction}} \nonumber\]To understand the cell, solve the redox equation. First, split the reaction into two half reactions, with the same elements paired with one another.\[\underbrace{\ce{Cu(s) -> Cu^{2+}(aq)}}_{\text{Oxidation reaction occurs at the Anode}} \nonumber\]\[\underbrace{\ce{Ag^{+}(aq) -> Ag(s)}}_{\text{Reduction reaction occurs at the Cathode}} \nonumber\]Next, we balance the two equations.\[\text{Oxidation}: \ce{Cu(s) -> Cu^{2+}(aq) + 2e^{-} (Anode)} \nonumber\]\[\text{Reduction}: \ce{e^{-} + Ag^{+}(aq) -> Ag(s) (Cathode)} \nonumber\]Finally, we recombine the two equations.\[\ce{Cu(s) + 2Ag^{+}(aq) -> Cu^{2+}(aq) + 2Ag(s)} \nonumber\]This is a spontaneous reaction that releases energy, so this system does work on the surroundings. Galvanic cells are quite common. A, AA, AAA, D, C, etc. batteries are all galvanic cells.
Any non-rechargeable battery that does not depend on an outside electrical source is a Galvanic cell. An electrolytic cell is a cell that requires an outside electrical source to drive the redox reaction. The process by which electric energy drives a non-spontaneous reaction is called electrolysis. Whereas the galvanic cell uses a redox reaction to make electrons flow, the electrolytic cell uses electron movement (from the source of electricity) to cause the redox reaction. In an electrolytic cell, electrons are forced to flow in the opposite direction. Since the direction is the reverse of that in the voltaic cell, the \(E^o_{cell}\) for an electrolytic cell is negative. Also, in order to force the electrons to flow in the opposite direction, the electromotive force that connects the two electrodes (the battery) must be larger than the magnitude of \(E^o_{cell}\). This additional requirement of voltage is called overpotential. The reaction above can be reversed by applying a voltage:\[\underbrace{\ce{Ag(s) -> Ag^{+}(aq) + e^{-}}}_{\text{Oxidation reaction occurs at the Anode}} \nonumber\]\[\underbrace{\ce{Cu^{2+}(aq) + 2e^{-} -> Cu(s)}}_{\text{Reduction reaction occurs at the Cathode}} \nonumber\]with the opposite total reaction\[\ce{Cu^{2+}(aq) + 2Ag(s) ->[\text{applied voltage}] Cu(s) + 2Ag^{+}(aq)} \nonumber\]The most common examples of electrolytic cells are rechargeable batteries (cell phones, mp3 players, etc.) and electroplating. While the battery is being used in the device, it functions as a galvanic cell (using the redox energy to produce electricity). While the battery is charging, it functions as an electrolytic cell (using outside electricity to reverse the completed redox reaction). Electrochemical Cell Conventions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
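The sign convention for the Cu/Ag galvanic cell above can be checked with standard reduction potentials. The table values below are common textbook numbers, assumed here for illustration rather than taken from the text:

```python
# Standard cell potential: E0_cell = E0(cathode) - E0(anode),
# using standard REDUCTION potentials for both half cells.
E0_Ag = 0.80   # V, Ag+ + e-  -> Ag   (assumed table value)
E0_Cu = 0.34   # V, Cu2+ + 2e- -> Cu  (assumed table value)

E0_cell = E0_Ag - E0_Cu  # +0.46 V: positive, so the galvanic direction is spontaneous
```

Reversing the cell (the electrolytic direction) flips the sign to −0.46 V, which is why an applied voltage larger than this magnitude is needed to drive it.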
Electrochemical Cells under Nonstandard Conditions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Voltaic_Cells/Electrochemical_Cells_under_Nonstandard_Conditions
Electrochemical cells are devices that use the transfer of energy, in the form of electrons, to measure the energy available from a given reaction. There are two forms of electrochemical cells: galvanic (voltaic) and electrolytic. Spontaneous reactions take place in galvanic cells and non-spontaneous reactions take place in electrolytic cells. Regardless of the resulting energy, each electrochemical cell consists of an anode, where oxidation takes place, and a cathode, where reduction takes place. Anodes and cathodes are both called electrodes, and are two of the vital pieces in constructing an electrochemical cell. Electrochemical cells can operate under standard conditions or non-standard conditions (in both, electrons always flow from the anode to the cathode). Standard conditions are those at 298.15 K (temperature) and 1 atmosphere (pressure), with a concentration of 1.0 M for both the anode and cathode solutions. Non-standard conditions occur when any of these three conditions is changed, but generally involve a change in concentration (check the Concentration Cell section for more details). ANODE: oxidation → always on the left. CATHODE: reduction → always on the right. Recall: The cell potential is the potential (in volts) that results from the transfer of electrons. The cell potential of a cell at standard conditions can be obtained from the equation \(E^\circ_{cell} = E^\circ_{cat} - E^\circ_{an}\). This can also be solved using the Standard Hydrogen Electrode. Electrochemical reactions rarely occur under standard conditions. Even if we start at standard conditions, the species involved in electrochemical reactions change in concentration throughout the reaction, removing them from standard conditions.
For electrochemical cells under non-standard conditions, we use the Nernst equation:\[E_{cell} = E^\circ_{cell} - \dfrac{RT}{nF}\ln Q \nonumber\]where \(E^\circ_{cell} = E^\circ_{cat} - E^\circ_{an}\); n is the number of electrons transferred between the cathode and the anode; Q is the reaction quotient in terms of activities (the activity of a pure solid or liquid is 1; recall how to calculate this from concepts of equilibrium); R is the ideal gas constant, 8.314 J/(mol K); and F is Faraday's constant, 96485 C/mol. As demonstrated by this equation, determining the electrochemical potential of cells under non-standard conditions is almost identical to the process for cells under standard conditions. The difference, however, lies in the fact that another equation is used for reactions occurring under non-standard conditions because we take into account the change in concentration among the species. Example: The following reaction takes place in an electrochemical cell. Determine whether the reaction will proceed spontaneously or non-spontaneously. Cu (s) | Cu2+ (0.15 M) || Fe3+ (0.35 M), Fe2+ (0.25 M) | Pt (s). 1. Identify which species are reduced and which are oxidized. We know iron will be reduced (it is on the right of our cell diagram) and copper will be oxidized (it is on the left of our cell diagram): Cu → Cu2+ + 2e-, OXIDIZED (anode); Fe3+ + e- → Fe2+, REDUCED (cathode). 2. Write out the overall equation for the reaction (remember to multiply the equations by the appropriate numbers so the electrons cancel): 2Fe3+ (aq) + Cu (s) → Cu2+ (aq) + 2Fe2+ (aq). 3. Find n (the number of electrons transferred) = 2. 4. Look at the reduction potential tables and solve \(E^\circ_{cell} = E^\circ_{cat} - E^\circ_{an}\): E°cell = 0.769 V - 0.339 V = 0.43 V. 5.
Plug the standard electrode potential into the Nernst equation: Ecell = E°cell - [(RT)/(nF)] ln Q = 0.43 - [(8.314 × 298)/(2 × 96485)] ln[(0.25² × 0.15)/0.35²] = +0.463 V. Note: since Ecell is positive, this reaction is spontaneous (and ΔG is negative). Exercise: Determine Ecell for the reaction under non-standard conditions: Al (s) | Al3+ (0.36 M) || Sn4+ (0.086 M), Sn2+ (0.54 M) | Pt (s). Also indicate which element is being oxidized and which element is being reduced, as well as the anode and the cathode. Determine the cell voltage: Al(s) → Al3+(aq) + 3e-, oxidation (anode), E° = -1.676 V. Note that this E° is the standard REDUCTION potential for the couple; the voltage of the oxidation half-reaction as written is actually +1.676 V, since it corresponds to the standard OXIDATION potential. Sn4+ (aq) + 2e- → Sn2+ (aq), reduction (cathode), E° = 0.154 V. The electrons need to be balanced: multiply the first reaction by 2 and the second reaction by 3; you should get the net equation 2Al(s) + 3Sn4+ (aq) → 2Al3+ (aq) + 3Sn2+ (aq). Recall: E°cell = E°cat - E°an (these are standard REDUCTION potentials); therefore E°cell = 0.154 - (-1.676) = +1.830 V. Use the Nernst equation, Ecell = E°cell - [(RT)/(nF)] ln Q, with n = 6 (see the oxidation-reduction equation; this is the number of electrons transferred): Ecell = 1.830 - [(8.314 × 298)/(6 × 96485)] ln([Al3+]²[Sn2+]³/[Sn4+]³). Remember that solids are not included in Q. Ecell = 1.830 - [(8.314 × 298)/(6 × 96485)] ln[(0.36)²(0.54)³/(0.086)³] = +1.815 V (spontaneous because Ecell is positive). Electrochemical Cells under Nonstandard Conditions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
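The worked Cu/Fe example above can be verified numerically; this sketch uses the same potentials and concentrations as the text:

```python
import math

# Nernst equation: E = E0 - (RT / nF) * ln(Q)
R = 8.314      # J/(mol K), ideal gas constant
F = 96485.0    # C/mol, Faraday's constant
T = 298.0      # K
n = 2          # electrons transferred

# E0_cell = E0(cathode) - E0(anode), values from the example
E0 = 0.769 - 0.339

# Q = [Fe2+]^2 [Cu2+] / [Fe3+]^2  (the solid Cu is excluded)
Q = (0.25**2 * 0.15) / (0.35**2)

E = E0 - (R * T) / (n * F) * math.log(Q)  # about +0.463 V, so spontaneous
```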
Electrochemistry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Basics_of_Electrochemistry/Electrochemistry
Cell EMF
The electromotive force (EMF) is the maximum potential difference between two electrodes of a galvanic or voltaic cell. This quantity is related to the tendency of an element, a compound, or an ion to acquire (i.e., gain) or release (lose) electrons.

Electrochemistry Review

Electrolysis
Chemical reactions in batteries or galvanic cells provide the driving force for electrons to move through loads. This is how chemical energy is transformed into electric energy. Electrolysis can be carried out in solutions or molten salts (liquids). Because the atoms and ions have to move physically, the medium has to be a fluid. The products, like the reactants in a galvanic cell, can be in a solid, liquid, or gas state.

Galvanic Cells

Half-Cell Reaction
A half cell is one of the two electrodes in a galvanic cell or simple battery. For example, in the Zn−Cu battery, the two half cells make an oxidizing-reducing couple. Placing a piece of reactant in an electrolyte solution makes a half cell. Unless it is connected to another half cell via an electric conductor and salt bridge, no reaction will take place in a half cell.

Nernst Equation
Electrochemistry deals with cell potential as well as the energy of chemical reactions. The energy of a chemical system drives the charges to move, and the driving force gives rise to the cell potential of a system called a galvanic cell. The energy aspect is also related to the chemical equilibrium. All these relationships are tied together in the concept of the Nernst equation.

Chung (Peter) Chieh (Professor Emeritus, Chemistry, University of Waterloo)

Electrochemistry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Electrochemistry Review
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Basics_of_Electrochemistry/Electrochemistry/Electrochemistry_Review
Chemical reactions involving the transfer of electrons are called oxidation-reduction reactions, or redox reactions. When properly set up, these reactions generate power in a Battery, or galvanic cell. A galvanic cell consists of at least two half cells, each of which consists of an electrode and an electrolyte solution. A redox reaction can be divided into two Half Reactions: an oxidation half reaction and a reduction half reaction. Each half reaction can be set up as a Half Cell, and putting two half cells together makes a simple battery. A galvanic cell can be considered a battery, but commercial batteries package galvanic cells in series to supply higher voltages than a single galvanic cell.

Oxidation of a species (atom, ion, or molecule) is a loss of electron(s), and reduction of a species is a gain of electron(s). Some conventions are used to define the Oxidation States: oxidation of an atom causes an increase in its oxidation state, and reduction causes a decrease. Balancing Redox Equations is a complicated task, but the use of half reactions is a very good strategy.

Energy is the driving force for chemical reactions. The energies of oxidation and reduction reactions are related to the electrochemical potentials (E), or Cell EMF, of the galvanic cells. The standard reduction potentials for half cells are values that make sensible comparisons, because a set of conventions has been followed. The Gibbs free energy change G is (the negative of) the electric work of the reaction. Therefore,

\(\Delta G^\circ = -nF\,\Delta E^\circ\).

The difference in Gibbs free energy (\(\Delta G^\circ\)) between products and reactants is equal to the negative of the charge \(nF\) times the potential difference \(\Delta E^\circ\).
Since n electrons are transferred in the equation, \(nF\) (F is the Faraday constant, 96,485 C) is the charge involved for the equation in terms of moles. The equilibrium constant K is related to \(\Delta G^\circ\):

\(\Delta G^\circ = - R T \ln K\).

For the reaction

\(\mathrm{a\: A + b\: B \rightleftharpoons c\: C + d\: D}\),

the Nernst Equation is a natural result:

\(\Delta E = \Delta E^\circ - \dfrac{R T}{n F} \ln \mathrm{\dfrac{[C]^c[D]^d}{[A]^a[B]^b}}\)

In the summary given above, we have used many energy- and potential-related notations. For completeness, the notations are summarized in what follows. The definitions for anode and cathode apply to both galvanic cells and electrolytic cells. Oxidation takes place at the anode, for example,

\(\begin{align} &\textrm{reaction:}\: \ce{Zn \rightarrow Zn^2+ + 2 e^-} &&E^* = 0.762\\ &\textrm{notation:}\: \ce{Zn \,|\, Zn^2+} &&E^* = - E^\circ \end{align}\)

Note that we have defined and used E* to represent the oxidation potential, which has the same absolute value as the reduction potential E° but the opposite sign. This notation is not used in most textbooks. Reduction takes place at the cathode, for example,

\(\begin{align} &\textrm{reaction:}\: \ce{Hg2^2+ + 2 e^- \rightarrow 2 Hg} &&E^\circ = 0.796\\ &\textrm{notation:}\: \ce{Hg2^2+ \,|\, Hg} && \end{align}\)

The details regarding oxidation and reduction half reactions are given on the page Half-Cell Reactions. The standard cell potential \(\Delta E^\circ\) is the sum of the reduction potential of the cathode and the oxidation potential of the anode:

\(\Delta E^\circ = E^\circ + E^*\).

The standard Gibbs free energy change, \(\Delta G^\circ\), is the negative of the maximum available electrical work. The electrical work \(W_{\ce e}\) is the product of the charge and the potential of the cell:

\(\begin{align} W_{\ce e} &= q E\\ \Delta G &= - q E\\ &= - n F E \end{align}\)

where q is the charge in C and E is the potential in V.
Note also that q = n F, where F = 96,485 C is the Faraday constant and n is the number of moles of electrons in the reaction equation. The change of Gibbs free energy at standard conditions is derived in the following way:

\(\begin{align} \Delta G^\circ &= - W_{\ce e}\\ &= - q E^\circ\\ &= - n F E^\circ \end{align}\)

This equation is for a galvanic cell under standard conditions. The notations \(E^\circ\) and \(\Delta G^\circ\) are used for cells under standard conditions; when the cells are not at standard conditions, \(\Delta E\) and \(\Delta G\) are used for these quantities. Note also that \(\Delta G^\circ\) is the theoretical amount of energy per reaction equation as written. The amount of energy for a given system depends on the quantities of the reactants and their conditions (concentration, pressure, etc.).

The review exercises below use the cell \(\ce{Zn \,|\, Zn^2+ \,||\, Cu^2+ \,|\, Cu}\), the cell \(\mathrm{Zn \,|\, Zn^{2+}\: (0.010\: M) \,||\, Cu^{2+}\: (1.0\: M) \,|\, Cu}\), and the question: what is the equilibrium constant for the reaction \(\ce{Pb_{\large{(s)}} + Co^2+ \rightarrow Pb^2+ + Co_{\large{(s)}}}\)?

Hint...

\(\mathrm{1.97\: g\: Au \:\dfrac{1\: mol\: Au}{197\: g\: Au} \:\dfrac{1\: mol\: e^-}{1\: mol\: Au} \:\dfrac{96485\: C}{1\: mol\: e^-} \:\dfrac{1\: s}{1.5\: C} \:\dfrac{1\: min}{60\: s} =\: ?\: min}\)

Answer: 10.72 min. Consider... The reaction is \(\ce{Au(CN)2^- + e^- \rightarrow Au + 2 CN^-}\). In exams, the reaction \(\ce{Al^3+ + 3 e^- \rightarrow Al}\) is usually used; pay attention to the number of electrons involved in the reaction.

Hint...
Did you get these data?

\(\begin{align} &\ce{Zn \rightarrow Zn^2+ + 2 e^-} &&E^* = 0.762\\ &\ce{Zn \,|\, Zn^2+} &&\textrm{Note: } E^* = -E^\circ\\ &\ce{Cu \rightarrow Cu^2+ + 2 e^-} &&E^* = -0.339\\ &\ce{Cu \,|\, Cu^2+} &&\textrm{Note: } E^* = -E^\circ \end{align}\)

\(\Delta E^\circ = E^\circ + E^*\) (note the sign convention). Calculate \(\Delta E^\circ\).

Answer: 1.10 V. Consider... The reaction is \(\ce{Zn + Cu^2+ \rightarrow Zn^2+ + Cu}\).

Hint... At equilibrium, \(\Delta E = 0\), so

\(\Delta E^\circ = \dfrac{0.0592}{2}\log K_{\ce c}\)

\(K_{\ce c}\) = antilog 37.2 = ?

Answer: about 2 × 10^37. Consider... You may be asked to calculate \(\mathrm{[Zn^{2+}] / [Cu^{2+}]}\) or \(\mathrm{[Cu^{2+}] / [Zn^{2+}]}\). The large \(K_{\ce c}\) value means that the reaction is almost quantitative: the \(\ce{Zn}\) metal causes almost all the \(\ce{Cu^2+}\) ions to deposit as copper metal.

\(\ce{Cu^2+ + Zn \rightarrow Zn^2+ + Cu}\)

Hint... Assume you've done the previous problems.

\(\begin{align} \Delta E &= \Delta E^\circ - \dfrac{0.0592}{2}\log \dfrac{0.01}{1.0}\\ &=\: ? \end{align}\)

Answer: 1.16 V. Consider... The voltage is higher than the standard potential of 1.10 V because the \(\ce{Zn^2+}\) concentration is less than 1.0 M in this case. The Nernst equation enables us to give a quantitative value when the conditions change.

Hint...

\(\begin{align} \Delta G^\circ &= - n F \,\Delta E^\circ\\ &= - 2 \times 96485 \times 1.10 =\: ? \end{align}\)

Answer: -212267 J. Consider... More often, you'll see -212 kJ rather than -212000 J.

Hint... From \(\Delta E = \Delta E^\circ - \dfrac{0.0592}{2} \log Q = 1.16\: \ce V\), \(\Delta G = - n F \,\Delta E = - 2 \times 96485 \times 1.16 =\: ?\)

Answer: -223845 J. Consider... \(\Delta G^\circ\) = -212 kJ; \(\Delta G\) for the stated condition is -224 kJ. Concentration makes a difference.

Hint... The purpose of this problem is to ask you to search the table for the proper reduction potentials. If you get \(\Delta E^\circ = -0.151\) V, you are probably right. The negative potential indicates that the reaction should be reversed.

Hint... Use the Nernst equation to derive the equilibrium constant.
If you get a value of K = 7.6 × 10^-6, you have acquired the skill. We have used mostly the \(\ce{Zn/Cu}\) cell in these questions. A similar set of questions can be set up using any two couples of reduction potentials. Chung (Peter) Chieh (Professor Emeritus, Chemistry, University of Waterloo). Electrochemistry Review is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
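The Zn/Cu review calculations above are easy to check numerically. Below is a minimal Python sketch; the function names (`nernst`, `gibbs`, `K_eq`) are illustrative, and the 1.10 V standard potential and 0.010 M Zn2+ concentration are taken from the exercises:

```python
import math

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C per mole of electrons
T = 298.15     # temperature, K

def nernst(E_std, n, Q):
    """Cell potential: E = E° - (RT/nF) ln Q."""
    return E_std - (R * T / (n * F)) * math.log(Q)

def gibbs(n, E):
    """ΔG = -n F E, in joules per mole of reaction as written."""
    return -n * F * E

def K_eq(E_std, n):
    """Equilibrium constant from ΔE° = (RT/nF) ln K (ΔE = 0 at equilibrium)."""
    return math.exp(n * F * E_std / (R * T))

# Zn | Zn2+ (0.010 M) || Cu2+ (1.0 M) | Cu, with ΔE° = 1.10 V and n = 2
print(round(nernst(1.10, 2, 0.010 / 1.0), 2))  # 1.16 V
print(round(gibbs(2, 1.10) / 1000))            # -212 kJ
print(f"{K_eq(1.10, 2):.0e}")                  # ~2e+37
```

The same three functions reproduce the -224 kJ answer via `gibbs(2, nernst(1.10, 2, 0.01))`.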
Connection between Cell Potential, ∆G, and K
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Electrochemistry_and_Thermodynamics
Learning Objectives

The connections among cell potential, Gibbs energy, and the equilibrium constant are captured in the following multi-part equation:

\[ \Delta G^o= -RT\ln K_{eq} = -nFE^o_{cell} \]

∆G is the change in Gibbs (free) energy for a system, and ∆G° is the Gibbs energy change for a system under standard conditions (1 atm, 298 K). On an energy diagram, ∆G is the difference in energy between reactants and products. In addition, ∆G is unaffected by external factors that change the kinetics of the reaction: for example, if Ea (the activation energy) decreases in the presence of a catalyst, or the kinetic energy of molecules increases due to a rise in temperature, the ∆G value remains the same.

E°cell is the electromotive force (also called cell voltage or cell potential) between two half-cells. The greater the E°cell of a reaction, the greater the driving force of electrons through the system and the more likely the reaction will proceed (the more spontaneous it is). E°cell is measured in volts (V). The overall voltage of the cell is the half-cell potential of the reduction reaction plus the half-cell potential of the oxidation reaction. To simplify,

\[E^o_{cell} = E^o_{reduction} - E^o_{oxidation} \label{1}\]

or

\[E^o_{cell} = E^o_{cathode} - E^o_{anode} \label{2}\]

The potential of an oxidation half-reaction (loss of electrons) is the negative of the potential of the corresponding reduction half-reaction (gain of electrons). Most tables record only the standard reduction half-reactions, as standard reduction potentials. To find a standard oxidation potential, simply reverse the sign of the standard reduction potential.

Note

Reduction reactions with more positive reduction potentials are more spontaneous. When viewing a cell reduction potential table, the higher a couple sits on the table, the stronger its oxidized form is as an oxidizing agent.

Eºcell is the standard state cell potential, which means that the value was determined under standard states.
The standard states include a concentration of 1 molar (mole per liter) and a pressure of 1 atm. Similar to the standard state cell potential Eºcell, Ecell is the non-standard state cell potential: it is not determined at a concentration of 1 M and a pressure of 1 atm. The two are closely related in the sense that the standard cell potential is used to calculate the cell potential in many cases.

\[ E_{cell}= E^o_{cell} -\dfrac{RT}{nF} \ln\; Q \label{3}\]

Other simplified forms of the equation that we typically see:

\[ E_{cell}= E^o_{cell} -\dfrac{0.0257}{n} \ln \; Q \label{4}\]

or, in terms of \(\log_{10}\) (base 10) instead of the natural logarithm (base e),

\[ E_{cell}= E^o_{cell} - \dfrac{0.0592}{n} \log_{10}\; Q \label{5}\]

Both simplified equations apply when the temperature is 25 ºC; deviations from 25 ºC require the use of the original equation. Essentially, Eº is E at standard conditions.

Example 1

What is the value of \(E_{cell}\) for the voltaic cell below:

\[\ce{Pt(s)} | \ce{Fe^{2+}}(0.1\, M), \ce{Fe^{3+}}(0.2\, M) || \ce{Ag^{+}} (0.1\, M)| \ce{Ag(s)} \nonumber\]

Solution

To use the Nernst equation, we need to establish \(E^o_{cell}\) and the reaction to which the cell diagram corresponds, so that the form of the reaction quotient (Q) can be revealed.
Once we have determined the form of the Nernst equation, we can insert the concentrations of the species.

Solve:

\[ \begin{align*} E^o_{cell} &= E^o_{SRP}(\text{cathode}) - E^o_{SRP}(\text{anode})\\[4pt] &= E^o(Ag^{+}/Ag) - E^o(Fe^{3+}/Fe^{2+}) \\[4pt] &= 0.800\,V - 0.771\,V \\[4pt] &= 0.029\, V \end{align*}\]

Now to determine \(E_{cell}\) for the reaction

\[\ce{Fe^{2+}}(0.1\,M) + \ce{Ag^{+}}(0.1\,M) → \ce{Fe^{3+}}(0.20\,M) + \ce{Ag(s)} \nonumber\]

use the Nernst equation:

\[\begin{align*} E_{cell} &= 0.029\,V - \left(\dfrac{0.0592\,V}{1}\right) \log \dfrac{[Fe^{3+}]}{[Fe^{2+}][Ag^{+}]} \\[4pt] &=0.029\,V - 0.0592\,V \log \dfrac{0.2}{(0.1)(0.1)} \\[4pt] &=-0.048\,V \end{align*}\]

Note that this reaction is spontaneous (positive \(E^o_{cell}\)) as written under standard conditions, but non-spontaneous (negative \(E_{cell}\)) under the non-standard conditions of the question.

\(K\) is the equilibrium constant of a general reaction

\[ aA + bB \leftrightharpoons cC + dD \label{6} \]

and is given by the reaction quotient evaluated at equilibrium:

\[ K_c= \dfrac{[C]^c[D]^d}{[A]^a[B]^b} \label{7}\]

Example 2

Given \(K = 2.81 \times 10^{-16}\) for the reaction

\[\ce{Cu^{2+}(aq) + Ag(s) \rightleftharpoons Cu(s) + 2Ag^{+}} \nonumber\]

find ∆G.

Solution

Use the following formula:

\[\begin{align*} ∆G &=-RT\ln K \\[4pt] &= -(8.314) (298\,K) \ln (2.81 \times 10^{-16}) \\[4pt] &= +8.87 \times 10^4 \,J/mol \\[4pt] &= +88.7 \,kJ/mol \end{align*}\]

The positive sign is expected: with \(K \ll 1\), the reaction as written is non-spontaneous under standard conditions. The relationship between \(∆G\), \(K\), and \(E^°_{cell}\) can be represented by the following diagram, where \(E^o_{cell}\) can be calculated using the formula:

\[E^o_{cell} = E^o_{cathode} – E^o_{anode} = E^o_{Reduction} – E^o_{Oxidation} \label{8}\]

Example 3: Using ∆G = -RT ln K

Find \(E^o_{cell}\) for the following coupled half-reactions.

Solution

1.
Determine the cathode and anode in the reaction.

\[\ce{Zn(s) <=> Zn^{2+}(aq) + 2e^{-}} \nonumber\]

Anode, oxidation half-reaction (since \(\ce{Zn(s)}\) increases its oxidation state from 0 to +2)

\[\ce{Cu^{2+}(aq) + 2e^{-} <=> Cu(s)} \nonumber\]

Cathode, reduction half-reaction (since \(\ce{Cu^{2+}(aq)}\) decreases its oxidation state from +2 to 0)

2. Determine the \(E^o\) values using the standard reduction potential table (Table P1)

\[\ce{Zn(s) <=> Zn^{2+}(aq) + 2e^{-}} \quad\quad E_{SRP} = -0.763 \nonumber\]

\[\ce{Cu^{2+}(aq) + 2e^{-} <=> Cu(s)} \quad\quad E_{SRP}=+0.340 \nonumber\]

3. Use

\[\begin{align*} E^°_{cell} &= E^°_{SRP}(\text{cathode}) - E^o_{SRP}(\text{anode}) \\[4pt] &= 0.340 \,V - (-0.763\, V) \\[4pt] &= 1.103\, V \end{align*}\]

Example 4: Gibbs Energy Difference

Find ∆G for the following reaction:

\[\ce{2Al(s) + 3Br2(l) <=> 2Al^{3+}}(aq, 1.0\,M) + \ce{6Br^{-}}(aq, 1.0\,M) \nonumber\]

Solution

Step 1: Separate the reaction into its two half reactions

\[\ce{2Al(s) <=> 2Al^{3+}(aq)} \nonumber\]

\[\ce{3Br2(l) <=> 6Br^{-}(aq, 1\,M)} \nonumber\]

Step 2: Balance the half equations for O, H, and charge using e-

2Al(s) ↔ 2Al3+(aq) + 6e-

6e- + 3Br2(l) ↔ 6Br-(aq)

Step 3: From the balanced half reactions, we can conclude the number of moles of e- for use later in the calculation of ∆G. Determine the E° values using the standard reduction potential table.

2Al(s) ↔ 2Al3+(aq) + 6e-    -1.676 V

3Br2(l) + 6e- ↔ 6Br-(aq)    +1.065 V

Step 4: Determine E°cell = E°cathode - E°anode = 1.065 - (-1.676) = 2.741 V

Step 5: Once E°cell has been calculated and the number of moles of electrons determined, we can use ∆G = -nFE°cell:

= -(6 mol e-)(96485 C/mol e-)(2.741 V) = -1587 kJ

This equation can be used to calculate E°cell given K, or K given \(E^o_{cell}\).
At T = 298 K, RT/F is a constant, and the equation simplifies to E°cell = (0.025693 V / n) ln K.

Example 5: Using E°cell = (RT/nF) ln K

Given that \(E^°_{cell}\) for the reaction

\[\ce{Cu(s) + 2H^{+}(aq) \rightleftharpoons Cu^{2+}(aq) +H2(g)} \nonumber\]

is -0.34 V, find the equilibrium constant (\(K\)) for the reaction.

Solution

Step 1: Split into two half reactions

\[\ce{Cu(s) <=> Cu^{2+}(aq)} \nonumber\]

\[\ce{2H^{+}(aq) <=> H2(g)} \nonumber\]

Step 2: Balance the half reactions with charges to determine n

Cu(s) ↔ Cu2+(aq) + 2e-

2H+(aq) + 2e- ↔ H2(g)

Therefore n = 2.

Step 3: With \(E^°_{cell} = -0.34\,V\),

\[\begin{align*} -0.34 &= \left(\dfrac{0.025693}{2} \right) \ln K \\[4pt] K &= \exp \left(\dfrac{-0.34 \times 2}{0.025693}\right) \\[4pt] K &= 3.19 \times 10^{-12} \end{align*}\]

Example 6: Spontaneity

Given the following reaction, determine \(∆G\), \(K\), and \(E^o_{cell}\) at standard conditions. Is this reaction spontaneous?

\[\ce{Mn^{2+}(aq) + K(s) <=> MnO2(s) + K^{+}(aq)} \nonumber\]

Solution

Step 1: Separate and balance the half reactions. Label which one is reduction and which one is oxidation.
Find the corresponding \(E^o\) values for the half reactions.

\[\ce{MnO2(s) + 4H^{+}(aq) + 2e^{-} <=> Mn^{2+}(aq) + 2H2O(l)} \nonumber\]

Reduction, with \(E_{SRP}\) = +1.23 V

\[\ce{K^{+}(aq) + e^{-} <=> K(s)} \nonumber\]

Oxidation (run in reverse), with \(E_{SRP}\) = -2.92 V

Step 2: Write the net balanced reaction in acidic solution, and determine E°cell.

\[\begin{align*} E^o_{cell} &= E^o_{SRP}(cathode) - E^{°}_{SRP} (anode) \\[4pt] &= +1.23\,V - (-2.92\,V) = 4.15\,V \end{align*}\]

MnO2(s) + 4H+(aq) + 2K(s) ↔ Mn2+(aq) + 2H2O(l) + 2K+(aq)    E°cell = 4.15 V

Step 3: Find ∆G for the reaction, using ∆G = -nFE°cell:

= -(2 mol e-)(96485 C/mol e-)(4.15 V) = -800.8 kJ

Therefore, since \(E^o_{cell}\) is positive and \(∆G\) is negative, this reaction is spontaneous.

Exercise \(\PageIndex{1}\)

Find \(E^o_{cell}\) for \(\ce{2Br^{-}(aq) + I2(s) <=> Br2(l) + 2I^{-}(aq)}\)

-0.530 V (the reaction as written is non-spontaneous)

Exercise \(\PageIndex{2}\)

Find \(E^o\) for \(\ce{Sn(s) <=> Sn^{2+}(aq) + 2e^{-}}\)

+0.137 V

Exercise \(\PageIndex{3}\)

Find \(E^o_{cell}\) for \(\ce{Zn(s) | Zn^{2+}(aq) || Cr^{3+}(aq), Cr^{2+}(aq)}\)

+0.339 V

Exercise \(\PageIndex{4}\)

Find ∆G for the following combined half reactions:

\[\ce{F2(g) + 2e^{-} <=> 2F^{-}(aq)} \nonumber\]

\[\ce{Li^{+}(aq) + e^{-} <=> Li(s)} \nonumber\]

-1139.68 kJ (F2 is reduced and Li is oxidized in the spontaneous direction)

Exercise \(\PageIndex{5}\)

Find the equilibrium constant (\(K\)) for the following reaction:

\[\ce{Zn(s) + 2H^{+}(aq) <=> Zn^{2+}(aq) + H2(g)} \nonumber\]

Hint: Find \(E^o_{cell}\) first!

6.4 × 10^25

Exercise \(\PageIndex{6}\)

Find \(E^o_{cell}\) for the given reaction at standard conditions:

\[\ce{Cu^{+}(aq) + e^{-} <=> Cu(s)} \nonumber\]

\[\ce{I2(s) + 2e^{-} <=> 2I^{-}(aq)} \nonumber\]

+0.195 V

Connection between Cell Potential, ∆G, and K is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Deborah S. Gho & Shamsher Singh.
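The arithmetic in Examples 2 and 4 can be double-checked in a few lines of Python (the values are copied from the examples; with K much less than 1 in Example 2, the Gibbs energy comes out positive):

```python
import math

R, T, F = 8.314, 298.0, 96485.0  # gas constant, temperature, Faraday constant

# Example 2: ΔG = -RT ln K with K = 2.81e-16
dG2 = -R * T * math.log(2.81e-16)
print(round(dG2 / 1000, 1))    # 88.7 kJ/mol, positive because K << 1

# Example 4: ΔG = -n F E°cell with n = 6 and E°cell = 2.741 V
dG4 = -6 * F * 2.741
print(round(dG4 / 1000))       # about -1587 kJ
```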
Electrolysis
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Electrolytic_Cells/Electrolysis
Electrolysis is the use of electric current to drive a non-spontaneous reaction. It can be used to separate a substance into its original components/elements; a number of elements were discovered through this process, and several are still produced this way in today's industry. In electrolysis, an electric current is sent through an electrolyte solution in order to drive the flow of ions necessary to run an otherwise non-spontaneous reaction. Processes involving electrolysis include electro-refining, electro-synthesis, and the chlor-alkali process.

Example: When we electrolyze water by passing an electric current through it, we can separate it into hydrogen and oxygen.

\[ 2 H_2O(l) \rightarrow 2H_2(g) + O_2(g) \]

More information: The Electrolysis of Water

An electrolytic cell is essentially the non-spontaneous reaction's voltaic cell; in fact, if we reversed the flow of electricity within a voltaic cell by exceeding a required voltage, we would create an electrolytic cell. Electrolytic cells consist of two electrodes (one that acts as a cathode and one that acts as an anode) and an electrolyte. Unlike in a voltaic cell, reactions in electrolytic cells must be electrically driven, and the anode and cathode are reversed (anode on the left, cathode on the right). Electrolysis is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Electrolysis I
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Electrolytic_Cells/Electrolysis_I
In this chapter, we have described various galvanic cells in which a spontaneous chemical reaction is used to generate electrical energy. In an electrolytic cell, however, the opposite process, called electrolysis, occurs: an external voltage is applied to drive a nonspontaneous reaction. In this section, we look at how electrolytic cells are constructed and explore some of their many commercial applications.

If we construct an electrochemical cell in which one electrode is copper metal immersed in a 1 M Cu2+ solution and the other electrode is cadmium metal immersed in a \(\,1\; M\, Cd^{2+}\) solution and then close the circuit, the potential difference between the two compartments will be 0.74 V. The cadmium electrode will begin to dissolve (Cd is oxidized to Cd2+) and is the anode, while metallic copper will be deposited on the copper electrode (Cu2+ is reduced to Cu), which is the cathode. The overall reaction is as follows:

\[ \ce{Cd (s) + Cu^{2+} (aq) \rightarrow Cd^{2+} (aq) + Cu (s)} \nonumber \]

with \(E°_{cell} = 0.74\; V\)

This reaction is thermodynamically spontaneous as written (\(ΔG^o < 0\)):

\[ \begin{align*} \Delta G^\circ &=-nFE^\circ_\textrm{cell} \\[4pt] &=-(\textrm{2 mol e}^-)[\mathrm{96,485\;J/(V\cdot mol)}](\mathrm{0.74\;V}) \\[4pt] &=-\textrm{140 kJ (per mole Cd)} \end{align*} \nonumber \]

In this direction, the system is acting as a galvanic cell. In an electrolytic cell, an external voltage is applied to drive a nonspontaneous reaction. The reverse reaction, the reduction of Cd2+ by Cu, is thermodynamically nonspontaneous and will occur only with an input of 140 kJ. We can force the reaction to proceed in the reverse direction by applying an electrical potential greater than 0.74 V from an external power supply. The applied voltage forces electrons through the circuit in the reverse direction, converting a galvanic cell to an electrolytic cell.
Thus the copper electrode is now the anode (Cu is oxidized), and the cadmium electrode is now the cathode (Cd2+ is reduced). The signs of the cathode and the anode have switched to reflect the flow of electrons in the circuit. The half-reactions that occur at the cathode and the anode are as follows:

\[\ce{Cd^{2+}(aq) + 2e^{−} \rightarrow Cd(s)}\label{20.9.3} \]

with \(E^°_{cathode} = −0.40 \, V\)

\[\ce{Cu(s) \rightarrow Cu^{2+}(aq) + 2e^{−}} \label{20.9.4} \]

with \(E^°_{anode} = 0.34 \, V \)

\[\ce{Cd^{2+}(aq) + Cu(s) \rightarrow Cd(s) + Cu^{2+}(aq) } \label{20.9.5} \]

with \(E^°_{cell} = −0.74 \: V\)

Because \(E^°_{cell} < 0\), the overall reaction, the reduction of \(Cd^{2+}\) by \(Cu\), clearly cannot occur spontaneously and proceeds only when sufficient electrical energy is applied. The differences between galvanic and electrolytic cells are summarized in Table \(\PageIndex{1}\).

At sufficiently high temperatures, ionic solids melt to form liquids that conduct electricity extremely well due to the high concentrations of ions. If two inert electrodes are inserted into molten \(\ce{NaCl}\), for example, and an electrical potential is applied, \(\ce{Cl^{-}}\) is oxidized at the anode, and \(\ce{Na^{+}}\) is reduced at the cathode. The overall reaction is as follows:

\[\ce{ 2NaCl (l) \rightarrow 2Na(l) + Cl2(g)} \label{20.9.6} \]

This is the reverse of the formation of \(\ce{NaCl}\) from its elements. The product of the reduction reaction is liquid sodium because the melting point of sodium metal is 97.8°C, well below that of \(\ce{NaCl}\) (801°C). Approximately 20,000 tons of sodium metal are produced commercially in the United States each year by the electrolysis of molten \(\ce{NaCl}\) in a Downs cell.
In this specialized cell, \(\ce{CaCl2}\) (melting point = 772°C) is first added to the \(\ce{NaCl}\) to lower the melting point of the mixture to about 600°C, thereby lowering operating costs.Similarly, in the Hall–Heroult process used to produce aluminum commercially, a molten mixture of about 5% aluminum oxide (Al2O3; melting point = 2054°C) and 95% cryolite (Na3AlF6; melting point = 1012°C) is electrolyzed at about 1000°C, producing molten aluminum at the cathode and CO2 gas at the carbon anode. The overall reaction is as follows:\[\ce{2Al2O3(l) + 3C(s) -> 4Al(l) + 3CO2(g)} \label{20.9.7} \]Oxide ions react with oxidized carbon at the anode, producing CO2(g).There are two important points to make about these two commercial processes and about the electrolysis of molten salts in general.In the Hall–Heroult process, C is oxidized instead of O2− or F− because oxygen and fluorine are more electronegative than carbon, which means that C is a weaker oxidant than either O2 or F2. Similarly, in the Downs cell, we might expect electrolysis of a NaCl/CaCl2 mixture to produce calcium rather than sodium because Na is slightly less electronegative than Ca (χ = 0.93 versus 1.00, respectively), making Na easier to oxidize and, conversely, Na+ more difficult to reduce. In fact, the reduction of Na+ to Na is the observed reaction. In cases where the electronegativities of two species are similar, other factors, such as the formation of complex ions, become important and may determine the outcome.If a molten mixture of MgCl2 and KBr is electrolyzed, what products will form at the cathode and the anode, respectively?Given: identity of saltsAsked for: electrolysis productsA The possible reduction products are Mg and K, and the possible oxidation products are Cl2 and Br2. Because Mg is more electronegative than K (χ = 1.31 versus 0.82), it is likely that Mg will be reduced rather than K. 
Because Cl is more electronegative than Br (3.16 versus 2.96), Cl2 is a stronger oxidant than Br2.

B Electrolysis will therefore produce Br2 at the anode and Mg at the cathode.

Predict the products if a molten mixture of AlBr3 and LiF is electrolyzed. Answer: Br2 and Al.

Electrolysis can also be used to drive the thermodynamically nonspontaneous decomposition of water into its constituent elements: H2 and O2. However, because pure water is a very poor electrical conductor, a small amount of an ionic solute (such as H2SO4 or Na2SO4) must first be added to increase its electrical conductivity. Inserting inert electrodes into the solution and applying a voltage between them will result in the rapid evolution of bubbles of H2 and O2. The reactions that occur are as follows:

For a system that contains an electrolyte such as Na2SO4, which has a negligible effect on the ionization equilibrium of liquid water, the pH of the solution will be 7.00 and [H+] = [OH−] = 1.0 × 10−7. Assuming that \(P_\mathrm{O_2}\) = \(P_\mathrm{H_2}\) = 1 atm, we can use the standard potentials to calculate E for the overall reaction:

\[\begin{align}E_\textrm{cell} &=E^\circ_\textrm{cell}-\left(\dfrac{\textrm{0.0591 V}}{n}\right)\log(P_\mathrm{O_2}P^2_\mathrm{H_2}) \\ &=-\textrm{1.23 V}-\left(\dfrac{\textrm{0.0591 V}}{4}\right)\log(1)=-\textrm{1.23 V}\end{align} \label{20.9.11} \]

Thus Ecell is −1.23 V, which is the value of E°cell if the reaction is carried out in the presence of 1 M H+ rather than at pH 7.0. In practice, a voltage about 0.4–0.6 V greater than the calculated value is needed to electrolyze water. This added voltage, called an overvoltage, represents the additional driving force required to overcome barriers such as the large activation energy for the formation of a gas at a metal surface.
Overvoltages are needed in all electrolytic processes, which explains why, for example, approximately 14 V must be applied to recharge the 12 V battery in your car. In general, any metal that does not react readily with water to produce hydrogen can be produced by the electrolytic reduction of an aqueous solution that contains the metal cation. The p-block metals and most of the transition metals are in this category, but metals in high oxidation states, which form oxoanions, cannot be reduced to the metal by simple electrolysis. Active metals, such as aluminum and those of groups 1 and 2, react so readily with water that they can be prepared only by the electrolysis of molten salts. Similarly, any nonmetallic element that does not readily oxidize water to O2 can be prepared by the electrolytic oxidation of an aqueous solution that contains an appropriate anion. In practice, among the nonmetals, only F2 cannot be prepared using this method. Oxoanions of nonmetals in their highest oxidation states, such as NO3−, SO42−, and PO43−, are usually difficult to reduce electrochemically and usually behave like spectator ions that remain in solution during electrolysis.

In a process called electroplating, a layer of a second metal is deposited on the metal electrode that acts as the cathode during electrolysis. Electroplating is used to enhance the appearance of metal objects and protect them from corrosion. Examples of electroplating include the chromium layer found on many bathroom fixtures or (in earlier days) on the bumpers and hubcaps of cars, as well as the thin layer of precious metal that coats silver-plated dinnerware or jewelry. In all cases, the basic concept is the same.
A schematic view of an apparatus for electroplating silverware and a photograph of a commercial electroplating cell are shown in the accompanying figure. In electroplating a fork, for example, with silver, the half-reactions are \(\ce{Ag(s) -> Ag^{+}(aq) + e^{-}}\) at the anode and \(\ce{Ag^{+}(aq) + e^{-} -> Ag(s)}\) at the cathode. The overall reaction is the transfer of silver metal from one electrode (a silver bar acting as the anode) to another (a fork acting as the cathode). Because \(E^o_{cell} = 0\, V\), it takes only a small applied voltage to drive the electroplating process. In practice, various other substances may be added to the plating solution to control its electrical conductivity and regulate the concentration of free metal ions, thus ensuring a smooth, even coating.

If we know the stoichiometry of an electrolysis reaction, the amount of current passed, and the length of time, we can calculate the amount of material consumed or produced in a reaction. Conversely, we can use stoichiometry to determine the combination of current and time needed to produce a given amount of material.

The quantity of material that is oxidized or reduced at an electrode during an electrochemical reaction is determined by the stoichiometry of the reaction and the amount of charge that is transferred. For example, in the reaction

\[\ce{Ag^{+}(aq) + e^{−} → Ag(s)} \nonumber \]

1 mol of electrons reduces 1 mol of \(\ce{Ag^{+}}\) to \(\ce{Ag}\) metal. In contrast, in the reaction

\[\ce{Cu^{2+}(aq) + 2e^{−} → Cu(s)} \nonumber \]

1 mol of electrons reduces only 0.5 mol of \(\ce{Cu^{2+}}\) to \(\ce{Cu}\) metal. Recall that the charge on 1 mol of electrons is 1 faraday (1 F), which is equal to 96,485 C. We can therefore calculate the number of moles of electrons transferred when a known current is passed through a cell for a given period of time.
The total charge (\(q\) in coulombs) transferred is the product of the current (\(I\) in amperes) and the time (\(t\), in seconds):\[ q = I \times t \label{20.9.14} \]The stoichiometry of the reaction and the total charge transferred enable us to calculate the amount of product formed during an electrolysis reaction or the amount of metal deposited in an electroplating process.For example, if a current of 0.60 A passes through an aqueous solution of \(\ce{CuSO4}\) for 6.0 min, the total number of coulombs of charge that passes through the cell is as follows:\[\begin{align*} q &= \textrm{(0.60 A)(6.0 min)(60 s/min)} \\[4pt] &=\mathrm{220\;A\cdot s} \\[4pt] &=\textrm{220 C} \end{align*} \nonumber \]The number of moles of electrons transferred to \(\ce{Cu^{2+}}\) is therefore\[\begin{align*} \textrm{moles e}^- &=\dfrac{\textrm{220 C}}{\textrm{96,485 C/mol}} \\[4pt] &=2.3\times10^{-3}\textrm{ mol e}^- \end{align*} \nonumber \]Because two electrons are required to reduce a single Cu2+ ion, the total number of moles of Cu produced is half the number of moles of electrons transferred, or 1.2 × 10−3 mol. This corresponds to 76 mg of Cu. In commercial electrorefining processes, much higher currents (greater than or equal to 50,000 A) are used, corresponding to approximately 0.5 F/s, and reaction times are on the order of 3–4 weeks.A silver-plated spoon typically contains about 2.00 g of Ag. 
If 12.0 h are required to achieve the desired thickness of the Ag coating, what is the average current per spoon that must flow during the electroplating process, assuming an efficiency of 100%?

Given: mass of metal, time, and efficiency. Asked for: current required.

A We must first determine the number of moles of Ag corresponding to 2.00 g of Ag:

\(\textrm{moles Ag}=\dfrac{\textrm{2.00 g}}{\textrm{107.868 g/mol}}=1.85\times10^{-2}\textrm{ mol Ag}\)

B The reduction reaction is Ag+(aq) + e− → Ag(s), so 1 mol of electrons produces 1 mol of silver.

C Using the definition of the faraday, the charge needed is \((1.85\times10^{-2}\textrm{ mol e}^-)(96{,}485\textrm{ C/mol e}^-)=1.78\times10^3\textrm{ C}\). The current in amperes needed to deliver this amount of charge in 12.0 h is therefore

\[\begin{align*}\textrm{amperes} &=\dfrac{1.78\times10^3\textrm{ C}}{(\textrm{12.0 h})(\textrm{60 min/h})(\textrm{60 s/min})}\\ & =4.12\times10^{-2}\textrm{ C/s}=4.12\times10^{-2}\textrm{ A}\end{align*} \nonumber \]

Because the electroplating process is usually much less than 100% efficient (typical values are closer to 30%), the actual current necessary is greater than 0.1 A.

A typical aluminum soft-drink can weighs about 29 g. How much time is needed to produce this amount of Al(s) in the Hall–Heroult process, using a current of 15 A to reduce a molten Al2O3/Na3AlF6 mixture? Answer: 5.8 h.

In electrolysis, an external voltage is applied to drive a nonspontaneous reaction. The quantity of material oxidized or reduced can be calculated from the stoichiometry of the reaction and the amount of charge transferred, using the relationship of charge, current, and time:

\[ q = I \times t \nonumber \]

Electrolysis can also be used to produce H2 and O2 from water. In practice, an additional voltage, called an overvoltage, must be applied to overcome factors such as a large activation energy and a junction potential.
Electroplating is the process by which a second metal is deposited on a metal surface, thereby enhancing an object’s appearance or providing protection from corrosion. The amount of material consumed or produced in a reaction can be calculated from the stoichiometry of an electrolysis reaction, the amount of current passed, and the duration of the electrolytic reaction.Electrolysis I is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Electrolytic Cells
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Electrolytic_Cells
Voltaic cells are driven by a spontaneous chemical reaction that produces an electric current through an outside circuit. These cells are important because they are the basis for the batteries that fuel modern society. But they are not the only kind of electrochemical cell. The reverse of each spontaneous cell reaction is non-spontaneous and requires electrical energy to occur. The general form of the reaction can be written as:

\[ \underset{\longleftarrow \text{Non spontaneous}}{\overset{\text{Spontaneous} \longrightarrow}{\text{Reactants} \rightleftharpoons \text{Products} + \text{Electrical Energy}}}\]

It is possible to construct a cell that does work on a chemical system by driving an electric current through the system. These cells are called electrolytic cells. Electrolytic cells, like galvanic cells, are composed of two half-cells: one is a reduction half-cell, the other is an oxidation half-cell. The direction of electron flow in electrolytic cells, however, may be reversed from the direction of spontaneous electron flow in galvanic cells, but the definitions of both cathode and anode remain the same: reduction takes place at the cathode and oxidation occurs at the anode. Because the directions of both half-reactions have been reversed, the sign, but not the magnitude, of the cell potential has been reversed.

Electrolytic cells are very similar to voltaic (galvanic) cells in the sense that both require a salt bridge, both have a cathode and anode side, and both have a consistent flow of electrons from the anode to the cathode. However, there are also striking differences between the two cells. The main differences are outlined below.

Figure: Electrochemical Cells. A galvanic cell (left) transforms the energy released by a spontaneous redox reaction into electrical energy that can be used to perform work.
The oxidative and reductive half-reactions usually occur in separate compartments that are connected by an external electrical circuit; in addition, a second connection that allows ions to flow between the compartments (shown here as a vertical dashed line to represent a porous barrier) is necessary to maintain electrical neutrality. The potential difference between the electrodes (voltage) causes electrons to flow from the reductant to the oxidant through the external circuit, generating an electric current. In an electrolytic cell (right), an external source of electrical energy is used to generate a potential difference between the electrodes that forces electrons to flow, driving a nonspontaneous redox reaction; only a single compartment is employed in most applications. In both kinds of electrochemical cells, the anode is the electrode at which the oxidation half-reaction occurs, and the cathode is the electrode at which the reduction half-reaction occurs.To explain what happens in an electrolytic cell let us examine the decomposition of molten sodium chloride into sodium metal and chlorine gas. 
The reaction is written below:

\[2NaCl_{(l)} \rightarrow 2Na_{(l)} + Cl_{2(g)}\]

If molten \(NaCl_{(l)}\) is placed into the container and inert electrodes of \(C_{(s)}\) are inserted, attached to the positive and negative terminals of a battery, an electrolytic reaction will occur.

Predicting Electrolysis Reactions

There are four primary factors that determine whether or not electrolysis will take place, even if the external voltage exceeds the calculated amount. If all four of these factors are accounted for, we can successfully predict electrode half-reactions and overall reactions in electrolysis.

Exercise \(\PageIndex{1}\)

Predict the electrode reactions and the overall reaction when the anode is made of (a) copper and (b) platinum.

Michael Faraday discovered in 1833 that there is always a simple relationship between the amount of substance produced or consumed at an electrode during electrolysis and the quantity of electrical charge Q which passes through the cell. For example, the half-equation

\[Ag^+ + e^– \rightarrow Ag\]

tells us that when 1 mol Ag⁺ is plated out as 1 mol Ag, 1 mol e⁻ must be supplied from the cathode. Since the negative charge on a single electron is known to be 1.6022 × 10⁻¹⁹ C, we can multiply by the Avogadro constant to obtain the charge per mole of electrons. This quantity is called the Faraday constant, symbol F:

F = 1.6022 × 10⁻¹⁹ C × 6.0221 × 10²³ mol⁻¹ = 9.649 × 10⁴ C mol⁻¹

Thus, in the case of the silver half-equation above, 96,485 C would have to pass through the cathode in order to deposit 1 mol Ag. For any electrolysis the electrical charge Q passing through an electrode is related to the amount of electrons \(n_{e^-}\) by

\[F=\dfrac{Q}{n_{e^-}}\]

Thus F serves as a conversion factor between \(n_{e^-}\) and \(Q\). Often the electrical current rather than the quantity of electrical charge is measured in an electrolysis experiment.
Since a coulomb is defined as the quantity of charge which passes a fixed point in an electrical circuit when a current of one ampere flows for one second, the charge in coulombs can be calculated by multiplying the measured current (in amperes) by the time (in seconds) during which it flows:

\[Q = It\]

In this equation I represents current and t represents time. If you remember that

1 C = 1 A × 1 s

you can adjust the time units to obtain the correct result. Now that we can predict the electrode half-reactions and overall reactions in electrolysis, it is also important to be able to calculate the quantities of reactants consumed and the products produced. For these calculations we will be using the Faraday constant:

1 mol of electrons = 96,485 C

charge (C) = current (C/s) × time (s)

1 C/s = 1 coulomb of charge per second = 1 ampere (A)

Simple conversion for any type of problem:

Example \(\PageIndex{1}\)

The electrolysis of a dissolved bromine sample can be used to determine the amount of bromine in the sample. At the cathode, the reduction half-reaction is \[Br_{2(aq)} + 2 e^- \rightarrow 2 Br^-\] What mass of bromine can be deposited in 3.00 hours by a current of 1.18 A?

Solution:

3.00 hours × 60 min/hour × 60 sec/min × 1.18 C/sec × 1 mol e⁻/96,485 C = 0.132 mol e⁻

0.132 mol e⁻ × (1 mol Br₂/2 mol e⁻) × 159.81 g/mol = 10.6 g Br₂

1) Predict the products of electrolysis by filling in the table for the following ions: Cl⁻, Br⁻, I⁻, H⁺, OH⁻, Cu²⁺, Pb²⁺, Ag⁺, K⁺, Na⁺

2) Calculate the quantity of electrical charge needed to plate 1.386 mol Cr from an acidic solution of K2Cr2O7 according to the half-equation

H2Cr2O7(aq) + 12H⁺(aq) + 12e⁻ → 2Cr(s) + 7H2O(l)

3) Hydrogen peroxide, H2O2, can be manufactured by electrolysis of cold concentrated sulfuric acid.
The reaction at the anode is

2H2SO4 → H2S2O8 + 2H⁺ + 2e⁻

When the resultant peroxydisulfuric acid, H2S2O8, is boiled at reduced pressure, it decomposes:

2H2O + H2S2O8 → 2H2SO4 + H2O2

Calculate the mass of hydrogen peroxide produced if a current of 0.893 A flows for 1 hour.

4) The electrolysis of a dissolved chloride sample can be used to determine the amount of chloride in the sample. At the cathode, the reduction half-reaction is Cl2(aq) + 2 e⁻ → 2 Cl⁻. What mass of chloride can be deposited in 6.25 hours by a current of 1.11 A?

5) In an electrolytic cell the electrode at which the electrons enter the solution is called the ______; the chemical change that occurs at this electrode is called _______.

6) How long (in hours) must a current of 5.0 amperes be maintained to electroplate 60 g of calcium from molten CaCl2?

8) How many faradays are required to reduce 1.00 g of aluminum(III) to the aluminum metal?

9) Find the standard cell potential for an electrochemical cell with the following cell reaction.

Zn(s) + Cu²⁺(aq) → Zn²⁺(aq) + Cu(s)

Answers:

1)

Cl⁻ → chlorine; H⁺ → hydrogen
Cl⁻ → chlorine; Cu²⁺ → copper
I⁻ → iodine; H⁺ → hydrogen

2) 12 mol e⁻ is required to plate 2 mol Cr, giving us a stoichiometric ratio of 6 mol e⁻ per mol Cr. Then the Faraday constant can be used to find the quantity of charge:

Q = 1.386 mol Cr × (6 mol e⁻/1 mol Cr) × (96,485 C/1 mol e⁻) = 8.024 × 10⁵ C

3) The product of current and time gives us the quantity of electricity, Q. Knowing this we easily calculate the amount of electrons, \(n_{e^-}\).
From the first half-equation we can then find the amount of peroxydisulfuric acid, and the second leads to the amount and finally the mass of H2O2:

m = (0.893 A)(3600 s) × (1 mol e⁻/96,485 C) × (1 mol H2O2/2 mol e⁻) × (34.01 g/mol) = 0.5666 g H2O2

4) 0.259 mol e⁻

5) d

6) d

7) b

8) d

9) Write the half-reactions for each process.

Zn(s) → Zn²⁺(aq) + 2 e⁻
Cu²⁺(aq) + 2 e⁻ → Cu(s)

Look up the standard potentials for the reduction half-reactions:

E°(Cu²⁺/Cu) = +0.339 V
E°(Zn²⁺/Zn) = −0.762 V

Determine the overall standard cell potential:

E°cell = +0.339 V − (−0.762 V) = +1.101 V

Electrolytic Cells is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
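The worked bromine example and practice problems 2–4 all follow the same chain (time → charge → moles of electrons → moles of product → mass), which can be wrapped in a small helper for spot-checking. This is a sketch: the text's bromine solution stops at 0.132 mol e⁻, so the 2:1 electron-to-Br₂ ratio and a Br₂ molar mass of ≈159.81 g/mol are assumed here to finish that calculation.

```python
F = 96485.0  # Faraday constant, C per mole of electrons

def electrolysis_mass(current_a, time_s, electrons_per_formula, molar_mass):
    """Mass (g) of product from Q = I*t and the half-reaction stoichiometry."""
    mol_e = current_a * time_s / F
    return mol_e / electrons_per_formula * molar_mass

# Worked example: Br2 + 2 e- -> 2 Br-, 1.18 A for 3.00 h
mass_br2 = electrolysis_mass(1.18, 3.00 * 3600, 2, 159.81)  # ~10.6 g

# Problem 2: 12 e- plate 2 Cr, i.e. 6 e- per Cr atom
q_cr = 1.386 * 6 * F                                        # ~8.024e5 C

# Problem 3: 0.893 A for 1 h; 2 e- per H2O2 (via H2S2O8)
mass_h2o2 = electrolysis_mass(0.893, 3600, 2, 34.01)        # ~0.5666 g

# Problem 4: 1.11 A for 6.25 h; moles of electrons only
mol_e_cl = 1.11 * 6.25 * 3600 / F                           # ~0.259 mol e-

print(mass_br2, q_cr, mass_h2o2, mol_e_cl)
```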
Electroplating
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Electrolytic_Cells/Electroplating
Electroplating is the process of plating one metal onto another by electrolysis, most commonly for decorative purposes or to prevent corrosion of a metal. There are also specific types of electroplating such as copper plating, silver plating, and chromium plating. Electroplating allows manufacturers to use inexpensive metals such as steel or zinc for the majority of the product and then apply different metals on the outside to account for appearance, protection, and other properties desired for the product. The surface can be a metal or even plastic.

Sometimes finishes are solely decorative, as in products we use indoors or in a dry environment where they are unlikely to suffer from corrosion. These products normally have a thin layer of gold or silver applied so that they have an attractive appeal to the consumer. Electroplating is widely used in industries such as automobiles, airplanes, electronics, jewelry, and toys. The overall process of electroplating uses an electrolytic cell: a negative charge is put on the object, which is dipped into a solution containing a metal salt (the electrolyte), whose positively charged metal ions are then attracted to the negatively charged surface.

The Purposes of Electroplating:

The cathode would be the piece to be plated and the anode would be either a sacrificial anode or an inert anode, normally either platinum or carbon (graphite form). Sometimes plating occurs on racks or barrels for efficiency when plating many products. Please refer to electrolysis for more information. In the figure below, the Ag+ ions are being drawn to the surface of the spoon and it eventually becomes plated. The process is carried out using silver as the anode and the spoon as the cathode.
Electrons are transferred from the anode to the cathode, and the process takes place in a solution containing silver ions.

Figure: Electroplating silver onto a spoon.

Electroplating was first discovered by Luigi Brugnatelli in 1805 through using the electrodeposition process for the electroplating of gold. However, his discovery went unnoted, as he was disregarded by the French Academy of Science as well as Napoleon Bonaparte. A couple of decades later, John Wright managed to use potassium cyanide as an electrolyte for gold and silver, discovering that it was in fact an efficient electrolyte. The Elkington cousins later, in 1840, used potassium cyanide as their electrolyte and managed to create a feasible electroplating method for gold and silver. They obtained a patent for electroplating, and this method spread from England throughout the world. The electroplating method has gradually become more efficient and advanced through the use of more eco-friendly formulas and direct current power supplies.

There are many different metals that can be used in plating, so determining the right electrolyte is important for the quality of plating. Some electrolytes are acids, bases, metal salts, or molten salts. When choosing the type of electrolyte, some things to keep in mind are corrosion resistance, brightness or reflectivity, hardness, mechanical strength, ductility, and wear resistance.

The purpose of preparing the surface before plating another metal onto it is to ensure that it is clean and free of contaminants, which may interfere with bonding. Contamination often prevents deposition and causes poor adhesion. Normally this is done in three steps: cleaning, treatment, and rinsing. Cleaning usually consists of using certain solvents, such as alkaline cleaners, water, or acid cleaners, to remove layers of oil on the surface. Treatment includes surface modification, such as hardening of the parts and applying metal layers.
Rinsing leads to the final product and is the final touch to electroplating.

Two common methods of preparing the surface are physical cleaning and chemical cleaning. Chemical cleaning uses solvents that are either surface-active chemicals or chemicals that react with the metal surface. In physical cleaning, mechanical energy is applied to remove contaminants; examples include brush abrasion and ultrasonic agitation.

There are different processes by which people can electroplate metals, such as mass plating (also called barrel plating), rack plating, continuous plating, and line plating. Each process has its own set of procedures which allow for the ideal plating. Most electroplating coatings can be separated into these categories:

Electroplating is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Factors that Influence Reduction Potential
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Potentials/Factors_that_Influence_Reduction_Potential
In general, the ions of very late transition metals -- those towards the right-hand end of the transition metal block, such as copper, silver and gold -- have high reduction potentials. In other words, their ions are easily reduced. Alkali metal ions -- on the very left edge of the periodic table, such as potassium or cesium -- have very negative reduction potentials. These ions are very difficult to reduce. These trends are not surprising, because alkali metals are generally at the lower end of the electronegativity scale (Table A2) and are typically found as cations, not as neutral atoms. Late transition metals are comparatively electronegative, and so we would expect their ions to attract electrons more easily than alkali metal ions.

The nice thing about redox is you can always look at it from either direction. Oxidation is simply the opposite of reduction. How easily does an alkali metal lose an electron? If the standard reduction potential of lithium is very negative, then the oxidation potential of lithium metal is very positive. If it is uphill to transfer an electron from hydrogen to lithium cation, it must be downhill to transfer an electron from a lithium atom to a proton. After all, hydrogen is more electronegative than any of the alkalis. Of course, since a late transition metal is generally more electronegative than an alkali metal, copper or silver or gold ought to be more difficult to oxidize than sodium or potassium.

The large trends in redox chemistry are not surprising, then. It's simply a matter of the electron moving to a lower energy level on another atom. If we look a little more closely, though, there are plenty of exceptions to the general trend. For example, in the coinage triad (Group 11), gold has the most positive reduction potential, followed by silver, then copper.
That's exactly the opposite of expectations; copper, at the top of the column, should be the most electronegative and have the most positive reduction potential, not the least. What's going on in those cases? Well, there's more going on than just moving an electron. Remember, in the measurement of a reduction potential, we are generally working with a metal electrode in an aqueous solution of ions.

What else is going on in this reaction? Well, the atom that gets reduced starts out as an ion in water, but an ion in water does not sit around on its own. It's a Lewis acid, an electrophile. Water is a nucleophile, a potential ligand. So the ion in solution is actually a coordination complex. It swims around for a while, then bumps into the cathode, where it picks up the electron. But the resulting atom does not stay in solution; it gets deposited at the electrode, along with others of its kind. It becomes part of a metal solid.

So there are three different things happening here: the ion sheds its water ligands, it gains an electron, and the resulting metal atom is incorporated into the solid electrode. If we could get some physical data on each of those events, we might be able to explain why these reduction potentials are contrary to expectations.

The kinds of data we have available for these individual steps may actually fit the opposite reaction better. We can estimate the energy involved in the removal of a metal atom from the solid, the loss of an electron from the metal, and the binding of water to the resulting ion. These data come from measurement of the heat of vaporization of the metal, the ionization energy of the metal, and the enthalpy of hydration of the metal. The trouble is, these data all involve the gas phase. If they really applied to this situation, it would be as if metal atoms sprayed out into the air above the electrode, shot their electrons back, grabbed some water molecules that drifted by, and then dropped down into the solution.
Of course that does not happen; we do not see a little, sparkly, metallic mist appear when we connect the circuit, or little lightning bolts from the cloud of metal atoms to the electrode, and we do not see a splash or a fizz or little tendrils of steam as the resulting ions drop into the water.That does not matter. The data we have here are still very useful.That's because what we are looking at -- the energy difference between two states -- is a state function. That means it does not matter how we get from one state to the other; the overall difference will always be the same. So if the reaction did happen via the gas phase, the energy change would be exactly as it is when it happens directly at the electrode - solution interface. We can do a sort of thought experiment using the data we know, and even though those steps do not really happen the way they do in the experiments that gave rise to the data, they will eventually lead to the right place. This sort of imaginary path to mimic a reaction we want to know more about employs an idea called "Hess's Law". It is frequently used to gain insight into reactions throughout chemistry.Taking all of these data together, we can get a better picture of the overall energy changes that would occur during a reduction or, more directly, an oxidation. The first thing to note is that copper has a higher ionization energy than silver. As expected, Cu+ is harder to form than Ag+, because copper is more electronegative than silver. However, Au+ appears to be the hardest to form of all three. It is as if gold were the most electronegative of these three elements, but it's at the bottom of this column. Gold really is more electronegative than copper or silver (Table A2).There are a few deviations from expectation in periodic trends, but this one is probably attributable to a phenomenon called "the lanthanide contraction." Notice the covalent radii of gold and silver in the table above. 
Normally, we expect atoms to get bigger row by row, as additional layers of electrons are filled in. Not so for the third row of transition metals. To see the probable reason for that, you have to look at the whole periodic table, and remember that the lanthanides and actinides -- the two orphaned rows at the bottom -- actually fit in the middle of the periodic table. The lanthanides, in particular (lanthanum, La, to ytterbium, Yb), go in between barium (Ba) and lutetium (Lu).

As a result, the third row of transition metals contains many more protons in its nuclei, compared to the second row transition metals of the same column. Silver has ten more protons in its nucleus than rubidium, the first atom in the same row as silver, but gold has twenty-four more than cesium. The third row "contracts" because of these additional protons. So the exceptionally positive reduction potential of Au+ (and, by relation, the exceptionally negative oxidation potential of gold metal) may be a result of the lanthanide contraction.

What about copper versus silver? Copper still has a higher electronegativity than silver, but copper metal is more easily oxidized. It's not that copper is more easily pulled away from the metallic bonds holding it in the solid state; copper's heat of vaporization is a little higher than silver's. That leaves hydration. In fact, copper ion does have a higher enthalpy of hydration than silver; more energy is released when water binds to copper than when water binds to silver. The difference between these two appears to be all about the solvation of the copper ion, which is more stable with respect to the metal than is silver ion.

Why would that be? Well, copper is smaller than silver. A simple look at Coulomb's law reminds us that the closer the electrons of the donor ligand are to the cation, the more tightly bound they will be.
Looking at it a slightly different way, copper is smaller and "harder" than silver, and forms a stronger bond with water, which is a "hard" ligand.

Taking a look at a Hess's Law cycle for a redox reaction is a useful approach to get some additional insight into the reaction. It lets us use data to assess the influence of various aspects of the reaction that we can't evaluate directly from the reduction potential, because in the redox reaction all of these factors are conflated into one number.

Chris P Schaller, Ph.D. (College of Saint Benedict / Saint John's University)

This page titled Factors that Influence Reduction Potential is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Chris Schaller.
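The Hess's Law bookkeeping described above can be sketched numerically. The enthalpies below (kJ/mol) are rough, rounded literature values inserted for illustration only, not data from this page; the point is simply that summing the atomization, ionization, and hydration terms for M(s) → M⁺(aq) + e⁻ reproduces the qualitative ordering the article argues for (copper easiest to oxidize, gold hardest).

```python
# Approximate values (kJ/mol): atomization enthalpy, first ionization
# energy, hydration enthalpy of M+. These are illustrative assumptions,
# not measured data from this page.
cycle = {
    "Cu": (337.0, 745.5, -593.0),
    "Ag": (285.0, 731.0, -475.0),
    "Au": (368.0, 890.1, -615.0),
}

# M(s) -> M+(aq) + e-  ~  atomization + ionization + hydration
totals = {metal: sum(steps) for metal, steps in cycle.items()}
print(totals)
```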
Faraday's Law
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Faraday's_Law
In every electrochemical process, whether spontaneous or not, a certain amount of electric charge is transferred during the oxidation and reduction. The half-reactions we have written for electrode processes include the electrons which carry that charge. It is possible to measure the rate at which the charge is transferred with a device called an ammeter.

An ammeter measures the current flowing through a circuit. The units of current are amperes (A) (amps, for short). Unlike a voltmeter, an ammeter allows electrons to pass and essentially "clocks" them as they go by. The amount of electric charge which has passed through the circuit can then be calculated by a simple relationship:

Charge = current × time OR Coulombs = amps × seconds

This enables us to connect reaction stoichiometry to electrical measurements. The principles underlying these relationships were worked out in the first half of the 19th century by the English scientist Michael Faraday.

The diagram shows how voltage and current might be measured for a typical galvanic cell, but the arrangement is the same for any electrochemical cell. Notice that the voltmeter is placed across the electron conduit (i.e., the wire) while the ammeter is part of that conduit. A good quality voltmeter can be used in this way even though it might appear to be "shorting out" the circuit. Since electrons cannot pass through the voltmeter, they simply continue along the wire.

Both the voltmeter and ammeter are polarized: they have negative and positive terminals marked on them, and electrons are "expected" in only one direction. This is important in measurements of direct current (DC) such as comes out of (or goes into) electrochemical cells.

Faraday's law of electrolysis might be stated this way: the amount of substance produced at each electrode is directly proportional to the quantity of charge flowing through the cell. Of course, this is somewhat of a simplification.
Substances whose oxidation or reduction involves different numbers of electrons per atom or ion will not be produced in the same molar amounts. But when those additional ratios are factored in, the law is correct in all cases.

Stephen R. Marsden

Faraday's Law is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
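The qualification in the last paragraph, that species requiring different numbers of electrons per ion give different molar yields for the same charge, is easy to illustrate with a sketch:

```python
F = 96485.0  # Faraday constant, C per mole of electrons

def moles_from_charge(charge_c, electrons_per_ion):
    """Faraday's law: moles of product deposited by a given charge."""
    return charge_c / F / electrons_per_ion

q = F  # exactly one faraday of charge
# Ag+ needs 1 e-, Cu2+ needs 2 e-, Al3+ needs 3 e- per atom deposited
yields = [moles_from_charge(q, n) for n in (1, 2, 3)]
print(yields)  # 1 mol Ag, 0.5 mol Cu, ~0.333 mol Al
```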