Galvanic Cells
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Basics_of_Electrochemistry/Electrochemistry/Galvanic_Cells
Chemistry is the driving force behind the magic of batteries. A battery is a package of one or more galvanic cells used for the production and storage of electric energy by chemical means. A galvanic cell consists of at least two half cells, a reduction cell and an oxidation cell. Chemical reactions in the two half cells provide the energy for the galvanic cell's operation. Each half cell consists of an electrode and an electrolyte solution. Usually the solution contains ions derived from the electrode by an oxidation or reduction reaction. A galvanic cell is also called a voltaic cell. The spontaneous reactions in it provide the electric energy or current. Two half cells can also be put together to form an electrolytic cell, which is used for electrolysis. In this case, electric energy is used to force nonspontaneous chemical reactions. Many definitions can be given to oxidation and reduction reactions. In terms of electrochemistry, the following definition is most appropriate, because it lets us see how the electrons perform their roles in the chemistry of batteries. Note: Loss of electrons is oxidation (LEO), and gain of electrons is reduction (GER). Oxidation and reduction reactions cannot be carried out separately; they have to appear together in a chemical reaction. Thus oxidation and reduction reactions are often called redox reactions. In terms of redox reactions, a reducing agent and an oxidizing agent form a redox couple as they undergo the reaction:\(\ce{Oxidant + n\: e^- \rightarrow Reductant}\) \(\ce{Reductant \rightarrow Oxidant + n\: e^-}\)An oxidant is an oxidizing reagent, and a reductant is a reducing agent. 
A redox couple is written as reductant | oxidant or oxidant | reductant. The two members of the couple are the same element or compound, but in different oxidation states. As an introduction to electrochemistry, let us take a look at a simple voltaic or galvanic cell. When a stick of zinc (\(\ce{Zn}\)) is inserted in a zinc salt solution, there is a tendency for \(\ce{Zn}\) to lose electrons according to the reaction\(\ce{Zn \rightarrow Zn^2+ + 2 e^-}\).The arrangement of a \(\ce{Zn}\) electrode in a solution containing \(\ce{Zn^2+}\) ions is a half cell, which is usually represented by the notation \(\ce{Zn | Zn^2+}\). Zinc metal and the \(\ce{Zn^2+}\) ion form a redox couple, \(\ce{Zn^2+}\) being the oxidant and \(\ce{Zn}\) the reductant. The same notation was used to designate a redox couple earlier. Similarly, when a stick of copper (\(\ce{Cu}\)) is inserted in a copper salt solution, there is also a tendency for \(\ce{Cu}\) to lose electrons according to the reaction\(\ce{Cu \rightarrow Cu^2+ + 2 e^-}\).This is another half cell or redox couple: \(\ce{Cu | Cu^2+}\).However, the tendency for \(\ce{Zn}\) to lose electrons is stronger than that for copper. When the two half cells are connected by a salt bridge and an electric conductor to form a closed circuit for electrons and ions to flow, copper ions (\(\ce{Cu^2+}\)) actually gain electrons to become copper metal. The reaction and the redox couple are respectively represented by\(\ce{Cu^2+ + 2 e^- \rightarrow Cu}\) and \(\ce{Cu^2+ | Cu}\).This arrangement is called a galvanic cell or battery. In text form, this battery is represented by\(\ce{Zn | Zn^2+ || Cu^2+ | Cu}\),in which the two vertical lines ( || ) represent a salt bridge, and a single vertical line ( | ) represents the boundary between the two phases (metal and solution). Electrons flow through the electric conductors connecting the electrodes, and ions flow through the salt bridge. 
When \(\ce{[Zn^2+]} = \ce{[Cu^2+]} = \textrm{1.0 M}\), the voltage between the two terminals has been measured to be 1.100 V for this battery. A battery is a package of one or more galvanic cells used for the production and storage of electric energy. The simplest battery consists of two half cells, a reduction half cell and an oxidation half cell. The overall reaction of the galvanic cell is\(\ce{Zn + Cu^2+ \rightarrow Zn^2+ + Cu}\)Note that this redox reaction does not involve oxygen at all. For review, note the following. Theoretically, any redox couple may form a half cell, and any two half cells may combine to give a battery, but there is considerable technical difficulty in making some couples into half cells.
Skill - Identify and explain redox reactions. Loss of electrons by \(\ce{Zn}\) means \(\ce{Zn}\) is oxidized (LEO).
Skill - Identify and explain oxidized and reduced species. The \(\ce{Cu^2+}\) ions gain electrons to become \(\ce{Cu}\) (metal) atoms; thus \(\ce{Cu^2+}\) is reduced.
Discussion - The \(\ce{Zn}\) metal is more reactive than copper. In an acidic solution, \(\ce{Zn}\) atoms lose electrons to \(\ce{H+}\) ions, but copper atoms will not. The tendency is measured in terms of the standard reduction potential.
Skill - Explain the conductance of a solution. The positive and negative ions move in opposite directions in a solution, leading to conduction of electricity.
Chung (Peter) Chieh (Professor Emeritus, Chemistry, University of Waterloo). Galvanic Cells is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
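The 1.100 V figure can be reproduced from tabulated standard reduction potentials via \(E^{\ce o}_{cell} = E^{\ce o}_{cathode} - E^{\ce o}_{anode}\). Below is a minimal Python sketch, not part of the original text; it uses the +0.339 V (\(\ce{Cu^2+ | Cu}\)) and −0.76 V (\(\ce{Zn^2+ | Zn}\)) values quoted in the Half-Cell Reaction article, and the dictionary and function names are illustrative:

```python
# Sketch (not from the original text): predicting the Zn-Cu cell voltage
# from tabulated standard reduction potentials, in volts vs. the SHE.
STANDARD_REDUCTION_POTENTIALS = {
    "Zn2+/Zn": -0.76,   # value quoted later in this chapter
    "Cu2+/Cu": 0.339,   # value quoted later in this chapter
}

def cell_potential(cathode_couple, anode_couple):
    """E(cell) = E(cathode) - E(anode); a positive value means spontaneous."""
    table = STANDARD_REDUCTION_POTENTIALS
    return table[cathode_couple] - table[anode_couple]

# Zn | Zn2+ || Cu2+ | Cu: Cu2+/Cu is the cathode (reduction), Zn2+/Zn the anode.
print(f"E(cell) = {cell_potential('Cu2+/Cu', 'Zn2+/Zn'):.2f} V")  # E(cell) = 1.10 V
```

Reversing the roles of the two couples gives a negative potential, i.e. a nonspontaneous reaction, consistent with the text's remark about electrolytic cells.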
Galvanization
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Exemplars/Corrosion/Galvanization
Galvanization is a metal coating process in which a ferrous part is coated with a thin layer of zinc. The zinc coating seals the surface of the part from the environment, preventing oxidation and weathering. The primary method of galvanization is "hot dip galvanization", which has been in use for over 150 years. While the idea of coating a part in molten zinc was first proposed by the chemist Paul Jacques Malouin in 1742, the process was not put into practice until it was patented by the chemist Stanislas Sorel in 1836. Sorel's process has changed little since then: it still involves coating a part in molten zinc after cleaning it with an acid solution and coating it in flux. [Figure: crystalline surface of a hot-dip galvanized handrail (public domain; TMg).] Galvanization helps to extend the life of steel parts by providing a barrier between the steel and the atmosphere, preventing iron oxide from forming on the surface of the steel, and it provides superior corrosion resistance for parts exposed to the environment. Galvanization is a cost-effective solution for coating steel parts, specifically those that will receive significant environmental exposure over their lifetime. Binod Shrestha (University of Lorraine). Galvanization is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Half-Cell Reaction
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Basics_of_Electrochemistry/Electrochemistry/Half-Cell_Reaction
A half cell is one of the two electrodes in a galvanic cell or simple battery. For example, in the \(\ce{Zn-Cu}\) battery, the two half cells make an oxidizing-reducing couple. Placing a piece of reactant in an electrolyte solution makes a half cell. Unless it is connected to another half cell via an electric conductor and a salt bridge, no reaction will take place in a half cell. On the cathode, reduction takes place. On the anode, oxidation takes place. A battery requires at least two electrodes: the anode, at which oxidation occurs, and the cathode, at which reduction occurs. Reduction and oxidation are always required in any battery setup. Battery operation requires an anode, a cathode, a load, and a salt bridge (if one is not already present). These are the key elements of a battery.
Example 1: Write the anode and cathode reactions for a galvanic cell that utilizes the reaction\(\ce{Ni_{\large{(s)}} + 2 Fe^3+ \rightarrow Ni^2+ + 2 Fe^2+}\)
Solution: Oxidation takes place at the anode, and the electrode must be \(\ce{Ni\, |\, Ni^2+}\):\(\ce{Ni_{\large{(s)}} \rightarrow Ni^2+_{\large{(aq)}} + 2 e^-}\)The reduction occurs at the cathode, \(\ce{Fe^3+,\: Fe^2+}\):\(\ce{2 Fe^3+ + 2 e^- \rightarrow 2 Fe^2+}\)For every \(\ce{Ni}\) atom oxidized, two \(\ce{Fe^3+}\) ions are reduced. The electrons from the \(\ce{Ni}\) metal flow from the anode, pass through the load, and then carry out the reduction at the surface of the cathode, reducing the ferric (\(\ce{Fe^3+}\)) ions to ferrous ions. In the meantime, the ions in the solution move accordingly to keep the charges balanced.
Discussion: The galvanic cell is\(\ce{Ni_{\large{(s)}}\, |\, Ni^2+_{\large{(aq)}}\, ||\, Fe^3+_{\large{(aq)}},\: Fe^2+_{\large{(aq)}}\, |\, Pt_{\large{(s)}}}\)where "\(\ce{Fe^3+_{\large{(aq)}},\: Fe^2+_{\large{(aq)}}}\)" represents a solution containing two types of ions. 
An inert \(\ce{Pt}\) electrode is placed in the solution to provide electrons for the reduction.
Example 2: The charge on an electron is \(1.602\times10^{-19}\) C (coulomb). What is the charge on 1 mole of electrons?
Solution: The charge on one mole (Avogadro's number) of electrons is called a Faraday (F).\(\begin{align} F &= (6.022045\times10^{23} / \ce{mol}) \times (1.602\times10^{-19}\: \ce C)\\ &= \textrm{96485 C/mol} \end{align}\)The chemical history of how Avogadro's number and the charge on an electron were determined, and how the two values agree with each other, is very interesting.
Discussion: Who determined the charge on a single electron? Robert Millikan was awarded the Nobel Prize for his determination of the electron charge at the University of Chicago. If 96485 C of charge is required to deposit 107.9 g of silver, what is the charge of an electron?
Example 3: A galvanic cell with a voltage of 1.1 V utilizes the reaction\(\ce{Zn + Cu^2+ \rightarrow Cu + Zn^2+}\)as a source of energy. If 6.3 g of \(\ce{Cu}\) and 11 g of \(\ce{Zn}\) are used, what is the maximum usable energy in this battery?
Solution: The 6.3 g of \(\ce{Cu}\) and 11 g of \(\ce{Zn}\) correspond to 0.10 and 0.17 mol of \(\ce{Cu}\) and \(\ce{Zn}\), respectively. Thus, \(\ce{Cu}\) is the limiting reagent, and since each \(\ce{Cu^2+}\) ion accepts two electrons, 0.10 mol corresponds to a charge of 2×96485×0.10 C (2 significant figures). The maximum available energy is then\(\begin{align} \textrm{Max. Energy} &= \mathrm{(1.1\: V)(2)(96485\: C/mol)(0.10\: mol)}\\ &= \mathrm{21000\: J \hspace{15px} (1\: J = 1\: VC)} \end{align}\)
Discussion: This energy corresponds to about 5100 cal, which is enough to bring about 50 g of water from 273 K to its boiling point (373 K). 
Another way of looking at it: 21000 J is enough energy to lift a 20-kg rocket to a height of about 110 m.
Example 4: If the galvanic cell of Example 3 is used to power a calculator that consumes 1 mW, how long, theoretically, will the battery last in continuous operation?
Solution: Power consumption of 1 mW is equivalent to 0.001 J/sec.\(\begin{align} \mathrm{\dfrac{21000\: J}{0.001\: J/sec}} &= \textrm{2.1E7 sec}\\ &= \textrm{5800 hrs}\\ &= \textrm{243 days} \end{align}\)This is a realistic example. Most recent calculators use very little power. I noted that a SHARP programmable calculator uses 15 mW, a Casio calculator uses 0.5 mW, and an HP 25 uses 500 mW.
A half cell consists of an electrode and the species to be oxidized or reduced. If a material conducts electricity, it may be used as an electrode. The hydrogen electrode consists of a \(\ce{Pt}\) electrode, \(\ce{H2}\) gas and \(\ce{H+}\). This half cell is represented by\(\ce{Pt_{\large{(s)}}\, |\, H_{2\large{(g)}}\, |\, H+_{\large{(aq)}}}\)where the vertical bars represent the phase boundaries. Conventionally, the cell potential for the hydrogen electrode is defined to be exactly zero under the conditions given below:\(\ce{Pt\, |\, H_{2\: \large{(g,\: 1\: atm)}}\, |\, H+_{\large{(aq)}},\: 1\: M}\)The notations for half cells are not rigid; they are a simplified way to represent a rather complicated setup. The tendency for a reduction reaction is measured by its reduction potential. The reduction potential is a quantity measured by comparison. As mentioned earlier, the reduction potential of the standard hydrogen electrode (SHE) is arbitrarily defined to be zero as a reference point for comparison. When the half cell \(\ce{Cu^2+\, |\, Cu}\) for the reaction\(\ce{Cu^2+ + 2 e^- \rightarrow Cu}\)is coupled with the standard hydrogen electrode (SHE), the copper electrode is the cathode, where reduction takes place. 
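The arithmetic in Examples 2-4 can be checked with a short script. The sketch below is illustrative only; the molar mass of copper (63.5 g/mol) is the one value not stated in the examples, and the factor of 2 reflects the two electrons accepted per \(\ce{Cu^2+}\) ion (the text's figures are rounded, so small differences are expected):

```python
# Sketch of the arithmetic in Examples 2-4 (values taken from the examples;
# the molar mass of Cu, 63.5 g/mol, is the standard value, added here).
AVOGADRO = 6.022045e23       # 1/mol, as used in Example 2
ELECTRON_CHARGE = 1.602e-19  # C, as used in Example 2

# Example 2: charge on one mole of electrons (the Faraday constant).
faraday = AVOGADRO * ELECTRON_CHARGE  # about 96500 C/mol

# Example 3: 6.3 g Cu is the limiting reagent; each Cu2+ accepts 2 electrons.
mol_cu = 6.3 / 63.5                   # about 0.10 mol
charge = 2 * faraday * mol_cu         # coulombs transferred
max_energy = 1.1 * charge             # joules (1 J = 1 V C), about 2.1e4 J

# Example 4: continuous drain at 1 mW = 0.001 J/s.
runtime_days = max_energy / 0.001 / 86400
print(f"F ~ {faraday:.0f} C/mol, E ~ {max_energy:.0f} J, ~{runtime_days:.0f} days")
```

The script reproduces a Faraday of roughly 96500 C/mol, a maximum energy of roughly \(2.1\times10^4\) J, and a theoretical runtime of roughly 240 days of continuous operation.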
The potential across the cell\(\ce{Pt\, |\, H_{2\: \large{(g,\: 1\: atm)}}\, |\, H+_{\large{(aq)}},\: 1\: M\, ||\, Cu^2+ \, |\, Cu}\)has been measured to be 0.339 V. This indicates that \(\ce{Cu^2+}\) ions are easier to reduce than hydrogen ions, and we usually represent this by\(\ce{Cu^2+ + 2 e^- \rightarrow Cu} \hspace{15px} E^{\ce o} = 0.339\: \ce V\)A positive cell potential indicates a spontaneous reaction. When the cell \(\ce{Zn\, |\, Zn^2+}\) is coupled with the SHE,\(\ce{Zn\, |\, Zn^2+_{\large{(aq)}}\: 1\: M\, ||\, H+_{\large{(aq)}},\: 1\: M\, |\, H_{2\: \large{(g,\: 1\: atm)}}\, |\, Pt}\)the potential has been measured to be 0.76 V. However, in this cell \(\ce{Zn}\) is oxidized, and its electrode is the anode. Therefore, the reduction potential has a negative value for the reduction reaction\(\ce{Zn^2+ + 2 e^- \rightarrow Zn} \hspace{15px} E^{\ce o} = - 0.76\: \ce V\)This means that \(\ce{Zn^2+}\) ions are less ready to accept electrons than hydrogen ions. Ideally, for every redox couple there is a reduction potential. Standard reduction potentials have been measured against the SHE or other reference electrodes, and the values are usually tabulated in handbooks. For reference, the SHE half-reaction is\(\mathrm{2 H^+_{\large{(aq,\: 1.00\: F)}} + 2 e^- \rightarrow H_{2\:\large{(g,\: 1\:atm)}}}\) 
Chung (Peter) Chieh (Professor Emeritus, Chemistry, University of Waterloo). Half-Cell Reaction is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Half-Reactions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Half-Reactions
A half reaction is either the oxidation or the reduction component of a redox reaction. A half reaction is obtained by considering the change in oxidation states of the individual substances involved in the redox reaction. Often, the concept of half-reactions is used to describe what occurs in an electrochemical cell, such as a Galvanic cell battery. Half-reactions can be written to describe both the metal undergoing oxidation (known as the anode) and the metal undergoing reduction (known as the cathode). Half-reactions are often used as a method of balancing redox reactions. For oxidation-reduction reactions in acidic conditions, after balancing the atoms and oxidation numbers, one will need to add H+ ions to balance the hydrogen ions in the half reaction. For oxidation-reduction reactions in basic conditions, after balancing the atoms and oxidation numbers, first treat it as an acidic solution and then add OH- ions to balance the H+ ions in the half reactions (which would give H2O).
Galvanic cell
Consider a Galvanic cell constructed with a piece of zinc (Zn) submerged in a solution of zinc sulfate (ZnSO4) and a piece of copper (Cu) submerged in a solution of copper(II) sulfate (CuSO4). The overall reaction is:
Zn + Cu2+ → Zn2+ + Cu
At the Zn anode, oxidation takes place (the metal loses electrons). This is represented in the following oxidation half-reaction (note that the electrons are on the products side):
Zn → Zn2+ + 2e-
At the Cu cathode, reduction takes place (electrons are accepted). This is represented in the following reduction half-reaction (note that the electrons are on the reactants side):
Cu2+ + 2e- → Cu
Consider also the burning of magnesium ribbon (Mg). When magnesium burns, it combines with oxygen (O2) from the air to form magnesium oxide (MgO) according to the following equation:
2Mg + O2 → 2MgO
Magnesium oxide is an ionic compound containing Mg2+ and O2- ions, whereas Mg(s) and O2(g) are elements with no charges. 
The Mg(s), with zero charge, takes on a +2 charge going from the reactant side to the product side, and the O2(g), with zero charge, takes on a -2 charge. This is because when Mg(s) becomes Mg2+, it loses 2 electrons. Since there are 2 Mg on the left side, a total of 4 electrons are lost, according to the following oxidation half reaction:
2Mg → 2Mg2+ + 4e-
On the other hand, O2 is reduced: its oxidation state goes from 0 to -2. Thus, a reduction half-reaction can be written for the O2 as it gains 4 electrons:
O2 + 4e- → 2O2-
The overall reaction is the sum of both half-reactions:
2Mg + O2 + 4e- → 2Mg2+ + 2O2- + 4e-
When a chemical reaction, especially a redox reaction, takes place, we do not see the electrons as they appear and disappear during the course of the reaction. What we see are the reactants (starting material) and the end products. Because of this, electrons appearing on both sides of the equation are canceled. After canceling, the equation is re-written as
2Mg + O2 → 2Mg2+ + 2O2-
The two ions, positive (Mg2+) and negative (O2-), exist on the product side, and they combine immediately to form the compound magnesium oxide (MgO) due to their opposite charges (electrostatic attraction). In any given oxidation-reduction reaction, there are two half-reactions: an oxidation half-reaction and a reduction half-reaction. The sum of these two half-reactions is the oxidation-reduction reaction.
Consider the reaction below:
Cl2 + 2Fe2+ → 2Cl- + 2Fe3+
The two elements involved, iron and chlorine, each change oxidation state: iron from +2 to +3, chlorine from 0 to -1. There are then effectively two half-reactions occurring. These changes can be represented in formulas by inserting appropriate electrons into each half-reaction:
Fe2+ → Fe3+ + e-
Cl2 + 2e- → 2Cl-
Given two half-reactions, it is possible, with knowledge of the appropriate electrode potentials, to arrive at the full (original) reaction in the same way. The decomposition of a reaction into half-reactions is key to understanding a variety of chemical processes. For example, in the above reaction, it can be shown that this is a redox reaction in which Fe is oxidised and Cl is reduced. 
Note the transfer of electrons from Fe to Cl. Decomposition is also a way to simplify the balancing of a chemical equation: a chemist can atom balance and charge balance one piece of an equation at a time. It is also possible, and sometimes necessary, to consider a half-reaction in either basic or acidic conditions, as there may be an acidic or basic electrolyte in the redox reaction. Due to this electrolyte, it may be more difficult to satisfy the balance of both the atoms and the charges. This is done by adding H2O, OH-, e-, and/or H+ to either side of the reaction until both atoms and charges are balanced.
Consider the half-reaction below:
PbO2 → PbO
In basic conditions, OH-, H2O, and e- can be used to balance the charges and atoms:
2e- + H2O + PbO2 → PbO + 2OH-
Again, consider the half-reaction below:
PbO2 → PbO
In acidic conditions, H+, H2O, and e- can be used to balance the charges and atoms:
2e- + 2H+ + PbO2 → PbO + H2O
Notice that both sides are charge balanced and atom balanced. Often both H+ and OH- will be present in acidic and basic conditions, but the reaction of the two ions yields water (shown below):
H+ + OH- → H2O
Source: http://en.Wikipedia.org/wiki/Half-reaction
Half-Reactions is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
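Balance checks like the ones above lend themselves to a small script. The sketch below, which is not from the original page, verifies the acidic-condition half-reaction by summing atoms and charges on each side; species are entered by hand as (atom counts, charge, coefficient) triples, since no chemical formula parser is assumed:

```python
# Sketch: verifying that the acidic-condition half-reaction
#   2e- + 2H+ + PbO2 -> PbO + H2O
# is atom balanced and charge balanced.
from collections import Counter

def side_totals(species):
    """Sum atom counts and total charge over one side of a half-reaction."""
    atoms, charge = Counter(), 0
    for atom_counts, q, coeff in species:
        for element, n in atom_counts.items():
            atoms[element] += coeff * n
        charge += coeff * q
    return atoms, charge

left = [({}, -1, 2),                   # 2 e-
        ({"H": 1}, +1, 2),             # 2 H+
        ({"Pb": 1, "O": 2}, 0, 1)]     # PbO2
right = [({"Pb": 1, "O": 1}, 0, 1),    # PbO
         ({"H": 2, "O": 1}, 0, 1)]     # H2O

assert side_totals(left) == side_totals(right)  # same atoms, same charge
print("balanced")
```

Editing any coefficient (for instance, dropping one of the two electrons) makes the assertion fail, which is exactly the kind of bookkeeping error the half-reaction method is designed to catch.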
Introduction to Lasers
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Introduction_to_Lasers
This module discusses basic concepts related to lasers. Lasers are light sources that produce electromagnetic radiation through the process of stimulated emission. Laser light has properties different from those of more common light sources, such as incandescent bulbs and fluorescent lamps. Typically, laser radiation spans a small range of wavelengths and is emitted in a beam that is spatially narrow. The word laser is an acronym for Light Amplification by Stimulated Emission of Radiation. Lasers are ubiquitous in our lives and are broadly applied in areas that include scientific research, medicine, engineering, telecommunications, industry and business (see the Applications page for examples). This module is aimed at presenting the most basic principles of lasers and discussing aspects of common types. Properties of laser radiation and laser optical components are introduced. This page titled Introduction to Lasers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Carol Korzeniewski via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
Lasers
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Lasers
LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. A laser is a type of light source that has the unique characteristics of directionality, brightness, and monochromaticity. The goal of this module is to explain how a laser operates (stimulated or spontaneous emission), describe important components, and give some examples of types of lasers and their applications.
Gas Lasers: Gas lasers have lasing media that are made up of one or a mixture of gases or vapors. Gas lasers can be classified in terms of the type of transitions that lead to their operation: atomic or molecular. The most common of all gas lasers is the helium-neon (He-Ne) laser.
Laser Theory: There are four laser demands: population inversion, laser threshold, an energy source, and an active medium.
Overview of Lasers: This page explains how a laser operates (stimulated or spontaneous emission), describes important components, and gives some examples of types of lasers and their applications.
Semiconductor and Solid-state Lasers: In both solid-state and semiconductor lasers the lasing medium is a solid. Aside from this similarity, however, these two laser types are very different from each other. In the case of solid-state lasers, the lasing species is typically an impurity that resides in a solid host, a crystal of some sort. The crystal modifies some of the quantized energy levels of the impurity, but the lasing is still almost atomic, similar to gas lasers.
Lasers is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Linear and Nonlinear Regression
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Data_Analysis/Linear_and_Nonlinear_Regression
Regression analysis is a statistical methodology concerned with relating a variable of interest, called the dependent variable and denoted by the symbol \(y\), to a set of independent variables, denoted by the symbols \(x_1\), \(x_2\), …, \(x_p\). The dependent and independent variables are also called the response and explanatory variables, respectively. The objective is to build a regression model that will enable us to adequately describe, predict, and control the dependent variable on the basis of the independent variables. The simple linear regression model has a single explanatory variable \(x\) whose relationship with the response variable \(y\) is a straight line. This simple linear regression model is \[y=\beta_{0}+\beta_{1}{x}+\varepsilon \label{1}\]where the intercept \(β_0\) and the slope \(β_1\) are unknown constants and \(\varepsilon\) is a random error component. The errors are assumed to have mean zero and unknown variance \(σ^2\). Additionally, we usually assume that the errors are uncorrelated, meaning that the value of one error does not depend on the value of any other error. It is convenient to view the explanatory variable \(x\) as controlled by the data analyst and measured with negligible error, while the response variable \(y\) is a random variable. That is, there is a probability distribution for \(y\) at each possible value of \(x\). The mean of this distribution is\[E(y|x)=\beta_{0}+\beta_{1}{x}\label{2}\] and the variance is\[Var(y|x)=Var(\beta_{0}+\beta_{1}{x}+\varepsilon)=\sigma^2\label{3}\]Thus, the mean of \(y\) is a linear function of \(x\), although the variance of \(y\) does not depend on the value of \(x\). Furthermore, because the errors are uncorrelated, the response variables are also uncorrelated. The parameters \(β_0\) and \(β_1\) are usually called regression coefficients. These coefficients have a simple and often useful interpretation. 
The slope \(β_1\) is the change in the mean of the distribution of \(y\) produced by a unit change in \(x\). If the range of the data on \(x\) includes \(x=0\), then the intercept \(β_0\) is the mean of the distribution of the response variable \(y\) when \(x=0\). If the range of \(x\) does not include zero, then \(β_0\) has no practical interpretation. The method of least squares is used to estimate \(β_0\) and \(β_1\). That is, \(β_0\) and \(β_1\) will be estimated so that the sum of the squares of the differences between the observations \(y_i\) and the straight line is a minimum. Equation \ref{1} can be written as\[y_{i}=\beta_{0}+\beta_{1}x_{i}+\varepsilon_{i}, \;\;\; i=1, 2,..., n\label{4}\]Equation \ref{1} may be viewed as a population regression model while Equation \ref{4} is a sample regression model, written in terms of the n pairs of data (\(y_i\), \(x_i\)) (i=1, 2, ..., n). Thus, the least-squares criterion is\[ S(\beta_0,\beta_1)=\sum_{i=1}^n(y_i-\beta_0-\beta_{1}x_{i})^2\label{5}\]The least-squares estimators of \(β_0\) and \(β_1\), say \(\hat{\beta}_0\) and \(\hat{\beta}_1\), must satisfy\[ \dfrac{\partial{S}}{\partial{\beta_0}}=-2\sum_{i=1}^n(y_i-\hat{\beta}_0-\hat{\beta}_1x_{i})=0\label{6}\]\[ \dfrac{\partial{S}}{\partial{\beta_1}}=-2\sum_{i=1}^n(y_i-\hat{\beta}_0-\hat{\beta}_1x_{i})x_i=0\label{7}\] Simplifying these two equations yields\[ n\hat{\beta}_0+\hat{\beta}_1\sum_{i=1}^nx_i=\sum_{i=1}^ny_i\label{8}\]\[ \hat{\beta}_0\sum_{i=1}^nx_i+\hat{\beta}_1\sum_{i=1}^nx_i^2=\sum_{i=1}^ny_ix_i\label{9}\]Equations \ref{8} and \ref{9} are called the least-squares normal equations, and the general solution for these simultaneous equations is\[\hat{\beta}_0=\dfrac{1}{n}\sum_{i=1}^ny_i-\dfrac{\hat{\beta}_1}{n}\sum_{i=1}^nx_i\label{10}\]\[ \hat{\beta}_1=\dfrac{\sum_{i=1}^ny_ix_i-\dfrac{1}{n}(\sum_{i=1}^ny_i)(\sum_{i=1}^nx_i)}{ \sum_{i=1}^nx_i^2-\dfrac{1}{n}(\sum_{i=1}^nx_i)^2} \label{11}\]In Equations \ref{10} and \ref{11}, \(\hat{\beta}_0\) and \(\hat{\beta}_1\) are the 
least-squares estimators of the intercept and slope, respectively. Thus the fitted simple linear regression model is\[ \hat{y}=\hat{\beta}_0+\hat{\beta}_1x\label{12}\]Equation \ref{12} gives a point estimate of the mean of \(y\) for a particular \(x\). Given the averages of \(y_i\) and \(x_i\) as\[\bar{y}=\dfrac{1}{n} \sum_{i=1}^ny_i\]and\[\bar{x}=\dfrac{1}{n} \sum_{i=1}^nx_i\]the denominator of Equation \ref{11} can be written as\[S_{xx}= \sum_{i=1}^nx_i^2-\dfrac{1}{n}(\sum_{i=1}^nx_i)^2=\sum_{i=1}^n(x_i-\bar{x})^2 \label{13}\]and the numerator can be written as\[S_{xy}= \sum_{i=1}^ny_ix_i-\dfrac{1}{n}(\sum_{i=1}^ny_i)(\sum_{i=1}^nx_i)=\sum_{i=1}^ny_i(x_i-\bar{x})\label{14}\]Therefore, Equation \ref{11} can be written in a convenient way as\[\hat{\beta}_1=\dfrac{S_{xy}}{S_{xx}}\label{15}\]The difference between the observed value \(y_i\) and the corresponding fitted value \(\hat{y}_i\) is a residual. Mathematically the ith residual is\[e_i=y_i-\hat{y}_i=y_i-(\hat{\beta}_0+\hat{\beta}_1x_i), \;\;\; i=1,2,..., n\label{16}\]Residuals play an important role in investigating model adequacy and in detecting departures from the underlying assumptions. Nonlinear regression is a powerful tool for analyzing scientific data, especially if the data must be transformed to fit a linear regression. The objective of nonlinear regression is to fit a model to the data you are analyzing. You will use a program to find the best-fit values of the parameters in the model, which you can then interpret scientifically. However, choosing a model is a scientific decision and should not be based solely on the shape of the graph: the equation that best fits the data will not necessarily correspond to a scientifically meaningful model. Before microcomputers were popular, nonlinear regression was not readily available to most scientists. Instead, they transformed their data to make a linear graph, and then analyzed the transformed data with linear regression. 
This sort of method, however, distorts the experimental error. Linear regression assumes that the scatter of points around the line follows a Gaussian distribution and that the standard deviation is the same at every value of \(x\). Also, some transformations may alter the relationship between the explanatory and response variables. Although it is usually not appropriate to analyze transformed data, it is often helpful to display data after a linear transform, since the human eye and brain evolved to detect edges, but not to detect rectangular hyperbolas or exponential decay curves. Given the validity, or approximate validity, of the assumption of independent and identically distributed normal errors, one can make certain general statements about the least-squares estimators not only in linear but also in nonlinear regression models. For a linear regression model, the estimates of the parameters are unbiased, are normally distributed, and have the minimum possible variance among a class of estimators known as regular estimators. Nonlinear regression models differ from linear regression models in that the least-squares estimators of their parameters are not unbiased, normally distributed, minimum variance estimators; the estimators achieve these properties only asymptotically, that is, as the sample size approaches infinity. Consider, for example, the model\[y=\log(x-\alpha)\label{17}\]The statistical properties in estimation of this model are good, so the model behaves in a reasonably close-to-linear manner in estimation. 
An even better-behaved model is obtained by replacing \(α\) with an expected-value parameter, to yield\[y=\log[x-x_1+\exp(y_1)]\label{18}\]where \(y_1\) is the expected value corresponding to \(x=x_1\), and where \(x_1\) should be chosen to be somewhere within the observed range of the \(x\) values in the data set. Another example is the model\[y=\dfrac{1}{1+\alpha x}\label{19}\]When \(α <0\), there is a vertical asymptote at \(x=-1/α\). Consider also the model\[y=\exp(x-\alpha)\label{20}\]This model is, in fact, a disguised intrinsically linear model, since it may be reparameterized to yield a linear model. That is, replacing \(α\) with an expected-value parameter \(y_1\), corresponding to \(x=x_1\), yields\[y=y_1\exp(x-x_1)\label{21}\]which is clearly linear in the parameter \(y_1\). Linear and Nonlinear Regression is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
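The formulas above translate directly into code. Below is a minimal Python sketch, with made-up illustrative data, of the least-squares estimators in Equations 10-16, followed by a numerical check that the expected-value parameterization of Equation 21 really is linear in \(y_1\):

```python
# Sketch (illustrative data): least-squares fit via Sxx (Eq. 13) and Sxy
# (Eq. 14), then a check of the reparameterized model of Equation 21.
import math

def fit_line(x, y):
    """Least-squares intercept and slope for the simple linear model."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)               # Eq. 13
    sxy = sum(yi * (xi - xbar) for xi, yi in zip(x, y))   # Eq. 14
    b1 = sxy / sxx                                        # Eq. 15
    b0 = ybar - b1 * xbar                                 # Eq. 10
    return b0, b1

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
b0, b1 = fit_line(x, y)
residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]  # Eq. 16

# Equation 21: y = y1 * exp(x - x1) is linear in y1, so the least-squares
# estimate of y1 has a closed form (noiseless synthetic data, true y1 = 2).
x1 = 3.0                                     # chosen inside the observed x range
g = [math.exp(xi - x1) for xi in x]          # regressor exp(x - x1)
yy = [2.0 * gi for gi in g]                  # generated with true y1 = 2.0
y1_hat = sum(yi * gi for yi, gi in zip(yy, g)) / sum(gi * gi for gi in g)
print(b0, b1, y1_hat)
```

The residuals from the linear fit sum to zero, as the normal equations require, and because the second data set is noiseless the closed-form estimate recovers the true \(y_1\) exactly.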
Mass Spectrometry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Mass_Spectrometry
Mass spectrometry is an analytical method that employs ionization and mass analysis of compounds in order to determine the mass, formula and structure of the compound being analyzed. A mass analyzer is the component of the mass spectrometer that takes ionized masses, separates them based on mass-to-charge ratios, and outputs them to the detector, where they are detected and later converted to a digital output.

Accelerator Mass Spectroscopy: Accelerator Mass Spectroscopy (AMS) is a highly sensitive technique that is useful in isotopic analysis of specific elements in small samples (1 mg or less of sample containing \(10^6\) atoms or less of the isotope of interest).

Fragmentation Patterns in Mass Spectra: This page looks at how fragmentation patterns are formed when organic molecules are fed into a mass spectrometer, and how you can get information from the mass spectrum.

How the Mass Spectrometer Works: This page describes how a mass spectrum is produced using a mass spectrometer.

Introductory Mass Spectrometry: Fragmentation; High Resolution vs Low Resolution; Introduction to Mass Spectrometry; Isotopes: 13C; Isotopes: Br and Cl; Molecular Ions; Molecular Ion and Nitrogen; The Mass Spectrometry Experiment.

MALDI-TOF: Proteins and peptides have been characterized by high pressure liquid chromatography (HPLC) or SDS-PAGE by generating peptide maps. These peptide maps have been used as fingerprints of a protein or as a tool to assess the purity of a known protein in a known sample. Mass spectrometry gives a peptide map when proteins are digested with amino-end-specific, carboxy-end-specific, or amino-acid-specific digestive enzymes.

Mass Spec: A mass spectrometer creates charged particles (ions) from molecules. It then analyzes those ions to provide information about the molecular weight of the compound and its chemical structure. There are many types of mass spectrometers and sample introduction techniques which allow a wide range of analyses.
This discussion will focus on mass spectrometry as it's used in the powerful and widely used method of coupling Gas Chromatography (GC) with Mass Spectrometry (MS): Interpreting a Mass Spectrum; Mass Spectrometry - Fragmentation Patterns; Mass Spectroscopy: Quizzes; Mass Spectra Interpretation: ALDEHYDES.

Mass Spectrometers (Instrumentation): Electrospray Ionization Mass Spectrometry; Injection Stage; Mass Analyzers (Mass Spectrometry).

Mass Spectrometry: Isotope Effects: The ability of a mass spectrometer to distinguish different isotopes is one of the reasons why mass spectrometry is a powerful technique. The presence of isotopes gives each fragment a characteristic series of peaks with different intensities. These intensities can be predicted based on the abundance of each isotope in nature, and the relative peak heights can also be used to assist in the deduction of the empirical formula of the molecule being analyzed.

Organic Compounds Containing Halogen Atoms: This page explains how the M+2 peak in a mass spectrum arises from the presence of chlorine or bromine atoms in an organic compound. It also deals briefly with the origin of the M+4 peak in compounds containing two chlorine atoms.

The Mass Spectra of Elements: This page looks at the information you can get from the mass spectrum of an element. It shows how you can find out the masses and relative abundances of the various isotopes of the element and use that information to calculate the relative atomic mass of the element. It also looks at the problems thrown up by elements with diatomic molecules, like chlorine.

The Molecular Ion (M⁺) Peak: This page explains how to find the relative formula mass (relative molecular mass) of an organic compound from its mass spectrum. It also shows how high resolution mass spectra can be used to find the molecular formula for a compound.

The M+1 Peak: This page explains how the M+1 peak in a mass spectrum can be used to estimate the number of carbon atoms in an organic compound.
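The M+1 estimate mentioned above amounts to one line of arithmetic: each carbon atom contributes roughly 1.1% of the M⁺ peak height to the M+1 peak, since that is the natural abundance of ¹³C. A minimal sketch with hypothetical peak heights (the function name is illustrative):

```python
def carbons_from_m1(m_height, m1_height, c13_abundance=1.1):
    """Estimate the number of carbon atoms from the M+ and M+1 peak
    heights: each carbon contributes ~1.1% of M+ to the M+1 peak."""
    return round(100.0 * (m1_height / m_height) / c13_abundance)

# Hypothetical spectrum: M+ peak height 100, M+1 peak height 6.6
# (roughly what a six-carbon compound such as benzene would show).
print(carbons_from_m1(100.0, 6.6))  # → 6
```

The estimate is only approximate for large molecules or compounds rich in other elements with significant heavy-isotope abundances (e.g. nitrogen or silicon).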
Thumbnail: SIMS mass spectrometer, model IMS 3f. (GNU Free Documentation Licenses; CAMECA Archives).Mass Spectrometry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
680
Membrane Potentials
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Exemplars/Membrane_Potentials
Membrane potential is what we use to describe the difference in voltage (or electrical potential) between the inside and outside of a cell. Without membrane potentials human life would not be possible. All living cells maintain a potential difference across their membrane. Simply stated, membrane potential is due to disparities in concentration and permeability of important ions across a membrane. Because of the unequal concentrations of ions across a membrane, the membrane has an electrical charge. Changes in membrane potential elicit action potentials and give cells the ability to send messages around the body. More specifically, the action potentials are electrical signals; these signals carry afferent messages toward the central nervous system for processing and efferent messages away from the brain to elicit a specific reaction or movement. Numerous active transporters embedded within the cellular membrane contribute to the creation of membrane potentials, as does the universal cellular structure of the lipid bilayer. The chemistry involved in membrane potentials reaches into many scientific disciplines. Chemically it involves molarity, concentration, electrochemistry and the Nernst equation. From a physiological standpoint, membrane potential is responsible for sending messages to and from the central nervous system. It is also very important in cellular biology and shows how cell biology is fundamentally connected with electrochemistry and physiology. The bottom line is that membrane potentials are at work in your body right now and always will be as long as you live. The subject of membrane potential stretches across multiple scientific disciplines; membrane potential plays a role in the studies of chemistry, physiology and biology. The culmination of the study of membrane potential came in the 19th and early 20th centuries.
Early in the 20th century, Professor Julius Bernstein hypothesized the contributing factors to membrane potential: the selective permeability of the membrane and the fact that [K+] was higher inside and lower on the outside of the cell. He was very close to being correct, but his proposal had some flaws. Walther H. Nernst, notable for the development of the Nernst equation and winner of the 1920 Nobel Prize in Chemistry, was a major contributor to the study of membrane potential. He developed the Nernst equation to solve for the equilibrium potential of a specific ion. Goldman, Hodgkin and Katz furthered the study of membrane potential by developing the Goldman-Hodgkin-Katz equation to account for any ion that might permeate the membrane and affect its potential. The study of membrane potential utilizes electrochemistry and physiology to formulate a conclusive idea of how charges are separated across a membrane.

Differences in concentration of ions on opposite sides of a cellular membrane produce a voltage difference called the membrane potential. The largest contributions usually come from sodium (Na+) and chloride (Cl−) ions, which have high concentrations in the extracellular region, and potassium (K+) ions, which along with large protein anions have high concentrations in the intracellular region. Calcium ions, which sometimes play an important role, are omitted from this summary.

In discussing the concept of membrane potentials and how they function, the creation of a membrane potential is essential. The lipid bilayer structure of the cellular membrane, with its phosphate head and fatty acid tail, provides a perfect building material that creates both a hydrophobic and hydrophilic side to the cellular membrane. The membrane is often described by the fluid mosaic model; it is semi-permeable and keeps certain substances from entering the cell.
Molecules such as water can diffuse through the cell membrane based on concentration gradients; however, larger molecules such as glucose or nucleotides require channels. The lipid bilayer also houses the Na+/K+ pump, ATPase pump, ion transporters, and voltage-gated channels, and it is the site of vesicular transport. The structure regulates which ions enter and exit to determine the concentration of specific ions inside of the cell.

Why is membrane potential essential to the survival of all living creatures? Animals and plants require the breakdown of organic substances through cellular respiration to generate energy. This process, which produces ATP, is dependent on the electron transport chain. Electrons travel down this path to be accepted by oxygen or other electron acceptors. The electrons are delivered to the chain by carriers (NADH and FADH₂) produced during the breakdown of glucose. As electrons move down the chain, protons (H+) are pumped across the membrane and build up on one side, leaving a gradient. When there is a gradient, ions tend to flow back down it; in this case, hydrogen ions flow back through a protein known as ATP synthase, which creates ATP in the process. This action is essential to life because the number of ATP created from each glucose increases drastically. Chemical disequilibrium and membrane potentials allow bodily functions to take place.

(Table: concentration and permeability of ions responsible for membrane potential in a resting nerve cell.)

Check out this YouTube video if you want to know more about how the Na+/K+ pump and the membrane potential work: www.youtube.com/watch?v=iA-Gdkje6pg

The equilibrium potential of an ion across a membrane, the Nernst potential, is relatively easy to calculate:

\[E_X = \dfrac{RT}{zF} \ln\left(\dfrac{[X]_{out}}{[X]_{in}}\right)\]
At body temperature, \(2.303RT/F\) is approximately 61 mV, so the equation can be written as

\[E_X = \dfrac{61\ \text{mV}}{z} \log_{10}\left(\dfrac{[X]_{out}}{[X]_{in}}\right)\]

The Goldman-Hodgkin-Katz equation differs in that it adds together the concentrations of all permeable ions, each weighted by its membrane permeability \(P\); because chloride carries a negative charge, its inside and outside concentrations are swapped:

\[E_m = \dfrac{RT}{F} \ln\left(\dfrac{P_K[K^+]_{o}+P_{Na}[Na^+]_{o}+P_{Cl}[Cl^-]_{i}}{P_K[K^+]_{i}+P_{Na}[Na^+]_{i}+P_{Cl}[Cl^-]_{o}}\right)\]

(Clockwise from upper left) 1) The charges are equal on both sides; therefore the membrane has no potential. 2) There is an imbalance of charges, giving the membrane a potential. 3) The charges line up on opposite sides of the membrane to give the membrane its potential. 4) A hypothetical neuron in the human body; a large concentration of potassium on the inside and sodium on the outside.

1. List the following in order from highest to lowest permeability: A-, K+, Na+
2. Which of the following statements is NOT true?
3. What would the equilibrium potential for the ion K+ be if [K+]out = 5 mM and [K+]in = 150 mM?
4. True or false: At resting membrane potential, the inside of the membrane is slightly negatively charged while the outside is slightly positively charged.

1. K+ > Na+ > A-
2. Answer c) is not true; membrane potential exists in neurons and is responsible for action potential propagation in neurons.
3. \(E_{K^+} = \dfrac{61}{z} \log_{10}\dfrac{[K^+]_{out}}{[K^+]_{in}} = \dfrac{61}{1} \log_{10}\dfrac{5\ \text{mM}}{150\ \text{mM}} \approx -90\ \text{mV}\), with z = 1.
4. True. The resting membrane potential is negative as a result of this disparity in concentration of charges.

Membrane Potentials is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
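The answer to quiz question 3 above can be verified numerically with the 61 mV form of the Nernst equation. A minimal sketch (the function name is illustrative):

```python
import math

def nernst_mV(conc_out_mM, conc_in_mM, z=1):
    """Equilibrium (Nernst) potential in millivolts at body temperature,
    using E = (61/z) * log10([X]out / [X]in), where 61 mV approximates
    2.303*R*T/F at about 37 °C."""
    return (61.0 / z) * math.log10(conc_out_mM / conc_in_mM)

# Quiz question 3: [K+]out = 5 mM, [K+]in = 150 mM, z = +1
e_k = nernst_mV(5.0, 150.0)
print(round(e_k))  # → -90 (mV), matching the answer given above
```

The negative sign reflects that potassium, more concentrated inside the cell, pulls the membrane potential negative as it tends to leak outward.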
681
Method of Linear Regression
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Data_Analysis/Method_of_Linear_Regression
In the case that one believes that a series of two variables correlate linearly with each other, the method of least squares may be used to find the "best" straight line through the points. The method which follows assumes that one "knows" the variable on the x-axis more accurately than the variable on the y-axis. The y-axis variable is often referred to as the dependent variable and the x-axis variable the independent variable. Where a mathematical function y=f(x) is being considered, one might say that the value of x determines the value of y. Where y represents values measured with an instrument and there is only presumed to be a relationship between x and y, one hopes that such a relationship will be exhibited, but it may be muddled by the biases and random errors typical of the instrument which measured the y values and of whatever instrument established the x values.

In colorimetry, for example, the x-axis variable is the concentration of a known solution and the y-axis variable is the measured absorbance of that solution. Once the relationship is established, the absorbance of an unknown solution is measured, and the line representing the relationship between the two variables can then be used to determine the concentration of the unknown. A melting point curve would show concentration or mole fraction or w/w % on the x-axis and melting point on the y-axis. There are many cases, though, in which a distinguishing feature such as knowing the x-axis variable more accurately is not clear or is not followed. A pressure/volume diagram is one in which both variables might be known with equal precision.
When calibrating a buret, the volume customarily would be shown across the x-axis and the "corrected volume" obtained from a mass measured on an analytical balance would appear along the y-axis, even though the mass can be determined to 4-5 significant figures and the volume only to 3-4. In any case, one says for the method described below that it is the y-axis variable which has a measurable error, and the "residuals", or differences in a vertical direction between each measured y value and the best straight line through all the points, are taken into account for this method. The method is to find m (the slope) and b (the y-intercept) for a relationship given by

\[ y = mx + b\]

Five intermediate quantities are defined for the convenience of calculating various values associated with a least squares linear regression in two variables. Seven useful results can be calculated from these five intermediate quantities, but for the purpose of this discussion only three will be shown: the method of finding the slope m, the method of finding the y-intercept b, and the method of finding the standard deviation about the regression line. N in each equation below represents the number of (xi, yi) pairs, or the number of measurements.

\[S_{xx} = \sum_i x_i^2 - \dfrac{\left(\sum x_i\right)^2}{N}\]

\[S_{yy} = \sum_i y_i^2 - \dfrac{\left(\sum y_i\right)^2}{N}\]

\[S_{xy} = \sum_i x_iy_i - \dfrac{\left(\sum x_i\right)\left(\sum y_i\right)}{N}\]

\[ \bar{x} = \dfrac{\sum x_i}{N}\]

\[ \bar{y} = \dfrac{\sum y_i}{N}\]

The slope m may be calculated using the formula

\[ m =\dfrac{S_{xy}}{S_{xx}}\]

The y-intercept b may be calculated using the formula

\[b = \bar{y} - m \bar{x}\]

The standard deviation \(s_r\) about the regression line may be calculated using the formula

\[ s_r = \sqrt{\dfrac{S_{yy} -m^2 S_{xx}}{N-2}}\]

A number of calculators have built-in software to obtain these results. The process is often referred to as "linear regression" in calculator manuals.
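The formulas above translate directly into code. A minimal sketch, using hypothetical data points that lie exactly on y = 2x + 1, so the slope should come out as 2, the intercept as 1, and \(s_r\) as essentially 0:

```python
import math

def linreg(xs, ys):
    """Slope m, intercept b, and standard deviation s_r about the
    regression line, computed from the intermediate quantities
    Sxx, Syy, and Sxy defined in the text."""
    n = len(xs)
    sxx = sum(x * x for x in xs) - sum(xs) ** 2 / n
    syy = sum(y * y for y in ys) - sum(ys) ** 2 / n
    sxy = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n
    m = sxy / sxx
    b = sum(ys) / n - m * (sum(xs) / n)   # b = ybar - m * xbar
    sr = math.sqrt(max(syy - m * m * sxx, 0.0) / (n - 2))
    return m, b, sr

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2 * x + 1 for x in xs]   # points exactly on the line y = 2x + 1
m, b, sr = linreg(xs, ys)
print(m, b, sr)  # → 2.0 1.0 0.0
```

With real calibration data, \(s_r\) would be nonzero and would characterize the vertical scatter of the points about the fitted line.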
Spreadsheet programs also offer this feature.

This page titled Method of Linear Regression is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Oliver Seely.
682
Metric/Imperial Conversion Errors
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Units_of_Measure/Metric%2F%2FImperial_Conversion_Errors
To learn more about systems of measurement, visit the SI Unit page and the Non-SI Unit page. As the four examples below attest, small errors in these unit systems can harbor massive ramifications.

Although NASA declared the metric system as its official unit system in the 1980s, conversion factors remain an issue. The Mars Climate Orbiter, meant to help relay information back to Earth, is one notable example of the unit-system struggle. The orbiter was part of the Mars Surveyor '98 program, which aimed to better understand the climate of Mars. The spacecraft was launched in September 1998; it should have entered orbit at an altitude of 140-150 km above Mars, but instead went as close as 57 km. This navigation error occurred because the software that controlled the rotation of the craft's thrusters was not calibrated in SI units. The spacecraft expected newtons, while the computer software, which was inadequately tested, worked in pound-force; one pound-force is equal to about 4.45 newtons. Unfortunately, friction and other atmospheric forces destroyed the Mars Climate Orbiter. The project cost $327.6 million in total. Tom Gavin, an administrator for NASA's Jet Propulsion Laboratory in Pasadena, stated, "This is an end-to-end process problem. A single error like this should not have caused the loss of Climate Orbiter. Something went wrong in our system processes in checks and balances that we have that should have caught this and fixed it."

The Mars Climate Orbiter, image courtesy NASA/JPL-Caltech

Another NASA-related conversion concern involves the Constellation project, which is focused mainly on manned spaceflight. Established in 2005, it includes plans for another moon landing. The Constellation project is partially based upon decades-old projects such as the Ares rocket and the Orion crew capsule.
These figures and plans are entirely in British Imperial units; converting this work into metric units would cost approximately $370 million.

Work on the Constellation Project, image courtesy NASA/Kim Shiflett

Tokyo Disneyland's Space Mountain roller coaster came to a sudden halt just before the end of a ride on December 5, 2003. This startling incident was due to a broken axle. The axle in question fractured because it was smaller than the design's requirement; because of the incorrect size, the gap between the bearing and the axle was over 1 mm, when it should have been a mere 0.2 mm (the thickness of a dime vs. the thickness of two sheets of common printer paper). The accumulation of excess vibration and stress eventually caused it to break. Though the coaster derailed, there were no injuries. Once again, unit systems caused the accident. In September 1995, the specifications for the coaster's axles and bearings were changed to metric units. In August 2002, however, the pre-1995 British Imperial plans were used to order 44.14 mm axles instead of the needed 45 mm axles.

A Boeing 767 airplane flying for Air Canada on July 23, 1983 ran low on fuel only an hour into its flight. It was headed to Edmonton from Montreal, but it received low fuel pressure warnings in both fuel pumps at an altitude of 41,000 feet; engine failures followed soon after. Fortunately, the captain was an experienced glider pilot and the first officer knew of an unused air force base about 20 kilometers away. Together, they landed the plane on the runway, and only a few passengers sustained minor injuries. This incident was due partially to the airplane's fuel indication system, which had been malfunctioning. Maintenance workers resorted to manual calculations in order to fuel the craft. They knew that 22,300 kg of fuel was needed, and they wanted to know how much in liters should be pumped. They used 1.77 as their density ratio in performing their calculations.
However, 1.77 was given in pounds per liter, not kilograms per liter. The correct number should have been 0.80 kilograms/liter; thus, their final figure accounted for less than half of the necessary fuel.

The Air Canada craft, image courtesy Akradecki

Example \(\PageIndex{1}\)

If Jimmy walks 5 miles, how many kilometers did he travel?

\[5 \;\cancel{miles} \times \left (\dfrac{1.6\; kilometers }{1\; \cancel{mile}}\right) = 8\; kilometers \nonumber\]

Example \(\PageIndex{2}\)

A solid rocket booster is ordered with the specification that it is to produce a total of 10 million pounds of thrust. If this number is mistaken for the thrust in newtons, by how much, in pounds, will the thrust be in error? (1 pound-force ≈ 4.448 newtons)

10,000,000 newtons × (1 pound / 4.448 newtons) ≈ 2,200,000 pounds.

10,000,000 pounds − 2,200,000 pounds = 7,800,000 pounds.

The error is a missing 7,800,000 pounds of thrust.

Example \(\PageIndex{3}\)

The outer bay tank at the Monterey Bay Aquarium holds 1.3 million gallons. If NASA takes out all the fish in this tank and sends them to swim around in space, what is the theoretical volume of all the fish in liters? Assume there are 3,027,400 liters of water left in the tank after the fish are removed.

\[3,027,400\; \cancel{liters} \times \left(\dfrac{0.264 \;gallons}{1\; \cancel{liter}}\right) = 800,000\;\text{gallons remaining in tank} \nonumber\]

The volume of the space fish is 1,300,000 − 800,000 = 500,000 gallons, which converts to 1,892,100 liters worth of fish swimming around the solar system.

Example \(\PageIndex{4}\)

A bolt is ordered with a thread diameter of 1.25 inches. What is this diameter in millimeters?
If the order was mistaken for 1.25 centimeters, by how many millimeters would the bolt be in error?

\[1.25\; \cancel{\rm{ inches}} \times \dfrac{25.4\; \rm{millimeters}}{1\; \cancel{ \rm{inch}}} = 31.75 \; \rm{millimeters} \nonumber\]

Since 1.25 centimeters × (10 millimeters / 1 centimeter) = 12.5 millimeters, the bolt delivered would be 31.75 − 12.5 = 19.25 millimeters too small.

Example \(\PageIndex{5}\)

The Mars Climate Orbiter was meant to stop about 160 km away from the surface of Mars, but it ended up within 36 miles of the surface. How far off was it from its target distance (in km)? If the Orbiter is able to function as long as it stays at least 85 km away from the surface, will it still be functional despite the mistake?

\[36 \; \cancel{\text{miles}} \times \dfrac {1.6 \; \text{kilometers} }{1\; \cancel{\text{mile}}} = 57.6 \;\text{kilometers from the surface} \nonumber\]

The difference is 160 − 57.6 = 102.4 kilometers away from the targeted distance. Hence, the Orbiter is unable to function, since 57.6 km is closer to the surface than the 85 km minimum required for operation.

Metric/Imperial Conversion Errors is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
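The conversions behind these incidents and examples are one-line calculations; the expense comes from getting them wrong. A minimal sketch using the conversion factors quoted above (the function names are illustrative):

```python
LBF_TO_N = 4.448     # 1 pound-force ≈ 4.448 newtons
MILE_TO_KM = 1.6     # 1 mile ≈ 1.6 kilometers, as used in the examples
IN_TO_MM = 25.4      # 1 inch = 25.4 millimeters (exact by definition)

def lbf_to_newtons(lbf):
    return lbf * LBF_TO_N

def miles_to_km(miles):
    return miles * MILE_TO_KM

def inches_to_mm(inches):
    return inches * IN_TO_MM

# The bolt from Example 4: 1.25 inches vs. a mistaken 1.25 centimeters
print(round(inches_to_mm(1.25), 2))   # → 31.75 (mm); 31.75 - 12.5 = 19.25 mm undersized

# Example 5: 36 miles from the surface, expressed in kilometers
print(round(miles_to_km(36), 1))      # → 57.6 (km)

# Gimli Glider arithmetic: 22,300 kg of fuel at the correct 0.80 kg/L
# vs. the erroneous 1.77 (which was actually pounds per liter)
print(round(22300 / 0.80))            # → 27875 liters actually needed
print(round(22300 / 1.77))            # → 12599 liters loaded, less than half
```

Keeping factors in named constants, with the units in a comment, is a cheap guard against exactly the class of mistake these incidents illustrate.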
683
Metric Prefixes - from yotta to yocto
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Units_of_Measure/Metric_Prefixes_-_from_yotta_to_yocto
In introductory chemistry we use only a few of the most common metric prefixes, such as milli, centi, and kilo. Our various textbooks and lab manuals contain longer lists of prefixes, but few if any contain a complete list. There is no point in memorizing this, but it is nice to have a place to look them up. You will find prefixes from throughout the range as you read the scientific literature. In particular, the smaller prefixes such as nano, pico, femto, etc., are becoming increasingly common as analytical chemistry and biotechnology develop more sensitive methods. To help you visualize the effect of these prefixes, a "sense of scale" column gives approximate examples of the magnitudes represented.

You have probably heard words such as kilobyte in the context of computers. What does it mean? It might seem to mean 1000 bytes, since kilo means 1000. But in the computer world it often means 1024 bytes. That is \(2^{10}\), a power of two very close to 1000. Now, in common usage it often does not matter whether the intent was 1000 bytes or 1024 bytes. But they are different numbers and sometimes it does matter. So, a new set of "binary prefixes", distinguished by "bi" in the name or "i" in the abbreviation, was introduced in 1998. By this new system, 1024 bytes would be properly called a kibibyte or KiB. (Sounds like something you would feed the dog.)

This new system of binary prefixes has been endorsed by the International Electrotechnical Commission (IEC) for use in electrical technology. See the NIST page at http://physics.nist.gov/cuu/Units/binary.html. Whether these will catch on remains to be seen, but at least if you see such an unusual prefix you might want to be aware of this.

Robert Bruner

Metric Prefixes - from yotta to yocto is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
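The decimal-versus-binary distinction described above is easy to quantify. A minimal sketch:

```python
# Decimal (SI) prefixes vs. the 1998 IEC binary prefixes
kilobyte = 10 ** 3   # kB: 1000 bytes (SI kilo)
kibibyte = 2 ** 10   # KiB: 1024 bytes (IEC kibi)

print(kibibyte - kilobyte)            # → 24 bytes: a small difference at this scale

# The gap grows with each step up the scale:
gigabyte = 10 ** 9   # GB (SI giga)
gibibyte = 2 ** 30   # GiB (IEC gibi)
print(round(gibibyte / gigabyte, 3))  # → 1.074, i.e. about a 7% difference
```

This is why a "1 TB" disk drive (decimal) reports noticeably less than 1 TiB (binary) in many operating systems.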
684
Microscopy - Overview
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Microscopy/Miscellaneous_Microscopy/Microscopy_-_Overview
The word microscopy comes from the Greek words for small and to view. On April 13, 1625, Giovanni Faber coined the term microscope. A microscope is an instrument that enables us to view small objects that are otherwise invisible to our naked eye. One way that microscopes allow us to see smaller objects is through the process of magnification, i.e. enlarging the image of the object. When a microscope enlarges an image of a 1 mm object to 10 mm, this is a 10× magnification.

Lens: The lens is the part of a microscope that bends a beam of light and focuses this on the object or sample.

The resolution of a microscope is the smallest distance between two objects that results in two images that are distinguishable from each other. For example, the resolution of our eyes ranges from 0.1 to 0.2 mm. This means that our eyes can distinguish between two objects that are separated by 0.1 to 0.2 mm.

Early Light Microscopes

Light has both a particle and a wave property. A beam of light can be polarized by lining up its vibrations with each other. Thus, the polarizing microscope polarizes light in order to magnify images. This microscope also determines properties of materials that transmit light, whether they are crystalline or noncrystalline.

The optical features of transparent material were recognized when William Henry Fox Talbot added two Nicol prisms (prisms that can polarize light) to a microscope. However, it was Henry Clifton Sorby who used polarized light microscopy to study thinned sections of transparent rocks. He showed that through their optical properties, these thinned sections of minerals could be analyzed.

The polarizing microscope can be divided into three major component sets: The quality of magnification depends on the objective lens, and the smaller the diameter of the outermost lens, the higher the magnification.

In 1740, Dr. Johann N.
Lieberkühn introduced an instrument for illuminating opaque materials that had a cup-shaped mirror encircling the objective lens of a microscope. This mirror is called a reflector. A reflector has a concave reflecting surface and a lens in its center. It illuminates the specimen evenly by reflecting the light rays down onto it, so that opaque samples can be viewed from above.

Henry Clifton Sorby used a small reflector and attached this over the objective lens of his microscope. When he used this to study steel, he was able to see residues and distinguish these from the hard components of the steel. From then on, several scientists who study minerals also used reflected light microscopes, and this technology improved over time.

Professor Michael Isaacson of Cornell University invented this type of microscope, which uses light but no lenses. In order to focus the light on a sample, Isaacson passed light through a very tiny hole. The hole and the sample are so close together that the light beam does not spread out. This type of microscope enabled Isaacson's team to resolve up to 40 nm when they used yellow-green light. In this type of microscope, the resolution is not really limited by the wavelength of light but by the size of the aperture, since it is very small.

Because they only have resolutions in the micrometer range using visible light, light microscopes cannot be used to see in the nanometer range. In order to see in the nanometer range, we would need something that has higher energy than visible light. A physicist named de Broglie came up with an equation that shows that the shorter the wavelength of a wave, the higher its energy. From the wave-particle duality, we know that matter, like light, can have both wave and particle properties. This means that we can also use matter, like electrons, instead of light.
Electrons have shorter wavelengths than light and thus have higher energy and better resolution. Electron microscopes use electrons to focus on a sample. In 1926-1927, Busch demonstrated that an appropriately shaped magnetic field could be used as a lens. This discovery made it possible to use magnetic fields to focus the electron beam for electron microscopes.

After Busch's discovery and development of electron microscopes, companies in different parts of the world developed and produced a prototype of an electron microscope called the Transmission Electron Microscope (TEM). In TEM, the beam of electrons goes through the sample and their interactions are seen on the side of the sample where the beam exits. Then, the image is gathered on a screen. TEMs consist of three major parts: TEM has a typical resolution of approximately 2 nm. However, the sample has to be thin enough to transmit electrons, so it cannot be used to look at living cells.

In 1942, Zworykin, Hillier, and R.L. Snyder developed another type of electron microscope called the Scanning Electron Microscope (SEM). SEM is another example of an electron microscope and is arguably the most widely used electron beam instrument. In SEM, the electron beam excites the sample and its radiation is detected and photographed. SEM is a mapping device: a beam of electrons scanning across the surface of the sample creates the overall image. SEM also consists of major parts: SEM's resolution is about 20 nm and its magnification is about 200,000×. SEM cannot be used to study living cells either, since the sample for this process must be very dry.

Scanning probe microscopes are also capable of magnifying or creating images of samples in the nanometer range. Some of them can even give details up to the atomic level. Brian Josephson shared the 1973 Nobel Prize in Physics for his explanation of tunneling. This phenomenon eventually led to the development of Scanning Tunneling Microscopes by Heinrich Rohrer and Gerd Binnig around 1979.
Rohrer and Binnig received the Nobel Prize in Physics in 1986. The STM uses an electron-conducting needle, composed of either platinum-rhodium or tungsten, as a probe to scan across the surface of a solid that conducts electricity as well. The tip of the needle is usually very fine; it may even be a single atom that is 0.2 nm wide. Electrons tunnel across the space between the tip of the needle and the specimen surface when the tip and the surface are very close to each other. The tunneling current is very sensitive to the distance of the tip from the surface. As a result, the needle moves up and down depending on the surface of the solid; a piezoelectric cylinder monitors this movement. The three-dimensional image of the surface is then projected on a computer screen.

The STM has a resolution of about 0.1 nm. However, the fact that the needle tip and the sample must be electrical conductors limits the range of materials that can be studied using this technology.

In 1986, Binnig, Gerber, and Calvin Quate invented the first derivative of the STM: the Atomic Force Microscope (AFM). The AFM is another type of scanning microscope that scans the surface of the sample. It is different from the STM because it does not measure the current between the tip of the needle and the sample. The AFM has a stylus with a sharp tip that is attached to the end of a long cantilever. As the stylus scans the sample, the force of the surface pushes or pulls it. The cantilever deflects as a result, and a laser beam is used to measure this deflection. This deflection is then turned into a three-dimensional topographic image by a computer.

With AFM, a much higher resolution is attained with less sample damage. The AFM can be used on non-conducting samples as well as on liquid samples because there is no current applied to the sample.
Thus the AFM can be used to study biological molecules such as cells and proteins.

Microscopy - Overview is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Kristeen Pareja.
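The opening claim of this page, that electrons have much shorter wavelengths than light and hence better resolution, can be checked with the de Broglie relation \(\lambda = h/p\). The sketch below is illustrative (not part of the original page) and uses the nonrelativistic momentum \(p = \sqrt{2m_e eV}\), which slightly overestimates the wavelength at high accelerating voltages:

```python
import math

# CODATA physical constants (SI units)
H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
Q_E = 1.602176634e-19   # elementary charge, C

def electron_wavelength_nm(volts):
    """Nonrelativistic de Broglie wavelength, in nm, of an electron
    accelerated through `volts` volts: lambda = h / sqrt(2 m e V)."""
    momentum = math.sqrt(2 * M_E * Q_E * volts)
    return (H / momentum) * 1e9  # convert m -> nm

# Even at only 100 V the electron wavelength is ~0.12 nm, already far
# shorter than visible light (~400-700 nm), which is why electron
# microscopes reach nanometer-scale resolution.
print(round(electron_wavelength_nm(100), 4))  # 0.1226
```

Higher accelerating voltages shrink the wavelength further (roughly as \(1/\sqrt{V}\)), which is why TEMs operating at tens of kilovolts resolve finer detail than SEMs.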
685
Miscellaneous Microscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Microscopy/Miscellaneous_Microscopy
Microscopy - Overview

The word microscopy comes from the Greek words for "small" and "to view." On April 13, 1625, Giovanni Faber coined the term microscope. A microscope is an instrument that enables us to view small objects that are otherwise invisible to our naked eye. One way that microscopes allow us to see smaller objects is through the process of magnification, i.e. enlarging the image of the object. When a microscope enlarges an image of a 1 mm object to 10 mm, this is a 10× magnification.

Miscellaneous Microscopy is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
686
Nernst Equation
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Nernst_Equation
The Nernst Equation enables the determination of cell potential under non-standard conditions. It relates the measured cell potential to the reaction quotient and allows the accurate determination of equilibrium constants (including solubility constants).

The Nernst Equation is derived from the Gibbs free energy under standard conditions.

\[E^o = E^o_{reduction} - E^o_{oxidation} \label{1}\]

\(\Delta{G}\) is also related to \(E\) under general conditions (standard or not) via

\[\Delta{G} = -nFE \label{2}\]

with

Under standard conditions, Equation \ref{2} is then

\[\Delta{G}^{o} = -nFE^{o}. \label{3}\]

Hence, when \(E^o\) is positive, the reaction is spontaneous, and when \(E^o\) is negative, the reaction is non-spontaneous. From thermodynamics, the Gibbs energy change under non-standard conditions can be related to the Gibbs energy change under standard conditions via

\[\Delta{G} = \Delta{G}^o + RT \ln Q \label{4}\]

Substituting \(\Delta{G} = -nFE\) and \(\Delta{G}^{o} = -nFE^{o}\) into Equation \ref{4}, we have:

\[-nFE = -nFE^o + RT \ln Q \label{5}\]

Dividing both sides of the equation above by \(-nF\) gives

\[E = E^o - \dfrac{RT}{nF} \ln Q \label{6}\]

Equation \ref{6} can be rewritten in the form of \(\log_{10}\):

\[E = E^o - \dfrac{2.303 RT}{nF} \log_{10} Q \label{Generalized Nernst Equation}\]

At standard temperature T = 298 K, the \(\frac{2.303 RT}{F}\) term equals 0.0592 V, and Equation \ref{Generalized Nernst Equation} can be rewritten:

\[E = E^o - \dfrac{0.0592\, V}{n} \log_{10} Q \label{Nernst Equation 298 K}\]

The equation above indicates that the electrical potential of a cell depends upon the reaction quotient \(Q\) of the reaction. As the redox reaction proceeds, reactants are consumed and their concentrations decrease, while the product concentrations increase as more product forms. As this happens, the cell potential gradually decreases until the reaction reaches equilibrium, at which \(\Delta{G} = 0\).
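Equation 6 above can be evaluated numerically. The helper below is an illustrative sketch (the function name `nernst` is ours, not a standard API):

```python
import math

def nernst(e_standard, n, q, temp=298.15):
    """Cell potential E = E° - (RT/nF) ln Q (Equation 6)."""
    R = 8.314462618  # gas constant, J/(mol*K)
    F = 96485.332    # Faraday constant, C/mol
    return e_standard - (R * temp / (n * F)) * math.log(q)

# When Q = 1 the log term vanishes, so E = E° exactly.
print(nernst(1.10, 2, 1.0))  # 1.1
# As Q grows (products accumulate), E falls toward zero, i.e. toward
# equilibrium, exactly as the text describes.
print(round(nernst(1.10, 2, 1e37), 2))
```

Note that lowering \(Q\) below 1 (excess reactants) pushes \(E\) above \(E^o\), the mirror image of the decay toward equilibrium.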
At equilibrium, the reaction quotient \(Q = K_{eq}\). Also, at equilibrium, \(\Delta{G} = 0\) and \(\Delta{G} = -nFE\), so \(E = 0\). Therefore, substituting \(Q = K_{eq}\) and \(E = 0\) into the Nernst Equation, we have:

\[0 = E^o - \dfrac{RT}{nF} \ln K_{eq} \label{7}\]

At room temperature, Equation \ref{7} simplifies into (notice the natural log was converted to log base 10):

\[0 = E^o - \dfrac{0.0592\, V}{n} \log_{10} K_{eq} \label{8}\]

This can be rearranged into:

\[\log K_{eq} = \dfrac{nE^o}{0.0592\, V} \label{9}\]

The equation above indicates that the logarithm of the equilibrium constant \(K_{eq}\) is proportional to the standard potential of the reaction. Specifically, when:

This result fits Le Châtelier's Principle, which states that when a system at equilibrium experiences a change, the system will minimize that change by shifting the equilibrium in the opposite direction.

Example \(\PageIndex{1}\)

The \(E^{o}_{cell} = +1.10 \; V\) for the Zn-Cu redox reaction:

\[Zn_{(s)} + Cu^{2+}_{(aq)} \rightleftharpoons Zn^{2+}_{(aq)} + Cu_{(s)}.\]

What is the equilibrium constant for this reversible reaction?

Under standard conditions, \([Cu^{2+}] = [Zn^{2+}] = 1.0\, M\) and T = 298 K. As the reaction proceeds, \([Cu^{2+}]\) decreases as \([Zn^{2+}]\) increases. Let's say that at some point, \([Cu^{2+}] = 0.05\, M\) while \([Zn^{2+}] = 1.95\, M\). According to the Nernst Equation, the cell potential is now:

\[E = E^o - \dfrac{0.0592 V}{n} \log Q\]

\[E = 1.10V - \dfrac{0.0592 V}{2} \log\dfrac{1.95 \; M}{0.05 \; M}\]

\[E = 1.05 \; V\]

As you can see, the initial cell potential is \(E = 1.10\, V\); after 95% of the \(Cu^{2+}\) has been consumed, the potential has dropped only to 1.05 V. As the reaction continues to progress, more \(Cu^{2+}\) will be consumed and more \(Zn^{2+}\) will be generated (at a 1:1 ratio).
As a result, the cell potential continues to decrease, and when the cell potential drops to 0, the concentrations of reactants and products stop changing. This is when the reaction is at equilibrium. From Equation \ref{9}, the \(K_{eq}\) can be calculated from

\[\begin{align} \log K_{eq} & = \dfrac{2 \times 1.10\, V}{0.0592\,V}\\ & = 37.2 \end{align}\]

\[K_{eq} = 10^{37.2}= 1.58 \times 10^{37}\]

This makes sense in terms of Le Châtelier's Principle: the reaction strongly favors the products over the reactants, which is reflected in the large \(E^{o}_{cell}\) of 1.10 V. Hence, the cell is greatly out of equilibrium under standard conditions. Reactions that are only weakly out of equilibrium will have smaller \(E^{o}_{cell}\) values (neglecting a change in \(n\), of course).

Nernst Equation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
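The arithmetic in the worked example, Equation 9 rearranged to \(K_{eq} = 10^{nE^o/0.0592}\), can be sketched as a short helper (the function name is illustrative, not a standard API):

```python
def keq_from_e_standard(n, e_standard, volt_per_decade=0.0592):
    """Equilibrium constant from the standard cell potential at 298 K:
    log10(Keq) = n * E° / 0.0592 V (Equation 9)."""
    log10_k = n * e_standard / volt_per_decade
    return 10 ** log10_k

# Zn-Cu cell from the example: n = 2, E° = +1.10 V
k = keq_from_e_standard(2, 1.10)
print(f"{k:.2e}")  # on the order of 10^37, matching the worked value
```

A modest \(E^o\) of about 1 V thus corresponds to an enormous equilibrium constant, which is why such cells are, for practical purposes, irreversible under standard conditions.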
687
Non-SI Units
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Units_of_Measure/Non-SI_Units
The metric system of measurement, the International System of Units (SI units), is widely used for quantitative measurements of matter in science and in most countries. However, different systems of measurement existed before the SI system was introduced. Any unit used in another system of measurement (i.e. not included in the SI system) is referred to as a non-SI unit. In most science courses, non-SI units are not used regularly.

Non-SI Units is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
688
Nonstandard Conditions: The Nernst Equation
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Nonstandard_Conditions%3A_The_Nernst_Equation
The standard cell potentials refer to cells in which all dissolved substances are at unit activity, which essentially means an "effective concentration" of 1 M. Similarly, any gases that take part in an electrode reaction are at an effective pressure of 1 atm. If these concentrations or pressures have other values, the cell potential will change in a manner that can be predicted from the principles you already know.

Suppose, for example, that we reduce the concentration of \(Zn^{2+}\) in the \(Zn/Cu\) cell from its standard effective value of 1 M to a much smaller value:

Zn(s) | Zn2+(aq, 0.001 M) || Cu2+(aq) | Cu(s)

This will reduce the value of \(Q\) for the cell reaction

Zn(s) + Cu2+ → Zn2+ + Cu(s)

thus making it more spontaneous, or "driving it to the right" as the Le Chatelier principle would predict, and making its free energy change \(\Delta G\) more negative than \(\Delta G°\), so that \(E\) would be more positive than \(E°\).

The relation between the actual cell potential E and the standard potential E° is developed in the following way. We begin with the equation derived previously which relates the standard free energy change (for the complete conversion of reactants into products) to the standard potential

\[\Delta G° = –nFE° \]

By analogy we can write the more general equation

\[\Delta G = –nFE\]

which expresses the change in free energy for any extent of reaction, that is, for any value of the reaction quotient \(Q\). We now substitute these into the expression that relates \(\Delta G\) and \(\Delta G°\), which you will recall from the chapter on chemical equilibrium:

\[\Delta G = \Delta G° + RT \ln Q\]

which gives

\[–nFE = –nFE° + RT \ln Q \]

which can be rearranged to

\[ E=E° -\dfrac{RT}{nF} \ln Q \label{1}\]

This is the Nernst equation that relates the cell potential to the standard potential and to the activities of the electroactive species. Notice that the cell potential will be the same as \(E°\) only if \(Q\) is unity.
The Nernst equation is more commonly written in base-10 log form and for 25 °C:

\[ E=E° -\dfrac{0.059}{n} \log_{10} Q \label{2}\]

The equation above indicates that the electrical potential of a cell depends upon the reaction quotient \(Q\) of the reaction. As the redox reaction proceeds, reactants are consumed and their concentrations decrease, while the product concentrations increase. As this happens, the cell potential gradually decreases until the reaction is at equilibrium, at which \(\Delta{G} = 0\).

The Nernst equation tells us that a half-cell potential will change by 59 millivolts per 10-fold change in the concentration of a substance involved in a one-electron oxidation or reduction; for two-electron processes, the variation will be about 30 millivolts (59/2 ≈ 29.6 mV) per decade of concentration change.

Thus for the dissolution of metallic copper

Cu(s) → Cu2+ + 2e–

the potential

E = (–0.337) – 0.0296 log [Cu2+]

becomes more positive (the reaction has a greater tendency to take place) as the cupric ion concentration decreases. This, of course, is exactly what the Le Chatelier Principle predicts; the more dilute the product, the greater the extent of the reaction.

Example \(\PageIndex{1}\):

Consider the Zn-Cu redox reaction:

\[Zn_{(s)} + Cu^{2+}_{(aq)} \rightarrow Zn^{2+}_{(aq)} + Cu_{(s)} \;\;\; E^{o}_{cell} = +1.10 \; V\]

Initially, [Cu2+] = [Zn2+] = 1.0 M at standard T = 298 K. As the reaction proceeds, [Cu2+] decreases as [Zn2+] increases (at a 1:1 ratio). Let's say that at some point, [Cu2+] = 0.05 M while [Zn2+] = 1.95 M. According to the Nernst equation, the cell potential is now:

\[E = E^o - \dfrac{0.0592 V}{n} \log Q\]

\[E = 1.10V - \dfrac{0.0592 V}{2} \log\dfrac{1.95 \; M}{0.05 \; M}\]

\[E = 1.05 \; V\]

As you can see, the initial cell potential is \(E = 1.10\, V\); after 95% of the Cu2+ has been consumed, the potential has dropped only to 1.05 V. As the reaction continues to progress, more Cu2+ will be consumed and more Zn2+ will be generated.
As a result, the cell potential continues to decrease, and when the cell potential drops to 0, the concentrations of reactants and products stop changing. This is when the reaction is at equilibrium.

At equilibrium, the reaction quotient \(Q = K_{eq}\). Also, at equilibrium, \(\Delta{G} = 0\) and \(\Delta{G} = -nFE\), so \(E = 0\). Therefore, substituting \(Q = K_{eq}\) and \(E = 0\) into the Nernst equation, we have:

\[0 = E^o - \dfrac{RT}{nF} \ln K_{eq}\]

At 298 K, the equation above simplifies into:

\[0 = E^o - \dfrac{0.0592}{n} \log K_{eq}\]

This equation can be rearranged into:

\[\log K_{eq} = \dfrac{nE^o}{0.0592}\]

The equation above indicates that the logarithm of the equilibrium constant \(K_{eq}\) is proportional to the standard potential of the reaction. Specifically, when:

This result fits Le Châtelier's Principle, which states that when a system at equilibrium experiences a change, the system will minimize that change by shifting the equilibrium in the opposite direction.

Nonstandard Conditions: The Nernst Equation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
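The "59 millivolts per tenfold concentration change" rule quoted on this page follows directly from the Nernst equation. The check below is an illustrative sketch (the function name is ours, not a standard API):

```python
import math

def nernst_potential(e_standard, n, q, temp=298.15):
    """E = E° - (RT/nF) ln Q."""
    R, F = 8.314462618, 96485.332  # J/(mol*K), C/mol
    return e_standard - (R * temp / (n * F)) * math.log(q)

# Shift in E when Q changes by one factor of 10:
one_electron = nernst_potential(0.0, 1, 1.0) - nernst_potential(0.0, 1, 10.0)
two_electron = nernst_potential(0.0, 2, 1.0) - nernst_potential(0.0, 2, 10.0)
print(round(one_electron * 1000, 1))  # ~59.2 mV per decade for n = 1
print(round(two_electron * 1000, 1))  # ~29.6 mV per decade for n = 2
```

The per-decade shift is just \((RT/nF)\ln 10\), so doubling \(n\) halves the shift, which is why the two-electron value is half the one-electron value.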
689
Oxidation-Reduction Reactions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Oxidation-Reduction_Reactions
An oxidation-reduction (redox) reaction is a type of chemical reaction that involves a transfer of electrons between two species. It is any chemical reaction in which the oxidation number of a molecule, atom, or ion changes by gaining or losing an electron. Redox reactions are common and vital to some of the basic functions of life, including photosynthesis, respiration, combustion, and corrosion or rusting.

Rules for Assigning Oxidation States

The oxidation state (OS) of an element corresponds to the number of electrons, e-, that an atom loses, gains, or appears to use when joining with other atoms in compounds. In determining the oxidation state of an atom, there are seven guidelines to follow:

The sum of the oxidation states is equal to zero for neutral compounds and equal to the charge for polyatomic ion species.

Example \(\PageIndex{1}\): Assigning Oxidation States

Determine the oxidation states of each element in the following reactions:

Solutions

Example \(\PageIndex{2}\): Assigning Oxidation States

Determine the oxidation state of the bolded element in each of the following species:

Solutions

Example \(\PageIndex{3}\): Identifying Reduced and Oxidized Elements

Determine which element is oxidized and which element is reduced in the following reactions (be sure to include the oxidation state of each):

Solutions

An atom is oxidized if its oxidation number increases, and an atom is reduced if its oxidation number decreases. The atom that is oxidized is the reducing agent, and the atom that is reduced is the oxidizing agent. (Note: the oxidizing and reducing agents can be the same element or compound.)

Redox reactions consist of two parts, a reduced half and an oxidized half, that always occur together. The reduced half gains electrons and its oxidation number decreases, while the oxidized half loses electrons and its oxidation number increases.
Simple ways to remember this include the mnemonic device OIL RIG, meaning "oxidation is loss" and "reduction is gain." There is no net change in the number of electrons in a redox reaction. Those given off in the oxidation half reaction are taken up by another species in the reduction half reaction.

The two species that exchange electrons in a redox reaction are given special names:

Hence, what is oxidized is the reducing agent and what is reduced is the oxidizing agent. (Note: the oxidizing and reducing agents can be the same element or compound, as in the disproportionation reactions discussed below.)

A good example of a redox reaction is the thermite reaction, in which iron atoms in ferric oxide lose (or give up) \(\ce{O}\) atoms to \(\ce{Al}\) atoms, producing \(\ce{Al2O3}\).

\[\ce{Fe2O3(s) + 2Al(s) \rightarrow Al2O3(s) + 2Fe(l)} \nonumber \]

Example \(\PageIndex{4}\): Identifying Oxidizing and Reducing Agents

Determine the oxidizing and reducing agents in the following reaction.

\[\ce{Zn + 2H^{+} -> Zn^{2+} + H2} \nonumber \]

Solution

The oxidation state of \(\ce{H}\) changes from +1 to 0, and the oxidation state of \(\ce{Zn}\) changes from 0 to +2. Hence, \(\ce{Zn}\) is oxidized and acts as the reducing agent. The \(\ce{H^{+}}\) ion is reduced and acts as the oxidizing agent.

Combination reactions are among the simplest redox reactions and, as the name suggests, involve "combining" elements to form a chemical compound. As usual, oxidation and reduction occur together. The general equation for a combination reaction is given below:

\[\ce{ A + B -> AB} \nonumber \]

Example \(\PageIndex{5}\): Combination Reaction

Consider the combination reaction of hydrogen and oxygen

\[\ce{H2 + O2 -> H2O } \nonumber \]

Solution

0 + 0 → 2(+1) + (-2) = 0

In this reaction both H2 and O2 are free elements; following Rule #1, their oxidation states are 0. The product is H2O, which has a total oxidation state of 0. According to Rule #6, the oxidation state of oxygen is usually -2.
Therefore, the oxidation state of H in H2O must be +1.

A decomposition reaction is the reverse of a combination reaction, the breakdown of a chemical compound into individual elements:

\[\ce{AB -> A + B} \nonumber \]

Example \(\PageIndex{6}\): Decomposition Reaction

Consider the following reaction:

\[\ce{H2O -> H2 + O2}\nonumber \]

This follows the definition of the decomposition reaction, where water is "decomposed" into hydrogen and oxygen.

2(+1) + (-2) = 0 → 0 + 0

As in the previous example, the \(\ce{H2O}\) has a total oxidation state of 0; thus, according to Rule #6 the oxidation state of oxygen is usually -2, so the oxidation state of hydrogen in \(\ce{H2O}\) must be +1.

Note that the autoionization reaction of water is neither a redox nor a decomposition reaction, since the oxidation states do not change for any element:

\[\ce{H2O -> H^{+} + OH^{-}}\nonumber \]

A single replacement reaction involves the "replacing" of an element in the reactants with another element in the products:

\[\ce{A + BC -> AB + C} \nonumber \]

Example \(\PageIndex{7}\): Single Replacement Reaction

Equation:

\[\ce{Cl_2 + Na\underline{Br} \rightarrow Na\underline{Cl} + Br_2 } \nonumber \]

Calculation: 0 + ((+1) + (-1) = 0) → ((+1) + (-1) = 0) + 0

In this equation, \(\ce{Br}\) is replaced with \(\ce{Cl}\); the \(\ce{Cl}\) atoms in \(\ce{Cl2}\) are reduced, while the \(\ce{Br}\) ion in \(\ce{NaBr}\) is oxidized.

A double replacement reaction is similar to a single replacement reaction, but involves "replacing" two elements in the reactants with two in the products:

\[\ce{AB + CD -> AD + CB} \nonumber \]

An example of a double replacement reaction is the reaction of magnesium sulfate with sodium oxalate

\[\ce{MgSO4(aq) + Na2C2O4(aq) -> MgC2O4(s) + Na2SO4(aq)} \nonumber \]

Combustion is the formal term for "burning" and typically involves a substance reacting with oxygen to transfer energy to the surroundings as light and heat. Hence, combustion reactions are almost always exothermic.
For example, internal combustion engines rely on the combustion of organic hydrocarbons \(\ce{C_{x}H_{y}}\) to generate \(\ce{CO2}\) and \(\ce{H2O}\):

\[\ce{C_{x}H_{y} + O2 -> CO2 + H2O}\nonumber \]

Although combustion reactions typically involve redox reactions with a chemical being oxidized by oxygen, many chemicals can "burn" in other environments. For example, both titanium and magnesium metals can burn in nitrogen as well:

\[\ce{ 2Ti(s) + N2(g) -> 2TiN(s)} \nonumber \]

\[\ce{ 3 Mg(s) + N2(g) -> Mg3N2(s)} \nonumber \]

Moreover, chemicals can be oxidized by chemicals other than oxygen, such as \(\ce{Cl2}\) or \(\ce{F2}\); these processes are also considered combustion reactions.

Example \(\PageIndex{8}\): Identifying Combustion Reactions

Which of the following are combustion reactions?

Solution

Both reaction b and reaction d are combustion reactions, although with different oxidizing agents. Reaction b is the conventional combustion reaction using \(\ce{O2}\), and reaction d uses \(\ce{N2}\) instead.

In some reactions, a single substance can be both oxidized and reduced. These are known as disproportionation reactions, with the following general equation:

\[\ce{2A -> A^{+n} + A^{-n}} \nonumber \]

where \(n\) is the number of electrons transferred. Disproportionation reactions do not need to begin with neutral molecules, and can involve more than two species with differing oxidation states (but rarely).

Example \(\PageIndex{9}\): Disproportionation Reaction

Disproportionation reactions have some practical significance in everyday life, including the reaction of hydrogen peroxide, \(\ce{H2O2}\), poured over a cut. This is a decomposition reaction of hydrogen peroxide, which produces oxygen and water. Oxygen is present in all parts of the chemical equation and as a result it is both oxidized and reduced.
The reaction is as follows:

\[\ce{2H2O2(aq) -> 2H2O(l) + O2(g)} \nonumber \]

On the reactant side, \(\ce{H}\) has an oxidation state of +1 and \(\ce{O}\) has an oxidation state of -1, which changes to -2 for the product \(\ce{H2O}\) (oxygen is reduced), and 0 in the product \(\ce{O2}\) (oxygen is oxidized).

Exercise \(\PageIndex{9}\)

Which element undergoes a bifurcation of oxidation states in this disproportionation reaction:

\[\ce{HNO2 -> HNO3 + NO + H2O} \nonumber\]

The \(\ce{N}\) atom undergoes disproportionation. You can confirm that by identifying the oxidation states of each atom in each species.

Oxidation-Reduction Reactions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
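The bookkeeping used throughout these examples, comparing each element's oxidation state before and after the reaction to find the oxidizing and reducing agents, can be sketched as a small helper. The function name and dictionary layout are illustrative assumptions, not a standard API:

```python
def classify_redox(before, after):
    """Given oxidation states before/after a reaction (dicts keyed by
    element symbol), return the lists of oxidized and reduced elements."""
    oxidized = [el for el in before if after[el] > before[el]]
    reduced = [el for el in before if after[el] < before[el]]
    return oxidized, reduced

# Zn + 2H+ -> Zn2+ + H2 (Example 4): Zn goes 0 -> +2, H goes +1 -> 0.
ox, red = classify_redox({"Zn": 0, "H": +1}, {"Zn": +2, "H": 0})
print(ox, red)  # ['Zn'] ['H']  -> Zn is the reducing agent, H+ the oxidizing agent
```

A reaction in which neither list is populated (e.g. NaOH + HCl) is not a redox reaction at all, and a disproportionation would place the same element in both lists if each oxidation state were tracked separately.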
690
Oxidation State
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Oxidation_State
Oxidation-reduction (redox) reactions take place in the world at every moment. In fact, they are directly related to the origin of life. For instance, the oxidation of nutrients releases energy and enables human beings, animals, and plants to thrive. When elements or compounds are exposed to oxygen, a series of reactions can ultimately convert them into carbon dioxide and water (combustion). To fully understand redox and combustion reactions, we must first learn about oxidation states (OS).

Oxidation State is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
691
Oxidation States (Oxidation Numbers)
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Oxidation_States_(Oxidation_Numbers)
Oxidation states simplify the process of determining what is being oxidized and what is being reduced in redox reactions. However, for the purposes of this introduction, it would be useful to review and be familiar with the following concepts:

To illustrate this concept, consider the element vanadium, which forms a number of different ions (e.g., \(\ce{V^{2+}}\) and \(\ce{V^{3+}}\)). The 2+ ion will be formed from vanadium metal by oxidizing the metal and removing two electrons:

\[ \ce{V \rightarrow V^{2+} + 2e^{-}} \label{1}\]

The vanadium in the \( \ce{V^{2+}}\) ion has an oxidation state of +2. Removal of another electron gives the \(\ce{V^{3+}}\) ion:

\[ \ce{V^{2+} \rightarrow V^{3+} + e^{-}} \label{2}\]

The vanadium in the \(\ce{V^{3+} }\) ion has an oxidation state of +3. Removal of another electron forms the ion \(\ce{VO^{2+}}\):

\[ \ce{V^{3+} + H_2O \rightarrow VO^{2+} + 2H^{+} + e^{-}} \label{3}\]

The vanadium in the \(\ce{VO^{2+}}\) ion is now in an oxidation state of +4.

Notice that the oxidation state is not always the same as the charge on the ion: the two are equal for the products in Equations \ref{1} and \ref{2}, but not for the ion in Equation \ref{3}.

The positive oxidation state is the total number of electrons removed from the elemental state. It is possible to remove a fifth electron to form another ion, \(\ce{VO_2^{+}}\), with the vanadium in a +5 oxidation state.

\[ \ce{VO^{2+} + H_2O \rightarrow VO_2^{+} + 2H^{+} + e^{-}}\]

Each time the vanadium is oxidized (and loses another electron), its oxidation state increases by 1. If the process is reversed, or electrons are added, the oxidation state decreases. The ion could be reduced back to elemental vanadium, with an oxidation state of zero.

If electrons are added to an elemental species, its oxidation number becomes negative.
This is impossible for vanadium, but is common for nonmetals such as sulfur:

\[ \ce{S + 2e^- \rightarrow S^{2-}} \]

Here the sulfur has an oxidation state of -2.

The oxidation state of an atom is equal to the total number of electrons which have been removed from an element (producing a positive oxidation state) or added to an element (producing a negative oxidation state) to reach its present state.

Recognizing this simple pattern is the key to understanding the concept of oxidation states. The change in oxidation state of an element during a reaction determines whether it has been oxidized or reduced without the use of electron-half-equations.

Counting the number of electrons transferred is an inefficient and time-consuming way of determining oxidation states. These rules provide a simpler method.

Rules to determine oxidation states

The reasons for the exceptions

Hydrogen in the metal hydrides: Metal hydrides include compounds like sodium hydride, NaH. Here the hydrogen exists as a hydride ion, H-. The oxidation state of a simple ion like hydride is equal to the charge on the ion; in this case, -1.

Alternatively, the sum of the oxidation states in a neutral compound is zero. Because Group 1 metals always have an oxidation state of +1 in their compounds, it follows that the hydrogen must have an oxidation state of -1 (+1 -1 = 0).

Oxygen in peroxides: Peroxides include hydrogen peroxide, H2O2. This is an electrically neutral compound, so the sum of the oxidation states of the hydrogen and oxygen must be zero.

Because each hydrogen has an oxidation state of +1, each oxygen must have an oxidation state of -1 to balance it.

Oxygen in F2O: The deviation here stems from the fact that oxygen is less electronegative than fluorine; the fluorine takes priority with an oxidation state of -1.
Because the compound is neutral, the oxygen has an oxidation state of +2.

Chlorine in compounds with fluorine or oxygen: Because chlorine adopts such a wide variety of oxidation states in these compounds, it is safer to simply remember that its oxidation state is not -1, and work the correct state out using fluorine or oxygen as a reference. An example of this situation is given below.

Example \(\PageIndex{1}\): Chromium

What is the oxidation state of chromium in Cr2+?

Solution

For a simple ion such as this, the oxidation state equals the charge on the ion: +2 (by convention, the + sign is always included to avoid confusion).

What is the oxidation state of chromium in CrCl3?

This is a neutral compound, so the sum of the oxidation states is zero. Chlorine has an oxidation state of -1 (no fluorine or oxygen atoms are present). Let n equal the oxidation state of chromium:

n + 3(-1) = 0

n = +3

The oxidation state of chromium is +3.

Example \(\PageIndex{2}\): Chromium

What is the oxidation state of chromium in Cr(H2O)63+?

Solution

This is an ion, and so the sum of the oxidation states is equal to the charge on the ion. There is a short-cut for working out oxidation states in complex ions like this where the metal atom is surrounded by electrically neutral molecules like water or ammonia.

The sum of the oxidation states in the attached neutral molecule must be zero. That means that you can ignore them when you do the sum. This would be essentially the same as an unattached chromium ion, Cr3+. The oxidation state is +3.

What is the oxidation state of chromium in the dichromate ion, Cr2O72-?

The oxidation state of the oxygen is -2, and the sum of the oxidation states is equal to the charge on the ion. Don't forget that there are 2 chromium atoms present.

2n + 7(-2) = -2

n = +6

Example \(\PageIndex{3}\): Copper

What is the oxidation state of copper in CuSO4?

Solution

Unfortunately, it isn't always possible to work out oxidation states by a simple use of the rules above.
The problem in this case is that the compound contains two elements (the copper and the sulfur) with variable oxidation states. In cases like these, some chemical intuition is useful. Here are two ways of approaching this problem:

You will have come across names like iron(II) sulfate and iron(III) chloride. The (II) and (III) are the oxidation states of the iron in the two compounds: +2 and +3 respectively. That tells you that they contain Fe2+ and Fe3+ ions.

This can also be extended to negative ions. Iron(II) sulfate is FeSO4. The sulfate ion is SO42-. The oxidation state of the sulfur is +6 (work it out!); therefore, the ion is more properly named the sulfate(VI) ion.

The sulfite ion is SO32-. The oxidation state of the sulfur is +4. This ion is more properly named the sulfate(IV) ion. The -ate ending indicates that the sulfur is in a negative ion.

FeSO4 is properly named iron(II) sulfate(VI), and FeSO3 is iron(II) sulfate(IV). Because of the potential for confusion in these names, the older names of sulfate and sulfite are more commonly used in introductory chemistry courses.

This is the most common function of oxidation states. Remember:

In each of the following examples, we have to decide whether the reaction is a redox reaction, and if so, which species have been oxidized and which have been reduced.

Example \(\PageIndex{4}\):

This is the reaction between magnesium and hydrogen chloride:

\[ \ce{Mg + 2HCl -> MgCl2 +H2} \nonumber\]

Solution

Assign each element its oxidation state to determine if any change states over the course of the reaction:

The oxidation state of magnesium has increased from 0 to +2; the element has been oxidized. The oxidation state of hydrogen has decreased; hydrogen has been reduced.
The chlorine is in the same oxidation state on both sides of the equation; it has not been oxidized or reduced.

Example \(\PageIndex{5}\):

The reaction between sodium hydroxide and hydrochloric acid is:

\[ NaOH + HCl \rightarrow NaCl + H_2O\]

The oxidation states are assigned:

None of the elements are oxidized or reduced. This is not a redox reaction.

Example \(\PageIndex{6}\):

The reaction between chlorine and cold dilute sodium hydroxide solution is given below:

\[ \ce{2NaOH + Cl_2 \rightarrow NaCl + NaClO + H_2O} \nonumber\]

It is probable that the elemental chlorine has changed oxidation state because it has formed two ionic compounds. Checking all the oxidation states verifies this:

Chlorine is the only element to have changed oxidation state. However, its transition is more complicated than in previously-discussed examples: it is both oxidized and reduced. The NaCl chlorine atom is reduced to a -1 oxidation state; the NaClO chlorine atom is oxidized to a state of +1. This type of reaction, in which a single substance is both oxidized and reduced, is called a disproportionation reaction.

Oxidation states can be useful in working out the stoichiometry for titration reactions when there is insufficient information to work out the complete ionic equation. Each time an oxidation state changes by one unit, one electron has been transferred. If the oxidation state of one substance in a reaction decreases by 2, it has gained 2 electrons.

Another species in the reaction must have lost those electrons. Any oxidation state decrease in one substance must be accompanied by an equal oxidation state increase in another.

Example \(\PageIndex{1}\):

Ions containing cerium in the +4 oxidation state are oxidizing agents, capable of oxidizing molybdenum from the +2 to the +6 oxidation state (from Mo2+ to MoO42-). Cerium is reduced to the +3 oxidation state (Ce3+) in the process. What are the reacting proportions?

Solution

The oxidation state of the molybdenum increases by 4.
Therefore, the oxidation state of the cerium must decrease by 4 to compensate. However, the oxidation state of cerium only decreases from +4 to +3, a decrease of 1. Therefore, there must be 4 cerium ions involved for each molybdenum ion; this fulfills the stoichiometric requirements of the reaction.

The reacting proportions are 4 cerium-containing ions to 1 molybdenum ion.

Here is a more common example involving iron(II) ions and manganate(VII) ions: A solution of potassium manganate(VII), KMnO4, acidified with dilute sulfuric acid oxidizes iron(II) ions to iron(III) ions. In the process, the manganate(VII) ions are reduced to manganese(II) ions. Use oxidation states to work out the equation for the reaction.

The oxidation state of the manganese in the manganate(VII) ion is +7, as indicated by the name (but it should be fairly straightforward and useful practice to figure it out from the chemical formula).

In the process of transitioning to manganese(II) ions, the oxidation state of manganese decreases by 5. Every reacting iron(II) ion increases its oxidation state by 1. Therefore, there must be five iron(II) ions reacting for every one manganate(VII) ion.

The left-hand side of the equation is therefore written as: MnO4- + 5Fe2+ + ?

The right-hand side is written as: Mn2+ + 5Fe3+ + ?

The remaining atoms and the charges must be balanced using some intuitive guessing. In this case, it is probable that the oxygen will end up in water, which must be balanced with hydrogen. It has been specified that this reaction takes place under acidic conditions, providing plenty of hydrogen ions.

The fully balanced equation is displayed below:

\[ MnO_4^- + 8H^+ + 5Fe^{2+} \rightarrow Mn^{2+} + 4H_2O + 5Fe^{3+} \nonumber\]

Jim Clark (Chemguide.co.uk)

This page titled Oxidation States (Oxidation Numbers) is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jim Clark.
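The electron-balancing argument in the two worked examples above (4 Ce4+ per Mo2+, and 5 Fe2+ per MnO4-) amounts to finding the smallest whole-number ratio that equalizes the electrons gained by the oxidant and lost by the reductant. A minimal sketch, with an illustrative helper name:

```python
from math import gcd

def reacting_ratio(electrons_lost, electrons_gained):
    """Smallest whole-number (oxidant : reductant) ratio that balances
    electrons: each reductant loses `electrons_lost`, each oxidant gains
    `electrons_gained`, so oxidant count scales with electrons lost and
    reductant count with electrons gained."""
    g = gcd(electrons_lost, electrons_gained)
    return electrons_lost // g, electrons_gained // g

# Ce4+ gains 1 e-; Mo2+ -> MoO4^2- loses 4 e-  =>  4 Ce : 1 Mo
print(reacting_ratio(4, 1))  # (4, 1)
# MnO4- gains 5 e-; Fe2+ -> Fe3+ loses 1 e-    =>  1 MnO4- : 5 Fe2+
print(reacting_ratio(1, 5))  # (1, 5)
```

Dividing by the greatest common divisor matters only when both half-reactions move more than one electron (e.g. 2 lost, 4 gained reduces to 1 : 2).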
Oxidation States II
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Oxidation_State/Oxidation_States_II
Oxidation state is a number assigned to an element in a compound according to a set of rules. These numbers enable us to describe oxidation-reduction reactions and to balance redox chemical reactions. When a covalent bond forms between two atoms with different electronegativities, the shared electrons in the bond lie closer to the more electronegative atom. In HCl, the oxidation number of the hydrogen is +1 and that of the Cl is -1. For oxidation numbers we write the sign first to distinguish them from ionic (electronic) charges. Oxidation states (or oxidation numbers) do not refer to real charges on the atoms, except in the case of actual ionic substances.

The oxidation state (OS) of an element corresponds to the number of electrons, e-, that an atom loses, gains, or appears to use when joining with other atoms in compounds. In determining the OS of an atom, there are seven guidelines to follow. (Note: the sum of the OSs is equal to zero for neutral compounds and equal to the charge for polyatomic ion species.)

(For further discussion, see the article on oxidation numbers.)

An atom is oxidized if its oxidation number increases, and an atom is reduced if its oxidation number decreases. The atom that is oxidized is the reducing agent, and the atom that is reduced is the oxidizing agent. (Note: the oxidizing and reducing agents can be the same element or compound.)

Compounds of the alkali metals (oxidation number +1) and alkaline earth metals (oxidation number +2) are typically ionic in nature.
Compounds of metals with higher oxidation numbers (e.g., tin +4) tend to form molecular compounds. For example: OF2, oxygen difluoride; Mn2O3, manganese(III) oxide; Cl2O3, dichlorine trioxide.

An oxidation-reduction (redox) reaction is a type of chemical reaction that involves a transfer of electrons between two species: any chemical reaction in which the oxidation number of a molecule, atom, or ion changes by gaining or losing an electron. Redox reactions are common and vital to some of the basic functions of life, including photosynthesis, respiration, combustion, and corrosion or rusting.

Redox reactions consist of two parts, a reduced half and an oxidized half, that always occur together. The reduced half gains electrons and its oxidation number decreases, while the oxidized half loses electrons and its oxidation number increases. Simple ways to remember this include the mnemonic devices OIL RIG, meaning "oxidation is loss" and "reduction is gain," and LEO says GER, meaning "loss of e- = oxidation" and "gain of e- = reduction." There is no net change in the number of electrons in a redox reaction: those given off in the oxidation half-reaction are taken up by another species in the reduction half-reaction.

The two species that exchange electrons in a redox reaction are given special names. The ion or molecule that accepts electrons is called the oxidizing agent; by accepting electrons it causes the oxidation of another species. Conversely, the species that donates electrons is called the reducing agent; when the reaction occurs, it reduces the other species. In other words, what is oxidized is the reducing agent and what is reduced is the oxidizing agent. (Note: the oxidizing and reducing agents can be the same element or compound, as in disproportionation reactions.)
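The LEO/GER bookkeeping described above can be expressed as a small sketch. This is illustrative only (the species and their oxidation states are supplied by hand, not derived from formulas): it labels each species whose oxidation state changes as the reducing or oxidizing agent.

```python
def classify(species_os: dict[str, tuple[int, int]]) -> dict[str, str]:
    """Given {species: (initial OS, final OS)} for the atom that changes,
    label each species per LEO (loss = oxidation) / GER (gain = reduction)."""
    roles = {}
    for species, (before, after) in species_os.items():
        if after > before:
            roles[species] = "oxidized -> reducing agent"
        elif after < before:
            roles[species] = "reduced -> oxidizing agent"
        else:
            roles[species] = "unchanged"
    return roles

# Zn + Cu^2+ -> Zn^2+ + Cu: zinc loses electrons, copper(II) gains them
print(classify({"Zn": (0, 2), "Cu^2+": (2, 0)}))
```

Running this confirms the rule stated above: what is oxidized is the reducing agent, and what is reduced is the oxidizing agent.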
In the thermite reaction, for example, \(Fe_2O_3\) transfers O atoms to Al atoms, producing \(Al_2O_3\):

\[Fe_2O_{3(s)} + 2Al_{(s)} \rightarrow Al_2O_{3(s)} + 2Fe_{(l)}\]

Another example of a redox reaction is the reaction between zinc and copper sulfate.

Using the equations from the previous examples, determine what is oxidized in the following reaction.

\[Zn + 2H^+ \rightarrow Zn^{2+} + H_2\]

Solution

The OS of H changes from +1 to 0, and the OS of Zn changes from 0 to +2. Hence, Zn is oxidized and acts as the reducing agent.

What is the reduced species in this reaction?

\[Zn + 2H^+ \rightarrow Zn^{2+} + H_2\]

Solution

The OS of H changes from +1 to 0, and the OS of Zn changes from 0 to +2. Hence, the H+ ion is reduced and acts as the oxidizing agent.

Combination reactions are among the simplest redox reactions and, as the name suggests, involve "combining" elements to form a chemical compound. As usual, oxidation and reduction occur together. The general equation for a combination reaction is given below:

\[A + B \rightarrow AB \]

Equation: H2 + O2 → H2O

Calculation: 0 + 0 → (+1) + (-2) = 0

Explanation: In this equation both H2 and O2 are free elements; following Rule #1, their OSs are 0. The product is H2O, which has a total OS of 0. According to Rule #6, the OS of oxygen is usually -2. Therefore, the OS of H in H2O must be +1.

A decomposition reaction is the reverse of a combination reaction: the breakdown of a chemical compound into individual elements.

\[AB \rightarrow A + B\]

Consider the decomposition of water:

\[H_2O \rightarrow H_2 + O_2\]

Calculation: (+1) + (-2) = 0 → 0 + 0

Explanation: In this reaction, water is "decomposed" into hydrogen and oxygen.
As in the previous example, H2O has a total OS of 0; according to Rule #6 the OS of oxygen is usually -2, so the OS of hydrogen in H2O must be +1.

A single replacement reaction involves "replacing" an element in the reactants with another element in the products:

\[A + BC \rightarrow AB + C\]

Equation:

\[Cl_2 + Na\underline{Br} \rightarrow Na\underline{Cl} + Br_2\]

Calculation: 0 + ((+1) + (-1) = 0) → ((+1) + (-1) = 0) + 0

Explanation: In this equation, Br is replaced with Cl; the Cl atoms in Cl2 are reduced, while the Br- ion in NaBr is oxidized.

A double replacement reaction is similar to a single replacement reaction, but involves "replacing" two elements in the reactants with two in the products:

\[AB + CD \rightarrow AD + CB \]

Equation: Fe2O3 + HCl → FeCl3 + H2O

Explanation: In this equation, Fe and H trade places, and oxygen and chlorine trade places.

Combustion reactions almost always involve oxygen in the form of O2, and are almost always exothermic, meaning they produce heat. Chemical reactions that give off heat and light are colloquially referred to as "burning."

\[C_xH_y + O_2 \rightarrow CO_2 + H_2O\]

Although combustion reactions typically involve redox reactions with a chemical being oxidized by oxygen, many chemicals can "burn" in other environments. For example, both titanium and magnesium burn in nitrogen:

\[ 2Ti(s) + N_{2}(g) \rightarrow 2TiN(s)\]

\[3 Mg(s) + N_{2}(g) \rightarrow Mg_3N_{2}(s) \]

Moreover, chemicals can be oxidized by chemicals other than oxygen, such as Cl2 or F2; these processes are also considered combustion reactions.

Disproportionation reactions: In some redox reactions a single substance can be both oxidized and reduced. These are known as disproportionation reactions, with the following general equation:

\[2A \rightarrow A^{+n} + A^{-n}\]

where n is the number of electrons transferred.
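For the general hydrocarbon combustion shown above, the coefficients follow directly from atom balance: \(C_xH_y + (x + y/4)\,O_2 \rightarrow x\,CO_2 + (y/2)\,H_2O\). A minimal sketch (assuming complete combustion to CO2 and H2O; this helper is not from the original page):

```python
from fractions import Fraction

def combustion_coefficients(x: int, y: int) -> tuple[int, int, int, int]:
    """Whole-number coefficients (fuel, O2, CO2, H2O) for CxHy + O2 -> CO2 + H2O."""
    fuel = Fraction(1)
    o2   = Fraction(x) + Fraction(y, 4)   # O balance: 2*o2 = 2x + y/2
    co2  = Fraction(x)                    # C balance
    h2o  = Fraction(y, 2)                 # H balance
    # scale by the largest denominator (a power of 2 here) to clear fractions
    m = max(c.denominator for c in (fuel, o2, co2, h2o))
    return tuple(int(c * m) for c in (fuel, o2, co2, h2o))

# octane, C8H18: 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O
print(combustion_coefficients(8, 18))  # (2, 25, 16, 18)
```

For methane (x = 1, y = 4) the same function gives the familiar 1 CH4 + 2 O2 → 1 CO2 + 2 H2O.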
Disproportionation reactions do not need to begin with neutral molecules, and can involve more than two species with differing oxidation states (though rarely).

Disproportionation reactions have practical significance in everyday life; one example is the decomposition of hydrogen peroxide, H2O2, when it is poured over a cut, producing oxygen and water. Oxygen is present in all parts of the chemical equation, and as a result it is both oxidized and reduced. The reaction is as follows:

\[2H_2O_{2}(aq) \rightarrow 2H_2O(l) + O_{2}(g)\]

Explanation: On the reactant side, H has an OS of +1 and O has an OS of -1, which changes to -2 in the product H2O (oxygen is reduced) and to 0 in the product O2 (oxygen is oxidized).

Oxidation States II is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Oxidizing and Reducing Agents
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Oxidizing_and_Reducing_Agents
Oxidizing and reducing agents are key terms used in describing the reactants in redox reactions, which transfer electrons between reactants to form products. This page discusses what defines an oxidizing or reducing agent, how to determine the oxidizing and reducing agent in a chemical reaction, and the importance of this concept in real-world applications.

An oxidizing agent, or oxidant, gains electrons and is reduced in a chemical reaction. Also known as the electron acceptor, the oxidizing agent is normally in one of its higher possible oxidation states because it will gain electrons and be reduced. Examples of oxidizing agents include halogens, potassium nitrate, and nitric acid.

A reducing agent, or reductant, loses electrons and is oxidized in a chemical reaction. A reducing agent is typically in one of its lower possible oxidation states, and is known as the electron donor. A reducing agent is oxidized because it loses electrons in the redox reaction. Examples of reducing agents include the earth metals, formic acid, and sulfite compounds.

To help eliminate confusion, there is a mnemonic device for identifying oxidizing and reducing agents. OIL RIG: Oxidation Is Loss and Reduction Is Gain of electrons.

Exercise \(\PageIndex{1}\)

Identify the oxidizing agent and the reducing agent in the following redox reaction:

\[\ce{MnO4^{-} + SO3^{2-} -> Mn^{+2} + SO4^{2-}}\nonumber\]

\(\ce{SO3^{2-}}\) is the reducing agent and \(\ce{MnO4^{-}}\) is the oxidizing agent. Note that while it is a specific atom whose oxidation state changes, the agents are the actual species, not the atoms.

Oxidizing and reducing agents are important in industrial applications. They are used in processes such as purifying water, bleaching fabrics, and storing energy (such as in batteries and gasoline).
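The exercise above can be checked with a one-line oxidation-state calculation for simple oxoanions, using the usual rule that oxygen counts as -2 and the oxidation states must sum to the ion's charge. This is a sketch for XO_n ions only, not a general oxidation-state solver:

```python
def central_atom_os(n_oxygen: int, charge: int) -> int:
    """Oxidation state of the central atom X in XO_n^charge, taking O as -2."""
    return charge - (-2) * n_oxygen

print(central_atom_os(4, -1))  # Mn in MnO4^-:  +7
print(central_atom_os(3, -2))  # S  in SO3^2-:  +4
print(central_atom_os(4, -2))  # S  in SO4^2-:  +6
```

Manganese falls from +7 to +2 (reduced, so MnO4^- is the oxidizing agent), while sulfur rises from +4 to +6 (oxidized, so SO3^2- is the reducing agent), matching the answer given above.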
Oxidizing and reducing agents are especially crucial in biological processes such as metabolism and photosynthesis. For example, organisms use electron acceptors such as NAD+ to harvest energy from redox reactions, as in glycolysis, the breakdown of glucose:

\[C_6H_{12}O_6 + 2ADP + 2P + 2NAD^+ \rightarrow 2CH_3COCO_2H + 2ATP + 2NADH \nonumber\]

All combustion reactions are also examples of redox reactions. A combustion reaction occurs when a substance reacts with oxygen to create heat. One example is the combustion of octane, the principal component of gasoline:

\[2 C_8H_{18} (l) + 25 O_2 (g) \rightarrow 16 CO_2 (g) + 18 H_2O (g) \nonumber\]

Combustion reactions are a major source of energy for modern industry.

By looking at each element's oxidation state on the reactant side of a chemical equation compared with the same element's oxidation state on the product side, one can determine if the element is reduced or oxidized, and can therefore identify the oxidizing and reducing agents of a chemical reaction.

\(NO_3^-\), \(NO\), \(N_2H_4\), \(NH_3\)

Oxidizing and Reducing Agents is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
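Whether an equation such as the octane combustion above is balanced can be verified by counting atoms on each side. A small illustrative sketch (the formulas and coefficients are entered by hand):

```python
from collections import Counter

def side_atoms(terms):
    """terms: list of (coefficient, {element: count}) pairs for one side."""
    total = Counter()
    for coeff, formula in terms:
        for el, n in formula.items():
            total[el] += coeff * n
    return total

left  = side_atoms([(2, {"C": 8, "H": 18}), (25, {"O": 2})])
right = side_atoms([(16, {"C": 1, "O": 2}), (18, {"H": 2, "O": 1})])
print(left == right)  # True: 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O is balanced
```

Both sides count 16 C, 36 H, and 50 O atoms, confirming the equation as written.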
Physical Quantities
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Units_of_Measure/Physical_Quantities
Chemistry is a quantitative science. Amounts of substances and energies must always be expressed in numbers and units in order to make sense of what you are talking about. You should also develop a feel for quantities every time you encounter them; you should be familiar with the name, prefix, and symbol used for various quantities. However, due to the many different units we use, the expression of quantities is rather complicated. We will deal with the number part of quantities on this page, using SI units. Very large or very small numbers are expressed in scientific notation by eXXX or EXXX.

By now, you have probably realized that every time a number increases by a factor of a thousand, we give it a new name, a new prefix, and a new symbol in its expression. After you are familiar with the words associated with these numbers, you should be able to communicate numbers with ease. Consider the following number:

123,456,789,101,234,567

In words, this 18-digit number takes up a few lines: one hundred twenty-three quadrillion, four hundred fifty-six trillion, seven hundred eighty-nine billion, one hundred one million, two hundred thirty-four thousand, five hundred sixty-seven.

If a quantity makes use of this number, the quantity has been measured precisely. Most quantities do not have a precise enough measurement to warrant so many significant figures. The above number may often be expressed as 123e15, read as one hundred twenty-three quadrillion.

There are seven basic quantities in science, and these quantities, their symbols, names of their units, and unit symbols are listed below:

*The unit ampere, A, is equal to coulombs per second (A = C/s).

Chung (Peter) Chieh (Professor Emeritus, Chemistry, University of Waterloo)

Physical Quantities is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
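The grouping-by-thousands scheme described above corresponds to engineering notation: a mantissa between 1 and 1000 times a power of 10 that is a multiple of 3. A hypothetical helper (not from the original page) that formats a positive integer this way:

```python
def engineering_notation(n: int) -> str:
    """Express a positive integer as <mantissa>e<exp>, with exp a multiple of 3
    and 1 <= mantissa < 1000. Works on the digit string to avoid float rounding."""
    s = str(n)
    exp = (len(s) - 1) // 3 * 3        # largest multiple of 3 below the digit count
    lead = len(s) - exp                # 1 to 3 leading digits
    rest = s[lead:]
    mantissa = s[:lead] + ("." + rest if rest else "")
    return f"{mantissa}e{exp}"

print(engineering_notation(123456789101234567))  # 123.456789101234567e15
```

The exponent 15 and leading group 123 are exactly what lets the number be read as "one hundred twenty-three quadrillion," as in the text above.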
Prefixes
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Units_of_Measure/Prefixes
Prefixes are often used for decimal multiples and submultiples of units. Often, the symbols are used together with units. For example, MeV means million electron volts, a unit of energy. Memorizing these prefixes is not something you will enjoy, but if you do know them by heart, you will appreciate the quantity when you encounter it in your reading. You can come back here to check them in case you forget. The table is arranged in a symmetric fashion for convenience of comparison. Note that increments or decrements by a factor of a thousand (10+3 or 10-3) are used, aside from hecto (centi) and deca (deci). When you want to express some large or small quantities, you may also find these prefixes useful.

Greek prefixes are often used for naming compounds. You will need the prefixes in order to give the proper names of many compounds, and you also need to know them to figure out formulas from names. The common prefixes are given in this table. Note that some of the prefixes may change slightly when they are applied to names; some of the examples show the variations. Note also that some names are given using other conventions. For example, \(\ce{P4O6}\) and \(\ce{P4O10}\) are called phosphorus trioxide and phosphorus pentoxide respectively; these names are based on their empirical formulas.

Chung (Peter) Chieh (Professor Emeritus, Chemistry, University of Waterloo)

Prefixes is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
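The decimal prefixes described above amount to a lookup table of powers of ten. A minimal illustrative sketch (only a subset of the SI prefixes is included):

```python
# Exponent of 10 for a subset of SI prefixes (illustrative, not exhaustive).
SI_PREFIXES = {
    "T": 12, "G": 9, "M": 6, "k": 3, "h": 2, "da": 1,
    "d": -1, "c": -2, "m": -3, "u": -6, "n": -9, "p": -12,
}

def to_base_units(value: float, prefix: str) -> float:
    """Convert a prefixed quantity to base units, e.g. 5 km -> 5000 m."""
    return value * 10 ** SI_PREFIXES[prefix]

print(to_base_units(5, "k"))   # 5 km  = 5000 m
print(to_base_units(1, "M"))   # 1 MeV = 1e6 eV
```

Note the table's symmetry mentioned above: apart from hecto/deca and centi/deci, the exponents step by 3.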
Propagation of Error
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Significant_Digits/Propagation_of_Error
Propagation of Error (or Propagation of Uncertainty) is defined as the effect of a variable's uncertainty on a function of that variable. It is a calculus-derived statistical calculation designed to combine uncertainties from multiple variables in order to provide an accurate measurement of uncertainty. Every measurement has an air of uncertainty about it, and not all uncertainties are equal. Therefore, the ability to properly combine uncertainties from different measurements is crucial. Uncertainty in measurement comes about in a variety of ways: instrument variability, different observers, sample differences, time of day, etc. Typically, error is given by the standard deviation (\(\sigma_x\)) of a measurement.

Anytime a calculation requires more than one variable to solve, propagation of error is necessary to properly determine the uncertainty. For example, let's say we are using a UV-Vis spectrophotometer to determine the molar absorptivity of a molecule via Beer's Law: A = εlc. Since at least two of the variables have an uncertainty based on the equipment used, a propagation of error formula must be applied to measure a more exact uncertainty of the molar absorptivity. This example will be continued below, after the derivation.

Suppose a certain experiment requires multiple instruments to carry out. These instruments each have different variability in their measurements. The results of each instrument are given as: a, b, c, d... (For simplification purposes, only the variables a, b, and c will be used throughout this derivation.) The desired end result is \(x\), so that \(x\) is dependent on a, b, and c.
It can be written that \(x\) is a function of these variables:

\[x=f(a,b,c) \label{1}\]

Because each measurement has an uncertainty about its mean, it can be written that the uncertainty \(dx_i\) of the ith measurement of \(x\) depends on the uncertainty of the ith measurements of a, b, and c:

\[dx_i=f(da_i,db_i,dc_i)\label{2}\]

The total deviation of \(x\) is then derived from the partial derivatives of x with respect to each of the variables:

\[dx=\left(\dfrac{\delta{x}}{\delta{a}}\right)_{b,c}da + \left(\dfrac{\delta{x}}{\delta{b}}\right)_{a,c}db + \left(\dfrac{\delta{x}}{\delta{c}}\right)_{a,b}dc \label{3}\]

A relationship between the standard deviations of x and a, b, c, etc., is formed in two steps: first, squaring Equation \ref{3}, and second, summing over all N measurements. In the first step, two kinds of terms appear on the right-hand side of the equation: square terms and cross terms.

Square terms:

\[\left(\dfrac{\delta{x}}{\delta{a}}\right)^2(da)^2,\; \left(\dfrac{\delta{x}}{\delta{b}}\right)^2(db)^2, \; \left(\dfrac{\delta{x}}{\delta{c}}\right)^2(dc)^2\label{4}\]

Cross terms:

\[\left(\dfrac{\delta{x}}{\delta{a}}\right)\left(\dfrac{\delta{x}}{\delta{b}}\right)da\;db,\;\left(\dfrac{\delta{x}}{\delta{a}}\right)\left(\dfrac{\delta{x}}{\delta{c}}\right)da\;dc,\;\left(\dfrac{\delta{x}}{\delta{b}}\right)\left(\dfrac{\delta{x}}{\delta{c}}\right)db\;dc\label{5}\]

Square terms, due to the nature of squaring, are always positive, and therefore never cancel each other out. By contrast, cross terms may cancel each other out, due to the possibility that each term may be positive or negative. If \(da\), \(db\), and \(dc\) represent random and independent uncertainties, about half of the cross terms will be negative and half positive (this is primarily due to the fact that the variables represent uncertainty about a mean). In effect, the sum of the cross terms should approach zero, especially as \(N\) increases.
However, if the variables are correlated rather than independent, the cross term may not cancel out.Assuming the cross terms do cancel out, then the second step - summing from \(i = 1\) to \(i = N\) - would be:\[\sum{(dx_i)^2}=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\sum(da_i)^2 + \left(\dfrac{\delta{x}}{\delta{b}}\right)^2\sum(db_i)^2\label{6}\]Dividing both sides by \(N - 1\):\[\dfrac{\sum{(dx_i)^2}}{N-1}=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\dfrac{\sum(da_i)^2}{N-1} + \left(\dfrac{\delta{x}}{\delta{b}}\right)^2\dfrac{\sum(db_i)^2}{N-1}\label{7}\]The previous step created a situation where Equation \ref{7} could mimic the standard deviation equation. This is desired, because it creates a statistical relationship between the variable \(x\), and the other variables \(a\), \(b\), \(c\), etc... as follows:The standard deviation equation can be rewritten as the variance (\(\sigma_x^2\)) of \(x\):\[\dfrac{\sum{(dx_i)^2}}{N-1}=\dfrac{\sum{(x_i-\bar{x})^2}}{N-1}=\sigma^2_x\label{8}\]Rewriting Equation \ref{7} using the statistical relationship created yields the Exact Formula for Propagation of Error:\[\sigma^2_x=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\sigma^2_a+\left(\dfrac{\delta{x}}{\delta{b}}\right)^2\sigma^2_b+\left(\dfrac{\delta{x}}{\delta{c}}\right)^2\sigma^2_c\label{9}\]Thus, the end result is achieved. Equation \ref{9} shows a direct statistical relationship between multiple variables and their standard deviations. 
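The exact formula in Equation (9) can be checked numerically. The sketch below (illustrative values only, not from the page) estimates the partial derivatives by central finite differences, applies Equation (9), and compares the result with the closed-form multiplication/division rule of Equation (11) for \(x = ab/c\):

```python
import math

def propagate(f, vals, sigmas, h=1e-6):
    """Apply Eq (9): sigma_x^2 = sum_i (df/dv_i)^2 * sigma_i^2,
    with partial derivatives estimated by central differences."""
    var = 0.0
    for i, s in enumerate(sigmas):
        up, dn = list(vals), list(vals)
        up[i] += h
        dn[i] -= h
        dfdv = (f(*up) - f(*dn)) / (2 * h)
        var += dfdv**2 * s**2
    return math.sqrt(var)

# x = a*b/c with assumed (made-up) values and standard deviations
f = lambda a, b, c: a * b / c
vals, sigmas = (2.0, 3.0, 4.0), (0.02, 0.03, 0.04)
sx = propagate(f, vals, sigmas)
x = f(*vals)

# Closed form of Eq (11) for comparison
rel = math.sqrt((0.02 / 2)**2 + (0.03 / 3)**2 + (0.04 / 4)**2)
print(sx / x, rel)
```

The two relative uncertainties agree to within the finite-difference error, which is the point of the exact formula: the special-case rules in the next section are all instances of Equation (9).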
In the next section, derivations for common calculations are given, with an example of how the derivation was obtained. In the following calculations \(a\), \(b\), and \(c\) are measured variables from an experiment and \(\sigma_a\), \(\sigma_b\), and \(\sigma_c\) are the standard deviations of those variables.

If \(x = a + b - c\) then

\[\sigma_x= \sqrt{ {\sigma_a}^2+{\sigma_b}^2+{\sigma_c}^2} \label{10}\]

If \(x = \dfrac{ a \times b}{c}\) then

\[ \dfrac{\sigma_x}{x}=\sqrt{\left(\dfrac{\sigma_a}{a}\right)^2+\left(\dfrac{\sigma_b}{b}\right)^2+\left(\dfrac{\sigma_c}{c}\right)^2}\label{11} \]

If \(x = a^y\) then

\[\dfrac{\sigma_x}{x}=y \left(\dfrac{\sigma_a}{a}\right) \label{12}\]

If \(x = \log(a)\) then

\[\sigma_x=0.434 \left(\dfrac{\sigma_a}{a}\right) \label{13}\]

If \(x = \text{antilog}(a)\) then

\[\dfrac{\sigma_x}{x}=2.303({\sigma_a}) \label{14}\]

Addition, subtraction, and logarithmic equations lead to an absolute standard deviation, while multiplication, division, exponential, and anti-logarithmic equations lead to relative standard deviations.

The Exact Formula for Propagation of Error in Equation \(\ref{9}\) can be used to derive the arithmetic examples noted above. Starting with a simple equation:

\[x = a \times \dfrac{b}{c} \label{15}\]

where \(x\) is the desired result with a given standard deviation, and \(a\), \(b\), and \(c\) are experimental variables, each with a different standard deviation.
Taking the partial derivative of each experimental variable, \(a\), \(b\), and \(c\):\[\left(\dfrac{\delta{x}}{\delta{a}}\right)=\dfrac{b}{c} \label{16a}\]\[\left(\dfrac{\delta{x}}{\delta{b}}\right)=\dfrac{a}{c} \label{16b}\]and\[\left(\dfrac{\delta{x}}{\delta{c}}\right)=-\dfrac{ab}{c^2}\label{16c}\]Plugging these partial derivatives into Equation \(\ref{9}\) gives:\[\sigma^2_x=\left(\dfrac{b}{c}\right)^2\sigma^2_a+\left(\dfrac{a}{c}\right)^2\sigma^2_b+\left(-\dfrac{ab}{c^2}\right)^2\sigma^2_c\label{17}\]Dividing Equation \(\ref{17}\) by Equation \(\ref{15}\) squared yields:\[\dfrac{\sigma^2_x}{x^2}=\dfrac{\left(\dfrac{b}{c}\right)^2\sigma^2_a}{\left(\dfrac{ab}{c}\right)^2}+\dfrac{\left(\dfrac{a}{c}\right)^2\sigma^2_b}{\left(\dfrac{ab}{c}\right)^2}+\dfrac{\left(-\dfrac{ab}{c^2}\right)^2\sigma^2_c}{\left(\dfrac{ab}{c}\right)^2}\label{18}\]Canceling out terms and square-rooting both sides yields Equation \ref{11}:\[\dfrac{\sigma_x}{x}={\sqrt{\left(\dfrac{\sigma_a}{a}\right)^2+\left(\dfrac{\sigma_b}{b}\right)^2+\left(\dfrac{\sigma_c}{c}\right)^2}} \nonumber\]Continuing the example from the introduction (where we are calculating the molar absorptivity of a molecule), suppose we have a concentration of 13.7 (±0.3) moles/L, a path length of 1.0 (±0.1) cm, and an absorption of 0.172807 (±0.000008). The equation for molar absorptivity is dictated by Beer's law:\[ε = \dfrac{A}{lc}. \nonumber\]Since Beer's Law deals with multiplication/division, we'll use Equation \ref{11}:\[\begin{align*} \dfrac{\sigma_{\epsilon}}{\epsilon} &={\sqrt{\left(\dfrac{0.000008}{0.172807}\right)^2+\left(\dfrac{0.1}{1.0}\right)^2+\left(\dfrac{0.3}{13.7}\right)^2}} \\[4pt] &=0.10237 \end{align*}\]As stated in the note above, Equation \ref{11} yields a relative standard deviation, or a percentage of the ε variable. 
Using Beer's Law, ε = 0.012614 L mol-1 cm-1. Therefore, \(\sigma_{\epsilon}\) for this example is 10.237% of ε, which is 0.001291. Accounting for significant figures, the final answer would be:

ε = 0.013 ± 0.001 L mol-1 cm-1

If you are given an equation that relates two different variables and the relative uncertainty of one of the variables, it is possible to determine the relative uncertainty of the other variable by using calculus. In problems, the uncertainty is usually given as a percent. Let's say we measure the radius of a very small object; the problem might state that there is a 5% uncertainty when measuring this radius. To actually use this percentage to calculate unknown uncertainties of other variables, we must first define what uncertainty is. Uncertainty, in calculus, is defined as:

\[\left(\dfrac{dx}{x}\right) = \left(\dfrac{∆x}{x}\right) = \text{uncertainty} \nonumber\]

Let's look at the example of the radius of an object again. If we know the uncertainty of the radius to be 5%, the uncertainty is defined as:

\[\left(\dfrac{dx}{x}\right)=\left(\dfrac{∆x}{x}\right)= 5\% = 0.05.\nonumber\]

Now we are ready to use calculus to obtain an unknown uncertainty of another variable. Let's say we measure the radius of an artery and find that the uncertainty is 5%. What is the uncertainty of the measurement of the volume of blood passing through the artery? Let's say the equation relating radius and volume is:

\[V(r) = c(r^2) \nonumber\]

where \(c\) is a constant, \(r\) is the radius, and \(V(r)\) is the volume.

The first step to finding the uncertainty of the volume is to understand our given information. Since we are given that the radius has a 5% uncertainty, we know that (∆r/r) = 0.05.
We are looking for (∆V/V). The next step is to take the derivative of this equation, which gives:

\[\dfrac{dV}{dr} = \dfrac{∆V}{∆r}= 2cr \nonumber\]

We can now multiply both sides of the equation by ∆r to obtain:

\[∆V = 2cr(∆r) \nonumber\]

Since we are looking for (∆V/V), we divide both sides by V to get:

\[\dfrac{∆V}{V} = \dfrac{2cr(∆r)}{V} \nonumber\]

We are given the equation of the volume, \(V = c(r)^2\), so we can substitute this for \(V\) in the previous equation to get:

\[\dfrac{∆V}{V} = \dfrac{2cr(∆r)}{c(r)^2} \nonumber \]

Now we can cancel variables that are in both the numerator and denominator to get:

\[\dfrac{∆V}{V} = \dfrac{2∆r}{r} = 2 \left(\dfrac{∆r}{r}\right) \nonumber \]

We have now narrowed down the equation so that ∆r/r is left. We know the value of the uncertainty ∆r/r to be 5%, or 0.05. Plugging this value in for ∆r/r we get:

\[\dfrac{∆V}{V} = 2 (0.05) = 0.1 = 10\% \nonumber\]

The uncertainty of the volume is 10%. This method can be used in chemistry as well, not just in the biological example shown above.

In an ideal case, the propagation of error estimate above will not differ from the estimate made directly from the measurements. However, in complicated scenarios, they may differ, for example because of covariance between the measured variables. Covariance terms can be difficult to estimate if measurements are not made in pairs, and sometimes these terms are omitted from the formula.

Propagation of Error is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jarred Caldwell & Alex Vahidsafa.
Propagation of Uncertainty
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Data_Analysis/Propagation_of_Uncertainty
In Chapter 4 we considered the basic mathematical details of a propagation of uncertainty, limiting our treatment to the propagation of measurement error. This treatment is incomplete because it omits other sources of uncertainty that influence the overall uncertainty in our results. Consider, for example, Practice Exercise 4.2, in which we determined the uncertainty in a standard solution of Cu2+ prepared by dissolving a known mass of Cu wire with HNO3, diluting to volume in a 500-mL volumetric flask, and then diluting a 1-mL portion of this stock solution to volume in a 250-mL volumetric flask. To calculate the overall uncertainty we included the uncertainty in the sample's mass and the uncertainty of the volumetric glassware. We did not consider other sources of uncertainty, including the purity of the Cu wire, the effect of temperature on the volumetric glassware, and the repeatability of our measurements. In this appendix we take a more detailed look at the propagation of uncertainty, using the standardization of NaOH as an example.Because solid NaOH is an impure material, we cannot directly prepare a stock solution by weighing a sample of NaOH and diluting to volume. Instead, we determine the solution's concentration through a process called a standardization.2 A fairly typical procedure is to use the NaOH solution to titrate a carefully weighed sample of previously dried potassium hydrogen phthalate, C8H5O4K, which we will write here, in shorthand notation, as KHP. For example, after preparing a nominally 0.1 M solution of NaOH, we place an accurately weighed 0.4-g sample of dried KHP in the reaction vessel of an automated titrator and dissolve it in approximately 50 mL of water (the exact amount of water is not important). The automated titrator adds the NaOH to the KHP solution and records the pH as a function of the volume of NaOH. 
The resulting titration curve provides us with the volume of NaOH needed to reach the titration's end point.3 The end point of the titration is the volume of NaOH corresponding to a stoichiometric reaction between NaOH and KHP.

\[\ce{NaOH + C8H5O4K → C8H4O4^{2-} + K+ + Na+ + H2O}(l)\]

Knowing the mass of KHP and the volume of NaOH needed to reach the end point, we use the following equation to calculate the molarity of the NaOH solution.

\[\mathrm{C_{NaOH}}= \dfrac{1000 × m_\ce{KHP} × P_\ce{KHP}}{M_\ce{KHP} × V_\ce{NaOH}}\]

where CNaOH is the concentration of NaOH (in mol/L), mKHP is the mass of KHP taken (in g), PKHP is the purity of the KHP (where PKHP = 1 means that the KHP is pure and has no impurities), MKHP is the molar mass of KHP (in g KHP/mol KHP), and VNaOH is the volume of NaOH (in mL). The factor of 1000 simply converts the volume in mL to L.

Although it seems straightforward, identifying sources of uncertainty requires care, as it is easy to overlook important sources of uncertainty. One approach is to use a cause-and-effect diagram, also known as an Ishikawa diagram—named for its inventor, Kaoru Ishikawa—or a fish bone diagram. To construct a cause-and-effect diagram, we first draw an arrow pointing to the desired result; this is the diagram's trunk. We then add five main branch lines to the trunk, one for each of the four parameters that determine the concentration of NaOH and one for the method's repeatability. Next we add additional branches to the main branches for each of these five factors, continuing until we account for all potential sources of uncertainty.

For the mass of KHP, the balance contributes an uncertainty in its calibration (bias) and an uncertainty in the slope of its response (linearity). We can ignore the calibration bias because it contributes equally to both mKHP(gross) and mKHP(tare), and because we determine the mass of KHP by difference.

\[m_\textrm{KHP} = m_\textrm{KHP(gross)} - m_\textrm{KHP(tare)}\]

The volume of NaOH at the end point has three sources of uncertainty.
First, an automated titrator uses a piston to deliver the NaOH to the reaction vessel, which means the volume of NaOH is subject to an uncertainty in the piston's calibration. Second, because a solution's volume varies with temperature, there is an additional source of uncertainty due to any fluctuation in the ambient temperature during the analysis. Finally, there is a bias in the titration's end point if the NaOH reacts with any species other than the KHP.

Repeatability, R, is a measure of how consistently we can repeat the analysis. Each instrument we use—the balance and the automatic titrator—contributes to this uncertainty. In addition, our ability to consistently detect the end point also contributes to repeatability. Finally, there are no additional factors that affect the uncertainty of the KHP's purity or molar mass.

To complete a propagation of uncertainty we must express each measurement's uncertainty in the same way, usually as a standard deviation. Measuring the standard deviation for each measurement requires time and may not be practical. Fortunately, most manufacturers provide a tolerance range for glassware and instruments. A 100-mL volumetric flask, for example, has a tolerance of ±0.1 mL at a temperature of 20°C. We can convert a tolerance range to a standard deviation by assuming one of three distributions: (a) a uniform distribution, (b) a triangular distribution, or (c) a normal distribution.

Now we are ready to return to our example and determine the uncertainty for the standardization of NaOH. First we establish the uncertainty for each of the five primary sources—the mass of KHP, the volume of NaOH at the end point, the purity of the KHP, the molar mass of KHP, and the titration's repeatability. Having established these, we can combine them to arrive at the final uncertainty.

Uncertainty in the Mass of KHP. After drying the KHP, we store it in a sealed container to prevent it from readsorbing moisture.
To find the mass of KHP we first weigh the container, obtaining a value of 60.5450 g, and then weigh the container after removing a portion of KHP, obtaining a value of 60.1562 g. The mass of KHP, therefore, is 0.3888 g, or 388.8 mg.

To find the uncertainty in this mass we examine the balance's calibration certificate, which indicates that its tolerance for linearity is ±0.15 mg. We will assume a uniform distribution because there is no reason to believe that any result within this range is more likely than any other result. Our estimate of the uncertainty for any single measurement of mass, u(m), is

\[u(m) = \mathrm{\dfrac{0.15\: mg}{\sqrt 3} = 0.09\: mg}\]

Because we determine the mass of KHP by subtracting the container's final mass from its initial mass, the uncertainty in the mass of KHP, u(mKHP), is given by the following propagation of uncertainty.

\[u(m_\ce{KHP}) = \mathrm{\sqrt{(0.09\: mg)^2 + (0.09\: mg)^2} = 0.13\: mg}\]

Uncertainty in the Volume of NaOH. After placing the sample of KHP in the automatic titrator's reaction vessel and dissolving it with water, we complete the titration and find that it takes 18.64 mL of NaOH to reach the end point. To find the uncertainty in this volume we need to consider the three sources identified earlier. First, the piston's calibration has a tolerance of ±0.03 mL; assuming a triangular distribution, the standard uncertainty is

\[u(V_\ce{cal}) = \mathrm{\dfrac{0.03\: mL}{\sqrt 6} = 0.012\: mL}\]

To determine the uncertainty due to the lack of temperature control, we draw on our prior work in the lab, which has established a temperature variation of ±3 °C with a confidence level of 95%.
To find the uncertainty, we convert the temperature range to a range of volumes using water's coefficient of expansion

\[\mathrm{(2.1×10^{−4}\:{^\circ C}^{−1}) × (±3\:^\circ C) × 18.64\: mL = ±0.012\: mL}\]

and then estimate the uncertainty due to temperature, u(Vtemp), as

\[u(V_\ce{temp}) = \mathrm{\dfrac{0.012\: mL}{1.96} = 0.006\: mL}\]

Titrations using NaOH are subject to a bias due to the absorption of CO2, which can react with OH–, as shown here.

\[\ce{CO2}(aq) + \ce{2OH-}(aq) → \ce{CO3^2-}(aq) + \ce{H2O}(l)\]

If CO2 is present, the volume of NaOH at the end point includes both the NaOH reacting with the KHP and the NaOH reacting with CO2. Rather than trying to estimate this bias, it is easier to bathe the reaction vessel in a stream of argon, which excludes CO2 from the titrator's reaction vessel.

Combining the uncertainties for the piston's calibration and the lab's temperature fluctuation gives the uncertainty in the volume of NaOH, u(VNaOH), as

\[u(V_\ce{NaOH}) = \mathrm{\sqrt{(0.012\: mL)^2 + (0.006\: mL)^2} = 0.013\: mL}\]

Uncertainty in the Purity of KHP. According to the manufacturer, the purity of KHP is 100% ± 0.05%, or 1.0 ± 0.0005. Assuming a rectangular distribution, we report the uncertainty, u(PKHP), as

\[u(P_\ce{KHP}) = \dfrac{0.0005}{\sqrt 3} = 0.00029\]

Uncertainty in the Molar Mass of KHP. The molar mass of C8H5O4K is 204.2212 g/mol, based on the following atomic weights: 12.0107 for carbon, 1.00794 for hydrogen, 15.9994 for oxygen, and 39.0983 for potassium. Each of these atomic weights has a quoted uncertainty that we can convert to a standard uncertainty assuming a rectangular distribution (the details of the calculations are left to you). Combining these uncertainties gives the uncertainty in the molar mass, u(MKHP), as

\[u(M_\ce{KHP}) = \mathrm{\sqrt{(8 × 0.00046)^2 + (5 × 0.000040)^2 + (4 × 0.00017)^2 + (0.000058)^2} = 0.0038\: g/mol}\]

Uncertainty in the Titration's Repeatability.
To estimate the uncertainty due to repeatability we complete five titrations, obtaining results for the concentration of NaOH of 0.1021 M, 0.1022 M, 0.1022 M, 0.1021 M, and 0.1021 M. The relative standard deviation, sr, for these titrations is

\[s_\ce{r} = \dfrac{5.477×10^{-5}}{0.1021} = 0.0005\]

If we treat the ideal repeatability as 1.0, then the uncertainty due to repeatability, u(R), is equal to the relative standard deviation, or, in this case, 0.0005.

Combining the Uncertainties. Table A2.1 summarizes the five primary sources of uncertainty. As described earlier, to calculate the concentration of NaOH we use the following equation, which is slightly modified to include a term for the titration's repeatability, which, as described above, has a value of 1.0.

\[\mathrm{C_{NaOH}} = \dfrac{1000 × m_\ce{KHP}× P_\ce{KHP}}{M_\ce{KHP}× V_\ce{NaOH}} × R\]

Using the values from Table A2.1, we find that the concentration of NaOH is

\[C_\ce{NaOH} = \dfrac{1000 × 0.3888 × 1.0}{204.2212 × 18.64} × 1.0 = \mathrm{0.1021\: M}\]

Because the calculation of CNaOH includes only multiplication and division, the uncertainty in the concentration, u(CNaOH), is given by the following propagation of uncertainty.

\[\dfrac{u(C_\ce{NaOH})}{C_\ce{NaOH}}= \dfrac{u(C_\ce{NaOH})}{0.1021\: \ce M} = \sqrt{\dfrac{(0.00013)^2}{(0.3888)^2} + \dfrac{(0.00029)^2}{(1.0)^2} + \dfrac{(0.0038)^2}{(204.2212)^2} + \dfrac{(0.013)^2}{(18.64)^2} + \dfrac{(0.0005)^2}{(1.0)^2}}\]

Solving for u(CNaOH) gives its value as ±0.00010 M, which is the final uncertainty for the analysis.

David Harvey (DePauw University)

This page titled Propagation of Uncertainty is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David Harvey.
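The entire propagation can be reproduced numerically. The Python sketch below uses only the values quoted above; the variable names are mine, and u_M is taken directly from the molar-mass result rather than recomputed from the individual atomic weights.

```python
import math

# Values quoted in the worked example above
m_KHP = 0.3888                                   # g
u_m = math.sqrt(2) * (0.15e-3 / math.sqrt(3))    # g; two weighings, uniform distribution
P_KHP, u_P = 1.0, 0.0005 / math.sqrt(3)          # purity, rectangular distribution
M_KHP, u_M = 204.2212, 0.0038                    # g/mol (u_M taken from the text)
V_NaOH = 18.64                                   # mL
u_V = math.sqrt((0.03 / math.sqrt(6))**2 + (0.012 / 1.96)**2)  # mL; piston + temperature
R, u_R = 1.0, 0.0005                             # repeatability

C = 1000 * m_KHP * P_KHP / (M_KHP * V_NaOH) * R  # mol/L

# Multiplication and division only, so relative uncertainties add in quadrature
rel = math.sqrt((u_m / m_KHP)**2 + (u_P / P_KHP)**2 + (u_M / M_KHP)**2
                + (u_V / V_NaOH)**2 + (u_R / R)**2)
u_C = C * rel

print(f"C = {C:.4f} M, u(C) = {u_C:.5f} M")  # C = 0.1021 M, u(C) = 0.00010 M
```

Running this reproduces the concentration of 0.1021 M and the final uncertainty of ±0.00010 M obtained above.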
Properties of Select Nonmetal Ions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Qualitative_Analysis/Properties_of_Select_Nonmetal_Ions
Carbonate Ion (CO₃²⁻)
Carbonate ion, a moderately strong base, undergoes considerable hydrolysis in aqueous solution. In strongly acidic solution, CO2 gas is evolved.

Halide Ions (Cl⁻, Br⁻, I⁻)
These ions are all very weak bases since they are the conjugate bases of very strong acids. Hence, they undergo negligible hydrolysis.

Phosphate Ion (PO₄³⁻)
Phosphate ion is a reasonably strong base. It hydrolyzes in water to form a basic solution.

Sulfate Ion (SO₄²⁻)
Sulfate ion is a very weak base. Because it is such a weak base, sulfate ion undergoes negligible hydrolysis in aqueous solution.

Sulfide Ion (S²⁻)
Sulfide is a strong base, so solutions of sulfide in water are basic, due to hydrolysis. Sulfide solutions develop the characteristic rotten-egg odor of H2S as a result of this hydrolysis.

Sulfite Ion (SO₃²⁻)
Sulfite ion is a weak base, but does undergo some hydrolysis to produce basic solutions. In acidic solution, the equilibria are shifted to form sulfurous acid, resulting in the evolution of SO2 gas. Sulfur dioxide is a colorless gas with a characteristic choking odor.

This page titled Properties of Select Nonmetal Ions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by James P. Birk.
Rechargeable Batteries
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Exemplars/Rechargeable_Batteries
Rechargeable batteries (also known as secondary cells) are batteries whose cell reactions are reversible, allowing them to recharge, or regain their cell potential, through the work done by passing an electric current through them. As opposed to primary cells (which are not reversible), rechargeable batteries can charge and discharge numerous times.

Secondary cells rely on the same mechanism as primary cells, with the only difference being that the redox reaction of a secondary cell can be reversed when sufficient energy is supplied. The figure below illustrates the mechanism of a charging secondary cell. The charger shown at the top of the diagram pulls the negative charges toward the right side of the separator, so positive charge accumulates on the other side of the cell, unable to pass the separator. This disequilibrium represents the cell potential that, when the circuit is closed, can once again approach equilibrium through the transfer of electrons.

Different secondary batteries serve different functions. Long-term use (with repeated discharging and charging), long storage time when not in use, remote activation, and use under harsh weather conditions are just a few of the challenges in designing secondary cells. Unfortunately, no battery is capable of meeting all of the requirements mentioned above. Therefore, the user must decide which attributes are most important for a specific task in order to choose the most suitable type of rechargeable battery.

Lead-acid batteries are one of the most common secondary batteries, used primarily for storing large cell potential. These are commonly found in automobile engines. Their advantages include low cost, high voltage, and large storage capacity; their disadvantages include heavy mass, poor performance at low temperatures, and inability to maintain their potential over long periods of disuse.
The reactions of a lead-acid battery are shown below:

\[PbO_{2(s)} + HSO^−_{4(aq)} + 3H^+_{(aq)} + 2e^− \rightarrow PbSO_{4(s)} + 2H_2O_{(l)} \label{19.90}\]

\[Pb_{(s)} + PbO_{2(s)} + 2HSO^−_{4(aq)} + 2H^+_{(aq)} \rightarrow 2PbSO_{4(s)} + 2H_2O_{(l)} \label{19.92}\]

Discharging occurs when the engine is started, with a cell potential of 2.02 V. Charging occurs while the car is in motion; the reverse reaction, with an electrode potential of -2.02 V, is non-spontaneous and requires an external electrical source.

The nickel-cadmium (NiCd) battery is another common secondary battery that is suited for low-temperature conditions and has a long shelf life. However, nickel-cadmium batteries are more expensive and their capacity in terms of watt-hours per kilogram is less than that of nickel-zinc rechargeable batteries.

\[2NiO(OH)_{(s)} + 2H_2O_{(l)} + 2e^− \rightarrow 2Ni(OH)_{2(s)} + 2OH^-_{(aq)} \label{19.86}\]

\[Cd_{(s)} + 2OH^-_{(aq)} \rightarrow Cd(OH)_{2(s)} + 2e^- \label{19.87}\]

\[Cd_{(s)} + 2NiO(OH)_{(s)} + 2H_2O_{(l)} \rightarrow Cd(OH)_{2(s)} + 2Ni(OH)_{2(s)} \label{19.88}\]

Advantages of the nickel-zinc battery are its long life span, high voltage, and sufficient energy-to-mass-to-volume ratio. These characteristics make the nickel-zinc battery more attractive than some others. However, it is not yet made in sealed form.

Rechargeable Batteries is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Redox Chemistry
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry
A redox reaction is made up of an oxidation reaction and a reduction reaction occurring simultaneously.

Balancing Redox Reactions
Oxidation-reduction reactions, or redox reactions, are reactions in which one reactant is oxidized and one reactant is reduced simultaneously. This module demonstrates how to balance various redox equations.

Balancing Redox Reactions - Examples

Comparing Strengths of Oxidants and Reductants
The relative strengths of various oxidants and reductants can be predicted using E° values. The oxidative and reductive strengths of a variety of substances can be compared using standard electrode potentials. Apparent anomalies can be explained by the fact that electrode potentials are measured in aqueous solution, which allows for strong intermolecular electrostatic interactions, and not in the gas phase.

Definitions of Oxidation and Reduction
This page discusses the various definitions of oxidation and reduction (redox) in terms of the transfer of oxygen, hydrogen, and electrons. It also explains the terms oxidizing agent and reducing agent.

Half-Reactions
A half reaction is either the oxidation or reduction component of a redox reaction. A half reaction is obtained by considering the change in oxidation states of the individual substances involved in the redox reaction.

Oxidation-Reduction Reactions
An oxidation-reduction (redox) reaction is a type of chemical reaction that involves a transfer of electrons between two species. An oxidation-reduction reaction is any chemical reaction in which the oxidation number of a molecule, atom, or ion changes by gaining or losing an electron. Redox reactions are common and vital to some of the basic functions of life, including photosynthesis, respiration, combustion, and corrosion or rusting.

Oxidation State

Oxidation States II

Oxidation States (Oxidation Numbers)
This page explains what oxidation states (oxidation numbers) are and how to calculate and use them.

Oxidizing and Reducing Agents
Oxidizing and reducing agents are key terms used in describing the reactants in redox reactions that transfer electrons between reactants to form products. This page discusses what defines an oxidizing or reducing agent, how to determine an oxidizing and reducing agent in a chemical reaction, and the importance of this concept in real world applications.

Standard Reduction Potential
The standard reduction potential is the tendency of a chemical species to be reduced, measured in volts at standard conditions. The more positive the potential, the more likely the species is to be reduced.

The Fall of the Electron
In oxidation-reduction ("redox") reactions, electrons are transferred from a donor (reducing agent) to an acceptor (oxidizing agent). But how can one predict whether, or in which direction, such a reaction will actually go? Presented below is a very simple way of understanding how different redox reactions are related.

Writing Equations for Redox Reactions
This page explains how to work out electron-half-reactions for oxidation and reduction processes, and then how to combine them to give the overall ionic equation for a redox reaction. This is an important skill in inorganic chemistry.

Redox Chemistry is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Reduction Potential Intuition
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Potentials/Reduction_Potential_Intuition
Reduction potentials are relative and are reported relative to the reduction of protons in a standard hydrogen electrode (SHE) in Tables P1 and P2. However, what would we see if we used some other sort of electrode for comparison? For example, instead of a hydrogen electrode, we might use a fluorine electrode, in which we have fluoride salts and fluorine gas in solution with a platinum electrode. The "reduction potentials" we measure would all be relative to this reaction now. They would tell us: how much more motivated is this ion to gain an electron than fluorine? The resulting table would look something like this:The trouble is, there is not much out there that would be more motivated than fluorine to gain an electron. All of the potentials in the table are negative because none of the species on the left could take an electron away from fluoride ion. However, if we turned each of these reactions around, the potentials would all become positive. That means gold, for instance, could give an electron to fluorine to become \(\ce{Au^{+}}\). This electron transfer would be spontaneous, and a voltmeter in the circuit between the gold electrode and the fluorine electrode would measure a voltage of -1.04 V.That potential is generated by the reaction:\[ \ce{ 2 Au (s) + F_2 (g) -> 2 Au^{+} + 2 F^{-}} \nonumber \]Notice that, because 2 electrons are needed to reduce the fluorine to fluoride, and because gold only supplies one electron, two atoms of gold would be needed to supply enough electrons.The reduction potentials in Table \(\PageIndex{1}\) are, indirectly, an index of differences in electronic energy levels. The electron on gold is at a higher energy level than if it were on fluoride. It is thus motivated to spontaneously transfer to the fluorine atom, generating a potential in the circuit of -1.04V.Silver metal is even more motivated to donate an electron to fluorine. 
An electron from silver can "fall" even further than an electron from gold, to a lower energy level on fluoride. The potential in that case would be -2.074 V.

There are a couple of things to note here. The first is that, if potential is an index of the relative energy level of an electron, it does not matter whether one electron or two is transferred. They are transferred from the same, first energy level to the same, second energy level. The distance that the electron falls is the same regardless of the number of electrons that fall. A reduction potential reflects an inherent property of the material and does not depend on how many electrons are being transferred.

Another important note is that, if reduction potentials provide a glimpse of electronic energy levels, we may be able to deduce new relationships from previous information. For example, if an electron on gold is 1.04 V above an electron on fluoride, and an electron on silver is 2.074 V above an electron on fluoride, what can we deduce about the relative energy levels of an electron on silver vs. gold?

The answer is that the electron on silver is 1.034 V above the electron on gold. We know this because reduction potentials are "state functions", reflecting an intrinsic property of a material. It does not matter how we get from one place to another; the answer will always be the same. That means that if we transfer an electron from silver to gold indirectly, via fluorine, the overall potential will be the same as if we transfer the electron directly from silver to gold. So, the electron drops from silver to fluorine (a drop of 2.074 V). The electron then hops (under duress) up to gold (a climb of 1.04 V).
The net drop is only 1.034 V. That's the same value we would expect to measure if we took a standard solution of gold salts and a gold electrode and connected it, via a circuit, to a standard solution of silver salts and a silver electrode.

In fact, the table above does not reflect any experimental measurements; it's simply the table of standard reduction potentials from the previous page, with the reduction potential of fluorine subtracted from all the other values. In other words, mapping out the distance from iron to SHE (in terms of the reduction potential for \(Fe^{2+} + 2 e^- \rightarrow Fe(s)\)), together with the distance from SHE to fluorine, gives the potential relative to IFE (imaginary fluorine electrode).

Frequently in biology, electron transfers are made more efficient through a series of smaller drops rather than one big jump. To think about this, consider the transfer of an electron from lithium to fluorine. (Neither of these species is likely to be found in an organism, but this transfer is a good illustration of a big energy difference.)

An "activity series" is a ranking of elements in terms of their "activity" or their ability to provide electrons. The series is normally written in a column, with the strongest reducing metals at the top. Beside these elements, we write the ion produced when the metal loses its electron(s). Looking at the table of reduction potentials relative to fluorine on this page, construct an activity series for the available elements.

Construct an activity series for the alkali metals using the following standard reduction potentials (relative to SHE): Fr, -2.9 V; Cs, -3.026 V; Rb, -2.98 V; K, -2.931 V; Na, -2.71 V; Li, -3.04 V.

In reality, the energy gap that leads to a reduction potential is sometimes more complicated than following an electron as it moves from one level to another.
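The state-function arithmetic and the activity-series exercise above can be sketched in a few lines of Python; the dictionaries simply restate the potentials quoted in the text.

```python
# Reduction potentials relative to an imaginary fluorine electrode (V),
# as quoted above.
E_vs_F = {"Au": -1.04, "Ag": -2.074}

# Potentials behave like state functions, so the potential for an electron
# transferred from silver to gold is just the difference of table values.
print(E_vs_F["Au"] - E_vs_F["Ag"])  # ≈ 1.034 V

# Standard reduction potentials vs SHE (V) for the alkali metals.
E_vs_SHE = {"Fr": -2.9, "Cs": -3.026, "Rb": -2.98,
            "K": -2.931, "Na": -2.71, "Li": -3.04}

# Activity series: strongest reducing agent (most negative reduction
# potential) at the top.
activity_series = sorted(E_vs_SHE, key=E_vs_SHE.get)
print(activity_series)  # ['Li', 'Cs', 'Rb', 'K', 'Fr', 'Na']
```

Note that subtracting a constant reference potential from every entry, as the fluorine-referenced table does, leaves all such differences unchanged.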
Use the activity series you have constructed for the alkali metals to compare and contrast the redox potential with your expectations of energy level / ease of electron donation based on standard periodic trends.

Chris P Schaller, Ph.D., (College of Saint Benedict / Saint John's University)

This page titled Reduction Potential Intuition is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Chris Schaller.
SI Units
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Units_of_Measure/SI_Units
The International System of Units (SI) is a system of units of measurement that is widely used all over the world. This modern form of the metric system is based around the number 10 for convenience. A set of prefixes has been established, known as the SI prefixes or metric prefixes. The prefixes indicate whether the unit is a multiple or a fraction of the base ten. They allow the reduction of zeros in a very small or very large number, turning 0.000000001 meter and 7,500,000 joules into 1 nanometer and 7.5 megajoules respectively. These SI prefixes also have a set of symbols that precede the unit symbol. However, countries such as the United States, Liberia, and Burma have not officially adopted the International System of Units as their primary system of measurement. Since the SI units are used nearly globally, though, the scientific and mathematical fields rely on them to make sharing data easier through a common set of measurements.

The SI contains seven base units that each represent a different kind of physical quantity. These are commonly used as a convention. Derived units are created by mathematical relationships between base units and are expressed as a combination of fundamental and base quantities. Metric units use a prefix, used for conversion from or to an SI unit. Below is a chart illustrating how prefixes are labeled in metric measurements.

Temperature is usually measured in Celsius (although the U.S. still uses Fahrenheit), but is often converted to the absolute Kelvin scale for many chemistry problems. Reference points: the Kelvin scale does not use the degree symbol (°), only K, and its values can only be positive since it is an absolute scale.

Mass is usually measured with a sensitive balance. The U.S. usually makes measurements in inches and feet, but the SI system prefers meters as the unit for length. The SI system commonly uses derived units for volume, such as meters cubed, alongside liters.

Convert to the appropriate SI units:

#1-4

#5-8

SI Units is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
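The prefix and temperature conversions described above can be sketched as follows; the helper names are mine, not standard library functions.

```python
# SI prefix multipliers (a small subset)
PREFIXES = {"n": 1e-9, "µ": 1e-6, "m": 1e-3, "k": 1e3, "M": 1e6}

def to_base_unit(value, prefix):
    """Convert a prefixed quantity to its base unit, e.g. 7.5 MJ -> J."""
    return value * PREFIXES[prefix]

def celsius_to_kelvin(t_celsius):
    # Kelvin is an absolute scale: no degree symbol, and K = °C + 273.15.
    return t_celsius + 273.15

print(to_base_unit(7.5, "M"))   # 7500000.0 (7.5 MJ expressed in J)
print(celsius_to_kelvin(25.0))  # 298.15
```

A quantity like 0.000000001 m round-trips the other way: dividing by the nano multiplier recovers 1 nm.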
SI Units - A Summary
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Units_of_Measure/SI_Units_-_A_Summary
Learning Objectives

List all the basic quantities and their units you know of, and look up those that you do not know yet. Understanding and proper expression of quantities are basic skills for any modern educated person. You have to master all quantities described here.

Quantities form the basis for science, engineering, and every moment of our lives. Unless you have expressed a quantity in numbers and units, you have not expressed anything. Quantities are defined only when they are expressed in numbers and units. Missing units and improper use of units are serious omissions and errors.

Years ago, physicists used either the mks (meter-kilogram-second) system of units or the cgs (centimeter-gram-second) system for length, mass, and time. In addition to these three basic quantities are four others: the electric charge or current, temperature, luminous intensity, and the amount of substance. Chemical quantities are mostly based on the last one. Thus, there are seven basic quantities, and each has a unit.

The international system of units (Système International d'Unités) was adopted by the General Conference on Weights and Measures in 1960, and the SI units are widely used today. All SI units are based on these basic units.

Close your eyes, and see if you can name the seven fundamental quantities in science and their SI units. Science is based on only seven basic quantities; for each, we have to define a standard unit. Think about why these are the basic quantities. Are they related to any other quantities? Can they be derived from other quantities?

There are other quantities aside from the seven basic quantities mentioned above. However, all other quantities are related to the basic ones, so their units can be derived from the seven SI units above. For this reason, other units are called derived units. The table below lists some examples:

Derived Quantities and Their SI Units

Derived units can be expressed in terms of basic quantities.
From the specific derived unit, you can reason its relationship with the basic quantities.

For some specific common quantities, the SI units have special symbols. As you use these often, you will feel at home with them. Memorizing them outright is hard, but you will encounter them during your study of these quantities. They are collected here to point out that these are special SI symbols.

Special Symbols of Some SI Units

The following units are still in common use for chemistry. There are some other commonly used units too, but their meanings are clear by the time you use them.

Common Units Still in Use

The following units are used in special technologies or disciplines. Since most people are not familiar with them, they are explained in more detail here.

The erg is a unit of energy; 1 J = 10,000,000 erg.
The newton (N) is named for Newton, who defined force; one newton is approximately the gravitational pull on a 100 g mass.
The pascal (Pa) is named for Pascal, who studied the effect of pressure on fluids; 1 atm = 101325 Pa = 101.3 kPa.
The joule (J) is an energy unit; 1 J = 1 N·m = 1e7 erg.
The kelvin (K) is the unit of temperature; 0 °C is the same as 273.15 K.
The mole (mol) is derived from Latin, meaning mass (a heap); one mole contains 6.022e23 atoms or molecules.
The seven base units are m, kg, s, A, K, cd, and mol, for length, mass, time, current, temperature, luminous intensity, and amount of substance.
M stands for mol/L, a concentration unit.
The ampere (A) is 1 C/s, for an electric current.
The watt (W), 1 J/s, is the unit for power.
The becquerel (Bq) is named for Becquerel, who discovered radioactivity; 1 Ci = 3.7e10 Bq.

Chung (Peter) Chieh (Professor Emeritus, Chemistry University of Waterloo)

SI Units - A Summary is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
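The relations in the list above lend themselves to a quick numerical check; this sketch (not part of the original summary) verifies a few of them.

```python
# Unit relations from the summary above
ERG_PER_JOULE = 1e7
PA_PER_ATM = 101325
BQ_PER_CI = 3.7e10

assert 1 * ERG_PER_JOULE == 10_000_000   # 1 J = 10,000,000 erg
assert PA_PER_ATM / 1000 == 101.325      # 1 atm = 101.325 kPa (≈ 101.3 kPa)
assert 0 + 273.15 == 273.15              # 0 °C on the Kelvin scale
assert 1 * BQ_PER_CI == 3.7e10           # 1 Ci = 3.7e10 Bq
print("all unit relations check out")
```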
Sacrificial Anode
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Exemplars/Corrosion/Sacrificial_Anode
Sacrificial anodes are highly active metals used to prevent a less active material's surface from corroding. A sacrificial anode is created from a metal alloy with a more negative electrochemical potential than the metal it will be used to protect. The sacrificial anode is consumed in place of the metal it is protecting, which is why it is referred to as "sacrificial."

When metal surfaces come into contact with electrolytes, they undergo an electrochemical reaction known as corrosion. Corrosion is the process of returning a metal to its natural state as an ore; in this process, the metal disintegrates and its structure grows weak. These metal surfaces are all around us, from pipelines to buildings to ships. It is important to ensure that these metals last as long as they can, which necessitates what is known as cathodic protection. Sacrificial anodes are one of several forms of cathodic protection.

Iron in seawater is one example of a metal in contact with electrolytes. Under normal circumstances, the iron would react with the electrolytes and begin to corrode, growing weaker in structure and disintegrating. Attaching zinc, a sacrificial anode, prevents the iron from corroding. According to the table of standard reduction potentials, the standard reduction potential of zinc is about -0.76 volts, while that of iron is about -0.44 volts. This difference in reduction potential means that zinc oxidizes much faster than iron; in fact, zinc would oxidize completely before iron would begin to react. The materials used for sacrificial anodes are either relatively pure active metals, such as zinc or magnesium, or magnesium or aluminum alloys that have been specifically developed for use as sacrificial anodes.
In applications where the anodes are buried, a special backfill material surrounds the anode in order to ensure that the anode will produce the desired output.

The sacrificial anode works by introducing another metal surface with a more negative electrode potential, which becomes the more anodic surface. Current flows from the newly introduced anode, the protected metal becomes cathodic, and a galvanic cell is created. The oxidation reactions are transferred from the metal surface to the galvanic anode, which is sacrificed in favor of the protected metal structure. The more active metal (anode) is sacrificed to protect the less active metal (cathode), as can be seen on the partially corroded sacrificial anodes on ships' hulls. The amount of corrosion depends on the metal being used as an anode but is directly proportional to the amount of current supplied.

Sacrificial anodes are used to protect the hulls of ships, water heaters, pipelines, distribution systems, above-ground tanks, underground tanks, and refineries. The anodes in sacrificial anode cathodic protection systems must be periodically inspected and replaced when consumed.

Sacrificial Anode is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
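The zinc/iron comparison above reduces to a simple rule: the metal with the more negative standard reduction potential oxidizes preferentially and serves as the sacrificial anode. A minimal sketch (the helper function is hypothetical, for illustration only):

```python
# Standard reduction potentials vs SHE, in volts (values from the text)
E_STANDARD = {"Zn": -0.76, "Fe": -0.44}

def pick_sacrificial_anode(metals, potentials=E_STANDARD):
    """Return the metal with the most negative reduction potential:
    it oxidizes first, protecting the others."""
    return min(metals, key=potentials.get)

print(pick_sacrificial_anode(["Zn", "Fe"]))  # Zn
```

Extending the potentials dictionary with magnesium or aluminum alloys would let the same comparison cover the other common anode materials mentioned above.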
Scanning Probe Microscopy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Microscopy/Scanning_Probe_Microscopy
This module provides an introduction to Scanning Probe Microscopy (SPM). SPM is a family of microscopy techniques in which a sharp probe (2-10 nm) is scanned across a surface and probe-sample interactions are monitored. SPM is an extremely useful tool that is utilized in numerous research settings, ranging from chemistry and materials science to the biological sciences. In addition to imaging surfaces with nanometer resolution, SPM can also be used to determine a variety of properties including surface roughness, friction, surface forces, binding energies, and local elasticity. This module presents the basic theory and applications of SPM and is intended for undergraduates and anyone who wants an introduction to SPM. There are two primary forms of SPM: Scanning Tunneling Microscopy (STM) and Atomic Force Microscopy (AFM). The basic theory of both of these techniques is presented here, along with an introduction to some additional SPM characterization methods. This work is partially supported through NSF grant DMR-0526686. The authors would also like to acknowledge the participants at the ASDL Curriculum Development Workshop held at the University of California - Riverside, July 10-14, 2006.

This page titled Scanning Probe Microscopy is shared under a CC BY-NC-SA 2.5 license and was authored, remixed, and/or curated by Heather A. Bullen & Robert A. Wilson via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
Limitations of the Scientific Method
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/The_Scientific_Method/Science_vs._Pseudo-science%3A_Limitations_of_the_Scientific_Method
Learning Objectives

Pseudo-science, basically "fake science," consists of scientific claims which are made to appear factual when they are actually false. Many people question whether pseudo-science should even contain the word "science," as pseudo-science isn't really even an imitation of science; it pretty much disregards the scientific method altogether. Also known as alternative or fringe science, pseudo-science relies on invalid arguments called sophisms, a word Webster's dictionary defines as "an argument apparently correct in form but actually invalid; especially: such an argument used to deceive." Pseudo-science usually lacks supporting evidence and does not abide by the scientific method. That is, pseudo-theories fail to use carefully cultivated and controlled experiments to test a hypothesis. A scientific hypothesis must include observable, empirical and testable data, and must allow other experts to test the hypothesis. Pseudo-science does not accomplish these goals. Several examples of pseudo-science include phrenology, astrology, homeopathy, reflexology and iridology.

In order to distinguish a pseudo-science, one must look at the definition of science and the aspects that make science what it is. Science is a process based on observations, conjectures, and assessments to provide a better understanding of the natural phenomena of the world. Science generally follows a formal system of inquiry which consists of observations, explanations, experiments, and lastly, hypotheses and predictions. Scientific theories are always challenged by experts and revised to fit new data. Pseudo-science, however, is mostly based on beliefs and it greatly opposes contradictions. Its hypotheses are never revised to fit new data or information. Scientists continually disprove ideas to achieve a better understanding of the physical world, whereas pseudo-scientists focus on proving theories to make their claims seem plausible.
For example, science textbooks come out with new editions every few years to correct typos, update information, add new illustrations, and so on. Pseudo-science textbooks, however, often appear in only one edition, which is never updated or revised even if the theory has been proven false. Pseudo-science beliefs tend to be greatly exaggerated and very vague. Complicated technical language is often used to sound impressive, but it is usually meaningless. For example, a phrase like "energy vibrations" sounds remarkable, but a phrase like this is insignificant and does not really explain anything. Furthermore, pseudo-science often consists of outrageous yet unprovable claims. Thus, pseudo-scientists tend to focus on confirming their ideas rather than finding evidence that refutes them. The following dialogue contains the thought processes behind pseudo-science. The dialogue above features many key characteristics of pseudo-science. The speaker makes his or her point seem valid through two facts alone: that her friend had a personal experience, and that science has no proof to show the theory wrong. Finally, the speaker insults anyone who would challenge the theory. In science, challenges to a theory are accepted, as everyone shares the common goal of improving the understanding of the natural world. Below is a table that lays out the key characteristics of science and pseudo-science. Phrenology, also known as craniology, was a "science" popular during the early 1800s centered on the idea that the brain was an organ of the mind. During this time, most people believed that the brain was divided into distinct sections that each controlled a different part of a person's personality or intelligence. The basis of phrenology is the concept that the brain mirrors a muscle: those parts of the brain that are "exercised" the most will be proportionally larger than those parts that are rarely used. 
Thus, phrenologists pictured the brain as a bumpy surface, with the make-up of the surface differing for every person depending on personality and intelligence. By the mid-19th century, automated phrenology machines existed: essentially a set of spring-loaded probes placed on the head to measure the topography of the skull, from which the machine produced an automated reading of a person's characteristics. Let's consider some of the key characteristics of pseudo-science from our chart and see how they apply to phrenology. Reflexology is a treatment that involves physically applying pressure to the feet or hands, with the belief that each is divided into zones that are "connected" to other parts of the body. Reflexologists thus assert that they can make physical changes throughout the body simply by rubbing one's hands or feet. As we did with phrenology, let's go through the main characteristics of pseudo-science and see how they apply to reflexology. An important distinction should be made between pseudo-science and other types of defective science. Take, for example, the "discovery" of N-rays. While attempting to polarize X-rays, physicist René Prosper Blondlot claimed to have discovered a new type of radiation he called N-rays. After Blondlot shared his exciting discovery, many other scientists confirmed his beliefs by saying they too had seen the N-rays. Though he claimed N-rays had impossible properties, Blondlot asserted that when he put a hot wire in an iron tube, he could detect the N-rays using a thread of calcium sulfide that glowed slightly when the rays were sent through an aluminum prism. Blondlot claimed that all substances except some treated metals and green wood emit N-rays. However, Nature magazine was skeptical of Blondlot and sent physicist Robert Wood to investigate. 
Just before Blondlot was to show Wood the rays, Wood removed the aluminum prism from the machine without telling Blondlot. Without the prism, the rays should have been impossible to detect. However, Blondlot claimed to still see the N-rays, demonstrating that the N-rays did not exist; Blondlot just wanted them to exist. This is an example of pathological science, which occurs when scientists practice wishful data interpretation and come up with the results they want to see. Pathological science and pseudo-science differ. For one, Blondlot asked for confirmation by other experts, something pseudo-science usually lacks. More importantly, in pathological science a scientist starts by following the scientific method; Blondlot was indeed doing an experiment when he made his discovery, and he proceeded to experiment when he found substances that did not emit the rays. Pseudo-science usually involves a complete disregard of the scientific method, while pathological science involves following the scientific method but seeing the results you wish to see. Another type of invalid science, hoax science, occurred in 1999 when a team at the Lawrence Berkeley National Laboratory claimed to have discovered elements 116 and 118 after bombarding lead with krypton particles. By 2002, however, it had been discovered that physicist Victor Ninov had intentionally fudged the data to get the desired results. Hoax science, in which data are intentionally falsified, thus differs from both pathological and pseudo-science. In pathological science, scientists wishfully interpret the data and genuinely think they see what they want to see. In hoax science, scientists know they do not see what they want to see, but claim they did. Finally, in pseudo-science, practitioners do not consider the scientific method at all, as they do not use valid experiments to back up their claims in the first place. 
There have been incidents where what was once considered pseudo-science became a respectable theory. In 1911, German astronomer and meteorologist Alfred Wegener began developing the idea of continental drift. The observation that the coastlines of Africa and South America seemed to fit together was not new; scientists just could not believe that the continents could have drifted far enough to cross the 5,000-mile Atlantic Ocean. At the time, a common theory held that a land bridge had once existed between Africa and Brazil. One day in the library, however, Wegener read a study about a species that could not have crossed the ocean, yet whose fossils appeared on both sides of the supposed land bridge. This piece of evidence led Wegener to believe that our world had once been one piece and had since drifted apart. Wegener's theory encountered much hostility and disbelief. At the time, it was the norm for scientists to stay within the scope of their own fields: biologists did not study physics, chemists did not study oceanography, and of course meteorologists and astronomers like Wegener did not study geology. Thus Wegener's theory faced much criticism simply because he was not a geologist. Also, Wegener could not explain why the continents moved, only that they did. This lack of a mechanism led to more skepticism, and all these factors combined led to continental drift being viewed as pseudo-science. Today, however, much evidence shows that continental drift is a perfectly acceptable scientific theory. The modern theory of plate tectonics helps explain continental drift, presenting the idea that the earth's surface is made up of several large plates that often move up to a few inches every year. 
Also, the development of paleomagnetism, which allows us to determine the orientation of the earth's magnetic field at the time a rock formed, suggests that the earth's magnetic poles have changed many times in the last 175 million years and that at one time South America and Africa were connected. Because of the need for completely controlled experiments to test a hypothesis, science cannot prove everything. For example, ideas about God and other supernatural beings can never be confirmed or denied, as no experiment exists that could test for their presence. Supporters of Intelligent Design attempt to present their beliefs as scientific, but the scientific method can never prove them. Science is meant to give us a better understanding of the mysteries of the natural world by refuting previous hypotheses, and the existence of supernatural beings lies outside of science altogether. Another limitation of the scientific method arises when it comes to making judgments about whether certain scientific phenomena are "good" or "bad". For example, the scientific method alone cannot say that global warming is bad or harmful to the world, as it can only study its objective causes and consequences. Furthermore, science cannot answer questions about morality, as scientific results lie outside the scope of cultural, religious and social influences. Determine whether each statement is true or false (see answers at the bottom of the page). This page titled Limitations of the Scientific Method is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Stephen Lower via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
Semimicro Analytical Techniques
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Qualitative_Analysis/Semimicro_Analytical_Techniques
Performing qualitative analysis properly requires a knowledge of certain basic laboratory techniques. In order to speed up procedures, all techniques will be on a semimicro scale. This scale involves volumes of 1–2 mL of solutions and adding reagents dropwise with eye droppers. Containers will generally be standard 75 mm test tubes, which hold about 3 mL. Techniques for working with volumes of this magnitude are outlined below. Whenever it is necessary to use water in a procedure, use distilled water. Ordinary tap water is not completely pure and may introduce substances for which you are trying to test, or other interfering contamination. When obtaining reagents from the reagent bottles, always dispense the reagent with the dropper contained in the reagent bottle, whether dispensing the reagent directly into your sample or obtaining a quantity of reagent in another container. Do not touch the dropper to the solution to which you are adding the reagent or to your sample container. Do not set the dropper on the reagent bench or lab bench. Return the stopper promptly to the reagent bottle from which it originated. Do not place anything into a reagent bottle other than the dropper contained in it. If you need a volume greater than 2 mL, use a graduated cylinder. For smaller volumes, you may want to calibrate one of your eye droppers by counting how many drops of water it takes to deliver 1 mL into a graduated cylinder. When reagents are added to a solution, it is essential that the solution be stirred thoroughly. Stirring rods can be prepared by cutting short lengths of thin glass rod and fire-polishing the ends. The stirring rods get wet with each use and, if not properly cleaned, will contaminate the next solution. A simple way to keep stirring rods clean is to place them in a beaker of clean distilled water and swirl them about after each use. The contamination will be highly diluted and can remain in the water. 
However, it is advisable to change the water periodically to minimize contamination. At times you will want to make a solution acidic or basic. Add the proper reagent dropwise, stirring well with a stirring rod after each addition, and test the pH at appropriate intervals by touching the tip of the stirring rod to litmus or other pH indicating paper. Continue this procedure until the paper turns the proper color. If litmus paper is not sufficiently sensitive, obtain some pH indicator paper, which is available for various ranges of the pH scale. In order to detect the formation of a precipitate, both the solution being used and the reagent must be clear (transparent, but not necessarily colorless). Precipitation is accomplished by adding the specified amount of reagent to the solution and stirring well. Stir both in a circular direction and up and down. When precipitation appears to be complete, centrifuge to separate the solid. Before removing the supernatant liquid with a dropper or by decanting (pouring off), add a few more drops of the reagent to check for complete precipitation. If more precipitation occurs, add a few more drops of reagent, centrifuge, and test again. A centrifuge is used to separate a precipitate from a liquid. Put the test tube containing the precipitate into one of the locations in the centrifuge. Place another test tube containing an equal volume of water in the centrifuge location directly opposite your first test tube. This procedure is extremely important; it must be followed to maintain proper balance in the centrifuge. Otherwise, the centrifuge will not function properly and may be damaged. Turn on the centrifuge and let it run for at least 30 seconds. Turn the centrifuge off and let it come to a complete stop without touching it. Stopping the centrifuge with your hand is not only dangerous, but is likely to stir up your precipitate. The precipitate should settle to a compact mass at the bottom of the test tube. 
The liquid above the precipitate (the supernatant) should not have any precipitate suspended in it. If it does, centrifuge again. The supernatant can then be poured off (decanted) into another test tube without disturbing the precipitate. All of the liquid should be decanted in a single pouring motion to avoid resuspending the precipitate. An eye dropper or a dropper with an elongated tip may also be used to draw off the supernatant. After a precipitate has been centrifuged and the supernatant liquid decanted or drawn off, there is still a little liquid present in the precipitate. To remove any ions which might interfere with further testing, this liquid should be removed with a wash liquid, usually distilled water. The wash liquid must be a substance which will not interfere with the analysis, cause further precipitation, or dissolve the precipitate. Add the wash liquid to the precipitate, stir well, centrifuge, and decant the wash liquid. The wash liquid is usually discarded. Precipitates should be washed twice for best results. Sometimes you will want to divide a separated and washed precipitate into two portions, in order to carry out two additional tests. To transfer part of the precipitate to another test tube, add a small amount of distilled water to the precipitate, stir the mixture to form a slurry, and quickly pour half of the slurry into another container. Do not use a spatula; this could contaminate your sample. Test tubes containing reaction mixtures are never to be heated directly over an open flame. If a solution is to be heated, it should be placed in a test tube and suspended in a beaker of boiling (or in some cases only hot) water. It will be convenient to keep a beaker of water hot throughout the laboratory period. If hot water is required in a procedure, it should be distilled water heated in a test tube suspended in the beaker of boiling water. Do not use water directly from the beaker; it may be contaminated. 
Sometimes it is necessary to boil a solution to reduce the volume and concentrate a species or drive off a volatile species. To boil a liquid, place it in a small porcelain casserole or evaporating dish and heat it on a wire gauze with a small flame. Watch it carefully and do not overheat it. Generally, you do not want to heat to dryness, as this might decompose the sample. Stir the solution during the evaporation. Do not try to evaporate a solution in a small test tube. It will take much longer, and the contents of the tube may be ejected if the tube is heated too strongly. Never place a metal spatula in a solution. It may dissolve and cause contamination. If you need to manipulate a solid, use a rubber policeman on a stirring rod. Cleanliness is essential for a successful procedure. All apparatus must be cleaned well with soap and a brush, rinsed with tap water, and finally rinsed with distilled water. In any procedures involving sulfide ion, thioacetamide (\(\ce{CH3CSNH2}\)) should be used as the source of sulfide ion. Upon heating in water (or acidic or basic solution), thioacetamide decomposes to \(\ce{CH3CO2^{-}}\), \(\ce{NH4^{+}}\), and \(\ce{H2S}\) (or \(\ce{S^{2-}}\) in basic solution):

\(\ce{CH3CSNH2 + 2 H2O -> CH3CO2^{-} + NH4^{+} + H2S}\)

This page titled Semimicro Analytical Techniques is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by James P. Birk.
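The dropper calibration described earlier (counting how many drops deliver 1 mL) amounts to a quick unit conversion. Below is a minimal Python sketch; the figure of 20 drops per mL is a hypothetical calibration value for illustration, not a property of any real dropper — you must count drops with your own dropper.

```python
# Sketch of the dropper-calibration arithmetic described in the text.
# drops_per_ml is a HYPOTHETICAL calibration result (20 drops delivered 1 mL).

def drops_needed(target_ml, drops_per_ml):
    """Return the approximate number of drops needed to deliver target_ml."""
    return round(target_ml * drops_per_ml)

drops_per_ml = 20                  # assumed calibration count
ml_per_drop = 1 / drops_per_ml     # volume of one drop, in mL

print(f"Each drop is about {ml_per_drop:.3f} mL")
print(f"For 0.5 mL, dispense about {drops_needed(0.5, drops_per_ml)} drops")
```

With a different calibration count, simply substitute your own drops-per-mL value.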
Separations with Thioacetamide
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Qualitative_Analysis/Separations_with_Thioacetamide
Thioacetamide is an organosulfur compound with the formula \(\ce{C2H5NS}\). This white crystalline solid is soluble in water and serves as a source of sulfide ions in the synthesis of organic and inorganic compounds. Ions not listed here either do not react with sulfide (\(\ce{S^{2-}}\)), or they should have been removed by precipitation as chlorides or sulfates before sulfide is added to the metal ion mixture. Ions that will precipitate in acidic solutions of sulfide: Of these, \(\ce{Hg^{2+}}\), \(\ce{Sn^{2+}}\), and \(\ce{Sb^{3+}}\) dissolve in basic solutions containing excess \(\ce{S^{2-}}\) due to complex ion formation. The others remain insoluble in basic solution. Ions that will precipitate in moderately basic (pH = 9) solutions of sulfide: Of these ions, \(\ce{Al^{3+}}\), \(\ce{Fe^{3+}}\), and \(\ce{Cr^{3+}}\) precipitate as the hydroxide rather than the sulfide. Of those that form insoluble sulfides, all except \(\ce{CoS}\) and \(\ce{NiS}\) are soluble in dilute aqueous hydrochloric acid. The concentration of \(\ce{S^{2-}}\) is the controlling factor in determining whether an ion will precipitate. Consider the dissociation equilibrium for hydrosulfuric acid:

\(\ce{H2S(aq) <=> 2 H^{+}(aq) + S^{2-}(aq)}\)

The more strongly acidic the solution, the lower the concentration of \(\ce{S^{2-}}\). The sulfides that precipitate in base will not precipitate in acid, because the concentration of \(\ce{S^{2-}}\) is too low. Those sulfides that precipitate in acid will also precipitate in base, because the concentration of \(\ce{S^{2-}}\) is higher than necessary for precipitation. To separate the acidic sulfide group ions from the basic sulfide group ions, follow this procedure: To 5 mL of solution containing ions from both sulfide groups, add 10 drops (0.5 mL) of 6 M \(\ce{HNO3}\) and 20 drops (1 mL) of 6 M \(\ce{HCl}\). Evaporate the mixture to dryness slowly in an evaporating dish in a hood. The last few drops should be evaporated with steam by placing the evaporating dish on top of a beaker of boiling water. 
Heating to complete dryness with a flame could evaporate the chloride salts \(\ce{PbCl2}\), \(\ce{HgCl2}\), or \(\ce{SnCl4}\), if they are present. It is necessary to use steam to get the salt mixture completely dry so there will be no excess acid present after evaporation. Add 2 mL of \(\ce{H2O}\) to the cool salt mixture. Swirl and stir to dissolve as much salt as possible. Transfer the solution and the residue to a test tube for precipitation. Rinse the evaporating dish with 1 mL of \(\ce{H2O}\) and 4 drops of 6 M \(\ce{HCl}\) and add the rinse to the same test tube. The test tube should now contain all your salts in 3 mL of solution. If a precipitate is still present, it is probably some oxychloride salts that are not completely dissolved in the 0.38 M \(\ce{H^{+}(aq)}\) solution. Precipitate the acidic sulfide group ions by adding 1 mL of 1 M thioacetamide. Stir and heat the mixture in a boiling water bath for 7 minutes. Then add 1.5 mL \(\ce{H2O}\) and 0.5 mL thioacetamide and heat for another 5 minutes. Prepare the following wash solution while heating your sample, if you need your precipitate for additional separations or tests. Wash solution: add 2 drops of 1 M thioacetamide and 1 mL of 1 M \(\ce{NH4Cl}\) to 1 mL of \(\ce{H2O}\) and heat in a water bath. Remove any pale yellow elemental sulfur present from decomposition of thioacetamide by centrifugation and decanting. The solution should contain any basic sulfide group ions, so centrifuge and save the solution for analysis if it might contain any basic sulfide group ions. Wash the precipitate, which contains the acidic sulfide group ions as sulfide (or hydroxide) salts, with 1 mL of the wash solution. Centrifuge and add the decanted wash liquid to the basic sulfide group solution. Wash the precipitate again with the remaining 1 mL of wash solution. Centrifuge and discard the decanted wash solution. Sulfide precipitates can be dissolved by adding 2–5 mL of 6 M \(\ce{HNO3}\). 
If necessary to dissolve all the solid, add more nitric acid. Heat the mixture in a boiling water bath for a few minutes. Centrifuge and remove the solution. Nitric acid will dissolve some precipitates by shifting the solubility equilibrium; for example:

\(\ce{CuS(s) <=> Cu^{2+}(aq) + S^{2-}(aq)}\)

Hot nitric acid will also oxidize sulfide ion to sulfur:

\(\ce{3 S^{2-} + 8 H^{+} + 2 NO3^{-} -> 3 S + 2 NO + 4 H2O}\)

\(\ce{HgS}\) does not dissolve unless heated for a long time with more concentrated \(\ce{HNO3}\), because it is so insoluble. Be cautious, however, since prolonged heating might oxidize sulfur to sulfate ion, which could precipitate \(\ce{PbSO4}\) if lead ion is present. To get rid of excess sulfide ion in a solution, acidify the solution with \(\ce{HNO3}\) and heat. Centrifuge off any sulfur formed. The solution can be tested for sulfide ion with lead acetate paper, which will turn black due to formation of lead sulfide if sulfide ion is present in the solution.

CAUTION: Hydrogen sulfide is an extremely toxic gas. Work only under a hood.

This page titled Separations with Thioacetamide is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by James P. Birk.
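The claim above, that acidity controls the sulfide concentration through the \(\ce{H2S}\) dissociation equilibrium, can be illustrated numerically. The Python sketch below uses the overall equilibrium \(K = K_{a1}K_{a2} = [\ce{H+}]^2[\ce{S^{2-}}]/[\ce{H2S}]\). The constants used (\(K_{a1} \approx 1.0 \times 10^{-7}\), \(K_{a2} \approx 1.3 \times 10^{-13}\)) and the saturated \(\ce{H2S}\) concentration (≈ 0.10 M) are assumed classic textbook values; reported values for \(K_{a2}\) in particular vary widely between sources.

```python
# Rough illustration of [S^2-] vs. pH for a saturated H2S solution.
# Ka1, Ka2, and the H2S concentration are ASSUMED textbook values.

Ka1, Ka2 = 1.0e-7, 1.3e-13   # stepwise dissociation constants of H2S (assumed)
h2s = 0.10                   # M, approximate saturated H2S concentration

def sulfide_conc(pH):
    """[S^2-] from the overall equilibrium K = [H+]^2 [S^2-] / [H2S]."""
    h = 10.0 ** (-pH)
    return Ka1 * Ka2 * h2s / h**2

for pH in (0.5, 9.0):
    print(f"pH {pH}: [S^2-] is roughly {sulfide_conc(pH):.1e} M")
```

With these assumed constants, going from a strongly acidic solution (pH 0.5) to a moderately basic one (pH 9) raises the sulfide concentration by about seventeen orders of magnitude, which is why only the least soluble sulfides precipitate in acid.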
Significant Digits
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Significant_Digits
Accuracy and precision are very important in chemistry. However, the laboratory equipment and machines used in labs are limited in such a way that they can only determine a certain amount of data. For example, a scale can only measure the mass of an object to a certain decimal place, because no machine is advanced enough to determine an infinite number of digits. Machines are only able to determine a certain number of digits precisely. These precisely determined digits are called significant digits. Thus, a scale that could only measure up to 99.999 mg could only measure to 5 figures of accuracy (5 significant digits). Furthermore, in order to have accurate calculations, the end calculation should not have more significant digits than the original set of data. Significant digits: the number of digits in a figure that express the precision of a measurement instead of its magnitude. The easiest method to determine significant digits is to first determine whether or not a number has a decimal point. This rule is known as the Atlantic-Pacific Rule. The rule states that if a decimal point is Absent, then the zeroes on the Atlantic (right) side are insignificant. If a decimal point is Present, then the zeroes on the Pacific (left) side are insignificant. Example \(\PageIndex{1}\): The first two zeroes in 200500 (four significant digits) are significant because they are between two non-zero digits, and the last two zeroes are insignificant because they are after the last non-zero digit. It should be noted that both constants and quantities of real-world objects have an infinite number of significant figures. For example, if you were to count three oranges, a real-world object, the value three would be considered to have an infinite number of significant figures in this context. Example \(\PageIndex{2}\): How many significant digits are in 5010? Solution: 5 0 1 0 Key: 0 = significant zero. 
0 = insignificant zero. 3 significant digits. Example \(\PageIndex{3}\): The first two zeroes in 0.058000 (five significant digits) are insignificant because they are before the first non-zero digit, and the last three zeroes are significant because they are after the first non-zero digit. Example \(\PageIndex{4}\): How many significant digits are in 0.70620? Solution: 0 . 7 0 6 2 0 Key: 0 = significant zero. 0 = insignificant zero. 5 significant digits. Scientific notation form: a x 10^b, where "b" is an integer and "a" is a number between 1 and 10. Example \(\PageIndex{5}\): The scientific notation for 4548 is 4.548 x 10^3. Example \(\PageIndex{6}\): How many significant digits are in 1.52 x 10^6? NOTE: Only determine the number of significant digits in the "1.52" part of the scientific notation form. Answer: 3 significant digits. When rounding a number to a given number of significant digits, keep the significant digits you wish to keep and replace the other digits with insignificant zeroes. The reason for rounding a number to a particular number of significant digits is that in a calculation, some values have fewer significant digits than others, and the answer to a calculation is only accurate to the number of significant digits of the value with the fewest. NOTE: be careful when rounding numbers with a decimal point. Any zero added after the first non-zero digit is considered to be a significant zero. TIP: When doing calculations for quizzes/tests/midterms/finals, it is best not to round in the middle of your calculations, and to round to the proper number of significant digits only at the end of your calculations. Example \(\PageIndex{7}\): Round 32445.34 to 2 significant digits. Answer: 32000 (NOT 32000.00, which has 7 significant digits. 
Due to the decimal point, the zeroes after the first non-zero digit become significant). When adding or subtracting numbers, the end result should have the same number of decimal places as the number with the fewest decimal places. Example \(\PageIndex{8}\): Y = 232.234 + 0.27. Find Y. Answer: Y = 232.50. NOTE: 232.234 has 3 decimal places and 0.27 has 2 decimal places. The smaller number of decimal places is 2. Thus, the answer must be rounded to the 2nd decimal place (hundredths). When multiplying or dividing numbers, the end result should have the same number of significant digits as the number with the fewest significant digits. Example \(\PageIndex{9}\): Y = 28 x 47.3. Find Y. Answer: Y = 1300. NOTE: 28 has 2 significant digits and 47.3 has 3 significant digits. The smaller number of significant digits is 2. Thus, the answer must be rounded to 2 significant digits (which is done by keeping 2 significant digits and replacing the rest of the digits with insignificant zeroes). Exact numbers can be considered to have an unlimited number of significant figures, as such calculations are not subject to errors in measurement. This may occur:
1. a) 1 significant digit. b) 2 significant digits.
2. 4 significant digits.
3. 4280000
4. 0.06
5. Y = 61.9
6. Y = -3
7. Y = 9270
8. Y = 16
9. Y = (23.2 + 16.723) x 28 = 39.923 x 28 (TIP: Do not round until the end of calculations.) Y = 1100 (NOTE: 28 has the fewest significant digits (2 sig. figs.), so the answer must be rounded to 2 sig. figs.)
10. Y = (16.7 x 23) – (23.2 ÷ 2.13) = 384.1 – 10.89201878 (TIP: Do not round until the end of calculations.) Y = 373.2 (NOTE: 384.1 has the fewest decimal places (tenths), so the answer must be rounded to the tenths place.)
Significant Digits is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
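The counting and rounding rules above can be sketched in Python. This is a minimal illustration rather than a library routine: `count_sig_figs` applies the Atlantic-Pacific rule to a plain decimal string (it does not handle scientific notation), and `round_sig` rounds a value to a chosen number of significant digits.

```python
from math import floor, log10

def count_sig_figs(s):
    """Count significant digits in a plain decimal string
    using the Atlantic-Pacific rule described in the text."""
    s = s.lstrip("+-")
    if "." in s:
        # Decimal point Present: zeroes on the Pacific (left) side are insignificant.
        return len(s.replace(".", "").lstrip("0"))
    # Decimal point Absent: zeroes on the Atlantic (right) side are insignificant.
    return len(s.lstrip("0").rstrip("0"))

def round_sig(x, n):
    """Round x to n significant digits."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - floor(log10(abs(x))))

print(count_sig_figs("200500"))    # 4
print(count_sig_figs("0.058000"))  # 5
print(round_sig(32445.34, 2))      # 32000.0
print(round_sig(28 * 47.3, 2))     # 1300.0
```

The last two calls reproduce Examples 7 and 9: 32445.34 rounded to 2 significant digits is 32000, and 28 x 47.3 = 1324.4 rounds to 1300.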
Significant Figures
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Significant_Digits/Significant_Figures
Significant figures are used to keep track of the quality (variability) of measurements. This includes propagating that information during calculations using the measurements. The purpose of this page is to help you organize the information about significant figures -- to help you set priorities. Sometimes students are overwhelmed by too many rules, and lack guidance about how to sort through them. What is the purpose? Which rules are most important? I regard the following points as most important: I will de-emphasize the following: Let's break that into two parts. One is about the information per se, and the other is about priorities -- about the approach to thinking about Significant Digits. The information here should agree, for the most part. However, what may be different is the order of presenting things, with a different perspective in the approach -- the steps -- to learning Significant Digits. We will all end up in the same place. If you were completely happy with how the Significant Digits topic is presented in your own course, you probably wouldn't be reading this page. Think of it as another approach -- to the same thing. Sometimes, looking at things differently can help. Trying two approaches can be better than trying only one. There is no claim that one approach is "right" or even "better". If there is a discrepancy between any information here and your own course, please let me know -- or check with your own instructor. Some details are a matter of preference. In the lab: when you take a measurement, you record not only the value of the measurement, but also some information about its quality. Using Significant Digits is one simple way to record the quality of the information. A simple and useful statement is that the significant figures (Significant Digits) are the digits that are certain in the measurement plus one uncertain digit. Significant Digits are not a set of arbitrary rules. 
Almost everything about Significant Digits follows from how you make the measurements, and then from understanding how numbers work when you do calculations. Unfortunately, there are "special cases" that can come up with Significant Digits. If all the rules are presented together, it is easy to get lost in the rules. Better -- and what we will do here -- is to emphasize the logic of using Significant Digits. This involves a few basic ideas, which can be stated as rules. We will leave special cases for a while, so they do not confuse the big picture. The number of high priority rules about Significant Digits is small.The best way to start with Significant Digits is in the lab, taking measurements. An alternative is to use an activity that simulates taking measurements -- of various accuracy. We will do that here, using drawings of measurement scales. A bad way to start with Significant Digits is to learn a list of rules.When you take a measurement, you write down the correct number of digits. You write down the significant digits. That is, the way you write a number conveys some information about how accurate it is. It is up to you to determine how many digits are worth writing down. It is important that you do so, since what you write conveys not only the measurement but something about its quality. For many common lab instruments, the proper procedure is to estimate one digit beyond those shown directly by the measurement scale. If that one estimated digit seems meaningful, then it is indeed a significant digit.The scale shown here is a "typical" measurement scale. The specific scale is from a 10 mL graduated cylinder -- shown horizontally here for convenience. The arrow marks the position of a measurement. Glossary entry: Scale. Our goal is to read the scale at the position of the arrow. Let's go through this in detail.How meaningful is a drawing of a measurement scale, such as the one in the example above? 
It illustrates one particular issue very well: how to read a scale per se, figure out what the marks and labels mean, and how to estimate the final digit. Real measuring instruments, such as graduated cylinders, have those issues. Depending on the situation, there may be other issues that affect the ease of reading. In the drawing above, the goal is to read a well-defined arrow. With a real graduated cylinder, you may need to deal with a meniscus (curved surface) and parallax. Those issues are beyond our topic here.A final zero? In estimating that last digit, be sure to write down the zero if your best estimate is indeed zero. For example, if the last digit reflects hundredths of a mL, you might estimate in one case that there are 6 hundredths; thus you would write 6 as the last digit (e.g., 8.16 mL -- 3 Significant Digits). But you might (in another case) estimate that there are 0 hundredths; it is important that you write that zero (e.g., 8.10 mL -- 3 Significant Digits). That final zero says you looked for hundredths and found none. If you wrote only 8.1 mL (2 Significant Digits), it would imply that you did not look for hundredths.The arrow below appears to be "right on" the "4.7" line. (Let's assume that. The point here is to deal with the case where you think the arrow is "on" the line.) Thus we estimate that the hundredths place is 0. The proper reading, then, is 4.70 mL (3 Significant Digits). That final zero means that we looked for hundredths, and found none. If we wrote 4.7 mL (2 Significant Digits), it would imply that we didn't look for hundredths.The scale shown in Example 2 is the same scale as in Example 1. In Example 1 our proper reading had 3 Significant Digits. That is also true in Example 2. That final 0 in Example 2 is an estimate; it is entirely equivalent to the final 8 estimated in Example 1.There are a couple of ways to approach this:Both approaches will work. They reflect the same principles. 
Often, simply looking at the number will be sufficient. However, when you are not sure, it helps to go back to basics: think about the underlying measurement. We will illustrate this in the next section, on zeroes -- the situation most likely to cause confusion.

We tend to spend more time on this issue than it really is worth. Only one tenth of all digits are zeroes, yet the bulk of a list of Significant Digits rules may be about how to treat the zeroes. Many zeroes are clear enough, but indeed it can take a bit of thought to decide whether some zeroes are or are not significant.

If you understand where Significant Digits come from, then whether a zero is significant should be clear -- at least most of the time. If you are learning Significant Digits by memorizing rules, then you are doing it the hard way -- not understanding the meaning. If, for whatever reason, you are struggling with Significant Digits, the problem of the zeroes is a low-priority problem.

Here is what I usually suggest to students. Don't worry too much about the rules for zeroes, especially when you are just starting. As you go on, ask about specific cases where you are not sure about the zeroes. That way, you will gradually learn how to deal with the zeroes, but not get bogged down with what can seem to be a bunch of picky rules.

The key point in deciding whether a zero is significant is to decide if it is part of the measurement, or simply a digit that is there to "fill space". The next section will help with much of the "zeroes problem".

When a number is written in standard scientific (exponential) notation format, there should be no problem with zeroes. In this format, with one digit before the decimal point and only Significant Digits after the decimal point, all digits shown are significant.

How many Significant Digits are in the measurement 0.00023456 m?

In scientific notation that is 2.3456×10⁻⁴ m. 5 Significant Digits.
Scientific notation makes clear that all the zeroes to the left are not significant. The first zero is just decorative and could be omitted; the others are place-holders, so you can show that the 2 is in the fourth decimal place.

The "rule" that covers this case may be stated: zeroes on the left end of a number are not significant -- regardless of where the decimal point is. Hopefully, the example, showing how this plays out in scientific notation, makes this rule clearer.

How many Significant Digits are in the measurement 0.00023450 m?

In scientific notation that is 2.3450×10⁻⁴ m. 5 Significant Digits. That final zero is part of the measurement. If it weren't, why would it be there?

The "rule" that covers this case may be stated: zeroes on the right end of a number are significant -- if they are to the right of the decimal point. This rule may seem confusing in words, but showing the case in scientific notation should make it clearer.

How many Significant Digits are in the measurement 234000 m?

In scientific notation that is ... Hm, what is it? It's not really clear. Let's suggest that it is 2.34×10⁵ m. That is clearly 3 Significant Digits.

Why did I choose to not consider the zeroes significant? Maybe they are significant. Or maybe one of them is significant. The problem is that there is no way to tell from the number 234000 whether those zeroes are significant or are merely place holders, telling us (for example) that the 4 is in the thousands place. So why choose to make them not significant? First, that is the conservative position. I don't know whether they are significant, and to claim that they are is an unwarranted claim of quality. Second, 3 Significant Digits is reasonable -- a common way to measure distances; 6 Significant Digits is not likely. What if the person making the measurement knows that the measurement is good to 4 Significant Digits, with the first zero being significant? Then, somehow, they need to say so.
One good way is to put the measurement in proper scientific notation in the first place: 2.340×10⁵ m, 4 Significant Digits.

It depends on the type of calculation. Each math operation has its own rules for handling Significant Digits. More precisely, there is one rule each for: multiplication and division; addition and subtraction; and logarithms.

Those three rules are distinct; you must be careful to use the right rule for the right operation. But there is good news: The multiplication rule is by far the most important in basic chemistry -- and it is perhaps also the simplest. So, as a matter of priority, emphasize the multiplication rule. When you have mastered it, you can go on and learn the addition rule. It is useful, though much less important. Whether you need the rule for logs will depend on your course; some courses manage to avoid this rule completely.

In summary ... there are three rules, but there is a clear set of priorities with them. Emphasize the multiplication rule. It is the most important rule, and the easiest one.

If you multiply two numbers with the same number of Significant Digits, then the answer should have that same number of Significant Digits. If you multiply together two numbers that each have 4 Significant Digits, then the answer should have 4 Significant Digits.

Multiply 12.3 cm by 2.34 cm.

Doing the arithmetic on the calculator gives 28.782. In this case, each number has 3 Significant Digits. Thus we report the result to 3 Significant Digits. Proper rounding of 28.782 to 3 Significant Digits gives 28.8. With the units, the final answer is 28.8 cm².

If you multiply together two numbers with different numbers of Significant Digits, then the answer should have the same number of Significant Digits as the "weaker" number. Hm, that is a lot of words. An example should help. Multiply a number with 3 Significant Digits and a number with 4 Significant Digits. Keep 3 Significant Digits in the answer.

Multiply 24 cm by 268 cm.

Doing the arithmetic on the calculator gives 6432.
One measurement has 2 Significant Digits and one has 3 Significant Digits. The 2 Significant Digits number is "weaker": it has less information; it has only two digits of information in it. That is, the 2 Significant Digits number limits the calculation. Thus we report the result to 2 Significant Digits. Proper rounding of 6432 to 2 Significant Digits gives 6400. That is clearer in scientific notation, as 6.4×10³. With the units, the final answer is 6.4×10³ cm². [Recall section Why is scientific notation helpful?, especially Example 5.]

The following two examples serve as reminders that it is important to understand the context of the particular problem. In Example 7, we reported the product of 24 & 268 to 2 Significant Digits. But in Example 8, which follows, we report the product of those same two numbers to 3 Significant Digits. Both are correct -- because the contexts are different. Example 9 reminds us of another issue in carefully recording measurements.

You have an object that is 268 cm long. What would be the total length of 24 such objects?

The calculator gives 6432, as in Example 7. Now we look at the Significant Digits; we must carefully think about what each number means. "268 cm" is an ordinary measurement; it has 3 Significant Digits. But the "24" is a count, and is taken as exact (with no uncertainty). That is, the "24" does not limit the calculation, and we report 3 Significant Digits. With the units, the final answer is 6.43×10³ cm.

You measure the sides of a rectangle. The sides are 28.2 cm and 25 cm. What is the area? But before you calculate the area... There is probably something wrong with the statement of this question. What?

What's wrong? Well, we have an object, approximately square. Someone has measured two sides. One would think they used the same measuring instrument -- the same ruler. But the two reported measurements are inconsistent. One is reported to the nearest cm, and one is reported to the nearest tenth. That is suspicious.
Why were they not reported the same way?

The purpose of this example is to remind you of the importance of reading the measuring instrument carefully and consistently, and recording the final zero if indeed that is your estimate. There is no need to carry out the calculation in this case.

Notes...

For students who are just starting chemistry, the addition rule for Significant Digits is not as important as the multiplication rule. The intent of that statement is to help you set priorities. Learn one thing at a time -- especially if you are finding the topic difficult. The multiplication rule is more important; learn it first and get comfortable with it.

Most instructors will want you to learn the addition rule. I am not suggesting otherwise. Again, the emphasis here is to guide you to learn one thing at a time.

Here is an example of a basic chem situation that would seem to involve the addition rule, yet where using that rule is not really needed. Consider calculating the molar mass (formula weight) of a compound, say KOH. Using the atomic masses shown on the periodic table, the molar mass of KOH is 39.10 + 16.00 + 1.008 = 56.108 (in g/mol).

One answer might be to use the Significant Digits rule for addition and note that the result is only good to the hundredths place. Therefore, we round it to 56.11 g/mol.

However, that may be unnecessary -- and even undesirable. The reason for calculating a molar mass is to use it in a real calculation. In real cases, it is usually fine to calculate molar mass by using the atomic masses shown on your periodic table. No rounding, at least now. When you use the molar mass for a calculation, you round the final result. At this step, you should -- in principle -- consider the quality of the molar mass number. However, in practice, it is likely to not matter.
It is most likely -- especially in beginning chemistry -- that the Significant Digits of the final result will be limited by other parts of the calculation, not the molar mass.

Therefore, I encourage beginning students to use the procedure above... Use all the digits of the atomic weights shown on their periodic table. Just add them up, and use the molar mass you get. Don't round the molar mass. Round the final result for the overall calculation, assuming that the molar mass Significant Digits is not a concern. This is usually fine, and lets you worry about the addition rule a bit later.

Now, it is easy enough for the textbook to make up problems where the above method would not be satisfactory. My point is that such cases are uncommon in real problems, especially in introductory chemistry. In fact, a simple example of a question is "Calculate the molar mass of ... [some chemical]." How many Significant Digits do you report? Well, you'll need to use the addition rule for Significant Digits. But that is an artificial question; in the real world one almost always wants to know a molar mass in the context of a specific calculation involving some measurement, and it is quite likely that the measurement will limit the quality of the result.

The logarithm of 74 is 1.87. (We will use base 10 logs here, but the Significant Digits rule is the same in any case.) 74 has 2 Significant Digits, and the log shown, 1.87, has 2 Significant Digits. Why? Because the 1 in the log (the part before the decimal point -- the "characteristic") relates to the exponent, and is an "exact" number.

Whoa! What exponent? Well, it will help to put the number in standard scientific notation. 74 is 7.4×10¹. Now consider the log of each part: the log of 10¹ is 1, an exact number; the log of 7.4 is 0.87 -- with a proper 2 Significant Digits. Add those together, and you get log 74 = 1.87 -- with 2 Significant Digits.

Log of 740,000? That is log of 7.4×10⁵. 5.87.
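The log rule is easy to check with Python's math module (base-10 logs, as in the text): the part of the log before the decimal point comes from the exact exponent, and only the decimal part carries the measured digits.

```python
import math

# log10(74): the '1' before the decimal point comes from the exponent
# in 7.4x10^1 and is exact; only the decimal part is "measured".
print(round(math.log10(74), 2))      # 1.87
print(round(math.log10(740000), 2))  # 5.87

# The decimal part (the mantissa of the log) is identical in both:
print(round(math.log10(74) % 1, 2))      # 0.87
print(round(math.log10(740000) % 1, 2))  # 0.87
```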
In scientific notation only the exponent is different from the previous number; therefore in the logarithm, only the leading integer is different.

This log rule is often skipped in an intro chem course for a couple of reasons. First, logs may come up only once, with pH. Second, students in an intro chem course often are weak with using exponents -- and may not have learned about logs at all. So, sometimes one just suggests that pH be reported to two decimal places -- a usable if rough approximation.

The short answer is "no".

It is common now that most calculations are done on a calculator. Just do all the steps with the calculator, letting the machine keep track of the intermediate results. There is no need to even write down intermediates, much less round them. Why avoid rounding at each step? Each time you round, you are throwing away some information. If you do it over and over, it gets worse and worse; you accumulate rounding errors -- and that is not so good.

Imagine that we want to calculate 1.00 × (1.127)¹⁰. For our purposes here, the numbers are measurements, and we are to give the answer with proper Significant Digits. Proper Significant Digits in this case is 3 (because 1.00 is 3 Significant Digits). (For a clarification, see * note at end of this example box.)

We might consider two ways to do this: (1) calculate with the original number, 1.127, and round only at the end; or (2) round 1.127 to 3 Significant Digits (1.13) first, and then calculate.

Well, those two calculations give answers that are quite different! How can we judge them? Here is one approach... The original number 1.127, by convention, means 1.127 +/- 0.001. That is, this measurement might be 1.126 to 1.128. If we do the calculation with 1.126, we get 3.28. If we do the calculation with 1.128, we get 3.34. Thus it seems that the result should be in the range of those two numbers, 3.28-3.34. In fact, method 1 (calculate with the original number and round only at the end) gives 3.31 -- which is in the middle of that range. However, method 2 (round first) gives 3.39 -- which is outside the range, by quite a bit.
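The two methods of Example 10 are easy to reproduce; this sketch just replays the numbers quoted above.

```python
# Method 1: compute with full precision, round only at the end.
method1 = round(1.00 * 1.127**10, 2)

# Method 2: round 1.127 to 3 significant digits (1.13) first,
# then raise to the tenth power -- compounding the rounding upward.
method2 = round(1.00 * 1.13**10, 2)

print(method1)  # 3.31 -- inside the plausible range 3.28-3.34
print(method2)  # 3.39 -- outside the range: accumulated rounding error
```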
The reason should be clear enough in this example: we have rounded "up" ten times, and thus biased the result upwards. This is an example of how rounding errors can accumulate. It is better to round only at the end.

At the start of this example we said that the proper number of Significant Digits in this case was 3. As we went on, we found that the range of possible answers was 3.28-3.34, or 3.31 +/- 0.03. Obviously, this means that stating the answer as 3.31, to 3 Significant Digits with an implication of +/- 0.01, is not so good. This illustrates a limitation of Significant Digits; it is not so good when there are many error terms to keep track of (10, in this case). The main point of this example was to show the effect of compounding rounding errors -- hence the desirability of not rounding off at intermediate stages. (For more about such limitations of Significant Digits, see the section below: Limitations and complications of Significant Digits.)

The discussion of Significant Digits when adding up atomic weights to calculate a molecular weight, in the section Significant figures in addition, is consistent with this point. The question of how to round when the final digit is a 5 -- or at least appears to be a 5 -- is discussed below in the Special cases section on Rounding: What to do with a final 5.

How many Significant Digits do conversion factors have? Well, it depends. Conversion factors within the metric system, i.e., involving only metric prefixes, are exact. Similarly, conversion factors between large and small units within the American system (e.g., 12 inches per foot) are exact. Conversion factors between metric and American systems are typically not exact, and it is your responsibility to try to make sure you use a conversion factor that has enough Significant Digits for your case.
It is generally not good to allow a conversion factor to limit the quality of a calculation.

The conversion factor between centimeters and inches, 2.54 cm = 1 inch, is exact -- because it has been defined to be exact. If you convert 14.626 cm to inches, at 2.54 cm/inch, you can properly report the result as 5.7583 inches -- 5 Significant Digits, like the original measurement -- because the conversion factor is exact.

Many conversion factors we use in chemistry relate one property to another. Examples are density (mass per volume, g/mL) and molar mass (mass per mole, g/mol). These conversion factors are based on measurements, and their Significant Digits must be considered. It is your responsibility to think about the Significant Digits of a conversion factor. The best approach is usually to think about where the number came from. Is it a definition? a measurement?

Using Significant Digits can be a good simple way to introduce students to the idea of measurement errors. It allows us to begin to relate the measurement scale to measurement quality, and does not require much math to implement. However, Significant Digits are only an approximation to true error analysis, and it is important to avoid getting bogged down in trying to make Significant Digits work well when they really don't.

One type of difficulty with Significant Digits can be seen with reading a scale to the nearest "tenth". (The scale shown with Example 1 illustrates this case.) In this case, 1.1 and 9.1 are both proper measurements. If we assume for simplicity that each measurement is good to +/- 0.1, the uncertainty in the first measurement is about 10% and the uncertainty in the second measurement is about 1%. Clearly, simply saying that both numbers are good to two Significant Digits is only a rough indication of the quality of the measurement.

Further, Significant Digits does not convey the magnitude of the reading uncertainty for any specific scale.
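The point about 1.1 versus 9.1 can be made numerically. This is a sketch assuming, as above, a reading uncertainty of ±0.1 in each measurement.

```python
# Both readings have 2 significant digits, but very different
# relative (percent) uncertainties.
for reading in (1.1, 9.1):
    relative = 0.1 / reading * 100
    print(f"{reading} +/- 0.1  ->  about {relative:.0f}% relative uncertainty")
# 1.1 gives about 9%; 9.1 gives about 1%.
```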
The common statement, which I used in the previous paragraph, is that readings are assumed to be good to 1 in the last place shown. But on some scales, it would be much more realistic to suggest that the uncertainty is 2 or even 5 in the last place shown. A similar problem can occur when the errors from many numbers are accumulated in one calculation. Example 10 illustrated this.

Another limitation of Significant Digits is that it deals with only one source of error, that inherent in reading the scale. Real experimental errors have many contributions, including operator error and sometimes even hidden systematic errors. One cannot do better than what the scale reading allows, but the total uncertainty may well be more than what the Significant Digits of the measurements would suggest.

I have found that, even in introductory courses, some of the students will realize some of these limitations. When they point them out to me, I am happy to compliment them on their understanding. I then explain that Significant Digits is a simple and approximate way to start looking at measurement errors, and assure them that more sophisticated -- but more labor-intensive -- ways are available.

Some modern measuring instruments have a digital scale. Electronic balances are particularly common. How do you know how many Significant Digits to write down from a digital scale? Good question. Most such instruments will display the proper number of digits. However, you should watch the instrument and see if that seems reasonable. Remember that we usually estimate one digit beyond what is certain. With a digital scale, this is reflected in some fluctuation of the last digit. So if you see the last digit fluctuating by 1 or 2, that is fine. Write down that last digit; you should try to write down a value that is about in the middle of the range the scale shows.

If the fluctuation is more than 2 or so in the last digit, it may mean that the instrument is not working properly.
For example, if the balance display is fluctuating considerably, it may mean that the balance is being influenced by air currents -- or by someone bumping the bench. Regardless of the reason, a large fluctuation may mean that a displayed digit is not really significant.

These measuring instruments have only one calibration line. You adjust the liquid level to the calibration line -- as close as you can; you then have the volume that is shown on the device. A 10 mL volumetric pipet measures 10 mL; that is the only thing it can do. So, how many Significant Digits do we report in such a measurement? Obviously the usual procedures for determining Significant Digits are not applicable.

One key determinant of the quality of a measurement with a volumetric pipet is the tolerance -- the accuracy of the device as guaranteed by the manufacturer. The tolerance may be shown on the instrument; if not, it can be obtained from the catalog or other reference source.

There is no necessary relationship between the tolerance and measurement error. However, it turns out that these instruments have been designed so that the tolerance is close to the typical measurement error. Thus, as an approximation, but a useful one, one can treat the stated tolerance as the measurement error. As a rule of thumb, high quality ("Class A") volumetric glassware will give 4 Significant Digits measurements. (In contrast, ordinary glassware will give about 3 Significant Digits at best.) Of course, this assumes that the instrument is being used by trained personnel. In serious work, one would take care to measure actual experimental errors.

There are two points to be made here. The first is to make sure that the final 5 really is a final 5. And then, if it is, what to do.

Is the final 5 really a final 5? This might seem to be simple enough, but with common calculators it is easy to be misled.
Calculators know nothing about Significant Digits; how many digits they display depends on various things, including how you set them. It is easy for a calculator to mislead you about a final 5. For example, imagine that the true result of a calculation is 8.347, but that the calculator is set to display two decimal places (two digits beyond the decimal point). It will show 8.35. If you want 2 Significant Digits, you would be tempted to round to 8.4. However, that is clearly incorrect, if you look at the complete result 8.347, which should round to 8.3 for 2 Significant Digits. How do you avoid this problem? If you see a final 5 that you want to round off, increase the number of digits displayed before making your decision.

What to do if you really have a final 5. There are two schools of thought on this: one says to always round a final 5 up; the other says to round it to the even digit ("round even").

What should you do? Well, this is really a rather arcane point, not worth much attention. If your instructor prefers a particular way, do it. It really is not a big deal, one way or the other. If you are looking to decide your own preferred approach, I'd suggest you read a bit about what various people suggest, and why. If you just want my opinion, well, I suggest "rounding even".

Significant Figures is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Robert Bruner.
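As a footnote to the final-5 discussion: the "round even" convention is what Python 3's built-in round() uses (often called banker's rounding), so it is easy to experiment with.

```python
# Python 3's round() rounds a true final 5 to the nearest even digit.
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2 -- not 3: ties go to the even neighbor
print(round(3.5))  # 4
```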
Spectrometer
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Spectrometer
Strictly speaking, a spectrometer is any instrument used to view and analyze a range (or a spectrum) of a given characteristic for a substance: for example, a range of mass-to-charge values, as in mass spectrometry, or a range of wavelengths, as in absorption spectrometry (such as nuclear magnetic resonance spectroscopy or infrared spectroscopy). A spectrophotometer is a spectrometer that measures only the intensity of electromagnetic radiation (light) and is distinct from other spectrometers such as mass spectrometers.

A spectrometer is typically used to measure wavelengths of electromagnetic radiation (light) that has interacted with a sample. Incident light can be reflected off, absorbed by, or transmitted through a sample; the way the incident light changes during the interaction with the sample is characteristic of the sample. A spectrometer measures this change over a range of incident wavelengths (or at a specific wavelength).

There are three main components in all spectrometers; these components can vary widely between instruments for specific applications and levels of resolution. Very generally, these components produce the electromagnetic radiation, narrow it to a specified range of wavelengths, and then detect the resulting radiation after it has interacted with the sample.

There are two classes of radiation sources used in spectrometry: continuum sources and line sources. The former are usually lamps or heated solid materials that emit a wide range of wavelengths, which must be narrowed greatly using a wavelength selection element to isolate the wavelength of interest. The latter sources include lasers and specialized lamps that are designed to emit discrete wavelengths specific to the lamp's material.

Electrode lamps are constructed of a sealed, gas-filled chamber that has one or more electrodes inside. Electrical current is passed through the electrode, which causes excitation of the gas.
This excitation produces radiation at a wavelength or a range of wavelengths, specific to the gas. Examples include argon, xenon, hydrogen or deuterium, and tungsten lamps, each emitting radiation in its own characteristic range.

There are also non-electrode lamps used as line sources that contain a gas and a piece of metal that will emit narrow radiation at the desired wavelength. Ionization of the gas occurs from radiation (usually in the radio or microwave frequencies). The metal atoms are then excited by a transfer of energy from the gas, thereby producing radiation at a very specific wavelength.

Laser (an acronym for light amplification by stimulated emission of radiation) sources work by externally activating a lasing material so that photons of a specific energy are produced and aimed at the material. This triggers photon production within the material, with more and more photons being produced as they reflect inside the material. Because all the photons are of equal energy, they are all in phase with each other, so that energy (and wavelength) is isolated and enhanced. The photons are eventually focused into a narrow beam and then directed at the sample.

There are three types of elements used to narrow the incident electromagnetic radiation down to the desired wavelength: non-dispersive filters, interferometers, and dispersive elements.

Wavelength selection elements are non-dispersive materials that filter out the unwanted ranges of wavelengths from the incident light source, thereby allowing only a certain range of wavelengths to pass through. For example, UV filters (as used on cameras) work by absorbing the UV radiation (100-400 nm) but allowing other wavelengths to be transmitted. This type of filter is not common in modern spectrometers now that there are more precise elements available for narrowing the radiation.

There are also interference filters that select wavelengths by causing interference effects between the incident and reflected radiation waves at each of the material boundaries in the filter.
The filter has layers of a dielectric material, semitransparent metallic films, and glass; the incident light is partitioned according to the properties of each material as it passes through the layers (Ingle). If the light is of the proper wavelength when it encounters the second metallic film, then the reflected portion remains in phase with some of the incident light still entering that layer. This effectively isolates and enhances this particular wavelength while all others are removed via destructive interference.

One filter can be adjusted to allow various wavelengths to pass through it by manually changing the angle (\(\theta\)) of the incident radiation:

\[2d \sqrt{\epsilon^2 - \sin^2 \theta} = m \lambda \]

where \(d\) is the thickness of the dielectric material (on the order of the wavelength of interest), \(\epsilon\) is the refractive index of the material, \(m\) is the order of interference, and \(\lambda\) is the passable wavelength. This shows that for a given material (constant \(d\), \(\epsilon\), and \(m\)) changing \(\theta\) results in a different \(\lambda\). Note that when the incident radiation is normal (perpendicular) to the filter surface, the transmittable wavelength is independent of the radiation angle:

\[ \lambda = \dfrac{2d\epsilon}{m}\]

Interferometers are also non-dispersive systems that use reflectors (usually mirrors) to direct the incident radiation along a specified path before being recombined and/or focused. Some systems also include a beam splitter that divides the incident beam and directs each portion along a different path before being recombined and directed to the detector. When the beams are recombined, only the radiation that is in phase when the beams recombine will be detected. All other radiation suffers destructive interference and is therefore removed from the spectrum.
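The interference-filter relation above is easy to evaluate numerically. This is a sketch with illustrative values (the thickness and refractive index are my own examples, not from the text).

```python
import math

def transmitted_wavelength(d_nm, refractive_index, theta_deg, order=1):
    """Wavelength passed by an interference filter, from
    2*d*sqrt(eps^2 - sin^2(theta)) = m*lambda."""
    theta = math.radians(theta_deg)
    return 2 * d_nm * math.sqrt(refractive_index**2 - math.sin(theta)**2) / order

# Illustrative: a 200 nm dielectric layer of index 1.35 at normal incidence
# gives lambda = 2*d*eps/m:
print(round(transmitted_wavelength(200, 1.35, 0), 1))  # 540.0 (nm)

# Tilting the filter shifts the passband to shorter wavelengths:
print(transmitted_wavelength(200, 1.35, 30) < 540)  # True
```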
An interferogram is a photographic record produced by an interferometer.

A Fabry-Perot interferometer allows the incident radiation to be reflected back and forth between a pair of reflective plates that are separated by an air gap (Ingle). Diffuse, multi-beam incident radiation passes through a lens and is directed to the plates. Some of the radiation reflects out of the plates back towards the incident source. The remaining radiation reflects back and forth between the plates and is eventually transmitted through the pair of plates towards a focusing lens. Here all constructively interfering radiation is focused onto a screen where it creates a dark or bright spot.

Constructive interference occurs when

\[2d \cos \theta' = m\lambda\]

where \(\theta'\) equals the angle of refraction in the air gap. This air gap can be changed to isolate particular wavelengths.

The mathematical relationships of the Fabry-Perot interferometer relate the difference in optical path length, \(\Delta(OPL)\), and the phase difference, \(\delta\), to the reflectance of the plate coatings:

\[\Delta(OPL) = 2d \cos \theta'\]

\[\delta = \dfrac{2\pi (2d \cos \theta')}{\lambda}\]

\[F \approx \dfrac{4\rho}{(1-\rho)^2}\]

where \(\rho\) is the reflectance of the plate coating and \(F\) is the coefficient of finesse.

A Michelson interferometer uses a beam splitter plate to divide the incident radiation into two beams of equal intensity. A pair of perpendicular mirrors then reflects the beams back to the splitter plate where they recombine and are directed towards the detector. One mirror is movable and the other is stationary. By moving one mirror, the path length of each beam is different, creating interference at the detector that can be measured as a function of the position of the movable mirror. At a certain distance from the splitter plate, the movable mirror causes constructive interference of the radiation at the detector such that a bright spot is detected.
By varying the distance from this location, the adjustable mirror causes the radiation to fluctuate sinusoidally between being "in phase" and "out of phase" at the detector (Ingle).

The sample material to be tested is placed in the path of one of the interferometer's beams, which changes the path length difference between the two beams. It is the change in the interference pattern at the detector between the two beams that is measured.

Other interferometers work in a similar manner, but change the angle of the mirrors rather than the position. These variations are found in the Sagnac interferometer and the Mach-Zehnder interferometer.

Dispersive elements work by spreading the incident radiation out spatially, creating a spectrum of wavelengths (Ingle). In a prism the diffuse radiation beam is separated because of the refractive index of the material. For example, when white light is shone onto a prism, a rainbow of colors is observed coming out the other side. This is a result of the wavelength dependence of the refractive index of the prism material.

Gratings are also used to disperse incident light into component wavelengths. They work by reflecting the light off the angled grating surface, causing the wavelengths to be dispersed through constructive interference at wavelength-dependent diffraction angles (Ingle).

The condition for constructive interference (and therefore wavelength selection) on a grating surface is:

\[d (\sin \theta_i + \sin \theta_r) = m \lambda \nonumber\]

where \(\theta_i\) is the angle of incidence and \(\theta_r\) is the angle of reflection, both measured from the normal to the grating surface. This relationship shows that the wavelength selection is not based on the grating material, but on the angle of incidence (\(\theta_i\)).

Detectors are transducers that transform the analog output of the spectrometer into an electrical signal that can be viewed and analyzed using a computer.
There are two types: photon detectors and thermal detectors.

Spectrometer is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Julie Bower.
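The interference conditions above (the Fabry-Perot relation \(2d \cos \theta' = m\lambda\) and the grating equation) are simple enough to evaluate numerically. The sketch below is illustrative only: the function names, the 1200 lines/mm groove density, and the chosen angles are my own examples, not values from the text.

```python
import math

def fabry_perot_wavelength(d_nm, theta_prime_deg, m):
    """Wavelength transmitted by a Fabry-Perot etalon with air gap d (nm)
    at internal angle theta' and order m: 2*d*cos(theta') = m*lambda."""
    return 2 * d_nm * math.cos(math.radians(theta_prime_deg)) / m

def grating_wavelength(d_nm, theta_i_deg, theta_r_deg, m=1):
    """Wavelength reinforced by a grating with groove spacing d (nm):
    d*(sin(theta_i) + sin(theta_r)) = m*lambda."""
    return d_nm * (math.sin(math.radians(theta_i_deg)) +
                   math.sin(math.radians(theta_r_deg))) / m

# A 5000 nm air gap at normal incidence transmits 500 nm light in order 20.
print(fabry_perot_wavelength(5000, 0, 20))   # 500.0

# A hypothetical 1200 lines/mm grating (d = 1e6/1200 nm) in first order:
print(grating_wavelength(1e6 / 1200, 30, 10))
```

Changing the air gap \(d\) shifts which wavelengths satisfy the condition, which is how the etalon isolates particular wavelengths.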
Standard Electrodes
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Electrodes/Standard_Hydrogen_Electrode
An electrode is, by definition, a point where current enters or leaves the electrolyte: conventional current enters the electrolyte at the anode and leaves it at the cathode. Electrodes are vital components of electrochemical cells. They transport the electrons produced in one half-cell toward the other, producing an electrical potential. This potential is reported relative to the standard hydrogen electrode (SHE), a reference assigned a potential of 0 volts, which serves as the basis for all cell potential calculations. An electrode is a metal whose surface serves as the location where an oxidation-reduction equilibrium is established between the metal and the species in solution. The electrode can be either an anode or a cathode. The anode receives electrons from the electrolyte mixture: when atoms or molecules get close enough to its surface, they donate electrons to it and become positive ions. The opposite occurs at the cathode: electrons are released from the electrode, and the solution around it is reduced. An electrode has to be a good electrical conductor, so it is usually a metal. What the metal is made of depends on whether or not it is involved in the reaction. Some reactions require an inert electrode that does not participate; an example is platinum in the SHE (described later). Other reactions use solid forms of the reactants as the electrodes. An example of this type of cell is:

Cu(s) | Cu(NO3)2(aq) (0.1 M) || AgNO3(aq) (0.01 M) | Ag(s)

where the left side is the anode and the right side is the cathode; the outermost components are the electrodes, while the inner entries are the solutions in which they are immersed. Here a solid form of one reactant, copper, is used.
The copper, as well as the silver, participates both as a reactant and as an electrode. Some commonly used inert electrodes: graphite (carbon), platinum, gold, and rhodium. Some commonly used reactive (or involved) electrodes: copper, zinc, lead, and silver.

A standard hydrogen electrode (SHE) is the electrode that scientists use as the reference for all half-cell potential reactions. The value of its standard electrode potential is zero, which forms the basis needed to calculate cell potentials using different electrodes or different concentrations. Having this common reference electrode is as important as it is for the International Bureau of Weights and Measures to keep a sealed piece of metal that is used to reference the SI kilogram. The SHE is composed of a 1.0 M H+(aq) solution containing a square piece of platinized platinum (connected to a platinum wire through which electrons can be exchanged) inside a tube. During operation, hydrogen gas is passed through the tube and into the solution, causing the reaction:

\[2H^+_{(aq)} + 2e^- \rightleftharpoons H_{2(g)}\]

Platinum is used because it is inert and does not react much with hydrogen. First, an initial discharge allows electrons to fill the highest occupied energy level of Pt. As this is done, some of the H+ ions form H3O+ ions with the water molecules in the solution. These hydrogen and hydronium ions then get close enough to the platinized surface of the electrode that a hydrogen ion is attracted to the electrons in the metal and forms a hydrogen atom. These then combine with other hydrogen atoms to create H2(g), which is released from the system. In order to keep the reaction going, the electrode requires a constant flow of H2(g). The Pt wire is connected to a similar electrode in which the opposite process is occurring, thus producing a potential difference that is referenced to 0 volts. Other standard electrodes are usually preferred because the SHE can be a difficult electrode to set up.
The difficulty arises in the preparation of the platinized surface and in controlling the concentrations of the reactants. For this reason the SHE is referred to as a hypothetical electrode.

The three-electrode system is made up of the working electrode, the reference electrode, and the auxiliary electrode; it is important in voltammetry, and each of the three electrodes serves a unique role. A reference electrode is an electrode that has an established electrode potential. In an electrochemical cell, the reference electrode can be used as one half-cell; the other half-cell's electrode potential can then be determined against it. An auxiliary electrode is an electrode that ensures current does not pass through the reference electrode: it carries a current equal to that of the working electrode. The working electrode is the electrode that transports electrons to and from the substances that are present. Some examples of reference electrodes include:

Calomel electrode: This reference electrode consists of mercury and mercury(I) chloride (calomel). It is relatively easy to make and maintain compared to the SHE. It is composed of a solid paste of Hg2Cl2 and liquid elemental mercury attached to a rod that is immersed in a saturated KCl solution. It is necessary to have the solution saturated because this allows the activity to be fixed by the potassium chloride and the voltage to be lower and closer to that of the SHE. The saturated solution also allows the exchange of chloride ions to take place. All this is usually placed inside a tube that has a porous salt bridge to allow ions to flow through and complete the circuit.

\[\dfrac{1}{2} Hg_2Cl_{2(s)}+e^- \rightleftharpoons Hg_{(l)}+Cl^-_{(aq)}\]

Silver-silver chloride electrode: An electrode of this sort uses a sparingly soluble precipitated salt that participates in the electrode reaction.
This electrode consists of solid silver and its precipitated salt, AgCl. It is a widely used reference electrode because it is inexpensive and not as toxic as the calomel electrode, which contains mercury. A silver-silver chloride electrode is made by taking a wire of solid silver and coating it in AgCl, then placing it in a tube of KCl and AgCl solution. This allows ions to be formed (and the reverse to occur) as electrons flow into and out of the electrode system.

\[AgCl_{(s)} + e^- \rightleftharpoons Ag_{(s)} + Cl^-_{(aq)}\]

1. Which electrode oxidizes the solution in the half-cell? Anode or cathode?
2. Why is the standard hydrogen electrode important to calculating cell potentials?
3. Identify which side is the cathode and which side is the anode.
Ag(s) | Ag+(aq) (0.5 M) || Ag+(aq) (0.05 M) | Ag(s)
4. Why is it important to use an inert electrode in situations like the SHE?
5. What is the standard half-cell potential for the SHE?
Answers:
1. Anode
2. It is important in calculating half-cell potentials because it serves as a reference. Without this electrode, there would be no basis to calculate values of cell potentials.
3. The left is the anode and the right is the cathode.
4. It is important to use an inert electrode in this situation because it will not react or participate in the reaction in the cell; it just provides a surface area for the reaction to occur.
5. 0 volts.
Standard Electrodes is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
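The shorthand cell notation used in this article (anode half-cell on the left, cathode on the right, "||" for the salt bridge, "|" for phase boundaries) is regular enough to pull apart mechanically. A minimal sketch, where the helper name and return structure are my own, not from the text:

```python
def parse_cell_notation(notation):
    """Split shorthand cell notation into anode and cathode half-cells.
    '||' marks the salt bridge; '|' separates phases within a half-cell.
    By convention the anode (oxidation) is written on the left."""
    anode_side, cathode_side = (s.strip() for s in notation.split("||"))
    return {
        "anode": [p.strip() for p in anode_side.split("|")],
        "cathode": [p.strip() for p in cathode_side.split("|")],
    }

cell = parse_cell_notation(
    "Cu(s) | Cu(NO3)2(aq) (0.1 M) || AgNO3(aq) (0.01 M) | Ag(s)")
print(cell["anode"])    # ['Cu(s)', 'Cu(NO3)2(aq) (0.1 M)']
print(cell["cathode"])  # ['AgNO3(aq) (0.01 M)', 'Ag(s)']
```

The outermost entries of each list are the electrodes themselves; the inner entries are the solutions in which they are immersed.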
Standard Potentials
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Potentials/Standard_Potentials
Learning Objectives

In a galvanic cell, current is produced when electrons flow externally through the circuit from the anode to the cathode because of a difference in potential energy between the two electrodes in the electrochemical cell. In the Zn/Cu system, the valence electrons in zinc have a substantially higher potential energy than the valence electrons in copper because of shielding of the s electrons of zinc by the electrons in filled d orbitals. Hence electrons flow spontaneously from zinc to copper(II) ions, forming zinc(II) ions and metallic copper. Just like water flowing spontaneously downhill, which can be made to do work by forcing a waterwheel, the flow of electrons from a higher potential energy to a lower one can also be harnessed to perform work.

Because the potential energy of valence electrons differs greatly from one substance to another, the voltage of a galvanic cell depends partly on the identity of the reacting substances. If we construct a galvanic cell similar to the Zn/Cu cell but with different reacting substances, we can measure its standard cell potential (E°cell), defined as the potential of a cell measured under standard conditions—that is, with all species in their standard states (1 M for solutions, 1 atm for gases, and pure solids or pure liquids for other substances) and at a fixed temperature, usually 25°C. (Concentrated solutions of salts (about 1 M) generally do not exhibit ideal behavior, and the actual standard state corresponds to an activity of 1 rather than a concentration of 1 M. Corrections for nonideal behavior are important for precise quantitative work but not for the more qualitative approach that we are taking here.)

Note

Measured redox potentials depend on the potential energy of valence electrons, the concentrations of the species in the reaction, and the temperature of the system.

It is physically impossible to measure the potential of a single electrode: only the difference between the potentials of two electrodes can be measured. (This is analogous to measuring absolute enthalpies or free energies.
Recall that only differences in enthalpy and free energy can be measured.) We can, however, compare the standard cell potentials for two different galvanic cells that have one kind of electrode in common. This allows us to measure the potential difference between two dissimilar electrodes. For example, the measured standard cell potential (E°) for the Zn/Cu system is 1.10 V, whereas E° for the corresponding Zn/Co system is 0.51 V. This implies that the potential difference between the Co and Cu electrodes is 1.10 V − 0.51 V = 0.59 V. In fact, that is exactly the potential measured under standard conditions if a cell is constructed with the following cell diagram:\[Co_{(s)} ∣ Co^{2+}(aq, 1 M)∥Cu^{2+}(aq, 1 M) ∣ Cu (s)\;\;\; E°=0.59\; V \label{19.9}\]This cell diagram corresponds to the oxidation of a cobalt anode and the reduction of Cu2+ in solution at the copper cathode.All tabulated values of standard electrode potentials by convention are listed for a reaction written as a reduction, not as an oxidation, to be able to compare standard potentials for different substances (Table P1). The standard cell potential (E°cell) is therefore the difference between the tabulated reduction potentials of the two half-reactions, not their sum:\[E°_{cell} = E°_{cathode} − E°_{anode} \label{19.10}\]In contrast, recall that half-reactions are written to show the reduction and oxidation reactions that actually occur in the cell, so the overall cell reaction is written as the sum of the two half-reactions. 
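The "common electrode" comparison above is simple arithmetic on tabulated reduction potentials. A minimal sketch using the values quoted in this section (the Zn and Cu reduction potentials of −0.76 V and +0.34 V appear later in the section; the Co couple is back-calculated from the 0.51 V Zn/Co cell):

```python
# Standard reduction potentials (V vs SHE) quoted in this section.
E_red = {"Zn2+/Zn": -0.76, "Cu2+/Cu": +0.34}

def cell_potential(cathode, anode):
    """E°cell = E°cathode - E°anode, both entered as reduction potentials."""
    return E_red[cathode] - E_red[anode]

print(cell_potential("Cu2+/Cu", "Zn2+/Zn"))  # ~1.10 V, as measured

# The Zn/Co cell measures 0.51 V with Zn as the anode, so:
E_red["Co2+/Co"] = E_red["Zn2+/Zn"] + 0.51   # ~-0.25 V

# The Co/Cu cell potential then follows without measuring it directly:
print(cell_potential("Cu2+/Cu", "Co2+/Co"))  # ~0.59 V
```

The final value reproduces the 1.10 V − 0.51 V = 0.59 V difference described in the text.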
According to Equation \(\ref{19.10}\), when we know the standard potential for any single half-reaction, we can obtain the value of the standard potential of many other half-reactions by measuring the standard potential of the corresponding cell.

Note

The overall cell reaction is the sum of the two half-reactions, but the cell potential is the difference between the reduction potentials:

\[E°_{cell} = E°_{cathode} − E°_{anode}\]

Although it is impossible to measure the potential of any electrode directly, we can choose a reference electrode whose potential is defined as 0 V under standard conditions. The standard hydrogen electrode (SHE) is universally used for this purpose and is assigned a standard potential of 0 V. It consists of a strip of platinum wire in contact with an aqueous solution containing 1 M H+. The [H+] in solution is in equilibrium with H2 gas at a pressure of 1 atm at the Pt-solution interface. Protons are reduced or hydrogen molecules are oxidized at the Pt surface according to the following equation:

\[2H^+_{(aq)}+2e^− \rightleftharpoons H_{2(g)} \label{19.11}\]

One especially attractive feature of the SHE is that the Pt metal electrode is not consumed during the reaction. Consider a galvanic cell that consists of a SHE in one beaker and a Zn strip in another beaker containing a solution of Zn2+ ions. When the circuit is closed, the voltmeter indicates a potential of 0.76 V. The zinc electrode begins to dissolve to form Zn2+, and H+ ions are reduced to H2 in the other compartment. Thus the hydrogen electrode is the cathode, and the zinc electrode is the anode.
The diagram for this galvanic cell is as follows:

\[Zn_{(s)}∣Zn^{2+}_{(aq)}∥H^+(aq, 1 M)∣H_2(g, 1 atm)∣Pt_{(s)} \label{19.12}\]

The half-reactions that actually occur in the cell and their corresponding electrode potentials are as follows:

cathode: \(2H^+_{(aq)} + 2e^- \rightarrow H_{2(g)}\), with \(E°_{cathode} = 0\; V\)

anode: \(Zn_{(s)} \rightarrow Zn^{2+}_{(aq)} + 2e^-\), with \(E°_{anode} = −0.76\; V\) (tabulated as a reduction potential)

\[E°_{cell}=E°_{cathode}−E°_{anode}=0.76\; V\]

Although the reaction at the anode is an oxidation, by convention its tabulated E° value is reported as a reduction potential. The potential of a half-reaction measured against the SHE under standard conditions is called the standard electrode potential for that half-reaction. In this example, the standard reduction potential for Zn2+(aq) + 2e− → Zn(s) is −0.76 V, which means that the standard electrode potential for the reaction that occurs at the anode, the oxidation of Zn to Zn2+, often called the Zn/Zn2+ redox couple, or the Zn/Zn2+ couple, is −(−0.76 V) = 0.76 V. We must therefore subtract E°anode from E°cathode to obtain E°cell: 0 − (−0.76 V) = 0.76 V.

Because electrical potential is the energy needed to move a charged particle in an electric field, standard electrode potentials for half-reactions are intensive properties and do not depend on the amount of substance involved. Consequently, E° values are independent of the stoichiometric coefficients for the half-reaction, and, most important, the coefficients used to produce a balanced overall reaction do not affect the value of the cell potential.

Note

E° values do NOT depend on the stoichiometric coefficients for a half-reaction, because electrode potential is an intensive property.

To measure the potential of the Cu/Cu2+ couple, we can construct a galvanic cell analogous to the Zn/SHE cell described above but containing a Cu/Cu2+ couple in the sample compartment instead of Zn/Zn2+. When we close the circuit this time, the measured potential for the cell is negative (−0.34 V) rather than positive. The negative value of E°cell indicates that the direction of spontaneous electron flow is the opposite of that for the Zn/Zn2+ couple.
Hence the reaction that occurs spontaneously, indicated by a positive E°cell, is the reduction of Cu2+ to Cu at the copper electrode. The copper electrode gains mass as the reaction proceeds, and H2 is oxidized to H+ at the platinum electrode. In this cell, the copper strip is the cathode, and the hydrogen electrode is the anode. The cell diagram therefore is written with the SHE on the left and the Cu2+/Cu couple on the right:

\[Pt_{(s)}∣H_2(g, 1 atm)∣H^+(aq, 1\; M)∥Cu^{2+}(aq, 1 M)∣Cu_{(s)} \label{19.16}\]

The half-cell reactions and potentials of the spontaneous reaction are as follows:

cathode: \(Cu^{2+}_{(aq)} + 2e^- \rightarrow Cu_{(s)}\)

anode: \(H_{2(g)} \rightarrow 2H^+_{(aq)} + 2e^-\), with \(E°_{anode} = 0\; V\)

\[E°_{cell} = E°_{cathode}− E°_{anode} = 0.34\; V\]

Thus the standard electrode potential for the Cu2+/Cu couple is 0.34 V.

Electrode Potentials and ECell: //youtu.be/zeeAXleT1c0

Previously, we described a method for balancing redox reactions using oxidation numbers. Oxidation numbers were assigned to each atom in a redox reaction to identify any changes in the oxidation states. Here we present an alternative approach to balancing redox reactions, the half-reaction method, in which the overall redox reaction is divided into an oxidation half-reaction and a reduction half-reaction, each balanced for mass and charge. This method more closely reflects the events that take place in an electrochemical cell, where the two half-reactions may be physically separated from each other.

We can illustrate how to balance a redox reaction using half-reactions with the reaction that occurs when Drano, a commercial solid drain cleaner, is poured into a clogged drain.
Drano contains a mixture of sodium hydroxide and powdered aluminum, which in solution reacts to produce hydrogen gas:

\[Al_{(s)} + OH^−_{(aq)} \rightarrow Al(OH)^−_{4(aq)} + H_{2(g)} \label{19.20}\]

In this reaction, \(Al_{(s)}\) is oxidized to Al3+, and H+ in water is reduced to H2 gas, which bubbles through the solution, agitating it and breaking up the clogs.

The overall redox reaction is composed of a reduction half-reaction and an oxidation half-reaction. From the standard electrode potentials listed in Table P1, we find the corresponding half-reactions that describe the reduction of H+ ions in water to H2 and the oxidation of Al to Al3+ in basic solution:

\[2H_2O_{(l)} + 2e^- \rightarrow H_{2(g)} + 2OH^-_{(aq)} \label{19.21}\]

\[Al_{(s)} + 4OH^-_{(aq)} \rightarrow Al(OH)^-_{4(aq)} + 3e^- \label{19.22}\]

The half-reactions chosen must exactly reflect the reaction conditions, such as the basic conditions shown here. Moreover, the physical states of the reactants and the products must be identical to those given in the overall reaction, whether gaseous, liquid, solid, or in solution.

In Equation \(\ref{19.21}\), two H+ ions gain one electron each in the reduction; in Equation \(\ref{19.22}\), the aluminum atom loses three electrons in the oxidation. The charges are balanced by multiplying the reduction half-reaction (Equation \(\ref{19.21}\)) by 3 and the oxidation half-reaction (Equation \(\ref{19.22}\)) by 2 to give the same number of electrons in both half-reactions:

\[6H_2O_{(l)} + 2Al_{(s)} + 8OH^−_{(aq)} \rightarrow 2Al(OH)^−_{4(aq)} + 3H_{2(g)} + 6OH^−_{(aq)} \label{19.25}\]

Simplifying by canceling substances that appear on both sides of the equation,

\[6H_2O_{(l)} + 2Al_{(s)} + 2OH^−_{(aq)} \rightarrow 2Al(OH)^−_{4(aq)} + 3H_{2(g)} \label{19.26}\]

We have a −2 charge on the left side of the equation and a −2 charge on the right side.
Thus the charges are balanced, but we must also check that atoms are balanced:

\[2Al + 8O + 14H = 2Al + 8O + 14H \label{19.27}\]

The atoms also balance, so Equation \(\ref{19.26}\) is a balanced chemical equation for the redox reaction depicted in Equation \(\ref{19.20}\).

Note

The half-reaction method requires that half-reactions exactly reflect reaction conditions, and the physical states of the reactants and the products must be identical to those in the overall reaction.

We can also balance a redox reaction by first balancing the atoms in each half-reaction and then balancing the charges. With this alternative method, we do not need to use the half-reactions listed in Table P1 but instead focus on the atoms whose oxidation states change, as illustrated in the following steps:

Step 1: Write the reduction half-reaction and the oxidation half-reaction. For the reaction shown in Equation \(\ref{19.20}\), hydrogen is reduced from H+ in H2O to H2, and aluminum is oxidized from Al° to Al3+.

Step 2: Balance the atoms in each half-reaction. Elements other than O and H are balanced as written, so we proceed with balancing the O atoms. We can do this by adding water to the appropriate side of each half-reaction, then balancing the H atoms with H+.

Step 3: Balance the charges in each half-reaction by adding electrons. Two electrons are gained in the reduction of H+ ions to H2, and three electrons are lost during the oxidation of Al° to Al3+.

Step 4: Multiply the half-reactions so that both involve the same number of electrons. In this case, we multiply Equation \(\ref{19.34}\) (the reductive half-reaction) by 3 and Equation \(\ref{19.35}\) (the oxidative half-reaction) by 2.

Step 5: Add the half-reactions. Adding and, in this case, canceling 8H+, 3H2O, and 6e−,

\[2Al_{(s)} + 5H_2O_{(l)} + 3OH^−_{(aq)} + H^+_{(aq)} \rightarrow 2Al(OH)^−_{4(aq)} + 3H_{2(g)} \label{19.38}\]

We have three OH− and one H+ on the left side.
Neutralizing the H+ gives us a total of 5H2O + H2O = 6H2O and leaves 2OH− on the left side:

\[2Al_{(s)} + 6H_2O_{(l)} + 2OH^−_{(aq)} \rightarrow 2Al(OH)^−_{4(aq)} + 3H_{2(g)} \label{19.39}\]

Step 6: Check to make sure that all atoms and charges are balanced. Equation \(\ref{19.39}\) is identical to Equation \(\ref{19.26}\), obtained using the first method, so the charges and numbers of atoms on each side of the equation balance.

Example \(\PageIndex{1}\)

In acidic solution, the redox reaction of dichromate ion (\(Cr_2O_7^{2−}\)) and iodide (\(I^−\)) can be monitored visually. The yellow dichromate solution reacts with the colorless iodide solution to produce a solution that is deep amber due to the presence of the green \(Cr^{3+}_{(aq)}\) complex and brown \(I_{2(aq)}\):

\[Cr_2O^{2−}_{7(aq)} + I^−_{(aq)} \rightarrow Cr^{3+}_{(aq)} + I_{2(aq)}\]

Balance this equation using half-reactions.

Given: redox reaction and Table P1

Asked for: balanced chemical equation using half-reactions

Strategy: Follow the steps to balance the redox reaction using the half-reaction method.

Solution

From the standard electrode potentials listed in Table P1 we find the half-reactions corresponding to the overall reaction. Balancing the number of electrons by multiplying the oxidation reaction by 3, then adding the two half-reactions and canceling electrons,

\[Cr_2O^{2−}_{7(aq)} + 14H^+_{(aq)} + 6I^−_{(aq)} \rightarrow 2Cr^{3+}_{(aq)} + 7H_2O_{(l)} + 3I_{2(aq)}\]

We must now check to make sure the charges and atoms on each side of the equation balance: the net charge is +6 on each side, and the atoms (2 Cr, 7 O, 14 H, 6 I) also balance, so our equation is balanced.

We can also use the alternative procedure, which does not require the half-reactions listed in Table P1.

Step 1: Chromium is reduced from \(Cr^{6+}\) in \(Cr_2O_7^{2−}\) to \(Cr^{3+}\), and \(I^−\) ions are oxidized to \(I_2\).
Dividing the reaction into two half-reactions,

Step 2: Balancing the atoms other than oxygen and hydrogen. We now balance the O atoms by adding H2O—in this case, to the right side of the reduction half-reaction. Because the oxidation half-reaction does not contain oxygen, it can be ignored in this step. Next we balance the H atoms by adding H+ to the left side of the reduction half-reaction. Again, we can ignore the oxidation half-reaction.

Step 3: We must now add electrons to balance the charges. The reduction half-reaction (2Cr+6 to 2Cr+3) has a +12 charge on the left and a +6 charge on the right, so six electrons are needed to balance the charge. The oxidation half-reaction (2I− to I2) has a −2 charge on the left side and a 0 charge on the right, so two electrons are released to balance the charge.

Step 4: To have the same number of electrons in both half-reactions, we must multiply the oxidation half-reaction by 3.

Step 5: Adding the two half-reactions and canceling substances that appear in both reactions,

Step 6: This is the same equation we obtained using the first method. Thus the charges and atoms on each side of the equation balance.

Exercise \(\PageIndex{1}\)

Copper is found as the mineral covellite (\(CuS\)). The first step in extracting the copper is to dissolve the mineral in nitric acid (\(HNO_3\)), which oxidizes sulfide to sulfate and reduces nitric acid to \(NO\):

\[CuS_{(s)} + HNO_{3(aq)} \rightarrow NO_{(g)} + CuSO_{4(aq)}\]

Balance this equation using the half-reaction method.

Answer

\[3CuS_{(s)} + 8HNO_{3(aq)} \rightarrow 8NO_{(g)} + 3CuSO_{4(aq)} + 4H_2O_{(l)}\]

Balancing a Redox Reaction in Acidic Conditions: //youtu.be/IB-fWLsI0lc

The standard cell potential for a redox reaction (E°cell) is a measure of the tendency of reactants in their standard states to form products in their standard states; consequently, it is a measure of the driving force for the reaction, which earlier we called voltage.
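The "check that charges and atoms balance" step in the examples above is mechanical enough to automate. A minimal sketch (the helper function and the data layout are my own) that verifies the balanced dichromate-iodide equation from Example 1:

```python
from collections import Counter

def tally(side):
    """Total atom counts and net charge for one side of an equation,
    given (atom_dict, charge, coefficient) triples for each species."""
    atoms, charge = Counter(), 0
    for formula, q, n in side:
        for element, count in formula.items():
            atoms[element] += n * count
        charge += n * q
    return atoms, charge

# Cr2O7^2- + 14 H+ + 6 I-  ->  2 Cr3+ + 7 H2O + 3 I2
left  = [({"Cr": 2, "O": 7}, -2, 1), ({"H": 1}, +1, 14), ({"I": 1}, -1, 6)]
right = [({"Cr": 1}, +3, 2), ({"H": 2, "O": 1}, 0, 7), ({"I": 2}, 0, 3)]

print(tally(left) == tally(right))  # True: atoms and net charge (+6) match
```

The same check applies to any candidate balanced equation, such as the Drano reaction earlier in this section.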
We can use the two standard electrode potentials we found earlier to calculate the standard potential for the Zn/Cu cell represented by the following cell diagram:

\[ Zn_{(s)}∣Zn^{2+}(aq, 1 M)∥Cu^{2+}(aq, 1 M)∣Cu_{(s)} \label{19.40}\]

We know the values of E°anode for the reduction of Zn2+ and E°cathode for the reduction of Cu2+, so we can calculate E°cell:

\[E°_{cell} = E°_{cathode} − E°_{anode} = 0.34\; V − (−0.76\; V) = 1.10\; V\]

This is the same value that is observed experimentally. If the value of E°cell is positive, the reaction will occur spontaneously as written. If the value of E°cell is negative, then the reaction is not spontaneous, and it will not occur as written under standard conditions; it will, however, proceed spontaneously in the opposite direction. As we shall see, this does not mean that the reaction cannot be made to occur at all under standard conditions. With a sufficient input of electrical energy, virtually any reaction can be forced to occur. Example \(\PageIndex{2}\) and its corresponding exercise illustrate how we can use measured cell potentials to calculate standard potentials for redox couples.

Note

A positive E°cell means that the reaction will occur spontaneously as written. A negative E°cell means that the reaction will proceed spontaneously in the opposite direction.

Example \(\PageIndex{2}\)

A galvanic cell with a measured standard cell potential of 0.27 V is constructed using two beakers connected by a salt bridge. One beaker contains a strip of gallium metal immersed in a 1 M solution of GaCl3, and the other contains a piece of nickel immersed in a 1 M solution of NiCl2.
The half-reactions that occur when the compartments are connected are as follows:

cathode: Ni2+(aq) + 2e− → Ni(s)

anode: Ga(s) → Ga3+(aq) + 3e−

If the potential for the oxidation of Ga to Ga3+ is 0.55 V under standard conditions, what is the potential for the oxidation of Ni to Ni2+?

Given: galvanic cell, half-reactions, standard cell potential, and potential for the oxidation half-reaction under standard conditions

Asked for: standard electrode potential of reaction occurring at the cathode

Strategy:

Solution

A We have been given the potential for the oxidation of Ga to Ga3+ under standard conditions, but to report the standard electrode potential, we must reverse the sign. For the reduction reaction Ga3+(aq) + 3e− → Ga(s), E°anode = −0.55 V.

B Using the value given for E°cell and the calculated value of E°anode, we can calculate the standard potential for the reduction of Ni2+ to Ni from Equation \(\ref{19.10}\):

\[E°_{cathode} = E°_{cell} + E°_{anode} = 0.27\; V + (−0.55\; V) = −0.28\; V\]

This is the standard electrode potential for the reaction Ni2+(aq) + 2e− → Ni(s). Because we are asked for the potential for the oxidation of Ni to Ni2+ under standard conditions, we must reverse the sign of E°cathode. Thus E° = −(−0.28 V) = 0.28 V for the oxidation. With two electrons consumed in the reduction and three produced in the oxidation, the overall reaction is not balanced. Recall, however, that standard potentials are independent of stoichiometry.

Exercise \(\PageIndex{2}\)

A galvanic cell is constructed with one compartment that contains a mercury electrode immersed in a 1 M aqueous solution of mercuric acetate \(Hg(CH_3CO_2)_2\) and one compartment that contains a strip of magnesium immersed in a 1 M aqueous solution of \(MgCl_2\).
When the compartments are connected, a potential of 3.22 V is measured and the following half-reactions occur: If the potential for the oxidation of Mg to Mg2+ is 2.37 V under standard conditions, what is the standard electrode potential for the reaction that occurs at the anode?

Answer

0.85 V

We can use the procedure described above to measure the standard potentials for a wide variety of chemical substances, some of which are listed in Table P2. These data allow us to compare the oxidative and reductive strengths of a variety of substances. The half-reaction for the standard hydrogen electrode (SHE) lies more than halfway down the list in Table \(\PageIndex{1}\). All reactants that lie below the SHE in the table are stronger oxidants than H+, and all those that lie above the SHE are weaker. The strongest oxidant in the table is F2, with a standard electrode potential of 2.87 V. This high value is consistent with the high electronegativity of fluorine and tells us that fluorine has a stronger tendency to accept electrons (it is a stronger oxidant) than any other element.

\[Ce^{4+}(aq) + e^− \rightleftharpoons Ce^{3+}(aq)\]

Similarly, all species in Table \(\PageIndex{1}\) that lie above H2 are stronger reductants than H2, and those that lie below H2 are weaker. The strongest reductant in the table is thus metallic lithium, with a standard electrode potential of −3.04 V. This fact might be surprising because cesium, not lithium, is the least electronegative element. The apparent anomaly can be explained by the fact that electrode potentials are measured in aqueous solution, where intermolecular interactions are important, whereas ionization potentials and electron affinities are measured in the gas phase. Due to its small size, the Li+ ion is stabilized in aqueous solution by strong electrostatic interactions with the negative dipole end of water molecules. These interactions result in a significantly greater ΔHhydration for Li+ compared with Cs+.
Lithium metal is therefore the strongest reductant (most easily oxidized) of the alkali metals in aqueous solution.

Note

Species in Table \(\PageIndex{1}\) (or Table P2) that lie above H2 are stronger reducing agents (more easily oxidized) than H2. Species that lie below H2 are stronger oxidizing agents.

Because the half-reactions shown in Table \(\PageIndex{1}\) are arranged in order of their E° values, we can use the table to quickly predict the relative strengths of various oxidants and reductants. Any species on the left side of a half-reaction will spontaneously oxidize any species on the right side of another half-reaction that lies below it in the table. Conversely, any species on the right side of a half-reaction will spontaneously reduce any species on the left side of another half-reaction that lies above it in the table. We can use these generalizations to predict the spontaneity of a wide variety of redox reactions (E°cell > 0), as illustrated below.

Example \(\PageIndex{3}\)

The black tarnish that forms on silver objects is primarily Ag2S. The half-reaction for reversing the tarnishing process is as follows:

\[Ag_2S_{(s)} + 2e^- \rightarrow 2Ag_{(s)} + S^{2-}_{(aq)} \;\;\; E° = −0.69\; V\]

Given: reduction half-reaction, standard electrode potential, and list of possible reductants

Asked for: reductants for Ag2S, strongest reductant, and potential reducing agent for removing tarnish

Strategy:

A From their positions in Table \(\PageIndex{1}\), decide which species can reduce Ag2S.
Determine which species is the strongest reductant.

B Use Table \(\PageIndex{1}\) to identify a reductant for Ag2S that is a common household product.

Solution

We can solve the problem in one of two ways: compare the relative positions of the four possible reductants with that of the Ag2S/Ag couple in Table \(\PageIndex{1}\), or compare E° for each species with E° for the Ag2S/Ag couple (−0.69 V).

Example \(\PageIndex{4}\)

Use the data in Table \(\PageIndex{1}\) to determine whether each reaction is likely to occur spontaneously under standard conditions:

Given: redox reaction and list of standard electrode potentials (Table P2)

Asked for: reaction spontaneity

Strategy:

Solution

B Adding the two half-reactions gives the overall reaction:

\(\textrm{cathode:}\; \mathrm{Be^{2+}(aq)} + \mathrm{2e^-} \rightarrow \mathrm{Be(s)}\)

\(\textrm{anode:}\; \mathrm{Sn(s)} \rightarrow \mathrm{Sn^{2+}(aq)} + \mathrm{2e^-}\)

\(\textrm{total:}\; \mathrm{Sn(s)} + \mathrm{Be^{2+}(aq)} \rightarrow \mathrm{Sn^{2+}(aq)} + \mathrm{Be(s)}\)

The standard cell potential is quite negative, so the reaction will not occur spontaneously as written. That is, metallic tin cannot reduce Be2+ to beryllium metal under standard conditions. Instead, the reverse process, the reduction of stannous ions (Sn2+) by metallic beryllium, which has a positive value of E°cell, will occur spontaneously.

B The two half-reactions and their corresponding potentials are as follows:

The standard potential for the reaction is positive, indicating that under standard conditions, it will occur spontaneously as written.
Hydrogen peroxide will reduce MnO2, and oxygen gas will evolve from the solution.

Exercise \(\PageIndex{4}\)

Use the data in Table \(\PageIndex{1}\) to determine whether each reaction is likely to occur spontaneously under standard conditions:

Answer

Although the sign of E°cell tells us whether a particular redox reaction will occur spontaneously under standard conditions, it does not tell us to what extent the reaction proceeds, and it does not tell us what will happen under nonstandard conditions. To answer these questions requires a more quantitative understanding of the relationship between electrochemical cell potential and chemical thermodynamics.

When using a galvanic cell to measure the concentration of a substance, we are generally interested in the potential of only one of the electrodes of the cell, the so-called indicator electrode, whose potential is related to the concentration of the substance being measured. To ensure that any change in the measured potential of the cell is due only to the substance being analyzed, the potential of the other electrode, the reference electrode, must be constant. You are already familiar with one example of a reference electrode: the SHE. The potential of a reference electrode must be unaffected by the properties of the solution, and if possible, it should be physically isolated from the solution of interest. To measure the potential of a solution, we select a reference electrode and an appropriate indicator electrode. Whether reduction or oxidation of the substance being analyzed occurs depends on the potential of the half-reaction for the substance of interest (the sample) and the potential of the reference electrode.

Note

The potential of any reference electrode should not be affected by the properties of the solution to be analyzed, and it should also be physically isolated.

There are many possible choices of reference electrode other than the SHE.
The SHE requires a constant flow of highly flammable hydrogen gas, which makes it inconvenient to use. Consequently, two other electrodes are commonly chosen as reference electrodes. One is the silver–silver chloride electrode, which consists of a silver wire coated with a very thin layer of AgCl that is dipped into a chloride ion solution with a fixed concentration. The cell diagram and reduction half-reaction are as follows:

\[Cl^−_{(aq)}∣AgCl_{(s)}∣Ag_{(s)} \label{19.44}\]

\[AgCl_{(s)}+e^− \rightarrow Ag_{(s)} + Cl^−_{(aq)}\]

If a saturated solution of KCl is used as the chloride solution, the potential of the silver–silver chloride electrode is 0.197 V versus the SHE. That is, 0.197 V must be added to the value measured against this electrode to obtain the potential versus the SHE.

A second common reference electrode is the saturated calomel electrode (SCE), which has the same general form as the silver–silver chloride electrode. The SCE consists of a platinum wire inserted into a moist paste of liquid mercury, mercury(I) chloride (Hg2Cl2; called calomel in the old chemical literature), and KCl. This interior cell is surrounded by an aqueous KCl solution, which acts as a salt bridge between the interior cell and the exterior solution (part (a) in the figure). Although it sounds and looks complex, this cell is actually easy to prepare and maintain, and its potential is highly reproducible. The SCE cell diagram and corresponding half-reaction are as follows:

\[Pt_{(s)}∣Hg_2Cl_{2(s)}∣KCl_{(aq, sat)} \label{19.45}\]

\[Hg_2Cl_{2(s)} + 2e^− \rightarrow 2Hg_{(l)} + 2Cl^−_{(aq)} \label{19.46}\]

At 25°C, the potential of the SCE is 0.2415 V versus the SHE, which means that 0.2415 V must be added to a potential measured versus the SCE to obtain the potential versus the SHE.

One of the most common uses of electrochemistry is to measure the H+ ion concentration of a solution.
A glass electrode is generally used for this purpose, in which an internal Ag/AgCl electrode is immersed in a 0.10 M HCl solution that is separated from the solution to be measured by a very thin glass membrane (part (b) in the figure). The glass membrane adsorbs protons, which affects the measured potential. The extent of the adsorption on the inner side is fixed because [H+] is fixed inside the electrode, but the adsorption of protons on the outer surface depends on the pH of the solution. The potential of the glass electrode depends on [H+] as follows (recall that pH = −log[H+]):

\[E_{glass} = E′ + (0.0591\; V \times \log[H^+]) = E′ − 0.0591\; V \times pH \label{19.47}\]

The voltage E′ is a constant that depends on the exact construction of the electrode. Although it can be measured, in practice a glass electrode is calibrated; that is, it is inserted into a solution of known pH, and the display on the pH meter is adjusted to the known value. Once the electrode is properly calibrated, it can be placed in a solution and used to determine an unknown pH.

Ion-selective electrodes are used to measure the concentration of a particular species in solution; they are designed so that their potential depends only on the concentration of the desired species (part (c) in the figure). These electrodes usually contain an internal reference electrode that is connected by a solution of an electrolyte to a crystalline inorganic material or a membrane, which acts as the sensor. For example, one type of ion-selective electrode uses a single crystal of Eu-doped \(LaF_3\) as the inorganic material. When fluoride ions in solution diffuse to the surface of the solid, the potential of the electrode changes, resulting in a so-called fluoride electrode. Similar electrodes are used to measure the concentrations of other species in solution.
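As an added sketch (not part of the original page), the reference-electrode offsets and the glass-electrode relation can be turned into two small utilities. Sign conventions differ across sources; here the reference offset is added, so that E(vs SHE) = E(vs ref) + E(ref vs SHE). The numeric readings in the usage lines are hypothetical.

```python
# Reference a measured potential to the SHE scale using the offsets quoted
# in the text: Ag/AgCl (sat. KCl) = +0.197 V, SCE = +0.2415 V vs SHE (25 °C).
REFERENCE_VS_SHE = {"Ag/AgCl": 0.197, "SCE": 0.2415}

def to_she(measured_volts: float, reference: str) -> float:
    """E(vs SHE) = E(vs reference) + E(reference vs SHE)."""
    return measured_volts + REFERENCE_VS_SHE[reference]

# Glass electrode: E = E' - 0.0591 V × pH (25 °C). Calibrate once in a
# buffer of known pH to fix E', then invert the relation for an unknown pH.
SLOPE = 0.0591  # V per pH unit

def calibrate(e_measured: float, ph_known: float) -> float:
    return e_measured + SLOPE * ph_known

def ph_from_potential(e_measured: float, e_prime: float) -> float:
    return (e_prime - e_measured) / SLOPE

e_prime = calibrate(e_measured=0.2500, ph_known=7.00)  # hypothetical reading
print(round(to_she(0.559, "SCE"), 4))                  # 0.8005
print(round(ph_from_potential(0.1909, e_prime), 2))    # 8.0
```

The calibrate-then-invert pattern mirrors how a real pH meter is used: one measurement in a known buffer fixes E′, after which the same electrode reads unknown solutions.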
Some of the species whose concentrations can be determined in aqueous solution using ion-selective electrodes and similar devices are listed in Table \(\PageIndex{2}\).

The Standard Hydrogen Electrode (SHE): https://youtu.be/GS-SE7IDDtY

The flow of electrons in an electrochemical cell depends on the identity of the reacting substances, the difference in the potential energy of their valence electrons, and their concentrations. The potential of the cell under standard conditions (1 M for solutions, 1 atm for gases, pure solids or liquids for other substances) and at a fixed temperature (25°C) is called the standard cell potential (E°cell). Only the difference between the potentials of two electrodes can be measured. By convention, all tabulated values of standard electrode potentials are listed as standard reduction potentials. The overall cell potential is the reduction potential of the reductive half-reaction minus the reduction potential of the oxidative half-reaction (E°cell = E°cathode − E°anode). The potential of the standard hydrogen electrode (SHE) is defined as 0 V under standard conditions. The potential of a half-reaction measured against the SHE under standard conditions is called its standard electrode potential. The standard cell potential is a measure of the driving force for a given redox reaction. All E° values are independent of the stoichiometric coefficients for the half-reaction. Redox reactions can be balanced using the half-reaction method, in which the overall redox reaction is divided into an oxidation half-reaction and a reduction half-reaction, each balanced for mass and charge. The half-reactions selected from tabulated lists must exactly reflect reaction conditions. In an alternative method, the atoms in each half-reaction are balanced, and then the charges are balanced.
Whenever a half-reaction is reversed, the sign of E° corresponding to that reaction must also be reversed.

The oxidative and reductive strengths of a variety of substances can be compared using standard electrode potentials. Apparent anomalies can be explained by the fact that electrode potentials are measured in aqueous solution, which allows for strong intermolecular electrostatic interactions, and not in the gas phase.

If E°cell is positive, the reaction will occur spontaneously under standard conditions. If E°cell is negative, then the reaction is not spontaneous under standard conditions, although it will proceed spontaneously in the opposite direction. The potential of an indicator electrode is related to the concentration of the substance being measured, whereas the potential of the reference electrode is held constant. Whether reduction or oxidation occurs depends on the potential of the sample versus the potential of the reference electrode. In addition to the SHE, other reference electrodes are the silver–silver chloride electrode; the saturated calomel electrode (SCE); the glass electrode, which is commonly used to measure pH; and ion-selective electrodes, which depend on the concentration of a single ionic species in solution. Differences in potential between the SHE and other reference electrodes must be included when calculating values for E°.

Standard Potentials is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Anonymous via LibreTexts.
Standard Reduction Potential
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Standard_Reduction_Potential
The standard reduction potential is in a category known as the standard cell potentials or standard electrode potentials. The standard cell potential is the potential difference between the cathode and anode. For more information view Cell Potentials. The standard potentials are all measured at 298 K, 1 atm, and with 1 M solutions.

As stated above, the standard reduction potential is the likelihood that a species will be reduced. It is written in the form of a reduction half reaction. An example can be seen below, where "A" is a generic element and C is the charge:

\[ A^{C+} + C\,e^- \rightarrow A\]

For example, copper's standard reduction potential of \(E^o = +0.340\; V\) is for this reaction:

\[ Cu^{2+} + 2\,e^- \rightarrow Cu\]

The standard oxidation potential is much like the standard reduction potential. It is the tendency for a species to be oxidized at standard conditions. It is also written in the form of a half reaction, and an example is shown below:

\[ A(s) \rightarrow A^{C+} + C\,e^-\]

Copper's standard oxidation potential:

\[ Cu(s) \rightarrow Cu^{2+} + 2e^- \]

\[ E^o (SOP) = -0.340\, V\]

The standard oxidation potential and the standard reduction potential are opposite in sign to each other for the same chemical species:

\[ E^o (SRP) = -E^o (SOP)\]

Standard reduction or oxidation potentials can be determined using a SHE (standard hydrogen electrode).

Universally, hydrogen has been recognized as having reduction and oxidation potentials of zero. Therefore, when the standard reduction and oxidation potentials of chemical species are measured, it is actually the difference in potential from hydrogen that is measured. By using a galvanic cell in which one side is a SHE, and the other side is a half cell of the unknown chemical species, the potential difference from hydrogen can be determined using a voltmeter. Standard reduction and oxidation potentials can both be determined in this fashion.
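As an added sketch (the three E° values below are common tabulated standard reduction potentials, assumed here rather than taken from this page), choosing which couple is reduced and computing the resulting cell potential can be automated: the half-cell with the higher reduction potential becomes the cathode.

```python
# Pick cathode/anode from two couples: the couple with the higher standard
# reduction potential is reduced (cathode); the other is oxidized (anode).
E_RED = {"Ag+/Ag": 0.800, "Cu2+/Cu": 0.340, "Zn2+/Zn": -0.763}  # assumed values, V

def cell_from_couples(a: str, b: str):
    cathode, anode = sorted((a, b), key=E_RED.get, reverse=True)
    e_cell = E_RED[cathode] - E_RED[anode]   # E°cell = E°cathode − E°anode
    return cathode, anode, e_cell

cathode, anode, e_cell = cell_from_couples("Zn2+/Zn", "Cu2+/Cu")
print(cathode, anode, round(e_cell, 3))  # Cu2+/Cu Zn2+/Zn 1.103
```

This is the same rule as reading an activity series: a species high in the table is reduced when paired with one lower down.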
When the standard reduction potential is determined, the unknown chemical species is being reduced while hydrogen is being oxidized, and when the standard oxidation potential is determined, the unknown chemical species is being oxidized while hydrogen is being reduced. The following diagrams show how a standard reduction potential is determined.

Standard reduction potentials are used to determine the standard cell potential. The standard reduction cell potential and the standard oxidation cell potential can be combined to determine the overall cell potential of a galvanic cell. The equations that relate these three potentials are shown below:

\[ E^o_{cell} = E^o_{reduction} \text{ of reaction at cathode} + E^o_{oxidation} \text{ of reaction at anode}\]

or alternatively

\[ E^o_{cell} = E^o_{reduction} \text{ of reaction at cathode} - E^o_{reduction} \text{ of reaction at anode}\]

When solving for the standard cell potential, the species oxidized and the species reduced must be identified. This can be done using an activity series. The table shown below is simply a table of standard reduction potentials in decreasing order. The species at the top have a greater likelihood of being reduced, while the ones at the bottom have a greater likelihood of being oxidized. Therefore, when a species at the top is coupled with a species at the bottom, the one at the top will become reduced while the one at the bottom will become oxidized. Below is a table of standard reduction potentials.

Standard Reduction Potential is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Temperature Basics
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Temperature_Basics
Learning Objectives

The concept of temperature may seem familiar to you, but many people confuse temperature with heat. Temperature is a measure of how hot or cold an object is relative to another object (its thermal energy content), whereas heat is the flow of thermal energy between objects with different temperatures.

Three different scales are commonly used to measure temperature: Fahrenheit (expressed as °F), Celsius (°C), and Kelvin (K). Thermometers measure temperature by using materials that expand or contract when heated or cooled. Mercury or alcohol thermometers, for example, have a reservoir of liquid that expands when heated and contracts when cooled, so the liquid column lengthens or shortens as the temperature of the liquid changes.

The Fahrenheit temperature scale was developed in 1717 by the German physicist Gabriel Fahrenheit, who designated the temperature of a bath of ice melting in a solution of salt as the zero point on his scale. Such a solution was commonly used in the 18th century to carry out low-temperature reactions in the laboratory. The scale was measured in increments of 12; its upper end, designated as 96°, was based on the armpit temperature of a healthy person—in this case, Fahrenheit's wife. Later, the number of increments shown on a thermometer increased as measurements became more precise. The upper point is based on the boiling point of water, designated as 212° to maintain the original magnitude of a Fahrenheit degree, whereas the melting point of ice is designated as 32°.

The Celsius scale was developed in 1742 by the Swedish astronomer Anders Celsius. It is based on the melting and boiling points of water under normal atmospheric conditions. The current scale is an inverted form of the original scale, which was divided into 100 increments. Because of these 100 divisions, the Celsius scale is also called the centigrade scale.

Lord Kelvin, working in Scotland, developed the Kelvin scale in 1848.
His scale uses molecular energy to define the extremes of hot and cold. Absolute zero, or 0 K, corresponds to the point at which molecular energy is at a minimum. The Kelvin scale is preferred in scientific work, although the Celsius scale is also commonly used. Temperatures measured on the Kelvin scale are reported simply as K, not °K.

A Comparison of the Fahrenheit, Celsius, and Kelvin Temperature Scales. Because the difference between the freezing point of water and the boiling point of water is 100° on both the Celsius and Kelvin scales, the size of a degree Celsius (°C) and a kelvin (K) are precisely the same. In contrast, both a degree Celsius and a kelvin are 9/5 the size of a degree Fahrenheit (°F).

The kelvin is the same size as the Celsius degree, so measurements are easily converted from one to the other. The freezing point of water is 0°C = 273.15 K; the boiling point of water is 100°C = 373.15 K. The Kelvin and Celsius scales are related as follows:

T (in °C) + 273.15 = T (in K)

T (in K) − 273.15 = T (in °C)

Degrees on the Fahrenheit scale, however, are based on an English tradition of using 12 divisions, just as 1 ft = 12 in. The relationships between degrees Fahrenheit and degrees Celsius are as follows:

°C = (5/9) × (°F − 32)

°F = (9/5) × (°C) + 32

where the coefficient for degrees Fahrenheit is exact. (Some calculators have a function that allows you to convert directly between °F and °C.) There is only one temperature for which the numerical value is the same on both the Fahrenheit and Celsius scales: −40°C = −40°F.

Exercise \(\PageIndex{1}\)

Convert the temperature of the surface of the sun (5800 K) and the boiling points of gold (3080 K) and liquid nitrogen (77.36 K) to °C and °F.

A student is ill with a temperature of 103.5°F. What is her temperature in °C and K?

Temperature Basics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
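The conversion relations above can be checked with a few lines of code; this is an added sketch that also works through the exercise's fever temperature.

```python
# Temperature conversions from the relations in the text.
def c_to_k(c): return c + 273.15
def k_to_c(k): return k - 273.15
def f_to_c(f): return (5 / 9) * (f - 32)
def c_to_f(c): return (9 / 5) * c + 32

print(round(c_to_k(100.0), 2))          # 373.15 (boiling point of water)
print(round(c_to_f(-40.0), 2))          # -40.0  (the scales coincide here)
print(round(f_to_c(103.5), 1))          # 39.7   (the exercise's fever, in °C)
print(round(c_to_k(f_to_c(103.5)), 1))  # 312.9  (and in K)
```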
The Cell Potential
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Voltaic_Cells/The_Cell_Potential
The batteries in your remote and the engine in your car are only a couple of examples of how chemical reactions create power through the flow of electrons. The cell potential is the way in which we can measure how much voltage exists between the two half cells of a battery. We will explain how this is done and what components allow us to find the voltage that exists in an electrochemical cell.

The cell potential, \(E_{cell}\), is the measure of the potential difference between two half cells in an electrochemical cell. The potential difference is caused by the ability of electrons to flow from one half cell to the other. Electrons are able to move between electrodes because the chemical reaction is a redox reaction. A redox reaction occurs when a certain substance is oxidized, while another is reduced. During oxidation, the substance loses one or more electrons, and thus becomes positively charged. Conversely, during reduction, the substance gains electrons and becomes negatively charged. This relates to the measurement of the cell potential because the difference between the potential for the reducing agent to become oxidized and the oxidizing agent to become reduced will determine the cell potential. The cell potential (Ecell) is measured in volts (V), which allows us to give a certain value to the cell potential.

An electrochemical cell is composed of two half cells. In one half cell, the oxidation of a metal electrode occurs, and in the other half cell, the reduction of metal ions in solution occurs. The half cell essentially consists of a metal electrode of a certain metal submerged in an aqueous solution of the same metal ions. The electrode is connected to the other half cell, which contains an electrode of some metal submerged in an aqueous solution of the corresponding metal ions. The first half cell, in this case, will be marked as the anode.
In this half cell, the metal atoms in the electrode become oxidized and join the other metal ions in the aqueous solution. An example of this would be a copper electrode, in which the Cu atoms in the electrode each lose two electrons and become Cu2+. The Cu2+ ions would then join the aqueous solution that already has a certain molarity of Cu2+ ions. The electrons lost by the Cu atoms in the electrode are then transferred to the second half cell, which will be the cathode. In this example, we will assume that the second half cell consists of a silver electrode in an aqueous solution of silver ions. As the electrons are passed to the Ag electrode, the Ag+ ions in solution will become reduced and become Ag atoms on the Ag electrode. In order to balance the charge on both sides of the cell, the half cells are connected by a salt bridge. As the anode half cell becomes overwhelmed with Cu2+ ions, the negative anion of the salt will enter the solution and stabilize the charge. Similarly, in the cathode half cell, as the solution becomes more negatively charged, cations from the salt bridge will stabilize the charge. For electrons to be transferred from the anode to the cathode, there must be some sort of energy potential that makes this phenomenon favorable. The potential energy that drives the redox reactions involved in electrochemical cells is the potential for the anode to become oxidized and the potential for the cathode to become reduced. The electrons involved in these cells will fall from the anode, which has a higher potential to become oxidized, to the cathode, which has a lower potential to become oxidized. This is analogous to a rock falling from a cliff in which the rock will fall from a higher potential energy to a lower potential energy.

Note

The difference between the cathode's potential to become reduced and the anode's potential to become reduced is the cell potential.
\[E^o_{Cell}= E^o_{Red,Cathode} - E^o_{Red,Anode}\]

Note: Here is the list of all the components: All of these components create the electrochemical cell. The image above is an electrochemical cell. The voltmeter at the very top in the gold color is what measures the cell voltage, or the amount of energy being produced by the electrodes. This reading from the voltmeter is called the voltage of the electrochemical cell. This can also be called the potential difference between the half cells, Ecell. Volts are the amount of energy per unit of electrical charge; 1 V = 1 J/C (V = volts, J = joules, C = coulombs). The voltage is basically what propels the electrons to move. If there is a high voltage, that means there is high movement of electrons. The voltmeter reads the transfer of electrons from the anode to the cathode in joules per coulomb.

The image above is called the cell diagram. The cell diagram is a representation of the overall reaction in the electrochemical cell. The chemicals involved are what are actually reacting during the reduction and oxidation reactions (the spectator ions are left out). In the cell diagram, the anode half cell is always written on the left side of the diagram, and the cathode half cell is always written on the right side of the diagram. The anode and cathode are separated by two vertical lines (||), as seen in the blue cloud above. The electrodes (yellow circles) of both the anode and cathode solutions are separated from their solutions by a single vertical line (|). When there are more chemicals involved in the aqueous solution, they are added to the diagram by adding a comma and then the chemical. For example, in the image above, if copper weren't being oxidized alone, and another chemical like K were involved, you would denote it as (Cu, K) in the diagram. The cell diagram makes it easier to see what is being oxidized and what is being reduced.
These are the reactions that create the cell potential.

The standard cell potential (\(E^o_{cell}\)) is the difference of the two electrode potentials, which forms the voltage of that cell. To find the difference of the two half cells, the following equation is used:

\[E^o_{Cell}= E^o_{Red,Cathode} - E^o_{Red,Anode} \tag{1a}\]

with the potentials typically measured in volts (V). Note that this equation can also be written as a sum rather than a difference:

\[E^o_{Cell}= E^o_{Red,Cathode} + E^o_{Ox,Anode} \tag{1b}\]

where we have switched our strategy from taking the difference between two reduction potentials (which are traditionally what one finds in reference tables) to taking the sum of the oxidation potential and the reduction potential (which are the reactions that actually occur). Since \(E^o_{Red} = -E^o_{Ox}\), the two approaches are equivalent.

The example will be using the picture of the copper and silver cell diagram. The oxidation half cell of the redox equation is:

Cu(s) → Cu2+(aq) + 2e-  EoOx = -0.340 V

where we have negated the reduction potential EoRed = 0.340 V, which is the quantity we found from a list of standard reduction potentials, to find the oxidation potential EoOx. The reduction half cell is:

( Ag+ + e- → Ag(s) ) ×2  EoRed = 0.800 V

where we have multiplied the reduction chemical equation by two in order to balance the electron count, but we have not doubled EoRed, since Eo values are given in units of voltage. Voltage is energy per charge, not energy per reaction, so it does not need to account for the number of reactions required to produce or consume the quantity of charge you are using to balance the equation.
The chemical equations can be summed to find:

Cu(s) + 2Ag+ + 2e- → Cu2+(aq) + 2Ag(s) + 2e-

and simplified to find the overall reaction:

Cu(s) + 2Ag+ → Cu2+(aq) + 2Ag(s)

where the potentials of the half-cell reactions can be summed:

EoCell = EoRed,Cathode + EoOx,Anode = 0.800 V + (-0.340 V) = 0.460 V

to find that the standard cell potential of this cell is 0.460 V. We are done. Note that since \(E^o_{Red} = -E^o_{Ox}\), we could have accomplished the same thing by taking the difference of the reduction potentials, where the absent or doubled negation accounts for the fact that the reverse of the reduction reaction is what actually occurs:

EoCell = EoRed,Cathode - EoRed,Anode = 0.800 V - 0.340 V = 0.460 V

The table below is a list of important standard electrode potentials in the reduction state. To determine oxidation potentials, the reduction equation can simply be flipped and its potential changed from positive to negative (and vice versa). When using the half cells below, the following equation can be used without changing any of the potentials from positive to negative (and vice versa):

EoCell = EoRed,Cathode - EoRed,Anode

For example, given a measured EoCell = 2.71 V for a cell whose cathode couple has EoRed = +0.401 V:

Eocell = 2.71 V = +0.401 V - Eo{[Al(OH)4]-(aq)/Al(s)}

Eo{[Al(OH)4]-(aq)/Al(s)} = 0.401 V - 2.71 V = -2.31 V

Confirm this on the table of standard reduction potentials.

The Cell Potential is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
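The arithmetic of the Cu/Ag worked example, and the back-solving of an unknown half-cell potential, can be reproduced in a few lines (an added illustration using the potentials given above):

```python
# Cu/Ag cell: sum the cathode reduction potential and the anode oxidation potential.
e_red_cathode = 0.800   # Ag+ + e- -> Ag(s)
e_ox_anode = -0.340     # Cu(s) -> Cu2+ + 2e- (negated reduction potential)
e_cell = e_red_cathode + e_ox_anode
print(round(e_cell, 3))  # 0.46

# Back-solving an unknown half cell from a known cell potential,
# as in the [Al(OH)4]-/Al example: E°anode = E°cathode - E°cell.
e_anode = 0.401 - 2.71
print(round(e_anode, 2))  # -2.31
```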
The Fall of the Electron
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/The_Fall_of_the_Electron
In oxidation-reduction ("redox") reactions, electrons are transferred from a donor (reducing agent) to an acceptor (oxidizing agent). But how can one predict whether, or in which direction, such a reaction will actually go? Presented below is a very simple way of understanding how different redox reactions are related.

When discussing acid-base reactions, a picture can be constructed of minimizing free energy for an acid/base system in which protons "fall" from higher-energy sources (acids) to lower-energy sinks (bases). Similarly, electron-transfer reactions spontaneously proceed in the direction in which electrons fall (in free energy) from sources (reducing agents) to sinks (oxidizing agents). For example: will the following reaction go in the forward or reverse direction as written?

\[ \ce{ Zn + Cu^{2+} -> Zn^{2+} + Cu}\]

Assume equal effective concentrations of the two ions to avoid favoring one or the other direction. We can make use of an electron free energy diagram, the relevant portion of which is shown here: Because electrons have a higher energy on Zn than they do on Cu, copper ions will serve as an electron sink to Zn, and the reaction will go to the right: the Zn gets oxidized to Zn2+, and the Cu2+ is reduced to metallic copper.

This is just minimizing the free energy of the entire system, the guiding principle of all reactions viewed from a thermodynamic perspective. Since free energy is related to the potential of a reaction via the relation \( \Delta{G} = -nFE \), the free energy (or Gibbs energy) on the y-axis can be substituted with the negative of the potential.

In this diagram, electron donors (otherwise known as reducing agents or reductants) are shown on the left, and their conjugate oxidants (acceptors) are on the right.
The vertical location of each redox couple represents the free energy of an electron in the reduced form of the couple, relative to the free energy of the electron when attached to the hydrogen ion (and thus in H2).

An oxidant can be regarded as a substance possessing vacant electron levels; the "stronger" the oxidizing agent, the lower the energy of the vacancy (the sink). If a reductant is added to a solution containing several oxidants, it will supply electrons to the various empty levels below it, filling them from the lowest up. Note, however, that electron transfer reactions can be very slow, so kinetic factors may alter the order in which these steps actually take place.

H2 and H2O. Locate the couples involving these two elements within the vertical section labeled "water stability range" (light blue background). The metals above the H2/H+ couple are known as the active metals because they can all donate electrons to \(H^+\), reducing it to \(H_2\) and leaving the metal cation. In other words, \(H^+\) can serve as an electron sink to these metals, which are therefore attacked by acid. But since some \(H^+\) is always present in water, all of these metals can react with water. Generally, the higher they are, the more readily they react. With zinc and below, reaction with water is so slow at room temperature as to be negligible, but these metals will be attacked by acidic solutions, in which the concentration of \(H^+\) ions is much greater. Those metals that are below hydrogen in this table are not attacked by \(H^+\) and are referred to as the noble metals. (Gold, Au, just below chlorine, is the noblest of all.)

The species on the right side below the H2O/O2 couple can all serve as electron sinks to water and will oxidize it to O2. However, this reaction can be extremely slow; only F2, the strongest of all the oxidizing agents (at the very bottom of the table), reacts quickly.
It turns out, then, that only those redox pairs situated within the water stability region are thermodynamically stable in aqueous solution; all others will tend to decompose the water. Three scales of free energy are shown in the figure on the right.

This diagram provides an overview of the major redox couples that provide the energy that drives the life process. Most organisms derive their metabolic energy from respiration, a process in which electrons from foodstuffs (nominally glucose) fall to lower-free-energy acceptors on the right. In eukaryotic organisms this electron sink is dioxygen. Aerobic respiration is the most efficient of all because the electron falls so far (as it does so, part of the energy is captured by a series of intermediates and used for the synthesis of ATP).

To make the glucose, animals rely on plants, which utilize the energy of sunlight to force electrons from \(H_2O\) back up to the top left of the diagram. This, of course, is photosynthesis, which is just respiration driven in reverse. Aerobic respiration is a fairly recent development in the history of life. There still exist a host of primitive organisms (all bacteria) that inhabit anoxic environments and must employ other electron sinks that reside higher on the scale, and thus yield smaller amounts of energy. Among the more familiar of these sinks:

Not all organisms start with glucose; H2, just below it, can serve as an electron source and was likely an important one during the earliest stages of life, as were most of the sources below it. If you already know some electrochemistry, you probably know how to use the Nernst Equation to carry out quantitative calculations.
Nevertheless, this is still a very helpful picture when you have to deal with multiple redox systems, which occur very commonly in environmental chemistry, analytical chemistry, and biochemistry.

This page titled The Fall of the Electron is shared under a CC BY license and was authored, remixed, and/or curated by Stephen Lower via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
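The relation \(\Delta G = -nFE\), used above to connect the diagram's energy scale to electrode potential, can be made concrete with a short computation. This is an added sketch; the E°cell value of 1.10 V for the Zn/Cu2+ reaction is a commonly tabulated figure assumed here, not stated on this page.

```python
# Free energy released by a redox reaction, ΔG = −nFE.
FARADAY = 96485.0  # Faraday constant, C per mole of electrons

def delta_g(n_electrons: int, e_cell_volts: float) -> float:
    """ΔG in J/mol; negative ΔG (positive E) means spontaneous."""
    return -n_electrons * FARADAY * e_cell_volts

# Zn + Cu2+ -> Zn2+ + Cu transfers n = 2 electrons; E°cell ≈ +1.10 V (assumed):
print(round(delta_g(2, 1.10) / 1000, 1), "kJ/mol")  # -212.3 kJ/mol
```

The large negative ΔG is the quantitative counterpart of the electron "falling" from Zn down to the Cu2+ sink.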
The Scientific Method
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/The_Scientific_Method
The Scientific Method is simply a framework for the systematic exploration of patterns in our world. It just so happens that this framework is extremely useful for the examination of chemistry and its many questions. The scientific process, an iterative process, uses the repeated acquisition and testing of data through experimental procedures to disprove hypotheses. A hypothesis is a proposed explanation of natural phenomena, and after a hypothesis has survived many rounds of testing, it may be accepted as a theory and used to explain the phenomena in question. Thus, the scientific method is not a linear process of steps, but a method of inductive reasoning.The Scientific Method is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Uncertainties in Measurements
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Significant_Digits/Uncertainties_in_Measurements
All measurements have a degree of uncertainty regardless of precision and accuracy. This is caused by two factors: the limitation of the measuring instrument (systematic error) and the skill of the experimenter making the measurements (random error).

The graduated buret shown contains a certain amount of water (with yellow dye) to be measured. The amount of water is somewhere between 19 mL and 20 mL according to the marked lines. By checking to see where the bottom of the meniscus lies, referencing the ten smaller lines, the amount of water lies between 19.8 mL and 20 mL. The next step is to estimate the uncertainty between 19.8 mL and 20 mL. Making an approximate guess, the level is less than 20 mL, but greater than 19.8 mL. We then report that the measured amount is approximately 19.9 mL. (Figure: a meniscus as seen in a buret of colored water; 20.00 mL is the correct depth measurement.)

Systematic errors: The graduated cylinder itself may be distorted such that the graduation marks contain inaccuracies, providing readings slightly different from the actual volume of liquid present. Errors of this kind are reproducible and shift every reading from the true value by a constant.

Random errors: Sometimes called human error, random error is determined by the experimenter's skill or ability to perform the experiment and read scientific measurements. These errors are random since the results yielded may be too high or too low. Often random error determines the precision of the experiment, or limits the precision. For example, if we were to time a revolution of a steadily rotating turntable, the random error would be the reaction time. Our reaction time would vary due to a delay in starting (an underestimate of the actual result) or a delay in stopping (an overestimate of the actual result). Unlike systematic errors, random errors vary in magnitude and direction.
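The practical difference between the two error types can be illustrated with a short simulation (the 19.90 mL "true" value and the error sizes below are illustrative assumptions): random scatter largely cancels when many readings are averaged, while a systematic offset survives averaging unchanged.

```python
import random
import statistics

random.seed(0)                 # reproducible illustration
true_volume = 19.90            # mL, the quantity being measured

# Random error: each reading scatters around the true value.
readings = [true_volume + random.gauss(0, 0.05) for _ in range(100)]
print(round(abs(statistics.mean(readings) - true_volume), 3))   # small: scatter averages out

# Systematic error: a miscalibrated instrument shifts EVERY reading the same way.
biased = [r + 0.20 for r in readings]
print(round(abs(statistics.mean(biased) - true_volume), 3))     # ~0.2 mL: averaging cannot remove it
```

This is why repeating a measurement improves precision but does nothing for an uncalibrated instrument.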
It is possible, however, to calculate the average of a set of measurements, and that average is likely to be more accurate than most of the individual measurements.

Uncertainties in Measurements is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Unit Conversions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Quantifying_Nature/Units_of_Measure/Unit_Conversions
Learning Objectives

In the field of science, the metric system is used in performing measurements. The metric system is actually easier to use than the English system, as you will see shortly. The metric system uses prefixes to indicate the magnitude of a measured quantity. The prefix itself gives the conversion factor. You should memorize some of the common prefixes, as you will be using them on a regular basis. Common prefixes are shown below:

Suppose you wanted to convert the mass of a \(250\; mg\) aspirin tablet to grams. Start with what you know and let the conversion factor units decide how to set up the problem. If a unit to be converted is in the numerator, that unit must be in the denominator of the conversion factor in order for it to cancel. Notice how the units cancel to give grams. The conversion factor numerator is shown as \(1 \times 10^{-3}\) because on most calculators, it must be entered in this fashion, not as just \(10^{-3}\). If you don't know how to use the scientific notation on your calculator, try to find out as soon as possible; look in your calculator's manual or ask someone who knows. Also, notice how the unit mg is assigned the value of 1, and the prefix, milli-, is applied to the gram unit. In other words, \(1\, mg\) literally means \(1 \times 10^{-3}\, g\).

Next, let's try a more involved conversion. Suppose you wanted to convert 250 mg to kg. You may or may not know a direct, one-step conversion. In fact, the foolproof method is to go to the base unit first, and then to the final unit you want.
In other words, convert the milligrams to grams and then go to kilograms:

Example \(\PageIndex{1}\)

The world's ocean is estimated to contain \(\mathrm{1.4 \times 10^9\; km^3}\) of water. How many hydrogen atoms are present in the ocean?

Solution

\(\mathrm{1.4e9\: km^3 \left (\dfrac{1000\: m}{1\: km} \right )^3 \left (\dfrac{10\: dm}{1\:m} \right )^3\\ = 1.4e21\: dm^3 \left (\dfrac{1\: L}{1\: dm^3} \right )\\ = 1.4e21\: L \left (\dfrac{1.1\: kg}{1\: L} \right )\\ = 1.5e21\: kg \left (\dfrac{1000\: g}{1\: kg} \right )\left (\dfrac{1\: mol}{18\: g} \right )\\ = 8.3e22\: mol\: H_2O \left (\dfrac{2\: mol\: H\: atoms}{1\: mol\: H_2O} \right )\\ = 1.7e23\: mol\: H \left (\dfrac{6.02e23\: atoms}{1\: mol} \right )\\ = 1.0e47\: H\: atoms}\)

In this example, a quantity has been converted from a unit for volume into other units of volume, weight, amount in moles, and number of atoms. Every factor used for the unit conversion is a unity: the numerator and denominator represent the same quantity in different ways. Even in this simple example, several concepts such as the quantity in moles, Avogadro's number, and specific density (or specific gravity) have been applied in the conversion. If you have not learned these concepts, you may have difficulty in understanding some of the conversion processes. Identify what you do not know and find out in your text or from a resource.

Example \(\PageIndex{2}\)

A typical city speed for automobiles is 50 km/hr. Some years ago, most people believed that 10 seconds for the 100 meter dash was the lowest limit. Which speed is faster, 50 km/hr or 10 m/s?

Solution

For comparison, the two speeds must be expressed in the same unit.
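The conversion chain in the ocean example can be replayed step by step in a few lines; the density of 1.1 kg/L and molar mass of 18 g/mol are the same assumptions used in the worked solution (note the doubling for two H atoms per water molecule):

```python
AVOGADRO = 6.022e23   # particles per mole

volume_km3 = 1.4e9
volume_L = volume_km3 * 1000**3 * 10**3   # km^3 -> m^3 -> dm^3, and 1 dm^3 = 1 L
mass_g = volume_L * 1.1 * 1000            # density 1.1 kg/L (as in the example), then kg -> g
mol_h2o = mass_g / 18.0                   # ~18 g/mol for water
mol_h = 2 * mol_h2o                       # two H atoms per water molecule

print(f"{mol_h2o * AVOGADRO:.1e} H2O molecules")   # ~5.2e+46
print(f"{mol_h * AVOGADRO:.1e} H atoms")           # ~1.0e+47
```

Each multiplication corresponds to one unity factor in the chain, so the units cancel exactly as they do on paper.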
Let's convert 50 km/hr to m/s.

\[ \mathrm{50 \;\dfrac{\cancel{km}}{\cancel{hr}} \left(\dfrac{1000\; m}{1\; \cancel{km}}\right) \left(\dfrac{1\; \cancel{hr}}{60\;\cancel{min}}\right) \left(\dfrac{1\;\cancel{min}}{ 60\; s}\right) =13.89\; m/s} \]

Thus, 50 km/hr is faster. Note: a different unit can be selected for the comparison (e.g., miles/hour), but the result will be the same (test this out if interested).

Exercise \(\PageIndex{1}\)

The speed of a typhoon is reported to be 100 m/s. What is the speed in km/hr and in miles per hour?

These conversions are accomplished in the same way as metric-metric conversions. The only difference is the conversion factor used. It would be a good idea to memorize a few conversion factors involving mass, volume, length, and temperature. Here are a few useful conversion factors.

All of the above conversions are to three significant figures, except length, which is an exact number. As before, let the units help you set up the conversion.

Suppose you wanted to convert the mass of a \(23\, lb\) cat to kilograms. One can quickly see that this conversion is not achieved in one step. The pound units will be converted to grams, and then from grams to kilograms. Let the units help you set up the problem:

\[ \dfrac{23 \, lb}{1} \times \dfrac{454\,g}{1 \, lb} \times \dfrac{1 \, kg}{ 1 \times 10^3 \, g} = 10 \, kg\]

Let's try a conversion which looks "intimidating", but actually uses the same basic concepts we have already examined. Suppose you wish to convert a pressure of 14 lb/in² to g/cm². When setting up the conversion, worry about one unit at a time; for example, convert the pound units to gram units first. Next, convert in² to cm². Set up the conversion without the exponent first, using the conversion factor 1 in = 2.54 cm. Since we need in² and cm², raise everything to the second power. Notice how the units cancel to the units sought.
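The two-stage pressure conversion just described can be checked numerically, with 454 g/lb and 2.54 cm/in as quoted above:

```python
LB_TO_G = 454.0     # 1 lb = 454 g (three significant figures)
IN_TO_CM = 2.54     # 1 in = 2.54 cm (exact)

p_lb_per_in2 = 14.0
# Grams replace pounds in the numerator; cm^2 replaces in^2 in the denominator,
# so the length factor is squared along with its units.
p_g_per_cm2 = p_lb_per_in2 * LB_TO_G / IN_TO_CM**2
print(round(p_g_per_cm2))   # 985 g/cm^2
```

Squaring the length factor is the step people most often forget when converting areas or volumes.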
Always check your units because they indicate whether or not the problem has been set up correctly.

Example \(\PageIndex{2}\): Convert Quantities into SI units

Mr. Smart is ready for a T-bone steak. He went to market A and found the price to be 4.99 dollars per kilogram. He drove out of town to a roadside market, which sells at $2.29 per pound. Which price is better for Mr. Smart?

Solution

To help Mr. Smart, we have to know that 1.0 kg is equivalent to 2.206531 lb, or 1 lb = 453.2 g. (By the way, are these the same?)

\[ \mathrm{4.99\; \dfrac{$}{\cancel{kg}} \left( \dfrac{1\; \cancel{kg}}{2.206531\; lb} \right) = 2.2615 \;\dfrac{$}{lb}}\]

Of course, there is no point quoting the price to this many digits; this raises the issue of significant digits. The price is therefore 2.26 $/lb, better for Mr. Smart than the price of 2.29 $/lb.

Skill - Converting a quantity into SI units.
Skill - Converting temperature from one scale to another.
Skill - Converting two quantities.
Skill - Determining the cost per unit of common volume.
Skill - Converting quantities into SI units.

Chung (Peter) Chieh (Professor Emeritus, Chemistry University of Waterloo)

Unit Conversions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
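Mr. Smart's comparison is a one-step unit conversion; a sketch using the factor quoted above:

```python
KG_TO_LB = 2.206531            # follows from 1 lb = 453.2 g

price_market_a = 4.99 / KG_TO_LB   # convert $/kg to $/lb
price_roadside = 2.29              # already in $/lb

print(round(price_market_a, 2))          # 2.26
print(price_market_a < price_roadside)   # True: market A is the better buy
```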
Using Redox Potentials to Predict the Feasibility of Reactions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Potentials/Using_Redox_Potentials_to_Predict_the_Feasibility_of_Reactions
This page explains how to use redox potentials (electrode potentials) to predict the feasibility of redox reactions. It also looks at how you go about choosing a suitable oxidizing agent or reducing agent for a particular reaction.Standard electrode potentials (redox potentials) are one way of measuring how easily a substance loses electrons. In particular, they give a measure of relative positions of equilibrium in reactions such as:\[ Zn^{2+} + 2e^- \rightleftharpoons Zn (s)\]with \(E^o = -0.76 \,V\) and\[ Cu^{2+} + 2e^- \rightleftharpoons Cu (s)\]with \(E^o = +0.34 \,V\).The more negative the E° value, the further the position of equilibrium lies to the left. Remember that this is always relative to the hydrogen equilibrium - and not in absolute terms. The negative sign of the zinc E° value shows that it releases electrons more readily than hydrogen does. The positive sign of the copper E° value shows that it releases electrons less readily than hydrogen.The more negative the E° value, the further the position of equilibrium lies to the left of a standard reduction half reaction.Whenever you link two of these equilibria together (either via a bit of wire, or by allowing one of the substances to give electrons directly to another one in a test tube) electrons flow from one equilibrium to the other. That upsets the equilibria and Le Chatelier's principle applies to figure out how new equilibria are established. 
The positions of equilibrium move - and keep on moving if the electrons continue to be transferred. The two equilibria essentially turn into two one-way reactions.

Example \(\PageIndex{1}\): Magnesium and Sulfuric Acid

Will magnesium react with dilute sulfuric acid?

Solution

The relevant reduction reactions and associated potentials (via Table P2) for this system are \[ Mg^{2+} (aq) + 2e^- \rightleftharpoons Mg (s)\]with \(E^o = -2.37 \,V\) and \[S_2O_8^{2−} + 2e^− \rightleftharpoons 2SO_4^{2−}\]with \(E^o = +1.96 \,V\) and, since we are in an aqueous solvent, \[ 2H^{+} (aq) + 2e^- \rightleftharpoons H_2 (g)\]with \(E^o = 0 \,V\).

The sulfate ions are spectator ions and play no part in the reaction; you are essentially starting with magnesium metal and hydrogen ions in the acid. Is there anything to stop the sort of movements we have suggested? No! The magnesium can freely turn into magnesium ions and give electrons to the hydrogen ions, producing hydrogen gas. The reaction is feasible.

Now for a reaction which turns out not to be feasible...

Example \(\PageIndex{2}\): Copper and Sulfuric Acid

Will copper react with dilute sulfuric acid?

Solution

You probably know that the answer is that it will not. How do the E° values predict this? The relevant reduction reactions and associated potentials (via Table P2) for this system are \[ Cu^{2+} (aq) + 2e^- \rightleftharpoons Cu (s)\]with \(E^o = +0.34 \,V\) and, since we are in an aqueous solvent, \[ 2H^{+} (aq) + 2e^- \rightleftharpoons H_2 (g)\]with \(E^o = 0 \,V\).

Doing the same sort of thinking as in Example \(\PageIndex{1}\), the diagram shows the way that the E° values are telling us that the equilibria will tend to move. Is this possible?
No! There is no possibility of a reaction.

In the next couple of examples, decide for yourself whether or not the reaction is feasible before you read the text.

Example \(\PageIndex{3}\): Iron and Hydroxide Ions

Will oxygen oxidize iron(II) hydroxide to iron(III) hydroxide under alkaline conditions?

Solution

The relevant reduction reactions and associated potentials (via Table P2) for this system are \[ Fe(OH)_3 (s) + e^- \rightleftharpoons Fe(OH)_2 (s) + OH^- (aq)\]with \(E^o = -0.56 \,V\) and \[ O_2 (g) + 2H_2O (l) + 4e^- \rightleftharpoons 4 OH^- (aq)\]with \(E^o = +0.40 \,V\).

Think about this before you read on. Remember that the equilibrium with the more negative E° value will tend to move to the left; the other one tends to move to the right. Is that possible? Yes, it is possible. Given what we are starting with, both of these equilibria can move in the directions required by the E° values. The reaction is feasible.

Example \(\PageIndex{4}\)

Will chlorine oxidize manganese(II) ions to manganate(VII) ions in acidic solutions?

Solution

The relevant reduction reactions and associated potentials (via Table P2) for this system are \[ MnO_4^- (aq) + 8H^+(aq) + 5e^- \rightleftharpoons Mn^{2+} (aq) + 4H_2O (l) \]with \(E^o = +1.51 \,V\) and \[ Cl_2 (g) + 2e^- \rightleftharpoons 2Cl^- (aq) \]with \(E^o = +1.36 \,V\).

Again, think about this before you read on. Given what you are starting from, these equilibrium shifts are impossible. The manganese equilibrium has the more positive E° value and so will tend to move to the right. However, because we are starting from manganese(II) ions, it is already as far to the right as possible. In order to get any reaction, the equilibrium would have to move to the left.
That is against what the E° values are saying. This reaction is not feasible.

Example \(\PageIndex{5}\): Copper and Nitric Acid

Will dilute nitric acid react with copper?

Solution

This is going to be more complicated because there are two different ways in which dilute nitric acid might possibly react with copper: the copper might react with the hydrogen ions or with the nitrate ions. Nitric acid reactions are always more complex than those of simpler acids like sulfuric or hydrochloric acid because of this problem.

The relevant reduction reactions and associated potentials (via Table P2) for this system are \[ Cu^{2+}(aq) + 2e^- \rightleftharpoons Cu (s) \]with \(E^o = +0.34 \,V\) and \[ 2H^{+} (aq) + 2e^- \rightleftharpoons H_2 (g)\]with \(E^o = 0 \,V\) and \[ NO_3^{-}(aq) + 4H^+ + 3e^- \rightleftharpoons NO(g) + 2H_2O(l) \]with \(E^o = +0.96 \,V\).

We have already discussed the possibility of copper reacting with hydrogen ions in Example \(\PageIndex{2}\). Go back and look at it again if you need to, but the argument (briefly) goes like this: the copper equilibrium has a more positive E° value than the hydrogen one. That means that the copper equilibrium will tend to move to the right and the hydrogen one to the left. However, if we start from copper and hydrogen ions, the equilibria are already as far that way as possible. Any reaction would need them to move in the opposite direction to what the E° values want. The reaction is not feasible.

What about a reaction between the copper and the nitrate ions? This is feasible. The nitrate ion equilibrium has the more positive E° value and will move to the right. The copper E° value is less positive; that equilibrium will move to the left. The movements that the E° values suggest are possible, and so a reaction is feasible. Copper(II) ions are produced together with nitrogen monoxide gas.

It sometimes happens that E° values suggest that a reaction ought to happen, but it does not.
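The reasoning used in these examples reduces to comparing two reduction potentials: the couple with the more positive E° tends to run forwards (as a reduction) and drives the more negative couple backwards (as an oxidation). A small sketch of that comparison rule, using the E° values quoted above (the function name is my own shorthand, not standard terminology):

```python
def oxidation_is_feasible(e_oxidant_couple, e_reductant_couple):
    """True when the proposed oxidant's couple has the more positive E°,
    i.e. it can drive the other couple backwards, oxidizing its reduced form."""
    return e_oxidant_couple > e_reductant_couple

# Mg + dilute H2SO4: H+ (0.00 V) vs Mg2+/Mg (-2.37 V) -> feasible
print(oxidation_is_feasible(0.00, -2.37))   # True
# Cu + dilute H2SO4: H+ (0.00 V) vs Cu2+/Cu (+0.34 V) -> not feasible
print(oxidation_is_feasible(0.00, 0.34))    # False
# Cl2 (+1.36 V) oxidizing Mn2+ to MnO4- (+1.51 V) -> not feasible
print(oxidation_is_feasible(1.36, 1.51))    # False
```

Remember that, as the next examples show, this comparison is a thermodynamic test only: it says nothing about reaction rate, and it assumes standard conditions.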
Occasionally, a reaction happens although the E° values seem to be the wrong way around. These next two examples explain how that can happen. By coincidence, both involve the dichromate(VI) ion in potassium dichromate(VI).

Example \(\PageIndex{6}\): Potassium Dichromate and Water

Will acidified potassium dichromate(VI) oxidize water?

Solution

The relevant reduction reactions and associated potentials (via Table P2) for this system are \[Cr_2O_7^{2-} (aq) + 14H^+(aq) + 6e^- \rightleftharpoons 2Cr^{3+} (aq) + 7H_2O (l) \]with \(E^o = +1.33 \,V\) and \[ O_2 (g) + 4H^+(aq) + 4e^- \rightleftharpoons 2H_2O (l) \]with \(E^o = +1.23 \,V\).

The relative sizes of the E° values show that the reaction is feasible. However, in the test tube nothing happens however long you wait: an acidified solution of potassium dichromate(VI) does not react with the water that it is dissolved in. So what is wrong with the argument?

In fact, there is nothing wrong with the argument. The problem is that all the E° values show is that a reaction is possible. They do not tell you that it will actually happen, as there may be very large activation barriers to the reaction which prevent it from taking place. Hence, always treat what E° values tell you with some caution.
All they tell you is whether a reaction is feasible - they tell you nothing about how fast the reaction will happen; that is the subject of kinetics. The E° values show only that a reaction is possible; they do not tell you that it will actually happen (or on what timescale).

Example \(\PageIndex{7}\): Acidified Potassium Dichromate(VI)

Will acidified potassium dichromate(VI) oxidize chloride ions to chlorine?

Solution

The relevant reduction reactions and associated potentials (via Table P2) for this system are \[Cr_2O_7^{2-} (aq) + 14H^+(aq) + 6e^- \rightleftharpoons 2Cr^{3+} (aq) + 7H_2O (l) \]with \(E^o = +1.33 \,V\) and \[ Cl_2 (g) + 2e^- \rightleftharpoons 2Cl^- (aq) \]with \(E^o = +1.36 \,V\).

Because the chlorine E° value is slightly greater than the dichromate(VI) one, there should not be any reaction: for a reaction to occur, the equilibria would have to move in the wrong directions. Unfortunately, in the test tube, potassium dichromate(VI) solution does oxidize concentrated hydrochloric acid to chlorine. The hydrochloric acid serves as the source of the hydrogen ions in the dichromate(VI) equilibrium and of the chloride ions. The problem here is that E° values only apply under standard conditions. If you change the conditions you will change the position of an equilibrium - and that will change its E value. The standard condition for concentration is \(1\: mol\: dm^{-3}\), but concentrated hydrochloric acid is approximately \(10\: mol\: dm^{-3}\); the concentrations of the hydrogen ions and chloride ions are far in excess of standard.

What effect does that have on the two positions of equilibrium? Because the E° values are so similar, you do not have to change them very much to make the dichromate(VI) one the more positive. As soon as that happens, it will react with the chloride ions to produce chlorine. In most cases, there is enough difference between E° values that you can ignore the fact that you are not doing a reaction under strictly standard conditions.
But sometimes it does make a difference. Be careful!

Remember that oxidation is loss of electrons, and an oxidizing agent oxidizes something by removing electrons from it. That means that the oxidizing agent gains electrons. It is easier to explain this with a specific example. What could you use to oxidize iron(II) ions to iron(III) ions? The E° value for this reaction is: \[ Fe^{3+}(aq) + e^- \rightleftharpoons Fe^{2+} (aq) \]with \(E^o = +0.77 \,V\).

To change iron(II) ions into iron(III) ions, you need to persuade this equilibrium to move to the left. That means that when you couple it to a second equilibrium, this iron E° value must be the more negative (or less positive) one. Experimentally, you could use anything which has a more positive E° value. For example, you could use dilute nitric acid: \[ NO_3^{-}(aq) + 4H^+(aq) + 3e^- \rightleftharpoons NO(g) + 2H_2O (l) \]with \(E^o = +0.96 \,V\), or acidified potassium dichromate(VI): \[Cr_2O_7^{2-} (aq) + 14H^+(aq) + 6e^- \rightleftharpoons 2Cr^{3+} (aq) + 7H_2O (l) \]with \(E^o = +1.33 \,V\), or chlorine: \[ Cl_2 (g) + 2e^- \rightleftharpoons 2Cl^- (aq) \]with \(E^o = +1.36 \,V\), or acidified potassium manganate(VII): \[ MnO_4^- (aq) + 8H^+(aq) + 5e^- \rightleftharpoons Mn^{2+} (aq) + 4H_2O (l) \]with \(E^o = +1.51 \,V\).

Remember, reduction is gain of electrons, and a reducing agent reduces something by giving electrons to it. That means that the reducing agent loses electrons. You have to be a little bit more careful this time, because the substance losing electrons is found on the right-hand side of one of these redox equilibria. Again, a specific example makes it clearer. What could you use to reduce chromium(III) ions to chromium(II) ions? The E° value is: \[Cr^{3+}(aq) + e^- \rightleftharpoons Cr^{2+}(aq)\]with \(E^o = -0.41 \,V\).

You need this equilibrium to move to the right. That means that when you couple it with a second equilibrium, this chromium E° value must be the more positive (less negative) one.
In principle, you could choose anything with a more negative E° value - for example, zinc:\[Zn^{2+} (aq) + 2e^- \rightleftharpoons Zn (s)\]with \(E^o = -0.76 \,V\).You would have to remember to start from metallic zinc, and not zinc ions. You need this second equilibrium to be able to move to the left to provide the electrons. If you started with zinc ions, it would already be on the left - and would have no electrons to give away. Nothing could possibly happen if you mixed chromium(III) ions and zinc ions. That is fairly obvious in this simple case. If you were dealing with a more complicated equilibrium, you would have to be careful to think it through properly.Jim Clark (Chemguide.co.uk)This page titled Using Redox Potentials to Predict the Feasibility of Reactions is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jim Clark.
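The selection logic in this section — pick any couple with a more positive E° to act as the oxidant, or a more negative one to act as the reductant — can be sketched over the table of couples quoted on this page (the couple labels and function names are my own shorthand):

```python
# Standard reduction potentials quoted in this section (volts)
couples = {
    "Fe3+/Fe2+":     0.77,
    "NO3-,H+/NO":    0.96,
    "Cr2O7^2-/Cr3+": 1.33,
    "Cl2/Cl-":       1.36,
    "MnO4-,H+/Mn2+": 1.51,
    "Cr3+/Cr2+":    -0.41,
    "Zn2+/Zn":      -0.76,
}

def suitable_oxidants(e_target):
    """Couples with E° more positive than the target's can remove
    electrons from the target couple's reduced form."""
    return [name for name, e in couples.items() if e > e_target]

def suitable_reductants(e_target):
    """Couples with E° more negative can push electrons into the target."""
    return [name for name, e in couples.items() if e < e_target]

print(suitable_oxidants(0.77))     # candidates for oxidizing Fe2+ to Fe3+
print(suitable_reductants(-0.41))  # candidates for reducing Cr3+ to Cr2+, e.g. Zn
```

As noted above, the candidate must be supplied in the right form: metallic zinc for the Zn2+/Zn couple, not zinc ions.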
Electricity and the Waterfall Analogy
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Voltage_Amperage_and_Resistance_Basics
To discuss electrochemistry meaningfully, the fundamental properties of electricity must be defined.

The first is voltage, usually abbreviated "V" and measured in volts (also abbreviated "V"). The voltage between two points is a short name for the electrical force that would drive an electric current between those points. In the case of static electric fields, the voltage between two points is equal to the electrical potential difference between those points; in the more general case, with electric and magnetic fields that vary with time, the terms are no longer synonymous. Electric potential is the energy required to move a unit electric charge to a particular place in a static electric field. Voltage, also sometimes called potential difference or electromotive force (EMF), refers to the amount of potential energy the electrons have in an object or circuit. In some ways, you can think of this as the amount of "push" the electrons are making to try to get towards a positive charge. The more energy the electrons have, the stronger the voltage.

The second is current, the rate of flow of electric charge. This flowing electric charge is typically carried by moving electrons in a conductor such as a wire; in an electrolyte, it is instead carried by ions. The SI unit for measuring the rate of flow of electric charge is the ampere or amp, abbreviated "A", and electric current is measured using an ammeter. Current is usually abbreviated "I" ("C" is reserved for charge, measured in coulombs). Current refers to how much electricity is flowing - how many electrons are moving through a circuit in a unit of time.

The third is resistance. The resistance of an object is a measure of its opposition to the passage of a steady electric current. An object of uniform cross section will have a resistance proportional to its length, inversely proportional to its cross-sectional area, and proportional to the resistivity of the material.
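The resistance of a uniform conductor follows directly from that statement: R = ρL/A. A small numeric sketch (the resistivity is a typical handbook value for copper; the wire dimensions are illustrative assumptions):

```python
# Resistance of a uniform conductor: R = rho * L / A
rho = 1.68e-8      # resistivity of copper, ohm*m (typical handbook value)
length = 10.0      # conductor length, m
area = 1.0e-6      # cross-sectional area, m^2 (a 1 mm^2 wire)

resistance = rho * length / area   # doubling length doubles R; doubling area halves R
print(f"{resistance:.3f} ohm")     # 0.168 ohm
```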
Discovered by Georg Ohm in 1827, electrical resistance shares some conceptual parallels with the mechanical notion of friction. The SI unit of electrical resistance is the ohm (Ω). Resistance refers to how much the material that is conducting electricity opposes the flow of electrons; the higher the resistance, the harder it is for the electrons to push through.

If we draw an analogy to a waterfall, the voltage represents the height of the waterfall: the higher it is, the more potential energy the water has by virtue of its distance from the bottom of the falls, and the more energy it will possess as it hits the bottom. Current represents how much water goes over the edge of the falls each second. Resistance refers to any obstacle that slows down the flow of water over the edge of the falls (e.g., rocks in the river before the edge).

Voltage, current, and resistance are related via a principle known as Ohm's Law: \[ V = IR \] which states that the voltage of a circuit is equal to the current through the circuit times its resistance. Another way of stating Ohm's Law, which is often easier to understand, is: \[ I = \dfrac{V}{R} \] which means that the current through a circuit is equal to the voltage divided by the resistance. This makes sense if you think about our waterfall example: the higher the waterfall, the more water will want to rush through, but only to the extent that any opposing forces allow. If you tried to fit Niagara Falls through a garden hose, you'd only get so much water every second, no matter how high the falls and no matter how much water was waiting to get through. And if you replace that hose with one of a larger diameter, you will get more water in the same amount of time.

Electricity and the Waterfall Analogy is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Voltaic Cells
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Voltaic_Cells
In redox reactions, electrons are transferred from one species to another. If the reaction is spontaneous, energy is released, which can then be used to do useful work. To harness this energy, the reaction must be split into two separate half-reactions: the oxidation and reduction reactions. The reactions are put into two different containers and a wire is used to drive the electrons from one side to the other. In doing so, a voltaic/galvanic cell is created.

When a redox reaction takes place, electrons are transferred from one species to the other. If the reaction is spontaneous, energy is released, which can be used to do work. Consider the reaction of solid copper (Cu(s)) in a silver nitrate solution (AgNO3).

\[2Ag^+_{(aq)} + Cu_{(s)} \rightleftharpoons Cu^{2+}_{(aq)} + 2Ag_{(s)}\]

The \(AgNO_3\) dissociates in water to produce \(Ag^+_{(aq)}\) ions and \(NO_{3\;(aq)}^-\) ions. The \(NO_{3\;(aq)}^-\) ions can be ignored since they are spectator ions and do not participate in the reaction. In this reaction, a copper electrode is placed into a solution containing silver ions. The Ag+(aq) will readily oxidize Cu(s), resulting in Cu2+(aq), while itself being reduced to Ag(s).

This reaction releases energy. When the copper electrode is placed directly into a silver nitrate solution, however, the energy is lost as heat and cannot be used to do work. In order to harness this energy and use it to do useful work, we must split the reaction into two separate half-reactions: the oxidation and reduction reactions. A wire connects the two reactions and allows electrons to flow from one side to the other. In doing so, we have created a voltaic/galvanic cell.

A Voltaic Cell (also known as a Galvanic Cell) is an electrochemical cell that uses spontaneous redox reactions to generate electricity. It consists of two separate half-cells. A half-cell is composed of an electrode (a strip of metal, M) within a solution containing \(M^{n+}\) ions, in which M is any arbitrary metal.
The two half-cells are linked together by a wire running from one electrode to the other. A salt bridge also connects the half-cells. The functions of these parts are discussed below. Half of the redox reaction occurs at each half-cell; therefore, we can say that in each half-cell a half-reaction is taking place. When the two halves are linked together with a wire and a salt bridge, an electrochemical cell is created.

An electrode is a strip of metal on which the reaction takes place. In a voltaic cell, the oxidation and reduction of metals occurs at the electrodes. There are two electrodes in a voltaic cell, one in each half-cell. Reduction takes place at the cathode and oxidation takes place at the anode.

In electrochemistry, these reactions take place on metal surfaces, or electrodes. An oxidation-reduction equilibrium is established between the metal and the substances in solution. When an electrode is immersed in a solution containing ions of the same metal, it is called a half-cell. Electrolytes are ions in solution, usually in a fluid, that conduct electricity through ionic conduction. Two possible interactions can occur between the metal atoms on the electrode and the ion solutions: when an electrode is oxidized in a solution, it is called an anode, and when an electrode is reduced in solution, it is called a cathode.

Remembering Oxidation and Reduction

When it comes to redox reactions, it is important to understand what it means for a metal to be "oxidized" or "reduced". An easy way to do this is to remember the phrase "OIL RIG".

OIL = Oxidation Is Loss (of e-)
RIG = Reduction Is Gain (of e-)

In the case of the example above, \(Ag^+_{(aq)}\) gains an electron, meaning it is reduced. \(Cu_{(s)}\) loses two electrons, thus it is oxidized.

The salt bridge is a vital component of any voltaic cell. It is a tube filled with an electrolyte solution such as KNO3 or KCl.
The purpose of the salt bridge is to keep the solutions electrically neutral and allow the free flow of ions from one cell to another. Without the salt bridge, positive and negative charges would build up around the electrodes, causing the reaction to stop.

Electrons always flow from the anode to the cathode, or from the oxidation half-cell to the reduction half-cell. In terms of the E°cell values of the half-reactions, the electrons will flow from the more negative half-reaction to the more positive half-reaction.

A cell diagram is a representation of an electrochemical cell; the figure illustrates a cell diagram for the voltaic cell shown above. When drawing a cell diagram, we follow these conventions: the anode is always placed on the left side, and the cathode is placed on the right side. The salt bridge is represented by double vertical lines (||). The difference in the phase of an element is represented by a single vertical line (|), while changes in oxidation states are represented by commas (,).

When asked to construct a cell diagram, follow these simple instructions. Consider the following reaction:

\[2Ag^+_{(aq)} + Cu_{(s)} \rightleftharpoons Cu^{2+}_{(aq)} + 2Ag_{(s)}\]

Step 1: Write the two half-reactions.

\[Ag^+_{(aq)} + e^- \rightleftharpoons Ag_{(s)}\]

\[Cu_{(s)} \rightleftharpoons Cu^{2+}_{(aq)} + 2e^-\]

Step 2: Identify the cathode and anode.

\(Cu_{(s)}\) is losing electrons and thus being oxidized; oxidation occurs at the anode. \(Ag^+\) is gaining electrons and thus being reduced; reduction happens at the cathode.

Step 3: Construct the cell diagram.

\[Cu_{(s)} | Cu^{2+}_{(aq)} || Ag^+_{(aq)} | Ag_{(s)}\]

The anode always goes on the left and the cathode on the right. Separate changes in phase by | and indicate the salt bridge with ||.
The lack of concentrations indicates that the solutions are at standard conditions (i.e., 1 M).

Example \(\PageIndex{1}\)

Consider the following two reactions. For each one, (1) write the half-reactions and identify the anode and cathode, (2) construct the cell diagram, and (3) calculate the standard cell potential.

\[Cu^{2+}_{(aq)} + Ba_{(s)} \rightarrow Cu_{(s)} + Ba^{2+}_{(aq)}\]

\[2Al_{(s)} + 3Sn^{2+}_{(aq)} \rightarrow 2Al^{3+}_{(aq)} + 3Sn_{(s)}\]

Solution

1.a) Ba(s) → Ba2+(aq) + 2e-, with SRP (for the reverse reaction) Eo = -2.92 V (anode, where oxidation happens)

Cu2+(aq) + 2e- → Cu(s), with SRP Eo = +0.340 V (cathode, where reduction happens)

1.b) Al(s) → Al3+(aq) + 3e-, with SRP (for the reverse reaction) Eo = -1.66 V (anode, where oxidation happens)

Sn2+(aq) + 2e- → Sn(s), with SRP Eo = -0.137 V (cathode, where reduction happens)

2.a) Ba(s) | Ba2+(aq) || Cu2+(aq) | Cu(s)

2.b) Al(s) | Al3+(aq) || Sn2+(aq) | Sn(s)

3.a) Eocell = 0.340 - (-2.92) = 3.26 V

3.b) Eocell = -0.137 - (-1.66) = 1.523 V

The readings from the voltmeter give the reaction's cell voltage, the potential difference between its two half-cells. Cell voltage is also known as cell potential or electromotive force (emf), and it is given the symbol \(E_{cell}\). Standard Cell Potential:

\[E^o_{cell} = E^o_{cathode} - E^o_{anode}\]

The Eo values are tabulated with all solutes at 1 M and all gases at 1 atm. These values are called standard reduction potentials. Each half-reaction has a different reduction potential; the difference between two reduction potentials gives the voltage of the electrochemical cell. If Eocell is positive, the reaction is spontaneous and the cell is a voltaic cell. If Eocell is negative, the reaction is non-spontaneous and the cell is referred to as an electrolytic cell.

Voltaic Cells is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
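The arithmetic in the worked example above can be checked with a few lines of code. This is a minimal sketch assuming the standard reduction potentials quoted in the example; the `STD_REDUCTION` dictionary and the `cell_potential` helper are illustrative names, not part of any library:

```python
# Standard reduction potentials (V) quoted in the example above.
STD_REDUCTION = {
    "Ba2+/Ba": -2.92,
    "Cu2+/Cu": 0.340,
    "Al3+/Al": -1.66,
    "Sn2+/Sn": -0.137,
}

def cell_potential(cathode, anode):
    """Eo(cell) = Eo(cathode) - Eo(anode), both as reduction potentials."""
    return STD_REDUCTION[cathode] - STD_REDUCTION[anode]

# Cu2+ + Ba -> Cu + Ba2+ : Cu2+/Cu is the cathode, Ba2+/Ba the anode.
e1 = cell_potential("Cu2+/Cu", "Ba2+/Ba")   # 0.340 - (-2.92) = 3.26 V
# 2Al + 3Sn2+ -> 2Al3+ + 3Sn : Sn2+/Sn is the cathode, Al3+/Al the anode.
e2 = cell_potential("Sn2+/Sn", "Al3+/Al")   # -0.137 - (-1.66) = 1.523 V

# A positive Eo(cell) means the reaction is spontaneous (a voltaic cell);
# a negative value would indicate an electrolytic cell.
assert e1 > 0 and e2 > 0
```

Note that the number of electrons transferred does not enter this calculation: reduction potentials are intensive quantities and are not scaled when a half-reaction is multiplied through to balance electrons.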
Writing Equations for Redox Reactions
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Redox_Chemistry/Writing_Equations_for_Redox_Reactions
This page explains how to work out electron-half-reactions for oxidation and reduction processes, and then how to combine them to give the overall ionic equation for a redox reaction. This is an important skill in inorganic chemistry.

The ionic equation for the magnesium-aided reduction of hot copper(II) oxide to elemental copper is given below:

\[\ce{Cu^{2+} + Mg \rightarrow Cu + Mg^{2+}}\nonumber \]

The equation can be split into two parts and considered from the separate perspectives of the elemental magnesium and of the copper(II) ions. This arrangement clearly indicates that the magnesium has lost two electrons, and the copper(II) ion has gained them.

\[ \ce{ Mg \rightarrow Mg^{2+} + 2e^-}\nonumber \]

\[\ce{Cu^{2+} + 2e^- \rightarrow Cu}\nonumber \]

These two equations are described as "electron-half-equations," "half-equations," "ionic-half-equations," or "half-reactions." Every redox reaction is made up of two half-reactions: in one, electrons are lost (an oxidation process); in the other, those electrons are gained (a reduction process).

In the example above, the electron-half-equations were obtained by extracting them from the overall ionic equation. In practice, the reverse process is often more useful: starting with the electron-half-equations and using them to build the overall ionic equation.

Example \(\PageIndex{1}\): The reaction between Chlorine and Iron(II) Ions

Chlorine gas oxidizes iron(II) ions to iron(III) ions. In the process, the chlorine is reduced to chloride ions. From this information, the overall reaction can be obtained. The chlorine reaction, in which chlorine gas is reduced to chloride ions, is considered first:

\[\ce{ Cl_2 \rightarrow Cl^{-}}\nonumber \]

The atoms in the equation must be balanced:

\[\ce{ Cl_2 \rightarrow 2Cl^{-}}\nonumber \]

This step is crucial. If any atoms are unbalanced, problems will arise later.

To completely balance a half-equation, all charges and extra atoms must be equal on the reactant and product sides.
In order to accomplish this, the following can be added to the equation: electrons, water molecules, or (in acidic solution) hydrogen ions.

In the chlorine case, the only problem is a charge imbalance. The left-hand side of the equation has no charge, but the right-hand side carries 2 negative charges. This is easily resolved by adding two electrons to the left-hand side. The fully balanced half-reaction is:

\[\ce{ Cl_2 + 2e^- \rightarrow 2Cl^{-}}\nonumber \]

Next the iron half-reaction is considered. Iron(II) ions are oxidized to iron(III) ions as shown:

\[ \ce{Fe^{2+} \rightarrow Fe^{3+}}\nonumber \]

The atoms balance, but the charges do not. There are 3 positive charges on the right-hand side, but only 2 on the left. To reduce the number of positive charges on the right-hand side, an electron is added to that side:

\[ \ce{Fe^{2+} \rightarrow Fe^{3+} + e^-}\nonumber \]

The next step is combining the two balanced half-equations to form the overall equation. The two half-equations are shown below:

\[\ce{ Cl_2 + 2e^- \rightarrow 2Cl^{-}}\nonumber \]

\[ \ce{Fe^{2+} \rightarrow Fe^{3+} + e^-}\nonumber \]

It is obvious that the iron reaction will have to happen twice for every chlorine reaction. This is accounted for in the following way: each equation is multiplied by the value that will give equal numbers of electrons, and the two resulting equations are added together such that the electrons cancel out:

\[\ce{ Cl_2 + 2Fe^{2+} \rightarrow 2Cl^{-} + 2Fe^{3+}}\nonumber \]

At this point, it is important to check once more for atom and charge balance. In this case, no further work is required.

Example \(\PageIndex{2}\): The reaction between Hydrogen Peroxide and Manganate Ions

The first example concerned a very simple and familiar chemical equation, but the technique works just as well for more complicated (and perhaps unfamiliar) chemistry.

Manganate(VII) ions, MnO4-, oxidize hydrogen peroxide, H2O2, to oxygen gas. The reaction is carried out with potassium manganate(VII) solution and hydrogen peroxide solution acidified with dilute sulfuric acid.
As the oxidizing agent, manganate(VII) is reduced to manganese(II).

The hydrogen peroxide reaction is written first according to the information given:

\[ \ce{H_2O_2 \rightarrow O_2} \nonumber \]

The oxygen is already balanced, but the right-hand side has no hydrogen. All you are allowed to add to this equation are water, hydrogen ions and electrons. Adding water is obviously unhelpful: if water is added to the right-hand side to supply extra hydrogen atoms, an additional oxygen atom is needed on the left. Hydrogen ions are a better choice.

Adding two hydrogen ions to the right-hand side gives:

\[ \ce{ H_2O_2 \rightarrow O_2 + 2H^{+}} \nonumber \]

Next the charges are balanced by adding two electrons to the right, making the overall charge on both sides zero:

\[ \ce{ H_2O_2 \rightarrow O_2 + 2H^{+} + 2e^{-}}\nonumber \]

Next the manganate(VII) half-equation is considered:

\[MnO_4^- \rightarrow Mn^{2+}\nonumber \]

The manganese atoms are balanced, but the right needs four extra oxygen atoms. These can only come from water, so four water molecules are added to the right:

\[ MnO_4^- \rightarrow Mn^{2+} + 4H_2O\nonumber \]

The water introduces eight hydrogen atoms on the right. To balance these, eight hydrogen ions are added to the left:

\[ MnO_4^- + 8H^+ \rightarrow Mn^{2+} + 4H_2O\nonumber \]

Now that all the atoms are balanced, only the charges are left. There is a net +7 charge on the left-hand side (1- and 8+), but only a charge of +2 on the right. 5 electrons are added to the left-hand side to reduce the +7 to +2:

\[ MnO_4^- + 8H^+ + 5e^- \rightarrow Mn^{2+} + 4H_2O\nonumber \]

This illustrates the strategy for balancing half-equations, summarized as follows: balance the atoms apart from oxygen and hydrogen; balance the oxygen by adding water molecules; balance the hydrogen by adding hydrogen ions; and finally balance the charge by adding electrons.

Now the half-equations are combined to make the ionic equation for the reaction. As before, the equations are multiplied such that both have the same number of electrons. In this case, the least common multiple of electrons is ten:

\[ 2MnO_4^- + 16H^+ + 10e^- \rightarrow 2Mn^{2+} + 8H_2O\nonumber \]

\[ 5H_2O_2 \rightarrow 5O_2 + 10H^+ + 10e^-\nonumber \]

Adding these together gives:

\[ 2MnO_4^- + 16H^+ + 5H_2O_2 \rightarrow 2Mn^{2+} + 8H_2O + 5O_2 + 10H^+\nonumber \]

The equation is not fully simplified at this point.
There are hydrogen ions on both sides which need to be simplified. This often occurs with hydrogen ions and water molecules in more complicated redox reactions. Subtracting 10 hydrogen ions from both sides leaves the simplified ionic equation:

\[ 2MnO_4^- + 6H^+ + 5H_2O_2 \rightarrow 2Mn^{2+} + 8H_2O + 5O_2\nonumber \]

Example \(\PageIndex{3}\): Oxidation of Ethanol by Acidified Potassium Dichromate(VI)

This technique can be used just as well in examples involving organic chemicals. Potassium dichromate(VI) solution acidified with dilute sulfuric acid is used to oxidize ethanol, CH3CH2OH, to ethanoic acid, CH3COOH.

The oxidizing agent is the dichromate(VI) ion, Cr2O72-, which is reduced to chromium(III) ions, Cr3+. The ethanol to ethanoic acid half-equation is considered first:

\[ CH_3CH_2OH \rightarrow CH_3COOH\nonumber \]

The oxygen atoms are balanced by adding a water molecule to the left-hand side:

\[ CH_3CH_2OH + H_2O \rightarrow CH_3COOH\nonumber \]

Four hydrogen ions are added to the right-hand side to balance the hydrogen atoms:

\[ CH_3CH_2OH + H_2O \rightarrow CH_3COOH + 4H^+\nonumber \]

The charges are balanced by adding 4 electrons to the right-hand side to give an overall zero charge on each side:

\[ CH_3CH_2OH + H_2O \rightarrow CH_3COOH + 4H^+ + 4e^-\nonumber \]

The unbalanced dichromate(VI) half-reaction is written as given:

\[ Cr_2O_7^{2-} \rightarrow Cr^{3+}\nonumber \]

At this stage, students often forget to balance the chromium atoms, making it impossible to obtain the overall equation.
To avoid this, the chromium ion on the right is multiplied by two:

\[ Cr_2O_7^{2-} \rightarrow 2Cr^{3+}\nonumber \]

The oxygen atoms are balanced by adding seven water molecules to the right:

\[ Cr_2O_7^{2-} \rightarrow 2Cr^{3+} + 7H_2O\nonumber \]

The resulting hydrogen atoms are balanced by adding fourteen hydrogen ions to the left:

\[ Cr_2O_7^{2-} + 14H^+ \rightarrow 2Cr^{3+} + 7H_2O\nonumber \]

Six electrons are added to the left to give a net +6 charge on each side.

\[ Cr_2O_7^{2-} + 14H^+ + 6e^- \rightarrow 2Cr^{3+} + 7H_2O\nonumber \]

The two balanced half-reactions are summarized:

\[ CH_3CH_2OH + H_2O \rightarrow CH_3COOH + 4H^+ + 4e^-\nonumber \]

\[ Cr_2O_7^{2-} + 14H^+ + 6e^- \rightarrow 2Cr^{3+} + 7H_2O\nonumber \]

The least common multiple of 4 and 6 is 12. Therefore, the first equation is multiplied by 3 and the second by 2, giving 12 electrons in each equation:

\[ 3CH_3CH_2OH + 3H_2O \rightarrow 3CH_3COOH + 12H^+ + 12e^-\nonumber \]

\[ 2Cr_2O_7^{2-} + 28H^+ + 12e^- \rightarrow 4Cr^{3+} + 14H_2O\nonumber \]

Adding the two and simplifying the water molecules and hydrogen ions gives the final equation:

\[ 3CH_3CH_2OH + 2Cr_2O_7^{2-} + 16H^+ \rightarrow 3CH_3COOH + 4Cr^{3+} + 11H_2O\nonumber \]

Working out half-equations for reactions in alkaline solution is decidedly more tricky than the examples above. As some curricula do not include this type of problem, the process for balancing alkaline redox reactions is covered on a separate page.

This page titled Writing Equations for Redox Reactions is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jim Clark.
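The combining step used in all three examples (scale each half-equation so the electron counts match, add, then cancel species that appear on both sides) can be sketched numerically. This is a minimal sketch; representing a half-equation as a dict mapping species to signed coefficients (negative = reactant side, positive = product side) is my own illustration, not a standard chemistry library:

```python
from math import lcm

def combine(reduction, oxidation, n_red, n_ox):
    """Scale two half-equations to a common electron count and add them.

    Each half-equation is a dict of species -> signed coefficient
    (negative = reactant side, positive = product side); the electrons
    are tracked separately via n_red / n_ox and cancel by construction.
    """
    n = lcm(n_red, n_ox)
    total = {}
    for half, scale in ((reduction, n // n_red), (oxidation, n // n_ox)):
        for species, coeff in half.items():
            total[species] = total.get(species, 0) + scale * coeff
    # Species with a zero net coefficient cancel out entirely.
    return {s: c for s, c in total.items() if c != 0}

# Example 1: Cl2 + 2e- -> 2Cl-  combined with  Fe2+ -> Fe3+ + e-
overall = combine({"Cl2": -1, "Cl-": 2}, {"Fe2+": -1, "Fe3+": 1},
                  n_red=2, n_ox=1)
# i.e. Cl2 + 2Fe2+ -> 2Cl- + 2Fe3+

# Example 2: the manganate(VII)/hydrogen peroxide reaction, where the
# H+ appearing on both sides partly cancels (16 - 10 = 6 remain).
mn_overall = combine(
    {"MnO4-": -1, "H+": -8, "Mn2+": 1, "H2O": 4},   # gains 5e-
    {"H2O2": -1, "O2": 1, "H+": 2},                  # releases 2e-
    n_red=5, n_ox=2)
# i.e. 2MnO4- + 6H+ + 5H2O2 -> 2Mn2+ + 8H2O + 5O2
```

The cancellation of hydrogen ions that the text performs by hand falls out automatically here, because opposite-signed coefficients for the same species sum toward zero.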
InfoPage
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/003A_Front_Matter/02%3A_InfoPage
This text is disseminated via the Open Education Resource (OER) LibreTexts Project and like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such efforts.

Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts' web-based origins allow powerful integration of advanced features and new technologies to support learning. The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields) integrated.

The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant No.
1246120, 1525057, and 1413739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the US Department of Education.

Have questions or comments? For information about adoptions or adaptations contact More information on our activities can be found via Facebook, Twitter, or our blog.

This text was compiled on 07/13/2023
1.1: A Juicy Story
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/01%3A_ABCs_of_Pharmacology/1.01%3A_A_Drug's_Life
Did you know that, in some people, a single glass of grapefruit juice can alter levels of drugs used to treat allergies, heart disease, and infections? Fifteen years ago, pharmacologists discovered this "grapefruit juice effect" by luck, after giving volunteers grapefruit juice to mask the taste of a medicine. Nearly a decade later, researchers figured out that grapefruit juice affects medicines by lowering levels of a drug-metabolizing enzyme, called CYP3A4, in the intestines.

More recently, Paul B. Watkins of the University of North Carolina at Chapel Hill discovered that other juices like Seville (sour) orange juice—but not regular orange juice—have the same effect on the body's handling of medicines. Each of 10 people who volunteered for Watkins' juice-medicine study took a standard dose of Plendil® (a drug used to treat high blood pressure) diluted in grapefruit juice, sour orange juice, or plain orange juice. The researchers measured blood levels of Plendil at various times afterward. The team observed that both grapefruit juice and sour orange juice increased blood levels of Plendil, as if the people had received a higher dose. Regular orange juice had no effect. Watkins and his coworkers have found that a chemical common to grapefruit and sour oranges, dihydroxybergamottin, is likely the molecular culprit. Another similar molecule in these fruits, bergamottin, also contributes to the effect.

Many scientists are drawn to pharmacology because of its direct application to the practice of medicine. Pharmacologists study the actions of drugs in the intestinal tract, the brain, the muscles, and the liver—just a few of the most common areas where drugs travel during their stay in the body. Of course, all of our organs are constructed from cells and inside all of our cells are genes. Many pharmacologists study how medicines interact with cell parts and genes, which in turn influences how cells behave.
Because pharmacology touches on such diverse areas, pharmacologists must be broadly trained in biology, chemistry, and more applied areas of medicine, such as anatomy and physiology.1.1: A Juicy Story is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
1.2: A Drug's Life
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/01%3A_ABCs_of_Pharmacology/1.02%3A_A_Drug's_Life
How does aspirin zap a headache? What happens after you rub some cortisone cream on a patch of poison ivy-induced rash on your arm? How do decongestant medicines such as Sudafed® dry up your nasal passages when you have a cold? As medicines find their way to their "job sites" in the body, hundreds of things happen along the way. One action triggers another, and medicines work to either mask a symptom, like a stuffy nose, or fix a problem, like a bacterial infection.

Turning a molecule into a good medicine is neither easy nor cheap. The Center for the Study of Drug Development at Tufts University in Boston estimates that it takes over $800 million and a dozen years to sift a few promising drugs from about 5,000 failures. Of this small handful of candidate drugs, only one will survive the rigors of clinical testing and end up on pharmacy shelves.

That's a huge investment for what may seem a very small gain and, in part, it explains the high cost of many prescription drugs. Sometimes, problems do not show up until after a drug reaches the market and many people begin taking the drug routinely. These problems range from irritating side effects, such as dry mouth or drowsiness, to life-threatening problems like serious bleeding or blood clots. The outlook might be brighter if pharmaceutical scientists could do a better job of predicting how potential drugs will act in the body (a science called pharmacodynamics), as well as what side effects the drugs might cause.

One approach that can help is computer modeling of a drug's properties. Computer modeling can help scientists at pharmaceutical and biotechnology companies filter out, and abandon early on, any candidate drugs that are likely to behave badly in the body.
This can save significant amounts of time and money. Computer software can examine the atom-by-atom structure of a molecule and determine how durable the chemical is likely to be inside a body's various chemical neighborhoods. Will the molecule break down easily? How well will the small intestines take it in? Does it dissolve easily in the watery environment of the fluids that course through the human body? Will the drug be able to penetrate the blood-brain barrier? Computer tools not only drive up the success rate for finding candidate drugs, they can also lead to the development of better medicines with fewer safety concerns.

A drug's life in the body. Medicines taken by mouth (oral) pass through the liver before they are absorbed into the bloodstream. Other forms of drug administration bypass the liver, entering the blood directly. Drugs enter different layers of skin via intramuscular, subcutaneous, or transdermal delivery methods.

Scientists have names for the four basic stages of a medicine's life in the body: absorption, distribution, metabolism, and excretion. The entire process is sometimes abbreviated ADME. The first stage is absorption. Medicines can enter the body in many different ways, and they are absorbed when they travel from the site of administration into the body's circulation. A few of the most common ways to administer drugs are oral (swallowing an aspirin tablet), intramuscular (getting a flu shot in an arm muscle), subcutaneous (injecting insulin just under the skin), intravenous (receiving chemotherapy through a vein), or transdermal (wearing a skin patch). A drug faces its biggest hurdles during absorption. Medicines taken by mouth are shuttled via a special blood vessel leading from the digestive tract to the liver, where a large amount may be destroyed by metabolic enzymes in the so-called "first-pass effect."
Other routes of drug administration bypass the liver, entering the bloodstream directly or via the skin or lungs.Once a drug gets absorbed, the next stage is distribution. Most often, the bloodstream carries medicines throughout the body. During this step, side effects can occur when a drug has an effect in an organ other than the target organ. For a pain reliever, the target organ might be a sore muscle in the leg; irritation of the stomach could be a side effect. Many factors influence distribution, such as the presence of protein and fat molecules in the blood that can put drug molecules out of commission by grabbing onto them.Drugs destined for the central nervous system (the brain and spinal cord) face an enormous hurdle: a nearly impenetrable barricade called the blood-brain barrier. This blockade is built from a tightly woven mesh of capillaries cemented together to protect the brain from potentially dangerous substances such as poisons or viruses. Yet pharmacologists have devised various ways to sneak some drugs past this barrier.After a medicine has been distributed throughout the body and has done its job, the drug is broken down, or metabolized. The breaking down of a drug molecule usually involves two steps that take place mostly in the body's chemical processing plant, the liver. The liver is a site of continuous and frenzied, yet carefully controlled, activity. Everything that enters the bloodstream—whether swallowed, injected, inhaled, absorbed through the skin, or produced by the body itself—is carried to this largest internal organ. There, substances are chemically pummeled, twisted, cut apart, stuck together, and transformed.How you respond to a drug may be quite different from how your neighbor does. Why is that? Despite the fact that you might be about the same age and size, you probably eat different foods, get different amounts of exercise, and have different medical histories. 
But your genes, which are different from those of anyone else in the world, are really what make you unique. In part, your genes give you many obvious things, such as your looks, your mannerisms, and other characteristics that make you who you are. Your genes can also affect how you respond to the medicines you take. Your genetic code instructs your body how to make hundreds of thousands of different molecules called proteins. Some proteins determine hair color, and some of them are enzymes that process, or metabolize, food or medicines. Slightly different, but normal, variations in the human genetic code can yield proteins that work better or worse when they are metabolizing many different types of drugs and other substances. Scientists use the term pharmacogenetics to describe research on the link between genes and drug response.

One important group of proteins whose genetic code varies widely among people is the "sulfation" enzymes, which perform chemical reactions in your body to make molecules more water-soluble, so they can be quickly excreted in the urine. Sulfation enzymes metabolize many drugs, but they also work on natural body molecules, such as estrogen. Differences in the genetic code for sulfation enzymes can significantly alter blood levels of the many different kinds of substances metabolized by these enzymes. The same genetic differences may also put some people at risk for developing certain types of cancers whose growth is fueled by hormones like estrogen.

Pharmacogeneticist Rebecca Blanchard of Fox Chase Cancer Center in Philadelphia has discovered that people of different ethnic backgrounds have slightly different "spellings" of the genes that make sulfation enzymes. Lab tests revealed that sulfation enzymes manufactured from genes with different spellings metabolize drugs and estrogens at different rates.
Blanchard and her coworkers are planning to work with scientists developing new drugs to include pharmacogenetic testing in the early phases of screening new medicines.

The biotransformations that take place in the liver are performed by the body's busiest proteins, its enzymes. Every one of your cells has a variety of enzymes, drawn from a repertoire of hundreds of thousands. Each enzyme specializes in a particular job. Some break molecules apart, while others link small molecules into long chains. With drugs, the first step is usually to make the substance easier to get rid of in urine.

Many of the products of enzymatic break-down, which are called metabolites, are less chemically active than the original molecule. For this reason, scientists refer to the liver as a "detoxifying" organ. Occasionally, however, drug metabolites can have chemical activities of their own—sometimes as powerful as those of the original drug. When prescribing certain drugs, doctors must take into account these added effects. Once liver enzymes are finished working on a medicine, the now-inactive drug undergoes the final stage of its time in the body, excretion, as it exits via the urine or feces.

Pharmacokinetics is an aspect of pharmacology that deals with the absorption, distribution, and excretion of drugs. Because they are following drug actions in the body, researchers who specialize in pharmacokinetics must also pay attention to an additional dimension: time.

Pharmacokinetics research uses the tools of mathematics. Although sophisticated imaging methods can help track medicines as they travel through the body, scientists usually cannot actually see where a drug is going. To compensate, they often use mathematical models and precise measures of body fluids, such as blood and urine, to determine where a drug goes and how much of the drug or a break-down product remains after the body processes it.
Other sentinels, such as blood levels of liver enzymes, can help predict how much of a drug is going to be absorbed.Studying pharmacokinetics also uses chemistry, since the interactions between drug and body molecules are really just a series of chemical reactions. Understanding the chemical encounters between drugs and biological environments, such as the bloodstream and the oily surfaces of cells, is necessary to predict how much of a drug will be taken in by the body. This concept, broadly termed bioavailability, is a critical feature that chemists and pharmaceutical scientists keep in mind when designing and packaging medicines. No matter how well a drug works in a laboratory simulation, the drug is not useful if it can't make it to its site of action.1.2: A Drug's Life is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
1.3: Fitting In
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/01%3A_ABCs_of_Pharmacology/1.03%3A_Fitting_In
While it may seem obvious now, scientists did not always know that drugs have specific molecular targets in the body. In the mid-1800s, the French physiologist Claude Bernard made a crucial discovery that steered researchers toward understanding this principle. By figuring out how a chemical called curare works, Bernard pointed to the nervous system as a new focus for pharmacology. Curare—a plant extract that paralyzes muscles—had been used for centuries by Native Americans in South America to poison the tips of arrows. Bernard discovered that curare causes paralysis by blocking chemical signals between nerve and muscle cells. His findings demonstrated that chemicals can carry messages between nerve cells and other types of cells.

Since Bernard's experiments with curare, researchers have discovered many nervous system messengers, now called neurotransmitters. These chemical messengers are called agonists, a generic term pharmacologists use to indicate that a molecule triggers some sort of response when encountering a cell (such as muscle contraction or hormone release).

Nerve cells use a chemical messenger called acetylcholine (balls) to tell muscle cells to contract. Curare (half circles) paralyzes muscles by blocking acetylcholine from attaching to its muscle cell receptors.

One of the most important principles of pharmacology, and of much of research in general, is a concept called "dose-response." Just as the term implies, this notion refers to the relationship between some effect—let's say, lowering of blood pressure—and the amount of a drug.
Scientists care a lot about dose-response data because these mathematical relationships signify that a medicine is working according to a specific interaction between different molecules in the body.

Sometimes, it takes years to figure out exactly which molecules are working together, but when testing a potential medicine, researchers must first show that three things are true in an experiment. First, if the drug isn't there, you don't get any effect. In our example, that means no change in blood pressure. Second, adding more of the drug (up to a certain point) causes an incremental change in effect (lower blood pressure with more drug). Third, taking the drug away (or masking its action with a molecule that blocks the drug) means there is no effect. Scientists most often plot data from dose-response experiments on a graph. A typical "dose-response curve" shows what happens (the vertical Y-axis) as more and more drug is added to the experiment (the horizontal X-axis).

Dose-response curves determine how much of a drug (X-axis) causes a particular effect, or a side effect, in the body (Y-axis).

One of the first neurotransmitters identified was acetylcholine, which causes muscle contraction. Curare works by tricking a cell into thinking it is acetylcholine. By fitting—not quite as well, but nevertheless fitting—into receiving molecules called receptors on a muscle cell, curare prevents acetylcholine from attaching and delivering its message. No acetylcholine means no contraction, and muscles become paralyzed.

Most medicines exert their effects by making physical contact with receptors on the surface of a cell. Think of an agonist-receptor interaction like a key fitting into a lock. Inserting a key into a door lock permits the doorknob to be turned and allows the door to be opened.
Agonists open cellular locks (receptors), and this is the first step in a communication between the outside of the cell and the inside, which contains all the mini machines that make the cell run. Scientists have identified thousands of receptors. Because receptors have a critical role in controlling the activity of cells, they are common targets for researchers designing new medicines.Curare is one example of a molecule called an antagonist. Drugs that act as antagonists compete with natural agonists for receptors but act only as decoys, freezing up the receptor and preventing agonists' use of it. Researchers often want to block cell responses, such as a rise in blood pressure or an increase in heart rate. For that reason, many drugs are antagonists, designed to blunt overactive cellular responses.The key to agonists fitting snugly into their receptors is shape. Researchers who study how drugs and other chemicals exert their effects in particular organs—the heart, the lungs, the kidneys, and so on—are very interested in the shapes of molecules. Some drugs have very broad effects because they fit into receptors on many different kinds of cells. Some side effects, such as dry mouth or a drop in blood pressure, can result from a drug encountering receptors in places other than the target site. One of a pharmacologist's major goals is to reduce these side effects by developing drugs that attach only to receptors on the target cells.That is much easier said than done. While agonists may fit nearly perfectly into a receptor's shape, other molecules may also brush up to receptors and sometimes set them off. These types of unintended, nonspecific interactions can cause side effects. They can also affect how much drug is available in the body.In today's culture, the word "steroid" conjures up notions of drugs taken by athletes to boost strength and physical performance. 
But steroid is actually just a chemical name for any substance that has a characteristic chemical structure consisting of multiple rings of connected atoms. Some examples of steroids include vitamin D, cholesterol, estrogen, and cortisone—molecules that are critical for keeping the body running smoothly. Various steroids have important roles in the body's reproductive system and the structure and function of membranes. Researchers have also discovered that steroids can be active in the brain, where they affect the nervous system. Some steroids may thus find use as anesthetics, medicines that sedate people before surgery by temporarily slowing down brain function.

A steroid is a molecule with a particular chemical structure consisting of multiple "rings" (hexagons and a pentagon).

Douglas Covey of Washington University in St. Louis, Missouri, has uncovered new roles for several of these neurosteroids, which alter electrical activity in the brain. Covey's research shows that neurosteroids can either activate or tone down receptors that communicate the message of a neurotransmitter called gamma-aminobutyrate, or GABA. The main job of this neurotransmitter is to dampen electrical activity throughout the brain. Covey and other scientists have found that steroids that activate the receptors for GABA decrease brain activity even more, making these steroids good candidates for anesthetic medicines. Covey is also investigating the potential of neuroprotective steroids in preventing the nerve-wasting effects of certain neurodegenerative disorders.

1.3: Fitting In is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
1.4: Bench to Bedside- Clinical Pharmacology
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/01%3A_ABCs_of_Pharmacology/1.04%3A_Bench_to_Bedside-_Clinical_Pharmacology
Prescribing drugs is a tricky science, requiring physicians to carefully consider many factors. Your doctor can measure or otherwise determine many of these factors, such as weight and diet. But another key factor is drug interactions. You already know that every time you go to the doctor, he or she will ask whether you are taking any other drugs and whether you have any drug allergies or unusual reactions to any medicines.

Interactions between different drugs in the body, and between drugs and foods or dietary supplements, can have a significant influence, sometimes "fooling" your body into thinking you have taken more or less of a drug than you actually have taken.

By measuring the amounts of a drug in blood or urine, clinical pharmacologists can calculate how a person is processing a drug. Usually, this important analysis involves mathematical equations, which take into account many different variables. Some of the variables include the physical and chemical properties of the drug, the total amount of blood in a person's body, the individual's age and body mass, the health of the person's liver and kidneys, and what other medicines the person is taking. Clinical pharmacologists also measure drug metabolites to gauge how much drug is in a person's body. Sometimes, doctors give patients a "loading dose" (a large amount) first, followed by smaller doses at later times. This approach works by getting enough drug into the body before it is metabolized (broken down) into inactive parts, giving the drug the best chance to do its job.

Feverfew for migraines, garlic for heart disease, St. John's wort for depression. These are just a few of the many "natural" substances ingested by millions of Americans to treat a variety of health conditions.
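The loading-dose logic described above can be made concrete with a small calculation. The sketch below is a minimal, hypothetical example in Python, assuming a one-compartment model with first-order elimination; the drug parameters (a 40 L volume of distribution, a 6-hour half-life, a 10 mg/L target level) are invented for illustration and are not values from the text.

```python
import math

def concentration(dose_mg, vd_l, k_per_h, t_h):
    """Plasma concentration (mg/L) at time t_h hours after a single dose,
    assuming a one-compartment model with first-order elimination."""
    c0 = dose_mg / vd_l          # initial concentration = dose / volume of distribution
    return c0 * math.exp(-k_per_h * t_h)

# Hypothetical drug: volume of distribution 40 L, half-life 6 hours
vd = 40.0
half_life = 6.0
k = math.log(2) / half_life      # first-order elimination rate constant

# A loading dose aims to hit the target concentration immediately:
target = 10.0                    # mg/L, hypothetical therapeutic level
loading_dose = target * vd       # 400 mg

print(concentration(loading_dose, vd, k, 0.0))   # 10.0 mg/L right after dosing
print(concentration(loading_dose, vd, k, 6.0))   # ~5.0 mg/L after one half-life
```

The point is simply that a loading dose equal to (target concentration × volume of distribution) reaches the effective level at once, while later, smaller maintenance doses offset the first-order loss; real dosing calculations fold in the many extra variables the text mentions.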
The use of so-called alternative medicines is widespread, but you may be surprised to learn that researchers do not know in most cases how herbs work—or if they work at all—inside the human body.

Herbs are not regulated by the Food and Drug Administration, and scientists have not performed careful studies to evaluate their safety and effectiveness. Unlike many prescription (or even over-the-counter) medicines, herbs contain many—sometimes thousands—of ingredients. While some small studies have confirmed the usefulness of certain herbs, like feverfew, other herbal products have proved ineffective or harmful. For example, recent studies suggest that St. John's wort is of no benefit in treating major depression. What's more, because herbs are complicated concoctions containing many active components, they can interfere with the body's metabolism of other drugs, such as certain HIV treatments and birth control pills.

1.4: Bench to Bedside- Clinical Pharmacology is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
1.5: Pump It Up
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/01%3A_ABCs_of_Pharmacology/1.05%3A_Pump_It_Up
Bacteria have an uncanny ability to defend themselves against antibiotics. In trying to figure out why this is so, scientists have noted that antibiotic medicines that kill bacteria in a variety of different ways can be thwarted by the bacteria they are designed to destroy. One reason, says Kim Lewis of Northeastern University in Boston, Massachusetts, may be the bacteria themselves. Microorganisms have ejection systems called multidrug-resistance (MDR) pumps—large proteins that weave through cell-surface membranes. Researchers believe that microbes have MDR pumps mainly for self-defense. The pumps are used to monitor incoming chemicals and to spit out the ones that might endanger the bacteria.

Many body molecules and drugs (yellow balls) encounter multidrug-resistance pumps (blue) after passing through a cell membrane. © LINDA S. NYE

Lewis suggests that plants, which produce many natural bacteria-killing molecules, have gotten "smart" over time, developing ways to outwit bacteria. He suspects that evolution has driven plants to produce natural chemicals that block bacterial MDR pumps, bypassing this bacterial protection system. Lewis tested his idea by first genetically knocking out the gene for the MDR pump from the common bacterium Staphylococcus aureus (S. aureus). He and his coworkers then exposed the altered bacteria to a very weak antibiotic called berberine that had been chemically extracted from barberry plants. Berberine is usually woefully ineffective against S. aureus, but it proved lethal for bacteria missing the MDR pump. What's more, Lewis found that berberine also killed unaltered bacteria given another barberry chemical that inhibited the MDR pumps. Lewis suggests that by co-administering inhibitors of MDR pumps along with antibiotics, physicians may be able to outsmart disease-causing microorganisms.

MDR pumps aren't just for microbes.
Virtually all living things have MDR pumps, including people. In the human body, MDR pumps serve all sorts of purposes, and they can sometimes frustrate efforts to get drugs where they need to go. Chemotherapy medicines, for example, are often "kicked out" of cancer cells by MDR pumps residing in the cells' membranes. MDR pumps in membranes all over the body—in the brain, digestive tract, liver, and kidneys—perform important jobs in moving natural body molecules like hormones into and out of cells.

Pharmacologist Mary Vore of the University of Kentucky in Lexington has discovered that certain types of MDR pumps do not work properly during pregnancy, and she suspects that estrogen and other pregnancy hormones may be partially responsible. Vore has recently focused efforts on determining if the MDR pump is malformed in pregnant women who have intrahepatic cholestasis of pregnancy (ICP). A relatively rare condition, ICP often strikes during the third trimester and can cause significant discomfort such as severe itching and nausea, while also endangering the growing fetus. Vore's research on MDR pump function may also lead to improvements in drug therapy for pregnant women.

Explain the difference between an agonist and an antagonist.
How does grapefruit juice affect blood levels of certain medicines?
What does a pharmacologist plot on the vertical and horizontal axes of a dose-response curve?
Name one of the potential risks associated with taking herbal products.
What are the four stages of a drug's life in the body?

1.5: Pump It Up is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
2.2: River of Life
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/02%3A_Body_Heal_Thyself/2.02%3A_River_of_Life
Since blood is the body's primary internal transportation system, most drugs travel via this route. Medicines can find their way to the bloodstream in several ways, including the rich supply of blood vessels in the skin. You may remember, as a young child, the horror of seeing blood escaping your body through a skinned knee. You now know that the simplistic notion of skin literally "holding everything inside" isn't quite right. You survived the scrape just fine because blood contains magical molecules that can make a clot form within minutes after your tumble. Blood is a rich concoction containing oxygen-carrying red blood cells and infection-fighting white blood cells. Blood cells are suspended in a watery liquid called plasma that contains clotting proteins, electrolytes, and many other important molecules.

More than simply a protective covering, skin is a highly dynamic network of cells, nerves, and blood vessels. Skin plays an important role in preserving fluid balance and in regulating body temperature and sensation. Immune cells in skin help the body prevent and fight disease. When you get burned, all of these protections are in jeopardy. Burn-induced skin loss can give bacteria and other microorganisms easy access to the nutrient-rich fluids that course through the body, while at the same time allowing these fluids to leak out rapidly. Enough fluid loss can thrust a burn or trauma patient into shock, so doctors must replenish skin lost to severe burns as quickly as possible.

In the case of burns covering a significant portion of the body, surgeons must do two things fast: strip off the burned skin, then cover the unprotected underlying tissue. These important steps in the immediate care of a burn patient took scientists decades to figure out, as they performed carefully conducted experiments on how the body responds to burn injury.
In the early 1980s, researchers doing this work developed the first version of an artificial skin covering called Integra® Dermal Regeneration Template™, which doctors use to drape over the area where the burned skin has been removed. Today, Integra Dermal Regeneration Template is used to treat burn patients throughout the world.

Blood also ferries proteins and hormones such as insulin and estrogen, nutrient molecules of various kinds, and carbon dioxide and other waste products destined to exit the body.

While the bloodstream would seem like a quick way to get a needed medicine to a diseased organ, one of the biggest problems is getting the medicine to the correct organ. In many cases, drugs end up where they are not needed and cause side effects, as we've already noted. What's more, drugs may encounter many different obstacles while journeying through the bloodstream. Some medicines get "lost" when they stick tightly to certain proteins in the blood, effectively putting the drugs out of business.

Scientists called physiologists originally came up with the idea that all internal processes work together to keep the body in a balanced state. The bloodstream links all our organs together, enabling them to work in a coordinated way. Two organ systems are particularly interesting to pharmacologists: the nervous system (which transmits electrical signals over wide distances) and the endocrine system (which communicates messages via traveling hormones). These two systems are key targets for medicines.

Skin consists of three layers, making up a dynamic network of cells, nerves, and blood vessels.

Acetylsalicylate is the aspirin of today.
Adding a chemical tag called an acetyl group (shaded yellow box, right) to a molecule derived from willow bark (salicylate, above) makes the molecule less acidic (and easier on the lining of the digestive tract), but still effective at relieving pain.

2.2: River of Life is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
2.3: No Pain, Your Gain
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/02%3A_Body_Heal_Thyself/2.03%3A_No_Pain_Your_Gain
Like curare's effects on acetylcholine, the interactions between another drug—aspirin—and metabolism shed light on how the body works. This little white pill has been one of the most widely used drugs in history, and many say that it launched the entire pharmaceutical industry.

As a prescribed drug, aspirin is 100 years old. However, in its most primitive form, aspirin is much older. The bark of the willow tree contains a substance called salicin, a known antidote to headache and fever since the time of the Greek physician Hippocrates, around 400 B.C. The body converts salicin to an acidic substance called salicylate. Despite its usefulness dating back to ancient times, early records indicate that salicylate wreaked havoc on the stomachs of people who ingested this natural chemical. In the late 1800s, a scientific breakthrough turned willow-derived salicylate into a medicine friendlier to the body. Bayer® scientist Felix Hoffman discovered that adding a chemical tag called an acetyl group (see figure, page 20) to salicylate made the molecule less acidic and a little gentler on the stomach, but the chemical change did not seem to lessen the drug's ability to relieve his father's rheumatism. This molecule, acetylsalicylate, is the aspirin of today.

Aspirin works by blocking the production of messenger molecules called prostaglandins. Because of the many important roles they play in metabolism, prostaglandins are important targets for drugs and are very interesting to pharmacologists. Prostaglandins can help muscles relax and open up blood vessels, they give you a fever when you're infected with bacteria, and they also marshal the immune system by stimulating the process called inflammation.
Sunburn, bee stings, tendonitis, and arthritis are just a few examples of painful inflammation caused by the body's release of certain types of prostaglandins in response to an injury.

Inflammation leads to pain in arthritis.

Aspirin belongs to a diverse group of medicines called NSAIDs, a nickname for the tongue-twisting title nonsteroidal anti-inflammatory drugs. Other drugs that belong to this large class of medicines include Advil®, Aleve®, and many other popular pain relievers available without a doctor's prescription. All these drugs share aspirin's ability to knock back the production of prostaglandins by blocking an enzyme called cyclooxygenase. Known as COX, this enzyme is a critical driver of the body's metabolism and immune function.

COX makes prostaglandins and other similar molecules collectively known as eicosanoids from a molecule called arachidonic acid. Named for the Greek word eikos, meaning "twenty," each eicosanoid contains 20 atoms of carbon.

You've also heard of the popular pain reliever acetaminophen (Tylenol®), which is famous for reducing fever and relieving headaches. However, scientists do not consider Tylenol an NSAID, because it does little to halt inflammation (remember that part of NSAID stands for "anti-inflammatory"). If your joints are aching from a long hike you weren't exactly in shape for, aspirin or Aleve may be better than Tylenol because inflammation is the thing making your joints hurt.

To understand how enzymes like COX work, some pharmacologists use special biophysical techniques and X rays to determine the three-dimensional shapes of the enzymes. These kinds of experiments teach scientists about molecular function by providing clear pictures of how all the folds and bends of an enzyme—usually a protein or group of interacting proteins—help it do its job. In drug development, one successful approach has been to use this information to design decoys to jam up the working parts of enzymes like COX.
Structural studies unveiling the shapes of COX enzymes led to a new class of drugs used to treat arthritis. Researchers designed these drugs to selectively home in on one particular type of COX enzyme called COX-2.

By designing drugs that target only one form of an enzyme like COX, pharmacologists may be able to create medicines that are great at stopping inflammation but have fewer side effects. For example, stomach upset is a common side effect caused by NSAIDs that block COX enzymes. This side effect results from the fact that NSAIDs bind to different types of COX enzymes—each of which has a slightly different shape. One of these enzymes is called COX-1. While both COX-1 and COX-2 enzymes make prostaglandins, COX-2 beefs up the production of prostaglandins in sore, inflamed tissue, such as arthritic joints. In contrast, COX-1 makes prostaglandins that protect the digestive tract, and blocking the production of these protective prostaglandins can lead to stomach upset, and even bleeding and ulcers.

Very recently, scientists have added a new chapter to the COX story by identifying COX-3, which may be Tylenol's long-sought molecular target. Further research will help pharmacologists understand more precisely how Tylenol and NSAIDs act in the body.

2.3: No Pain, Your Gain is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
2.4: Our Immune Army
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/02%3A_Body_Heal_Thyself/2.04%3A_Our_Immune_Army
Scientists know a lot about the body's organ systems, but much more remains to be discovered. To design "smart" drugs that will seek out diseased cells and not healthy ones, researchers need to understand the body inside and out. One system in particular still puzzles scientists: the immune system.

Even though researchers have accumulated vast amounts of knowledge about how our bodies fight disease using white blood cells and thousands of natural chemical weapons, a basic dilemma persists—how does the body know what to fight? The immune system constantly watches for foreign invaders and is exquisitely sensitive to any intrusion perceived as "non-self," like a transplanted organ from another person. This protection, however, can go awry if the body slips up and views its own tissue as foreign. Autoimmune disease, in which the immune system mistakenly attacks and destroys body tissue that it believes to be foreign, can be the terrible consequence.

Common over-the-counter medicines used to treat pain, fever, and inflammation have many uses. Here are some of the terms used to describe the particular effects of these drugs:

Antibodies are Y-shaped molecules of the immune system.

The powerful immune army presents significant roadblocks for pharmacologists trying to create new drugs. But some scientists have looked at the immune system through a different lens. Why not teach the body to launch an attack on its own diseased cells? Many researchers are pursuing immunotherapy as a way to treat a wide range of health problems, especially cancer. With advances in biotechnology, researchers are now able to tailor-produce in the lab modified forms of antibodies—our immune system's front-line agents.

Antibodies are spectacularly specific proteins that seek out and mark for destruction anything they do not recognize as belonging to the body. Scientists have learned how to join antibody-making cells with cells that grow and divide continuously.
This strategy creates cellular "factories" that work around the clock to produce large quantities of specialized molecules, called monoclonal antibodies, that attach to and destroy single kinds of targets. Recently, researchers have also figured out how to produce monoclonal antibodies in the egg whites of chickens. This may reduce production costs of these increasingly important drugs.

Doctors are already using therapeutic monoclonal antibodies to attack tumors. A drug called Rituxan® was the first therapeutic antibody approved by the Food and Drug Administration to treat cancer. This monoclonal antibody targets a unique tumor "fingerprint" on the surface of immune cells, called B cells, in a blood cancer called non-Hodgkin's lymphoma. Another therapeutic antibody for cancer, Herceptin®, latches onto breast cancer cell receptors that signal growth to either mask the receptors from view or lure immune cells to kill the cancer cells. Herceptin's actions prevent breast cancer from spreading to other organs.

Researchers are also investigating a new kind of "vaccine" as therapy for diseases such as cancer. The vaccines are not designed to prevent cancer, but rather to treat the disease when it has already taken hold in the body. Unlike the targeted-attack approach of antibody therapy, vaccines aim to recruit the entire immune system to fight off a tumor. Scientists are conducting clinical trials of vaccines against cancer to evaluate the effectiveness of this treatment approach.

The body machine has a tremendously complex collection of chemical signals that are relayed back and forth through the blood and into and out of cells.
While scientists are hopeful that future research will point the way toward getting a sick body to heal itself, it is likely that there will always be a need for medicines to speed recovery from the many illnesses that plague humankind.

A body-wide syndrome caused by an infection called sepsis is a leading cause of death in hospital intensive care units, striking 750,000 people every year and killing more than 215,000. Sepsis is a serious public health problem, causing more deaths annually than heart disease. The most severe form of sepsis occurs when bacteria leak into the bloodstream, spilling their poisons and leading to a dangerous condition called septic shock. Blood pressure plunges dangerously low, the heart has difficulty pumping enough blood, and body temperature climbs or falls rapidly. In many cases, multiple organs fail and the patient dies.

Despite the obvious public health importance of finding effective ways to treat sepsis, researchers have been frustratingly unsuccessful. Kevin Tracey of the North Shore-Long Island Jewish Research Institute in Manhasset, New York, has identified an unusual suspect in the deadly crime of sepsis: the nervous system. Tracey and his coworkers have discovered an unexpected link between cytokines, the chemical weapons released by the immune system during sepsis, and a major nerve that controls critical body functions such as heart rate and digestion. In animal studies, Tracey found that electrically stimulating this nerve, called the vagus nerve, significantly lowered blood levels of TNF, a cytokine that is produced when the body senses the presence of bacteria in the blood. Further research has led Tracey to conclude that production of the neurotransmitter acetylcholine underlies the inflammation-blocking response.
Tracey is investigating whether stimulating the vagus nerve can be used as a component of therapy for sepsis and as a treatment for other immune disorders.

2.4: Our Immune Army is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
2.5: A Closer Look
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/02%3A_Body_Heal_Thyself/2.05%3A_A_Closer_Look
One protruding end (green) of the MAO B enzyme anchors the protein inside the cell. Body molecules or drugs first come into contact with MAO B (in the hatched blue region) and are worked on within the enzyme's "active site," a cavity nestled inside the protein (the hatched red region). To get its job done, MAO B uses a helper molecule (yellow), which fits right next to the active site where the reaction takes place.

REPRINTED WITH PERMISSION FROM J. BIOL. CHEM. 277:23973-6 HTTP://WWW.JBC.ORG

Seeing is believing. The cliché could not be more apt for biologists trying to understand how a complicated enzyme works. For decades, researchers have isolated and purified individual enzymes from cells, performing experiments with these proteins to find out how they do their job of speeding up chemical reactions. But to thoroughly understand a molecule's function, scientists have to take a very, very close look at how all the atoms fit together and enable the molecular "machine" to work properly.

Researchers called structural biologists are fanatical about such detail, because it can deliver valuable information for designing drugs—even for proteins that scientists have studied in the lab for a long time. For example, biologists have known for 40 years that an enzyme called monoamine oxidase B (MAO B) works in the brain to help recycle communication molecules called neurotransmitters. MAO B and its cousin MAO A work by removing molecular pieces from neurotransmitters, part of the process of inactivating them. Scientists have developed drugs to block the actions of MAO enzymes, and by doing so, help preserve the levels of neurotransmitters in people with such disorders as Parkinson's disease and depression.

However, MAO inhibitors have many undesirable side effects. Tremors, increased heart rate, and problems with sexual function are some of the mild side effects of MAO inhibitors, but more serious problems include seizures, large dips in blood pressure, and difficulty breathing.
People taking MAO inhibitors cannot eat foods containing the substance tyramine, which is found in wine, cheese, dried fruits, and many other foods. Most of the side effects occur because drugs that attach to MAO enzymes do not have a perfect fit for either MAO A or MAO B.

Dale Edmondson of Emory University in Atlanta, Georgia, has recently uncovered new knowledge that may help researchers design better, more specific drugs to interfere with these critical brain enzymes. Edmondson and his coworkers Andrea Mattevi and Claudia Binda of the University of Pavia in Italy got a crystal-clear glimpse of MAO B by determining its three-dimensional structure. The researchers also saw how one MAO inhibitor, Eldepryl®, attaches to the MAO B enzyme, and the scientists predict that their results will help in the design of more specific drugs with fewer side effects.

Define metabolism.
How does aspirin work?
Name three functions of blood.
Give two examples of immunotherapy.
What is a technique scientists use to study a protein's three-dimensional structure?

2.5: A Closer Look is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
3.1: Nature's Medicine Cabinet
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/03%3A_Drugs_from_Nature_Then_and_Now/3.01%3A_Nature's_Medicine_Cabinet
Times have changed, but more than half of the world's population still relies entirely on plants for medicines, and plants supply the active ingredients of most traditional medical products. Plants have also served as the starting point for countless drugs on the market today. Researchers generally agree that natural products from plants and other organisms have been the most consistently successful source for ideas for new drugs, since nature is a master chemist. Drug discovery scientists often refer to these ideas as "leads," and chemicals that have desirable properties in lab tests are called lead compounds.

Having high cholesterol is a significant risk factor for heart disease, a leading cause of death in the industrialized world. Pharmacology research has made major strides in helping people deal with this problem. Scientists Michael Brown and Joseph Goldstein, both of the University of Texas Southwestern Medical Center at Dallas, won the 1985 Nobel Prize in physiology or medicine for their fundamental work determining how the body metabolizes cholesterol. This research, part of which first identified cholesterol receptors, led to the development of the popular cholesterol-lowering "statin" drugs such as Mevacor® and Lipitor®.

New research from pharmacologist David Mangelsdorf, also at the University of Texas Southwestern Medical Center at Dallas, is pointing to another potential treatment for high cholesterol. The "new" substance has the tongue-twisting name guggulsterone, and it isn't really new at all. Guggulsterone comes from the sap of the guggul tree, a species native to India, and has been used in India's Ayurvedic medicine since at least 600 B.C. to treat a wide variety of ailments, including obesity and cholesterol disorders.
Mangelsdorf and his coworker David Moore of Baylor College of Medicine in Houston, Texas, found that guggulsterone blocks a protein called the FXR receptor that plays a role in cholesterol metabolism, converting cholesterol in the blood to bile acids. According to Mangelsdorf, since elevated levels of bile acids can actually boost cholesterol, blocking FXR helps to bring cholesterol counts down.

Sap from the guggul tree, a species native to India, contains a substance that may help fight heart disease.

Relatively speaking, very few species of living things on Earth have actually been seen and named by scientists. Many of these unidentified organisms aren't necessarily lurking in uninhabited places. A few years ago, for instance, scientists identified a brand-new species of millipede in a rotting leaf pile in New York City's Central Park, an area visited by thousands of people every day.

Scientists estimate that Earth is home to at least 250,000 different species of plants, and that up to 30 million species of insects crawl or fly somewhere around the globe. Equal numbers of species of fungi, algae, and bacteria probably also exist. Despite these vast numbers, chemists have tested only a few of these organisms to see whether they harbor some sort of medically useful substance.

Pharmaceutical chemists seek ideas for new drugs not only in plants, but in any part of nature where they may find valuable clues. This includes searching for organisms from what has been called the last unexplored frontier: the seawater that blankets nearly three-quarters of Earth.

A novel drug delivery system called photodynamic therapy combines an ancient plant remedy, modern blood transfusion techniques, and light. Photodynamic therapy has been approved by the Food and Drug Administration to treat several cancers and certain types of age-related macular degeneration, a devastating eye disease that is the leading cause of blindness in North America and Europe.
Photodynamic therapy is also being tested as a treatment for some skin and immune disorders.

The key ingredient in this therapy is psoralen, a plant-derived chemical that has a peculiar property: It is inactive until exposed to light. Psoralen is the active ingredient in a Nile-dwelling weed called ammi. This remedy was used by ancient Egyptians, who noticed that people became prone to sunburn after eating the weed. Modern researchers explained this phenomenon by discovering that psoralen, after being digested, goes to the skin's surface, where it is activated by the sun's ultraviolet rays. Activated psoralen attaches tenaciously to the DNA of rapidly dividing cancer cells and kills them. Photopheresis, a method that exposes a psoralen-like drug to certain wavelengths of light, is approved for the treatment of some forms of lymphoma, a cancer of white blood cells.

Some forms of cancer can be treated with photodynamic therapy, in which a cancer-killing molecule is activated by certain wavelengths of light. JOSEPH FRIEDBERG

3.1: Nature's Medicine Cabinet is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
3.2: Ocean Medicines
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/03%3A_Drugs_from_Nature_Then_and_Now/3.02%3A_Ocean_Medicines
Marine animals fight daily for both food and survival, and this underwater warfare is waged with chemicals. As with plants, researchers have recognized the potential use of this chemical weaponry to kill bacteria or raging cancer cells. Scientists isolated the first marine-derived cancer drug, now known as Cytosar-U®, decades ago. They found this chemical, a staple for treating leukemia and lymphoma, in a Caribbean sea sponge. In recent years, scientists have discovered dozens of similar ocean-derived chemicals that appear to be powerful cancer cell killers. Researchers are testing these natural products for their therapeutic properties.

For example, scientists have unearthed several promising drugs from sea creatures called tunicates. More commonly known as sea squirts, tunicates are a group of marine organisms that spend most of their lives attached to docks, rocks, or the undersides of boats. To an untrained eye they look like nothing more than small, colorful blobs, but tunicates are evolutionarily more closely related to vertebrates like ourselves than to most other invertebrate animals.

One tunicate living in the crystal waters of West Indies coral reefs and mangrove swamps turned out to be the source of an experimental cancer drug called ecteinascidin. Ken Rinehart, a chemist who was then at the University of Illinois at Urbana-Champaign, discovered this natural substance. PharmaMar, a pharmaceutical company based in Spain, now holds the licenses for ecteinascidin, which it calls Yondelis™, and is conducting clinical trials on this drug. Lab tests indicate that Yondelis can kill cancer cells, and the first set of clinical studies has shown that the drug is safe for use in humans.
Further phases of clinical testing, to evaluate whether Yondelis effectively treats soft-tissue sarcomas (tumors of the muscles, tendons, and supportive tissues) and other types of cancer, are under way.

A penicillin-secreting Penicillium mold colony inhibits the growth of bacteria (zig-zag smear growing on culture dish).

Led by the German scientist Paul Ehrlich, a new era in pharmacology began in the late 19th century. Although Ehrlich's original idea seems perfectly obvious now, it was considered very strange at the time. He proposed that every disease should be treated with a chemical specific for that disease, and that the pharmacologist's task was to find these treatments by systematically testing potential drugs.

The approach worked: Ehrlich's greatest triumph was his discovery of salvarsan, the first effective treatment for the sexually transmitted disease syphilis. Ehrlich discovered salvarsan after screening 605 different arsenic-containing compounds. Later, researchers around the world had great success in developing new drugs by following Ehrlich's methods. For example, testing of sulfur-containing dyes led to the 20th century's first "miracle drugs"—the sulfa drugs, used to treat bacterial infections. During the 1940s, sulfa drugs were rapidly replaced by a new, more powerful, and safer antibacterial drug, penicillin—originally extracted from the soil-dwelling fungus Penicillium.

Yondelis is an experimental cancer drug isolated from the marine organism Ecteinascidia turbinata.

Animals that live in coral reefs almost always rely on chemistry to ward off hungry predators. Because getting away quickly isn't an option in this environment, lethal chemical brews are the weaponry of choice for these slow-moving or even sedentary animals. A powerful potion comes from one of these animals, a stunningly gorgeous species of snail found in the reefs surrounding Australia, Indonesia, and the Philippines.
The animals, called cone snails, have a unique venom containing dozens of nerve toxins. Some of these venoms instantly shock prey, like the sting of an electric eel or the poisons of scorpions and sea anemones. Others cause paralysis, like the venoms of cobras and puffer fish.

Pharmacologist Baldomero Olivera of the University of Utah in Salt Lake City, a native of the Philippines whose boyhood fascination with cone snails matured into a career studying them, has discovered one cone snail poison that has become a potent new pain medicine. Olivera's experiments have shown that the snail toxin is 1,000 times more powerful than morphine in treating certain kinds of chronic pain. The snail-derived drug, named Prialt™ by the company (Elan Corporation, plc in Dublin, Ireland) that developed and markets it, jams up nerve transmission in the spinal cord and blocks certain pain signals from reaching the brain. Scientists predict that many more cone snail toxins will be drug leads, since 500 different species of this animal populate Earth.

A poison produced by the cone snail C. geographus has become a powerful new pain medicine.

The cancer drug Taxol originally came from the bark and needles of yew trees.

Are researchers taking advantage of nature when it comes to hunting for new medicines? Public concern has been raised about scientists scouring the world's tropical rain forests and coral reefs to look for potential natural chemicals that may end up being useful drugs. While it is true that rain forests in particular are home to an extraordinarily rich array of species of animals and plants, many life-saving medicines derived from natural products have been discovered in temperate climates not much different from our kitchens and backyards.

Many wonder drugs have arisen from non-endangered species, such as the bark of the willow tree, which was the original source of aspirin. The antibiotic penicillin, from an ordinary mold, is another example.
Although scientists first found the chemical that became the widely prescribed cancer drug Taxol® in the bark of an endangered species of tree called the Pacific yew, researchers have since found a way to manufacture Taxol in the lab, starting with an extract from pine needles of the much more abundant European yew. In many cases, chemists have also figured out ways to make large quantities of rainforest- and reef-derived chemicals in the lab (see main text).
3.3: Tweaking Nature
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/03%3A_Drugs_from_Nature_Then_and_Now/3.03%3A_Tweaking_Nature
Searching nature's treasure trove for potential medicines is often only the first step. Having tapped natural resources to hunt for new medicines, pharmaceutical scientists then work to figure out ways to cultivate natural products or to make them from scratch in the lab. Chemists play an essential role in turning marine and other natural products, which are often found in minute quantities, into useful medicines.

In the case of Yondelis, chemist Elias J. Corey of Harvard University in Cambridge, Massachusetts, deciphered nature's instructions on how to make this powerful medicinal molecule. That's important, because researchers must harvest more than a ton of Caribbean sea squirts to produce just 1 gram of the drug. By synthesizing drugs in a lab, scientists can produce thousands more units of a drug, plenty to use in patients if it proves effective against disease.

Scientists are also beginning to use a relatively new procedure called combinatorial genetics to custom-make products that don't even exist in nature. Researchers have discovered ways to remove the genetic instructions for entire metabolic pathways from certain microorganisms, alter the instructions, and then put them back. This method can generate new and different "natural" products.

Just as your genes help determine how you respond to certain medicines, your genetic code can also affect your susceptibility to illness. Why is it that two people with a similar lifestyle and a nearly identical environment can have such different propensities to getting sick? Lots of factors contribute, including diet, but scientists believe that an important component of disease risk is the genetic variability of people's reactions to chemicals in the environment.

On hearing the word "chemical," many people think of smokestacks and pollution. Indeed, our world is littered with toxic chemicals, some natural and some synthetic.
For example, nearly all of us would succumb quickly to the poisonous bite of a cobra, but it is harder to predict which of us will develop cancer from exposure to carcinogens like cigarette smoke.

Toxicologists are researchers who study the effects of poisonous substances on living organisms. One toxicologist, Serrine Lau of the University of Texas at Austin, is trying to unravel the genetic mystery of why people are more or less susceptible to kidney damage after coming into contact with some types of poisons. Lau and her coworkers study the effects of a substance called hydroquinone (HQ), an industrial pollutant and a contaminant in cigarette smoke and diesel engine exhaust. Lau is searching for genes that play a role in triggering cancer in response to HQ exposure. Her research and the work of other so-called toxicogeneticists should help scientists find genetic "signatures" that can predict risk of developing cancer in people exposed to harmful carcinogens.
3.4: Is It Chemistry or Genetics?
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/03%3A_Drugs_from_Nature_Then_and_Now/3.04%3A_Is_It_Chemistry_or_Genetics
Regardless of the way researchers find new medicines, drug discovery often takes many unexpected twists and turns. Scientists must train their eyes to look for new opportunities lurking in the outcomes of their experiments. Sometimes, side trips in the lab can open up entirely new avenues of discovery.

Take the case of cyclosporine, a drug discovered three decades ago that suppresses the immune system and thereby prevents the body from rejecting transplanted organs. Still a best-selling medicine, cyclosporine was a research breakthrough. The drug made it possible for surgeons to save the lives of many critically ill patients by transplanting organs. But it's not hard to imagine that the very properties that make cyclosporine so powerful in putting a lid on the immune system can cause serious side effects, by damping immune function too much.

Years after the discovery of cyclosporine, researchers looking for less toxic versions of this drug found a natural molecule called FK506 that seemed to produce the same immune-suppressing effects at lower doses. The researchers found, to their great surprise, that cyclosporine and FK506 were chemically very different. To try to explain this puzzling result, Harvard University organic chemist Stuart Schreiber (then at Yale University in New Haven, Connecticut) decided to take on the challenge of figuring out how to make FK506 in his lab, beginning with simple chemical building blocks.

Schreiber succeeded, and he and scientists at Merck & Co., Inc. (Whitehouse Station, New Jersey) used the synthetic FK506 as a tool to unravel the molecular structure of the receptor for FK506 found on immune cells. According to Schreiber, information about the receptor's structure from these experiments opened his eyes to consider an entirely new line of research.

Schreiber reasoned that by custom-making small molecules in the lab, scientists could probe the function of the FK506 receptor to systematically study how the immune system works.
Since then, he and his group have continued to use synthetic small molecules to explore biology. Although Schreiber's strategy is not truly genetics, he calls the approach chemical genetics, because the method resembles the way researchers go about their studies to understand the functions of genes.

In one traditional genetic approach, scientists alter the "spelling" (nucleotide components) of a gene and put the altered gene into a model organism—for example, a mouse, a plant, or a yeast cell—to see what effect the gene change has on the biology of that organism. Chemical genetics harnesses the power of chemistry to custom-produce any molecule and introduce it into cells, then look for biological changes that result. Starting with chemicals instead of genes gives drug development a step up. If the substance being tested produces a desired effect, such as stalling the growth of cancer cells, then the molecule can be chemically manipulated in short order, since the chemist already knows how to make it.

These days, it's hard for scientists to know what to call themselves. As research worlds collide in wondrous and productive ways, the lines get blurry when it comes to describing your expertise. Craig Crews of Yale University, for example, mixes a combination of molecular pharmacology, chemistry, and genetics. In fact, because of his multiple scientific curiosities, Crews is a faculty member in three different Yale departments: molecular, cellular, and developmental biology; chemistry; and pharmacology. You might wonder how he has time to get anything done.

The herb feverfew (bachelor's button) contains a substance called parthenolide that appears to block inflammation.

He's getting plenty done—Crews is among a new breed of researchers delving into a growing scientific area called chemical genetics (see main text).
Taking this approach, scientists use chemistry to attack biological problems that traditionally have been solved through genetic experiments such as the genetic engineering of bacteria, yeast, and mice. Crews' goal is to explore how natural products work in living systems and to identify new targets for designing drugs. He has discovered how an inflammation-fighting ingredient in the medicinal herb feverfew may work inside cells. He found that the ingredient, called parthenolide, appears to disable a key process that gets inflammation going. In the case of feverfew, a handful of controlled scientific studies in people have hinted that the herb, also known by its plant name "bachelor's button," is effective in combating migraine headaches, but further studies are needed to confirm these preliminary findings.
3.5: Testing…I, II, III
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/03%3A_Drugs_from_Nature_Then_and_Now/3.05%3A_TestingI_II_III
To translate pharmacology research into patient care, potential drugs ultimately have to be tested in people. This multistage process is known as clinical trials, and it has led researchers to validate life-saving treatments for many diseases, such as childhood leukemia and Hodgkin's disease. Clinical trials, though costly and very time-consuming, are the only way researchers can know for sure whether experimental treatments work in humans.

Scientists conduct clinical trials in three phases (I, II, and III), each providing the answer to a different fundamental question about a potential new drug: Is it safe? Does it work? Is it better than the standard treatment? Typically, researchers do years of basic work in the lab and in animal models before they can even consider testing an experimental treatment in people. Importantly, scientists who wish to test drugs in people must follow strict rules that are designed to protect those who volunteer to participate in clinical trials. Special groups called Institutional Review Boards, or IRBs, evaluate all proposed research involving humans to determine the potential risks and anticipated benefits. The goal of an IRB is to make sure that the risks are minimized and that they are reasonable compared to the knowledge expected to be gained by performing the study. Clinical studies cannot go forward without IRB approval. In addition, people in clinical studies must agree to the terms of a trial by participating in a process called informed consent and signing a form, required by law, that says they understand the risks and benefits involved in the study.

Phase I studies test a drug's safety in a few dozen to a hundred people and are designed to figure out what happens to a drug in the body—how it is absorbed, metabolized, and excreted. Phase I studies usually take several months. Phase II trials test whether or not a drug produces a desired effect.
These studies take longer—from several months to a few years—and can involve up to several hundred patients. A phase III study further examines the effectiveness of a drug as well as whether the drug is better than current treatments. Phase III studies involve hundreds to thousands of patients, and these advanced trials typically last several years. Many phase II and phase III studies are randomized, meaning that one group of patients gets the experimental drug being tested while a second, control group gets either a standard treatment or placebo (that is, no treatment, often masked as a "dummy" pill or injection). Also, usually phase II and phase III studies are "blinded"—the patients and the researchers do not know who is getting the experimental drug. Finally, once a new drug has completed phase III testing, a pharmaceutical company can request approval from the Food and Drug Administration to market the drug.

Scientists are currently testing cone snail toxins for the treatment of which health problem?

How are people protected when they volunteer to participate in a clinical trial?

Why do plants and marine organisms have chemicals that could be used as medicines?

What is a drug "lead"?

Name the first marine-derived cancer medicine.
4.1: Medicine Hunting
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/04%3A_Molecules_to_Medicines/4.01%3A_Medicine_Hunting
While sometimes the discovery of potential medicines falls to researchers' good luck, most often pharmacologists, chemists, and other scientists looking for new drugs plod along methodically for years, taking suggestions from nature or clues from knowledge about how the body works.

Finding chemicals' cellular targets can educate scientists about how drugs work. Aspirin's molecular target, the enzyme cyclooxygenase, or COX (see page 22), was discovered this way in the early 1970s in Nobel Prize-winning work by pharmacologist John Vane, then at the Royal College of Surgeons in London, England. Another example is colchicine, a relatively old drug that is still widely used to treat gout, an excruciatingly painful type of arthritis in which needle-like crystals of uric acid clog joints, leading to swelling, heat, pain, and stiffness. Lab experiments with colchicine led scientists to this drug's molecular target, a cell-scaffolding protein called tubulin. Colchicine works by attaching itself to tubulin, causing certain parts of a cell's architecture to crumble, and this action can interfere with a cell's ability to move around. Researchers suspect that in the case of gout, colchicine works by halting the migration of immune cells called granulocytes that are responsible for the inflammation characteristic of gout.

Drugs used to treat bone ailments may be useful for treating infectious diseases like malaria.

As pet owners know, you can teach some old dogs new tricks. In a similar vein, scientists have in some cases found new uses for "old" drugs. Remarkably, the potential new uses often have little in common with a drug's product label (its "old" use).
For example, chemist Eric Oldfield of the University of Illinois at Urbana-Champaign discovered that one class of drugs called bisphosphonates, which are currently approved to treat osteoporosis and other bone disorders, may also be useful for treating malaria, Chagas' disease, leishmaniasis, and AIDS-related infections like toxoplasmosis.

Previous research by Oldfield and his coworkers had hinted that the active ingredient in the bisphosphonate medicines Fosamax®, Actonel®, and Aredia® blocks a critical step in the metabolism of parasites, the microorganisms that cause these diseases. To test whether this was true, Oldfield gave the medicines to five different types of parasites, each grown along with human cells in a plastic lab dish. The scientists found that small amounts of the osteoporosis drugs killed the parasites while sparing human cells. The researchers are now testing the drugs in animal models of the parasitic diseases and so far have obtained cures—in mice—of certain types of leishmaniasis. If these studies prove that bisphosphonate drugs work in larger animal models, the next step will be to find out if the medicines can thwart these parasitic diseases in humans.

Current estimates indicate that scientists have identified roughly 500 to 600 molecular targets where medicines may have effects in the body. Medicine hunters can strategically "discover" drugs by designing molecules to "hit" these targets. That has already happened in some cases. Researchers knew just what they were looking for when they designed the successful AIDS drugs called HIV protease inhibitors. Previous knowledge of the three-dimensional structure of certain HIV proteins (the target) guided researchers to develop drugs shaped to block their action. Protease inhibitors have extended the lives of many people with AIDS.

However, sometimes even the most targeted approaches can end up in big surprises.
The New York City pharmaceutical firm Pfizer had a blood pressure-lowering drug in mind when instead its scientists discovered Viagra®, a best-selling drug approved to treat erectile dysfunction. Initially, researchers had planned to create a heart drug, using knowledge they had about molecules that make blood clot and molecular signals that instruct blood vessels to relax. What the scientists did not know was how their candidate drug would fare in clinical trials.

Colchicine, a treatment for gout, was originally derived from the stem and seeds of the meadow saffron (autumn crocus).

Sildenafil (Viagra's chemical name) did not work very well as a heart medicine, but many men who participated in the clinical testing phase of the drug noted one side effect in particular: erections. Viagra works by boosting levels of a natural molecule called cyclic GMP that plays a key role in cell signaling in many body tissues. This molecule does a good job of opening blood vessels in the penis, leading to an erection.
4.2: 21st-Century Science
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/04%3A_Molecules_to_Medicines/4.02%3A_21st-Century_Science
While strategies such as chemical genetics can quicken the pace of drug discovery, other approaches may help expand the number of molecular targets from several hundred to several thousand. Many of these new avenues of research hinge on biology.

Relatively new brands of research that are stepping onto center stage in 21st-century science include genomics (the study of all of an organism's genetic material), proteomics (the study of all of an organism's proteins), and bioinformatics (using computers to sift through large amounts of biological data). The "omics" revolution in biomedicine stems from biology's gradual transition from a gathering, descriptive enterprise to a science that will someday be able to model and predict biology. If you think 25,000 genes is a lot (the number of genes in the human genome), realize that each gene can give rise to several different protein variants, each with its own molecular job. Scientists estimate that humans have hundreds of thousands of protein variants. Clearly, there's lots of work to be done, which will undoubtedly keep researchers busy for years to come.

Doctors use the drug Gleevec to treat a form of leukemia, a disease in which abnormally high numbers of immune cells (larger, purple circles in photo) populate the blood.

Recently, researchers made an exciting step forward in the treatment of cancer. Years of basic research investigating circuits of cellular communication led scientists to tailor-make a new kind of cancer medicine. In May 2001, the drug Gleevec™ was approved to treat a rare cancer of the blood called chronic myelogenous leukemia (CML). The Food and Drug Administration described Gleevec's approval as "a testament to the groundbreaking scientific research taking place in labs throughout America."

Researchers designed this drug to halt a cell-communication pathway that is always "on" in CML. Their success was founded on years of experiments in the basic biology of how cancer cells grow.
The discovery of Gleevec is an example of the success of so-called molecular targeting: understanding how diseases arise at the level of cells, then figuring out ways to treat them. Scores of drugs, some to treat cancer but also many other health conditions, are in the research pipeline as a result of scientists' eavesdropping on how cells communicate.
4.3: Rush Delivery
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/04%3A_Molecules_to_Medicines/4.03%3A_Rush_Delivery
Finding new medicines and cost-effective ways to manufacture them is only half the battle. An enormous challenge for pharmacologists is figuring out how to get drugs to the right place, a task known as drug delivery.

Ideally, a drug should enter the body, go directly to the diseased site while bypassing healthy tissue, do its job, and then disappear. Unfortunately, this rarely happens with the typical methods of delivering drugs: swallowing and injection. When swallowed, many medicines made of protein are never absorbed into the bloodstream because they are quickly chewed up by enzymes as they pass through the digestive system. If the drug does get to the blood from the intestines, it falls prey to liver enzymes. For doctors prescribing such drugs, this first-pass effect means that several doses of an oral drug are needed before enough makes it to the blood. Drug injections also cause problems, because they are expensive, difficult for patients to self-administer, and unwieldy if the drug must be taken daily. Both methods of administration also result in fluctuating levels of the drug in the blood, which is inefficient and can be dangerous.

What to do? Pharmacologists can work around the first-pass effect by delivering medicines via the skin, nose, and lungs. Each of these methods bypasses the intestinal tract and can increase the amount of drug getting to the desired site of action in the body. Slow, steady drug delivery directly to the bloodstream—without stopping at the liver first—is the primary benefit of skin patches, which makes this form of drug delivery particularly useful when a chemical must be administered over a long period.

Hormones such as testosterone, progesterone, and estrogen are available as skin patches. These forms of medicines enter the blood via a meshwork of small arteries, veins, and capillaries in the skin. Researchers also have developed skin patches for a wide variety of other drugs.
Some of these include Duragesic® (a prescription-only pain medicine), Transderm Scop® (a motion-sickness drug), and Transderm Nitro® (a blood vessel-widening drug used to treat chest pain associated with heart disease). Despite their advantages, however, skin patches have a significant drawback: Only very small drug molecules can get into the body through the skin.

Inhaling drugs through the nose or mouth is another way to rapidly deliver drugs and bypass the liver. Inhalers have been a mainstay of asthma therapy for years, and doctors prescribe nasal steroid drugs for allergy and sinus problems.

Researchers are investigating insulin powders that can be inhaled by people with diabetes who rely on insulin to control their blood sugar daily. This still-experimental technology stems from novel uses of chemistry and engineering to manufacture insulin particles of just the right size. Too large, and the insulin particles could lodge in the lungs; too small, and the particles will be exhaled. If clinical trials with inhaled insulin prove that it is safe and effective, then this therapy could make life much easier for people with diabetes.

Scientists try hard to listen to the noisy, garbled "discussions" that take place inside and between cells. Less than a decade ago, scientists identified one very important cellular communication stream called MAP (mitogen-activated protein) kinase signaling. Today, molecular pharmacologists such as Melanie H. Cobb of the University of Texas Southwestern Medical Center at Dallas are studying how MAP kinase signaling pathways malfunction in unhealthy cells.

Kinases are enzymes that add phosphate groups (red-yellow structures) to proteins (green), assigning the proteins a code.
In this reaction, an intermediate molecule called ATP (adenosine triphosphate) donates a phosphate group from itself, becoming ADP (adenosine diphosphate).

Some of the interactions between proteins in these pathways involve adding and taking away tiny molecular labels called phosphate groups. Kinases are the enzymes that add phosphate groups to proteins, and this process is called phosphorylation. Marking proteins in this way assigns the proteins a code, instructing the cell to do something, such as divide or grow. The body employs many, many signaling pathways involving hundreds of different kinase enzymes. Some of the important functions performed by MAP kinase pathways include instructing immature cells how to "grow up" to be specialized cell types like muscle cells, helping cells in the pancreas respond to the hormone insulin, and even telling cells how to die.

Since MAP kinase pathways are key to so many important cell processes, researchers consider them good targets for drugs. Clinical trials are under way to test various molecules that, in animal studies, can effectively lock up MAP kinase signaling when it's not wanted, for example, in cancer and in diseases involving an overactive immune system, such as arthritis. Researchers predict that if drugs to block MAP kinase signaling prove effective in people, they will likely be used in combination with other medicines that treat a variety of health conditions, since many diseases are probably caused by simultaneous errors in multiple signaling pathways.

Proteins that snake through membranes help transport molecules into cells.
4.4: Transportation Dilemmas
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/04%3A_Molecules_to_Medicines/4.04%3A_Transportation_Dilemmas
Scientists are solving the dilemma of drug delivery with a variety of other clever techniques. Many of the techniques are geared toward sneaking drugs through the cell's gatekeeping system: its membranes. The challenge is a chemistry problem—most drugs are water-soluble, but membranes are oily. Water and oil don't mix, and thus many drugs can't enter the cell. To make matters worse, size matters too. Membranes are usually constructed to permit the entry of only small nutrients and hormones, often through private cellular alleyways called transporters.

Many pharmacologists are working hard to devise ways to work not against, but with nature, by learning how to hijack molecular transporters to shuttle drugs into cells. Gordon Amidon, a pharmaceutical chemist at the University of Michigan-Ann Arbor, has been studying one particular transporter in mucosal membranes lining the digestive tract. The transporter, called hPEPT1, normally serves the body by ferrying small, electrically charged particles and small protein pieces called peptides into and out of the intestines.

Amidon and other researchers discovered that certain medicines, such as the antibiotic penicillin and certain types of drugs used to treat high blood pressure and heart failure, also travel into the intestines via hPEPT1. Recent experiments revealed that the herpes drug Valtrex® and the AIDS drug Retrovir® also hitch a ride into intestinal cells using the hPEPT1 transporter. Amidon wants to extend this list by synthesizing hundreds of different molecules and testing them for their ability to use hPEPT1 and other similar transporters. Recent advances in molecular biology, genomics, and bioinformatics have sped the search for molecules that Amidon and other researchers can test.

Scientists are also trying to slip molecules through membranes by cloaking them in disguise.
Steven Regen of Lehigh University in Bethlehem, Pennsylvania, has manufactured miniature chemical umbrellas that close around and shield a molecule when it encounters a fatty membrane and then spread open in the watery environment inside a cell. So far, Regen has only used test molecules, not actual drugs, but he has succeeded in getting molecules that resemble small segments of DNA across membranes. The ability to do this in humans could be a crucial step in successfully delivering therapeutic molecules to cells via gene therapy.
4.5: Act Like a Membrane
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/04%3A_Molecules_to_Medicines/4.05%3A_Act_Like_a_Membrane
Researchers know that high concentrations of chemotherapy drugs will kill every single cancer cell growing in a lab dish, but getting enough of these powerful drugs to a tumor in the body without killing too many healthy cells along the way has been exceedingly difficult. These powerful drugs can do more harm than good by severely sickening a patient during treatment.

Some researchers are using membrane-like particles called liposomes to package and deliver drugs to tumors. Liposomes are oily, microscopic capsules that can be filled with biological cargo, such as a drug. They are very, very small—only one one-thousandth the width of a single human hair. Researchers have known about liposomes for many years, but getting them to the right place in the body hasn't been easy. Once in the bloodstream, these foreign particles are immediately shipped to the liver and spleen, where they are destroyed.

Scientists who study anesthetic medicines have a daunting task—for the most part, they are "shooting in the dark" when it comes to identifying the molecular targets of these drugs. Researchers do know that anesthetics share one common ingredient: Nearly all of them somehow target membranes, the oily wrappings surrounding cells. However, despite the fact that anesthesia is a routine part of surgery, exactly how anesthetic medicines work in the body has remained a mystery for more than 150 years. It's an important problem, since anesthetics have multiple effects on key body functions, including critical processes such as breathing.

Scientists define anesthesia as a state in which no movement occurs in response to what should be painful. The problem is that even though a patient loses a pain response, the anesthesiologist can't tell what is happening inside the person's organs and cells. Further complicating the issue, scientists know that many different types of drugs—with little physical resemblance to each other—can all produce anesthesia.
This makes it difficult to track down causes and effects.

Anesthesiologist Robert Veselis of the Memorial Sloan-Kettering Institute for Cancer Research in New York City clarified how certain types of these mysterious medicines work. Veselis and his coworkers measured electrical activity in the brains of healthy volunteers receiving anesthetics while they listened to different sounds. To determine how sedated the people were, the researchers measured reaction time to the sounds the people heard. To measure memory effects, they quizzed the volunteers at the end of the study about word lists they had heard before and during anesthesia. Veselis' experiments show that the anesthetics they studied affect separate brain areas to produce the two different effects of sedation and memory loss. The findings may help doctors give anesthetic medicines more effectively and safely and prevent reactions with other drugs a patient may be taking.

Materials engineer David Needham of Duke University in Durham, North Carolina, is investigating the physics and chemistry of liposomes to better understand how the liposomes and their cancer-fighting cargo can travel through the body. Needham worked for 10 years to create a special kind of liposome that melts at just a few degrees above body temperature. The end result is a tiny molecular "soccer ball" made from two different oils that wrap around a drug. At room temperature, the liposomes are solid and they stay solid at body temperature, so they can be injected into the bloodstream. The liposomes are designed to spill their drug cargo into a tumor when heat is applied to the cancerous tissue. Heat is known to perturb tumors, making the blood vessels surrounding cancer cells extra-leaky.
As the liposomes approach the warmed tumor tissue, the "stitches" of the miniature soccer balls begin to dissolve, rapidly leaking the liposome's contents.

Needham and Duke oncologist Mark Dewhirst teamed up to do animal studies with the heat-activated liposomes. Experiments in mice and dogs revealed that, when heated, the drug-laden capsules flooded tumors with a chemotherapy drug and killed the cancer cells inside. Researchers hope to soon begin the first stage of human studies testing the heat-triggered liposome treatment in patients with prostate and breast cancer. The results of these and later clinical trials will determine whether liposome therapy can be a useful weapon for treating breast and prostate cancer and other hard-to-treat solid tumors.

[Figure] David Needham designed liposomes resembling tiny molecular "soccer balls" made from two different oils that wrap around a drug. (Credit: Lawrence Mayer, Ludger Ickenstein, Katrina Edwards)

4.5: Act Like a Membrane is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
4.6: The G Switch
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/04%3A_Molecules_to_Medicines/4.06%3A_The_G_Switch
G proteins act like relay batons to pass messages from circulating hormones into cells.

Imagine yourself sitting on a cell, looking outward to the bloodstream rushing by. Suddenly, a huge glob of something hurls toward you, slowing down just as it settles into a perfect dock on the surface of your cell perch. You don't realize it, but your own body sent this substance—a hormone called epinephrine—to protect you, telling you to get out of the way of a car that just about sideswiped yours while drifting out of its lane. Your body reacts, whipping up the familiar, spine-tingling, "fight-or-flight" response that gears you to respond quickly to potentially threatening situations such as this one.

How does it all happen so fast?

Getting into a cell is a challenge, a strictly guarded process kept in control by a protective gate called the plasma membrane. Figuring out how molecular triggers like epinephrine communicate important messages to the inner parts of cells earned two scientists the Nobel Prize in physiology or medicine in 1994. Getting a cellular message across the membrane is called signal transduction, and it occurs in three steps. First, a message (such as epinephrine) encounters the outside of a cell and makes contact with a molecule on the surface called a receptor. Next, a connecting transducer, or switch molecule, passes the message inward, sort of like a relay baton. Finally, in the third step, the signal gets amplified, prompting the cell to do something: move, produce new proteins, even send out more signals.

One of the Nobel Prize winners, pharmacologist Alfred G. Gilman of the University of Texas Southwestern Medical Center at Dallas, uncovered the identity of the switch molecule, called a G protein. Gilman named the switch, which is actually a huge family of switch molecules, not after himself but after the type of cellular fuel it uses: an energy currency called GTP. As with any switch, G proteins must be turned on only when needed, then shut off.
Some illnesses, including fatal diseases like cholera, occur when a G protein is errantly left on. In the case of cholera, the poisonous weaponry of the cholera bacterium "freezes" in place one particular type of G protein that controls water balance. The effect is constant fluid leakage, causing life-threatening diarrhea.

In the few decades since Gilman and the other Nobel Prize winner, the late National Institutes of Health scientist Martin Rodbell, made their fundamental discovery about G protein switches, pharmacologists all over the world have focused on these signaling molecules. Research on G proteins and on all aspects of cell signaling has prospered, and as a result scientists now have an avalanche of data. In the fall of 2000, Gilman embarked on a groundbreaking effort to begin to untangle and reconstruct some of this information to guide the way toward creating a "virtual cell." Gilman leads the Alliance for Cellular Signaling, a large, interactive research network. The group has a big dream: to understand everything there is to know about signaling inside cells. According to Gilman, Alliance researchers focus lots of attention on G proteins and also on other signaling systems in selected cell types. Ultimately, the scientists hope to test drugs and learn about disease through computer modeling experiments with the virtual cell system.

4.6: The G Switch is shared under a Public Domain license and was authored, remixed, and/or curated by LibreTexts.
InfoPage
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/00%3A_Front_Matter/02%3A_InfoPage
This text is disseminated via the Open Education Resource (OER) LibreTexts Project and like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such efforts.

Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts’ web based origins allow powerful integration of advanced features and new technologies to support learning. The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields) integrated.

The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant No.
1246120, 1525057, and 1413739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the US Department of Education.

Have questions or comments? For information about adoptions or adaptions contact More information on our activities can be found via Facebook, Twitter, or our blog.

This text was compiled on 07/13/2023
1.1: Viscosity
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/01%3A_Thickening_and_Concentrating_Flavors/1.01%3A_Viscosity
Viscous means “sticky,” and the term viscosity refers to the way in which the chocolate flows. Chocolate comes in various viscosities, and the confectioner chooses the one that is most appropriate to his or her needs. The amount of cocoa butter in the chocolate is largely responsible for the viscosity level. Emulsifiers like lecithin can help thin out melted chocolate, so it flows evenly and smoothly. Because lecithin is less expensive than cocoa butter as a thinning agent, it can be used to help lower the cost of chocolate.

Molded pieces such as Easter eggs require a chocolate of less viscosity. That is, the chocolate should be somewhat runny so it is easier to flow into the moulds. This is also the case for coating cookies and most cakes, where a thin, attractive and protective coating is all that is needed. A somewhat thicker chocolate is advisable for things such as ganache and flavoring of creams and fillings. Where enrobers (machines to dip chocolate centers) are used, the chocolate may also be thinner to ensure that there is an adequate coat of couverture.

Viscosity varies between manufacturers, and a given type of chocolate made by one manufacturer may be available in more than one viscosity. Bakers sometimes alter the viscosity depending on the product. A vegetable oil is sometimes used to thin chocolate for coating certain squares. This makes it easier to cut afterwards.

Content and quality of chocolate chips and chunks vary from one manufacturer to another. This chocolate is developed to be more heat stable for use in cookies and other baking where you want the chips and chunks to stay whole. Ratios of chocolate liquor, sugar, and cocoa butter differ. All these variables affect the flavor. Chips and chunks may be pure chocolate or have another fat substituted for the cocoa butter.
Some high quality chips have up to 65% chocolate liquor, but in practice, liquor content over 40% tends to smear in baking, so high ratios defeat the purpose.

Many manufacturers package their chips or chunks by count (ct) size. This refers to how many pieces there are in 1 kg of the product. As the count size number increases, the size of the chip gets smaller. With this information, you can choose the best size of chip for the product you are producing.

Other chocolate products available are chocolate sprinkles or “hail,” used as a decoration; chocolate curls, rolls, or decorative shapes for use on cakes and pastries; and chocolate sticks or “batons,” which are often baked inside croissants.

This page titled 1.1: Viscosity is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.2: Thickening Agents
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/01%3A_Thickening_and_Concentrating_Flavors/1.02%3A_Thickening_Agents
Two types of thickening agents are recognized: starches and gums. Most thickening agents are of vegetable origin; the only exception is gelatin. All the starches are products of the land; some of the gums are of marine origin. Bakers use thickening agents primarily to:

This page titled 1.2: Thickening Agents is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.3: Types of Thickening Agents
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/01%3A_Thickening_and_Concentrating_Flavors/1.03%3A_Types_of_Thickening_Agents
Cornstarch is the most common thickening agent used in the industry. It is mixed with water or juice and boiled to make fillings and to give a glossy semi-clear finish to products. Commercial cornstarch is made by soaking maize in water containing sulphur dioxide. The soaking softens the corn and the sulphur dioxide prevents possible fermentation. It is then crushed and passed to water tanks where the germ floats off. The mass is then ground fine and, still in a semi-fluid state, passed through silk screens to remove the skin particles. After filtration, the product, which is almost 100% starch, is dried.

Cornstarch in cold water is insoluble, granular, and will settle out if left standing. However, when cornstarch is cooked in water, the starch granules absorb water, swell, and rupture, forming a translucent thickened mixture. This phenomenon is called gelatinization. Gelatinization usually begins at about 60°C (140°F), reaching completion at the boiling point.

The commonly used ingredients in a starch recipe affect the rate of gelatinization of the starch. Sugar, added in a high ratio to the starch, will inhibit the granular swelling. The starch gelatinization will not be completed even after prolonged cooking at normal temperature. The result is a filling of thin consistency, dull color, and a cereal taste. Withhold some of the sugar from the cooking step in such cases, and add it after gelatinization of the starch has been completed.

Other ingredients such as egg, fat, and dry milk solids have a similar effect. Fruits with high acidity such as rhubarb will also inhibit starch setting. Cook the starch paste first and add the fruit afterward.

In cooking a filling, about 1.5 kg (3 1/3 lb.) of sugar should be cooked with the water or juice for every 500 g (18 oz.) of starch used as a thickener. Approximately 100 g (4 oz.) of starch is used to thicken 1 L of water or fruit juice. The higher the acidity of the fruit juice, the more thickener required to hold the gel.
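The guideline ratios above (roughly 100 g starch per litre of juice, and about 1.5 kg sugar cooked per 500 g starch, a 3:1 ratio by weight) can be turned into a quick scaling calculation. The sketch below is illustrative only: the function name and defaults are assumptions, and the extra thickener needed for high-acid juices is not modeled.

```python
def filling_amounts(juice_litres, starch_g_per_litre=100.0, sugar_per_starch=3.0):
    """Scale a cooked fruit filling from the guideline ratios:
    ~100 g starch thickens 1 L of juice, and ~1.5 kg sugar is cooked
    per 500 g starch (a 3:1 sugar-to-starch ratio by weight).
    High-acid juices need more starch; raise starch_g_per_litre."""
    starch_g = juice_litres * starch_g_per_litre
    sugar_g = starch_g * sugar_per_starch
    return starch_g, sugar_g

# For 2 L of fruit juice:
starch, sugar = filling_amounts(2.0)
print(f"{starch:.0f} g starch, {sugar:.0f} g sugar")  # 200 g starch, 600 g sugar
```

Remember that part of the sugar may be withheld from the cooking step and added after gelatinization, as described above.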
Regular cornstarch thickens well but makes a cloudy solution. Another kind of cornstarch, waxy maize starch, makes a more fluid mix of great clarity.

Pre-gelatinized starches are mixed with sugar and then added to the water or juice. They thicken the filling in the presence of sugar and water without heating. This is due to the starch being precooked and not requiring heat to enable it to absorb and gelatinize. There are several brands of these starches on the market (e.g., Clear Jel), and they all vary in absorption properties. For best results, follow the manufacturer’s guidelines. Do not put pre-gelatinized starch directly into water, as it will form lumps immediately.

If fruit fillings are made with these pre-cooked starches, there is a potential for breakdown if the fillings are kept. Enzymes in the uncooked fruit may “attack” the starch and destroy some of the gelatinized structure. For example, if you are making a week’s supply of pie filling from fresh rhubarb, use a regular cooked formula.

Arrowroot is a highly nutritious farinaceous starch obtained from the roots and tubers of various West Indian plants. It is used in the preparation of delicate soups, sauces, puddings, and custards.

Agar-agar is a jelly-like substance extracted from red seaweed found off the coasts of Japan, California, and Sri Lanka. It is available in strips or slabs and in powder form. Agar-agar only dissolves in hot water and is colorless. Use it at 1% to make a firm gel. It has a melting point much higher than gelatin and its jellying power is eight times greater. It is used in pie fillings and to some extent in the stiffening of jams. It is a permitted ingredient in some dairy products, including ice cream at 0.5%. One of its largest uses is in the production of materials such as piping jelly and marshmallow.

Algin (sodium alginate), extracted from kelp, dissolves in cold water, and a 1% concentration gives a firm gel. It has the disadvantage of not working well in the presence of acidic fruits.
It is popular in uncooked icings because it works well in the cold state and holds a lot of moisture. It reduces stickiness and prevents recrystallization.

Carrageenan is another marine gum extracted from red seaweed. It is used as a thickening agent in various products, from icing stabilizers to whipping cream, at an allowable rate of 0.1% to 0.5%.

Gelatin is a glutinous substance made from the bones, connective tissues, and skins of animals. The calcium is removed and the remaining substance is soaked in cold water. Then it is heated to 40°C to 60°C (105°F to 140°F). The partially evaporated liquid is defatted and coagulated on glass plates and then poured into moulds. When solid, the blocks of gelatin are cut into thin layers and dried on wire netting.

Gelatin is available in sheets of leaf gelatin, powders, granules, or flakes. Use it at a 1% ratio. Like some of the other gelling agents, acidity adversely affects its gelling capacity. The quality of gelatin often varies because of different methods of processing and manufacturing. For this reason, many bakers prefer leaf gelatin because of its reliable strength.

Gum arabic is obtained from various kinds of trees and is soluble in hot or cold water. Solutions of gum arabic are used in the bakery for glazing various kinds of goods, particularly marzipan fruits.

Gum tragacanth is obtained from several species of Astragalus, low-growing shrubs found in Western Asia. It can be purchased in flakes or powdered form. It was once used to make gum paste and gum paste wedding ornaments, but due to high labour costs and a prohibitive price for the product, its use nowadays is uncommon.

Pectin is a mucilaginous substance (gummy substance extracted from plants), occurring naturally in pears, apples, quince, oranges, and other citrus fruits.
It is used as the gelling agent in traditional jams and jellies.

This page titled 1.3: Types of Thickening Agents is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.4: Coagulation
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/01%3A_Thickening_and_Concentrating_Flavors/1.04%3A_Coagulation
Coagulation is defined as the transformation of proteins from a liquid state to a solid form. Once proteins are coagulated, they cannot be returned to their liquid state. Coagulation often begins around 38°C (100°F), and the process is complete between 71°C and 82°C (160°F and 180°F). Within the baking process, the natural structures of the ingredients are altered irreversibly by a series of physical, chemical, and biochemical interactions. The three main types of protein that cause coagulation in the bakeshop are outlined below.

Eggs contain many different proteins. The white, or albumen, contains approximately 40 different proteins, the most predominant being ovalbumin (54%) and ovotransferrin (12%). The yolk contains mostly lipids (fats), but also lipoproteins. These different proteins will all coagulate when heated, but do so at different temperatures. The separated white of an egg coagulates between 60°C and 65°C (140°F and 149°F) and the yolk between 62°C and 70°C (144°F and 158°F), which is why you can cook an egg and have a fully set white and a still runny yolk. These temperatures are raised when eggs are mixed into other liquids. For example, the coagulation and thickening of an egg, milk, and sugar mixture, as in custard, will take place between 80°C and 85°C (176°F and 185°F), and the mixture will start to curdle at 88°C to 90°C (190°F to 194°F).

Casein, a semi-solid substance formed by the coagulation of milk, is obtained and used primarily in cheese making. Rennet, derived from the stomach linings of cattle, sheep, and goats, is used to coagulate, or thicken, milk during the cheese-making process. Plant-based rennet is also available.
Chymosin (also called rennin), the active enzyme in rennet, is responsible for curdling the milk, which will then separate into solids (curds) and liquid (whey).

Milk and milk products will also coagulate when treated with an acid, such as citric acid (lemon juice) or vinegar, used in the preparation of fresh ricotta, and tartaric acid, used in the preparation of mascarpone, or will naturally curdle when sour as lactic acid develops in the milk. In some cases, as in the production of yogurt or crème fraîche, acid-causing bacteria are added to the milk product to cause the coagulation. Similarly, tofu is made from soybean milk that has been coagulated with the use of either salt, acid, or enzyme-based coagulants.

Two main proteins are found in wheat flour: glutenin and gliadin (smaller quantities are also found in other grains). During mixing and in contact with liquid, these two form into a stretchable substance called gluten. The coagulation of gluten is what happens when bread bakes; that is, it is the firming or hardening of these gluten proteins, usually caused by heat, which solidify to form a firm structure.

This page titled 1.4: Coagulation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.5: Gelatinization
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/01%3A_Thickening_and_Concentrating_Flavors/1.05%3A_Gelatinization
A hydrocolloid is a substance that forms a gel in contact with water. There are two main categories:

Thermo-reversible gel: A gel that melts upon reheating and sets upon cooling. Examples are gelatin and agar agar.

Thermo-irreversible gel: A gel that does not melt upon reheating. Examples are cornstarch and pectin. Excessive heating, however, may cause evaporation of the water and shrinkage of the gel.

Hydrocolloids do not hydrate (or dissolve) instantly, and that hydration is associated with swelling, which easily causes lumping. It is therefore necessary to disperse hydrocolloids in water. Classically, this has always been done with cornstarch, where a portion of the liquid from the recipe is mixed to form a “slurry” before being added to the cooking liquid. This can also be done with an immersion blender or a conventional blender, or by mixing the hydrocolloid with a helping agent such as sugar, oil, or alcohol prior to dispersion in water.

Starch gelatinization is the process where starch and water are subjected to heat, causing the starch granules to swell. As a result, the water is gradually absorbed in an irreversible manner. This gives the system a viscous and transparent texture. The result of the reaction is a gel, which is used in sauces, puddings, creams, and other food products, providing a pleasing texture. Starch-based gels are thermo-irreversible, meaning that they do not melt upon heating (unlike gelatin, which we will discuss later). Excessive heating, however, may cause evaporation of the water and shrinkage of the gel.

The most common examples of starch gelatinization are found in sauce and pasta preparations and baked goods. In sauces, starches are added to liquids, usually while heating.

Starch molecules make up the majority of most baked goods, so starch is an important part of the structure. Although starches by themselves generally can’t support the shape of the baked items, they do give bulk to the structure.
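The slurry method described above can be sketched as a small calculation. This is a hypothetical helper, not a fixed rule: the 30 g per litre default is simply a midpoint of the 20-40 g per litre starch dosage this section quotes, and the 2 mL of cold liquid per gram of starch is purely an illustrative assumption.

```python
def cornstarch_slurry(liquid_litres, dose_g_per_litre=30.0, cold_ml_per_g=2.0):
    """Compute how much starch to use and how much cold liquid to reserve
    for the slurry, so the granules disperse without lumping before the
    slurry is whisked into the hot liquid. Both defaults are assumptions."""
    starch_g = liquid_litres * dose_g_per_litre
    cold_liquid_ml = starch_g * cold_ml_per_g
    return starch_g, cold_liquid_ml

# For 1.5 L of sauce liquid:
starch, cold = cornstarch_slurry(1.5)
print(f"{starch:.0f} g starch slurried in {cold:.0f} mL cold liquid")
```

The reserved cold liquid comes out of the recipe's total, as the text notes; the same dispersion idea applies when using sugar, oil, or alcohol as the helping agent.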
Starches develop a softer structure when baked than proteins do. The softness of the crumb of baked bread is due largely to the starch. The more protein structure there is, the chewier the bread.

Starches can be fairly straightforward extracts of plants, such as cornstarch, tapioca, or arrowroot, but there are also modified starches and pre-gelatinized starches available that have specific uses. See Table 1 for a list of different thickening and binding agents and their characteristics.

Table 1: Thickening and binding agents

Waxy maize, waxy rice: Dissolved in cold water; 20-40 g starch thickens 1 L liquid. Added to hot liquid while whisking until it dissolves and the liquid thickens. Used in desserts and dessert sauces. Clear; does not thicken further as it cools; does not gel at cool temperatures, so it is good for cold sauces; quite stable at extreme temperatures (heat and freezing).

Modified starches: Dissolved in cold water; 20-40 g starch thickens 1 L liquid. Added to hot liquid while whisking until it dissolves and the liquid thickens. Often used in commercially processed foods and convenience products; modified to improve specific characteristics (e.g., stability or texture under extreme conditions such as heat and freezing). Translucent; thickens further as it cools.

Pre-gelatinized starches: Powder, dissolved in cold liquid; 20-40 g starch thickens 1 L liquid. Added to liquid at any temperature. Used when thickening liquids that might lose color or flavor during cooking. Become viscous without the need for additional cooking; translucent, fairly clear, shiny; does NOT gel when cold.

Arrowroot: Powder, dissolved in cold liquid; 20-40 g starch thickens 1 L liquid. Added to hot liquid while whisking until it dissolves and the liquid thickens. Derived from cassava root; used in Asian cuisines. Very clear, with a gooey texture; translucent, shiny, very light gel when cold.

Gelatin: Powder or sheets (leaves) dissolved in cold water; 15-30 g gelatin sets 1 L liquid. Added to cold or simmering liquid; activates with heat, sets when cold. Derived from collagens in bones and meats of animals; used in aspic, glazes, cold sauces, and desserts. Clear, firm texture; dissolves when reheated, thickens when cold.

Gelatin is a water-soluble protein extracted from animal tissue and used as a gelling agent, a thickener, an emulsifier, a whipping agent, a stabilizer, and a substance that imparts a smooth mouth feel to foods. It is thermo-reversible, meaning the setting properties or action can be reversed by heating. Gelatin is available in two forms: powder and sheet (leaf). Gelatin is often used to stabilize whipped cream and mousses; confectionery, such as gummy bears and marshmallows; desserts including pannacotta; commercial products like Jell-O; “lite” or low-fat versions of foods including some margarines; and dairy products such as yogurt and ice cream. Gelatin is also used in hard and soft gel capsules for the pharmaceutical industry.

Agar agar is an extract from red algae and is often used to stabilize emulsions or foams and to thicken or gel liquids. It is thermo-reversible and heat resistant. It is typically hydrated in boiling liquids and is stable across a wide range of acidity levels. It begins to gel once it cools to around 40ºC (104ºF) and will not melt until it reaches 85ºC (185ºF).

Pectin is taken from citrus and other tree fruits (apples, pears, etc.). Pectin is found in many different foods such as jam, milk-based beverages, jellies, sweets, and fruit juices. Pectin is also used in molecular gastronomy mainly as a gelling agent, thickener, and stabilizer.

There are a variety of types of pectin that react differently according to the ingredients used. Low-methoxyl pectin (which is activated with the use of calcium for gelling) and high-methoxyl pectin that requires sugar for thickening are the two most common types used in cooking. High-methoxyl pectin is what is traditionally used to make jams and jellies.
Low-methoxyl pectin is often used in modern cuisine due to the thermo-irreversible gel that it forms and its good reaction to calcium. Its natural capability to emulsify and gel creates stable preparations.

Increasingly, cooks, bakers, and pastry chefs are turning to many different gels, chemicals, and other substances used in commercial food processing as new ingredients to modify liquids or other foods. These will be outlined in detail in the section on molecular gastronomy.

This page titled 1.5: Gelatinization is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.6: Crystallization
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/01%3A_Thickening_and_Concentrating_Flavors/1.06%3A_Crystallization
Many factors can influence crystallization in food. Controlling the crystallization process can affect whether a particular product is spreadable, or whether it will feel gritty or smooth in the mouth. In some cases, crystals are something you try to develop; in others, they are something you try to avoid. It is important to know the characteristics and quality of the crystals in different foods. Butter, margarine, ice cream, sugar, and chocolate all contain different types of crystals, and most of them contain fat crystals. For example, ice cream has fat crystals, ice crystals, and sometimes lactose crystals.

The fact that sugar solidifies into crystals is extremely important in candy making. There are basically two categories of candies: crystalline (candies that contain crystals in their finished form, such as fudge and fondant); and non-crystalline (candies that do not contain crystals, such as lollipops, taffy, and caramels). Recipe ingredients and procedures for non-crystalline candies are specifically designed to prevent the formation of sugar crystals, because they give the resulting candy a grainy texture. One way to prevent the crystallization of sucrose in candy is to make sure that there are other types of sugar—usually fructose and glucose—to get in the way and slow down or inhibit the process. Acids can also be added to “invert” the sugar, and to prevent or slow down crystallization. Fats added to certain confectionery items will have a similar effect.

When boiling sugar for any application, the formation of crystals is generally not desired. These are some of the things that can promote crystal growth:

Crystallization may be prevented by adding an interferent, such as acid (lemon, vinegar, tartaric, etc.) or glucose or corn syrup, during the boiling procedure.
As mentioned above, ice cream can have ice and fat crystals that co-exist along with other structural elements (emulsion, air cells, and hydrocolloid stabilizers such as locust bean gum) that make up the “body” of the ice cream. Some of these components crystallize either partially or completely. The bottom line is that the nature of the crystalline phase in the food will determine the quality, appearance, texture, feel in the mouth, and stability of the product. The texture of ice cream is derived, in part, from the large number of small ice crystals. These small ice crystals provide a smooth texture with excellent melt-down and cooling properties. When these ice crystals grow larger during storage (recrystallization), the product becomes coarse and less enjoyable. Similar concerns apply to sugar crystals in fondant and frostings, and to fat crystals in chocolate, butter, and margarine.

Control of crystallization in fats is important in many food products, including chocolate, margarine, butter, and shortening. In these products, the aim is to produce the appropriate number, size, and distribution of crystals in the correct shape, because the crystalline phase plays such a large role in appearance, texture, spreadability, and flavor release. Thus, understanding and controlling the factors that govern crystallization is critical to controlling quality in these products.

Crystallization is also important in working with chocolate. The tempering process, sometimes called precrystallization, is an important step used for decorative and moulding purposes, and is a major contributor to the mouth feel and enjoyment of chocolate. Tempering is a process that encourages the cocoa butter in the chocolate to harden into a specific crystalline pattern, which maintains the sheen and texture for a long time. When chocolate isn’t tempered properly it can have a number of problems.
For example, it may not ever set up hard at room temperature; it may become hard, but look dull and blotchy; the internal texture may be spongy rather than crisp; and it can be susceptible to fat bloom, meaning the fats will migrate to the surface and make whitish streaks and blotches.This page titled 1.6: Crystallization is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.7: Non-traditional thickeners
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/01%3A_Thickening_and_Concentrating_Flavors/1.07%3A_Non-traditional_thickeners
In addition to traditional starches, there are new ways to thicken sauces and to change the texture of liquids. Some of these thickening agents work without heating and are simply blended with the cold liquid, such as modified starch or xanthan gum. These allow the creation of sauces and other liquids with a fresh, uncooked taste.

Liquids can be stabilized with gelatin, lecithin, and other ingredients, and then used to create foams by whipping or using a special dispenser charged with gas (typically nitrous oxide). A well-made foam adds an additional flavor dimension to the dish without adding bulk, and an interesting texture as the foam dissolves in the mouth.

“Dinner in the Dark 21-Dessert” by Esther Little is licensed under CC BY SA 2.0

Espuma is the Spanish term for froth or foam, and it is created with the use of a siphon (iSi) bottle. This is a specific term, since culinary foams may be attained through other means. Espuma from a siphon creates foam without the use of an emulsifying agent such as egg. As a result, it offers an unadulterated flavor of the ingredients used. It also introduces much more air into a preparation compared to other culinary aerating processes.

Espuma is created mainly with liquid that has air incorporated in it to create froth. But solid ingredients can be used too; these can be liquefied by cooking, puréeing, and extracting natural juices. It should be noted, though, that the best flavors to work with are those that are naturally diluted. Otherwise, the espuma tends to lose its flavor as air is introduced into it. Stabilizers may be used alongside the liquids to help them retain their shape longer; however, this is not always necessary. Prepared liquids can also be stored in a siphon bottle and kept for later use. The pressure from the bottle will push out the aerated liquid, producing the espuma.

Foam is created by trapping air within a solid or liquid substance.
Although culinary foams are most recently associated with molecular gastronomy, they are part of many culinary preparations that date back to even earlier times. Mousse, soufflé, whipped cream, and the froth in cappuccino are just some examples of common foams. Common examples of “set” foams are bread, pancakes, and muffins.

Foam does not rely on pressure to encase air bubbles in a substance. Like espuma, foam may also be created with the help of a surfactant and gelling or thickening agents to help it hold its shape. The production of a culinary foam starts with a liquid or a solid that has been puréed. The thickening or gelling agent is then diluted into this to form a solution. Once dissolved, the solution is whipped to introduce air into it. The whipping continues until the foam has reached the desired stiffness. Note that certain ingredients may break down if they are whipped for too long, especially without the presence of a stabilizing agent.

Gels

Turning a liquid, such as a vegetable juice or raspberry purée, into a solid not only gives it a different texture but also allows the food to be cut into many shapes, enabling different visual presentations. Regular gelatin can be used, as well as other gelling agents such as agar agar, which is derived from red algae.

“Papayagelee” by hedonistin is licensed under CC BY NC 2.0

Gelling agents are often associated with jelly-like textures, which may range from soft to firm. However, certain gels produced by specific agents may not fit this description. Rather than forming an elastic or pliable substance, brittle gels may also be formed. These are gels that are firm in nature yet fragile at the same time. This characteristic is caused by the formation of a gel network that is weak and susceptible to breaking. This property allows brittle gels to crumble in the mouth and create a melt-in-the-mouth feeling. As a result, new sensations and textures are experienced while dining.
At the same time, tastes within a dish are also enhanced due to the flavour release caused by the gel breakdown. Brittle gels are made by diluting the gelling agent into a liquid substance such as water, milk, or a stock. This mixture is left to set to attain a gelled end product. It should be noted that the concentration of gelling agents used, as well as the amount of liquid, both affect gelation. Agar agar is a common agent used to create brittle gels. However, when combined with sugar it tends to create a more elastic substance. Low-acyl gellan gum, locust bean gum, and carrageenan also create brittle gels.

A fluid gel is a cross between a sauce, gel, and purée. It is a controlled liquid that has properties of all three preparations. A fluid gel displays viscosity and fluidity at the same time, being thick yet still spreadable. Fluid gels behave as solids when undisturbed, and flow when exposed to sufficient agitation. They are used in many culinary dishes where fluids need to be controlled, and they provide a rich, creamy texture.

A fluid gel is created using a base liquid that can come from many different sources. The base liquid is commonly extracted from fruits and vegetables, taken from stocks, or even puréed from certain ingredients. The longer the substance is exposed to stress, and the more intense the outside stress, the more fluidity is gained. More fluidity causes a finer consistency in the gel. Fluid gels can be served either hot or cold, as many of the gelling agents used for such preparations are stable at high temperatures.

Drying a food intensifies its flavour and, of course, changes its texture. Eating a piece of apple that has been cooked and then dehydrated until crisp is very different from eating a fresh fruit slice. If the dehydrated food is powdered, it becomes yet another flavour and texture experience. When maltodextrin (or tapioca maltodextrin) is mixed with fat, it changes to a powder.
Because maltodextrin dissolves in water, peanut butter (or olive oil) that has been changed to a powder changes back to an oil in the mouth.

In molecular gastronomy, liquid nitrogen is often used to freeze products or to create a frozen item without the use of a freezer. Liquid nitrogen is the element nitrogen in a liquefied state. It is a clear, colourless liquid with a temperature of -196°C (-321°F). It is classified as a cryogenic fluid, which causes rapid freezing when it comes into contact with living tissue. The extremely cold temperatures provided by this liquefied gas are most often used in modern cuisine to produce frozen foams and ice cream. After freezing food, the nitrogen boils away, creating a thick nitrogen fog that may also add to the aesthetic features of a dish.

Given the extreme temperature of liquid nitrogen, it must be handled with care. Mishandling may cause serious burns to the skin. Nitrogen must be stored in special flasks and handled only by trained people. Aprons, gloves, and other specially designed safety gear should be used when handling liquid nitrogen. Used mainly as a coolant for molecular gastronomy, liquid nitrogen is not ingested. It is poured directly onto the food that needs to be cooled, causing it to freeze. Any remaining nitrogen evaporates, although sufficient time must be provided to allow the liquefied gas to be eliminated and for the dish to warm up to the point that it will not cause damage during consumption.

Spherification is a modern cuisine technique that involves creating semi-solid spheres with thin membranes out of liquids. Spheres can be made in various sizes and of various firmnesses, such as the “caviar” shown in the accompanying photo. The result is a burst-in-the-mouth effect, achieved with the liquid centre.
Both flavour and texture are enhanced with this culinary technique. There are two versions of the spherification process: direct and reverse.

In direct spherification, a flavoured liquid (containing either sodium alginate, gellan gum, or carrageenan) is dripped into a water bath that is mixed with calcium (either calcium chloride or calcium lactate). The outer layer is induced by calcium to form a thin gel layer, leaving a liquid centre. In this version, the spheres are easily breakable and should be consumed immediately. Calcium chloride and sodium alginate are the two basic components used for this technique. Calcium chloride is a type of salt used in cheese making, and sodium alginate is taken from seaweed. The sodium alginate is used to gel the chosen liquid by dissolving it directly into the fluid. This causes the liquid to become sticky, and proper dissolving must be done by mixing. The liquid is then left to set to eliminate any bubbles. Once ready, a bath is prepared with calcium chloride and water. The liquid is then dripped into the bath using a spoon or syringe, depending on the desired sphere size. The gel forms a membrane encasing the liquid when it comes into contact with the calcium chloride. Once set, the spheres are removed and rinsed with water to remove any excess calcium chloride.

In reverse spherification, a calcium-containing liquid (or ingredients mixed with a soluble calcium salt) is dripped into a setting bath containing sodium alginate. Surface tension causes the drop to become spherical. A skin of calcium alginate immediately forms around the drop. Unlike in the direct version, the gelling stops and does not continue into the liquid orb. This results in thicker shells, so the products do not have to be consumed immediately.
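As an illustration of the proportions involved, the steps above can be turned into a small batch calculator. The default concentrations used here (0.5% sodium alginate in the flavoured liquid, 1.0% calcium chloride in the bath) are commonly cited starting points, not figures from this text, so treat them as assumptions to adjust by recipe.

```python
# Hypothetical batch calculator for direct spherification.
# The default concentrations (0.5% sodium alginate, 1.0% calcium
# chloride) are assumed starting points, not figures from this text.

def spherification_batch(flavored_liquid_g, bath_water_g,
                         alginate_pct=0.5, cacl2_pct=1.0):
    """Return the grams of each chemical needed for one batch."""
    return {
        "sodium_alginate_g": round(flavored_liquid_g * alginate_pct / 100, 2),
        "calcium_chloride_g": round(bath_water_g * cacl2_pct / 100, 2),
    }

# 500 g of fruit purée dripped into a 1 kg water bath:
print(spherification_batch(500, 1000))
# {'sodium_alginate_g': 2.5, 'calcium_chloride_g': 10.0}
```

Because both chemicals are dosed as a percentage of the weight they dissolve into, scaling a recipe up or down is just a matter of changing the two weight arguments.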
“White chocolate spaghetti with raspberry sauce and chocolate martini caviar” by ayngelina is licensed under CC BY NC-ND 2.0

Video demonstrations:
Direct: //www.youtube.com/watch?v=BeRMBv95gLk
Reverse: //www.youtube.com/watch?v=JPNo79U77yI

Specialty ingredients used in molecular gastronomy

There are a number of different ingredients used in molecular gastronomy as gelling, thickening, or emulsifying agents. Many of these are available in specialty food stores or can be ordered online.

Algin

Another name for sodium alginate, algin is a natural gelling agent taken from the cell walls of certain brown seaweed species.

Calcium chloride

Calcium chloride, also known as CaCl2, is a compound of chlorine and calcium that is a by-product of sodium bicarbonate (baking soda) manufacturing. At room temperature it is a solid salt, which is easily dissolved in water. It is very salty and is often used for preservation, pickling, cheese production, and adding taste without increasing the amount of sodium. It is also used in molecular gastronomy in the spherification technique (see above) for the production of ravioli, spheres, pearls, and caviar.

Calcium lactate

Calcium lactate is a calcium salt resulting from the fermentation of lactic acid and calcium. It is a white crystalline powder when solid and is highly soluble in cold liquids. It is commonly used as a calcium fortifier in various food products, including beverages and supplements. Calcium lactate is also used to regulate acidity levels in cheese and baking powder, as a food thickener, and as a preservative for fresh fruits. In molecular gastronomy, it is most commonly used for basic spherification and reverse spherification due to the lack of bitterness in the finished products. Like calcium chloride, calcium lactate is used alongside sodium alginate. In regular spherification, it is used in the bath. It is also used as a thickener in reverse spherification.

Carob bean gum

Carob bean gum is another name for locust bean gum.
It is often used to stabilize, texturize, thicken, and gel liquids in the area of modern cuisine, although it has been a popular thickener and stabilizer for many years.

Carrageenan

Carrageenan refers to any linear sulfated polysaccharide taken from the extracts of red algae. This seaweed derivative is classified mainly as iota, kappa, and lambda. It is a common ingredient in many foods. It serves a number of purposes, including binding, thickening, stabilizing, gelling, and emulsifying. Carrageenan can be found in ice cream, salad dressings, cheese, puddings, and many more foods. It is often used with dairy products because of its good interaction with milk proteins. Carrageenan also works well with other common kitchen ingredients and offers a smooth texture and taste that blends well and does not affect flavour. More often than not, carrageenan is found in powder form, which is hydrated in liquid before being used. For best results, carrageenan powder should be sprinkled into cold liquid and blended well to dissolve, although it may also be melted directly in hot liquids.

Citric acid

Classified as a weak organic acid, citric acid is a naturally occurring preservative that can be found in citrus fruits. Produced as a result of the fermentation of sugar, it has a tart to bitter taste and is usually in powder form when sold commercially. It is used mainly as a preservative and acidulant, and it is a common food additive in a wide range of foods such as candies and soda. Other than extending shelf life by adjusting the acidity or pH of food, it can also help enhance flavours. It works especially well with other fruits, providing a fresh taste. In modern cooking, citric acid is often used as an emulsifier to keep fats and liquids from separating.
It is also a common component in spherification, where it may be used as an acid buffer.

Gellan gum

Gellan gum is a water-soluble, high-molecular-weight polysaccharide gum that is produced through the fermentation of carbohydrates in algae by the bacterium Pseudomonas elodea. This fermented carbohydrate is purified with isopropyl alcohol, then dried and milled to produce a powder. Gellan gum is used as a stabilizer, emulsifier, thickener, and gelling agent in cooking. Aspics and terrines are only some of the dishes that use gellan. It comes in both high-acyl and low-acyl forms. High-acyl gellan gum produces a flexible, elastic gel, while low-acyl gellan gum gives way to a more brittle gel. Like many other hydrocolloids, gellan gum is used with liquids. The powder is normally dispersed in the chosen liquid to dissolve it. Once dispersed, the solution is then heated to complete the dissolution process. Gelling will begin upon cooling, somewhere between 10°C and 80°C (50°F and 176°F). Gellan gum creates a thermo-irreversible gel and can withstand high heat without reversing in form. This makes it ideal for the creation of warm gels.

Guar gum

Guar gum, or guaran, is a carbohydrate. This galactomannan is taken from the seeds of the guar plant by dehusking, milling, and screening. The end product is a pale, off-white, loose powder. It is most commonly used as a thickening agent and stabilizer for sauces and dressings in the food industry. Baked goods such as bread may also use guar gum to increase the amount of soluble fibre. At the same time, it also aids with moisture retention in bread and other baked items. Being a derivative of a legume, guar gum is considered to be vegan and a good alternative to starches. In modern cuisine, guar gum is used for the creation of foams from acidic liquids, for fluid gels, and for stabilizing foams. Guar gum must first be dissolved in cold liquid.
The higher the percentage of guar gum used, the more viscous the liquid will become. Dosage may also vary according to the ingredients used, as well as the desired results and temperature.

Iota carrageenan

Iota carrageenan is a hydrocolloid taken from red seaweed (Eucheuma denticulatum). It is one of three varieties of carrageenan and is used mainly as a thickening or gelling agent. Gels produced from iota carrageenan are soft and flexible, especially when used with calcium salts. It produces a clear gel that exhibits little syneresis. Iota is a fast-setting gel that is thermo-reversible and remains stable through freezing and thawing. In modern cuisine it is used to create hot foams, as well as custards and jellies with a creamy texture. Like most other hydrocolloids, iota carrageenan must first be dispersed and hydrated in liquid before use. Unlike lambda carrageenan, it is best dispersed in cold liquid. Once hydrated, the solution must be heated to about 70°C (158°F) with shear to facilitate dissolution. Gelling will happen between 40°C and 70°C (104°F and 158°F), depending on the number of calcium ions present.

Kappa carrageenan

Kappa carrageenan is another type of red seaweed extract, taken specifically from Kappaphycus alvarezii. Like other types of carrageenan, it is used as a gelling, thickening, and stabilizing agent. When mixed with water, kappa carrageenan creates a strong and firm solid gel that may be brittle in texture. This particular variety of carrageenan blends well with milk and other dairy products. Since it is taken from seaweed, it is considered to be vegan and is an alternative to traditional gelling agents such as gelatin. Kappa carrageenan is used in various cooking preparations including hot and cold gels, jelly toppings, cakes, breads, and pastries. When used in molecular gastronomy preparations and other dishes, kappa carrageenan should be dissolved in cold liquid. Once dispersed, the solution must be heated to between 40°C and 70°C (104°F and 158°F).
Gelling will begin between 30°C and 60°C (86°F and 140°F). Kappa carrageenan is a thermo-reversible gel and will stay stable up to 70°C (158°F). Temperatures beyond this will cause the gel to melt and become liquid once again.

Locust bean gum

Locust bean gum, also known as LBG and carob bean gum, is a vegetable gum derived from the seeds of the Mediterranean carob tree. This hydrocolloid is used to stabilize, texturize, thicken, and gel liquids in modern cuisine, although it has been a popular thickener and stabilizer for many years. It has a neutral taste that does not affect the flavour of food that it is combined with. It also provides a creamy mouth feel, and has reduced syneresis when used alongside pectin or carrageenan for dairy and fruit applications. The neutral behaviour of this hydrocolloid makes it ideal for use with a wide range of ingredients. To use locust bean gum, it must be dissolved in liquid. It is soluble in both hot and cold liquids.

Maltodextrin

Maltodextrin is a sweet polysaccharide that is produced from starch, corn, wheat, tapioca, or potato through partial hydrolysis and spray drying. This modified food starch is a white powder that has the capacity to absorb and hold water as well as oil. It is an ideal additive, since it has fewer calories than sugar and is easily absorbed and digested by the body in the form of glucose. Coming from a natural source, it ranges from nearly flavourless to fairly sweet, without any odour. Maltodextrin is a common ingredient in processed foods such as soda and candies. In molecular gastronomy, it can be used both as a thickener and a stabilizer for sauces and dressings, for encapsulation, and as a sweetener. In many cases, it is also used as an aroma carrier due to its capacity to absorb oil. It is also often used to make powders or pastes out of fat.

Sodium alginate

Sodium alginate, which is also called algin, is a natural gelling agent taken from the cell walls of certain brown seaweed species.
This salt is obtained by drying the seaweed, followed by cleaning, boiling, gelling, and pulverizing it. A light yellow powder is produced from the process. When dissolved in liquids, sodium alginate acts as a thickener, creating a viscous fluid. Conversely, when it is used with calcium, it forms a gel through a cold process. In molecular gastronomy, sodium alginate is most commonly used as a texturizing agent. Foams and sauces may be created with it. It is also used in spherification for the creation of pearls, raviolis, mock caviar, marbles, and spheres. Sodium alginate can be used directly by dissolving it into the liquid that needs to be gelled, as in the case of basic spherification. It may also be used inversely by adding it directly to a bath, as in the case of reverse spherification. This versatile product is soluble in both hot and cold liquids, and gels made with it will set at any temperature.

Soy lecithin

Soy lecithin, also called just lecithin, is a natural emulsifier that comes from fatty substances found in plant tissues. It is derived from soybeans either mechanically or chemically, and is a by-product of soybean oil production. The end product is a light brown powder that has low water solubility. As an emulsifier, it works to blend immiscible ingredients, such as oil and water, together, giving way to stable preparations. It can be whisked directly into the liquid of choice. Soy lecithin is also used in creating foams, airs, mousses, and other aerated dishes that are long lasting and full of flavour. It is used in pastries, confections, and chocolate to enhance dough and increase moisture tolerance. As with most ingredients, the dosage and concentration of soy lecithin will depend on the ingredients used, the specific properties desired in the resulting preparation, and other conditions.

Tapioca maltodextrin

Tapioca maltodextrin is a form of maltodextrin made from tapioca starch.
It is a common ingredient in molecular gastronomy because it can be used both as a thickener and stabilizer for sauces and dressings, for encapsulation, and as a sweetener. In many cases it is also used as an aroma carrier due to its capacity to absorb oil. It is often used to make powders or pastes out of fat.

Xanthan gum

Xanthan gum is a food additive used as a thickening agent. It is produced through the fermentation of glucose. As a gluten-free additive, it can be used as a substitute in cooking and baking. As a thickener, when used in low dosages, xanthan gum produces a weak gel with high viscosity that is shear reversible, with high pourability. It also displays excellent stabilizing abilities that allow for particle suspension. Moreover, xanthan gum mixes well with other flavours without masking them, and provides an improved mouth feel to preparations. The presence of bubbles within the thickened liquids often makes way for light and creamy textures. It is used in the production of emulsions, suspensions, raviolis, and foams. Being a hydrocolloid, xanthan gum must be hydrated before use. Its high versatility allows it to be dissolved over a wide range of temperatures, acid, and alcohol levels. Once set, xanthan gum may lose some of its effectiveness when exposed to heat.
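Most of the hydrocolloids described above (xanthan, guar, the carrageenans, and so on) are dosed as a small percentage of the weight of the liquid being thickened. A minimal sketch of that arithmetic follows; the 0.2% xanthan example dosage is an illustrative assumption, not a value from this text.

```python
# Generic hydrocolloid dosage helper: dose = liquid weight x percentage.
# The 0.2% example dosage below is an illustrative assumption.

def hydrocolloid_dose_g(liquid_g, pct):
    """Grams of hydrocolloid for a given liquid weight and % dosage."""
    return liquid_g * pct / 100

# A light xanthan thickening of 750 g of sauce at 0.2%:
print(hydrocolloid_dose_g(750, 0.2))  # 1.5
```

Working by weight percentage rather than by volume is what makes these recipes scale cleanly from a test batch to a full service quantity.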
1.8: Sauces
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/01%3A_Thickening_and_Concentrating_Flavors/1.08%3A_Sauces
Sauces enhance desserts by both their flavor and their appearance, just as savory sauces enhance meats, fish, and vegetables. Crème anglaise, chocolate sauce, caramel sauce, and the many fruit sauces and coulis are the most versatile. One or another of these sauces will complement nearly every dessert.

Caramel sauce: A proper caramel flavor is a delicate balance between sweetness and bitterness. As sugar cooks and begins to change color, a flavor change occurs. The darker the sugar, the more bitter it will become. Depending on the application for the finished caramel, it can be made mild or strong. At this point, a liquid is added. This liquid serves several roles: it stops the cooking process, it can add richness and flavor, and it softens the sauce. The fluidity of the finished sauce will depend on the amount of liquid added to it and the temperature it is served at. Dairy products, such as cream, milk, or butter, will add richness; use water for a clear sauce; use fruit purées to add different flavor elements.

Except in the case of some home-style or frozen desserts, sauces are usually not ladled over the dessert because doing so would mar the appearance. Instead, the sauce is applied in a decorative fashion to the plate rather than the dessert. Many different styles of plate saucing are available. Pouring a pool of sauce onto the plate is known as flooding. Although plate flooding often looks old-fashioned today, it can still be a useful technique for many desserts. One variation is to flood the plate with two contrasting sauces and marble them together with a pick or the end of a knife. For this technique to work, the two sauces should be at about the same fluidity or consistency.

Rather than flooding the entire plate, it may be more appropriate for some desserts to apply a smaller pool of sauce to the plate, as this avoids overwhelming the dessert with too much sauce. A variation of the flooding technique is outlining, where a design is piped onto the plate with chocolate and allowed to set.
The spaces can then be flooded with colorful sauces.

A squeeze bottle is useful for making dots, lines, curves, and streaks of sauce in many patterns, or just a spoon can be used to drizzle random patterns of sauce onto a plate. Another saucing technique is to apply a small amount of sauce and streak it with a brush, an offset spatula, or the back of a spoon.

Sauces are a great way to highlight flavors. Choose ones that will create balance on the plate, not just for color, but with all the components. A tart berry sauce will complement a rich cheesecake or chocolate dessert because sourness (acid) will cut through fat, making it taste lighter than it is. A sweet sauce served with a sweet dessert will have the overall effect of hiding the flavors in both. Hold back on sweetness in order to intensify other flavors.

Many modern presentations may have a minimal amount of sauce. Sometimes this is done just for aesthetic reasons and not for how it will complement the dessert. Think of the dish and the balance of the components. This is the most important factor: flavor first, presentation second.
1.9: Low-temperature and sous-vide
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/01%3A_Thickening_and_Concentrating_Flavors/1.09%3A_Low-temperature_and_sous-vide
Sous-vide cooking is about immersing a food item in a precisely controlled water bath, where the temperature of the water is the same as the target temperature of the food being cooked. The food is placed in a food-grade plastic bag and vacuum-sealed before going into the water bath. Temperatures will vary depending on the desired end result. The sealed bag allows the water in the bath to transfer heat into the food while preventing the water from coming into direct contact with it. This means the water does not chemically interact with the food: the flavors of the food remain stronger, because the water is unable to dissolve or carry away any compounds in the food.

“Img_0081” by Derek is licensed under CC BY-SA-ND 2.0

Cooking vegetables and fruits sous-vide is a great way to tenderize them without losing as many of the vitamins and minerals that are normally lost through blanching or steaming. Fruits can also be infused with liquid when cooked at lower temperatures by adding liquid to the bag. Sous-vide helps preserve the nutrients present in fruits and vegetables by not cooking them above the temperatures that cause the cell walls to fully break down. This allows them to tenderize without losing all their structure. The bag also helps to catch any nutrients that do come out of the vegetable.

While time and temperature do not factor into safety for fruits and vegetables, they do have a unique effect on their structure. Two components make fruits and vegetables crisp: pectin and starch. Pectin, which is a gelling agent commonly used in jams and jellies for structure, breaks down at 83°C (183°F), at a slower rate than the starch cells do. In many cases this allows for more tender fruits and vegetables that have a unique texture.

The term custard spans so many possible ingredients and techniques that it is most useful to think of a custard as simply a particular texture and mouth feel.
Custards have been made for centuries by lightly cooking a blend of eggs, milk, and heavy cream, but modernist chefs have invented myriad ways to make custards. Using the sous-vide method to prepare crème anglaise, curds, ice cream bases, custard bases, sabayons, and dulce de leche is possible. The technique offers greater consistency and more control over the texture, which can range from airy, typical of a sabayon, to dense, as in a posset. For custards, eggs will be properly cooked at 82°C (180°F), so if the water bath is set to this temperature, no overcooking can happen. The one constant among custards is the use of plenty of fat, which not only provides that distinctive mouth feel but also makes custard an excellent carrier of fat-soluble flavors and aromas. Lighter varieties of custard, prepared sous-vide style and cooled, can be aerated in a whipping siphon into smooth, creamy foams.

Vacuum-compressing fruits and vegetables is a popular modern technique that can give many plant foods an attractive, translucent appearance (as shown in the watermelon in the accompanying photo) and a pleasant, surprising texture. This technique exploits the ability of a vacuum chamber to reduce surrounding pressure, which causes air and moisture within the plant tissue to rapidly expand and rupture the structures within the food. When the surrounding pressure is restored to a normal level, the labyrinth of air-filled spaces collapses. As a result, light tends to pass through the food rather than being scattered and diffused, which is why vacuum-compressed plant foods appear translucent. Causing the porous structure of a plant food to collapse also imparts a somewhat dense, toothsome texture that can give a familiar ingredient, such as watermelon, an entirely new appeal.

“WD-50 (7th Course)” by Peter Dillon is licensed under CC BY 2.0

When adding liquids, the vacuum-seal process creates a rapid infusion, especially with more porous foods (such as adding spices to cream or herbs to melon).
This can add flavor and texture in a shorter time than traditional infusions.This page titled 1.9: Low-temperature and sous-vide is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
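The Celsius–Fahrenheit pairs quoted throughout this section follow the standard conversion. A minimal sketch (the function name is ours, for illustration only):

```python
def c_to_f(celsius: float) -> float:
    """Standard conversion: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

# The custard bath quoted above: 82 °C is 179.6 °F, rounded to 180 °F in the text.
print(f"{c_to_f(82):.1f}")  # 179.6
```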
2.1: Introduction - Understanding Ingredients
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/02%3A_Flour/2.01%3A_Introduction_-_Understanding_Ingredients
Ingredients play an important role in baking. Not only do they provide the structure and flavour of all of the products produced in the bakery or pastry shop, their composition and how they react and behave in relation to each other are critical factors in understanding the science of baking. This is perhaps most evident when it comes to adapting formulas and recipes to accommodate additional or replacement ingredients while still seeking a similar outcome to the original recipe.In this book, we look at each of the main categories of baking ingredients, listed below, and then explore their composition and role in the baking process. In addition to these categories, we will discuss the role that salt and water play in the baking process.The main categories of baking ingredients are:Note: For most measurements used in the open textbook series, both S.I. (metric) and U.S./imperial values are given. The exception is nutritional information, which is always portrayed using metric values in both Canada and the United States.This page titled 2.1: Introduction - Understanding Ingredients is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.2: The History of Wheat Flour
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/02%3A_Flour/2.02%3A_The_History_of_Wheat_Flour
Archaeologists who did excavations in the region of the lake dwellers of Switzerland found grains of wheat, millet, and rye 10,000 years old. The Romans perfected the rotary mill for turning wheat into flour. By the time of Christ, Rome had more than 300 bakeries, and Roman legions introduced wheat throughout their empire. Improved milling processes were needed because even when wheat was milled twice and bolted (sifted) through silk gauze, the result was still a yellowish flour of uneven texture and flecked with germ and bran.

In the second half of the 19th century, there were great changes in the flour milling process. An American inventor, Edmund LaCroix, improved the process with a purifier to separate the middlings (bran, germ, and other coarse particles) from the particles that form smooth-textured white flour. In recent years, the demand for whole grain milling has increased because whole grain food products have proved to be more nutritious than products made from white flour. (More information on whole grain and artisan milling is provided later in this section.)

In Canada, large-scale wheat growing didn’t occur until after the Prairies were settled in the 1800s. Hard wheat, such as Red Fife, Marquis, and Selkirk, earned Canada a position as the granary for Britain and many other European countries. Today, most of the wheat grown in Western Canada is the hard Red Spring variety. Soft wheats, such as soft red and soft white, are primarily grown in Quebec and Ontario. Many of the original wheat growers have passed on their farms to the next generations, while others branched out to organic farming and milling. One of these farms, Nunweiler’s, has a heritage that goes back to the early 1900s when the original wheat in Canada, Red Fife and Marquis, was grown on this farm.

Today, the major wheat growing areas of North America are in the central part of the continent, in the Great Plains of the United States and the Canadian Prairies.
From Nebraska south, winter wheat can be grown, while to the north through Saskatchewan spring wheat dominates. Many American states and some Canadian provinces grow both kinds. In fact, there are very few states that don’t grow some wheat. Kansas, the site of the American Institute of Baking, could be said to be at the heart of the U.S. wheat growing area, while Saskatchewan is the Canadian counterpart.This page titled 2.2: The History of Wheat Flour is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.3: Milling of Wheat
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/02%3A_Flour/2.03%3A_Milling_of_Wheat
Milling of wheat is the process that turns whole grains into flours. The overall aims of the miller are to produce:

The very first mill operation is analyzing the grain, which determines criteria such as the gluten content and amylase activity. It is at this point that decisions about blending are made.

Following analysis, milling may be divided into three stages:

Wheat received at the mill contains weeds, seeds, chaff, and other foreign material. Strong drafts of air from the aspirator remove lighter impurities. The disc separator removes barley, oats, and other foreign materials. From there, the wheat goes to the scourers, in which it is driven vigorously against perforated steel casings by metal beaters. In this way, much of the dirt lodged in the crease of the wheat berry is removed and carried away by a strong blast of air. Then the magnetic separator removes any iron or steel.

At this point, the wheat is moistened. Machines known as whizzers take off the surface moisture. The wheat is then tempered, or allowed to lie in bins for a short time while still damp, to toughen the bran coat, thus making possible a complete separation of the bran from the flour-producing portion of the wheat berry. After tempering, the wheat is warmed to a uniform temperature before the crushing process starts.

The objectives at this stage are twofold:

Household grain mills create flour in one step — grain in one end, flour out the other — but the commercial mill breaks the grain down in a succession of very gradual steps, ensuring that little bran and germ are mixed with any endosperm.

Although the process is referred to as crushing, flour mills crack rather than crush the wheat with large steel rollers. The rollers at the beginning of the milling system are corrugated and break the wheat into coarse particles. The grain passes through screens of increasing fineness. Air currents draw off impurities from the middlings.
Middlings is the name given to coarse fragments of endosperm, somewhere between the size of semolina and flour. Middlings occur after the “break” of the grain. Bran and germ are sifted out, and the coarse particles are rolled, sifted, and purified again. This separation of germ and bran from the endosperm is an important goal of the miller. It is done to improve dough-making characteristics and colour. As well, the germ contains oil and can affect keeping qualities of the flour.

In the reduction stage, the coarser particles go through a series of fine rollers and sieves. After the first crushing, the wheat is separated into five or six streams. This is accomplished by means of machines called plansifters that contain sieves, stacked vertically, with meshes of various sizes. The finest mesh is as fine as the finished flour, and some flour is created at an early stage of reduction.

Next, each of the divisions or streams passes through cleaning machines, known as purifiers, a series of sieves arranged horizontally and slightly angled. An upcurrent draught of air assists in eliminating dust. The product is crushed a little more, and each of the resulting streams is again divided into numerous portions by means of sifting. The final crushings are made by perfectly smooth steel rollers that reduce the middlings into flour. The flour is then bleached and put into bulk storage. From bulk storage, the flour is enriched (thiamine, niacin, riboflavin, and iron are added), and either bagged for home and bakery use or made ready for bulk delivery.

The extraction rate is a figure representing the percentage of flour produced from a given quantity of grain. For example, if 82 kg of flour is produced from 100 kg of grain, the extraction rate is 82% (82 ÷ 100 × 100). Extraction rates vary depending on the type of flour produced.
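The extraction-rate arithmetic can be sketched as a small function (the function name is ours, for illustration only):

```python
def extraction_rate(flour_kg: float, grain_kg: float) -> float:
    """Percentage of flour produced from a given quantity of grain."""
    return flour_kg / grain_kg * 100

# The worked example from the text: 82 kg of flour from 100 kg of grain.
print(f"{extraction_rate(82, 100):.0f}%")  # 82%
```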
A whole grain flour, which contains all of the germ, bran, and endosperm, can have an extraction rate of close to 100%, while white all-purpose flours generally have extraction rates of around 70%. Since many of the nutrients are found in the germ and bran, flours with a higher extraction rate have a higher nutritional value.This page titled 2.3: Milling of Wheat is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.4: Flour Streams and Types of Wheat Flour
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/02%3A_Flour/2.04%3A_Flour_Streams_and_Types_of_Wheat_Flour
Modern milling procedures produce many different flour streams (approximately 25) that vary in quality and chemical analysis. These are combined into four basic streams of edible flour, with four other streams going to feed.Within the streams of edible flours, there are a number of different types of flour used in food preparation. Each has different characteristics, and with those come different uses, as described below.General purpose or home use flours are usually a blend of hard spring wheats that are lower in protein (gluten) content than bread flours. They are top patent flours and contain sufficient protein to make good yeast breads, yet not too much for good quick breads, cakes, and cookies.Note: A word about gluten quality as opposed to gluten quantity: The fact that a particular flour contains a high quantity of protein, say 13% to 15%, does not necessarily mean that it is of high quality. It may contain too much ash or too much damaged starch to warrant this classification. High quality is more important in many bread applications than high quantity. All-purpose flour is an example of a high-quality flour, with a protein content of about 12%.A U.S. patented flour, graham flour is a combination of whole wheat flour (slightly coarser), with added bran and other constituents of the wheat kernel.Bread flour is milled from blends of hard spring and hard winter wheats. They average about 13% protein and are slightly granular to the touch. This type of flour is sold chiefly to bakers because it makes excellent bread with bakery equipment, but has too much protein for home use. It is also called strong flour or hard flour and is second patent flour.For example, the specification sheet on bread flour produced by a Canadian miller might include the following information:Along with this information there is microbiological data and an allergen declaration. 
(Note that the formula in parentheses beside “Protein” is simply the laboratory’s way of deriving the protein figure from the nitrogen content.)Cake flour is milled from soft winter wheats. The protein content is about 7% and the granulation is so uniform and fine that the flour feels satiny. An exception is a high-protein cake flour formulated especially for fruited pound cakes (to prevent the fruit from sinking).Clear flour comes from the part of the wheat berry just under the outer covering. Comparing it to first patent flour is like comparing cream to skim milk. It is dark in colour and has a very high gluten content. It is used in rye and other breads requiring extra strength.Gluten flour is made from wheat flour by removing a large part of the starch. It contains no more than 10% moisture and no more than 44% starch.Pastry flour is made from either hard or soft wheat, but more often from soft. It is fairly low in protein and is finely milled, but not so fine as cake flour. It is unsuitable for yeast breads but ideal for cakes, pastries, cookies, and quick breads.Self-rising flour has leavening and salt added to it in controlled amounts at the mill.Wheat germ flour consists entirely of the little germ or embryo part of the wheat separated from the rest of the kernel and flattened into flakes. This flour should be refrigerated.Whole wheat flour contains all the natural parts of the wheat kernel up to 95% of the total weight of the wheat. It contains more protein than all-purpose flour and produces heavier products because of the bran particles.Whole wheat pastry flour is milled from the entire kernel of soft wheat, is low in gluten, and is suitable for pastry, cakes, and cookies.Most of the germ goes away with the shorts and only a small fraction of the total quantity can be recovered in a fairly pure form. At the mill, a special process developed in England to improve its keeping qualities and flavour cooks this fraction. 
It is then combined with white flour to make Hovis flour, which produces a loaf that, though small for its weight, has a rich, distinctive flavour.

The world’s first new grain, triticale is a hybrid of wheat and rye. It combines the best qualities of both grains. It is now grown commercially in Manitoba.

Semolina is the granular product consisting of small fragments of the endosperm of the durum wheat kernel. (The equivalent particles from other hard wheats are called farina.) The commonest form of semolina available commercially is the breakfast cereal Cream of Wheat.

The primary goal of all bakers has been to reduce production time and keep costs to a minimum without losing quality, flavour, or structure. After extensive research, millers have succeeded in eliminating bulk fermentation for both sponge and straight dough methods. No-time flour is flour with additives such as ascorbic acid, bromate, and cysteine. It saves the baker time and labour, and reduces floor space requirements. The baker can use his or her own formulas with only minor adjustments.

Blending of flours is done at the mill, and such is the sophistication of the analysis and testing of flours (test baking, etc.) that when problems occur it is generally the fault of the baker and not the product. Today the millers and their chemists ensure that bakers receive the high grade of flour that they need to produce marketable products for a quality-conscious consumer. Due to the vagaries of the weather and its effect on growing conditions, the quality of the grain that comes into the mill is hardly ever constant. For example, if damp weather occurs at harvest time, the grain may start to sprout and will cause what is known as damaged starch. Through analysis and adjustments in grain handling and blending, the miller is able to furnish a fairly constant product.

Bakers do blend flours, however.
A portion of soft flour may be blended with the bread flour to reduce the toughness of a Danish pastry or sweet dough, for example. Gluten flour is commonly used in multigrain bread to boost the aeration.This page titled 2.4: Flour Streams and Types of Wheat Flour is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.5: Flour Terms and Treatments
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/02%3A_Flour/2.05%3A_Flour_Terms_and_Treatments
In addition to types of flour, you may come across various other terms when purchasing flour. These include some terms that refer to the processing and treatment of the flour, and others outlining some of the additives that may be added during the milling and refining process.Bleaching and maturing agents are added to whiten and improve the baking quality quickly, making it possible to market the freshest flour. Even fine wheat flours vary in colour from yellow to cream when freshly milled. At this stage, the flour produces doughs that are usually sticky and do not handle well. Flour improves with age under proper storage conditions up to one year, both in color and quality.Because storing flour is expensive, toward the close of the 19th century, millers began to treat freshly milled flour with oxidizing agents to bleach it and give it the handling characteristics of naturally aged flour. Under the category of maturing agents are included materials such as chlorine dioxide, chlorine gas plus a small amount of nitrosyl chloride, ammonium persulfate, and ascorbic acid. No change occurs in the nutritional value of the flour when these agents are present.There are two classes of material used to bleach flour. A common one, an organic peroxide, reacts with the yellow pigment only, and has no effect on gluten quality. Chlorine dioxide, the most widely used agent in North America, neutralizes the yellow pigment and improves the gluten quality. It does, however, destroy the tocopherols (vitamin E complex).Iron and three of the most necessary B vitamins (thiamin, riboflavin, and niacin), which are partially removed during milling, are returned to white flour by a process known as enrichment. No change occurs in taste, colour, texture, baking quality, or caloric value of the flour.During the milling process, flour is sifted many times through micro-fine silk. This procedure is known as pre-sifting. The mesh size used for sifting varies from flour to flour. 
There are more holes per square inch for cake flour than, for example, bread flour, so that a cup of cake flour has significantly more minute particles than does a cup of bread flour, is liable to be denser, and weigh slightly more. Sifted flour yields more volume in baked bread than does unsifted flour, simply because of the increased volume of air.This page titled 2.5: Flour Terms and Treatments is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.6: Flour Additives
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/02%3A_Flour/2.06%3A_Flour_Additives
A number of additives may be found in commercial flours, from agents used as dough conditioners, to others that aid in the fermentation process. Why use so many additives? Many of these products are complementary – that is, they work more effectively together and the end product is as close to “ideal” as possible. Nevertheless, in some countries the number of additives allowed in flour is limited. For instance, in Germany, ascorbic acid remains the only permitted additive. Some of the additives that are commonly added to flour include those described below.

Until the early 1990s, bromate was added to flour because it greatly sped up the oxidation or aging of flour. Millers in Canada stopped using it after health concerns were raised by the U.S. Food and Drug Administration (FDA). In the United States, bromate is allowed in some states but banned in others (e.g., California).

Approved in the United States since 1962, but banned in Europe, ADA falls under the food additives permitted in Canada. ADA is a fast-acting flour treatment resulting in a cohesive, dry dough that tolerates high water absorption. It is not a bleach, but because it helps produce bread with a finer texture it gives an apparently whiter crumb. It does not destroy any vitamins in the dough. Bakers who want to know if their flours contain ADA or other chemical additives can request the information from their flour suppliers.

An amino acid, L-cysteine speeds up reactions within the dough, thus reducing or almost eliminating bulk fermentation time. In effect, it gives the baker a “no-time” dough. It improves dough elasticity and gas retention.

Ascorbic acid was first used as a bread improver in 1932, after it was noticed that old lemon juice added to dough gave better results because it improved gas retention and loaf volume. Essentially vitamin C (ascorbic acid) has the advantage of being safe even if too much is added to the dough, as the heat of baking destroys the vitamin component.
The addition of ascorbic acid consistent with artisan bread requirements is now routine for certain flours milled in North America.Calcium peroxide (not to be confused with the peroxide used for bleaching flour) is another dough-maturing agent.Glycerides are multi-purpose additives used in both cake mixes and yeast doughs. They are also known as surfactants, which is a contraction for “surface-acting agents.” In bread doughs, the main function of glycerides is as a crumb-softening agent, thus retarding bread staling. Glycerides also have some dough strengthening properties.Approved for use in the United States since 1961, this additive improves gas retention, shortens proofing time, increases loaf volume, and works as an anti-staling agent.This page titled 2.6: Flour Additives is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.7: Whole Grain and Artisan Milling
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/02%3A_Flour/2.07%3A_Whole_Grain_and_Artisan_Milling
Whole grain and artisan milling is the type of milling that was practiced before the consumer market demanded smooth white flours that are refined and have chemical additives to expedite aging of flours. Artisan milling produces flours that are less refined and better suited to traditional breads, but also contain little to no additives and have higher nutritional content. For that reason, demand for these types of flour is on the rise.

Artisan millers (also known as micro millers) process many non-stream grains, including spelt, kamut, buckwheat, and other non-gluten grains and pulses. This offers bakers opportunities to work with different grains and expand their businesses. Artisan flours are readily available directly from millers or through a distributor. Knowing the origin of the grains and the quality of the ingredients in baking is important for artisan bakers.

Whole grain flours are on the increase as consumers become more aware of their benefits. Whole grain flour, as the name suggests, is made from whole grains.

Many artisan millers purchase their grains directly from growers. This method of purchasing establishes trustworthy working relationships with the grain growers and promotes transparency in grain growing and food safety practices. Grain growers that sell their grains to artisan millers apply conventional or organic growing practices. Grain growers and millers have to go through rigorous processes to obtain certified organic certification for their grains or products, which guarantees that no chemical additives have been used.

How organic grain is processed varies. Stone milling and impact hammer milling methods are typical when minimally refined whole grain flour is preferred. Information on several American artisan millers that produce various whole grain flours can be found at Faitrebid Mills; Hayden Flour Mills; and Baker Miller Chicago. Organic flours have gained popularity in the baking industry.
As consumers become more aware of them, we see the demand swinging back toward whole grain and artisan milling as a preference.This page titled 2.7: Whole Grain and Artisan Milling is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.8: Flour in Baking
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/02%3A_Flour/2.08%3A_Flour_in_Baking
Flour forms the foundation for bread, cakes, and pastries. It may be described as the skeleton, which supports the other ingredients in a baked product. This applies to both yeast and chemically leavened products.The strength of flour is represented in protein (gluten) quality and quantity. This varies greatly from flour to flour. The quality of the protein indicates the strength and stability of the flour, and the result in bread making depends on the method used to develop the gluten by proper handling during the fermentation. Gluten is a rubber-like substance that is formed by mixing flour with water. Before it is mixed it contains two proteins. In wheat, these two proteins are gliadin and glutenin. Although we use the terms protein and gluten interchangeably, gluten only develops once the flour is moistened and mixed. The protein in the flour becomes gluten.Hard spring wheat flours are considered the best for bread making as they have a larger percentage of good quality gluten than soft wheat flours. It is not an uncommon practice for mills to blend hard spring wheat with hard winter wheat for the purpose of producing flour that combines the qualities of both. Good bread flour should have about 13% gluten.Flour should be kept in a dry, well-ventilated storeroom at a fairly uniform temperature. A temperature of about 21°C (70°F) with a relative humidity of 60% is considered ideal. Flour should never be stored in a damp place. Moist storerooms with temperatures greater than 23°C (74°F) are conducive to mould growth, bacterial development, and rapid deterioration of the flour. A well-ventilated storage room is necessary because flour absorbs and retains odors. 
For this reason, flour should not be stored in the same place as onions, garlic, coffee, or cheese, all of which give off strong odors.

Wheat that is milled and blended with modern milling methods produces flours that have a fairly uniform quality all year round and, if purchased from a reliable mill, should not require any testing for quality. The teacher, student, and professional baker, however, should be familiar with qualitative differences in flours and should know the most common testing methods.

Flours are mainly tested for:

Other tests, done in a laboratory, are done for:

The color of the flour has a direct bearing on baked bread, provided that fermentation has been carried out properly. The addition of other ingredients to the dough, such as brown sugar, malt, molasses, salt, and colored margarine, also affects the color of bread.

To test the color of the flour, place a small quantity on a smooth glass, and with a spatula, work it until a firm smooth mass about 5 cm (2 in.) square is formed. The thickness should be about 2 cm (4/5 in.) at the back of the glass to a thin film at the front. The test should be made in comparison with a flour of known grade and quality, both flours being worked side by side on the same glass. A creamy white color indicates a hard flour of good gluten quality. A dark or greyish color indicates a poor grade of flour or the presence of dirt. Bran specks indicate a low grade of flour.

After making a color comparison of the dry samples, dip the glass on an angle into clean water and allow it to partially dry. Variations in color and the presence of bran specks are more easily identified in the damp samples.

Flours are tested for absorption because different flours absorb different amounts of water and therefore make doughs of different consistencies. The absorption ability of a flour is usually between 55% and 65%. To determine the absorption factor, place a small quantity of flour (100 g/4 oz.) in a bowl.
Add water gradually from a beaker containing a known amount of water. As the water is added, mix with a spoon until the dough reaches the desired consistency. You can knead the dough by hand for final mixing and determination of consistency. Weigh the unused water. Divide the weight of the water used by the weight of the flour used. The result is the absorption ability as a percentage. For example, if 60 g of the water is worked into 100 g of flour, the absorption ability is 60 ÷ 100 × 100 = 60%.

Prolonged storage in a dry place results in a natural moisture loss in flour and has a noticeable effect on the dough. For example, a sack of flour that originally weighed 40 kg (88 lb.) with a moisture content of 14% may be reduced to 39 kg (86 lb.) during storage. This means that 1 kg (2 lb.) of water is lost and must be made up when mixing. The moisture content of the wheat used to make the flour is also important from an economic standpoint.

Hard wheat flour absorbs more liquid than soft flour. Good hard wheat flour should feel somewhat granular when rubbed between the thumb and fingers. A soft, smooth feeling indicates a soft wheat flour or a blend of soft and hard wheat flour. Another indicator is that hard wheat flour retains its form when pressed in the hollow of the hand and falls apart readily when touched. Soft wheat flour tends to remain lumped together after pressure.

The gluten test is done to find the variation of gluten quality and quantity in different kinds of flour. Hard flour has more gluten of better quality than soft flour. The gluten strength and quality of two different kinds of hard flour may also vary with the weather conditions and the place where the wheat is grown. The difference may be measured exactly by laboratory tests, or roughly assessed by the variation of gluten balls made from different kinds of hard flours.

For example, to test the gluten in hard flour and all-purpose flour, mix 250 g (9 oz.) of each in separate mixing bowls with enough water to make each dough stiff. Mix and develop each dough until smooth.
Let the dough rest for about 10 minutes. Wash each dough separately while kneading it under a stream of cold water until the water runs clean and all the starch is washed out. (Keep a flour sieve in the sink to prevent dough pieces from being washed down the drain.) What remains will be crude gluten. Shape the crude gluten into round balls, then place them on a paper-lined baking pan and bake at 215°C (420°F) for about one hour. The gluten ball made from the hard flour will be larger than the one made from all-purpose flour. This illustrates the ability of hard flour to produce a greater volume because of its higher gluten content.Ash or mineral content of flour is used as another measurement of quality. Earlier in the chapter, we talked about extraction rates as an indicator of how much of the grain has been refined. Ash content refers to the amount of ash that would be left over if you were to burn 100 g of flour. A higher ash content indicates that the flour contains more of the germ, bran, and outer endosperm. Lower ash content means that the flour is more highly refined (i.e., a lower extraction rate).The final and conclusive test of any flour is the kind of bread that can be made from it. The baking test enables the baker to check on the completed loaf that can be expected from any given flour. Good volume is related to good quality gluten; poor volume to young or green flour. Flour that lacks stability or power to hold during the entire fermentation may result in small, flat bread. Flour of this type may sometimes respond to an increase in the amount of yeast. 
More yeast shortens the fermentation time and keeps the dough in better condition during the pan fermentation period.This page titled 2.8: Flour in Baking is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Sorangel Rodriguez-Velazquez via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
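The absorption test and the storage moisture-loss example in this section reduce to simple arithmetic. A minimal sketch (the function names and the 60 g water figure are ours, chosen to fall inside the 55%–65% range quoted above):

```python
def absorption_pct(water_used_g: float, flour_g: float) -> float:
    """Absorption ability: weight of water worked into the dough
    divided by the weight of flour, expressed as a percentage."""
    return water_used_g / flour_g * 100

def water_to_make_up_kg(original_kg: float, stored_kg: float) -> float:
    """Moisture lost in storage that must be added back when mixing."""
    return original_kg - stored_kg

# Hypothetical absorption test on the 100 g (4 oz.) sample described above:
pct = absorption_pct(60, 100)
print(f"{pct:.0f}%")      # 60%
print(55 <= pct <= 65)    # True: inside the usual 55%-65% range

# The storage example from the text: a 40 kg (88 lb.) sack reduced to 39 kg (86 lb.)
print(water_to_make_up_kg(40.0, 39.0))  # 1.0
```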
2.9: Rye Flour
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/02%3A_Flour/2.09%3A_Rye_Flour
Rye is a hardy cereal grass cultivated for its grain. Its use by humans can be traced back over 2,000 years. Once a staple food in Scandinavia and Eastern Europe, rye declined in popularity as wheat became more available through world trade. A crop well suited to northern climates, rye is grown on the Canadian Prairies and in northern states such as the Dakotas and Wisconsin.

Rye flour is the only flour other than wheat that can be used without blending (with wheat flour) to make yeast-raised breads. Nutritionally, it is a grain comparable in value to wheat. In some cases, for example, its lysine content (an amino acid) is even biologically superior.

The brown grain is cleaned, tempered, and milled much like wheat grain. One difference is that the rye endosperm is soft and breaks down into flour much more quickly than wheat. As a result, it does not yield semolina, so purifiers are seldom used. The bran is separated from the flour by the break roller, and the flour is further rolled and sifted while being graded into chop, meal, light flour, medium flour, and dark flour.

The lighter rye flours are generally bleached, usually with a chlorine treatment. The purpose of bleaching is to lighten the colour, since there is no improvement in the gluten capability of the flour.

The grade of extraction of rye flour is of great importance to the yield of the dough and the creation of a particular flavour in the baked bread. Table 1 shows the percentage of the dry substances of rye flour by grade of extraction.

Table 1: Dry substances of rye flour by grade of extraction

Substance    70% extraction    85% extraction
Ash          0.8%              1.4%
Fat          1.2%              1.7%
Protein      8.1%              9.6%
Sugar        6.5%              7.5%

Note that ash, fibre, and pentosans are higher in the 85% extraction rate flour, and starch is lower. Pentosans are gummy carbohydrates that tend to swell when moistened and, in baking, help to give the rye loaf its cohesiveness and structure.
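The values in Table 1 can be captured in a small lookup table; a sketch (the `RYE_COMPOSITION` mapping and helper function are illustrative, not part of the text):

```python
# Dry-substance percentages of rye flour by grade of extraction, from Table 1.
RYE_COMPOSITION = {
    70: {"ash": 0.8, "fat": 1.2, "protein": 8.1, "sugar": 6.5},
    85: {"ash": 1.4, "fat": 1.7, "protein": 9.6, "sugar": 7.5},
}

def substance_pct(extraction_grade, substance):
    """Look up one dry substance (in %) for a grade of extraction."""
    return RYE_COMPOSITION[extraction_grade][substance]

# Higher extraction keeps more of the outer grain, so ash rises:
print(substance_pct(85, "ash") > substance_pct(70, "ash"))  # True
```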
The pentosan level in rye flour is greater than that of wheat flour and is of more significance for successful rye bread baking.

Rye flours differ from wheat flours in the type of gluten that they contain. Although some dark rye flours can have a gluten content as high as 16%, this is only gliadin. The glutenin, which gives dough its elasticity, is absent, and therefore doughs made only with rye flour will not hold the gas produced by the yeast during fermentation. This results in a small and compact loaf of bread.

Starch and pentosans are far more important to the quality of the dough yield than gluten. Starch is the chief component of the flour responsible for the structure of the loaf. Its bread-making ability hinges on the age of the flour and the acidity. While rye flour does not have to be aged as much as wheat flour, it has both a "best after" and a "best before" date. Three weeks after milling is considered to be good.

When the rye flour is freshly milled, the starch gelatinizes (sets) quickly at a temperature at which amylases are still very active. As a result, bread made from fresh flour may be sticky and very moist. At the other extreme, as the starch gets older, it gelatinizes less readily, the enzymes cannot do their work, and the loaf may split and crack. A certain amount of starch breakdown must occur for the dough to be able to swell.

The moisture content of rye flour should be between 13% and 14%. The less water in the flour, the better its storage ability. Rye should be stored under conditions similar to those for wheat flour.

Here is a short list of the differences between rye and wheat:

In summary, both wheat and rye have a long history in providing the "staff of life." They are both highly nutritious. North American mills have state-of-the-art technology that compensates for crop differences, thus ensuring that the baker has a reliable and predictable raw material.
Flour comes in a great variety of types, specially formulated so that the baker can choose according to product and customer taste.
2.10: Other Grains and Flours
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/02%3A_Flour/2.10%3A_Other_Grains_and_Flours
Several other types of grains are commonly used in baking. In particular, corn and oats feature prominently in certain types of baking (quick breads and cookies respectively, for instance), but increasingly rice flour is being used in baked goods, particularly for people with gluten sensitivities or intolerances. The trend to whole grains and the influence of different ethnic cultures has also meant an increase in the use of other grains and pulses for flours used in breads and baking in general.

Corn is one of the most widely used grains in the world, and not only for baking. Corn is used in breads and cereals, but also to produce sugars (such as dextrose and corn syrup), starch, plastics, adhesives, fuel (ethanol), and alcohol (bourbon and other whisky). It is produced from the maize plant (the preferred scientific and formal name of the plant that we call corn in North America). There are different varieties of corn, some of which are soft and sweet (corn you use for eating fresh or for cooking) and some of which are starchy and are generally dried to use for baking, animal feed, and popcorn.

Rice is another of the world's most widely used cereal crops and forms the staple for much of the world's diet. Because rice is not grown in Canada, it is not regulated by the Canadian Grain Commission.

Oats are widely used for animal feed and food production, as well as for making breads, cookies, and dessert toppings. Oats add texture to baked goods and desserts.

A wide range of additional flours and grains that are used in ethnic cooking and baking are becoming more and more widely available in Canada. These may be produced from grains (such as kamut, spelt, and quinoa), pulses (such as lentils and chickpeas), and other crops (such as buckwheat) that have a grain-like consistency when dried.
Increasingly, with allergies and intolerances on the rise, these flours are being used in bakeshops as alternatives to wheat-based products for customers with special dietary needs.
3.1: Understanding Fats and Oils
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/03%3A_Fat/3.01%3A_Understanding_Fats_and_Oils
Fats and oils are organic compounds that, like carbohydrates, are composed of the elements carbon (C), hydrogen (H), and oxygen (O), arranged to form molecules. There are many types of fats and oils and a number of terms and concepts associated with them, which are detailed further here.

In baking, lipids are generally a synonym for fats. Baking books may talk about the "lipid content of eggs," for example.

Triglyceride is the chemical name for the most common type of fat found in the body, indicating that it is usually made up of three (tri) fatty acids and one molecule of glycerol (glycerine is another name). (The mono- and diglycerides that are used as emulsifiers have one and two fatty acids respectively.)

[Figure: Composition of fats (triglycerides)]

Each kind of fat or oil has a different combination of fatty acids. The nature of the fatty acid will determine the consistency of the fat or oil. For example, stearic acid is the major fatty acid in beef fat, and linoleic acid is dominant in seed oils. Fatty acids are defined as short, medium, or long chain, depending on the number of carbon atoms in the molecule. The reason that some fat melts gradually is that as the temperature rises, each fatty acid will, in turn, soften as its melting point is reached. A fat that melts all of a sudden contains fatty acids of the same or similar type, with melting points within a narrow range. An example of such a fat is coconut fat: one second it is solid, the next, liquid.

Table 1 shows the characteristics of three fatty acids.

Table 1: Characteristics of fatty acids

Type of Fatty Acid    Melting Point    Physical State (at room temperature)
Stearic               69°C (157°F)     Solid
Oleic                 16°C (61°F)      Liquid
Linoleic              -12°C (9°F)      Liquid

Rancid is a term used to indicate that fat has spoiled. The fat takes on an unpleasant flavor when exposed to air and heat.
Unsalted butter, for example, will go rancid quickly if left outside the refrigerator, especially in warm climates.

Oxidation (exposure to air) causes rancidity in fats over time. This is made worse by contact with certain metals, such as copper. This is why doughnuts are never fried in copper pans!

Some oils contain natural antioxidants, such as tocopherols (vitamin E is one kind), but these are often destroyed during processing. As a result, manufacturers add synthetic antioxidants to retard rancidity. BHA and BHT are synthetic antioxidants commonly used by fat manufacturers.

Saturated and unsaturated refer to the extent to which the carbon atoms in the fatty acid molecule are bonded (saturated) with hydrogen atoms. One system of fatty acid classification is based on the number of double bonds.

[Structures: stearic acid, oleic acid, linoleic acid]

Saturated fat is a type of fat found in food. For many years, there has been a concern that saturated fats may lead to an increased risk of heart disease; however, there have been studies to the contrary and the literature is far from conclusive. The general assumption is that the less saturated fat the better as far as health is concerned. For the fat manufacturer, however, low saturated fat levels make it difficult to produce oils that will stand up to the high temperatures necessary for processes such as deep-frying. Hydrogenation has been technology's solution. Hydrogenation will be discussed later in the chapter.

Saturated fat is found in many foods:

Unsaturated fat is also in the foods you eat. Replacing saturated and trans fats (see below) with unsaturated fats has been shown to help lower cholesterol levels and may reduce the risk of heart disease. Unsaturated fat is also a source of omega-3 and omega-6 fatty acids, which are generally referred to as "healthy" fats. Choose foods with unsaturated fat as part of a balanced diet using the U.S.
Department of Health and Human Services' Dietary Guidelines.

Even though unsaturated fat is a "good fat," having too much in your diet may lead to having too many calories, which can increase your risk of developing obesity, type 2 diabetes, heart disease, and certain types of cancer.

There are two main types of unsaturated fats:

Simply put, hydrogenation is a process of adding hydrogen gas to alter the melting point of the oil or fat. The injected hydrogen bonds with the available carbon, which changes liquid oil into solid fat. This is practical, in that it makes fats versatile. Think of the different temperature conditions within a bakery during which fat must be workable; think of the different climatic conditions encountered in bakeries.

Trans Fat

Trans fat is made by a chemical process known as "partial hydrogenation," in which liquid oil is made into a solid fat. Like saturated fat, trans fat has been shown to raise LDL or "bad" cholesterol levels, which may in turn increase your risk for heart disease. Unlike saturated fat, trans fat also lowers HDL or "good" cholesterol. A low level of HDL cholesterol is also a risk factor for heart disease.

Until recently, most of the trans fat found in a typical American diet came from:

The US Food and Drug Administration (FDA) specifically prescribes what information must be displayed on a label. The trans fat content of food is one piece of core nutrition information that is required to be declared in a nutrition facts table. More information on a nutrition facts table and labeling details can be found in www.fda.gov/food/ingredientsp.../ucm274590.htm

Emulsification is the process by which normally unmixable ingredients (such as oil and water) can be combined into a stable substance. Emulsifiers are substances that can aid in this process. There are natural emulsifiers such as lecithin, found in egg yolks.
Emulsifiers are generally made up of monoglycerides and diglycerides and have been added to many hydrogenated fats, improving the fat's ability to:

Emulsified shortenings are ideal for cakes and icings, but they are not suitable for deep-frying.

Stability refers to the ability of a shortening to have an extended shelf life. It refers especially to deep-frying fats, where a smoke point (see below) of 220°C to 230°C (428°F to 446°F) indicates a fat of high stability.

The smoke point is the temperature reached when fat first starts to smoke. The smoke point will decline over time as the fat breaks down (see below).

The technical term for fat breakdown is hydrolysis, which is the chemical reaction of a substance with water. In this process, fatty acids are separated from their glycerol molecules and accumulate over time in the fat. When their concentration reaches a certain point, the fat takes on an unpleasant taste, and continued use of the fat will yield a nasty flavor. The moisture, which is at the root of this problem, comes from the product being fried. This is a good reason to turn off the fryer or turn it to "standby" between batches of frying foods such as doughnuts. Another cause of fat breakdown is excessive flour on the product or particles breaking off the product.

Attribution

Stearic acid. Retrieved from http://library.med.utah.edu/NetBioch...Acids/3_3.html
Oleic acid. Retrieved from http://library.med.utah.edu/NetBioch...Acids/3_3.html
Linoleic acid. Retrieved from http://library.med.utah.edu/NetBioch...Acids/3_3.html
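The fatty-acid melting points in Table 1 above determine the solid/liquid states listed in that table; a minimal sketch (the room temperature of 20 °C and the helper name are my assumptions):

```python
# Melting points (°C) from Table 1: Characteristics of fatty acids.
MELTING_POINT_C = {"stearic": 69, "oleic": 16, "linoleic": -12}

def physical_state(fatty_acid, temp_c=20.0):
    """A fatty acid is solid below its melting point, liquid at or above it."""
    return "solid" if temp_c < MELTING_POINT_C[fatty_acid] else "liquid"

# Reproduces the table's room-temperature column:
for acid in MELTING_POINT_C:
    print(acid, physical_state(acid))
```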
3.2: Sources of Bakery Fats and Oils
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/03%3A_Fat/3.02%3A_Sources_of_Bakery_Fats_and_Oils
Edible fats and oils are obtained from both animal and vegetable sources. Animal sources include beef, pork, sheep, and fish. In North America, the first two are the prime sources. Vegetable sources include canola, coconut, corn, cotton, olive, palm fruit and palm kernel, peanut, soya bean, safflower, and sunflower.

The major steps in refining fats and oils are as follows:
3.3: Major Fats and Oils Used in Bakeries
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/03%3A_Fat/3.03%3A_Major_Fats_and_Oils_Used_in_Bakeries
Lard is obtained from the fatty tissues of pigs, with a water content of 12% to 18%. Due to dietary concerns, lard has gradually lost much of its former popularity. It is still extensively used, however, for:

Lard has a good plastic range, which enables it to be worked into a pie dough at fairly low temperatures (try the same thing with butter!). It has a fibrous texture and does not cream well. It is therefore not suitable for cake making. Some grades of lard also have a distinctive flavor, which is another reason it is unsuitable for cake making.

Butter is made from sweet, neutralized, or ripened creams pasteurized and standardized to a fat content of 30% to 40%. When cream is churned or overwhipped, the fat particles separate from the watery liquid known as buttermilk. The separated fat is washed and kneaded in a water wheel to give it plasticity and consistency. Color is added during this process to make it look richer, and salt is added to improve its keeping quality.

In Canada, the following regulations apply to butter:

Sweet (or unsalted) butter is made from a cream that has a very low acid content, and no salt is added to it. It is used in some baking products, like French butter cream, where butter should be the only fat used in the recipe. Keep sweet butter in the refrigerator.

From the standpoint of flavor, butter is the most desirable fat used in baking. Its main drawback is its relatively high cost. It has moderate but satisfactory shortening and creaming qualities. When used in cake mixing, additional time, up to five minutes more, should be allowed in the creaming stage to give maximum volume. Adding an emulsifier (about 2% based on flour weight) will also help in cake success, as butter has a poor plastic range of 18°C to 20°C (64°F to 68°F).

Butter and butter products may also be designated as "whipped" where they have had air or inert gas uniformly incorporated into them as a result of whipping.
Whipped butter may contain up to 1% added edible casein or edible caseinates.

Butter and butter products may also be designated as "cultured" where they have been produced from cream to which a permitted bacterial culture has been added.

Margarines are made primarily from vegetable oils (to some extent hydrogenated) with a small fraction of milk powder and bacterial culture to give a butter-like flavor. Margarines are very versatile and include:

Margarine may be obtained white, but is generally colored. Margarine has a fat content ranging from 80% to 85%, with the balance pretty much the same as butter.

The claim in advertisements that margarine contains a certain percentage of a specific oil should always be based on the percentage of oil by weight of the total product. All the oils used in making the margarine should be named. For example, if a margarine is made from a mixture of corn oil, cottonseed oil, and soybean oil, it would be considered misleading to refer only to the corn oil content in an advertisement for the margarine. On the other hand, the mixture of oils could be correctly referred to as vegetable oils.

It used to be that you could buy margarines only in solid form, full of saturated and trans fat. The majority of today's margarines come in tubs, are soft and spreadable, and are non-hydrogenated, which means they have low levels of saturated and trans fat. Great care must be taken when attempting to substitute spreadable margarine for solid margarine in recipes.

Since the invention of hydrogenated vegetable oil in the early 20th century, shortening has come almost exclusively to mean hydrogenated vegetable oil. Vegetable shortening shares many properties with lard: both are semi-solid fats with a higher smoke point than butter and margarine. They contain less water and are thus less prone to splattering, making them safer for frying. Lard and shortening have a higher fat content (close to 100%) compared to about 80% for butter and margarine.
Cake margarines and shortenings tend to contain a somewhat higher percentage of monoglycerides than margarines. Such "high-ratio shortenings" blend better with hydrophilic (water-attracting) ingredients such as starches and sugar.

Health concerns and reformulation

Early in this century, vegetable shortening became the subject of some health concerns due to its traditional formulation from partially hydrogenated vegetable oils that contain trans fats, which have been linked to a number of adverse health effects. Consequently, a low trans-fat variant of Crisco brand shortening was introduced in 2004. In January 2007, all Crisco products were reformulated to contain less than one gram of trans fat per serving, and the separately marketed trans-fat-free version introduced in 2004 was consequently discontinued. Since 2006, many other brands of shortening have also been reformulated to remove trans fats. Non-hydrogenated vegetable shortening can be made from palm oil.

Hydrogenated shortenings are the biggest group of fats used in the commercial baking industry. They feature the following characteristics:

Variations on these shortenings are: emulsified vegetable shortenings, roll-in pastry shortenings, and deep-frying fats.

Emulsified vegetable shortenings are also termed high-ratio fats. The added emulsifiers (mono- and diglycerides) increase fat dispersion and give added fineness to the baked product. They are ideal for high-ratio cakes, where relatively large amounts of sugar and liquid are incorporated. The result is a cake:

This is also the fat of choice for many white cake icings.

This type of shortening is also called special pastry shortening (SPS). These fats have a semi-waxy consistency and offer:

They are primarily used in puff pastry and Danish pastry products where lamination is required. They come in various specialized forms, with varying qualities and melting points. It is all a matter of compromise between cost, palatability, and leavening power.
A roll-in that does not have "palate cling" may have a melting point too low to guarantee maximum lift in a puff pastry product.

Deep-frying fats are special hydrogenated fats that have the following features:

Vegetable oil is an acceptable common name for an oil that contains more than one type of vegetable oil. Generally, when such a vegetable oil blend is used as an ingredient in another food, it may be listed in the ingredients as "vegetable oil."

There are two exceptions: if the vegetable oils are ingredients of a cooking oil, salad oil, or table oil, the oils must be specifically named in the ingredient list (e.g., canola oil, corn oil, safflower oil), and using the general term vegetable oil is not acceptable. As well, if any of the oils are coconut oil, palm oil, palm kernel oil, peanut oil, or cocoa butter, the oils must be specifically named in the ingredient list.

When two or more vegetable oils are present and one or more of them has been modified or hydrogenated, the common name on the principal display panel and in the list of ingredients must include the word "modified" or "hydrogenated," as appropriate (e.g., modified vegetable oil, hydrogenated vegetable oil, modified palm kernel oil).

Vegetable oils are used in:

Coconut fat is often used to stabilize butter creams, as it has a very small plastic range. It has a quite low melting point, and its hardness is due to other factors. It can be modified to melt at different temperatures, generally between 32°C and 36°C (90°F and 96°F).

As mentioned above, all fats become oils and vice versa, depending on temperature. Physically, fats consist of minute solid fat particles enclosing a microscopic liquid oil fraction. The consistency of fat is very important to the baker. It is very difficult to work with butter (relatively low melting point) in hot weather, for example. At the other extreme, fats with a very high melting point are not very palatable, since they tend to stick to the palate.
Fat manufacturers have therefore attempted to customize fats to accommodate the various needs of the baker.

Fats with a melting range between 40°C and 44°C (104°F and 112°F) are considered to be a good compromise between convenience in handling and palatability. New techniques allow fats with quite high melting points without unpleasant palate-cling. Table 1 shows the melting points of some fats.

It is probably safe to say that most fats are combinations or blends of different oils and/or fats. They may be all from vegetable sources, or they may combine vegetable and animal sources. A typical ratio is 90% vegetable source to 10% animal (this is not a hard and fast rule). Formerly, blends of vegetable and animal oils and fats were termed compound fats. Nowadays this term, if used at all, may refer also to combinations of purely vegetable origin.
3.4: Functions of Fat in Baking
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/03%3A_Fat/3.04%3A_Functions_of_Fat_in_Baking
The following summarizes the various functions of fat in baking.

Used in sufficient quantity, fats tend to "shorten" the gluten strands in flour; hence their name: shortenings. Traditionally, the best example of such a fat was lard.

Creaming ability refers to the extent to which fat, when beaten with a paddle, will build up a structure of air pockets. This aeration, or creaming ability, is especially important for cake baking; the better the creaming ability, the lighter the cake.

Plastic Range

Plastic range relates to the temperature at which the fatty acid component melts and over which shortening will stay workable and will "stretch" without either cracking (too cold) or softening (too warm). A fat that stays "plastic" over a temperature range of 4°C to 32°C (39°F to 90°F) would be rated as excellent. A dough made with such a fat could be taken from the walk-in cooler to the bench in a hot bakeshop and handled interchangeably. Butter, on the other hand, does not have a good plastic range; it is almost too hard to work at 10°C (50°F) and too soft at 27°C (80°F).

In dough making, the fat portion makes it easier for the gluten network to expand. The dough is also easier to mix and to handle. This characteristic is known as lubrication.

Whether in a dough or in a cake batter, fat retards drying out. For this purpose, a 100% fat shortening will be superior to either butter or margarine.

As one of the three major food categories, fats provide a very concentrated source of energy. They contain many of the fatty acids essential for health.
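The plastic range described above is just a temperature window; a sketch (the function name and the narrow "butter-like" bounds in the second call are my own illustrations; the 4 °C to 32 °C defaults are the text's "excellent" range):

```python
# A fat is workable while the bakeshop temperature sits inside its plastic range.
def is_workable(temp_c, low_c=4.0, high_c=32.0):
    """True if a fat stays plastic (workable) at temp_c; the defaults are the
    'excellent' plastic range quoted in the text."""
    return low_c <= temp_c <= high_c

print(is_workable(10))              # True: an excellent shortening handles a cool dough
print(is_workable(10, 11.0, 26.0))  # False: a narrow, butter-like range fails here
```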
4.1: Sugar Chemistry (ADD US)
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/04%3A_Sugar/4.01%3A_Sugar_Chemistry_(ADD_US)
Chemically, sugar consists of carbon (C), oxygen (O), and hydrogen (H) atoms, and is classified as a carbohydrate. There are three main groups of sugars, classified according to the way the atoms are arranged together in the molecular structure: monosaccharides, disaccharides, and polysaccharides.

Bakers are not concerned with polysaccharides but rather with the monosaccharides and disaccharides. The latter two both sweeten, but they cannot be used interchangeably because they have different effects on the end product. These differences are touched on later in the book.

It is helpful to understand some of the conventions of the names of different sugars. Note that sugar names often end in "ose": sucrose, dextrose, maltose, lactose, etc. Sucrose is the chemical name for sugar that comes from the cane and beet sugar plants.

Note that glucose is the chemical name for a particular type of sugar. What is sometimes confusing is that glucose occurs naturally, as a sugar molecule in substances such as honey, but it is also produced industrially from the maize plant (corn).

The Canadian Food and Drug Regulations (FDR) govern the following definitions:
4.2: Sugar Refining
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/04%3A_Sugar/4.02%3A_Sugar_Refining
While some refining usually occurs at source, most occurs in the recipient country. The raw sugar that arrives at the ports is not legally edible, being full of impurities.

At the refinery, the raw brown sugar goes through many stages:

Sugar beet undergoes identical steps after the initial processing, which involves:

From here, the process is identical to the final steps in cane processing. See the figure, which illustrates the process.

Some of the sugar passes through a machine that presses the moist sugar into cubes and wraps and packages them; still other sugar is made into icing sugar. The sugar refining process is completely mechanical, and machine operators' hands never touch the sugar.

Brown and yellow sugars are produced only in cane sugar refineries. When sugar syrup flows from the centrifuge machine, it passes through further filtration and purification stages and is re-boiled in vacuum pans such as the two illustrated in the figure. The sugar crystals are then centrifuged but not washed, so the sugar crystals still retain some of the syrup that gives the product its special flavour and colour.

During the whole refining process, almost 100 scientific checks for quality control are made, while workers in research laboratories at the refineries constantly carry out experiments to improve the refining process and the final product. Sugar is carefully checked at the mills and is guaranteed to have a high purity. Government standards in both the United States and Canada require a purity of at least 99.5% sucrose.

Are animal ingredients included in white sugar?

Bone char (often referred to as natural carbon) is widely used by the sugar industry as a decolourizing filter, which allows the sugar cane to achieve its desirable white colour. Other types of filters involve granular carbon or an ion-exchange system rather than bone char.

Bone char is made from the bones of cattle, and it is heavily regulated by the European Union and the USDA.
Only countries that are deemed BSE-free can sell the bones of their cattle for this process.

Bone char is also used in other types of sugar, so companies that use bone char in the production of their regular sugar also use it in the production of their brown sugar. Confectioner's sugar (refined sugar mixed with cornstarch) made by these companies also involves the use of bone char. Fructose may, but does not typically, involve a bone-char filter.

Bone char is not used at the sugar beet factory in Taber, Alberta, or in Montreal's cane refinery. Bone char is used only at the Vancouver cane refinery. All products under the Lantic trademark are free of bone char. For the products under the Rogers trademark, all Taber sugar beet products are also free of bone char. In order to differentiate the Rogers Taber beet products from the Vancouver cane products, you can verify the ink-jet code printed on the product. Products with the code starting with the number "22" are from Taber, Alberta, while products with the code starting with the number "10" are from Vancouver.

If you want to avoid all refined sugars, there are alternatives such as sucanat and turbinado sugar, which are not filtered with bone char. Additionally, beet sugar (though normally refined) never involves the use of bone char.
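The ink-jet code check described above is a simple prefix rule; a sketch (the function is my own illustration of the "22"/"10" rule stated in the text):

```python
# Rogers sugar: codes starting "22" come from the Taber, Alberta beet factory
# (bone-char free); codes starting "10" come from the Vancouver cane refinery.
def refinery_origin(inkjet_code):
    """Classify a Rogers product by the leading digits of its printed code."""
    if inkjet_code.startswith("22"):
        return "Taber, Alberta (sugar beet; bone-char free)"
    if inkjet_code.startswith("10"):
        return "Vancouver (sugar cane; bone char used)"
    return "unknown"

print(refinery_origin("22B314"))  # Taber, Alberta (sugar beet; bone-char free)
```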