https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Fermentation_in_Food_Chemistry/01%3A_Modules/1.12%3A_Bread
Bread is a staple food in many cultures. The key ingredients are a grain starch, water, and a leavening agent. There are some breads made without leavening agents (tortillas or naan), but these are flat breads.

Typical Steps in Bread Production: Saccharomyces cerevisiae, also known as baker’s yeast, is the primary leavening agent in the production of most breads. Yeast cells consume the sugars present in dough and generate carbon dioxide (CO₂) and ethanol, which are responsible for dough leavening during the fermentation phase and the oven rise. After flour, yeast, and water are mixed, complex biochemical and biophysical processes begin, catalyzed by the wheat enzymes and by the yeast. These processes continue into the baking phase.

The primary starches found in most cereal plants are the polymers amylose and amylopectin. What are the monosaccharides in these polysaccharides? What are the linkages? These starches in the flour provide most of the sugar for fermentation, but the starch must be broken down into monosaccharides before it can be fermented by the yeast. Here is an overview of the sugars utilized by the yeast for the fermentation process: Two types of amylases are present in wheat flour: \(\alpha\)-amylases and \(\beta\)-amylases. Sometimes, \(\alpha\)-amylases are added to dough as part of a flour improver.

Amongst the most important components of the flour are proteins, which often make up 10-15% of the flour. These include the classes of proteins called glutenins and gliadins. Gliadins are globular proteins with molecular weights ranging from 30,000 to 80,000 Da; they contain intramolecular disulfide bonds. Glutenins consist of a heterogeneous mixture of linear polymers with high molecular weight (HMW) sections and low molecular weight (LMW) branches. Disulfide bonds cross-link the glutenin subunits.

In the bread-making process, water is added to flour, where it hydrates the glutenin proteins, causing them to swell and become stretchy and flexible. Prior to kneading, the two main protein types, gliadin and glutenin, remain separate on a molecular level. However, as the dough is mixed and kneaded, several things begin happening: the protease enzymes from the wheat begin to break the glutenin into smaller pieces; the glutenin and gliadin begin to form chemical crosslinks between the proteins; a complex network of proteins, gluten, is formed; and starch granules are trapped in the dough while air is incorporated during kneading. The dough needs to be elastic enough to relax when it rests and to expand and hold CO₂ when it rises, while still maintaining its shape. Eventually, the heat of baking will kill the yeast. Fat and emulsifiers coat proteins. Salts (table salt, NaCl, or hard-water salts such as Ca²⁺ or Mg²⁺) can strengthen the gluten network. Suggest how the presence of salts might strengthen gluten.

Cookie: Usually quite crumbly and doesn’t rise very much. What would you need for a cookie dough? Pizza: To pull dough as thin as a pizza without breaking, there must be a very strong gluten network. What would you need for a pizza dough? Bread: A network tight enough to trap the yeast’s CO₂, allowing the dough to rise, but not so tight that the dough is no longer free to expand. What would you need for a bread dough? Consider gluten formation and water content.

In food chemistry, any heating step involving the presence of sugars and amino compounds leads to a series of reactions called the Maillard reactions.
These Maillard reactions are nonenzymatic ‘browning reactions’ that lead to the formation of a wide range of flavorful compounds, including malty, toasted, bready, and nutty flavors. There are three stages to the Maillard reactions:

Stage 1: A condensation between the sugar and the amine, followed by the Amadori rearrangement.
Stage 2: Formation of Strecker aldehydes.
Stage 3: Formation of heterocyclic nitrogen compounds.

Tautomerizations can convert the Amadori product to a dicarbonyl. The dicarbonyl reacts with an amino acid (asparagine in this example) to form an imine. In the Strecker degradation, the imine product undergoes a decarboxylation and is hydrolyzed to an aldehyde. • Complete the table with the Strecker aldehyde formed from these amino acids. In the final stage, the Strecker aldehydes form complicated heterocycles in a variety of molecular families:

furanones: ‘sweet, caramel’
pyrroles: ‘nutty’
acylpyridines: ‘cracker’
furans: ‘meaty, burnt’
thiophenes: ‘meaty, roasted’
alkylpyridines: ‘bitter, burnt’
pyranones: ‘maple, warm, fruity’
pyrazines: ‘roasted, toasted’
oxazoles: ‘nutty, sweet’
imidazoles: ‘chocolate, bitter, nutty’

The molecules can also form polymers and precipitates.
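To make the leavening discussion above more concrete, here is a rough, illustrative estimate (not from the original text) of how much CO₂ yeast can generate from the fermentable sugar in a dough, using the overall fermentation stoichiometry C₆H₁₂O₆ → 2 C₂H₅OH + 2 CO₂ and the ideal gas law. The dough mass, sugar fraction, and proofing temperature are assumed values chosen only for the example.

```python
# Rough estimate of CO2 produced by yeast fermentation in a bread dough.
# Assumed, illustrative inputs: 500 g of dough containing ~2% fermentable
# sugar (treated as glucose).  Stoichiometry: C6H12O6 -> 2 C2H5OH + 2 CO2.

M_GLUCOSE = 180.16   # g/mol
R = 0.08206          # L atm / (mol K), ideal gas constant
T_PROOF = 300.0      # K (~27 degC proofing temperature, assumed)
P = 1.0              # atm

dough_mass_g = 500.0          # assumed
sugar_fraction = 0.02         # assumed fermentable-sugar content

mol_glucose = dough_mass_g * sugar_fraction / M_GLUCOSE
mol_co2 = 2 * mol_glucose                  # 2 mol CO2 per mol glucose
v_co2_L = mol_co2 * R * T_PROOF / P        # ideal-gas volume at proofing temp

print(f"glucose fermented: {mol_glucose:.3f} mol")
print(f"CO2 produced:      {mol_co2:.3f} mol, about {v_co2_L:.1f} L at 1 atm")
```

Even a few grams of fermentable sugar can generate a few liters of gas, which is why the gluten network’s ability to trap CO₂ matters so much.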
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Logic_of_Organic_Synthesis_(Rao)/04%3A_The_Logic_of_Synthesis
‘There could be ART in Organic Synthesis’ declared the inimitable monarch of organic synthesis, Professor R.B. Woodward. His school unveiled several elegant approaches covering a variety of complex structures and broke new ground in defining the art of organic synthesis. ‘If organic synthesis is a branch of science, what is the LOGIC of organic synthesis?’ marveled several others. The concept of logical approaches towards synthesis has been evolving over the past several decades. A few stalwarts focused their attention on this theme and attempted to evolve a pattern to define this logic. There is no doubt that all of us who dabble in synthesis contribute our small bit to this magnificent endeavor. A few names stand out in our minds for their outstanding contributions. Notable contributions came from the schools of J.A. Marshall, E.J. Wenkert, G. Stork, S. Hanessian, E.E. van Tamelen, S. Masamune, R.B. Woodward, E.J. Corey and several others. The most focused contributions on this theme came from the school of E.J. Corey. The period 1960 – 1990 witnessed the evolution of this thought, and the concept bloomed into a full-fledged topic that now merits a separate space in the college curriculum. Earlier developments focused on, and perfected, the art of DISCONNECTION via RETROSYNTHESIS. This led to logical approaches for the construction of a SYNTHETIC TREE that summarized the various possible approaches to the proposed Target structure. Not all disconnections lead to good routes for synthesis. Once the synthetic tree was constructed, the individual branches were analyzed critically. The reactions involved were looked into to study their feasibility in the laboratory, their mechanistic pathways were analyzed to understand the conformational and stereochemical implications on the outcome of each step involved, and the time / cost factors of the proposed routes were also estimated. The possible areas of pitfall were identified, and the literature was critically scanned to make sure that the steps contemplated were already known or feasible on the basis of known chemistry. In some cases, model compounds were first constructed to study the feasibility of a particular reaction before embarking on the synthesis of the complex molecular architecture. Thus a long process of logical planning is now put in place before the start of the actual synthetic project. In spite of all these careful and lengthy preparations, an experienced chemist is still wary of the Damocles sword of synthesis, viz., the likely failure of a critical step in the proposed route(s), resulting in total failure of the entire project. All achievements are 10% inspiration and 90% perspiration. For these brave molecular engineers, sometimes also called chemists, these long-drawn programs and the possible perils of failure are still worth it, for the perspiration is reward enough. A sound knowledge of mechanistic organic chemistry, detailed information on the art and science of functional group transformations, bond formation and cleavage reactions, mastery over separation and purification techniques and a sound knowledge of spectroscopic analysis are all essential basics for the synthesis of molecules. A synthetic chemist should also be aware of developments in synthetic strategies generated over the years for different groups of compounds, including the rules and guidelines governing synthesis.
Since organic chemistry has a strong impact on the development of other sister disciplines like pharmacy, biochemistry and material science, an ability to understand one or more of these areas and interact with them using their terminologies is also an added virtue for a synthetic chemist. With achievements from synthesis of strained molecules (once considered difficult (if not impossible) to synthesize, to the synthesis of complex, highly functionalized and unstable molecules, an organic chemist could now confidently say that he could synthesize any molecule that is theoretically feasible. This is the current status of the power of organic synthesis. Based on the task assigned to the chemist, he would select a Target molecule for investigation and devise suitable routes for synthesis. For the manipulation of functional groups and formation of new covalent bonds we make use of a large number of Reagents and Name Reactions. In complex organic syntheses, the starting materials and intermediates in the synthetic scheme often have more than one reactive functional group. A few such multifunctional building blocks are shown below to illustrate this point . While working on such complex molecules, it is often necessary to some groups to enable selective working at the desired locations only. Organic chemists have heavily relied on such protection / deprotection strategies and have diligently developed protecting (masking) and deprotection (unmasking) protocols. We would discuss some of the important protecting groups in this chapter. Before proceed further, it must be emphasized here that this protocol should be applied only after alternate options have been critically analyzed. This is because protection / deprotection strategy involves an increase of at least two more critical steps, adding to the length of the synthesis and consequent drop in overall yields of the desired compound. In large-scale reactions, this leads to a huge impact to the Atom Economy and pollution cost of the synthetic process. All this translates into an increase in the overall cost of the final drug molecule. Group / Site Selective Reagent: Protection / deprotection is not always required whenever you see a multiplicity of functional groups. You could solve the selectivity issue by using site selective reactions / reagents. By choosing an appropriate selective reagent to suit the scheme on hand, you could selectively attack only one of the reactive sites. Consider an olefinic ketone . Sodium borohydride reduction in methanol as solvent could selectively reduce the keto- group to a secondary alcohol leaving the olefin undisturbed. On the other hand, diborane reagent in THF as solvent would be a reagent of choice when the selective reduction at the olefin moiety is desired. Diborane reduction of an olefin is several times faster than reduction of ketones. The oxidative cleavage of borane product is also selective. Thus, you can avoid the protection / deprotection strategy by employing a selective reagent. In C – C bond formation reactions we come across several such site-selective reagents. One such reagent widely used in research is the Wittig reagent. They attack the aldehyde or ketone selectively in the presence of ester, nitrile. olefin etc.. In the case of a molecule like bearing an olefin and a carboxylic acid, the –COOH group is several times more reactive than the olefin towards diborane reduction. Hydroboration / oxidation reduces the acid to a primary alcohol, leaving the olefin unaffected. 
On the other hand, if you need a selective reduction of olefin, the acid group has to be processed through a selective protection / deprotection sequence as shown in Compound illustrates several important points in Protection / Deprotection protocol. Both the functional groups could react with a Grignard Reagent. Carboxylic acid group would first react with one mole of the Grignard Reagent to give a carboxylate anion salt. This anion does not react any further with the reagent. When two moles of Grignard Reagent are added to the reaction mixture, the second mole attacks the ketone to give a tertiary alcohol. On aqueous work-up, the acid group is regenerated. Thus, the first mole of the reagent provides a selective transient protection for the –COOH group. Once the acid group is esterified, such selectivity towards this reagent is lost. The reagent attacks at both sites. If reaction is desired only at the ester site, the keto- group should be selectively protected as an acetal. In the next step, the grignard reaction is carried out. Now the reagent has only one group available for reaction. On treatment with acid, the ketal protection in the intermediate compound is also hydrolyzed to regenerated the keto- group. Orthogonal protection is a strategy that allows deprotection of multiple protective groups one at a time, each with a dedicated set of reagents and / reaction conditions without affecting the other. This technique is best illustrated with peptide bond formation and associated deprotection reactions. An amino acid has two functional groups –N and –COOH. When two amino acids (A and B) react under conditions for the peptide bond condensation reaction, a mixture of 4 dipeptides (at least) could be formed as shown below. \[ \ce{ A + B \rightarrow A-A + A-B + B-A + B-B}\] If we are interested in only one product A – B, we have to do selective protections and selective deprotections in a proper sequence. Consider the following peptide bond formation reaction. In order to get only one product A – B, we should protect the N – terminal of ‘A’ and C – terminal of ‘B’. Let us look closely at two different dipeptide formation schemes. In the following sequence, the C – terminal is protected in two different ways for one amino acid. For the second amino acid, the N – terminal is protected with an acid labile Boc- protection. In the next step, the two monoprotected amino acids are coupled as shown below. Take a close look at both the products. In the first product, both protections are acid sensitive. If the final product desired is the protection-free dipeptide, this is indeed a short route. If the desired product is a mono-protected dipeptide, then selective deprotection is the preferred reaction. This is feasible only when we use starting compounds that are differentially protected. This is called . Similar techniques are available for other functional groups as well. Let us now learn more on Protection / Deprotection for some important functional groups. In the introduction, we have seen that carboxylate ion lends protection to an attack of Grignard reagents at this carbonyl carbon. However, this is not sufficient for a vast variety of reagents. Meyer’s 2-oxazolines mask an acid function while activating the α- position for lithiation reaction. The use of this group as protection for –COOH group is rare. 
Since alcohols, aldehydes and ketones are the most frequently manipulated functional groups in organic synthesis, a great deal of work has appeared in their protection / deprotection strategies. In this discussion let us focus on the classes of protecting groups rather than an exhaustive treatment of all the protections. There are two general methods for the introduction of this protection. Transketalation is the method of choice when acetals (ketals) with methanol are desired. Acetone is the by-product, which has to be removed to shift the equilibrium to the right hand side. This is achieved by refluxing with a large excess of the acetonide reagent. Acetone formed is constantly distilled. In the case of cyclic diols, the water formed is continuously removed using a Dean-Stork condenser . The rate of formation of ketals from ketones and 1,2-ethanediol (ethylene glycol), 1,3-propanediol and 2,2-dimethyl-1,3-propanediol are different. So is the deketalation reaction. This has enabled chemists to selectively work at one center. The following examples from steroid chemistry illustrate these points . The demand for has prompted search for new green procedures. Some examples from recent literature are given here . Compared with their oxygen analogues, thioketals markedly differ in their chemistry. The formation as well as deprotection is promoted by suitable Lewis acids. The thioacetals are markedly stable under deketalation conditions, thus paving way for selective operations at two different centers. When conjugated ketones are involved, the ketal formation (as well as deprotection) proceeds with double bond migration. On the other hand, thioketals are formed and deketalated without double bond migration . N-Acetyl (N – COC ), N – Benzoyl (N – COPh) Protections These are the classical protecting groups for primary and secondary amines. The reagents are cheap and the protocol is simple. Such amides generally need drastic conditions for deprotection, though the yields are generally good . A standard procedure is refluxing in aqueous alkali or aqueous mineral acid. Due to the drastic conditions, care should be exercised in this procedure to ensure racemi zation is avoided. Amides are generally crystalline solids that are easily purified by crystallization. When the protection is introduced at the early stages of a long synthetic scheme and a very stable protection is desired (as in nucleotide synthesis) an amide is the most preferred protection. Several more labile amide bonds have been investigated. The amides of trifluoroacetic acid are of special interest. The introduction as well as cleavage is simple and mild . A recent report in amide hydrolysis is given below. N – (N – ) As described above, the amide bonds are very strong. On the other hand, the ester bonds are easily cleaved by mild base conditions. A carboethoxy protection on amine has an amide bond as well as an ester bond. Since N – COOH groups obtained on hydrolysis are very unstable, this protection provides a large family of protective groups for primary and secondary amines. These groups are easily introduced using the corresponding chloroformate esters. Anhydrides or mixed anhydrides under mild basic conditions. Both these protections could be removed under prolonged stirring with base at room temperature. Though mild, some racemisation is sometimes observed. The N – Cbz protection has an added advantage in that it could be easily cleaved under hydrogenolysis conditions . N – Cbz Protection is however stable to acidic conditions. 
Compare this with –Boc protection discussed below. The Tert-Butyloxycarbonyl Protection could be introduced and removed under very mild acid conditions. This protection is stable to alkali and hydrogenolysis . Thus, N – Z and N – Boc are complimentary as protective groups. This UV active protecting group is very popular in Solid Phase Peptide Synthesis (SPPS) protocols. Protection as well as deprotection steps proceed under mild conditions in good yields . The mechanism for Fmoc deprotection is shown in Silylation is a common protection for active hydrogen on heteroatoms. In the case of N – Si bond, quaternary ammonium fluorides cleave this bond . This protection is very stable. N – Tosylation is easily carried out through acid chloride procedure. It is cleaved by solvated electron cleavage reaction. When this group is attached to a primary amine, the –NH group becomes very acidic . The – OH group protection chemistry has been extensively investigated. The classical protection is the formation of esters of aliphatic and aromatic carboxylic acids. Aromatic esters are comparatively difficult to hydrolyze under mild base condition. This provides an opportunity for selective deprotection protocols . Note that this protection is sensitive to acid as well as base conditions. An ether group is one amongst the most stable functional groups. Hence, this group has been the most favored protecting group. Deprotection was a problem. In the early part of the twentieth century, the only procedure was refluxing with aqueous HI or HBr. In recent years several new procedures have appeared for effective removal under mild conditions. The special feature of the benzyl ethers is that this protection is readily removed under neutral hydrogenolysis conditions . Substituents like – OMe or – NO2 could be introduced on the benzene ring to modify the reactivity at the protection site. When an olefin could compete at the hydrogenolysis procedure, the following sequence appears to be an alternate procedure . Allyl ether is a recent introduction in – OH protection. The versatility of this protection could be seen in the following examples . The oxygen – silicon sigma bond is stable to lithium and Grignard reagents, nucleophiles and hydride reagents but very unstable to water and mild aqueous acid and base conditions. A silyl ether of secondary alcohol is less reactive than that of a primary alcohol. The O – trimethylsilyl (O – SiMe3) was first protection of this class. . Replacement of methyl group with other alkyl and aryl groups gives a large variety of silyl ether with varying degrees of stability towards hydrolysis . The following examples illustrate the selectivity in formation and hydrolysis of this group . These protective groups for alcohols are in fact acetals. They are synthesized using the dihydropyran (DHP) and dihydrofuran (DHF) respectively. They behave like acetals in their stability and cleavage . The rate of formation and cleavage for these two groups differ, which finds application for differential protection of alcohols. These protective groups found extensive use in synthesis. However, two major drawbacks were soon observed. On reaction with benzaldehyde or acetone with suitable acid catalyst, vic-diols form cyclic acetals. This in fact is a proof for the existence of vic-diols in the molecule. They are acetal protections and therefore behave as acetals in their chemistry The above discussions are just a glimpse of the vast literature on this topic. 
When more than one competing functional groups are present in a molecule, it may be necessary to introduce at least one protection and one deprotection step in the synthetic scheme. This adds not only to the length of the synthetic scheme, but also to the cost of the final compound. With growing awareness in , chemists have been trying to reduce this protocol to a minimum or preferably avoid this altogether. Several protection free syntheses of natural products are known in the literature. We would discuss this topic at the end of this chapter. Having chosen the TARGET molecule for synthesis, the next exercise is to draw out synthetic plans that would summarize all reasonable routes for its synthesis. During the past few decades, chemists have been working on a process called RETROSYNTHESIS. Retrosynthesis could be described as a logical Disconnection at strategic bonds in such a way that the process would progressively lead to easily available starting material(s) through several synthetic plans. Each plan thus evolved, describes a ‘ROUTE’ based on a retrosynthesis. Each disconnection leads to a simplified structure. The logic of such disconnections forms the basis for the retroanalysis of a given target molecule. Natural products have provided chemists with a large variety of structures, having complex functionalities and stereochemistry. This area has provided several challenging targets for development of these concepts. The underlining principle in devising logical approaches for synthetic routes is very much akin to the following simple problem. Let us have a look of the following big block, which is made by assembling several small blocks . You could easily see that the large block could be broken down in different ways and then reassembled to give the same original block. Now let us try and extend the same approach for the synthesis of a simple molecule. Let us look into three possible ‘disconnections’ for a cyclohexane ring as shown in Figure 4.2.2. In the above analysis we have attempted to develop three ways of disconnecting the six membered ring. Have we thus created three pathways for the synthesis of cyclohexane ring? Do such disconnections make chemical sense? The background of an organic chemist should enable him to read the process as a chemical reaction in the reverse (or ‘retro-‘) direction. The dots in the above structures could represent a carbonium ion, a carbanion, a free radical or a more complex reaction (such as a pericyclic reaction or a rearrangement). Applying such chemical thinking could open up several plausible reactions. Let us look into path b, which resulted from cleavage of one sigma bond. An anionic cyclisation route alone exposes several candidates as suitable intermediates for the formation of this linkage. The above analysis describes only three paths out of the large number of alternate cleavage routes that are available. An extended analysis shown below indicates more such possibilities . Each such intermediate could be subjected to further disconnection process and the process continued until we reach a reasonably small, easily available starting materials. Thus, a complete ‘SYNTHETIC TREE’ could be constructed that would summarize all possible routes for the given target molecule. A route is said to be efficient when the ‘overall yield’ of the total process is the best amongst all routes investigated. This would depend not only on the number of steps involved in the synthesis, but also on the type of strategy followed. 
The strategy could involve a ‘linear synthesis’, consisting only of sequential steps, or a ‘convergent synthesis’, involving fewer sequential steps. Figure 4.3.1 shown below depicts a few patterns that could be recognized in such synthetic trees. When each disconnection process leads to only one feasible intermediate and the process proceeds in this fashion all the way to one set of starting materials (SM), the process is called a linear synthesis. On the other hand, when an intermediate could be disconnected in two or more ways leading to different intermediates, branching occurs in the plan. The processes could be continued all the way to SMs. In such routes, different branches of the synthetic pathways converge towards an intermediate. Such schemes are called convergent syntheses. The flow charts shown below depict a hypothetical 5-step synthesis by the above two strategies. Assuming a very good yield (90%) at each step (this is rarely seen in real projects), a linear synthesis gives 59% overall yield, whereas a convergent synthesis gives 73% overall yield for the same number of steps (a short illustrative calculation appears at the end of this passage). The situation becomes more complex when you consider the possibility of unwanted isomers generated at different steps of the synthesis. The overall yield drops considerably for the synthesis of the right isomer. Reactions that yield single isomers (diastereospecific reactions) in good yields are therefore preferred. Some reactions, like the Diels-Alder reaction, generate several stereopoints (points at which stereoisomers are generated) simultaneously in one step in a highly predictable manner. Such reactions are highly valued in planning synthetic strategies because several desirable structural features are introduced in one step. Where one pure enantiomer is the target, the situation is again complex. A pure compound in the final step could still contain 50% of the unwanted enantiomer, thus leading to a drastic drop in the efficiency of the route. In such cases, it is desirable to separate the optical isomers as early as possible along the synthetic route. This is the main merit of the Chiron Approach, in which the right starting material is chosen from an easily available, cheap ‘chiral pool’. We would discuss this aspect after we have understood the logic of planning syntheses. Given these parameters, you could now decide on the most efficient route for any given target. Molecules of interest are often more complex than the plain cyclohexane ring discussed above. They may have substituents and functional groups at specified points and even specific stereochemical points. Construction of a synthetic tree should ideally accommodate all these parameters to give efficient routes. Let us look into a slightly more complex example shown in Figure 4.4.1. The ketone is required as an intermediate in a synthesis. Unlike the plain cyclohexane discussed above, the substitution pattern and the keto- group in this molecule impose some restrictions on the disconnection processes. This route implies attack of an anion of methyl isopropyl ketone on a bromo-component. This route implies simple regiospecific methylation of a larger ketone that bears all the remaining structural elements. This route implies three different possibilities. Route C-1 envisages an acylium unit, which could come from an acid halide or an ester. Route C-2 implies an umpolung reaction at the acyl unit. Route C-3 suggests an oxidation of a secondary alcohol, which could be obtained through a Grignard-type reaction. This implies a Michael addition.
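Returning to the yield comparison made above, a quick calculation reproduces the ~59% versus ~73% figures. The sketch below is illustrative only; the 90% per-step yield and the branch layout are the assumptions stated in the text, and the key point is that a convergent route’s overall yield is governed by its longest linear sequence rather than by the total step count.

```python
# Overall yield comparison for a hypothetical 5-step synthesis at 90% per step.

step_yield = 0.90

# Linear route: 5 sequential steps, so losses multiply through every step.
linear_overall = step_yield ** 5

# Convergent route (one common layout): two fragments, each made in 2 steps,
# are coupled in a final step.  Total steps are still 5, but the target only
# "sees" the longest linear sequence, here 2 + 1 = 3 steps.
longest_linear_sequence = 3
convergent_overall = step_yield ** longest_linear_sequence

print(f"linear (5 steps):            {linear_overall:.1%}")      # ~59%
print(f"convergent (LLS of 3 steps): {convergent_overall:.1%}")  # ~73%
```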
Each of these routes could be further developed backwards to complete the synthetic tree. These are just a few plausible routes to illustrate an important point that the details on the structure would restrict the possible cleavages to some strategic points. Notable contributions towards planning organic syntheses came from E.J. Corey’s school. These developments have been compiles by Corey in a book by the title LOgIC OF CHEMICAL SYNTHESIS. These and several related presentations on this topic should be taken as guidelines. They are devised after analyzing most of the known approaches published in the literature and identifying a pattern in the logic. They need not restrict the scope for new possibilities. Some of the important strategies are outlined below. When a synthetic chemist looks at the given Target, he should first ponder on some preliminary steps to simplify the problem on hand. Is the molecule polymeric? See whether the whole molecule could be split into monomeric units, which could be coupled by a known reaction. This is easily seen in the case of peptides, nucleotides and organic polymers. This could also be true to other natural products. In molecules like C-Toxiferin 1 , the point of dimerisation is obvious. In several other cases, a deeper insight is required to identify the monomeric units, as is the case with Usnic acid . In the case of the macrolide antibiotic Nonactin , this strategy reduces the possibilities to the synthesis of a monomeric unit . The overall structure has S4 symmetry and is achiral even though assembled from chiral precursors. Both (+)-nonactic acid and (−)-nonactic acid are needed to construct the macrocycle and they are joined head-to-tail in an alternating (+)-(−)-(+)-(−) pattern. (see J. Am. Chem. Soc., 131, 17155 (2009) and references cited therein). Is a part of the structure already solved? Critical study of the literature may often reveal that the same molecule or a closely related one has been solved. R.B. Woodward synthesized as a key intermediate in an elegant synthesis of Reserpine . The same intermediate compound became the key starting compound for Velluz et.al., in the synthesis of Deserpidine . Such strategies reduce the time taken for the synthesis of new drug candidates. These strategies are often used in natural product chemistry and drug chemistry. Once the preliminary scan is complete, the target molecule could be disconnected at Strategic Bonds. STRATEGIC BONDS are the bonds that are cleaved to arrive at suitable Starting Materials (SM) or SYNTHONS. For the purpose of bond disconnection, Corey has suggested that the structure could be classified according to the sub-structures generated by known chemical reactions. He called the sub-structures RETRONS and the chemical transformations that generate these Retrons were called TRANSFORMS. A short list of Transforms and Retrons are given below (TABLE 4.6.1). Note that when Transforms generate Retrons, the product may have new STEREOPOINTS (stereochemical details) generated that may need critical appraisal. The structure of the target could be such that the Retron and the corresponding Transforms could be easily visualized and directly applied. In some cases, the Transforms or the Retrons may not be obvious. In several syntheses, transformations do not simplify the molecule, but they facilitate the process of synthesis. For example, a keto- group could be generated through modification of a -CH-N unit through a . This generates a new set of Retron / transforms pair. 
A few such transforms are listed below, along with the nomenclature suggested by Corey . A Rearrangement Reaction could be a powerful method for generating suitable new sub-structures. In the following example, a suitable Pinacol Retron, needed for the rearrangement is obtained through an acyloin transform . Such rearrangement Retrons are often not obvious to inexperienced eyes. Some transforms may be necessary to protect (acetals for ketones), modify (reduction of a ketone to alcohol to avoid an Aldol condensation during a Claisen condensation) or transpose a structural element such as a stereopoint (e.g. 2 inversion, epimerization etc.,) or shifting a functional group. Such transforms do not simplify the given structural unit. At times, activation at specific points on the structure may be introduced to bring about a C-C bond formation and later the extra group may be removed. For example, consider the following retrosynthesis in which an extra ester group has been introduced to facilitate a Dieckmann Retron. In complex targets, combinations of such strategies could prove to be a very productive strategy in planning retrosynthesis. Witness the chemical modification strategy shown below for an efficient stereospecific synthesis of a trisubstituted olefin Figure 4.6.4 Examples for FGA / FGR strategies for complex targets Amongst the molecular architectures, the bridged-rings pose a complex challenge in Structure-Based disconnection procedures. Corey has suggested guidelines for efficient disconnections of strategic bonds. A bond cleavage for retrosynthesis should lead to simplified structures, preferably bearing five- or six-membered rings. The medium and large rings are difficult to synthesize stereospecifically. Amongst the common rings, a six-membered ring is easily approached and manipulated to large and small rings. Simultaneous cleavage of two bonds, suggesting cycloaddition – retrons are often more efficient. Some cleavages of strategic bonds are shown in Figure 4.6.5, suggesting good and poor cleavage strategies based on this approach. However, these guidelines are not restrictive. Identifying Retron – Transform sets in a given target molecule is therefore a critical component in retrosynthesis. Such an approach could often generate several synthetic routes. The merit of this approach is that starting materials do not prejudice this logic. Retrosyntheses thus developed could throws open several routes that need further critical scrutiny on the basis of known facts. Identification of Retrons / Transforms sets provided the prerequisite for computer assisted programs designed for generating retrosynthetic routes. A list of Retrons and the corresponding transforms were interlinked and the data was stored in the computer. All known reactions were thus analyzed for their Retron / Transform characteristics and documented. The appropriate literature citations were also documented and linked. Based on these inputs, computer programs were designed to generate retrosynthetic routes for any given structure. Several such programs are now available in the market to help chemists generate synthetic strategies. Given any structure, these programs generate several routes. Once the scientist identifies the specific routes of interest for further analysis, the program generates detailed synthetic steps, reagents required and the appropriate citations. 
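As a toy illustration of how such a program can walk a synthetic tree, the sketch below enumerates every set of starting materials reachable from a small, hand-written tree. The intermediates, fragments, and disconnections are invented purely for illustration; real systems additionally attach Retron / Transform data, feasibility scores, and literature citations to each disconnection.

```python
# Toy retrosynthetic tree: each (hypothetical) intermediate maps to the lists
# of precursors that one disconnection would produce.  Compounds not in the
# dict are treated as purchasable starting materials.  All names are invented.

tree = {
    "target":         [["intermediate-1"], ["intermediate-2", "fragment-A"]],
    "intermediate-1": [["fragment-B", "fragment-C"]],
    "intermediate-2": [["fragment-D"]],
}

def routes(compound):
    """Enumerate every way of tracing a compound back to starting materials."""
    if compound not in tree:               # a purchasable starting material
        return [[compound]]
    all_routes = []
    for disconnection in tree[compound]:
        # combine the routes of each precursor produced by this disconnection
        combined = [[]]
        for precursor in disconnection:
            combined = [r + sub for r in combined for sub in routes(precursor)]
        all_routes.extend(combined)
    return all_routes

for r in routes("target"):
    print(" + ".join(r))
```

Each printed line is one set of starting materials in which a branch of the tree terminates; screening those branches against feasibility, cost, and precedent is the job the text describes.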
In spite of such powerful artificial intelligence, the intelligence and intuitive genius of a chemist is still capable of generating a new strategy not yet programmed. Again, human intelligence is still a critical input for the analysis of the routes generated using a computer. Based on the experience of the chemists’ team, the projected aim of the project and the facilities available, the routes are further screened. Short lists of syntheses that exemplify retroanalysis strategies devised through powerful transforms are given below. Several syntheses from natural product chemistry are discussed later in this chapter, which further illustrate these points. Retrosynthesis based on the Diels-Alder Transform (E.J. Corey et al., J.A.C.S. (1972), 94, 2549): Fumagillol presents 4 stereocentres and sensitive functionalities. Simplification of the functional groups first exposed a vic-diol. This site could come from an olefin D. Further retroanalysis led to a structurally simplified target sequence B to F. A cyclohexene ring system is suitable for a powerful DA Transform. This step generated two stereocentres in one reaction and also an olefin in the correct position for hydroxylation. The key intermediate C could also be generated through a functional group transform leading to G. This provided scope for a new set of starting materials using another DA Transform. The retrosynthetic analysis and the actual synthesis are shown in Figure 4.7.1. The synthetic protocol reported by Corey is outlined in Figure 4.7.2. For the synthesis of Estrone, an interesting DA Transform strategy was devised by Kametani et al. The retrosynthetic strategy is depicted in Fig 4.7.3. The required diene precursor was generated via a cyclo-reversion reaction of a cyclobutene unit (T. Kametani et al., Tetrahedron, (1981), 37, 3). The crucial stereospecific trisubstituted olefins on Squalene were synthesized using a Claisen Retron. Note the double Claisen approach in this strategy. The biogenetic-type cyclisation of olefins provides scope for the application of a Mechanistic Transform, or transforms based on mechanistic considerations. A clever introduction of a chiral centre provided an efficient route for generating several enantiopure chiral centres in one step using this strategy. In these lengthy discussions above, we learnt about disconnection approaches. We said that stereocentres could introduce special challenges in planning efficient synthetic routes. Let us look at the molecule Biotin to understand disconnection strategies and the problem of stereocentres. Baker established the structure of Biotin in 1947 through an unambiguous synthesis of the molecule. A retroanalysis of the synthetic scheme is shown below. The SM chosen and the synthetic approach clearly established the atom connectivities and the overall structure of the compound. However, the route made no attempt to synthesize one pure isomer because the actual stereochemistry was not established at that time. The route yielded all eight stereoisomers (3 asymmetric centers). These isomers were carefully separated. In 1952 the biologically active isomer was identified as the all-cis enantiomer (+)-Biotin. At this stage, several groups reported the stereospecific synthesis of the all-cis isomer exclusively. The following retroanalysis depicts three such attempts. Note that these efforts were directed towards the synthesis of the racemate and not the pure (+)-isomer of Biotin. These approaches solved the problem of diastereomeric purity.
But they still left a mixture of two unresolved enantiomers, viz., (±)-Biotin. To obtain a pure enantiomer in excellent yield, you have to resolve the racemic mixture at an appropriate stage. Alternatively, one could resort to asymmetric synthesis at all crucial stages. A still better approach would be to start from a chiral SM that already has most of the stereocentres set in the correct fashion. This elegant approach is called the Chiron approach. When carefully executed, such procedures yield a very pure enantiomer as the final product. Two such approaches for (+)-Biotin are shown below. In the first approach the chiral amino acid cysteine is chosen because it has one key asymmetric center, the sulphur moiety and a carboxylic acid in the correct positions. In this example the choice of the SM is quite obvious. Note the clever introduction of the second nitrogen and the cyclisation step leading to the formation of the tetrahydrothiophene ring. Also note that the yield of Cram vs anti-Cram (chelation) products could be influenced by the choice of the reagent. These kinds of insights come only through a thorough knowledge of this particular reaction. In the second example chosen here, the choice of the SM as the appropriate chiron is not obvious but hidden. Such an analysis demands a more critical insight into the stereocentres concerned. An emerging concept in the Logic in Synthesis is the deliberate planning of Green Synthetic Pathways. The logic of retroanalysis is the same as discussed above. The only differentiating point is that the criteria for selection of a synthetic route discussed earlier would now analyze the same synthetic tree through a Green Chemistry window, to select only those routes that have maximum Green aspects. The green chemistry goal is enforced through inclusion of the Twelve Principles of Green Chemistry. This could be done by embracing one or more of the following techniques: use of green energy sources like microwave, sonochemistry and photochemistry; solvent-free syntheses; easily recoverable new solvents and eco-friendly solvents; reusable catalysts; and schemes that avoid protecting-group chemistry. Most of the chemistry used in Green Chemistry is not really new to chemists. It is now being revisited owing to the environmental consciousness that has crept into industrial chemistry and society at large. Most of this chemistry is buried in two centuries of chemical literature, though several new discoveries in reagents have appeared in recent years. Chemists now have to become more alert to this awakening to the environmental damage caused by chemical activities across the globe. The above discussions are meant only to illustrate the major steps involved in the retrosynthetic analysis of a molecule. Thorough knowledge of synthetic tools, mechanisms and stereochemistry is an essential prerequisite for a chemist to venture into the synthesis of complex molecules. Needless to add, all these efforts have to be suitably backed up by a team of chemists having rigorous training in laboratory techniques, first-hand experience of several organic reactions / reagents, thorough knowledge of purification and spectroscopic techniques and, not least, a good knowledge of search techniques to scan and retrieve the requisite information from the vast chemical literature accumulated since the dawn of modern chemistry. Let us now delve deep into a few select structures chosen from natural product chemistry and see how these structures have been tackled through different synthetic strategies.
We would start with a simple molecule, Disparlure, with only two asymmetric centers. The course would end with a flavour of some Green Chemistry based syntheses, to draw the attention of students to these newly emerging concepts and concerns.
https://chem.libretexts.org/Bookshelves/General_Chemistry/General_Chemistry_Supplement_(Eames)/Chemistry_Basics/Alchemy
If everything is made of the same 4 elements in different ratios, perhaps you can adjust the ratios of elements through various processes and change one material into another. This was called . In particular, people wanted to change inexpensive metals into gold. There were known examples of one material turning into another. For instance, there's a reddish mineral called cinnabar, and if you heat it, you get silvery liquid mercury. Oddly, if you heat it again, you get another red solid. We'll explain that in chemical terms soon (it's a chemical change!) but it looked like a proof of principle for transmutation, with one substance becoming another. Also, some people claimed to be able to make gold. The problem here was that they weren't distinguishing between a and a . A pure substance is composed of a single type of molecule. Gold (theoretically) is a pure substance: not just a pure substance, but a pure element. However, you can make compounds that look a little like gold, that are yellow and shiny, by mixing different metals, such as copper and tin. We call that a mixture (or more specifically an alloy, which is a mixture of metals). A is a substance that contains multiple elements. Water is a compound of hydrogen and oxygen. However, water is a pure substance, because each molecule of water is the same. Air is a mixture, not a pure substance, because it contains different types of molecules, some of which are compounds, like carbon dioxide, which is a compound of carbon and oxygen. The difference between a compound and a mixture is that in a compound you always have the same ratio of the elements: in carbon dioxide, the ratio is there in the name: one carbon atom, two (di) oxygen atoms. Carbon dioxide is CO . In a mixture, the ratio can vary. Air contains nitrogen (N ) and oxygen (O ) molecules and many other components, and a sample from a lecture hall and a sample from a forest would probably have slightly different ratios. Alchemists did not always distinguish between mixtures and pure compounds or elements, for instance gold vs bronze (an alloy of copper and tin). We can distinguish different materials by using the of the materials. Properties are things like what temperature it melts at, whether it dissolves in acid, and so on. We can distinguish and . Melting point is a physical property, and solubility in acid is a chemical property. Alchemists developed many of the techniques of chemistry that we still use. For instance, a mixture of a solid and a liquid, such as sugar in oil, could be separated by filtration. If you have a solution, like sugar in water, that is , because all the sugar has dissolved, you can't filter it. Instead, you could let the water evaporate slowly until big crystals grow. (We call that rock candy in English.) This process is called , and is used to purify solids. To purify liquids, you can use , which is based on different boiling points: heat at a temperature where one component boils and the other doesn't, and collect the vapor. is similar: some solids will vaporize with heat, and can then be recollected from the vapor on a cold surface, where they solidify. Overall, alchemists were sometimes excellent experimentalists, and they definitely spent time "in the lab." However, their explanations and reasons for beliefs may seem strange from a modern perspective. For instance, if a theory had parallels to Christian religious events, that might be considered evidence that it was correct. 
The other "unscientific" thing about their practice was that they often reported results in a way that was intended to confuse the reader, if they reported the results at all, and they often looked to ancient texts as an authority, even when there was no evidence that the authors had accomplished anything. Finally, the goals they chose were very ambitious, so instead of trying to look at the simplest questions first, to get clear answers, they used many complicated procedures and got results that are hard to explain even now. involves changing one material into another by adjusting the ratios of elements through various processes. A has a chemical composition of only one type of molecule .A is a substance that contains 2 or more types of atoms chemically bonded together. have two or more substances physically combined, meaning that each component retains its chemical composition and properties. The components of a mixture are not uniformly combined while the components of a mixture are. A substance's can be observed without changing its chemical composition (i.e. color, volume, melting point) while its are observed through chemical changes (i.e. burning, rusting).
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/09%3A_Gases/9.05%3A_Gas_Laws
Toward the end of the eighteenth century, many scientists began studying the relationships between pressure, temperature, and volume for gases. They began to realize that the relationships between these measurements were the same for all gases. Gases behave similarly to a good approximation over a wide range of conditions, due in part to the large space between gas molecules. Simple gas laws were devised to predict the behavior of most gases. These gas laws, now recognized as special cases of the ideal gas law, describe the effect of pressure on volume (Boyle's law), of temperature on volume or pressure (Charles's law and Gay-Lussac's law), and of amount of gas (in mol) on volume (Avogadro's law). There are several videos on YouTube that show effects that can be understood in terms of these laws, and that help visualize the impact of the 14.7 lb/in² of air pressure that we live under and seldom notice: • In the first, water is added to a 55 gallon drum and boiled, letting steam and air escape, so that the drum is filled with hot water vapor; then the drum is sealed and cooled. The water condenses (because of its high polarity, the molecules attract), and the drum collapses. The 55 gallon drum crush demonstrates intuitively what the gas laws tell us: cooling the sealed drum lowers the pressure inside it (Gay-Lussac's law), and the surrounding atmospheric pressure crushes the drum. Even railroad tankers aren't immune to the pressure: again, note how this relates to the gas laws. The temperature or pressure inside the tank was altered, and that (quite spectacularly) altered the volume of the tanker.
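A back-of-the-envelope ideal-gas estimate shows why the sealed drum has no chance. The temperatures below are assumed values for illustration (they are not taken from the demonstration itself): cooling alone lowers the internal pressure, and condensation of the water vapor removes most of the gas entirely.

```python
# Why the sealed 55-gallon drum collapses: an ideal-gas estimate.
# Assumed numbers: drum sealed at ~373 K while full of water vapor at 1 atm,
# then cooled to ~300 K, where nearly all of the vapor condenses.

P_INITIAL_ATM = 1.0
T_HOT = 373.0    # K, boiling-water vapor (assumed)
T_COLD = 300.0   # K, room temperature (assumed)

# Cooling alone (fixed volume, fixed amount of gas): P/T = constant
p_cooled = P_INITIAL_ATM * T_COLD / T_HOT
print(f"pressure from cooling alone: {p_cooled:.2f} atm")

# Condensation removes most of the gas; at ~300 K the vapor pressure of
# water is only ~0.035 atm, so the inside pressure falls roughly to that.
p_after_condensation = 0.035
imbalance = P_INITIAL_ATM - p_after_condensation
print(f"inside after condensation:   ~{p_after_condensation:.3f} atm")
print(f"net inward pressure:         ~{imbalance:.2f} atm on the drum walls")
```

Nearly a full atmosphere of unbalanced external pressure acting over the drum's whole surface is what crushes it.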
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Fermentation_in_Food_Chemistry/01%3A_Modules/1.14%3A_Cider
Cider is a drink made from apples. In the US, cider can refer to apple juice or the fermented, alcoholic version. This section will focus on the fermented, alcoholic drink. Typical Steps in Cider Production: Apples are the primary material used in cider production; thus, the final cider product quality and style depend heavily upon the quality of the apples used. Apples must be juicy, sweet, and ripened. A full-bodied cider requires the use of several different types of apples to give it a balanced flavor including a mix of sweet and tart apples. There are four main apple varietals. List them and their flavors. Apples are not peeled as the skin of the apples contains many of the compounds that contribute to the taste of the cider. The apples are ground and then pressed to extract the juice. The primary components of an apple are shown in Table 14.1. The fiber and insoluble carbohydrates are mostly removed in the pressing process. After pressing, the juice can be pasteurized and sold as apple juice or it can be further processed with fermentation to produce the alcoholic beverage. The primary sugars found in cider apple juice before fermentation are fructose, glucose, and sucrose. On a commercial scale, there are considerable cost advantages to supplementing the raw apple juice with glucose syrup and water as they are cheaper than apple juice. In fact, many commercial ciders are now made from around 35% juice and 65% glucose syrup. How would this impact the flavor? Pectin is a polysaccharide made from a mixture of monosaccharides. While many distinct polysaccharides have been identified and characterized within these ‘pectic polysaccharide family’, most contain stretches of linear chains of \(\alpha\)-(1–4)-linked D-galacturonic acid. Draw a linear chain of linear chains of \(\alpha\)-(1–4)-linked D-galacturonic acid. Most wild type yeasts cannot ferment galacturonic acid but failure to remove the pectin can lead to the formation of jelly during concentration. Thus, pectolytic enzymes are sometimes added prior to fermentation. This pectinase treatment often results in a release of higher concentrations of anthocyanins, tannins, and polyphenols from the apple pressings. How could the increased level of tannins and phenols impact the flavor of the final cider? Remaining pectin polysaccharides cause a haze in finished ciders, so pectinase is also sometimes added after fermentation to clear the cider. How would the presence of alcohol in the finished cider impact the effectiveness of the pectinase? French cider is often prepared with an initial step, ‘ ’, in which pectins and other substances are separated from the juice using a gelatin. T hen, the clear juice is fermented slowly. These ciders are fruitier than others. Explain. Many cider makers add sulphur dioxide (or potassium metabisulphite) to inhibit the growth of most spoilage yeasts and bacteria, while permitting the desirable fermenting yeasts (such as or ) to facilitate the conversion to alcohol. Most natural weak acid preservatives (such as vinegar, benzoic acid or sorbic acid) are believed to work by diffusing through bacterial cell membranes. The increased acidity of the cytoplasm disrupts the cell homoeostasis and the cell has to work very hard to pump out protons to restore the pH. Eventually, the cells run out of ATP and die. Sulphite is believed to work in the same way as other weak acid preservatives. Apple juice (must) was traditionally fermented with the bacteria and yeast already present on the apples. 
The main yeasts found in wild fermentations is . But and are also present is substantial amounts. Other species are present in small amounts: and . There are three phases in the cider process based on the dominant yeast species present. Fermentation should take 5 to 10 days, and up to 4 - 6 weeks at cool temperatures. Nearly all the sugar will then have been used by the yeast and the yeast will become dormant. Propose how cider-makers would determine when to stop fermentation. During alcoholic fermentation, many secondary metabolites are produced by the yeasts. Esters provide mainly fruity and floral notes; higher alcohols provide ‘background flavors’; whereas the phenolic compounds can generate interesting or unpleasant aromatic notes. Esters are the main volatile compounds in cider. They are characterized by a high presence of ethyl acetate, which alone can represent up to 90% of the total esters. Dioxanes, key flavor components of cider, are described as a ‘green, cidery’ flavor that results only from alcoholic fermentation of apples (and pears). These dioxanes are formed from reaction of acetaldehyde or other aldehydes (fermentation byproduct) with diols which are found almost exclusively in apples. Propose a mechanism for this reaction of acetaldehyde and octane-1, 3-diol. Another dioxane found in cider is formed from acetaldehyde and (R)-5(Z)-octene-1,3-diol. Draw the product found in cider. is the process of moving the cider from its (the sediment formed). This is usually a filtration or centrifugation process. Pectinase may be added at this point. What is the purpose of this step? After racking, the cider maker may choose to do a secondary fermentation. Yeast might be added to ensure a sparkling cider, or a malolactic acid fermentation will be used to improve the flavor. (See next sections). Cider was traditionally stored in wooden barrels to age, but this is not essential if chilling and fining have been properly carried out after the fermentation. The bacteria needed for malolactic acid are often founded in the wood barrels. Cider fermentation with LAB bacteria convert the sugars and the malic acids into lactate. Malolactic fermentation is primarily completed by , a heterofermentative organism. This process tends to create a rounder mouthfeel to the final cider. Malic acid is typically associated with the taste of green apples, while lactic acid has a richer taste. The malolactic fermentation involves the conversion of malic acid into lactic acid and carbon dioxide. Some LAB bacteria convert the malic acid in one step; while others utilize these steps that include intermediates from the TCA cycle. Complete the steps in this biochemical pathway to convert malic acid to lactic acid. If malolactic fermentation is not fulfilling this function, then there must be some energy gain for the organism in completing this process. Secondary Fermentation: Malolactic Fermentation Energy and pH MLF process is shown. This reaction allows cells to regulate their internal pH. What happens to the overall H concentration inside the cell? This reaction allows cells to gain energy by creating a proton gradient across cell membranes. Some bacteria can utilize citrate or malate. The process allows out 1-2 proton atoms to be pumped out of the cell into the periplasm. Suggest a method for pumping out 2 H instead of 1 H in this process. The proton gradient created from MLF is coupled to an ATPase which captures the energy in the production of new ATP molecules. 
In cider production, malolactic fermentation is important to reduce the malic acid content AND the overall raw acidic flavor of the cider. The proton pump will ________ the acidity of the cider product (outside the cell). Depending upon the organism, these processes are inhibited with higher alcohol content and below a pH of 3-4. If a cider producer wants to inhibit MLF in the cider, the pH of the must can be [ ] to prevent the process.

Ciders are naturally 'dry'. The term 'dry' means that there is little sweetness from remaining sugars, but more flavor from alcohol, fusel alcohols, esters, etc. Some consumers prefer a sweet still cider. Propose at least two methods for ensuring a sweet cider.

To get some bubbles into cider, excess carbon dioxide under pressure can be added and then the cider is bottled or put in a keg which will withstand the pressure. Commercial cider-makers will sometimes inoculate with active dry yeast ( ) before bottling to obtain a naturally carbonated beverage (a rough estimate of the CO2 produced by such a second fermentation is sketched at the end of this passage). Because there is not much sugar left in the cider at this point, __________ is often added when using a second fermentation. This can be very successful, although the bottom of each bottle will inevitably be a little cloudy when poured, because there will always be some yeast deposit which will be roused up when the pressure is released. Note: Bottles used for carbonated ciders must be designed to withstand the pressure generated by the gas!

After all fermentation processes are complete, the cider is either pasteurized or treated with ascorbic acid or sulfur dioxide. This step also decreases the chance of contamination by Acetobacter.
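The CO2 generated by bottle conditioning follows directly from fermentation stoichiometry: one glucose (180.16 g/mol) gives two CO2 (2 × 44.01 g/mol). The sketch below estimates how much a given addition of priming sugar raises the carbonation. The conversion of 1 "volume" of CO2 to roughly 1.96 g/L, and the example quantities, are common brewing rules of thumb assumed here for illustration, not figures from this module.

```python
# Estimate the CO2 generated by priming sugar added before bottling.
#   C6H12O6 -> 2 C2H5OH + 2 CO2
# Each gram of glucose can yield at most 2 * 44.01 / 180.16 = 0.489 g CO2.
CO2_PER_G_GLUCOSE = 2 * 44.01 / 180.16   # g CO2 per g glucose (theoretical maximum)
G_CO2_PER_VOLUME = 1.96                  # g/L per "volume" of CO2 (rule of thumb)

def added_carbonation(priming_g: float, batch_L: float) -> float:
    """Extra volumes of CO2 from priming glucose, assuming complete fermentation."""
    g_per_L = priming_g * CO2_PER_G_GLUCOSE / batch_L
    return g_per_L / G_CO2_PER_VOLUME

# Example: 60 g of glucose in a 19 L batch (hypothetical numbers)
print(f"Added carbonation: {added_carbonation(60, 19):.2f} volumes of CO2")
```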
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/09%3A_Gases/9.03%3A_Pressure
You are probably familiar with the general idea of pressure from experiences in pumping tires or squeezing balloons. A gas exerts force on any surface that it contacts. The force exerted per unit area of surface is called the pressure and is represented by P. The symbols F and A represent force and area, respectively. On the image below, a force is pushing down on the circular area of a barometer. The pressure is then the amount of force pushing on a unit area of the circle of the barometer. \[\text{Pressure}=\frac{\text{force}}{\text{area}}\qquad P=\frac{F}{A} \nonumber \]

As a simple example of pressure, consider a rectangular block of lead which measures 20.0 cm by 50.0 cm by 100.0 cm (Figure 1). The volume of the block is 1.00 × 10⁵ cm³, and since the density ρ of Pb is 11.35 g cm⁻³, the mass is \[m=V\rho =\text{1.00 }\times \text{ 10}^{\text{5}}\text{ cm}^{\text{3}}\text{ }\times \text{ }\frac{\text{11.35 g}}{\text{1 cm}^{\text{3}}} =\text{1.135 }\times \text{ 10}^{\text{6}}\text{ g}=\text{1.135 }\times \text{ 10}^{\text{3}}\text{ kg} \nonumber \]

According to the second law of motion, discovered by British physicist Isaac Newton, the force on an object is the product of the mass of the object and its acceleration a: \[F = ma \nonumber \] At the surface of the earth, the acceleration of gravity is 9.81 m s⁻². Substituting the mass of the lead block into this equation, we have \[F = 1.135 \times 10^{3} \text{ kg} \times 9.81 \text{ m s}^{-2} = 1.113 \times 10^{4} \text{ kg m s}^{-2} \nonumber \] The units kilogram meter per second squared are given the name newton in the International System and abbreviated N. Thus the force which gravity exerts on the lead block (the weight of the block) is 1.113 × 10⁴ N, or 11.13 kN. A block that is resting on the floor will always exert a downward force of 11.13 kN.

The pressure exerted on the floor depends on the area over which this force is exerted. If the block rests on the 20.0 cm by 50.0 cm side (Figure 9.2), its weight is distributed over an area of 20.0 cm × 50.0 cm = 1000 cm². Thus: \[P=\frac{F}{A}=\frac{\text{11.13 kN}}{\text{1000 cm}^{\text{2}}}\times \left( \frac{\text{100 cm}}{\text{1 m}} \right)^{\text{2}} =\text{111.3 }\frac{\text{kN}}{\text{m}^{\text{2}}}=\text{111.3 }\times \text{ 10}^{\text{3}}\text{ N m}^{-\text{2}} \nonumber \] Thus we see that pressure can be measured in units of newtons (force) per square meter (area). The units newton per square meter are used in the International System to measure pressure, and they are given the name pascal (abbreviated Pa). Like the newton, the pascal honors a famous scientist, in this case Blaise Pascal (1623 to 1662), one of the earliest investigators of the pressure of liquids and gases.

If the lead block is laid on its side (Figure 1), the pressure is altered. The area of contact with the floor is now 50.0 cm × 100.0 cm = 5000 cm², and so \[P=\frac{F}{A}=\frac{\text{11.13 }\times \text{ 10}^{\text{3}}\text{ N}}{\text{5000 cm}^{\text{2}}}=\frac{\text{11.13 }\times \text{ 10}^{\text{3}}\text{ N}}{\text{0.500 m}^{\text{2}}} =\text{22.26 }\times \text{ 10}^{\text{3}}\text{ N m}^{-\text{2}}=\text{22.26 kPa} \nonumber \] When the block is lying flat, its pressure on the floor (22.26 kPa) is only one-fifth as great as the pressure (111.3 kPa) when it stands on end. This is because the area of contact is 5 times larger.
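The lead-block example reduces to two formulas, F = mg and P = F/A, plus careful unit handling. The short sketch below just redoes that arithmetic numerically and reproduces the 111.3 kPa and 22.26 kPa results quoted above.

```python
# Pressure exerted by the 20.0 cm x 50.0 cm x 100.0 cm lead block on the floor.
# F = m*g and P = F/A, working entirely in SI units.

RHO_PB = 11.35e3   # kg/m^3 (11.35 g/cm^3)
G = 9.81           # m/s^2

dims = (0.200, 0.500, 1.000)           # block dimensions in metres
volume = dims[0] * dims[1] * dims[2]    # 0.100 m^3
mass = RHO_PB * volume                  # 1.135e3 kg
weight = mass * G                       # ~1.113e4 N

# Standing on the 20.0 cm x 50.0 cm end vs. lying on the 50.0 cm x 100.0 cm side
for label, area in (("on end", 0.200 * 0.500), ("on side", 0.500 * 1.000)):
    pressure_kpa = weight / area / 1000
    print(f"{label}: area = {area:.3f} m^2, pressure = {pressure_kpa:.1f} kPa")
```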
The air surrounding the earth is pulled toward the surface by gravity in the same way as the lead block we have been discussing. Consequently the air also exerts a pressure on the surface. This is called atmospheric pressure.

The following video shows the "power" of atmospheric pressure. A metal can full of water is heated until the water inside boils, creating a high internal pressure. The can is then put upside down into a bowl of cold ice water, causing the formerly hot water vapor to cool and decrease in volume. This cooling causes a decrease in the internal pressure of the can. The lower pressure exerts less force on the can and can no longer counter the atmospheric pressure coming from the outside of the can, which pushes inward, crushing the can.

Because winds may add more air or take some away from the vertical column above a given area on the surface, atmospheric pressure will vary above and below the result obtained in Example 9.1. Pressure also decreases as one moves to higher altitudes. The tops of the Himalayas, the highest mountains in the world at about 8000 m (almost 5 miles), are above more than half the atmosphere. The lower pressure at such heights makes breathing very difficult—even the slightest exertion leaves one panting and weak. For this reason jet aircraft, which routinely fly at altitudes of 8 to 10 km, have equipment to maintain air pressure in their cabins artificially.

It is often convenient to express pressure using a unit which is about the same as the average atmospheric pressure at sea level. As we saw in Example 1, atmospheric pressure is about 101 kPa, and the standard atmosphere (abbreviated atm) is defined as exactly 101.325 kPa. Since this unit is often used, it is useful to remember that \[1 \text{ atm} = 101.325 \text{ kPa} \nonumber \]

The total mass of air directly above a 30 cm by 140 cm section of the Atlantic Ocean was 4.34 × 10³ kg on July 27, 1977. Calculate the pressure exerted on the surface of the water by the atmosphere. First calculate the force of gravitational attraction on the air, then divide by the area over which it acts: \[A=\text{30 cm }\times \text{ 140 cm}=\text{4200 cm}^{\text{2}}\text{ }\times \text{ }\left( \frac{\text{1 m}}{\text{100 cm}} \right)^{\text{2}}\text{ }=\text{0.42 m}^{\text{2}} \nonumber \]
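The worked solution above breaks off after converting the area. A minimal sketch completing the F = mg and P = F/A arithmetic; the 10³ exponent on the air mass is an inference (it reproduces a pressure of roughly one atmosphere, as the problem intends).

```python
# Pressure exerted by a column of air of known mass on a 30 cm x 140 cm patch of ocean.
G = 9.81              # m/s^2
mass_air = 4.34e3     # kg (exponent inferred; gives ~1 atm, as expected at sea level)
area = 0.30 * 1.40    # m^2 = 0.42 m^2

force = mass_air * G                  # newtons
pressure_kpa = force / area / 1000    # kilopascals
print(f"F = {force:.3g} N, P = {pressure_kpa:.0f} kPa (compare 1 atm = 101.325 kPa)")
```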
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_(Morsch_et_al.)/17%3A_Alcohols_and_Phenols
In this chapter, we examine the chemistry of the alcohol family of compounds. Alcohols can undergo a wide variety of reactions, and because of this reactivity and because they can be prepared in a number of different ways, alcohols occupy an important position in organic chemistry. The discussion begins with an outline of the nomenclature of alcohols and phenols. We review the physical properties of these compounds, and discuss methods used to obtain the lower members of the series on an industrial scale. A detailed discussion of the laboratory preparation of alcohols follows, with particular emphasis on those methods that involve either the reduction of a carbonyl compound or the use of a Grignard reagent. Certain reactions of alcohols were discussed in previous chapters. In this chapter, we concentrate on the oxidation of alcohols to carbonyl compounds. We also introduce the concept of protecting a sensitive functional group during an organic synthesis. The discussion then turns to the uses of phenols, their preparation and their chemical reactivity. Infrared, nuclear magnetic resonance and mass spectroscopy each can provide valuable information about alcohols and phenols, and we illustrate the application of these techniques to the identification of unknown alcohols and phenols with a number of examples.
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/05%3A_Medicines_for_the_Future
The advances in drug development and delivery described in this booklet reflect scientists' growing knowledge about human biology. This knowledge has allowed them to develop medicines targeted to specific molecules or cells. In the future, doctors may be able to treat or prevent diseases with drugs that actually repair cells or protect them from attack. No one knows which of the techniques now being developed will yield valuable future medicines, but it is clear that thanks to pharmacology research, tomorrow's doctors will have an unprecedented array of weapons to fight disease.

Wanna be a pharmacologist? If you choose pharmacology as a career, here are some of the places you might find yourself working: Most basic biomedical research across the country is done by scientists at colleges and universities. Academic pharmacologists perform research to determine how medicines interact with living systems. They also teach pharmacology to graduate, medical, pharmacy, veterinary, dental, or undergraduate students. Pharmacologists who work in industry participate in drug development as part of a team of scientists. A key aspect of pharmaceutical industry research is making sure new medicines are effective and safe for use in people. Most clinical pharmacologists are physicians who have specialized training in the use of drugs and combinations of drugs to treat various health conditions. These scientists often work with patients and spend a lot of time trying to understand issues relating to drug dosage, including side effects and drug interactions. Pharmacologists and toxicologists play key roles in formulating drug laws and chemical regulations. Federal agencies such as the National Institutes of Health and the Food and Drug Administration hire many pharmacologists for their expertise in how drugs work. These scientists help develop policies about the safe use of medicines. You can learn more about careers in pharmacology by contacting professional organizations such as the American Society for Pharmacology and Experimental Therapeutics ( ) or the American Society for Clinical Pharmacology and Therapeutics ( ).
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Fermentation_in_Food_Chemistry/01%3A_Modules/1.15%3A_Wine
Wine is defined as the fermented juice of a fruit. Wines have been produced from all kinds of plant materials and fruits. However, the most classic version is made from grapes.

Typical Steps in Wine Production: The grape pulp has a high concentration of fermentable sugars while the skin and seeds have a lot of flavorful compounds. The grape is the fruit of the vine, (wine) and (table grapes). There are over 5000 varietals of grapes which all have different flavor and aroma profiles. A list of (and pronunciations) is available from J. Henderson, Santa Rosa Junior College. The Wine Spectator has an article by J. Laube and J. Molesworth on . In Europe, wines are usually categorized by their geographic region. In America, Australia, South Africa and New Zealand, wines are usually labelled by their varietal names. The grapes will develop different profiles of flavor chemicals depending on soil, temperature, growing practices, rain, etc. The land and climate are referred to as the 'terroir'.

As grapes ripen on the vine, they accumulate sugars through the translocation of sucrose molecules that are produced by photosynthesis in the leaves. During ripening the sucrose molecules are hydrolyzed (separated) by the enzyme invertase into glucose and fructose. By the time of harvest, between 15 and 25% of the grape will be composed of monosaccharides; the total sugar content and the types will vary by cultivar. This includes glucose, fructose, and sucrose (fermentable sugars) and a small amount of sugars like the five-carbon arabinose, rhamnose and xylose. Sugars like arabinose have little flavor to humans, and wine yeasts cannot metabolize them, so they have little impact on the wine unless wild yeasts or LAB are present.

Organic Acids in Grapes: Tartaric, Malic, and Citric Acids
Tartaric and malic acids make up over 90% of grape juice acid. Tartaric acid is rarely found in other fruits. There are some other organic acids present in small amounts including lactic, ascorbic (vitamin C), fumaric, pyruvic and more. The majority of the tartaric acid found in grapes is present as the potassium acid salt. Draw the potassium acid salt of tartaric acid. In wine tasting, the term "acidity" refers to the fresh, tart and sour attributes of the wine which are evaluated in relation to how well the acidity balances out the sweetness and bitter components of the wine such as tannins. In the mouth, tartaric acid provides most of the tartness to the flavor of the wine, although citric and malic acids also play a role. To improve the flavor, the winemaker can add tartaric, malic, citric, or lactic acid to the grape juice (must).

Polyphenols are a class of molecules characterized by the presence of large multiples of phenol structural units. This is a huge class of molecules found in many plants. Grapes have a wide variety of polyphenols, most of which are concentrated in the skin and seeds. The concentration and types of polyphenols vary between grapes based on cultivar, 'terroir' – the grape growing region (altitude, geological features, soil type, sunlight exposure), temperature during ripening, and environmental stressors such as heat, drought and light intensity. There are many sub-categories of polyphenols. Here is a simplified outline. The flavor and appearance of red wines are determined by the phenolic compounds: anthocyanins (responsible for the red color) and tannins (responsible for the sensation of astringency).
Hydroxycinnamic acids are mostly found in the grape pulp. They are often found as esters of tartaric acid or with a sugar. During the processing, these esters are hydrolyzed. Hydroxybenzoates have been identified in both grapes and wines. These structures are the basis of hydrolysable tannins (next section)! Stilbenes have two aromatic rings connected with an alkene (cis or trans). Resveratrol is one of the most common stilbenes found in grapes and wine. It is usually located in the grape skin.

Flavonoids are a class of compounds with a basic structure containing two aromatic rings bound through a three-carbon chain. Flavonoids are grouped into several classes (shown below). They can have many different substituents on the rings. Flavones, flavanones, and flavonols are mostly found in the seeds and skin. Many of these flavonoids are present in the grapes as glycosides (the sugar moiety can also vary) but are cleaved in the processing to wine. Anthocyanins are also prevalent in wines and grapes. They are usually glycosylated. They are partially responsible for the color of grapes and wines.

Tannins are polymeric forms of polyphenols. Most of the natural tannins present in grapes and wine are the ' ', often dimers and trimers of polyphenols (flavonoid or non-flavonoid). Hydrolysable tannins are also present in grapes and wine. These are usually a sugar with several polyphenols covalently bound. Complex tannins are long polymeric mixtures of these structures.

Harvesting of grapes is usually done in late summer and early fall. Harvesting for most large industrial wineries is mostly mechanical. The stems must be removed first to avoid 'off-flavors'. The grapes are crushed immediately after picking. The goal of crushing is to release the sugars, acids and some of the polyphenols from the skins. For white wines, the juice is separated from the skins so that the color and tannins are not extracted into the must. For red wines, the juice and skin are both fermented. The grape skin cell walls are composed of polysaccharides (pectins, hemicellulose and cellulose) that prevent the diffusion of polyphenols into the must. Excessive crushing can release too many polyphenols.

Too sweet
Too high alcohol
Too low alcohol
Too astringent
Too dry

During winemaking, phenolic compounds are extracted into the juice by diffusion. A diffusion period, 'maceration', can be done as a cold soak, through heating, enzymes, or a variety of techniques intended to increase polyphenol extraction. Maceration can be before, during, or after fermentation. Maceration enzymes (pectinases and cellulases) are often added during this process.

Polyphenols are susceptible to oxidation with Fe and O2 in solution or through the action of some yeast enzymes. This oxidation is called browning because the quinones are a brown, muddy color. Winemakers will usually add SO2 to correct for the oxidation processes. Sulfite also prevents ethanol oxidation. It is important to remember that sulfite has another role; it can slow or prevent growth of spoilage organisms.

This is a chart of some typical reactions that can occur to anthocyanins in the wine-making process, including oxidation and condensations with yeast byproducts during fermentation. These are just a few of the types of reactions that anthocyanins undergo during maceration and aging. Polymeric pigment formation increases progressively during maceration and aging, ultimately leading to color changes, modification of mouthfeel properties, and, sometimes, precipitation.
An important polymerization is the reaction of an anthocyanin with a flavanol (shown below). These large polymers start to precipitate and form a sediment.

Sugar content is important as it affects the alcohol level of the final wine as well as the sweetness of the wine. 'Brix' is a density measurement that represents the sugar concentration in wine. \[\text{1 degree Brix (°B) = 1% sugar by weight = 1 gram of sugar per 100 grams of solution (water and sugar combined)}\] Sucrose and/or grape juice can be added to the grape must.

A wine with low acidity will taste "flat" whereas one with too high an acid level will be unpleasantly tart. Acid content is important for flavor and is important in some of the reactions involved in polyphenolic changes. Wine-makers will add tartaric, malic, citric, or lactic acids to adjust the pH for the tartness of a wine. For most adjustments, tartaric acid is used because it dissociates best (lowers the pH more per gram).

Fermentation of the 'grape must' is an alcoholic fermentation by yeasts. Wine-makers can utilize wild fermentation or inoculation with specific yeast strains of . In spontaneous wine fermentation, the fermentation begins with non- yeasts until the ethanol concentration reaches 3–4%. As the alcohol concentration increases, these yeasts die off, and dominates the fermentation process. In inoculated ferments, is used to begin the fermentation process and its primary role is to catalyze the rapid, complete and efficient conversion of grape sugars to ethanol. A good wine will have the components of alcohol, acidity, sweetness, fruitiness and tannin structure complement each other so that no single flavor overwhelms the others. Recently, there has been a demand for a 'richer' red wine flavor; this has led winemakers to harvest grapes at a later stage to obtain more polyphenols and flavors. However, more mature grapes have increased sugar concentration. In an attempt to develop full-bodied wines with lower alcohol content, researchers have been attempting to create strains of that produce glycerol instead of ethanol. Glycerol tastes slightly sweet with a slightly 'oily' mouthfeel but it does not dramatically change the overall sensory perception of the wine.

Malic acid is described as a harsher or more aggressive acidic flavor. Wines with high levels of malic acid are submitted to malolactic fermentation (MLF). In general, winemakers use MLF to treat red wines more than whites. There are exceptions; oaked Chardonnay is often put through MLF. Malolactic fermentation is described in detail in "Cider". The bacteria behind this process can be found naturally in the winery, usually in the oak wine barrels used for aging. Alternatively, these bacteria can be introduced by the winemaker. The bacteria used in MLF are usually (homofermentative), (heterofermentative), (heterofermentative) or (either). Summarize MLF:

Wine aging has two phases: 1) 'maturation', the changes after fermentation and before bottling, and 2) the changes that occur after bottling. During the aging process, changes in taste and flavor occur. Traditional maturation involves the storage of wine in barrels for a few months to a few years (or even longer!). During this time, the wine undergoes reactions and absorbs compounds from the wood of the barrels. The polyphenolic component of the wine continues to undergo oxidations, polymerizations and condensations. The main phenolic compounds extracted from the wood into the wine during barrel ageing are hydrolysable tannins and phenolic acids.
The volatile compounds extracted from wood are mainly furfural compounds, guaiacol, oak or whisky lactone, eugenol, vanillin, and syringaldehyde. As a wine ages, phenolic molecules combine to form tannin polymers that fall to the bottom of the bottle. Unlike beer and cider, filtration is not a common process for wines, so many older wines will have sediment. Many winemakers leave the sediments in the wine bottle. Wine drinkers can 'decant' the wine before drinking – pour off the wine leaving behind the sediment.

Fining is a technique that is used to remove unwanted juice/wine components that affect flavor and aroma. Bentonite is a clay made of soft silicate minerals that will absorb positively charged proteins that cause hazing of wines (particularly white wines). Bovine serum albumin (BSA), gelatin, or casein are added to bind with excess tannins and precipitate out of the wine. Filtration is sometimes used to help control both MLF and acetic acid bacteria and other spoilage organisms, since lees are a food source for the bacteria. LAB can continue the fermentation leading to off-flavors. Membrane filtration can be helpful at this point to remove organisms.

The flavor and aroma components, including polyphenols, acids, aldehydes, esters, and fusel alcohols, are a very small percentage of the overall beverage. A dry wine has little residual sugar, so it isn't sweet. Sugars are the main source of perceived sweetness in wine, and they come in many forms. While it seems paradoxical, many people have noticed that wines with higher sugar content last longer even when open to the air. Osmotic pressure seems to play a part: high concentrations of sugar force the water within a microbe to rush outward, and its cell walls collapse.

Sweetness from Aging Processes
In 2017, scientists in Bordeaux discovered a set of molecules called quercotriterpenosides, which are released from oak during aging. These molecules are small but mighty, influencing the taste of wine at even low doses due to their extreme sweetness. Other oak flavors can evoke sweetness: guaiacol, eugenol, and vanillin. Glycerol can also provide a sweet sensation.

Aroma and flavors: Esters and alcohols
During alcoholic fermentation, many secondary metabolites are produced by yeast. Esters provide mainly fruity and floral notes; higher alcohols provide 'background flavors'; whereas the phenolic compounds can generate interesting or unpleasant aromatic notes. Esters are the main volatile compounds in cider. They are characterized by a high presence of ethyl acetate, which alone can represent up to 90% of the total esters. Esters and fusel alcohols were covered in the ' ' Section. Too many esters or fusel alcohols are considered a fault in wines.

Acetic acid is responsible for the sour taste of vinegar. During fermentation, activity by yeast cells naturally produces a small amount of acetic acid. If the wine is exposed to oxygen, Acetobacter bacteria will convert the ethanol into acetic acid, which is considered a fault. The process of 'acetification' (conversion of ethanol to acetic acid by AAB) is covered in the 'Vinegar' section.

Lactobacilli and contaminant yeasts like Brettanomyces are often present during wine-making. These organisms are often responsible for 'taints', unpleasant chemical flavors. A common taint is the production of volatile phenols, compounds derived from the naturally occurring hydroxycinnamic acids in grapes/wine.
Humans can taste volatile phenols at very low concentrations, so these compounds can have a strong influence on wine aroma. These compounds are described as medicinal, animal, leather and 'horse sweat' odors.

Bitterness taint is produced by LAB. The bacteria degrade glycerol, a compound naturally found in wine, to 3-hydroxypropionaldehyde. During aging, this is converted to acrolein, which reacts with the anthocyanins and other phenols present within the wine.

Mannitol is often described as an ester flavor with a sweet and irritating aftertaste. This was covered in the Cider section. Draw the pathways for the production of mannitol.

Diacetyl in wine is produced by lactic acid bacteria. This compound has an intense buttery flavor. This was covered in the Beer section. Draw the pathways for the production of diacetyl.

Potassium sorbate is sometimes added to wine as a preservative against yeast. However, LAB will metabolize the sorbic acid into 2-ethoxyhexa-3,5-diene, which provides a flavor reminiscent of geranium leaves.

Mousiness is a wine fault that can occur during MLF. The compounds responsible are lysine derivatives. The taints are not volatile but, when mixed with saliva in the mouth, they provide a flavor of mouse urine.

Certain species of have been found to produce dextran slime or mucilaginous substances in wine.
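Returning to the Brix scale and sugar adjustments discussed earlier in this module: a Brix reading can be turned into a rough "potential alcohol" figure using the fermentation stoichiometry (180 g glucose gives at most 92 g ethanol, about 0.51 g per gram of sugar). The sketch below does this conversion. The assumed must density, and the assumptions that all dissolved solids are fermentable sugar and that every gram ferments, make this an upper-bound estimate rather than a winemaking formula from this module.

```python
# Rough potential-alcohol estimate from a Brix reading.
#   C6H12O6 -> 2 C2H5OH + 2 CO2   (92 g ethanol per 180 g hexose, ~0.511 g/g)
ETHANOL_PER_G_SUGAR = 2 * 46.07 / 180.16   # theoretical g ethanol per g sugar
ETHANOL_DENSITY = 0.789                    # g/mL

def potential_abv(brix: float, must_density: float = 1.09) -> float:
    """Percent alcohol by volume if every gram of sugar were fermented.

    must_density (g/mL) is an assumed typical value for ripe grape juice; it
    converts percent-by-weight sugar into grams of sugar per litre."""
    sugar_g_per_L = brix / 100 * must_density * 1000
    ethanol_g_per_L = sugar_g_per_L * ETHANOL_PER_G_SUGAR
    ethanol_mL_per_L = ethanol_g_per_L / ETHANOL_DENSITY
    return ethanol_mL_per_L / 10   # mL per L -> % v/v

for brix in (18, 22, 25):
    # Real fermentations come in lower, since yeast divert sugar into biomass
    # and not all Brix solids are fermentable sugar.
    print(f"{brix} Brix -> at most roughly {potential_abv(brix):.1f} % ABV")
```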
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/06%3A_Properties_of_Gases/6.03%3A_Dalton's_Law
Make sure you thoroughly understand the following essential ideas which have been presented below. Although all gases closely follow the ideal gas law under appropriate conditions, each gas is also a unique chemical substance consisting of molecular units that have definite masses. In this lesson we will see how these molecular masses affect the properties of gases that conform to the ideal gas law. Following this, we will look at gases that contain more than one kind of molecule — in other words, mixtures of gases. We begin with a review of molar volume and the E.V.E.N. principle, which is central to our understanding of gas mixtures.

You will recall that the molar mass of a pure substance is the mass of 6.02 × 10²³ (Avogadro's number) of particles or molecular units of that substance. Molar masses are commonly expressed in units of grams per mole (g mol⁻¹) and are often referred to as molecular weights. As was explained in the preceding lesson, equal volumes of gases, measured at the same temperature and pressure, contain equal numbers of molecules (this is the E.V.E.N. principle, more formally known as Avogadro's law).

Standard temperature and pressure: 273 K, 1 atm. The magnitude of this volume will of course depend on the temperature and pressure, so as a means of convenient comparison it is customary to define a set of conditions T = 273 K and P = 1 atm as standard temperature and pressure, usually denoted as STP. Substituting these values into the ideal gas equation of state and solving for V yields a volume of 22.414 liters for 1 mole.

What would the volume of one mole of air be at 20°C on top of Mauna Kea, Hawai'i (altitude 4.2 km) where the air pressure is approximately 60 kPa? Scoria and cinder cones on Mauna Kea's summit in winter. Apply Boyle's and Charles' laws as successive correction factors to the standard sea-level pressure of 101.3 kPa: \[V = 22.4\text{ L} \times \frac{293}{273} \times \frac{101.3\text{ kPa}}{60\text{ kPa}} \approx 40.6\text{ L} \nonumber\]

The molar volume at STP (22.4 L) is a value worth memorizing, but remember that it is valid only at STP. The molar volume at other temperatures and pressures can easily be found by simple proportion. The molar volume of a substance can tell us something about how much space each molecule occupies, as the following example shows.

Estimate the average distance between the molecules in a gas at 1 atm pressure and 0°C. Consider a 1-cm³ volume of the gas, which will contain \[\dfrac{6.02 \times 10^{23} \;mol^{–1}}{22,400\; cm^3 \;mol^{–1}} = 2.69 \times 10^{19} \text{ molecules cm}^{-3} \nonumber\] The volume per molecule (not the same as the volume of a molecule, which for an ideal gas is zero!) is just the reciprocal of this, or \(3.72 \times 10^{-20}\, cm^3\). Assume that the molecules are evenly distributed so that each occupies an imaginary box having this volume. The average distance between the centers of the molecules will be defined by the length of this box, which is the cube root of the volume per molecule: \[(3.72 \times 10^{–20})^{1/3} = 3.38 \times 10^{–7}\, cm = 3.4\, nm \nonumber\]

Under conditions at which the ideal gas model is applicable (that is, almost always unless you are a chemical engineer dealing with high pressures), "a molecule is a molecule", so the volume of Avogadro's number of molecules will be independent of the composition of the gas. The reason, of course, is that the volume of the gas is mostly empty space; the volumes of the molecules themselves are negligible. The molecular weight (molar mass) of any gas is the mass, expressed in grams, of Avogadro's number of its molecules. This is true regardless of whether the gas is composed of one molecular species or is a mixture.
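As a brief aside before turning to mixtures, the molar-volume results above follow directly from V = nRT/P for one mole. The minimal sketch below reproduces both the 22.4 L STP value and the roughly 41 L answer for the Mauna Kea conditions.

```python
# Molar volume of an ideal gas, V = nRT/P, evaluated for n = 1 mol.
R = 8.314  # J mol^-1 K^-1 (equivalently Pa m^3 mol^-1 K^-1)

def molar_volume_L(T_K: float, P_kPa: float) -> float:
    """Volume of one mole of ideal gas, in litres."""
    V_m3 = R * T_K / (P_kPa * 1000)   # pressure in Pa gives volume in m^3
    return V_m3 * 1000                # m^3 -> L

print(f"STP (273 K, 101.325 kPa): {molar_volume_L(273.15, 101.325):.2f} L")
print(f"Mauna Kea (293 K, 60 kPa): {molar_volume_L(293.15, 60):.1f} L")
```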
For a mixture of gases, the molar mass will depend on the molar masses of its components, and on the fractional abundance of each kind of molecule in the mixture. The term "average molecular weight" is often used to describe the molar mass of a gas mixture. The average molar mass (\(\bar{m}\)) of a mixture of gases is just the sum of the mole fractions of each gas, multiplied by the molar mass of that substance: \[\bar{m}=\sum_i \chi_im_i\]

Find the average molar mass of dry air whose volume-composition is O2 (21%), N2 (78%) and Ar (1%). The average molecular weight is the mole-fraction-weighted sum of the molecular weights of its components. The mole fractions, of course, are the same as the volume-fractions (E.V.E.N. principle.) \[\bar{m} = (0.21 \times 32) + (0.78 \times 28) + (0.01 \times 40) = 29 \nonumber\]

The molar volumes of all gases are the same when measured at the same temperature and pressure. However, the molar masses of different gases will vary. This means that different gases will have different densities (different masses per unit volume). If we know the molecular weight of a gas, we can calculate its density.

Uranium hexafluoride UF6 gas is used in the isotopic enrichment of natural uranium. Calculate its density at STP. The molecular weight of UF6 is 352. \[\dfrac{352\; g \;mol^{–1}}{22.4\, L\, mol^{–1}} = 15.7\; g\; L^{–1} \nonumber\] There is no need to look up a "formula" for this calculation; simply combine the molar mass and molar volume in such a way as to make the units come out correctly. More importantly, if we can measure the density of an unknown gas, we have a convenient means of estimating its molecular weight. This is one of many important examples of how a macroscopic measurement (one made on bulk matter) can yield molecular information (that is, about molecular-scale objects).

Gas densities are now measured in industry by electro-mechanical devices such as vibrating reeds which can provide continuous, on-line records at specific locations, as within pipelines. Determination of the molecular weight of a gas from its density is known as the Dumas method, after the French chemist Jean Dumas (1800-1884) who developed it. One simply measures the weight of a known volume of gas and converts this volume to its STP equivalent, using Boyle's and Charles' laws. The weight of the gas divided by its STP volume yields the density of the gas, and the density multiplied by 22.4 L mol⁻¹ gives the molecular weight. Pay careful attention to the examples of gas density calculations shown here and in your textbook. You will be expected to carry out calculations of this kind, converting between molecular weight and gas density.

Calculate the approximate molar mass of a gas whose measured density is 3.33 g/L at 30°C and 780 torr. Find the volume that would be occupied by 1 L of the gas at STP; note that correcting to 273 K will reduce the volume, while correcting to 1 atm (760 torr) will increase it: \[V=(1.00 \mathrm{L})\left(\frac{273}{303}\right)\left(\frac{780}{760}\right)=0.924 \mathrm{L} \nonumber\] The number of moles of gas is \[n = \dfrac{0.924\, L}{22.4\, L\, mol^{–1}}= 0.0412\, mol \nonumber\] The molecular weight is therefore \[\dfrac{3.33\, g\, L^{–1}}{0.0412\, mol\, L^{–1}} = 80.7\, g\, mol^{–1} \nonumber\]

Gas density measurements can be a useful means of estimating the composition of a mixture of two different gases; this is widely done in industrial chemistry operations in which the compositions of gas streams must be monitored continuously.
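The Dumas-style calculation in the last example can also be done in one step, since d = PM/RT rearranges to M = dRT/P. A small sketch that reproduces the ~80.7 g/mol result before moving on to mixtures:

```python
# Molar mass of a gas from its measured density: M = d * R * T / P.
R = 0.082057  # L atm mol^-1 K^-1

def molar_mass(density_g_per_L: float, T_K: float, P_atm: float) -> float:
    """Estimate molar mass (g/mol) from density, temperature and pressure."""
    return density_g_per_L * R * T_K / P_atm

# 3.33 g/L measured at 30 deg C (303.15 K) and 780 torr
M = molar_mass(3.33, 303.15, 780 / 760)
print(f"Estimated molar mass: {M:.1f} g/mol")
```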
Find the composition of a mixture of \(\ce{CO2}\) (44 g/mol) and methane \(\ce{CH4}\) (16 g/mol) that has a STP density of 1.214 g/L. The density of a mixture of these two gases will be directly proportional to its composition, varying between that of pure methane and pure CO2. We begin by finding these two densities:

For CO2: (44 g/mol) ÷ (22.4 L/mol) = 1.964 g/L
For CH4: (16 g/mol) ÷ (22.4 L/mol) = 0.714 g/L

If x is the mole fraction of CO2 and (1–x) is the mole fraction of CH4, we can write 1.964 x + 0.714 (1–x) = 1.214. (Does this make sense? Notice that if x = 0, the density would be that of pure CH4, while if it were 1, it would be that of pure CO2.) Expanding the above equation and solving for x yields the mole fractions of 0.40 for CO2 and 0.60 for CH4.

Because most of the volume occupied by a gas consists of empty space, there is nothing to prevent two or more kinds of gases from occupying the same volume. Mixtures of this kind are generally known as solutions, but it is customary to refer to them simply as gas mixtures. We can specify the composition of gaseous mixtures in many different ways, but the most common ones are by volume fractions and by mole fractions.

From Avogadro's Law we know that "equal volumes contain equal numbers of molecules". This means that the volumes of gases, unlike those of solids and liquids, are additive. So if a partitioned container has two volumes of gas A in one section and one mole of gas B in the other (both at the same temperature and pressure), and we remove the partition, the volume remains unchanged. Volume fractions are often called partial volumes: \[V_i = \dfrac{v_i}{\sum v_i}\] Don't let this type of notation put you off! The summation sign Σ (Greek Sigma) simply means to add up the v's (volumes) of every gas. Thus if gas A is the i-th substance as in the expression immediately above, the summation runs from i = 1 through i = 2. Note that we can employ partial volumes to specify the composition of a mixture even if it had never actually been made by combining the pure gases. When we say that air, for example, is 21 percent oxygen and 78 percent nitrogen by volume, this is the same as saying that these same percentages of the molecules in air consist of O2 and N2. Similarly, in 1.0 mole of air, there is 0.21 mol of O2 and 0.78 mol of N2 (the other 0.01 mol consists of various trace gases, but is mostly argon). Note that you could never assume a similar equivalence with mixtures of liquids or solids, to which the E.V.E.N. principle does not apply.

These last two numbers (0.21 and 0.78) also express the mole fractions of oxygen and nitrogen in air. Mole fraction means exactly what it says: the fraction of the molecules that consists of a specific substance. This is expressed algebraically by \[X_i = \dfrac{n_i}{\sum_i n_i}\] so in the case of oxygen in the air, its mole fraction is \[ X_{O_2} = \dfrac{n_{O_2}}{n_{O_2}+n_{N_2}+n_{Ar}}= \dfrac{0.21}{1}=0.21 \nonumber\]

A mixture of \(O_2\) and nitrous oxide, \(N_2O\), is sometimes used as a mild anesthetic in dental surgery. A certain mixture of these gases has a density of 1.482 g L⁻¹ at 25°C and 0.980 atm. What was the mole-percent of \(N_2O\) in this mixture? First, find the density the gas would have at STP: \[\left(1.482 \mathrm{~g} \mathrm{~L}^{-1}\right) \times\left(\frac{298}{273}\right)\left(\frac{1}{0.980}\right)=1.65 \mathrm{~g} \mathrm{~L}^{-1}\nonumber \] The molar mass of the mixture is (1.65 g L⁻¹)(22.4 L mol⁻¹) = 37.0 g mol⁻¹. The molecular weights of \(O_2\) and \(N_2O\) are 32 and 44, respectively. Thus 37.0 g mol⁻¹ lies 5/12 of the way between the molar masses of the two pure gases.
Since the density of a gas mixture is directly proportional to its average molar mass, the mole fraction of the heavier gas in the mixture is also 5/12: \[\dfrac{37-32}{44-32}=\dfrac{5}{12}=0.42 \nonumber\]

What is the mole fraction of carbon dioxide in a mixture consisting of equal masses of CO2 (MW = 44) and neon (MW = 20.2)? Assume any arbitrary mass, such as 100 g, find the equivalent numbers of moles of each gas, and then substitute into the definition of mole fraction.

The ideal gas equation of state applies to mixtures just as to pure gases. It was in fact with a gas mixture, ordinary air, that Boyle, Gay-Lussac and Charles did their early experiments. The only new concept we need in order to deal with gas mixtures is the partial pressure, a concept invented by the famous English chemist John Dalton (1766-1844). Dalton reasoned that the low density and high compressibility of gases indicates that they consist mostly of empty space; from this it follows that when two or more different gases occupy the same volume, they behave entirely independently. The contribution that each component of a gaseous mixture makes to the total pressure of the gas is known as the partial pressure of that gas. Dalton himself stated this law in the simple and vivid way shown at the left. The usual way of stating Dalton's law of partial pressures is that the total pressure of a gas mixture is the sum of the partial pressures of its components, which is expressed algebraically as \[P_{total}=P_1+P_2+P_3 ... = \sum_i P_i\] or, equivalently \[ P_{total} = \dfrac{RT}{V} \sum_i n_i\]

There is also a similar relationship based on volumes, known as Amagat's law of partial volumes. It is exactly analogous to Dalton's law, in that it states that the total volume of a mixture is just the sum of the partial volumes of its components. But there are two important differences: Amagat's law holds only for ideal gases which must all be at the same temperature and pressure. Dalton's law has neither of these restrictions. Although Amagat's law seems intuitively obvious, it sometimes proves useful in chemical engineering applications. We will make no use of it in this course.

Calculate the mass of each component present in a mixture of fluorine (MW 38.0) and xenon (MW 131.3) contained in a 2.0-L flask. The partial pressure of Xe is 350 torr and the total pressure is 724 torr at 25°C. From Dalton's law, the partial pressure of F2 is (724 – 350) = 374 torr. The mole fractions are \[\chi_{Xe} = \dfrac{350}{724} = 0.48 \nonumber\] and \[\chi_{F_2} = \dfrac{374}{724} = 0.52 \nonumber\] The total number of moles of gas is \[n=\dfrac{P V}{R T}=\frac{(724 / 760)(2.0)}{(0.082)(298)}=0.078 \mathrm{~mol}\nonumber\] The mass of \(Xe\) is \[(131.3\, g\, mol^{–1}) \times (0.48 \times 0.078\, mol) = 4.9\, g \nonumber\]

Three flasks having different volumes and containing different gases at various pressures are connected by stopcocks as shown. When the stopcocks are opened, what will be the pressure of the mixture, and what fraction of the molecules will be CO2? Assume that the temperature is uniform and that the volume of the connecting tubes is negligible. The trick here is to note that the total number of moles and the temperature remain unchanged, so we can make use of Boyle's law (PV = constant). We will work out the details for CO2 only. For CO2, PV = (2.13 atm)(1.50 L) = 3.19 L-atm. Adding the PV products for each separate container, we obtain \[\sum P_iV_i = 6.36 \text{ L-atm} = n_T RT \nonumber\] After the stopcocks have been opened and the gases mix, the same total PV product applies to the combined volume of 4.50 L. Solving for the final pressure we obtain (6.36 L-atm)/(4.50 L) = 1.41 atm. For CO2, the mole fraction works out to (3.19/RT) / (6.36/RT) = 0.501.
Because this exceeds 0.5, we know that CO2 is the most abundant gas in the final mixture.

A common laboratory method of collecting the gaseous product of a chemical reaction is to conduct it into an inverted tube or bottle filled with water, the opening of which is immersed in a larger container of water. This arrangement is called a pneumatic trough, and was widely used in the early days of chemistry. As the gas enters the bottle it displaces the water and becomes trapped in the upper part. The volume of the gas can be observed by means of a calibrated scale on the bottle, but what about its pressure? The total pressure confining the gas is just that of the atmosphere transmitting its force through the water. (An exact calculation would also have to take into account the height of the water column in the inverted tube.) But liquid water itself is always in equilibrium with its vapor, so the space in the top of the tube is a mixture of two gases: the gas being collected, and gaseous H2O. The partial pressure of H2O is known as the vapor pressure of water and it depends on the temperature. In order to determine the quantity of gas we have collected, we must use Dalton's Law to find the partial pressure of that gas.

Oxygen gas was collected over water as shown above. The atmospheric pressure was 754 torr, the temperature was 22°C, and the volume of the gas was 155 mL. The vapor pressure of water at 22°C is 19.8 torr. Use this information to estimate the number of moles of \(O_2\) produced. From Dalton's law, \(P_{O_2} = P_{total} – P_{H_2O} = 754 – 19.8 = 734 \; torr = 0.966\; atm\). \[n=\frac{P V}{R T}=\frac{0.966 \mathrm{~atm} \times(0.155 \mathrm{~L})}{\left(0.082 \mathrm{~L} \mathrm{~atm} \mathrm{~mol}^{-1} \mathrm{~K}^{-1}\right)(295 \mathrm{~K})}=0.00619 \mathrm{~mol}\nonumber\]

Our respiratory systems are designed to maintain the proper oxygen concentration in the blood when the partial pressure of O2 is 0.21 atm, its normal sea-level value. Below the water surface, the pressure increases by 1 atm for each 10.3 m increase in depth; thus a scuba diver at 10.3 m experiences a total of 2 atm pressure pressing on the body. In order to prevent the lungs from collapsing, the air the diver breathes should also be at about the same pressure. But at a total pressure of 2 atm, the partial pressure of \(O_2\) in ordinary air would be 0.42 atm; at a depth of 100 ft (about 30 m), the \(O_2\) pressure of 0.8 atm would be far too high for health. For this reason, the air mixture in the pressurized tanks that scuba divers wear must contain a smaller fraction of \(O_2\). This can be achieved most simply by raising the nitrogen content, but high partial pressures of N2 can also be dangerous, resulting in a condition known as nitrogen narcosis. The preferred diluting agent for sustained deep diving is helium, which has very little tendency to dissolve in the blood even at high pressures.
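The fluorine/xenon example earlier in this lesson is a direct application of Dalton's law: mole fractions are ratios of partial pressure to total pressure, and masses follow from n = PV/RT. A minimal sketch reproducing those numbers:

```python
# Dalton's law worked example: F2/Xe mixture in a 2.0 L flask at 25 deg C.
R = 0.082057  # L atm mol^-1 K^-1

P_total_torr = 724.0
P_xe_torr = 350.0
P_f2_torr = P_total_torr - P_xe_torr                 # 374 torr by difference

n_total = (P_total_torr / 760) * 2.0 / (R * 298.15)  # PV = nRT for the whole mixture
x_xe = P_xe_torr / P_total_torr                      # mole fraction from pressure ratio
x_f2 = P_f2_torr / P_total_torr

mass_xe = 131.3 * x_xe * n_total   # g
mass_f2 = 38.00 * x_f2 * n_total   # g
print(f"n_total = {n_total:.3f} mol, x(Xe) = {x_xe:.2f}, x(F2) = {x_f2:.2f}")
print(f"mass Xe = {mass_xe:.1f} g, mass F2 = {mass_f2:.1f} g")
```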
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Fermentation_in_Food_Chemistry/01%3A_Modules/1.13%3A_Beer
Beer has been produced by humans for 6000 to 8000 years. The key ingredients are malted barley, water, hops, and yeast.

Typical Steps in Beer Production: Barley is a widely adaptable and hardy crop that can be produced in temperate and tropical areas. Barley kernels or grains are the fruit of the barley grass. The endosperm contains many starches as a food reserve for the baby plant. The starch and the embryo are surrounded by the husk, a protective layer around the kernel. While people have made beers from other grains, many people define beer as the fermented alcoholic barley drink. In fact, the German beer purity law of 1516, known as the Reinheitsgebot, allows for only hops, barley, water and yeast in the production of beer.

The goal of the first stage of beer making, malting the barley, is to access the fermentable carbohydrates. Yeasts can utilize what sugars? What enzymes are used? The barley grains are soaked, a step called steeping. This process triggers metabolism in the grain, which continues for 4-5 days. As the baby plant starts to grow, the enzymes begin to break down the starches and the cell wall. The cell wall surrounding the starch-containing endosperm is primarily made of \(\beta\)-glucan and pentosan. \(\beta\)-glucan and pentosan are structural polysaccharides that are NOT digestible by human or yeast enzymes (i.e. fiber). Explain why germination is necessary for this step of the beer making process (or any food product that uses barley).

To stop germination and enzymatic processes, the grain is heated, a step called kilning. There are many varieties of malt. These are a few of the popular styles: Roasting the malts promotes Maillard reactions. This leads to the complex flavors promoted during this stage. After kilning, the malt grain is then cleaned, transported, and stored. Most breweries purchase their malts rather than prepare them. A malt has enough enzymes (such as amylase) to convert the starch into fermentable sugars in the mashing stage.

Brewing involves multiple steps. Here is an overview. There is some important chemistry occurring in these steps. We will look at some of the enzymes, the hops, and the boiling steps in more detail. In this step, the grains are broken up in a mill. The particle size, the grist, can be determined by the spacing on the rotors. A large grist was traditionally favored because the crushed grain was used for the filtering at the end of the brewing process. Modern brewers use small grist because they use polypropylene filters.

Mashing is the brewer's term for the hot water steeping process which hydrates the barley, activates the malt enzymes, and converts the grain starches into fermentable sugars. Typically, hot water is added to help solubilize starches.

Hops are a climbing perennial vine, and the cone of the flower is used to add 'bittering' and aroma flavors to the beer wort during this phase of beer production. Typically, these cones are milled and pressed into pellets for use by the brewer. Other brewers use extracts of the cones. The main components that hops add to the beer are alpha acids and resins. These alpha acids isomerize during the boiling process to produce iso-alpha acids (see below). The iso-alpha acids contribute the bitter flavor to most beers. It was also discovered that these compounds disrupt the proton pumps used by gram-positive bacteria. During the 1700s, the British Empire controlled India by maintaining a large army there, which created a large demand for British-brewed ales to be shipped to India.
Unfortunately, many ales would spoil during the long sea journey. It was noticed that beers that were brewed with higher concentrations of hops were less likely to spoil – the beginning of the India Pale Ale (IPA) beers. 'Lightstruck' or 'skunked' beer is one in which the iso-alpha acids have undergone a photochemical reaction to form MBT (3-methyl-2-butene-1-thiol).

Tannins are astringent polyphenolic compounds. Tannic acid (an example of a tannin): The tannin compounds are widely distributed in many species of plants, where they play a role in preventing predation. The astringent flavor predominates in unripe fruit, red wine or tea.

Adding hops after the boil is called 'dry hopping'. Hop oils (essential oils) are sometimes added after boiling of the wort. These 'aroma hops' are volatile non-polar compounds that have strong aromas and flavors. There are between 400 and 1000 different compounds in hop oil, including structures such as myrcene, humulene, caryophyllene, \(\beta\)-pinene, geraniol, linalool, and farnesene.

There are several goals of boiling wort. Explain the importance of each of these steps: Liquid adjuncts (sugars/syrups) are usually added in the wort boiling stage. They may be sugars extracted from plants rich in fermentable sugars, notably sucrose from cane or beet, or corn syrup. Liquid adjuncts are frequently called " ". This is a filtering process that varies by brewer.

There are hundreds of strains of yeast. Many beer yeasts are classified as "top-fermenting" ( ) or "bottom-fermenting" ( , formerly known as ). Today, as a result of recent reclassification, both yeast types are considered to be strains of . Ale yeast strains are best used at temperatures ranging from 10 to 25°C. These yeasts rise to the surface during fermentation, creating a very thick, rich yeast head. Fermentation by ale yeasts at these relatively warmer temperatures produces a beer high in esters, regarded as a distinctive characteristic of ale beers. These yeasts are used for brewing ales, porters, stouts, Altbier, Kölsch, and wheat beers. Lager yeast strains are best used at temperatures ranging from 7 to 15°C. At these temperatures, lager yeasts grow less rapidly than ale yeasts, and with less surface foam they tend to settle out to the bottom of the fermenter as fermentation nears completion. These yeasts are used in brewing Pilsners, Dortmunders, Märzen, Bocks, and American malt liquors.

Beer that is brewed from natural/wild yeast and bacteria is called spontaneously fermented beer. One of the typical yeasts is the strain which is used to produce traditional lambic beers. This brewing method has been practiced for decades in the West Flanders region of Belgium. We will visit 3 Fonteinen Brewery in Belgium that specializes in lambic beers.

Longer chain alcohols produced by yeast during fermentation can also contribute to the aroma and flavor of beer. Primarily these alcohols can increase the warming of the mouthfeel. Fusel alcohols are derived from amino acid catabolism via a pathway that was first described by Ehrlich. Amino acids represent a major source of the assimilable nitrogen in the wort. Amino acids that are taken up by the yeast and converted to fusel alcohols by the Ehrlich pathway include valine, leucine, isoleucine, methionine and phenylalanine. The Ehrlich pathway is shown below for phenylalanine. Too much of the higher-weight fusel alcohols provides a harsh alcoholic taste (in fact, the word fusel is from the German for bad liquor).
Fusel alcohols can be produced by excessive amounts of yeast or fermentation temperatures above 80°F.

Fermentation Flavors: Ester Production
Many of these esters are derived from alcohols reacting with acetyl-CoA. Some of the esters are derived from alcohols reacting with activated thioesters from the fatty acid synthesis pathway. Usually, brewers want a balance of esters present in the final product, but not too many. While the presence of esters and fusel alcohols can enhance the flavor and aroma of beers, the presence of ketones is usually considered undesirable. The most common are the formation of diacetyl and acetoin. Diacetyl is most often described as a buttery flavor. It is desired in small quantities in many ales, but it can be unpleasant in larger quantities and in lagers; it may even take on rancid overtones. Diacetyl can be the result of the normal fermentation process or the result of a bacterial infection. Diacetyl is produced early in the fermentation cycle by the yeast and is gradually metabolized towards the end of the fermentation. Beer sometimes undergoes a "diacetyl rest", in which its temperature is raised slightly for two or three days after fermentation is complete.

Beer style is a term used to differentiate and categorize beers by various factors, including appearance, flavor, ingredients, production method, history, or origin. There is no agreed-upon method for distinguishing beer styles. There are some general categories that are used in describing beer styles: CraftBeer.com provides a nice style guide on the different names of beers with information about the yeast strains, hop aroma, IBU (International Bitterness Units), alcohol content, and carbonation for hundreds of beer styles. John Palmer also provides a nice table that places a wide range of beer styles on a chart comparing a number of ales and lagers on malty vs fruity and sweet vs bitter. Choose your favorite breweries or breweries chosen by your instructor.
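The style guides above rate beers partly by IBU, which is essentially milligrams of dissolved iso-alpha acids per litre of beer. The sketch below uses the simplest common estimate (hop mass × alpha-acid fraction × utilization ÷ volume). The utilization figure is an assumed round number; in practice it depends on boil time and wort gravity, so treat this as an illustration rather than a brewing formula from this module.

```python
# Rough IBU estimate: IBU ~ mg of iso-alpha acids per litre of finished beer.
def estimate_ibu(hop_g: float, alpha_acid_pct: float, utilization: float, batch_L: float) -> float:
    """hop_g * (alpha acid fraction) * utilization gives grams of iso-alpha acids;
    convert to milligrams and divide by batch volume in litres."""
    iso_alpha_g = hop_g * (alpha_acid_pct / 100) * utilization
    return iso_alpha_g * 1000 / batch_L

# Example: 50 g of 6% alpha-acid hops, ~25% utilization (assumed), 20 L batch
print(f"~{estimate_ibu(50, 6.0, 0.25, 20):.0f} IBU")
```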
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Polymers/Copolymers
The synthesis of macromolecules composed of more than one monomeric repeating unit has been explored as a means of controlling the properties of the resulting material. In this respect, it is useful to distinguish several ways in which different monomeric units might be incorporated in a polymeric molecule. The following examples refer to a two-component system, in which one monomer is designated A and the other B.

Most direct copolymerizations of equimolar mixtures of different monomers give statistical copolymers, or, if one monomer is much more reactive, nearly a homopolymer of that monomer. The copolymerization of styrene with methyl methacrylate, for example, proceeds differently depending on the mechanism. Radical polymerization gives a statistical copolymer. However, the product of cationic polymerization is largely polystyrene, and anionic polymerization favors formation of poly(methyl methacrylate). In cases where the relative reactivities are different, the copolymer composition can sometimes be controlled by continuous introduction of a biased mixture of monomers into the reaction. Formation of alternating copolymers is favored when the monomers have different polar substituents (e.g. one electron withdrawing and the other electron donating), and both have similar reactivities toward radicals. For example, styrene and acrylonitrile copolymerize in a largely alternating fashion.

Some commercially important copolymers (monomer A + monomer B, copolymer, uses):
H2C=CHCl + H2C=CCl2, Saran, films and fibers
H2C=CHC6H5 + H2C=CH-CH=CH2, SBR (styrene butadiene rubber), tires
H2C=CHCN + H2C=CH-CH=CH2, Nitrile rubber, adhesives and hoses
H2C=C(CH3)2 + H2C=CH-CH=CH2, Butyl rubber, inner tubes
F2C=CF(CF3) + H2C=CHF, Viton, gaskets

A terpolymer of acrylonitrile, butadiene and styrene, called ABS rubber, is used for high-impact containers, pipes and gaskets. Several different techniques for preparing block copolymers have been developed, many of which use condensation reactions (next section). At this point, our discussion will be limited to an application of anionic polymerization. In the anionic polymerization of styrene, a reactive site remains at the end of the chain until it is quenched. The unquenched polymer has been termed a living polymer, and if additional styrene or a different suitable monomer is added a block polymer will form. This is illustrated for methyl methacrylate in the following diagram.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/09%3A_Gases/9.E%3A_Gases_(Exercises)
Why are sharp knives more effective than dull knives (Hint: think about the definition of pressure)? The cutting edge of a knife that has been sharpened has a smaller surface area than a dull knife. Since pressure is force per unit area, a sharp knife will exert a higher pressure with the same amount of force and cut through material more effectively.

Why do some small bridges have weight limits that depend on how many wheels or axles the crossing vehicle has?

Why should you roll or belly-crawl rather than walk across a thinly-frozen pond? Lying down distributes your weight over a larger surface area, exerting less pressure on the ice compared to standing up. If you exert less pressure, you are less likely to break through thin ice.

A typical barometric pressure in Redding, California, is about 750 mm Hg. Calculate this pressure in atm and kPa.

A typical barometric pressure in Denver, Colorado, is 615 mm Hg. What is this pressure in atmospheres and kilopascals? 0.809 atm; 82.0 kPa

A typical barometric pressure in Kansas City is 740 torr. What is this pressure in atmospheres, in millimeters of mercury, and in kilopascals?

Canadian tire pressure gauges are marked in units of kilopascals. What reading on such a gauge corresponds to 32 psi? 2.2 × 10² kPa

During the Viking landings on Mars, the atmospheric pressure was determined to be on the average about 6.50 millibars (1 bar = 0.987 atm). What is that pressure in torr and kPa?

The pressure of the atmosphere on the surface of the planet Venus is about 88.8 atm. Compare that pressure in psi to the normal pressure on earth at sea level in psi. Earth: 14.7 lb in⁻²; Venus: 13.1 × 10² lb in⁻²

A medical laboratory catalog describes the pressure in a cylinder of a gas as 14.82 MPa. What is the pressure of this gas in atmospheres and torr?

Consider this scenario and answer the following questions: On a mid-August day in the northeastern United States, the following information appeared in the local newspaper: atmospheric pressure at sea level 29.97 in., 1013.9 mbar. (a) 101.5 kPa; (b) 51 torr drop

Why is it necessary to use a nonvolatile liquid in a barometer or manometer?

The pressure of a sample of gas is measured at sea level with a closed-end manometer. The liquid in the manometer is mercury. Determine the pressure of the gas in: (a) 264 torr; (b) 35,200 Pa; (c) 0.352 bar

The pressure of a sample of gas is measured with an open-end manometer, partially shown to the right. The liquid in the manometer is mercury. Assuming atmospheric pressure is 29.92 in. Hg, determine the pressure of the gas in:

The pressure of a sample of gas is measured at sea level with an open-end mercury manometer. Assuming atmospheric pressure is 760.0 mm Hg, determine the pressure of the gas in: (a) 623 mm Hg; (b) 0.820 atm; (c) 83.1 kPa

The pressure of a sample of gas is measured at sea level with an open-end mercury manometer. Assuming atmospheric pressure is 760 mm Hg, determine the pressure of the gas in:

How would the use of a volatile liquid affect the measurement of a gas using open-ended manometers vs. closed-end manometers? With a closed-end manometer, no change would be observed, since the vaporized liquid would contribute equal, opposing pressures in both arms of the manometer tube. However, with an open-ended manometer, a higher pressure reading of the gas would be obtained than expected, since \(P_{gas} = P_{atm} + P_{volatile\ liquid}\).

Sometimes leaving a bicycle in the sun on a hot day will cause a blowout. Why?
Explain how the volume of the bubbles exhausted by a scuba diver ( ) change as they rise to the surface, assuming that they remain intact. As the bubbles rise, the pressure decreases, so their volume increases as suggested by Boyle’s law. One way to state Boyle’s law is “All other things being equal, the pressure of a gas is inversely proportional to its volume.” (a) What is the meaning of the term “inversely proportional?” (b) What are the “other things” that must be equal? An alternate way to state Avogadro’s law is “All other things being equal, the number of molecules in a gas is directly proportional to the volume of the gas.” (a) The number of particles in the gas increases as the volume increases. (b) temperature, pressure How would the graph in Figure change if the number of moles of gas in the sample used to determine the curve were doubled? How would the graph in Figure change if the number of moles of gas in the sample used to determine the curve were doubled? The curve would be farther to the right and higher up, but the same basic shape. In addition to the data found in , what other information do we need to find the mass of the sample of air used to determine the graph? Determine the volume of 1 mol of CH gas at 150 K and 1 atm, using . 16.3 to 16.5 L Determine the pressure of the gas in the syringe shown in when its volume is 12.5 mL, using: A spray can is used until it is empty except for the propellant gas, which has a pressure of 1344 torr at 23 °C. If the can is thrown into a fire (T = 475 °C), what will be the pressure in the hot can? 3.40 × 10 torr What is the temperature of an 11.2-L sample of carbon monoxide, CO, at 744 torr if it occupies 13.3 L at 55 °C and 744 torr? we must use \(\dfrac{P_1V_1}{T_1} =\dfrac{P_2V_2}{T_2}\) and solve for \(T_1\) \(T_1 = \dfrac{P_1V_1T_2}{P_2V_2}\) Where: \(P_1 = 744\: torr\) \(V_1 = 11.2\: L\) \(P_2 = 744\: torr\) \(V_2 = 13.3\: L\) \(T_2 = 328.15°\: K\) \(\dfrac{(744\: torr)(11.2\: L)(328.15°\: K)}{(744\: torr)(13.3\: L)} = 276°\: K\) 276°K ; 3°C A 2.50-L volume of hydrogen measured at –196 °C is warmed to 100 °C. Calculate the volume of the gas at the higher temperature, assuming no change in pressure. 12.1 L A balloon inflated with three breaths of air has a volume of 1.7 L. At the same temperature and pressure, what is the volume of the balloon if five more same-sized breaths are added to the balloon? A weather balloon contains 8.80 moles of helium at a pressure of 0.992 atm and a temperature of 25 °C at ground level. What is the volume of the balloon under these conditions? 217 L How many moles of gaseous boron trifluoride, BF , are contained in a 4.3410-L bulb at 788.0 K if the pressure is 1.220 atm? How many grams of BF ? 8.190 × 10 mol; 5.553 g Iodine, I , is a solid at room temperature but sublimes (converts from a solid into a gas) when warmed. What is the temperature in a 73.3-mL bulb that contains 0.292 g of I vapor at a pressure of 0.462 atm? 1.) Use the equation \(PV =nRT\) and solve for \(T\) \(T= \dfrac{PV}{nR}\) 2.) convert grams of I to moles of I and convert mL to L \(0.292g\: \ce{I2}\times \dfrac{1\: mole\: \ce{I2}}{253.8g\: \ce{I2}} = 1.15 \times10^{-3}\: moles\: \ce{I2}\) \(73.3\:mL = 0.0733\:L\) 3.) 
Use these values along with \(R= 0.08206\: \dfrac{atm\:L}{mole\:°K}\) to solve for \(T\) \(T= \dfrac{(0.462\: \cancel{atm})(0.0733\:\cancel{L})}{(1.15\times10^{-3}\: \cancel{moles})(0.08206\: \dfrac{\cancel{atm}\:\cancel{L}}{\cancel{mole}\:°K})} = 359\: °K \) 359°K ; 86°C How many grams of gas are present in each of the following cases? (a) 7.24 × 10 g; (b) 23.1 g; (c) 1.5 × 10 g A high altitude balloon is filled with 1.41 × 10 L of hydrogen at a temperature of 21 °C and a pressure of 745 torr. What is the volume of the balloon at a height of 20 km, where the temperature is –48 °C and the pressure is 63.1 torr? A cylinder of medical oxygen has a volume of 35.4 L, and contains O at a pressure of 151 atm and a temperature of 25 °C. What volume of O does this correspond to at normal body conditions, that is, 1 atm and 37 °C? 5561 L A large scuba tank ( ) with a volume of 18 L is rated for a pressure of 220 bar. The tank is filled at 20 °C and contains enough air to supply 1860 L of air to a diver at a pressure of 2.37 atm (a depth of 45 feet). Was the tank filled to capacity at 20 °C? A 20.0-L cylinder containing 11.34 kg of butane, C H , was opened to the atmosphere. Calculate the mass of the gas remaining in the cylinder if it were opened and the gas escaped until the pressure in the cylinder was equal to the atmospheric pressure, 0.983 atm, and a temperature of 27 °C. 46.4 g While resting, the average 70-kg human male consumes 14 L of pure O per hour at 25 °C and 100 kPa. How many moles of O are consumed by a 70 kg man while resting for 1.0 h? For a given amount of gas showing ideal behavior, draw labeled graphs of: For a gas exhibiting ideal behavior: A liter of methane gas, CH , at STP contains more atoms of hydrogen than does a liter of pure hydrogen gas, H , at STP. Using Avogadro’s law as a starting point, explain why. The effect of chlorofluorocarbons (such as CCl F ) on the depletion of the ozone layer is well known. The use of substitutes, such as CH CH F( ), for the chlorofluorocarbons, has largely corrected the problem. Calculate the volume occupied by 10.0 g of each of these compounds at STP: (a) 1.85 L CCl F ; (b) 4.66 L CH CH F As 1 g of the radioactive element radium decays over 1 year, it produces 1.16 × 10 alpha particles (helium nuclei). Each alpha particle becomes an atom of helium gas. What is the pressure in pascal of the helium gas produced if it occupies a volume of 125 mL at a temperature of 25 °C? A balloon that is 100.21 L at 21 °C and 0.981 atm is released and just barely clears the top of Mount Crumpet in British Columbia. If the final volume of the balloon is 144.53 L at a temperature of 5.24 °C, what is the pressure experienced by the balloon as it clears Mount Crumpet? 0.644 atm If the temperature of a fixed amount of a gas is doubled at constant volume, what happens to the pressure? If the volume of a fixed amount of a gas is tripled at constant temperature, what happens to the pressure? The pressure decreases by a factor of 3. What is the density of laughing gas, dinitrogen monoxide, N O, at a temperature of 325 K and a pressure of 113.0 kPa? 1.) First convert kPa to atm \(113.0\:kPa\times\dfrac{1\:atm}{101.325\:kPa}=1.115\:atm\) 2.) The use the equation \(d=\dfrac{PM}{RT}\) where d = density in g L and M = molar mass in g mol \(d=\dfrac{(1.115\:atm)(44.02\dfrac{g}{\cancel{mol}})}{(0.08206\: \dfrac{\cancel{atm}\:L}{\cancel{mole}\:\cancel{°K}})(325\:\cancel{°K})}=1.84\:\dfrac{g}{L}\) Calculate the density of Freon 12, CF Cl , at 30.0 °C and 0.954 atm. 
4.64 g L Which is denser at the same temperature and pressure, dry air or air saturated with water vapor? Explain. A cylinder of O ( ) used in breathing by emphysema patients has a volume of 3.00 L at a pressure of 10.0 atm. If the temperature of the cylinder is 28.0 °C, what mass of oxygen is in the cylinder? 38.8 g What is the molar mass of a gas if 0.0494 g of the gas occupies a volume of 0.100 L at a temperature 26 °C and a pressure of 307 torr? 1.) convert torr to atm and °C to °K \(307\:torr=0.404atm\) \(26°C= 300.°K\) 2.) Use the equation \(PV=nRT\) and solve for \(n\) \(n=\dfrac{PV}{RT}\) \(n=\dfrac{(0.404\:\cancel{atm})(0.100\:\cancel{L})}{(0.08206\dfrac{\cancel{atm}\:\cancel{L}}{mol\:\cancel{°K}})(300.\cancel{°K})}=0.00165\:moles\) 3.) Then divide grams by the number of moles to obtain the molar mass: \(\dfrac{0.0494g}{0.00165\:moles}=30.0\dfrac{g}{mole}\) What is the molar mass of a gas if 0.281 g of the gas occupies a volume of 125 mL at a temperature 126 °C and a pressure of 777 torr? 72.0 g mol How could you show experimentally that the molecular formula of propene is C H , not CH ? The density of a certain gaseous fluoride of phosphorus is 3.93 g/L at STP. Calculate the molar mass of this fluoride and determine its molecular formula. 88.1 g mol ; PF Consider this question: What is the molecular formula of a compound that contains 39% C, 45% N, and 16% H if 0.157 g of the compound occupies 125 mL with a pressure of 99.5 kPa at 22 °C? A 36.0–L cylinder of a gas used for calibration of blood gas analyzers in medical laboratories contains 350 g CO , 805 g O , and 4,880 g N . At 25 degrees C, what is the pressure in the cylinder in atmospheres? 141 atm A cylinder of a gas mixture used for calibration of blood gas analyzers in medical laboratories contains 5.0% CO , 12.0% O , and the remainder N at a total pressure of 146 atm. What is the partial pressure of each component of this gas? (The percentages given indicate the percent of the total pressure that is due to each component.) A sample of gas isolated from unrefined petroleum contains 90.0% CH , 8.9% C H , and 1.1% C H at a total pressure of 307.2 kPa. What is the partial pressure of each component of this gas? (The percentages given indicate the percent of the total pressure that is due to each component.) CH : 276 kPa; C H : 27 kPa; C H : 3.4 kPa A mixture of 0.200 g of H , 1.00 g of N , and 0.820 g of Ar is stored in a closed container at STP. Find the volume of the container, assuming that the gases exhibit ideal behavior. Most mixtures of hydrogen gas with oxygen gas are explosive. However, a mixture that contains less than 3.0 % O is not. If enough O is added to a cylinder of H at 33.2 atm to bring the total pressure to 34.5 atm, is the mixture explosive? Yes A commercial mercury vapor analyzer can detect, in air, concentrations of gaseous Hg atoms (which are poisonous) as low as 2 × 10 mg/L of air. At this concentration, what is the partial pressure of gaseous mercury if the atmospheric pressure is 733 torr at 26 °C? A sample of carbon monoxide was collected over water at a total pressure of 756 torr and a temperature of 18 °C. What is the pressure of the carbon monoxide? (See for the vapor pressure of water.) 740 torr In an experiment in a general chemistry laboratory, a student collected a sample of a gas over water. The volume of the gas was 265 mL at a pressure of 753 torr and a temperature of 27 °C. The mass of the gas was 0.472 g. What was the molar mass of the gas? 
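Several of the preceding exercises reuse the same three manipulations of \(PV = nRT\): the combined gas law for a fixed amount of gas, molar mass from a measured mass together with \(P\), \(V\), and \(T\), and Dalton's law of partial pressures. The sketch below collects them; it is illustrative only, and the minor components of the petroleum-gas mixture are written as \(\ce{C2H6}\) and \(\ce{C4H10}\) as an assumption, since their subscripts were lost in this copy.

```python
R = 0.08206  # L·atm·mol⁻¹·K⁻¹

def combined_gas_law_T1(P1, V1, P2, V2, T2):
    """P1*V1/T1 = P2*V2/T2, solved for T1 (temperatures in kelvin)."""
    return P1 * V1 * T2 / (P2 * V2)

def molar_mass(mass_g, P_atm, V_L, T_K):
    """Molar mass from the ideal gas law: n = PV/RT, M = m/n."""
    return mass_g / (P_atm * V_L / (R * T_K))

# CO sample occupying 13.3 L at 55 °C and 744 torr; what T gives 11.2 L at 744 torr?
print(f"T1 = {combined_gas_law_T1(744, 11.2, 744, 13.3, 328.15):.0f} K")   # ≈ 276 K

# 0.281 g of gas in 125 mL at 126 °C and 777 torr:
print(f"M = {molar_mass(0.281, 777 / 760, 0.125, 126 + 273.15):.1f} g/mol")  # ≈ 72.0 g/mol

# Dalton's law: each partial pressure is its pressure fraction times the total.
# (C2H6 and C4H10 are assumed identities for the minor components.)
total_kPa = 307.2
for formula, fraction in [("CH4", 0.900), ("C2H6", 0.089), ("C4H10", 0.011)]:
    print(formula, round(fraction * total_kPa, 1), "kPa")
```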
Joseph Priestley first prepared pure oxygen by heating mercuric oxide, HgO: \(\ce{2HgO}(s)⟶\ce{2Hg}(l)+\ce{O2}(g)\) (a) Determine the moles of HgO that decompose; using the chemical equation, determine the moles of O produced by decomposition of this amount of HgO; and determine the volume of O from the moles of O , temperature, and pressure. (b) 0.308 L Cavendish prepared hydrogen in 1766 by the novel method of passing steam through a red-hot gun barrel: \[\ce{4H2O}(g)+\ce{3Fe}(s)⟶\ce{Fe3O4}(s)+\ce{4H2}(g)\] \[\ce{CCl2F2}(g)+\ce{4H2}(g)⟶\ce{CH2F2}(g)+\ce{2HCl}(g)\] Automobile air bags are inflated with nitrogen gas, which is formed by the decomposition of solid sodium azide (NaN ). The other product is sodium metal. Calculate the volume of nitrogen gas at 27 °C and 756 torr formed by the decomposition of 125 g of sodium azide. Lime, CaO, is produced by heating calcium carbonate, CaCO ; carbon dioxide is the other product. (a) Balance the equation. Determine the grams of CO produced and the number of moles. From the ideal gas law, determine the volume of gas. (b) 7.43 × 10 L Before small batteries were available, carbide lamps were used for bicycle lights. Acetylene gas, C H , and solid calcium hydroxide were formed by the reaction of calcium carbide, CaC , with water. The ignition of the acetylene gas provided the light. Currently, the same lamps are used by some cavers, and calcium carbide is used to produce acetylene for carbide cannons. Calculate the volume of oxygen required to burn 12.00 L of ethane gas, C H , to produce carbon dioxide and water, if the volumes of C H and O are measured under the same conditions of temperature and pressure. 42.00 L What volume of O at STP is required to oxidize 8.0 L of NO at STP to NO ? What volume of NO is produced at STP? Consider the following questions: (a) 18.0 L; (b) 0.533 atm Methanol, CH OH, is produced industrially by the following reaction: \[\ce{CO}(g)+\ce{2H2}(g)\xrightarrow{\textrm{ copper catalyst 300 °C, 300 atm }}\ce{CH3OH}(g)\] Assuming that the gases behave as ideal gases, find the ratio of the total volume of the reactants to the final volume. What volume of oxygen at 423.0 K and a pressure of 127.4 kPa is produced by the decomposition of 129.7 g of BaO to BaO and O ? 10.57 L O A 2.50-L sample of a colorless gas at STP decomposed to give 2.50 L of N and 1.25 L of O at STP. What is the colorless gas? Ethanol, C H OH, is produced industrially from ethylene, C H , by the following sequence of reactions: \[\ce{3C2H4 + 2H2SO4⟶C2H5HSO4 + (C2H5)2SO4}\] \[\ce{C2H5HSO4 + (C2H5)2SO4 + 3H2O⟶3C2H5OH + 2H2SO4}\] What volume of ethylene at STP is required to produce 1.000 metric ton (1000 kg) of ethanol if the overall yield of ethanol is 90.1%? 5.40 × 10 L One molecule of hemoglobin will combine with four molecules of oxygen. If 1.0 g of hemoglobin combines with 1.53 mL of oxygen at body temperature (37 °C) and a pressure of 743 torr, what is the molar mass of hemoglobin? A sample of a compound of xenon and fluorine was confined in a bulb with a pressure of 18 torr. Hydrogen was added to the bulb until the pressure was 72 torr. Passage of an electric spark through the mixture produced Xe and HF. After the HF was removed by reaction with solid KOH, the final pressure of xenon and unreacted hydrogen in the bulb was 36 torr. What is the empirical formula of the xenon fluoride in the original sample? (Note: Xenon fluorides contain only one xenon atom per molecule.) XeF One method of analyzing amino acids is the van Slyke method. 
The characteristic amino groups (−NH ) in protein material are allowed to react with nitrous acid, HNO , to form N gas. From the volume of the gas, the amount of amino acid can be determined. A 0.0604-g sample of a biological sample containing glycine, CH (NH )COOH, was analyzed by the van Slyke method and yielded 3.70 mL of N collected over water at a pressure of 735 torr and 29 °C. What was the percentage of glycine in the sample? A balloon filled with helium gas is found to take 6 hours to deflate to 50% of its original volume. How long will it take for an identical balloon filled with the same volume of hydrogen gas (instead of helium) to decrease its volume by 50%? 4.2 hours Explain why the numbers of molecules are not identical in the left- and right-hand bulbs shown in the center illustration of . Starting with the definition of rate of effusion and Graham’s finding relating rate and molar mass, show how to derive the Graham’s law equation, relating the relative rates of effusion for two gases to their molecular masses. Effusion can be defined as the process by which a gas escapes through a pinhole into a vacuum. Graham’s law states that with a mixture of two gases A and B: \(\mathrm{\left(\dfrac{rate\: A}{rate\: B}\right)=\left(\dfrac{molar\: mass\: of\: B}{molar\: mass\: of\: A}\right)^{1/2}}\). Both A and B are in the same container at the same temperature, and therefore will have the same kinetic energy: Heavy water, D O (molar mass = 20.03 g mol ), can be separated from ordinary water, H O (molar mass = 18.01), as a result of the difference in the relative rates of diffusion of the molecules in the gas phase. Calculate the relative rates of diffusion of H O and D O. Which of the following gases diffuse more slowly than oxygen? F , Ne, N O, C H , NO, Cl , H S F , N O, Cl , H S During the discussion of gaseous diffusion for enriching uranium, it was claimed that UF diffuses 0.4% faster than UF . Show the calculation that supports this value. The molar mass of UF = 235.043930 + 6 × 18.998403 = 349.034348 g/mol, and the molar mass of UF = 238.050788 + 6 × 18.998403 = 352.041206 g/mol. Calculate the relative rate of diffusion of H (molar mass 2.0 g/mol) compared to that of H (molar mass 4.0 g/mol) and the relative rate of diffusion of O (molar mass 32 g/mol) compared to that of O (molar mass 48 g/mol). 1.4; 1.2 A gas of unknown identity diffuses at a rate of 83.3 mL/s in a diffusion apparatus in which carbon dioxide diffuses at the rate of 102 mL/s. Calculate the molecular mass of the unknown gas. When two cotton plugs, one moistened with ammonia and the other with hydrochloric acid, are simultaneously inserted into opposite ends of a glass tube that is 87.0 cm long, a white ring of NH Cl forms where gaseous NH and gaseous HCl first come into contact. (Hint: Calculate the rates of diffusion for both NH and HCl, and find out how much faster NH diffuses than HCl.) 51.7 cm Using the postulates of the kinetic molecular theory, explain why a gas uniformly fills a container of any shape. Can the speed of a given molecule in a gas double at constant temperature? Explain your answer. Yes. At any given instant, there are a range of values of molecular speeds in a sample of gas. Any single molecule can speed up or slow down as it collides with other molecules. The average velocity of all the molecules is constant at constant temperature. 
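The effusion and diffusion problems above all reduce to Graham's law, \(\mathrm{rate_A/rate_B = (M_B/M_A)^{1/2}}\). The following sketch, offered only as an illustration, reproduces three of the comparisons:

```python
from math import sqrt

def relative_rate(molar_mass_A, molar_mass_B):
    """Graham's law: rate_A / rate_B = sqrt(M_B / M_A)."""
    return sqrt(molar_mass_B / molar_mass_A)

# H2O vs. D2O (18.01 vs. 20.03 g/mol):
print(f"rate(H2O)/rate(D2O) = {relative_rate(18.01, 20.03):.3f}")   # ≈ 1.05

# 235-UF6 vs. 238-UF6, using the molar masses quoted above:
print(f"{relative_rate(349.034348, 352.041206):.5f}")               # ≈ 1.004, i.e. ≈ 0.4% faster

# Unknown gas effusing at 83.3 mL/s while CO2 effuses at 102 mL/s:
M_unknown = 44.01 * (102 / 83.3) ** 2
print(f"M(unknown) ≈ {M_unknown:.0f} g/mol")
```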
Describe what happens to the average kinetic energy of ideal gas molecules when the conditions are changed as follows: The distribution of molecular velocities in a sample of helium is shown in . If the sample is cooled, will the distribution of velocities look more like that of H or of H O? Explain your answer. H O. Cooling slows the velocities of the He atoms, causing them to behave as though they were heavier. What is the ratio of the average kinetic energy of a SO molecule to that of an O molecule in a mixture of two gases? What is the ratio of the root mean square speeds, , of the two gases? A 1-L sample of CO initially at STP is heated to 546 °C, and its volume is increased to 2 L. (a) The number of collisions per unit area of the container wall is constant. (b) The average kinetic energy doubles. (c) The root mean square speed increases to \(\sqrt{2}\) times its initial value; is proportional to \(\mathrm{KE_{avg}}\). The root mean square speed of H molecules at 25 °C is about 1.6 km/s. What is the root mean square speed of a N molecule at 25 °C? Answer the following questions: (a) equal; (b) less than; (c) 29.48 g mol ; (d) 1.0966 g L ; (e) 0.129 g/L; (f) 4.01 × 10 g; net lifting capacity = 384 lb; (g) 270 L; (h) 39.1 kJ min
Graphs showing the behavior of several different gases follow. Which of these gases exhibit behavior significantly different from that expected for ideal gases? Gases C, E, and F Explain why the plot of for CO differs from that of an ideal gas. Under which of the following sets of conditions does a real gas behave most like an ideal gas, and for which conditions is a real gas expected to deviate from ideal behavior? Explain. (a) high pressure, small volume (b) high temperature, low pressure (c) low temperature, high pressure The gas behavior most like an ideal gas will occur under the conditions in (b). Molecules have high speeds and move through greater distances between collision; they also have shorter contact times and interactions are less likely. Deviations occur with the conditions described in (a) and (c). Under conditions of (a), some gases may liquefy. Under conditions of (c), most gases will liquefy. Describe the factors responsible for the deviation of the behavior of real gases from that of an ideal gas. For which of the following gases should the correction for the molecular volume be largest: CO, CO , H , He, NH , SF ? SF A 0.245-L flask contains 0.467 mol CO at 159 °C. Calculate the pressure: (a) using the ideal gas law (b) using the van der Waals equation (c) Explain the reason for the difference. (d) Identify which correction (that for P or V) is dominant and why. Answer the following questions: (a) If XX behaved as an ideal gas, what would its graph of Z vs. P look like? (b) For most of this chapter, we performed calculations treating gases as ideal. Was this justified? (c) What is the effect of the volume of gas molecules on Z? Under what conditions is this effect small? When is it large? Explain using an appropriate diagram. (d) What is the effect of intermolecular attractions on the value of Z? Under what conditions is this effect small? When is it large? Explain using an appropriate diagram. (e) In general, under what temperature conditions would you expect Z to have the largest deviations from the Z for an ideal gas? (a) A straight horizontal line at 1.0; (b) When real gases are at low pressures and high temperatures they behave close enough to ideal gases that they are approximated as such, however, in some cases, we see that at a high pressure and temperature, the ideal gas approximation breaks down and is significantly different from the pressure calculated by the van der Waals equation (c) The greater the compressibility, the more the volume matters. At low pressures, the correction factor for intermolecular attractions is more significant, and the effect of the volume of the gas molecules on Z would be a small lowering compressibility. At higher pressures, the effect of the volume of the gas molecules themselves on Z would increase compressibility (see ) (d) Once again, at low pressures, the effect of intermolecular attractions on Z would be more important than the correction factor for the volume of the gas molecules themselves, though perhaps still small. At higher pressures and low temperatures, the effect of intermolecular attractions would be larger. See . (e) low temperatures
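As a rough numerical illustration of the kinetic-molecular and real-gas questions above, the sketch below scales the quoted rms speed of hydrogen to nitrogen and compares the ideal-gas and van der Waals pressures for the 0.245-L flask, reading its contents as CO2 (the subscript appears to have been lost in this copy). The van der Waals constants used for CO2 are commonly tabulated values, not taken from this text.

```python
from math import sqrt

R = 0.08206   # L·atm·mol⁻¹·K⁻¹

# rms speed scales as 1/sqrt(M) at fixed temperature: u2 = u1 * sqrt(M1/M2).
u_H2 = 1.6e3                                  # m/s, the value quoted above for H2 at 25 °C
u_N2 = u_H2 * sqrt(2.016 / 28.014)
print(f"u_rms(N2, 25 °C) ≈ {u_N2:.0f} m/s")   # ≈ 4.3 × 10² m/s

# Ideal vs. van der Waals pressure for 0.467 mol CO2 in a 0.245-L flask at 159 °C.
# a and b for CO2 are assumed, commonly tabulated values (not given in this text).
n, V, T = 0.467, 0.245, 159 + 273.15
a, b = 3.59, 0.0427                           # L²·atm·mol⁻², L·mol⁻¹
P_ideal = n * R * T / V
P_vdw = n * R * T / (V - n * b) - a * n**2 / V**2
print(f"P(ideal) ≈ {P_ideal:.1f} atm, P(van der Waals) ≈ {P_vdw:.1f} atm")
```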
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/16%3A_Thermodynamics/16.2%3A_Entropy
In 1824, at the age of 28, Nicolas Léonard Sadi Carnot (Figure \(\Page {2}\)) published the results of an extensive study regarding the efficiency of steam heat engines. In a later review of Carnot’s findings, Rudolf Clausius introduced a new thermodynamic property that relates the spontaneous heat flow accompanying a process to the temperature at which the process takes place. This new property was expressed as the ratio of the reversible heat (\(q_{rev}\)) and the kelvin temperature (\(T\)). The term reversible process refers to a process that takes place at such a slow rate that it is always at equilibrium and its direction can be changed (it can be “reversed”) by an infinitesimally small change in some condition. Note that the idea of a reversible process is a formalism required to support the development of various thermodynamic concepts; no real processes are truly reversible, rather they are classified as irreversible. Similar to other thermodynamic properties, this new quantity is a state function, and so its change depends only upon the initial and final states of a system. In 1865, Clausius named this property entropy (\(S\)) and defined its change for any process as the following: \[ΔS=\dfrac{q_\ce{rev}}{T} \label{Eq1} \] The entropy change for a real, irreversible process is then equal to that for the theoretical reversible process that involves the same initial and final states. Following the work of Carnot and Clausius, Ludwig Boltzmann developed a molecular-scale statistical model that related the entropy of a system to the number of microstates possible for the system. A microstate is a specific configuration of the locations and energies of the atoms or molecules that comprise a system, and the entropy is related to the number of microstates as follows: \[S=k \ln \Omega \label{Eq2} \] Here \(k\) is the Boltzmann constant and has a value of \(1.38 \times 10^{−23}\, J/K\). As for other state functions, the change in entropy for a process is the difference between its final (\(S_\ce{f}\)) and initial (\(S_\ce{i}\)) values: \[\begin{align} ΔS &=S_\ce{f}−S_\ce{i} \nonumber \\[4pt] &=k \ln \Omega_\ce{f} − k \ln \Omega_\ce{i} \nonumber \\[4pt] &=k \ln\dfrac{\Omega_\ce{f}}{\Omega_\ce{i}} \label{Eq2a} \end{align} \] For processes involving an increase in the number of microstates of the system, \(\Omega_f > \Omega_i\), the entropy of the system increases, \(ΔS > 0\). Conversely, processes that reduce the number of microstates in the system, \(\Omega_f < \Omega_i\), yield a decrease in system entropy, \(ΔS < 0\). This molecular-scale interpretation of entropy provides a link to the probability that a process will occur, as illustrated in the next paragraphs. Consider the general case of a system comprised of \(N\) particles distributed among \(n\) boxes. The number of microstates possible for such a system is \(n^N\). For example, distributing four particles among two boxes will result in \(2^4 = 16\) different microstates, as illustrated in Figure \(\Page {2}\). Microstates with equivalent particle arrangements (not considering individual particle identities) are grouped together and are called distributions (sometimes called macrostates or configurations). The probability that a system will exist with its components in a given distribution is proportional to the number of microstates within the distribution. Since entropy increases logarithmically with the number of microstates, the most probable distribution is also the one of greatest entropy. For this system, the most probable configuration is one of the six microstates associated with distribution (c) where the particles are evenly distributed between the boxes, that is, a configuration of two particles in each box.
The probability of finding the system in this configuration is   \[\dfrac{6}{16} = \dfrac{3}{8} \nonumber \] The least probable configuration of the system is one in which all four particles are in one box, corresponding to distributions (a) and (e), each with a probability of \[\dfrac{1}{16} \nonumber \] The probability of finding all particles in only one box (either the left box or right box) is then \[\left(\dfrac{1}{16}+\dfrac{1}{16}\right)=\dfrac{2}{16} = \dfrac{1}{8} \nonumber \]   As you add more particles to the system, the number of possible microstates increases exponentially (2 ). A macroscopic (laboratory-sized) system would typically consist of moles of particles ( ~ 10 ), and the corresponding number of microstates would be staggeringly huge. Regardless of the number of particles in the system, however, the distributions in which roughly equal numbers of particles are found in each box are always the most probable configurations. The most probable distribution is therefore the one of greatest entropy. The previous description of an ideal gas expanding into a vacuum is a macroscopic example of this particle-in-a-box model. For this system, the most probable distribution is confirmed to be the one in which the matter is most uniformly dispersed or distributed between the two flasks. The spontaneous process whereby the gas contained initially in one flask expands to fill both flasks equally therefore yields an increase in entropy for the system. A similar approach may be used to describe the spontaneous flow of heat. Consider a system consisting of two objects, each containing two particles, and two units of energy (represented as “*”) in Figure \(\Page {3}\). The hot object is comprised of particles and and initially contains both energy units. The cold object is comprised of particles and , which initially has no energy units. Distribution (a) shows the three microstates possible for the initial state of the system, with both units of energy contained within the hot object. If one of the two energy units is transferred, the result is distribution (b) consisting of four microstates. If both energy units are transferred, the result is distribution (c) consisting of three microstates. And so, we may describe this system by a total of ten microstates. The probability that the heat does not flow when the two objects are brought into contact, that is, that the system remains in distribution (a), is \(\frac{3}{10}\). More likely is the flow of heat to yield one of the other two distribution, the combined probability being \(\frac{7}{10}\). The most likely result is the flow of heat to yield the uniform dispersal of energy represented by distribution (b), the probability of this configuration being \(\frac{4}{10}\). As for the previous example of matter dispersal, extrapolating this treatment to macroscopic collections of particles dramatically increases the probability of the uniform distribution relative to the other distributions. This supports the common observation that placing hot and cold objects in contact results in spontaneous heat flow that ultimately equalizes the objects’ temperatures. And, again, this spontaneous process is also characterized by an increase in system entropy. Consider the system shown here. What is the change in entropy for a process that converts the system from distribution (a) to (c)?   
We are interested in the following change: The initial number of microstates is one, the final six: \[\begin{align} ΔS &=k \ln\dfrac{\Omega_\ce{c}}{\Omega_\ce{a}} \nonumber \\[4pt] &= 1.38×10^{−23}\:J/K × \ln\dfrac{6}{1} \nonumber \\[4pt] &= 2.47×10^{−23}\:J/K \nonumber \end{align} \nonumber \] The sign of this result is consistent with expectation; since there are more microstates possible for the final state than for the initial state, the change in entropy should be positive. Consider the system shown in Figure \(\Page {3}\). What is the change in entropy for the process where the energy is transferred from the hot object ( ) to the cold object ( )? 0 J/K The relationships between entropy, microstates, and matter/energy dispersal described previously allow us to make generalizations regarding the relative entropies of substances and to predict the sign of entropy changes for chemical and physical processes. Consider the phase changes illustrated in Figure \(\Page {4}\). In the solid phase, the atoms or molecules are restricted to nearly fixed positions with respect to each other and are capable of only modest oscillations about these positions. With essentially fixed locations for the system’s component particles, the number of microstates is relatively small. In the liquid phase, the atoms or molecules are free to move over and around each other, though they remain in relatively close proximity to one another. This increased freedom of motion results in a greater variation in possible particle locations, so the number of microstates is correspondingly greater than for the solid. As a result, > and the process of converting a substance from solid to liquid (melting) is characterized by an increase in entropy, Δ > 0. By the same logic, the reciprocal process (freezing) exhibits a decrease in entropy, Δ < 0. Now consider the vapor or gas phase. The atoms or molecules occupy a greater volume than in the liquid phase; therefore each atom or molecule can be found in many more locations than in the liquid (or solid) phase. Consequently, for any substance, > > , and the processes of vaporization and sublimation likewise involve increases in entropy, Δ > 0. Likewise, the reciprocal phase transitions, condensation and deposition, involve decreases in entropy, Δ < 0. According to kinetic-molecular theory, the temperature of a substance is proportional to the average kinetic energy of its particles. Raising the temperature of a substance will result in more extensive vibrations of the particles in solids and more rapid translations of the particles in liquids and gases. At higher temperatures, the distribution of kinetic energies among the atoms or molecules of the substance is also broader (more dispersed) than at lower temperatures. Thus, the entropy for any substance increases with temperature (Figure \(\Page {5}\) ). The entropy of a substance is influenced by structure of the particles (atoms or molecules) that comprise the substance. With regard to atomic substances, heavier atoms possess greater entropy at a given temperature than lighter atoms, which is a consequence of the relation between a particle’s mass and the spacing of quantized translational energy levels (which is a topic beyond the scope of our treatment). For molecules, greater numbers of atoms (regardless of their masses) increase the ways in which the molecules can vibrate and thus the number of possible microstates and the system entropy. Finally, variations in the types of particles affects the entropy of a system. 
Compared to a pure substance, in which all particles are identical, the entropy of a mixture of two or more different particle types is greater. This is because of the additional orientations and interactions that are possible in a system comprised of nonidentical components. For example, when a solid dissolves in a liquid, the particles of the solid experience both a greater freedom of motion and additional interactions with the solvent particles. This corresponds to a more uniform dispersal of matter and energy and a greater number of microstates. The process of dissolution therefore involves an increase in entropy, Δ > 0. Considering the various factors that affect entropy allows us to make informed predictions of the sign of Δ for various chemical and physical processes as illustrated in Example . Predict the sign of the entropy change for the following processes. Indicate the reason for each of your predictions. Predict the sign of the enthalpy change for the following processes. Give a reason for your prediction. Positive; The solid dissolves to give an increase of mobile ions in solution. Negative; The liquid becomes a more ordered solid. Positive; The relatively ordered solid becomes a gas Positive; There is a net production of one mole of gas. Entropy (\( ) is a state function that can be related to the number of microstates for a system (the number of ways the system can be arranged) and to the ratio of reversible heat to kelvin temperature. It may be interpreted as a measure of the dispersal or distribution of matter and/or energy in a system, and it is often described as representing the “disorder” of the system. For a given substance, \(S_{solid} < S_{liquid} \ll S_{gas}\) in a given physical state at a given temperature, entropy is typically greater for heavier atoms or more complex molecules. Entropy increases when a system is heated and when solutions form. Using these guidelines, the sign of entropy changes for some chemical reactions may be reliably predicted.
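The microstate bookkeeping in this section is easy to verify numerically. The snippet below is a minimal sketch, not part of the original text; it counts the distributions for four particles in two boxes and reproduces the \(ΔS = k \ln(\Omega_f/\Omega_i)\) result from the example above.

```python
from math import comb, log

k_B = 1.38e-23   # J/K, Boltzmann constant

# Four particles in two boxes: the number of microstates in each distribution
# is the binomial coefficient C(4, n_left); the total is 2**4 = 16.
for n_left in range(5):
    print(n_left, "particles in the left box:", comb(4, n_left), "microstates")

# Entropy change for going from distribution (a), 1 microstate,
# to distribution (c), 6 microstates:  ΔS = k ln(Ω_f / Ω_i)
delta_S = k_B * log(6 / 1)
print(f"ΔS = {delta_S:.2e} J/K")   # ≈ 2.47e-23 J/K, as in the example
```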
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Chemistry_(Blinder)
This course is designed to introduce students to a thorough, research-oriented view of Physical Chemistry. Building on an introduction to quantum mechanics, students solve the Schrödinger equation in one, two, and three dimensions for several problems of interest in chemistry, including the particle in a box, the harmonic oscillator, the rigid rotor, and the hydrogen atom. Further topics include atomic structure, valence-bond and molecular orbital theories of chemical bonding, and group theory. The concepts of quantum theory are then applied to molecular spectroscopy and nuclear magnetic resonance.
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Qualitative_Analysis_of_Common_Cations_in_Water_(Malik)
The purpose of these experiments is to teach the chemical principles behind the experimental techniques required for the separation and identification of chemical substances. The techniques will be applied to the separation and identification of important cations in water solution as a model. The cations are: barium (\(\ce{Ba^{2+}}\)), bismuth(III) (\(\ce{Bi^{3+}}\)), calcium (\(\ce{Ca^{2+}}\)), cadmium(II) (\(\ce{Cd^{2+}}\)), chromium(III) (\(\ce{Cr^{3+}}\)), copper(II) (\(\ce{Cu^{2+}}\)), iron(II) (\(\ce{Fe^{2+}}\)), iron(III) (\(\ce{Fe^{3+}}\)), lead(II) (\(\ce{Pb^{2+}}\)), mercury(I) (\(\ce{Hg2^{2+}}\)), nickel(II) (\(\ce{Ni^{2+}}\)), potassium (\(\ce{K^{+}}\)), silver(I) (\(\ce{Ag^{+}}\)), sodium (\(\ce{Na^{+}}\)), and tin(IV) (\(\ce{Sn^{4+}}\)). The cations will be separated into five groups, and the ions within each group will then be separated and identified. The exercises involve dissolution, precipitation, acid-base, and oxidation-reduction reactions controlled through solubility variations with counter ions, Le Chatelier's principle, the common ion effect, pH control, and so on. The use of chelating agents and redox reagents will be demonstrated with practical examples. The chemical principles involved are described first, followed by a review of the basic experimental techniques used in these experiments. Then the separation of the groups of cations, followed by the separation and identification of the ions within each group, is described.
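As an illustration of how the common ion effect drives these separations, the short sketch below compares the molar solubility of \(\ce{AgCl}\) in pure water and in 0.10 M chloride. The solubility product used (\(K_{sp} \approx 1.8 \times 10^{-10}\) at 25 °C) is a commonly tabulated value and is not taken from this text.

```python
from math import sqrt

# Common-ion effect on AgCl solubility (Ksp assumed ≈ 1.8e-10 at 25 °C).
Ksp = 1.8e-10

s_pure = sqrt(Ksp)       # molar solubility in pure water: [Ag+] = [Cl-] = s
s_brine = Ksp / 0.10     # in 0.10 M Cl-, [Ag+] ≈ Ksp/[Cl-] since s << 0.10
print(f"AgCl solubility: {s_pure:.1e} M in water, {s_brine:.1e} M in 0.10 M Cl-")
```

The roughly 10,000-fold drop in solubility when excess chloride is present is what allows the chloride-insoluble group of cations to be precipitated selectively.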
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Book%3A_Introduction_to_Inorganic_Chemistry_(Wikibook)
Inorganic chemistry is the study of the synthesis, reactions, structures and properties of compounds of the elements. It encompasses the compounds - both molecular and extended solids - of essentially every element other than carbon, and overlaps with organic chemistry in the area of organometallic chemistry, in which metals are bonded to carbon-containing ligands and molecules. Inorganic chemistry is fundamental to many practical technologies including catalysis and materials (structural, electronic, magnetic etc.), energy conversion and storage, and electronics. Inorganic compounds are also found in biological systems, where they are essential to life processes.
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Enzymes/HIV
Human immunodeficiency virus (HIV) is a retrovirus, which is a class of viruses that carry genetic information in RNA.There are two types of HIV, HIV-1 and HIV-2, with HIV-1 being the most predominant, it is commonly called just HIV. Both types of HIV damage a person’s body by destroying specific blood cells, called CD4 T cells, which are crucial to helping the body fight diseases in the immune system. This can lead to immune deficiency, which is when the infection with the virus progressively deteriorates the immune system and is considered deficient when it no longer works to help fight infection and disease. According to the 2006 Morbidity and Mortality Weekly Report, published by the Center for Disease Control there were approximately 1.1 million United State Citizens are affected by HIV. There was an estimated number of 56, 300 people that newly contracted HIV. Although the annual incidence for HIV has remained constant throughout recent years, the prevalence has increased each year. These drugs developed for HIV treatment are based on the mechanism HIV uses, including proteases, to infect its host. The HIV-1 protease is synthesized from the gag and pol genes along with other proteins. Retroviruses, such as HIV-1, are able to reverse transcribe because of the reverse transcriptase which is transcribed by the pol gene. HIV-1 RNA contains many genes, specifically gag and pol, that encode for many proteins. The open reading frames of gag and pol genes overlap in HIV-1. Studies have found that the initial cleavages are made by the immature protease dimer in the membrane of the infected cell during virus budding, or replication. Once these intramolecular cleavages are made a more active gag-pol processing intermediate is released, which becomes the active protease. Further investigation would then take place of the various proposed mechanisms in attempts to synthesize new drugs that would act in a similar fashion. The first part of the mechanism begins with the substrate binding onto the protease. accents the key amino acids in HIV-1 protease that assists in substrate binding. It is predicted that a substrate first binds via a hydrogen bond to aspartic acid 30 on one chain. Once this initial bond is made, the binding is then further stabilized by bondage to the glycine rich region in the flap of the same monomer. A salt bridge is then formed from the substrate to glutamic acid 35 of the other monomer. This completes binding of the substrate to the protease. At this point, waters molecules that are found at the tips of the flaps at isoleucine 50 on each monomer dissociates from the protease. The release of the water molecule results in a structural conformation change of the protease, changing it from semi-open to closed, tightening the space between the protease and substrate. HIV protease has variable states that it exists in, such as the two states mentioned above--the semi-open and closed state. These states depend on whether a substrate is bound to it. In its unbound state, the protease’s glycine rich flaps (shown in grey in Figure 1) are in a semi open state. Figure 1 depicts the protease in a closed flap state, which occurs when a ligand is bound to it (ligand not shown). An open state is thought to be the least frequent of the three states. Once in the tightened state, aspartic acid 25 and 25’ hydrogen binds to their adjacent glycine, and then becomes supported by the following threonine. Originally, there is a water molecule bound between the aspartic acids. 
One of the aspartic acid exists in a deprotonated state and the other one is protonated. The water molecule stabilizes the aspartates in this form.When the substrate binds to the protease, it causes conformational changes that brings the substrate to the position of the water molecule, and the water molecule acts as a nucleophile to the substrate. The oxygen of the water attacks the carbonyl group of the substrate peptide bond that is by the active site as the nitrogen picks up the hydrogen of the protonated aspartic acid. What results is an hydroxl group is added to the carbonyl group as an amine is formed on other side of the peptide bond, leaving a hydrogen atom behind to stablize the two aspartates. This is proposed to occur in a concerted fashion. This mechanism is outlined in Figure 3. Figure 2. HIV-1 Protease with Accented Substrate Binding Assistant Amino Acids. Aspartic acid 30 is shown in pink; Glycine 48, 49, and 51 are shown in white; Isoleucine 50 and 50’ are shown in yellow; and glutamic acid 35’ is shown in green. (Primes distinguish amino acids from each monomer.) With a disease this prevalent, medication is key in trying to extend the afflicted’s life. As mentions above, since a mutation to the aspartic acid in the active site of HIV-1 protease renders pro-viruses that are unable to form completely and infect other sites, the protease has been one of the targets for therapy. These drugs are referred to as protease inhibitors. A current Food and Drug Administration approved drug against this HIV-1 protease is nelfinavir mesylate, 2-[2-Hydroxy-3-(3-hydroxy-2-methyl-benzoylamino)-4-phenyl sulfanyl-butyl]-decahydro-isoquinoline-3-carboxylic acid tert-butylamide; C32H45N3 O4S (“Viracept”R). Figure 3 shows the drug fitting into the protease. At the center of the drug, a hydroxy group binds with the catalytic aspartic acid (boxed in red), while the other four groups (boxed in white) stabilizes the drug to the protease, making its bond more favorable than its natural substrate. This compounds accomplishes this by making various hydrophobic interactions and hydrogen bonds. This drug has a high drug efficiency. In order to prevent 50 percent of the HIV-1 infected cells from becoming necrotic, a dosage of 14nM is required. Although it is a high potent drug, there also a few side effects that come along with it. Side effects include fever, back pain, rash sweating, vomiting, and diarrhea based on a study of 62 HIV infected children ages 3 months to 13 years. Fourteen out of the 62 had diarrhea as a side effect and less than 6% of the study group had the other side effects.20 Due to the high mutation rate of HIV-1, often, multiple drugs are combined as a treatment in attempts to retard its spread as much as possible. A commonly seen drug paired with protease inhibitors is reverse transcriptase inhibitor. Protease inhibitors prevents the protease transcribed by the gag-pol gene and reverse transcriptase inhibitors prevents the reverse transcriptase transcribed by the pol gene. This combination targets two essential proteins that have been shown to stop HIV-1’s life cycle if these genes have been mutated. By targeting both proteins, HIV-1 activity is seen to decrease more than just one. An example of a reverse transcriptase inhibitor is Efavirenz. Efavirenz, in combination with nelfinavir mesylate has shown to increase the immune cell count and decrease the seen HIV-1 molecules in the blood plasma. 
The side effects of this drug are similar to those of nelfinavir mesylate.21 The effects of these developed drugs are the main reason HIV-1-infected people can live longer than they would have in the past. Figure 3. HIV-1 Protease with nelfinavir mesylate. This is a top-down view of the protease showing how the drug fits into the protease. Light blue atoms are carbons, red atoms are oxygens, blue atoms are nitrogens, and yellow atoms are sulfurs. White boxed areas show the four main pockets the inhibitor lies in, and the red boxed area shows the binding to the catalytic aspartic acid.
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/01%3A_ABCs_of_Pharmacology
Know why some people's stomachs burn after they swallow an aspirin tablet? Or why a swig of grapefruit juice with breakfast can raise blood levels of some medicines in certain people? Understanding some of the basics of the science of pharmacology will help answer these questions, and many more, about your body and the medicines you take. So, then, what's pharmacology? Despite the field's long, rich history and importance to human health, few people know much about this biomedical science. One pharmacologist joked that when she was asked what she did for a living, her reply prompted an unexpected question: "Isn't 'farm ecology' the study of how livestock impact the environment?" Of course, this booklet isn't about livestock or agriculture. Rather, it's about a field of science that studies how the body reacts to medicines and how medicines affect the body. Pharmacology is often confused with pharmacy, a separate discipline in the health sciences that deals with preparing and dispensing medicines. For thousands of years, people have looked in nature to find chemicals to treat their symptoms. Ancient healers had little understanding of how various elixirs worked their magic, but we know much more today. Some pharmacologists study how our bodies work, while others study the chemical properties of medicines. Others investigate the physical and behavioral effects medicines have on the body. Pharmacology researchers study drugs used to treat diseases, as well as drugs of abuse. Since medicines work in so many different ways in so many different organs of the body, pharmacology research touches just about every area of biomedicine.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/14%3A_Thermochemistry/14.05%3A_Calorimetry
Make sure you thoroughly understand the following essential concept: Constant Volume Calorimetry, also know as bomb calorimetry, is used to measure the heat of a reaction while holding volume constant and resisting large amounts of pressure. Although these two aspects of bomb calorimetry make for accurate results, they also contribute to the difficulty of bomb calorimetry. In this module, the basic assembly of a bomb calorimeter will be addressed, as well as how bomb calorimetry relates to the heat of reaction and heat capacity and the calculations involved in regards to these two topics. is used to measure quantities of heat, and can be used to determine the heat of a reaction through experiments. Usually a is used since it is simpler than a bomb calorimeter, but to measure the heat evolved in a combustion reaction, constant volume or bomb calorimetry is ideal. A constant volume calorimeter is also more accurate than a coffee-cup calorimeter, but it is more difficult to use since it requires a well-built reaction container that is able to withstand large amounts of pressure changes that happen in many chemical reactions. Most serious calorimetry carried out in research laboratories involves the determination of \(\Delta H_{combustion}\), since these are essential to the determination of standard enthalpies of formation of the thousands of new compounds that are prepared and characterized each month. In a constant volume calorimeter, the system is sealed or isolated from its surroundings, which accounts for why its volume is fixed and there is no volume-pressure work done. A bomb calorimeter structure consists of the following: Since the process takes place at constant volume, the reaction vessel must be constructed to withstand the high pressure resulting from the combustion process, which amounts to a confined explosion. The vessel is usually called a “bomb”, and the technique is known as . The reaction is initiated by discharging a capacitor through a thin wire which ignites the mixture. Another consequence of the constant-volume condition is that the heat released corresponds to \(q_v\), and thus to the internal energy change \(ΔU\) rather than to \(ΔH\). The enthalpy change is calculated according to the formula \[ΔH = q_v + Δn_gRT\] where \(Δn_g\)  is the change in the number of moles of gases in the reaction. A sample of biphenyl (\(\ce{(C6H5)2}\)) weighing 0.526 g was ignited in a bomb calorimeter initially at 25°C, producing a temperature rise of 1.91 K. In a separate calibration experiment, a sample of benzoic acid (\(\ce{C6H5COOH}\)) weighing 0.825 g was ignited under identical conditions and produced a temperature rise of 1.94 K. For benzoic acid, the heat of combustion at constant volume is known to be 3,226 kJ mol (that is, Δ = –3,226 kJ mol .) Use this information to determine the standard enthalpy of combustion of biphenyl. 
Begin by working out the calorimeter constant: \[\dfrac{0.825 g}{122.1 \;g/mol} = 0.00676\; mol \nonumber\] \[(0.00676\; mol) \times (3226\; kJ/mol) = 21.80\; kJ \nonumber\] \[\dfrac{21.80\; kJ}{1.94\; K} = 11.24\; kJ/K \nonumber\] Now determine \(ΔU_{combustion}\) of the biphenyl ("BP"): \[\dfrac{0.526\; g}{154.12\; g/mol} = 0.00341 \; mol \nonumber\] \[(1.91\; K) \times (11.24\; kJ/K) = 21.46\; kJ \nonumber\] \[\dfrac{21.46\; kJ}{0.00341\; mol} = 6,293\; kJ/mol \nonumber\] \[ΔU_{combustion} (BP) = –6,293\; kJ/mol \nonumber\] This is the heat change at constant volume, \(q_v\); the negative sign indicates that the reaction is exothermic, as all combustion reactions are. From the balanced reaction equation \[\ce{(C6H5)2(s) + 29/2 O2(g) \rightarrow 12 CO2(g) + 5 H2O(l)} \nonumber\] we can calculate the change in the moles of gasses for this reaction \[Δn_g = 12 - \frac{29}{2} = \frac{-5}{2} \nonumber\] Thus the volume of the system when the reaction takes place. Converting to \(ΔH\), we can write the following equation. Additionally, recall that at constant volume, \(ΔU = q_V\). \[ \begin{align*} ΔH &= q_V + Δn_gRT \\[4pt] &= ΔU -\left( \dfrac{5}{2}\right) (8.314\; J\; mol^{-1}\; K^{-1}) (298 \;K) \\[4pt] &= (-6,293 \; kJ/mol)–(6,194\; J/mol) \\[4pt] &= (-6,293-6.2)\;kJ/mol \\[4pt] &= -6299 \; kJ/mol \end{align*} \] The amount of heat that the system gives up to its surroundings so that it can return to its initial temperature is the . The heat of reaction is just the negative of the thermal energy gained by the calorimeter and its contents (\(q_{calorimeter}\)) through the combustion reaction. \[q_{rxn} = -q_{calorimeter} \label{2A}\] where \[q_{calorimeter} = q_{bomb} + q_{water} \label{3A}\] If the constant volume calorimeter is set up the same way as before, (same steel bomb, same amount of water, etc.) then the heat capacity of the calorimeter can be measured using the following formula: \[q_{calorimeter} = \text{( heat capacity of calorimeter)} \times \Delta{T} \label{4A}\] Heat capacity is defined as the amount of heat needed to increase the temperature of the entire calorimeter by 1 °C. The equation above can also be used to calculate \(q_{rxn}\) from \(q_{calorimeter}\) calculated by Equation \ref{2A}. The heat capacity of the calorimeter can be determined by conducting an experiment. 1.150 g of sucrose goes through combustion in a bomb calorimeter. If the temperature rose from 23.42 °C to 27.64 °C and the heat capacity of the calorimeter is 4.90 kJ/°C, then determine the heat of combustion of sucrose, \(\ce{C12H22O11}\) (in kJ per mole of \(\ce{C12H22O11}\)). 
Given: Using Equation \ref{4A} to calculate \(q_{calorimeter}\): \[\begin{align*} q_{calorimeter} &= (4.90\; kJ/°C) \times (27.64 - 23.42)°C \\[4pt] &= (4.90 \times 4.22) \;kJ = 20.7\; kJ \end{align*}\] Plug into Equation \ref{2A}: \[\begin{align*} q_{rxn} &= -q_{calorimeter} \\[4pt] &= -20.7 \; kJ \; \end{align*}\] But the question asks for kJ/mol \(\ce{C12H22O11}\), so this needs to be converted: \[\begin{align*}q_{rxn} &= \dfrac{-20.7 \; kJ}{1.150 \; g \; C_{12}H_{22}O_{11}} \\[4pt] &= \dfrac{-18.0 \; kJ}{g\; C_{12}H_{22}O_{11}} \end{align*}\] Convert to per Mole \(\ce{C12H22O11}\): \[\begin{align*}q_{rxn} &= \dfrac{-18.0 \; kJ}{\cancel{g \; \ce{C12H22O11}}} \times \dfrac{342.3 \; \cancel{ g \; \ce{C12H22O11}}}{1 \; mol \; \ce{C12H22O11}} \\[4pt] &= \dfrac{-6.16 \times 10^3 \; kJ \;}{mol \; \ce{C12H22O11}} \end{align*}\] Although calorimetry is simple in principle, its practice is a highly exacting art, especially when applied to processes that take place slowly or involve very small heat changes, such as the germination of seeds. Calorimeters can be as simple as a foam plastic coffee cup, which is often used in student laboratories. Research-grade calorimeters, able to detect minute temperature changes, are more likely to occupy table tops, or even entire rooms: The is an important tool for measuring the heat capacities of liquids and solids, as well as the heats of certain reactions. This simple yet ingenious apparatus is essentially a device for measuring the change in volume due to melting of ice. To measure a heat capacity, a warm sample is placed in the inner compartment, which is surrounded by a mixture of ice and water. The heat withdrawn from the sample as it cools causes some of the ice to melt. Since ice is less dense than water, the volume of water in the insulated chamber decreases. This causes an equivalent volume of mercury to be sucked into the inner reservoir from the outside container. The loss in weight of this container gives the decrease in volume of the water, and thus the mass of ice melted. This, combined with the heat of fusion of ice, gives the quantity of heat lost by the sample as it cools to 0°C.
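Returning to the bomb-calorimeter example above, the biphenyl arithmetic can be checked with a few lines of code. This is an illustrative sketch only; it uses the masses, molar masses, and the \(ΔH = q_v + Δn_gRT\) correction exactly as in the worked example, and small differences from the quoted values reflect rounding at intermediate steps.

```python
R = 8.314e-3   # kJ·mol⁻¹·K⁻¹

# Calibration run: benzoic acid, 0.825 g, M = 122.1 g/mol, ΔU_comb = -3226 kJ/mol, ΔT = 1.94 K
C_cal = (0.825 / 122.1) * 3226 / 1.94          # calorimeter constant, ≈ 11.2 kJ/K

# Biphenyl run: 0.526 g, M = 154.12 g/mol, ΔT = 1.91 K
n_bp = 0.526 / 154.12
delta_U = -(C_cal * 1.91) / n_bp               # q_v, ≈ -6.29e3 kJ/mol

# ΔH = ΔU + Δn_g·R·T with Δn_g = 12 - 29/2 = -5/2 at 298 K:
delta_H = delta_U + (-2.5) * R * 298.15
print(f"C ≈ {C_cal:.2f} kJ/K, ΔU ≈ {delta_U:.0f} kJ/mol, ΔH ≈ {delta_H:.0f} kJ/mol")
```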
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/12%3A_Kinetics/12.3%3A_Rate_Laws
As described in the previous module, the rate of a reaction is affected by the concentrations of reactants. or are mathematical expressions that describe the relationship between the rate of a chemical reaction and the concentration of its reactants. In general, a rate law (or differential rate law, as it is sometimes called) takes this form: \[\ce{rate}=k[A]^m[B]^n[C]^p… \nonumber \] in which [ ], [ ], and [ ] represent the molar concentrations of reactants, and is the , which is specific for a particular reaction at a particular temperature. The exponents , , and are usually positive integers (although it is possible for them to be fractions or negative numbers). The rate constant and the exponents , , and must be determined experimentally by observing how the rate of a reaction changes as the concentrations of the reactants are changed. The rate constant is independent of the concentration of , , or , but it does vary with temperature and surface area. The exponents in a rate law describe the effects of the reactant concentrations on the reaction rate and define the . Consider a reaction for which the rate law is: \[\ce{rate}=k[A]^m[B]^n \nonumber \] If the exponent is 1, the reaction is first order with respect to . If is 2, the reaction is second order with respect to . If is 1, the reaction is first order in . If is 2, the reaction is second order in . If or is zero, the reaction is zero order in or , respectively, and the rate of the reaction is not affected by the concentration of that reactant. The is the sum of the orders with respect to each reactant. If = 1 and = 1, the overall order of the reaction is second order ( + = 1 + 1 = 2). The rate law: \[\ce{rate}=k[\ce{H2O2}] \nonumber \] describes a reaction that is first order in hydrogen peroxide and first order overall. The rate law: \[\ce{rate}=k[\ce{C4H6}]^2 \nonumber \] describes a reaction that is second order in C H and second order overall. The rate law: \[\ce{rate}=k[\ce{H+},\ce{OH-}] \nonumber \] describes a reaction that is first order in H , first order in OH , and second order overall. An experiment shows that the reaction of nitrogen dioxide with carbon monoxide: \[\ce{NO2}(g)+\ce{CO}(g)⟶\ce{NO}(g)+\ce{CO2}(g) \nonumber \] is second order in NO and zero order in at 100 °C. What is the rate law for the reaction? The reaction will have the form: \[\ce{rate}=k[\ce{NO2}]^m[\ce{CO}]^n \nonumber \] The reaction is second order in NO ; thus = 2. The reaction is zero order in CO; thus = 0. The rate law is: \[\ce{rate}=k[\ce{NO2}]^2[\ce{CO}]^0=k[\ce{NO2}]^2 \nonumber \] Remember that a number raised to the zero power is equal to 1, thus [CO] = 1, which is why we can simply drop the concentration of CO from the rate equation: the rate of reaction is solely dependent on the concentration of NO . When we consider rate mechanisms later in this chapter, we will explain how a reactant’s concentration can have no effect on a reaction despite being involved in the reaction. The rate law for the reaction: \[\ce{H2}(g)+\ce{2NO}(g)⟶\ce{N2O}(g)+\ce{H2O}(g) \nonumber \] has been experimentally determined to be rate = [NO] [H ]. What are the orders with respect to each reactant, and what is the overall order of the reaction? In a transesterification reaction, a triglyceride reacts with an alcohol to form an ester and glycerol. 
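A rate law is just a product of concentrations raised to their orders, multiplied by the rate constant, so the "zero order drops out" point can be made concrete with a few lines of code. This is an illustrative sketch only; the value of k and the concentrations below are made up, not experimental data.

```python
# Sketch: evaluating rate = k[NO2]^2[CO]^0 for the NO2 + CO example above.
def rate(k, conc_orders):
    """conc_orders: list of (concentration in mol/L, reaction order) pairs."""
    r = k
    for conc, order in conc_orders:
        r *= conc ** order
    return r

k = 0.054  # hypothetical rate constant, L mol^-1 s^-1
print(rate(k, [(0.10, 2), (0.30, 0)]))   # same result...
print(rate(k, [(0.10, 2), (0.90, 0)]))   # ...because the reaction is zero order in CO
```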
Many students learn about the reaction between methanol (CH OH) and ethyl acetate (CH CH OCOCH ) as a sample reaction before studying the chemical reactions that produce biodiesel: \[\ce{CH3OH + CH3CH2OCOCH3 ⟶ CH3OCOCH3 + CH3CH2OH} \nonumber \] The rate law for the reaction between methanol and ethyl acetate is, under certain conditions, experimentally determined to be: \[\ce{rate}=k[\ce{CH3OH}] \nonumber \] What is the order of reaction with respect to methanol and ethyl acetate, and what is the overall order of reaction? Ozone in the upper atmosphere is depleted when it reacts with nitrogen oxides. The rates of the reactions of nitrogen oxides with ozone are important factors in deciding how significant these reactions are in the formation of the ozone hole over Antarctica (Figure \(\Page {1}\)). One such reaction is the combination of nitric oxide, NO, with ozone, O : \[\ce{NO}(g)+\ce{O3}(g)⟶\ce{NO2}(g)+\ce{O2}(g) \nonumber \] This reaction has been studied in the laboratory, and the following rate data were determined at 25 °C. Determine the rate law and the rate constant for the reaction at 25 °C. The rate law will have the form: \[\ce{rate}=k[\ce{NO}]^m[\ce{O3}]^n \nonumber \] We can determine the values of , , and from the experimental data using the following three-part process: \[\ce{rate}=k[\ce{NO}]^1[\ce{O3}]^1=k[\ce{NO},\ce{O3}] \nonumber \] \[\begin{align*} k&=\mathrm{\dfrac{rate}{[NO,O_3]}}\\ &=\mathrm{\dfrac{6.60×10^{−5}\cancel{mol\: L^{−1}}\:s^{−1}}{(1.00×10^{−6}\cancel{mol\: L^{−1}})(3.00×10^{−6}\:mol\:L^{−1})}}\\ &=\mathrm{2.20×10^7\:L\:mol^{−1}\:s^{−1}} \end{align*} \nonumber \] The large value of tells us that this is a fast reaction that could play an important role in ozone depletion if [NO] is large enough. Acetaldehyde decomposes when heated to yield methane and carbon monoxide according to the equation: \[\ce{CH3CHO}(g)⟶\ce{CH4}(g)+\ce{CO}(g) \nonumber \] Determine the rate law and the rate constant for the reaction from the following experimental data: \(\ce{rate}=k[\ce{CH3CHO}]^2\) with = 6.73 × 10 L/mol/s Using the initial rates method and the experimental data, determine the rate law and the value of the rate constant for this reaction: \[\ce{2NO}(g)+\ce{Cl2}(g)⟶\ce{2NOCl}(g) \nonumber \] The rate law for this reaction will have the form: \[\ce{rate}=k[\ce{NO}]^m[\ce{Cl2}]^n \nonumber \] As in Example \(\Page {2}\), we can approach this problem in a stepwise fashion, determining the values of and from the experimental data and then using these values to determine the value of . 
In this example, however, we will use a different approach to determine the values of and : m We can write the ratios with the subscripts and to indicate data from two different trials: \[\dfrac{\ce{rate}_x}{\ce{rate}_y}=\dfrac{k[\ce{NO}]^m_x[\ce{Cl2}]^n_x}{k[\ce{NO}]^m_y[\ce{Cl2}]^n_y} \nonumber \] Using the third trial and the first trial, in which [Cl ] does not vary, gives: \[\mathrm{\dfrac{rate\: 3}{rate\: 1}}=\dfrac{0.00675}{0.00300}=\dfrac{k(0.15)^m(0.10)^n}{k(0.10)^m(0.10)^n} \nonumber \] After canceling equivalent terms in the numerator and denominator, we are left with: which simplifies to: \[2.25=(1.5)^m \nonumber \] We can use natural logs to determine the value of the exponent : We can confirm the result easily, since: Cancelation gives: \[\dfrac{0.0045}{0.0030}=\dfrac{(0.15)^n}{(0.10)^n} \nonumber \] which simplifies to: \[1.5=(1.5)^n \nonumber \] Thus must be 1, and the form of the rate law is: \[\ce{Rate}=k[\ce{NO}]^m[\ce{Cl2}]^n=k[\ce{NO}]^2[\ce{Cl2}] \nonumber \] To determine the value of once the rate law expression has been solved, simply plug in values from the first experimental trial and solve for : Use the provided initial rate data to derive the rate law for the reaction whose equation is: \[\ce{OCl-}(aq)+\ce{I-}(aq)⟶\ce{OI-}(aq)+\ce{Cl-}(aq) \nonumber \] Determine the rate law expression and the value of the rate constant with appropriate units for this reaction. \(\mathrm{\dfrac{rate\: 2}{rate\: 3}}=\dfrac{0.00092}{0.00046}=\dfrac{k(0.0020)^x(0.0040)^y}{k(0.0020)^x(0.0020)^y}\) 2.00 = 2.00 \(\mathrm{\dfrac{rate\: 1}{rate\: 2}}=\dfrac{0.00184}{0.00092}=\dfrac{k(0.0040)^x(0.0020)^y}{k(0.0020)^x(0.0040)^y}\) \(\begin{align*} 2.00&=\dfrac{2^x}{2^y}\\ 2.00&=\dfrac{2^x}{2^1}\\ 4.00&=2^x\\ x&=2 \end{align*}\) Substituting the concentration data from trial 1 and solving for yields: \(\begin{align*} \ce{rate}&=k[\ce{OCl-}]^2[\ce{I-}]^1\\ 0.00184&=k(0.0040)^2(0.0020)^1\\ k&=\mathrm{5.75×10^4\:mol^{−2}\:L^2\:s^{−1}} \end{align*}\)   In some of our examples, the reaction orders in the rate law happen to be the same as the coefficients in the chemical equation for the reaction. This is merely a coincidence and very often not the case. Rate laws may exhibit fractional orders for some reactants, and negative reaction orders are sometimes observed when an increase in the concentration of one reactant causes a decrease in reaction rate. A few examples illustrating these points are provided: \(\ce{NO2 + CO⟶NO + CO2}\hspace{20px}\ce{rate}=k[\ce{NO2}]^2\\ \ce{CH3CHO⟶CH4 + CO}\hspace{20px}\ce{rate}=k[\ce{CH3CHO}]^2\\ \ce{2N2O5⟶2NO2 + O2}\hspace{20px}\ce{rate}=k[\ce{N2O5}]\\ \ce{2NO2 + F2⟶2NO2F}\hspace{20px}\ce{rate}=k[\ce{NO2},\ce{F2}]\\ \ce{2NO2Cl⟶2NO2 + Cl2}\hspace{20px}\ce{rate}=k[\ce{NO2Cl}]\) It is important to note that Reaction orders also play a role in determining the units for the rate constant . In Example \(\Page {2}\), a second-order reaction, we found the units for to be \(\mathrm{L\:mol^{-1}\:s^{-1}}\), whereas in Example \(\Page {3}\), a third order reaction, we found the units for to be mol L /s. More generally speaking, the units for the rate constant for a reaction of order \( (m+n)\) are \(\ce{mol}^{1−(m+n)}\ce L^{(m+n)−1}\ce s^{−1}\). Table \(\Page {1}\) summarizes the rate constant units for common reaction orders. Note that the units in the table can also be expressed in terms of molarity ( ) instead of mol/L. Also, units of time other than the second (such as minutes, hours, days) may be used, depending on the situation. 
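The ratio-and-logarithm bookkeeping of the initial-rates method can also be automated. The sketch below uses the three NO/Cl₂ trials quoted in the worked example (rate 1 = 0.00300 at [NO] = 0.10, [Cl₂] = 0.10; rate 2 = 0.00450 at [NO] = 0.10, [Cl₂] = 0.15; rate 3 = 0.00675 at [NO] = 0.15, [Cl₂] = 0.10); the code structure itself is my own illustration.

```python
# Sketch: initial-rates method for 2NO + Cl2 -> 2NOCl using the trials quoted above.
from math import log

# each trial: ([NO], [Cl2], initial rate) in mol/L and mol L^-1 s^-1
trials = [(0.10, 0.10, 0.00300),
          (0.10, 0.15, 0.00450),
          (0.15, 0.10, 0.00675)]

# order in NO: compare trials 1 and 3, where [Cl2] is held constant
m = log(trials[2][2] / trials[0][2]) / log(trials[2][0] / trials[0][0])
# order in Cl2: compare trials 1 and 2, where [NO] is held constant
n = log(trials[1][2] / trials[0][2]) / log(trials[1][1] / trials[0][1])

# rate constant from any single trial once the orders are known
NO, Cl2, r = trials[0]
k = r / (NO**m * Cl2**n)
print(round(m), round(n), k)   # 2, 1, 3.0 L^2 mol^-2 s^-1
```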
Rate laws provide a mathematical description of how changes in the amount of a substance affect the rate of a chemical reaction. Rate laws are determined experimentally and cannot be predicted by reaction stoichiometry. The order of reaction describes how much a change in the amount of each substance affects the overall rate, and the overall order of a reaction is the sum of the orders for each substance present in the reaction. Reaction orders are typically first order, second order, or zero order, but fractional and even negative orders are possible.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/Appendices/Essential_Mathematics
Exponential notation is used to express very large and very small numbers as a product of two numbers. The first number of the product, the , is usually a number not less than 1 and not greater than 10. The second number of the product, the , is written as 10 with an exponent. Some examples of exponential notation are: \[\begin{align*} 1000&=1×10^3\\ 100&=1×10^2\\ 10&=1×10^1\\ 1&=1×10^0\\ 0.1&=1×10^{−1}\\ 0.001&=1×10^{−3}\\ 2386&=2.386×1000=2.386×10^3\\ 0.123&=1.23×0.1=1.23×10^{−1} \end{align*} \nonumber \] The power (exponent) of 10 is equal to the number of places the decimal is shifted to give the digit number. The exponential method is particularly useful notation for every large and very small numbers. For example, 1,230,000,000 = 1.23 × 10 , and 0.00000000036 = 3.6 × 10 . Convert all numbers to the same power of 10, add the digit terms of the numbers, and if appropriate, convert the digit term back to a number between 1 and 10 by adjusting the exponential term. Add 5.00 × 10 and 3.00 × 10 . \[\begin{align*} 3.00×10^{−3}&=300×10^{−5}\\ (5.00×10^{−5})+(300×10^{−5})&=305×10^{−5}=3.05×10^{−3} \end{align*} \nonumber \] Convert all numbers to the same power of 10, take the difference of the digit terms, and if appropriate, convert the digit term back to a number between 1 and 10 by adjusting the exponential term. Subtract 4.0 × 10 from 5.0 × 10 . \[4.0×10^{−7}=0.40×10^{−6}\\ (5.0×10^{−6})−(0.40×10^{−6})=4.6×10^{−6} \nonumber \] Multiply the digit terms in the usual way and add the exponents of the exponential terms. Multiply 4.2 × 10 by 2.0 × 10 . \[(4.2×10^{−8})×(2.0×10^3)=(4.2×2.0)×10^{(−8)+(+3)}=8.4×10^{−5} \nonumber \] Divide the digit term of the numerator by the digit term of the denominator and subtract the exponents of the exponential terms. Divide 3.6 × 10 by 6.0 × 10 . \[\dfrac{3.6×10^{−5}}{6.0×10^{−4}}=\left(\dfrac{3.6}{6.0}\right)×10^{(−5)−(−4)}=0.60×10^{−1}=6.0×10^{−2} \nonumber \] Square the digit term in the usual way and multiply the exponent of the exponential term by 2. Square the number 4.0 × 10 . \[(4.0×10^{−6})^2=4×4×10^{2×(−6)}=16×10^{−12}=1.6×10^{−11} \nonumber \] Cube the digit term in the usual way and multiply the exponent of the exponential term by 3. Cube the number 2 × 10 . \[(2×10^4)^3=2×2×2×10^{3×4}=8×10^{12} \nonumber \] If necessary, decrease or increase the exponential term so that the power of 10 is evenly divisible by 2. Extract the square root of the digit term and divide the exponential term by 2. Find the square root of 1.6 × 10 . \[\begin{align*} 1.6×10^{−7}&=16×10^{−8}\\ \sqrt{16×10^{−8}}=\sqrt{16}×\sqrt{10^{−8}}&=\sqrt{16}×10^{−\large{\frac{8}{2}}}=4.0×10^{−4} \end{align*} \nonumber \] A beekeeper reports that he has 525,341 bees. The last three figures of the number are obviously inaccurate, for during the time the keeper was counting the bees, some of them died and others hatched; this makes it quite difficult to determine the exact number of bees. It would have been more accurate if the beekeeper had reported the number 525,000. In other words, the last three figures are not significant, except to set the position of the decimal point. Their exact values have no meaning useful in this situation. In reporting any information as numbers, use only as many significant figures as the accuracy of the measurement warrants. The importance of significant figures lies in their application to fundamental computation. 
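Scientific notation maps directly onto Python's e-notation, which makes it a handy way to check the exponent rules above. The sketch below reuses the numbers from the worked examples, with the exponents that the printed solutions imply; only the code structure is mine.

```python
# Sketch: the exponential-notation examples above, checked numerically.
# Python's e-notation (5.00e-5 means 5.00 × 10^-5) corresponds to scientific notation.
examples = {
    "addition":       5.00e-5 + 3.00e-3,   # 3.05 × 10^-3
    "subtraction":    5.0e-6 - 4.0e-7,     # 4.6 × 10^-6
    "multiplication": 4.2e-8 * 2.0e3,      # 8.4 × 10^-5
    "division":       3.6e-5 / 6.0e-4,     # 6.0 × 10^-2
    "squaring":       (4.0e-6) ** 2,       # 1.6 × 10^-11
    "cubing":         (2e4) ** 3,          # 8 × 10^12
    "square root":    (1.6e-7) ** 0.5,     # 4.0 × 10^-4
}
for name, value in examples.items():
    print(f"{name:15s} {value:.2e}")
```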
In addition and subtraction, the sum or difference should contain as many digits to the right of the decimal as that in the least certain of the numbers used in the computation (indicated by underscoring in the following example). Add 4.383 g and 0.0023 g. \[\begin{align*} &\mathrm{4.38\underline{3}\:g}\\ &\mathrm{\underline{0.002\underline{3}\:g}}\\ &\mathrm{4.38\underline{5}\:g} \end{align*} \nonumber \] In multiplication and division, the product or quotient should contain no more digits than that in the factor containing the least number of significant figures. Multiply 0.6238 by 6.6. \[0.623\underline{8}×6.\underline{6}=4.\underline{1} \nonumber \] When rounding numbers, increase the retained digit by 1 if it is followed by a number larger than 5 (“round up”). Do not change the retained digit if the digits that follow are less than 5 (“round down”). If the retained digit is followed by 5, round up if the retained digit is odd, or round down if it is even (after rounding, the retained digit will thus always be even). The common logarithm of a number (log) is the power to which 10 must be raised to equal that number. For example, the common logarithm of 100 is 2, because 10 must be raised to the second power to equal 100. Additional examples follow. What is the common logarithm of 60? Because 60 lies between 10 and 100, which have logarithms of 1 and 2, respectively, the logarithm of 60 is 1.7782; that is, \[60=10^{1.7782} \nonumber \] The common logarithm of a number less than 1 has a negative value. The logarithm of 0.03918 is −1.4069, or \[0.03918=10^{-1.4069}=\dfrac{1}{10^{1.4069}} \nonumber \] To obtain the common logarithm of a number, use the button on your calculator. To calculate a number from its logarithm, take the inverse log of the logarithm, or calculate 10 (where is the logarithm of the number). The natural logarithm of a number (ln) is the power to which must be raised to equal the number; is the constant 2.7182818. For example, the natural logarithm of 10 is 2.303; that is, \[10=e^{2.303}=2.7182818^{2.303} \nonumber \] To obtain the natural logarithm of a number, use the button on your calculator. To calculate a number from its natural logarithm, enter the natural logarithm and take the inverse ln of the natural logarithm, or calculate (where is the natural logarithm of the number). Logarithms are exponents; thus, operations involving logarithms follow the same rules as operations involving exponents. Mathematical functions of this form are known as second-order polynomials or, more commonly, quadratic functions. \[ax^2+bx+c=0 \nonumber \] The solution or roots for any quadratic equation can be calculated using the following formula: \[x=\dfrac{-b±\sqrt{b^2−4ac}}{2a} \nonumber \] Solvi Solve the quadratic equation 3 + 13 − 10 = 0. Substituting the values = 3, = 13, = −10 in the formula, we obtain \[x=\dfrac{−13±\sqrt{(13)^2−4×3×(−10)}}{2×3} \nonumber \] \[x=\dfrac{−13±\sqrt{169+120}}{6}=\dfrac{−13±\sqrt{289}}{6}=\dfrac{−13±17}{6} \nonumber \] The two roots are therefore \[x=\dfrac{−13+17}{6}=\dfrac{2}{3}\textrm{ and }x=\dfrac{−13−17}{6}=−5 \nonumber \] Quadratic equations constructed on physical data always have real roots, and of these real roots, often only those having positive values are of any significance. The relationship between any two properties of a system can be represented graphically by a two-dimensional data plot. 
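The logarithm rules and the quadratic formula translate directly into a few library calls; the sketch below reproduces the log of 60 and the roots of 3x² + 13x − 10 = 0 from the examples above (function name is mine, and the helper assumes real roots, as the text notes is the case for physically meaningful data).

```python
# Sketch: common/natural logs and the quadratic-formula example above.
import math

print(math.log10(60))      # 1.7782 (common log)
print(10 ** 1.7782)        # ~60, recovering the number from its logarithm
print(math.log(10))        # 2.303 (natural log)

def quadratic_roots(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula."""
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(3, 13, -10))   # (0.666..., -5.0)
```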
Such a graph has two axes: a horizontal one corresponding to the independent variable, or the variable whose value is being controlled (x), and a vertical axis corresponding to the dependent variable, or the variable whose value is being observed or measured (y). When the value of y is changing as a function of x (that is, different values of x correspond to different values of y), a graph of this change can be plotted or sketched. The graph can be produced by using specific values for (x, y) data pairs. For example, a table of data pairs might contain the following points: (1,5), (2,10), (3,7), and (4,14). Each of these points can be plotted on a graph and connected to produce a graphical representation of the dependence of y on x. If the function that describes the dependence of y on x is known, it may be used to compute (x, y) data pairs that may subsequently be plotted. If we know that y = x + 2, we can produce a table of a few (x, y) values and then plot the line based on the data shown here.
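For readers who want to reproduce such a plot, here is a minimal sketch using matplotlib (the choice of plotting library is an assumption; any graphing tool works) with the data pairs listed above and the line y = x + 2.

```python
# Sketch: plotting the tabulated (x, y) pairs above and the function y = x + 2.
import matplotlib.pyplot as plt

x_data, y_data = [1, 2, 3, 4], [5, 10, 7, 14]
plt.plot(x_data, y_data, "o-", label="tabulated (x, y) pairs")

xs = range(0, 6)
plt.plot(xs, [x + 2 for x in xs], "--", label="y = x + 2")
plt.xlabel("x (independent variable)")
plt.ylabel("y (dependent variable)")
plt.legend()
plt.show()
```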
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/12%3A_Kinetics/12.7%3A_Catalysis
We have seen that the rate of many reactions can be accelerated by catalysts. A catalyst speeds up the rate of a reaction by lowering the activation energy; in addition, the catalyst is regenerated in the process. Several reactions that are thermodynamically favorable in the absence of a catalyst only occur at a reasonable rate when a catalyst is present. One such reaction is catalytic hydrogenation, the process by which hydrogen is added across an alkene C=C bond to afford the saturated alkane product. A comparison of the reaction coordinate diagrams (also known as energy diagrams) for catalyzed and uncatalyzed alkene hydrogenation is shown in Figure \(\Page {1}\). Catalysts function by providing an alternate reaction mechanism that has a lower activation energy than would be found in the absence of the catalyst. In some cases, the catalyzed mechanism may include additional steps, as depicted in the reaction diagrams shown in Figure \(\Page {2}\) This lower activation energy results in an increase in rate as described by the Arrhenius equation. Note that a catalyst decreases the activation energy for both the forward and the reverse reactions and hence . Consequently, the presence of a catalyst will permit a system to reach equilibrium more quickly, but it has no effect on the position of the equilibrium as reflected in the value of its equilibrium constant (see the later chapter on chemical equilibrium). The two reaction diagrams here represent the same reaction: one without a catalyst and one with a catalyst. Identify which diagram suggests the presence of a catalyst, and determine the activation energy for the catalyzed reaction:   A catalyst does not affect the energy of reactant or product, so those aspects of the diagrams can be ignored; they are, as we would expect, identical in that respect. There is, however, a noticeable difference in the transition state, which is distinctly lower in diagram (b) than it is in (a). This indicates the use of a catalyst in diagram (b). The activation energy is the difference between the energy of the starting reagents and the transition state—a maximum on the reaction coordinate diagram. The reagents are at 6 kJ and the transition state is at 20 kJ, so the activation energy can be calculated as follows: \[E_\ce{a}=\mathrm{20\:kJ−6\:kJ=14\:kJ} \label{12.8.1} \] Determine which of the two diagrams here (both for the same reaction) involves a catalyst, and identify the activation energy for the catalyzed reaction:   Diagram (b) is a catalyzed reaction with an activation energy of about 70 kJ. A is present in the same phase as the reactants. It interacts with a reactant to form an intermediate substance, which then decomposes or reacts with another reactant in one or more steps to regenerate the original catalyst and form product. As an important illustration of homogeneous catalysis, consider the earth’s ozone layer. Ozone in the upper atmosphere, which protects the earth from ultraviolet radiation, is formed when oxygen molecules absorb ultraviolet light and undergo the reaction: \[\ce{3O2}(g)\xrightarrow{hv}\ce{2O3}(g) \label{12.8.2} \] Ozone is a relatively unstable molecule that decomposes to yield diatomic oxygen by the reverse of this equation. This decomposition reaction is consistent with the following mechanism: \[\ce{O3 ⟶ O2 + O\\ O + O3 ⟶ 2O2} \label{12.8.3} \] The presence of nitric oxide, , influences the rate of decomposition of ozone. 
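The statement that a lower activation energy "results in an increase in rate as described by the Arrhenius equation" can be quantified. The sketch below compares rate constants for two barrier heights using k = A·exp(−Ea/RT); the two Ea values are hypothetical and chosen only to illustrate the size of the effect, not taken from the diagrams in this section.

```python
# Sketch: rate enhancement from a lowered activation energy, via the Arrhenius equation.
import math

R = 8.314      # J mol^-1 K^-1
T = 298.15     # K (25 °C)

def rate_enhancement(Ea_uncatalyzed_J, Ea_catalyzed_J, T=T):
    """Ratio k_cat / k_uncat, assuming the same pre-exponential factor A."""
    return math.exp((Ea_uncatalyzed_J - Ea_catalyzed_J) / (R * T))

# e.g. a hypothetical catalyst that lowers Ea from 75 kJ/mol to 50 kJ/mol
print(f"{rate_enhancement(75e3, 50e3):.2e}")   # ~2.4e4-fold faster at 25 °C
```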
Nitric oxide acts as a catalyst in the following mechanism: \[\ce{NO}(g)+\ce{O3}(g)⟶\ce{NO2}(g)+\ce{O2}(g)\\ \ce{O3}(g)⟶\ce{O2}(g)+\ce{O}(g)\\ \ce{NO2}(g)+\ce{O}(g)⟶\ce{NO}(g)+\ce{O2}(g) \label{12.8.4} \] The overall chemical change for the catalyzed mechanism is the same as: \[\ce{2O3}(g)⟶\ce{3O2}(g) \label{12.8.5} \] The nitric oxide reacts and is regenerated in these reactions. It is not permanently used up; thus, it acts as a catalyst. The rate of decomposition of ozone is greater in the presence of nitric oxide because of the catalytic activity of NO. Certain compounds that contain chlorine also catalyze the decomposition of ozone. The 1995 Nobel Prize in Chemistry was shared by Paul J. Crutzen, Mario J. (Figure \(\Page {3}\)), and F. Sherwood Rowland “for their work in atmospheric chemistry, particularly concerning the formation and decomposition of ozone." Molina, a Mexican citizen, carried out the majority of his work at the Massachusetts Institute of Technology (MIT). In 1974, Molina and Rowland published a paper in the journal (one of the major peer-reviewed publications in the field of science) detailing the threat of chlorofluorocarbon gases to the stability of the ozone layer in earth’s upper atmosphere. The ozone layer protects earth from solar radiation by absorbing ultraviolet light. As chemical reactions deplete the amount of ozone in the upper atmosphere, a measurable “hole” forms above Antarctica, and an increase in the amount of solar ultraviolet radiation— strongly linked to the prevalence of skin cancers—reaches earth’s surface. The work of Molina and Rowland was instrumental in the adoption of the Montreal Protocol, an international treaty signed in 1987 that successfully began phasing out production of chemicals linked to ozone destruction. Molina and Rowland demonstrated that chlorine atoms from human-made chemicals can catalyze ozone destruction in a process similar to that by which NO accelerates the depletion of ozone. Chlorine atoms are generated when chlorocarbons or chlorofluorocarbons—once widely used as refrigerants and propellants—are photochemically decomposed by ultraviolet light or react with hydroxyl radicals. A sample mechanism is shown here using methyl chloride: \[\ce{CH3Cl + ⟶ Cl + other\: products} \nonumber \] Chlorine radicals break down ozone and are regenerated by the following catalytic cycle: \[\ce{Cl + O3 ⟶ ClO + O2}\\ \ce{ClO + O ⟶ Cl + O2}\\ \textrm{overall Reaction: }\ce{O3 + O ⟶ 2O2} \nonumber \] A single monatomic chlorine can break down thousands of ozone molecules. Luckily, the majority of atmospheric chlorine exists as the catalytically inactive forms Cl and ClONO . Enzymes in the human body act as catalysts for important chemical reactions in cellular metabolism. As such, a deficiency of a particular enzyme can translate to a life-threatening disease. G6PD (glucose-6-phosphate dehydrogenase) deficiency, a genetic condition that results in a shortage of the enzyme glucose-6-phosphate dehydrogenase, is the most common enzyme deficiency in humans. This enzyme, shown in Figure \(\Page {4}\), is the rate-limiting enzyme for the metabolic pathway that supplies to cells (FIgure \(\Page {5}\)). A disruption in this pathway can lead to reduced glutathione in red blood cells; once all glutathione is consumed, enzymes and other proteins such as hemoglobin are susceptible to damage. For example, hemoglobin can be metabolized to bilirubin, which leads to jaundice, a condition that can become severe. 
People who suffer from G6PD deficiency must avoid certain foods and medicines containing chemicals that can trigger damage their glutathione-deficient red blood cells. A is a catalyst that is present in a different phase (usually a solid) than the reactants. Such catalysts generally function by furnishing an active surface upon which a reaction can occur. Gas and liquid phase reactions catalyzed by heterogeneous catalysts occur on the surface of the catalyst rather than within the gas or liquid phase. Heterogeneous catalysis has at least four steps: Any one of these steps may be slow and thus may serve as the rate determining step. In general, however, in the presence of the catalyst, the overall rate of the reaction is faster than it would be if the reactants were in the gas or liquid phase. Figure \(\Page {6}\) illustrates the steps that chemists believe to occur in the reaction of compounds containing a carbon–carbon double bond with hydrogen on a nickel catalyst. Nickel is the catalyst used in the hydrogenation of polyunsaturated fats and oils (which contain several carbon–carbon double bonds) to produce saturated fats and oils (which contain only carbon–carbon single bonds). Other significant industrial processes that involve the use of heterogeneous catalysts include the preparation of sulfuric acid, the preparation of ammonia, the oxidation of ammonia to nitric acid, and the synthesis of methanol, CH OH. Heterogeneous catalysts are also used in the catalytic converters found on most gasoline-powered automobiles (Figure \(\Page {7}\)). Scientists developed catalytic converters to reduce the amount of toxic emissions produced by burning gasoline in internal combustion engines. Catalytic converters take advantage of all five factors that affect the speed of chemical reactions to ensure that exhaust emissions are as safe as possible. By utilizing a carefully selected blend of catalytically active metals, it is possible to effect complete combustion of all carbon-containing compounds to carbon dioxide while also reducing the output of nitrogen oxides. This is particularly impressive when we consider that one step involves adding more oxygen to the molecule and the other involves removing the oxygen (Figure \(\Page {6}\)). Most modern, three-way catalytic converters possess a surface impregnated with a platinum-rhodium catalyst, which catalyzes the conversion nitric oxide into dinitrogen and oxygen as well as the conversion of carbon monoxide and hydrocarbons such as octane into carbon dioxide and water vapor: \[\ce{2NO2}(g)⟶\ce{N2}(g)+\ce{2O2}(g)\\ [5pt] \ce{2CO}(g)+\ce{O2}(g)⟶\ce{2CO2}(g)\\ [5pt] \ce{2C8H18}(g)+\ce{25O2}(g)⟶\ce{16CO2}(g)+\ce{18H2O}(g) \nonumber \] In order to be as efficient as possible, most catalytic converters are preheated by an electric heater. This ensures that the metals in the catalyst are fully active even before the automobile exhaust is hot enough to maintain appropriate reaction temperatures. The study of enzymes is an important interconnection between biology and chemistry. Enzymes are usually proteins (polypeptides) that help to control the rate of chemical reactions between biologically important compounds, particularly those that are involved in cellular metabolism. Different classes of enzymes perform a variety of functions, as shown in Table \(\Page {1}\). 
Enzyme molecules possess an active site, a part of the molecule with a shape that allows it to bond to a specific substrate (a reactant molecule), forming an enzyme-substrate complex as a reaction intermediate. There are two models that attempt to explain how this active site works. The most simplistic model is referred to as the lock-and-key hypothesis, which suggests that the molecular shapes of the active site and substrate are complementary, fitting together like a key in a lock. The induced fit hypothesis, on the other hand, suggests that the enzyme molecule is flexible and changes shape to accommodate a bond with the substrate. This is not to suggest that an enzyme’s active site is completely malleable, however. Both the lock-and-key model and the induced fit model account for the fact that enzymes can only bind with specific substrates, since in general a particular enzyme only catalyzes a particular reaction (Figure \(\Page {7}\)). Catalysts affect the rate of a chemical reaction by altering its mechanism to provide a lower activation energy. Catalysts can be homogeneous (in the same phase as the reactants) or heterogeneous (a different phase than the reactants).
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Complex_Molecular_Synthesis_(Salomon)/05%3A_Polyketides
The polyketides, a diverse family of highly oxygenated natural products, are characterized by the presence of many β-dihydroxy or β-hydroxycarbonyl consonant polar functional relationships. Some polyketides have carbon skeletons comprised of a long straight chain of carbon atoms that is often crosslinked into one or more six-membered rings. Thus, a variety of aromatic compounds is produced in nature from acetate-derived (poly-β-keto)carboxylic acids through cyclization, i.e., intramolecular aldol condensation. For example, orsellinic acid is topologically and functionally related to a mono-crosslinked 3,5,7-triketo octanoic acid. Some polyketides are further modified by oxidative coupling, as in the conversion of griseophenone into griseofulvin. Other modifications include alkylations at the nucleophilic carbon atoms, reduction of carbonyl groups, and electrophilic aromatic substitutions. For example, tetracycline is topologically related to a tetra-crosslinked 3,5,7,9,11,13,15,17-octaketo octadecanoic acid. However, a dimethylamino group and two hydroxyl groups are present that do not fit the otherwise entirely consonant polar reactivity pattern of the remaining functionality. A methyl and a carboxamido group are also present that are not derived from a (poly-β-keto)carboxylic acid precursor. A large family of polyoxygenated macrolide antibiotics that contain 12-, 14-, or 16-membered lactone rings shares a polyketide biogenesis. For example, erythromycin B is a diglycoside of erythronolide B, a propionate-derived aglycone.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Carbohydrates
Carbohydrates are the most abundant class of organic compounds found in living organisms. They originate as products of photosynthesis, an endothermic reductive condensation of carbon dioxide requiring light energy and the pigment chlorophyll. \[ nCO_2 + n H_2O + \text{Energy} \rightarrow C_nH_{2n}O_n + nO_2\] As noted here, the formulas of many carbohydrates can be written as carbon hydrates, \(C_n(H_2O)_n\), hence their name. The carbohydrates are a major source of metabolic energy, both for plants and for animals that depend on plants for food. Aside from the sugars and starches that meet this vital nutritional role, carbohydrates also serve as a structural material (cellulose), a component of the energy transport compound ATP, recognition sites on cell surfaces, and one of three essential components of DNA and RNA.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/09%3A_Gases/9.1%3A_Gas_Pressure
The earth’s atmosphere exerts a pressure, as does any other gas. Although we do not normally notice atmospheric pressure, we are sensitive to pressure changes—for example, when your ears “pop” during take-off and landing while flying, or when you dive underwater. Gas pressure is caused by the force exerted by gas molecules colliding with the surfaces of objects (Figure \(\Page {1}\)). Although the force of each collision is very small, any surface of appreciable area experiences a large number of collisions in a short time, which can result in a high pressure. In fact, normal air pressure is strong enough to crush a metal container when not balanced by equal pressure from inside the container. Atmospheric pressure is caused by the weight of the column of air molecules in the atmosphere above an object, such as the tanker car. At sea level, this pressure is roughly the same as that exerted by a full-grown African elephant standing on a doormat, or a typical bowling ball resting on your thumbnail. These may seem like huge amounts, and they are, but life on earth has evolved under such atmospheric pressure. If you actually perch a bowling ball on your thumbnail, the pressure experienced is twice the usual pressure, and the sensation is unpleasant. Pressure is defined as the force exerted on a given area: \[P=\dfrac{F}{A} \label{9.2.1} \] Since pressure is directly proportional to force and inversely proportional to area (Equation \ref{9.2.1}), pressure can be increased either by either the amount of force or by the area over which it is applied. Correspondingly, pressure can be decreased by either the force or the area. Let’s apply the definition of pressure (Equation \ref{9.2.1}) to determine which would be more likely to fall through thin ice in Figure \(\Page {2}\).—the elephant or the figure skater? A large African elephant can weigh 7 tons, supported on four feet, each with a diameter of about 1.5 ft (footprint area of 250 in ), so the pressure exerted by each foot is about 14 lb/in : \[\mathrm{pressure\: per\: elephant\: foot=14,000\dfrac{lb}{elephant}×\dfrac{1\: elephant}{4\: feet}×\dfrac{1\: foot}{250\:in^2}=14\:lb/in^2} \label{9.2.2} \] The figure skater weighs about 120 lbs, supported on two skate blades, each with an area of about 2 in , so the pressure exerted by each blade is about 30 lb/in : \[\mathrm{pressure\: per\: skate\: blade=120\dfrac{lb}{skater}×\dfrac{1\: skater}{2\: blades}×\dfrac{1\: blade}{2\:in^2}=30\:lb/in^2} \label{9.2.3} \] Even though the elephant is more than one hundred times heavier than the skater, it exerts less than one-half of the pressure and would therefore be less likely to fall through thin ice. On the other hand, if the skater removes her skates and stands with bare feet (or regular footwear) on the ice, the larger area over which her weight is applied greatly reduces the pressure exerted: \[\mathrm{pressure\: per\: human\: foot=120\dfrac{lb}{skater}×\dfrac{1\: skater}{2\: feet}×\dfrac{1\: foot}{30\:in^2}=2\:lb/in^2} \label{9.2.4} \] The SI unit of pressure is the , with 1 Pa = 1 N/m , where N is the newton, a unit of force defined as 1 kg m/s . One pascal is a small pressure; in many cases, it is more convenient to use units of kilopascal (1 kPa = 1000 Pa) or (1 bar = 100,000 Pa). In the United States, pressure is often measured in pounds of force on an area of one square inch— —for example, in car tires. 
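The elephant-versus-skater comparison is simply P = F/A applied three times, so it is a natural candidate for a tiny worked script. The sketch below reuses the weights and footprint areas quoted above; the helper function is my own wrapper around the definition.

```python
# Sketch: P = F / A for the elephant vs. figure-skater comparison above.
def pressure_psi(weight_lb, n_supports, area_per_support_in2):
    """Pressure under one supporting foot/blade, in lb/in^2."""
    return weight_lb / n_supports / area_per_support_in2

print(pressure_psi(14_000, 4, 250))   # elephant foot   -> 14 lb/in^2
print(pressure_psi(120, 2, 2))        # skate blade     -> 30 lb/in^2
print(pressure_psi(120, 2, 30))       # bare human foot -> 2 lb/in^2
```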
Pressure can also be measured using the unit , which originally represented the average sea level air pressure at the approximate latitude of Paris (45°). Table \(\Page {1}\) provides some information on these and a few other common units for pressure measurements The United States National Weather Service reports pressure in both inches of Hg and millibars. Convert a pressure of 29.2 in. Hg into: This is a unit conversion problem. The relationships between the various pressure units are given in Table 9.2.1. A typical barometric pressure in Kansas City is 740 torr. What is this pressure in atmospheres, in millimeters of mercury, in kilopascals, and in bar? 0.974 atm; 740 mm Hg; 98.7 kPa; 0.987 bar We can measure atmospheric pressure, the force exerted by the atmosphere on the earth’s surface, with a (Figure \(\Page {3}\)). A barometer is a glass tube that is closed at one end, filled with a nonvolatile liquid such as mercury, and then inverted and immersed in a container of that liquid. The atmosphere exerts pressure on the liquid outside the tube, the column of liquid exerts pressure inside the tube, and the pressure at the liquid surface is the same inside and outside the tube. The height of the liquid in the tube is therefore proportional to the pressure exerted by the atmosphere. If the liquid is water, normal atmospheric pressure will support a column of water over 10 meters high, which is rather inconvenient for making (and reading) a barometer. Because mercury (Hg) is about 13.6-times denser than water, a mercury barometer only needs to be \(\dfrac{1}{13.6}\) as tall as a water barometer—a more suitable size. Standard atmospheric pressure of 1 atm at sea level (101,325 Pa) corresponds to a column of mercury that is about 760 mm (29.92 in.) high. The was originally intended to be a unit equal to one millimeter of mercury, but it no longer corresponds exactly. The pressure exerted by a fluid due to gravity is known as , : \[p=hρg \label{9.2.5} \] where Show the calculation supporting the claim that atmospheric pressure near sea level corresponds to the pressure exerted by a column of mercury that is about 760 mm high. The density of mercury = \(13.6 \,g/cm^3\). The hydrostatic pressure is given by Equation \ref{9.2.5}, with \(h = 760 \,mm\), \(ρ = 13.6\, g/cm^3\), and \(g = 9.81 \,m/s^2\). Plugging these values into the Equation \ref{9.2.5} and doing the necessary unit conversions will give us the value we seek. (Note: We are expecting to find a pressure of ~101,325 Pa:) \[\mathrm{101,325\:\mathit{N}/m^2=101,325\:\dfrac{kg·m/s^2}{m^2}=101,325\:\dfrac{kg}{m·s^2}} \nonumber \] \[\begin {align*} p&\mathrm{=\left(760\: mm×\dfrac{1\: m}{1000\: mm}\right)×\left(\dfrac{13.6\: g}{1\:cm^3}×\dfrac{1\: kg}{1000\: g}×\dfrac{( 100\: cm )^3}{( 1\: m )^3}\right)×\left(\dfrac{9.81\: m}{1\:s^2}\right)}\\[4pt] &\mathrm{=(0.760\: m)(13,600\:kg/m^3)(9.81\:m/s^2)=1.01 \times 10^5\:kg/ms^2=1.01×10^5\mathit{N}/m^2} \\[4pt] & \mathrm{=1.01×10^5\:Pa} \end {align*} \nonumber \] Calculate the height of a column of water at 25 °C that corresponds to normal atmospheric pressure. The density of water at this temperature is 1.0 g/cm . 10.3 m A manometer is a device similar to a barometer that can be used to measure the pressure of a gas trapped in a container. A closed-end manometer is a U-shaped tube with one closed arm, one arm that connects to the gas to be measured, and a nonvolatile liquid (usually mercury) in between. 
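The hydrostatic-pressure relation p = hρg used in the mercury-barometer example is easy to check numerically, and the same relation gives the water-column answer quoted in the check-your-learning exercise. This is a minimal sketch with the values from the example; only the function name is mine.

```python
# Sketch: hydrostatic pressure p = h * rho * g, reproducing the barometer examples above.
g = 9.81  # m s^-2

def hydrostatic_pressure(h_m, rho_kg_m3):
    """Pressure in Pa exerted by a fluid column of height h and density rho."""
    return h_m * rho_kg_m3 * g

print(hydrostatic_pressure(0.760, 13_600))   # ~1.01e5 Pa: a 760 mm mercury column is ~1 atm

# height of a water column (density 1000 kg/m^3) that exerts 1 atm: the ~10.3 m answer
print(101_325 / (1_000 * g))
```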
As with a barometer, the distance between the liquid levels in the two arms of the tube ( in the diagram) is proportional to the pressure of the gas in the container. An open-end manometer (Figure \(\Page {3}\)) is the same as a closed-end manometer, but one of its arms is open to the atmosphere. In this case, the distance between the liquid levels corresponds to the difference in pressure between the gas in the container and the atmosphere. The pressure of a sample of gas is measured at sea level with an open-end Hg (mercury) manometer, as shown below. Determine the pressure of the gas in:   The pressure of the gas equals the hydrostatic pressure due to a column of mercury of height 13.7 cm plus the pressure of the atmosphere at sea level. (The pressure at the bottom horizontal line is equal on both sides of the tube. The pressure on the left is due to the gas and the pressure on the right is due to 13.7 cm of Hg plus atmospheric pressure.) The pressure of a sample of gas is measured at sea level with an open-end Hg manometer, as shown below Determine the pressure of the gas in:   Blood pressure is measured using a device called a sphygmomanometer (Greek = “pulse”). It consists of an inflatable cuff to restrict blood flow, a manometer to measure the pressure, and a method of determining when blood flow begins and when it becomes impeded (Figure \(\Page {5}\)). Since its invention in 1881, it has been an essential medical device. There are many types of sphygmomanometers: manual ones that require a stethoscope and are used by medical professionals; mercury ones, used when the most accuracy is required; less accurate mechanical ones; and digital ones that can be used with little training but that have limitations. When using a sphygmomanometer, the cuff is placed around the upper arm and inflated until blood flow is completely blocked, then slowly released. As the heart beats, blood forced through the arteries causes a rise in pressure. This rise in pressure at which blood flow begins is the the peak pressure in the cardiac cycle. When the cuff’s pressure equals the arterial systolic pressure, blood flows past the cuff, creating audible sounds that can be heard using a stethoscope. This is followed by a decrease in pressure as the heart’s ventricles prepare for another beat. As cuff pressure continues to decrease, eventually sound is no longer heard; this is the the lowest pressure (resting phase) in the cardiac cycle. Blood pressure units from a sphygmomanometer are in terms of millimeters of mercury (mm Hg).   Throughout the ages, people have observed clouds, winds, and precipitation, trying to discern patterns and make predictions: when it is best to plant and harvest; whether it is safe to set out on a sea voyage; and much more. We now face complex weather and atmosphere-related challenges that will have a major impact on our civilization and the ecosystem. Several different scientific disciplines use chemical principles to help us better understand weather, the atmosphere, and climate. These are meteorology, climatology, and atmospheric science. is the study of the atmosphere, atmospheric phenomena, and atmospheric effects on earth’s weather. Meteorologists seek to understand and predict the weather in the short term, which can save lives and benefit the economy. Weather forecasts (Figure \(\Page {5}\)) are the result of thousands of measurements of air pressure, temperature, and the like, which are compiled, modeled, and analyzed in weather centers worldwide. 
In terms of weather, low-pressure systems occur when the earth’s surface atmospheric pressure is lower than the surrounding environment: Moist air rises and condenses, producing clouds. Movement of moisture and air within various weather fronts instigates most weather events. The atmosphere is the gaseous layer that surrounds a planet. Earth’s atmosphere, which is roughly 100–125 km thick, consists of roughly 78.1% nitrogen and 21.0% oxygen, and can be subdivided further into the regions shown in Figure \(\Page {7}\): the exosphere (furthest from earth, > 700 km above sea level), the thermosphere (80–700 km), the mesosphere (50–80 km), the stratosphere (second lowest level of our atmosphere, 12–50 km above sea level), and the troposphere (up to 12 km above sea level, roughly 80% of the earth’s atmosphere by mass and the layer where most weather events originate). As you go higher in the troposphere, air density and temperature both decrease. Climatology is the study of the climate, averaged weather conditions over long time periods, using atmospheric data. However, climatologists study patterns and effects that occur over decades, centuries, and millennia, rather than shorter time frames of hours, days, and weeks like meteorologists. Atmospheric science is an even broader field, combining meteorology, climatology, and other scientific disciplines that study the atmosphere. Gases exert pressure, which is force per unit area. The pressure of a gas may be expressed in the SI unit of pascal or kilopascal, as well as in many other units including torr, atmosphere, and bar. Atmospheric pressure is measured using a barometer; other gas pressures can be measured using one of several types of manometers.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.06%3A__Reverse_Osmosis
If it takes a pressure of \(Π\) atm to bring about osmotic equilibrium, then it follows that applying a hydrostatic pressure greater than this to the high-solute side of an osmotic cell will force water to flow back into the fresh-water side. This process, known as reverse osmosis, is now the major technology employed to desalinate ocean water and to reclaim "used" water from power plants, runoff, and even from sewage. It is also widely used to deionize ordinary water and to purify it for industrial uses (especially beverage and food manufacture) and drinking purposes. Pre-treatment commonly employs activated-carbon filtration to remove organics and chlorine (which tends to damage RO membranes). Although bacteria are unable to pass through semipermeable membranes, the latter can develop pinhole leaks, so some form of disinfection is often advised. The efficiency and cost of RO are critically dependent on the properties of the semipermeable membrane. The osmotic pressure of seawater is almost 26 atm. Since a pressure of 1 atm will support a column of water 10.3 m high, this means that osmotic flow of fresh water through a semipermeable membrane into seawater could in principle support a column of the latter about 26 × 10.3 ≈ 268 m (roughly 880 ft) high! So imagine an osmotic cell in which one side is supplied with fresh water from a river, and the other side with seawater. Osmotic flow of fresh water into the seawater side forces the latter up through a riser containing a turbine connected to a generator, thus providing a constant and fuel-less source of electricity. The key component of such a scheme, first proposed by an Israeli scientist in 1973 and known as pressure-retarded osmosis (PRO), is of course a semipermeable membrane capable of passing water at a sufficiently high rate. The world's first experimental PRO plant was opened in 2009 in Norway. Its capacity is only 4 kW, but it serves as proof-in-principle of a scheme that is estimated capable of supplying up to 2000 terawatt-hours of energy worldwide. The semipermeable membrane operates at a pressure of about 10 atm and passes 10 L of water per second, generating about 1 watt per m\(^2\) of membrane. PRO is but one form of salinity-gradient power that depends on the difference between the salt concentrations in different bodies of water. 1 atm is equivalent to 1034 g cm\(^{-2}\), so from the density of water (1 g cm\(^{-3}\)) we get (1034 g cm\(^{-2}\)) ÷ (1 g cm\(^{-3}\)) = 1034 cm = 10.3 m. Because many plant and animal cell membranes and tissues tend to be permeable to water and other small molecules, osmotic flow plays an essential role in many physiological processes. The interiors of cells contain salts and other solutes that dilute the intracellular water. If the cell membrane is permeable to water, placing the cell in contact with pure water will draw water into the cell, tending to rupture it. This is easily and dramatically seen if red blood cells are placed in a drop of water and observed through a microscope as they burst. This is the reason that "normal saline solution", rather than pure water, is administered in order to maintain blood volume or to infuse therapeutic agents during medical procedures. In order to prevent irritation of sensitive membranes, one should always add some salt to water used to irrigate the eyes, nose, throat or bowel. Normal saline contains 0.91% w/v of sodium chloride, corresponding to 0.154 M, making its osmotic pressure close to that of blood. The drying of fruit, the use of sugar to preserve jams and jellies, and the use of salt to preserve certain meats, are age-old methods of preserving food.
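The osmotic-column estimate above can be checked numerically, and a rough van 't Hoff estimate (Π ≈ MRT) gives a similar osmotic pressure for seawater. The sketch below is illustrative only: it assumes a fresh-water density of 1000 kg/m³ and treats seawater as roughly 1.1 mol/L of dissolved particles, a deliberately crude approximation.

```python
# Sketch: rough numbers behind the reverse-osmosis discussion above.
g = 9.81           # m s^-2
atm = 101_325      # Pa

# height of a water column supported by ~26 atm (the "osmotic column" estimate)
print(26 * atm / (1_000 * g))          # ~270 m

# van 't Hoff estimate of seawater osmotic pressure, Pi = M*R*T,
# assuming ~1.1 mol/L of dissolved solute particles at 25 °C
R = 0.08206        # L atm mol^-1 K^-1
print(1.1 * R * 298)                   # ~27 atm, consistent with "almost 26 atm"
```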
The idea is to reduce the water concentration to a level below that in living organisms. Any bacterial cell that wanders into such a medium will have water osmotically drawn out of it, and will die of dehydration. A similar effect is noticed by anyone who holds a hard sugar candy against the inner wall of the mouth for an extended time; the affected surface becomes dehydrated and noticeably rough when touched by the tongue. In the food industry, what is known as is measured on a scale of 0 to 1, where 0 indicates no water and 1 indicates all water. Food spoilage micro-organisms, in general, are inhibited in food where the water activity is below 0.6. However, if the pH of the food is less than 4.6, micro-organisms are inhibited (but not immediately killed] when the water activity is below 0.85. The presence of excessive solutes in the bowel draws water from the intestinal walls, giving rise to diarrhea. This can occur when a food is eaten that cannot be properly digested (as, for example, milk in lactose-intolerant people). The undigested material contributes to the solute concentration, raising its osmotic pressure. The situation is made even worse if the material undergoes bacterial fermentation which results in the formation of methane and carbon dioxide, producing a frothy discharge. Osmotic flow plays an important role in the transport of water from its source in the soil to its release by transpiration from the leaves, it is helped along by hydrogen-bonding forces between the water molecules. is not believed to be a significant factor. Water enters the roots via osmosis, driven by the low water concentration inside the roots that is maintained by both the active [non-osmotic] transport of ionic nutrients from the soil and by the supply of sugars that are photosynthesized in the leaves. This generates a certain amount of which sends the water molecules on their way up through the vascular channels of the stem or trunk. But the maximum root pressures that have been measured can push water up only about 20 meters, whereas the tallest trees exceed 100 meters. Root pressure can be the sole driver of water transport in short plants, or even in tall ones such as trees that are not in leaf. Anyone who has seen apparently tender and fragile plants pushing their way up through asphalt pavement cannot help but be impressed! But when taller plants are actively transpiring (losing water to the atmosphere], osmosis gets a boost from what plant physiologists call or . As each H O molecule emerges from the opening in the leaf it pulls along the chain of molecules beneath it. So hydrogen-bonding is no less important than osmosis in the overall water transport process. If the soil becomes dry or saline, the osmotic pressure outside the root becomes greater than that inside the plant, and the plant suffers from “water tension”, i.e., wilting. The following section is a bit long, but for those who are interested in biology it offers a beautiful example of how the constraints imposed by osmosis have guided the evolution of ocean-living creatures into fresh-water species . It concerns ammonia NH , a product of protein metabolism that is generated within all animals, but is highly toxic and must be eliminated. Marine invertebrates (those that live in seawater) are covered in membranes that are fairly permeable to water and to small molecules such as ammonia. So water can diffuse in either direction as required, and ammonia can diffuse out as quickly as it forms. Nothing special here. 
Invertebrates that live in fresh water do have problem: the salt concentrations within their bodies are around 1%, much greater than in fresh water. For this reason they have evolved surrounding membranes that are largely impermeable to salts (to prevent their diffusion out of the body) and to water (to prevent osmotic flow in.) But these organisms must also be able to exchange oxygen and carbon dioxide with their environment. The special respiratory organs (gills) that mediate this process, as a consequence of being permeable to these two gases, will also allow water molecules (whose sizes are comparable to those of the respiratory gases) to pass through. In order to protect fresh-water invertebrates from the disastrous effects of unlimited water inflow through the gill membranes, these animals possess special excretory organs that expel excess water back into the environment. Thus in such animals, there is a constant flow of water passing through the body. Ammonia and other substances that need to be excreted are taken up by this stream which constitutes a continual flow of dilute urine. Fishes fall into two general classes: most fish have bony skeletons and are known as teleosts. Sharks and rays have cartilage instead of bones, and are called elasmobranchs. For the teleosts that live in fresh water, the situation is very much the same as with fresh-water invertebrates; they take in and excrete water continuously. The fact that an animal lives in the water does not mean that it enjoys an unlimited supply of water. Marine teleosts have a more difficult problem. Their gills are permeable to water, as are those of marine invertebrates. But the salt content of seawater (about 3%), being higher than the about 1% in the fish’s blood, would draw water out of the fish. Thus these animals are constantly losing water, and would be liable to desiccation if water could freely pass out of their gills. Some does, of course, and with it goes most of its nitrogen in the form of NH . Thus most of the waste nitrogen exits not through the usual excretory organs as with most vertebrates, but through the gills. But in order to prevent excessive loss of water, the gills have reduced permeability to this water, and with it, to comparably-sized NH . So in order to prevent ammonia toxicity, the remainder of it is converted to a non-toxic substance (trimethylamine oxide (CH ) NO) which is excreted via the kidneys. The marine elasmobranchs solve the loss-of-water problem in another way: they convert waste ammonia to urea (NH ) CO which is highly soluble and non-toxic. Their kidneys are able to control the quantity of urea excreted so that their blood retains about 2-2.5 percent of this substance. Combined with the 1 percent of salts and other substances in their blood, this raises the osmotic pressure within the animal to slightly above that of seawater, Thus the same mechanism that protects them from ammonia poisoning also ensures them an adequate water supply.
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Toxicology_MSDT/6%3A_Principles_of_Toxicology
ToxTutor is divided into the following sections: Each section of ToxTutor contains one or more related content pages. and buttons are provided to allow you to navigate through these pages. For more information, see the "Getting Around" section below. The basic principles of toxicology described in ToxTutor are similar to those taught in university programs and are well described in toxicology literature. A list of the textbooks used as the primary resources for the tutorials is found in the . It will take approximately three hours to complete this self-paced tutorial. There are a variety of ways you can navigate ToxTutor. You can: At the top of each page in ToxTutor are links indicating where the current page falls within the overall ToxTutor program. You can click these links to return to this homepage or to the section that contains the page. Throughout the course, you will encounter links. Any link to a resource outside of ToxTutor, will typically open in a new tab or window. All other links are to other areas within ToxTutor. Links are bold and underlined as seen and . In addition, clicking on some images may open another external website for more information. ToxTutor was adopted from the in 2021. More Information can be seen in the ToxTutor .
https://chem.libretexts.org/Bookshelves/General_Chemistry/General_Chemistry_Supplement_(Eames)/Gases/Gas_Laws
Boyle was an Irish nobleman who is often described as one of the first modern chemists, as opposed to the old alchemists. However, many of his ideas and experiments came from earlier chemist/alchemists. Boyle observed that for a particular sample of gas at a constant temperature, if the pressure or volume is changed, the initial and final pressure and volume are related: \[P_{0}V_{0} = P_{1}V_{1}\] This is now called Boyle's Law. Later it was shown that Boyle's law is an approximate law, not an exact law. Most gases follow it pretty closely at normal pressures, but less closely at large pressures. Charles made the first solo flight in a hot-air balloon filled with hydrogen. He also observed that at constant pressure, the volume of a gas increases linearly with temperature. This can be written \[V = kT\] Notably, although he couldn't measure volumes at very low temperatures, all the lines for different gases pointed to the same temperature point. The temperature at which each gas was predicted to have V = 0 was the same, although the slopes were different. This temperature is now called 0 K = -273 °C, or absolute zero. Since gases can't have negative volume, this temperature seems to be special: the lowest possible temperature. Although in fact gases won't have zero volume at absolute zero (they'll be solids, and solids have volume), modern theory does still consider absolute zero special. In fact, we have to use temperature in Kelvin for any gas law problem. We discussed this . Although often called Avogadro's Law, it was actually a hypothesis. The hypothesis is that at the same temperature and pressure, all gases have the same number of particles (molecules). Avogadro guessed that this was true based on Gay-Lussac's law, but he had no way to measure it directly, so it couldn't really be called a law. However, now we can be pretty sure that it is approximately true. The Ideal Gas Law combines Boyle, Charles and Avogadro's laws. The Ideal Gas Law says that \[PV = nRT\] where P is pressure, V is volume, T is temperature, n is the number of moles, and R is the molar gas constant. We can express it another way too: \[PV = nk_{B}T\] where everything is the same except n is now the number of particles, and k is the Boltzmann constant. As you can see, the gas constant R is just the Boltzmann constant multiplied by Avogadro's number (the number of particles in a mole). The Boltzman constant essentially provides a conversion factor between temperature in K and energy in J. Because temperature and energy are closely connected, k appears in many important equations. You can use the Ideal Gas Law to make predictions about how gases will react when you change pressure, volume or temperature. It gives you a good intuition for what gases do. The predictions it makes aren't always very accurate: they're pretty good at normal temperature and pressure, but actually for most engineering work they aren't good enough, so people use other equations or data tables instead. The Ideal Gas Law is a scientific law: it describes mathematically what happens under certain conditions, in this case low pressure. In the next section we'll describe the theory that explains the behavior of gases, which will also tell us when we should expect the Ideal Gas Law to be inaccurate. You can use the ideal gas law to make various calculations, including with density and molar mass. We need to be careful with the units, because there are so many pressure units. 
Check that your value of R has the right unit for pressure, and isn't using an energy unit (because sometimes it's convenient to use energy units for R, but not in the ideal gas equation). Also make sure you use temperature in K. As an example calculation, the ideal gas law says that the molar volume of any gas should be almost the same, and under standard conditions (1 atm and 0 °C) it should be close to 22.4 L.
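To make the units point concrete, here is a minimal sketch (not part of the original text; the function names are invented) of an ideal-gas calculation in Python, with R chosen in L·atm units so that pressures can be entered in atm and volumes come out in litres.

```python
# Hedged sketch: function names are illustrative, not from the text.
R = 0.082057  # molar gas constant in L·atm·K⁻¹·mol⁻¹ (not an energy unit)

def molar_volume(P_atm, T_kelvin):
    """Volume of one mole of an ideal gas, in litres (V = RT/P)."""
    return R * T_kelvin / P_atm

def molar_mass_from_density(density_g_per_L, P_atm, T_kelvin):
    """Molar mass from gas density, M = dRT/P."""
    return density_g_per_L * R * T_kelvin / P_atm

print(molar_volume(1.0, 273.15))                    # ≈ 22.4 L at 1 atm and 0 °C
print(molar_mass_from_density(1.25, 1.0, 273.15))   # ≈ 28 g/mol for a gas of density 1.25 g/L
```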
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Complex_Molecular_Synthesis_(Salomon)/04%3A_Terpenes
A trivial pattern characterizes the structures of fatty acids: their carbon skeletons generally have even numbers of carbons. This is a consequence of their biosynthetic origins; they are oligomers of the two-carbon building block, acetyl CoA. Terpenes are a structurally and functionally diverse family of natural products. Nevertheless, a pattern that characterizes their structures is often discernible: they appear to be oligomers of isoprene. In the ensuing discussion, for clarity, we occasionally will represent bonds that are not in these isoprene units with dashed lines, as in the following examples. The biosynthesis of some terpenes involves such intricate carbon skeletal transmogrifications that the terpenoid biosynthetic origin is not at all obvious. Moreover, the intricate multicyclic skeletons of some terpenes are devoid of functionality. For such molecules, polar reactivity analysis is of little value. Instead, it is the topology of these molecules that must be analyzed in order to perceive potentially effective dislocations to generate precursors and, ultimately, to identify starting materials.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.09%3A_Distillation
Make sure you thoroughly understand the following essential ideas: Distillation is a process whereby a mixture of liquids having different vapor pressures is separated into its components. At first one might think that this would be quite simple: if you have a solution consisting of liquid A that boils at 50°C and liquid B with a boiling point of 90°C, all that would be necessary would be to heat the mixture to some temperature between these two values; this would boil off all the A (whose vapor could then be condensed back into pure liquid A), leaving pure liquid B in the pot. But that overlooks the fact that these liquids will have substantial vapor pressures at all temperatures, not only at their boiling points.

To fully understand distillation, we will consider an ideal binary liquid mixture of \(\ce{A}\) and \(\ce{B}\). If the mole fraction of \(A\) in the mixture is \(\chi_A\), then by the definition of mole fraction, that of \(B\) is

\[\chi_B = 1 – \chi_A\]

Since distillation depends on the different vapor pressures of the components to be separated, let's first consider the vapor pressure vs. composition plots (Figure \(\PageIndex{1}\)) for a hypothetical mixture at some arbitrary temperature at which both liquid and gas phases can exist, depending on the total pressure. In Figure \(\PageIndex{2}\), all states of the system (i.e., combinations of pressure and composition) in which the solution exists solely as a liquid are shaded in green. Since liquids are more stable at higher pressures, these states occupy the upper part of the diagram. At any given total vapor pressure, the composition of the vapor in equilibrium with the liquid (designated by \(x_A\)) corresponds to the intercept with the diagonal equilibrium line. The diagonal line is just an expression of the linearity between vapor pressure and composition according to Raoult's law.

The two liquid-vapor equilibrium lines (one curved, the other straight) now enclose an area in which liquid and vapor can coexist; outside of this region, the mixture will consist entirely of liquid or of vapor. At this particular pressure, the intercept with the upper boundary of the two-phase region gives the mole fractions of A and B in the liquid phase, while the intercept with the lower boundary gives the mole fractions of the two components in the vapor. Take a moment to study Figure \(\PageIndex{5}\) and to confirm that the vapor in equilibrium with a solution of two or more liquids is richer in the more volatile component.

The rule shown above suggests that if we heat a mixture sufficiently to bring its total vapor pressure into the two-phase region, we will have a means of separating the mixture into two portions which will be enriched in the more volatile and less volatile components respectively. This is the principle on which distillation is based. But what temperature is required to achieve this? Again, we will spare you the mathematical details, but it is possible to construct a plot similar to Figure \(\PageIndex{4}\) except that the vertical axis represents temperature rather than pressure. This kind of plot is called a boiling point diagram.

Some important things to understand about Figure \(\PageIndex{6}\): the tie line shown there is for one particular temperature. But when we heat a liquid to its boiling point, the composition will change as the more volatile component (\(\ce{B}\) in these examples) is selectively removed as vapor. The remaining liquid will be enriched in the less volatile component, and its boiling point will consequently rise.
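Before moving on, here is a minimal numerical illustration (the pure-component vapor pressures are invented, not taken from the figures) of the rule just stated: for an ideal mixture obeying Raoult's law, the vapor is richer in the more volatile component.

```python
# Hedged sketch of Raoult's law for an ideal binary mixture of A and B.
def vapor_over_ideal_mixture(x_A, P_A_pure, P_B_pure):
    """Return (total vapor pressure, mole fraction of A in the vapor)."""
    p_A = x_A * P_A_pure           # partial pressure of A (Raoult's law)
    p_B = (1.0 - x_A) * P_B_pure   # partial pressure of B
    P_total = p_A + p_B
    y_A = p_A / P_total            # Dalton's law gives the vapor composition
    return P_total, y_A

# Equimolar liquid with A the more volatile component (illustrative pressures, in torr)
P_total, y_A = vapor_over_ideal_mixture(0.50, P_A_pure=96.0, P_B_pure=40.0)
print(P_total, y_A)   # y_A ≈ 0.71 > 0.50: the vapor is richer in A
```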
To understand this process more thoroughly, let us consider the situation at several points during the distillation of an equimolar solution of \(\ce{A}\) and \(\ce{B}\) (Figure \(\PageIndex{5A}\)). We begin with the liquid below its boiling point. When the temperature rises to the boiling point, boiling begins and the first vapor (and thus the first drop of condensate) will have the composition given by the intercept with the vapor curve. Notice that the vertical green line remains in the same location in the three plots because the "system" is defined as consisting of both the liquid in the "pot" and that in the receiving container which was condensed from the vapor. The principal ideas you should take away from this discussion are illustrated in these plots.

The apparatus used for a simple laboratory batch distillation is shown here. The purpose of the thermometer is to follow the progress of the distillation; as a rough rule of thumb, the distillation should be stopped when the temperature rises to about half-way between the boiling points of the two pure liquids, which should be at least 20-30 C° apart (if they are closer, then fractional distillation, described below, becomes necessary).

Although distillation can never achieve complete separation of volatile liquids, it can in principle be carried out in such a way that any desired degree of separation can be achieved if the solution behaves ideally and one is willing to go to the trouble. The general procedure is to distill only a fraction of the liquid, the smaller the better. The condensate, now enriched in the more volatile component, is then collected and re-distilled (again, only a small fraction), thus obtaining a condensate even-more-enriched in the more volatile component. If we repeat this sequence many times, we can eventually obtain almost-pure, if minute, samples of the two components. But since this would hardly be practical, there is a better way. In order to understand it, you need to know about the lever rule, which provides a simple way of determining the relative quantities (not just the compositions) of two phases in equilibrium.

The lever rule is easily derived, but we will simply illustrate it graphically (Figure \(\PageIndex{7}\)). The plot shows the boiling point diagram of a simple binary mixture. At the temperature corresponding to the tie line, the compositions of the liquid and of the vapor are given by the two ends of the tie line. So now for the lever rule: the relative quantities of the liquid and the vapor are given by the lengths of the two tie-line segments on either side of the overall system composition. Thus in this particular example, in which one segment is about four times longer than the other, we can say that the mole ratio of vapor to liquid is 4; a small computational sketch of this rule follows below.

It is not practical to carry out an almost-infinite number of distillation steps to obtain nearly-infinitesimal quantities of the two pure liquids we wish to separate. So instead of collecting each drop of condensate and re-distilling it, we will distill half of the mixture in each step. Suppose you want to separate a liquid mixture composed of 20 mole-% B and 80 mole-% A, with B being the more volatile component. As we heat the mixture, the first vapor forms at the boiling point and has the composition found by extending the horizontal dashed line until it meets the vapor curve. This vapor is clearly enriched in B; if it is condensed, the resulting liquid will have a mole fraction of B approaching that of A in the original liquid. But this is only the first drop, and we don't want to stop there! As the liquid continues to boil, the boiling temperature rises.
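Before following the boiling mixture any further, here is the promised minimal sketch of the lever rule (the compositions are illustrative, not read from the text's figure).

```python
# Hedged sketch of the lever rule.  For an overall (system) composition z lying on a
# tie line between liquid composition x and vapor composition y, the amount of each
# phase is proportional to the length of the opposite tie-line segment, so
#   n_vapor / n_liquid = (z - x) / (y - z).
def vapor_to_liquid_mole_ratio(z, x_liquid, y_vapor):
    return (z - x_liquid) / (y_vapor - z)

# One segment four times the length of the other gives a 4:1 mole ratio, as in the text.
print(vapor_to_liquid_mole_ratio(z=0.50, x_liquid=0.10, y_vapor=0.60))   # -> 4.0
```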
When the temperature reaches the point at which half of the liquid has boiled away, the "system" composition (liquid plus vapor) is still the same, but it is now equally divided between the liquid, which we call the residue R₁, and the condensed vapor, the distillate D₁. We now take the condensed liquid D₁, distill half of it to obtain a distillate D₂, and then carry out yet another distillation, this time using D₂ as our feedstock. Our four-stage fractionation has enriched the more volatile solute from 20 to slightly over 80 mole-percent in the final distillate. The less volatile component A is most concentrated in the first residue; the intermediate residues are thrown away (but not down the sink, please!).

This may be sufficient for some purposes, but we might wish to do much better, using perhaps 1000 stages instead of just 4. What could be more tedious? Not to worry! The multiple successive distillations can be carried out "virtually" by inserting a fractionating column between the boiling flask and the condenser. These columns are made with indentations or are filled with materials that provide a large surface area extending through the vertical temperature gradient (higher temperature near the bottom, lower temperature at the top). The idea is that hot vapors condense at various levels in the column and the resulting liquid drips down to a lower level where it is vaporized again, which corresponds roughly to a re-distillation. Columns having multiple indentations (Vigreux columns) are widely used. Simple columns can be made by filling a glass tube with beads, short glass tubes, or even stainless steel kitchen-type scouring pads. More elaborate ones have spinning steel ribbons.

The operation of fractionating columns can best be understood by reference to a bubble-cap column. The one shown here consists of four sections, or "plates", through which hot vapors rise and bubble up through pools of condensate that collect on each plate. The intimate contact between vapor and liquid promotes equilibration and re-distillation at successively lower temperatures at each higher plate in the column. Unlike the case of the step-wise fractional distillation we discussed above, none of the intermediate residues is thrown away; they simply drip back down into the pot where their fractionation journey begins again, always leading to a further concentration of the less-volatile component in the remaining liquid. At the same time, the vapor emerging from the top plate (5) provides a continuing flow of volatile-enriched condensate, although in diminishing quantities as it is depleted in the boiling pot.

If complete equilibrium is attained between the liquid and vapor at each stage, then we can describe the system illustrated above as providing "five theoretical plates" of separation (remember that the pot itself represents the first theoretical plate). Equilibrium at each stage requires a steady-state condition in which the quantity of vapor moving upward at each stage is equal to the quantity of liquid draining downward — in other words, the column should be operating in total reflux, with no net removal of distillate. But a column operating at total reflux delivers no product, so any real distillation process is operated at a reflux ratio that provides optimum separation in a reasonable period of time. Some of the more advanced laboratory-type devices (such as some spinning-steel band columns) are said to offer up to around 200 theoretical plates of separating power.
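The effect of stacking theoretical plates is easy to mimic numerically. Here is a minimal sketch (the relative volatility is an arbitrary illustrative value, not a number from the text) in which each ideal plate converts a liquid of mole fraction x into a vapor of mole fraction y according to y/(1−y) = α·x/(1−x).

```python
# Hedged sketch: alpha (relative volatility) is an invented illustrative value.
def one_theoretical_plate(x, alpha):
    """Vapor mole fraction in equilibrium with a liquid of mole fraction x."""
    ratio = alpha * x / (1.0 - x)
    return ratio / (1.0 + ratio)

x = 0.20       # start at 20 mole-% of the more volatile component, as in the text
alpha = 3.0    # illustrative relative volatility for an ideal mixture
for plate in range(1, 5):
    x = one_theoretical_plate(x, alpha)
    print(plate, round(x, 3))
# The more volatile component climbs from 20 mole-% to well above 80 mole-%
# within a few equilibrium stages, echoing the four-stage fractionation above.
```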
The boiling point diagrams presented in the foregoing section apply to solutions that behave in a reasonably ideal manner — that is, to solutions that do not deviate too far from Raoult's law. As we explained above, mixtures of liquids whose intermolecular interactions are widely different do not behave ideally, and may be impossible to separate by ordinary distillation. The reason for this is that under certain conditions, the compositions of the liquid and of the vapor in equilibrium with it become identical, precluding any further separation. These cross-over points appear as "kinks" in the boiling point diagrams. Thus in a boiling point diagram for a mixture exhibiting a positive deviation from Raoult's law, successive fractionations of mixtures on either side of the azeotrope bring the distillation closer to the azeotropic composition indicated by the dashed vertical line. Once this point is reached, further distillation simply yields more of the same "low-boiling" azeotrope. Distillation of a mixture having a negative deviation from Raoult's law leads to a similar stalemate, in this case yielding a "high-boiling" azeotrope. High- and low-boiling azeotropes are commonly referred to as constant-boiling mixtures, and they are more common than most people think. There are several general ways of dealing with azeotropes.

Ethanol is one of the major industrial chemicals, and is of course the essential component of beverages that have been a part of civilization throughout recorded history. Most ethanol is produced by fermentation of the starch present in food grains, or of sugars formed by the enzymatic degradation of cellulose. Because ethanol is toxic to the organisms whose enzymes mediate the fermentation process, the ethanol concentration in the fermented mixture is usually limited to about 15%. The liquid phase of the mixture is then separated and distilled. For applications requiring anhydrous ethanol ("absolute ethanol"), the most common method is the use of zeolite-based molecular sieves to adsorb the remaining water. Addition of benzene can break the azeotrope, and this was the most common production method in earlier years. For certain critical uses where the purest ethanol is required, it is synthesized directly from ethylene.

Here we briefly discuss two distillation methods that students are likely to encounter in more advanced organic lab courses. Many organic substances become unstable at high temperatures, tending to decompose, polymerize or react with other substances at temperatures around 200 °C or higher. A liquid will boil when its vapor pressure becomes equal to the pressure of the gas above it, which is ordinarily that of the atmosphere. If this pressure is reduced, boiling can take place at a lower temperature. (Even pure water will boil at room temperature under a partial vacuum.) "Vacuum distillation" is of course a misnomer; a more accurate term would be "reduced-pressure distillation". Vacuum distillation is very commonly carried out in the laboratory and will be familiar to students who take more advanced organic lab courses. It is also sometimes employed on a large industrial scale. The vacuum distillation setup is similar to that employed in ordinary distillation, with a few additions.

Steam distillation: strictly speaking, this topic does not belong in this unit, since steam distillation is used to separate immiscible liquids rather than solutions.
But because immiscible liquid mixtures are not treated in elementary courses, we present a brief description of steam distillation here for the benefit of students who may encounter it in an organic lab course. A mixture of immiscible liquids will boil when their combined vapor pressure reaches atmospheric pressure, and this combined vapor pressure is just the sum of the vapor pressures that each liquid would exert on its own. Because water boils at 100 °C, a mixture of water and an immiscible liquid (an "oil"), even one that has a high boiling point, is guaranteed to boil below 100 °C, so this method is especially valuable for separating high boiling liquids from mixtures containing non-volatile impurities. Of course the water-oil mixture in the receiving flask must itself be separated, but this is usually easily accomplished by means of a separatory funnel since their densities are ordinarily different.

There is a catch, however: the lower the vapor pressure of the oil, the greater is the quantity of water that co-distills with it. This is the reason for using steam: it provides a source of water able to continually restore that which is lost from the boiling flask. Steam distillation from a water-oil mixture without the introduction of additional steam will also work, and is actually used for some special purposes, but the yield of product will be very limited. Steam distillation is widely used in industries such as petroleum refining (where it is often called "steam stripping") and in the flavors-and-perfumes industry for the isolation of essential oils. The term "essential oil" refers to the aromas ("essences") of these [mostly simple] organic liquids which occur naturally in plants, from which they are isolated by steam distillation or solvent extraction. Steam distillation was invented in the 13th century by Ibn al-Baitar, one of the greatest of the scientists and physicians of the Islamic Golden Age in Andalusia.

Distillation is one of the major "unit operations" of the chemical process industries, especially those connected with petroleum and biofuel refining, liquid air separation, and brewing. Laboratory distillations are typically batch operations and employ relatively simple fractionating columns to obtain a pure product. In contrast, industrial distillations are most often designed to produce mixtures having a desired boiling range rather than pure products. Industrial operations commonly employ bubble-cap fractionating columns (seldom seen in laboratories), although packed columns are sometimes used. Perhaps the most distinctive feature of large scale industrial distillations is that they usually operate on a continuous basis in which the crude mixture is preheated in a furnace and fed into the fractionating column at some intermediate point. A reboiler unit maintains the bottom temperature at a constant value. The higher-boiling components then move down to a level at which they vaporize, while the lighter (lower-boiling) material moves upward to condense at an appropriate point.

Petroleum is a complex mixture of many types of organic molecules, mostly hydrocarbons, that were formed by the effects of heat and pressure on plant materials (mostly algae) that grew in regions that the earth's tectonic movements buried over periods of millions of years. This mixture of liquid and gases migrates up through porous rock until it is trapped by an impermeable layer of sedimentary rock.
The molecular composition of crude oil (the liquid fraction of petroleum) is highly variable, although its overall elemental makeup generally reflects that of typical plants. The principal molecular constituents of crude oil are hydrocarbons of several classes. The word "gasoline" predates its use as a motor fuel; it was first used as a topical medicine to rid people of head lice, and to remove grease spots and stains from clothing. The first major step of refining is to fractionate the crude oil into various boiling ranges. About 16% of crude oil is diverted to the petrochemical industry, where it is used to make ethylene and other feedstocks for plastics and similar products. Because the fraction of straight-run gasoline is inadequate to meet demand, some of the lighter fractions undergo reforming and the heavier ones cracking, and these products are recycled into the gasoline stream. These processes necessitate a great amount of recycling and blending, into which must be built a considerable amount of flexibility in order to meet seasonal needs (more volatile gasolines and heating fuel oil in winter, more total gasoline volumes in the summer).
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Phenols
Compounds in which a hydroxyl group is bonded to an aromatic ring are called phenols. The chemical behavior of phenols is different in some respects from that of the alcohols, so it is sensible to treat them as a similar but characteristically distinct group. A corresponding difference in reactivity was observed in comparing aryl halides, such as bromobenzene, with alkyl halides, such as butyl bromide and tert-butyl chloride. Thus, nucleophilic substitution and elimination reactions were common for alkyl halides, but rare with aryl halides. This distinction carries over when comparing alcohols and phenols, so for all practical purposes substitution and/or elimination of the phenolic hydroxyl group does not occur.
https://chem.libretexts.org/Bookshelves/General_Chemistry/CLUE%3A_Chemistry_Life_the_Universe_and_Everything
We developed much of the material in this new curriculum using research on how people learn and our own work on how to improve understanding and problem solving in college-level science classes. In previous studies we have found that our methods, which include dramatic reorganization and reduction of materials covered, increase student interactions and activity and lead to equal or better performance on standardized exams, greater conceptual understanding, and improved problem-solving skills. By focusing the time and effort on the foundational ideas we expect that you will achieve a more robust and confident understanding of chemical principles, an understanding that should serve you well in subsequent chemistry and other science courses, not to mention “real life”!
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Chemistry_of_the_Main_Group_Elements_(Barron)
The main group (s- and p-block) elements are among the most diverse in the Periodic Table, ranging from non-metallic gases (e.g., hydrogen and fluorine), through semi-metals (e.g., metalloids such as silicon), to highly reactive metals (e.g., sodium and potassium). The study of the main group elements is important for a number of reasons. On an academic level they exemplify the trends and predictions in structure and reactivity that are the key to the Periodic Table. They represent the diversity of inorganic chemistry, and the fundamental aspects of structure and bonding that are also present for the transition metal, lanthanide and actinide elements.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/17%3A_Electrochemistry
Electrochemistry deals with chemical reactions that produce electricity and the changes associated with the passage of electrical current through matter. The reactions involve electron transfer, and so they are oxidation-reduction (or redox) reactions. Many metals may be purified or electroplated using electrochemical methods. Devices such as automobiles, smartphones, electronic tablets, watches, pacemakers, and many others use batteries for power. Batteries use chemical reactions that produce electricity spontaneously and that can be converted into useful work. All electrochemical systems involve the transfer of electrons in a reacting system. In many systems, the reactions occur in a region known as the cell, where the transfer of electrons occurs at electrodes.
https://chem.libretexts.org/Bookshelves/Introductory_Chemistry/Chemistry_and_Global_Awareness_(Gordon)
Chemistry is all around us, "from the air we breathe to the food we eat" to the items at the supermarket that say "no chemicals added". In fact, it is impossible to create something without using chemistry, because chemistry encompasses all matter. It allows us to answer questions as simple as why a candle goes out when a glass is placed over it, and as complex as whether a candle actually burns in zero gravity. Chemistry is the study of matter and the changes it undergoes, such as ice changing from the solid to liquid to gas phase. People have used it for things such as creating metal from an ore, dyeing fabric and making cheese. Chemistry deals with different substances and how they can interact with each other to create a product. As you begin your study of college chemistry, those of you who do not intend to become professional chemists may well wonder why you need to study chemistry. You will soon discover that a basic understanding of chemistry is useful in a wide range of disciplines and career paths. You will also discover that an understanding of chemistry helps you make informed decisions about many issues that affect you, your community, and your world. A major goal of this text is to demonstrate the importance of chemistry in your daily life and in our collective understanding of both the physical world we occupy and the biological realm of which we are a part.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/12%3A_Solubility_Equilibria
Make sure you thoroughly understand the following essential ideas: Dissolution of a salt in water is a chemical process that is governed by the same laws of chemical equilibrium that apply to any other reaction. There are, however, a number of special aspects of these equilibria that set them somewhat apart from the more general ones that are covered in the lesson set devoted specifically to chemical equilibrium. These include such topics as the common ion effect, the influence of pH on solubility, supersaturation, and some special characteristics of particularly important solubility systems.

Drop some ordinary table salt into a glass of water, and watch it "disappear". We refer to this as dissolution, and we explain it as a process in which the sodium and chlorine units break away from the crystal surface, get surrounded by H₂O molecules, and become hydrated ions.

\[NaCl_{(s)} \rightarrow Na^+_{(aq)}+ Cl^–_{(aq)} \]

The designation (aq) means "aqueous" and comes from aqua, the Latin word for water. It is used whenever we want to emphasize that the ions are hydrated — that H₂O molecules are attached to them. Remember that solubility equilibrium and the calculations that relate to it are only meaningful when both sides (solids and dissolved ions) are simultaneously present. But if you keep adding salt, there will come a point at which it no longer seems to dissolve. If this condition persists, we say that the salt has reached its solubility limit, and the solution is saturated in NaCl. The situation is now described by

\[NaCl_{(s)} \rightleftharpoons Na^+_{(aq)}+ Cl^–_{(aq)}\]

in which the solid and its ions are in equilibrium.

Salt solutions that have reached or exceeded their solubility limits (usually 36-39 g per 100 mL of water) are responsible for prominent features of the earth's geochemistry. They typically form when NaCl leaches from soils into waters that flow into salt lakes in arid regions that have no natural outlets; subsequent evaporation of these brines forces the above equilibrium to the left, forming natural salt deposits. These are often admixed with other salts, but in some cases are almost pure NaCl. Many parts of the world contain buried deposits of NaCl (known as halite) that formed from the evaporation of ancient seas, and which are now mined.

Solubilities are most fundamentally expressed in molar (mol L⁻¹ of solution) or molal (mol kg⁻¹ of water) units. But for practical use in preparing stock solutions, chemistry handbooks usually express solubilities in terms of grams per 100 mL of water at a given temperature, frequently noting the latter in a superscript. Thus a value of 6.9 carrying a superscript 20 means that 6.9 g of solute will dissolve in 100 mL of water at 20 °C. When quantitative data are lacking, the designations "soluble", "insoluble", "slightly soluble", and "highly soluble" are used. There is no agreed-on standard for these classifications, but a useful guideline might be that shown below.

The solubilities of salts in water span a remarkably large range of values, from almost completely insoluble to highly soluble. Moreover, there is no simple way of predicting these values, or even of explaining the trends that are observed for the solubilities of different anions within a given group of the periodic table. Ultimately, the driving force for dissolution (and for all chemical processes) is determined by the Gibbs free energy change. But because many courses cover solubility before introducing free energy, we will not pursue this here.
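As an aside, the handbook convention just described is easy to convert into molar units. The sketch below (the solute values are invented) treats 100 mL of water as roughly 0.100 L of solution, which is only an approximation and becomes poor for concentrated solutions.

```python
# Hedged sketch: the numbers are illustrative, and 100 mL of water is treated as
# approximately 0.100 L of solution (reasonable only for dilute solutions).
def molar_solubility_from_handbook(grams_per_100mL, molar_mass):
    """Convert a 'g per 100 mL of water' solubility into approximate mol/L."""
    return (grams_per_100mL / molar_mass) / 0.100

# e.g. a salt listed as 6.9 g per 100 mL with a molar mass of 100 g/mol
print(molar_solubility_from_handbook(6.9, 100.0))   # ≈ 0.69 mol/L
```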
Dissolution of a salt is conceptually understood as a sequence of the two processes depicted above: breakup of the crystal lattice, followed by hydration of the separated ions. The first step consumes a large quantity of energy, something that by itself would strongly discourage solubility. But the second step releases a large amount of energy and thus has the opposite effect. Thus the net energy change depends on the sum of two large energy terms (often approaching 1000 kJ/mol) having opposite signs. Each of these terms will to some extent be influenced by the size, charge, and polarizability of the particular ions involved, and on the lattice structure of the solid. This large number of variables makes it impossible to predict the solubility of a given salt. Nevertheless, there are some clear trends for how the solubilities of a series of salts of a given anion (such as hydroxides, sulfates, etc.) change within a periodic table group. And of course, there are a number of general solubility rules — for example, that all nitrates are soluble, while most sulfides are insoluble.

Solubility usually increases with temperature - but not always. This is very apparent from the solubility-vs.-temperature plots shown here. (Some of the plots are colored differently in order to make it easier to distinguish them where they crowd together.) The temperature dependence of any process depends on its entropy change — that is, on the degree to which thermal kinetic energy can spread throughout the system. When a solid dissolves, its component molecules or ions diffuse into the much greater volume of the solution, carrying their thermal energy along with them. So we would normally expect the entropy to increase — something that makes any process take place to a greater extent at a higher temperature. So why does the solubility of cerium sulfate (green plot) diminish with temperature? Dispersal of the Ce³⁺ and SO₄²⁻ ions themselves is still associated with an entropy increase, but in this case the entropy of the water decreases even more, owing to the ordering of the H₂O molecules that attach to the Ce³⁺ ions as they become hydrated. It's difficult to predict these effects, or explain why they occur in individual cases — but they do happen.

All solids that dissociate into ions exhibit some limit to their solubilities, but those whose saturated solutions exceed about 0.01 mol L⁻¹ cannot be treated by simple equilibrium constants owing to ion-pair formation that greatly complicates their behavior. For this reason, most of what follows in this lesson is limited to salts that fall into the "sparingly soluble" category. The importance of sparingly soluble solids arises from the fact that formation of such a product can effectively remove the corresponding ions from the solution, thus driving the reaction to the right. Consider, for example, what happens when we mix solutions of strontium nitrate and potassium chloride in a 1:2 mole ratio. Although we might represent this process by

\[Sr(NO_3)_{2(aq)}+ 2 KCl_{(aq)} → SrCl_{2(aq)}+ 2 KNO_{3(aq)} \label{1}\]

the net ionic equation

\[Sr^{2+} + 2 NO_3^– + 2 K^+ + 2 Cl^– → Sr^{2+} + 2 NO_3^– + 2 K^+ + 2 Cl^–\]

indicates that no net change at all has taken place! Of course if the solution were then evaporated to dryness, we would end up with a mixture of the four salts shown in Equation \(\ref{1}\), so in this case we might say that the reaction is half-complete.
Contrast this with what happens if we combine equimolar solutions of barium chloride and sodium sulfate:

\[BaCl_{2(aq)}+ Na_2SO_{4(aq)} → 2 NaCl_{(aq)}+ BaSO_{4(s)} \label{2}\]

whose net ionic equation is

\[Ba^{2+} + \cancel{ 2 Cl^–} + \cancel{2 Na^+} + SO_4^{2–} → \cancel{2 Na^+} + \cancel{2 Cl^–} + BaSO_{4(s)}\]

which, after canceling out like terms on both sides, becomes simply

\[Ba^{2+} + SO_4^{2– } → BaSO_{4(s)} \label{3}\]

Because the formation of sparingly soluble solids is "complete" (that is, equilibria such as the one shown above for barium sulfate lie so far to the right), virtually all of one or both of the contributing ions are essentially removed from the solution. Such reactions are said to be quantitative, and they are especially important in analytical chemistry.

Some salts and similar compounds (such as some metal hydroxides) dissociate completely when they dissolve, but the extent to which they dissolve is so limited that the resulting solutions exhibit only very weak conductivities. In these salts, which otherwise act as strong electrolytes, we can treat the dissolution-dissociation process as a true equilibrium. Although this seems almost trivial now, this discovery, made in 1900 by Walther Nernst, who applied the law of mass action to the dissociation scheme of Arrhenius, is considered one of the major steps in the development of our understanding of ionic solutions.

Using silver chromate as an example, we express its dissolution in water as

\[Ag_2CrO_{4(s)} \rightarrow 2 Ag^+_{(aq)}+ CrO^{2–}_{4(aq)} \label{4a}\]

When this process reaches equilibrium, we can write (leaving out the "(aq)" suffixes for simplicity)

\[Ag_2CrO_{4(s)} \rightleftharpoons 2 Ag^+ + CrO^{2–}_{4} \label{4b}\]

The equilibrium constant is formally

\[K = \dfrac{[Ag^+]^2[CrO_4^{2–}]}{[Ag_2CrO_{4(s)}]} \label{5a}\]

But because solid substances do not normally appear in equilibrium expressions, the equilibrium constant for this process is

\[[Ag^+]^2 [CrO_4^{2–}] = K_s = 2.76 \times 10^{–12} \label{5b}\]

Because equilibrium constants of this kind are written as products of concentrations, the resulting K's are commonly known as solubility products, denoted by \(K_s\) or \(K_{sp}\). Strictly speaking, concentration units do not appear in equilibrium constant expressions. However, many instructors prefer that students show them anyway, especially when using solubility products to calculate concentrations. If this is done, \(K_s\) in Equation \(\ref{5b}\) would have units of mol³ L⁻³.

An expression such as [Ag⁺]²[CrO₄²⁻] is known generally as an ion product — this one being the ion product for silver chromate. An ion product can in principle have any positive value, depending on the concentrations of the ions involved. Only in the special case when its value is identical with \(K_s\) does it become the solubility product. A solution in which this is the case is said to be saturated. Thus when

\[[Ag^+]^2 [CrO_4^{2–}] = 2.76 \times 10^{-12}\]

at the temperature and pressure at which this value of \(K_s\) applies, we say that the "solution is saturated in silver chromate". This is a condition for solubility equilibrium, but it is not by itself sufficient. True chemical equilibrium can only occur when all components are simultaneously present. A solubility system can be in equilibrium only when some of the solid is in contact with a saturated solution of its ions. Failure to appreciate this is a very common cause of errors in solving solubility problems. If the ion product is smaller than the solubility product, the system is not in equilibrium and no solid can be present. Such a solution is said to be undersaturated.
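Anticipating the definition of supersaturation in the next paragraph, here is a minimal sketch (using the silver chromate \(K_s\) quoted above; the trial concentrations are invented) of the ion-product test that decides which condition applies.

```python
# Hedged sketch of the Q-versus-Ks comparison for Ag2CrO4 (Ks = 2.76e-12, from the text).
def saturation_state(c_Ag, c_CrO4, Ks=2.76e-12):
    Q = c_Ag**2 * c_CrO4            # ion product [Ag+]^2 [CrO4^2-]
    if abs(Q - Ks) <= 1e-3 * Ks:
        return "saturated (equilibrium, provided some solid is present)"
    return "undersaturated" if Q < Ks else "supersaturated"

print(saturation_state(1.0e-5, 1.0e-4))   # Q = 1e-14 < Ks  -> undersaturated
print(saturation_state(1.0e-3, 1.0e-4))   # Q = 1e-10 > Ks  -> supersaturated
```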
A supersaturated solution is one in which the ion product exceeds the solubility product. A supersaturated solution is not at equilibrium, and no solid can ordinarily be present in such a solution. If some of the solid is added, the excess ions precipitate out until solubility equilibrium is achieved. Deciding which condition holds is just a simple matter of comparing the ion product \(Q_s\) with the solubility product \(K_s\). For

\[Ag_2CrO_{4(s)} \rightleftharpoons 2 Ag^+ + CrO_4^{2–} \label{4ba}\]

a solution in which \(Q_s < K_s\) (i.e., \(K_s /Q_s > 1\)) is undersaturated (blue shading in the plot) and no solid will be present. The combinations of [Ag⁺] and [CrO₄²⁻] that correspond to a saturated solution (and thus to equilibrium) are limited to those described by the curved line. The pink area to the right of this curve represents a supersaturated solution.

A sample of groundwater that has percolated through a layer of gypsum (CaSO₄, \(K_s = 4.9 \times 10^{–5} = 10^{–4.3}\)) is found to be 8.4 × 10⁻⁵ M in Ca²⁺ and 7.2 × 10⁻⁵ M in SO₄²⁻. What is the equilibrium state of this solution with respect to gypsum? The ion product

\[Q_s = (8.4 \times 10^{–5})(7.2 \times 10^{-5}) = 6.0 \times 10^{–9}\]

is smaller than \(K_s\), so the ratio \(K_s/Q_s > 1\) and the solution is undersaturated in CaSO₄.

There are two principal methods of measuring solubilities, neither of which is all that reliable for sparingly soluble salts. The solubility (by which we usually mean the molar solubility) of a solid is expressed as the concentration of the "dissolved solid" in a saturated solution. In the case of a simple 1:1 solid such as AgCl, this would just be the concentration of Ag⁺ or Cl⁻ in the saturated solution. However, for a more complicated stoichiometry such as silver chromate, the solubility would be only one-half of the Ag⁺ concentration. For example, let us denote the solubility of Ag₂CrO₄ as \(S\) mol L⁻¹. Then for a saturated solution, we have [Ag⁺] = 2S and [CrO₄²⁻] = S. Substituting this into Equation \(\ref{5b}\) above,

\[(2S)^2 (S) = 4S^3 = 2.76 \times 10^{–12}\]

\[S= \left( \dfrac{K_s}{4} \right)^{1/3} = (6.9 \times 10^{-13})^{1/3} = 0.88 \times 10^{-4} \label{6a}\]

thus the solubility is \(8.8 \times 10^{–5}\; M\). Note that the relation between the solubility and the solubility product constant depends on the stoichiometry of the dissolution reaction. For this reason it is meaningless to compare the solubilities of two salts having different formulas, such as \(A_2B\) and \(AB_2\), on the basis of their \(K_s\) values.

As another example, the solubility of fluorite (CaF₂, molar mass 78.1 g/mol) is 0.0016 g per 100 mL of water. Moles of solute in 100 mL = 0.0016 g ÷ 78.1 g/mol = 2.05 × 10⁻⁵ mol, so S = 2.05 × 10⁻⁵ mol / 0.100 L = 2.05 × 10⁻⁴ M, and \(K_s\) = [Ca²⁺][F⁻]² = (S)(2S)² = 4S³ = 4 × (2.05 × 10⁻⁴)³ = 3.44 × 10⁻¹¹.

Estimate the solubility of La(IO₃)₃ and calculate the concentration of iodate in equilibrium with solid lanthanum iodate, for which \(K_s\) = 6.2 × 10⁻¹². The equation for the dissolution is

\[La(IO_3)_3 \rightleftharpoons La^{3+ }+ 3 IO_3^–\]

If the solubility is \(S\), then the equilibrium concentrations of the ions will be [La³⁺] = S and [IO₃⁻] = 3S. Then \(K_s\) = [La³⁺][IO₃⁻]³ = (S)(3S)³ = 27S⁴, so 27S⁴ = 6.2 × 10⁻¹², S = ((6.2 ÷ 27) × 10⁻¹²)^¼ = 6.92 × 10⁻⁴ M, and [IO₃⁻] = 3S = 2.08 × 10⁻³ M.

Cadmium is a highly toxic environmental pollutant that enters wastewaters associated with zinc smelting (Cd and Zn commonly occur together in ZnS ores) and with some electroplating processes. One way of controlling cadmium in effluent streams is to add sodium hydroxide, which precipitates insoluble Cd(OH)₂ (\(K_s\) = 2.5 × 10⁻¹⁴). If 1000 L of a certain wastewater contains Cd²⁺ at a concentration of 1.6 × 10⁻⁵ M, what concentration of Cd²⁺ would remain after addition of 10 L of 4 M NaOH solution?
As with most real-world problems, this is best approached as a series of smaller problems, making simplifying approximations as appropriate. Volume of treated water: 1000 L + 10 L = 1010 L. Concentration of OH⁻ on addition to the treated water: (4 M) × (10 L)/(1010 L) = 0.040 M. Initial concentration of Cd²⁺ in 1010 L of water: (1.6 × 10⁻⁵ M) × (1000/1010) ≈ 1.6 × 10⁻⁵ M. The easiest way to tackle this is to start by assuming that a stoichiometric quantity of Cd(OH)₂ is formed — that is, all of the Cd²⁺ gets precipitated. Now "turn on the equilibrium" — find the concentration of Cd²⁺ that can exist in a 0.040 M OH⁻ solution. Substitute these values into the solubility product expression:

\[K_s = [Cd^{2+}] [OH^–]^2 = 2.5 \times 10^{–14}\]

\[[Cd^{2+}] = \dfrac{2.5 \times 10^{–14}}{ 16 \times 10^{–4}} = 1.6 \times 10^{–11}\; M\]

Note that the effluent will now be very alkaline:

\[pH = 14 + \log 0.04 = 12.6\]

so in order to meet environmental standards an equivalent quantity of strong acid must be added to neutralize the water before it is released.

The simple relations between \(K_s\) and molar solubility outlined above, and the calculation examples given here, cannot be relied upon to give correct answers. Some of the reasons for this are explained in Part 2 of this lesson, and have mainly to do with incomplete dissociation of many salts and with complex formation in the presence of anions such as Cl⁻ and OH⁻. The situation is nicely described in the article by Stephen Hawkes (J. Chem. Educ. 1998, 75(9), 1179–81). See also the earlier article by Meites, Pode and Thomas (J. Chem. Educ. 1966, 43(12), 667–72). It turns out that solubility equilibria more often than not involve many competing processes, and their rigorous treatment can be quite complicated. Nevertheless, it is important that students master these over-simplified examples. However, it is also important that they are not taken too seriously!

It has long been known that the solubility of a sparingly soluble ionic substance is markedly decreased in a solution of another ionic compound when the two substances have an ion in common. This is just what would be expected on the basis of the Le Chatelier principle; whenever the process

\[CaF_{2(s)} \rightleftharpoons Ca^{2+} + 2 F^– \label{7}\]

is in equilibrium, addition of more fluoride ion (in the form of highly soluble NaF) will shift the composition to the left, reducing the concentration of Ca²⁺, and thus effectively reducing the solubility of the solid. We can express this quantitatively by noting that the solubility product

\[[Ca^{2+}][F^–]^2 = 1.7 \times 10^{–10} \label{8}\]

must always hold, even if some of the ionic species involved come from sources other than CaF₂. For example, if some quantity \(x\) of fluoride ion is added to a solution initially in equilibrium with solid CaF₂, we have [Ca²⁺] = S and [F⁻] = 2S + x, so that

\[K_s = [Ca^{2+}][ F^–]^2 = S (2S + x)^2 . \label{9a}\]

If the added fluoride concentration \(x\) is large compared with 2S, this simplifies to

\[K_s ≈ S x^2 \]

or

\[S ≈ \dfrac{K_s}{x^2} \label{9b}\]

University-level students should be able to derive these relations for ion-derived solids of any stoichiometry. The plots shown below illustrate the common ion effect for silver chromate as the chromate ion concentration is increased by addition of a soluble chromate such as \(Na_2CrO_4\). What's different about the plot on the right? If you look carefully at the scales, you will see that this one is plotted logarithmically (that is, in powers of 10). Notice how much wider a range of values can be displayed on a logarithmic plot.
The point of showing this pair of plots is to illustrate the great utility of log-concentration plots in equilibrium calculations, in which simple approximations (such as that made in Equation \(\ref{9b}\)) can yield straight lines within the range of values for which the approximation is valid.

Example: calculate the solubility of strontium sulfate (\(K_s\) = 2.8 × 10⁻⁷) in (a) pure water and (b) a 0.10 mol L⁻¹ solution of Na₂SO₄. (a) In pure water,

\[S = \sqrt{K_s} = \sqrt{ 2.8 \times 10^{–7} } = 5.3 \times 10^{–4}\]

(b) In 0.10 mol L⁻¹ Na₂SO₄, we have [Sr²⁺] = S and [SO₄²⁻] = 0.10 + S. Because S is negligible compared to 0.10 M, we make the approximation [SO₄²⁻] ≈ 0.10, so S ≈ \(K_s\)/0.10 = 2.8 × 10⁻⁶ M. This is roughly two orders of magnitude smaller than the result from (a).

Differences in solubility are widely used to selectively remove one species from a solution containing several kinds of ions. The solubility products of AgCl and Ag₂CrO₄ are 1.8 × 10⁻¹⁰ and 2.0 × 10⁻¹², respectively. Suppose that a dilute solution of AgNO₃ is added dropwise to a solution containing 0.001 M Cl⁻ and 0.01 M CrO₄²⁻. The silver ion concentrations required to precipitate the two salts are found by substituting into the appropriate solubility product expressions: [Ag⁺] = (1.8 × 10⁻¹⁰)/0.001 = 1.8 × 10⁻⁷ M to precipitate AgCl, and [Ag⁺] = ((2.0 × 10⁻¹²)/0.01)^½ = 1.4 × 10⁻⁵ M to precipitate Ag₂CrO₄. The first solid to form as the concentration of Ag⁺ increases will therefore be AgCl. Eventually the Ag⁺ concentration reaches 1.4 × 10⁻⁵ M and Ag₂CrO₄ begins to precipitate. At this point the concentration of chloride ion remaining in the solution will be 1.3 × 10⁻⁵ M, which is about 1.3% of the amount originally present.

The preceding example is the basis of the Mohr titration of chloride by Ag⁺, commonly done to determine the salinity of water samples. The equivalence point of this precipitation titration occurs when no more AgCl is formed, but there is no way of observing this directly in the presence of the white AgCl which is suspended in the container. Before the titration is begun, a small amount of K₂CrO₄ is added to the solution. Ag₂CrO₄ is red-orange in color, so its formation, which signals the approximate end of AgCl precipitation, can be detected visually.

Simple solubility equilibria of the kind described above are probably the exception rather than the rule. Such equilibria are often in competition with other reactions with such species as H⁺ or OH⁻, complexing agents, oxidation-reduction, formation of other sparingly soluble species or, in the case of carbonates and sulfites, of gaseous products. The exact treatments of these systems can be extremely complicated, involving the solution of large sets of simultaneous equations. For most practical purposes it is sufficient to recognize the general trends, and to carry out approximate calculations.

Salts of weak acids are soluble in strong acids, but strong acids will not dissolve salts of strong acids. The solubility of a sparingly soluble salt of a weak acid or base will depend on the pH of the solution. To understand the reason for this, consider a hypothetical salt MA which dissolves to form a cation M⁺ and an anion A⁻ which is also the conjugate base of a weak acid HA. The fact that the acid is weak means that hydrogen ions (always present in aqueous solutions) and M⁺ cations will both be competing for the A⁻ ions. The weaker the acid HA, the more readily this protonation takes place, thus gobbling up A⁻ ions. If an excess of H⁺ is made available by addition of a strong acid, even more A⁻ ions will be consumed, eventually reversing the dissolution equilibrium and causing the solid to dissolve. By contrast, sulfate ions react with calcium ions to form insoluble CaSO₄, and addition of a strong acid such as HCl (which is totally dissociated) has no effect, because CaCl₂ is soluble.
Although H⁺ can protonate some SO₄²⁻ ions to form hydrogen sulfate ("bisulfate") HSO₄⁻, this ampholyte acid is too weak to reverse the precipitation by drawing a significant fraction of sulfate ions out of CaSO₄.

Example: calculate the concentration of aluminum ion in a solution that is in equilibrium with aluminum hydroxide when the pH is held at 6.0. The equilibria are

\[Al(OH)_3 \rightleftharpoons Al^{3+} + 3 OH^–\]

with

\[K_s = 1.4 \times 10^{–34}\]

and

\[H_2O \rightleftharpoons H^+ + OH^–\]

with

\[K_w = 1 \times 10^{–14}\]

Substituting the equilibrium expression for the second of these into that for the first, we obtain

\[[OH^–]^3 = \left( \dfrac{K_w}{ [H^+]}\right)^3 = \dfrac{K_s}{[Al^{3+}]}\]

At pH 6.0, [H⁺] = 1.0 × 10⁻⁶, so

\[\left( \dfrac{1.0 \times 10^{–14}}{1.0 \times 10^{–6}}\right)^3 = 1.0 \times 10^{–24} = \dfrac{1.4 \times 10^{–34}}{[Al^{3+}]}\]

from which we find

\[[Al^{3+}] = 1.4 \times 10^{–10}\; M\]

If two different anions compete with a single cation to form two possible precipitates, the outcome depends not only on the solubilities of the two solids, but also on the concentrations of the relevant ions. These kinds of competitions are especially important in groundwaters, which acquire solutes from various sources as they pass through sediment layers having different compositions. As the following example shows, competing equilibria of these kinds are very important for understanding geochemical processes involving the formation and transformation of mineral deposits.

Suppose that groundwater containing 0.001 M F⁻ and 0.0018 M CO₃²⁻ percolates through a sediment containing calcite, CaCO₃. Will the calcite be replaced by fluorite, CaF₂? The two solubility equilibria are

\[\ce{CaCO3 <=> Ca^{2+} + CO3^{2-}} \quad K_s = 10^{–8.1}\]

\[\ce{CaF2 <=> Ca^{2+} + 2 F^{-}} \quad K_s = 10^{–10.4}\]

The equilibrium between the two solids and the two anions is

\[CaCO_3 + 2 F^– \rightleftharpoons CaF_2 + CO_3^{2–}\]

This is just the sum of the dissolution reaction for CaCO₃ and the reverse of that for CaF₂, so the equilibrium constant is

\[K = \dfrac{[CO_3^{2–}]}{ [F^–]^2} = \dfrac{10^{–8.1}}{ 10^{–10.4}} = 200\]

That is, the two solids can coexist only if the reaction quotient Q ≤ 200. Substituting the given ion concentrations, we find that

\[Q = \dfrac{0.0018}{(0.001)^2} = 1800\]

Since Q > K, we can conclude that the calcite will not change into fluorite.

Most transition metal ions possess empty orbitals that are sufficiently low in energy to be able to accept electron pairs from electron donors, resulting in the formation of a covalently-bound complex ion. Even neutral species that have a nonbonding electron pair can bind to metal ions in this way. Water is an active electron donor of this kind, so aqueous solutions of ions such as Fe³⁺ and Cu²⁺ exist as the octahedral complexes Fe(H₂O)₆³⁺ and Cu(H₂O)₆²⁺, respectively. Many of the remarks made above about the relation between \(K_s\) and solubility also apply to calculations involving complex formation; see Stephen Hawkes' article (arguing that such calculations are oversimplified "... to the point of absurdity ... and should not be taught" in introductory courses). However, it is very important that you understand the principles outlined in this section. H₂O is only one possible electron donor; NH₃, CN⁻ and many other species (known collectively as ligands) possess lone pairs that can occupy vacant orbitals on a metallic ion. Many of these bind much more tightly to the metal than does H₂O, which will undergo displacement and substitution by one or more of these ligands if they are present in sufficiently high concentration.
If a sparingly soluble solid is placed in contact with a solution containing a ligand that can bind to the metal ion much more strongly than H₂O, then formation of a complex ion will be favored and the solubility of the solid will be greater. Perhaps the most commonly seen example of this occurs when ammonia is added to a solution of copper(II) nitrate, in which the Cu²⁺ ion is itself the hexaaquo complex ion shown at the left. Because ammonia is a weak base, the first thing we observe is formation of a cloudy precipitate of Cu(OH)₂ in the blue solution. As more ammonia is added, this precipitate dissolves and the solution turns an intense deep blue, which is the color of hexamminecopper(II) and the various other related mixed aquo-ammine species, such as Cu(H₂O)₅(NH₃)²⁺, Cu(H₂O)₄(NH₃)₂²⁺, etc.

In many cases, the complexing agent and the anion of the sparingly soluble salt are identical. This is particularly apt to happen with insoluble chlorides, and it means that addition of chloride to precipitate a metallic ion such as Ag⁺ will produce a precipitate at first, but after excess Cl⁻ has been added, the precipitate will redissolve as complex ions are formed.

In this section, we discuss solubility equilibria that relate to some very commonly-encountered anions of metallic salts. These are especially pertinent to the kinds of separations that most college students are required to carry out (and understand!) in their first-year laboratory courses. Metallic oxides and hydroxides both form solutions containing OH⁻ ions. For example, the solubilities of the [sparingly soluble] oxide and hydroxide of magnesium are represented by

\[Mg(OH)_{2(s)} → Mg^{2+} + 2 OH^– \label{10}\]

\[MgO_{(s)} + H_2O → Mg^{2+} + 2 OH^– \label{11}\]

If you write out the solubility product expressions for these two reactions, you will see that they are identical in form and value. Recall that pH = –log[H⁺], so that [H⁺] = 10^(–pH). One might naïvely expect that the dissolution of an oxide such as MgO would yield as one of its products the oxide ion O²⁻. But the oxide ion is such a strong base that it grabs a proton from water, forming two hydroxide ions instead:

\[O^{2–} + H_2O → 2 OH^–\]

This is an example of the rule that the hydroxide ion is the strongest base that can exist in aqueous solution.

Carbonates present a special case: the bicarbonate ion can disproportionate, and on heating it decomposes with loss of CO₂, which is why carbonate solubility equilibria can involve a gaseous product:

\[2 HCO_3^– \rightleftharpoons H_2CO_3 + CO_3^{2–}\]

\[2 HCO_3^– → H_2O + CO_3^{2–} + CO_{2(g)}\]

A related competition governs the behavior of soaps in hard water: sodium stearate dissolves and dissociates freely, but the stearate ion is precipitated by Ca²⁺ as insoluble "soap scum":

\[C_{17}H_{35}COONa → C_{17}H_{35}COO^– + Na^+\]

\[2 C_{17}H_{35}COO^– + Ca^{2+} → (C_{17}H_{35}COO)_2Ca_{(s)}\]

Finally, a caution about the \(K_s\) values that are available: especially suspect are many of those for highly insoluble salts, which are more difficult to measure. A table showing the variations in \(K_{sp}\) values for the same salts among ten textbooks was published by Clark and Bonikamp in J. Chem. Educ. 1998, 75(9), 1183–85. A good example that used a variety of modern techniques to measure the solubility of silver chromate was published by A. L. Jones et al. in the Australian Journal of Chemistry, 1971, 24, 2005–12.

The complications mentioned earlier (incomplete dissociation and complex formation) can be illustrated by two final sets of equations. Dissolved cadmium iodide is only partially dissociated; much of it remains in solution as the undissociated (but dissolved) salt and as ion pairs such as \(CdI^+_{(aq)}\):

\[CdI_{2(s)} → Cd^{2+} + 2 I^–\]

\[Cd^{2+}_{(aq)} + 2 I^–_{(aq)} → CdI_{2(aq)}\]

Similarly, the hydrated Fe³⁺ ion undergoes stepwise hydrolysis, forming a series of hydroxo complexes:

\[Fe(H_2O)_6^{3+} → Fe(H_2O)_5(OH)^{2+}+H^+\]

\[Fe(H_2O)_5(OH)^{2+}→ Fe(H_2O)_4(OH)_2^+→ Fe(H_2O)_3(OH)_3 → Fe(H_2O)_2(OH)_4^-\]
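To close this overview, here is a minimal sketch that collects the molar-solubility arithmetic used in the worked examples above into a single formula for a salt of general stoichiometry; as the discussion stresses, such idealized calculations cannot be trusted for salts that form ion pairs or complexes.

```python
# Hedged sketch: for a salt MmXn of solubility S, Ks = (m*S)^m * (n*S)^n, so
#   S = (Ks / (m**m * n**n)) ** (1 / (m + n)).
# This ignores ion pairing, hydrolysis and complex formation (see the caveats above).
def molar_solubility_from_Ks(Ks, m, n):
    return (Ks / (m**m * n**n)) ** (1.0 / (m + n))

print(molar_solubility_from_Ks(2.76e-12, 2, 1))   # Ag2CrO4  -> ≈ 8.8e-5 M
print(molar_solubility_from_Ks(6.2e-12, 1, 3))    # La(IO3)3 -> ≈ 6.9e-4 M
```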
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Chemometrics_Using_R_(Harvey)
Although you may not yet know what we mean by the term chemometrics, you almost certainly make routine use of chemometric techniques in your classes and labs: reporting an average result for several trials of an experiment, or creating a calibration curve and using it to find an analyte's concentration, are two examples of chemometric methods of analysis with which you almost certainly are familiar. The goal of this textbook is to provide an introduction to chemometrics suitable for the undergraduate chemistry curriculum at the junior or senior level; indeed, much of this textbook's content, including many of the examples and exercises, was developed to support Chem 351: Chemometrics, a course that has been part of the analytical curriculum at DePauw University since 2001 and that has been grounded in R since 2005.
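The book itself works in R; purely as an illustration of the two routine tasks just mentioned (averaging replicate trials and reading a concentration off a linear calibration curve), here is a short hedged sketch in Python with invented data.

```python
# Hedged sketch with invented data; requires Python 3.10+ for statistics.linear_regression.
import statistics

replicates = [0.412, 0.418, 0.409]          # e.g. absorbances from three trials
print(statistics.mean(replicates))          # the averaged result

# calibration standards: concentrations and the signals they produce
conc   = [0.0, 1.0, 2.0, 4.0]
signal = [0.002, 0.101, 0.199, 0.402]
slope, intercept = statistics.linear_regression(conc, signal)

unknown_signal = 0.250
print((unknown_signal - intercept) / slope)  # estimated analyte concentration
```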
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Organometallic_Chemistry_(Evans)
Organometallic (OM) chemistry is the study of compounds containing, and reactions involving, metal-carbon bonds. The metal-carbon bond may be transient or temporary, but if one exists during a reaction or in a compound of interest, we're squarely in the domain of organometallic chemistry. Despite the definitional importance of the M-C bond, bonds between metals and the other common elements of organic chemistry also appear in OM chemistry: metal-nitrogen, metal-oxygen, metal-halogen, and even metal-hydrogen bonds all play a role. Metals cover a vast swath of the periodic table and include the alkali metals (group 1), the alkaline earth metals (group 2), the transition metals (groups 3-12), the main group metals (groups 13-15, "under the stairs"), and the lanthanides and actinides. We will focus most prominently on the behavior of the transition metals, so called because they cover the transition between the electropositive group 2 elements and the more electron-rich main group elements.
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Book%3A_Inorganic_Chemistry_(Saito)
Inorganic chemistry is of fundamental importance not only as a basic science but also as one of the most useful sources for modern technologies. Elementary substances and solid-state inorganic compounds are widely used in the core of the information, communication, automotive, aviation and space industries as well as in traditional ones. Inorganic compounds are also indispensable in the frontier chemistry of organic synthesis using metal complexes, homogeneous catalysis, bioinorganic functions, etc. One of the reasons for the rapid progress of inorganic chemistry is the development of the structural determination of compounds by X-ray and other analytical instruments. It has now become possible to account for structure-function relationships to a considerable extent by the accumulation of structural data on inorganic compounds. It is no exaggeration to say that a revolution in inorganic chemistry is occurring. We look forward to the further development of inorganic chemistry in the near future. This textbook describes important compounds systematically along the periodic table, and readers are expected to learn typical ones both in the molecular and solid states. The necessary theories to explain the properties of these compounds come from physical chemistry, and basic concepts for learning inorganic chemistry are presented in the first three chapters.