Cholinergic Drugs II - Anticholinesterase Agents & Acetylcholine Antagonists
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Medicinal_Chemistry/Colinergic_Drugs_II_-_Anticholinesterase_Agents_and_Acetylcholine_Antagonists
Acetylcholine is inactivated by the enzyme acetylcholinesterase, which is located at cholinergic synapses and breaks down the acetylcholine molecule into choline and acetate. Three particularly well-known drugs, neostigmine, physostigmine, and diisopropyl fluorophosphate, inactivate acetylcholinesterase so that it cannot hydrolyze the acetylcholine released at the nerve ending. As a result, acetylcholine accumulates with successive nerve impulses, and large amounts of acetylcholine can repetitively stimulate receptors. In view of the widespread distribution of cholinergic neurons, it is not surprising that the anticholinesterase agents as a group have received more extensive application as toxic agents, in the form of agricultural insecticides and potential chemical-warfare "nerve gases," than as therapeutic agents. Nevertheless, several members of this class of compounds are clinically useful.

The active center of acetylcholinesterase consists of a negative subsite, which attracts the quaternary group of choline through both coulombic and hydrophobic forces, and an esteratic subsite, where nucleophilic attack occurs on the acyl carbon of the substrate. The catalytic mechanism resembles that of other serine esterases, in which a serine hydroxyl group is rendered highly nucleophilic through a charge-relay system involving the close apposition of an imidazole group and, presumably, a carboxyl group on the enzyme. During enzymatic attack on the ester, a tetrahedral intermediate between enzyme and ester is formed that collapses to an acetyl-enzyme conjugate with the concomitant release of choline. The acetyl enzyme is labile to hydrolysis, which results in the formation of acetate and active enzyme. Acetylcholinesterase is one of the most efficient enzymes known and has the capacity to hydrolyze \(3 \times 10^5\) acetylcholine molecules per molecule of enzyme per minute; this is equivalent to a turnover time of 150 microseconds.

Drugs such as neostigmine and pyridostigmine that have a carbamyl ester linkage are hydrolyzed by acetylcholinesterase, but much more slowly than acetylcholine, because they form more stable intermediates. Therefore, these drugs inactivate acetylcholinesterase for up to several hours, after which they are displaced and acetylcholinesterase is reactivated. These drugs are used for their effects on skeletal muscle and the eye (pupillary constriction and decreased intraocular pressure) and for the treatment of atropine poisoning. On the other hand, diisopropyl fluorophosphate, which has military potential as a powerful nerve-gas poison, inactivates acetylcholinesterase for weeks, which makes it a particularly lethal poison.

Generally, the pharmacological properties of anticholinesterase agents can be predicted merely by knowing the loci where acetylcholine is released physiologically by nerve impulses and the responses of the corresponding effector organs to acetylcholine. While this is true in the main, the diverse locations of cholinergic synapses increase the complexity of the response. Potentially, the anticholinesterase agents can produce all of the following effects: 1) stimulation of muscarinic receptor responses at autonomic effector organs; 2) stimulation, followed by paralysis, of all autonomic ganglia and skeletal muscle; and 3) stimulation, with occasional subsequent depression, of cholinergic receptor sites in the CNS.
However, with smaller doses, particularly those employed therapeutically, several modifying factors are significant.

Neostigmine and pyridostigmine are among the principal anticholinesterases. These drugs have only a few clinical uses, mainly in augmenting gastric and intestinal contractions (in the treatment of obstructions of the digestive tract), in generally augmenting muscular contractions (in the treatment of myasthenia gravis), and in constricting the eye pupils (in the treatment of glaucoma). Other anticholinesterases in larger doses, however, are widely used as toxins that achieve their effects by causing continual stimulation of the parasympathetic nervous system. This action produces a decrease in the rate of destruction of acetylcholine in the synaptic cleft and hence an increase in the amount of transmitter available to interact with the receptors. Parathion and malathion are thus highly effective agricultural insecticides, while tabun and sarin are nerve gases used in chemical warfare to induce nausea, vomiting, convulsions, and death in humans.

Atropine, a naturally occurring alkaloid of Atropa belladonna, the deadly nightshade, inhibits the actions of acetylcholine on autonomic effectors innervated by postganglionic cholinergic nerves as well as on smooth muscles that lack cholinergic innervation. Since atropine antagonizes the muscarinic actions of acetylcholine, it is known as an antimuscarinic agent. Evidence shows that atropine and related compounds compete with muscarinic agonists for identical binding sites on muscarinic receptors.

In general, antimuscarinic agents have only a moderate effect on the actions of acetylcholine at nicotinic receptor sites. Thus, at autonomic ganglia, where transmission primarily involves an action of acetylcholine on nicotinic receptors, atropine produces only partial block. At the neuromuscular junction, where the receptors are exclusively nicotinic, extremely high doses of atropine or related drugs are required to cause any degree of blockade. In the central nervous system, cholinergic transmission appears to be predominantly nicotinic in the spinal cord and both nicotinic and muscarinic in the brain. Many of the CNS effects of atropine-like drugs are probably attributable to their central antimuscarinic actions.

When used as premedication for anaesthesia, atropine decreases bronchial and salivary secretions, blocks the bradycardia associated with some drugs used in anaesthesia such as halothane, suxamethonium and neostigmine, and also helps prevent bradycardia from excessive vagal stimulation. There is usually an increase in heart rate and sometimes a tachycardia, as well as inhibition of secretions (causing a dry mouth) and relaxation of smooth muscle in the gut, urinary tract and biliary tree. Since atropine crosses the blood-brain barrier, CNS effects in the elderly may include amnesia, confusion and excitation. Pupillary dilatation and paralysis of accommodation occur, with an increase in intraocular pressure, especially in patients with glaucoma. Occasionally, small intravenous doses may be accompanied by slowing of the heart rate due to a central effect; this resolves with an extra increment of intravenous atropine. Because atropine blocks parasympathetic cholinergic transmission, signs of parasympathetic block may occur, such as dryness of the mouth, blurred vision, increased intraocular tension and urinary retention.

Sarin is a nerve agent in the organophosphate family. It is dispersed in droplet or mist form.
Upon inhalation, for instance, the symptoms (in order of occurrence) include: runny nose, bronchial secretions, tightness in the chest, dimming of vision, pin-point pupils, drooling, excessive perspiration, nausea, vomiting, involuntary defecation and urination, muscle tremors, convulsions, coma, and death. The primary treatment for nerve agents is atropine sulfate. It is commonly carried in auto-injectors by military personnel in doses of 1-2 mg. However, in many cases, massive doses may be necessary to reverse the effects of the anticholinesterase agents; frequently, 20-40 mg of atropine may be needed.

In 1799 the famous Prussian explorer and scientist Baron von Humboldt discovered a potent drug called curare. On an expedition into the jungles of Venezuela, he watched an Indian hunter bring down a large animal with a single shot from his bow and arrow. The arrow had been poisoned with curare, a potion derived from jungle plants with two curious properties. Curare injected into the bloodstream, as it was when hunting animals, was deadly: it immobilized the body, attacked the vital organs, and caused death almost instantaneously. Humboldt discovered the second property of curare in a more dramatic fashion. He became sick, and a native witch doctor forced Humboldt to drink some curare that had been diluted with water. Terrified that he was going to die, Humboldt was surprised to find that after drinking the curare, he felt significantly better. Curare, when it was diluted and taken orally, he discovered, could have a positive medicinal value without causing any damage to vital organs.

Curare is a generic term for various South American arrow poisons. The main active ingredient in curare is d-tubocurarine. In brief, d-tubocurarine is an antagonist of the cholinergic receptor sites at the postjunctional membrane and thereby competitively blocks the transmitter action of acetylcholine (a numerical sketch of this kind of competition appears at the end of this page). When the drug is applied directly to the end-plate of a single isolated muscle fiber under microscopic control, the muscle cell becomes insensitive to motor-nerve impulses and to directly applied acetylcholine; however, the end-plate region and the remainder of the muscle fiber membrane retain their normal sensitivity to the application of potassium ions, and the muscle fiber still responds to direct electrical stimulation. Because acetylcholine release into the neuromuscular junction is what initiates contraction, curare causes muscle relaxation and paralysis.

There are several clinical applications for neuromuscular blockade. The most important by far is the induction of muscle relaxation during anesthesia for effective surgery. Without such drugs, deeper anesthesia, requiring more anesthetic, would be needed to achieve the same degree of muscle relaxation; tracheal intubation would also be impossible because of the strong reflex response to tube insertion. It is the decreased need for anesthetics, however, that represents increased surgical safety. Neuromuscular blockers also find limited utility in convulsive situations, such as those precipitated by tetanus infections, and to minimize injury to patients undergoing electroconvulsive therapy. Manipulation of fractured or dislocated bones may also be aided by such drugs.

Botulinum toxin is a bacterial poison that prevents the release of acetylcholine by all types of cholinergic nerve terminals.
Apparently, the toxin blocks the release of vesicular acetylcholine at the preterminal portion of the axon, but why this effect is confined to cholinergic fibers is not known.

Edward B. Walker (Weber State University)

Cholinergic Drugs II - Anticholinesterase Agents & Acetylcholine Antagonists is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
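Since both atropine and d-tubocurarine act as competitive antagonists, their behavior can be illustrated numerically with the standard Gaddum relation for competitive binding. The Python sketch below is purely illustrative: the function name and all constants are our own, not measured values for acetylcholine or any antagonist discussed above.

```python
# Sketch: competitive antagonism at a receptor (Gaddum equation).
# All constants are illustrative, not measured pharmacological values.

def fractional_occupancy(agonist, k_a, antagonist=0.0, k_b=1.0):
    """Fraction of receptors occupied by agonist in the presence of a
    competitive antagonist: occ = (A/KA) / (1 + A/KA + B/KB)."""
    a = agonist / k_a
    b = antagonist / k_b
    return a / (1.0 + a + b)

# Raising the agonist concentration can restore occupancy, mirroring how
# accumulating acetylcholine can surmount a competitive block (and why
# large atropine doses counter anticholinesterase poisoning).
for ach in (0.1, 1.0, 10.0, 100.0):
    free = fractional_occupancy(ach, k_a=1.0)
    blocked = fractional_occupancy(ach, k_a=1.0, antagonist=10.0, k_b=1.0)
    print(f"[ACh]={ach:6.1f}  occupancy alone={free:.2f}  with antagonist={blocked:.2f}")
```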
DNA: Double Helix
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/DNA/DNA%3A_Double_Helix
The secondary structure of DNA is actually very similar to the secondary structure of proteins. The single alpha-helix structure of proteins, held together by hydrogen bonds, was discovered with the aid of X-ray diffraction studies, and the X-ray diffraction patterns for DNA show somewhat similar patterns.

In addition, chemical studies by E. Chargaff indicate several important clues about the structure of DNA: in the DNA of all organisms, the amount of adenine equals the amount of thymine, and the amount of guanine equals the amount of cytosine.

The double helix in DNA consists of two right-handed polynucleotide chains that are coiled about the same axis. The heterocyclic amine bases project inward toward the center so that the base of one strand interacts or pairs with a base of the other strand. According to the chemical and X-ray data and model-building exercises, only specific heterocyclic amine bases may be paired.

The Base Pairing Principle is that adenine pairs with thymine (A - T) and guanine pairs with cytosine (G - C).

The base pairing is called complementary because there are specific geometry requirements in the formation of hydrogen bonds between the heterocyclic amines. Heterocyclic amine base pairing is an application of the hydrogen bonding principle. In the structures for the complementary base pairs given in the graphic on the left, notice that the thymine-adenine pair interacts through two hydrogen bonds, represented as (T=A), and that the cytosine-guanine pair interacts through three hydrogen bonds, represented as (C≡G). Although other base pairing-hydrogen bonding combinations may be possible, they are not utilized because the bond distances do not correspond to those given by the base pairs already cited. The diameter of the helix is 20 Angstroms.

The double-stranded helical model for DNA is shown in the graphic on the left. The easiest way to visualize DNA is as an immensely long rope ladder, twisted into a corkscrew shape. The sides of the ladder are alternating sequences of deoxyribose and phosphate (the backbone), while the rungs of the ladder (the bases) are made in two parts, with each part firmly attached to a side of the ladder. The parts in the rungs are heterocyclic amines held in position by hydrogen bonding. Although most DNA exists as open-ended double helices, some bacterial DNA has been found as a cyclic helix. Occasionally, DNA has also been found as a single strand.

QUES. Describe the structure of the double helix of DNA in your own words, including the terms: backbone, heterocyclic amines, complementary base pairings, hydrogen bonding, deoxyribose, phosphate.

DNA: Double Helix is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
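The base-pairing principle described above lends itself to a short illustration. The following minimal Python sketch (the function name is our own) shows how one strand fully determines its complement:

```python
# Minimal sketch of the base-pairing principle: A pairs with T and
# G pairs with C, so one strand determines the other completely.

PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(strand: str) -> str:
    """Return the complementary DNA strand; the strands are antiparallel,
    so the complement is reported 5'->3' by reversing."""
    return "".join(PAIRS[base] for base in reversed(strand))

print(complementary_strand("ATGCC"))  # GGCAT
```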
DNA: Mutations
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/DNA/DNA%3A_Mutations
This page takes a very brief look at what happens if the code in DNA becomes changed in some way, and the effect that would have on the proteins it codes for. Copying errors when DNA replicates or is transcribed into RNA can cause changes in the sequence of bases which makes up the genetic code. Radiation and some chemicals can also cause changes. The examples which follow show some of the easier-to-understand effects of this.

Jim Clark (Chemguide.co.uk)

This page titled DNA: Mutations is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jim Clark.
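A small worked example can make the effect of a copying error concrete. The Python sketch below uses a tiny, deliberately incomplete subset of the standard genetic code (DNA coding-strand codons; all names are our own) to show how a single-base change can swap an amino acid, do nothing, or truncate the protein:

```python
# Sketch of how a single-base copying error can change a protein.
# Only a few codons from the standard genetic code are included here.

CODONS = {"GAA": "Glu", "GTA": "Val", "GAG": "Glu", "TAA": "STOP"}

def translate(dna: str) -> list:
    """Translate a DNA coding sequence three bases at a time."""
    return [CODONS.get(dna[i:i+3], "?") for i in range(0, len(dna), 3)]

print(translate("GAA"))  # ['Glu']   original codon
print(translate("GTA"))  # ['Val']   A->T substitution: different amino acid
print(translate("GAG"))  # ['Glu']   A->G substitution: silent, same amino acid
print(translate("TAA"))  # ['STOP']  G->T substitution: premature stop
```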
DNA: Replication
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/DNA/DNA%3A_Replication
The hereditary material in a cell is coded in the sequence of the heterocyclic amines of DNA. There are normally 46 strands of DNA, called chromosomes, in human cells. Specific regions on each chromosome, called genes, contain the hereditary information which distinguishes individuals from each other. The genes also contain the coded information required for the synthesis of proteins and enzymes needed for the normal functions of the cells. Bacterial cells may have 1000 genes, while the human genome contains roughly 20,000 protein-coding genes. A single E. coli (bacterial) chromosome of double-helical DNA consists of 3.4 million base pairs.

Prior to cell division, the DNA material in the original cell must be duplicated so that after cell division, each new cell contains the full amount of DNA material. The process of DNA duplication is usually called replication. The replication is termed semiconservative since each new cell contains one strand of original DNA and one newly synthesized strand of DNA. The original polynucleotide strand of DNA serves as a template to guide the synthesis of the new complementary polynucleotide of DNA. A template is a guide that may be used, for example, by a carpenter to cut intricate designs in wood.

The DNA single-strand template serves to guide the synthesis of a complementary strand of DNA, a process carried out by DNA polymerase III.

Several enzymes and proteins are involved with the replication of DNA. At a specific point, the double helix of DNA is unwound by the enzyme helicase, possibly in response to the initial synthesis of a short RNA strand. Proteins are available to hold the unwound DNA strands in position. Each strand of DNA then serves as a template to guide the synthesis of its complementary strand of DNA. DNA polymerase III joins the appropriate nucleotide units together. The replication process is shown in the graphic on the left.

Template #1 guides the formation of a new complementary #2 strand. The DNA template guides the formation of a complementary strand of DNA, not an exact copy of itself. For example, the heterocyclic amine adenine (A) on the template guides the incorporation of only thymine (T) into the newly synthesized strand. The replication of DNA is guided by the base-pairing principle, so that no other heterocyclic amine nucleotide can hydrogen bond and fit correctly with adenine. The next heterocyclic amine, cytosine (C), guides the incorporation of guanine (G), while similar arguments apply to the other bases. Exactly the opposite reaction occurs using template #2, where cytosine (C) guides the incorporation of guanine (G) to form a new complementary strand.

It is so important that the cells duplicate the DNA genetic material exactly that the sequence of newly synthesized nucleotides is checked by two different polymerase enzymes. The second enzyme can check for, and actually correct, any mistake of mismatched base pairs in the sequence. The mismatched nucleotides are hydrolyzed and cut out, and new, correct ones are inserted.

The details of DNA replication are not thoroughly understood, because so many molecules are involved in the process. This example focuses on the bacteriophage T7 DNA replication complex because it consists of relatively few proteins. The mechanism of T7 DNA replication is a good model for other DNA replication. This model is based on the recent work of Doublié et al.
In the graphic below, the DNA polymerase enzyme is shown with a short section of DNA. The green color represents the DNA template, while the magenta color represents the newly synthesized DNA. In the close-up, a guanine triphosphate nucleotide (dGTP) is shown in the active site, guided by the cytosine in the template through matching hydrogen bonds. Only a few of the enzyme protein side-chain interactions with the nucleotide are shown. Magnesium ions are also active in stabilizing the triphosphate through ionic interactions. Eventually, two of the phosphates are hydrolyzed, and the remaining phosphate is bonded in a phosphate ester bond to the deoxyribose on the end of the newly forming DNA chain.

DNA: Replication is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
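Semiconservative replication can also be sketched in a few lines of code. The illustrative Python snippet below (names are our own; antiparallel orientation, priming, and proofreading are ignored for simplicity) shows each parent strand templating a new complementary strand, so every daughter duplex keeps one original strand:

```python
# Sketch of semiconservative replication: each parent strand templates a new
# complementary strand, so each daughter duplex keeps one original strand.

PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def replicate(duplex):
    """Given a parent duplex (strand1, strand2), return the two daughter
    duplexes produced by template-directed synthesis."""
    strand1, strand2 = duplex
    new_vs_1 = "".join(PAIRS[b] for b in strand1)  # synthesized against strand 1
    new_vs_2 = "".join(PAIRS[b] for b in strand2)  # synthesized against strand 2
    return (strand1, new_vs_1), (new_vs_2, strand2)

parent = ("ATGC", "TACG")  # complementary strands, shown parallel for simplicity
daughter1, daughter2 = replicate(parent)
print(daughter1)  # ('ATGC', 'TACG') - one old strand, one new
print(daughter2)  # ('ATGC', 'TACG')
```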
DNA History
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/DNA/DNA_Structure/DNA_History
DNA is the primary agent for all genetic material. This seems like common sense today, but it was not always accepted. In the early 1900's many people thought that protein must be the genetic material responsible for inherited characteristics. One of the reasons behind this belief was the knowledge that proteins were quite complex molecules and therefore must be specified by molecules of equal or greater complexity (i.e. other proteins). DNA was known to be a relatively simple molecule in comparison to proteins, and therefore it was hard to understand how a complex molecule (a protein) could be determined by a simpler molecule (DNA). What were the key experiments which identified DNA as the primary genetic material?

Diplococcus pneumoniae, or pneumococcus, is a nasty little bacterium which, when injected into mice, will cause pneumonia and death in the mouse. The bacterium carries a capsular polysaccharide on its surface which protects it from host defences. Occasionally, variants (mutants) of the bacteria arise which have a defect in the production of the capsular polysaccharide. The mutants have two characteristics: 1) They are avirulent, meaning that without proper capsular polysaccharide they are unable to mount an infection in the host (they are destroyed by the host defences), and 2) Due to the lack of capsular polysaccharide the surface of the mutant bacteria appears rough under the microscope and can be distinguished from the wild type bacteria (whose surface appears smooth).

The virulent smooth wild type pneumococcus can be heat treated and rendered avirulent (it still appears smooth under the microscope, however). The outcomes of Griffith's injections were:

w.t. (smooth) + mouse = dead mouse
mutant (rough) + mouse = live mouse
heat treated w.t. (smooth) + mouse = live mouse
heat treated w.t. (smooth) + mutant (rough) + mouse = dead mouse

In the last case, when the bacteria were recovered from the cold lifeless mouse, they were smooth virulent pneumococcus (i.e. indistinguishable from wild type). The overall conclusion from these experiments was that there was a "transforming agent" in the heat treated type I bacteria which transformed the live mutant (rough) type II bacteria so that they could produce type I capsular polysaccharide. This "transforming agent" could have been the DNA, the protein, or something completely different.

The experiment of Griffith could not be taken further until methods were developed to separate and purify DNA and protein cellular components. O.T. Avery utilized methods to extract relatively pure DNA from pneumococcus to determine whether it was the "transforming agent" observed in Griffith's experiments. In the basic experiment, isolation of bacteria from the dead mouse showed that they were type I w.t. (smooth) bacteria. In a more sophisticated experiment, purified type I DNA was divided into two aliquots. One aliquot was treated with DNAse, an enzyme which non-specifically degrades DNA. The other aliquot was treated with trypsin, a protease which (relatively) non-specifically degrades proteins.

The work of Avery provided strong evidence that the "transforming agent" was in fact DNA (and not protein). However, not everyone was convinced. Some people felt that a residual amount of protein might remain in the purified DNA, even after trypsin treatment, and could be the "transforming agent".

A.D. Hershey and M. Chase studied T2, a virus which attacks the bacterium E. coli. The virus, or phage, looks like a tiny lunar landing module. The viral particles adsorb to the surface of the E. coli cells. It was known that some material then leaves the phage and enters the cell. The "empty" phage particles on the surface of the cells can be physically removed by putting the cells into a blender and whipping them up. In any case, some 20 minutes after the phage adsorb to the surface of the bacteria, the bacteria burst open (lysis) and release a multitude of progeny virus.

If the media in which the bacteria grew (and were infected) included 32P labeled ATP, progeny phage could be recovered with this isotope incorporated into their DNA (normal proteins contain only hydrogen, nitrogen, carbon, oxygen, and sulfur atoms). Likewise, if the media contained 35S labeled methionine, the resulting progeny phage could be recovered with this isotope present only in their protein components (normal DNA contains only hydrogen, nitrogen, carbon, oxygen and phosphorus atoms).

Phage were grown in the presence of either 32P or 35S isotopic labels.

1) E. coli were infected with 35S labeled phage. After infection, but prior to cell lysis, the bacteria were whipped up in a blender and the phage particles were separated from the bacterial cells. The isolated bacterial cells were cultured further until lysis occurred. The released progeny phage were isolated. Where the 35S label went:

Adsorbed phage shells: 85%
Infected cells (prior to lysis): 15%
Lysed cell debris: 15%
Progeny phage: <1%

2) E. coli were infected with 32P labeled phage. The same steps as in 1) above were performed, and the distribution of the 32P label was followed in the same way.

The material which was being transferred from the phage to the bacteria during infection appeared to be mainly DNA. Although the results were not entirely unambiguous, they provided additional support for the view that DNA was the "stuff" of genetic inheritance.

DNA History is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
DNA Structure
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/DNA/DNA_Structure
Deoxyribonucleic acid (DNA) is a macromolecule that consists of deoxyribonucleotide monomers linked to each other by phosphodiester bonds. The sequence of these nucleotides translates into a genetic blueprint by which a cell can synthesize proteins.

In cells, DNA exists as a double-stranded molecule that twists around its axis to form a helical structure. Each strand is a complement to the other; the nucleotides on one strand hydrogen-bond with complementary nucleotides on the opposite strand. The double helical "twist" occurs because of the angular geometry of each bonded nucleotide.

Uncoiled DNA can exist in either a linear form or as a closed-loop molecule (plasmid). Each complete turn of the double helix spans either 10.7 base pairs (A-DNA) or 10.5 base pairs (B-DNA), so in order for a plasmid to close stably it must be a multiple of 10.7 or 10.5 base pairs in length (see the short numerical sketch at the end of this page).

There are three major geometric configurations of DNA: B-DNA, A-DNA, and Z-DNA.

B-DNA is the "generic" double helical form of DNA that is typically presented in introductory biology textbooks and on television. It is the form that predominates in vivo (in live cells), and is unmethylated. Every complete turn of the helix spans ~10.5 base pairs. B-DNA is right-handed and has a barely noticeable tilt from its vertical axis. B-DNA has a wide major groove at which proteins can bind.

Unlike B-DNA, A-DNA has a narrower major groove and a wider minor groove. Consequently, A-DNA binds proteins at the minor groove. A-DNA also has a significant tilt from its vertical axis, and each helical rotation requires 10.7 base pairs. A-DNA typically forms either when DNA duplexes with RNA or at low water concentrations.

Z-DNA earns its name from a zigzag-like appearance. It is narrower than either B-DNA or A-DNA, and is left-handed, unlike the other two forms (both right-handed). Z-DNA also has a tilt from its vertical axis, but not as great as A-DNA. Z-DNA is sometimes formed in vivo, often due to alternating guanine and cytosine nucleotides or as a result of methylation.

Without supporting proteins, DNA undergoes "supercoiling" and collapses onto itself. This is because its double-helical nature creates a torsional strain, similar to a twisted piece of rope.

In eukaryotic cells, linear DNA is packaged into a dense material called chromatin. This prevents supercoiling, keeps the DNA precisely organized, and prevents disastrous shearing during cell division. Specifically, DNA is wrapped around a histone octamer, forming a nucleosome. Below is a visual representation of the first degree of DNA packaging, where multiple nucleosomes span the DNA molecule like beads on a string.

Nucleosomes interact with each other to form what is called the 30-nm fiber (referring to the thickness of the structure). Typically, less active genes are packed in a 30-nm fiber. When cell division occurs, the 30-nm fiber is scaffolded onto more structural proteins, until eventually the chromatin is packed into structures known as chromosomes.

Plasmids are typically found in bacteria; however, some eukaryotes, such as the yeast Saccharomyces cerevisiae, also contain plasmids. Histones are not found in prokaryotes, and DNA is not packaged the way it is in eukaryotic cells. Therefore, plasmids are typically found in supercoiled form. Two common shapes are the toroid and the plectoneme.

DNA shape affects how/whether proteins can bind to it, which has important consequences for gene transcription and regulation, since condensed DNA cannot bind polymerases.
DNA shape also affects its molecular mobility. This is an important consideration in gel electrophoresis, where linear DNA travels faster than plasmids of the same length in base pairs, and supercoiled plasmids travel faster than uncoiled plasmids.

DNA Structure is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
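The closure argument referenced above can be checked with a little arithmetic. The Python sketch below (purely illustrative; names are our own) computes how many helical turns a plasmid of a given length contains; any leftover fraction of a turn is twist that the closed loop must absorb as supercoiling:

```python
# Sketch of the plasmid-closure argument: a loop closes without torsional
# strain only if its length is close to a whole number of helical turns.

def turns(n_base_pairs, bp_per_turn=10.5):  # 10.5 for B-DNA, 10.7 for A-DNA
    return n_base_pairs / bp_per_turn

for length in (1050, 1055):
    t = turns(length)
    leftover = abs(t - round(t))  # fractional turn absorbed on closure
    print(f"{length} bp -> {t:.2f} turns, leftover {leftover:.2f} turn")
# 1050 bp is exactly 100 turns of B-DNA; 1055 bp leaves about half a turn
# of twist that the closed loop must accommodate as supercoiling.
```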
Disaccharides
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Carbohydrates/Disaccharides
The most useful carbohydrate classification scheme divides the carbohydrates into groups according to the number of individual simple sugar units. Monosaccharides contain a single unit; disaccharides contain two sugar units; and polysaccharides contain many sugar units, as in polymers - most contain glucose as the monosaccharide unit.

Thumbnail: Ball-and-stick model of the α-lactose molecule. (Public Domain; Ben Mills).

Disaccharides is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Drug Activity
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Drug_Activity
A very broad definition of a drug would include "all chemicals other than food that affect living processes." If the effect helps the body, the drug is a medicine. However, if a drug causes a harmful effect on the body, the drug is a poison. The same chemical can be a medicine and a poison depending on conditions of use and the person using it. Another definition would be "medicinal agents used for diagnosis, prevention, treatment of symptoms, and cure of diseases." Contraceptives would be outside of this definition unless pregnancy were considered a disease. A disease is a condition of impaired health resulting from a disturbance in the structure or function of the body; diseases may be classified into several major categories.

In most cases, a drug bearing a generic name is equivalent to the same drug with a brand name. However, this equivalency is not always true. Although such drugs are chemically equivalent, different manufacturing processes may cause differences in pharmacological action: crystal size or form, isomers, crystal hydration, purity (type and number of impurities), vehicles, binders, coatings, dissolution rate, and storage stability may all differ.

One major problem of pharmacology is that no drug produces a single effect. The primary effect is the desired therapeutic effect. Secondary effects are all other effects besides the desired effect, which may be either beneficial or harmful. Drugs are chosen to exploit differences between normal metabolic processes and any abnormalities which may be present. Since the differences may not be very great, drugs may be nonspecific in action and alter normal functions as well as the undesirable ones. This leads to undesirable side effects.

The biological effects observed after a drug has been administered are the result of an interaction between that chemical and some part of the organism. Mechanisms of drug action can be viewed from different perspectives, namely, the site of action and the general nature of the drug-cell interaction.

Chemotherapeutic agents act by killing or weakening foreign organisms such as bacteria, worms, and viruses. The main principle of action is selective toxicity, i.e. the drug must be more toxic to the parasite than to the host.

Drugs act by stimulating or depressing normal physiological functions. Stimulation increases the rate of activity, while depression reduces the rate of activity.

Drugs act within the cell by modifying normal biochemical reactions. Enzyme inhibition may be reversible or nonreversible, competitive or noncompetitive. Antimetabolites may be used which mimic natural metabolites. Gene functions may be suppressed.

Drugs act on the cell membrane by physical and/or chemical interactions. This is usually through specific drug receptor sites known to be located on the membrane. A receptor is the specific chemical constituent of the cell with which a drug interacts to produce its pharmacological effects. Some receptor sites have been identified with specific parts of proteins and nucleic acids. In most cases, the chemical nature of the receptor site remains obscure.

Drugs act exclusively by physical means outside of cells. These sites include the external surfaces of skin and the gastrointestinal tract. Drugs also act outside of cell membranes by chemical interactions.
Neutralization of stomach acid by antacids is a good example.

Charles Ophardt (Professor Emeritus, Elmhurst College); Virtual Chembook

Drug Activity is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Drug Receptor Interactions
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Drug_Receptor_Interactions
The vast majority of drugs show a remarkably high correlation of structure and specificity to produce pharmacological effects. Experimental evidence indicates that drugs interact with receptor sites localized in macromolecules which have protein-like properties and specific three-dimensional shapes. A minimum three-point attachment of a drug to a receptor site is required. In most cases, a rather specific chemical structure is required for the receptor site, and a complementary drug structure is needed. Slight changes in the molecular structure of the drug may drastically change specificity.

Several chemical forces may result in a temporary binding of the drug to the receptor. Essentially any bond could be involved in the drug-receptor interaction. Covalent bonds would be very tight and practically irreversible; since by definition the drug-receptor interaction is reversible, covalent bond formation is rare except in toxic situations. Since many drugs contain acid or amine functional groups which are ionized at physiological pH, ionic bonds are formed by the attraction of opposite charges in the receptor site.

Polar-polar interactions, as in hydrogen bonding, are a further extension of the attraction of opposite charges. The drug-receptor reaction is essentially an exchange of the hydrogen bond between a drug molecule, the surrounding water, and the receptor site.

Finally, hydrophobic bonds are formed between non-polar hydrocarbon groups on the drug and those in the receptor site. These bonds are not very specific, but the interactions do occur to exclude water molecules. Repulsive forces which decrease the stability of the drug-receptor interaction include repulsion of like charges and steric hindrance. Steric hindrance refers to certain 3-dimensional features where repulsion occurs between electron clouds, inflexible chemical bonds, or bulky alkyl groups.

Drug Receptor Interactions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Drugs Acting Upon the Central Nervous System
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Drugs_Acting_Upon_the_Central_Nervous_System
The central nervous system directs the functions of all tissues of the body. The peripheral nervous system receives thousands of sensory inputs and transmits them to the brain via the spinal cord. The brain processes this incoming information and discards 99% as unimportant. After sensory information has been evaluated, selected areas of the central nervous system initiate nerve impulses to organs or tissue to make an appropriate response.

Chemical influences are capable of producing a myriad of effects on the activity and function of the central nervous system. Since our knowledge of different regions of brain function and the neurotransmitters in the brain is limited, the explanations for the mechanisms of drug action may be vague. The known neurotransmitters are: acetylcholine, which is involved with memory and learning; norepinephrine, which is involved with mania-depression and emotions; and serotonin, which is involved with biological rhythms, sleep, emotion, and pain.

Stimulants are drugs that exert their action through excitation of the central nervous system. Psychic stimulants include caffeine, cocaine, and various amphetamines. These drugs are used to enhance mental alertness and reduce drowsiness and fatigue. However, increasing the dosage of caffeine above 200 mg (about 2 cups of coffee) does not increase mental performance but may increase nervousness, irritability, tremors, and headache. Heavy coffee drinkers become psychically dependent upon caffeine. If caffeine is withheld, a person may experience mild withdrawal symptoms characterized by irritability, nervousness, and headache.

Caffeine and the chemically related xanthines, theophylline and theobromine, decrease in stimulatory action in the order given. They may be included in some over-the-counter drugs. Caffeine acts as an antagonist at adenosine receptors: because caffeine has a structure similar to adenosine, it fits adenosine receptors as well as adenosine itself. Adenosine normally inhibits the release of neurotransmitters from presynaptic sites but works in concert with norepinephrine or angiotensin to augment their actions. Antagonism of adenosine receptors by caffeine would therefore appear to promote neurotransmitter release, explaining the stimulatory effects of caffeine.

The stimulation caused by amphetamines is caused by excessive release of norepinephrine from storage sites in the peripheral nervous system. It is not known whether the same action occurs in the central nervous system. Two other theories for their action are that they are degraded more slowly than norepinephrine or that they could act on serotonin receptor sites. Therapeutic doses of amphetamine elevate mood, reduce feelings of fatigue and hunger, facilitate powers of concentration, and increase the desire and capacity to carry out work. They induce exhilarating feelings of power, strength, energy, self-assertion, focus and enhanced motivation. The need to sleep or eat is diminished. Amphetamine (Benzedrine), dextroamphetamine (Dexedrine), and methamphetamine (Methedrine) are collectively referred to as amphetamines; Benzedrine is a mixture of both the dextro- and levoamphetamine isomers.
The dextro isomer is several times more potent than the levo isomer. The misuse and abuse of amphetamines is a significant problem, which may include the housewife taking diet pills, athletes desiring improved performance, the truck driver driving non-stop coast to coast, or a student cramming all night for an exam.

Drugs Acting Upon the Central Nervous System is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Enzyme Inhibition
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Enzyme_Inhibition
Although activation of enzymes may be exploited therapeutically, most effects are produced by enzyme inhibition. Inhibition caused by drugs may be either reversible or irreversible. A reversible situation occurs when an equilibrium can be established between the enzyme and the inhibitory drug. Competitive inhibition occurs when the drug, as a "mimic" of the normal substrate, competes with the normal substrate for the active site on the enzyme. Concentration effects are important for competitive inhibition (a numerical sketch comparing the two modes of inhibition follows at the end of this page).

In noncompetitive inhibition, the drug combines with the enzyme at a site other than the active site. The normal substrate cannot displace the drug from this site and cannot interact with the active site either, since the shape of the enzyme has been altered. The many types of drugs that act as enzyme inhibitors include antibiotics, acetylcholinesterase inhibitors, certain antidepressants such as monoamine oxidase inhibitors, and some diuretics.

Many drugs act as suppressors of gene function, including antibiotics, fungicides, antimalarials and antivirals. Gene function may be suppressed at several steps of protein synthesis or by inhibition of nucleic acid biosynthesis. Many substances which inhibit nucleic acid biosynthesis are very toxic, since the drug is not very selective in its action between the parasite and the host.

The strategy of chemotherapy consists of exploiting the biochemical differences between the host and parasite cells. Metabolites are any substances used or produced by biochemical reactions. A drug which possesses a remarkably close chemical similarity (mimic) to a normal metabolite is called an antimetabolite. The antimetabolite enters a normal synthetic reaction by "fooling" an enzyme and producing a counterfeit metabolite. The counterfeit metabolite inhibits another enzyme or is an unusable fraudulent end product which cannot be utilized by the cell for growth or reproduction. Such antimetabolites have been used as antibacterial or anticancer agents.

Enzyme Inhibition is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
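The contrast between the two modes of inhibition can be sketched with the standard Michaelis-Menten rate law. In the illustrative Python snippet below (all constants are hypothetical), raising the substrate concentration overcomes a competitive inhibitor but not a noncompetitive one, which is why concentration effects matter for competitive inhibition:

```python
# Sketch comparing competitive and noncompetitive inhibition with the
# Michaelis-Menten rate law; Vmax, Km, Ki, and concentrations are illustrative.

def v_competitive(s, vmax=1.0, km=1.0, i=0.0, ki=1.0):
    # A competitive inhibitor raises the apparent Km; Vmax is unchanged.
    return vmax * s / (km * (1 + i / ki) + s)

def v_noncompetitive(s, vmax=1.0, km=1.0, i=0.0, ki=1.0):
    # A noncompetitive inhibitor lowers the apparent Vmax; Km is unchanged.
    return (vmax / (1 + i / ki)) * s / (km + s)

# Raising [S] restores the rate toward Vmax under competitive inhibition,
# but never does so under noncompetitive inhibition.
for s in (1.0, 10.0, 100.0):
    print(f"[S]={s:6.1f}  competitive v={v_competitive(s, i=5.0):.2f}"
          f"  noncompetitive v={v_noncompetitive(s, i=5.0):.2f}")
```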
Enzymes
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Enzymes/Enzymes
Enzymes are biological catalysts that accelerate reaction rates. Most biological catalysts, but not all, are made up of amino acid chains called proteins. The functionality of a catalyst depends on how the protein is folded, what it binds to, and what it reacts with. For protein-based catalysts, amino acid polarization lies at the core of catalytic activity.

In chemistry, a catalyst is a chemical that speeds a reaction toward completion. Catalysts lower the activation energy, which is the amount of energy required for reactants to form products; this kinetic barrier governs the reaction in both the forward and backward directions. A certain amount of energy contained in the molecules is required when two molecules react together to form a product. If the two molecules do not have enough energy to react, then no product is produced. By lowering the activation energy, a catalyst allows the molecules to gain sufficient energy to overcome the barrier and form products.

Compare the red curve with the blue curve: which hill would you want to climb over? The figure shows the decrease in the activation energy (the kinetic barrier) for a reaction in which there is a catalyst or enzyme (red curve). Extracted from Dr. Delmar Larsen's Lecture 22 on 5/24/10.

Catalysts increase the rates of both the forward and backward reactions (kf denotes the rate of the forward reaction and kb the rate of the backward reaction). Exergonic forward reactions convert reactants to products, whereas endergonic backward reactions convert products to reactants; typically, the conversion of products back to reactants requires more energy. Recall that the difference in energy between the products and the reactants is measured as ΔG (Gibbs energy). It is very important to note that catalysts do not change the free energy difference, ΔG; they simply affect the speed of the reaction.

Catalysts are very beneficial in biological systems because they drive individual reactions forward. Our bodies are a vast combination of redox reactions. Heat may drive a reaction forward; for example, when you catch a fever, your body raises its temperature to drive reactions forward, dissipating energy in the form of heat. This extra energy drives your immune system to get rid of the germs faster. Often, however, the body does not want whole systems to be driven forward; instead, it merely wants a little extra product, or a small amount of excess reactant, from one particular reaction. It would be a waste of energy to constantly keep the body hotter than needed, and we would probably die much faster. Therefore, the body uses reaction-specific catalysts. These reaction-specific catalysts are required to keep the body alive. In this section, we will talk about the chemistry of inorganic and organic biological catalysts, also called enzymes, and how their composition is evaluated in medicine.

Every amino acid contains an amine functional group (NH2), an R-linked side chain, and a carboxyl functional group (CO2H). The general formula for an amino acid is H2NCHRCOOH, which denotes the order in which the hydrogen and carbon atoms are bonded. The 20 amino acids share this same general structure; what makes them different is the R-linked side chain.

Figure: the 20 different amino acids.

Amino acids differ in the electronegativity of their R groups, causing differences in their hydrophobicity.
The side chains determine whether an amino acid is polar, non-polar, acidic, or basic. Recall that the more electronegative an R side chain is compared with its amine and carboxyl groups, the more polar the amino acid. In general, side chains with hydrocarbon alkyl groups or benzene rings are non-polar (examples: phenylalanine, leucine, isoleucine). The number of alkyl groups affects the polarity: the more alkyl groups, the more non-polar the amino acid. Questions to consider: What makes an amino acid more polar? What makes an amino acid basic? What makes an amino acid acidic?

Enzyme and substrate chemistry can be described biologically: an enzyme provides the particular substrate with an active site, forming an enzyme-substrate complex, which is necessary for its catalytic properties and the formation of products.

Catalysts may be classified as homogeneous or heterogeneous. Homogeneous catalysts do not change their state, unlike heterogeneous catalysts. By "state" we mean the phase, which indicates either solid, liquid, or gas. If a homogeneous catalyst is solid, it will remain solid after the reaction is completed; the same is true for liquid and gas homogeneous catalysts. Although many biological enzymes are heterogeneous, there are some homogeneous enzymes that remain in their same state after the reaction, such as in an enzyme immunoassay (EIA).

Heterogeneous catalysts are catalysts that speed up the rate of reactions by allowing them to occur on a solid surface. An example of a heterogeneous catalyst is a clay DNA-polymer scaffolding, where the DNA's individual purines and pyrimidines link together on a clay surface to enable more secure bonding.

Enzymes are composed of many amino acids and react with substrates in biological chemistry. Enzymes exist to drive the rates of reactions forward in our bodies; without enzymes, products would not form quickly enough for the body to actually process the energy that we need. The basic reaction for any enzyme-substrate complex is:

\[ \text{Step 1}\;\;\;\;\; E+S \rightleftharpoons ES \]

The enzyme-substrate complex bound together is an intermediate in the reaction, denoted by ES.

\[ \text{Step 2}\;\;\;\;\; ES \rightarrow E + P \]

where P stands for products, E for enzyme, and S for substrate. The rate-determining step for an enzyme-substrate reaction is the second step, in which ES is converted into the product. Once the enzyme releases its product, it is free to do more work, so the faster this conversion occurs, the faster the overall reaction proceeds.

The rates of enzyme-substrate reactions range between first order and zero order. Initially, a reaction will be first order because it depends on the amount of substrate added. When the maximum number of active sites are occupied, the rate of the enzyme-substrate reaction reaches its maximum, becoming a zero-order reaction in which the rate is constant. A zero-order reaction is typically denoted graphically by an asymptote, which indicates the rate limit of the reaction. When an enzyme-substrate reaction tends toward zero order, the only way to speed the reaction up is to add more enzyme, therefore adding more active sites. If more enzyme is added to a zero-order, maximized reaction, then the reaction will go back to first order until either all of the active sites are occupied once again or all of the substrate is converted into product, leaving an excess of empty active sites. The rate of production of the product is expressed as the velocity (V) of the reaction.
For any enzyme-substrate reaction to go forward, the rate of product formation (or decomposition of ES) must keep pace with the rate of formation of ES. If ES only forms and does not decompose into product, then the enzyme is not working. Enzymes that do not work are discussed later, and may be the result of faulty RNA translation from DNA, which causes the active site on the enzyme to be malformed.

Biochemists use \(E_0\) to denote the total enzyme concentration, the sum of the unbound enzyme E and the bound enzyme ES. A convenient constant when relating the rates of the forward and reverse reactions of enzyme chemistry is \(K_M\). This constant is obtained by dividing the rate constants for the breakdown of ES (k-1 and k2) by the rate constant for its formation (k1):

\[ K_M=\dfrac{k_{-1}+k_2}{k_1} \]

where \(k_1\) is the rate constant for formation of ES from E and S, \(k_{-1}\) is the rate constant for dissociation of ES back to E and S, and \(k_2\) is the rate constant for conversion of ES to product. Note that \(K_M\) is characteristic of a given enzyme-substrate pair under fixed conditions; it does not depend on the enzyme or substrate concentrations. The equation for the velocity can then be written:

\[V=\dfrac{k_2[E_0][S]}{K_M+[S]}\]

If \([S]\) is small compared with \(K_M\) (\(K_M \gg [S]\)), the rate depends on the concentration of the substrate, and the reaction is first order in substrate; there is enough free enzyme to drive the reaction forward at a relatively fast rate.

If the concentration of the substrate is high and all of the active sites are occupied (\(K_M \ll [S]\)), the reaction tends toward its maximum rate, \(V_{max} = k_2[E_0]\); more enzyme must be added to drive the reaction faster, and the reaction becomes zero order, attaining an asymptote. (A short numerical sketch of this rate law appears at the end of this page.)

The thermodynamics of a biological reaction are crucial. Your body temperature stays at a constant 97.5 to 98.8 degrees Fahrenheit because a higher body temperature can cause certain proteins in your body to denature. A drop below this range will cause reactions to slow down, which also may cause death. However, slight changes above this set homeostasis can drive all of the reactions in your body forward, causing you to burn more energy, which partially escapes in the form of heat. This is why, when you have a fever, your mother or father may have felt your forehead to see if you were warmer than usual.

Active sites are the parts of enzymes that are substrate-specific. Certain enzymes will only bind to certain substrates because of a lock-and-key-like site on the surface of the enzyme. We will take a look at a very common enzyme family called the serine proteases as an example of how active-site chemistry works. The serine protease family contains important enzymes for digestion, blood clotting, and fertilization. They are also the enzymes that catalyse peptide bond cleavage by attacking the carbonyl bond. Serine proteases are most famous for their specificity for substrates. They contain disulphide linkages (S-S) to keep their shape. Charged side chains are found on the outside of the enzyme, interacting with the solvent unless involved in catalysis.

Let us use trypsin, an enzyme in the serine protease family, as an example. Trypsin has two domains, with the active site between the two. At the center of each domain is a barrel structure. Polar regions of the structure are well hydrated. Trypsin's active site contains the amino acid sequence Asp 102, His 57, Ser 195 (aspartic acid, histidine, and serine, respectively). The numbers correspond to the actual sequence positions of the amino acids.
These amino acids are found on loop regions of the two domains and represent the charge-relay system of the active site. The specificity pocket is also found in the loops of the two domains. The two domains of barrel structures are important because they provide a scaffold on which the specific amino acids can interact to form the substrate-specific active site. The connection between the domains is less tight at the active site and may allow rigid-body movements within the domains that contribute to catalysis. These rigid-body movements are a fundamental part of enzyme catalysis. There is still much to research about serine proteases, because not all of their crucial chemistry is understood.

His 57 and Asp 102 serve to fix Ser 195 in a state capable of reacting with the incoming peptide chain and to stabilize any intermediate formed during catalysis. His 57 acts as a strong base, abstracting the alcoholic proton of Ser 195 and moving it to the amine leaving group. The negative end of Asp 102 cancels out the positive charge developed on His 57 during the transition state. Then, hydrolysis (the addition of H2O) of the acyl-enzyme intermediate releases the product. These catalysts drive the reaction forward 1,000,000 times faster than the uncatalyzed reaction.

1) What is a catalyst? A catalyst is a compound that speeds up a reaction.

2) What is an enzyme? An enzyme is a biological catalyst that speeds up reactions and interactions between molecules in biological systems.

3) What is the name of the reactants that bind to an enzyme to form products at a faster rate? They are called substrates.

4) Where on the enzyme does the substrate bind to give products at a faster rate? The substrate binds at the active site of the enzyme.

Enzymes is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
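The velocity equation above can be explored numerically. The following is a minimal Python sketch, with purely illustrative constants, showing the transition from first-order behavior at low substrate concentration to the zero-order plateau at high substrate concentration:

```python
# Numerical sketch of the velocity equation V = k2*[E0]*[S] / (Km + [S]);
# k2, E0, and Km are illustrative values, not data for any real enzyme.

def velocity(s, k2=100.0, e0=0.01, km=2.0):
    return k2 * e0 * s / (km + s)

vmax = 100.0 * 0.01  # k2 * E0, the zero-order plateau
for s in (0.1, 1.0, 10.0, 100.0):
    print(f"[S]={s:6.1f}  V={velocity(s):.3f}  (Vmax={vmax})")
# At low [S] (<< Km) the rate grows nearly in proportion to [S] (first order);
# at high [S] (>> Km) it flattens toward Vmax = k2*E0 (zero order).
```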
Fatty Acids
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Lipids/Fatty_Acids
Fatty acids are carboxylic acids with long hydrocarbon chains. The chain length may vary from 10 to 30 carbons (12 to 18 is most common). The non-polar hydrocarbon chain is an important counterbalance to the polar acid functional group: in acids with only a few carbons, the acid group dominates and gives the whole molecule a polar character, but in fatty acids the long hydrocarbon chain gives the molecule a non-polar character.

Hydrogenation of Unsaturated Fats and Trans Fat
Saturated fats have a chain-like structure that allows the molecules to stack very well, forming a solid at room temperature. Unsaturated fats are not linear: the double-bonded sp2 carbons are trigonal planar, giving the molecule a different shape, so the fat molecules stack poorly and the fats are liquid at room temperature. Unsaturated fats can be converted to saturated fats via hydrogenation reactions.

Introduction to Fatty Acids
There are two groups of fatty acids, saturated and unsaturated. Recall that the term unsaturated refers to the presence of one or more double bonds between carbons, as in alkenes; in a saturated fatty acid, all bonding positions between carbons are occupied by hydrogens. The melting points of the saturated fatty acids follow the boiling-point principle observed previously, rising with chain length (a short illustration follows at the end of this page).

Prostaglandins
Prostaglandins were first discovered and isolated from human semen in the 1930s by Ulf von Euler of Sweden. Thinking they had come from the prostate gland, he named them prostaglandins; it has since been determined that they exist and are synthesized in virtually every cell of the body. Prostaglandins are like hormones in that they act as chemical messengers, but they do not move to other sites; they work within the cells where they are synthesized.

Thumbnail: A ball-and-stick diagram of arachidonic acid. (Public Domain; SubDural12). Arachidonic acid is a polyunsaturated fatty acid present in the phospholipids of the membranes of the body's cells, and is abundant in the brain, muscles, and liver.
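As a concrete illustration of that melting-point trend, the short sketch below tabulates approximate literature melting points for a few common fatty acids; the values are rounded and are included only to show the pattern.

```python
# Approximate melting points (degrees C) of some common fatty acids.
melting_points = {
    "lauric   (C12:0)": 44,
    "myristic (C14:0)": 54,
    "palmitic (C16:0)": 63,
    "stearic  (C18:0)": 69,
    "oleic    (C18:1)": 13,  # one cis double bond disrupts stacking
}

for name, mp in melting_points.items():
    print(f"{name:18s} melts near {mp:3d} C")

# Saturated chains melt higher as they lengthen; a single double bond
# (oleic acid) drops the melting point below room temperature.
```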
Fatty Acids is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Flavin Adenine Dinucleotide (FAD)
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Vitamins_Cofactors_and_Coenzymes/Flavin_Adenine_Dinucleotide_(FAD)
The structure shown on the left is FAD; like NAD+, it contains a vitamin (riboflavin), adenine, ribose, and phosphates. As shown it is the diphosphate (dinucleotide) form, but it is also used as the monophosphate, flavin mononucleotide (FMN).

In the form of FMN it is involved in the first enzyme complex (Complex 1) of the electron transport chain. FMN, acting as the oxidizing agent, reacts with NADH in the second step of the electron transport chain. The simplified reaction, with NADH as the reducing agent and FMN as the oxidizing agent, is:

\[ \underset{\text{red. agent}}{NADH} + H^+ + \underset{\text{ox. agent}}{FMN} \rightarrow FMNH_2 + NAD^+ \]

Note that the two hydrogens and two electrons are "passed along" from NADH to FMN. Also note that NAD+ as a product is back in its original state as an oxidizing agent, ready to begin the cycle again, while FMN has been converted to the reducing agent FMNH2, the starting point for the third step.

Ubiquinone: as its name suggests, it is very widely distributed in nature. There are some differences among species in the length of the isoprene-unit side chain (in brackets on the left). All the natural forms of CoQ are insoluble in water but soluble in membrane lipids, where they function as mobile electron carriers in the electron transport chain; the long hydrocarbon chain gives the molecule its non-polar property.

CoQ acts as a bridge between enzyme complexes 1 and 3 or between complexes 2 and 3. Electrons, along with two hydrogens, are transferred from NADH to the double-bonded oxygens of the quinone ring, converting them to alcohol groups. The electrons are then passed along to the cytochromes in enzyme complex 3.

Although not used in the electron transport chain, coenzyme A is a major cofactor used to transfer a two-carbon unit commonly referred to as the acetyl group. Its structure shares many features with NAD+ and FAD: it has the diphosphate, ribose, and adenine. In addition it contains the vitamin pantothenic acid and is terminated by a thiol group. The thiol (-SH) is the sulfur analog of an alcohol (-OH). The acetyl group (CH3C=O) is attached to the sulfur of CoA through a thioester bond. Acetyl CoA is important in the breakdown of fatty acids and is a starting point of the citric acid cycle.
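The transfer of two hydrogens and two electrons in the reaction above can be split into two half-reactions, a standard bookkeeping device added here for clarity:

\[ NADH \rightarrow NAD^+ + H^+ + 2e^- \]

\[ FMN + 2H^+ + 2e^- \rightarrow FMNH_2 \]

Adding the two half-reactions recovers the overall reaction, with one of the two protons supplied by the surrounding solution.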
Flavin Adenine Dinucleotide (FAD) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
GABA
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Medicinal_Chemistry/GABA
In most instances of natural neuronal inhibition, GABA is the inhibitory transmitter. GABA has the specific effect of opening anion channels in nerves, allowing large numbers of chloride ions to diffuse into the terminal fibril. The negative charges of these ions cancel much of the excitatory effect of the positively charged sodium ions that enter the terminal fibril when an action potential arrives. The cell thus becomes hyperpolarized, and the action potential in these neuron fibrils is greatly reduced.
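Why chloride influx hyperpolarizes the cell can be rationalized with the Nernst equation for the chloride equilibrium potential; the concentrations below are typical textbook values, used here purely for illustration:

\[ E_{Cl} = \dfrac{RT}{zF}\ln\dfrac{[Cl^-]_{out}}{[Cl^-]_{in}} \approx -61.5\ \text{mV} \times \log_{10}\dfrac{110\ \text{mM}}{5\ \text{mM}} \approx -83\ \text{mV} \qquad (z = -1,\ 37^\circ\text{C}) \]

Because this equilibrium potential is more negative than a typical resting potential (about -70 mV), opening chloride channels pulls the membrane potential downward, that is, it hyperpolarizes the cell.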
GABA is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Glycerides
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Lipids/Glycerides
Glycerides and waxes are lipids that have an ester as the major functional group; they include waxes, triglycerides, and phospholipids.

Phosphoglycerides or Phospholipids
Phospholipids are similar to the triglycerides, with a couple of exceptions. Phosphoglycerides are esters of only two fatty acids, phosphoric acid, and a trifunctional alcohol, glycerol (IUPAC name 1,2,3-propanetriol). The fatty acids are attached to the glycerol at the 1 and 2 positions through ester bonds. There may be a variety of fatty acids, both saturated and unsaturated, in the phospholipids.

Triglycerides
Triglycerides are esters of fatty acids and the trifunctional alcohol glycerol. The properties of fats and oils follow the same general principles already described for the fatty acids; the important properties to consider are the melting points and the degree of unsaturation of the component fatty acids.

Thumbnail: An example of a phosphatidylcholine, a type of phospholipid in lecithin. Red - choline and phosphate group; Black - glycerol; Green - unsaturated fatty acid; Blue - saturated fatty acid. (Public Domain; Jü).

Glycerides is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
HIV
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Enzymes/HIV
Human immunodeficiency virus (HIV) is a retrovirus, a class of viruses that carry their genetic information in RNA. There are two types of HIV, HIV-1 and HIV-2; HIV-1 is the predominant type and is commonly called simply HIV. Both types damage the body by destroying specific blood cells, the CD4+ T cells, which are crucial to helping the immune system fight disease. This can lead to immune deficiency: infection with the virus progressively deteriorates the immune system, which is considered deficient when it no longer works to help fight infection and disease.2

According to the 2006 Morbidity and Mortality Weekly Report, published by the Centers for Disease Control, approximately 1.1 million United States citizens were living with HIV.3 An estimated 56,300 people newly contracted HIV.4 Although the annual incidence of HIV has remained constant in recent years, the prevalence has increased each year.

The drugs developed for HIV treatment are based on the mechanisms HIV uses to infect its host, including its proteases.7 The HIV-1 protease is synthesized from the gag and pol genes along with other proteins.8 Retroviruses such as HIV-1 are able to reverse transcribe because of reverse transcriptase, which is encoded by the pol gene.9 HIV-1 RNA contains many genes, notably gag and pol, that encode many proteins.10 The open reading frames of the gag and pol genes overlap in HIV-1.11 Studies have found that the initial cleavages are made by the immature protease dimer in the membrane of the infected cell during virus budding, or replication; once these intramolecular cleavages are made, a more active gag-pol processing intermediate is released, which becomes the active protease.8

The mechanism of HIV-1 protease has yet to be fully understood. It is studied mainly through the use of mimicry substrates and simulations, and the protease has been examined intensively using various inhibitors that reveal partial steps of the process. Since the main goal of these inhibitors is to bind to Asp-25 of the catalytic triad, each inhibitor varies in the mechanism by which it accomplishes this.14 Further investigation of the various proposed mechanisms then takes place in attempts to synthesize new drugs that act in a similar fashion.

The first part of the mechanism begins with the substrate binding to the protease; the key amino acids of HIV-1 protease that assist in substrate binding are highlighted in the figure described below. It is predicted that a substrate first binds via a hydrogen bond to aspartic acid 30 on one chain. Once this initial bond is made, the binding is further stabilized by bonding to the glycine-rich region in the flap of the same monomer, and a salt bridge is then formed from the substrate to glutamic acid 35 of the other monomer, completing the binding of the substrate to the protease.15 At this point, the water molecules found at the tips of the flaps, at isoleucine 50 of each monomer, dissociate from the protease.15,16 The release of these water molecules produces a structural conformation change of the protease from semi-open to closed, tightening the space between the protease and the substrate.15 HIV protease exists in several states, such as the semi-open and closed states mentioned above, depending on whether a substrate is bound to it.
In its unbound state, the protease's glycine-rich flaps (shown in grey in the figure) are in a semi-open state; the closed-flap state occurs when a ligand is bound (ligand not shown). An open state is thought to be the least frequent of the three states.17 Once in the tightened state, aspartic acids 25 and 25' hydrogen-bond to their adjacent glycines and are supported by the following threonines. Originally, a water molecule is bound between the aspartic acids; one aspartic acid exists in a deprotonated state and the other is protonated, and the water molecule stabilizes the aspartates in this form.

When the substrate binds to the protease, conformational changes bring the substrate to the position of the water molecule, and the water molecule acts as a nucleophile toward the substrate. The oxygen of the water attacks the carbonyl carbon of the substrate peptide bond at the active site while the nitrogen picks up the hydrogen of the protonated aspartic acid. The result is that a hydroxyl group is added to the carbonyl carbon as an amine is formed on the other side of the peptide bond, leaving a hydrogen atom behind to stabilize the two aspartates. This is proposed to occur in a concerted fashion, as outlined in Figure 3.1,18

HIV-1 Protease with Accented Substrate-Binding Assistant Amino Acids.13,15 Aspartic acid 30 is shown in pink; glycines 48, 49, and 51 are shown in white; isoleucines 50 and 50' are shown in yellow; and glutamic acid 35' is shown in green. (Primes distinguish the amino acids of each monomer.) Proposed Proteolytic Mechanism.1,18 Boxed molecules in "a" show the target peptide bond of the substrate and the water molecule hydrogen-bonded between the two aspartic acids; the rest are the catalytic aspartic acids in the active site. "b" shows the proposed concerted mechanism between the water, the peptide bond, and the aspartic acids. "c" shows the end products.
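The net chemistry of the concerted step described above is ordinary amide (peptide bond) hydrolysis, which can be summarized as:

\[ R{-}CO{-}NH{-}R' + H_2O \longrightarrow R{-}COOH + H_2N{-}R' \]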
With a disease this prevalent, medication is key to extending afflicted patients' lives. As mentioned above, because a mutation of the aspartic acid in the active site of HIV-1 protease yields pro-viruses that are unable to form completely and infect other sites, the protease has been one of the targets for therapy; the drugs directed against it are referred to as protease inhibitors.14 A current Food and Drug Administration-approved drug against the HIV-1 protease is nelfinavir mesylate, 2-[2-hydroxy-3-(3-hydroxy-2-methyl-benzoylamino)-4-phenylsulfanyl-butyl]-decahydro-isoquinoline-3-carboxylic acid tert-butylamide; C32H45N3O4S ("Viracept"). The figure described below shows the drug fitting into the protease. At the center of the drug, a hydroxyl group binds the catalytic aspartic acid (boxed in red), while the other four groups (boxed in white) stabilize the drug in the protease, making its binding more favorable than that of the natural substrate; the compound accomplishes this through various hydrophobic interactions and hydrogen bonds.19 The drug is highly efficient: a dosage of 14 nM is required to prevent 50 percent of HIV-1-infected cells from becoming necrotic.19 Although it is a highly potent drug, it carries a few side effects, including fever, back pain, rash, sweating, vomiting, and diarrhea, based on a study of 62 HIV-infected children aged 3 months to 13 years. Fourteen of the 62 had diarrhea as a side effect, and fewer than 6% of the study group had the other side effects.20

Due to the high mutation rate of HIV-1, multiple drugs are often combined as a treatment in an attempt to retard its spread as much as possible. A drug commonly paired with a protease inhibitor is a reverse transcriptase inhibitor. Protease inhibitors block the protease encoded by the gag-pol genes, and reverse transcriptase inhibitors block the reverse transcriptase encoded by the pol gene. This combination targets two essential proteins whose mutation has been shown to stop HIV-1's life cycle; targeting both proteins decreases HIV-1 activity more than targeting either one alone. An example of a reverse transcriptase inhibitor is efavirenz. Efavirenz in combination with nelfinavir mesylate has been shown to increase the immune cell count and decrease the HIV-1 particles detected in blood plasma; its side effects are similar to those of nelfinavir mesylate.21 The effects of these drugs are the main reason HIV-1-infected people can live longer than they would have in the past.

HIV-1 Protease with nelfinavir mesylate.22 This is a top-down view of the protease showing how the drug fits into it. Light blue molecules are carbons, red are oxygens, blue are nitrogens, and yellow are sulfurs. White boxed areas show the four main pockets the inhibitor lies in, and the red boxed area shows the binding to the catalytic aspartic acid.

HIV is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Hallucinogenic Drugs
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Hallucinogenic_Drugs
Hallucinogenic agents, also called psychomimetic agents, are capable of producing hallucinations, sensory illusions, and bizarre thoughts. The primary effect of these compounds is to consistently alter thought and sensory perception. Some of these drugs are used in medicine to produce model psychoses as aids in psychotherapy; another purpose is to investigate the relationship of mind, brain, and biochemistry in order to elucidate mental diseases such as schizophrenia.

A large body of evidence links the action of hallucinogenic agents to effects at serotonin receptor sites in the central nervous system. Whether the receptor site is stimulated or blocked is not exactly known. The serotonin receptor site may consist of three polar or ionic areas that complement the structure of serotonin, as shown in the graphic on the left.

The drugs shown in the graphic can be isolated from natural sources: lysergic acid amide from morning glory seeds, and psilocybin from the "magic mushroom", Psilocybe mexicana. The hallucinogenic molecules fit into the same receptors as the neurotransmitter and over-stimulate them, leading to false signals being created.

Mescaline is isolated from the peyote cactus. The natives of Central America first made use of these drugs in religious ceremonies, believing the vivid, colorful hallucinations had religious significance. The Aztecs even had professional mystics and prophets who achieved their inspiration by eating the mescaline-containing peyote cactus (Lophophora williamsii). Indeed, the cactus was so important to the Aztecs that they named it teo-nancacyl, or "God's Flesh"; this plant was said to have been distributed to the guests at the coronation of Montezuma to make the ceremony seem even more spectacular.

LSD is one of the most powerful hallucinogenic drugs known. LSD stimulates centers of the sympathetic nervous system in the midbrain, which leads to pupillary dilation, an increase in body temperature, and a rise in the blood-sugar level. LSD also has a serotonin-blocking effect. The hallucinogenic effects of lysergic acid diethylamide (LSD) are also the result of the complex interactions of the drug with both the serotoninergic and dopaminergic systems. During the first hour after ingestion, the user may experience visual changes with extreme changes in mood. The user may also suffer impaired depth and time perception, with distorted perception of the size and shape of objects, movements, color, sound, touch, and the user's own body image.

Serotonin (5-hydroxytryptamine or 5-HT) is a monoamine neurotransmitter found in cardiovascular tissue, in endothelial cells, in blood cells, and in the central nervous system. The role of serotonin in neurological function is diverse, and there is little doubt that serotonin is an important CNS neurotransmitter. Although some serotonin is metabolized by monoamine oxidase, most of the serotonin released into the post-synaptic space is removed by the neuron through a reuptake mechanism, which is inhibited by the tricyclic antidepressants and by the newer, more selective antidepressant reuptake inhibitors such as fluoxetine and sertraline.

Hallucinogenic Drugs is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Important High Energy Molecules in Metabolism
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Metabolism/Important_High_Energy_Molecules_in_Metabolism
The complicated processes of metabolism wouldn't be possible without the help of certain high-energy molecules. The main purpose of these molecules is to transfer either inorganic phosphate groups (Pi) or hydride (H-) ions. The inorganic phosphate groups are used to make high-energy bonds with many of the intermediates of metabolism; these bonds can then be broken to yield energy, driving the metabolic processes of life. Hydride ions can be transferred from one intermediate to another, resulting in a net oxidation or reduction of the intermediate. Oxidation corresponds to a loss of hydride and reduction to a gain of hydride. Certain reduced forms of high-energy molecules, such as NADH and [FADH2], can donate their electrons to the electron carriers of the electron transport chain (ETC), which results in the production of ATP (only under aerobic conditions).

ATP (adenosine triphosphate) contains high-energy bonds, known as phosphoric anhydride bonds, located between the phosphate groups. Three reasons are usually given for why these bonds are high energy: hydrolysis relieves the electrostatic repulsion between the negatively charged phosphate groups; the products (ADP and inorganic phosphate) are better stabilized by resonance than ATP itself; and the products are more favorably solvated by water than ATP.

ADP (adenosine diphosphate) also contains high-energy bonds located between its phosphate groups. It has the same structure as ATP, with one less phosphate group, and the same three reasons that ATP's bonds are high energy apply to ADP's bonds.

NAD+ (nicotinamide adenine dinucleotide, oxidized form) is the major electron acceptor for catabolic reactions. It is strong enough to oxidize alcohol groups to carbonyl groups, while weaker electron acceptors (like [FAD]) are only able to oxidize saturated carbon chains from alkanes to alkenes. It is an important molecule in many metabolic processes, such as beta-oxidation, glycolysis, and the TCA cycle; without NAD+ these processes would be unable to occur.

NADH (the reduced form) is NAD+ that has accepted electrons in the form of a hydride ion. NADH is one of the molecules responsible for donating electrons to the ETC to drive oxidative phosphorylation, and it donates electrons to pyruvate during fermentation.

NADP+ (nicotinamide adenine dinucleotide phosphate, oxidized form) is the phosphorylated counterpart of NAD+; its reduced form, NADPH (nicotinamide adenine dinucleotide phosphate, reduced form), is the major electron donor for anabolic reactions.
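For reference, the energy-yielding step described above is the hydrolysis of the terminal phosphoric anhydride bond of ATP; the standard free-energy change quoted here is the usual textbook value:

\[ ATP + H_2O \rightarrow ADP + P_i \qquad \Delta G^{\circ\prime} \approx -30.5\ \text{kJ/mol} \approx -7.3\ \text{kcal/mol} \]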
Important High Energy Molecules in Metabolism is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Local Anesthetics
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Local_Anesthetics
Unlike other drugs, which act in the region of the synapse, local anesthetics are agents that reversibly block the generation and conduction of nerve impulses along a nerve fiber. They depress impulses traveling from the sensory nerves of the skin, mucosal surfaces, and muscles to the central nervous system. These agents are widely used in surgery, dentistry, and ophthalmology to block the transmission of impulses in peripheral nerve endings.

Most local anesthetics can be represented by the following general formula. In both the official chemical name and the proprietary name, a local anesthetic drug can be recognized by the "-caine" ending. The ester linkage can also be an amide linkage. The most recent research indicates that the local anesthetic binds to a phospholipid in the nerve membrane and inhibits the ability of the phospholipid to bind Ca2+ ions.

Practically all of the free-base forms of the drugs are liquids. For this reason most of these drugs are used as salts (chloride, sulfate, etc.), which are water-soluble, odorless, crystalline solids. As esters, these drugs are easily hydrolyzed, with consequent loss of activity; the amide form of a drug is more stable and resistant to hydrolysis.

Two local anesthetics are shown below.

Local Anesthetics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Metalloproteases
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Enzymes/Metalloproteases
Metalloproteases (metallo, metal) are members of a clan of proteases that contain a metal ion at their active site, which acts as a catalyst in the hydrolysis of peptide bonds.1 The metallic core of each enzyme is the location of the specific reaction performed by the enzyme, in the case of metalloproteases the cleavage of peptide bonds within proteins. The metal ion metalloproteases most commonly utilize is the zinc ion (Zn2+).2 Other transition metals, such as Co2+ and Mn2+, have been found at active sites, and some have been used to restore function in zinc-metalloproteases from which the Zn2+ core had been removed.3 Generally, metal ions are bound in a nearly tetrahedral conformation at the active site: three amino acid ligands, usually charged residues, associate with the metal core along with one water molecule that is used for hydrolysis.4 There are two major divisions of metalloproteases, metalloendopeptidases and metalloexopeptidases, each named for the region of the hydrolysis-targeted protein at which the reaction takes place.5,6 Within these divisions are nested more highly specified target sites and conserved catalytic residues wholly dependent on the nature of the enzyme.1,2,7

Thermolysin (TLN; EC 3.4.24.27) is a 34.6 kDa Zn2+-endopeptidase secreted by the bacterium Bacillus thermoproteolyticus.8,9 TLN and TLN-like proteins are used by bacteria to break down exogenous proteins for nutrition and as virulence factors aiding in host colonization and tissue degradation.10,11 TLN is active in the hydrolysis of internal peptide bonds on the N-terminal side of large hydrophobic amino acids such as leucine, isoleucine, or phenylalanine. Thermolysin was the first metalloprotease to be completely sequenced.12 Commercially, TLN is used as a nonspecific protease (within its cleavage-site specificity) in peptide sequencing and in the production of the artificial sweetener aspartame.13

General structure of thermolysin (PDBID 3DNZ). Alpha-helices in blue, beta-sheets in red, Zn2+ ion in yellow, active site side chains in magenta, Ca2+ ions in green.8

The overall structure of the protein consists of 316 amino acid residues organized into two domains separated by the active site.12 The N-terminal domain is predominantly composed of β-sheets, while the C-terminal domain is primarily composed of α-helices; the two large domains are separated by a central α-helix. One Zn2+ is bound via three residues in the active site within a conserved binding motif, HEXXH (His 142, His 146, Glu 166), located on the C-terminal side.14,15 The tetrahedral binding of Zn2+ is rounded out by a nucleophilic water molecule. This water molecule can alternate between two distinct association positions, W1 and W2, which are stabilized by His 231 and Glu 143, respectively.16 Four Ca2+ ions associated with the enzyme aid in thermostability.12

Active site of thermolysin.

Zinc: The bound Zn2+ is responsible for catalyzing peptide hydrolysis and stabilizing the various intermediates of the reaction. Although normally bound in a tetrahedral structure, during catalysis Zn2+ assumes a pentacoordinate geometry between the original three residues (His142, His146, and Glu166), the oxygen of the nucleophilic water, and the carbonyl oxygen of the substrate.7,14,17 The formation of a gem-diolate intermediate is stabilized by Zn2+.14,17 Removal of Zn2+ yields an inactive enzyme.
Exogenous addition of other divalent transition metals, specifically Zn2+, Co2+, Fe2+, and Mn2+, restores 100%, 200%, 60%, and 10% of enzymatic activity, respectively.3 Zn2+ is also responsible for polarizing the carbonyl bond of the substrate and enhancing the nucleophilicity of the catalytic water molecule.

Glu143: Glu143 is responsible for polarizing the catalytic water molecule, enhancing its nucleophilicity. Additionally, Glu143 abstracts a proton from the water molecule and transfers it to the amide leaving group. In site-directed mutagenesis experiments involving the neutral protease of B. stearothermophilus, a protease with an amino acid sequence identical to that of thermolysin, Glu143Asp and Glu143Gln substitutions resulted in no catalytic activity.12,18 The mechanism proposed by Mock and colleagues suggests a diminished catalytic role for Glu143, used solely for charge stabilization with no association with the nucleophilic water molecule.19,20 Mutagenesis studies show, however, that Glu143Asp substitutions result in inactive enzymes despite having the same side-chain charge that would provide electrostatic stability.18

His231: His231 is responsible for substrate stabilization during hydrolysis; the carbonyl oxygen of the peptide is hydrogen-bonded to N1 of His231 in the intermediate step of hydrolysis. Mutagenesis experiments involving His231Phe and His231Ala show 430- and 500-fold reductions in catalytic activity with no significant change in Km.21 These findings led to a proposed TLN mechanism emphasizing the role of His231 as a general base, but this mechanism is less favored because His231-substituted mutants retain residual activity, in contrast to the inactive mutants generated in the Glu143 mutagenesis studies.18,19,21

Tyr157: The importance of Tyr157 has been debated in several mechanisms.18,22 Site-directed mutagenesis studies show an 80% drop in the catalytic activity of Tyr157Trp mutants, and it has been suggested that the hydroxyl hydrogen of Tyr157 stabilizes the carboxylate oxygen of the peptide substrate during hydrolysis.17,23 Energetics modeling suggests a primary role for Tyr157 in transition-state stabilization and substrate binding, with an increase in the activation barrier energy of 2.7 kcal/mol and a decrease in binding affinity of 0.5 kcal/mol.24

Asp226: Asp226 has been suggested to stabilize the catalytic His231 through hydrogen bonding of the Asp226 carboxylate to N3 of His231.17,22 Mutagenesis of Asp226Ala shows a relatively small, 40-fold decrease in catalytic activity, and energetics modeling suggests an increase in the overall ΔG‡ of 2.2 kcal/mol.23,24
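These fold-changes in activity and activation-barrier shifts are two views of the same quantity: transition-state theory relates rate to barrier height through k proportional to exp(-ΔG‡/RT), so a change of a few kcal/mol translates into a large multiplicative rate change. The sketch below interconverts the two, using the figures quoted above:

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 298.15    # temperature in K

def fold_decrease(dd_g_act):
    """Rate reduction implied by an increase dd_g_act (kcal/mol) in the
    activation barrier, from k proportional to exp(-dG_act / RT)."""
    return math.exp(dd_g_act / (R * T))

def barrier_increase(fold):
    """Barrier increase (kcal/mol) implied by an observed fold-decrease in rate."""
    return R * T * math.log(fold)

# Asp226Ala: a modeled +2.2 kcal/mol barrier matches the reported ~40-fold loss.
print(f"{fold_decrease(2.2):.0f}-fold decrease")          # ~41-fold
# His231Phe: a 430-fold loss corresponds to roughly +3.6 kcal/mol.
print(f"+{barrier_increase(430):.1f} kcal/mol barrier")   # ~3.6 kcal/mol
```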
Scheme 1. Mechanism of peptide hydrolysis by thermolysin.24

Several mechanisms for TLN-mediated peptide cleavage have been proposed, with varying emphasis on the importance of the catalytic residues Glu143 and His231 and of the catalysis-associated residues Tyr157 and Asp226. The generally accepted mechanism for TLN-mediated hydrolysis proceeds via the two-step process depicted in Scheme 1.24 Briefly: Glu143, acting as a general base, activates the Zn2+-bound water molecule for nucleophilic attack on the scissile carbonyl carbon, forming a tetrahedral gem-diolate intermediate stabilized by Zn2+; Glu143 then shuttles the abstracted proton to the leaving-group nitrogen, and the intermediate collapses, cleaving the peptide bond.

The catalytic activity of TLN is dependent on both temperature and pH.25 The pKas of TLN are 5.0 and 8.25, and maximum catalytic activity has been measured at pH 7.2.25,26 The thermostability of TLN conferred by the four Ca2+ ions allows an increase in catalytic activity at temperatures approaching 40°C, with no significant loss of activity or alteration of conformation until temperatures exceed 70°C.25,27

Chelating agents such as EDTA have been shown to completely inactivate TLN and other metalloproteases by removing the metal ion.3 In addition to direct removal of the catalytic metal ion, substrate and transition-state analogs have been synthesized that greatly decrease the catalytic activity of TLN and TLN-like proteins. Transition-state analogs that resemble the mechanistic transition states of the normal hydrolysis reaction catalyzed by TLN have been both isolated and synthesized. Phosphoramidates, a group of amino acid or peptide compounds containing a phosphoryl group, have been shown to mimic the tetrahedral intermediate of the TLN mechanism.28,29 Inhibition by transition-state analogs has been shown to decrease catalytic activity by five- to 1000-fold.28,30 Holmquist and Vallee combined substrate-analog inhibition with anionic ligands, such as R-S- and R-P-O-, that tightly bind active-site metals.30 By combining the two approaches, the catalytic activity of TLN was decreased 10,000-fold, compared with an ~800-fold decrease when transition-state analogs were combined with metal-binding ligands, suggesting a stronger inhibitory effect of the substrate analog.30

Carboxypeptidase A (carboxypolypeptidase; CPA; EC 3.4.17.1) is a 35 kDa metalloenzyme within the zinc hydrolase family and, as such, contains a Zn2+ ion cofactor within its active site.31,32 Originally isolated from bovine pancreas tissue in 1929, CPA is an exopeptidase that catalyzes the hydrolysis of C-terminal esters and peptides with large hydrophobic side chains.31,32 Biologically, CPA facilitates the breakdown of proteins during metabolism, while proposed commercial applications include its use in the hydrolysis of cheese whey protein and in the production of phenylalanine-free protein hydrolysates for individuals with phenylketonuria.33,34

General structure of carboxypeptidase A (PDBID 1YME).
Alpha-helices in blue, beta-sheets in red, Zn2+ ion in yellow, active site side chains in cyan.

Carboxypeptidase A active site.32

The structure of CPA was first determined in 1967 using x-ray diffraction, making it one of the earliest proteins to be characterized by the technique.35 CPA consists of a single chain containing 307 amino acids and a single Zn2+ ion in its active site.35 The Zn2+ is stabilized within the active site through interactions with His69, Glu72, and His196 and is additionally bound by a catalytic water molecule that has been shown to interact with Glu270.36 Mutagenesis studies have indicated that the additional residues Arg127, Tyr248, Arg71, Asn144, and Arg145 form an outer shell around the active site, which includes Glu270, and contribute to the catalytic function of CPA by stabilizing substrate molecules.36 In both proposed mechanisms for the catalytic activity of CPA, Glu270 plays an important role, acting either as a general base and general acid or as a nucleophile.36

Two mechanisms have been suggested for CPA-catalyzed hydrolysis, with experimental evidence existing for both: the promoted-water pathway, also known as the general base-general acid pathway, and the nucleophilic, or anhydride, pathway.36

Scheme 2: Promoted-water (general base-general acid) pathway. Active site substituent side chains in black, substrate in red, water in blue.36

In the enzyme-substrate (ES) complex, the Zn2+ tetrahedron consists of a water molecule, His69, Glu72, and His196 bound to the ion, while the water molecule is held in the near-attack position through hydrogen bonding with Glu270 and Ser197. The substrate position is maintained through interactions with Arg127, Asn144, Arg145, Tyr248, and Arg71. The water molecule attacks the scissile carbonyl carbon of the substrate through nucleophilic addition, with Glu270 acting as a general base, leading to the first transition state (TS1) and the formation of the tetrahedral intermediate (TI). Following formation of the TI, the leaving group is protonated and the peptide bond is cleaved, with Glu270 now serving as a general acid, passing through the second transition state (TS2) to the enzyme-product (EP) complex. An oxyanion hole, created by the polarization of the scissile carbonyl carbon of the substrate in the ES complex by Arg127 and the presence of Zn2+, helps stabilize the charge generated on the carbonyl oxygen during TS1 and the TI.36

Scheme 3: Nucleophilic pathway. Active site substituent side chains in black, substrate in red, water in blue.36

In the enzyme-substrate (ES) complex, the Zn2+ tetrahedron consists of His69, Glu72, His196, and the scissile carbonyl oxygen of the substrate molecule. This direct binding by Zn2+ polarizes the carbonyl oxygen, facilitating nucleophilic attack by the carboxylate side chain of Glu270, which is in the near-attack position. The substrate position is maintained through interactions with Arg127, Asn144, Arg145, Tyr248, and Arg71. The nucleophilic attack by Glu270 on the scissile carbonyl carbon leads to the first transition state (TS1) and the formation of the acyl-enzyme intermediate (AE). A water molecule present in the active site then nucleophilically attacks the carboxylate carbon of Glu270, resulting in deacylation and passage through the second transition state (TS2) to the enzyme-product (EP) complex.36 Computational evidence suggests that, with regard to proteolysis, the promoted-water pathway is the only feasible pathway of the two.
However, both pathways are feasible in esterolysis reactions, with the promoted-water pathway having the lower kinetic barrier. This suggests that under normal conditions the promoted-water pathway is favored, whereas at low temperatures the formation of the tetrahedral intermediate by the promoted-water pathway and the formation of the acyl-enzyme intermediate by the nucleophilic pathway are comparable. The second, deacylation step of the nucleophilic pathway, however, presents too high a barrier to be viable against the promoted-water pathway. This suggests that for both proteolysis and esterolysis the promoted-water pathway is the dominant pathway of CPA.36

Ultraviolet-visible radiation (400 W, λ=250-750 nm) has been shown to cause uncompetitive inhibition of CPA, with enzymatic activity decreasing as irradiation time increases and total inactivation occurring after 20 minutes of exposure. Additionally, exposure times of greater than 24 minutes are suspected to adversely affect the structure of CPA, resulting in the formation of protein aggregates.37 Active site-directed inhibitors of CPA, which are characterized by the presence of a terminal carboxylate, a hydrophobic side chain, and a zinc-binding group, have been identified, among them the enantiomers of 2-benzyl-5-hydroxy-4-oxopentanoic acid. (L)-2-benzyl-5-hydroxy-4-oxopentanoic acid interacts through its carboxylate, forming hydrogen bonds with Arg145, Arg127, and Tyr248, and through its terminal hydroxyl group, forming a hydrogen bond with Ser197.38 Transition-state analog inhibitors of CPA have also been found, which when bound in the enzyme active site create a pseudo-transition-state complex. Both (R)-2-benzyl-3-nitropropanoic acid and (R)-2-benzyl-5-nitro-4-oxopentanoic acid are capable of inhibiting CPA by forming complexes between their respective nitro groups, Glu270, Arg127, and Zn2+.39,40

Metalloproteases is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Misc Antibiotics
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Misc_Antibiotics
Antibiotics are specific chemical substances derived from or produced by living organisms that are capable of inhibiting the life processes of other organisms. The first antibiotics were isolated from microorganisms, but some are now obtained from higher plants and animals. Over 3,000 antibiotics have been identified, but only a few dozen are used in medicine. Antibiotics are the most widely prescribed class of drugs, comprising 12% of the prescriptions in the United States.

Macrolides are products of actinomycetes (soil bacteria) or semi-synthetic derivatives of them. Erythromycin is an orally effective antibiotic discovered in 1952 in the metabolic products of a strain of Streptomyces erythreus, originally obtained from a soil sample. Erythromycin and other macrolide antibiotics inhibit protein synthesis in sensitive microorganisms by binding to the 23S rRNA molecule (in the 50S subunit) of the bacterial ribosome, blocking the exit of the growing peptide chain. (Humans do not have 50S ribosomal subunits; human ribosomes are composed of 40S and 60S subunits.) Certain resistant microorganisms with mutational changes in components of this subunit of the ribosome fail to bind the drug. The association between erythromycin and the ribosome is reversible and takes place only when the 50S subunit is free of tRNA molecules bearing nascent peptide chains. Gram-positive bacteria accumulate about 100 times more erythromycin than gram-negative microorganisms do. The non-ionized form of the drug is considerably more permeable to cells, which probably explains the increased antimicrobial activity observed at alkaline pH.

Tetracyclines have the broadest spectrum of antimicrobial activity; examples include Aureomycin, Terramycin, and Panmycin. Four fused six-membered rings, as shown in the figure below, form the basic structure from which the various tetracyclines are made. The various derivatives differ at one or more of four sites on the rigid, planar ring structure. The classical tetracyclines were derived from Streptomyces spp., but the newer derivatives are semisynthetic, as is generally true of newer members of other drug groups. Tetracyclines inhibit bacterial protein synthesis by blocking the attachment of the transfer RNA-amino acid to the ribosome; more precisely, they are inhibitors of the codon-anticodon interaction. Tetracyclines can also inhibit protein synthesis in the host, but they are less likely to reach the required concentration, because eukaryotic cells do not have a tetracycline uptake mechanism.

Streptomycin is effective against gram-negative bacteria, although it is also used in the treatment of tuberculosis. Streptomycin binds to the 30S ribosomal subunit and changes its shape, inhibiting protein synthesis by causing a misreading of messenger RNA information.

Chloromycetin is also a broad-spectrum antibiotic with activity similar to that of the tetracyclines. At present, it is the only antibiotic prepared synthetically. It is reserved for the treatment of serious infections because it is potentially highly toxic to bone marrow cells. It inhibits protein synthesis by attaching to the ribosome and interfering with the formation of peptide bonds between amino acids.
It behaves as an antimetabolite for the essential amino acid phenylalanine at ribosomal binding sites.

Charles Ophardt (Professor Emeritus, Elmhurst College); Virtual Chembook

Misc Antibiotics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Narcotic Analgesic Drugs
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Narcotic_Analgesic_Drugs
Narcotic agents are potent analgesics which are effective for the relief of severe pain. Analgesics are selective central nervous system depressants used to relieve pain; the term analgesic means "without pain". Even in therapeutic doses, narcotic analgesics can cause respiratory depression, nausea, and drowsiness, and long-term administration produces tolerance and psychic and physical dependence, called addiction.

Narcotic agents may be classified into four categories: the natural alkaloids of opium, such as morphine and codeine; semi-synthetic derivatives of morphine, such as heroin; purely synthetic analgesics, such as meperidine and methadone; and the narcotic antagonists.

The main pharmacological action of analgesics is on the cerebrum and medulla of the central nervous system; another effect is on the smooth muscle and glandular secretions of the respiratory and gastro-intestinal tract. The precise mechanism of action is unknown, although the narcotics appear to interact with specific receptor sites to interfere with pain impulses.

A schematic for an analgesic receptor site may look as shown in the graphic below with morphine. Three areas are needed: a flat area to accommodate a flat nonpolar aromatic ring, a cavity to accept another series of rings perpendicular to the first, and an anionic site for polar interaction with the amine group.

Recently, investigators discovered two compounds in the brain, called enkephalins, which resemble morphine in structure. Each is a peptide composed of five amino acids, and the two differ only in the last amino acid. The peptide sequences are tyr-gly-gly-phe-leu and tyr-gly-gly-phe-met. Molecular models show that the structures of the enkephalins have some similarities with morphine. The main feature in common appears to be the aromatic ring with the -OH group attached (tyr). Methadone and other similar analgesics have two aromatic rings, which would be similar to the enkephalins (tyr and phe).

Analgesics may relieve pain by preventing the release of acetylcholine. Enkephalin molecules are released from a nerve cell and bind to analgesic receptor sites on the nerve cell sending the impulse. The binding of enkephalin or morphine-like drugs changes the shape of the nerve sending the impulse in such a fashion as to prevent the cell from releasing acetylcholine; as a result, the pain impulse cannot be transmitted and the brain does not perceive pain.

Morphine exerts a narcotic action manifested by analgesia, drowsiness, changes in mood, and mental clouding. The major medical action of morphine sought in the CNS is analgesia. Opiates suppress the "cough center", which is also located in the brain stem, the medulla; such an action is thought to underlie the use of opiate narcotics as cough suppressants. Codeine appears to be particularly effective in this action and is widely used for this purpose.

Narcotic analgesics cause an addictive physical dependence, and if the drug is discontinued, withdrawal symptoms are experienced. Although the reasons for addiction and withdrawal symptoms are not completely known, recent experiments have provided some information. A nucleotide known as cyclic adenosine monophosphate (cAMP) is synthesized with the aid of the enzyme adenylate cyclase. Enkephalin and morphine-like drugs inhibit this enzyme and thus decrease the amount of cAMP in the cells. To compensate for the decreased cAMP, the cells synthesize more enzyme in an attempt to produce more cAMP. Since more enzyme has been produced, more morphine is required as an inhibitor to keep the cAMP at a low level. This cycle repeats itself, raising the tolerance level and increasing the amount of morphine required.
If morphine is suddenly withheld, the withdrawal symptoms are probably caused by a high concentration of cAMP, since the synthesizing enzyme, adenylate cyclase, is no longer being inhibited.

Morphine and codeine are contained in opium from the poppy plant (Papaver somniferum) found in Turkey, Mexico, Southeast Asia, China, and India. This plant is 3-4 feet tall, with 5-8 egg-shaped capsules on top. Ten days after the poppy blooms in June, incisions are made in the capsules, permitting a milky fluid to ooze out. The following day the gummy mass (now brown) is carefully scraped off and pressed into cakes of raw opium to dry.

Opium contains over 20 compounds, but only morphine (10%) and codeine (0.5%) are of any importance. Morphine is extracted from the opium and isolated in a relatively pure form. Since codeine is present in such low concentration, it is synthesized from morphine by an ether-type methylation of an alcohol group. Codeine has only a fraction of the potency of morphine; it is used with aspirin and as a cough suppressant.

Heroin is synthesized from morphine by a relatively simple esterification reaction of two alcohol (phenol) groups with acetic anhydride (equivalent to acetic acid). Heroin is much more potent than morphine; a possible reason is that heroin passes the blood-brain barrier much more rapidly than morphine. Once in the brain, the heroin is hydrolyzed to morphine, which is responsible for its activity.

Synthetic narcotic analgesics include meperidine and methadone. Meperidine is the most common substitute for morphine. It exerts several pharmacological effects: analgesic, local anesthetic, and mild antihistamine. This multiple activity may be explained by its structural resemblance to morphine, atropine, and histamine.

Methadone is more active and more toxic than morphine. It can be used for the relief of many types of pain. In addition, it is used as a narcotic substitute in addiction treatment because it prevents morphine abstinence syndrome. Methadone was synthesized by German chemists during World War II, when the United States and its allies cut off Germany's opium supply. It is difficult to fight a war without analgesics, so the Germans went to work and synthesized a number of medications in use today, including Demerol and Darvon, the latter structurally similar to methadone. Before we go further, let's clear up another myth: methadone, or dolophine, was not named after Adolf Hitler. The "dol" in dolophine comes from the Latin root "dolor"; the female name Dolores is derived from it, and the term dol is used in pain research to measure pain, e.g., one dol is 1 unit of pain.

Even methadone, which looks strikingly different from other opioid agonists, has steric forces which produce a configuration that closely resembles that of other opiates; see the graphic on the left and the top graphic on this page. In other words, steric forces bend the molecule of methadone into the correct configuration to fit into the opiate receptor.

When you take methadone, it first must be metabolized in the liver to a product that your body can use. Excess methadone is also stored in the liver and bloodstream, and this is how methadone works its 'time release trick' and lasts for 24 hours or more. Once in the bloodstream, metabolized methadone is slowly passed to the brain when it is needed to fill opiate receptors. Methadone is the only effective treatment for heroin addiction.
It works to smooth the ups and downs of heroin craving and allows the person to function normally.

Narcotic antagonists prevent or abolish the excessive respiratory depression caused by the administration of morphine or related compounds. They act by competing for the same analgesic receptor sites and are structurally related to morphine, except for the group attached to the nitrogen.

Nalorphine precipitates withdrawal symptoms and produces behavioral disturbances in addition to its antagonist action. Naloxone is a pure antagonist with no morphine-like effects; it blocks the euphoric effect of heroin when given before heroin. Naltrexone became clinically available in 1985 as a new narcotic antagonist. Its actions resemble those of naloxone, but naltrexone is well absorbed orally and is long acting, necessitating a dose of only 50 to 100 mg. It is therefore useful in narcotic treatment programs where it is desirable to maintain an individual on chronic therapy with a narcotic antagonist. In individuals taking naltrexone, subsequent injection of an opiate produces little or no effect. Naltrexone appears to be particularly effective for the treatment of narcotic dependence in addicts who have more to gain by being drug-free than drug-dependent.

Narcotic Analgesic Drugs is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Charles Ophardt.
Nicotinamide Adenine Dinucleotide (NAD)
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Vitamins_Cofactors_and_Coenzymes/Nicotinamide_Adenine_Dinucleotide_(NAD)
Nicotinamide is derived from the vitamin niacin. The NAD+ coenzyme is involved in many types of oxidation reactions in which alcohols are converted to ketones or aldehydes, and it also participates in the first enzyme complex (Complex 1) of the electron transport chain. The structure of the coenzyme NAD+, nicotinamide adenine dinucleotide, is shown in the figure.

One role of \(NAD^+\) is to initiate the electron transport chain by reacting with an organic metabolite (an intermediate in metabolic reactions). This is an oxidation reaction in which two hydrogen atoms (or two hydrogen ions and two electrons) are removed from the organic metabolite. (The organic metabolites are usually from the citric acid cycle and the oxidation of fatty acids; details follow in later pages.) The reaction can be represented simply, where M = any metabolite:

\[ MH_2 + NAD^+ \rightarrow NADH + H^+ + M: + \text{energy}\]

One hydrogen is removed with two electrons as a hydride ion (\(H^-\)), while the other is removed as the positive ion (\(H^+\)). Usually the metabolite is some type of alcohol that is oxidized to a ketone.

In the figure, NAD+ is shown in cyan, and the alcohol is represented by the space-filling red, gray, and white atoms. The reaction shown converts the alcohol ethanol into ethanal, an aldehyde:

\[ CH_3CH_2OH + NAD^+ \rightarrow CH_3CH=O + NADH + H^+ \]

This is an oxidation reaction and results in the removal of two hydrogen ions and two electrons, which are added to the NAD+, converting it to NADH and H+. It is the first reaction in the metabolism of alcohol. The active site of alcohol dehydrogenase (ADH) has two binding regions: the coenzyme binding site, where NAD+ binds, and the substrate binding site, where the alcohol binds. Most of the binding site for the NAD+ is hydrophobic, represented in green. Three key amino acids are involved in the catalytic oxidation of alcohols to aldehydes and ketones: Ser-48, Phe-140, and Phe-93.

Active site of Alcohol Dehydrogenase
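The hydride bookkeeping in the reactions above can be split into two half-reactions, a standard decomposition added here for clarity:

\[ MH_2 \rightarrow M: + H^+ + H^- \]

\[ NAD^+ + H^- \rightarrow NADH \]

Adding the two half-reactions recovers the overall reaction written above.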
Nicotinamide Adenine Dinucleotide (NAD) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Non-glyceride Lipids
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Lipids/Non-glyceride_Lipids
Sphingolipids
Sphingolipids are a second type of lipid found in cell membranes, particularly in nerve cells and brain tissues. They do not contain glycerol, but retain the two alcohols, with the middle position occupied by an amine.

Wax
A wax is a simple lipid which is an ester of a long-chain alcohol and a fatty acid. The alcohol may contain from 12 to 32 carbon atoms. Waxes are found in nature as coatings on leaves and stems; the wax prevents the plant from losing excessive amounts of water.

Thumbnail: Sphingolipids consist of a long-chain base (green) amide-linked to a fatty acid (black). Various modifications can be made to the basic structure (red), including desaturations (4, 8 and n-9), hydroxylations (2 & 4), and headgroups (R). (CC SA BY 4.0; Jonathan E. Markham at the Department of Biochemistry, University of Nebraska-Lincoln).

Non-glyceride Lipids is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Nucleic Acids
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/Nucleic_Acids/Nucleic_Acids
The nucleic acids are informational molecules: their primary structure contains a code, or set of directions, by which they can duplicate themselves and guide the synthesis of proteins. The synthesis of proteins, most of which are enzymes, ultimately governs the metabolic activities of the cell. In 1953, Watson, an American biologist, and Crick, an English biologist, proposed the double helix structure for DNA. This development set the stage for a new and continuing era of chemical and biological investigation. The two main events in the life of a cell, dividing to make exact copies of itself and manufacturing proteins, both rely on blueprints coded in our genes.

There are two types of nucleic acids, polymers found in all living cells. Deoxyribonucleic acid (DNA) is found mainly in the nucleus of the cell, while ribonucleic acid (RNA) is found mainly in the cytoplasm, although it is usually synthesized in the nucleus. DNA contains the genetic codes to make RNA, and the RNA in turn contains the codes for the primary sequence of amino acids to make proteins.

The best way to understand the structures of DNA and RNA is to identify and examine the individual parts of the structures first. The complete hydrolysis of nucleic acids yields three major classes of compounds: pentose sugars, phosphates, and heterocyclic amines (or bases).

A major requirement of all living things is a suitable source of phosphorus; one of the major uses of phosphorus is as the phosphate ion, which is incorporated into DNA and RNA.

There are two types of pentose sugars found in nucleic acids, a difference reflected in their names: deoxyribonucleic acid indicates the presence of deoxyribose, while ribonucleic acid indicates the presence of ribose. In the graphic below, the structures of both ribose and deoxyribose are shown; note that the red -OH on one and the red -H on the other are the only differences. The alpha and beta designations are interchangeable here and are not a significant difference between the two.

Heterocyclic amines are sometimes called nitrogen bases or simply bases. The heterocyclic amines are derived from two root structures, purines and pyrimidines. The purine root has both a six- and a five-membered ring; the pyrimidine has a single six-membered ring. There are two major purines, adenine (A) and guanine (G), and three major pyrimidines, cytosine (C), uracil (U), and thymine (T). The structures are shown in the graphic on the left. These structures are called "bases" because their amine groups, as part of the ring or as a side chain, have a basic property in water.

A major difference between DNA and RNA is that DNA contains thymine but not uracil, while RNA contains uracil but not thymine. The other three heterocyclic amines, adenine, guanine, and cytosine, are found in both DNA and RNA. For convenience, you may remember the list of heterocyclic amines in DNA by the words: The Amazing Gene Code (TAGC).
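Because each DNA base templates a specific RNA base (A pairs with U, T with A, G with C, and C with G), the coding relationship between a DNA strand and its RNA transcript can be illustrated with a tiny sketch; the function name and the example sequence are invented for the illustration.

```python
# Complementary pairing used in transcription: DNA template base -> RNA base.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand):
    """Return the RNA complement of a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

print(transcribe("TAGC"))  # prints AUCG: T->A, A->U, G->C, C->G
```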
Nucleic Acids is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Nucleotides
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/Nucleotides
Nucleotides are the basic monomer building-block units in the nucleic acids. A nucleotide consists of a phosphate, a pentose sugar, and a heterocyclic amine.

The phosphoric acid forms a phosphate ester bond with the alcohol on carbon #5 of the pentose. A nitrogen of the heterocyclic amine displaces the -OH group on carbon #1 of the pentose. The reaction is shown in the graphic below. If the sugar is ribose, the general name is ribonucleotide; if the sugar is deoxyribose, it is a deoxyribonucleotide. The other four nucleotides are synthesized in a similar fashion.

Just as the exact amino acid sequence is important in proteins, the sequence of heterocyclic amine bases determines the function of DNA and RNA. The sequence of bases on DNA determines the genetic information carried in each cell. Currently, much research is under way to determine the heterocyclic amine sequences in a variety of RNA and DNA molecules. The Human Genome Project has already succeeded in determining the DNA sequences in humans and other organisms. Future research will determine the exact functions of each DNA segment, as these contain the codes for protein synthesis.

Nucleotides is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Nutrition
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Metabolism/Nutrition
The use of food by organisms is termed nutrition. Vitamins and minerals are necessary for biochemical processes. There are three general categories of food: essential fiber, the non-digestible polysaccharide material essential for normal functioning of the animal digestive system (i.e., the colon); energy-yielding nutrients, which are protein, carbohydrate, and lipid; and micronutrients.

Animals are unable to synthesize certain amino acids (humans can make only 10 of the 20 common amino acids). The amino acids that an animal is unable to synthesize must be obtained from the diet (i.e., by consuming plants or microorganisms), and these are termed "essential amino acids". Excess dietary protein becomes a source of metabolic energy.

Protein is an important source of nitrogen in the diet. Protein within the body is constantly turning over (i.e., being degraded and resynthesized). Furthermore, there is a general demand for protein synthesis when an organism is growing. The nitrogen balance refers to the relationship between the supply of and demand for nitrogen (i.e., protein) within an organism.

Carbohydrates are also an essential structural component of nucleic acids, nucleotides, glycoproteins, and glycolipids; however, the principal role of carbohydrate in the diet is the production of metabolic energy. Fatty acids and triacylglycerols can be used as fuel by many tissues in the human body, and phospholipids are essential components of all biological membranes. "Dietary fiber" refers to molecules that cannot be broken down by enzymes in the human body.

Vitamins are essential nutrients that are required in the diet because they cannot be synthesized by human metabolic enzymes. Often only trace levels are required, but a shortage can result in disease or death. Coenzymes are low-molecular-weight molecules that provide unique chemical functionalities for certain enzyme/coenzyme complexes.

Summary of water-soluble and fat-soluble vitamins (common name; chemical name; related cofactor(s)):

Water-soluble vitamins:
Vitamin B1; thiamine; thiamine pyrophosphate (TPP)
Vitamin B2; riboflavin; flavin adenine dinucleotide (FAD), flavin mononucleotide (FMN)
Vitamin B6; pyridoxal, pyridoxine, pyridoxamine; pyridoxal phosphate
Vitamin B12; cobalamin; 5'-deoxyadenosylcobalamin, methylcobalamin
Niacin; nicotinic acid; nicotinamide adenine dinucleotide (NAD+), nicotinamide adenine dinucleotide phosphate (NADP+)
Vitamin B5; pantothenic acid; coenzyme A
Biotin; biotin-lysine conjugates (biocytin)
Lipoic acid; lipoyl-lysine conjugates (lipoamide)
Folic acid; tetrahydrofolate
Vitamin C; L-ascorbate

Fat-soluble vitamins:
Vitamin A; retinol
Vitamin D2; ergocalciferol
Vitamin D3; cholecalciferol
Vitamin E; α-tocopherol
Vitamin K

Thiamine is the precursor of thiamine pyrophosphate (TPP). Nicotinamide is an essential part of two important coenzymes: nicotinamide adenine dinucleotide (NAD+) and nicotinamide adenine dinucleotide phosphate (NADP+). Riboflavin is a constituent of riboflavin 5'-phosphate (flavin mononucleotide, or FMN) and flavin adenine dinucleotide (FAD). The nucleotide part of the molecule does not enter into any chemistry, but it is important for recognition and binding to enzymes that use FMN or FAD as a cofactor. The isoalloxazine ring is the core structure of the different flavin molecules. It is yellow in color, and the word "flavin" is derived from the Latin word for yellow, flavus.

Pantothenic acid is a component of coenzyme A (CoA). The two main functions of CoA are activation of acyl groups for transfer reactions and activation of the α-hydrogen of the acyl group for removal as a proton. Both of these functions involve the reactive sulfhydryl group through the formation of thioester linkages with acyl groups.

The biologically active form of vitamin B6 is pyridoxal-5-phosphate (PLP); however, the nutritional requirement can be met by pyridoxine, pyridoxal, or pyridoxamine. PLP participates in a wide variety of reactions involving amino acids, involving bonds to the amino acid Cα as well as to side-chain carbons. The wide variety of reactions is due to the ability of PLP to form stable Schiff base adducts with the α-amino groups of amino acids.

Vitamin B12 is not made by any animal or plant; it is produced by only a few species of bacteria. Once in the food chain, vitamin B12 is obtained by animals by eating other animals, but plants are sadly deficient. Therefore, herbivorous animals (and vegetarians) can suffer a deficiency. The structure contains a cobalt ion coordinated within a corrin ring structure. Vitamin B12 (cyanocobalamin) is converted in the body into two coenzymes, and the B12 coenzymes participate in three types of reactions.

L-Ascorbate is a reducing sugar (it has a reactive ene-diol structure) that is involved in several biochemical processes. Almost all animals can synthesize vitamin C (it lies in the pathway of carbohydrate synthesis). Humans and the other great apes have suffered a mutation in the last enzyme in the pathway of synthesis for L-ascorbate (the mutation occurred about 10-40 million years ago). Since that time, all great apes (of which humans are a member) must get L-ascorbate from their diet (fresh fruits and vegetables contain an abundance). Thus, for the great apes, L-ascorbate is a "vitamin" (another way of looking at it is that all great apes suffer an inborn error of metabolism). Humans still have the gene for the enzyme to make vitamin C; however, it has suffered a couple of deletions that introduce a frameshift mutation, in addition to numerous point mutations.

Biotin acts as a mobile carboxyl-group carrier in a variety of enzymatic carboxylation reactions. Lipoic acid contains two sulfur atoms that can exist as a disulfide-bonded pair or as two free sulfhydryls; conversion between the two forms involves a redox reaction (the two free sulfhydryls represent the reduced form). Lipoic acid is typically found covalently attached to a lysine side chain in enzymes that use it as a cofactor, as a lipoamide complex. Folic acid derivatives (i.e., "folates") are acceptors and donors of one-carbon units at all oxidation levels of carbon (except for the most oxidized form, CO2; see biotin above).

Vitamin A occurs as an ester (retinyl ester), an aldehyde (retinal), or an acid (retinoic acid). Cholecalciferol is produced in the skin of animals by the action of UV light on the precursor molecule 7-dehydrocholesterol. Vitamin E (α-tocopherol) is a potent antioxidant; however, the molecular details of its function are not clearly understood. Vitamin K is essential to the blood-clotting process: it is required for the post-translational modification that produces γ-carboxyglutamic acid from glutamic acid. Such modified residues can bind Ca2+, which is an essential part of the clotting cascade; the blood-clotting proteins carry γ-carboxyglutamic acids in their structure.

http://www.mikeblaber.org/oldwine/BCH4053/bch4053.htm

Nutrition is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Oligosaccharides
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Carbohydrates/Oligosaccharides
An oligosaccharide is a carbohydrate whose molecule, upon hydrolysis, yields two to ten monosaccharide molecules. Oligosaccharides are classified into subclasses based on the number of monosaccharide molecules that form when one molecule of the oligosaccharide is hydrolyzed. Oligosaccharides have many functions, including cell recognition and cell binding; for example, glycolipids have an important role in the immune response.

Thumbnail: Structure of a galactooligosaccharide. (Public Domain; Klaas1978)

Oligosaccharides is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Penicillin
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Penicillin
The penicillins were the first antibiotics discovered, as natural products from the mold Penicillium.

In 1928, Sir Alexander Fleming, professor of bacteriology at St. Mary's Hospital in London, was culturing Staphylococcus aureus. He noticed zones of inhibition where mold spores were growing. He named the mold Penicillium rubrum. It was determined that a secretion of the mold was effective against Gram-positive bacteria.

Beta-Lactam Structure

Penicillins, as well as cephalosporins, are called beta-lactam antibiotics and are characterized by three fundamental structural requirements: the fused beta-lactam structure (shown in the blue and red rings), a free carboxylic acid group (shown in red, bottom right), and one or more substituted amino acid side chains (shown in black). The lactam structure can also be viewed as the covalent bonding of pieces of two amino acids: cysteine (blue) and valine (red).

Penicillin G, where R = a benzyl group, is the most potent of all penicillin derivatives. It has several shortcomings and is effective only against Gram-positive bacteria. It may be broken down in the stomach by gastric acids and is poorly and irregularly absorbed into the bloodstream. In addition, many disease-producing staphylococci are able to produce an enzyme capable of inactivating penicillin G. Various semisynthetic derivatives have been produced which overcome these shortcomings.

Powerful electron-attracting groups attached to the amino acid side chain, such as in phenethicillin, prevent acid attack. A bulky group attached to the amino acid side chain provides steric hindrance which interferes with the enzyme attachment that would deactivate the penicillins, e.g., methicillin. Refer to Table 2 for the structures. Finally, if the polar character is increased, as in ampicillin or carbenicillin, there is greater activity against Gram-negative bacteria.

All penicillin derivatives produce their bactericidal effects by inhibition of bacterial cell wall synthesis. Specifically, the cross-linking of peptides on the mucosaccharide chains is prevented. Improperly made cell walls allow water to flow into the cell, causing it to burst. Resemblances between a segment of the penicillin structure and the backbone of a peptide chain have been used to explain the mechanism of action of beta-lactam antibiotics. The structures of a beta-lactam antibiotic and a peptide are shown on the left for comparison; follow the trace of the red oxygen and blue nitrogen atoms.

Gram-positive bacteria possess a thick cell wall composed of a cellulose-like structural sugar polymer covalently bound to short peptide units in layers. The polysaccharide portion of the peptidoglycan structure is made of repeating units of N-acetylglucosamine linked β-1,4 to N-acetylmuramic acid (NAG-NAM). The peptide varies, but begins with L-Ala and ends with D-Ala. In the middle is a dibasic amino acid, diaminopimelate (DAP). DAP (orange) provides a linkage to the D-Ala (gray) residue on an adjacent peptide.

Bacterial cell wall synthesis is completed when a cross-link between two peptide chains attached to the polysaccharide backbones is formed. The cross-linking is catalyzed by the enzyme transpeptidase: first the terminal alanine of each peptide is hydrolyzed, and then one alanine is joined to lysine through an amide bond.

Peptidoglycan image courtesy of the University of Texas-Houston Medical School

Penicillin binds at the active site of the transpeptidase enzyme that cross-links the peptidoglycan strands. It does this by mimicking the D-alanyl-D-alanine residues that would normally bind to this site. Penicillin irreversibly inhibits transpeptidase by reacting with a serine residue in the enzyme. This reaction is irreversible, and so the growth of the bacterial cell wall is inhibited. Since mammalian cells do not have the same type of cell wall, penicillin specifically inhibits only bacterial cell wall synthesis.

As early as the 1940s, bacteria began to combat the effectiveness of penicillin. Penicillinases (or beta-lactamases) are enzymes, produced by resistant bacteria, that render penicillin useless by hydrolyzing the amide bond in the beta-lactam ring of the nucleus. Penicillinase production is a bacterial adaptation to an adverse environment, namely the presence of a substance which inhibits the bacterium's growth. Many other antibiotics are also rendered ineffective by this same type of resistance.

It is estimated that between 300 and 500 people die each year from penicillin-induced anaphylaxis, a severe allergic shock reaction to penicillin. In afflicted individuals, the beta-lactam ring binds to serum proteins, initiating an IgE-mediated inflammatory response.

Cephalosporins are the second major group of beta-lactam antibiotics. They differ from penicillins in that the beta-lactam ring is fused to a six-membered ring rather than the five-membered ring of the penicillins. The other difference, which is more significant from a medicinal chemistry standpoint, is the existence of a functional group (R) at position 3 of the fused ring system. This allows molecular variations at position 3 to effect changes in properties.

The first member of the newer series of beta-lactams was isolated in 1956 from extracts of Cephalosporium acremonium, a sewer fungus. Like the penicillins, cephalosporins are valuable because of their low toxicity and their broad spectrum of action against various diseases. Cephalosporins are among the most widely used antibiotics and, economically speaking, hold about 29% of the antibiotic market; they are possibly the single most important group of antibiotics today, equal in importance to the penicillins.

The structure and mode of action of the cephalosporins are similar to those of penicillin. They affect bacterial growth by inhibiting cell wall synthesis in Gram-positive and Gram-negative bacteria. Some examples include cefaclor, cefadroxil, cefoxitin, ceftriaxone, and cephalexin.

Penicillin is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Photochemical Changes in Opsin
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Photoreceptors/Chemistry_of_Vision/Photochemical_Changes_in_Opsin
Photochemical events in vision involve the protein opsin and the cis/trans isomers of retinal. Opsin does not absorb visible light, but when it is bonded with 11-cis-retinal it forms rhodopsin, which has a very broad absorption band in the visible region of the spectrum. The peak of the absorption is around 500 nm, which closely matches the output of the sun.

Upon absorption of a photon of light in the visible range, cis-retinal can isomerize to all-trans-retinal. The shape of the molecule changes as a result of this isomerization: the molecule goes from an overall bent structure to one that is more or less linear. All of this is a consequence of the trigonal planar bonding (120° bond angles) about the double bonds.

As we shall see below, the isomerization of retinal has an important effect on special proteins in the rod cell: the isomerization event actually causes the proteins to change their shape. This shape change ultimately leads to the generation of a nerve impulse. Hence, the next step in understanding the vision process for monochrome vision is to describe these proteins and how they change their shape after retinal isomerizes.

Opsin consists of 348 amino acids, covalently linked together to form a single chain. This chain has seven hydrophobic, or water-repelling, alpha-helical regions that pass through the lipid membrane of the pigment-containing discs. These regions consist primarily of nonpolar amino acids, which do not attract the polar water molecule. The cis-retinal is situated among these alpha helices in the hydrophobic region. It is covalently linked to lysine 296, one of the amino acids in the opsin peptide chain; the linkage is formed by a Schiff base reaction.

When the cis-retinal absorbs a photon, it isomerizes to the all-trans configuration without (at first) any accompanying change in the structure of the protein. Rhodopsin containing the all-trans isomer of retinal is known as bathorhodopsin. However, the trans isomer does not fit well into the protein, owing to its rigid, elongated shape. While it is contained in the protein, the all-trans-retinal adopts a twisted conformation, which is energetically unfavorable, and the molecule undergoes a series of shape changes to better fit the binding site. A series of changes in the protein therefore occurs to expel the trans-retinal from the protein. These rapid movements of the retinal are transferred to the protein, and from there into the lipid membrane and the nerve cells to which it is attached. This generates nerve impulses which travel along the optic nerve to the brain, and we perceive them as visual signals - sight. The free all-trans-retinal is then converted back into the cis form by a series of enzyme-catalyzed reactions, whereupon it reattaches to another opsin, ready for the next photon to begin the process again.

Photochemical Changes in Opsin is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Photoreceptor Excitation
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Photoreceptors/Photoreceptor_Excitation
Upon excitation by a laser or other light source, photoreceptor molecules transition from a lower energy state to a higher energy state. During this process, electrons in the photoreceptor absorb the energy and enter an excited state, which changes the photoreceptor's form. Let us first review the basic concept of electronic excitation.

In order to convert between stereoisomers, photons are required to excite an electron to a higher energy state. The lifetime of an excited electron ranges from a few femtoseconds to several hours. When the electron relaxes to a lower energy state, a photon is emitted equal in energy to the difference between the two states. An example of an energy diagram is shown below.

In the retina, electronic excitation converts 11-cis-retinal to all-trans-retinal by breaking the π bond of an alkene and rotating the molecule about its σ bond to change the relationship of neighboring groups from cis to trans.

Ethene is a simple molecule that contains only one double bond, and it serves as a convenient model to explain electronic excitations and the subsequent changes in molecular geometry. Yellow lobes indicate σ bonding regions and purple lobes represent π bonding regions. In the molecule's lowest-energy conformation, each carbon's unhybridized p orbitals are coplanar, allowing ethene to form a stabilized π bond; the molecule is consequently "locked" in this conformation. When one of the electrons in the π bonding orbital is excited to a higher energy orbital, the bond is destabilized, allowing rotation about the carbon-carbon axis. The two figures below show this rotation from the π-stabilized conformation to the rotated, destabilized conformation.

Molecular orbital diagrams for ethene are given in the figure below: the ground-state configuration is on the left, and an excited-state configuration is on the right. In the ground state, the highest occupied molecular orbital (HOMO) is the carbon-carbon π bonding orbital; the lowest unoccupied molecular orbital (LUMO) is the carbon-carbon π antibonding orbital. Upon exposure to light, an electron is excited from the fully occupied HOMO to the LUMO, as shown.

Upon excitation, photoreceptors undergo several changes in conformation, forming photocycles. Photoreceptors can be stimulated by particular wavelengths of light. For example, photoactive yellow protein (PYP) is a UV-blue light photoreceptor: ultraviolet and blue light can initiate the photocycle in PYP.

Photoreceptor Excitation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
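To put a number on the energy difference between the two states, one can use the Planck relation. The 500 nm wavelength below is borrowed from the rhodopsin absorption maximum quoted in the opsin entry, so this worked example is an editorial illustration rather than part of the original text:

\(E = h\nu = \dfrac{hc}{\lambda}\)

\(E = \dfrac{(6.626 \times 10^{-34}\,\text{J s})(2.998 \times 10^{8}\,\text{m/s})}{500 \times 10^{-9}\,\text{m}} \approx 3.97 \times 10^{-19}\,\text{J} \approx 2.5\,\text{eV}\)

A gap of roughly 2 to 3 eV is characteristic of the π → π* (HOMO to LUMO) transition of a conjugated system, which is why visible photons are energetic enough to drive the isomerizations described above.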
Photoreceptor Proteins
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Photoreceptors/Photoreceptor_Proteins
Photoreceptor proteins are light-sensitive proteins involved in the sensing of, and response to, light in a variety of organisms. Photoreceptor proteins can be found in both animals and plants; the visual pigments of the human retina are a familiar example. Many bacteria, such as the extremophile Halorhodospira halophila, contain photoactive yellow protein.

Photoreceptor proteins are optimally suited to studying the role of dynamical alterations in protein structure in relation to protein function. First, such proteins can be triggered with (laser) flash illumination, and therefore excellent time resolution is achievable in studies of the dynamical alterations in their structure. Second, because they are signal-transduction proteins, one may anticipate large conformational transitions to be involved in the formation of their signaling state (and its subsequent decay), which is indeed borne out by experiment. Third, the (changing) color of these proteins is often an excellent indicator of which time scale is relevant to resolve structural transitions. Significant and unsurpassed insight along these lines has been obtained for a number of different photoreceptor proteins. Hence, they can indeed be considered "star actors" in the pursuit to understand, in general terms, the atomic details of the dynamics of the functional conformational transitions [i.e., (partial) un/folding] required for their functioning.

A photoreceptor protein contains two parts: the protein part and a non-protein chromophore part. The non-protein part can respond to light through photoisomerization or photoexcitation. The figure above shows the protein part in a secondary-structure representation and the chromophore part in a line structure.

Plants are important living organisms on Earth. As autotrophs, plants absorb sunlight and, through photosynthesis, convert water and carbon dioxide to oxygen and other chemicals. The parts of plants responsible for absorbing and using sunlight are the photoreceptors. There are many types of photoreceptors in plants, including chlorophyll, the phytochromes, and the blue-light receptors.

Photoreceptor Proteins is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Photosynthesis overview
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Photosynthesis/Photosynthesis_overview
Photosynthesis is a process that occurs in plants, algae, and some bacteria. These photosynthetic organisms (called autotrophs) use the sun's energy to convert carbon dioxide (CO2) into organic compounds, such as carbohydrates. Examples of carbohydrates are simple sugars such as glucose, mannose, and galactose.

Photosynthesis by these organisms is essential for life on Earth: they take in CO2, a waste product of animals and humans, and create the oxygen we breathe. However, certain bacteria perform anoxygenic photosynthesis, meaning they consume CO2 but do not release O2.

Essentially, photosynthesis is the opposite of cellular respiration, which is carried out through glycolysis, the Krebs cycle, and the electron transport chain (ETC). In plants, all processes of photosynthesis are carried out in the chloroplasts. The overall photosynthesis equation is \(\ce{6CO2 + 6H2O -> C6H12O6 + 6O2}\), driven by light energy.

The chloroplasts are organelles in plants that harvest light energy to produce ATP and fix carbon in eukaryotic photosynthetic cells. Chloroplasts in green plants are usually globular or discoid, and they have a dual membrane system. Photosynthetic prokaryotes such as cyanobacteria do not have chloroplasts, but they do have photosynthetic apparatus bound to their plasma membrane. In the chloroplasts, electrons flow from H2O, through several electron acceptors, to NADP+, which serves as the final electron acceptor. This entire process requires energy, which is provided by the sun.

Quantities of light are defined in terms of photons, which are particles of light, represented by the symbol hν. When photons strike molecules and are absorbed, the electrons of those molecules are excited to a higher energy level. When the light source is removed, the electrons may return to their original states, give off energy as heat or light, or transfer it to other molecules. Visible light is the region of importance in photosynthesis; the relevant range is from 350 to 800 nm (nanometers). The energy of light increases with shorter wavelengths and decreases with longer wavelengths.

In the thylakoid membranes of photosynthetic cells there are light-absorbing molecules called pigments. Green pigments are in a class called chlorophyll: chlorophylls a and b are found in most green plants, while bacteriochlorophyll is found in photosynthetic bacteria.

Photosynthetic cells also contain secondary, or accessory, pigments, which absorb light where chlorophyll is less useful. There are two types of secondary pigments: carotenoids and phycobilins. All carotenoids have 40 carbons and absorb in the range of 400 to 500 nm, which is why their colors are red, orange, and yellow. Two categories of carotenoids are the xanthophylls (e.g., zeaxanthin) and the beta-carotenes. Phycobilins absorb in the range of 550 to 630 nm.

Review questions:
1. True or False: Sucrose is a direct product of photosynthesis.
2. Chlorophyll can be found in the: a) grana, b) lumen, c) stroma, d) thylakoid membrane, e) outer membrane.
3. Photosynthesis can be observed in the region of: a) ultraviolet light, b) infrared, c) visible light, d) gamma rays, e) radio waves.
4. True or False: Chlorophyll a, chlorophyll b, and bacteriochlorophyll are all found in green plants.
5. Fill in the blank: The color of carotenoids is usually _____, _____, or _____.

Answers: 1. False; 2. d; 3. c; 4. False; 5. red, orange, yellow.

Photosynthesis overview is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Polysaccharides
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Carbohydrates/Polysaccharides
The most useful carbohydrate classification scheme divides the carbohydrates into groups according to the number of individual simple sugar units. Monosaccharides contain a single unit; disaccharides contain two sugar units; and polysaccharides contain many sugar units as in polymers - most contain glucose as the monosaccharide unit. Thumbnail: Schematic two-dimensional cross-sectional view of glycogen: A core protein of glycogenin is surrounded by branches of glucose units. The entire globular granule may contain around 30,000 glucose units. By Mikael Häggström, used with permission (Public Domain). Polysaccharides is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Protein-RNA Recognition
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/RNA/Protein-RNA_Recognition
RNA-protein interactions are behind a number of vital processes in the cell. Without the ability of particular proteins to bind RNA, the RNA would no longer be able to carry out its important functions as a component of the ribosome [1,2] and the spliceosome [3]. Other examples of important RNA-protein interactions include the binding of tRNA to aminoacyl-tRNA synthetases, a process vital to the translation of genetic information into the proteins necessary for continued biological function [4], and the regulation of post-transcriptional control of gene expression via the binding of RNA to ribonucleoproteins, or RNPs [5].

Although not as well characterized as the binding between DNA and proteins, RNA-protein binding is a field that has seen a great deal of growth in recent years. Although it was originally expected that RNA-protein binding motifs might fall neatly into categories the way DNA motifs did, the wide range of secondary and tertiary RNA structures that can be recognized by proteins requires more variety in the binding motifs of the proteins, and the rules used to categorize them become correspondingly more complex [6]. At this time, all major families of RNA-binding proteins have been structurally characterized, and these characterizations have led to a much better understanding of RNA recognition [7].

Protein-RNA Recognition is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Protein Structure
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Proteins/Protein_Structure
Secondary structure refers to the shape of a folding protein due exclusively to hydrogen bonding between its backbone amide and carbonyl groups. Secondary structure does not include bonding between the R-groups of amino acids, hydrophobic interactions, or other interactions associated with tertiary structure. The two most commonly encountered secondary structures of a polypeptide chain are α-helices and β-pleated sheets. These structures are the first major steps in the folding of a polypeptide chain, and they establish important topological motifs that dictate subsequent tertiary structure and the ultimate function of the protein.

Secondary Structure: α-Helices

An α-helix is a right-handed coil of amino-acid residues on a polypeptide chain, typically ranging between 4 and 40 residues. This coil is held together by hydrogen bonds between the oxygen of a C=O group on one turn and the hydrogen of an N-H group on the turn below it.

Secondary Structure: β-Pleated Sheet

This structure occurs when two (or more, e.g., in a ψ-loop) segments of a polypeptide chain overlap one another and form a row of hydrogen bonds with each other. This can happen in a parallel or an anti-parallel arrangement; the parallel and anti-parallel arrangements are a direct consequence of the directionality of the polypeptide chain.

Secondary Structure: α-Pleated Sheet

A structure similar to the β-pleated sheet is the α-pleated sheet. This structure is energetically less favorable than the β-pleated sheet and is fairly uncommon in proteins. An α-pleated sheet is characterized by the alignment of its carbonyl and amino groups: the carbonyl groups are all aligned in one direction, while all the N-H groups are aligned in the opposite direction.

The Structure of Proteins

This page explains how amino acids combine to make proteins and what is meant by the primary, secondary, and tertiary structures of proteins. Quaternary structure is not covered; it only applies to proteins consisting of more than one polypeptide chain.

Thumbnail: Structure of human hemoglobin. The protein's α and β subunits are in red and blue, and the iron-containing heme groups in green. (CC BY-SA 3.0; Zephyris).

Protein Structure is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Psychoactive Drugs
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Medicinal_Chemistry/Psychoactive_Drugs
A sedative drug decreases activity, moderates excitement, and calms the recipient. A hypnotic drug produces drowsiness and facilitates the onset and maintenance of a state of sleep that resembles natural sleep in its electroencephalographic characteristics and from which the recipient may be easily aroused; the effect is sometimes called hypnosis. Sedation, pharmacological hypnosis, and general anesthesia are usually regarded as only increasing depths of a continuum of central nervous system (CNS) depression. Indeed, most sedative or hypnotic drugs, when used in high doses, can induce general anesthesia. One important exception is the benzodiazepines.

The term benzodiazepine refers to the portion of the structure composed of a benzene ring (A) fused to a seven-membered diazepine ring (B). However, since all of the important benzodiazepines contain an aryl substituent (ring C) and a 1,4-diazepine ring, the term has come to mean the aryl-1,4-benzodiazepines. There are several useful benzodiazepines available; the skeletal structure and two examples are shown below.

The effects of the benzodiazepines virtually all result from the action of these drugs on the CNS, even when lethal doses are used. The most prominent of these effects are sedation, hypnosis, decreased anxiety, muscle relaxation, and anticonvulsant activity. As the dose of a benzodiazepine is increased, sedation progresses to hypnosis and hypnosis to stupor. Benzodiazepines are used as sedatives, hypnotics, antianxiety agents (in panic disorder), anticonvulsants, and muscle relaxants, as well as in anesthesia and in alcoholism.

The actions of benzodiazepines are a result of potentiation of neural inhibition that is mediated by gamma-aminobutyric acid (GABA). This view is supported by behavioral and electrophysiological evidence that the effects of benzodiazepines are reduced or prevented by prior treatment with antagonists of GABA or with inhibitors of the synthesis of the transmitter. Benzodiazepine receptors are located on the alpha subunit of the GABA receptor (see figure below), found almost exclusively on postsynaptic nerve endings in the CNS (especially the cerebral cortex). Benzodiazepines enhance the effectiveness of the GABA transmitter in opening postsynaptic chloride channels, which leads to hyperpolarization of cell membranes. That is, they "bend" the receptor slightly so that GABA molecules attach to and activate their receptors more effectively and more often.

The remarkable safety of the benzodiazepines can be accounted for by the self-limited nature of neuronal depression that requires the release of an endogenous inhibitory neurotransmitter to be expressed. That is, they do not directly act to open chloride ion channels and therefore are not lethal in overdose, as the barbiturates are.

Benzodiazepines are highly lipid soluble. There is rapid diffusion into the CNS followed by redistribution to inactive tissue sites. Benzodiazepines have a high volume of distribution and rapidly cross the placenta. The duration of action is determined by the rate of metabolism and elimination. Diazepam is not water soluble and is dissolved in propylene glycol; therefore it may cause venous irritation and thrombophlebitis. Diazepam also has unpredictable absorption after IM injection.
Benzodiazepines are extensively bound to albumin.

The barbiturates once enjoyed a long period of extensive use as sedative-hypnotic drugs; however, except for a few specialized uses, they have been largely replaced by the much safer benzodiazepines.

Barbiturates are CNS depressants and are similar, in many ways, to alcohol in their depressant effects. To date, there are about 2,500 derivatives of barbituric acid, of which only 15 are used medically. The first barbiturate was synthesized from barbituric acid in 1864. The original use of barbiturates was to replace drugs such as opiates, bromides, and alcohol to induce sleep. Barbiturates are broken down chemically within the liver and eliminated via the kidneys at different rates according to their type. With regular use, the body develops a tolerance to barbiturates that translates into a need for larger and more frequent doses to attain the desired effect. However, while the tolerance to the desired effect increases, tolerance to the lethal level does not.

In general, structural changes that increase lipid solubility decrease duration of action, decrease latency to onset of activity, accelerate metabolic degradation, and increase hypnotic potency. Thus, large aliphatic groups at R2 confer greater activity than do methyl groups, but the compounds have a shorter duration of action; however, groups longer than seven carbons tend to have convulsive activity. Introduction of polar groups, such as ether, keto, hydroxyl, amino, or carboxyl groups, into the alkyl side chains decreases lipid solubility and abolishes hypnotic activity.

Barbiturates facilitate GABA-ergic inhibition in a way that resembles some of the actions of the benzodiazepines, discussed above. However, barbiturates do not displace benzodiazepines from their binding sites. Instead, they enhance such binding by increasing the affinity for benzodiazepines; they also enhance the binding of GABA and its agonist analogs to specific sites in neural membranes. These effects are almost completely dependent upon the presence of chloride or other anions that are known to permeate through chloride channels, and they are completely inhibited by picrotoxin. (The picrotoxin group of toxins are naturally occurring GABA antagonists which can cause death due to convulsions.)

While both barbiturates and benzodiazepines are capable of potentiating GABA-induced increases in chloride conductance, significant differences in their modes of action can be detected. Pentobarbital appears to increase the lifetime of the open state of the chloride channels that are regulated by GABA-ergic receptors; the magnitude of this effect more than offsets a barbiturate-induced decrease in the frequency of channel openings. By contrast, high concentrations of diazepam increase the frequency of channel openings with little effect on the lifetime of the open state. It is thought that barbiturates prolong the activation of the channel by decreasing the rate of dissociation of GABA from its receptor.

Schizophrenia comes in many varieties. One of the most common types is seen in the person who hears voices and has delusions of grandeur, intense fear, or other types of feelings that are unreal. Many schizophrenics are highly paranoid, with a sense of persecution from outside sources. Schizophrenia appears to result from excessive excitement of a group of neurons that secrete dopamine in the behavioral centers of the brain, including the frontal lobes.
An alternative possibility is either hypersensitive or excess D2 dopamine receptors. Therefore, drugs used to treat this disorder decrease the level of dopamine secreted from these neurons or antagonize dopamine. Dopamine has been implicated as a cause of schizophrenia because many patients with Parkinson's disease develop schizophrenic-like symptoms when they are treated with the drug L-DOPA. Also, drugs known to enhance central dopamine activity can worsen symptoms and even produce psychotic-like signs in normal individuals. It has been suggested that in schizophrenia, excess dopamine is secreted by a group of dopamine-secreting neurons whose cell bodies lie in the mesencephalon, medial to the substantia nigra. These neurons give rise to the so-called mesolimbic dopaminergic system, which projects nerve fibers to the medial and anterior portions of the limbic system, especially to the hippocampus, amygdala, anterior caudate nucleus, and portions of the prefrontal lobes. All of these are powerful behavioral control centers. An even more compelling reason for believing that schizophrenia is caused by excess production of dopamine is that many drugs that are effective in treating schizophrenia, such as chlorpromazine, haloperidol, and thiothixene, all decrease the secretion of dopamine at the dopaminergic nerve endings or decrease the effect of dopamine on subsequent neurons.

The phenothiazines as a class, and especially chlorpromazine, the prototype, are among the most widely used drugs in medical practice and are primarily employed in the management of patients with serious psychiatric illnesses. In addition, many members of the group have other clinically useful properties, including antiemetic, antinausea, and antihistaminic effects and the ability to potentiate analgesics, sedatives, and general anesthetics. Phenothiazine compounds were synthesized in Europe in the late nineteenth century as part of the development of aniline dyes such as methylene blue. In the late 1930s a derivative of phenothiazine was found to have antihistaminic and strong sedative effects, and so the drug was introduced into clinical anesthesia. It was noted that chlorpromazine by itself did not cause a loss of consciousness but produced only a tendency to sleep and a lack of interest in what was going on. These central actions soon became known as neuroleptic actions.

Phenothiazine has a tricyclic structure in which two benzene rings are linked by a sulfur and a nitrogen atom (see figures below). Substitution of an electron-withdrawing group at R2 (but not at position 3 or 4) increases the efficacy of phenothiazines and other tricyclic congeners. Neuroleptic drugs reduce initiative and interest in the environment, and they reduce displays of emotion or affect. Initially there may be some slowness in response to external stimuli, and drowsiness. However, subjects are easily aroused, capable of giving appropriate answers to direct questions, and seem to have intact intellectual functions; there is no ataxia, incoordination, or dysarthria at ordinary doses. Psychotic patients become less agitated and restless, and withdrawn or autistic patients sometimes become more responsive and communicative. Aggressive and impulsive behavior diminishes. Gradually (over a period of days), psychotic symptoms of hallucinations, delusions, and disorganized or incoherent thinking tend to disappear.

The most prominent observable effects of typical neuroleptic agents are strikingly similar.
In low doses, operant behavior is reduced but spinal reflexes are unchanged. Exploratory behavior is diminished, and responses to a variety of stimuli are fewer, slower, and smaller, although the ability to discriminate stimuli is retained. Conditioned avoidance behaviors are selectively inhibited, while unconditioned escape or avoidance responses are not. Behavioral activation, stimulated environmentally or pharmacologically, is blocked. Feeding is inhibited. Most neuroleptics block the emesis and aggression induced by apomorphine, a dopaminergic agonist. In high doses, most neuroleptic agents induce a characteristic cataleptic immobility that allows the animal to be placed in abnormal postures that persist. Muscle tone is altered, and ptosis (drooping of the eyelids) is typical. Even very high doses of most neuroleptics do not induce coma, and the lethal dose is extraordinarily high.

Mechanism. It is well established that neuroleptics block dopaminergic receptors in the brain. There are three major central dopamine pathways: the nigrostriatal, which is affiliated with the motor effects produced by antipsychotic drugs; the tuberoinfundibular, which is associated with the endocrine effects of neuroleptics; and the mesolimbic, which is the most likely to relate to the symptoms of schizophrenia. Of the three central dopamine receptor subtypes, D1, D2, and D3, the D2 receptor is believed to be most relevant to antipsychotic drug action. Most interesting, however, is that both D1 and D2 are altered in drug-naive schizophrenics. Though neuroleptic drugs are both D1 and D2 antagonists, in vitro the D2 effects are achieved at \(10^3\)-fold lower concentrations.

D2 receptors are also located outside the blood-brain barrier. One such area is the chemoreceptor trigger zone of the medulla, which is the reason that many of the phenothiazine drugs work as antiemetics and stop nausea.

Side Effects. Several syndromes can result from using antipsychotic drugs. A parkinsonian syndrome that may be indistinguishable from idiopathic parkinsonism may develop during administration of antipsychotic drugs; the most noticeable signs are rigidity and tremor at rest, especially involving the upper extremities. Tardive dyskinesia is a late-appearing neurological syndrome also associated with antipsychotic drug use. It is characterized by stereotypical involuntary movements consisting of sucking and smacking of the lips, lateral jaw movements, and fly-catching dartings of the tongue. There may be purposeless, quick movements of the extremities, and slower, more dystonic movements and postures of the extremities, trunk, and neck may also be seen. All of these movements disappear during sleep, as they do in parkinsonism. Symptoms of these conditions may persist indefinitely after discontinuation of the medication, although sometimes they disappear with time.

In 1994 risperidone (Risperdal) was added to the antipsychotic drugs. This drug antagonizes D2 and serotonin type 2 receptors; it also antagonizes other receptors, such as α-adrenergic and histaminergic H1 receptors.

Major depression is the most common of the major mental illnesses, and it must be distinguished from normal grief, sadness, and disappointment. Major depression is characterized by feelings of intense sadness and despair, mental slowing and loss of concentration, pessimistic worry, agitation, and self-depreciation.
Physical changes also occur, such as weight loss, decreased libido, and disruption of hormonal circadian rhythms.

Before the advent of modern pharmacotherapy in the 1950s, treatment of depression consisted of stimulants, such as caffeine and amphetamines, to ameliorate the depressive phases, and barbiturates to allay agitation, anxiety, and insomnia. At best, such attempts at therapy may have offered transient relief to some patients; suffering usually decreased little.

Monoamine oxidase inhibitors were the first effective antidepressants used. These were discussed in detail in the section on Adrenergic Mechanisms and therefore will not be further discussed here.

Serotonin (5-hydroxytryptamine or 5-HT) is a monoamine neurotransmitter found in cardiovascular tissue, in endothelial cells, in blood cells, and in the central nervous system. The role of serotonin in neurological function is diverse, and there is little doubt that serotonin is an important CNS neurotransmitter. The cell bodies of serotonergic neurons are found in the raphe region of the brainstem/pons. Lesions of this area can be made using 5,6- or 5,7-dihydroxytryptamine, in a manner similar to 6-hydroxydopamine, and such lesions have helped define the CNS pathways for 5-HT.

The monoamine serotonin is itself a precursor for melatonin production in the pineal gland. The biosynthesis of serotonin from the amino acid tryptophan is similar to that found for the catecholamines, and 5-hydroxytryptophan can cross the blood-brain barrier to increase central levels of 5-HT. Although some serotonin is metabolized by monoamine oxidase to 5-hydroxyindole acetic acid, most of the serotonin released into the post-synaptic space is removed by the neuron through a reuptake mechanism, which is inhibited by the tricyclic antidepressants and by the newer, more selective antidepressants such as fluoxetine and sertraline.

Serotonin receptors are diverse and numerous. Over the past several years, more than fourteen different serotonin receptors have been cloned and sequenced through molecular biological techniques. Overall, there are seven distinct families of 5-HT receptors, with as many as five members within a particular family. Only one of the 5-HT receptors is a ligand-gated ion channel (the 5-HT3 receptor); the other six families are all G protein-coupled receptors.

Imipramine, amitriptyline, and other closely related drugs are among the drugs currently most widely used for the treatment of major depression. Because of their structure (see below), they are often referred to as the tricyclic antidepressants. Although these compounds seem chemically similar to the phenothiazines, the ethylene group of imipramine's middle ring imparts dissimilar stereochemical properties and prevents conjugation of the rings, as occurs with the phenothiazines.

One might expect an effective antidepressant drug to have a stimulating or mood-elevating effect when given to a normal subject. Although this may occur with the MAOIs, it is not true of the tricyclic antidepressants. If a dose of imipramine is given to a normal subject, he feels sleepy and tends to be quieter, his blood pressure falls slightly, and he feels lightheaded. These drug effects are usually perceived to be unpleasant and cause a feeling of unhappiness and increased anxiety. Repeated administration for several days may lead to accentuation of these symptoms and, in addition, to difficulty in concentrating and thinking. In contrast, if the drug is given over a period of time (two to three weeks) to depressed patients, an elevated mood occurs.
For this reason, the tricyclics are not prescribed on an "as-needed" basis.

Mechanism. All tricyclic antidepressants in current use in the U.S. potentiate the actions of biogenic amines in the CNS by blocking their reuptake at nerve terminals. However, the potency and selectivity for inhibition of the uptake of norepinephrine, serotonin, and dopamine vary greatly among the agents. The tertiary amine tricyclics seem to inhibit the serotonin uptake pump, whereas the secondary amine ones seem better at switching off the NE pump (see table below). For instance, imipramine and amitriptyline are potent and selective blockers of serotonin transport with small effects on NE uptake, while desipramine and nortriptyline inhibit the uptake of norepinephrine and exert smaller effects on serotonin uptake. None of these agents is very effective as an inhibitor of dopamine transport; this contrasts with the rather nonselective inhibitory actions of cocaine and amphetamine on the uptake of both norepinephrine and dopamine. The latter are poor antidepressants, despite the fact that they have stimulant and even euphoriant effects in some people.

In recent years, selective serotonin reuptake inhibitors (SSRIs) have been introduced for the treatment of depression. Prozac is the most famous drug in this class; Lilly's sales of Prozac in 1993 exceeded 1 billion US dollars. Clomipramine, fluoxetine (Prozac), sertraline, and paroxetine selectively block the reuptake of serotonin, thereby increasing the levels of serotonin in the central nervous system. Note the similarities and differences between the tricyclic antidepressants and the selective serotonin reuptake inhibitors. The SSRIs generally have fewer anticholinergic side effects, but caution is still necessary when co-administering any drugs that affect serotonergic systems (e.g., monoamine oxidase inhibitors). Some of the newer agents (e.g., clomipramine) have been useful in the treatment of obsessive-compulsive disorders.

Here are some data to give you an idea of which transport systems are likely to be altered by the different antidepressants (from: Hyttel J. Pharmacological characterization of selective serotonin reuptake inhibitors (SSRIs). International Clinical Psychopharmacology. 9 Suppl 1: 19-26, 1994 Mar).

Lithium is widely used as \(\ce{Li2CO3}\) to control manic behavior in manic-depressive patients. No totally acceptable mechanism for its action exists. Postulations involve action that would likely adjust overactive catecholaminergic activity, which is the accepted occurrence in mania. Recent research has shown that \(\ce{Li^{+}}\) is a glutamate reuptake inhibitor; other research has shown that \(\ce{Li^{+}}\) may inhibit glutamate stimulation of nerve cells. Several studies showing that \(\ce{Li^{+}}\) has some antidepressant effects are known. Some weak biphasic alterations of norepinephrine and serotonin turnover in the brain have been established.

Amphetamines were discussed under the topic of Adrenergic Mechanisms and therefore will not be further discussed here.

Caffeine, theophylline, and theobromine share several pharmacological actions of therapeutic interest. They stimulate the central nervous system, act on the kidney to produce diuresis, stimulate cardiac muscle, and relax smooth muscle, notably bronchial muscle. Because the various xanthines differ markedly in the intensity of their action on various structures, one particular xanthine has been used more than another for a particular therapeutic effect.
Since theobromine displays low potency in these pharmacological actions, it has all but disappeared from the therapeutic scene.

Caffeine, theophylline, and theobromine occur naturally in plants widely distributed geographically. Caffeine is found in the coffee bean, tea leaves, guarana, and other plants; it is probably the most used of all psychoactive drugs. From the figure below, we can see that the methylxanthines have a structure which is very similar to adenine.

We have already discussed the role of the second messenger, cAMP, in the response to norepinephrine and epinephrine in the section on Adrenergic Mechanisms. The mechanism of action of caffeine and the other methylxanthines is inhibition of the degradation of cAMP (they inhibit the phosphodiesterase that hydrolyzes cAMP).

Edward B. Walker (Weber State University)

Psychoactive Drugs is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
RNA - Transcription
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/RNA/RNA_-_Transcription
The biosynthesis of RNA, called transcription, proceeds in much the same fashion as the replication of DNA and also follows the base-pairing principle. Again, a section of the DNA double helix is uncoiled, and only one of the DNA strands serves as a template for the RNA polymerase enzyme to guide the synthesis of RNA. After the synthesis is complete, the RNA separates from the DNA, and the DNA recoils into its helix.

The differences in the composition of RNA and DNA have already been noted. In addition, RNA is not usually found as a double helix but as a single strand; however, the single polynucleotide strand may fold back on itself to form portions with a double-helix structure, analogous to the tertiary structure of proteins.

The transcription of a single RNA strand is illustrated in the graphic on the left. One major difference is that the heterocyclic amine adenine on DNA codes for the incorporation of uracil in RNA rather than thymine as in DNA. Remember that thymine is not found in RNA, and do not confuse the direction of this replacement: thymine in DNA still codes for adenine on RNA, not uracil, while adenine on DNA codes for uracil in RNA. Note that the new RNA (red) is identical to the non-coding DNA strand, with the exception of uracil where thymine was located in DNA.

There are three major types of RNA, which will be fully explained in a later section. Although RNA is synthesized in the nucleus, it migrates out of the nucleus into the cytoplasm, where it is used in the synthesis of proteins.

The RNA transcription process occurs in three stages: initiation, chain elongation, and termination.

The first stage occurs when the RNA polymerase-promoter complex binds to the promoter gene in the DNA. This also allows the RNA polymerase to find the start sequence. The polymerase will not work unless the sigma protein is present (shown in blue in the graphic). Specific sequences on the non-coding strand of DNA are recognized as the signal to start the unwinding process. The recognition sequence is on the non-coding DNA strand, 5' end, with the recognition sections shown in bold in the original: GGCCGCTTGACAAAAGTGTTAAATTGTGCTATACT. Once the process has been initiated, the RNA polymerase elongation enzyme takes over, as described in the next panel.

Elongation begins when the RNA polymerase "reads" the template DNA. Only one strand of the DNA is read for the base sequence, and the RNA which is synthesized is the complementary strand of the DNA. The RNA (top strand) and DNA (bottom strand) sequences in the model are 5'-GACCAGGCA-3' and 3'-TCTGGTCCGTAAA-5'. In the graphic, the magenta color is the template DNA, while the green is the RNA strand. In the next reaction step, uridine triphosphate (UTP) is the next unit to be added to the RNA, binding and pairing with an adenine (A) nucleotide on the template DNA strand. A phosphodiester bond is formed, the RNA chain is thereby elongated to 10 nucleotides, and the leftover diphosphate dissociates.

RNA - Transcription is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
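The template-to-RNA pairing rules described above (template A → U, T → A, G → C, C → G) can be captured in a few lines of code. The following Python sketch is an editorial illustration, not part of the original text; the function name and example sequence are invented for the demonstration:

# Base incorporated into RNA for each base read on the DNA template strand.
# Note that adenine on the template codes for uracil, not thymine.
TEMPLATE_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template):
    """Given a DNA template read 3'->5', return the RNA strand grown 5'->3'."""
    return "".join(TEMPLATE_TO_RNA[base] for base in template)

print(transcribe("TACGGT"))  # prints AUGCCA

The output contains U wherever the template had A, and A wherever the template had T, exactly as the transcription rules in the text require.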
Steroids
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Lipids/Steroids
One major class of lipids is the steroids, which have structures totally different from the other classes of lipids. The main feature of steroids is the ring system of three cyclohexanes and one cyclopentane in a fused ring system, as shown below. A variety of functional groups may be attached. The main feature, as in all lipids, is the large number of carbon-hydrogen bonds, which makes steroids non-polar. Steroids include such well-known compounds as cholesterol, sex hormones, birth control pills, cortisone, and anabolic steroids.

The best known and most abundant steroid in the body is cholesterol. Cholesterol is formed in brain tissue, nerve tissue, and the blood stream. It is the major compound found in gallstones and bile salts. Cholesterol also contributes to the formation of deposits on the inner walls of blood vessels. These deposits harden and obstruct the flow of blood. This condition, known as atherosclerosis, results in various heart diseases, strokes, and high blood pressure.

Much research is currently underway to determine whether a correlation exists between cholesterol levels in the blood and diet. Cholesterol not only comes from the diet, but is also synthesized in the body from carbohydrates and proteins as well as fat. Therefore, the elimination of cholesterol-rich foods from the diet does not necessarily lower blood cholesterol levels. Some studies have found that if certain unsaturated fats and oils are substituted for saturated fats, the blood cholesterol level decreases. The research on this problem is incomplete.

Sex hormones are also steroids. The primary male hormone, testosterone, is responsible for the development of secondary sex characteristics. Two female sex hormones, progesterone and estrogen (estradiol), control the ovulation cycle. Notice that the male and female hormones have only slight differences in structure, yet have very different physiological effects. Testosterone promotes the normal development of male genital organs and is synthesized from cholesterol in the testes. It also promotes secondary male sexual characteristics such as a deep voice and facial and body hair. Estrogen, along with progesterone, regulates the changes occurring in the uterus and ovaries known as the menstrual cycle. For more details see Birth Control. Estrogen is synthesized from testosterone by making the first ring aromatic, which results in more double bonds, the loss of a methyl group, and the formation of an alcohol group.

The adrenocorticoid hormones are products of the adrenal glands ("adrenal" means adjacent to the renal, i.e., the kidney). The most important mineralocorticoid is aldosterone, which regulates the reabsorption of sodium and chloride ions in the kidney tubules and increases the loss of potassium ions. Aldosterone is secreted when blood sodium ion levels are too low, which causes the kidney to retain sodium ions. If sodium levels are elevated, aldosterone is not secreted, so that some sodium will be lost in the urine. Aldosterone also controls swelling in the tissues.

Cortisol, the most important glucocorticoid, has the function of increasing glucose and glycogen concentrations in the body. These reactions are carried out in the liver by taking fatty acids from lipid storage cells and amino acids from body proteins to make glucose and glycogen. In addition, cortisol and its ketone derivative, cortisone, have the ability to suppress inflammation. Cortisone or similar synthetic derivatives such as prednisolone are used to treat inflammatory diseases, rheumatoid arthritis, and bronchial asthma. There are many side effects with the use of cortisone drugs, so their use must be monitored carefully.

Thumbnail: Ball-and-stick model of the cholesterol molecule, a compound essential for animal life that forms the membranes of animal cells. (Public Domain; Jynto).

Steroids is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Sulfa Drugs
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Sulfa_Drugs
Sulfonamides are synthetic antimicrobial agents with a wide spectrum encompassing most gram-positive and many gram-negative organisms. These drugs were the first efficient treatment to be employed systematically for the prevention and cure of bacterial infections.

Charles Ophardt (Professor Emeritus, Elmhurst College); Virtual Chembook

Sulfa Drugs is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
The Replication of DNA
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/DNA/The_Replication_of_DNA
This page takes a very simplified look at how DNA replicates (copies) itself. It gives only a brief overview of the process, with no attempt to describe the mechanism. We'll explain exactly what "semi-conservative" means when we have got some diagrams to look at.

First imagine what happens if the two individual strands in the DNA double helix start to unzip. The diagram shows this happening in the middle of the DNA double helix - you mustn't assume that the top of the diagram is the end of the chain. It isn't. Further up the double helix, the two strands will still be joined together. In fact, this is happening lots of times along the very long DNA molecule. Lengths of chain become separated to form what are known as "bubbles". If you feel the need to see this in more detail, read the rest of this page, and then have a quick look at the links above.

Some of the hydrogen bonds get broken and the two strands become partly separated. The red dotted lines on the diagram just point out the original base pairs. These are not bonds in any form. These base pairs are now much too far apart for any sort of bonding between them.

Now suppose that you have a source of nucleotides - phosphate joined to deoxyribose joined to a base, including all the four sorts of bases needed for DNA. The next diagram shows what would happen if a nucleotide containing guanine (G) and one containing cytosine (C) were attracted to the top two bases on the left-hand strand of the unzipped DNA - and then joined together. How did they end up joined together? This is all under the control of a number of enzymes, one of which (DNA polymerase) is responsible for joining up nucleotides along the chain in this way. Now suppose the same sort of thing happened at the top of the right-hand strand. You would end up with . . .

Now compare the double strands that you are forming on the left- and right-hand sides. They are exactly the same . . . and if you were to continue this process, they would continue to be the same. And if you compare the patterns of bases in the new DNA being formed with what was in the original DNA before it started to unzip, everything is the same. This is inevitable because of the way the bases pair together.

Let's simplify the last diagram, and assume that the whole copying process is complete. The next diagram focusses on the short bit of the total DNA molecule that we have been looking at. A typical human DNA molecule is around 150 million base pairs long - you will have to imagine the rest of it! You have also got to remember that in reality the whole thing would have coiled into its double helix. Trying to draw that just makes everything look messy and complicated! The original DNA is shown all in blue. The red strands in the daughter DNA are the ones which have been built on the original blue strands during the replication process.

You can see that each of the daughter molecules is made of half of the original DNA plus a new strand. That's all "semi-conservative replication" means. Half of the original DNA is conserved (kept) in each of the daughter molecules. The red and blue, of course, have no physical significance apart from as a way of making the diagrams clearer. All three of these DNA molecules will be identical in every way.
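The meaning of "semi-conservative" can also be expressed in a few lines of code. This is our own toy illustration, not part of the original page: each parental strand serves as the template for a new complementary strand, so each daughter duplex keeps one old strand and gains one new one.

# Toy illustration of semi-conservative replication.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Build the complementary strand following the base pairing principle."""
    return "".join(COMPLEMENT[base] for base in strand)

def replicate(duplex):
    """Each daughter duplex keeps one parental strand plus one new strand."""
    old1, old2 = duplex
    return [(old1, complement(old1)), (complement(old2), old2)]

parent = ("ATGCCG", complement("ATGCCG"))
print(replicate(parent))
# Both daughters are identical to the parent duplex; in each, exactly one
# strand is conserved from the parent, which is all "semi-conservative" means.

Jim Clark (Chemguide.co.uk)

This page titled The Replication of DNA is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jim Clark.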
Types of RNA
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/RNA/Types_of_RNA
Three general types of RNA exist: messenger, ribosomal, and transfer.

Messenger RNA (mRNA) is synthesized from a gene segment of DNA and ultimately contains the information on the primary sequence of amino acids in a protein to be synthesized. The genetic code as translated is for mRNA, not DNA. The messenger RNA carries the code into the cytoplasm, where protein synthesis occurs. Each gene (or distinct segment) on DNA contains instructions for making one specific protein, with the order of amino acids coded by the precise sequence of heterocyclic amines on the nucleotides. Since proteins have a variety of functions, including those of enzymes, mistakes in the primary sequence of amino acids in proteins may have lethal effects.

How can a polymeric nucleotide with only four different heterocyclic amines specify the sequence of 20 or more different amino acids? If each nucleotide coded for a single amino acid, then obviously only 4 of the 20 amino acids could be accommodated. If the nucleotides were used in groups of two, there are 16 different combinations possible, which is still inadequate. It has been determined that the genetic code is actually based upon triplets of nucleotides, which provide 64 different codes using the 4 nucleotides. During the 1960s, a tremendous effort was devoted to proving that the code was read as triplets, and also to solving the genetic code. The genetic code was originally translated for the bacterium E. coli, but its universality has since been established. The genetic code is "read" from a type of RNA called messenger RNA (mRNA). Each nucleotide triplet, called a codon, can be "read" and translated into an amino acid to be incorporated into a protein being synthesized. The genetic code is shown in the accompanying table.

Ribosomal RNA (rRNA) and protein combine to form a nucleoprotein called a ribosome. The ribosome serves as the site and carries the enzymes necessary for protein synthesis. In the graphic on the left, the ribosome is shown as made from two subunits, 50S and 30S. There are about equal parts rRNA and protein. The far left graphic shows the complete ribosome with three tRNA attached. The ribosome attaches itself to the mRNA and provides the stabilizing structure to hold all substances in position as the protein is synthesized. Several ribosomes may be attached to a single mRNA at any time. In the upper right corner is the 30S subunit with mRNA and tRNA attached. Note: the coordinates used in this display have only the alpha carbons of the proteins (CA) and the RNA backbone atoms.

Transfer RNA (tRNA) contains about 75 nucleotides, three of which form the anticodon, and carries one amino acid. The tRNA reads the code and carries the amino acid to be incorporated into the developing protein. There are at least 20 different tRNAs - one for each amino acid. The basic structure of a tRNA is shown in the left graphic. Part of the tRNA doubles back upon itself to form several double helical sections. On one end, the amino acid, phenylalanine, is attached. On the opposite end, a specific base triplet, called the anticodon, is used to actually "read" the codons on the mRNA.

The tRNA "reads" the mRNA codon by using its own anticodon. The actual "reading" is done by matching the base pairs through hydrogen bonding, following the base pairing principle. Each codon is "read" by various tRNAs until the appropriate match of the anticodon with the codon occurs. In this example, the tRNA anticodon (AAG) reads the codon (UUC) on the mRNA. The UUC codon codes for phenylalanine, which is attached to the tRNA.
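To see how codons are effectively read three bases at a time, consider the small sketch below. It is our own illustration, not from the original page; only four of the \(4^3 = 64\) codons are included, which is exactly the triplet arithmetic described above.

# Illustrative sketch: reading an mRNA three bases (one codon) at a time.
# Only a few of the 64 codons are listed here.
CODON_TABLE = {
    "AUG": "Met",   # also the start codon
    "UUC": "Phe",   # the codon read by the anticodon AAG in the example above
    "GGU": "Gly",
    "UAA": "Stop",
}

def translate(mrna):
    """Split the mRNA into codons and look up each amino acid."""
    codons = [mrna[i:i + 3] for i in range(0, len(mrna), 3)]
    return [CODON_TABLE.get(codon, "?") for codon in codons]

print(translate("AUGUUCGGUUAA"))  # ['Met', 'Phe', 'Gly', 'Stop']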
Remember that the codons read from the mRNA make up the genetic code as read by humans.

Charles Ophardt (Professor Emeritus, Elmhurst College); Virtual Chembook

Types of RNA is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Vision and Light
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Photoreceptors/Vision_and_Light
Vision is such an everyday occurrence that we seldom stop to think and wonder how we are able to see the objects that surround us. Yet the vision process is a fascinating example of how light can produce molecular changes. The retina contains the molecules that undergo a chemical change upon absorbing light, but it is the brain that actually makes sense of the visual information to create an image.

Light is one of our most important resources; it delivers the sun's energy and influences our everyday lives. Living organisms sense light from the environment with photoreceptors. In vision, light is the stimulus: light entering the eye stimulates the photoreceptors of the retina. The energy that light carries depends on its wavelength; the shorter the wavelength, the higher the energy, according to \(E = h\nu = hc/\lambda\). A 500 nm photon, for example, carries about \(4.0 \times 10^{-19}\) J (roughly 2.5 eV). The visible spectrum runs from about 400 nm to 700 nm. Absorbed light energy can drive chemical change.

Vitamin A, also known as retinol or the "anti-dry-eye vitamin", is a required nutrient for human health. The precursor of vitamin A is present in a variety of plant carotenes. Vitamin A is critical for vision because it is needed by the retina of the eye: retinol is converted to retinal, a chemical necessary to form rhodopsin. As light enters the eye, the 11-cis-retinal is isomerized to the all-trans form.

The molecule cis-retinal can absorb light at a specific wavelength. When visible light hits the cis-retinal, it undergoes an isomerization, or change in molecular arrangement, to all-trans-retinal. The new form of trans-retinal does not fit as well into the protein, and so a series of geometry changes in the protein begins. The resulting complex is referred to as bathorhodopsin (there are other intermediates in this process, but we'll ignore them for now).

The reaction above shows a lysine side chain from the opsin reacting with 11-cis-retinal when stimulated. By removing the oxygen atom from the retinal and two hydrogen atoms from the free amino group of the lysine, the linkage shown in the picture above is formed; it is called a Schiff base.

As the protein changes its geometry, it initiates a cascade of biochemical reactions that results in changes in charge, so that a large potential difference builds up across the plasma membrane. This potential difference is passed along to an adjoining nerve cell as an electrical impulse. The nerve cell carries this impulse to the brain, where the visual information is interpreted.

The light image is mapped on the surface of the retina by activating a series of light-sensitive cells known as rods and cones, or photoreceptors. The rods and cones convert the light into electrical impulses, which are transmitted to the brain via nerve fibers. The brain then determines which nerve fibers carried the electrical impulses activated by light at certain photoreceptors, and then creates an image.

The retina is lined with many millions of photoreceptor cells of two types: 7 million cones provide color information and sharpness of images, and 120 million rods are extremely sensitive detectors of white light that provide night vision. The tops of the rods and cones contain a region filled with membrane-bound discs, which contain the molecule cis-retinal bound to a protein called opsin. The resulting complex is called rhodopsin, or "visual purple".

In human eyes, rods and cones react to light stimulation with a series of chemical reactions in the cells: the cells receive light and pass signals on to other receiver cells. This chain of events is called a signal transduction pathway, a mechanism that describes the way cells react and respond to stimulation.

Vision and Light is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Vitamin A: β-Carotene
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Vitamins_Cofactors_and_Coenzymes/Vitamin_A
β-carotene is the molecule that gives carrots, sweet potatoes, squash, and other yellow or orange vegetables their orange color. It is part of a family of chemicals called the carotenoids, which are found in many fruits and vegetables, as well as some animal products such as egg yolks. Carotenoids were first isolated in the early 19th century and have been synthesized for use as food colorings since the 1950s. Biologically, β-carotene is most important as the precursor of vitamin A in the human diet. It also has antioxidant properties and may help in preventing cancer and other diseases.

The long chain of alternating (conjugated) double bonds is responsible for the orange color of β-carotene. The conjugated chain in carotenoids means that they absorb in the visible region, in the green/blue part of the spectrum. So β-carotene appears orange, because the red/yellow colors are reflected back to us.

Vitamin A has several functions in the body. The most well known is its role in vision, hence carrots "make you able to see in the dark". The retinol is oxidized to its aldehyde, retinal, which complexes with a molecule in the eye called opsin. When a photon of light hits the complex, the retinal changes from the 11-cis form to the all-trans form, initiating a chain of events which results in the transmission of an impulse up the optic nerve. A more detailed explanation is in Photochemical Events.

Other roles of vitamin A are much less well understood. It is known to be involved in the synthesis of certain glycoproteins, and deficiency leads to abnormal bone development, disorders of the reproductive system, xerophthalmia (a drying condition of the cornea of the eye) and ultimately death. Vitamin A is required for healthy skin and mucous membranes, and for night vision. Its absence from the diet leads to a loss in weight and failure of growth in young animals, to the eye diseases xerophthalmia and night blindness, and to a general susceptibility to infections. It is thought to help prevent the development of cancer. Good sources of carotene, such as green vegetables, are good potential sources of vitamin A. Vitamin A is also manufactured commercially, by extraction from fish-liver oil and by synthesis from β-ionone.

Vitamin A is structurally related to β-carotene. β-carotene is converted into vitamin A in the liver; two molecules of vitamin A are formed from one molecule of β-carotene.

Oxidation: If you compare the two molecules, it is clear that vitamin A (retinol) is very closely related to half of the β-carotene molecule. One way in which β-carotene can be converted to vitamin A is to break it apart at the center; this route is thought to be the most important biologically. The breakdown of β-carotene occurs in the walls of the small intestine (intestinal mucosa) and is catalyzed by the enzyme β-carotene dioxygenase to form retinal.

Reduction reaction: The retinal is reduced to retinol by retinaldehyde reductase in the intestines. This is the reduction of an aldehyde, by the addition of hydrogen atoms, to make the alcohol, retinol.

Esterification reaction: The absorption of retinol from the alimentary tract is favored by the simultaneous absorption of fat or oil, especially if these are unsaturated. Retinol is esterified to palmitic acid and delivered to the blood via chylomicrons. Finally, the retinol formed is stored in the liver as retinyl esters. This is why cod liver oil used to be taken as a vitamin A supplement. It is also why you should never eat polar bear liver if you run out of food in the Arctic: vitamin A is toxic in excess, and a modest portion of polar bear liver contains more than two years' supply! β-carotene, on the other hand, is a safe source of vitamin A. The efficiency of conversion of β-carotene to retinol depends on the level in the diet: if you eat more β-carotene, less is converted, and the rest is stored in fat reserves in the body. So too much β-carotene can make you turn yellow, but will not kill you with hypervitaminosis.

Vitamin A: β-Carotene is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
Vitamin B₁₂: Cobalamin
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Vitamins_Cofactors_and_Coenzymes/Vitamin_B%3A_Cobalamin
Cobalamin, or vitamin B12, is the largest and most complex of all the vitamins. Cobalamin was discovered as scientists were seeking a cure for pernicious anemia, an anemic disease caused by an absence of intrinsic factor in the stomach. Cobalamin was studied, purified, and collected into small red crystals, and its crystal structure was determined by the X-ray analysis of Dorothy Hodgkin. The molecular structure of cobalamin appears simple at first glance, yet contains many different groups and linkages, as shown in the figure. Examining the vitamin's molecular structure helps scientists to better understand how the body uses vitamin B12 to build red blood cells and to prevent the symptoms of pernicious anemia.

The structure of cobalamin presents a corrin ring with cobalt, the only metal in the molecule, held right in the center of the structure by four coordinate bonds to nitrogen atoms from four pyrrole groups. These four subunits are spaced evenly on the same plane, directly across from each other. They are also connected to each other by a C-CH3 methylene link on the other sides, by a C-H on one side, and by two pyrroles joining directly. Together, they form the corrin ring shown in the figure.

The fifth ligand connected to the cobalt is a nitrogen from the 5,6-dimethylbenzimidazole, an axial ligand running straight down from the cobalt, right under the corrin ring. This benzimidazole is also connected to a five-carbon sugar, which in turn attaches to a phosphate group and then loops back to the rest of the structure. Since this axial ligand is stretched all the way down, the bond between the cobalt and the 5,6-dimethylbenzimidazole is weak, and the ligand can sometimes be replaced by related molecules such as 5-hydroxybenzimidazole, adenine, or another similar group. In the sixth position, above the corrin ring, the active site of the cobalt can bind several different types of ligands directly: CN to form cyanocobalamin, a methyl group to form methylcobalamin, a 5'-deoxyadenosyl group to form adenosylcobalamin, or OH to form hydroxocobalamin. The cobalt readily shifts among the 1+, 2+ and 3+ oxidation states to match the R group connected to it. For example, hydroxocobalamin contains cobalt with a 3+ charge, while methyl- and adenosylcobalamin contain cobalt with a 1+ charge.

The point group of cobalamin is C4v. To assign this symmetry, one must see that the structure can be rotated about a fourfold axis and eventually arrive back at its original position. Furthermore, there is no sigma-h plane and there are no perpendicular C2 axes. However, since there are sigma-v planes that cut the molecule into even parts, the structure of cobalamin can be assigned to C4v. With cobalt as the central metal of the molecule, cobalamin carries a distorted octahedral configuration. The axial bond that connects the cobalt to the 5,6-dimethylbenzimidazole is stretched all the way down; its length is several times greater than the distance between the cobalt and the attached R group above it. This is sometimes also referred to as a tetragonal structure: the overall shape is similar to an octahedron, but the two axial groups are different and sit at unequal distances. Since there is only one metal center in the system, the point group and configuration just mentioned are also assigned to the structure as a whole. Because the coordination is stretched out, it is rather weak, and the axial group can be broken apart or replaced by other groups, as mentioned above.

Both IR and Raman spectroscopy have been used to examine the structure of the molecule, as can be followed from the character table of the point group C4v. On the IR side, one finds functions such as \(z\) and \((x, y)\), together with the rotations \((R_x, R_y)\); on the Raman side, functions such as \(x^2 + y^2\), \(z^2\), \(x^2 - y^2\), \(xy\), and \((xz, yz)\). The Raman-active functions indicate stretching modes in the molecule, which relate back to the stretching of the axial 5,6-dimethylbenzimidazole ligand connected directly below the cobalt metal; this stretching can be seen in the figure. By studying the structure and function of cobalamin in this way, scientists can prepare vitamin B12 in their laboratories and serve the community as a whole.

Vitamin B₁₂: Cobalamin is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
InfoPage
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/00%3A_Front_Matter/02%3A_InfoPage
This text is disseminated via the Open Education Resource (OER) LibreTexts Project and, like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such efforts.

Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts' web-based origins allow powerful integration of advanced features and new technologies to support learning. The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields) integrated.

The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant Nos. 1246120, 1525057, and 1413739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor the US Department of Education.

Have questions or comments? For information about adoptions or adaptations, contact us. More information on our activities can be found via Facebook, Twitter, or our blog.

This text was compiled on 07/13/2023
Preface
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/00%3A_Front_Matter/04%3A_Preface
Cornelis A.M. van Gestel, Frank G.A.J. Van Belleghem, Nico W. van den Brink, Steven T.J. Droge, Timo Hamers, Joop L.M. Hermens, Michiel H.S. Kraak, Ansje J. Löhr, John R. Parsons, Ad M.J. Ragas, Nico M. van Straalen, and Martina G. Vijver

This open online textbook on Environmental Toxicology aims at covering the field in its full width, including aspects of environmental chemistry, ecotoxicology, toxicology and risk assessment. With that, it will contribute to improving the quality, continuity and transparency of education in environmental toxicology. We also want to make sure that fundamental insights on the fate and effects of chemicals gained in the past are combined with recent approaches to effect assessment and to the molecular analysis of the mechanisms causing toxicity.

The book consists of six chapters, with each chapter being divided into several sub-chapters to enable covering all aspects relevant to the topic. All chapters are designed in a modular way, with each module having clear training goals and being flagged with a number of keywords. Most modules have an average length of 1000-2000 words, a limited number of references, and 3-5 figures and/or tables. A few modules are enriched with short clips, animations or movies to better illustrate the theory. The introduction chapter of the book, for instance, contains a short interview with two key experts reflecting on the development of the field over the past 30 years.

The book contains tools for self-study and training, like a (limited) number of questions at the end of each module. For the future we foresee the addition of separate exercises and other tools that may help the student in understanding the theory.

The development of this open online textbook was carried out by a project team that included a team of editors and some supporting staff. The team of editors consisted of environmental toxicologists and chemists from six Dutch universities. They drafted the outline of the book, assigned leaders for each chapter, and identified authors for each module. Each module is authored by 1-2 members of the project team. When a topic required expertise not present among the project team, an external expert was asked to write a module (see List of authors). To guarantee the quality of the book, each module was reviewed by at least one of the members of the project team, but also by an international reviewer from outside the project team (see List of reviewers). An advisory board and a steering committee were involved in supervising the project, as well as educational advisors, while the project team served as an editorial board.

The supporting staff included an expert from the university library of the Vrije Universiteit Amsterdam, who advised on the choice of and working with online publication formats, copyright issues, options for including links to other freely available online materials, etc. We also had support from a designer and a professional illustrator, who both contributed to the development of the book.

The publication of this book on an open online platform allows free access to anyone and facilitates its embedding in Learning Management Systems like Canvas and Blackboard that are often used in university teaching, giving students easy access. The modular composition of the book will allow teachers to design their 'own' book, by selecting those modules relevant for the class to teach. This will support flexible use of the book. The publication as an open online book also allows continuous updating, so the book can stay on top of new developments in the field. As it stands, about 100 modules have been finalized, another 30 modules are available in drafts that are currently being reviewed, and some more modules are still in preparation. In spite of this large number of modules, which do provide a good basis for teaching at the BSc level, we realize the book is still not complete. More advanced modules that would facilitate teaching at the MSc level and higher, as well as a widening of the range of topics, seem desirable, but this was not possible within the current project. We will therefore continue working on the book, but we also welcome any suggestions for extending it, and we invite colleagues in environmental toxicology and chemistry to take the initiative to write modules on topics still missing.

The preparation of this book was sponsored by the Netherlands Ministry of Education, Culture and Science through SURF, but could not have been realized without the help of many colleagues who assisted in writing and reviewing the different modules (see Acknowledgement).

-Amsterdam, September 2019
1.1: Environmental toxicology
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/01%3A_Environmental_Toxicology/1.01%3A_Environmental_toxicology
If you want to re-use this chapter, e.g. in your electronic learning environment, feel free to copy this url: maken.wikiwijs.nl/120137/1__Introduction

Author: Ad Ragas
Reviewers: Kees van Gestel, Nico van Straalen
Learning objectives: You should be able to:
Keywords: Environmental toxicology, environmental chemistry, toxicology, ecology, ecotoxicology

Environmental toxicology is the science that studies the fate and effects of potentially hazardous chemicals in the environment. It is a multidisciplinary field assimilating and building upon knowledge, concepts and techniques from other disciplines, such as toxicology, analytical chemistry, biochemistry, genetics, ecology and pathology. Environmental toxicology emerged in response to the growing awareness in the second part of the 20th century that chemicals emitted to the environment can trigger hazardous effects in organisms living in this environment, including humans. Section 1.3 gives a brief summary of the history of environmental toxicology.

One way to depict the field of environmental toxicology is by a triangle consisting of chemicals, the environment and organisms. The triangle illustrates that the fate and potential hazardous effects of chemicals emitted to the environment are determined by the interactions between these chemicals, the environment and organisms. The fate of substances in the environment is the topic of environmental chemistry, the effects of substances on living organisms are studied by toxicology, and the implications of these effects at higher levels of biological organization are analyzed by the field of ecology.

Another term widely used to refer to this field of study is ecotoxicology. The main distinction is the inclusion of human health as an endpoint in environmental toxicology, whereas ecotoxicology is restricted to ecological endpoints. Since the current book includes human health as an assessment endpoint for environmental contaminants, the term environmental toxicology is preferred over ecotoxicology.

Environmental chemists study the fate of chemicals in the environment, e.g. their distribution over different environmental compartments and how this distribution is influenced by the physicochemical properties of a chemical and the characteristics of the environment. They aim to understand the pathways and processes involved in the environmental fate of a chemical after it has been emitted to the environment, including processes such as advection, deposition and (bio)degradation. Within the context of environmental toxicology, the ultimate aim is to produce a reliable assessment of the exposure of organisms, an aim which is often complicated by the enormous heterogeneity of the environment.

Environmental chemists use a variety of tools to analyze and assess the fate of chemicals in the environment. Two fundamental tools are analytical measurements and mathematical modelling. Measurements are essential to acquire new knowledge and insight into the behavior of chemicals in the environment, e.g. measurements of emissions, environmental concentrations and specific processes such as biodegradation. These measurements are analyzed to discover patterns, e.g. between substance properties and environmental characteristics. Once revealed, such patterns can be integrated into a comprehensive mathematical model to predict the fate of and exposure to substances in the environment.
If sufficiently validated, these models can subsequently be used by risk assessors to assess the exposure of organisms to chemicals, reducing the need for expensive measurements. Chapter 2 focuses on the types of chemicals occurring in the environment, their sources and the concentrations found at contaminated sites. Chapter 3 focuses on the fate and transport of these chemicals, including aspects of bioavailability and bioaccumulation in organisms.

Toxicologists study the effects of chemicals on organisms, often at the individual level. Fundamental toxicologists aim to understand the mechanisms involved in the toxicity of a compound, whereas more applied toxicologists are primarily interested in the relationship between exposure and effect, often with the aim of identifying an exposure level that can be considered safe. Within this context, the dose concept as introduced by Paracelsus at the start of the 16th century is essential (see Section 1.3), i.e. the likelihood of adverse effects depends on the dose organisms are being exposed to.

The processes taking place after exposure of an organism to a toxicant are often divided into toxicokinetic and toxicodynamic processes. Toxicokinetic processes are those that describe the fate of the toxicant in the organism, including processes such as absorption, distribution, metabolism and excretion (ADME). These toxicokinetic or ADME processes are sometimes collectively referred to as "what the body does to the substance" and determine the exposure level at the site of toxic action, or internal dose. Toxicodynamic processes are those that describe the evolution of an adverse effect from the moment that the toxicant, or one of its metabolites, interacts with a molecular receptor in the body. This interaction is often referred to as the primary lesion or molecular initiating event (MIE). Toxicodynamic processes are sometimes collectively referred to as "what the substance does to the body", and the chain of events leading to an adverse outcome as the adverse outcome pathway (AOP).

The toxicity of a compound thus depends on toxicokinetic as well as toxicodynamic processes. Traditionally, this toxicity is determined by exposing whole organisms in the laboratory to the substance of interest, and subsequently monitoring the health status of these organisms. However, as a result of the growing societal pressure to reduce animal testing, as well as the increased mechanistic understanding and improved molecular techniques, this so-called "black box approach" is increasingly being replaced by a combination of in vitro toxicity testing and "in silico" predictive approaches. Physiologically based toxicokinetic (PBTK) models are increasingly used to model the fate of chemicals in the body, resulting in a prediction of the internal exposure; a minimal numerical sketch of this toxicokinetic idea is given below. In vitro tests and advanced molecular techniques at the gene (genomics) or protein (proteomics) level may subsequently be used to determine whether these internal exposure levels will trigger adverse effects, although many challenges remain in the prediction of adverse effects based on in vitro tests and omics information. Chapter 4 focuses on dose-response relationships, modes of action, species differences in sensitivity, and resistance against toxicants.
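The sketch below is entirely our own illustration, not a model from this module: a hypothetical one-compartment organism with first-order uptake from water and first-order elimination. Real PBTK models divide the body into many physiologically defined compartments, but the same kind of mass balance sits at their core.

# Minimal one-compartment toxicokinetic sketch (illustrative only):
#   dC/dt = k_in * C_water - k_out * C
# where C is the internal concentration, k_in the uptake rate constant,
# and k_out the elimination rate constant.
def internal_concentration(c_water, k_in, k_out, t_hours, dt=0.01):
    """Euler integration of the one-compartment mass balance."""
    c = 0.0
    for _ in range(int(t_hours / dt)):
        c += (k_in * c_water - k_out * c) * dt
    return c

# Hypothetical values: exposure 1.0 ug/L, k_in = 0.5 L/kg/h, k_out = 0.1 /h.
# C approaches the steady state (k_in / k_out) * C_water = 5.0 ug/kg.
print(round(internal_concentration(1.0, 0.5, 0.1, t_hours=72), 2))  # ~5.0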
Ecologists study the interactions between organisms and their environment. Ecology is an important pillar of environmental toxicology, because ecological knowledge is needed to translate effects at the individual level to the ecosystem level, an important endpoint of ecological risk assessments. Such a translation requires specific knowledge, e.g. on life cycles of organisms, natural factors regulating their populations, genetic variability within populations, spatial distribution patterns, and the role organisms play in processes like nutrient cycling and decomposition. Effects considered relevant at the individual level, such as a tumor risk, may turn out to be irrelevant at the population or ecosystem level. Similarly, subtle effects at the individual level may turn out to be highly relevant at the ecosystem level, e.g. behavioral changes after environmental exposure to antidepressants, which may affect the population dynamics of fish species. In recent years, there has been increasing interest in the role of landscape configuration, distribution patterns and their dynamics in environmental toxicology. The spatial configuration of the landscape, the distribution of species and the timing of exposure events turn out to be important determinants of ecosystem effects. The ecological aspects of environmental toxicology will be discussed in Chapter 5.

Questions:
1. What is the difference between environmental toxicology and ecotoxicology?
2. Indicate for each of the following terms whether it belongs to environmental chemistry, toxicology or ecology: (bio)degradation, fate and exposure model, dose, toxicodynamics, adverse outcome pathway, population dynamics, landscape configuration.
3. Give an example of how subtle effects that may remain undetected in a toxicity test can be relevant at the population or ecosystem level.

This page titled 1.1: Environmental toxicology is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.2: DPSIR
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/01%3A_Environmental_Toxicology/1.02%3A_DPSIR
Author: Ad Ragas
Reviewers: Frank van Belleghem
Learning objectives: You should be able to:
Keywords: Drivers, pressures, state variables, impacts, responses

On the one hand, environmental toxicology is rooted in more fundamental scientific disciplines like biology and chemistry, where curiosity is an important driver for gathering new knowledge. On the other hand, environmental toxicology is a problem-oriented discipline. As such, it is part of the broader field of environmental sciences, which analyses the interactions between society and its physical environment in order to promote sustainability. Within this context, knowledge about the interactions of substances with the biotic and abiotic environment is generated with the ultimate aim of preventing and addressing potential pollution problems in society. To be able to contribute optimally, an environmental toxicologist should know how pollution problems are structured and what the role of environmental toxicologists is in analysing, preventing and solving such problems. A widely used framework for structuring environmental problems is DPSIR, which stands for Drivers, Pressures, State, Impacts and Responses. The aim of the current section is to explain the DPSIR framework.

Communication is essential when analysing and addressing societal issues such as environmental pollution. As an environmental toxicologist, you will have to communicate with fellow scientists to develop a common understanding of the pollution problem, and with policy makers and stakeholders (e.g., producers of chemicals and consumers that are being exposed to chemicals) to explain the scientific state of the art. It is likely that you will use terms like "cause", "source" and "effects". However, not everybody will use and perceive these terms in the same way. Some people may argue that a farmer is the main cause of pesticide pollution, whereas others may argue that it is the pesticide manufacturer, or even the increasing world population. Likewise, some people may perceive the concentration of pesticides in water as an effect of pesticide use, whereas others may refer to the extinction of species when talking about effects. These differences may result in miscommunication, complicating scientific analysis and the search for appropriate solutions.

The DPSIR framework is a tool that helps prevent such communication problems. It provides a common and flexible frame of reference to structure environmental issues by describing these in terms of drivers, pressures, state (variables), impacts and responses. Flexibility is an important characteristic of the framework, enabling adaptation to the problem at hand. The DPSIR framework should not be considered a panacea or used as a mould that rigidly fits all environmental issues. Its main strength is that it stimulates communication between scientists, policy makers and other actors and thereby supports the development of a common understanding.

The DPSIR framework essentially is a cause-and-effect chain that aims to capture the main processes involved in an environmental issue, from its origin to the changes it triggers in the environment and in society. These processes are organized in five main categories: Drivers (the societal needs and developments underlying the problem, such as food production), Pressures (the resulting strains on the environment, such as emissions of chemicals), State (the condition of the environment, such as concentrations in water, air, soil and organisms), Impacts (the changes that society perceives as problematic, such as adverse effects in ecosystems and humans) and Responses (the measures society takes to prevent or mitigate the problem). In principle, any environmental issue can be captured in a DPSIR.
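As a toy illustration (ours, not the module's), the pesticide example used above can be written out as one possible DPSIR labelling. The point of the next paragraph is precisely that other analysts may label the same processes differently.

# One possible DPSIR labelling of the pesticide example from the text.
dpsir_pesticides = {
    "Drivers":   ["food demand of a growing world population"],
    "Pressures": ["pesticide use in agriculture", "emission to surface water"],
    "State":     ["pesticide concentration in water"],
    "Impacts":   ["decline or extinction of aquatic species"],
    "Responses": ["use restrictions", "environmental quality standards"],
}
for category, examples in dpsir_pesticides.items():
    print(category + ": " + "; ".join(examples))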
But it is important to realize that the labelling of processes as either drivers, pressures, state (variables), impacts or responses is likely to differ between people, since the categories are broadly defined and the level of detail in the processes considered may vary. For example, some people may argue that "agriculture" should be classified as a driver, whereas others may argue it is a pressure. Yet other people may deal with this issue by adapting the DPSIR framework, i.e. by adding a new category called "human activities" that is placed in between the drivers and the pressures. Another typical issue is the labelling of consecutive changes in the physical environment, such as rising CO2 levels, increases in temperature and changes in species abundance. These changes can be labelled as changes in consecutive state variables, i.e. state variables of 1st, 2nd and 3rd order. The idea is that 1st order changes trigger 2nd order changes, e.g. rising CO2 levels triggering a rise in temperature, and 2nd order changes trigger 3rd order changes, in this case a shift in species abundance. The change in species abundance may also be labelled as an impact, provided this change is perceived as problematic by (groups in) society. The category "impacts" is closely related to the protection goals of risk assessment (see the Section Ecosystem services and protection goals). If there is consensus in society that an impact should be prevented, it becomes a protection goal. All these examples illustrate that the DPSIR framework should be applied in a flexible way and that communication is essential.

Environmental toxicology mainly focuses on the Pressures, State and Impacts blocks of the DPSIR chain. The use of chemicals by society, e.g. in agriculture or in consumer products, and their emission to the environment belong to the Pressures block. The fate of chemicals in the environment and their accumulation in organisms belong to the State block. And the adverse effects triggered in ecosystems and humans belong to the Impacts block. An important step in the risk assessment of chemicals (Chapter 6) is the derivation of safe exposure levels, such as the Predicted No Effect Concentration (PNEC) for ecosystems or the Acceptable Daily Intake (ADI) for humans. In terms of DPSIR, this boils down to defining an acceptable impact level (e.g. a zero effect level or a 1 in a million tumor risk) and translating this into a corresponding state parameter (e.g. the chemical concentration in air or water). Fate modelling (see the Section on Modelling exposure) aims to predict concentrations in soil, water, air and organisms (all State parameters) based on emission data (a Pressure parameter).

The DPSIR framework has been criticized because it tries to capture all processes in cause-and-effect relationships, resulting in a bias towards the physical dimension of environmental issues, e.g. human activities, emissions, physical effects and mitigation measures. The societal dimension is less easily captured, e.g. knowledge generation, governance structures, resources needed to implement measures, awareness and societal framing of the problem (Svarstad et al., 2008). Although the DPSIR framework can be adapted to accommodate some of these aspects, it should be acknowledged that it has its limitations. Several alternative frameworks have been developed, and some of these better capture the societal dimension (Gari et al., 2015; Elliott et al., 2017).
Nevertheless, DPSIR can be a useful framework to contextualize the problems that are addressed in environmental toxicology. It nicely shows why knowledge on the fate and impact of chemicals (state and impacts) is needed to address pollution issues, and that the use of this knowledge is always subject to valuation, i.e. it depends on how society values the adverse effects triggered by the pollution. DPSIR is also widely used by national and international institutes such as the European Environment Agency (EEA), the United States Environmental Protection Agency (US-EPA) and the Organisation for Economic Cooperation and Development (OECD). The DPSIR framework is sometimes also used as a first step in modelling, especially of its physical dimension. Once relevant processes have been identified, they are described quantitatively, resulting in models that can be used to predict environmental concentrations or ecological effects of substances based on knowledge about human activities or emissions.

References
Elliott, M., Burdon, D., Atkins, J.P., Borja, A., Cormier, R., de Jonge, V.N., Turner, R.K. (2017). "And DPSIR begat DAPSI(W)R(M)!" - A unifying framework for marine environmental management. Marine Pollution Bulletin 118, 27-40.
Gari, S.R., Newton, A., Icely, J.D. (2015). A review of the application and evolution of the DPSIR framework with an emphasis on coastal social-ecological systems. Ocean & Coastal Management 103, 63-77.
Svarstad, H., Petersen, L.K., Rothman, D., Siepel, H., Wätzold, F. (2008). Discursive biases of the environmental research framework DPSIR. Land Use Policy 25, 116-125.

Questions:
1. Indicate whether the following phenomena should be labelled as drivers, pressures, state (variables), impacts or responses.
2. Pharmaceuticals are being used to protect the health of humans, farm animals and pets. After use, part of these pharmaceuticals may reach the environment, where they may trigger adverse effects in ecosystems. In theory, humans may also be affected, e.g. after swimming in polluted surface water or consumption of polluted drinking water. Besides direct toxic effects, antibiotics may also result in the emergence of antibiotic resistance, which threatens human health. List the most important drivers, pressures, state (variables), impacts and responses for the issue of "pharmaceuticals in the environment".
3. On which blocks of the DPSIR framework do you focus when you work as an environmental toxicologist?

This page titled 1.2: DPSIR is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.3: Short history
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/01%3A_Environmental_Toxicology/1.03%3A_Short_history
Author: Ansje Löhr
Reviewers: Ad Ragas, Kees van Gestel, Nico van Straalen
Learning objective: You should be able to:
Keywords: Paracelsus; Rachel Carson (Silent Spring); Awareness; SETAC; standards

From earliest times, man has been confronted with the poisonous properties of certain plants and animals. Poisonous substances are indeed common in nature. People who still live in close contact with nature generally possess an extensive empirical knowledge of poisonous animals and plants. Poisons were, and still are, used by these people for a wide range of applications (catching fish, poisoning arrowheads, in magic rituals and medicines). The first Egyptian medical documentation (written in the Ebers Papyrus) dates from 1550 BC and demonstrates that the ancient Egyptians had an extensive knowledge of the toxic and curative properties of natural products. A good deal is known about the knowledge of toxic substances possessed by the Greeks and the Romans. They were very interested in poisons and used them to carry out executions. Socrates, for example, was executed using an extract of hemlock (Conium maculatum). It was also not unusual to use a poison to murder political opponents. Poisons were ideal for that purpose, since it was usually impossible to establish the cause of death by examining the victim. To do so would have required advanced chemical analysis, which was not available at that time.

Early European literature also includes a considerable number of writings on toxins, including the so-called herbals, such as the Dutch "Herbarium of Kruidtboeck" by Petrus Nylandt dating from 1673. Poisoning sometimes assumed the character of a true environmental disaster. One example is poisoning by the fungus Claviceps purpurea, which occurs as a parasite in grain, particularly in rye (spurred rye), and causes the condition known as ergotism. In the past, this type of epidemic has killed thousands of people who ingested the fungus with their bread. There are detailed accounts of such calamities. For example, in the year 992 an estimated 40,000 people died of ergotism in France and Spain. People were not aware of the fact that death was caused by eating contaminated bread. It was not until much later that it came to be understood that large-scale cultivation of grain involved this kind of risk.

It was pointed out centuries ago that workers in the mining industry, who came into contact with a variety of metals and other elements, tended to develop specific diseases. The symptoms regularly observed as a result of contact with arsenic and mercury in the mining industry were described in detail by the famous Swiss physician Paracelsus in his 1567 treatise "Von der Bergsucht und anderen Bergkrankheiten" (miners' sickness and other diseases of mining). During the emergence of the scientific renaissance of the 16th century, Paracelsus (1493-1541) drew attention to the dose-dependency of the toxic effect of substances. In the words of Paracelsus, "all Ding sind Gifft … allein die Dosis macht das ein Ding kein Gifft is" (everything is a poison … it is only the dose that makes it not a poison). This principle is just as valid today. At the same time, it is one of the most neglected principles in the public understanding of toxicology.

A work from the same period, "De Re Metallica" by Georgius Agricola (Georg Bauer, 1556), deals with the health aspects of working with metals.
Agricola even advised preventive measures, such as wearing protective clothing (masks) and using ventilation.

Another example of the rising awareness of the effects of poisons on human health came with the suggestion, by Percival Pott in 1775, that the high frequency of scrotum cancer among British chimney sweepers was due to exposure to soot. He was the first to describe occupational cancer. A part of the essay by Percival Pott: "The fate of these people seems singularly hard; in their early infancy, they are most frequently treated with great brutality, and almost starved with cold and hunger; they are thrust up narrow, and sometimes hot chimnies, where they are bruised, burned, and almost suffocated; and when they get to puberty, become peculiarly liable to a most noisome, painful, and fatal disease." See the rest of the original text of his essay here.

Soot consists of polycyclic aromatic hydrocarbons (PAHs) and their derivatives. The exposure to soot came with concurrent exposure to a number of carcinogens such as cadmium and chromium. Of the 1487 cases of scrotal cancer reported, 6.9% occurred in chimney sweepers. Scrotal and other skin cancers among chimney sweepers were at the same time also reported from several other countries.

Changes in the environment due to environmental pollution led to interesting insights into the potential of species to adapt for survival and the role of natural selection in it. A famous example of such micro-evolution is the peppered moth, Biston betularia, which is generally a mottled light color with black speckles. This pattern gives the moths good camouflage against lichen-covered tree trunks while resting during the day. During the industrial revolution, the massive increase in the burning of coal resulted in the emission of dark smoke, turning the light trees in the surrounding areas dark. As a consequence, the dark, melanic form of the peppered moth took over in industrial parts of the United Kingdom during the 1800s. The melanic forms used to be quite rare, but their dark color served as a protective camouflage against bird predation in the polluted areas. This allowed them to become dominant in areas with soot-covered trunks. Two British biologists, Cyril Clarke and Philip Sheppard, discovered this when they pinned dead moths of the two types on dark and light backgrounds to study their predation by birds. The dark moths had an advantage in the dark forests, a result of natural selection. In areas where air pollution has since decreased, the melanic form has become less abundant again. Video on peppered moths

After the second world war, synthetic chemical production became widespread. However, there was limited awareness of the environmental and health risks. In the 1950s, environmental toxicology came to light as a result of increasing concern about the impact of toxic chemicals on the environment. This led toxicology to expand from the study of the toxic impacts of chemicals on man to that of toxic impacts on the environment. An important person in raising this awareness was Rachel Carson. Her book "Silent Spring", published in 1962, in which she warned of the dangers of chemical pesticides, triggered widespread public concern about the dangers of improper pesticide use. First have a look at a historical clip on the use of dichlorodiphenyltrichloroethane, commonly known as DDT, which was developed in the 1940s as the first modern synthetic insecticide: Silent Spring - Rachel Carson

DDT is very persistent and tends to concentrate when moving through the food chain.
As a consequence, the use of DDT led to very high levels, especially in organisms high in the food chain. Bioaccumulation in birds appeared to cause eggshell thinning and reproductive failure. Because of the increasing evidence of DDT's declining benefits and its environmental and toxicological effects, the United States Department of Agriculture, the federal agency then responsible for regulating pesticide use, began regulatory actions in the late 1950s and 1960s to prohibit many of its uses. By the 1980s, the use of DDT was also banned in most Western countries.

As a result of large environmental disasters, awareness amongst the general public increased. An enormous industrial pesticide disaster occurred in 1984 in Bhopal, India, when more than 40 tonnes of the highly toxic gas methyl isocyanate (MIC) leaked from a pesticide plant into the towns located near the plant. Almost 4,000 people were killed immediately and 500,000 people were exposed to the poisonous substance, causing many additional deaths from gas-related diseases. The plant was initially only allowed to import MIC, but was producing it on a large scale by the time of the disaster, and safety procedures were far below (international) standards for environmental safety. The disaster made it very clear that this had to change to avoid other large-scale industrial disasters.

The Sandoz agrochemical spill close to Basel in Switzerland in 1986 was the result of a fire in a storehouse. The emission of large amounts of pesticides caused severe ecological damage to the river Rhine and massive mortality of benthic organisms and fish, particularly eels and salmonids.

At the time of these incidents, environmental standards for chemicals were still largely lacking. The incidents triggered scientists to do more research on the adverse environmental impacts of chemicals. Public pressure to control chemical pollution increased, and policy makers introduced instruments to better control the pollution, e.g. environmental permitting, discharge limits and environmental quality standards.

In 1987, the World Commission on Environment and Development released the report "Our Common Future", also known as the Brundtland Report. This report placed environmental issues firmly on the political agenda, defining sustainable development as "a development that meets the needs of the present without compromising the ability of future generations to meet their own needs". Another influential book was "Our Stolen Future", written by Theo Colborn and colleagues in 1996. It raised awareness of the endocrine disrupting effects of chemicals released into the environment, which threaten (human) reproduction, by emphasizing that this not only concerns feminization of fish or other organisms in the environment, but also the human species.

Please watch the video "Developments in Environmental Toxicology - Interview with two pioneers" included at the start of the Introduction of this book.

SETAC

Before the 1980s, no forum existed for interdisciplinary communication among environmental scientists (biologists, chemists, toxicologists) as well as managers and others interested in environmental issues. The Society of Environmental Toxicology and Chemistry (SETAC) was founded in North America in 1979 to fill this void. In 1991, the European branch started its activities, and later SETAC also established branches in other geographical units, like South America, Africa and South-East Asia.
SETAC publishes two journals: Environmental Toxicology and Chemistry (ET&C) and Integrated Environmental Assessment and Management (IEAM). SETAC is also active in providing training, e.g. a variety of online courses where you can acquire skills and insights into the latest developments in the field of environmental toxicology. Based on the growth in the society's membership, the meeting attendance and the publications, a forum like SETAC was clearly needed. Read more on SETAC, their publications and how you can get involved here.

Where SETAC focuses on environmental toxicology, international toxicological societies have also been established, like EUROTOX in Europe and the Society of Toxicology (SOT) in North America. In addition to SETAC, EUROTOX and SOT, many national toxicological societies and ecotoxicological counterparts or branches have become active since the 1970s, showing that environmental toxicology has become a mature field of science. One element indicative of this maturation is that the different societies have developed programmes for the certification of toxicologists.

References
Carson, R. (1962). Silent Spring. Houghton Mifflin Company.
Colborn, T., Dumanoski, D., Peterson Myers, J. (1996). Our Stolen Future: Are We Threatening Our Fertility, Intelligence, and Survival? A Scientific Detective Story. New York: Dutton. 306 p.
World Commission on Environment and Development (1987). Our Common Future. Oxford: Oxford University Press. p. 27.

Paracelsus is famous for the dose-dependency of the toxic effect of substances. What is meant by dose-dependency?
What is the difference between toxicology and environmental toxicology?
How was sustainable development defined in "Our Common Future", also known as the Brundtland report?

This page titled 1.3: Short history is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.1: Introduction
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/02%3A_Environmental_Chemistry_Chemicals/2.01%3A_Introduction
Authors: John Parsons, Steven Droge
Reviewer: Kees van Gestel
Learning objectives: You should be able to
Keywords: natural toxicants, molecular structures, pollutant classes, anthropogenic pollutants

Environmental toxicology deals with the negative effects of the exposure to chemicals we regard as pollutants (or contaminants/toxicants). Environmental toxicants receive a lot of media attention, but many critical details get lost or are easily forgotten. The clip "You, Me, DDT" shows a grandson discovering the works of his grandfather, Paul Hermann Müller, the Swiss inventor of the insecticide DDT, who received the Nobel Prize for Medicine for this in 1948 (see also Section 1.3). The clip "Stop the POPs" interviews (seemingly) ordinary people about one of the most heavily regulated groups of pollutants.

Organisms, including humans, have always been exposed to chemicals in the environment and rely on many of these chemicals as nutrients. Volcanoes, flooding of acid sulfur lakes, and forest fires have caused widespread contamination episodes. Organisms are also in many cases directly or indirectly involved in the fate and distribution of undesirable chemicals in the environment. Many naturally occurring chemicals are toxicants already (see also Section 1.3).

Human activities have had an enormous impact on the increased exposure to natural chemicals as a result of, for example, the mining and use of metals, salts and fossil fuels from geological resources. This is for example the case for many metals, nutrients such as nitrate, and organic chemicals present in fossil fuels. Additionally, the industrial synthesis and use of organic chemicals, and the disposal of wastes, have resulted in a wide variety of hazardous chemicals that had either never existed before, or at least not at the levels or in the chemical forms that occur nowadays in our heavily polluted global system. These are typically organic chemicals that are referred to as anthropogenic ('caused by humans') or xenobiotic ('foreign to organisms') chemicals. In this chapter we aim to clarify the key properties and functionalities of the most common groups of pollutants resulting from human activities, and provide some background on how we can group them and understand their behaviour in the environment.

In the field of environmental toxicology, we are most often concerned about the effects of two distinct types of contaminants: metals and organic chemicals. In some cases other chemicals, such as radioactive elements, may also be important, while we could also consider the ecological effects of highly elevated nutrient concentrations (eutrophication) as a form of environmental toxicology.

Metals and metalloids (elements intermediate between metals and non-metals) comprise the majority of the known elements. They are mined from minerals and used in an enormous variety of applications, either in their elemental form or as compounds with inorganic or organic elements or ions. Many metals occur as cations, but many processes influence the dissolved form of metals. Aluminium, for example, is present as a dissolved cation (Al3+) only under very acidic conditions, while at neutral pH the metal speciates into hydroxides such as the neutral Al(OH)3. Mercury is present as the free ion Hg2+, but microbial transformation forms the highly toxic product methylmercury (CH3Hg+, or MeHg+).
Mining and processing of metals, together with disposal of metal-containing wastes, are the main contributors to metal pollution, although sometimes metals are introduced deliberately into the environment as biocides. The widely used pesticide copper sulfate in e.g. grape-growing districts is an example. More information on metals considered to be environmental pollutants is given in Section 2.2.1.

Organic chemicals are manufactured to be used in a wide variety of applications. These range from chemicals used as pesticides to industrial intermediates, fossil fuel related hydrocarbons, additives used to treat textiles and polymers, such as flame retardants and plasticisers, and household chemicals such as detergents, pharmaceuticals and cosmetics.

Organic chemicals that we regard as environmental pollutants include a huge variety of different structures and have a wide variety of properties that influence environmental distribution and toxicity. With such a wide variety of chemicals to deal with, it is useful to classify them into groups. Depending on our interest, we can base this classification on different aspects, for example on their chemical structure, their physical and chemical properties, the applications the chemicals are used in, or their effects on biological systems. These aspects are of course closely related to their chemical structure, as this is the basis of the properties and effects of chemicals. An overview of different ways of classifying environmental contaminants (sometimes referred to as ecotoxicants) is shown in Tables 1A, 1B, and 1C.

Table 1A. Grouping options of organic contaminants with specific chemical structures

| Term | Characteristics | Examples |
|---|---|---|
| Hydrocarbons | More CHx units: higher hydrophobicity/lipophilicity, and lower aqueous solubility | hexane |
| Polycyclic aromatic hydrocarbons | Combustion products; flat structure | naphthalene, B[a]P |
| Halogenated hydrocarbons | H substituted by fluorine, chlorine, bromine, iodine; often relatively persistent | PCB, DDT, PBDE |
| Dioxins and furans | Combustion/industrial products; one or two oxygen atoms between two aromatic rings; highly toxic | TCDD, TCDF |
| Organometallics | Organic chemicals containing metals, used e.g. in anti-fouling paints | tributyltin |
| Organophosphate pesticides | Phosphate esters, often connecting two lipophilic groups; act on the nervous system | chlorpyrifos |
| Pyrethroids | Usually synthetic pesticides based on natural pyrethrum extracts | fenvalerate |
| Neonicotinoids | Synthetic insecticides with aromatic nitrogen, related to the alkaloid nicotine | imidacloprid |
| … endless varieties/combinations | … too many characteristics to list | … |

Table 1B. Grouping options of organic contaminants with specific properties

| Term | Characteristics | Examples |
|---|---|---|
| Persistent organic pollutants (POPs) | Bioaccumulative; end up even in remote Arctic systems | PCBs, PFOS |
| Persistent mobile organic chemicals (PMOCs) | Difficult to remove during drinking water production | PFBA, metformin |
| Ionogenic organic chemicals (IOCs) | Acids or bases, predominantly ionized under environmental pH | Prozac, MDMA, LAS |
| Substances of unknown or variable composition, complex reaction products or of biological materials (UVCB) | Multicomponent compositions of often analogous structures with wide-ranging properties | oil-based lubricants |
| Plastics | Chains of repetitive monomer structures; wide-ranging sizes/dimensions | polyethylene, silicone, Teflon |
| Nanoparticles (NP) | Mostly manufactured particles with >50% having dimensions ranging 1-100 nm | titanium dioxide (TiO2), fullerene |
Table 1C. Grouping options of organic contaminants with specific usage

| Term | Characteristics | Examples |
|---|---|---|
| Pesticides | Toxic to pests | DDT |
| Herbicides | Toxic to plants | atrazine, glyphosate |
| Insecticides | Toxic to insects | chlorpyrifos, parathion |
| Fungicides | Toxic to fungi | phenyl mercury acetate |
| Rodenticides | Toxic to rodents | hydrogen cyanide |
| Biocides | Toxic to many species | benzalkonium |
| Pharmaceuticals | Specifically bioactive chemicals with often (un)known side effects; many are bases | diclofenac (pain killer), iodixanol (radio-contrast agent), carbamazepine, Prozac |
| Drugs of abuse | Often opioid based, but also synthetic designer drugs with similar activity; many are ionogenic bases | cannabinoids, opioids, amphetamine, LSD |
| Veterinary pharmaceuticals | Can include relatively complex (ionogenic) structures | antibiotics, antifungals, steroids, non-steroidal anti-inflammatories |
| Industrial chemicals | Produced in large volumes by the chemical industry for a wide array of products and processes | phenol |
| Fuel products | Flammable chemicals | kerosene |
| Refrigerants and propellants | Small chemicals with specific boiling points | freon-22 |
| Cosmetics/personal care products | Wide varieties of specific ingredients of formulations that render specific properties of a product | sunscreen, parabens |
| Detergents and surfactants | Long hydrophobic hydrocarbon tails and polar/ionic headgroups | sodium lauryl sulfate (SLS), benzalkonium |
| Food and feed additives | To preserve flavor or enhance taste, appearance, or other qualities | "E-numbers": acetic acid = E260 in the EU, additive 260 in other countries |

Chapter 2 mostly discusses groups of chemicals in separate modules, according to the specific environmental properties in Table 1B (Section 2.2) and the specific applications in Table 1C (Section 2.3), according to which certain regulations apply in most cases. The property classifications can be based on (often interrelated) properties such as solubility (in water), hydrophobicity (tendency to leave the water), surface activity (tendency to accumulate at the interface between two phases, such as for "surfactants"), polarity, neutral or ionic character, and reactivity. Other classifications very important for environmental toxicology are based on environmental behaviour or effects, such as persistency ("P": increasing problems with increased emissions), bioaccumulation potential ("B": up-concentration in food chains), or type of specific toxic effects ("T"). The influence of specific chemical structures such as in Table 1A is further clarified in the current introductory chapter in order to better understand the basic chemical terminology.

As the name suggests, hydrocarbons contain only carbon and hydrogen atoms and can therefore be considered the simplest group of organic molecules. Nevertheless, this group covers a wide variety of aliphatic, cycloaliphatic and aromatic structures, and also a wide range of properties. What this group shares is a low solubility in water, with the larger molecules being extremely insoluble and accumulating strongly in organic media such as soil organic matter.

As a result of the ability of carbon to form strong bonds with itself and other atoms, forming structures containing long chains or rings of carbon atoms, there is a huge and increasing number (millions) of organic chemicals known. Chemicals containing only carbon and hydrogen are known as hydrocarbons. Aliphatic molecules consist of chains of carbon atoms, as either straight or branched chains.
Molecules containing carbon-carbon multiple bonds (e.g. C=C) are known as unsaturated molecules and can be converted to saturated molecules by addition of hydrogen. Cyclic alkanes consist of rings of carbon atoms. These may also be unsaturated, and a special class of these is known as aromatic hydrocarbons, for example benzene. The specific electronic structure in aromatic molecules such as benzene makes them much more stable than other hydrocarbons. Multiple aromatic rings linked together make perfectly flat molecules, such as pyrene.

It later became clear that other organochlorine pesticides such as lindane and dieldrin (Table 3) were also widely distributed in the environment. This was also the case for polychlorinated biphenyls (PCBs) and other organochlorinated industrial chemicals. These chemicals all share a number of undesirable properties, such as environmental persistence, very low solubility in water and a high level of accumulation in biota to potentially toxic levels. Many organochlorines can be viewed as hydrocarbons in which hydrogen atoms have been replaced by chlorine. This makes them even less soluble than the corresponding hydrocarbon, due to the large size of chlorine atoms. In addition, chlorination also makes the molecules more chemically stable and therefore contributes to their environmental persistence. Other organochlorines contain additional functional groups, such as the ether bridges in PCDDs and PCDFs (better known as dioxins and dibenzofurans) and the ester groups in the 2,4-D and 2,4,5-T herbicides. Many organochlorines were applied very successfully in huge quantities as pesticides for decades before their negative effects, such as persistence and accumulation in biota, became apparent. It is therefore no coincidence that the initial set of Persistent Organic Pollutants (POPs) identified in the Stockholm Treaty (see below) as chemicals that should be banned were all organochlorines, as shown in Table 3.

As well as chlorine, other halogens such as bromine and fluorine are used in important groups of environmental contaminants. Organobromines are best known as flame retardants and have been applied in large quantities to improve the fire safety of plastics and textiles. They share many of the undesirable properties of organochlorines, and several classes have now been taken out of production. Organofluorines are another important class of halogenated chemicals, and part of the well-known group of ozone-depleting CFCs (Section 2.3.6). In particular, per- and polyfluoroalkyl substances are widely used as fire-stable surfactants in fire-fighting foams, as grease and water resistant coatings, and in the production of fluoropolymers such as Teflon. Organofluorines are much more water soluble and much less bioaccumulative than organochlorines and organobromines, but are extremely persistent in the environment.

The recognition of these organochlorines as harmful environmental contaminants eventually resulted in measures to restrict their manufacture and use in the Stockholm Convention on Persistent Organic Pollutants, signed in 2001 to eliminate or restrict the production and use of persistent organic pollutants (POPs). This initial list of POPs has subsequently been augmented with other harmful halogenated organic pollutants up to a total of 29 chemicals, which are either to be eliminated or restricted, or require measures to reduce unintentional releases. POPs are further discussed in Section 2.2.4.
Table 3. Key persistent organic pollutants (POPs): the "Dirty Dozen". Additional POPs to be eliminated include: chlordecone, lindane (hexachlorocyclohexane), pentachlorobenzene, endosulfan, chlorinated naphthalenes, hexachlorobutadiene, and the brominated diphenyl ethers (BDEs) tetrabromodiphenyl ether, pentabromodiphenyl ether and decabromodiphenyl ether.

Since the signing of the Stockholm Convention, organochlorine pesticides have been replaced in most countries by more modern pesticide types, such as the organophosphorus and carbamate insecticides. These compounds are less persistent in the environment, but could still pose elevated risks to the environments surrounding agricultural sites, and lead to increased levels in food produced on these sites. The very toxic organophosphorus neurotoxicant parathion has been in use since the 1940s, and has the typical structure of two lipophilic side chains on two esters (ethyl units), as well as a polar unit. Parathion has caused hundreds of fatal and non-fatal intoxications worldwide and as a result it is banned or restricted in 23 countries. The structurally comparable organophosphate diazinon has been widely used for general-purpose gardening and indoor pest control since the 1970s, but residential use was banned in the U.S. in 2004. In Californian agriculture, however, 35,000 kg of diazinon was used in 2012. The carbamate-based insecticide carbaryl is toxic to target insects, and also to non-target insects such as bees, but is detoxified and eliminated rapidly in vertebrates, and is not secreted in milk. Although illegal in 7 countries, carbaryl is the third-most-used insecticide in the U.S., approved for more than 100 crops. In 2012, 52,000 kg of carbaryl was used in California, roughly a third of the amount used in 2000. Neonicotinoid insecticides, with their typical nitrogen-containing aromatic ring, form a third generation of pesticide structures. Imidacloprid is currently the most widely used insecticide worldwide, but as of 2018 it is banned in the EU, along with two other neonicotinoids, clothianidin and thiamethoxam.

As well as the pesticides discussed above, many other chemicals are brought into the environment inadvertently during their manufacture, distribution and use, and the range of chemicals recognised as problematic environmental contaminants has expanded enormously. These include fossil fuel-related hydrocarbons, surfactants, pigments, biocides and chemicals used as pharmaceuticals and personal care products (PPCPs). A figure in Boxall et al. (2012) gives an illustrative overview of the major routes by which PPCPs, but also many anthropogenic contaminants other than pesticides, are released into the environment. Wastewater treatment systems in particular form the main entry point for many industrial and household products.

The wide variety of contaminant structures does not mean that most chemicals have become increasingly more complex. For risk assessment, molecular properties such as water solubility, volatility and lipophilicity are often estimated based on quantitative structure-property relationships (Section 3.4.3). With increasingly complex structures, such property estimations based on the molecular structure become more uncertain.

The antibiotic erythromycin, for example, has a very complex chemical structure (C37H67NO13), with 13 functional units along with a 14-membered ring. In addition, the tertiary nitrogen group is an amine base that can give the molecule a positive charge upon protonation, depending on the environmental pH.
Erythromycin is on the World Health Organization's List of Essential Medicines (the most effective and safe medicines needed in a health system), and is therefore widely used. Continuous emissions in waste streams pose a potential threat to many ecosystems, but many environmentally and toxicologically relevant properties are scarcely studied, and poorly estimated.

There are also many contaminants or toxicants with a seemingly simple structure. Many surfactants are simple linear long-chain hydrocarbons with a polar or charged headgroup. The illicit drug amphetamine has only a benzene ring and an amine unit, the illicit drug GHB only an alcohol and a carboxylic acid group, and the herbicide glyphosate only 16 atoms. Still, these four example chemicals also have acidic or basic units that often result in predominantly charged organic molecules, which also strongly influences their environmental and toxicological behaviour (see the sections on PMOCs and Ionogenic Organic Chemicals). In the case of glyphosate, the chemical has four differently charged forms depending on the pH of the environment. At a common pH of 7-9, all of glyphosate's charged groups are predominantly ionized, making it very difficult to derive calculations of its environmental properties.

Boxall, A.B.A., Rudd, M.A., Brooks, B.W., Caldwell, D.J., Choi, K., Hickmann, S., Innes, E., Ostapyk, K., Staveley, J.P., Verslycke, T., Ankley, G.T., Beazley, K.F., Belanger, S.E., Berninger, J.P., Carriquiriborde, P., Coors, A., DeLeo, P.C., Dyer, S.D., Ericson, J.F., Gagné, F., Giesy, J.P., Gouin, T., Hallstrom, L., Karlsson, M.V., Larsson, D.G.J., Lazorchak, J.M., Mastrocco, F., McLaughlin, A., McMaster, M.E., Meyerhoff, R.D., Moore, R., Parrott, J.L., Snape, J.R., Murray-Smith, R., Servos, M.R., Sibley, P.K., Oliver Straub, J., Szabo, N.D., Topp, E., Tetreault, G.R., Trudeau, V.L., Van der Kraak, G. (2012). Pharmaceuticals and personal care products in the environment: what are the big questions? Environmental Health Perspectives 120, 1221-1229.

This page titled 2.1: Introduction is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.2: Pollutants with specific properties
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/02%3A_Environmental_Chemistry_Chemicals/2.02%3A_Pollutants_with_specific_properties
Author: Kees van Gestel
Reviewers: John Parsons, Jose Alvarez Rogel
Learning objectives: You should be able to
Keywords: Heavy metals, Metalloids, Rare earth elements, Essential elements

Introduction

The majority of the elements in the periodic table consists of metals. The term "heavy metals", however, is not very meaningful for such a heterogeneous group of elements with rather different biological and chemical properties. The rare earth elements (REEs), lanthanides and actinides, for example, have a high density or specific weight, but are usually not considered heavy metals because of their rather different chemical behaviour. Metalloids have both metallic and non-metallic properties, or are nonmetallic elements that can combine with a metal to produce an alloy. The periodic table of elements can thus be divided into the groups of (heavy) metals, metalloids and rare earth elements.

The elements known to be essential to life include, besides C, H, O and N, the major essential elements Ca, P, K, Mg, Na, Cl and S, the trace elements Fe, I, Cu, Mn, Zn, Co, Mo, Se, Cr, Ni, V, Si, As and B (the latter only for plants), and some elements that may support physiological functions at ultra-trace levels (Li, Al, F and Sn) (Walker et al., 2012).

Chemical and physical properties

Except for mercury, most pure metals are solid at room temperature. In general, metals are good electrical and thermal conductors with high luster and malleability. Upon heating, metals readily emit electrons. These descriptors of metals, however, are not very helpful when having to deal with elements that do not exist prominently in the pure elemental state, but rather are present as metal compounds, complexes, and ions at fairly low environmental concentrations.

More useful are characteristics that influence metal transport between environmental compartments and their interaction with abiotic and biotic components of the environment. The speciation, the chemical form in which an element occurs (e.g., oxidized, free ion, or complexed to inorganic or organic molecules), determines its transport and interaction in the environment (see the section on Metal Speciation). Chemical bonding is determined by outer orbital electron behavior, with metals tending to lose electrons when reacting with nonmetals. In many normal biological reactions, metals are cofactors within coenzymes (e.g. in vitamins) and can act as electron acceptors and donors during oxidation and reduction reactions (Newman, 2015).

Nieboer and Richardson (1980) proposed a classification based on the equilibrium constant for the formation of metal complexes. They distinguished class A metals (oxygen-seeking) and class B metals (sulphur- and nitrogen-seeking). In addition, an intermediate or borderline group is defined, in which the type A or B characteristics are less pronounced. As, Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, Sb, Sn, Ti, V, and Zn belong to this group.

This classification of metals is highly relevant for the transport across cell membranes, the intracellular storage in granules and the induction of metal-binding proteins, as well as for their behaviour in the environment in general.

Occurrence

(Heavy) metals and rare earth elements are diffusely distributed over the Earth, but at some places certain elemental combinations are highly concentrated (in metal ores). Despite this diffuse distribution, differences in background metal concentrations in soils can be large, depending on the type and origin of rock or sediment (Table 1).
Table 1. Background concentrations (mg/kg dry weight) of (heavy) metals and metalloids in crust material, and median and maximum concentrations in different top soils across the world. Derived from Kabata-Pendias and Mukherjee (2007) and Alloway (2013).

In general, volcanic rock (e.g. basalt) contains high metal levels and sedimentary rock (e.g. limestone) low metal levels. But there is no relation between metal concentrations in the Earth's crust and the elemental requirements of organisms.

Emissions of metals

Upon weathering of stone formations and ores, elements are released and enter local, regional and global biogeochemical cycles. Depending on their water solubility and on soil properties and vegetation, metals may be transported through the environment and deposited or precipitated at places close to or far away from their source.

Volcanoes account for the largest natural input of metals to the environment, but the concentrations of these metals in the soil are rarely elevated to toxic levels, due to the massive dilution that takes place in the atmosphere. Permanently active volcanoes may be an important local source of (metal) pollution.

A special case is arsenic, which may occur as a natural element of soils. At some places, As levels are fairly high, particularly in ground water. High-As groundwater areas are found in Argentina, Chile, Mexico, China and Hungary, and also in Bangladesh, India (West Bengal), Cambodia, Laos and Vietnam. In the latter countries, especially in the Bengal Basin, millions of wells have been dug to provide safe drinking water. Irrigation pumping leads to an inflow of oxygen and organic carbon, which causes a mobilisation of the arsenic normally bound to ferric oxyhydroxides in these soils. As a result, dissolved As concentrations in many wells exceed the World Health Organisation (WHO) guideline value of 10 µg/L for drinking water.

Important anthropogenic sources of metals in the environment include the mining and processing of metal ores, the combustion of fossil fuels, industrial activities such as smelting, the disposal of metal-containing wastes, and the deliberate application of metal-containing products such as biocides. Anthropogenic releases of many metals, such as Pb, Zn, Cd and Cu, are estimated to be between one and three orders of magnitude higher than natural fluxes (Depledge et al., 1998). An estimated amount of up to 50,000 tonnes of mercury is released naturally per year as a result of degassing from the Earth's crust, but human activities account for even larger emissions (Walker et al., 2012).

References
Alloway, B.J. (2013). Heavy Metals in Soils. Trace Metals and Metalloids in Soils and their Bioavailability. Third Edition. Environmental Pollution, Volume 22, Springer, Dordrecht.
Depledge, M.H., Weeks, J.M., Bjerregaard, P. (1998). Heavy metals. In: Calow, P. (Ed.). Handbook of Ecotoxicology. Blackwell Science, Oxford, pp. 543-569.
Kabata-Pendias, A., Mukherjee, A.B. (2007). Trace Elements from Soil to Human. Springer Verlag, Berlin.
Newman, M.C. (2015). Fundamentals of Ecotoxicology. The Science of Pollution. Fourth Edition. CRC Press, Taylor & Francis Group, Boca Raton.
Nieboer, E., Richardson, D.H.S. (1980). The replacement of the nondescript term 'heavy metals' by a biologically and chemically significant classification of metal ions. Environmental Pollution (Ser. B) 1, 3-26.
Walker, C.H., Hopkin, S.P., Sibly, R.M., Peakall, D.B. (2012). Principles of Ecotoxicology, Fourth Edition.
CRC Press, Taylor & Francis Group, London.

Why can the term heavy metal not be used when referring to different elements considered metals?
Why are some elements indicated as essential elements?
Why is the classification of Nieboer and Richardson relevant for the biological interactions of metals with living organisms?
Name at least 5 sources of metal emission to the environment.
Besides leading to the emission of metals, metal mining may have another important environmental effect. Which one?

(in preparation)

Authors: Steven Droge
Reviewer: Michael McLachlan
Learning objectives: You should be able to
Keywords: Chemical industry, tonnage, hazardous chemicals, REACH, regulation

Introduction

The chemical industry produces a wide variety of chemicals that find use in industrial processes and as ingredients in day-to-day products for consumers. Instead of chemicals, 'substances' may be a more carefully worded description, as it also includes complex mixtures, polymers and nanoparticles. Many substances are produced by globally distributed companies in very high volumes, ranging for example from 100 to 10,000 tonnes (1 tonne = 1000 kg) per year. Worldwide, governments have tried to control and assess chemical safety, as nicely summarized on the ChemHAT website. Australia, for example, has the Industrial Chemicals (Notification and Assessment) Act 1989 (2013 version). Just like elsewhere in the world, in the European Union (EU) a variety of regulatory institutes at all levels of government used to perform safety assessments regarding the use of substances in products, and how these are emitted into waste streams. This changed dramatically in 2007.

On June 1st, 2007, a new EU regulation came into force, called REACH (official legislation document (EC) 1907/2006; about REACH; EU info on REACH). This law reversed the role of governments in chemical safety assessment, because it placed the burden of proof on companies that manufacture a chemical, import a chemical into the EU, or apply chemicals in their products. Within REACH, companies must identify and manage the risks linked to the chemicals they manufacture and market in the EU. REACH stands for Registration, Evaluation, Authorisation and Restriction of Chemicals. China soon followed with the analogous "China REACH" in 2010, and then came South Korea in 2015 with "K-REACH". The main focus in this module is on EU-REACH as the leading and well-documented example. Other legislation regulating industrial chemicals can often easily be found online, e.g. via the ChemHAT link above.

In REACH, each chemical is registered only once. Accordingly, companies must work together to prepare one dossier that demonstrates to the European Chemicals Agency (ECHA) how chemicals can be safely used, and they must communicate the risk management measures to the users. ECHA, or any Member State, evaluates the dossiers, and can start a "restriction procedure" when they are concerned that a certain substance poses an unacceptable risk to human health or the environment. If the risks cannot be managed, authorities can restrict the use of substances in different ways. In the long run, the most hazardous substances should be substituted with less dangerous ones.

So which chemicals have been registered in the past decade under REACH? In principle, REACH applies to all chemical 'substances' in the EU zone.
This includes metals, such as "iron" and "chromium", organic chemicals such as "methanol", "fatty acids" and "ethyl-4-(8-chloro-5,6-dihydro-11H-benzocyclohepta[1,2-b]pyridin-11-ylidene)piperidine-1-carboxylate" (see Box 1), (nano)particles like "zinc oxide" and "silicon dioxide", and polymers. See for example the registration dossier link in Box 1.

Box 1. Examples from the REACH dossiers

The REACH registration database can be searched via the ECHA website. Accept the disclaimer, and you are ready to search for chemicals based on name, CAS number, substance data, or use and exposure data.

Search for example for the name "ethyl 4-(8-chloro-5,6-dihydro-11H-benzocyclohepta[1,2-b]pyridin-11-ylidene)piperidine-1-carboxylate" and you will find the link to the dossier of this substance with CAS 79794-75-5, as compiled by the registrant. This complex chemical name is better known as the antihistamine drug loratadine, but this name does not show up in the dossier search!

Click on the name to get basic information on the compound. The hazard classification reads: "Warning! According to the classification provided by companies to ECHA in REACH registrations this substance is very toxic to aquatic life, is very toxic to aquatic life with long lasting effects, is suspected of causing cancer, causes serious eye irritation, is suspected of causing genetic defects, causes skin irritation, may cause an allergic skin reaction and may cause respiratory irritation." This compound is labeled "PBT" based on limited available data (classifying as a combination of Persistent / Bioaccumulative / Toxic). However, the section [About this substance] reads: "for industrial use resulting in the manufacture of another substance (use of intermediates)". As an intermediate in a restricted process, many parts of the dossier did not have to be completed for REACH. As a medicinal product, loratadine is strictly regulated elsewhere. Scroll down to the REACH link for the registration dossier (.../21649) to find out more about the different entries for this chemical.

If we do a search for ["Bisphenol"], we get a long list of optional chemicals, for example Bisphenol A (CAS 80-05-7), but also, if you scroll down further, Bisphenol S (CAS 80-09-1). If we look at the dossier of the first Bisphenol A entry, with tonnage "100 000 - 1 000 000 tonnes per annum", you can find a long list of REACH information packages besides the dossier, as this chemical is hotly debated. The dossier for Bisphenol A was evaluated in 2013, and this evaluation is also available (look for the pdf under Dossier evaluation status). In this compliance check, the registrant is requested to submit additional rat and mouse toxicity data, along with statements of reasons.
There is, for example, also a link to the [Restriction list (annex XVII)], which leads to a pdf called 66.pdf, which states an adopted restriction for this chemical within the REACH framework and the previous legislation, Directive 76/769/EEC: "Shall not be placed on the market in thermal paper in a concentration equal to or greater than 0,02 % by weight after 2 January 2020".

Find your own chemical of interest to discover more about the transparency of the chemical information on which risk assessment is based.

However, some groups of chemicals are (partly) exempt from REACH because they are covered by other legislation in the EU (e.g. medicinal products, food and feed additives, and pesticides). A detailed overview of European chemical safety guidelines related to chemicals with different application types is presented in Van Wezel et al. (2017).

Following pre-registration of the 145,297 chemicals most likely to require regulation, REACH came into force in 2008 in a stepwise process with different deadlines for different groups of chemicals. The first dossiers were to be completed by 2010 for the highest produced volume chemicals (>1000 tonnes/y) and the most hazardous chemicals (CMRs, i.e. carcinogenic, mutagenic or reprotoxic substances, >1 tonne/y, and chemicals with known very high aquatic toxicity >100 tonnes/y). These groups potentially pose the greatest risk because of either their high emissions or their inherent toxicity. In 2013, registration dossiers for chemicals with a lower tonnage (100-1000 tonnes/y) were to be completed. By May 31, 2018, all chemicals with a quantity of 1-100 tonnes/y on the EU market should have been registered. New chemicals will all be subject to the REACH procedures.

In 2018, 21,787 substances had been registered under REACH. A total of 14,262 companies were involved. In comparison, 15,500 substances had been registered by 2016 (i.e., 6,287 chemicals were added in the two following years). In 2018, 48% of all substance registrations had been done in Germany. For 24% of the registered substances a dossier was already available prior to REACH, 70% are "old chemicals" for which no registration had been done before REACH was initiated, and only 6% are newly developed substances that needed to be registered before manufacture or import could start.

There are multiple benefits of the REACH regulation of industrial chemicals. Most data on chemicals entered in the registration process are publicly available, creating transparency and improving customer awareness. If registered chemicals are classified as Substances of Very High Concern (SVHC), based on the chemical information in these dossiers and after agreement from research panels, alternatives that passed the same regulation can be suggested instead.

The necessity to add data on potential toxicity for so many chemicals has been combined with a strong focus on, and further development of, animal-friendly testing methods. Read-across from related chemicals, weight-of-evidence approaches, and calculations based on chemical structures (QSAR) allow much experimental testing to be circumvented. In vitro studies are also used, but a 2017 REACH document (REACH alternatives to animal testing 2017, which followed up 2011 and 2014 reports) reports that 5,795 in vitro studies were used overall to determine endpoints for REACH, compared to 9,287 in vivo studies (a ratio of 0.6). Clearly, many new animal tests have been performed under REACH to complete the dossiers on industrial chemicals.
Prenatal developmental and repeated-dose toxicity testing, as well as extended one-generation reproductive toxicity studies, remain difficult to circumvent without animal use. However, the safe use of industrial chemicals must be ensured and demonstrated.

Van Wezel, A.P., Ter Laak, T.L., Fischer, A., Bäuerlein, P.S., Munthe, J., Posthuma, L. (2017). Mitigation options for chemicals of emerging concern in surface waters; operationalising solutions-focused risk assessment. Environmental Science: Water Research & Technology 3, 403-414.

Name three important advantages of the set-up of an international chemical legislation such as REACH.
Name three important challenges, or even disadvantages, of the set-up of an international chemical legislation such as REACH.
The total number of chemical substances for which CAS numbers have been appointed was 68,062,538 on June 28, 2019. The CAS REGISTRY is updated daily with thousands of new substances. Provide some reasons why only 21,787 substances had been registered for REACH in 2018.

(draft)

Authors: Jacob de Boer
Reviewer:
Learning objectives: You should be able to
Keywords: Persistence, bioaccumulation, long range transport, toxicity, analysis

Introduction

Chemicals are generally produced because they have a useful purpose. These purposes can vary widely, such as to protect crops by killing harmful insects or fungi, to protect materials against catching fire, to act as a medicine, or to enable a proper packaging of food materials. Unfortunately, the properties that make a chemical attractive to use often have a downside when it comes to environmental behavior and/or human health. A number of synthetic chemicals have properties that make them persistent organic pollutants (POPs). POPs are xenobiotic (foreign to the biosphere) chemicals that are persistent, bioaccumulative and toxic ('PBT') at low doses. In addition, they are transported over long distances. Criteria for these properties, which are used to define a chemical as a POP, were set by the United Nations (UN) Stockholm Convention, which was adopted in 2001 and entered into force in 2004 (Fiedler et al., 2019). These criteria are summarized in Table 1. The objective of the Stockholm Convention is defined in article 1: "Mindful of the precautionary approach, to protect human health and the environment from the harmful impacts of persistent organic pollutants". Initially, 12 chemicals (aldrin, chlordane, dieldrin, DDT, endrin, heptachlor, hexachlorobenzene (HCB), mirex, polychlorinated biphenyls (PCBs), polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and toxaphene) were listed as POPs. Gradually the list was extended with new POPs that appeared to fulfil the criteria. For some of the new chemicals, exceptions were made for limited use in case no suitable alternatives are available. For example, in the battle against malaria, DDT can still be used to a limited extent for in-house spraying in Africa (Van den Berg, 2009). Until now, all POPs are chemicals that contain carbon and halogen atoms. Some POPs, such as the PCDDs and PCDFs (together often short-named dioxins), are not intentionally produced. They are formed and released unintentionally during thermal processes. PCDDs and PCDFs tended to be released by waste incinerators (Karasek and Dickson, 1987). The combination of elevated temperatures and the presence of chlorine from e.g. polyvinylchloride (PVC) led to the formation of the extremely toxic PCDDs and PCDFs.
Stack emissions could contaminate entire areas around the incinerators, with consequences for the quality of cow milk or local crops. Dioxins were first discovered after the Seveso (Italy) disaster, when high quantities of dioxins were released after an explosion in a trichlorophenol factory (Mocarelli et al., 1991). Meanwhile, in many countries incinerators have been improved by changing the processes and installing appropriate filters.

Table 1. Stockholm Convention criteria for persistence, bioaccumulation, toxicity and long range transport of POPs.

Persistence:
(i) Evidence that the half-life of the chemical in water is greater than two months, or that its half-life in soil is greater than six months, or that its half-life in sediment is greater than six months; or
(ii) Evidence that the chemical is otherwise sufficiently persistent to justify its consideration within the scope of this Convention.

Bioaccumulation:
(i) Evidence that the bio-concentration factor or bio-accumulation factor in aquatic species for the chemical is greater than 5,000 or, in the absence of such data, that the log Kow is greater than 5;
(ii) Evidence that a chemical presents other reasons for concern, such as high bioaccumulation in other species, high toxicity or ecotoxicity; or
(iii) Monitoring data in biota indicating that the bio-accumulation potential of the chemical is sufficient to justify its consideration within the scope of this Convention.

Long range transport potential:
(i) Measured levels of the chemical in locations distant from the sources of its release that are of potential concern;
(ii) Monitoring data showing that long-range environmental transport of the chemical, with the potential for transfer to a receiving environment, may have occurred via air, water or migratory species; or
(iii) Environmental fate properties and/or model results that demonstrate that the chemical has a potential for long-range environmental transport through air, water or migratory species, with the potential for transfer to a receiving environment in locations distant from the sources of its release. For a chemical that migrates significantly through the air, its half-life in air should be greater than two days.

Adverse effects:
(i) Evidence of adverse effects to human health or to the environment that justifies consideration of the chemical within the scope of this Convention; or
(ii) Toxicity or ecotoxicity data that indicate the potential for damage to human health or to the environment.

Structures and use

Whereas all initial POPs were chlorinated chemicals and mainly pesticides, POPs that were added at a later stage also included brominated and fluorinated compounds and chemicals with a more industrial application. Polybrominated diphenyl ethers (PBDEs) and hexabromocyclododecane (HBCD) belong to the group of brominated flame retardants. These chemicals are produced in high quantities. Many national legislations require the use of flame retardants in many materials, such as electric and electronic systems (TV, cell phones, computers), furniture and building materials. Although the PBDEs and HBCD have been banned in most countries, other brominated flame retardants are still being produced in annually growing volumes.

Perfluorinated alkyl substances (PFASs) have many applications. Examples are Teflon production, use in fire-fighting foams, in ski wax, as dirt and water repellent on outdoor clothes and carpets, and many more. They are different from most other POPs because they are both lipophilic and hydrophilic, due to a polar group present in most of the molecules.
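Returning to Table 1 above: only some of the POP criteria are expressed as numeric thresholds (half-lives, BCF, log Kow), while the remaining clauses rely on evidence and monitoring data that cannot be reduced to a simple calculation. As a small illustration of how the quantitative clauses can be applied in a first screening, the following is a minimal sketch in Python; the function names and the example property values are hypothetical and do not come from the Convention text.

```python
# Minimal sketch of the numeric screening thresholds in Table 1 (Stockholm
# Convention). Only the quantitative clauses are coded; the evidence- and
# monitoring-based clauses (ii)/(iii) cannot be captured here.

def persistence_screen(t_half_water_d=0.0, t_half_soil_d=0.0, t_half_sediment_d=0.0):
    # Half-life > 2 months (~60 d) in water, or > 6 months (~180 d) in soil or sediment.
    return t_half_water_d > 60 or t_half_soil_d > 180 or t_half_sediment_d > 180

def bioaccumulation_screen(bcf=None, log_kow=None):
    # BCF/BAF in aquatic species > 5,000; log Kow > 5 is used only in the
    # absence of measured BCF data.
    if bcf is not None:
        return bcf > 5000
    return log_kow is not None and log_kow > 5

def air_transport_screen(t_half_air_d=0.0):
    # For significant migration through air, the half-life in air should exceed 2 days.
    return t_half_air_d > 2

# Hypothetical, illustrative property set for a DDT-like chemical:
print(persistence_screen(t_half_soil_d=2000),   # True
      bioaccumulation_screen(bcf=50000),        # True
      air_transport_screen(t_half_air_d=7))     # True
```

A chemical passing all three screens would still only be a candidate: listing under the Convention additionally requires the evidence of adverse effects described in the last part of Table 1.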
Examples of the structures of a few POPs are shown in the accompanying figure. As a consequence of their bioaccumulative behaviour, high levels of POPs are found in marine mammals (seals, whales, polar bears) and also in humans (Meironyte, 1999). Women may transfer a part of their POP load to their children, with the highest quantities going to their firstborns.

Long range transport

Chemicals that migrate significantly through the air, with a half-life in air greater than two days, qualify for the POP criterion of long range transport. Many chemicals are indeed transported by air, often in different stages. Chemicals are emitted from a stack or evaporate from the soil in relatively warm areas and travel in the atmosphere toward cooler areas, condensing out again when the temperature drops. This process, repeated in 'hops', can carry them thousands of kilometers within days. This is called the 'grasshopper effect' (Gouin et al., 2004). It results in colder climate zones, in particular countries around the North Pole, receiving relatively high amounts of POPs.

Adverse environmental and health effects

There is very little doubt about the toxicity of POPs. Of course, the dose always determines whether a compound causes an effect in the environment or in humans. POPs, however, are very toxic at very low doses. The Seveso disaster showed the high toxicity of dioxins for humans. Polybrominated biphenyls (PBBs) caused high mortality in cattle that were inadvertently fed these chemicals (Fries and Kimbrough, 1985). Evidence of toxicity often comes from laboratory studies with animals (in vivo) and more recently from in vitro studies. These studies are particularly important for the assessment of chronic toxicity. Many POPs are carcinogenic or act as endocrine disruptors.

Analysis

The analysis of POPs in environmental or human matrices is relatively complicated and costly. The compounds need to be isolated from the matrix by extraction. Subsequently, the extracts need to be cleaned of interfering compounds, such as fat from organisms or sulphur in the case of sediment or soil samples. Finally, due to the required sensitivity and selectivity, expensive instrumentation such as gas or liquid chromatography combined with mass spectrometry is needed for their analysis (Muir and Sverko, 2006). UN Environment is investing in large capacity-building programs to train laboratories in developing countries in this type of analysis. According to the Stockholm Convention, countries shall manage stockpiles and wastes containing POPs in a manner protective of human health and the environment. POPs in wastes are not allowed to be reused or recycled. A global monitoring program has been installed to assess the effectiveness of the Convention.

Future

Much has to be done to achieve the original goals of eliminating the production and use of POPs and gradually reducing their spread into the environment. A global treaty such as the Stockholm Convention, with 182 countries involved, faces a continuous challenge from procedures and political realities in individual countries, which hamper the achievement of seemingly simple goals such as eliminating the use of PCBs by 2025. The goals are, however, extremely important, as POPs are a global threat to current and future generations.

References
De Boer, J., Wester, P.G., Klamer, J.C., Lewis, W.E., Boon, J.P. (1998). Brominated flame retardants in sperm whales and other marine mammals - a new threat to ocean life? Nature 394, 28-29.
Fiedler, H., Kallenborn, R., de Boer, J., Sydnes, L.K. (2019).
United Nations Environment Programme (UNEP): The Stockholm Convention - A tool for the global regulation of persistent organic pollutants (POPs). Chem. Intern. 41, 4-11.
Fries, G.F., Kimbrough, R.D. (1985). The PBB episode in Michigan: An overall appraisal. CRC Critical Rev. Toxicol. 16, 105-156.
Gouin, T., Mackay, D., Jones, K.C., Harner, T., Meijer, S.N. (2004). Evidence for the "grasshopper" effect and fractionation during long-range atmospheric transport of organic contaminants. Environ. Pollut. 128, 139-148.
Karasek, F.W., Dickson, L.C. (1987). Model studies of polychlorinated dibenzo-p-dioxin formation during municipal refuse incineration. Science 237, 754-756.
Meironyte, D., Noren, K., Bergman, Å. (1999). Analysis of polybrominated diphenyl ethers in Swedish human milk. A time-related trend study, 1972-1997. J. Toxicol. Environ. Health Part A 58, 329-341.
Mocarelli, P., Needham, L.L., Marocchi, A., Patterson Jr., D.G., Brambilla, P., Gerthoux, P.M. (1991). Serum concentrations of 2,3,7,8-tetrachlorodibenzo-p-dioxin and test results from selected residents of Seveso, Italy. J. Toxicol. Environ. Health 32, 357-366.
Muir, D.C.G., Sverko, E. (2006). Analytical methods for PCBs and organochlorine pesticides in environmental monitoring and surveillance: a critical appraisal. Anal. Bioanal. Chem. 386, 769-789.
Van den Berg, H. (2009). Global status of DDT and its alternatives for use in vector control to prevent disease. Environ. Health Perspect. 117, 1656-1663.

Which criteria are used by the Stockholm Convention to define chemicals as POPs?
What are the objectives of the Stockholm Convention on POPs?
How can dioxins be formed?
Why is the analysis of POPs in environmental and human matrices expensive?

Authors: Pim de Voogt
Reviewers: John Parsons, Hans Peter Arp
Learning objectives: You should be able to
Keywords: Mobility, persistence, PMT

Ecosystems and humans are protected against exposure to hazardous substances in several ways. These include treating our wastewater so that substances are prevented from entering receiving surface waters, and purification of source waters intended for drinking water production.

Currently, the majority of the drinking water produced in Europe is either not treated or treated by conventional technologies. The latter remove substances by degradation (physical, microbiological) or by sorption. However, chemicals that are difficult to break down and that can pass through soil layers, water catchments and riverbanks, and cross natural and technological barriers, may eventually reach the tap water. Typically, these chemicals are persistent and mobile.

When the electrons in a molecule are unevenly divided over its surface, this results in an asymmetric distribution of charge, with positive and negative regions. Such molecules have electric dipoles and are polar, in contrast to molecules where the charge is evenly distributed, resulting in the molecule being neutral or apolar. The ultimate form of polarity is when a permanent charge is present in a compound. Such chemicals are called ionogenic. We distinguish between cations (having a permanent positive charge, e.g. protonated bases and quaternary amines) and anions (negatively charged ions, e.g. dissociated acids and organosulfates). Ionic charges in molecules can be pH dependent (e.g. acids and bases). Most, and in particular small, polar and ionic chemicals are water soluble; in other words, they have a strong affinity to water (often referred to as hydrophilic).
Because water is one of the most polar liquids possible (a strong negative partial charge on the oxygen and positive partial charges on the two hydrogens), this means that for very polar organic molecules solvation by water is energetically more favorable than sorption to solid particles.

Chemicals that are nonpolar are inherently poorly water soluble and therefore tend to escape from the water compartment, resulting in their evaporation, sorption to sediments and soils, or uptake and accumulation in organisms. It is therefore relatively easy to remove them from water during water treatment. In contrast, mobile organic chemicals, especially those that do not break down easily, pose a more serious threat for (drinking) water quality because they are much more difficult to remove. It should be noted that mobility and polarity can be thought of as a gradient rather than distinct categories, with water being the most polar molecule, a large aliphatic wax being the most non-polar molecule, and all other organic molecules falling somewhere in the spectrum between.

In a recent study, contaminants were analysed in Dutch water samples covering the journey from WWTP effluent to surface water to groundwater and then to drinking water. While the concentration level of total organic contaminants decreased by about 2 orders of magnitude from the WWTP effluents to the groundwater used for drinking water production, the hydrophilic contaminants (using chromatographic retention time as an indicator for hydrophilicity) in the WWTP effluents remained in the water throughout its passage to groundwater and into the drinking water.

The mobility of chemicals in aquatic ecosystems is determined by their distribution between water and solid particles. The more the substance has an affinity for the solid phase, the less mobile it will be. The distribution coefficient is known as KD, which expresses the ratio between the concentration in the solid phase (soil, sediment, suspended particles), CS, and that in the dissolved phase at equilibrium, CW, i.e. KD = CS/CW. For neutral non-polar chemicals the distribution is almost entirely determined by the amount of organic carbon in the solid phase, fOC, and hence their distribution is usually expressed by KOC, the organic carbon-normalized KD (i.e. KOC = KD/fOC). Unfortunately, there are relatively few reliable KD or KOC data available, in particular for polar chemicals. Instead, KOW is often used as a proxy for KOC. The n-octanol/water partition coefficient, KOW, is the equilibrium distribution coefficient of a chemical between n-octanol and water, KOW = Coctanol/Cwater. Its logarithmic value is often used as a proxy to express the polarity of a compound: a high log KOW means that the compound favors being in the octanol phase rather than in water, which is typically the case for a nonpolar compound. For ionizable chemicals we need to account for the pH dependency of KOW: at low pH an organic acid will become protonated (this in turn depends on its pKa value) and thus less polar. DOW is the pH-dependent KOW. It can be assumed that ions, whether cationic or anionic, will no longer dissolve into octanol but rather be retained in the water, because ions have a much higher affinity for water than for octanol.

Accounting for this, for organic acids, the pH dependency of the DOW can be expressed as:

\(D_{OW} = \dfrac{K_{OW}}{1 + 10^{(pH - pK_a)}}\)

Therefore, as the pH increases above the pKa, the DOW of an organic acid becomes smaller.
However, one has to keep in mind that the assumption that the (log) KOW or DOW value inversely correlates with a compound's aquatic mobility is certainly very simplistic. The behavior of an ionic solute will obviously also be determined by interactions i) with sites other than organic carbon, e.g. ionizable or ionic sites on soil and sediment particles, and ii) with other ions in solution.

The persistence of a compound is assessed in experimental tests by monitoring the rate of disappearance of the compound from the most relevant compartment, often using standardized test protocols. In the European REACH legislation on chemicals, criteria have been established to qualify chemicals as persistent (P) or "very persistent" (vP) based on the outcomes of such tests. Table 1 presents the P and vP criteria used. Unfortunately, good-quality experimental data on half-lives are rare, and obtaining such data requires time-consuming and expensive testing.

Currently there is no certified definition of a compound's mobility (M). Several compound properties have been proposed to characterize mobility, including a compound's aqueous solubility and its KOC value. If (experimental) KOC values are not available, DOW values can be used as a proxy.

Table 1. P and vP criteria identical to Annex XIII to the REACH regulation (source: ECHA chapter R.11, Version 3.0, June 2017)

Compartment | Persistent (P) in any of the following situations | Very persistent (vP) in any of the following situations
Freshwater | Half-life > 40 days | Half-life > 60 days
Marine water | Half-life > 60 days | Half-life > 60 days
Freshwater sediment | Half-life > 120 days | Half-life > 180 days
Marine sediment | Half-life > 180 days | Half-life > 180 days
Soil | Half-life > 120 days | Half-life > 180 days

Table 2. Proposed cut-off values of compound properties, proposed by the German Environment Agency (UBA), to define substance mobility*

Property | Mobile (M) if compound is P or vP and | very Mobile (vM) if compound is P or vP and
Lowest experimental log KOC (at pH 4-9) | ≤ 4.0 | ≤ 3.0
Log DOW (at pH 4-9, if no experimental log KOC data available) | ≤ 4.0 | ≤ 3.0

* Note that the proposed criteria may change by the time of publication.

The majority of chemicals for which international guidelines exist, or that are identified as priority pollutants in existing regulations (e.g. the EU Water Framework Directive and REACH), are nonpolar, with log DOW values mostly above two. The German Environment Agency (UBA) has recently proposed to develop regulation for chemicals with persistent, mobile and toxic (T) properties (PMT substances), analogous to the existing PBT criteria used for the regulation of chemicals in the EU. UBA proposed to use cut-off values of KOC or DOW (if KOC data are not available) to define mobile (M) or very mobile (vM) substances, in conjunction with the persistence criteria (see Table 2).
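As a simple illustration of how the criteria of Tables 1 and 2 could be combined in a screening workflow, the sketch below encodes the cut-off values listed above; the function names and the example compound data are hypothetical, not part of any official guidance.

```python
P_LIMIT = {"freshwater": 40, "marine water": 60, "freshwater sediment": 120,
           "marine sediment": 180, "soil": 120}
VP_LIMIT = {"freshwater": 60, "marine water": 60, "freshwater sediment": 180,
            "marine sediment": 180, "soil": 180}

def persistence_class(half_life_days, compartment):
    """Screening per the REACH Annex XIII criteria of Table 1 (half-lives in days)."""
    if half_life_days > VP_LIMIT[compartment]:
        return "vP"
    if half_life_days > P_LIMIT[compartment]:
        return "P"
    return "not P"

def mobility_class(log_koc, persistence):
    """Proposed UBA cut-offs of Table 2; only P or vP compounds qualify."""
    if persistence == "not P":
        return "not classified"
    if log_koc <= 3.0:
        return "vM"
    if log_koc <= 4.0:
        return "M"
    return "not M"

p = persistence_class(90, "freshwater")   # hypothetical half-life of 90 days
print(p, mobility_class(2.1, p))          # -> vP vM
```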
Note that the KOC and DOW values have to be obtained from testing at an environmentally relevant pH range (pH 4-9). When we consider the analytical techniques currently used for monitoring contaminants in the environment, it can readily be seen that the scope of the most frequently used techniques (gas chromatography, GC, and reversed-phase liquid chromatography, RPLC) does not overlap with what is required for chemicals having log DOW values typical of the most mobile chemicals, i.e. a log DOW below zero. Consequently, there is limited information available on the occurrence and fate of these mobile chemicals in the environment. Nevertheless, some examples of persistent and mobile chemicals have been identified. These include highly polar pesticides and their transformation products, for instance glyphosate and aminomethylphosphonic acid (AMPA), short-chain perfluorinated carboxylates and sulfonates, quaternary ammonium chemicals such as diquat and paraquat, and complexing agents such as EDTA. There are, however, likely to be many more chemicals that could be classified as PMOCs, and we can therefore conclude that there is a gap in the knowledge and regulation of persistent and mobile organic chemicals.

Arp, H.P.H., Brown, T.N., Berger, U., Hale, S.E. Ranking REACH registered neutral, ionizable and ionic organic chemicals based on their aquatic persistency and mobility. Environmental Science: Processes & Impacts 19, 939-955.
Reemtsma, T., Berger, U., Arp, H.P.H., Gallard, H., Knepper, T.P., Neumann, M., Benito Quintana, J., de Voogt, P. Mind the gap: persistent and mobile organic chemicals - water contaminants that slip through. Environmental Science & Technology 50, 10308-10315.
Sjerps, R.M.A., Vughs, D., van Leerdam, J.A., ter Laak, T.L., van Wezel, A.P. Data-driven prioritization of chemicals for various water types using suspect screening LC-HRMS. Water Research 93, 254-264.

What is the reason that PMOCs may end up in drinking water despite the application of removal processes?
Explain why the average hydrophilicity of compounds present in drinking water is higher than that of the surface water or wastewater from which it has been produced.

Author: Steven Droge
Reviewers: John Parsons, Satoshi Endo
Learning objectives:
You should be able to:
Keywords: pKa, dissociation constant, speciation, drugs, surfactants, solubility

Ionogenic organic chemicals (IOCs) are widely used in industry and daily life, but are also abundantly present as chemicals of emerging concern. For environmental risk assessment purposes, IOCs may be defined as organic acids, bases, and zwitterionic chemicals that under common environmental pH conditions exist to a large extent as charged (ionic) species, with only a modest fraction of neutral species. The environmentally relevant pH range can be argued to lie between 4 (acidic creeks, even lower in polluted streams from volcanic regions or mine drainage systems) and 9 (sewage treatment effluents). The environmental behaviour of IOC pollutants of concern differs from that of neutral chemicals of concern, because the aqueous pH controls the neutral fraction of dissolved IOCs, and the ionic form is highly soluble and interacts partly via electrostatic interactions with environmental substrates. Note that the major fraction of an IOC can also be neutral in a certain environmental system, in which case it is often the neutral form that dominates the chemical's behavior. IOCs are common in many different types of pollutant classes.
A random subset analysis of all EU (pre-)registered industrial chemicals indicated that large fractions of the total list of >100,000 chemicals are IOCs (51% neutral; 27% acids; 14% bases; 8% zwitterions/amphoterics). In another source, it has been estimated that >60% of all prescription drugs (Section 2.3.3) are IOCs (Manallack, 2007), with even higher fractions for illicit drugs (Section 2.3.4). Well-known examples are basic beta-blockers (e.g. propranolol), basic antidepressants (e.g. fluoxetine and sertraline), acidic non-steroidal anti-inflammatory drugs (NSAIDs such as diclofenac), basic opioids (e.g. morphine, cocaine, heroin) and basic designer drugs (e.g. MDMA). The majority of surfactants and polyfluorinated chemicals (e.g. PFOS and GenX) are IOCs (Section 2.3.8), as well as a wide variety of important pesticides (e.g. the zwitterionic glyphosate) (Section 2.3.1) and (natural) toxins (Section 2.1) (e.g. peptide-based multi-ionic cyanobacterial toxins).

The release into the environment is specific for each of these types of IOCs, depending on their use, but in many cases occurs via sewage treatment systems. If sorption to sewage sludge is very strong, application of sludge onto terrestrial (agricultural) systems is a key entry route in many countries. However, many IOCs are rather hydrophilic and will mainly be present in wastewater effluent released into aquatic systems. As they are hydrophilic, they are considered rather mobile, which allows for rapid transport through e.g. groundwater plumes, soil aquifers, and (drinking water) filtration steps. The distinction between (mostly) neutral chemicals and IOCs is important because the ionic molecular form generally behaves very differently from the corresponding neutral molecular form. For example, ionic molecules are in many respects non-volatile compared to the corresponding neutral molecules, while neutral molecules are more hydrophobic than the corresponding ionic molecules. As a result of their lower "hydrophobicity", ionic molecules often bind with lower affinity to soils and are therefore more mobile in the environment. The ionic forms bioaccumulate to a lower extent and can therefore be less toxic than the corresponding neutral form (though not necessarily). However, there are various important exceptions to these rules. For example, clay minerals sorb cationic IOCs fairly strongly via ion exchange mechanisms. Certain proteins (e.g. the blood serum protein albumin) tightly bind anionic chemicals because of cationic subdomains in specific (enzymatic) pockets, which allows for effective transport throughout our bodies and over cell membranes.

The critical chemical parameter describing the ability to ionize is the acid dissociation constant (pKa). The pKa defines the pH at which 50% of the IOC is in either the neutral or the ionic form, by releasing an H+ from the neutral acid (AH to anion A-), or by accepting an H+ onto the neutral base (B to cation BH+). The equilibrium between the neutral acid and its dissociated form can thus be defined as:

[HA] ↔ [H+] + [A-]  (eq. 1)

where the chemical's equilibrium speciation is defined by the dissociation constant:

\(K_a = \frac{[H^+][A^-]}{[HA]}\)  (eq. 2)

which gives the pKa as:

\(pK_a = -\log_{10} K_a\)  (eq. 3)

and, as a function of pH, the ratio of the acid and anion (or base and cation) is defined as:

\(pH - pK_a = \log_{10}\frac{[A^-]}{[HA]}\) for acids, and \(pH - pK_a = \log_{10}\frac{[B]}{[BH^+]}\) for bases  (eq. 4)

Although the term pKb is also used to denote the base association constant, it is conventional to consider [BH+] as the acid and to use the pKa notation and the relationships above for bases as well.
The fraction of neutral species (fN) for simple IOCs (one acidic or basic site) can be readily calculated with a rearrangement of the Henderson-Hasselbalch equation:

\(f_N = \frac{1}{1 + 10^{\,\alpha\,(pK_a - pH)}}\)  (eq. 5)

in which α = 1 for bases and α = -1 for acids. Using equation 5, a typical speciation profile can be drawn for an acid (shown with pKa 5, so perhaps a carboxylic acid) and a base (shown with pKa 9, so perhaps a beta-blocker drug). Following the curve of equation 5, some simple rules emerge: if the pH is 1 unit lower than the pKa, the deprotonated species is present at 10%. If the pH is 2 units lower than the pKa, the deprotonated species is present at 1%; 3 units lower gives 0.1%, etc. From this, it is easy to make a good estimate of the protonation of a strongly basic drug like MDMA (reported pKa 9.9-10.4) in blood (pH 7.4): with the pH up to 3 units below the pKa, up to 99.9% of the MDMA will be in the protonated form, and only about 0.1% neutral. For toxicological modeling studies, e.g. in terms of permeation through the blood-brain barrier membrane, this is highly relevant.

Boxes 1-3. Extended learning: calculating the dissociation constant for multiprotic chemicals: see the end of this module.

Acidic IOCs: For example, the painkiller (non-steroidal anti-inflammatory drug, NSAID) diclofenac is a carboxylic acid with a pKa of 4.1. This means that at pH 4.1, 50% of the dissolved diclofenac is in the dissociated (anionic) form (i.e. (1 - fN) from equation 5). At pH 5.1 (1 unit higher than the pKa) this is roughly 90% (90.91% to be precise, but simply remembering 90% helps), and at pH 6.1 (2 units higher than the pKa) this is 99%. This stepwise 50-90-99% increase with each pH unit works for all acids, and for bases the other way round. Test for yourself that at the physiological pH of 7.4 (e.g. in blood) diclofenac is calculated to be 99.95% anionic.

Many carboxylic acids have a pKa in the range of 4-5, but neighboring molecular groups can affect the pKa. Particularly electronegative atoms such as chlorine, fluorine, or oxygen may lower the pKa, as they reduce the forces holding the dissociating proton to the oxygen atom. For example, trichloroacetic acid (CCl3-COOH) has a pKa of 0.77, while acetic acid (CH3-COOH) has a pKa of 4.75. For the same reason, perfluorinated carboxylic acids have a strongly reduced acidic pKa compared to the analogous non-perfluorinated carboxylic acids.

Sulfate acids (see figure 3) are very strong acids, with a pKa < 0. These acids almost always occur in their pure form as a salt, for example the common soap ingredient sodium dodecylsulfate ("SDS" or "SLS", Na.C12H25-SO4). Other common detergents are sulfonates, such as linear alkylbenzenesulfonate ("LAS", C10-14-(benzyl-SO3)), where the anionic SO3 moiety is attached to a benzene ring, which can be positioned on different carbon atoms of a long alkyl chain. Even at the lower end of the environmental pH range, around 4, these soap chemicals are fully in the anionic form. Such very strong acids, but also many weaker acids, are often sold in pure form as salts with sodium, potassium, or ammonium, which gives them different names and CAS numbers (e.g. Na.C12H25-SO4 or K.C12H25-SO4) than the neutral form.

Many phenols have a pKa > 8, and are therefore mostly neutral at environmental pH. Electron-withdrawing groups on the aromatic ring of the phenol, such as Cl, Br and I, can lower the dissociation constant.
For example, the dinitrophenol-based pesticide dinoseb has a pKa of 4.6, and is thus mostly anionic in the aquatic environment. Note that a hydroxyl group (-OH) not connected to an aromatic ring, such as the -OH of alcohols, can in most cases for risk assessment be considered permanently neutral.

To help interpret the differences in pKa between molecules, it sometimes helps to remember that more acidic solutions simply contain higher H+ concentrations, in a logarithmic manner on the pH scale. At pH 3, the concentration of H+ in solution is 1 mM, while at pH 9 the H+ concentration is 1 nM (6 pH units equals a 10^6 times lower concentration). The affinity of H+ to associate with a negatively charged molecular group is so low for strong acids that even at very high dissolved H+ concentrations (low pH) only very few AH bonds (neutral acid fraction) are actually formed. In other words, for strong acids (low pKa), even at low pH the neutral fraction is still low. For weak acids such as phenols, already at very low dissolved H+ concentrations (high pH) many AH bonds (neutral acid fraction) are formed. It can thus be reasoned that the affinity of common acidic groups to hold on to a proton increases in the order:

SO4 < SO3 < CO2 < amide ( C(=O)NH ) < phenol < hydroxyl

For bases, it is mostly a nitrogen atom that can accept a proton to form an organic cation, because of the lone electron pair on nitrogen. A neutral nitrogen atom can form three bonds. A primary amine group has the nitrogen atom bonded to only one carbon atom (represented here as part of a molecular fragment R) and two hydrogen atoms. The lone electron pair readily accepts another proton to form a cationic molecule [R-NH3+]. Neutral secondary amines have one bond to hydrogen and two bonds to carbon atoms and can accept a proton to form [R-NH2+-R'], whereas neutral tertiary amines have no bonds to hydrogen, only to carbon atoms, and can form [R-NH+-(R')(R'')]. Of course, the R groups may be identical (e.g. methyl units).

Many basic chemicals have complex functionalities that can influence the pKa of the nitrogen moiety. However, as shown in the examples of figure 5, as long as there are at least two carbon atoms between the amine and a polar molecular fragment (for example -OH, and even more so =O), the pKa of the basic nitrogen group in all three types of bases (primary, secondary and tertiary amines) is high, often above 9 (dissolved H+ concentration < 10^-9 M). So even at very low H+ concentrations, dissolved protons associate readily with such amine groups. As a result, amines such as most beta-blockers and amphetamine-based drugs are predominantly positively charged molecules (organic cations) in the common environmental pH range of 4-9, as well as at the pH of most biotic tissues relevant for toxicological assessments. As soon as a polar group containing oxygen (e.g. a ketone or hydroxyl group) is connected to the second carbon atom away from the nitrogen (e.g. R-CH(OH)-CH2-NH2), the pKa is considerably lowered. Nitrogen atoms that are part of an aromatic ring, or connected to an aromatic ring, also have much lower pKa values: protons have a rather low affinity for these N-atoms and only start binding to them when the proton concentration becomes relatively high (i.e. the solution becomes more acidic).
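The speciation rules above all follow from equation 5, as the minimal sketch below illustrates; the pKa values are those quoted in the text (diclofenac 4.1; MDMA taken here as 10.2 from the reported 9.9-10.4 range).

```python
def f_neutral(pka, ph, alpha):
    """Fraction of neutral species (eq. 5); alpha = -1 for acids, +1 for bases."""
    return 1.0 / (1.0 + 10.0 ** (alpha * (pka - ph)))

# Diclofenac (acid, pKa 4.1) at blood pH 7.4:
print(f"diclofenac: {(1 - f_neutral(4.1, 7.4, -1)) * 100:.2f}% anionic")  # ~99.95%

# MDMA (base, pKa ~10.2) at blood pH 7.4:
print(f"MDMA: {f_neutral(10.2, 7.4, +1) * 100:.2f}% neutral")             # ~0.16%
```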
Consequently, their environmental distribution and biological exposure are influenced by quite distinct processes. Obviously, predominantly charged IOCs behave somewhere in between these two extremes. The charged positive or negative groups cause strong electrostatic interactions between the IOC and environmental substrates (sorption or ligand/receptor binding). While metals also speciate into different forms, pH differences between environmental compartments can strongly influence an IOC's chemical fate and effects if its ionizable group is relatively weak. An important difference from metals is that the nonionic part of the molecule still influences the IOC's hydrophobicity, even in the charged state, for several processes.

As will be discussed in other chapters on chemical processes (see Chapter 3), it needs to be taken into account for IOCs that many environmental substrates (DOC, soil organic matter, clay minerals) are mostly negatively charged in the common range of environmental pH, and that proteins involved in biotic uptake, distribution and effects are rich in ionogenic peptides that are part of binding pockets and reactive centers.

Manallack, D.T. The pKa distribution of drugs: application to drug discovery. Perspectives in Medicinal Chemistry 1, 25-38.

Box 1. Extended learning: calculating the dissociation constant for multiprotic chemicals

Several common inorganic acids are multiprotic: they have multiple protons that can dissociate. Multiple species can occur at a given pH, such as for phosphoric acid (H3PO4, H2PO4-, HPO42-, PO43-) and carbonic acid (H2CO3, HCO3-, CO32-). Note that there are actually two micro-species of HCO3-, because either of the two hydroxyl groups of HO-C(=O)-OH can dissociate. A polyprotic acid HnA can undergo n dissociations to form n+1 species, and each dissociation has its own pKa. But how to calculate the fraction of each species of a multiprotic chemical?

The charge of a polyprotic acid species can be described as Hn-jAj-. Each dissociation reaction

\(H_{n-j+1}A^{(j-1)-} \leftrightarrow H^+ + H_{n-j}A^{j-}\)

has a dissociation constant Kj:

\(K_j = \frac{[H^+]\,[H_{n-j}A^{j-}]}{[H_{n-j+1}A^{(j-1)-}]}\)

The degree of dissociation of the acid (η) is equal to the ratio of the total charge (TC) to the total moles of acid (TM). For a diprotic acid:

\(\eta = \frac{TC}{TM} = \frac{K_1[H^+] + 2K_1K_2}{[H^+]^2 + K_1[H^+] + K_1K_2}\)

A plot of η as a function of pH provides the dissociation curve, which can be fitted to experimental data. The degree of protolysis of the jth species, αj, can be calculated from the ratio [Hn-jAj-]/TM; for a diprotic acid:

\(\alpha_0 = \frac{[H^+]^2}{[H^+]^2 + K_1[H^+] + K_1K_2}, \quad \alpha_1 = \frac{K_1[H^+]}{[H^+]^2 + K_1[H^+] + K_1K_2}, \quad \alpha_2 = \frac{K_1K_2}{[H^+]^2 + K_1[H^+] + K_1K_2}\)

You can set up such a calculation in MS Excel, computing α0, α1 and α2 at a range of pH values ([H+] concentrations) for given K1 and K2, and plot the speciation against pH. More details are described by King et al., J. Chem. Educ. 67, p. 932; DOI: 10.1021/ed067p932.

Box 2. Example 1 for multiprotic chemicals: carbonic acid

Let's try carbonic acid (H2CO3) as a first example. H2CO3 is the product of carbon dioxide dissolved in water. In pure water and seawater the hydration equilibrium constant Kh = [H2CO3]/[CO2] ≈ 1.7×10-3 and ≈ 1.2×10-3, respectively, indicating that only about 0.1% of dissolved CO2 equilibrates to H2CO3. The dissolved concentration of CO2 depends on the atmospheric CO2 level according to the air-water distribution coefficient (Henry constant kH = pCO2/[CO2] = 29.76 atm/(mol/L)). Because of the relevance of CO2 for e.g. ocean acidification and gas exchange in our lungs, it is interesting to see how H2CO3 speciates depending on pH.
As in the formula HnA, n = 2 for H2CO3. With pK1* = 6.5 (in equilibrium with atmospheric CO2) and pK2 = 10.33, K1 = 10^-6.5 and K2 = 10^-10.33. At pH 7, [H+] = 10^-7, so at pH 7 with these dissociation constants α0 ≈ 0.24, α1 ≈ 0.76 and α2 ≈ 4×10^-4: the bicarbonate anion dominates. The same result follows from a series of Excel calculations at different pH values; you can copy/paste the following cells into cell A1 of a new Excel sheet and extend the pH range:

A1: K1           B1: 3.16E-07
A2: K2           B2: 5.01E-11
A3: pH           B3: 5
A4: [H+]         B4: =10^-B3
A5: a0 = H2CO3   B5: =(B4*B4)/((B4*B4)+(B4*$B$1)+($B$1*$B$2))
A6: a1 = HCO3-   B6: =(B4*$B$1)/((B4*B4)+(B4*$B$1)+($B$1*$B$2))
A7: a2 = CO32-   B7: =($B$1*$B$2)/((B4*B4)+(B4*$B$1)+($B$1*$B$2))
A8: ETA          B8: =((B4*$B$1)+2*($B$1*$B$2))/((B4*B4)+(B4*$B$1)+($B$1*$B$2))
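For readers who prefer a script over a spreadsheet, the following Python sketch reproduces the same diprotic speciation calculation (α0, α1, α2 and η); it should give the same numbers as the Excel formulas above.

```python
K1, K2 = 10.0 ** -6.5, 10.0 ** -10.33  # carbonic acid, K1 in equilibrium with air

def diprotic_speciation(ph):
    h = 10.0 ** -ph
    denom = h * h + h * K1 + K1 * K2
    a0 = h * h / denom                       # H2CO3 (plus dissolved CO2)
    a1 = h * K1 / denom                      # HCO3-
    a2 = K1 * K2 / denom                     # CO3 2-
    eta = (h * K1 + 2.0 * K1 * K2) / denom   # total charge / total mol acid
    return a0, a1, a2, eta

for ph in (5, 7, 9):
    a0, a1, a2, eta = diprotic_speciation(ph)
    print(f"pH {ph}: a0={a0:.3f}  a1={a1:.3f}  a2={a2:.1e}  eta={eta:.3f}")
```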
Box 3. Example 2 for multiprotic chemicals: zwitterions

Many organic pH buffers are zwitterionic chemicals that contain both an acidic and a basic group. Norman Good and colleagues described a set of 20 such buffers for biochemical and biological research (see for example www.interchim.fr/ft/0/062000.pdf, or www.applichem.com/fileadmin/Broschueren/BioBuffer.pdf). Examples are MES, MOPS and HEPPS. These buffers were selected to meet several practical criteria. The zwitterionic buffers with sulfonate groups always have the sulfonate group charged, making them highly soluble and impermeable to cell membranes, while the amine group protonates between pH 6-10, depending on the neighbouring functional groups. The speciation of the amine groups in MES and MOPS simply follows the single-pKa calculation of equations 1-5. In HEPPS, either of the two amines is protonated; the second pKa is 3, so the doubly charged molecule only occurs at much lower pH, but it can still be used as a buffer.

A zwitterionic chemical with two apparent pKa values relatively close together is p-aminobenzoic acid. If a chemical has not one ionisable group, but N ionisable groups that speciate in the relevant pH range, then the number of possible species is 2^N. The zwitterion p-aminobenzoic acid thus has 4 species, each with a separate pKa (the pH where the two species of a pair are present in equal concentrations). Let's denote the benzyl group in p-aminobenzoic acid as X, the neutral amine base as B, and the neutral carboxylic acid as AH, so that the fully neutral species is BXAH. We then have, under the most acidic conditions, (BHXAH)+; the neutral species BXAH and the zwitterionic intermediate (BHXA)0; and the anionic species (BXA)- under the most alkaline conditions. The fraction of each species can be calculated according to rules similar to those for carbonic acid if the two dissociation constants are known (pK1 = 2.4, pK2 = 4.88). However, this does not tell us the ratio between the zwitterionic form and the fully neutral form. To do this, the speciation constants of the four microspecies are required:

[BH+XAH] ↔ [BXAH] + [H+], for which the pk1 is calculated to be 2.72:
k1 = 10^-2.72 = [BXAH]·[H+] / [BH+XAH], which rearranges to [BXAH] = 10^-2.72 · [BH+XAH] / [H+]

[BH+XAH] ↔ [BH+XA-] + [H+], for which the pk2 is calculated to be 3.93:
k2 = 10^-3.93 = [BH+XA-]·[H+] / [BH+XAH], which rearranges to [BH+XA-] = 10^-3.93 · [BH+XAH] / [H+]

[BXAH] ↔ [BXA-] + [H+], for which the pk3 is calculated to be 4.74:
k3 = 10^-4.74 = [BXA-]·[H+] / [BXAH]

[BH+XA-] ↔ [BXA-] + [H+], for which the pk4 is calculated to be 4.31:
k4 = 10^-4.31 = [BXA-]·[H+] / [BH+XA-]

The ratio between the zwitterionic form [BH+XA-] and the neutral form [BXAH] then equals:
[BH+XA-] / [BXAH] = 10^-pk2 / 10^-pk1 = 10^-3.93 / 10^-2.72 = 0.06, so only 6% zwitterionic vs. 94% neutral species.

The macroscopic pK1 and pK2 are then calculated as:
K1 = ([BXAH]·[H+] + [BH+XA-]·[H+]) / [BH+XAH] = 10^-pk1 + 10^-pk2 = 10^-2.72 + 10^-3.93 = 10^-2.69
1/K2 = [BH+XA-] / ([BXA-]·[H+]) + [BXAH] / ([BXA-]·[H+]) = 1/10^-pk4 + 1/10^-pk3 = 1/10^-4.31 + 1/10^-4.74 = 1/10^-4.87, so K2 = 10^-4.87

Calculate the neutral fraction at pH 7.4 for the illicit drug GHB (apply the HH equation of eq. 5):
Connect the chemicals to their appropriate pKa values:
Decide/look up whether these chemicals are predominantly neutral/anionic/cationic in the environment:

(draft)
Author: Ansje Löhr
Reviewer: John Parsons
Learning objectives:
You should be able to:
Keywords: Plastic types, sources of plastics, primary and secondary microplastics, plastic degradation, effects of plastics

Introduction
Since its introduction in the 1950s, the amount of plastics in the environment has increased dramatically. A recent study by Jambeck et al. estimated that 192 coastal countries generated 275 million metric tonnes of plastic waste in 2010, of which around 8 million tonnes of land-based plastic waste ends up in the ocean every year. UN Environment regards plastic pollution as one of the largest environmental threats. If waste management does not change rapidly, another 33 billion tonnes of plastic will have accumulated around the planet by 2050. (Micro)plastic is widely recognized as a serious problem in the ocean, but plastic pollution is also seen in terrestrial and freshwater systems.

Classification by size and morphology
Plastics are commonly divided into macroplastics and microplastics; the latter are plastic particles <5 mm in diameter (including nanoplastics). There are several ways to classify microplastics, but the following two types are often used: primary microplastics and secondary microplastics. Primary microplastics have been made intentionally, like pellets or microbeads; secondary microplastics are fragmented parts of larger objects. Microplastics show a large variety in characteristics such as size, composition, weight, shape and color. These characteristics influence their behaviour in the environment, for instance their dispersion in water and their uptake by organisms. Low-density particles float on water and are therefore more prone to advection than particles with a higher density. Similarly, spheres are more likely to be taken up by organisms than fibers.
The characteristics also affect the absorption of contaminants, the adsorption of microbes, and potential toxicity.

Classification by chemistry
Plastic is the term used for a sub-category of the larger class of materials called polymers, usually synthesized from fossil fuels, although biomass and plastic waste can also be used as feedstock. Polymers are very large molecules with a characteristically long, chain-like molecular architecture. There are many different types of plastics, but the market is dominated by 6 classes of polymers: polyethylene (PE, high and low density), polypropylene (PP), polyvinyl chloride (PVC), polystyrene (PS, including expanded PS, EPS), polyurethane (PUR) and polyethylene terephthalate (PET). To make materials flexible, transparent, durable, less flammable and long-lived, additives such as flame retardants (e.g. polybrominated diphenyl ethers) and plasticisers (e.g. phthalates) are added to the polymers. Some of these substances are known to be toxic to marine organisms and to humans.

Biopolymers/bioplastics
There is much discussion on bioplastics: although marketed as degradable, they may still persist for a long time under marine conditions. Please watch this video by dr. Peter Kershaw.

Plastic degradation
Degradation of plastics takes place as soon as the plastic loses its original integrity and properties. There is a faster breaking-up phase (degradation into microparticles) and a much slower mineralization phase (polymer chains being degraded to carbon dioxide). The degradation rate of plastics is determined by the polymer type, additive composition and environmental factors. Many commonly-used polymers are extremely resistant to biodegradation. Although plastics degrade in natural environments, it is argued that no polymer can be efficiently biodegraded in a landfill site. Plastics in aquatic environments can be subject to in-situ degradation, e.g. photodegradation or mechanical fragmentation, but are in general very durable. As a result, plastics present in our oceans degrade at a very slow pace, and the majority of plastics produced today will persist in the environment for decades and probably for centuries, if not millennia.

Plastics in the environment
Plastics are found in terrestrial, freshwater, estuarine, coastal and marine environments, even in very remote areas of the world and the deep sea. Sources and pathways of marine litter are diverse, and exact quantities and routes are not fully known, but there is a surge of interest in determining the exact quantities, types and pathways of plastic litter in the environment. Most of the plastic in our oceans originates from land-based sources, with an additional contribution from sea-based sources. Most PE and PP is used in (single-use) packaging products that have a short lifetime and soon end up as waste.

Primary microplastics in terrestrial environments mostly originate from the use of sewage sludge containing microplastics from personal care or household products. In agricultural soils the application of sewage sludge from municipal wastewater treatment plants to farmland is probably a major input, based on recent MP emission estimates in industrialized countries. Plastic pollution in terrestrial systems is also linked to the use of agricultural plastics, such as polytunnels and plastic mulches.
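As noted above, whether plastic particles float or sink depends on their density relative to the surrounding water. The sketch below compares indicative densities of the six dominant polymer classes with that of seawater; the values are approximate figures for the pure polymers, and additives, fillers and biofouling can shift the effective density considerably.

```python
SEAWATER = 1.025  # g/cm3, approximate

polymer_density = {            # indicative densities in g/cm3
    "PE": 0.92, "PP": 0.91, "EPS (expanded PS)": 0.05,
    "PS": 1.05, "PVC": 1.40, "PET": 1.38,
}

for polymer, rho in polymer_density.items():
    fate = "floats" if rho < SEAWATER else "sinks"
    print(f"{polymer:18s} {rho:5.2f} g/cm3 -> {fate}")
```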
Secondary microplastics originate from varied and diverse sources, for example from waste that is mismanaged either accidentally or intentionally.

As plastics have become widespread and ubiquitous in the environment, they are present in a diversity of habitats and can impact organisms at different levels of biological organization, possibly leading to population, community and ecosystem effects. Entanglement is one of the most obvious and dramatic physical impacts of macroplastics, as it often leads to acute or chronic injury or death. In particular the higher taxa (mammals, reptiles, birds and fish) are affected, and it may be critical for the success of several endangered species. Because of their similarity in size to food items, plastics are both intentionally and unintentionally ingested by a wide range of species, such as invertebrates, fish, birds and mammals. Ingestion of non-nutritional plastics can cause damage to and/or obstruction of the digestive tract and may lead to decreased foraging due to a false feeling of satiation, resulting in reduced energy reserves.

Microplastics, and in particular nanoparticles, that are small enough to be taken up and translocated into tissues, cells and body fluids can cause cellular toxicity and pathological changes due to particle toxicity. In addition, there are also chemical risks involved, as plastics can be a source of hazardous chemicals. These chemicals can be part of the plastic itself (i.e. monomers and additives) and/or chemicals that are sorbed from the environment into the polymer matrix, such as lead, cadmium, mercury, and persistent organic pollutants (POPs) like PAHs, PCBs and dioxins. However, as this transfer depends on fugacity gradients, there is a lot of uncertainty about the extent to which transfer of pollutants occurs in the environment. In fact, when taking all exposure pathways into account, the transfer from (micro)plastics seems to be a minor pathway. Watch the video on the research of Inneke Hantoro.

Finally, marine plastics may act as floating habitats for invasive species, including harmful algal bloom species and pathogens, allowing them to spread beyond their natural dispersal range and creating the risk of disrupting the ecosystems of sensitive habitats.

Author: Martina Vijver
Reviewers: Kees van Gestel, Frank van Belleghem, Melanie Kah
Learning objectives:
You should be able to:
Keywords: Nanomaterials, emerging technologies, colloids, nanoscale, surface reactivity

Engineering at the nanoscale (i.e. 10^-9 m) brings the promise of radical technological development. Due to their unique properties, engineered nanomaterials (ENMs) have gained interest from industry and entered the global market. Potentials ascribed to nanotechnology include, among others, stronger materials, more efficient carriers of energy, and cleaner and more compact materials that allow for small yet complex products. Currently, nanomaterials are used in numerous products, although exact numbers are lacking. In 2014, the market was estimated to contain more than 13,000 nano-based products (Garner and Keller, 2014).
There is a wide variety of products containing nanomaterials, ranging from sunscreens and paint to textiles, medicines and electronics, covering many sectors.

The European Commission in 2011 adopted a new definition of 'nanomaterial' reading 'a natural, incidental or manufactured material containing particles, in an unbound state or as an agglomerate or as an aggregate and where, for 50% or more of the particles in the number size distribution, one or more external dimensions is in the size range 1 nm-100 nm'.

Nanomaterials also occur naturally: think of fine dust, colloids in the water column, volcanic ash, carbon black and the colloids known as ocean spray. In paints, the features of colloids are used to obtain the pigment colors. From the year 2000 on, an exponential growth was seen in their synthesis, enabled by the advanced technologies and imaging techniques needed to work at the nanoscale. First generation nanotechnologies (before 2005) generally refer to nanotechnology already on the market, either as individual nanomaterials or as nanoparticles incorporated into other materials, such as films or composites. Surface engineering has opened the doors to the development of second and third generation ENMs. Second generation nanotechnologies are characterised by nanoscale elements that serve as the functional structure, such as electronics featuring individual nanowires. From 2010 onward there has been more research and development of third generation nanotechnologies, which are characterised by their multi-scale architecture (i.e. involving macro-, meso-, micro- and nano-scales together) and three-dimensionality, for applications like biosensors or drug-delivery technologies modelled on biological templates. Self-assembling bottom-up techniques have been widely developed at industrial scale to create, manipulate and integrate nanophases into more complex nanomaterials with new or improved technological features. Post 2015, fourth generation ENMs are anticipated to utilise 'molecular manufacturing': achieving multi-functionality and control of function at the molecular level. Nowadays, virtually any material can be made at the nanoscale.

Nanoscale materials have far larger surface areas than larger objects of similar mass. A simple thought experiment shows why nanoparticles have phenomenally high surface areas. A solid cube of a material 1 cm on a side has 6 cm2 of surface area, about equal to one side of half a stick of gum. When the same 1 cm3 is divided into micrometer-sized cubes - a trillion of them, each with a surface area of 6 square micrometers - the total surface area amounts to 6 m2. As the surface area per mass of a material increases, a greater proportion of the material comes into contact with surrounding materials. Small particles also have a high proportion of surface atoms, high surface energy, spatial confinement and reduced imperfections. As a result, ENMs have an enhanced reactivity compared to larger "bulk" materials. For instance, ENMs have the potential to transfer concentrated medication across the cell membranes of targeted tissues. By engineering nanomaterials, these properties can be harnessed to make valuable new products or processes. ENMs are often designed to accomplish a particular purpose, taking advantage of the fact that materials at the nanoscale have different properties than their larger-scale counterparts.
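The thought experiment above generalizes easily: a volume V divided into cubes of edge length L has a total surface area of 6V/L, so every tenfold reduction in particle size multiplies the surface area by ten. A minimal sketch:

```python
V = 1e-6  # 1 cm3 expressed in m3

for edge, label in [(1e-2, "one 1 cm cube"), (1e-6, "1 um cubes"), (1e-9, "1 nm cubes")]:
    n_cubes = V / edge ** 3
    total_area = n_cubes * 6.0 * edge ** 2   # equals 6*V/edge
    print(f"{label:13s}: {n_cubes:.0e} cubes, total surface = {total_area:.0e} m2")
# one 1 cm cube: 1e+00 cubes, total surface = 6e-04 m2 (= 6 cm2)
# 1 um cubes:    1e+12 cubes, total surface = 6e+00 m2
# 1 nm cubes:    1e+21 cubes, total surface = 6e+03 m2
```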
ENMs are described as a population of particles, quantified by the particle size distribution (PSD). Nonetheless, often a single value (e.g. average ± standard deviation) is reported rather than the full PSD. When the particles are suspended in an exposure medium, the size distribution of the NPs changes over time. After being emitted into aquatic environments, NPs are subject to a series of environmental processes, including dissolution, aggregation and subsequent sedimentation. It is known that the behavior and fate of NPs are highly dependent on the water chemistry. In particular, environmental parameters like pH, the concentration and type of salts (especially divalent cations) and natural organic matter (NOM) can strongly influence the behaviour of NPs in the environment. For example, pH can affect the aggregation and dissolution of metallic NPs by influencing the surface potential of the NPs (von der Kammer et al., 2010). The divalent cations Ca2+ and Mg2+ are able to efficiently compress the electrical double-layer of NPs and consequently enhance homo-aggregation and hetero-aggregation of NPs; these cations can also bridge electrostatic interactions. In surface water, aggregation processes most often lead to sedimentation and sometimes to floating aggregates (depending on the density). Coating of ENMs will change the dynamics of these processes.

As a result of these nano-specific features, ENMs form a suspension, which is different from chemicals that dissolve and form a solution. These suspended ENMs then follow a different environmental fate and behaviour than dissolved chemicals. For this reason, the way the dosage of ENMs should be expressed is highly debated within the nano-safety community. Should this be on a mass basis, as is the case for molecules of conventional chemicals (e.g. mg/L), or is the particle number the preferred dose metric, as in colloid science (e.g. number of particles/L, or the relative surface-to-volume ratio), or some multi-mixed dosimetry expression? How to express the dose for nanomaterials is a question still debated within the scientific community (Verschoor et al., 2019).
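To see why the choice of dose metric matters, the sketch below converts a fixed mass concentration of spherical particles into a particle-number concentration; the density and diameters are illustrative values for a nano-TiO2-like material, not measured data.

```python
import math

def particles_per_litre(mass_mg_per_L, density_g_cm3, diameter_nm):
    """Number concentration of monodisperse spheres at a given mass dose."""
    mass_g = mass_mg_per_L / 1000.0
    particle_volume_cm3 = (math.pi / 6.0) * (diameter_nm * 1e-7) ** 3  # nm -> cm
    return mass_g / (density_g_cm3 * particle_volume_cm3)

for d in (10, 25, 100):   # same 1 mg/L mass dose, different particle sizes
    n = particles_per_litre(1.0, 4.2, d)
    print(f"d = {d:>3} nm: {n:.1e} particles per litre")
# the same mass dose spans three orders of magnitude in particle number
```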
Although we learned from the text above that changing the form of a nanomaterial can produce a material with new properties (i.e. a new nanomaterial), often a group of materials is named after the main chemical component of the ENMs (e.g. nano-TiO2) that is available in different (nano)forms. Approaches to group ENMs are presented below:
- Shape-based classification, which is related to defining nanomaterials and has been synopsized in the ISO terminology.
- Grouping nanomaterials based on their chemical properties.
- Grouping by generation: the nanomaterials currently in routine use in products are likely to be displaced by nanomaterials designed to have multiple functionalities, the so-called 2nd-4th generation nanomaterials.
- A proposal related to the hypothesis that nanomaterials acquire a biological identity upon contact with biofluids and living entities; systems biology approaches will help identify the key impacts and nanoparticle interaction networks.

Garner, K.L., Keller, A.A. Emerging patterns for engineered nanomaterials in the environment: a review of fate and toxicity studies. Journal of Nanoparticle Research 16, 2503.
Nowack, B., Bucheli, T.D. Occurrence, behavior and effects of nanoparticles in the environment. Environmental Pollution 150, 5-22.
Verschoor, A.J., Harper, S., Delmaar, C.J.E., Park, M.V.D.Z., Sips, A.J.A.M., Vijver, M.G., Peijnenburg, W.J.G.M. Systematic selection of a dose metric for metal-based nanoparticles. NanoImpact 13, 70-75.
Von der Kammer, F., Ottofuelling, S., Hofmann, T. Assessment of the physico-chemical behavior of titanium dioxide nanoparticles in aquatic environments using multi-dimensional parameter testing. Environmental Pollution 158, 3472-3481.

What gave us (as mankind) the capacity to synthesize virtually every material at the nanoscale?
Explain the statement 'size does matter' for reactivity; use in your explanation the relationship between particle size and surface area.
Give an example of a nano-specific property - other than size - that enhances the reactivity of a material.

This page titled 2.2: Pollutants with specific properties is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.3: Pollutants with specific use
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/02%3A_Environmental_Chemistry_Chemicals/2.03%3A_Pollutants_with_specific_use
Author: Kees van Gestel
Reviewers: Steven Droge, Peter Dohmen
Learning objectives:
You should be able to:
Keywords: Insecticides, Herbicides, Fungicides, Active substances, Formulations

Crop protection products are used in agriculture. The principal aim of agriculture is the provision of food. For this purpose, agriculture tries to reduce competition by other (non-crop) plants and the loss of crop due to herbivores or diseases. An important tool to achieve this is the use of chemicals, such as crop protection products (CPPs). Accordingly, CPPs are intentionally introduced into the environment and represent one of the largest sources of xenobiotic chemicals in the environment. These chemicals are by definition effective against the target organism, often already at fairly low doses, but may also be toxic to non-target organisms, including humans. The use of pesticides, also named crop protection products (CPP) or often plant protection products (PPP; the latter term may be misleading for herbicides, which are intended to reduce all plants but the crop), is therefore strictly regulated in most countries. The pesticides used in the largest volumes world-wide are herbicides, insecticides and fungicides. As shown in Table 1, pesticides are used against a large number of diseases and plagues.

Table 1. Classification of pesticides according to what they are supposed to control

Pesticide type | Target
acaricides | against mites and spiders (incl. miticides)
algicides | against algae
anthelmintics (vermicides) | against parasitic worms
antibiotics | against bacteria and viruses (incl. bactericides)
bactericides | against bacteria
fungicides | against fungi
herbicides | against weeds
insecticides | against insects
miticides | against mites
molluscicides | against slugs and snails
nematicides | against nematodes
plant growth regulators | retard or accelerate the growth of plants
repellents | drive pests (e.g. insects, birds) away
rodenticides | against rodents

A pesticidal product usually consists of one or more active substances that are brought onto the market in a commercial formulation (spray powder, granulate, liquid product, etc.). The formulation is used to facilitate practical handling and application of the chemical, but also to enhance its effect or its safety of use. The active substance may, for instance, be a solid chemical, while application requires it to be sprayed. Or the active substance degrades quickly under the influence of sunlight and therefore has to be encapsulated. One of the most used types of formulation is a concentrated emulsion, which may be sprayed directly after dilution with water. In this formulation, the active substance is dissolved in an oily matrix and a detergent is added as an emulsifier to make the oil miscible with water. In this way, the active substance becomes quickly available after spraying. In so-called slow-release formulations, the active substance is encapsulated in permeable microcapsules, from which it is slowly released. Another component of a formulation can be a synergist, which increases the efficacy of the active substance, for instance by blocking enzymes that metabolize the active substance. The main formulation constituents thus include the active substance(s), carriers or solvents, emulsifiers, encapsulating agents and synergists.

Four types of nomenclature are used in the case of pesticides:
1. The trade name, e.g. Calypso®, which is given by the manufacturer. The same active substance is often sold under more than one trade name (accordingly, the use of trade names only is not a sufficient description of the test substance in scientific literature).
2. The code name, which is the "common" name of the active substance. Calypso® 480 SC, for example, is a concentrated suspension containing 480 g/L of the active substance thiacloprid.
3. The chemical name of the active substance. Thiacloprid is [3-[(6-chloropyridin-3-yl)methyl]-1,3-thiazolidin-2-ylidene]cyanamide.
4. The name of the chemical group to which the active substance belongs; in the case of thiacloprid: the neonicotinoids.

Pesticides represent quite a number of different groups of chemicals, including inorganic chemicals (like copper used as a fungicide), organic synthetic chemicals, and biologicals (organic natural compounds). Pesticides from the same chemical group may be used against different pest organisms, like the organotin compounds (see below). Some chemicals have a broad mode of action: many soil disinfectants, such as metam-sodium, kill nematodes, fungi, soil insects and weeds. Other pesticides are more selective, like the neonicotinoids acting only on insects, or very selective, like the insect growth regulator fenoxycarb, which is used against leaf-rollers without affecting their natural enemies. The selectivity of a pesticide also indicates to what extent non-target species may be affected upon its application (side-effects). Integrated pest management (IPM) aims at a crop protection system that is as sustainable as possible, combining biological agents (predators of the pest organism) with chemicals having a selective mode of action. Such systems are nowadays receiving increasing interest in different agricultural crops.

Some groups of pesticides that were, or still are, widely used are presented in more detail below. Their modes of action are discussed in Chapter 4.

The best known representative of the chlorinated hydrocarbons is DDT (dichloro diphenyl trichloroethane), whose insecticidal action was discovered in 1939 by the Swiss chemist Paul Hermann Müller. DDT seemed to be an ideal pesticide: it was effective, cheap and easy to produce, and remained active for a long period of time. As a remedy against malaria and other insect-borne diseases, it has saved millions of human lives. However, the high persistence of DDT, its strong bioaccumulation and its effects on bird populations triggered the search for alternatives and its ban in most Western countries. In some developing countries, however, DDT is still in use to kill malaria mosquitos, because of a lack of suitable alternatives for effective malaria control.

Other representatives of the chlorinated hydrocarbons are lindane, also called gamma-hexachlorocyclohexane, and the cyclodienes, which include the "drins" (aldrin, dieldrin, endrin; see Section 2.1) and endosulfan. Because of their high persistence and bioaccumulative potential, most organochlorine pesticides have been banned.

Volatile halogenated hydrocarbons were often used as soil disinfectants. These compounds were injected into the soil and acted as nematicides, but also killed fungi, soil insects and weeds. An example is 1,3-dichloropropene.

Organophosphates are esters of phosphoric acid; phosphate esters also constitute important biological molecules, such as nucleic acids (DNA) and ATP. In the context of pesticides, the term refers mainly to a group of organophosphate molecules that interfere with acetylcholinesterase. Nerve gases produced for chemical warfare (e.g. sarin) also belong to the organophosphates. Organophosphate pesticides are much less persistent than the chlorinated hydrocarbons and were therefore introduced as their alternatives.
The common molecular structure of organophosphates is a tri-ester of phosphate, phosphonate, phosphorothionate, phosphorothiolate, phosphorodithionate or phosphoramidate. Two of the three ester bonds bind a methyl or ethyl group to the P atom, while the third ester bond binds the rest group or "leaving group". Depending on the identity of the latter group, three sub-groups may be distinguished:
1. Aliphatic organophosphates, including malathion and a number of systemic chemicals.
2. Phenyl-organophosphates, which are more stable than the aliphatic ones but also less soluble in water, like parathion (no longer allowed in Europe).
3. Heterocyclic organophosphates, including chemicals with an aromatic ring containing a nitrogen atom, like chlorpyrifos.

Where organophosphates are derived from phosphoric acid, carbamates are derived from carbamic acid. Their mode of action is similar to that of the organophosphates. The use of older representatives of this group, like aldicarb, carbaryl, carbofuran and propoxur, is no longer allowed in Europe, but diethofencarb, oxamyl and methomyl are still in use.

A number of modern pesticides are derived from natural products. Pyrethroids are based on pyrethrum, a natural insecticide from the flowers of the Persian ox-eyed daisy, Chrysanthemum roseum. Typical for the molecular structure of pyrethroids is the cyclopropane-carboxyl group (the triangular structure), which is connected to an aromatic group through an ester bond. Pyrethrum is rapidly degraded under the influence of sunlight. Synthetic pyrethroids, which are much more stable and therefore used on a large scale against many different insects, include cypermethrin, deltamethrin, lambda-cyhalothrin, fluvalinate and esfenvalerate.

Based on the natural compound nicotine, which acts as a natural insecticide against plant herbivores but was banned as an insecticide due to its high human toxicity, a new group of more specific insecticides was developed in the 1980s: the neonicotinoids. Several neonicotinoids (e.g. imidacloprid, thiamethoxam) are systemic. This means that they are taken up by the plant and exert their effect from inside the plant, either on the pest organism (systemic fungicides or insecticides) or on the plant itself (systemic herbicides). The systemic neonicotinoids are widely applied as seed dressings in major crops like maize and sunflower. Other compounds are mainly used in spray applications, e.g. in fruit growing (thiacloprid, acetamiprid, etc.). Although neonicotinoids are more selective and therefore preferred over the older classes of insecticides like organophosphates, carbamates and pyrethroids, in recent years they have come under debate because of their side effects on honey bees and other pollinators.

Isothiocyanates were used on a large scale as soil disinfectants against nematodes, fungi and weeds. The large number of chemicals of different chemical origin belonging to this group have in common that they form isothiocyanate in soil. A representative of this group is metam-sodium.

Fentin hydroxide was used as a fungicide against Phytophthora (the cause of potato blight). Tributyltin compounds (TBT) were used as anti-fouling agents (algicides) on ships. TBTC (tributyltin chloride) is extremely toxic to shellfish, such as oysters, and has for this reason been banned in many countries.
Fenbutatin-oxide was used as an acaricide against spider mites on fruit trees.

Also indicated as diamide insecticides, this group includes chemically distinct synthetic compounds such as chlorantraniliprole, flubendiamide and cyantraniliprole, which act on the ryanodine receptor and are used against chewing and sucking insects.

Phenoxy acetic acids are systemic herbicides, exerting their action after uptake by the leaf and translocation throughout the plant. Especially plants with broad, horizontally oriented leaves are sensitive to these herbicides. 2,4-D is the best known representative of this group.

Triazines are heterocyclic nitrogen compounds, whose structure is characterized by an aromatic ring in which three carbon atoms have been replaced by nitrogen atoms. Triazines are usually applied to the soil before seed germination. The use of several compounds (atrazine, simazine) has been banned in Europe, while others like metribuzin and terbuthylazine are still in use.

This group contains the herbicides diquat and paraquat, which mainly act as contact herbicides. This means they damage the plant tissue on contact, without being translocated through the plant. In soil, they are rapidly inactivated by strong binding to soil particles. The use of paraquat is no longer allowed in Europe, but diquat is still in use.

As alternatives to the above-mentioned herbicides, glyphosate and later glufosinate were developed. These are systemic broad-spectrum herbicides with a relatively simple chemical structure. Their low toxicity to other organisms triggered pesticide producers to introduce genetically modified crops (e.g. soybean, maize, oilseed rape and cotton) that contain incorporated genes for resistance against these broad-spectrum herbicides. This type of resistance allows the farmer to use the herbicide without damaging the crop. For this reason, environmentalists fear an unrestricted use of these herbicides, which indeed is the case, especially for glyphosate (better known under the formulation name Roundup®).

Several modern fungicides share a triazole group. These fungicides have gained importance because of problems with the resistance of fungi against other classes of fungicides. Members of this group are, for instance, epoxiconazole, propiconazole and tebuconazole.

Biological pesticides are produced by living organisms as secondary metabolites to protect themselves against predators, herbivores, parasites or competitors. They can be highly effective and act at low concentrations (high toxicity), but in contrast to some synthetic pesticides they are usually sufficiently biodegradable. Compounds like pyrethrum or strobilurin are produced within the plant or fungus and are thus protected against photolysis and other environmental degradation. Furthermore, the living organism can produce additional quantities of the secondary metabolite on demand. When used as a pesticide applied as a spray, however, the molecule needs to be modified to enhance its stability (for example against photolysis) so that it remains sufficiently active over a sufficient period of time. Accordingly, synthetic derivatives of these biological molecules are often more stable and less biodegradable. Examples are the Bt insecticide, which contains an endotoxin highly toxic to insects produced by the bacterium Bacillus thuringiensis, and the avermectins, complex molecules synthesized by the bacterium Streptomyces avermitilis. Avermectins act as insecticides and acaricides and have anthelminthic properties.
In nature, eight different forms of avermectin have been found. Ivermectin is a slightly modified structure that is synthesized and marketed commercially. Other compounds belonging to this group are milbemectin and emamectin. Genetically modified plants containing a gene coding for the toxin produced by the bacterium Bacillus thuringiensis (Bt) are another example of genetic modification being applied in agriculture to produce insect-resistant crops.

EU Pesticides Database

Systemic pesticides are easily taken up by plants and internally distributed over all plant tissues. What do you expect regarding the water solubility of systemic pesticides?
Why are systemic insecticides applied as seed dressing, and thus dosed into the soil, dangerous for honey bees and other pollinators?
A pesticide with an active substance that is hardly soluble in water is introduced to the market in a commercial formulation. What components should be present in the formulation to allow homogeneous application of the pesticide by spraying?
Name three important groups of insecticides and mention one typical property or characteristic of each.

Author: Thomas Wagner
Reviewers: Steven Droge, Kevin Thomas
Learning objectives:
You should be able to:
Keywords: Biocides, product types, Biocidal Products Regulation (BPR), environmental impact

European legislation describes a biocide as a 'chemical substance or microorganism intended to destroy, deter, render harmless, or exert a controlling effect on any harmful organism by chemical or biological means'. The US Environmental Protection Agency (EPA), an independent agency of the U.S. federal government charged with protecting the environment, defines biocides as 'a diverse group of poisonous substances including preservatives, insecticides, disinfectants and pesticides used for the control of organisms that are harmful to human or animal health or that cause damage to natural or manufactured products'. The definition by the EPA includes pesticides (Chapter 2.3.1). In the scientific and non-scientific literature, the distinction between biocides, pesticides and plant protection products is often vague.

Biocides are used all around us. A biocide contains an 'active substance', which is the chemical that is toxic to the target organism, and often contains 'non-active co-substances', which help in reaching desired product parameters, such as viscosity, pH, colour or odour, or increase its ease of handling or effectiveness. The combination of active substances and non-active substances together makes up the 'biocidal product'. An example of a well-known biocidal product is TriChlor, which contains the active substance chlorine and is used to disinfect swimming pools. Because it is impractical to store chlorine gas for the treatment of swimming pools, TriChlor (trichloroisocyanuric acid) tablets are added to the pool water. When dissolved in water, the Cl atoms are replaced by H atoms, forming free chlorine (hypochlorous acid) and cyanuric acid. The free chlorine is able to disinfect the swimming pool.

A biocidal product can also contain multiple biologically active substances to enhance its effectiveness, such as AQUCAR™ 742 produced by DuPont, which contains glutaraldehyde and quaternary ammonium compounds that have a synergistic toxic effect on microorganisms that are present in oilfields and could form biofilms in the pipelines. Biocidal products are classified into 22 different product-types by the European Chemicals Agency (ECHA) (Table 1). An active substance can be classified in more than one product-type.
Table 1. The classification of biocides in 22 product types (www.echa.europe.eu)

Main group 1: Disinfectants and general biocidal products
Product-type 1 - Human hygiene biocidal products
Product-type 2 - Private area and public health area disinfectants and other biocidal products
Product-type 3 - Veterinary hygiene biocidal products
Product-type 4 - Food and feed area disinfectants
Product-type 5 - Drinking water disinfectants

Main group 2: Preservatives
Product-type 6 - In-can preservatives
Product-type 7 - Film preservatives
Product-type 8 - Wood preservatives
Product-type 9 - Fibre, leather, rubber and polymerised materials preservatives
Product-type 10 - Masonry preservatives
Product-type 11 - Preservatives for liquid-cooling and processing systems
Product-type 12 - Slimicides
Product-type 13 - Metalworking-fluid preservatives

Main group 3: Pest control
Product-type 14 - Rodenticides
Product-type 15 - Avicides
Product-type 16 - Molluscicides
Product-type 17 - Piscicides
Product-type 18 - Insecticides, acaricides and products to control other arthropods
Product-type 19 - Repellents and attractants
Product-type 20 - Control of other vertebrates

Main group 4: Other biocidal products
Product-type 21 - Antifouling products
Product-type 22 - Embalming and taxidermist fluids

In Europe, biocides are authorised for production and use under the Biocidal Products Regulation (BPR, Regulation (EU) 528/2012) of the ECHA. The BPR 'aims to improve the functioning of the biocidal product market in the EU, while ensuring a high level of protection for humans and the environment' (echa.europe.eu/legislation). This is a separate regulatory framework from that for plant protection products, which is managed by the European Food Safety Authority (EFSA). All biocidal products go through an extensive authorisation process before they are allowed on the market. The assessment of a new active substance starts with the evaluation of a product by the authorities of an ECHA member state, after which the ECHA Biocidal Products Committee forms an opinion. The European Commission then decides to approve or reject the new active substance based on the opinion of ECHA. This approval is granted for a maximum of 10 years and needs to be renewed when it reaches the end of the registration period. The BPR has strict criteria for new active substances: an active substance that meets the 'exclusion criteria' of the BPR will not be approved. In very special cases, new active substances meeting these exclusion criteria will still be allowed on the market, if they are important for public health or the public interest and there are no alternatives available. To lower the pressure on public health and the environment, there is also a candidate list of active substances to be substituted by less harmful active substances when the old active substances meet the substitution criteria of the BPR.

The release of biocides into the environment can have serious consequences, since these products are designed to cause damage to living organisms. A classic example is the release of tributyltin from shipyards, harbours and along sailing routes from the antifouling paint on the hulls of ships (De Mora, 1996). Tributyltin was used in antifouling paint from the 1950s onwards to prevent organisms from settling on the hulls of ships, which would increase fuel and repair costs. However, the release of tributyltin from the paint resulted in a toxic effect on organisms at the bottom of the food chain, such as algae and invertebrates.
Tributyltin then biomagnified in the food web, thereby affecting larger predators, such as dolphins and sea otters. Eventually, tributyltin entered the diet of humans. The first legislation on the use of tributyltin for ships dates back to the 1980s, but it was not until the International Convention on the Control of Harmful Anti-fouling Systems on Ships entered into force in 2008 that the use of tributyltin as an active biocide in antifouling paints was completely banned.

Biocides can also affect the capability of the environment to deal with pollution. Microorganisms help clean polluted areas by using the pollutant as a food source. McLaughlin et al. studied the effect of the biocide glutaraldehyde, released in spilled water from hydraulic fracturing, on microbial activity and found that microbial activity was hampered by the glutaraldehyde. Hence, because of the biocide, the environment was slower to return, or unable to return, to its original state.

De Mora, S.J. (1996). Tributyltin: Case Study of an Environmental Contaminant. Vol. 8, Cambridge University Press.
McLaughlin, M.C., Borch, T., Blotevogel, J. (2016). Spills of hydraulic fracturing chemicals on agricultural topsoil: Biodegradation, sorption and co-contaminant interactions. Environmental Science & Technology 50, 6071-6078.

Explain the difference between an active substance and a biocidal product.
What is the purpose of using non-active co-substances in a biocidal product?
A ship is transporting bananas from Costa Rica to the Netherlands. Mention 5 biocide product types that could be used on this ship, and their purpose.
Mention 3 potential ways in which the environment can be damaged as the result of the (accidental) release of biocides in common daily practice.

(draft)
Author: Thomas ter Laak
Reviewer: John Parsons, Steven Droge
Learning objectives:
You should be able to:
Keywords: emission, immission, waste water treatment, disease treatment

Introduction
Pharmaceuticals are consumed by humans (human pharmaceuticals) and administered to animals (veterinary pharmaceuticals). The active ingredients used in human and veterinary medicine partially overlap; however, the major fraction of pharmaceutically active substances in use is restricted to human consumption. In veterinary practice most of the applied pharmaceuticals are antibiotics and anti-parasitic agents, while in human medicine, pharmaceuticals to treat e.g. diabetes, pain, cardiovascular diseases, autoimmune disorders and neurological disorders make up a much larger portion of the pharmaceuticals in use. Worldwide pharmaceutical consumption has increased over the last century, and it is expected to increase further due to wider access to pharmaceuticals in developing countries. Additionally, demographic trends such as the aging populations often seen in developed countries can also lead to increased consumption of pharmaceuticals, since older generations generally consume more pharmaceuticals than younger generations (van der Aa et al., 2011). Their widespread and increasing use and their biological activity make pharmaceuticals relevant for environmental research. Below, an overview is given of the emission, occurrence and fate (modelling) of pharmaceuticals in the environment.

Pharmaceuticals in the environment
Pharmaceuticals can enter the environment through various routes. Pharmaceuticals are produced, transported to users (humans and/or animals), and partially excreted by the users via urine and feces.
For humans, the excrements are transported to wastewater treatment plants or septic tanks, or directly emitted to soil or surface water. For animals, and especially livestock, manure contains residues of the pharmaceuticals. These pharmaceuticals end up in the environment when animals graze outside or when centrally collected manure is applied as fertilizer on arable land. Just like pharmaceutical consumption, metabolism in the user (a human or an animal) and the treatment and further application of communal wastewater and manure vary. Consequently, emissions also vary between pharmaceuticals, countries and locations. Concentrations of pharmaceuticals in the environment vary strongly; concentration ranges of pharmaceuticals and some of their transformation products have, for example, been reported for the Meuse river and some of its tributaries.

Properties of pharmaceuticals and their behavior and fate in the environment
Pharmaceuticals in use have been developed for a wide array of diseases and therapeutic treatments. The chemical structures of these substances are therefore also very diverse, considering their size, the structural presence of specific atoms, and physicochemical properties such as their hydrophobicity, aqueous solubility and ionization under environmentally relevant pH values. Besides surface water, also groundwater, drinking water, manure, soil and sediments have been studied. Pharmaceuticals have been observed in all these matrices, in concentrations generally varying from µg/L to sub-ng/L levels (Aus der Beek et al., 2016; Monteiro and Boxall, 2010). Various studies have related environmental loads and the related concentrations to human consumption data. Basically, such so-called mass balance or emission-immission balancing studies work according to the principle described below.

Modelling pharmaceuticals in the environment
Since the consumption of human pharmaceuticals is relatively well documented, environmental concentrations can be related to consumption. This prediction works best for the most persistent pharmaceuticals, since variations in loss factors are marginal for these pharmaceuticals. When loss factors become larger, they generally also become more variable, through seasonal variations in use as well as variation in losses during wastewater treatment and loss processes in the receiving rivers. This makes the loads and concentrations of more degradable pharmaceuticals more difficult to predict (Ter Laak et al., 2010).

Loads in a particular riverine system (such as a tributary of the river Meuse in the example below) can be predicted with a very simplified model. Here the pharmaceutical consumption over a selected period is multiplied by the fraction of the selected pharmaceutical that is excreted unchanged by the human body (ranging from 0 to 1) and the fraction that is able to pass the wastewater treatment plant (WWTP) (ranging from 0 to 1):

\( \text{load} = \text{consumption} \times F_{\text{excreted}} \times F_{\text{WWTP passage}} \)

When this is related to actual measured concentrations and the loads calculated from these numbers, the correlation between predicted and measured loads can be plotted (see the sketch below).
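As an illustration of this simplified model, the following minimal sketch predicts a load and a river concentration. All input values (population, consumption, excreted fraction, WWTP passage fraction, river discharge) are hypothetical example numbers, not data from the studies cited here.

```python
# Minimal sketch of the simplified load model described above.
# All input values are hypothetical examples, not measured data.

population = 500_000          # inhabitants connected to the WWTP
consumption = 2.0             # pharmaceutical consumption, mg per inhabitant per day
f_excreted = 0.6              # fraction excreted unchanged by the human body (0-1)
f_wwtp_passage = 0.3          # fraction passing the WWTP untransformed (0-1)
river_discharge = 25.0        # discharge of the receiving river, m3/s

# Predicted load entering the river (mg/day)
load = population * consumption * f_excreted * f_wwtp_passage

# Predicted river concentration, assuming complete mixing and no in-stream losses
volume_per_day = river_discharge * 3600 * 24 * 1000  # river volume passing per day, in L
concentration = load * 1e6 / volume_per_day           # mg -> ng, so result is in ng/L

print(f"Predicted load: {load / 1e6:.2f} kg/day")
print(f"Predicted river concentration: {concentration:.0f} ng/L")
```

Comparing such predicted loads with loads measured in the river (concentration times discharge) gives the predicted-versus-measured correlation discussed in the text.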
Various studies have shown that environmental loads can be predicted within a factor of 3 for most commonly observed pharmaceuticals (Ter Laak et al., 2014; Ter Laak et al., 2010; Alder et al., 2010; Oosterhuis et al., 2013).

For veterinary pharmaceuticals this so-called 'emission-immission balancing' is more difficult for a number of reasons. In a way, the emissions and fate of veterinary pharmaceuticals are similar to those of pesticides used in agriculture, but with a much poorer understanding of the loads entering the system and of the fate related to the various emission routes, in combination with a complex matrix (urine, feces, manure) (Guo et al., 2016). As a consequence, environmental fate studies of veterinary pharmaceuticals often describe specific cases, or cover laboratory studies to unravel specific aspects of the environmental fate of these pharmaceuticals (Kaczala and Blum, 2016; Kümmerer, 2009).

Concluding remarks
Pharmaceuticals are commonly found in environmental compartments such as surface water, soil, sediment and groundwater (Williams et al., 2016). Pharmaceuticals consist of a single or multiple active ingredients that have a specific biological activity. The therapeutic application and pharmacological mechanisms provide valuable information to evaluate the environmental hazard of these chemicals, whereas their physicochemical properties are of more relevance for the assessment of environmental fate and exposure. The occurrence in the environment and the biological activity of this group of contaminants make them relevant in environmental science.

References
Alder, A.C., Schaffner, C., Majewsky, M., Klasmeier, J., Fenner, K. (2010). Fate of β-blocker human pharmaceuticals in surface water: Comparison of measured and simulated concentrations in the Glatt Valley Watershed, Switzerland. Water Research 44, 936-948.
Aus der Beek, T., Weber, F., Bergmann, A., Hickmann, S., Ebert, I., Hein, A., Küster, A. (2016). Pharmaceuticals in the environment - Global occurrences and perspectives. Environ. Toxicol. Chem. 35, 823-835.
Guo, X.Y., Hao, L.J., Qiu, P.Z., Chen, R., Xu, J., Kong, X.J., Shan, Z.J., Wang, N. (2016). Pollution characteristics of 23 veterinary antibiotics in livestock manure and manure-amended soils in Jiangsu province, China. J. Environ. Sci. Health Part B Pestic. Food Contamin. Agric. Wastes 51, 383-392.
Kaczala, F., Blum, S.E. (2016). The occurrence of veterinary pharmaceuticals in the environment: A review. Curr. Anal. Chem. 12, 169-182.
Kümmerer, K. (2009). The presence of pharmaceuticals in the environment due to human use - present knowledge and future challenges. J. Environ. Manage. 90, 2354-2366.
Monteiro, S.C., Boxall, A.B.A. (2010). Occurrence and fate of human pharmaceuticals in the environment. Rev. Environ. Contam. Toxicol. 202, 53-154.
Oosterhuis, M., Sacher, F., ter Laak, T.L. (2013). Prediction of concentration levels of metformin and other high consumption pharmaceuticals in wastewater and regional surface water based on sales data. Sci. Total Environ. 442, 380-388.
Ter Laak, T.L., Van der Aa, M., Houtman, C.J., Stoks, P.G., Van Wezel, A.P. (2010). Relating environmental concentrations of pharmaceuticals to consumption: A mass balance approach for the river Rhine. Environ. Intern. 36, 403-409.
Ter Laak, T.L., Kooij, P.J.F., Tolkamp, H., Hofman, J. (2014). Different compositions of pharmaceuticals in Dutch and Belgian rivers explained by consumption patterns and treatment efficiency. Environ. Sci. Pollut. Res. 21, 12843-12855.
Van der Aa, N.G.F.M., Kommer, G.J., van Montfoort, J.E., Versteegh, J.F.M. (2011).
Demographic projections of future pharmaceutical consumption in the Netherlands. Water Science and Technology, 825-832.
Williams, M., Backhaus, T., Bowe, C., Choi, K., Connors, K., Hickmann, S., Hunter, W., Kookana, R., Marfil-Vega, R., Verslycke, T. (2016). Pharmaceuticals in the environment: An introduction to the ET&C special issue. Environ. Toxicol. Chem. 35, 763-766.

Name three reasons why pharmaceuticals are relevant environmental contaminants.
Why is it easier to model human pharmaceuticals in the environment than veterinary pharmaceuticals?
Ibuprofen (anti-inflammatory) is discharged with the wastewater to the river. The following information is known about the use and fate of this drug:
- consumption per 1000 inhabitants: 1 g/d
- excretion by humans: 10%
- removal by STP: 68%
How much ibuprofen is potentially discharged to the river in g/d?

(draft)
Author: Pim de Voogt
Reviewer: John Parsons, Félix Hernández
Learning objectives:
You should be able to:
Keywords: Cocaine, ecstasy, speed, cannabis, wastewater analysis

For little more than a decade, drugs of abuse (DOA) and their degradation products have been recognized as emerging environmental contaminants. They are among the growing number of chemicals that can be observed in the aquatic environment.

The residues of a major part of the chemicals used in households and daily life end up in our sewer systems. Among the many chemicals are cleaning agents and detergents, cosmetics, food additives and contaminants, pesticides, pharmaceuticals, and also illicit drugs. Once in the sewer, they are transported to wastewater treatment plants (WWTPs), where they may be removed by degradation or adsorption to sludge, or end up in the effluent of the plant when removal is incomplete.

The consumption of both pharmaceuticals and DOA has increased substantially over the last couple of decades as a result of several factors, including ageing of the population, medicalization of society and societal changes in life-style. As a result, the loads in wastewater of drugs and of their transformation products formed in the body after consumption have steadily increased. More recently, it has been observed that chemical waste from production sites of illicit drugs is occasionally discharged into sewer systems, thereby dramatically increasing the loads of illicit drug synthesis chemicals and end products transported to WWTPs. As WWTPs are not designed to remove drugs, a substantial fraction of these loads may end up in receiving waters and thus pose a threat to both human and ecosystem health.

Europe's most commonly used illicit drugs are THC (cannabis), cocaine, MDMA (ecstasy) and amphetamines. Other important DOA include opioids such as heroin and fentanyl, GHB, khat and LSD.

Drugs of abuse are controlled by legislation, in The Netherlands by the Opium Act. The Opium Act encompasses two lists of substances: List I substances are known as hard drugs, while List II substances are known as soft drugs. Some narcotics are also used for medicinal purposes, e.g., ketamine, diazepines, and one of the isomers of amphetamine.
New psychoactive substances (NPS), also known as designer drugs or legal highs (because they are not yet controlled, as they are not listed on the Opium Act lists), are synthesized every year and become available on the market in high numbers.

Central sewage systems collect and pool wastewater from household cleaning and personal care activities, as well as excretion products resulting from human consumption, and thus contain chemical information on the type and amount of substances used by the population connected to the sewer. Drugs that are consumed are metabolized in the body and subsequently excreted. Excretion products can include the intact compounds as well as transformation products, which can be used as biomarkers. An example of the latter is benzoylecgonine, the major transformation product of consumed cocaine. The collective wastewater from the sewer system carrying the load of chemicals is directed to the WWTP, and this wastewater influent can be sampled at the point where it enters the WWTP. By appropriate sampling of the influent during discrete time-intervals, e.g. 24 h, a so-called composite sample can be obtained and the concentrations of the chemicals can be determined. The volume of influent entering the WWTP is recorded continuously. Multiplying the observed 24 h average concentration of a compound with the total 24 h volume yields the daily load of the chemical entering the WWTP. This load can be normalised to the number of people living in the sewer catchment, resulting in a load per inhabitant: \( \text{load} = (C_{24h} \times V_{24h}) / (N/1000) \) (with appropriate unit conversions), where \(C_{24h}\) is the 24 h average influent concentration, \(V_{24h}\) the total 24 h influent volume and \(N\) the number of inhabitants in the catchment. The loads of drugs in wastewater influents are usually expressed as mg.day-1.1000 inh-1. Normalised drug load data allow comparison between sewer catchments.

Obtaining chemical information about the population through wastewater analysis is known as wastewater-based epidemiology (WBE). While WBE was developed originally to obtain data on consumption of DOA, the methodology has been shown to have a much wider potential: in calculating the consumption of e.g. alcohol, nicotine, NPS, pharmaceuticals and doping agents, as well as in assessing community health indicators, such as the incidence of diseases or stress biomarkers.

Barring direct discharges into surface waters or terrestrial environments, the major sources of DOA to the environment are WWTP effluents. Conventional treatment in municipal WWTPs has not been specifically designed to remove pharmaceuticals or DOA. Removal rates of DOA vary widely and depend on compound properties such as persistence and polarity, as well as on WWTP operational conditions and process configurations. Some DOA, for example MDMA and some diazepines, cross WWTPs almost unhindered, thus ending up in the receiving waters. Although several studies report the presence of DOA or their transformation products in surface waters, there is until now very little information about their aquatic ecotoxicity available in the scientific literature.

Recently, chemical waste from synthetic DOA manufacturing, including precursors and synthesis byproducts, has been observed to be discharged directly into sewers. In addition, containers with chemical waste from DOA production sites have been dumped on soil or surface waters.
Apart from solvents and acids or bases, this waste often contains remainders of the synthesis products, which can then dissipate in the aquatic environment or seep through the soil into groundwater. Considering that DOA are highly active in the human body, it can be expected that some of them, in particular the more persistent ones, may exert effects on aquatic biota when their levels in the aquatic environment increase.

Van der Aa, M., Bijlsma, L., Emke, E., et al. (2013). Risk assessment for drugs of abuse in the Dutch watercycle. Water Research 47, 1848-1857.
Bijlsma, L., Emke, E., Hernández, F., de Voogt, P. (2012). Investigation of drugs of abuse and relevant metabolites in Dutch sewage water by liquid chromatography coupled to high resolution mass spectrometry. Chemosphere 89, 1399-1406.
EMCDDA (2018). European Drug Report 2018, Lisbon (www.emcdda.europa.eu/publications/edr/trends-developments/2018_en)
Thomas, K.V., Bijlsma, L., Castiglioni, S., et al. (2012). Comparing illicit drug use in 19 European cities through sewage analysis. Science of the Total Environment 432, 432-439.

One way of mapping the consumption volumes of psychoactive substances in a city is by chemical analysis of drug residues and biomarkers in wastewater. To assess the consumption volume of cocaine by a population, the concentrations in wastewater of the substance benzoylecgonine (BE) are determined. Why is BE used instead of cocaine itself?
What possible sources can lead to the occurrence of an illicit drug in wastewater?
Can illicit drugs in wastewater pose a threat to the environment and if so, why?

Author: Pim N.H. Wassenaar
Reviewer: Emiel Rorije, Eric M.J. Verbruggen, Jonathan Martin
Learning objectives: You should be able to
Keywords: Hydrocarbons, Paraffins, Naphthenics, Aromatics

Hydrocarbons are a class of chemicals that consist of only carbon and hydrogen atoms. But despite their simplicity in building blocks, this group of chemicals contains a wide variety of structures, as there are differences in chain length, branching, bonding types and ring structures. The main sources of hydrocarbons are crude oil and coal, which are formed over millions of years by natural decomposition of the remains of plants, animals or wood, and which are used to derive products we use on a daily basis, including fuels and plastics. Other natural sources include natural burning (forest fires) and volcanic sources.

The major classes of hydrocarbons are paraffins (i.e. alkanes), naphthenics (i.e. cycloalkanes) and aromatics, and within these classes several subclasses can be identified. Paraffins are hydrocarbons that do not contain any ring structures. Paraffins can be subdivided into normal (n-) paraffins, which do not contain any branching (straight chain), and iso-paraffins (i-), which do contain a branched carbon chain. Hydrocarbons that include at least one carbon-carbon double bond are considered olefins (or alkenes).

Naphthenic and aromatic hydrocarbons both contain ring structures, but differ in the presence of aromatic or non-aromatic rings. The naphthenics and aromatics can be further specified based on their ring count; often mono-, di- and poly-ring structures are distinguished from each other. Of all these classes, the polycyclic aromatic hydrocarbons (PAHs) are the best-studied category in terms of environmental aspects.

A toxic mode of action that hydrocarbons in general share is narcosis. Narcosis is a reversible state of inhibited activity of membrane structures within the cells of organisms.
Narcosis-type toxicity is considered the minimum toxicity that any substance can exert, simply by reaching concentration levels in the phospholipid bilayer of the cell membranes that disturb membrane transport processes; hence the name 'baseline' or minimum toxicity. When these events take place above a certain threshold, systemic toxicity can be observed in the organism, such as lethality. This threshold concentration is also known as the critical body residue (CBR) (Bradbury et al., 1989; Parkerton et al., 2000; Veith & Broderius, 1990).

Nevertheless, hydrocarbons can also have more specific mechanisms of action, resulting in greater toxicity than baseline toxicity. For example, the toxicity of several PAHs increases in combination with ultraviolet radiation due to photo-induced toxicity. Photo-induced toxicity may be caused by photoactivation, in which a PAH is degraded into an oxidized product with a higher toxicity, or by photosensitization, in which reactive oxygen species (ROS) are formed from an excited state of the PAH (Roberts et al., 2017). PAHs are especially vulnerable to photodegradation, as their absorption spectrum falls within the range of wavelengths reaching the earth's surface (> 290 nm), which is not the case for most monoaromatic and aliphatic hydrocarbons (EMBSI, 2015). The photo-induced effects are of particular concern for aquatic species with transparent bodies, like zooplankton and early life stages, as more UV light can penetrate into their organs and tissues (Roberts et al., 2017).

Several hydrocarbons are also able to cause genotoxicity and cancer upon exposure, including benzene, 1,3-butadiene and some PAHs. The carcinogenicity of PAHs is caused by biotransformation into reactive metabolites, specifically into epoxides, which are the first step in the oxidation of aromatic ring structures into dihydrodiol ring systems. In general, the biotransformation step increases the water solubility of the hydrocarbons (Phase I metabolism) and promotes subsequent conjugation and excretion (Phase II metabolism). However, several epoxide metabolites, more specifically the most stable aromatic epoxides, can reach the cell nucleus, covalently react with DNA to form DNA adducts, and induce mutations. Ultimately, if not repaired, such mutations can accumulate and may result in the formation of tumors (Ewa & Danuta, 2016). Specifically, PAHs with a bay-like region are of concern, as their biotransformation results in relatively stable reactive epoxides that are not accessible to epoxide hydrolase enzymes (Jerina et al., 1980). Similar to PAHs, 1,3-butadiene and benzene are also able to cause cancer via their respective reactive metabolites (Kirman et al., 2010; US-EPA, 1998).

Besides their toxicity, some hydrocarbons, such as the high molecular weight PAHs, can be persistent in the environment and may accumulate in biota as a result of their hydrophobicity. Internal concentrations are therefore expected to be higher for such hydrocarbons, so that there is a relationship between narcosis and bioaccumulation potential. Consequently, these hydrocarbons might be of even greater concern.

Since most research has focused on specific hydrocarbons, including several PAHs, it is important to note that the biodegradation, bioaccumulation and toxicity potential of many hydrocarbons, such as alkylated PAHs and naphthenics, is still not fully known.
As there is such a wide variety of hydrocarbon structures, it is impossible to assess the (potential) hazards of all hydrocarbons separately. Therefore, grouping approaches have been developed to speed up the risk assessment. Within a grouping approach, hydrocarbons are clustered based on structural similarities. The underlying assumption is that all chemicals in a group are expected to have fairly similar physicochemical properties, and subsequently also fairly similar environmental fate and effect properties. As a result, such a group could potentially be assessed as if it were one single hydrocarbon.

The applicability of a hydrocarbon-specific grouping approach, known as the Hydrocarbon Block Method (King et al., 1996), to assess the biodegradation and bioaccumulation potential of hydrocarbons is currently being investigated. Within this approach, all hydrocarbons are grouped based on their functional class (e.g. paraffin, naphthenic, aromatic) and the number of carbon atoms. The number of carbon atoms is thought to correlate strongly with the boiling point of the hydrocarbons. The composition of an oil substance can be expressed in such a matrix of blocks following GC-GC/MS analysis. Subsequently, the PBT properties of the individual blocks could potentially be assessed by analyzing and extrapolating the PBT properties of representative hydrocarbons for the various hydrocarbon blocks.

References
Bradbury, S.P., Carlson, R.W., Henry, T.R. (1989). Polar narcosis in aquatic organisms. In: Aquatic Toxicology and Hazard Assessment: 12th Volume. ASTM International.
EMBSI (2015). Assessment of Photochemical Processes in Environmental Risk Assessment of PAHs.
Ewa, B., Danuta, M.Š. (2016). Polycyclic aromatic hydrocarbons and PAH-related DNA adducts. Journal of Applied Genetics 58, 321-330.
Homburger, F., Hayes, J.A., Pelikan, E.W. A Guide to General Toxicology. Karger, Basel/New York, NY.
Jerina, D.M., Sayer, J.M., Thakker, D.R., Yagi, H., Levin, W., Wood, A.W., Conney, A.H. (1980). Carcinogenicity of polycyclic aromatic hydrocarbons: the bay-region theory. In: Carcinogenesis: Fundamental Mechanisms and Environmental Effects (pp. 1-12). Springer, Dordrecht.
King, D.J., Lyne, R.L., Girling, A., Peterson, D.R., Stephenson, R., Short, D. (1996). Environmental risk assessment of petroleum substances: the hydrocarbon block method. CONCAWE report no. 96/52.
Kirman, C.R., Albertini, R.A., Gargas, M.L. (2010). 1,3-Butadiene: III. Assessing carcinogenic modes of action. Critical Reviews in Toxicology 40(sup1), 74-92.
Parkerton, T.F., Stone, M.A., Letinski, D.J. (2000). Assessing the aquatic toxicity of complex hydrocarbon mixtures using solid phase microextraction. Toxicology Letters 112, 273-282.
Roberts, A.P., Alloy, M.M., Oris, J.T. (2017). Review of the photo-induced toxicity of environmental contaminants. Comparative Biochemistry and Physiology Part C: Toxicology & Pharmacology 191, 160-167.
US-EPA (1998). Carcinogenic Effects of Benzene: An Update. EPA/600/P-97/001F.
Veith, G.D., Broderius, S.J. (1990). Rules for distinguishing toxicants that cause type I and type II narcosis syndromes. Environmental Health Perspectives 87, 207.

Mention at least three aspects that cause the variation in hydrocarbon structures.
To which hydrocarbon class (or Hydrocarbon Block) do the following structures belong? (structures 1-3 not reproduced here)
What kind of toxic effects can be observed for polycyclic aromatic hydrocarbons?

(draft)
Authors: Steven Droge
Reviewer: John Parsons
Learning objectives:
You should be able to:
Keywords: Ozone layer, refrigerator, volatile chemicals, spray cans, radicals

Introduction
CFCs (chlorofluorocarbons) were very common air pollutants in the 20th century because, from the 1930s onwards, they were the basic components of refrigerants and air conditioning fluids, propellants (in spray can applications), and solvents. They are still very common air pollutants, because they are very persistent chemicals and emissions still continue. In their first years as refrigerants, they replaced the much more toxic components ammonia (NH3), chloromethane (CH3Cl), and sulfur dioxide (SO2). Particularly the CFCs leaking from old refrigerating systems in landfills and waste disposal sites caused high emissions into the environment. Typically, these volatile CFC chemicals are based on the smallest carbon molecules: methane (CH4), ethane (C2H6), or propane (C3H8). All hydrogen atoms in these CFC molecules are replaced by a mixture of chlorine and fluorine atoms.

CFCs are less volatile than their hydrocarbon analogues, because the halogen atoms polarize the molecules, which causes stronger intermolecular attractions. Depending on the substitution with Cl or F, the boiling point can be tuned to the desired point for refrigerating cooling processes. CFCs are also much less flammable than their hydrocarbon analogues, making them much safer in all kinds of applications.

Naming of CFCs
CFCs were often known by the popular brand name Freon. Freon-12 (or R-12), for example, stands for dichlorodifluoromethane (CCl2F2, boiling point -29.8 °C, whereas methane boils at -161 °C). The name encodes the number of fluorine atoms as the rightmost digit. The next digit to the left is the number of hydrogen atoms plus 1, and the next digit to the left is the number of carbon atoms minus one (zeroes are not stated); the remaining positions are occupied by chlorine. Accordingly, Freon-113 refers to 1,1,2-trichloro-1,2,2-trifluoroethane (C2Cl3F3, boiling point 47.7 °C, whereas ethane boils at -89 °C). The structure of any Freon-X can also be derived by adding 90 to the value of X: Freon-113 gives a value of 203, in which the first digit is the number of C atoms, the second digit the number of H atoms, and the third digit the number of F atoms, while the remaining positions are filled by chlorine (the two carbons of C2Cl3F3 carry six substituents in total, of which three are F, leaving three Cl). A small script applying this rule is sketched at the end of this subsection.

The reason CFCs deplete the ozone layer
The key issue with CFC emissions is the reaction under the influence of light ('photodegradation') that ultimately reduces ozone concentrations ('ozone depletion') in the upper atmosphere ('stratosphere'). Ozone absorbs the high-energy radiation of the solar UV-B spectrum (280-315 nm), and the ozone layer therefore prevents this radiation from reaching the Earth's surface. The even more energetic solar UV-C spectrum (100-280 nm) actually causes the formation of ozone (O3) from oxygen (O2):

O2 + UV-C → 2 O•
O• + O2 → O3

Under the influence of intense light energy in the upper atmosphere, CFC molecules can disintegrate into two highly reactive radicals (molecules with an unpaired electron, denoted •), for Freon-11:

CCl3F + UV → •CCl2F + Cl•

It is the Cl• radical that catalyzes the conversion of ozone back into O2:

Cl• + O3 → ClO• + O2
ClO• + O• → Cl• + O2

The environmentally relevant role of the fluorine atoms in CFCs is that they make these chemicals very persistent after emission, because the C-F bond is one of the strongest covalent bonds known. With half-lives up to >100 years, high CFC levels can reach the upper atmosphere.
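As announced in the naming subsection above, the following minimal sketch decodes a Freon/R number into a molecular formula using the '+90' rule. It assumes a saturated molecule (2C+2 substituents in total), which holds for the classic CFCs discussed here; the function is illustrative and not part of any standard library.

```python
def freon_formula(x: int) -> str:
    """Decode a Freon/R number via the '+90' rule described above.

    Assumes a saturated (single-bonded) molecule, so a molecule with
    c carbon atoms carries 2c + 2 substituents (H, F and Cl together).
    """
    code = x + 90
    c = code // 100          # first digit: number of carbon atoms
    h = (code // 10) % 10    # second digit: number of hydrogen atoms
    f = code % 10            # third digit: number of fluorine atoms
    cl = 2 * c + 2 - h - f   # remaining positions are chlorine
    parts = [("C", c), ("H", h), ("Cl", cl), ("F", f)]
    return "".join(f"{el}{n if n > 1 else ''}" for el, n in parts if n > 0)

# Freon-12 -> CCl2F2, Freon-11 -> CCl3F, Freon-113 -> C2Cl3F3,
# Freon-134 -> C2H2F4 (an HFC: no chlorine at all)
for x in (12, 11, 113, 134):
    print(f"Freon-{x}: {freon_formula(x)}")
```

Running the sketch reproduces the formulas given in the text: CCl2F2 for Freon-12, CCl3F for Freon-11, C2Cl3F3 for Freon-113, and the chlorine-free C2H2F4 for Freon-134.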
James Lovelock was the first to detect the widespread presence of CFCs in air in the 1960s, while the damage caused by CFCs was only discovered in 1974. Another undesirable property of CFCs in the stratosphere is that they are much more potent greenhouse gases than CO2.

CFC replacements
In 1978 the United States banned the use of CFCs such as Freon in aerosol cans. After several years of observations of ozone layer depletion globally, particularly above Antarctica, the Montreal Protocol was signed in 1987 to drastically reduce CFC emissions worldwide. CFCs were banned in most EU countries by the late 1990s, and e.g. in South Korea by 2010. Due to the persistency of CFCs it may take until 2050-2070 before the ozone layer returns to 1980 levels (which were already bad).

The key damaging feature of CFCs in terms of ozone depletion is their persistency, which allows emissions to reach and build up in the stratosphere (starting from 20 km above the equator, but only at 7 km above the poles). CFC replacement molecules were initially found simply by adding more hydrogens and somewhat less Cl to the CFC structures (HCFCs), but a fraction of these still contributed Cl• radicals. Later alternatives lack the chlorine atoms altogether, have even shorter lifetimes in the lower atmosphere, and simply cannot form Cl radicals. These 'hydrofluorocarbons' (HFCs) are currently common in automobile air conditioners, such as Freon-134 (do the math with the '+90' rule to see that there is no Cl; boiling point -26.1 °C).

Still, HCFCs as well as HFCs are very potent greenhouse gases, so the worldwide use of such chemicals remains problematic and gives rise to new legislation, regulations, and searches for alternatives. R-410A (which contains only fluorine as halogen) is becoming more widely used, but is, like Freon-22, about 1700 times more potent a greenhouse gas than CO2. Simple hydrocarbon mixtures such as propane/isobutane are already used extensively in mobile air conditioning systems, as they have the right thermodynamic properties for some uses and are relatively safe. Unfortunately, we had neither the technological skills nor the awareness to apply this back in the 1930s.

(draft)
Author: Mélanie Douziech
Reviewers: John Parsons
Learning objectives:
You should be able to:
Keywords: wastewater, chemical function, surfactants, microbeads

Personal care products (PCPs) cover a large range of products fulfilling hygiene, health, or beauty purposes (e.g. shampoo, toothpaste, nail polish). They are categorized into oral care, skin care, sun care, hair care, decorative cosmetics, body care and perfumes. Overall, most PCPs are classified as cosmetics and regulated accordingly. In the European Union (EU) the Cosmetic Regulation governs the production, safety of ingredients and the labelling and marketing of cosmetic products. The United States of America (USA), on the other hand, has a narrower definition of cosmetics, so that products not fulfilling the definition are regulated as pharmaceuticals (e.g. sunscreen) (Food and Drug Administration, 2016).

PCPs come in a range of formats (e.g. liquids, bars, aerosols, powders) and typically contain a wide range of chemicals, each fulfilling a specific function within the product. For example, a shampoo can include cleansing agents (surfactants), chemicals to ensure product stability (e.g. preservatives, pH adjusters, viscosity controlling agents), a diluent (e.g. water), perfuming chemicals (fragrances), and chemicals to influence the product's appearance (e.g. colourants, pearlescers, opacifiers).
The chemicals present in PCPs ultimately enter the environment either through air during direct use, such as the propellants in aerosols, or through wastewater via down-the-drain disposal following product use (e.g. shower products, toothpaste). The release of PCP chemicals into the environment needs to be monitored, and the safety of these chemicals understood, in order to avoid potential problems. In developed countries, the use of wastewater treatment plants (WWTPs) is key to effectively removing PCP chemicals and other pollutants from wastewater prior to their release to rivers and other watercourses. The removal mechanisms occurring in WWTPs include biodegradation, sorption onto solids, and volatilization to the air. The extent of removal is influenced by the physicochemical properties of the chemicals and the operational conditions of the WWTPs. In regions where wastewater treatment is lacking, the chemicals in PCPs enter the environment directly.

The wide-scale daily use of PCPs and the associated large volumes of chemicals released explain why they are scrutinized by environmental protection agencies and regulatory bodies. The following sections briefly review some of the classes of chemicals used in PCPs by describing their behavior in the environment and their potential effect on ecosystems.

Surfactants are an important and widely used class of chemicals. They are the key components of many household cleaning agents as well as PCPs, such as shampoos, soaps, bodywash and toothpaste, because of their ability to remove dirt. These dirt-removing properties also make surfactants inherently toxic to aquatic organisms. The biodegradability of surfactants is a key legal requirement for their use in PCPs, to minimize the likelihood of unsafe levels in the environment. Different types of surfactants exist and are often classified based on their surface charge. Anionic surfactants, which carry a negative surface charge, interact with and help remove positively charged dirt particles from surfaces such as hair and skin. Sodium lauryl sulfate is a typical example of an anionic surfactant used in PCPs. Cationic surfactants, such as cetrimonium chloride, are positively charged and may be used as hair conditioning agents to make hair shinier or more manageable. Non-ionic surfactants (uncharged), such as cetyl alcohol, help formulate products or increase foaming. Amphoteric surfactants, such as sodium lauriminodipropionate, carry both positive and negative charges and are commonly used to counterbalance the potentially irritating properties of anionic surfactants.

Fragrances are mixtures of often more than 20 perfumery chemicals used to provide the smell of PCPs. Typically, fragrances are present at very low levels in most PCPs (below 0.01%), and their exact compositions are not disclosed. Disclosed, however, are any allergens present in the fragrance, to help dermatologists and consumers avoid certain fragrance chemicals. Despite the wish to protect trade secrets, a recent trend increasingly sees companies disclose the full fragrance compositions of their products on their websites (e.g. L'Oréal, Unilever). Well-known examples of fragrances include hexyl cinnamal, linalool, and limonene. Potential concerns about the ecotoxicological impact of fragrances have arisen on the one hand because of a lack of disclosure of fragrance formulations and on the other hand because of the detection of certain persistent fragrances in the environment (e.g.
nitromusks).

Preservatives are usually added to PCPs containing water for their ability to protect the product from contamination by bacteria, yeasts, and molds during storage or repeated use. Given their targeted action against living organisms, the use of preservatives in chemical products, including PCPs, is under constant scrutiny. For example, in 2016 and 2017 the European Commission tightened the regulation around the use of methylisothiazolinone in cosmetic products due to human safety concerns. Other preservatives that have been restricted in use, because of both human safety and environmental safety concerns (e.g. endocrine disruption effects), include certain types of parabens and triclosan.

UV filters are used in sunscreen products as well as in other PCPs, such as foundation, lipstick, or moisturizing cream, to protect users from UV radiation. UV filters can be organic or inorganic. Inorganic UV filters, like titanium dioxide and zinc oxide, form a physical boundary protecting the skin from UV radiation. Organic UV filters, on the other hand, protect the skin by undergoing a chemical reaction with the incoming UV radiation. Organic UV filters commonly found in PCPs include butyl methoxydibenzoylmethane, ethylhexyl methoxycinnamate, and octocrylene. Organic UV filters are poorly biodegradable and have the potential to accumulate in organisms. Further, a number of organic UV filters have been shown to be toxic to coral organisms in laboratory tests. They are suspected to cause coral bleaching by, for example, promoting viral infections, but research is still ongoing to understand their potential ecotoxicological effects at realistic environmental concentrations.

Certain chemicals used in PCPs are highly volatile and may end up in the air following product use. Examples include propellants, such as propane/butane mixes or compressed air/nitrogen, used in aerosols to apply ingredients in hairsprays or deodorants and antiperspirants. Fragrances also volatilize when the product is applied to skin or hair to provide smell. Volatile silicones, chemicals used to assist the deposition of ingredients in liquids and creams, are another example of chemicals emitted to air upon PCP use.

Plastic microbeads, with a diameter smaller than 5 mm, have been used in PCPs such as face scrubs or shower gels for their scrubbing and cleansing properties. The growing concern about plastic pollution in water has drawn attention to the use of microbeads in PCPs. As a result, a number of initiatives were launched both to highlight the use of plastic microbeads and to encourage replacement with natural alternatives. An example thereof is the 'Beat the Microbead' coalition sponsored by the United Nations Environment Programme, launched to help consumers identify and avoid PCPs containing microbeads. Such initiatives, together with voluntary commitments by industry, have led to a large decrease in the use of microbeads in wash-off cosmetic products: in the EU, for example, the use of microbeads in wash-off products was reduced by 97% from 2012 to 2017. Legislation to restrict the use of microbeads has also recently been put in place. In the USA microbeads in PCPs were banned in July 2017, and a number of EU countries (e.g. United Kingdom, Italy) have also banned their use in wash-off products.

For more information on PCP chemicals and their function in products, please see European Commission (2009) and Grocery Manufacturers Association (2017). For more information on the different types of surfactants, please see Tolls et al.
and Section 2.3.8. Manova et al. list the different types of UV filters. The report of Scudo et al. gives more information on the use of microplastics in Europe.

European Commission (2009). CosIng. [cited 03.2019]; Available from: http://ec.europa.eu/growth/tools-databases/cosing/.
Food and Drug Administration (2016). Are All "Personal Care Products" Regulated as Cosmetics? [cited 03.2019]; Available from: //www.fda.gov/forindustry/fdabasicsforindustry/ucm238796.htm.
Grocery Manufacturers Association (2017). Smartlabel. [cited 11.2017]; Available from: http://www.smartlabel.org/.
Manova, E., von Goetz, N., Hauri, U., Bogdal, C., Hungerbuhler, K. (2013). Organic UV filters in Personal Care Products in Switzerland: A Survey of Occurrence and Concentrations. International Journal of Hygiene and Environmental Health 216, 508-514.
Scudo, A., Liebmann, B., Corden, C., Tyrer, D., Kreissig, J., Warwick, O. (2017). Intentionally Added Microplastics in Products. Amec Foster Wheeler Environment & Infrastructure UK Limited, United Kingdom.
Tolls, J., Berger, H., Klenk, A., Meyberg, M., Müller, R., Rettinger, K., Steber, J. (2009). Environmental safety aspects of Personal Care Products - a European perspective. Environmental Toxicology and Chemistry 28, 2485-2489.

How do chemicals found in cosmetics end up in the environment?
Why are surfactants a concern for environmental toxicity?
How did personal care products contribute to microplastic pollution in the water?

Author: Steven Droge
Reviewer: Thomas P. Knepper
Learning objectives:
You should be able to:
Keywords: amphiphilic chemicals, micelle formation, biodegradability

Surface active agents ('surf-act-ants') are a wide variety of chemicals produced in bulk volumes (>10,000 tonnes annually) as a key ingredient of cleaning products: detergents. Typical for surfactants is that they have a hydrophobic tail and a hydrophilic head group.

At relatively high concentrations in water (typically >10-100 mg/L), surfactants spontaneously form aggregated structures called micelles, often spheres with the hydrophobic tails pointing inward and the hydrophilic head groups towards the surrounding water molecules. These micelle super-structures allow surfactants to dissolve grease and dirt from e.g. textile or dishes into water, which can then be flushed away. Besides this common use of surfactants, their amphiphilic (i.e., both hydrophilic and lipophilic) properties allow for versatile use in our modern world:

• During the large 2010 oil spill in the Gulf of Mexico, enormous volumes (>6700 tonnes) of several types of surfactant formulations (e.g. 'Corexit') were used to disperse the constant stream of oil leaking from the damaged deep-water well into small dissolved droplets, in order to facilitate microbial degradation and prevent the formation of floating oil slicks that could ruin coastal habitats.

• The ability of a layer of surfactants to maintain hydrophobic particles in solution is a key process in many products, such as paints and lacquers.

• The ability to emulsify dirt particles is a key feature of process fluids used during deep drilling in soil or sediment.

• Fabric softeners and hair conditioners have cationic surfactants as key ingredients, which stick with their positively charged head groups onto the negatively charged fibers of your towel or hair. After the final flushing, these cationic surfactants still stick to the fibers, and because their hydrophobic tails stick out, they make these materials feel soft and smooth.
Often the cationic surfactants are only flushed off the fibers during the next washing event (with anionic or nonionic surfactants).

• Many cationic surfactants have biocidal properties at relatively low concentrations and are therefore used at a few percent in many cosmetic products as preservatives, or to kill microbes in food processing, in antibacterial hand wipes, or during swimming pool cleaning. Examples are the chloride salts of benzalkonium, benzethonium, and cetylpyridinium.

• Surfactants lower the surface tension of water, and are therefore used (as 'adjuvants') in pesticide products to facilitate droplet formation during spraying and, in the case of herbicides, to improve contact of the droplets with the target leaves. Examples are fluorinated surfactants, silicone-based surfactants (Czajka et al., 2015), and polyethoxylated tallow amine (POEA), used for example in the glyphosate formulation Roundup.

The hydrophobic tail of surfactants is mostly composed of a chain of carbon atoms, although fluorinated carbon (-CF2-) chains or siloxane (Si(CH3)3-O-[..]-Si(CH3)3) chains are also possible.

The first surfactants produced in bulk volumes for washing machines were branched anionic alkylbenzenesulfonates (ABS) and alkylphenolethoxylates (APEO), with the hydrocarbon source obtained from petroleum. Because of the variable petroleum source, these chemicals are often complex mixtures. However, hydrophobic branched alkyl chains are poorly biodegraded, and the constant disposal of these surfactants into the wastewater caused very high environmental concentrations, often leading to foaming rivers.

Surfactant producers 'voluntarily' switched to carbon sources such as palm oil, or to controlled polymerization of petrol-based ethylene, which could be used to generate surfactants with linear alkyl chains: linear alkylbenzenesulfonate (LAS) and alcohol ethoxylates (AEO). Some surfactants have the hydrophilic head group attached to two carbon chains, such as the anionic docusate (heavily used in the BP oil spill) and the cationic dialkyldimethylammonium chemicals. Common detergent surfactants are nowadays designed to pass ready biodegradability tests (>60% mineralisation to CO2 within a 10 d window following a lag phase, in a 28 d test). Early examples of fabric softeners are double-chain (dialkyl)dimethylammonium surfactants, but the environmental persistency of these compounds (DODMAC and DHTDMAC, see e.g. the EU and RIVM reports) has led to their large-scale replacement by diesterquats (DEEDMAC), which degrade more rapidly through the weak ester linkages of the fatty acid chains (Giolando et al., 1995). A switch to sustainable production of the carbon sources is ongoing: petroleum-based ethylene is increasingly being replaced by the linear fatty acid carbon chains from either palm oil (mostly C16/C18) or coconut oil (mostly C12/C14), but such raw materials also need to be derived as sustainably as possible.

The hydrophilic head groups can vary extensively. Nonionic surfactants can have a simple polar functional group (amide), a glucose-based head group (polyglycoside), or contain variable lengths of repetitive ethoxylate and/or propoxylate units. Because the ethoxylation process is difficult to control, such surfactants are often complex mixtures. Anionic surfactants are often based on sulfate (R-OSO3-) or sulfonate (R-SO3-) groups, but phosphonates and carboxylates are also common.
A key difference between anionic surfactants is that sulfates and sulfonates are fully anionic (pKa < 0) over the entire environmental pH range (pH 4-9), while carboxylates are weaker acids (pKa ~5) that are still partially present as neutral species; for an acid, the neutral fraction follows from \( f_{\text{neutral}} = 1/(1+10^{\text{pH}-\text{p}K_a}) \), so at pH 7 only about 1% of a carboxylate surfactant is in the neutral form. Most cationic surfactants are based on permanently charged quaternary ammonium head groups (R-(N+)(CH3)3), although several ionizable amine groups are applied in cationic surfactants too (e.g., diethanolamines).

The key ingredient property of most surfactants is the critical micelle concentration (CMC), which defines the dissolved concentration above which micellar aggregates start to form that can remove grease or fully emulsify particles. The CMC decreases with increasing hydrophobic tail length, which means that with longer tails less surfactant is needed to start forming micelles. However, with increasing hydrophobic tail length, anionic surfactants more readily precipitate with dissolved inorganic cations such as calcium. Also, surfactant toxicity increases with hydrophobic tail length. If the alkyl chain is too long, the surfactant may bind strongly to all kinds of surfaces and not be available for micelle formation. The optimum hydrophobic chain length is thus often a balance between the desired properties of the surfactant and several critical processes that influence the efficiency and risk of surfactants.

Kümmerer, K. (2007). Sustainable from the very beginning: rational design of molecules by life cycle engineering as an important approach for green pharmacy and green chemistry. Green Chemistry 9, 899-907. DOI: 10.1039/b618298b
Giolando, S.T., Rapaport, R.A., Larson, R.J., Federle, T.W., Stalmans, M., Masscheleyn, P. (1995). Environmental fate and effects of DEEDMAC: A new rapidly biodegradable cationic surfactant for use in fabric softeners. Chemosphere 30, 1067-1083. DOI: 10.1016/0045-653500005-S
Czajka, A., Hazell, G., Eastoe, J. (2015). Surfactants at the Design Limit. Langmuir 31, 8205-8217. DOI: 10.1021/acs.langmuir.5b00336
Scientific Committee on Toxicity, Ecotoxicity and the Environment (CSTEE). Opinion on the results of the Risk Assessment of: Dimethyldioctadecylammonium chloride (DODMAC). EU report C2/JCD/csteeop/DodmacHH22022002/D //ec.europa.eu/health/archive/ph_risk/committees/sct/documents/out143_en.pdf
Van Herwijnen, R. Environmental risk limits for DODMAC and DHTDMAC. RIVM Letter report 601782029 - //www.rivm.nl/bibliotheek/rapporten/601782029.pdf

Try to look up toxicity literature for the pesticide formulation Roundup and compare the apparent toxicities of the active ingredient glyphosate and the adjuvant surfactant to non-target organisms. What do you notice?
Surfactants do not just accumulate at the air-water interface, but also at water-solvent interfaces. The octanol-water partition coefficient is a critical property for risk assessment, but can you provide reasons why this property is problematic to derive and apply for surfactants?
Although detergents are designed to be readily biodegradable, some compounds appear to be pseudo-persistent as they are still found at detectable levels near the outflow of sewage treatment effluent pipes. What do you think is meant by this?

In preparation

This page titled 2.3: Pollutants with specific use is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.1: Environmental compartments
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/03%3A_Environmental_Chemistry_-_From_Fate_to_Exposure/3.01%3A_Environmental_compartments
In order to understand and predict the effects of chemicals in the environment, we need to understand the behaviour of chemicals in specific environments and in the environment as a whole. In order to deal with the diversity of natural systems, we consider them to consist of compartments. These are defined as parts of the physical environment that are defined by a spatial boundary that distinguishes them from the rest of the world, for example the atmosphere, soil, surface water and even biota. These examples suggest that three phases (gas, liquid, and solid) are important, but compartments may consist of several phases. For example, the atmosphere contains suspended liquids (e.g., fog) and solids (e.g., dust) as well as gases. Similarly, lakes contain suspended solids, and soils contain gaseous and water-filled pore space. In detailed environmental models, each of these phases may also be considered to be a compartment.

The behaviour and fate of chemicals in the environment is determined by the properties of environmental compartments and the physicochemical characteristics of the chemicals. Together these properties determine how chemicals undergo chemical and biological reactions, such as hydrolysis, photolysis and biodegradation, and phase-transfer processes such as air-water exchange and sorption.

In this chapter, we first introduce the most important compartments, and their most important properties and processes that determine the behaviour of chemical contaminants: the atmosphere, the hydrosphere, sediment, soil, groundwater and biota. The emission of chemicals into the environment from either point sources or diffuse sources is discussed, as well as the important pathways and processes determining the fate of chemicals. The partitioning approach to phase-transfer processes is presented, with sorption as a specific example. The impact of physicochemical properties on partitioning is also discussed.

Other important environmental processes are discussed in sections on metal speciation, processes affecting the bioavailability of metals and organic contaminants, and the transformation and degradation of organic chemicals. These sections also include information on the basic methods to measure these processes.

Finally, approaches that are used to model and predict the environmental fate of chemicals, and thus the exposure of organisms to these chemicals, are described in section 3.8.

Authors: Astrid Manders-Groot
Reviewer: Kees van Gestel, John Parsons, Charles Chemel
Learning objectives:
You should be able to:
Keywords: atmosphere, transport distance, residence time

The atmosphere of the Earth consists of several layers that have limited interaction. The troposphere is the lowermost part of the atmosphere and contains the oxygen that we breathe and most of the water vapor. It contains on average 78% N2, 21% O2 and up to 4% water vapor. Greenhouse gases like CO2 and CH4 are present at 0.0038% and 0.0002%, respectively. Air pollutants like ozone and NO2 have concentrations that are a further factor of 1,000-10,000 lower, but are already harmful to the health of humans, animals and vegetation at these concentrations.

The troposphere is 6-8 km high near the poles, about 10 km at mid-latitudes and about 15 km at the equator. It has its own circulation and determines what we experience as weather, with temperature, wind, clouds and precipitation. The lowest part of the troposphere is the boundary layer, the part that is closest to the Earth.
Its height is determined by the heating of the atmosphere by the Earth's surface and by the wind conditions, and it has a daily cycle driven by the incoming sunlight. It is not a completely separate layer, but the exchange of air pollutants like O3, NOx, SO2 and xenobiotic chemicals between the boundary layer and the layers above is generally inefficient. Therefore it is also termed the mixing layer.

Above the troposphere lies the stratosphere, a layer that is less strongly influenced by the daily solar cycle. It is very dry and has its own circulation, with some exchange with the troposphere. The stratosphere contains the ozone layer that protects life on Earth against UV radiation, and extends to about 50 km altitude. The layers covering the next 50 km are the mesosphere and thermosphere, which are not directly relevant for the transport of the chemicals considered in this book.

Air pollutants include a wide range of chemicals, ranging from metals like lead and mercury to asbestos fibers, polycyclic aromatic hydrocarbons (PAHs) and chloroform. These pollutants may be emitted into the atmosphere as a gas, or as a particle or droplet with sizes of a few nanometers to tens of micrometers. The particles and droplets are termed aerosol or, depending on the measurement method, particulate matter; the latter term is used in air quality regulations. Note that a single aerosol particle can be composed of several chemical compounds. Once a pollutant is released into the atmosphere, it is transported by diffusion and by advection with horizontal and vertical winds, and may ultimately be deposited to the Earth's surface by rain (wet deposition) or by sticking to the surface (dry deposition). Large particles may fall down by gravitational settling, a process also called sedimentation. Air pollutants may interact with each other or with other chemicals, particles and water by physical or chemical processes. All these processes are explained in more detail below. A summary of the relevant interactions is given in the accompanying figure. Air pollutants can be transported over ranges of a few meters up to crossing the globe several times. Concentrations of gases are often expressed as volume mixing ratios (parts per billion, ppb), whereas for particulate matter the correct unit is (micro)grams per cubic meter, as no single molecular weight is associated with it. For ultrafine particles, concentrations are expressed as numbers of particles per cubic meter; for asbestos, the number of fibers per cubic meter is used.

The properties of air pollutants, like solubility in water, attachment efficiency to the Earth's surface (water, vegetation, soil) and particle size, are key elements determining lifetimes and transport distances. These properties may change due to interaction with other chemicals and with meteorology.

The main physical processes are:

Chemical conversions include:

Pollutants are characterized by their chemical composition, but for aerosols the size distribution of the particles is also relevant. Note that the conservation of atoms always applies, but particle size distribution and particle number can be changed by physical processes. This has to be kept in mind when concentrations are expressed in particles per volume instead of mass concentrations.

Several processes determine the mixing and transport of chemicals in the air:

Although the processes of diffusion and transport are well known, it is not an easy task to solve the equations describing these processes.
For stationary point and line sources under idealized conditions, analytical descriptions can be derived in terms of a plume whose concentration profile follows a Gaussian distribution, but for more realistic descriptions the equations must be solved numerically. For complex flow around a building, computational fluid dynamics is required for an accurate description; for long-range transport, a chemistry-transport model must be used.

Wet deposition comprises the removal processes that involve water:

Wet deposition is a very efficient removal mechanism for both small (<0.1 µm diameter) and large (>1 µm diameter) aerosols. Aerosols that are hygroscopic can grow in size by absorbing water, or shrink by evaporating water under dry conditions. This affects their wet and dry deposition rates.

Dry deposition is partly determined by the gravitational forces on a particle. Heavy particles (≥5 µm) fall to the Earth's surface in a process called gravitational settling or sedimentation. In the lowest layer of the atmosphere, air pollutants can be brought close enough to the surface to stick to it or be taken up. In the turbulent boundary layer, air pollutants are brought close to the surface by the turbulent motion of the atmosphere, down to the very thin laminar layer (laminar resistance, only relevant for gases) through which they diffuse to the surface. Aerosols and gases can stick to the Earth's surface or be taken up by vegetation, but they may also rebound. Several pathways operate in parallel or in series, similar to an electric circuit with resistances in parallel and in series; therefore the resistance approach is often used to describe these processes.

Deposition on snow or ice is generally slow, since the atmosphere above it is often stably stratified with little turbulence (high aerodynamic resistance), the surface area to deposit on is relatively small, and aerosols may even rebound from an icy surface to which it is difficult to attach (low collection efficiency). On the other hand, forests often show high deposition velocities, since they induce stronger turbulence in the lowermost atmosphere and have a large leaf surface that may take up gases through the stomata or provide sticking surfaces for aerosols. Deposition velocities thus depend on the type of surface, but also on the season, the atmospheric stability (wind speed, cloud coverage) and the ability of stomata to take up gases. When the atmosphere is very dry, for example, plants close their stomata and this pathway is temporarily shut down. For particles, the dry deposition velocity is lowest at sizes of 0.1-1 µm.

Once air pollutants are removed from the atmosphere, they become part of the soil or water compartments, which can act as a reservoir. This is in general only taken into account for a limited number of chemicals. Ammonia or persistent organic pollutants may be re-emitted from the soil by evaporation. Dusty material or pollutants attached to dust may be brought back into the atmosphere by the action of wind. This is relevant for bare areas like agricultural lands in wintertime, but also for passing vehicles that stir up road dust through the air flow they induce.

Due to the many relevant processes and interactions, the fate of chemical pollutants in the air has to be determined using models that cover the most important processes.
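As a minimal illustration of the analytical plume approach mentioned above, the sketch below evaluates the standard Gaussian plume formula for a continuous point source, with the ground treated as a reflecting surface. The emission rate, wind speed, source height and plume widths (sigma_y, sigma_z) are illustrative assumptions; real applications derive the plume widths from atmospheric stability classes.

```python
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration (g/m3) for a continuous point source.

    Q: emission rate (g/s), u: wind speed (m/s), H: effective source height (m),
    sigma_y/sigma_z: lateral/vertical plume widths (m) at the downwind distance
    of interest. The second vertical term mirrors the plume in the ground.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# A 1 kg/hour release (as in the tracer example later in this section) from an
# assumed 40 m stack; plume widths assumed for roughly 1 km downwind.
Q = 1000.0 / 3600.0  # g/s
c = plume_concentration(Q, u=5.0, y=0.0, z=1.5, H=40.0, sigma_y=70.0, sigma_z=30.0)
print(f"concentration at breathing height: {c * 1e6:.1f} ug/m3")
```

Under these assumed conditions the formula gives a few micrograms per cubic meter on the plume centreline, which illustrates why even modest releases are measurable kilometers downwind.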
Which processes need to be covered depends on the case study: a good description of a plume of toxic material released during an accident, where high concentrations, strong gradients and short timescales are important, requires a different approach than the chronic small release from a factory. Since including all aspects would require prohibitively heavy numerical simulations, one has to select the relevant processes. Key inputs for all transport models are emission rates and meteorological data.

When one is interested in concentrations close to a specific source, then besides the emission rate the effective emission height is important, together with the processes that determine dispersion: wind speed and atmospheric stability. Chemical reaction rates and deposition velocities should be included when the time horizon is long, or when the reactions are fast or the deposition velocities are high.

When one is interested in actual concentrations resulting from releases of multiple sources and species over a large area of interest, as for an air quality forecast, the processes of advection, deposition and chemical conversion become more relevant, and the meteorology needs to be known over the whole area. Sharp gradients close to the individual sources are, however, no longer resolved. Rain in particular can be a very efficient removal mechanism, removing most of the aerosol within one hour. Dry deposition is slower, but still results in a lifetime of less than a week and transport distances of less than 1,000 km for most aerosols. For some gaseous compounds, like halogens and N2O, deposition hardly plays a role, and they are chemically inert in the troposphere, leading to very long lifetimes.

To assess the overall long-term fate of a new chemical to be released to the market, the potential concentrations in air, water and soil have to be determined. Ideally, models for air, soil and water are used together in a consistent way, including their interactions. For many air pollutants the atmospheric lifetime is short, but it determines where and in which form they are deposited onto ground and water surfaces, where they may accumulate. This means that even if a concentration in air is relatively low at a certain distance from a source, the deposition of an air pollutant over a year may still be significant. The figure below shows an example of annual mean modelled concentrations and annual total deposition of a hypothetical passive (non-reactive) soot-like tracer that is released at 1 kg/hour at a fictitious site in The Netherlands. Annual mean concentrations are small compared to ambient concentrations of particulate matter, but the footprint of the accumulated deposition is larger than that of the mean concentration, since the surface acts as a reservoir. This implies that re-emission to air can be relevant. It may take several years before an equilibrium concentration is reached in soil or water from this deposition input, as different processes and time scales apply. Mountain ranges (Alps, Pyrenees) are visible in the accumulated wet deposition, as they are areas with enhanced precipitation.

In addition to spatially explicit models, box models exist that have the advantage that they can make long-term calculations for a continuous release of a species, including the interaction between the compartments air, soil and water.
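The essence of such a box model is a single mass balance per compartment. The sketch below integrates dM/dt = E - k·M for one well-mixed compartment, where k lumps all removal processes (deposition, degradation, outflow); the emission rate, rate constant and volume are illustrative assumptions, not values from this chapter.

```python
# One-box model for a continuous release: dM/dt = E - k*M.
# All parameter values below are illustrative assumptions.
E = 24.0      # emission rate (kg/day)
k = 0.05      # first-order removal rate constant (1/day)
V = 1.0e9     # compartment volume (m3)

t, dt, M = 0.0, 0.1, 0.0
while t < 120.0:              # simulate 120 days
    M += (E - k * M) * dt     # explicit Euler integration step
    t += dt

M_eq = E / k                  # analytical steady state
print(f"mass after 120 days : {M:.1f} kg")
print(f"steady-state mass   : {M_eq:.1f} kg (~95% reached after 3/k = {3 / k:.0f} days)")
print(f"steady concentration: {M_eq / V * 1e6:.2f} mg/m3")
```

The same structure, with exchange terms coupling several boxes, underlies the multimedia fate models discussed later in this chapter.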
Such box models can be used to determine when an equilibrium concentration is reached within a compartment, but they cannot resolve horizontal concentration gradients within a compartment.

Figure: Constant release of a passive tracer from a point source in The Netherlands. The upper panel shows the annual mean concentration, the lower panel the accumulated wet and dry deposition over one year. Note the nonlinear colour scale needed to cover the large range of values. Source: https://doi.org/10.3390/atmos8050084.

Learn more:

EU air quality, policy and air quality legislation: http://ec.europa.eu/environment/air/index_en.htm

US hazardous air pollutants, including lists of toxics: https://www.epa.gov/haps

Plume dispersion approach: http://courses.washington.edu/cewa567/Plumes.PDF

Chemistry-transport models: www.narsto.org/sites/narsto-dev.ornl.gov/files/Ch71.3MB.pdf

Seinfeld, J., Pandis, S.N. (2016). Atmospheric Chemistry and Physics: From Air Pollution to Climate Change. Wiley (covering all aspects).

John, A.C., Küpper, M., Manders-Groot, A.M., Debray, B., Lacome, J.M., Kuhlbusch, T.A. Emissions and possible environmental implication of engineered nanomaterials (ENMs) in the atmosphere. Atmosphere 8, 84.

Questions:

Which processes determine the concentration of a pollutant in the air very close to its point of emission?

When fine particles (<1 µm diameter) are released in the lowest 100 m of the atmosphere, how far will they be transported? And which processes contribute to the removal of the particles from the air?

When gases are released in the lowest 100 m of the atmosphere, how far do they get?

Author: John Parsons

Reviewers: Steven Droge, Sean Comber

Learning objectives:

You should be able to:

Keywords: Hydrogen bonding, carbonates, dissolved salts

Water covers 71% of the Earth's surface, and this water, together with the smaller amounts present as vapor in the atmosphere, as groundwater and as ice, is referred to collectively as the hydrosphere. The bulk of this water is salt water in the oceans and seas, with only a minor part of the freshwater being present as lakes and rivers.

Water is essential for life and also plays a key role in many other chemical and physical processes, such as the weathering of minerals, soil formation, and the regulation of the Earth's climate. These important roles of water derive from its structure as a small but very polar molecule, arising from the polarised hydrogen-oxygen bonds. As a consequence, water molecules are strongly attracted to each other by hydrogen bonding, giving water relatively high melting and boiling points, heat capacity, surface tension, etc. The polarity of the water molecule also makes water an excellent solvent for a wide variety of ionic and polar chemicals, but a poor solvent for large nonpolar molecules.

As mentioned above, freshwater is only a very small proportion of the total amount of water on the planet, and most of it is present as ice. Since this water is in contact with the atmosphere and with the soils and bedrock of the Earth's crust, it dissolves both atmospheric gases such as oxygen and carbon dioxide, and salts and organic chemicals from the crust. If we compare the relative compositions of cations in the Earth's crust and the major dissolved species (Table 1), it is clear that these are very different. This difference reflects the solubility of these components which, for ionic chemicals, depends on both their charge and their size (expressed as z/r2, where z is the charge and r the radius of the ion).
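To see how this ionic-potential measure separates the elements in Table 1 below, the short sketch tabulates z/r2 for the major cations. The ionic radii are approximate six-coordinate (Shannon) radii, assumed here for illustration rather than taken from this chapter; the ranking, with Al3+ and Fe3+ far above Na+ and K+, mirrors which elements stay in the crust and which dominate river water.

```python
# Ionic potential z/r^2 for major cations; radii are approximate
# six-coordinate Shannon radii in Angstrom (assumed for illustration).
ions = {"Al3+": (3, 0.54), "Fe3+": (3, 0.65), "Mg2+": (2, 0.72),
        "Ca2+": (2, 1.00), "Na+": (1, 1.02), "K+": (1, 1.38)}

for ion, (z, r) in sorted(ions.items(), key=lambda kv: -kv[1][0] / kv[1][1] ** 2):
    print(f"{ion:5s} z/r^2 = {z / r**2:5.1f} per Angstrom^2")
```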
As well as reflecting the properties of the local crust, the composition of dissolved salts is also influenced by precipitation and evaporation, and by the deposition of sea salt in coastal regions.

Table 1. Comparison of the major cation composition of average upper continental crust and average river water (*except aluminium and iron, which are from Broecker and Peng).

Element | Upper continental crust (g/kg) (Wedepohl 1995*) | River water (mg/kg) (Berner & Berner 1987*)
Al | 77.4 | 0.05
Fe | 30.9 | 0.04
Ca | 29.4 | 13.4
Na | 25.7 | 5.2
K | 28.6 | 1.3
Mg | 13.5 | 3.4

The pH of surface water is determined by both the dissolution of carbonate minerals and that of carbon dioxide from the atmosphere. These components are part of the set of equilibrium reactions known as the carbonate system. At equilibrium with the current atmospheric CO2 concentration and solid calcium carbonate, the pH of surface water is between 7 and 9, but more acidic values may be reached where soils are poor in calcium carbonate (limestone). This is illustrated by the pH values measured in a river in Northern England, where acidic, organic carbon-rich water at the source is gradually neutralised once the river encounters limestone-rich bedrock.

As well as these natural processes, there are human influences on the pH of surface water, including acidic precipitation resulting from fossil fuel combustion and acidic effluents from mining activities caused by the oxidation and dissolution of mineral sulphides. Regions with carbonate-poor soils, such as Southern Scandinavia, are particularly vulnerable to acidification due to these influences, and this is reflected in, for example, reduced fish populations in these vulnerable regions (see Henriksen et al.). More recently, reduced coal burning and the decline in heavy industry are resulting in the recovery of pH values in upland areas across Europe.

Dissolved oxygen is of course essential to aquatic life, and concentrations are in general adequate in well-mixed water bodies. Oxygen can become limiting in deep lakes, where thermal stratification restricts the transport of oxygen to deeper layers, or in water bodies with high rates of organic matter decomposition. This may result in anoxic conditions, with significant impacts on ecology and on the behaviour of chemical contaminants.

Freshwater eventually moves into seas and oceans, where the concentrations of dissolved species are much higher than in the freshwater environment. This is partly due to the effects of evaporation of water from the oceans, but is also due to specific marine sources of some dissolved components. Estuaries are the transition zones where freshwater and seawater mix. These are highly productive environments where increasing salinity has a major impact on the behaviour of many chemicals, for example on the speciation of metals and on the aggregation of colloids as a result of cations shielding the negative surface charge of colloidal particles. Increasing salinity also affects organic chemicals, with ionic chemicals forming ion pairs, and even reduces the solubility of neutral organics (the so-called salting-out effect). In addition to these chemical effects of increasing salinity, the lowering of flow rates in estuaries leads to the deposition of suspended particles.

Since the concentrations of pollutants are in general lower in the marine environment than in the freshwater environment, concentrations in estuaries decrease as freshwater is diluted with seawater. Measuring salinity at different locations in estuaries is a convenient way to determine the extent of this dilution.
Components that are present in higher concentrations in seawater will of course show an increase with salinity, unless they are affected by processes other than mixing. Plotting the concentrations of chemicals against salinity at different locations can therefore yield information on whether they behave conservatively (i.e. only undergo mixing) or are removed by processes such as degradation or partitioning into the atmosphere or sediments. Such mixing plots show characteristic patterns for conservative chemicals and for those that are either removed in the estuary or have local sources there. Models describing the behaviour of chemicals in estuaries can be used with these data to derive the rates of removal or addition of the chemical in the system.

The open ocean is sufficiently mixed for the composition of the major dissolved constituents to be fairly constant, except in local situations as a result of upwelling of deep nutrient-rich waters or the biological uptake of nutrients. In coastal regions the concentrations of chemicals and other components originating from terrestrial sources may also be locally higher. The major components of seawater are listed in Table 2 with their typical concentrations.

Table 2. Major ion composition of freshwater and seawater.

Ion | Seawater (mmol/L) (Broecker and Peng, 1982) | River water (mmol/L) (Berner and Berner, 1987)
Na+ | 470 | 0.23
Mg2+ | 53 | 0.14
K+ | 10 | 0.03
Ca2+ | 10 | 0.33
HCO3- | 2 | 0.85
SO42- | 28 | 0.09
Cl- | 550 | 0.16
Si | 0.1 | 0.16

These concentrations may deviate in waterbodies that are partly or wholly isolated from the oceans (e.g. Mediterranean, Baltic, Black Sea), for example where these are impacted by evaporative losses of water. In extreme cases, the concentrations of salts may exceed their solubility product, resulting in the precipitation of salts in evaporite deposits.

As is the case in freshwater, carbonates play an important role in regulating ocean pH. The fact that the oceans are supersaturated in calcium carbonate makes it possible for a variety of organisms to have calcium carbonate shells and other structures. The important processes and equilibria involved are those of the carbonate system described above. There is concern that one of the most important effects of increasing atmospheric carbon dioxide will be a lowering of ocean pH to values that will result in destabilisation of these carbonate structures.

References:

Andrews, J.E., Brimblecombe, P., Jickells, T.D., Liss, P.S., Reid, B.J. An Introduction to Environmental Chemistry, Blackwell Publishers, ISBN 0-632-05905-2.

Baird, C., Cann, M. Environmental Chemistry, Fifth Edition, W.H. Freeman and Company, ISBN 978-1429277044.

Berner, E.K., Berner, R.A. Global Water Cycle: Geochemistry and Environment, Prentice-Hall.

Broecker, W.S., Peng, T.-H. Tracers in the Sea, Lamont-Doherty Geol. Obs. Publ.

Henriksen, A., Lien, L., Rosseland, B.O., Traaen, T.S., Sevaldrud, I.S. Lake Acidification in Norway: Present and Predicted Fish Status. Ambio 18, 314-321.

Wedepohl, K.H. The composition of the continental crust. Geochimica et Cosmochimica Acta 59, 1217-1232.

Questions:

The concentrations of dissolved salts in rivers and lakes are determined by three processes. Describe these processes and the characteristic composition of waterbodies where one of these processes is dominant.

In addition to the well-known effects on global climate, increasing atmospheric CO2 is also expected to have an impact on the pH of the oceans.
Which processes are responsible for determining the pH of the oceans?

How could increasing atmospheric CO2 affect these processes?

What effects could increasing acidity of the oceans have on marine organisms?

(In preparation)

Author: Kees van Gestel

Reviewers: John Parsons, Jose Alvarez Rogel

Learning goals:

You should be able to:

Keywords: Particle size distribution, Porosity, Minerals, Organic matter, Cation Exchange Capacity, Water Holding Capacity

Soil is the upper layer of the terrestrial environment that serves as a habitat for organisms and as a medium for plant growth. In addition, it plays an important role in water storage and purification, and helps to regulate the Earth's atmosphere (e.g. carbon storage, gas fluxes, ...).

Soils are composed of three phases. The solid phase is formed by mineral and organic components. The mineral components occur in different particle sizes, from coarse (sand) through intermediate (silt) to fine (clay); their combination determines soil texture. The particles can be arranged into porous aggregates, with the soil pores filled with air and/or water. The proportion of air in a soil depends on its moisture content. The composition of the soil solid phase may be quite variable.

The gaseous phase has a composition similar to that of air, but due to the respiration of plant roots and the metabolic activity of soil microorganisms, the O2 content generally is lower and the CO2 content higher. Exchange of gases between soil pores and atmospheric air takes place by diffusion. Diffusion proceeds faster in dry soil and much slower when the soil pores are filled with water.

The liquid phase of the soil, the soil solution or pore water, is an aqueous solution containing ions (mainly Na+, K+, Ca2+, Cl-, NO3-, SO42-, HCO3-) from the dissolution of a variety of salts, and it also contains dissolved organic carbon (DOC, also referred to as dissolved organic matter, DOM). The soil solution is part of the hydrological cycle, which involves input from, among others, rain and irrigation, and output by water uptake by plants, evaporation, and drainage to ground- and surface water. The soil solution acts as a carrier for the transport of chemicals in soil, both to plant roots, soil microorganisms and soil animals, and to ground- and surface water.

The soil solid phase consists of mineral and organic soil particles. Based on their size, the mineral particles are divided into sand (63-2000 µm), silt (2-63 µm) and clay (<2 µm). With increasing particle size, the specific surface area decreases, pore size increases and the water retention capacity decreases. The sand fraction mainly consists of quartz (SiO2) and does not have any sorption properties, because the quartz crystals are electrically neutral. Sandy soils have large pores and therefore a low capacity to retain water. In soils with a high silt fraction, smaller pores are better represented, giving these soils a higher water retention capacity. The silt fraction also has no adsorptive properties. Clays are aluminium silicates, lattices composed of SiO4 tetrahedra and Al(OH)6 octahedra. During the formation of clay particles, isomorphic substitution occurred, a process in which Si4+ was replaced by Al3+, and Al3+ by Mg2+. Although these elements have similar diameters, they have different valences. As a consequence, clay particles carry a negative charge, causing positive ions to accumulate on their surface. These include ions important for plant growth, like NH4+, K+, Na+ and Mg2+, but also cationic metals.
Many other minerals have pH-dependent charges (either positive or negative), which are also important in binding cations and anions.

In addition to mineral particles, soils also contain organic matter, which includes all dead plant and animal remains and their degradation products. Living biota is not included in the soil organic matter fraction. Organic matter is often divided into: (1) humin, non-dissolved organic matter associated with clay and silt particles; (2) humic acids, having a high degree of polymerization; and (3) fulvic acids, containing more phenolic and carboxylic acid groups. Humic and fulvic acids are water soluble, but their solubility depends on pH. For example, humic acids are soluble at alkaline pH but not at acidic pH. The dissociation of the phenolic and carboxylic groups also gives the organic matter a negative charge, the density of which increases with increasing soil pH. Soil organic matter acts as a reservoir of nitrogen and other elements, provides adsorption sites for cations and organic chemicals, and supports the building of soil aggregates and the development of soil structure.

The binding of cations to the negatively charged sites on the soil particles is an exchange process. The degree of cation accumulation near soil particles depends on the charge density of the particles, on the affinity of the cations for the charged surfaces (which is higher for bivalent than for monovalent cations), on the concentration of ions in solution (the higher the concentration of a cation in solution, the higher its attraction to the soil particles), etc. Due to their binding to charged soil particles, cations are less available for leaching and for uptake by organisms. The Cation Exchange Capacity (CEC) is commonly used as a measure of the number of sites available for the sorption of cations, and is usually expressed as cmolc/kg dry soil. Soils with a higher CEC have a higher capacity to bind cations, so cationic metals show a lower (bio)availability in high-CEC soils (see the section on metal speciation). The CEC depends on the content and type of clay minerals (montmorillonite has a higher CEC than e.g. kaolinite), on the organic matter content, and on the pH of the soil. In addition to clay and organic matter, aluminium and iron oxides and hydroxides may also contribute to the binding of cations in soil.

The transport of water through soil pores is controlled by gravity and by suction gradients, which are the result of water retention by capillary and osmotic processes. Capillary binding of water is stronger in smaller soil pores, which explains why clayey soils have higher water retention capacities than sandy soils. The osmotic binding of water increases with increasing ionic strength, and is especially strong close to charged soil particles like clay and organic matter, where ions tend to accumulate.

The more strongly water is retained by the soil, the lower its availability for plants and other organisms. The strength with which water is retained depends on the moisture content, because (1) at decreasing moisture content the ionic strength of the soil solution, and therefore the osmotic binding, increases, and (2) when the soil moisture content decreases, the larger soil pores are emptied first, leading to increasing capillary retention of the remaining water in smaller pores. Water retention curves describe the strength with which water is retained as a function of the total water content and the composition of the soil.
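Water retention is commonly expressed on the pF scale: the 10-based logarithm of the suction expressed in cm of water column, where 1 cm of water column corresponds to roughly 1 hPa. A minimal sketch of this conversion (that approximate equivalence is the only assumption) approximately reproduces the field-capacity and wilting-point values quoted below:

```python
# pF = log10(suction in cm water column); 1 cm water column ~ 1 hPa.
def pf_to_hpa(pf):
    """Approximate suction in hPa for a given pF value."""
    return 10.0 ** pf

for pf, label in [(2.2, "field capacity (lower bound)"),
                  (2.5, "field capacity (upper bound)"),
                  (4.2, "wilting point")]:
    print(f"pF {pf}: ~{pf_to_hpa(pf):,.0f} hPa -> {label}")
```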
Water retention (pF) curves differ characteristically between soil types such as sand, silt and clay. A pF value of 2.2-2.5 corresponds to a binding strength of 200 to 300 hPa. This is called field capacity; the water is readily available for plant uptake. At pF 4.2 (15,000 hPa), water is strongly bound in the soil and no longer available for plant uptake; this is called the wilting point. For soil organisms, it is not the total water content of a soil that matters, but rather the content of available water. Water retention curves may therefore be important to describe the availability of water in soil. Toxicity tests with soil organisms are typically performed at 40-60% of the water holding capacity (WHC) of the soil, which corresponds to field capacity.

References:

Schulten, H.-R., Schnitzer, M. Chemical model structure for soil organic matter and soils. Soil Science 162, 115-130.

Blume, H.-P., Brümmer, G.W., Fleige, H., Horn, R., Kandeler, E., Kögel-Knabner, I., Kretzschmar, R., Stahr, K., Wilke, B.-M. Scheffer/Schachtschabel Soil Science, Springer, ISBN 978-3-642-30941-0.

Questions:

What are the main components of the solid phase of a soil?

When considering two different soils, a sandy and a clayey soil, which one has the highest available volume of water when both soils have the same total water content? Explain your answer.

What is the agricultural and environmental importance of the cation exchange capacity of soils?

Explain the role of clay minerals and organic matter in the retention of cationic metals in soils.

Which two factors explain the fact that the availability of cationic metals increases with decreasing soil pH?

(Draft)

Author: Thilo Behrends

Reviewers: Steven Droge, John Parsons

Learning objectives:

You should be able to:

Keywords: Aquifer, Nernst equation, electron transfer, redox potential, half reactions

Some definitions conceive of all water beneath the Earth's surface as groundwater, while others restrict the definition to water in the saturated zone. In the saturated zone the pores are completely filled with water, in contrast to the unsaturated zone, in which some pores are filled with gas and capillary action is important for moving water. Geological formations that host groundwater in the saturated zone can be classified as 'aquifer', 'aquitard' or 'aquifuge', depending on their permeability. In contrast to aquitards and aquifuges, which have a low permeability, an aquifer permits water to move at significant rates under ordinary field conditions. Aquifers typically have a high porosity, and the pores are well connected with each other. Examples of aquifers include sedimentary layers of sand or gravel, carbonate rocks, sandstones, volcanic rocks and fractured igneous rocks. The redox chemistry discussed in this chapter focuses on aquifers in sedimentary formations.

Groundwater is an important source of drinking water, and its quality is therefore of high importance for protecting human health. However, aquifers also represent a habitat for bacteria and aquatic invertebrates and are therefore also a subject of ecotoxicological studies. Furthermore, groundwater can act as a transport pathway connecting different environmental compartments, e.g. soils with rivers or oceans. Groundwater thus plays a role in the distribution of contaminants in the environment.

The movement of a chemical in groundwater is controlled by three processes: advection, dispersion and reaction. Advection is the transport of a chemical in dissolved form together with the groundwater flow.
When a chemical is released from a point source into groundwater with a constant flow direction, a plume forms downstream of the source. The spreading of the chemical is due to dispersion, for which there are two reasons: first, molecular diffusion causes transport of the chemical independently of advection; second, differences in groundwater velocities at different scales cause mixing of the groundwater (mechanical dispersion), both in the direction of groundwater flow and perpendicular to it. Several processes can retard the transport of chemicals or can cause their removal from the system (e.g. degradation). For the mobility of a chemical in groundwater, the distribution between the immobile solid phase and the moving liquid phase is of key importance (see chapter 3.4). There are several processes which can lead to the degradation of a compound in aquifers. Microbial activity can contribute to the degradation of chemicals, but abiotic reactions can also be of importance. For some chemicals, redox reactions are relevant; these are discussed in the following section.

Many elements are redox-sensitive under environmental conditions, meaning that they occur naturally in different redox states. For example, the oxidation or reduction of carbon plays a pivotal role in the energy metabolism of living organisms, and carbon occurs in oxidation states from +IV in CO2 (each of the two oxygen atoms counts as -II, because oxygen is more electronegative than carbon, and the molecule as a whole must balance to zero) to -IV in CH4 (each hydrogen atom counts as +I, because hydrogen is less electronegative than carbon). Potentially toxic elements, such as arsenic, are also found in nature in different oxidation states. Important oxidation states of arsenic are +V (e.g. AsO43-, arsenate), +III (e.g. AsO33-, arsenite) and 0 (elemental arsenic, or arsenic associated with sulfide as in FeAsS, arsenopyrite). Arsenic can also have negative oxidation states when it forms arsenides such as FeAs2 (löllingite). Bioavailability, toxicity and mobility of redox-sensitive elements are usually strongly dependent on their oxidation state. For example, arsenite tends to be more toxic and more mobile than arsenate. For this reason, assessing the redox state of potentially toxic elements is an important part of the environmental risk assessment of groundwater.

Organic contaminants can also undergo redox transformations. At the Earth's surface, where oxygen is present, (photo-)oxidation is an important degradation pathway for organic contaminants. In subsurface environments, where oxygen concentrations are often very low (anoxic conditions), reduction can play an important role in degradation pathways. For example, the reductive dehalogenation of chlorinated hydrocarbons and the reduction of nitroaromatic compounds have been extensively investigated. The reduction of these compounds can be mediated by microorganisms, but it can also occur abiotically on solid surfaces present in the subsurface. In any case, the reduction of organic contaminants is only possible when the reaction is thermodynamically feasible. For this reason it is necessary to know the redox conditions in, for example, an aquifer.

As the name indicates, redox reactions combine the oxidation of one constituent in the system with the reduction of another and hence involve electron transfer.
The oxidation of arsenite with elemental oxygen to arsenate has the following stoichiometry:

\(H_3AsO_3 + \tfrac{1}{2}\, O_2 \rightarrow H_3AsO_4\)

It is important that the stoichiometries of redox reactions are not only charge- and mass-balanced, but also electron-balanced. Here, arsenic releases two electrons when going from oxidation state +III to +V (arsenite becomes oxidized to arsenate), while one oxygen atom takes up the two electrons and goes from oxidation state 0 to -II (elemental oxygen becomes reduced). For this reaction an equilibrium constant can be obtained, and based on the activities (or concentrations) of the dissolved reactants and products it can be evaluated whether the reaction is at equilibrium, or in which direction it is thermodynamically favorable.

When a natural system contains several different redox-active constituents, a large number of possible redox reactions can be formulated and evaluated separately. In this situation it is more convenient to formulate and compare half reactions. For example, the oxidation of arsenite with oxygen can be split up into the half reactions of arsenic and of oxygen. Half reactions are typically formulated as reduction reactions (electrons are on the left hand side of the reaction), for arsenic:

\(H_3AsO_4 + 2\,H^+ + 2\,e^- \rightarrow H_3AsO_3 + H_2O\)

Eh0 is the standard redox potential and represents the electrical potential which would be measured in a standardized electrochemical cell containing, on one side, H3AsO4, H3AsO3 and H+, all with activities of 1 mol l-1, and, on the other side, a solution containing 1 mol l-1 H+ in equilibrium with H2 gas at a pressure of 1 bar.

In natural environments the pH is usually not 0, and the activities of arsenite and arsenate are not 1 mol l-1. The redox potential Eh under such conditions can be calculated using the Nernst equation:

\(E_h = E_h^0 + \frac{RT}{zF} \ln \frac{\{ox\}}{\{red\}}\)

where:

R is the ideal gas constant (8.314 J mol-1 K-1),
T the temperature in K,
z the number of electrons which are transferred in the reaction,
F the Faraday constant (96,485 C mol-1).

In the ratio ox/red, 'ox' represents the activities or pressures of the constituents on the left hand side of the half reaction (the oxidized species, together with any H+), whereby the stoichiometric factor becomes the corresponding exponent, while 'red' represents those on the right hand side of the half reaction. The redox potentials of different half reactions can be compared: it is thermodynamically favorable for the half reaction with the higher potential to proceed from left to right (as a reduction), and for the half reaction with the lower potential to proceed from right to left (as an oxidation).

The redox conditions in an aquifer depend on the inventory of oxidants and reductants inherited during the formation of the geological formation, and on the processes which have occurred throughout its history. Oxidants and reductants can have entered the aquifer by diffusion or with the infiltrating water, and slowly progressing redox reactions may have modified the assemblage of oxidants and reductants. In the absence of (microbial) catalysis, redox reactions often have very slow kinetics. Furthermore, due to photosynthesis, redox reactions are not at equilibrium at the Earth's surface and in the upper part of the underlying subsurface. As a consequence, the redox conditions in an aquifer usually cannot be represented by one unique redox potential. This implies that values obtained for groundwater with electrochemical measurements, e.g. potentiometric measurements using redox electrodes, might not be representative for the redox conditions in the aquifer.
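As a numerical illustration of the Nernst equation for the arsenate/arsenite half reaction above, the sketch below computes Eh at circumneutral pH. The standard potential of +0.56 V and the chosen activities are illustrative assumptions, not values given in this chapter:

```python
import math

R = 8.314        # ideal gas constant, J mol-1 K-1
F = 96485.0      # Faraday constant, C mol-1
T = 298.15       # temperature, K
E0 = 0.56        # V; standard potential assumed here for
                 # H3AsO4 + 2 H+ + 2 e- -> H3AsO3 + H2O
z = 2            # electrons transferred

def eh_arsenic(a_as5, a_as3, pH):
    """Eh = E0 + (RT/zF) ln({ox}/{red}); H+ belongs to the 'ox' term,
    raised to its stoichiometric factor of 2."""
    a_h = 10.0 ** (-pH)
    return E0 + (R * T) / (z * F) * math.log(a_as5 * a_h**2 / a_as3)

# Equal arsenate and arsenite activities at pH 7:
print(f"Eh = {eh_arsenic(1e-6, 1e-6, pH=7.0):.2f} V")  # ~0.15 V, well below E0
```

The strong pH dependence (two protons per two electrons) is why such equilibria are usually displayed in Eh-pH (Pourbaix) diagrams.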
A further complication is that relevant half reactions in the aquifer often involve solids (heterogeneous reactions) with low solubility, implying that the concentrations in solution (for example of Fe3+) are too low to be detected. Hence, evaluating the redox conditions in subsurface environments is often challenging.

Oxygen concentrations in groundwater are often virtually zero, as oxygen in infiltrating rain water, or entering the subsurface by molecular diffusion, is usually consumed before it can reach the aquifer. Hence, 'reducing conditions' typically prevail in aquifers. The redox potential measured in a system may then reflect the dominant electron acceptors other than oxygen that are present in the system.

In sediments or sedimentary rocks, redox reactions after deposition are predominantly driven by the oxidation of organic matter which entered the sediment during its deposition. However, the aquifer might also have received dissolved organic matter via infiltrating water. The oxidation of organic matter is predominantly microbially mediated and is preferentially coupled to the reduction of elemental oxygen (if present). When elemental oxygen is depleted, which is usually the case, other electron acceptors are used by the microorganisms. Relevant electron acceptors in anoxic environments include nitrate, Mn(IV) and Mn(III) oxides, Fe(III) (hydr)oxides, and sulphate.

Nitrate and sulphate can be present in dissolved form, while Mn(IV), Mn(III) and Fe(III) occur as solids with low solubility. The (hydr)oxide solids of these metals, such as goethite (FeOOH) or manganite (MnOOH), are mostly accessible for microbial reduction, while Mn(III) or Fe(III) in silicates can only be partially reduced or are not bioavailable for reduction. When these electron acceptors also run short, methanogenesis can be initiated.

Microorganisms which reduce sulphate, Mn or Fe(III) can use the products of fermentative organisms. These fermentative organisms produce short-chain fatty acids, such as acetate or lactate, but often also release hydrogen gas. Hydrogen concentrations in groundwater thus reflect a steady state of hydrogen production and consumption, and are typically limited by the rates of production. As a consequence, hydrogen concentrations in groundwater are often at the physiological limit of the consuming organism: the concentrations are just sufficient to allow the organism to conserve energy from oxidizing the hydrogen. This limit increases along the sequence of electron acceptors nitrate reduction < Mn reduction < Fe reduction < sulphate reduction < methanogenesis, when the corresponding compounds are present in relevant amounts or concentrations. For this reason, concentrations of dissolved hydrogen can be a useful indicator to identify the dominant anaerobic respiration pathway in an aquifer. For example, one can determine whether sulphate reduction is occurring or methanogenesis has set in. The hydrogen concentrations in the groundwater can also be used directly to assess whether the microbial reduction of metals, metalloids, chlorinated hydrocarbons, nitroaromatic compounds or other organic contaminants is feasible.

The reduction of Fe(III) (hydr)oxides and sulphate leads to the formation of Fe(II) and sulphide, which, in turn, typically results in the precipitation of ferrous solids such as FeCO3 (siderite), FeS (mackinawite) or FeS2 (pyrite). These Fe(II)-containing minerals often play an important role in the abiotic reduction of organic and inorganic contaminants in aquifers.
When the composition of the groundwater and the mineral assemblage is known, the Nernst equation can be used to calculate the redox potentials of the relevant half reactions in the aquifer. These redox potentials can then be used to evaluate whether the reduction of potentially toxic compounds is possible or not. For example, the half reaction for the reduction of an amorphous ferric iron hydroxide coupled to the precipitation of siderite is given by:

\(Fe(OH)_3 + H_2CO_3 + H^+ + e^- \rightarrow FeCO_3 + 3\,H_2O\)

At a given pH and carbonic acid concentration, the corresponding redox potential can be calculated using the Nernst equation. This redox potential can be compared to that obtained from the Nernst equation for the reductive dechlorination of tetrachloroethylene (Cl2C=CCl2) to trichloroethylene:

\(Cl_2C{=}CCl_2 + H^+ + 2\,e^- \rightarrow Cl_2C{=}CHCl + Cl^-\)

With this approach, the feasibility of redox reactions involving potentially toxic organic and inorganic compounds in aquifers can be evaluated. That does, however, not imply that the corresponding reactions also occur within the relevant time scale. For this, the kinetics of the reactions have to be known; these have been studied for many reactions of potential relevance in aquifer systems, but the kinetics of redox reactions are not the subject of this section.

References:

Sparks, D. Environmental Soil Chemistry, Second Edition, Academic Press, Chapters 5 and 8, ISBN 978-0126564464.

Essington, M.E. Soil and Water Chemistry: An Integrative Approach, Chapters 7 and 9, CRC Press, ISBN 978-0849312588.

Questions:

Which processes control the movement of chemicals in groundwater? Define each of these processes.

Redox reactions of contaminants in groundwater are controlled by the redox potential. What are the most important chemicals and processes determining the redox potential?

How can the redox potential in an aquifer be determined? What are the disadvantages of these methods?

(Draft)

Author: Steven Droge

Reviewers: Nico van der Brink, John Parsons

Learning objectives:

You should be able to:

Keywords: cellular composition, body composition, exposure routes, absorption, distribution

Just like soil, water and air, the organic tissue of living organisms can be regarded as a compartment of the ecosystem where chemical pollutants can accumulate or be broken down. The internal concentration in living organisms provides important information on chemical exposure and ultimately determines the environmental risk of pollution, but it is important to understand the key features of tissue that influence chemical partitioning into organisms. Chemical accumulation in the tissue of living organisms is a series of chemical and biological processes, briefly based on:

- chemical uptake (mostly permeation from bulk media over certain membranes into cells);
- internal distribution (e.g. via blood flows through organs);
- metabolism (e.g. biotransformation processes in, for instance, the liver);
- excretion (e.g. through urine and feces, but also via gills, sweat, milk or hairs).

These four processes are the basis of toxicokinetic modeling, and are often summarized as Absorption, Distribution, Metabolism, and Excretion, or "ADME". These ADME processes can vary strongly between polluting compounds due to the properties of the chemical structure. They can also vary strongly between organisms, because of:
- the physiological characteristics (e.g. having gills, lungs or roots, the availability of specific chemical uptake mechanisms, the presence of specific metabolic enzymes, and size-related properties like metabolic rate);
- the position in the polluted environment (flying birds versus midge larvae living in sediment);
- the interaction with the polluted environment (living in soil or water, food choice, etc.);
- the behaviour in the polluted environment (being sessile or able to move (temporarily) away from a polluted spot).

More details of these toxicokinetic processes are presented in section 4.1 on Toxicokinetics and bioaccumulation. The current module aims to summarize the key features of the different tissue components that explain the internal distribution of chemicals (distribution), the different types of contact between pollutants and organisms (exposure and absorption), and the temporal changes in physiology that may affect internal exposure (e.g. excretion, which includes examples such as the release of POPs via lactation, and increasing POP concentrations during starvation). Before we discuss how chemicals are taken up into biota, it is important to first define the key chemical properties and the molecular composition of tissue that influence the way chemicals are absorbed from the surrounding environment and distributed throughout an organism.

All organisms are composed of cells, which consist of a cell membrane surrounding a largely watery solution filled with inner organelle membranes, protein structures, and DNA/RNA. Prokaryote organisms such as bacteria, but also algae, fungi and plants, have membranes reinforced with cell walls, which prevent the cell from bursting under high osmotic pressure and protect the cell membrane. Metabolic energy is stored in large molecules such as fatty esters and sugars. Remarkably, across all living species these tissue components are mostly structures made out of relatively simple and repetitive molecular building blocks, with minor variations in side chains. The composition of organs, as collections of specific cells, in terms of the percentages of lipids, proteins and carbohydrates, is important for the overall toxicokinetics of chemicals in the whole organism.

Cell walls are mostly made from highly polar polysaccharides. The specific algae group of the diatoms have a cell wall composed of biogenic silica (hydrated silicon dioxide), typically as two valves that overlap each other and surround the unicellular organism. Diatoms generate about 20 percent of the oxygen annually produced on the planet, and contribute nearly half of the organic material found in the oceans. With their specific cell wall structure, diatoms take in over 6.7 billion metric tons of silicon each year from the waters in which they live, which creates huge deposits when they die off.

Cell membranes are made up mostly of a phospholipid bilayer, with each phospholipid molecule basically consisting of a polar, ionized headgroup connected to two long alkyl chains (e.g. the POPC type of phospholipid). The outer sides of a phospholipid bilayer are hydrophilic (water-loving), the inside is hydrophobic (water-fearing). Ions (inorganic salts, nutrients, metals, strong acids and ionized biomolecules) do not readily permeate through such a membrane passively, and require specific transport proteins that transport and regulate ions into and out of the cell interior. Cholesterol molecules stabilize the fluidity of the membrane bilayers in the cells of most organisms, but, for example, not in most Gram-negative bacteria.
Dissolved neutral chemicals may passively diffuse through phospholipid bilayers into and out of cells.

Proteins are chains of a variety of amino acids, 21 of which are known to be genetically coded, and of which humans can produce only 12. The other nine must be consumed and are therefore called essential amino acids (coded H, I, L, K, M, F, T, W, V). Proteins form complex three-dimensional structures that allow enzymatic reactions to occur effectively and repeatedly. Two amino acids have side chains that carry a positive charge at neutral pH, arginine (pKa 12) and lysine (pKa 10.6), and two have side chains that carry a negative charge at neutral pH, aspartic acid (pKa 3.7) and glutamic acid (pKa 4.1). Some amino acids carry typically hydrophobic side chains, amongst others leucine and phenylalanine. Cysteine has a thiol (SH) moiety that can form strong connective disulfide bonds with spatially nearby cysteine side groups in the 3D structure. The key blood transport protein albumin, for example, contains about 98 anionic amino acids, 83 cationic amino acids, and about 35 cysteine residues.

DNA and other genetically encoding chains are composed of 4 different nucleotides that form a double helix of two opposing strands, held together by hydrogen bonds connecting the complementary bases: A and T (or A and U in RNA), and G and C. DNA can be densely packed around histone proteins, and is either part of the cellular cytoplasm (in prokaryotic species) or separated within a membrane (in eukaryotic species). DNA is not a critical accumulation phase for chemicals, but it is of course a cellular structure where pollutants can strongly impact all kinds of cellular processes when they react with DNA components or otherwise affect the structural organisation.

Storage fat provides an important energy reserve for many animals and for the fruits of plants, but it also insulates warm-blooded animals in cold climates, lubricates joints so that they move smoothly, and protects organs (e.g. eyes and kidneys) from shocks. Seeds and nuts may contain up to 65% fatty components (walnuts), which of course provides energy for initial growth, but from which oil can also be pressed. Storage fat in most animals is present in the form of triglycerides, which form neutral and very hydrophobic phases within the tissue. Polyunsaturated fatty acid esters like omega-6 and omega-3 fatty acids are abundant in fish (eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA)) and in seeds and plants (mostly alpha-linolenic acid (ALA), although algae also contain EPA and DHA). The high intake of algae by fish in algae-based aquatic food webs results in the high EPA/DHA levels of many fish species, as they mostly cannot synthesize these themselves. Humans can make some EPA and DHA from ALA.

The average composition of living organisms in terms of the key tissue components lipid, protein and carbohydrate can range widely, as illustrated in Table 1.

Organism | Lipid (% of d.w.) | Protein (% of d.w.) | Carbohydrate (% of d.w.)
Grass | 0.5-4 | 15-25 | 60-84
Phytoplankton | 20 | 50 | 30
Zooplankton | 15-35 | 60-70 | 10
Oyster | 12 | 55 | 33
Midge larvae | 10 | 70 | 20
Army cutworms (moth larvae) | 72% of body | - | -
Pike filet | 3.7 | 96.3 | 0
Lake trout | 14.4 | 85 | 0
Eel (farmed for 1.5 y) | 65 | ~34 | 1
Deer game meat | 10 | 90 | 0

Table 2. Estimates of the tissue composition of a woman (BW = 60 kg, H = 163 cm, BMI = 22.6 kg/m2) (taken from Goss et al., 2018).
Bones are not included.

Organ | Total organ volume (mL) | Moisture content | Phospholipid (% of d.w.) | Storage lipid (% of d.w.) | Protein (% of d.w.)
Adipose | 22076 | 26.5% | 0.3% | 93.6% | 6.1%
Brain | 1311 | 80.8% | 35.4% | 22.6% | 42.1%
Gut | 1223 | 81.8% | 9.9% | 22.0% | 68.1%
Heart | 343 | 77.3% | 19.1% | 17.7% | 63.2%
Kidneys | 427 | 82.4% | 16.5% | 5.9% | 77.7%
Liver | 1843 | 79.4% | 19.4% | 8.0% | 72.6%
Lung | 1034 | 94.5% | 13.1% | 13.7% | 73.2%
Muscle | 19114 | 83.7% | 2.6% | 2.5% | 95.0%
Skin | 3516 | 71.1% | 2.6% | 22.9% | 74.5%
Spleen | 231 | 83.5% | 5.6% | 2.4% | 92.0%
Gonads | 12 | 83.3% | 18.8% | 0.0% | 81.3%
Blood | 4800 | 83.0% | 2.5% | 2.4% | 95.1%
Total | 55929 | 60.2% | 1.7% | 71.0% | 27.3%

Different organs in a single species can also differ widely in their composition, as well as in their contribution to the overall body, as shown for a human in Table 2. Most organs have a moisture content >75%, but overall the moisture content is considerably lower, due to the low moisture content of bones and adipose tissue. Adipose tissue is by far the largest repository of lipids, and is made up mostly of storage lipid, while the brain is also rich in lipids, particularly the phospholipids of cell membranes. Muscle and blood have a relatively high protein content.

The influence of chemical structure on the accumulation of chemicals in the biotic compartment depends largely on their bioavailability, as discussed in more detail in section 4.1 on Toxicokinetics and bioaccumulation, as well as on the basic binding properties that result from the chemical's hydrophobicity and volatility (section 3.4 on Partitioning and partitioning constants) and ionization state (section 2.2.6 on Ionogenic organic chemicals). In brief, the more non-polar the composition of a chemical, the more hydrophobic it is, and the higher its affinity to partition from dissolved phases (both external and internal) into poorly hydrated tissue phases such as storage fat and cell membranes. For this reason, the main issue with classical organic pollutants such as dioxins, DDT and PCBs is often their high hydrophobicity, which results in strong accumulation in tissue. Such chemicals often take a very long time to be excreted from the tissue if they are not made less hydrophobic via biotransformation processes. This leads to food web accumulation and to specific acute or chronic toxic effects at a certain organism level (section 4.1.6 on Food chain transfer). Proteins and sugary carbohydrates are mostly composed of extended series of polar units and are thus strongly hydrated; they bind hydrophobic chemicals to a much lower extent. Proteins may have three-dimensional pockets that can fit either hydrophilic or hydrophobic chemicals and, based on their specific binding affinity, act as transport proteins in blood throughout the body (transporting fatty acids, for example). Many protein-based receptors also rely on a specific binding affinity, and in many cases this involves (combinations of) polar and electrostatic interactions that also have an optimum three-dimensional fitting space. Volatile chemicals are more abundantly present in the gas phase than dissolved in water, and readily come into contact with biota via gas exchange on the extensive surfaces of the lungs of animals and the leaves of plants.

In order to be taken up into cells, or into organs, chemicals have to permeate through membranes. For most organic pollutants, passive diffusion through phospholipid bilayers has an optimum at a certain hydrophobicity; the high accumulation in the membrane ensures desorption into the adjacent cellular solution.
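The ionization state mentioned above determines which fraction of an acid or base is present as the neutral, membrane-permeating species. Below is a minimal sketch using the Henderson-Hasselbalch relation; the pKa values are illustrative examples, not chemicals discussed in this module:

```python
def neutral_fraction(pKa, pH, acid=True):
    """Neutral fraction from the Henderson-Hasselbalch relation:
    acids ionize above their pKa, bases below it."""
    exponent = (pH - pKa) if acid else (pKa - pH)
    return 1.0 / (1.0 + 10.0 ** exponent)

# Illustrative chemicals at physiological pH 7.4:
print(f"weak acid, pKa 4.8: {neutral_fraction(4.8, 7.4):.4f} neutral")
print(f"weak base, pKa 9.4: {neutral_fraction(9.4, 7.4, acid=False):.4f} neutral")
```

Even a neutral fraction below one percent, as in these examples, can dominate passive uptake, which is the point made in the following paragraph.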
It is assumed that ionized chemicals have passive permeation rates that are either negligible or at least orders of magnitude lower than those of the corresponding neutral chemicals. For this reason, all kinds of molecular intra-extracellular gradients can be readily maintained, for example for protons (H+) or sodium-potassium (Na+/K+). The movement of very polar and ionic chemicals can be tightly regulated by transport proteins spanning the membrane bilayer. Specific molecules can be actively excreted from cells (e.g. certain drugs) or reabsorbed (e.g. in the kidneys, back into the blood stream). This again is based on three-dimensional fitting in the transport pocket and stepwise movement through the protein structure, and it costs energy. For acids and bases with a very small fraction of neutral species at physiological pH, passive permeation over membranes may still be dominated by the neutral species.

There are multiple routes by which chemicals can enter the tissue of biota, for example via respiratory organs, through digestion of contaminated food, or via dermal contact. Most animals need to take in a more or less constant flux of oxygen and water, and periodically food, to release nutrients and energy from the food. Of course, they also need to release CO2 (and other chemicals) as waste. Pollutant chemicals are taken up alongside these basic processes, and it depends on the chemical properties and on the efficiency of the uptake route how much the organism will take in from these different exposure routes.

Plants need plenty of water and, for daytime photosynthesis, CO2, but they also require oxygen during the night. High algal densities can deplete the oxygen levels in shallow aquatic systems during the night, and replenish the oxygen levels during daytime. Oxygen is plentiful in air (200,000 parts per million), but it is considerably less accessible in water (15 parts per million in cool, flowing water), and often depleted below the first few mm of sediment. To obtain sufficient oxygen, water and food, aquatic organisms have to pass large volumes of water through their gills. Sediment-dwelling organisms either have hemoglobin to bind oxygen, or constantly pump fresh overlying water through burrows created in the sediment, often lined with mucus. Living organisms are thus constantly in contact with pollutants dissolved in water, and air-breathing organisms are readily exposed to air pollutants. To simplify the domain of living organisms as part of this module about the biotic compartment, and how they come into contact with chemicals relevant to environmental toxicology, they can for example be divided into:

Nearly all plants have roots below ground, a sturdy structure of stem and branches above ground, and leaves. Along with soil pore water, soluble chemicals are readily transported from the roots into the plant's internal circulation stream through the xylem cells, which are lined with water-impenetrable lignin. Moderately hydrophobic chemicals (Kow of 1-1000) are rapidly transported from roots to shoots to leaves, while very hydrophobic chemicals may be strongly retained on the membranes and cell walls and mostly accumulate in the root sections, limiting transport to above-ground plant tissues. Roots may also actively release considerable quantities of chemicals to influence the medium immediately surrounding the roots (the rhizosphere), e.g. to stimulate microbial processes or adjust the pH in order to release nutrients. These plant root 'exudates' can be ions, small acids, amino acids, sterols, etc.
Chemicals that enter plants via leaves, such as pesticides or semi-volatile organic pollutants, can be redistributed to other plant parts via the phloem streams. The transport through xylem up to higher plant tissues occurs via capillary forces and is enhanced by three passive phenomena: root pressure, capillary action, and the transpiration pull created by water evaporating from the leaves. As a result of the capillary forces needed to pull water up against gravity, and of a certain maximum diameter of the vessels needed to do so, there is a maximum possible plant height of 122-130 m (Koch et al., 2004), which compares well to Redwood trees (Sequoias) reaching a maximum height of 113 m.

Most leaves are covered with a waxy layer to prevent damage and water evaporation. This wax layer may be 0.3-4.6 µm thick (Moeckel et al., 2008). Large forests provide enormous hydrophobic surfaces onto which semi-volatile organic chemicals (SVOCs) can bind out of air, which influences the global distribution of chemicals such as PCBs. Partitioning of SVOCs onto the vegetation of extended grasslands contaminates the base of the food chain, as well as the agricultural and cattle sectors used by humans. The grass/corn-cattle-milk/beef food chain accounts for the largest portion of background exposure of the European and North American population to many persistent SVOCs. The absorption rate of chemicals on the leaves often also depends on the air boundary layer surrounding the leaves, which limits diffusion into the leaf surfaces. Of course, all kinds of other factors such as wind speed, canopy formation and cuticle thickness also control exchange between leaves and the gas phase (see also section 3.1.2 on the Atmosphere). Tiny openings or pores on the lower side of the leaves, called stomata, furthermore allow for gas exchange. In warm conditions, stomata can close to prevent water evaporation, but in many plant types gas exchange is needed to allow CO2 to be metabolized and the O2 that is produced to be released. The waxy layer on leaves can trap gaseous organic chemicals. Many plants, like coniferous trees, produce resins to provide effective defense against insects and diseases, and these resins release large amounts of structurally highly diverse organic volatiles such as terpenes and isoprenes (Michelozzi, 1999). These plant-produced volatiles can even contribute to ozone formation. Plants thus accumulate chemicals from their environment, but also release chemicals into the environment. It thus also matters for the exposure of grazing organisms to certain types of pollutants whether they eat roots, shoots, leaves, seeds or fruits of plants living in contaminated environments.

The 'gill' movements of water breathers create a constant flux of chemicals dissolved in bulk water along the outer cell membranes (or mucus layers surrounding cell membranes) of the gills. A 1 kg rainbow trout ventilates about 160 mL/min, so about 230 L/day (Consoer et al., 2014). The total gill surface area in a fish depends on species behaviour and weight (active large fish require a lot of oxygen), and amounts to about 1-6 cm2 per g fish (Palzenberger & Pohla, 1992). For a 1 kg fish of ~20 cm length, the ~1000 cm2 gill area compares to a ~500 cm2 outer body surface. This results in an effective partitioning of chemicals between water and cell membranes. Within the gills, the cells are in close contact with the blood system of the organism, and the build-up of chemical concentrations in the outer cells provides an effective exchange with the blood stream (or other internal fluids flushing along) that redistributes chemicals to the inner organs.
The reverse equally occurs: chemicals dissolved in the blood stream coming from the organs will also rapidly exchange with bulk external water if the external concentrations are lower. Of course, many pollutants can also enter water-breathing organisms via food, but the gill-water exchange is very effective in controlling the distribution of chemicals. The salinity of the water plays a strong role in the need of water-breathing organisms to "drink" water, and hence to take in contaminants via this route.

BOX 1. Osmoregulation (MSc level)
Most aquatic vertebrate animals are osmoregulators: their cells contain a concentration of solutes that is different from the water around them. Fish living in freshwater typically have a cellular osmotic level (300 milliOsmoles per Liter, mOsm/L) that is higher than that of the bulk fresh water (~20-40 mOsm/L), so a lot of water flows passively via the gills (not via the skin) into the tissue of the fish. They are thus constantly taking in water (water molecules only) via the gills, which needs to be controlled, e.g. by strongly diluting the urine. They do also take in some water in their gastro-intestinal tract (GIT). Marine fish have similar cellular osmotic levels as freshwater fish, but the salty water (1000 mOsm/L) causes water to move out of the gill tissue through the linings of the fish's gills by osmosis, which needs to be replenished by active intake of salty water, and separate excretion of the salts. Most invertebrate organisms in oceans have an internal overall concentration of dissolved compounds comparable to the water they live in, so that they don't suffer from strong osmotic pressures on their soft tissue (osmoconformers).

A single adult oyster can cleanse about 200 liters of water per day. Plans to re-populate the harbour of New York with 1 billion oysters on artificial substrates can therefore have enormous impacts on chemical redistribution. A single 2 cm zebra mussel (Dreissena polymorpha) inhabiting the shallow Lake IJsselmeer (6.05x1012 L) in the Netherlands can filter about 1 L per day, and the high densities of these and related species in this fresh water lake can turn over the lake volume once or twice per month (Reeders et al., 1989).

Many soil organisms, too, are constantly in contact with wet soil surfaces, and contact between soil pore water and the outer surfaces (gills/soft skin areas) dominates the routes of chemical exchange for many chemicals. Earthworms, for example, do not have lungs and exchange oxygen through their skin. Earthworms eat bacteria and fungi that grow on dead and decomposing organic matter, and thus act as major decomposers of organic matter and recyclers of nutrients. Earthworms dramatically alter soil structure, water movement, nutrient dynamics, and plant growth. It is estimated that earthworms turn over the top 15 cm of soil in ten to twenty years, and so they are also able to mix surface-bound pollution into a substantial soil layer. In terms of biomass, earthworms dominate the world of soil invertebrates, including arthropods. In order to better understand how much contamination earthworms take in via food or via their skin, several studies have used earthworms in exposure tests with part of the organisms having their mouth parts sealed with surgical glue (Vijver et al., 2003). Uptake rates of the metals Cd, Cu and Pb in sealed and unsealed earthworms exposed to two contaminated field soils were similar (Vijver et al., 2003), indicating that uptake occurs mainly through the skin of the worms.
The uptake rates as well as the maximum accumulation levels of several organic contaminants from artificially contaminated soil were also comparable between sealed and non-sealed worms (Jager et al., 2003). The dermal route is thus a highly important uptake route for organic chemicals too. Dermal uptake by soil organisms generally occurs from the pool of chemicals in the soil pore water; hence the distribution of chemicals between the solid particles and organic materials in the soil and the soil pore water is extremely important in driving the dermal uptake of chemicals by earthworms (see section 3.4 on Partitioning and partitioning constants, section 3.5 on Metal speciation and section 3.6 on Availability and bioavailability).

Air-breathing organisms typically take in less volatile and non-volatile chemicals via food, and require active excretion and metabolism for the elimination of these chemicals via e.g. feces and urine. Dermal uptake is generally assumed to be negligible, while the intake of chemicals via air particles can be relatively high, for example of pollutants present in house dust, or of contaminants on aerosols. Excretion via air particles is clearly not a dominant route. The chemicals in the food matrix inside the gastro-intestinal tract often first need to be fully dissolved in gut fluids before they can pass the mucus layers and membranes lining the gastrointestinal tract and enter the blood stream for redistribution. However, 'endocytosis' may also result in the uptake of (small) particulate chemicals into cells: the particle is first completely surrounded by the membrane, after which the encapsulating membrane buds off inside the cell and forms a vesicle. Notwithstanding this endocytosis, chemical fractions of pollutants strongly sorbed to non-digestible parts may not always automatically be taken up from food. Grazing animals typically require microbial conversion in their gut to digest plant material like cellulose and lignin into chemical components that can be taken up as an energy source.

Many aquatic food webs are structured such that they begin with aquatic plants being eaten by water-breathing organisms, with air-breathing marine animals or birds at the top of the food chain. These air-breathing top predators take in pollutants largely through their diet, but lack the effective blood-membrane-water exchange through gills. The blood-membrane-air partitioning in lungs is far less effective in removing chemicals via passive partitioning. For this reason, many top predators of food webs have the highest concentrations of pollutants. The chemical distribution in food webs will be discussed in more detail in section 4.1.6.

Palzenberger & Pohla 1992, Reviews in Fish Biology and Fisheries 2, 187-216.
Jager, T., Fleuren, R.H.L.J., Hogendoorn, E.A., de Korte, G. 2003. Elucidating the routes of exposure for organic chemicals in the earthworm, Eisenia andrei (Oligochaeta). Environ. Sci. Technol. 37, 3399-3404.
Koch et al. 2004, Nature, 851-854.
Moeckel et al. 2008, Environ. Sci. Technol. 42, 100-105.
Michelozzi 1999, Defensive roles of terpenoid mixtures in conifers, Acta Botanica Gallica 146, 73-84.
Consoer et al. 2014, Aquatic Toxicology 156, 65-73.
Reeders et al. 1989, Freshwater Biology 22, 133-141.
Vijver et al. 2003, Soil Biology and Biochemistry 35, 125-132.
Goss et al. 2018, Chemosphere 199, 174-181.
Explain whether a lean fish of 1 kg likely contains a lower concentration of a hydrophobic pollutant than a 1 kg fatty fish caught in the same lake.
Explain how bivalves (clams, mussels, oysters) can reduce the contamination of a lake with a hydrophobic chemical.
In many countries, the levels of water pollution have been strongly reduced compared to 30 years ago, but many sediments are still contaminated with historic pollutants that strongly bind to the sediment particles. Explain how historic contamination in a sediment can be brought into suspension by benthic organisms.
Discuss whether the internal concentrations of very hydrophobic chemicals in fatty fish get higher or lower during an extended period of starvation, e.g. during migration towards breeding grounds. Note that very hydrophobic chemicals may have elimination half-lives of months or even longer.
Discuss whether the internal concentrations of very hydrophobic chemicals in female organisms get higher or lower during a reproduction cycle, e.g. by generating offspring and/or lactation. Note that very hydrophobic chemicals may have elimination half-lives of months or even longer.

This page titled 3.1: Environmental compartments is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by John Parsons & Steven Droge via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.3: Pathways and processes determining chemical fate
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/03%3A_Environmental_Chemistry_-_From_Fate_to_Exposure/3.03%3A_Pathways_and_processes_determining_chemical_fate
Authors: Dik van de Meent, Michael Matthies
Reviewer: John Parsons
Learning objectives: You should be able to
Keywords: fate processes, degradation, transport, partitioning

Chemicals can escape during all steps of their life cycle, e.g. manufacturing, processing, use, or disposal. Release of chemicals into the environment necessarily leads to exposure of ecosystems, populations, and organisms including man. Exposure assessment science seeks to analyze, characterize, understand and (quantitatively) describe the pathways and processes that link releases to exposure. Chemicals in the environment undergo various transport, transfer and degradation processes, which can be described and quantified in terms of loss rates, i.e. the rates at which chemicals are lost from the environmental compartment into which they are emitted, or transferred from adjacent compartments. Exposure assessment science aims to capture the 'environmental fate' of chemicals in process descriptions that can be used in mass balance modeling, using mathematical expressions borrowed from thermodynamic laws and chemical reaction kinetics (Trapp and Matthies, 1998).

The 'fate' of a chemical in the environment can be viewed as the net result of a suite of transport, transfer and degradation processes (see Section 3.4 on partitioning and partitioning constants, Section 3.6 on availability and bioavailability, Section 3.7 on degradation) that start to act on the chemical directly after its emission (see Section 3.2 on sources of emission) and during the subsequent environmental distribution. Environmental fate modeling (see Section 3.8 on multimedia mass balance modelling) builds on this knowledge by implementing the various degradation, transfer and transport processes derived in exposure assessment science in mathematical models that simulate the 'fate of chemicals in the environment'.

In chemical reaction kinetics, the amount of chemical in a 'system' (for instance, a volume of surface water) is described by mass balance equations of the kind:

\(\frac{dm}{dt} = i - k \cdot m\)   (eq. 1)

where \(dm/dt\) is the rate of change (kg⋅s-1) of the mass m (kg) of chemical in the system over time t (s), i is the input rate (kg⋅s-1) and k (s-1) is the reaction rate constant. Mathematically, this equation is a first-order differential equation in m, meaning that the loss rate of mass from the system is proportional to the first power of m. Equation 1 is widely applied in the description and characterization of environmental fate processes: environmental fate processes generally obey first-order kinetics, and can generally be characterized by a first-order reaction rate constant k1st:

\(\frac{dm}{dt} = -k_{1st} \cdot m\)   (eq. 2)

Such loss rate equations can also be formulated in integral format, obtained by integration of equation 2 over time t with initial mass m0:

\(m(t) = m_0 \cdot e^{-k_{1st} \cdot t}\)   (eq. 3)

It follows that the half-life

\(t_{1/2} = \frac{\ln 2}{k_{1st}}\)   (eq. 4)

is constant, i.e. independent of the concentration of the chemical considered. This is the case for all environmental loss processes that obey first-order kinetics. First-order loss processes can therefore be sufficiently characterized by the time required for disappearance of 50% of the amount originally present. The disappearance time DT50 is often used in environmental regulation, but is only identical to the half-life if the loss process is of first order. Note that the tacit assumption of a constant half-life implies that the process considered is assumed to obey first-order kinetics.
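To make the first-order behaviour of equations 1-4 concrete, the short Python sketch below (an addition to the text) simulates first-order loss and verifies that the half-life is independent of the starting mass; the rate constant and initial masses are illustrative values, not data from this module.

```python
import numpy as np

k = 0.05                     # first-order rate constant (per day), illustrative
half_life = np.log(2) / k    # eq. 4: t1/2 = ln(2)/k

def mass(m0, t):
    """Integrated first-order loss (eq. 3): m(t) = m0 * exp(-k*t)."""
    return m0 * np.exp(-k * t)

for m0 in (100.0, 10.0):     # two different initial masses (kg)
    m_half = mass(m0, half_life)
    print(f"m0 = {m0:6.1f} kg -> m(t1/2) = {m_half:6.2f} kg "
          f"({m_half / m0:.0%} remaining)")

# Both runs leave exactly 50% after t1/2 = ln(2)/k ~ 13.9 days,
# illustrating that a first-order half-life does not depend on concentration.
```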
Occurrence of true first-order reaction kinetics in chemistry is rare (see Section 3.7 on degradation). It occurs only when substances degrade spontaneously, without interaction with other chemicals. A good example is radioactive decay of elements, with a reaction rate proportional to the (first power of) the concentration (mass) of the decaying element, as in equation 3.

Most chemical reactions between two substances are of second order:

\(\frac{dm_A}{dt} = -k_{2nd} \cdot m_A \cdot m_B\)   (eq. 5)

or, when a chemical reacts with itself:

\(\frac{dm}{dt} = -k_{2nd} \cdot m^2\)   (eq. 6)

because, as follows directly from these equations, the reaction rate is proportional to the concentrations (masses) of both reactants. As the concentrations (masses) of both reactants decrease as a result of the reaction taking place, the reaction rate decreases during the reaction, more rapidly so at high initial concentrations. When second-order kinetics applies, the half-life is not constant, but increases as the reaction proceeds and concentrations decrease. In principle, this is the case for most chemical reactions, in which the chemical considered is transformed into something else by reaction with a transforming chemical agent.

In the environment, the second reactant (the transforming agent) is usually present in excess, so that its concentration remains nearly unaffected by the ongoing transformation reaction. This is the case for oxidation (reaction with oxygen) and hydrolysis (reaction with water). In these cases, the rate of reaction decreases with the decreasing concentration of the first chemical only:

\(\frac{dm_A}{dt} = -k_{2nd} \cdot m_B \cdot m_A = -k' \cdot m_A\)   (eq. 7)

and the reaction kinetics become practically first-order: so-called pseudo first-order kinetics. Pseudo first-order kinetics of chemical transformation processes are very common in the environment.

Chemical reactions in the biosphere are often catalyzed by enzymes. This type of reaction is saturable, and its kinetics can be described by the Michaelis-Menten kinetic model for single-substrate reactions. At low concentrations, there is no effect of saturation of the enzyme and the reaction can be assumed to follow (pseudo) first-order kinetics. At concentrations high enough to saturate the enzyme, the rate of reaction is independent of the concentrations (masses) of the reactants, thus constant in time during the reaction, and the reaction obeys zero-order kinetics. This is true for catalysis in general, where the reaction rate depends only on the availability of catalyst (usually the reactive surface area):

\(\frac{dm}{dt} = -k_0 = \text{constant}\)   (eq. 8)

One could say that the rate is proportional to the zero-th power of the mass of reactant present. In the case of zero-order kinetics, half-life times are longer for greater initial concentrations of chemical. An example of zero-order reaction kinetics is the transformation of alcohol (ethanol) in the liver. It has been worked out theoretically and experimentally that human livers remove alcohol from the blood at a constant rate, regardless of the amount of alcohol consumed.
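As a numerical illustration of how the reaction order changes the shape of the decay curve, the sketch below (an addition, with arbitrary rate constants) integrates zero-, first- and second-order loss with a simple Euler scheme and prints how long each takes to halve the initial mass.

```python
def time_to_halve(rate, m0=100.0, dt=0.001):
    """Euler integration of dm/dt = -rate(m); returns t at which m = m0/2."""
    m, t = m0, 0.0
    while m > m0 / 2:
        m -= rate(m) * dt
        t += dt
    return t

m0 = 100.0
orders = {
    "zero-order   (dm/dt = -k0)":     lambda m: 5.0,            # k0 = 5 mass/time
    "first-order  (dm/dt = -k1*m)":   lambda m: 0.05 * m,       # k1 = 0.05 /time
    "second-order (dm/dt = -k2*m^2)": lambda m: 0.0005 * m**2,  # k2 = 0.0005 /(mass*time)
}
for label, rate in orders.items():
    print(f"{label}: t1/2 = {time_to_halve(rate, m0):6.2f} (from m0 = {m0})")

# Doubling m0 doubles the zero-order half-life, leaves the first-order
# half-life unchanged, and halves the second-order half-life.
```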
Microbial degradation (often referred to as biodegradation) is a special case of biotic transformation kinetics. Although this is an enzymatically catalysed process, the microbial transformation process can be viewed as the result of the encounter of molecules of the chemical with microbial cells, which should result in apparent second-order kinetics (first order with respect to the number of microbial cells present, and first order with respect to the mass of chemical present):

\(\frac{dm}{dt} = -k_{2nd} \cdot m_{bio} \cdot m = -k_{deg} \cdot m\)   (eq. 9)

where mbio stands for the concentration (mass) of active bacteria present in natural surface water, and kdeg represents a pseudo-first-order degradation rate constant.

Chemicals can be moved from one place to another by wind and water currents. Advection means transport along the current axis, whereas dispersion is the process of turbulent mixing in all directions. Advective processes are driven by external forces such as wind and water velocity, or by gravity, as in rainfall and leaching in soil. In most exposure models these processes are described in a simplified manner, e.g. the dispersive air plume model. An example of a first-order advective loss process is the outflow of a chemical from a lake:

\(\frac{dm}{dt} = -\frac{Q}{V} \cdot m\)   (eq. 10)

where Q stands for the flow rate of lake water [m³/s] and V is the lake volume [m³]. Q/V is known as the renewal rate constant kadv of the transport medium, here water. More sophisticated hydrological, atmospheric, or soil leaching models consider detailed spatial and temporal resolution, which requires much more data and greater mathematical computing effort (see sections 3.1.2 and 3.8.2).

According to Fick's first law, the rate of transfer through an interface between two media (e.g. water and air, or water and sediment) is proportional to the difference in concentration of the chemical between the two media (see section 3.4 on partitioning, and Schwarzenbach et al., 2017 for further reading). As long as the concentration in one medium is higher than in the other, there is a net passage of molecules through the interface. Examples are volatilization of chemicals from water (to air) and gas absorption from air (to water or soil), adsorption from water (to sediments, suspended solids and biota) and desorption from sediments and other solid surfaces.

When two environmental media are in direct contact, (first-order) transfer can take place in two directions, in the case of water and air by volatilization and gas absorption: each at a rate proportional to the concentration of chemical in the medium of origin, and each with a (first-order) rate constant characteristic of the physical properties of the chemical and of the nature of the interface (area, roughness). This is known as physical intermedia partitioning (see section 3.4 on partitioning), usually represented by a chemical reaction formula:

\([M]_{water} \rightleftharpoons [M]_{air}\)   (eq. 11)

where [M] stands for a (mass) concentration (unit mass per unit volume) and the double arrow represents forward and reverse transport. Intermedia partitioning proceeds spontaneously until the two media have come to thermodynamic equilibrium. In the state of equilibrium, forward and backward rates (here: volatilization from water to air and gas absorption from air to water) have become equal. At equilibrium, the total (Gibbs free) energy of the system has reached a minimum: the system has come to rest, so that

\(k_{volatilization} \cdot [M]_{water} = k_{absorption} \cdot [M]_{air}\)   (eq. 12)

and the ratio of the concentrations of the chemical in the two media has reached its (thermodynamic) equilibrium value, called the equilibrium constant or partition coefficient (see section 3.4 on partitioning).

The challenge to environmental chemists is to describe and characterize the various processes of chemical and microbial degradation and transformation, of intra-media transport, and of intermedia transfer rate constants and equilibrium constants, in terms of (i) the physical and chemical properties of the chemicals considered and (ii) the properties of the environmental media.
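As a worked illustration of the renewal rate constant in equation 10 (an added sketch with made-up lake dimensions, not values from the text), the snippet below computes kadv and the corresponding residence time and half-life for a hypothetical lake.

```python
import math

# Hypothetical lake: 2 km x 1 km surface area, 5 m deep, outflow 10 m3/s
V = 2000 * 1000 * 5        # lake volume (m3)
Q = 10.0                   # outflow rate (m3/s)

k_adv = Q / V              # renewal rate constant (per second), eq. 10
residence_time_days = (V / Q) / 86400          # mean residence time V/Q
half_life_days = math.log(2) / k_adv / 86400   # t1/2 of advective loss

print(f"k_adv = {k_adv:.2e} per s")
print(f"mean residence time = {residence_time_days:.1f} days")
print(f"advective half-life = {half_life_days:.1f} days")
# For a conservative (non-degrading) chemical, outflow alone halves the
# dissolved amount in ~8 days in this example lake.
```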
Schwarzenbach, R.P., Gschwend, P.M., Imboden, D.M. Environmental Organic Chemistry, Third Edition. Wiley, ISBN 978-1-118-76723-8.
Trapp, S., Matthies, M. Chemodynamics and Environmental Modeling. An Introduction. Springer, Heidelberg, ISBN 3-540-63096-1.

Name and explain in your own words the essential environmentally relevant property of (pseudo) first-order kinetics.
Give examples of transformation or transport processes that obey zero-order, first-order, second-order and pseudo first-order kinetics.
Why is it useful to formulate environmental fate processes in terms of process rates, rate constants and equilibrium constants?

This page titled 3.3: Pathways and processes determining chemical fate is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.4: Partitioning and Partitioning Constants
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/03%3A_Environmental_Chemistry_-_From_Fate_to_Exposure/3.04%3A_Partitioning_and_Partitioning_Constants
Authors: Joop Hermens, Kees van Gestel
Reviewers: Steven Droge, Monika Nendza
Learning objectives: You should be able to:
Keywords: Hydrophobicity, octanol-water partition coefficients, volatility, Henry's Law constant, ionized chemicals

Different processes affect the fate of a chemical in the environment. In addition to the transfer and exchange between compartments (air-water-sediment/soil-biota), degradation also determines the concentration in each of these compartments. Some of these processes are discussed in other sections (see the sections on Sorption and Environmental degradation of chemicals). Some chemicals will easily evaporate from water to air, while others remain mainly in the aqueous phase, or sorb to sediment and accumulate in biota. These differences are related to only a few basic properties: hydrophobicity, volatility, and (for acids and bases) the degree of ionization.

Hydrophobicity means fear (phobic) of water (hydro). A hydrophobic chemical prefers to "escape from the aqueous phase", or in other words "it does not like to dissolve in water". Water molecules are tightly bound to each other via hydrogen bonds. For a chemical to dissolve in water, a cavity has to be formed in the aqueous phase, and this costs energy. Hydrophobicity mainly depends on two molecular properties: the size (volume) of the molecule, and its capacity to interact with water via polar interactions such as hydrogen bonds. It takes more energy to create the cavity for a chemical of larger size, making the chemical more hydrophobic, while interactions of the chemical with water favour its dissolution, making it less hydrophobic. Hydrophobicity thus increases with molecular size and decreases with the presence of polar groups (e.g. amino or hydroxy groups).

Most hydrophobic chemicals are non-polar organic micropollutants. Well-known examples are the chlorinated hydrocarbons, such as polychlorinated biphenyls (PCBs), and polycyclic aromatic hydrocarbons (PAHs). The water solubility of these chemicals is in general rather low (in the order of a few ng/L up to a few mg/L). The hydrophobic nature mainly determines the distribution of these chemicals between water and sediment or soil, and their uptake across cell membranes. Additional Cl- or Br-atoms in a chemical, as well as additional (CH)x units, increase the molecular size, and thus a chemical's hydrophobicity. The increased molecular volume requires a larger cavity to dissolve the chemical in water, while these groups only interact with water molecules via van der Waals interactions. Polar groups, such as -OH and -NH units on aromatic rings, reduce the hydrophobicity of organic contaminants: even though they increase the molecular volume, they interact via hydrogen bonds (H-bonds) with the surrounding water molecules. Additional polar groups in a chemical therefore typically decrease a chemical's hydrophobicity.

A simple measure of the hydrophobicity of chemicals, originating from pharmacology, is the octanol-water partition coefficient, abbreviated as Kow (and sometimes also called Pow or Poct): this is the ratio of the concentrations of a chemical in n-octanol and in water, after establishment of equilibrium between the two phases. The -OH group in n-octanol does allow for some hydrogen bonding between octanol molecules in solution, and between octanol and dissolved molecules.
However, the relatively long alkyl chain only interacts through van der Waals interactions; the interaction strength between octanol molecules is therefore much smaller than that between water molecules, and it is energetically much less costly to create a cavity in octanol to dissolve a molecule.

Experimentally determined Kow values were used in pharmacological research to predict the uptake and biological activity of pharmaceuticals. Octanol was selected because it appears to closely mimic the nonionic molecular properties of most tissue components, particularly phospholipids in membranes. Since the beginning of the 1970s, Kow values have also been used in environmental toxicology to predict the hazard and environmental fate of organic micropollutants. Octanol may also partially mimic the nonionic molecular properties of most organic matter phases that sorb neutral organic chemicals in the biotic and abiotic environment. Not unexpectedly, water solubility is negatively correlated with octanol-water partition coefficients.

In practice, three methods can be used to determine or estimate the Kow: direct experimental measurement of the octanol-water distribution, estimation from chromatographic retention, and calculation from chemical structure; each is discussed below.

In the shake-flask method (Leo et al., 1971) and the 'slow-stirring' method (de Bruijn et al., 1989), the distribution of a chemical between octanol and water is determined experimentally. For highly lipophilic chemicals (log Kow > 5-6), however, the extremely low water solubility hampers a reliable analytical determination of the concentrations in the water phase. For such chemicals, these experimental methods are not suitable. During the last two decades, the use of generator columns has allowed for quantification of higher Kow values. Generator columns are columns packed with a sorbing material (e.g. Chromosorb®) onto which an appropriate hydrophobic solvent (e.g. octanol) is coated that contains the compound of interest. In this way, a large interface area between the lipophilic and water phases is created, which allows for a rapid establishment of equilibrium. When a large volume of (octanol-saturated) water (typically up to 10 litres) is passed slowly through the column, an equilibrium distribution of the compound is established between the octanol and the water. The water leaving the column is passed over a solid sorbent cartridge to concentrate the compound and allow for quantification of the aqueous concentration. In this way, it is possible to more reliably determine log Kow values up to 6-7.

Kow values may also be derived from the retention time in a chromatographic system (Eadsforth, 1986). The use of reversed-phase High Performance Liquid Chromatography (HPLC), thin-layer chromatography or gas chromatography results in a capacity factor (relative retention time; retention of the compound relative to a non-retained chemical species), which may be used to predict the chemical's distribution over octanol and water. HPLC systems have proven most successful, because they consist of stationary and mobile phases that are liquid. As a consequence, the nature of the phases can be most closely arranged to resemble the octanol-water system. Of course, this requires calibration of the capacity factors by applying the chromatographic method to a number of chemicals with well-known Kow values. Chromatographic methods may reliably be applied for estimation of log Kow values in the range of 2-8.
For even more lipophilic chemicals, these methods too will fail to reliably predict Kow values (Schwarzenbach et al., 2003).

Kow values may also be calculated or predicted from parameters describing the chemical structure of a chemical. Several software programs are commercially available for this purpose, such as the KOWWIN program of the US-EPA. These programs make use of the so-called fragment method (Leo, 1993; Rekker and Kort, 1979). This method takes into account the contribution to Kow of the different chemical groups or atoms in a molecule, and in addition corrects for special features such as steric hindrance or other intramolecular interactions (equation 1):

log Kow = Σ fn + Σ Fp   (eq. 1)

in which fn quantifies the contribution of each fragment n in a particular chemical (see e.g. Table 1), and Fp accounts for any special intramolecular interaction p between the fragments. This fragment approach has been improved during the last decades and is available in the EPISUITE program from the US Environmental Protection Agency. Other programs for the calculation of Kow values are ChemProp, and ChemAxon from Chemspider.

Table 1. Fragment constants (Kow) for a few fragments (from the EPISUITE program).

| Fragment | Fragment constant (f) |
| -CH3 aliphatic carbon | 0.5473 |
| Aromatic carbon | 0.2940 |
| -OH hydroxy, aromatic attach | -0.4802 |
| -N aliphatic N, one aromatic attach | -0.9170 |

Note: the above calculations are given for non-ionized chemicals. The hydrophobicity of ionic chemicals is also highly affected by the degree of ionization (see below). Kow values can also be retrieved from databases like echemportal or ECHA and others.

Volatility of a chemical from the aqueous phase to air is expressed via the Henry's law constant (KH). Henry's law constant (KH, in Pa⋅m3/mol) describes the chemical's distribution between the gas phase and water:

\(K_H = \frac{P_i}{C_{aq}}\)   (eq. 2)

where, in an equilibrated water-gas system, Caq is the aqueous concentration of the chemical (units in mol/m3), and Pi is the partial pressure of the chemical in air (units in Pascal, Pa), which is the pressure exerted by the chemical in the total gas phase volume (occupied by the mixture of gases in the gas phase above the water solution of the chemical). Note that Pi is a measure of the concentration in the gas phase, but not yet in the same units as the dissolved concentration (discussed below)!

For compounds that are slightly soluble in water, KH can be estimated from:

\(K_H = \frac{V_p}{S_w}\)   (eq. 3)

where KH is the Henry's law constant (Pa⋅m3/mol), Vp is the (saturated) vapor pressure (Pa), which is the pressure of the chemical above the pure condensed (liquid) form of the chemical, and Sw is the maximum solubility in water (mol/m3). The advantage of equation 3 is that both Vp and Sw can be experimentally derived or estimated. The rationale behind equation 3 is that two opposite forces affect the evaporation of a chemical from water: (i) the vapor pressure (Vp) of the pure chemical - a high vapor pressure means more volatile - and (ii) the solubility in water (Sw) - a high solubility means less volatile.

Benzene and ethanol (see Table 2) are good illustrations. Both chemicals have a similar vapor pressure, but the Henry's law constant of benzene is much higher because of its much lower solubility in water compared to ethanol; benzene is much more volatile from an aqueous phase.
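The short sketch below (an added example using the vapor pressures and solubilities listed in Table 2) reproduces the Henry's law constants via equation 3 and converts them to dimensionless air-water partition coefficients with the RT conversion that is introduced below the table.

```python
# Vapor pressure Vp (Pa) and aqueous solubility Sw (mol/m3) from Table 2
chemicals = {
    "ethanol": (7.50e3, 1.20e4),
    "phenol":  (5.50e1, 8.83e2),
    "benzene": (1.27e4, 2.28e1),
    "pyrene":  (6.00e-4, 6.53e-4),
    "DDT":     (2.00e-5, 2.82e-6),
}

R, T = 8.314, 298.0   # gas constant (Pa*m3/(mol*K)) and temperature (K)

for name, (Vp, Sw) in chemicals.items():
    KH = Vp / Sw              # eq. 3: Henry's law constant (Pa*m3/mol)
    K_aw = KH / (R * T)       # eq. 5: dimensionless air-water ratio
    print(f"{name:8s} KH = {KH:9.3g} Pa*m3/mol   Kair-water = {K_aw:9.3g}")

# Benzene stands out with Kair-water ~0.23: at equilibrium its concentration
# in air is only ~4x lower than in water, whereas ethanol's is ~4000x lower.
```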
Table 2. Air-water partition coefficients (Kair-water) calculated for five chemicals (ranked by aqueous solubility) by equations 3 and 5.

| Chemical | Vapor pressure (Pa) | Solubility (mol/m3) | KH (Pa⋅m3/mol) | Kair-water (L/L, or m3/m3) |
| Ethanol | 7.50⋅10^3 | 1.20⋅10^4 | 6.25⋅10^-1 | 2.53⋅10^-4 |
| Phenol | 5.50⋅10^1 | 8.83⋅10^2 | 6.23⋅10^-2 | 2.52⋅10^-5 |
| Benzene | 1.27⋅10^4 | 2.28⋅10^1 | 5.57⋅10^2 | 2.25⋅10^-1 |
| Pyrene | 6.00⋅10^-4 | 6.53⋅10^-4 | 9.18⋅10^-1 | 3.71⋅10^-4 |
| DDT | 2.00⋅10^-5 | 2.82⋅10^-6 | 7.08 | 2.86⋅10^-3 |

Note: all chemicals at equilibrium have a higher concentration (in e.g. mol/L) in the aqueous phase than in the gas phase. Of these five, benzene is the chemical most prone to leave water, with an equilibrated air concentration about 4 times lower (22.5%) than the dissolved concentration.

Equations 2 and 3 are based on the pressure in the gas phase. Environmental fate is often based on partition coefficients, in this case the air-water partition coefficient (Kair-water). These partition coefficients are more or less 'dimensionless', because the concentrations are based on equal volumes (such as L/L), while KH has the unit Pa⋅m3/mol:

\(K_{air-water} = \frac{C_{air}}{C_{aq}}\)   (eq. 4)

where Cair is the concentration in air (in e.g. mol/m3) and Caq is the aqueous concentration (in e.g. mol/m3). Kair-water can be calculated from KH according to equation 5:

\(K_{air-water} = \frac{K_H}{R \cdot T}\)   (eq. 5)

where R is the gas constant (8.314 m3⋅Pa⋅K−1⋅mol−1) and T is the temperature in Kelvin (Kelvin = °Celsius + 273). This use of "RT" converts the gas phase pressure to a volume-based concentration, applying the ideal gas law, which relates pressure (P, in Pa) to temperature (T, in K), volume (V, in m3), and amount of gas molecules (n, in mol) through the gas constant R:

P⋅V = n⋅R⋅T   (eq. 6)

(note that the units of both terms cancel out). At 25 °Celsius (298 K), the product RT equals 2477 Pa⋅m3⋅mol−1. Examples of calculated values for Kair-water are presented in Table 2.

The influence of the chemical structure on the volatility of a chemical from a solvent fully depends on the cost of creating a cavity in the solvent (interactions between solvent molecules) and on the interactions between the chemical and the solvent molecules. For partitioning processes, the gas phase is mostly regarded as an inert compartment without chemical interactions (i.e. gas phase molecules hardly ever touch each other). The molecules of a strongly dipolar solvent such as water, which contain atoms that can interact as hydrogen acceptor (the O in an OH group) and hydrogen donor (the H in an OH group), interact strongly with each other, and it costs much energy to create a cavity. This cost increases strongly with molecular size, for nearly all molecules more than the energy regained by interactions with the surrounding solvent molecules. As a result, for most classes of organic chemicals, the affinity for water decreases and the volatility out of water into air slightly increases with molecular volume. For chemicals that are not able to re-interact with water via hydrogen bonding, e.g. alkanes, the overall volatility is much higher than for chemicals that do have specific interactions with water molecules besides van der Waals interactions.

Acids and bases can be present in the neutral (HA and B) or ionized form (A- and BH+, respectively). For acids, the neutral form (HA) is in equilibrium with the anionic form (A-), and for bases the neutral form (B) is in equilibrium with the cationic form (BH+). The degree of ionization depends on the pH and the acid dissociation constant (pKa).
Table 3 shows the equations to calculate the fraction ionized for acids and bases, and examples for two acids (phenols) are presented in Table 4.

Table 3. Calculation of the fraction ionized for acids and bases.

| | Acids | Bases |
| Fraction ionized | \(\frac{1}{1 + 10^{(pK_a - pH)}}\) | \(\frac{1}{1 + 10^{(pH - pK_a)}}\) |

pKa = - log Ka, where Ka is the dissociation constant of the acidic form (HA or BH+). The degree of ionization is thus determined by the pH and the pKa value; more examples for several organic chemicals are presented elsewhere (see the section on Ionogenic organic compounds).

Table 4. The degree of ionization of two phenolic structures (acids); the original figures plot % ionized versus pH.

| | Pentachlorophenol | Phenol |
| pKa | 4.60 | 9.98 |
| at pH 7.0 | 99.6 % ionized | 0.1 % ionized |

The fate of ionic chemicals is very different from that of non-ionic chemicals. The sediment-water sorption coefficient of the anionic species is substantially (>100x) lower than that of the neutral species. If the percentage of ionization is less than ~99 % (at a pH less than 2 units above the pKa), the sorption of the anion may be neglected (Kd is still dominated by the >1% neutral species) (Schwarzenbach et al., 2003). The reason for the low sorption affinity of the anionic acid form is twofold: anions are much more water-soluble, and most sediment particles (clay, organic matter, silicates) are negatively charged and electrostatically repulse the similarly charged chemical. In that case the sorption coefficient Kd can be calculated from the sorption coefficient of the non-ionic form and the fraction of the non-ionized form (α):

\(K_d = \alpha \cdot K_{d,neutral}\)   (eq. 7)

In environments where the pH is such that the neutral acid fraction is <1% (when pH is >2 units above the pKa), the sorption of the anionic species to soil/sediment may significantly contribute to the overall "distribution coefficient" of both acid species.

For basic environmental chemicals of concern, among which many illicit drugs (e.g. amphetamine, cocaine) and non-illicit drugs (e.g. most anti-depressants, beta-blockers), the protonated forms are positively charged. These organic cations are also much more soluble in water than the neutral form, but at the same time they are electrostatically attracted to the negatively charged sediment surfaces. As a result, the sorption affinity of organic cations to sediment should not be considered negligible relative to that of the neutral species. The sorption processes, however, may strongly differ for the neutral and the cationic base species. Several studies have shown that the sorption affinity of the cationic base species to DOM or sediment can even be stronger than that of the neutral species.
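A small added sketch of the speciation arithmetic in Tables 3 and 4 and equation 7 (the Kd,neutral value is invented for illustration):

```python
def fraction_ionized(pKa, pH, acid=True):
    """Fraction ionized for an acid (A-) or base (BH+) at a given pH (Table 3)."""
    exponent = (pKa - pH) if acid else (pH - pKa)
    return 1.0 / (1.0 + 10.0 ** exponent)

# Reproduce Table 4 at pH 7.0
for name, pKa in [("pentachlorophenol", 4.60), ("phenol", 9.98)]:
    print(f"{name}: {fraction_ionized(pKa, 7.0):.1%} ionized at pH 7")

# Eq. 7: overall Kd from the neutral-species sorption coefficient
Kd_neutral = 500.0                          # L/kg, hypothetical value
alpha = 1.0 - fraction_ionized(4.60, 7.0)   # neutral fraction of pentachlorophenol
print(f"Kd = alpha * Kd_neutral = {alpha * Kd_neutral:.1f} L/kg")
```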
De Bruijn, J., Busser, F., Seinen, W., Hermens, J. Determination of octanol/water partition coefficients for hydrophobic organic chemicals with the "slow-stirring" method. Environmental Toxicology and Chemistry 8, 499-512.
Eadsforth, C.V. Application of reverse-phase HPLC for the determination of partition coefficients. Pesticide Science 17, 311-325.
Leo, A., Hansch, C., Elkins, D. Partition coefficients and their uses. Chemical Reviews 71, 525-616.
Leo, A.J. Calculating log P(oct) from structures. Chemical Reviews 93, 1281-1306.
Rekker, R.F., de Kort, H.M. The hydrophobic fragmental constant; an extension to a 1000 data point set. Eur. J. Med. Chem. - Chim. Ther. 14, 479-488.
Schwarzenbach, R.P., Gschwend, P.M., Imboden, D.M. (Eds.) 2003. Environmental Organic Chemistry. Wiley, New York, NY, USA.

Further reading:
Mackay, D., Boethling, R.S. (Eds.) 2000. Handbook of Property Estimation Methods for Chemicals: Environmental Health and Sciences. CRC Press.
van Leeuwen, C.J., Vermeire, T.G. (Eds.) 2007. Risk Assessment of Chemicals: An Introduction. Springer, Dordrecht, The Netherlands.

Explain the term hydrophobicity and mention two major properties that affect the hydrophobicity of chemicals.
What is the definition of Kow? Rank the following chemicals from low to high Kow: 1. pentachlorobenzene, 2. monochlorobenzene, 3. monochloroaniline, 4. DDT.
Rank the same chemicals according to their volatility as pure compounds, their solubility in water, and their volatility from water.
Which two basic properties determine the volatility of a chemical from water to air?
Calculate the percentage ionized for 2,3,4-trichlorophenol (pKa = 4.6) at pH 3, 5, 7 and 9.

Authors: Joop Hermens
Reviewers: Kees van Gestel, Steven Droge, Philipp Mayer
Learning objectives: You should be able to:
Keywords: Sorption isotherm, absorption and adsorption, organic matter, Freundlich model, Langmuir model, organic carbon content

Sorption processes have a major influence on the fate of chemicals in the environment (Box 1). In general, sorption is defined as the binding of a dissolved or gaseous chemical (the sorbate) to a solid phase (the sorbent), and it may involve different processes, including (i) binding of dissolved chemicals from water to sediments and soils, and (ii) binding of gaseous-phase chemicals from air to soils, plants, and trees. Information about sorption is relevant for a number of reasons, as illustrated in Box 1.

Box 1. The Biesbosch is a wetland area in the Netherlands, an area in between the Rivers Rhine and Meuse and estuaries that are connected to the North Sea. The water flow is relatively low, and as a consequence there is strong sedimentation of particles from the water to the sediment. Chemicals present in the water strongly sorb to these particles, which in the past were polluted with hydrophobic organic contaminants such as dioxins and PCBs. The concentrations of these organic compounds in the sediment are still relatively high because they are highly persistent. One reason for this persistence is that the sorbed compounds are not easily available for degradation by bacteria. Also, the concentrations in organisms that live close to or in the sediment are high - so high that fishing for eel, for example, is not allowed in the area. This example shows the importance of sorption processes for the fate of chemicals, but also for their effects in the environment.

Measurement of sorption is a simple procedure. A chemical X is spiked (added) to the aqueous phase in the presence of a certain amount of the solid phase (sediment or soil). The chemical sorbs to the solid phase, and when the system is at equilibrium, the concentrations in the sediment (Cs) and in the aqueous phase (Ca) are measured.
The solid phase is collected via centrifugation or filtration. The sorption coefficient Kp (equation 1 and Box 2) gives information about the degree of sorption of a chemical to sediment, and is defined as:

\(K_p = \frac{C_s}{C_a}\)   (eq. 1)

Box 2: The concentration of a chemical X in sediment (Cs) is 30 mg/kg and the concentration in the aqueous phase (Ca) is 0.1 mg/L. The sorption coefficient Kp = Cs / Ca = (30 mg/kg) / (0.1 mg/L) = 300 L/kg. Note the units of a sorption coefficient: L/kg.

In the environmental risk assessment of chemicals, it is very useful to know which fraction of the total amount of chemical (Atotal) in a system is sorbed (fsorbed) or dissolved (fdissolved) (e.g. after an accidental spill in a river):

fdissolved = Adissolved / Atotal, and thus fsorbed = 1 - fdissolved

These fractions are related to the sorption coefficient of X and the volumes of the solvent and the sorbent material. The equation for calculating fdissolved is based on the mass balance of chemical X, which relates the concentration of X (C) to the amount of X (A) in each volume (V):

C = A / V, and thus A = C ⋅ V

which for a system of water and sediment (air not included, for simplification) gives:

Atotal = Adissolved + Asorbed = Cwater ⋅ Vwater + Csediment ⋅ Vsediment = Cwater ⋅ Vwater + (Kp ⋅ Cwater) ⋅ Vsediment

fdissolved = Adissolved / Atotal = Cwater ⋅ Vwater / (Cwater ⋅ Vwater + Kp ⋅ Cwater ⋅ Vsediment)

Separating out Csediment using Kp in this way allows rearranging (by dividing both parts of the ratio by Cwater ⋅ Vwater) to the following simplified equation:

fdissolved = 1 / (1 + Kp ⋅ (Vsediment / Vwater))

In this equation, 'sediment' can be replaced by any sorbent, as long as the appropriate sorption coefficient is used. Let's calculate with chemical X from above, in a wet sediment where 1 L of wet sediment contains ~80% water and ~20% solids by volume (assuming a solids density of ~1 kg/L, so that 0.2 L corresponds to 0.2 kg). The dissolved fraction of X, with Kp = 300 L/kg, is only 0.013 in this example. Thus, with 1.3% of X actually dissolved, 98.7% of X is sorbed to the sediment.

There are two major sorption processes: absorption, in which the chemical partitions into a three-dimensional sorbent phase, and adsorption, in which the chemical binds to a sorbent surface with a limited number of sites. A sorption isotherm gives the relation between the concentration in the sorbent (sediment) and the concentration in the aqueous phase, and the shape of the isotherm is important in identifying the sorption process.

Absorption of a chemical is similar to its partitioning between two phases, comparable to its partitioning between two solvents. Distribution of a chemical between octanol and water is a well-known example of a partitioning process (see Section 3.4.1 on Relevant chemical properties for more detailed information on octanol-water partitioning). The isotherm for an absorption process is linear, and the slope of the plot of Cs versus Caq is the sorption coefficient Kp.

In an adsorption process, where the sorbing phase is a surface with a limited number of sorption sites, the sorption isotherm is non-linear and may reach a maximum sorbed concentration when all sites are occupied. A mechanistic model for adsorption is the Langmuir model:

\(C_s = \frac{C_{max} \cdot b \cdot C_{aq}}{1 + b \cdot C_{aq}}\)

This model describes adsorption of molecules to homogeneous surfaces with equal adsorption energies, represented by the adsorption site energy term (b), and a limited number of sorption sites (Cmax) that can become saturated. The Langmuir adsorption coefficient (Kad) is equal to the product (b ⋅ Cmax) at relatively low aqueous concentrations, where the product (b ⋅ Caq) << 1 (note that the denominator term is then ~1).
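A brief added sketch (with arbitrary b and Cmax values) evaluating the Langmuir model above, showing that at low aqueous concentrations the sorbed-to-dissolved ratio approaches the constant Kad = b ⋅ Cmax:

```python
b, C_max = 0.2, 1000.0      # site energy term (L/mg) and site capacity (mg/kg)
K_ad = b * C_max            # low-concentration (linear) limit: 200 L/kg

def langmuir(C_aq):
    """Langmuir isotherm: sorbed concentration (mg/kg) at aqueous C_aq (mg/L)."""
    return (C_max * b * C_aq) / (1 + b * C_aq)

for C_aq in (0.001, 0.01, 0.1, 1.0, 10.0, 100.0):
    ratio = langmuir(C_aq) / C_aq      # apparent sorption coefficient (L/kg)
    print(f"C_aq = {C_aq:7.3f} mg/L -> Cs/Caq = {ratio:7.1f} L/kg")

# At C_aq << 1/b the ratio stays at ~K_ad = 200 L/kg (linear range);
# at high C_aq the surface saturates and the apparent coefficient drops.
```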
Indeed, the isotherm curve on a double-log-scale plot shows a slope of 1 at such low concentrations, indicating linearity.

Another mathematical approach to describe non-linear sorption is the Freundlich isotherm:

\(C_s = K_F \cdot C_{aq}^{\,n}\)

where KF is the Freundlich sorption constant and n is the Freundlich exponent describing the non-linearity of the sorption process. Using logarithmic values for the aqueous and sorbed concentrations, the Freundlich isotherm can be rewritten as:

log Cs = n ⋅ log Caq + log KF   (eq. 2)

This conveniently yields a linear relationship (just as y = a⋅x + b) between log Cs and log Caq, with a slope equal to n and an intercept (crossing point with the Y-axis) equal to log KF. This allows for easy fitting of linear trend lines through experimental data sets. When n = 1, the isotherm is linear and equals the one for absorption. In case of saturation of the sorption sites on the solid phase, n will be smaller than 1. The Freundlich isotherm can, however, also yield an n value > 1; this may occur, for example, if the sorbed chemical itself forms a layer that serves as a new sorbing phase, as has been described for surfactants.

Soils and sediments may show large variations in composition and particle size distribution. The major components of soils and sediments are:

| Sand | 63 µm - 2 mm |
| Silt | 2 - 63 µm |
| Clay | <2 µm |
| Organic matter | includes e.g. detritus and humic acids, especially associated with the clay and silt fractions |
| CaCO3 | |

In addition to clay minerals and (soil or sediment) organic matter (SOM), sediment and soil may contain soot particles (a combustion residue).

Organic matter is formed upon decomposition of plant material and dead animal or microbial tissues. Upon decomposition of plant material, the first organic groups to be released are phenolic acids, some of which have a high affinity for complexation of metals. One example is salicylic acid (o-hydroxybenzoic acid), which occurs in high concentrations in leaves of willows, poplars and other deciduous trees. Further decomposition of plant material may result in the formation of humic acids, fulvic acids and humin. Humic and fulvic acids contain a series of functional groups, such as carboxyl (-COOH), carbonyl (=C=O), phenolic hydroxyl (-OH), methoxy (-OCH3), amino (-NH2), imino (=NH) and sulfhydryl (-SH) groups (see for more details the section on Soil).

Hydrophobic organic chemicals mainly sorb to organic matter. Because organic matter has the characteristics of a solvent, the sorption is clearly an absorption process and the sorption isotherm is linear. Because binding is mainly to organic matter, the sorption coefficient (Kp) depends on the fraction of organic matter (fom) or the fraction of organic carbon (foc) present in the soil or sediment. Please note that, as a rule of thumb, organic matter contains 58% organic carbon (foc = 0.58 ⋅ fom). The sorption coefficient increases with increasing fraction of organic carbon in soils and sediments. In order to arrive at a more intrinsic parameter, sorption coefficients are therefore often normalized to the fraction of organic matter (Kom = Kp / fom) or organic carbon (Koc = Kp / foc). These Koc or Kom values are less dependent on the sediment or soil type. Hydrophobic chemicals can have a very high affinity for soot particles relative to their affinity for SOM. If a sediment contains soot, Kp values are often higher than predicted based on the fraction of organic carbon in the organic matter (Jonker and Koelmans, 2002).
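Tying the organic-carbon normalization to the mass-balance fractions of Box 2, the added sketch below predicts Kp for soils with different organic-carbon contents from a single Koc (the Koc value and soil properties are hypothetical):

```python
K_oc = 3000.0    # organic-carbon normalized sorption coefficient (L/kg), hypothetical

def Kp(f_oc):
    """Soil/sediment sorption coefficient from the organic carbon fraction."""
    return K_oc * f_oc

def f_dissolved(Kp_value, V_sed_over_V_water, solids_density=1.0):
    """Dissolved fraction (Box 2): 1 / (1 + Kp * (V_sed/V_water) * density)."""
    return 1.0 / (1.0 + Kp_value * V_sed_over_V_water * solids_density)

for f_oc in (0.005, 0.02, 0.10):          # 0.5%, 2% and 10% organic carbon
    kp = Kp(f_oc)
    fd = f_dissolved(kp, 0.2 / 0.8)       # the wet-sediment example from Box 2
    print(f"f_oc = {f_oc:5.3f} -> Kp = {kp:6.0f} L/kg, dissolved fraction = {fd:.3f}")

# A 20-fold difference in organic carbon gives a 20-fold difference in Kp,
# which is why Koc is the more intrinsic, soil-independent parameter.
```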
Schwarzenbach, R.P., Gschwend, P.M., Imboden, D.M. Environmental Organic Chemistry. Wiley, New York, NY, USA.
Means, J.C., Wood, S.G., Hassett, J.J., Banwart, W.L. Sorption of polynuclear aromatic hydrocarbons by sediments and soils. Environmental Science and Technology 14, 1524-1528.
Jonker, M.T.O., Koelmans, A.A. Sorption of polycyclic aromatic hydrocarbons and polychlorinated biphenyls to soot and soot-like materials in the aqueous environment: mechanistic considerations. Environmental Science and Technology 36, 3725-3734.

Further reading:
van Leeuwen, C.J., Vermeire, T.G. (Eds.). Risk Assessment of Chemicals: An Introduction. Springer, Dordrecht, The Netherlands. Chapters 3 and 9.
Schwarzenbach, R.P., Gschwend, P.M., Imboden, D.M. Environmental Organic Chemistry. Wiley, New York, NY, USA. Chapters 9 and 11 give detailed information about sorption processes and sorption mechanisms.

What is a sorption isotherm, and what is the difference between absorption and adsorption?
Why is the sorption isotherm for clay non-linear?
What are the main sorption phases in sediment or soil?

Author: Joop Hermens
Reviewers: Steven Droge, Monika Nendza
Learning objectives: You should be able to:
Keywords: Quantitative structure-property relationships (QSPR), quantitative structure-activity relationships (QSAR), octanol-water partition coefficients, hydrogen bonding, multivariate techniques

Risk assessment needs input data for fate and effect parameters. These data are not available for many of the existing chemicals, and predictions via estimation models provide a good alternative to actual testing. Examples of estimation models are Quantitative Structure-Property Relationships (QSPRs) and Quantitative Structure-Activity Relationships (QSARs). The term "activity" is often used in relation to models for toxicity, while "property" usually refers to physical-chemical properties or fate parameters. In a QSAR or QSPR, a certain environmental parameter is related to a physical-chemical or structural property, or a combination of properties. The elements of a QSPR or QSAR are the modelled endpoint (the Y-variable), one or more chemical descriptors (the X-variables), and the statistical relationship linking them. Estimation models have been developed for many endpoints, such as sorption to sediment, humic acids, lipids and proteins, chemical degradation, biodegradation, bioconcentration and ecotoxic effects.

An overview of the chemical parameters (the X-variables) used in estimation models is given in Table 1. Chemical properties are divided into three categories: (i) parameters related to hydrophobicity, (ii) parameters related to charge and charge distribution in a molecule, and (iii) parameters related to the size or volume of a molecule. Hydrophobicity is discussed in more detail in the section on Relevant chemical properties. Other QSPR approaches use large numbers of parameters derived from chemical graphs. The CODESSA Pro software, for example, generates molecular and fragment descriptors, classified as (i) constitutional, (ii) topological, (iii) geometrical, (iv) charge related, and (v) quantum chemical (Katritzky et al., 2009). Some models are based on structural fragments in a molecule. The poly-parameter linear free energy relationships (pp-LFERs) use parameters that represent interactions between molecules (see under pp-LFER).
Table 1. Examples of parameters related to hydrophobicity, and of electronic and steric parameters (the X-variables).

Hydrophobic parameters:
Aqueous solubility
Octanol-water partition coefficient (Kow)
Hydrophobic fragment constant π

Electronic parameters:
Atomic charges (q)
Dipole moment
Hydrogen bond acidity (H-bond donating)
Hydrogen bond basicity (H-bond accepting)
Hammett constant σ

Steric parameters:
Total Surface Area (TSA)
Total Molecular Volume (TMV)
Taft constant for steric effects (Es)

Most models are based on correlations between Y and X. Such a relationship is derived for a "training set" that consists of a limited number of carefully selected chemicals. The validity of such a model should be tested by applying it to a "validation set", i.e. a set of compounds for which experimental data can be compared with the predictions. Different techniques can be used to develop an empirical model, such as linear regression, the classical Hansch approach, and multivariate techniques; these are described below.

Linear equations take the form:

Y(i) = a1X1(i) + a2X2(i) + a3X3(i) + ... + b

where Y(i) is the value of the dependent parameter of chemical i (for example a sorption coefficient); X1-X3(i) are the values of the independent parameters (the chemical properties) of chemical i; a1-a3 are regression coefficients (usually 95% confidence limits are given); and b is the intercept of the linear equation. The quality of the equation is presented via the correlation coefficient (r) and the standard error of estimate (s). The closer r is to 1.0, the better the fit of the relationship. More information about the statistical quality of models can be found under "limitations of QSPR".

The classical approach in QSAR and QSPR studies is the Hansch approach, developed in the 1960s. The Hansch equation (Hansch et al., 1963) describes the influence of substituents on the biological activity of a series of compounds that share a parent structure with varying substituents (equation 2). Substituents are, for example, atoms or chemical groups (Cl, F, Br, OH, NH2) attached to a parent aromatic ring structure.

log 1/C = c π + c' σ + c'' Es + c'''   (eq. 2)

in which C is the molar concentration of a chemical with a particular effect, π is a substituent constant for hydrophobic effects, σ is a substituent constant for electronic effects, Es is a substituent constant for steric effects, and c, c', c'' and c''' are constants that are obtained by fitting experimental data.

For example, the hydrophobic substituent constant is based on Kow and is defined as:

π(X) = log Kow(RX) - log Kow(RH)

where RX and RH are the substituted and unsubstituted parent compound, respectively. The Hammett and Taft constants are derived in a similar way.

Multivariate techniques may be very useful to develop structure-activity relationships, in particular in cases where a large number of chemical parameters is involved. Principal Component Analysis (PCA) can be applied to reduce the number of variables to a few principal components. The next step is to find a relationship between Y and X via, for example, Partial Least Squares (PLS) analysis. The advantage of PCA and PLS is that they can deal with a large number of chemical descriptors and can also cope with collinear (correlated) properties. More information on these multivariate techniques and examples in the field of environmental science are given by Eriksson et al.
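As a small added illustration of the π substituent constant defined above (using commonly cited literature log Kow values of 2.13 for benzene and 2.84 for chlorobenzene; treat them as approximate):

```python
# Hydrophobic substituent constant: pi(X) = logKow(RX) - logKow(RH)
log_kow = {
    "benzene": 2.13,         # unsubstituted parent RH (literature value)
    "chlorobenzene": 2.84,   # parent with one Cl substituent, RX
}

pi_Cl = log_kow["chlorobenzene"] - log_kow["benzene"]
print(f"pi(Cl, aromatic) = {pi_Cl:.2f}")   # ~0.71

# The constant is roughly additive: a second chlorine raises log Kow by
# about the same amount again (1,4-dichlorobenzene is ~3.4 in literature).
```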
Poly-parameter Linear Free Energy Relationship (pp-LFER)
The pp-LFER approach has a strong mechanistic basis because it includes the different types of interactions between molecules (Goss and Schwarzenbach, 2001). For example, the sorption coefficient of a chemical from an aqueous phase to soil or to phospholipids (the sorbent) depends on the interaction of the chemical with water and its interaction with the sorbent phase. One of the driving forces behind sorption is hydrophobicity. Hydrophobicity means fear (phobia) of water (hydro): a hydrophobic chemical prefers to "escape from the aqueous phase", or in other words "it does not like to dissolve in water". Water molecules are tightly bound to each other via hydrogen bonds. For a chemical to dissolve, a cavity has to be formed in the aqueous phase, and this costs energy. More hydrophobic compounds will therefore often show stronger sorption (see the section on Relevant chemical properties).
Hydrophobicity mainly depends on two molecular properties: the size of the molecule, which determines the energy needed to form a cavity in the water, and its capacity for polar interactions, such as hydrogen bonding, with water.
In the interaction with the sorbent (soil, membrane lipids, storage lipids, humic acids), the major interactions are van der Waals interactions and hydrogen bonding (Table 2). Van der Waals interactions are attractive and occur between all kinds of molecules; their strength depends on the contact area and is therefore related to the size of a molecule. A hydrogen bond is an electrostatic attraction between a hydrogen atom (H) and an electronegative atom bearing a lone pair of electrons. The hydrogen atom is usually covalently bound to a more electronegative atom (N, O, F). Table 2 lists the interactions with examples of chemical structures.
A pp-LFER is a linear equation developed to model partition or sorption coefficients (K) using parameters that represent the interactions (Abraham, 1993). The model equation is based on five descriptors:

log K = c + eE + sS + aA + bB + vV

with:
E = excess molar refraction
S = dipolarity/polarizability parameter
A = solute H-bond acidity (H-bond donor)
B = solute H-bond basicity (H-bond acceptor)
V = molar volume

The partition or sorption coefficient K may thus be expressed as the sum of five interaction terms, with the uppercase parameters describing compound-specific properties. E depends on the valence electronic structure, S represents polarity and polarizability, A is the hydrogen bond (HB) donor strength (HB acidity), B the HB acceptor strength (HB basicity), V is the so-called characteristic volume related to molecular size, and c is a constant. The lower-case parameters express the corresponding properties of the respective two-phase system, and can thus be taken as the relative importance of the compound properties for the particular partitioning or sorption process. In this introductory section, we only focus on the volume factor (V) and the two hydrogen bond parameters (A and B).
Numerous pp-LFERs have been developed for all kinds of environmental processes; an overview is given by Endo and Goss.

Table 2. Types of interactions between molecules and the phase to which they sorb, with examples of chemicals (Goss and Schwarzenbach, 2003).

Compound a) | Interactions | Examples
Apolar | only van der Waals | alkanes, chlorobenzenes, PCBs
Monopolar | van der Waals + H-acceptor (e-donor) | alkenes, alkynes, alkylaromatic compounds, ethers, ketones, esters, aldehydes
Monopolar | van der Waals + H-donor (e-acceptor) | CHCl3, CH2Cl2
Bipolar | van der Waals + H-donor + H-acceptor | R-NH2, R2-NH, R-COOH, R-OH

a) Apolar: no polar group present; mono/bipolar: one or two polar groups present in a molecule.
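The pp-LFER equation is straightforward to evaluate once system coefficients are available. The sketch below codes log K = c + eE + sS + aA + bB + vV, using the storage lipid-water coefficients from Table 3 further down (Geisler et al. 2012); the solute descriptors in the example are hypothetical.

```python
def pp_lfer_logK(E, S, A, B, V, c, e, s, a, b, v):
    """Evaluate log K = c + eE + sS + aA + bB + vV for one solute."""
    return c + e*E + s*S + a*A + b*B + v*V

# System coefficients for storage lipid-water partitioning
# (Table 3 below; Geisler et al. 2012): c, e, s, a, b, v
KSL_W = dict(c=-0.07, e=0.70, s=-1.08, a=-1.72, b=-4.14, v=4.11)

# Illustrative (hypothetical) Abraham solute descriptors E, S, A, B, V
solute = dict(E=0.80, S=0.70, A=0.00, B=0.20, V=1.20)

print(f"log K(SL-W) = {pp_lfer_logK(**solute, **KSL_W):.2f}")
```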
Examples of QSPR for bioconcentration to fish
Predictive models for bioconcentration have a long history. The octanol-water partition coefficient (Kow) is a good measure of hydrophobicity, and bioconcentration factors (BCFs) are often correlated to Kow (see the section on Bioaccumulation). The success of these Kow-based models is explained by the resemblance between partitioning into octanol and partitioning into bulk lipid in organisms, at least for neutral hydrophobic compounds. A well-known example of a linear QSAR model for log BCF (Y variable) based on log Kow (X variable) is (Veith et al., 1979):

log BCF = 0.85 log Kow - 0.70

A classical example of such a correlation is the BCF to guppy of a series of chlorinated benzenes and polychlorinated biphenyls. When lipophilic chemicals are metabolised, this relation is no longer valid and the BCF will be lower than predicted based on Kow. Another deviation from this BCF-Kow relation is found for highly lipophilic chemicals with log Kow > 7. For such chemicals, the BCF often decreases again with increasing Kow: the apparent BCF curve with Kow as the X variable tends to follow a nonlinear curve with an optimum at log Kow 7-8. This phenomenon may be explained by molecular size: molecules of chemicals like decachlorobiphenyl may be so large that they have difficulty passing membranes. A more likely explanation, however, is that for highly lipophilic chemicals the aqueous concentrations may be overestimated. It is not easy to separate chemicals bound to particles from the aqueous phase (see box 1 in the section on Sorption), and this may lead to measured concentrations that are higher than the bioavailable (freely dissolved) concentration (Jonker and van der Heijden 2007; Kraaij et al. 2003). For example, at a dissolved organic carbon (DOC) concentration of 1 mg DOC/L, a chemical with a log Koc of 7 will be 90% bound to particles, and this bound fraction is not part of the dissolved concentration that equilibrates with the (fish) tissue. This shows that these models are also interesting because they may reveal trends in the data that lead to a better understanding of processes.
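As a worked illustration of the two points above, the sketch below evaluates the Veith et al. (1979) BCF model and the free-fraction correction for DOC binding; with log Koc = 7 and 1 mg DOC/L it reproduces the roughly 90% bound fraction mentioned in the text. The chosen log Kow is illustrative.

```python
def log_bcf_veith(log_kow):
    # Veith et al. (1979): log BCF = 0.85 log Kow - 0.70
    return 0.85 * log_kow - 0.70

def fraction_freely_dissolved(log_koc, doc_mg_per_L):
    # Two-phase partitioning between water and DOC:
    # f_free = 1 / (1 + Koc * [DOC]), with [DOC] converted to kg/L
    koc = 10 ** log_koc           # L/kg
    doc = doc_mg_per_L * 1e-6     # mg/L -> kg/L
    return 1.0 / (1.0 + koc * doc)

print(f"log BCF at log Kow 6: {log_bcf_veith(6.0):.2f}")
f_free = fraction_freely_dissolved(log_koc=7.0, doc_mg_per_L=1.0)
print(f"fraction bound to DOC: {1 - f_free:.0%}")  # ~91%, cf. the ~90% in the text
```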
Examples of QSPR for sorption to lipids
Kow-based models are successful because octanol probably has similar properties to fish lipids. There are several types of lipids, and membrane lipids have different properties and structure than, for example, storage lipids. More refined BCF models include a separation of storage lipids, membrane lipids and proteins as distinct sorptive phases (Armitage et al. 2013). pp-LFER is a very suitable approach to model these sorption or partitioning processes, and results for two large data sets are presented in Table 3. The coefficients e, s, b and v are rather similar. The only parameter that differs between the two models is coefficient a, which represents the contribution of the hydrogen bond (HB) donating properties (A) of the chemicals in the data set. This effect makes sense, because the phosphate group in the phospholipid structure has strong HB-accepting properties. This example shows the strength of the pp-LFER approach, because it closely represents the mechanism of interactions.

Table 3. pp-LFERs for storage lipid-water partition coefficients (KSL-W) and membrane lipid-water partition coefficients (KML-W, liposome), of the form log K = c + eE + sS + aA + bB + vV. Listed are the parameters (and standard errors), the number of compounds with which the LFER was calibrated (n), the correlation coefficient (r2), and the standard error of estimate (SE).

Parameter | c | e | s | a | b | v | n | r2 | SE | Source
KSL-W | -0.07 (0.07) | 0.70 (0.06) | -1.08 (0.08) | -1.72 (0.13) | -4.14 (0.09) | 4.11 (0.06) | 247 | 0.997 | 0.29 | Geisler et al. 2012
KML-W (liposome) | 0.26 (0.08) | 0.85 (0.05) | -0.75 (0.08) | 0.29 (0.09) | -3.84 (0.10) | 3.35 (0.09) | 131 | 0.979 | 0.28 | Endo et al. 2011

KSL-W: storage lipid partition coefficients are mean values for different types of oil. Raw data and pp-LFER (for 37 °C) reported in Geisler et al. 2012.
KML-W (liposome): data from liposomes made up of phosphatidylcholine (PC) or PC mixed with other membrane lipids. Raw data (20-40 °C) and pp-LFER reported in Endo et al. 2011.

Examples of QSPR for sorption to soil
Numerous QSPRs are available for soil sorption (see the section on Sorption). The organic carbon normalized sorption coefficient (Koc), too, is linearly related to the octanol-water partition coefficient. Such a Koc-Kow model is only valid for neutral, non-polar hydrophobic organic chemicals such as chlorinated aromatic compounds, polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs) and chlorinated insecticides, or, in general, compounds that only contain carbon, hydrogen and halogen atoms. It does not apply to polar and ionized organic compounds, nor to metals. For polar chemicals, other interactions may also influence sorption, and a pp-LFER approach would then be useful.
The sorption of ionic chemicals is more complex. For the sorption of cationic organic compounds, clay minerals can be an equally important sorption phase as organic matter, because of their negative surface charge and large surface area. The sorption of organic cations is mainly an adsorption process that reaches a maximum at the cation exchange capacity (CEC) of a particle (see the section on Soil). Models for the prediction of the sorption of cationic compounds are also more complicated, and first attempts have been made only recently (Droge and Goss, 2013). The major sorption mechanism for anionic chemicals is sorption into organic matter. The sorption coefficient of anionic chemicals is substantially lower than that of the neutral form of the chemical, roughly a factor of 10-100 in Koc (Tülp et al. 2009). In the case of weakly dissociating chemicals such as carboxylic acids, the sorption coefficient can often be estimated from the sorption coefficient of the non-ionic form and the fraction of the chemical that is present in the non-ionized form (see the section on Relevant chemical properties).
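For weakly dissociating acids, the estimation just described can be sketched in a few lines: the neutral fraction follows from the Henderson-Hasselbalch relationship and is multiplied by the Koc of the neutral form. The pKa and log Koc values below are hypothetical.

```python
import math

def neutral_fraction_acid(pH, pKa):
    # Henderson-Hasselbalch: fraction of a monoprotic acid in the neutral form
    return 1.0 / (1.0 + 10 ** (pH - pKa))

def apparent_log_koc(pH, pKa, log_koc_neutral):
    # Weight the neutral-form Koc by the neutral fraction, ignoring the
    # much weaker sorption of the anion
    return math.log10(neutral_fraction_acid(pH, pKa) * 10 ** log_koc_neutral)

# Hypothetical carboxylic acid: pKa 4.5, log Koc of the neutral form 3.0
print(f"apparent log Koc at pH 7: {apparent_log_koc(7.0, 4.5, 3.0):.2f}")  # ~0.50
```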
Reliability and limitations of QSPR
Predictive models have limitations, and it is important to know them. There is no single model that can predict a parameter for all chemicals. Each model has a domain of applicability, and it is important to apply a model only to chemicals within that domain. Therefore, guidance has to be defined on how to select a specific model. It is also important to realize that in many computer programs (such as fate modelling programs), estimates and predictions are implicitly incorporated.
Another aspect is the reliability of the prediction. The model itself can show a good fit (high r2) for the training set (the chemicals used to develop the model), but the actual reliability should be tested with a separate set of chemicals (the validation set), and a number of statistical procedures can be applied to test the accuracy and predictive power of the model. The OECD has developed a set of rules that should be applied in the validation of QSPR and QSAR models.
References
Abraham, M.H. Scales of solute hydrogen-bonding: their construction and application to physicochemical and biochemical processes. Chemical Society Reviews 22, 73-83.
Armitage, J.M., Arnot, J.A., Wania, F., Mackay, D. Development and evaluation of a mechanistic bioconcentration model for ionogenic organic chemicals in fish. Environmental Toxicology and Chemistry 32, 115-128.
Bruggeman, W.A., Opperhuizen, A., Wijbenga, A., Hutzinger, O. Bioaccumulation of super-lipophilic chemicals in fish. Toxicological and Environmental Chemistry 7, 173-189.
Droge, S.T.J., Goss, K.U. Development and evaluation of a new sorption model for organic cations in soil: contributions from organic matter and clay minerals. Environmental Science and Technology 47, 14233-14241.
Endo, S., Escher, B.I., Goss, K.U. Capacities of membrane lipids to accumulate neutral organic chemicals. Environmental Science and Technology 45, 5912-5921.
Endo, S., Goss, K.U. Applications of polyparameter linear free energy relationships in environmental chemistry. Environmental Science and Technology 48, 12477-12491.
Eriksson, L., Hermens, J.L.M., Johansson, E., Verhaar, H.J.M., Wold, S. Multivariate analysis of aquatic toxicity data with PLS. Aquatic Sciences 57, 217-241.
Geisler, A., Endo, S., Goss, K.U. Partitioning of organic chemicals to storage lipids: elucidating the dependence on fatty acid composition and temperature. Environmental Science and Technology 46, 9519-9524.
Goss, K.-U., Schwarzenbach, R.P. Linear free energy relationships used to evaluate equilibrium partitioning of organic compounds. Environmental Science and Technology 35, 1-9.
Goss, K.U., Schwarzenbach, R.P. Rules of thumb for assessing equilibrium partitioning of organic compounds: successes and pitfalls. Journal of Chemical Education 80, 450-455.
Hansch, C., Streich, M., Geiger, F., Muir, R.M., Maloney, P.P., Fujita, T. Correlation of biological activity of plant growth regulators and chloromycetin derivatives with Hammett constants and partition coefficients. Journal of the American Chemical Society 85, 2817.
Jonker, M.T.O., van der Heijden, S.A. Bioconcentration factor hydrophobicity cutoff: an artificial phenomenon reconstructed. Environmental Science and Technology 41, 7363-7369.
Katritzky, A.R., Slavov, S., Radzvilovits, M., Stoyanova-Slavova, I., Karelson, M. Computational chemistry approaches for understanding how structure determines properties. Zeitschrift für Naturforschung B 64, 773-777.
Könemann, H., Van Leeuwen, K. Toxicokinetics in fish: accumulation and elimination of six chlorobenzenes by guppies. Chemosphere 9, 3-19.
Kraaij, R., Mayer, P., Busser, F.J.M., Bolscher, M.V., Seinen, W., Tolls, J. Measured pore-water concentrations make equilibrium partitioning work: a data analysis. Environmental Science and Technology 37, 268-274.
Sabljic, A., Güsten, H., Verhaar, H.J.M., Hermens, J.L.M. QSAR modelling of soil sorption. Improvements and systematics of log Koc vs. log Kow correlations. Chemosphere 31, 4489-4514.
Tülp, H.C., Fenner, K., Schwarzenbach, R.P., Goss, K.U. pH-dependent sorption of acidic organic chemicals to soil organic matter. Environmental Science and Technology 43, 9189-9195.
Veith, G.D., Defoe, D.L., Bergstedt, B.V. Measuring and estimating the bioconcentration factor of chemicals in fish. Journal of the Fisheries Research Board of Canada 36, 1040-1048.
What is a QSPR and why is it useful?
Which techniques are applied to derive a QSPR?
Which chemical parameters are applied in a QSPR?
This page titled 3.4: Partitioning and Partitioning Constants is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.5: Metal Speciation
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/03%3A_Environmental_Chemistry_-_From_Fate_to_Exposure/3.05%3A_Metal_Speciation
You should be able to:
Keywords: Metal complexation, redox reactions, equilibrium reactions, water chemistry, soil properties.
Metals occur in different physical and chemical forms in the environment, for example as the element (very rare in the environment), as components of minerals, as free cations dissolved in water (e.g. Cd2+), or bound to inorganic or organic molecules in either the solid or dissolved phases (e.g. HgCH3+ or AgCl2-) (Allen 1993). The distribution of a metal over these different forms is referred to as metal speciation. Physical processes may also affect the mobility and bioavailability of metals, for example the electrostatic attraction of metal cations to negatively charged mineral surfaces. These processes are in general not referred to as metal speciation in the strict sense, but they are discussed here as well.
The speciation of metals is controlled by both the properties of the metals (see the section on Metals and metalloids) and the properties of the environment in which they are present, such as the pH, the redox potential and the presence, concentrations and properties of molecules that can form complexes with the metals. These complex-forming molecules are often called ligands, and they vary from relatively simple anions in solution, such as sulphate or anions of soluble organic acids, to more complex macromolecules such as proteins and other biomolecules. The adsorption of metals by covalent bond formation to oxide and hydroxide surfaces of minerals, and to oxygen- or nitrogen-containing functional groups of solid organic matter, is also referred to as complexation. Since these metal-binding functional groups are often either acidic or basic, the pH is an important environmental parameter controlling complexation reactions.
In natural systems the speciation of metals is of great complexity and determines their mobility in the environment and their bioavailability (i.e. how easily they are taken up by organisms). Metal speciation therefore plays a key role in determining the potential bioaccumulation and toxicity of metals and should be considered when assessing their ecological risks. Metal bioavailability and transport are in particular strongly related to the distribution over the solid and liquid phases of the environmental matrix.
The four main chemical reactions determining metal speciation, i.e. the binding of metal ions to ligands and their distribution over solid and liquid phases, are (Bourg, 1995): complexation, precipitation and dissolution, oxidation-reduction (redox) reactions, and adsorption, desorption and ion exchange.
The complexity arises because metal speciation in the environment is determined by a number of these reactions, including complexation, precipitation and sorption, acting simultaneously. These reactions affect the partitioning of metals across solid and liquid phases, and hence their mobility as well as their bioavailability.
Adsorption, desorption and ion exchange processes take place with the reactive components present in soils and sediments and, to a lesser extent, in water. These include clay minerals, oxides and hydroxides, and (particulate and dissolved) organic matter.
Metal ions react with these reactive components in different ways. In soils and sediments, cationic metals bind reversibly to clay minerals via cation-exchange processes (see the section on Soil). Metal ions also form complexes with so-called functional groups (mainly carboxylic and phenolic groups) present in organic matter (see the section on Soil). In aquatic systems similar binding processes occur, in which dissolved organic matter (or carbon) (DOM or DOC) plays a major role.
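As a minimal illustration of such complexation, the sketch below computes the free-ion fraction for a 1:1 metal-ligand equilibrium. The conditional stability constant and the concentration of DOM binding sites are hypothetical, and activity corrections and competing cations are ignored.

```python
def free_metal_fraction(K_cond, ligand_mol_per_L):
    # 1:1 complexation M + L <-> ML with conditional constant K (L/mol):
    # {ML}/{M} = K * [L]  =>  fraction free = 1 / (1 + K*[L]),
    # assuming the ligand is in large excess over the metal
    return 1.0 / (1.0 + K_cond * ligand_mol_per_L)

# Hypothetical values: K = 1e8 L/mol, binding-site concentration 1e-7 mol/L
print(f"free metal fraction: {free_metal_fraction(1e8, 1e-7):.2f}")  # ~0.09
```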
The "dissolved" fraction of organic matter is operationally defined as the fraction passing a 0.45 µm filter and is often referred to fulvic and humic acids.As mentioned above and in the section on S//maken.wikiwijs.nl/147644/Environmental_Toxicology__an_open_online_textbook#!page-5415168oil, negatively charged surfaces of the reactive mineral and organic components present in soil, sediment or water attract positively-charged atoms or molecules (cations, e.g. Cd2+), and allow these cations to exchange with other positively charged ions. The competition between cations for binding sites is driven by the binding affinity of each metal species, as well as the concentration of each metal species. Cation-exchange capacity (CEC) is a property of the sorbent, and defined as the density of available negatively charged sites per mass of environmental matrix (soil, sediment). In fact, it is a measure of how many cations can be retained on solid surfaces. CEC usually is expressed in cmolc/kg soil (see section on Soil). Increasing the pH (i.e. decreasing the concentration of H+ ions) increases the variable charge of most sorbents (more types of protonated groups on sorbent surfaces release their H+), especially for organic matter, and therefore also increases the cation exchange capacity. Protons (H+) also compete with metal ions for the same binding sites. Conversely, at decreasing pH (increasing H+ concentrations), most sorbents lower their CEC.Metal speciation can be modelled if we have sufficient knowledge of the most important reactions involved and the environmental conditions that control these reactions. This knowledge is expressed in the form of equilibria expressing the most important complexation and/or redox reactions. For example, in the general case of a complexation reaction between metal M and ligand L described by the equilibrium:aMm+ (aq) + bLn- (aq) ↔ MaLbq+ (aq) (where q = am-bn)The relationship between the concentrations (or more accurately the activities) of the species is given by:If redox reactions are involved in speciation, we can use the Nernst equation to describe the equilibrium between reduced and oxidised states of the metal:Where Eh is the redox potential, Eh0 the standard potential of the redox pair (relative to the hydrogen electrode), R the molar gas constant, T the temperature, n the number of transferred electrons, F the Faraday constant and {Red/Ox} the activity (or concentration) ratio of the reduced and oxidized species. Since many redox reactions involve the transfer of H+, the value of {Red/Ox} for these equilibria will depend on the pH. Note that the redox potential is often expressed as pe which is defined as the negative logarithm of the electron activity (pe = - log {e-}).Using these comparatively simple equations for all the relevant reactions involved it is possible to construct models to describe metal speciation as a function of ligand concentrations, pH and redox potential. As an example, Table 1 presents the relevant equilibria for the speciation of iron in water.Table 1. Equilibrium reactions relevant for Fe in water (adapted from Essington, 2003)BoundaryEquilibrium reactionFe3+ + e- D Fe2+pEΘ = 13.05Fe(OH)3(s) + 3H+ D Fe3+ + 3H2OKsp = 9.1 x 103 L2 mol-2Fe(OH)2(s) + 2H+ D Fe2+ + 2H2OK*sp = 8.0 x 1012 L mol-1Fe(OH)3(s) + H+ + e- D Fe(OH)2(s) + H2OFe(OH)3(s) + 3H+ + e- D Fe2+ + 3H2OUsing these we can derive equations defining the conditions of pH and pe at which the activity or concentration ratio is one for each equilibrium. 
These boundaries are shown as the continuous lines in a Pourbaix or pe-pH (or pE-pH) diagram. In such a diagram, the fields separated by the boundary lines are labelled with the dominant species present under the conditions that define the fields. (NB: the dotted lines define the conditions of pe and pH under which water is stable.)
In the environment there is, however, in general no equilibrium. This means that the speciation, and hence also the fate, of metals is highly dynamic. Large-scale alterations occur when land use changes, e.g. when agricultural land is abandoned and reverts to nature. Whereas agricultural soil is often 'limed' (addition of CaCO3) to maintain a near-neutral pH and the crop is removed by harvesting, in natural ecosystems all produced organic matter remains in the system. Therefore natural soils show an increase in soil organic matter content, while soil pH tends to decrease due to microbial decomposition processes. As a result, the DOC concentration in the soil porewater will increase, while metal mobility is also increased by the decreasing soil pH (Cu2+ is more mobile than CuCO3). This may cause historical metal pollution to suddenly become available (the "chemical time bomb" effect). Large-scale reconstruction of rivers or deep soil digging for land planning and development may also affect environmental conditions in such a way that metal speciation changes. An example of this is the change in arsenic speciation in groundwater due to the drilling of wells in countries like Bangladesh; the introduction of oxygen and organic matter into the deeper groundwater caused a change in arsenic speciation, enhancing its solubility in water and therefore increasing human exposure (see the section on Metals and metalloids).
Dynamic conditions do not only occur on large spatial and temporal scales; nature is also dynamic on smaller scales. Abiotic factors such as rain and flooding events, weather conditions and redox status may alter metal speciation. In addition, biotic factors may affect metal speciation. An example of the latter is bioturbation by sediment-dwelling organisms that resuspend particles into the water, or by earthworms that aerate the soil through their digging activities and excrete mucus that may stimulate microbial activity. These activities of soil and sediment organisms alter the environmental conditions and hence affect metal speciation. The production of acidic root exudates by plants may have similar effects on metal speciation. Another process that alters metal speciation is the uptake of metals itself. Since the ionic metal form seems most prone to root uptake, or to active intake over cell membranes, this process may affect the partitioning of a metal over its different species.
Allen, H.E. The significance of trace metal speciation for water, sediment and soil quality criteria and standards. Science of the Total Environment 134, 23-45.
Andrews, J.E., Brimblecombe, P., Jickells, T.D., Liss, P.S., Reid, B. An Introduction to Environmental Chemistry, 2nd Edition. Blackwell, ISBN 0-632-05905-2 (chapter 6).
Bourg, A.C.M. Speciation of heavy metals in soils and groundwater and implications for their natural and provoked mobility. In: Salomons, W., Förstner, U., Mader, P. (Eds.). Heavy Metals. Springer, Berlin, p. 19-31.
Blust, R., Fontaine, A., Decleir, W. Effect of hydrogen ions and inorganic complexing on the uptake of copper by the brine shrimp Artemia franciscana. Marine Ecology Progress Series 76, 273-282.
Essington, M.E. Soil and Water Chemistry. CRC Press, ISBN 0-8493-1258-2 (chapters 5, 7 and 9).
Sposito, G. The Chemistry of Soils, 2nd Edition. Oxford University Press, ISBN 978-0-19-531369-7 (chapter 4).
Sparks, D.L. Environmental Soil Chemistry, 2nd Edition. Academic Press, ISBN 0-12-656446-9 (chapters 5, 6 and 8).
What are the most important reactions involved in the speciation of metals in the aquatic and soil environments?
What are the most important environmental parameters controlling these reactions?
In the equilibrium approach to modelling metal speciation, dominance or pe-pH diagrams are used as a visual representation of speciation. What do the lines and fields in such diagrams represent?
This page titled 3.5: Metal Speciation is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.1: Toxicokinetics
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/04%3A_Toxicology/4.01%3A_Toxicokinetics
Author: Nico van Straalen
Reviewer: Kees van Gestel
Learning objectives:
You should be able to:
Keywords: toxicodynamics, toxicokinetics, bioaccumulation, toxicokinetic rate constants, critical body residue, detoxification
Toxicology usually distinguishes between toxicokinetics and toxicodynamics. Toxicokinetics involves all processes related to uptake, internal transport and accumulation inside an organism, while toxicodynamics deals with the interaction of a compound with a receptor, the induction of defence mechanisms, damage repair and toxic effects. Of course the two sets of processes may interact; for instance, defence may feed back on uptake, and damage may change the internal transport. However, toxicokinetic analysis often just tracks the chemical itself and ignores possible toxic effects. This holds up to a critical threshold, the critical body concentration, above which effects become obvious and the normal toxicokinetic analysis is no longer valid. The assumption that toxicokinetic rate parameters are independent of the internal concentration is due to the limited amount of information that can be obtained from animals in the environment. In so-called physiologically based pharmacokinetic and pharmacodynamic (PBPK) models, by contrast, kinetics and dynamics are analyzed as an integrated whole. The use of such models is, however, mostly limited to mammals and humans.
It must be emphasized that toxicokinetics considers fluxes and rates, i.e. mg of a substance moving per unit of time from one compartment to another. Fluxes may lead to a dynamic equilibrium, i.e. an equilibrium that is due to inflow being equal to outflow; when only the equilibrium conditions are considered, this is called partitioning.
In this Chapter 4.1 we will explore the various approaches in toxicokinetics, including the fluxes of toxicants through individual organisms and through trophic levels, as well as the biological processes that determine such fluxes. We start by comparing the concentrations of toxicants between organisms and their environment (section 4.1.1), and between organisms of different trophic levels (section 4.1.6). This leads to the famous concept of bioaccumulation, one of the properties of a substance often leading to environmental problems. While in the past dilution was sometimes seen as a solution to pollution, this is not correct for bioaccumulating substances, since they may turn up at the next level of the food chain and reach an even higher concentration there. The bioaccumulation factor is one of the best-investigated properties characterizing the environmental behaviour of a substance. It may be predicted from properties of the substance such as the octanol-water partitioning coefficient.
In section 4.1.2 we discuss the classical theory of uptake-elimination kinetics using the one-compartment linear model. This theory is a crucial part of toxicological analysis. One of the first things you want to know about a substance is how quickly it enters an organism and how quickly it is removed. Since toxicity is basically a time-dependent process, the turnover rate of the internal concentration and the build-up of a residue depend upon the exposure time. An understanding of toxicokinetics is therefore critical to any interpretation of a toxicity experiment. Rate parameters may partly be predicted from substance properties, but properties of the organism play a much greater role here.
One of these is simply the body mass; the prediction of elimination rate constants from body mass is done by allometric scaling relationships, explored in section 4.1.5.
In two sections, 4.1.3 and 4.1.4, we present the biological processes that underlie the turnover of toxicants in an organism. These are very different for metals than for organic substances; hence, two separate sections are devoted to this topic, one on tissue accumulation of metals and one on defence mechanisms for organic xenobiotics.
Finally, if we understand all toxicokinetic processes, we will also be able to understand whether the concentration inside a target organ stays below, or just passes, the threshold that can be tolerated. The critical body concentration, explored in section 4.1.7, is an important concept linking toxicokinetics to toxicity.
Define and indicate the differences between:
Author: Joop Hermens
Reviewers: Kees van Gestel and Philipp Mayer
Learning objectives:
You should be able to:
Key words: Bioaccumulation, lipid content
The term bioaccumulation describes the transfer and accumulation of a chemical from the environment into an organism. For a chemical like hexachlorobenzene, the concentration in fish is more than 10,000 times higher than in water, which is a clear illustration of bioaccumulation. A chemical like hexachlorobenzene is hydrophobic and has a very low aqueous solubility. It therefore tends to escape the aqueous phase and to partition into a more lipophilic phase, such as the lipid phase in biota.
Uptake may take place from different sources. Fish mostly take up chemicals from the aqueous phase; organisms living at the sediment-water interface are exposed via the overlying water and sediment particles; organisms living in soil or sediment via pore water and by ingesting soil or sediment particles; while predators are exposed via their food. In many cases, uptake is related to more than one source. The different uptake routes are also reflected in the parameters and terminology used in bioaccumulation studies. The different parameters include the bioconcentration factor (BCF), the bioaccumulation factor (BAF), the biomagnification factor (BMF) and the biota-to-sediment or biota-to-soil accumulation factor (BSAF), defined as follows. Bioconcentration refers to uptake from the aqueous phase, bioaccumulation to uptake via both the aqueous phase and the ingestion of sediment or soil particles, while biomagnification expresses the accumulation of contaminants from food.
Please note that the bioaccumulation factor (BAF) is defined in a similar way as the bioconcentration factor (BCF), but that uptake can be from both the aqueous phase and the sediment or soil, and that the exposure concentration is usually expressed per kg dry sediment or soil. Other definitions of the BAF are possible, but we have followed the one from Mackay et al.: "The bioaccumulation factor (BAF) is defined here in a similar fashion as the BCF; in other words, BAF is CF/CW at steady state, except that in this case the fish is exposed to both water and food; thus, an additional input of chemical from dietary assimilation takes place".
All bioaccumulation factors are steady-state constants: the concentration in the organism is constant and the organism is in equilibrium with its surrounding phase. It will take time before such a steady state is reached. Steady state is reached when the uptake rate (for example from an aqueous phase) equals the elimination rate.
Models that include the factor time in describing the uptake are called kinetic models; see the section on Bioaccumulation kinetics.
Uptake of chemicals is determined by properties of both the organism and the chemical. For xenobiotic lipophilic chemicals in water, organism-specific factors usually play a minor role, and concentrations in organisms can be predicted quite well from chemical properties (see the section on Structure-property relationships). For metals, on the contrary, uptake is to a large extent determined by properties of the organism, and is a direct consequence of its mineral requirements. A chemical with low bioavailability (low uptake compared to the concentration in the exposure medium) may nevertheless accumulate to high levels when the organism is not capable of excreting or metabolising it.
Refined bioaccumulation models describe the organism as a combination of sorptive phases, characterized by the following quantities:
- overall organism-water distribution coefficient (or surrogate BCF) at a given pH
- storage lipid-water distribution ratio
- membrane lipid-water distribution ratio
- DNLOM-W: sorption coefficient to NLOM (non-lipid organic matter, for example proteins)
- fraction of storage lipids
- fraction of membrane lipids
- fraction of non-lipid organic matter (e.g. proteins, carbohydrates)
- fraction of water

Table 1. Mean PCB concentrations in algae (Dunaliella spec.), rotifers (Brachionus plicatilis) and anchovy larvae (Engraulis mordax), expressed on a dry-weight basis and on a lipid basis. From Moriarty.

Organism | Lipid content (%) | PCB concentration based on dry weight (µg g^-1) | PCB concentration based on lipid weight (µg g^-1) | BCF based on concentration in the lipid phase
algae | 6.4 | 0.25 | 3.91 | 0.48 x 10^6
rotifer | 15.0 | 0.42 | 2.80 | 0.34 x 10^6
fish (anchovy) larvae | 7.5 | 2.06 | 27.46 | 13.70 x 10^6

Abarnou, A., Robineau, D., Michel, P. Organochlorine contamination of Commerson's dolphin from the Kerguelen Islands. Oceanologica Acta 9, 19-29.
Armitage, J.M., Arnot, J.A., Wania, F., Mackay, D. Development and evaluation of a mechanistic bioconcentration model for ionogenic organic chemicals in fish. Environmental Toxicology and Chemistry 32, 115-128.
Geyer, H., Scheunert, I., Korte, F. Relationship between the lipid content of fish and their bioconcentration potential of 1,2,4-trichlorobenzene. Chemosphere 14, 545-555.
Mackay, D., Arnot, J.A., Gobas, F., Powell, D.E. Mathematical relationships between metrics of chemical bioaccumulation in fish. Environmental Toxicology and Chemistry 32, 1459-1466.
Moriarty, F. Ecotoxicology: The Study of Pollutants in Ecosystems. Academic Press, London.
Van der Heijden, S.A., Jonker, M.T.O. Intra- and interspecies variation in bioconcentration potential of polychlorinated biphenyls: are all lipids equal? Environmental Science and Technology 45, 10408-10414.
Mackay, D., Fraser, A. Bioaccumulation of persistent organic chemicals: mechanisms and models. Environmental Pollution 110, 375-391.
Van Leeuwen, C.J., Vermeire, T.G. (Eds.). Risk Assessment of Chemicals: An Introduction. Springer, Dordrecht, The Netherlands. Chapter 3.
What is the difference between BCF and BSAF?
What are the main uptake routes for a sediment organism?
Which biological factors may influence bioaccumulation?
Author: Joop Hermens, Nico van Straalen
Reviewers: Kees van Gestel, Philipp Mayer
Learning objectives:
You should be able to:
Key words: Bioaccumulation, toxicokinetics, compartment models
In the section on Bioaccumulation, the process of bioaccumulation is presented as a steady-state process. Differences in bioaccumulation between chemicals are expressed via, for example, the bioconcentration factor (BCF). The BCF represents the ratio of the chemical concentration in, for instance, a fish to the aqueous concentration in a situation where the concentrations in water and fish do not change in time:

BCF = Corg / Caq

where:
Caq = concentration in water (aqueous phase) (mg/L)
Corg = concentration in organism (mg/kg)

The unit of BCF is L/kg.
Steady state can be established in a simple laboratory set-up where fish are exposed to a chemical at a constant concentration in the aqueous phase. From the start of the exposure (time 0, or t = 0), it will take time for the chemical concentration in the fish to reach steady state, and in some cases steady state will not be established within the exposure period. In the environment, exposure concentrations may fluctuate and, in such scenarios, constant concentrations in the organism will often not be established. Steady state is reached when the uptake rate (for example from an aqueous phase) equals the elimination rate. Models that include the factor time in describing the uptake of chemicals in organisms are called kinetic models.
Toxicokinetic models for the uptake of chemicals into fish are based on a number of processes for uptake and elimination. In the case of fish, the major process of uptake is diffusion from the surrounding water via the gill to the blood. Elimination can occur via different processes: diffusion via the gill from the blood to the surrounding water, transfer to offspring or eggs by reproduction, growth (dilution), and internal degradation of the chemical (biotransformation).
Kinetic models to describe the uptake of chemicals into organisms are relatively simple, with the following assumptions:
- Rates of exchange are proportional to the concentration. The change in concentration with time (dC/dt) is related to the concentration and a rate constant (k): dC/dt = k C
- The organism consists of only one single compartment and the chemical is homogeneously distributed within the organism. For "simple" small organisms this assumption is intuitively valid, but for large fish it looks unrealistic. Still, this simple model also seems to work well for fish. To describe the internal distribution of a chemical within fish, more sophisticated kinetic models are needed, similar to the ones applied in mammalian studies. These more complex models are the "physiologically based toxicokinetic" (PBTK) models (Clewell, 1995; Nichols et al., 2004).

The accumulation process can be described as the sum of the rates of uptake and elimination:

dCorg/dt = kw Caq - ke Corg   (equation 2)

Integration of this differential equation leads to equation 4:

Corg(t) = (kw/ke) Caq (1 - e^(-ke t))   (equation 4)

Corg = concentration in organism (mg/kg)
Caq = concentration in aqueous phase (mg/L)
kw = uptake rate constant (L/kg·day)
ke = elimination rate constant (1/day)
t = time (day)
(dimensions used are: amount of chemical: mg; volume of water: L; weight of organism: kg; time: day); see box.

Box: The units of toxicokinetic rate constants
The differential equation underlying toxicokinetic analysis is basically a mass balance equation, specifying conservation of mass. A mass balance implies that the amount of chemical is expressed in absolute units such as mg. If Q is the amount in the animal and F the amount in the environmental compartment, the mass balance reads:

dQ/dt = k1' F - k2 Q

where k1' is the uptake rate constant and k2 the elimination rate constant, both with dimension time^-1. However, it is often more practical to work with the concentration in the animal (e.g. expressed in mg/kg).
This can be achieved by dividing the left and right sides of the equation by w, the body weight of the animal, and defining Cint = Q/w. In addition, we define the external concentration as Cenv = F/V, where V is the volume (L or kg) of the environmental compartment. This leads to the following formulation of the differential equation:

dCint/dt = k1' (V/w) Cenv - k2 Cint

Beware that Cenv is measured in other units (mg per kg of soil, or mg per litre of water) than Cint (mg per kg of animal tissue). To get rid of the awkward factor V/w it is convenient to define a new rate constant, k1:

k1 = k1' V/w

This is the uptake rate constant usually reported in scientific papers. Note that it has other units than k1': it is expressed as kg of soil per kg of animal tissue per time unit (kg kg^-1 h^-1), and in the case of water exposure as L kg^-1 h^-1. The dimension of k2 remains the same whether mass or concentrations are used (time^-1). We also learn from this analysis that, when dealing with concentrations, the body weight of the animal must remain constant.
Moriarty, F. Persistent contaminants, compartmental models and concentration along food-chains. Ecological Bulletins 36, 35-45.
Skip, B., Bednarska, A.J., Laskowski, R. Toxicokinetics of metals in terrestrial invertebrates: making things straight with the one-compartment principle. PLoS ONE 9, e108740.

Equation 4 describes the whole accumulation process; the corresponding uptake curve follows from it. The concentration in the organism is the result of the net process of uptake and elimination. In the initial phase of the accumulation process, elimination is negligible and the increase of the concentration in the organism is given by:

Corg(t) = kw Caq t

After longer exposure times, elimination becomes more substantial and the uptake curve starts to level off. At some point, the uptake rate equals the elimination rate and the ratio Corg/Caq becomes constant. This is the steady-state situation. The constant Corg/Caq at steady state is called the bioconcentration factor (BCF). Mathematically, the BCF can also be calculated as kw/ke. This follows directly from equation 4: after long exposure times (large t), e^(-ke t) approaches 0, leading to:

Corg/Caq = kw/ke = BCF

Elimination is often measured following an uptake experiment. After the organisms have reached a certain concentration, the fish are transferred to a clean environment and the concentration in the organism decreases in time. Because this is also a first-order kinetic process, the elimination rate depends on the concentration in the organism (Corg) and the elimination rate constant (ke) (equation 8):

dCorg/dt = -ke Corg   (equation 8)

The concentration will decrease exponentially in time (equation 9):

Corg(t) = Corg(t=0) e^(-ke t)   (equation 9)

Concentrations are often transformed to natural logarithms (ln Corg), because this results in a linear relationship with slope -ke (equation 10 and figure 3B):

ln Corg(t) = ln Corg(t=0) - ke t   (equation 10)

where Corg(t=0) is the concentration in the organism when the elimination phase starts.
The half-life (T1/2 or DT50) is the time needed to eliminate half the amount of chemical from the compartment. The relationship between ke and T1/2 is: T1/2 = (ln 2)/ke. The half-life increases when ke decreases.
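The one-compartment equations above are easy to evaluate numerically. The sketch below implements equation 4 and the half-life relationship for a hypothetical chemical; the rate constants and water concentration are illustrative values, not measured data.

```python
import numpy as np

def c_org(t, caq, kw, ke):
    # Equation 4: one-compartment uptake from water
    # Corg(t) = (kw/ke) * Caq * (1 - exp(-ke*t))
    return (kw / ke) * caq * (1.0 - np.exp(-ke * t))

kw, ke, caq = 100.0, 0.05, 0.01   # L/kg/day, 1/day, mg/L (hypothetical values)
bcf = kw / ke                     # steady-state BCF = 2000 L/kg
t_half = np.log(2) / ke           # time to reach 50% of steady state, ~13.9 days

t = np.array([1, 7, 14, 28, 100])
print("BCF:", bcf, "  T1/2 (d):", round(t_half, 1))
print("Corg (mg/kg):", np.round(c_org(t, caq, kw, ke), 2))
```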
Very often, organisms cannot be considered as one compartment, but rather as two or even more compartments. Deviations from the one-compartment system are usually seen when elimination does not follow the expected exponential pattern: no linear relationship is obtained after logarithmic transformation. The typical trend of elimination in a two-compartment system is a decrease in concentration (on a logarithmic scale) in two phases: phase I with a relatively fast decrease and phase II with a relatively slow decrease. According to the linear compartment theory, elimination may then be described as the sum of two (or more) exponential terms:

Corg(t) = Corg(t=0) [F(I) e^(-ke(I) t) + F(II) e^(-ke(II) t)]

where
ke(I) and ke(II) represent the elimination rate constants for compartments I and II, and
F(I) and F(II) are the sizes of the compartments (as fractions).

A typical example of a two-compartment system is the combination of a well-perfused organ, such as the liver, and fat tissue. Elimination from fat tissue is often slower than from, for example, the liver: the liver is a well-perfused organ, while the exchange between lipid tissue and blood is much more limited. That explains the faster elimination from the liver.
As an illustration, consider the uptake curves for two different chemicals and the corresponding kinetic parameters. Chemical 2 has a BCF of 1000, chemical 1 a BCF of 10,000. The uptake rate constants (kw) are the same, which is often the case for organic chemicals. Half-lives (time to reach 50% of the steady-state level) are 14 and 140 hours. This makes sense, because it will take a longer time to reach steady state for a chemical with a higher BCF. The elimination rate constants also differ by a factor of 10.
In figure 6, uptake curves are presented for one chemical, but in two organisms of different size/weight. Organism 1 is much smaller than organism 2 and reaches steady state much earlier. T1/2 values for the chemical in organisms 1 and 2 are 14 and 140 hours, respectively. The small size explains this fast equilibration. Rates of uptake depend on the surface-to-volume ratio (S/V) of an organism, which is much higher for a small organism. Therefore, kinetics in small organisms are faster, resulting in shorter equilibration times. The effect of size on kinetics is discussed in more detail in Hendriks et al. and in the section on Allometric relationships.
In equation 2, elimination only includes gill elimination. If other processes such as biotransformation and growth are considered, the equation can be extended to include these additional processes (equation 12, in which km and kg denote the rate constants for biotransformation and growth dilution):

dCorg/dt = kw Caq - (ke + km + kg) Corg   (equation 12)

For organisms living in soil or sediment, different routes of uptake may be important: dermal (across the skin) or oral (by ingestion of food and/or soil or sediment particles). Mathematically, the uptake by an organism in sediment can be described as in equation 13:

dCorg/dt = kw Caq + ks Cs - ke Corg   (equation 13)

Corg = concentration in organism (mg/kg)
Caq = concentration in aqueous phase (mg/L)
Cs = concentration in soil or sediment (mg/kg)
kw = uptake rate constant from water (L/kg/day)
ks = uptake rate constant from soil or sediment (kg soil/kg organism/day)
ke = elimination rate constant (1/day)
t = time (day)
(dimensions used are: amount of chemical: mg; volume of water: L; weight of organism: kg; time: day)

In this equation, kw and ks are the uptake rate constants from water and sediment, ke is the elimination rate constant, and Caq and Cs are the concentrations in water and sediment or soil. For soil organisms such as earthworms, oral uptake appears to become more important with increasing hydrophobicity of the chemical (Jager et al., 2003). This is because the concentration in soil (Cs) will become higher than the porewater concentration (Caq) for the more hydrophobic chemicals (see the section on Sorption).
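Equation 13 can be explored numerically as well. The sketch below integrates it with a simple Euler scheme for a hypothetical earthworm exposed via porewater and soil ingestion, and compares the result with the analytical steady state (kw Caq + ks Cs)/ke; all parameter values are illustrative.

```python
def simulate_uptake(caq, cs, kw, ks, ke, t_end=200.0, dt=0.1):
    # Euler integration of equation 13: dCorg/dt = kw*Caq + ks*Cs - ke*Corg
    corg = 0.0
    for _ in range(int(t_end / dt)):
        corg += (kw * caq + ks * cs - ke * corg) * dt
    return corg

# Hypothetical earthworm scenario: porewater and soil exposure
caq, cs = 0.001, 2.0           # mg/L, mg/kg soil
kw, ks, ke = 50.0, 0.02, 0.1   # L/kg/d, kg soil/kg worm/d, 1/d

c_end = simulate_uptake(caq, cs, kw, ks, ke)
c_ss = (kw * caq + ks * cs) / ke   # analytical steady state
print(f"Corg after 200 d: {c_end:.3f}; steady state: {c_ss:.3f} mg/kg")
```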
Clewell, H.J., 3rd. The application of physiologically based pharmacokinetic modeling in human health risk assessment of hazardous substances. Toxicology Letters 79, 207-217.
Hendriks, A.J., van der Linde, A., Cornelissen, G., Sijm, D. The power of size. 1. Rate constants and equilibrium ratios for accumulation of organic substances related to octanol-water partition ratio and species weight. Environmental Toxicology and Chemistry 20, 1399-1420.
Jager, T., Fleuren, R., Hogendoorn, E.A., De Korte, G. Elucidating the routes of exposure for organic chemicals in the earthworm, Eisenia andrei (Oligochaeta). Environmental Science and Technology 37, 3399-3404.
Nichols, J.W., Fitzsimmons, P.N., Whiteman, F.W. A physiologically based toxicokinetic model for dietary uptake of hydrophobic organic compounds by fish - II. Simulation of chronic exposure scenarios. Toxicological Sciences 77, 219-229.
Van Leeuwen, C.J., Vermeire, T.G. (Eds.). Risk Assessment of Chemicals: An Introduction. Springer, Dordrecht, The Netherlands.
What are the assumptions in a one-compartment kinetic model?
How can you identify when more compartments are involved in a kinetic model?
Give examples of compartments in a multi-compartment system.
Why is equilibrium reached faster in a small organism compared to a bigger organism?
Which two methods can be applied to estimate the bioconcentration factor of a chemical in an organism?
Author: Nico M. van Straalen
Reviewers: Philip S. Rainbow, Henk Schat
Learning objectives:
You should be able to:
Key words: Metal-binding proteins; phytochelatin; metallothionein
The issue of metal speciation, which is crucially important for understanding metal fate in the environment, is equally important for the internal distribution of metals in organisms and their toxicity inside the cell. Many metals tend to accumulate in specific organs, for example the hepatopancreas of crustaceans, the chloragogen tissue of annelids, and the kidney of mammals. In addition, there is often a specific organ or tissue where dysfunction or toxicity is first observed: in the human body, the primary effects of chronic exposure to mercury are seen in the brain, of lead in bone marrow, and of cadmium in the kidney. This module is aimed at increasing the insight into the different mechanisms by which metals accumulate in biological tissues.
Metals are present in biological tissues in a large variety of chemical forms: as the free metal ion, as various inorganic species with widely varying solubility, such as chlorides or carbonates, and as metal species bound to all kinds of low-molecular and high-molecular weight biotic ligands. The free metal ion is considered the species most relevant to toxicity.
To explain the affinities of metals for specific targets, a system has been proposed based on the physical properties of the ion; according to this system, metals are divided into "oxygen-seeking" metals (class A, e.g. lithium, beryllium, calcium and lanthanum) and "sulfur-seeking" metals (class B, e.g. silver, mercury and lead) (see the section on Metals and metalloids). However, most of the metals of environmental relevance fall into an intermediate class, called "borderline" (chromium, cadmium, copper, zinc, etc.). This classification is to some extent predictive of the binding of metals to specific cellular targets, such as SH-groups in proteins, nitrogen in histidine, or carbonates in bone tissue.
Not only do metals differ enormously in their physicochemical properties, the organisms themselves also differ widely in the way they deal with metals. The type of ligand to which a metal is bound, and how this ligand is transported or stored in the body, determines to a great extent where the metal will accumulate and cause toxicity.
Sensitive targets or critical biochemical processes differ between species, and this may also lead to differential toxicity.
Many tissues contain "mineral concretions", that is, granules with a specific mineral composition that, due to the nature of the mineral, attract different metals. Especially the gut epithelium of invertebrates, and their digestive glands (hepatopancreas, midgut gland, chloragogen tissue), may be full of such concretions. Four classes of granules are distinguished (Hopkin, 1989):
- type A granules, consisting of concentric layers of calcium and magnesium phosphates, which may also bind metals such as zinc and manganese;
- type B granules, sulfur-rich granules binding metals such as cadmium, copper, mercury and zinc;
- type C granules, iron-rich granules derived from ferritin;
- type D granules, consisting of calcium carbonate.
The type B granules are assumed to be lysosomal vesicles that have absorbed metal-loaded peptides such as metallothionein or phytochelatin, and have developed into inorganic granules by degrading almost all organic material; the high sulfur content derives from the cysteine residues in the peptides.
Tissues or cells that specialize in the synthesis of intracellular granules are also the places where metals tend to accumulate. Well-known are the "S cells" in the hepatopancreas of isopods. These cells (small cells, B-type cells sensu Hopkin 1989) contain very large amounts of copper. Most likely the large stores of copper in woodlice and other crustaceans relate to their use of hemocyanin, a copper-dependent protein, as an oxygen-transporting molecule. Similar tissues with high loadings of mineral concretions have been described for earthworms, snails, collembolans and insects.
The second class of metal-binding ligands is of an organic nature. Many plants, but also several animals, synthesize a peptide called phytochelatin (PC). This is an oligomer derived from glutathione, with the three amino acids γ-glutamic acid, cysteine and glycine arranged in the following way: (γ-glu-cys)n-gly, where n can vary from 2 to 11. The thiol groups of several cysteine residues are involved in metal binding.
The other main organic ligand for metals is metallothionein (MT). This is a low-molecular weight protein with hydrophilic properties and an unusually large number of cysteine residues. Several cysteines (usually nine or ten) can bind a number of metal ions (e.g. four or five) in one cluster. There are two such clusters in the vertebrate metallothionein. Metallothioneins occur throughout the tree of life, from bacteria to mammals, but the amino acid sequence, domain structure and metal affinities vary enormously, and it is doubtful whether they represent a single evolutionarily homologous group.
In addition to these two specific classes of organic ligands, MT and PC, metals will also bind aspecifically to all kinds of cellular constituents, such as cell wall components, albumen in the blood, etc. Often this represents the largest store of metals; such aspecific binding sites will constantly deliver free metal ions to the cellular pool and so are the most important cause of toxicity. Of course, metals are also present in molecules with specific metal-dependent functions, such as iron in hemoglobin, copper in hemocyanin, and zinc in carbonic anhydrase.
The distinction between inorganic and organic ligands is not as strict as it may seem. After binding to metallothionein or phytochelatin, metals may be transferred to a more permanent storage compartment, such as the intracellular granules mentioned above, or they may be excreted.
Free metal ions are strong inducers of stress response pathways. This can be due to the metal ion itself, but more often the stress response is triggered by a metal-induced disturbance of the redox state, i.e. an induction of oxidative stress.
The stress response often involves the synthesis of metal-binding ligands such as phytochelatin and metallothionein. Because this removes metal ions from the active pool, it is also called metal scavenging.
The binding capacity of phytochelatin is enhanced by activation of the enzyme phytochelatin synthase (PC synthase). According to one model of its action, the C-terminus of the enzyme has a "metal sensor" consisting of a number of cysteines with free SH-groups. Any metal ions reacting with this nucleophilic centre (and cadmium is a strong reactant) will activate the enzyme, which then catalyzes the reaction from (γ-glu-cys)n-gly to (γ-glu-cys)n+1-gly, thus increasing the binding capacity of cellular phytochelatin. This reaction of course relies on the presence of sufficient glutathione in the cell. In plants the PC-metal complex is transported into the central vacuole, where it can be stabilized through incorporation of acid-labile sulfide (S2-). The PC moiety is degraded, resulting in the formation of inorganic metal-sulfide crystallites. Alternatively, complexes of metals with organic acids may be formed (e.g. citrates or oxalates). The fate of metal-loaded PC in animal cells is not known, but it might be absorbed into the lysosomal system to form B-type granules (see above).
The upregulation of metallothionein (MT) occurs in a quite different manner, since it depends on de novo synthesis of the apoprotein. It is a classic example of gene regulation contributing to protection of the cell. In a wide variety of animals, including vertebrates and invertebrates, metallothionein genes (Mt) are activated by a transcription factor called metal-responsive transcription factor 1 (MTF-1). MTF-1 binds to so-called metal-responsive elements (MREs) in the promoter of Mt. MREs are short motifs with a characteristic base-pair sequence that form the core of a transcription factor binding site in the DNA. Under normal physiological conditions MTF-1 is inactive and unable to induce Mt. However, it may be activated by Zn2+ ions, which are released from unspecified ligands by metals such as cadmium that can replace zinc.
It must be emphasized that the model discussed above is inspired by work on vertebrates. Arthropods (Drosophila, Orchesella, Daphnia) could have a similar mechanism, since they also have an MTF-1 homolog that activates Mt; however, the situation for other invertebrates such as annelids and gastropods is unclear: their Mt genes seem to lack MREs, despite being inducible by cadmium. In addition, the variability of metallothioneins in invertebrates is extremely large, and not all metal-binding proteins may be orthologs of the vertebrate metallothionein. In snails, a cadmium-binding, cadmium-induced MT functions alongside a copper-binding MT, while the two MTs have different tissue distributions and are also regulated quite differently.
While both phytochelatin and metallothionein sequester essential as well as non-essential metals (e.g. Cd) and so contribute to detoxification, the widespread presence of these systems throughout the tree of life suggests that they did not evolve primarily to deal with anthropogenic metal pollution. The very strong inducibility of these systems by non-essential elements like cadmium may be considered a side-effect of a different primary function, for example regulation of the cellular redox state or binding of essential metals.
Any tissue-specific accumulation of metals can be explained by the turnover of metal-binding ligands.
For example, accumulation of cadmium in the mammalian kidney is due to the fact that metallothionein loaded with cadmium cannot be excreted. High concentrations of metals in the hind segments of earthworms are due to the presence of "residual bodies" which are fully packed with intracellular granules. Accumulation of cadmium in the human prostate is due to the high concentration of zinc citrate in this organ, which serves to protect contractile proteins in sperm tails from oxidation; cadmium presumably enters the prostate through zinc transporters.

It is often stated that essential metals are subject to regulatory mechanisms, which would imply that their body burden is constant over a large range of external exposures. However, not all "essential" metals are regulated to the extent that the whole-body concentration is kept constant. Many invertebrates have body compartments associated with the gut (midgut gland, hepatopancreas, Malpighian tubules) in which metals, often in the form of mineral concretions, are inactivated and stored permanently or exchanged very slowly with the active pool. Since these compartments are outside the reach of regulatory mechanisms but usually not separated in whole-body metal analysis, the body burden as a whole is not constant. Some invertebrates even carry a "backpack" of metals accumulating over life. This holds, e.g., for zinc in barnacles, copper in isopods and zinc in earthworms.

Accumulation of metals may lead to toxicity when the critical binding or excretion capacity is exhausted and metal ions start binding aspecifically to cellular constituents. The organ in which this happens first is called the target organ. The total metal concentration at which toxicity starts to become apparent is called the critical body concentration (CBC) or critical tissue concentration. For example, the critical concentration for cadmium in kidney, above which renal damage is observed to occur, is estimated to be 50 μg/g. A list of critical organs for metals in the human body is given in Table 1.

The concept of CBC assumes that the complete metal load in an organ is in equilibrium with the active fraction causing toxicity and that there is no permanent storage pool. In the case of storage detoxification, the body burden at which toxicity appears will depend on the accumulation history.

Table 1. Critical organs for chronic toxicity of metals in the human body

Metal or metalloid | Critical organ | Symptoms
Al | Brain | Alzheimer's disease
As | Lung, liver, heart, gut | Multisystem energy disturbance
Cd | Kidney, liver | Kidney damage
Cr | Skin, lung, gut | Respiratory system damage
Cu | Liver | Liver damage
Hg | Brain, liver | Mental illness
Ni | Skin, kidney | Allergic reaction, kidney damage
Pb | Bone marrow, blood, brain | Anemia, mental retardation

Cobbett, C., Goldsbrough, P. (2002). Phytochelatins and metallothioneins: roles in heavy metal detoxification and homeostasis. Annual Review of Plant Biology 53, 159-182.
Dallinger, R., Berger, B., Hunziker, P., Kägi, J.H.R. (1997). Metallothionein in snail Cd and Cu metabolism. Nature 388, 237-238.
Dallinger, R., Höckner, M. (2013). Evolutionary concepts in ecotoxicology: tracing the genetic background of differential cadmium sensitivities in invertebrate lineages. Ecotoxicology 22, 767-778.
Haq, F., Mahoney, M., Koropatnick, J. (2003). Signaling events for metallothionein induction. Mutation Research 533, 211-226.
Hopkin, S.P. (1989). Ecophysiology of Metals in Terrestrial Invertebrates. Elsevier Applied Science, London.
Nieboer, E., Richardson, D.H.S. (1980).
The replacement of the nondescript term "heavy metals" by a biologically and chemically significant classification of metal ions. Environmental Pollution Series B 1, 3-26.
Rainbow, P.S. (2002). Trace metal concentrations in aquatic invertebrates: why and so what? Environmental Pollution 120, 497-507.

Mention the four main inorganic cellular constituents that bind metals, their main elemental composition, and indicate which metals are usually bound in each structure.

How can the intracellular levels of metal-scavenging molecules such as metallothionein (MT) and phytochelatin (PC) be adjusted to counteract the possible adverse effects of free metal ions? Describe the molecular mechanism for metallothionein and phytochelatin separately.

Author: Nico M. van Straalen
Reviewers: Timo Hamers, Cristina Fossi
Learning objectives: You should be able to:
Keywords: biotransformation; phase I; phase II; phase III; excretion; metabolic activation; cytochrome P450

All organisms are equipped with metabolic defence mechanisms to deal with foreign compounds. The reactions involved, jointly called biotransformation, can be divided into three phases, and usually aim to increase water solubility and excretion. The first step (phase I) is catalyzed by cytochrome P450, which is followed by a variety of conjugation reactions (phase II) and excretion (phase III). The enzymes and transporters involved are often highly inducible, i.e. the amount of protein is greatly enhanced by the xenobiotic compounds themselves. The induction involves binding of the compound to cytoplasmic receptor proteins, such as the arylhydrocarbon receptor (AhR) or the constitutive androstane receptor (CAR). In some cases the intermediate metabolites produced in phase I are extremely reactive and a main cause of toxicity; a well-known example is the metabolic activation of polycyclic aromatic hydrocarbons such as benzo(a)pyrene, which readily forms DNA adducts and causes cancer. In addition, some compounds greatly induce metabolizing enzymes but are hardly degraded by them, and so cause chronic cellular stress. The various biotransformation reactions are a crucial aspect of both toxicokinetics and toxicodynamics of xenobiotics.

The term "xenobiotic" ("foreign to biology") is generally used to indicate a chemical compound that does not normally have a metabolic function. We will use the term extensively in this module, despite the fact that it is somewhat problematic (e.g., can a compound be considered "foreign" if it circulates in the body and is metabolized or degraded there? And what is "foreign" for one species is not necessarily "foreign" for another species).

The body has an extensive defence system to deal with xenobiotic compounds, loosely designated as biotransformation. The ultimate result of this system is excretion of the compound in some form or another. However, many xenobiotics are quite lipophilic, tend to accumulate, and are not easily excreted due to low water solubility. Molecular modifications are usually required before such compounds can be removed from the body, as the main circulatory and excretory systems (blood, urine) are water-based. By introducing hydrophilic groups into the molecule (-OH, =O, -COOH) and by conjugating it to an endogenous compound with good water solubility, excretion is usually accomplished. However, as we will see below, intermediate metabolites may have enhanced reactivity, and it often happens that a compound becomes more toxic while being metabolized.
In the case of pesticides, deliberate use is made of such responses to increase the toxicity of an insecticide once it is in the target organism.

The study of xenobiotic metabolism is a classical subject not only in toxicology but also in pharmacology. The mode of action of a drug often depends critically on the rate and mode of metabolism. Also, many drugs show toxic side-effects as a consequence of metabolism. Finally, xenobiotic metabolism is also studied extensively in entomology, as both the toxicity of and resistance to pesticides are often mediated by metabolism.

The most problematic xenobiotics are those with a high octanol-water partition coefficient (Kow) that are strongly lipophilic and very hydrophobic. They tend to accumulate, in proportion to their log Kow, in tissues with a high lipid content such as the subcutis of vertebrates, and may cause tissue damage due to disturbance of membrane functions. This mode of action is called "minimum toxicity". Well-known examples are low-molecular weight aliphatic petroleum compounds and chlorinated alkanes such as chloroform. These compounds cause their primary damage to cell membranes; especially neurons are sensitive to this effect, hence minimum toxicity is also called narcotic toxicity. Lipophilic chemicals with high log Kow do not reach concentrations high enough to cause minimum toxicity, because they induce biotransformation at lower concentrations. Their toxicity is then usually due to a reactive metabolite.

Xenobiotic metabolism involves three subsequent phases: phase I, in which a reactive hydrophilic group is introduced into the molecule; phase II, in which the (activated) compound is conjugated to an endogenous, water-soluble metabolite; and phase III, in which the conjugate is exported from the cell and excreted.

Cytochrome P450 is a membrane-bound enzyme, associated with the smooth endoplasmic reticulum. It carries a porphyrin ring containing an Fe atom, which is the active center of the molecule. The designation P450 derives from the fact that the reduced enzyme shows an absorption maximum at 450 nm when bound to carbon monoxide (an inhibitor), a now outdated method to demonstrate its presence. Other (outdated) names are MFO (mixed-function oxygenase) and drug-metabolizing enzyme complex. Cytochrome P450 is encoded by a gene called CYP, of which there are many paralogs in the genome, all slightly differing from each other in terms of inducibility and substrate specificity. Three classes of CYP genes are involved in biotransformation, designated CYP1, CYP2 and CYP3 in vertebrates. Each class has several isoforms; the human genome has 57 different CYP genes in total. The CYP complement of invertebrates and plants often involves even more genes; many evolutionary lineages have their own set, arising from extensive gene duplications within that lineage. In humans, the genetic complement of a person's CYP genes is highly relevant to their drug-metabolizing profile (see the section on Genetic variation in toxicant metabolism).

Cytochrome P450 operates in conjunction with an enzyme called NADPH cytochrome P450 reductase, which consists of two flavoproteins, one containing flavin adenine dinucleotide (FAD), the other flavin mononucleotide (FMN). The reduced Fe2+ atom in cytochrome P450 binds molecular oxygen and is oxidized to Fe3+ while splitting O2; one O atom is introduced into the substrate, the other reacts with hydrogen to form water. The enzyme is then reduced again by accepting an electron from cytochrome P450 reductase.
The overall reaction can be written as:

RH + O2 + NADPH + H+ → ROH + H2O + NADP+

where R is an arbitrary substrate.

Cytochrome P450 is expressed to a great extent in hepatocytes (liver cells), the liver being the main organ for xenobiotic metabolism in vertebrates, but it is also present in the epithelia of the lung and the intestine. In insects the activity is particularly high in the Malpighian tubules, in addition to the gut and the fat body. In mollusks and crustaceans the main metabolic organ is the hepatopancreas.

After activation by cytochrome P450, the oxidized substrate is ready to be conjugated to an endogenous compound, e.g. a sulphate, glucose, glucuronic acid or glutathione group. These reactions are conducted by a variety of different enzymes, some of which reside in the sER like P450, while others are located in the cytoplasm of the cell. Most of them transfer a hydrophilic group, available from intermediate metabolism, to the substrate; hence the enzymes are called transferases. Usually the compound becomes more polar in phase II; however, not all phase II reactions increase water solubility. For example, methylation (by methyl transferase) decreases reactivity but increases apolarity. Other phase II reactions are conjugation with glutathione, conducted by glutathione-S-transferase (GST), and with glucuronic acid, conducted by UDP-glucuronyl transferase. In invertebrates other conjugations may dominate; e.g. in arthropods and plants, conjugation with malonyl glucose is a common reaction, which is not seen in vertebrates.

Conjugation with glutathione in the human body is often followed by splitting off glutamic acid and glycine, leaving only the cysteine residue on the substrate. The cysteine is subsequently acetylated, thus forming a so-called mercapturic acid. This is the most common type of metabolite for many xenobiotics excreted in urine by humans.

Like cytochrome P450, the phase II enzymes exist in various isoforms, encoded by different paralogs in the genome. Especially the GST family is quite extensive, and polymorphisms in these genes contribute significantly to the personal metabolic profile (see the section on Genetic variation in toxicant metabolism).

In the human body there are two main pathways for excretion: one from the liver into the bile (and further into the gut and the faeces), the other through the kidney and urine. These two pathways are used by different classes of xenobiotics. Very hydrophobic compounds such as high-molecular weight polycyclic aromatic hydrocarbons are still not readily soluble in water even after metabolism, but can be emulsified by bile salts and excreted in this way. It sometimes happens that such compounds, once arriving in the gut, are assimilated again, transported to the liver by the portal vein and metabolized again. This is called "entero-hepatic circulation". Lower molecular weight compounds and hydrophilic compounds are excreted through urine. Volatile compounds can leave the body through the skin and exhaled air.

Excretion of activated and conjugated compounds out of the cell usually requires active transport, which is mediated by ABC (ATP-binding cassette) transporters, a very large and diverse family of membrane proteins which have in common a binding cassette for ATP. Different subgroups of ABC transporters transport different types of chemicals, e.g. positively charged hydrophobic molecules, neutral molecules and water-soluble anionic compounds.
One well-known group consists of the multidrug resistance proteins or P-glycoproteins. These transporters export drugs intended to attack tumor cells. Because their activity is highly inducible, these proteins can enhance excretion enormously, making the cell effectively resistant, and thus cause major problems for cancer treatment.

All enzymes of xenobiotic metabolism are highly inducible: their activity is normally at a low level but is greatly enhanced in the presence of xenobiotics. This is achieved through a classic case of transcriptional regulation of CYP and other genes, leading to de novo synthesis of protein. In addition, extensive proliferation of the endoplasmic reticulum may occur, and in extreme cases even swelling of the liver (hepatomegaly).

The best investigated pathway for transcriptional activation of CYP genes runs through the arylhydrocarbon receptor (AhR). Under normal conditions this protein is stabilized in the cytoplasm by heat-shock proteins; however, when a xenobiotic compound binds to AhR, it is activated and can join with another protein called the Ah receptor nuclear translocator (ARNT) to translocate to the nucleus and bind to DNA elements present in the promoter of CYP and other genes. It thus acts as a transcriptional activator or transcription factor on these genes. The DNA motifs to which AhR binds are called xenobiotic-responsive elements (XREs) or dioxin-responsive elements (DREs). The compounds acting in this manner are called 3-MC-type inducers, after the (highly carcinogenic) model compound 3-methylcholanthrene. The inducing capacity of a compound is related to its binding affinity to the AhR, which in itself is determined by the spatial structure of the molecule. The lock-and-key fit between AhR and xenobiotics explains why induction of biotransformation by xenobiotics shows a very strong stereospecificity. For example, among the chlorinated biphenyls and chlorinated dibenzodioxins, some compounds are extremely strong inducers of CYP1 genes, while others, even with the same number of chlorine atoms, are no inducers at all. The precise position of the chlorine atoms determines the molecular "fit" in the Ah receptor (see the section on Receptor interaction).

In addition to 3-MC-type induction there are other modes in which biotransformation enzymes are induced, but these are less well known. A common class is PB-type induction (named after another model compound, phenobarbital). PB-type induction is not AhR-dependent, but acts through activation of another nuclear receptor, called the constitutive androstane receptor (CAR). This receptor activates CYP2 genes and some CYP3 genes.

The high inducibility of biotransformation can be exploited in a reverse manner: if biotransformation is seen to be highly upregulated in a species living in the environment, this indicates that that species is being exposed to xenobiotic compounds. Assays addressing cytochrome P450 activity can therefore be exploited in bioindication and biomonitoring systems. The EROD (ethoxyresorufin-O-deethylase) assay is often used for this purpose, although it is not 100% specific to a single P450 isoform. Another approach is to address CYP expression directly, e.g. through reverse transcription-quantitative PCR, a method to quantify the amount of CYP mRNA.

Although the main aim of xenobiotic metabolism is to detoxify and excrete foreign compounds, some pathways of biotransformation actually enhance toxicity. This is mostly due to the first step, activation by cytochrome P450.
The activation may lead to intermediate metabolites which are highly reactive and the actual cause of toxicity. The best investigated examples are due to bioactivation of polycyclic aromatic hydrocarbons (PAHs), a group of chemicals present in diesel, soot, cigarette smoke and charred food products. Many of these compounds, e.g. benzo(a)pyrene, benz(a)anthracene and 3-methylcholanthrene, are not reactive or toxic as such, but are activated by cytochrome P450 to extremely reactive molecules. Benzo(a)pyrene, for instance, is activated to a diol-epoxide, which readily binds to DNA, especially to the free amino group of guanine. The complex is called a DNA adduct; the double helix is locally disrupted and this may result in a mutation. If this happens in an oncogene, a tumor may develop (see the section on Carcinogenesis and genotoxicity).

Not all PAHs are carcinogenic. Their activity critically depends on the spatial structure of the molecule, which again determines its "fit" in the Ah receptor. PAHs with a "notch" (often called a bay region) in the molecule tend to be stronger carcinogens than compounds with a symmetric (round or linear) molecular structure.

Another mechanism for biotransformation-induced toxicity is due to some very recalcitrant organochlorine compounds such as polychlorinated dibenzodioxins (PCDDs, or dioxins for short) and polychlorinated biphenyls (PCBs). Some of these compounds are very potent inducers of biotransformation, but they are hardly degraded themselves. The consequence is that the highly upregulated cytochrome P450 activity continues to generate large amounts of reactive oxygen species (ROS), causing oxidative stress and damage to cellular constituents. It is assumed that the chronic toxicity of 2,3,7,8-tetrachlorodibenzo(para)dioxin (TCDD), one of the most toxic compounds emitted by human activity, is due to its high capacity to induce prolonged oxidative stress. On the molecular level there is a close link between oxidative stress and biotransformation activity. Many toxicants that primarily induce oxidative stress (e.g. cadmium) also upregulate CYP enzymes. The two defence mechanisms, oxidative stress defence and biotransformation, are part of the same integrated stress defence system of the cell.

Bui, P.H., Hsu, E.L., Hankinson, O. Fatty acid hydroperoxides support cytochrome P450 2S1-mediated bioactivation of benzo[a]pyrene-7,8-dihydrodiol. Molecular Pharmacology 76, 1044-1052.
Stroomberg, G.J., Zappey, H., Steen, R.J.C.A., Van Gestel, C.A.M., Ariese, F., Velthorst, N.H., Van Straalen, N.M. PAH biotransformation in terrestrial invertebrates - a new phase II metabolite in isopods and springtails. Comparative Biochemistry and Physiology Part C 138, 129-137.
Timbrell, J.A. Principles of Biochemical Toxicology. Taylor & Francis Ltd, London.
Van Straalen, N.M., Roelofs, D. An Introduction to Ecological Genomics, 2nd Ed. Oxford University Press, Oxford.
Vermeulen, N.P.E., Van den Broek, J.M. Opname en verwerking van chemicaliën in de mens. Chemisch Magazine, maart, 167-171.

Describe the main processes involved in each of the three general phases of xenobiotic metabolism, as well as the molecular systems involved.

The upregulation of cytochrome P450 activity is a classical example of regulation through enhanced de novo synthesis of enzyme. The mechanism is known in quite some detail and involves a variety of components.
Describe the involvement of the following components: the arylhydrocarbon receptor (AhR), heat-shock proteins, the Ah receptor nuclear translocator (ARNT), and xenobiotic-responsive elements (XREs).

In the past, the activity of cytochrome P450 was often assessed using a synthetic fluorescent substrate, resorufin; activity was measured as ethoxyresorufin-O-deethylase (EROD). Discuss the advantages and disadvantages of the use of such an assay in environmental risk assessment.

Author: A. Jan Hendriks
Reviewers: Nico van den Brink, Nico van Straalen
Learning objectives: You should be able to
Keywords: body size, biological properties, scaling, cross-species extrapolation, size-related uptake kinetics

Globally, more than 100,000,000 chemicals have been registered. In the European Union more than 100,000 compounds are awaiting risk assessment to protect ecosystem and human health, while 1,500,000 contaminated sites potentially require clean-up. Likewise, 8,000,000 species, of which 10,000 are endangered, need protection worldwide, with one lost per hour (Hendriks, 2013). Because of financial, practical and ethical (animal welfare) constraints, empirical studies alone cannot cover so many substances and species, let alone their combinations. Consequently, the traditional approach of ecotoxicological testing is gradually supplemented or replaced by modelling approaches. Environmental chemists and toxicologists have long developed relationships allowing extrapolation across chemicals. Nowadays, so-called Quantitative Structure-Activity Relationships (QSARs) provide accumulation and toxicity estimates for compounds based on their physical-chemical properties. For instance, bioaccumulation factors and median lethal concentrations have been related to molecular size and octanol-water partitioning, characteristic properties of a chemical that are usually available from its industrial production process.

In analogy with the QSAR approach in environmental chemistry, the question may be asked whether it is possible to predict toxicological, physiological and ecological characteristics of species from biological traits, especially traits that are easily measured, such as body size. This approach has gone under the name "Quantitative Species Sensitivity Relationships" (QSSR) (Notenboom, 1995).

Among the various traits available, body size is of particular interest. It is easily measured, and a large part of the variability between organisms can be explained from body size, with r² > 0.5. Not surprisingly, body size also plays an important role in toxicology and pharmacology. For instance, toxic endpoints, such as LC50s, are often expressed per kg body weight. Recommended daily intake values assume a "standard" body weight, often 60 kg. Yet adult humans can differ in body weight by a factor of 3, and the difference between mouse and human is even larger. Here it will be explored how body-size relationships, which have been studied in comparative biology for a long time, affect extrapolation in toxicology and can be used to extrapolate between species.

Do you expect a 10⁴ kg elephant to eat 10⁴ times more than a 1 kg rabbit per day? Or less, or more? On being asked, most people intuitively come up with the right answer. Indeed, daily consumption by the proboscidean is less than 10⁴ times that of the rodent. Consequently, the amount of food or water used per kilogram of body weight of the elephant is less than that of the rabbit. Yet, how much less exactly?
And why should sustaining 1 kg of rabbit tissue require more energy than 1 kg of elephant flesh in the first place?

A century of research (Peters, 1983) has demonstrated that many biological characteristics Y scale to size X according to a power function:

Y = a X^b

where the independent variable X represents body mass, and the dependent variable Y can be virtually any characteristic of interest, ranging, e.g., from the gill area of fish to the density of insects in a community.

Plotted in a graph, the equation produces a curved line, increasing super-linearly if b > 1 and sub-linearly if b < 1. If b = 1, Y and X are directly proportional and the relationship is called isometric. As curved lines are difficult to interpret, the equation is often simplified by taking the logarithm of the left and right parts. The formula then becomes:

log Y = log a + b log X

When log Y is plotted against log X, a straight line results with slope b and intercept log a. If data are plotted in this way, the slope parameter b may be estimated by simple linear regression.

Across wide size ranges, slope b often turns out to be a multiple of ¼ or, occasionally, ⅓. Rates [kg∙d⁻¹] of consumption, growth, reproduction, survival and so on increase with mass to the power ¾, while rate constants, sometimes called specific rates [kg∙kg⁻¹∙d⁻¹], decrease with mass to the power -¼. So, while the elephant is 10⁴ times heavier than the 1 kg rabbit, it eats only (10⁴)^¾ = 10³ times more each day. Vice versa, sustaining 1 kg of proboscidean apparently requires (10⁴)^-¼ = 0.1 times the specific consumption [kg∙kg⁻¹∙d⁻¹] of the rabbit, i.e., 10 times less. Variables with a time dimension [d], like lifespan or predator-prey oscillation periods, scale inversely to rate constants and thus change with body mass to the power ¼. So, an elephant becomes (10⁴)^¼ = 10 times older than a rabbit. Abundance, i.e. the number of individuals per surface area [m⁻²], decreases with body mass to the power -¾. Areas, such as gill surface or home range, scale inversely to abundance, typically as body mass to the power ¾.

Now, why would sustaining 1 kg of elephant require 10 times less food than 1 kg of rabbit? Biologists, pharmacologists and toxicologists first attributed this difference to area-volume relationships. If objects of the same shape but different size are compared, the volume increases with length to the power 3 and the surface increases with length to the power 2. For a sphere with radius r, for example, area A and volume V increase as A ~ r² and V ~ r³, so area scales to volume as A ~ r² ~ (V^⅓)² ~ V^⅔. So, larger animals have relatively smaller surfaces, as long as the shape of the organism remains the same. Since many biological processes, such as oxygen and food uptake or heat loss, deal with surfaces, metabolism was long thought to slow down like geometric structures, i.e. with exponents in multiples of ⅓. Yet empirical regressions, e.g. the "mouse-elephant curve" developed by Max Kleiber in the early 1930s, show universal slopes in multiples of ¼ (Peters, 1983). This became known as "Kleiber's law". While the data leave little doubt that this is the case, it is not at all clear why it should be ¼ and not ⅓. Several explanations for the ¼ slope have been proposed, but the debate on the exact value as well as the underlying mechanism continues.

Since chemical substances are carried by flows of air and water, and inside the organism by sap and blood, toxicokinetics and toxicodynamics are also expected to scale to size. Indeed, data confirm that uptake and elimination rate constants decrease with size, with an exponent of about -¼.
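To make the arithmetic concrete, the following Python sketch (our own illustration, not taken from the cited literature; the data values are invented) applies the power function and recovers the exponent b by linear regression on log-transformed values:

```python
import numpy as np

# Allometric prediction: Y = a * X^b
def allometric(x, a, b):
    return a * x**b

# The elephant/rabbit example: consumption rates [kg/d] scale with b = 3/4,
# so a 10^4 kg elephant eats (10^4)^(3/4) = 10^3 times more than a 1 kg rabbit.
ratio = allometric(1e4, a=1.0, b=0.75) / allometric(1.0, a=1.0, b=0.75)
print(ratio)  # 1000.0

# Estimating b from (invented) body-mass/rate data by simple linear
# regression on log-transformed values: log Y = log a + b log X
rng = np.random.default_rng(1)
mass = np.array([0.01, 0.1, 1.0, 10.0, 100.0])                       # kg
rate = 0.2 * mass**0.75 * np.exp(rng.normal(0.0, 0.05, mass.size))   # kg/d
b_est, log_a_est = np.polyfit(np.log10(mass), np.log10(rate), 1)
print(round(b_est, 2))  # close to 0.75
```

Because the regression is done on log-log axes, multiplicative scatter in the rates becomes additive noise, which is why allometric exponents are routinely estimated this way.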
Slopes vary around this value, the more so for regressions that cover small size ranges and physiologically different organisms. The intercept is determined by resistances in unstirred water layers and membranes through which the substances pass, as well as by delays in the flows by which they are carried. The resistances mainly depend on the affinity and molecular size of the chemicals, reflected by, e.g., the octanol-water partition coefficient Kow for organic chemicals or atomic mass for metals. The upper boundary of the intercept is set by the delays imposed by consumption and, subsequently, egestion and excretion. The lower end is determined by growth dilution. Both uptake and elimination scale to mass with the same exponent, so that their ratio, reflecting the bioconcentration or biomagnification factor in equilibrium, is independent of body size.

Because rate constants for uptake and elimination scale to size, small organisms approach steady state faster than large ones, and LC50 values obtained at a fixed exposure duration are therefore lower in smaller compared to larger organisms (see the sketch after this passage). Thus, the apparent "sensitivity" of daphnids can, at least partially, be attributed to their small body size. This emphasizes the need to understand simple scaling relationships before developing more elaborate explanations.

Using such regressions, complicated responses like susceptibility to toxicants can be predicted from just Kow and body size (Table 1), which illustrates the generality and power of allometric scaling. Of course, the regressions describe general trends, and in individual cases the deviations can be large. Still, considering the challenges of risk assessment outlined above, and in the absence of specific data, the predictions in Table 1 can be considered a reasonable first approximation.

Table 1. Lethal concentrations and doses as a function of test animal body mass

Species | Endpoint | Unit | b (95% CI) | r² | nc | ns | Source
Guppy | LC50 | mg∙L⁻¹ | 0.66 (0.51-0.80) | 0.98 | 6 | 1 | 1
Mammals | LD10≈MTD | mg∙animal⁻¹ | 0.73 (0.69-0.77) | | 27 | 5 | 2
Birds | Oral LD50 | mg∙animal⁻¹ | 1.19 (0.67-0.82) | 0.76 | 194 | 3…37 | 3
Mammals | Oral LD50 | mg∙animal⁻¹ | 0.94 (1.18-1.20) | 0.89 | 167 | 3…16 | 4
Mammals | Oral LD50 | mg∙animal⁻¹ | 1.01 (1.00-1.01) | | >5000 | 2…8 | 5

Allometry is also important when dealing with other levels of biological organisation. Leaf or gill area, the number of eggs in ovaries, the number of cell types and many other cellular and organ characteristics scale to body size as well. Likewise, the intrinsic rates of increase (r) of populations and the production-biomass ratios (P/B) of communities can be obtained from the (average) species mass. Even the area needed by animals in laboratory assays scales to size, i.e. with m^¾, approximately the same slope noted for home ranges of individuals in the field.

Since almost any physiological and ecological process in toxicokinetics and toxicodynamics depends on species size, allometric models are gaining interest. Such an approach allows one to quantitatively attribute outliers (like apparently "sensitive" daphnids) to simple biological traits, rather than to detailed chemical-toxicological mechanisms.

Scaling has been used in risk assessment at the molecular level for a long time. The molecular size of a compound is often a descriptor in QSARs for accumulation and toxicity. If not immediately evident as molecular mass, volume or area often pops up as an indicator of steric properties. Scaling does not only apply to bioaccumulation and toxicity from molecular to community levels; size dependence is also observed in other sections of the environmental cause-effect chain. Emissions of substances, e.g., scale non-linearly to the size of engines and cities.
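The time-to-steady-state argument behind the daphnid example can be sketched numerically. In a one-compartment model the time to reach about 95% of steady state is ln(20)/k2; scaling k2 with mass to the power -¼ (the intercept value below is invented for illustration) shows how much faster small organisms equilibrate:

```python
import numpy as np

def k2(mass_kg, a=0.5, b=-0.25):
    """Elimination rate constant [1/d], allometrically scaled to body mass.
    The intercept a = 0.5 is an arbitrary illustrative value; b = -1/4 per the text."""
    return a * mass_kg**b

for m in [1e-6, 1e-3, 1.0, 1e3]:       # e.g. daphnid, insect, small mammal, large mammal
    k = k2(m)
    t95 = np.log(20.0) / k             # time to reach ~95% of steady state [d]
    print(f"mass {m:8.0e} kg: k2 = {k:7.3f} /d, t95 = {t95:8.2f} d")
```

In a toxicity test of fixed duration, the small organism is near steady state while the large one is not, so the small one appears more sensitive even if the critical internal concentration is identical.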
Concentrations of chemicals in rivers depend on water discharge, which in itself is an allometric function of catchment size. Hence, understanding the principles of cross-disciplinary scaling is likely to pay off in protecting many species against many chemicals.

Anderson, P.D., Weber, L.J. Toxic response as a quantitative function of body size. Toxicology and Applied Pharmacology 33, 471-483.
Burzala-Kowalczyk, L., Jongbloed, G. Allometric scaling: Analysis of LD50 data. Risk Analysis 31, 523-532.
Hendriks, A.J. Modelling response of species to microcontaminants: Comparative ecotoxicology by (sub)lethal body burdens as a function of species size and octanol-water partitioning of chemicals. Ecotoxicology and Environmental Safety 32, 103-130.
Hendriks, A.J. (2013). How to deal with 100,000+ substances, sites, and species: Overarching principles in environmental risk assessment. Environmental Science and Technology 47, 3546-3547.
Hendriks, A.J., Van der Linde, A., Cornelissen, G., Sijm, D.T.H.M. The power of size: 1. Rate constants and equilibrium ratios for accumulation of organic substances. Environmental Toxicology and Chemistry 20, 1399-1420.
Notenboom, J., Vaal, M.A., Hoekstra, J.A. (1995). Using comparative ecotoxicology to develop quantitative species sensitivity relationships (QSSR). Environmental Science and Pollution Research 2, 242-243.
Peters, R.H. (1983). The Ecological Implications of Body Size. Cambridge University Press, Cambridge.
Sample, B.E., Arenal, C.A. Allometric models for interspecies extrapolation of wildlife toxicity data. Bulletin of Environmental Contamination and Toxicology 62, 653-66.
Travis, C.C., White, R.K. Interspecific scaling of toxicity data. Risk Analysis 8, 119-125.

Think of a biological trait that you are interested in. How do you think it will scale to organism size?

What is the elimination rate constant of a chemical with a Kow of 10⁴ from an insect with a mass of 10⁻⁵ kg?

Why is size-scaling more prominent in toxicokinetics than in toxicodynamics?

Author: Nico van den Brink
Reviewers: Kees van Gestel, Jan Hendriks
Learning objectives: You should be able to:
Keywords: biomagnification, food-chain transfer

Chemicals may be transferred from one organism to another. Grazers will ingest chemicals that are in the vegetation they eat. Similarly, predators are exposed to chemicals in their prey items. This so-called food web accumulation is governed by properties of the chemical, but also by traits of the receiving organism (e.g. grazer or predator).

Some chemicals are known to accumulate in food webs, reaching the highest concentrations in top predators. Examples of such chemicals are organochlorine pesticides like DDT and brominated flame retardants (e.g. PBDEs; see the section on POPs). Such accumulating chemicals have a few properties in common: they need to be persistent and they need to have affinity for the organismal body. Organic chemicals with a relatively high log Kow, indicating a high affinity for lipids, will enter organisms quite effectively (see the section on Bioconcentration and kinetics modelling). Once in the body, these chemicals will be distributed to lipid-rich tissues, and excretion is rather limited. In the case of persistent chemicals that are not metabolised, concentrations will increase over time when uptake is higher than excretion. Furthermore, such chemicals are likely to be passed on to organisms at the next trophic level in prey-predator interactions.
Some of these chemicals may be metabolised by the organism, most often into more water-soluble metabolites (see the section on Xenobiotic metabolism & defence). These metabolites are more easily excreted, so concentrations of metabolizable chemicals do not increase as much over time and are also transferred less to higher trophic levels. The effect of metabolism on internal concentrations is clearly illustrated by a study on the uptake of organic chemicals by different aquatic species (Kwok et al., 2013). In that study the uptake of persistent chemicals (organochlorine pesticides, OCPs) was compared with the uptake of chemicals that may be metabolised (polycyclic aromatic hydrocarbons, PAHs). The authors compared shrimps with fish, the former having a limited capacity to metabolise PAHs while fish can. The comparison was based on Biota-to-Sediment Accumulation Factors (BSAFs; see the section on Bioaccumulation), i.e. the ratio between the concentration in the organism and that in the sediment. OCPs accumulated to a high extent in both species, reflecting persistent, non-metabolizable chemicals. For PAHs the results differed per species: fish are able to metabolise PAHs, and as a result the concentrations of PAHs in the fish were low, while in shrimp, with their limited metabolic capacity, the accumulation of PAHs was comparable to that of the OCPs. These results show that not only the properties of the chemicals are of importance, but also traits of the organisms involved, in this case their metabolic capacity.

Food-web accumulation of chemicals is driven by food uptake. At lower trophic levels, most organisms will acquire relatively low concentrations from the ambient environment. First consumers, foraging on these organisms, will accumulate the chemicals contained in all of them, and in the case of persistent chemicals that enter the body easily, concentrations in the consumers will become higher than in their diet. Similarly, concentrations will increase further when the chemicals are transferred to the next trophic level. This process is called biomagnification: increasing concentrations of persistent and accumulative chemicals along food webs. The most iconic example concerns the increasing concentrations of DDT in the fish-eating American osprey, a case which led to the ban of many organochlorine chemicals.

Since biomagnification along trophic levels is food-driven, it is important to include diet composition in such studies. This can be explained by an example on small mammals in the Netherlands. Two similar small mammal species, the bank vole (Myodes glareolus) and the common vole (Microtus arvalis), co-occur in large parts of the Netherlands. Although the species look very similar, they differ in diet and habitat use. The bank vole is an omnivorous species inhabiting different types of habitat, while the common vole is strictly vegetarian and lives in pastures. In a study on the species-specific uptake of cadmium, diet items of both species were analysed, indicating nearly three orders of magnitude difference in cadmium concentrations between earthworms and berries from the vegetation (van den Brink et al., 2010). Stable isotopic ratios of carbon and nitrogen were used to assess the general diets of the organisms. The common vole ate mostly stinging nettle and grass, including seeds, while the bank vole was shown to forage on grasses, herbs and earthworms.
This difference in diet was reflected in increased concentrations of cadmium in the bank vole in comparison to the common vole (both inhabiting the same area). The cadmium concentration in one bank vole appeared to be extremely low, and initially this was considered to be an artefact. However, detailed analysis of the stable isotopic ratios in this individual revealed that it had foraged on stinging nettle and grass, hence a diet more reflecting that of the common vole. This emphasises once more that organisms accumulate through their diet (you accumulate what you eat!).

Orcas or killer whales (Orcinus orca) are large marine predatory mammals which roam all around the oceans, from the Arctic to the deep south of the Antarctic. Although they appear ubiquitous around the world, different pods of orcas generally occur in different regions of the marine ecosystem. Often, each pod has developed specialised foraging behaviours targeted at specific prey species. Although orcas are generally apex predators at the top of the (local) food web, the different foraging strategies suggest that exposure to accumulating chemicals may differ considerably between pods. This was indeed shown to be the case in a very elaborate study on different pods of orcas off the west coast of Canada by Ross et al. (2000). In the Vancouver region there is a resident pod, while the region is also often visited by two transient groups of orcas. PCB concentrations were high in all animals, but the transient animals contained significantly higher levels. The transient whales mainly fed on marine mammals, while the resident animals mainly fed on fish, and this difference in diet was thought to be the cause of the differences in PCB levels between the groups. In that study it was also shown that PCB levels increased with age, due to the persistence of the PCBs, while female orcas contained significantly lower concentrations. The latter is caused by lactation: female orcas feed their calves with lipid-rich milk containing relatively high levels of (lipophilic) PCBs. By this process females offload a large part of their PCB body burden, transferring these PCBs, however, to their developing calves (see also the section on Bioaccumulation). A recent study showed that although PCBs have been banned for decades now, they still pose threats to populations of orcas (Desforges et al., 2018). In that study regional differences in PCB burdens were confirmed, likely due to differences in diet preferences, although these were not specifically mentioned. It was shown that PCB levels in most orca populations are still above toxic threshold levels, and concerns were raised regarding the viability of these populations. This study confirms that 1) orcas are exposed to different levels of PCBs according to their diet, which influences the biomagnification of the PCBs, 2) orca populations are very inefficient in clearing PCBs, from the individual due to limited metabolism and from the population due to the efficient maternal transfer from mother to calf, and 3) persistent, accumulating chemicals may pose threats to organisms even decades after their use.
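The food-driven multiplication at each trophic step can be caricatured with a chain of biomagnification factors (BMFs). The sketch below is our own illustration with invented numbers, not data from the studies cited above:

```python
# Trophic transfer of a persistent, lipophilic chemical:
# concentration in consumer = BMF x concentration in its diet.
c_algae = 0.01                                      # mg/kg, taken up from water
bmf = {"daphnid": 3.0, "fish": 4.0, "orca": 8.0}    # invented per-step BMFs

c = c_algae
print(f"algae   : {c:6.3f} mg/kg")
for consumer, factor in bmf.items():
    c *= factor
    print(f"{consumer:8s}: {c:6.3f} mg/kg")
# For a chemical that a species can metabolise, the BMF of that step
# drops below 1 (cf. PAHs in fish), breaking the amplification.
```

Because each step multiplies rather than adds, even modest per-step BMFs produce order-of-magnitude differences between primary producers and apex predators.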
Understanding the mechanisms and processes underlying the biomagnification of persistent and toxic compounds is essential for an in-depth risk assessment.

Desforges, J.-P., Hall, A., McConnell, B., Rosing-Asvid, A., Barber, J.L., Brownlow, A., De Guise, S., Eulaers, I., Jepson, P.D., Letcher, R.J., Levin, M., Ross, P.S., Samarra, F., Víkingson, G., Sonne, C., Dietz, R. (2018). Predicting global killer whale population collapse from PCB pollution. Science 361, 1373-1376.
Ford, J.K.B., Ellis, G.A., Matkin, D.R., Balcomb, K.C., Briggs, D., Morton, A.B. Killer whale attacks on minke whales: Prey capture and antipredator tactics. Marine Mammal Science 21, 603-618.
Guinet, C. Predation behaviour of killer whales (Orcinus orca) around Crozet Islands. Canadian Journal of Zoology 70, 1656-1667.
Kwok, C.K., Liang, Y., Leung, S.Y., Wang, H., Dong, Y.H., Young, L., Giesy, J.P., Wong, M.H. (2013). Biota-sediment accumulation factor (BSAF), bioaccumulation factor (BAF), and contaminant levels in prey fish to indicate the extent of PAHs and OCPs contamination in eggs of waterbirds. Environmental Science and Pollution Research 20, 8425-8434.
Ross, P.S., Ellis, G.M., Ikonomou, M.G., Barrett-Lennard, L.G., Addison, R.F. (2000). High PCB concentrations in free-ranging Pacific killer whales, Orcinus orca: Effects of age, sex and dietary preference. Marine Pollution Bulletin 40, 504-515.
Samarra, F.I.P., Bassoi, M., Beesau, J., Eliasdottir, M.O., Gunnarsson, K., Mrusczok, M.T., Rasmussen, M., Rempel, J.N., Thorvaldsson, B., Vikingsson, G.A. Prey of killer whales (Orcinus orca) in Iceland. Plos One 13, 20.
van den Brink, N., Lammertsma, D., Dimmers, W., Boerwinkel, M.-C., van der Hout, A. (2010). Effects of soil properties on food web accumulation of heavy metals to the wood mouse (Apodemus sylvaticus). Environmental Pollution 158, 245-251.

In a lake, the ecosystem consists of algae, daphnids feeding on the algae, and fish feeding on the daphnids. Chemical A has a low log Kow but is persistent, Chemical B has a high log Kow but can be metabolised by daphnids and fish, and Chemical C has a high log Kow and is also persistent. Where may you find the highest concentrations of Chemicals A, B and C?

Name the two traits of species that determine the potential for chemicals to reach high concentrations in them.

This is a more general question, trying to put the information in this section on bioaccumulation across different trophic levels in a broader perspective: What property does a chemical need, besides a high potential for bioaccumulation and persistence, to have a high likelihood of posing serious environmental risks to organisms?

Author: Martina G. Vijver
Reviewers: Kees van Gestel and Frank Gobas
Learning objectives: You should be able to
Keywords: time-dependent effects, internal body concentrations, one-compartment model

One of the quests in ecotoxicology is how to link toxicity to exposure, and to understand why some organisms experience toxic effects while others do not at the same level of exposure. A generally accepted approach for assessing possible adverse effects on biota, no matter what kind of species, is the Critical Body Concentration (CBC) concept (McCarty 1991).
According to this concept, toxicity is determined by the amount of chemical taken up, i.e. by the internal concentration, which depends on the duration of the exposure as well as on the exposure concentration.

Under constant exposure, the internal concentration of a chemical in an organism increases with time, and mortality occurs at a more or less fixed internal concentration, independent of exposure time or exposure concentration. The CBC is defined as the internal concentration of a substance in an organism at which a defined effect appears, e.g. 50% mortality or a 50% reduction in the number of offspring produced. By comparing internal concentrations measured in exposed organisms to CBC values derived in the laboratory, a measure of risk is obtained. The CBC applies to lethality as well as to sub-lethal effects like reproduction or growth inhibition.

It follows that the LC50 decreases with exposure time, approaching a constant value; the time needed to reach this constant LC50 (indicated as the ultimate LC50, LC50∞) depends on kinetics. Hence, both toxic effects and chemical concentration are controlled by the same kinetics.

The CBC can be derived from the LC50-time relationship and be linked to the LC50∞ using uptake and elimination rate constants (k1 and k2); a minimal numerical sketch is given at the end of this passage. It should be noted that the k2 in this case does not reflect the rate of chemical excretion, but rather the rate at which the toxic effects caused by the chemical are eliminated (so, note the difference here with the section on Bioaccumulation kinetics).

The time needed to reach steady state depends on the body size of the organisms, with larger organisms taking longer to attain steady state than smaller ones (McCarty 1991). It also depends on the exposure surface area of the exposed organisms (Pawlisz and Peters 1993), as well as on their metabolic activity. Organisms not capable of excreting or metabolizing a chemical will continue accumulating it with time, and their LC50∞ will be zero. This is e.g. the case for cadmium in isopods (Crommentuijn et al. 1994), but kinetics are so slow that in relatively clean environments cadmium in these animals never reaches lethal concentrations, as their life span is too short.

The CBC integrates environmentally available fractions with bioavailable concentrations and toxicity at specific receptors (McCarty and MacKay 1993). See also the section on Bioavailability. In this way, the actual exposure concentration in the environment does not need to be known for performing a risk assessment. The internal concentration of the chemical in the organism is the only concentration required. This overcomes many difficulties regarding bioavailability, e.g. it removes some of the disadvantages of expressing the exposure concentration per unit of soil, as well as of dealing with exposures that vary over time or space.

A convincing body of evidence was collected to support the CBC approach. For organic compounds with a narcotic mode of action, effects could be assessed over a wide range of organisms, test compounds and exposure media.
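The kinetic argument above can be made concrete with a minimal one-compartment sketch; all parameter values are invented for illustration, and the equations are the standard first-order solutions under constant exposure:

```python
import numpy as np

k1, k2 = 10.0, 0.2   # uptake [L/kg/d] and effect-elimination [1/d] rate constants (invented)
cbc = 5.0            # critical body concentration [mg/kg] (invented)

def c_internal(c_water, t):
    """Internal concentration under constant exposure (one-compartment model)."""
    return (k1 / k2) * c_water * (1.0 - np.exp(-k2 * t))

def lc50(t):
    """Time-dependent LC50: the water concentration at which the internal
    concentration just reaches the CBC after t days; declines toward LC50_inf."""
    lc50_inf = cbc * k2 / k1          # ultimate LC50 = 0.1 mg/L with these values
    return lc50_inf / (1.0 - np.exp(-k2 * t))

for t in [1, 2, 4, 8, 16, 32]:
    print(f"t = {t:2d} d: LC50 = {lc50(t):6.3f} mg/L")
```

With k2 approaching zero (no elimination, as for cadmium in isopods) the denominator never saturates and the LC50 keeps declining toward zero, which is the LC50∞ = 0 case mentioned above.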
For narcotic compounds with octanol-water partition coefficients (Kow) varying from 10 to 1,000,000 (see for details the section on Relevant chemical properties), the concentration of chemical required for lethality through narcosis is approximately 1-10 mmol/kg (McCarty and MacKay 1993).

To reduce the variation in bioconcentration factor (BCF) values for the accumulation of chemicals in organisms from water, normalization by lipid content has been suggested, allowing determination of the chemical activity within an organism's body (US EPA 2003). For that reason, lipid extraction protocols are described in detail in the updated OECD Guideline for the testing of chemicals No. 305 for fish bioaccumulation tests, along with a sampling schedule for lipid measurement in fish. Correction of the BCF for differences in lipid content is also described in the same OECD guideline No. 305. If chemical and lipid analyses have been conducted on the same fish, each individual measured concentration should be corrected for the lipid content of that fish, prior to using the data to calculate the kinetic BCF. If lipid content is not measured on all sampled fish, a mean lipid content of approx. 5% must be used to normalize the BCF. It should be noted that this correction holds only for chemicals accumulating in lipids, and not for chemicals that primarily bind to proteins (e.g. perfluorinated substances).

The CBC concept also has some limitations. Crommentuijn et al. (1994) found that the toxicity of metals to soil invertebrates could not be explained using critical body concentrations. The way different organisms deal with accumulated metals has a large impact on the magnitude of the body concentrations reached and the accompanying metal sensitivity (Rainbow 2002). Moreover, adaptation or the development of metal tolerance limits the application of CBCs for metals. When the internal metal concentration does not show a monotonic relationship with the exposure concentration, it is not possible to derive CBCs. This means that whenever organisms are capable of trapping a portion of the metal in forms that are not biologically reactive, a direct relationship between body metal concentrations and toxicity may be absent or less evident (Luoma and Rainbow 2005, Vijver et al. 2004). Consequently, for metals a wide range of body concentrations with different biological significance exists. It therefore remains an open question whether the approach is applicable to modes of toxic action other than narcosis. Another important point is the question to what extent the CBC approach is applicable to assessing the effects of chemical mixtures, especially when the chemicals have different modes of action.

Crommentuijn, T., Doodeman, C.J.A.M., Doornekamp, A., Van der Pol, J.J.C., Bedaux, J.J.M., Van Gestel, C.A.M. (1994). Lethal body concentrations and accumulation patterns determine time-dependent toxicity of cadmium in soil arthropods. Environmental Toxicology and Chemistry 13, 1781-1789.
Luoma, S.N., Rainbow, P.S. (2005). Why is metal bioaccumulation so variable? Biodynamics as a unifying concept. Environmental Science and Technology 39, 1921-1931.
McCarty, L.S. (1991). Toxicant body residues: implications for aquatic bioassays with some organic chemicals. In: Mayes, M.A., Barron, M.G. (Eds.), Aquatic Toxicology and Risk Assessment: Fourteenth Volume. ASTM STP 1124. American Society for Testing and Materials, Philadelphia, pp. 183-192. DOI: 10.1520/STP23572S
McCarty, L.S., Mackay, D. (1993).
Enhancing ecotoxicological modeling and assessment. Environmental Science and Technology 27, 1719-1727.
Pawlisz, A.V., Peters, R.H. (1993). A test of the equipotency of internal burdens of nine narcotic chemicals using Daphnia magna. Environmental Science and Technology 27, 2801-2806.
Rainbow, P.S. (2002). Trace metal concentrations in aquatic invertebrates: why and so what? Environmental Pollution 120, 497-507.
U.S. EPA (2003). Methodology for Deriving Ambient Water Quality Criteria for the Protection of Human Health: Technical Support Document. Volume 2: Development of National Bioaccumulation Factors. United States Environmental Protection Agency, Washington, D.C.
Vijver, M.G., Van Gestel, C.A.M., Lanno, R.P., Van Straalen, N.M., Peijnenburg, W.J.G.M. (2004). Internal metal sequestration and its ecotoxicological relevance: a review. Environmental Science and Technology 18, 4705-4712.

Explain why the CBC approach integrates chemical and biological availability.

How are time dynamics involved in the CBC approach?

Under what conditions can the CBC approach not be applied?

This page titled 4.1: Toxicokinetics is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.2: Toxicodynamics and Molecular Interactions
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/04%3A_Toxicology/4.02%3A_Toxicodynamics_and_Molecular_Interactions
Author: Timo Hamers
Reviewers: Frank van Belleghem and Ludek Blaha
Learning goals: You should be able to
Key words: Receptor; Transcription factor; DNA adducts; Membrane; Oxidative stress

Toxicodynamics describes the dynamic interactions between a compound and its biological target, leading ultimately to an (adverse) effect. In this chapter, toxicodynamics are described for processes leading to diverse adverse effects. Any adverse effect of a toxic substance is the result of an interaction between the toxicant and its biomolecular target (i.e. its mechanism of action). Biomolecular targets include proteins, DNA and RNA molecules, and phospholipid bilayer membranes, but also small molecules that have specific functions in maintaining cellular homeostasis.

Both endogenous and xenobiotic compounds that bind to proteins are called ligands. The consequence of a protein interaction depends on the role of the target protein, e.g.:
1. Receptor
2. Enzyme
3. Transporter protein

Receptor proteins specifically bind and respond to endogenous signalling ligands such as hormones, prostaglandins, growth factors, or neurotransmitters, by causing a typical cellular response. Receptor proteins can be located in the cell membrane, in the cytosol, and in the nucleus of a cell. Agonistic receptor ligands activate the receptor protein, whereas antagonistic ligands inactivate the receptor and prevent (endogenous) agonists from activating it. Depending on the role of the receptor protein, ligand binding may interfere with ion channels, G-protein coupled receptors, enzyme-linked receptors, or nuclear receptors. Xenobiotic ligands can interfere with these cellular responses by acting as agonistic or antagonistic ligands (link to section on Receptor interaction).

Compounds that bind to an enzyme usually cause inhibition of the enzyme activity, i.e. a decrease in the conversion rate of the endogenous substrate(s) of the enzyme into its/their corresponding product(s). Compounds that bind non-covalently to an enzyme cause reversible inhibition, while compounds that bind covalently to an enzyme cause irreversible inhibition (link to section on Protein inactivation).

Similarly, compounds that bind to a transporter protein usually inhibit the transport of the natural, endogenous ligand. Such transporter proteins may be responsible for local transport of endogenous ligands across the cell membrane, but also for peripheral transport of endogenous ligands through the blood from one organ to the other (link to section on Endocrine disruption).

Apart from interacting with functional receptor, enzyme, or transporter proteins, toxic compounds may also interact with structural proteins. For instance, the cytoskeleton may be damaged by toxic compounds that block the polymerization of actin, thereby preventing the formation of filaments.

In addition to proteins, DNA and RNA macromolecules can be targets of compound binding. Especially the guanine base can be covalently bound by electrophilic compounds, such as reactive metabolites. Such DNA adducts may cause copy errors during DNA replication, leading to point mutations (link to section on Genotoxicity).

Compounds may also interfere with phospholipid bilayer membranes, especially the outer cell membrane and mitochondrial membranes. Compounds disturb membrane integrity and functioning by partitioning into the lipid bilayer.
Loss of membrane integrity may ultimately lead to leakage of electrolytes and loss of membrane potential.

Narcosis and Membrane Damage

Partitioning into the lipid bilayer is a non-specific process. Therefore, the concentrations in biological membranes that cause effects through this mode of action do not differ between compounds. As such, this type of toxicity is considered a "baseline toxicity" (also called "narcosis"), which is exerted by all chemicals. For instance, the chemical concentration in a target membrane causing 50% mortality in a test population is around 50 mmol/kg lipid, irrespective of the species or compound under consideration. Based on external exposure levels, however, compounds do have different narcotic potencies. After all, to reach similar lipid-based internal concentrations, different exposure concentrations are required, depending on the lipid-water partition coefficient, which is an intrinsic property of the compound, not of the species. For example, taking the lipid-water partition coefficient as roughly proportional to Kow, a compound with Kow = 10⁴ would reach a membrane burden of 50 mmol/kg lipid at a water concentration around 50/10⁴ = 5∙10⁻³ mmol/L, whereas a compound with Kow = 10⁶ would do so at around 5∙10⁻⁵ mmol/L.

Narcotic action is not the only mechanism by which compounds may damage membrane integrity. Compounds called "ionophores", for instance, act as ion carriers that transport ions across the membrane, thereby disrupting the electrolyte gradient across the membrane. Ionophores should not be confused with compounds that open or close ion channels, although both types of compounds may disrupt the electrolyte gradient across the membrane. The difference is that ionophores dissolve in the bilayer membrane and shuttle ions across the membrane themselves, whereas ion channel inhibitors or stimulators close or open, respectively, a protein channel in the membrane that acts as a gate for ion transport.

Finally, it should be mentioned here that some compounds may cause oxidative stress by increasing the formation of reactive oxygen species (ROS), such as H2O2, O3, O2•-, •OH, NO•, or RO•. ROS are oxygen metabolites that are found in any aerobically living organism. Compounds may directly cause an increase in ROS formation by undergoing redox cycling or by interfering with the electron transport chain. Alternatively, compounds may cause an indirect increase in ROS formation by interfering with ROS-scavenging antioxidants, ranging from small molecules (e.g. glutathione) to proteins (e.g. catalase or superoxide dismutase). For compounds causing either direct or indirect oxidative stress, it is not the compound itself that has a molecular interaction with the target, but the ROS, which may bind covalently to DNA, proteins, and lipids (link to section on Oxidative stress).

Name three biomolecular targets that can be affected by a compound.

Name three different mechanisms by which a compound can affect analyte transport across the cell membrane.

What is the difference between a receptor agonist and a receptor antagonist?

Author: Timo Hamers
Reviewers: Frank van Belleghem and Ludek Blaha
Learning objectives: You should be able to
Key words: enzyme inhibition; acetylcholinesterase; transthyretin; competitive inhibition; non-competitive inhibition; uncompetitive inhibition

Proteins play an important role in essential biochemical processes, including catalysis of metabolic reactions, DNA replication and repair, transport of messengers (e.g. hormones), and receptor responses to such messengers.
Many toxic compounds exert their toxic action by binding to a protein and thereby disturbing these vital protein functions.

Binding of xenobiotic compounds to a transporter protein may hamper binding of the natural ligand of the protein, thereby inhibiting the transporter function of the protein. An example of such inhibition is the binding of halogenated phenols to transthyretin (TTR). TTR is a transport protein for thyroid hormones, present in the blood. It has two binding sites for the transport of thyroid hormone, i.e. mainly thyroxine (T4) in mammals and mainly triiodothyronine (T3) in other vertebrates. Compounds with high structural resemblance to thyroid hormone (especially halogenated phenols, such as hydroxylated metabolites of PCBs or PBDEs) are capable of competing with thyroid hormone for TTR binding. Apart from the fact that this enhances distribution of the toxic compounds, it also causes an increase of unbound thyroid hormone in the blood, which is then freely available for uptake in the liver, metabolic conjugation, and urinary excretion. Ultimately, this may lead to decreased thyroid hormone levels in the blood.

Proteins involved in the catalysis of a metabolic reaction are called enzymes. The general formula of such a reaction is:

E + S ⇌ ES → E + P

where the enzyme (E) and substrate (S) form an enzyme-substrate complex (ES), which is converted into the product (P) and the free enzyme.

Binding of a toxic compound to an enzyme usually causes an inhibition of the enzyme activity, i.e. a decrease in the conversion rate of the endogenous substrate(s) of the enzyme into its/their corresponding product(s). In practice, this causes a toxic response due to a surplus of substrate and/or a deficit of product. One of the classical examples of enzyme inhibition by toxic compounds is the inhibition of the enzyme acetylcholinesterase (AChE) by organophosphate insecticides. AChE catalyzes the hydrolysis of the neurotransmitter acetylcholine (ACh) in the cholinergic synapses. During transfer of an action potential from one cell to the other, ACh is released in these synapses from the presynaptic cell into the synaptic cleft in order to stimulate the acetylcholine receptor (AChR) on the membrane of the postsynaptic cell. AChE, which is also present in these synapses, is then responsible for breaking down the ACh into acetic acid and choline:

ACh + H2O → choline + acetic acid

By covalently binding to serine residues in the active site of the AChE enzyme, organophosphate insecticides can inhibit this reaction, causing accumulation of the ACh neurotransmitter in the synapse. As a consequence, the AChR is overstimulated, causing convulsions, hypertension, muscle weakness, salivation, lacrimation, gastrointestinal problems, and slow heartbeat.

Organophosphate insecticides bind covalently to the AChE enzyme, thereby causing irreversible enzyme inhibition. Irreversible enzyme inhibition progressively increases in time following first-order kinetics (link to section on Bioaccumulation and kinetic modelling). Recovery of enzyme activity can only be obtained by de novo synthesis of enzymes. In contrast to AChE inhibition, inhibition of the T4 transport function of TTR is reversible, because the halogenated phenols bind to TTR in a non-covalent way. Similarly, non-covalent binding of a toxic compound to an enzyme causes reversible inhibition of the enzyme activity.
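The first-order time course of irreversible inhibition mentioned above can be written compactly. A minimal kinetic sketch, assuming the inhibitor is present in excess so that pseudo-first-order conditions apply:

```latex
% Pseudo-first-order inactivation of active enzyme E by an irreversible
% inhibitor I present in excess (k_i: second-order inactivation rate constant)
\frac{d[\mathrm{E}]}{dt} = -k_{\mathrm{obs}}\,[\mathrm{E}],
\qquad k_{\mathrm{obs}} = k_i\,[\mathrm{I}]
\quad\Longrightarrow\quad
[\mathrm{E}](t) = [\mathrm{E}]_0\, e^{-k_{\mathrm{obs}}\,t}
```

Here [E] is the concentration of active enzyme and [I] the inhibitor concentration; because the binding is covalent, activity is only restored by de novo enzyme synthesis, as noted above.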
In addition to covalent and non-covalent enzyme binding, irreversible enzyme inhibition may occur when toxic compounds cause an error during enzyme synthesis. For instance, ions of essential metals, which are present as cofactors in the active site of many enzymes, may be replaced by ions of other metals during enzyme synthesis, yielding inactive enzymes. A classic example of such decreased enzyme activity is the inhibition of δ-aminolevulinic acid dehydratase (δ-ALAD) by lead. In this case, lead replaces zinc in the active site of the enzyme, thereby inhibiting a catalytic step in the synthesis of a precursor of heme, a cofactor of the protein hemoglobin (link to section on Toxicity mechanisms of metals).

With respect to reversible enzyme inhibition, three types of inhibition can be distinguished, i.e. competitive, non-competitive, and uncompetitive inhibition.

Competitive inhibition refers to a situation where the chemical competes ("fights") with the substrate for binding to the active site of the enzyme. Competitive inhibition is very specific, because it requires that the inhibitor resembles the substrate and fits in the same binding pocket of the active site. The TTR-binding example described above is a typical example of competitive inhibition between thyroid hormone and halogenated phenols for occupation of the TTR binding site. A more classic example of competitive inhibition is the inhibition by penicillin of the transpeptidase enzyme that catalyzes the cross-linking of the bacterial cell wall, the final step in bacterial cell wall synthesis. By causing defective cell wall synthesis, penicillin acts as an antibiotic causing bacterial death.

Non-competitive inhibition refers to a situation where the chemical binds to an allosteric site of the enzyme (i.e. not the active site), thereby causing a conformational change of the active site. As a consequence, the substrate cannot enter the active site, or the active site becomes inactive, or the product cannot be released from the active site. For instance, echinocandin antifungal drugs non-competitively inhibit the enzyme 1,3-beta-glucan synthase, which is responsible for the synthesis of beta-glucan, a major constituent of the fungal cell wall. Lack of beta-glucan in fungal cell walls prevents fungal resistance against osmotic forces, leading to cell lysis.

Uncompetitive inhibition refers to a situation where the chemical can only bind to the enzyme if the substrate is simultaneously bound. Substrate binding leads to a conformational change of the enzyme, which leads to the formation of an allosteric binding site for the inhibitor. Uncompetitive inhibition is more common in two-substrate enzyme reactions than in one-substrate enzyme reactions. An example of uncompetitive inhibition is the inhibition by lithium of the enzyme inositol monophosphatase (IMPase), which is involved in recycling of the second messenger inositol-3-phosphate (I3P) (link to section on Receptor interaction). IMPase catalyzes the final step, dephosphorylating inositol monophosphate into inositol. Since lithium is the primary treatment for bipolar disorder, this observation has led to the inositol depletion hypothesis, which holds that inhibition of inositol phosphate metabolism offers a plausible explanation for the therapeutic effects of lithium.
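To make the practical difference between competitive and non-competitive inhibition concrete, a minimal numerical sketch based on the standard Michaelis-Menten rate equations is given below. All parameter values (Vmax, Km, [I], Ki) are hypothetical, chosen only for illustration.

```python
# Minimal sketch: Michaelis-Menten rates under competitive vs. non-competitive
# inhibition. All parameter values are hypothetical, for illustration only.

def v_competitive(S, Vmax=100.0, Km=1.0, I=5.0, Ki=1.0):
    # A competitive inhibitor raises the apparent Km; Vmax is unchanged.
    return Vmax * S / (Km * (1 + I / Ki) + S)

def v_noncompetitive(S, Vmax=100.0, Km=1.0, I=5.0, Ki=1.0):
    # A non-competitive inhibitor lowers the apparent Vmax; Km is unchanged.
    return Vmax * S / ((1 + I / Ki) * (Km + S))

for S in (1.0, 10.0, 100.0, 1000.0):  # increasing substrate concentration
    print(f"[S]={S:7.1f}  competitive: {v_competitive(S):6.1f}  "
          f"non-competitive: {v_noncompetitive(S):6.1f}")
# At high [S] the competitive rate approaches Vmax (the inhibitor is
# outcompeted), whereas the non-competitive rate plateaus at Vmax/(1 + I/Ki).
```

The printout shows that a competitive inhibitor can be outcompeted by raising the substrate concentration, whereas a non-competitive inhibitor caps the rate no matter how much substrate is added (compare the last two questions below).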
Questions:
- Explain how binding of organophosphate insecticides to acetylcholinesterase enzymes may cause neurotoxicity.
- Explain how organohalogenated phenols may cause decreased blood levels of the thyroid hormone T4.
- What is the difference between a competitive and a non-competitive enzyme inhibitor?
- Is it possible to outcompete a competitive enzyme inhibitor by increasing the substrate concentration?
- Is it possible to outcompete a non-competitive enzyme inhibitor by increasing the substrate concentration?

Author: Timo Hamers
Reviewers: Frank van Belleghem and Ludek Blaha
Key words: Ion channels; G-protein coupled receptors; enzyme-linked receptors; nuclear receptors

Receptor proteins specifically bind and respond to endogenous signalling ligands such as hormones, prostaglandins, growth factors, or neurotransmitters, by causing a typical cellular response. Receptor proteins can be located in the cell membrane, in the cytosol, and in the nucleus of a cell. Agonistic receptor ligands activate the receptor protein, whereas antagonistic ligands inactivate the receptor and prevent (endogenous) agonists from activating the receptor. Based on the role of the receptor protein, binding by ligands may interfere with:
1. ion channels
2. G-protein coupled receptors
3. enzyme-linked receptors
4. nuclear receptors
Xenobiotic ligands can interfere with these cellular responses by acting as agonistic or antagonistic ligands.

Ion channels are transmembrane protein complexes that transport ions across a phospholipid bilayer membrane. Ion channels are especially important in neurotransmission, when stimulating neurotransmitters (e.g. acetylcholine, ACh) bind to the (so-called ionotropic) receptor part of the ion channel and open the ion channel for a very short (i.e. millisecond) period of time. As a result, ions can cross the membrane, causing a change in transmembrane potential. On the other hand, receptor binding by inhibiting neurotransmitters (e.g. gamma-aminobutyric acid, GABA) prevents the opening of ion channels.

Compounds interfering with sodium channels, for instance, are neurotoxic compounds (see section on Neurotoxicity). They can either block the ion channels or keep them in a prolonged or permanently open state. Many compounds known to interfere with ion channels are natural toxins. For instance, tetrodotoxin (TTX), which is produced by marine bacteria and highly accumulated in puffer fish, and saxitoxin, which is produced by dinoflagellates and accumulated in shellfish, are capable of blocking voltage-gated sodium channels in nerve cells. In contrast, ciguatoxin, another persistent toxin produced by dinoflagellates, which accumulates in predatory fish positioned high in the food chain, causes prolongation of the opening of voltage-gated sodium channels. Some pesticides like DDT and pyrethroid insecticides also prevent closure of voltage-gated sodium channels in nerve cells. As a consequence, full repolarization of the membrane potential is not achieved, the nerve cells do not reach the resting potential, and any new stimulus that would be too low to reach the threshold for depolarization under normal conditions will now cause a new action potential.
In other words, the nerve cells become hyperexcitable and undergo a series of action potentials (repetitive firing), causing tremors and hyperthermia.

GPCRs are transmembrane receptors that transfer an extracellular signal into an activated G-protein that is connected to the receptor on the intracellular side of the membrane. G-proteins are heterotrimeric proteins consisting of three subunits alpha, beta, and gamma, of which the alpha subunit - in inactivated form - contains a guanosine diphosphate (GDP) molecule. Upon binding of endogenous ligands such as hormones, prostaglandins, or neurotransmitters (i.e. the signal or "first messenger") to the (so-called metabotropic) receptor, a conformational change in the GPCR complex leads to an exchange of the GDP for a guanosine triphosphate (GTP) molecule in the alpha monomer part of the G-protein, causing release of the activated alpha subunit from the beta/gamma dimer part. The activated alpha monomer can interact with several target enzymes, causing an increase in "second messengers" that start signal transduction pathways (see point 3, enzyme-linked receptors). The remaining beta/gamma complex may also move along the inner membrane surface and affect the activity of other proteins.

Two major enzymes that are activated by the alpha monomer are adenylyl cyclase, causing an increase in the second messenger cyclic AMP (cAMP), and phospholipase C, causing an increase in the second messenger diacylglycerol (DAG). In turn, cAMP and DAG activate protein kinases, which can phosphorylate many other enzymes. Activated phospholipase C also causes an increase in levels of the second messenger inositol-3-phosphate (I3P), which opens ion channels in the endoplasmic reticulum, causing a release of calcium from the endoplasmic store; this calcium also acts as a second messenger. On the other hand, the increase in cytosolic calcium levels is simultaneously tempered by the beta/gamma dimer, which can inhibit voltage-gated calcium channels in the cell membrane. Ultimately, the GPCR signal is extinguished by slow hydrolysis of GTP into GDP by the activated alpha monomer, causing it to rearrange with the beta/gamma dimer into the original inactivated trimeric G-protein (see also courses.washington.edu/conj/bess/gpcr/gpcr.htm).

The most well-known example of disruption of GPCR signalling is by cholera toxin (see text block Cholera toxin below). Despite the recognized importance of GPCRs in medicine and pharmacology, little attention has so far been paid in toxicology to the interaction of xenobiotics with GPCRs. Although a limited number of studies have demonstrated that endocrine disrupting compounds including PAHs, dioxins, phthalates, bisphenol-A, and DDT can interact with GPCR signalling, the toxicological implications of these interactions (especially with respect to disturbed energetic metabolism) remain subject for further research (see review by Le Ferrec and Øvrevik, 2018).

Cholera toxin
Cholera toxin is a so-called AB exotoxin produced by Vibrio cholerae bacteria, consisting of an "active" A-part and a "binding" B-part (see http://www.sumanasinc.com/webcontent/animations/content/diphtheria.html). Upon binding of the B-part to the intestinal epithelium membrane, the entire AB complex is internalized into the cell via endocytosis, and the active A-part is released. This A-part adds an ADP-ribose group to G-proteins, making the GTP hydrolysis of activated G-proteins impossible.
As a consequence, activated G-proteins remain in a permanently active state, adenylyl cyclase is permanently activated, and cAMP levels rise, which in turn causes an imbalance in ion housekeeping, i.e. an excessive secretion of chloride ions to the gut lumen and a decreased uptake of sodium ions from the gut lumen. Due to the increased osmotic pressure, water is released to the gut lumen, causing dehydration and severe diarrhoea ("rice-water stool").

Enzyme-linked receptors are transmembrane receptors that transfer an extracellular signal into an intracellular enzymatic activity. Most enzyme-linked receptors belong to the family of receptor tyrosine kinase (RTK) proteins. Upon binding of endogenous ligands such as hormones, cytokines, or growth factors (i.e. the signal or primary messenger) to the extracellular domain of the receptors, the receptor monomers dimerize and develop kinase activity, i.e. become capable of coupling a phosphate group, donated by a high-energy donor molecule, to an acceptor protein. The first substrate for this phosphorylation activity is the dimerized receptor itself, which accepts a phosphate group donated by ATP on its intracellular tyrosine residues. This autophosphorylation is the first step of a signalling pathway consisting of a cascade of subsequent phosphorylation steps of other kinase proteins (i.e. signal transduction), ultimately leading to transcriptional activation of genes followed by a cellular response.

Xenobiotic compounds can interfere with these signalling pathways in many different ways. Compounds may prevent binding of the endogenous ligand, by blocking the receptor or by chelating the endogenous ligands. Most RTK inhibitors inhibit the kinase activity directly by acting as a competitive inhibitor for ATP binding to the tyrosine residues. Many RTK inhibitors are used in cancer treatment, because RTK overactivity is typical for many types of cancer. This overactivity may, for instance, be caused by increased levels of receptor-activating growth factors, or by spontaneous dimerization when the receptor is overexpressed or mutated.

Nuclear receptors are proteins that are activated by endogenous compounds (often hormones), ultimately leading to expression of genes specifically regulated by these receptors. Apart from ligand binding, activation of most nuclear receptors requires dimerization with a coactivating transcription factor. While some nuclear receptors are located in the nucleus in inactive form (e.g. the thyroid hormone receptor), most nuclear receptors are located in the cytosol, where they are bound to co-repressor proteins (often heat-shock proteins) keeping them in an inactive state. Upon ligand binding to the ligand-binding domain (LBD) of the receptor, the co-repressor proteins are released and the receptor either forms a homodimer with a similar activated nuclear receptor or forms a heterodimer with a different nuclear receptor, which for nuclear hormone receptors is often the retinoid-X receptor (RXR). Before or after dimerization, activated nuclear receptors are translocated to the nucleus. In the nucleus, they bind through their DNA-binding domain (DBD, or "zinc finger") to a responsive element in the DNA located in the promotor region of receptor-responsive genes.
Consequently, these genes are transcribed to mRNA in the nucleus, which is further translated into proteins in the cell cytoplasm.

Xenobiotic compounds may act as agonists or antagonists of nuclear receptor activation. Chemicals that act as a nuclear receptor agonist mimic the action of the endogenous activator(s), whereas chemicals that act as a nuclear receptor antagonist basically block the LBD of the receptor, preventing the binding of the endogenous activator(s). Over the past decades, interaction of xenobiotics with nuclear receptors involved in signalling of both steroid and non-steroid hormones has gained a lot of attention from researchers investigating endocrine disruption (link to section on Endocrine Disruption). Nuclear receptor activation is also the key mechanism in dioxin-like toxicity (see text block Dioxin-like toxicity below).

Dioxin-like toxicity
The term dioxins refers to polyhalogenated dibenzo-[p]-dioxin (PHDD) compounds, which are planar molecules consisting of two halogenated aromatic rings connected by two ether bridges. The most potent and well-studied dioxin is 2,3,7,8-tetrachlorodibenzo-[p]-dioxin (2,3,7,8-TCDD), which is often too simply referred to as TCDD or even just "dioxin". Other compounds with similar properties (dioxin-like compounds) include polyhalogenated dibenzo-[p]-furan (PHDF) compounds (often too simply referred to as "furans"), which are planar molecules consisting of two halogenated aromatic rings connected by one ether bridge and one carbon-carbon bond. A third major class of dioxin-like compounds belongs to the polyhalogenated biphenyls (PHBs), which consist of two halogenated aromatic rings connected only by a carbon-carbon bond. The most well-known compounds belonging to this latter category are the polychlorinated biphenyls (PCBs). Of all PHDD, PHDF, and PHB compounds, only the persistent and planar compounds are considered dioxin-like compounds. For the PHBs, this implies that they should contain zero or at maximum one halogen substitution in any of the four ortho-positions (see examples below). Non-ortho-substituted PHBs can easily obtain a planar conformation with the two aromatic rings in one planar field, whereas mono-ortho-substituted PHBs can obtain such a conformation only at higher energetic costs.

- 2,3,7,8-tetrachlorodibenzo-[p]-dioxin (2,3,7,8-TCDD) is the most potent and well-studied dioxin-like compound, usually too simply referred to as "dioxin".
- 2,3,7,8-tetrachlorodibenzo-[p]-furan (2,3,7,8-TCDF) is a dioxin-like compound equally potent to 2,3,7,8-TCDD. It is usually too simply referred to as "furan".
- 3,3',4,4',5-pentachlorinated biphenyl (PCB-126) is the most potent dioxin-like PCB compound, with no chlorine substitution in any of the four ortho-positions next to the carbon-carbon bridge.
- 2,3',4,4',5-pentachlorinated biphenyl (PCB-118) is a weak dioxin-like PCB compound, with one chlorine substitution in the four ortho-positions next to the carbon-carbon bridge.
- 2,2',4,4',5,5'-hexachlorinated biphenyl (PCB-153) is a non-dioxin-like (NDL) PCB compound, with two chlorine substitutions in the four ortho-positions next to the carbon-carbon bridge.

The planar conformation is required for the dioxin-like compounds to fit as a key in the lock of the aryl hydrocarbon receptor (AhR, also known as the "dioxin receptor" or DR), present in the cytosol. The activated AhR then dissociates from its repressor proteins, is translocated to the nucleus, and forms a heterodimer with the AhR nuclear translocator (ARNT).
The AhR-ARNT complex binds to dioxin-response elements (DRE) in the promotor regions of dioxin-responsive genes in the DNA, ultimately leading to transcription and translation of these genes (see Denison & Nagy, 2003). Famous examples of such genes belong to the CYP1, UGT, and GST families, which encode Phase I and Phase II metabolic enzymes; their induction by the AhR-ARNT complex is a natural response triggered by the need to remove xenobiotics (link to section on Xenobiotic metabolism and defence). Other genes with a DRE in their promotor region include genes involved in protein phosphorylation, such as the proto-oncogene c-raf and the cyclin-dependent kinase inhibitor p27.

This classical mechanism of ligand:AhR:ARNT:DRE complex-dependent induction of gene expression, however, cannot explain all the different types of toxicity observed for dioxins, including immunotoxicity, reproductive toxicity, and developmental toxicity. Still, these effects are known to be mediated through the AhR as well, as they were not observed in AhR knockout mice. This can partly be explained by the fact that not all genes that are under transcriptional control of a DRE are known yet. Moreover, AhR-dependent mechanisms other than this classical mechanism have been described. For instance, AhR activation may have anti-estrogenic effects because activated AhR binds to the estrogen receptor (ER) and targets it for degradation, binds (with ARNT) to inhibitory DREs in the promotor of ER-dependent genes, and competes with the ER dimer for common coactivators. Although dioxin-like compounds absolutely require the AhR to exert their major toxicological effects, several AhR-independent effects have been described as well, such as AhR-independent alterations in gene expression and changes in Ca2+ influx related to changes in protein kinase activity.

Apart from the persistent halogenated dioxin-like compounds described above, other compounds may also activate the AhR, including natural AhR agonists (nAhRAs) found in food (e.g. indolo[3,2-b]carbazole (ICZ) in cruciferous vegetables, bergamottin in grapefruits, tangeretin in citrus fruits), and other planar aromatic compounds, including polycyclic aromatic hydrocarbons (PAHs) produced by incomplete combustion of organic fuels. Upon activation of the AhR, these non-persistent compounds are metabolized by the induced CYP1A biotransformation enzymes. In addition, an endogenous AhR ligand called 6-formylindolo[3,2-b]carbazole (FICZ) has been identified. FICZ is a mediator in many physiological processes, including immune responses, cell growth, and differentiation. Endogenous FICZ levels are regulated by a negative feedback FICZ/AhR/CYP1A loop, i.e. FICZ activates the AhR and is metabolized by the subsequently induced CYP1A. Dysregulation of this negative feedback loop by other AhR agonists may disrupt FICZ functioning, and could possibly explain some of the effects observed for dioxin-like compounds.

Further reading:
Denison, M.S., Soshilov, A.A., He, G., De Groot, D.E., Zhao, B. (2011). Exactly the same but different: promiscuity and diversity in the molecular mechanisms of action of the aryl hydrocarbon (dioxin) receptor. Toxicological Sciences 124, 1-22.
Boelsterli, U.A. (2007). Mechanistic Toxicology (2nd edition). Informa Healthcare, New York, London.
Le Ferrec, E., Øvrevik, J. (2018). G-protein coupled receptors (GPCR) and environmental exposure. Consequences for cell metabolism using the β-adrenoceptors as example.
Current Opinion in Toxicology 8, 14-19.

Questions:
- Why are compounds interfering with ion channels mainly neurotoxic compounds?
- GPCR signalling is not only disrupted through interaction with the receptor. What alternative mechanisms can play a role?
- What is the main effect of activating enzyme-linked receptors?
- What happens if a compound binds to a nuclear receptor?

Reactive oxygen species and antioxidants
Author: Frank van Belleghem
Reviewers: Raymond Niesink, Kees van Gestel, Éva Hideg
Keywords: Reactive oxygen species; Fenton reaction; Enzymatic antioxidants; Non-enzymatic antioxidants; Lipid peroxidation

Molecular oxygen (O2) is a byproduct of photosynthesis and essential to all heterotrophic cells because it functions as the terminal electron acceptor during the oxidation of organic substances in aerobic respiration. This process results in the reduction of O2 to water, leading to the formation of chemical energy and reducing power. The reason why O2 can be reduced with relative ease in biological systems can be found in the physicochemical properties of the oxygen molecule (in the triplet ground state, i.e. as it occurs in the atmosphere). Because of its electron configuration, O2 is actually a biradical that can act as an electron acceptor. The outer molecular orbitals of O2 each contain one electron, and the spins of these electrons are parallel. As a result, oxygen (in the ground state) is not very reactive because, according to the Pauli exclusion principle, only one electron at a time can react with other electrons in a covalent bond. As a consequence, oxygen can only undergo univalent reductions, and the complete reduction of oxygen to water requires the sequential addition of four electrons, leading to the formation of one-, two-, and three-electron reduced oxygen intermediates. These oxygen intermediates are, in sequence, the superoxide anion radical (O2●-), hydrogen peroxide (H2O2), and the hydroxyl radical (●OH).
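This stepwise univalent reduction can be summarized schematically; each arrow represents the uptake of one electron, with protons as indicated:

```latex
% Schematic univalent reduction of O2 to water
\mathrm{O_2}
\;\xrightarrow{e^-}\;
\mathrm{O_2^{\bullet-}}
\;\xrightarrow{e^-,\,2\mathrm{H}^+}\;
\mathrm{H_2O_2}
\;\xrightarrow{e^-,\,\mathrm{H}^+}\;
{}^{\bullet}\mathrm{OH} + \mathrm{H_2O}
\;\xrightarrow{e^-,\,\mathrm{H}^+}\;
2\,\mathrm{H_2O}
```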
Another reactive oxygen species of importance is singlet oxygen (1O2 or 1Δg). Singlet oxygen is formed by converting ground-state molecular oxygen into an excited energy state, which is much more reactive than the normal ground-state molecular oxygen. Singlet oxygen is typically generated by a process called photosensitization, for example in the lens of the eye. Photosensitization occurs when light (UV) absorption by an endogenous or xenobiotic substance lifts the compound to a higher energy state (a high-energy triplet intermediate), which can transfer its energy to oxygen, forming highly reactive singlet oxygen. Apart from oxygen-dependent photodynamic reactions, singlet oxygen is also produced by neutrophils, and this has been suggested to be important for bacterial killing through the formation of ozone (O3) (Onyango, 2016). Because these oxygen intermediates are potentially deleterious products that can damage cellular components, they are referred to as reactive oxygen species (ROS). ROS are also often termed 'free radicals', but this is incorrect because not all ROS are radicals (e.g. H2O2, 1O2 and O3). Moreover, as all radicals are (currently) considered as unattached, the prefix 'free' is actually unnecessary (Koppenol & Traynham, 1996).

ROS are byproducts of aerobic metabolism in the different organelles of cells, arising for instance from respiration or photosynthesis, or as part of defenses against pathogens. Endogenous sources of reactive oxygen species include oxidative phosphorylation, P450 metabolism, peroxisomes, and inflammatory cell activation. For example, superoxide anion radicals are endogenously formed from the reduction of oxygen by the semiquinone of ubiquinone (coenzyme Q), a coenzyme widely distributed in plants, animals, and microorganisms. Ubiquinones function in conjunction with enzymes in cellular respiration (i.e., oxidation-reduction processes). The superoxide anion radical is formed when one electron is taken up by one of the antibonding π*-orbitals (formed by two 2p atomic orbitals) of molecular oxygen.

A second example of an endogenous source of superoxide anion radicals is the auto-oxidation of reduced heme proteins. It is known, for example, that oxyferrocytochrome P-450 substrate complexes may undergo auto-oxidation and subsequently split into (ferri)cytochrome P-450, a superoxide anion radical, and the substrate (S). This process is known as the uncoupling of the cytochrome P-450 (CYP) cycle and is also referred to as the oxidase activity of cytochrome P-450. It should be mentioned, however, that this is not the normal functioning of CYP: it occurs only when the transfer of an oxygen atom to a substrate is not tightly coupled to NADPH utilization, so that electrons derived from NADPH are transferred to oxygen to produce O2●- (and also H2O2).

Table 1 shows the key oxygen species, their biological half-life, their migration distance, their endogenous sources, and their reactions with biological compounds.

Table 1. The key oxygen species and their characteristics (table adapted from Das & Roychoudhury, 2014)

ROS species | Half-life (T1/2) | Migration distance | Endogenous source | Mode of action
Superoxide anion radical (O2●-) | 1-4 µs | 30 nm | Mitochondria, cytochrome P450, macrophage/inflammatory cells, membranes, chloroplasts | Reacts with compounds with double bonds
Hydroxyl radical (●OH) | 1 µs | 1 nm | Mitochondria, membranes, chloroplasts | Reacts vigorously with all biomolecules
Hydrogen peroxide (H2O2) | 1 ms | 1 µm | Mitochondria, membranes, peroxisomes, chloroplasts | Oxidizes proteins by reacting with Cys residues
Singlet oxygen (1O2) | 1-4 µs | 30 nm | Mitochondria, membranes, chloroplasts | Oxidizes proteins, polyunsaturated fatty acids, and DNA

Because of their reactivity, at elevated levels ROS can indiscriminately damage cellular components such as lipids, proteins, and nucleic acids. In particular, the superoxide anion radical and the hydroxyl radical, which possess an unpaired electron, are very reactive. In fact, the hydroxyl radical has the highest one-electron reduction potential, making it the single most reactive radical known. Hydroxyl radicals can arise from hydrogen peroxide in the presence of redox-active transition metals, notably Fe2+/3+ or Cu+/2+, via the Fenton reaction. In the case of iron, for this reaction to take place, the oxidized form (Fe3+) has to be reduced to Fe2+. This means that Fe2+ is only released in an acidic environment (local hypoxia) or in the presence of superoxide anion radicals. The reduction of Fe3+, followed by the interaction of Fe2+ with hydrogen peroxide to generate the hydroxyl radical, is called the iron-catalyzed Haber-Weiss reaction.
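In reaction form (a standard formulation of these reactions):

```latex
\begin{aligned}
\mathrm{Fe^{3+}} + \mathrm{O_2^{\bullet-}} &\rightarrow \mathrm{Fe^{2+}} + \mathrm{O_2}\\
\mathrm{Fe^{2+}} + \mathrm{H_2O_2} &\rightarrow \mathrm{Fe^{3+}} + \mathrm{OH^-} + {}^{\bullet}\mathrm{OH} \quad \text{(Fenton reaction)}\\
\mathrm{O_2^{\bullet-}} + \mathrm{H_2O_2} &\rightarrow \mathrm{O_2} + \mathrm{OH^-} + {}^{\bullet}\mathrm{OH} \quad \text{(net: iron-catalyzed Haber-Weiss reaction)}
\end{aligned}
```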
In order to keep ROS concentrations at low physiological levels, aerobic organisms have evolved complex antioxidant defense systems that include both enzymatic and non-enzymatic antioxidant components. These are cellular mechanisms that have evolved to inhibit oxidation by quenching ROS.

Three classes of enzymes are known to provide protection against reactive oxygen species: the superoxide dismutases, which catalyze the dismutation of the superoxide anion radical, and the catalases and peroxidases, which react specifically with hydrogen peroxide. These antioxidant enzymes can be seen as a first-line defense, as they prevent the conversion of the less reactive oxygen species, the superoxide anion radical and hydrogen peroxide, into more reactive species such as the hydroxyl radical. The second line of defense largely consists of non-enzymatic substances that eliminate radicals, such as glutathione and vitamins E and C.

Superoxide dismutases (SODs) are metal-containing proteins (metalloenzymes) that catalyze the dismutation of the superoxide anion radical to molecular oxygen in the ground state and hydrogen peroxide, as illustrated by the following reactions, in which M stands for the redox-active metal in the active site:

(a) M(n+1)+ + O2●- → Mn+ + O2
(b) Mn+ + O2●- + 2H+ → M(n+1)+ + H2O2

In the first part of the dismutation reaction the superoxide anion radical acts as a reducing agent (a), and in the second part as an oxidant (b). Different types of SOD are located in different cellular locations: for instance, Cu-Zn-SOD is mainly located in the cytosol of eukaryotes, Mn-SOD in mitochondria and prokaryotes, Fe-SOD in chloroplasts and prokaryotes, and Ni-SOD in prokaryotes. Mn, Fe, Cu, and Ni are the redox-active metals in these enzymes, whereas Zn is not catalytic in Cu-Zn-SOD.

H2O2 is further degraded by catalase and peroxidase. Catalase (CAT) contains four iron-containing heme groups that allow the enzyme to react with hydrogen peroxide and is usually located in peroxisomes, which are organelles with a high rate of ROS production. Catalase converts hydrogen peroxide to water and oxygen. In fact, catalase cooperates with superoxide dismutase in the removal of the hydrogen peroxide resulting from the dismutation reaction. Catalase acts only on hydrogen peroxide, not on organic hydroperoxides.

Peroxidases (Px) are hemoproteins that utilize H2O2 to oxidize a variety of endogenous and exogenous substrates. An important peroxidase enzyme family is the selenium-cysteine containing glutathione peroxidases (GPx), present in the cytosol and mitochondria. GPx catalyzes the conversion of hydrogen peroxide (H2O2) to H2O via the oxidation of reduced glutathione (GSH) into its disulfide form, glutathione disulfide (GSSG). Glutathione peroxidase catalyzes not only the conversion of hydrogen peroxide, but also that of organic peroxides; it can transform various peroxides, e.g. the hydroperoxides of lipids. In the cytosol, the enzyme is present in special vesicles. Another group of enzymes, not further described here, are the peroxiredoxins (Prxs); present in the cytosol, mitochondria, and endoplasmic reticulum, they use a pair of cysteine residues to reduce and thereby detoxify hydrogen peroxide and other peroxides. It has to be mentioned that no enzymes react with the hydroxyl radical or singlet oxygen.
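The catalase and glutathione peroxidase reactions just described can be summarized as:

```latex
\begin{aligned}
2\,\mathrm{H_2O_2} &\xrightarrow{\text{catalase}} 2\,\mathrm{H_2O} + \mathrm{O_2}\\
\mathrm{H_2O_2} + 2\,\mathrm{GSH} &\xrightarrow{\text{glutathione peroxidase}} \mathrm{GSSG} + 2\,\mathrm{H_2O}
\end{aligned}
```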
The second line of defense largely consists of non-enzymatic substances that eliminate radicals. The major antioxidant is glutathione (GSH), which acts as a nucleophilic scavenger of toxic compounds, trapping electrophilic metabolites by forming a thioether bond between the cysteine residue of GSH and the electrophile. The result generally is a less reactive and more water-soluble conjugate that can easily be excreted (see also phase II biotransformation reactions). GSH also is a co-substrate for the enzymatic (glutathione peroxidase-catalyzed) degradation of H2O2, and it keeps cells in a reduced state and is involved in the regeneration of oxidized proteins.

Other important radical scavengers of the cell are the vitamins E and C. Vitamin E (α-tocopherol) is lipophilic and is incorporated in cell membranes and subcellular organelles (endoplasmic reticulum, mitochondria, cell nuclei), where it reacts with lipid peroxides. α-Tocopherol can be divided into two parts: a lipophilic phytyl tail (intercalating with fatty acid residues of phospholipids) and a more hydrophilic chromanol head with a phenolic group (facing the cytoplasm). This phenolic group can reduce radicals (e.g. lipid peroxyl radicals, LOO●) and is thereby itself oxidized to the tocopheryl radical, which is relatively unreactive because it is stabilized by resonance. The tocopheryl radical is regenerated by vitamin C or by reduced glutathione; oxidized non-enzymatic antioxidants are in turn regenerated by various enzymes, such as glutathione reductase.

Vitamin C (ascorbic acid) is a water-soluble antioxidant present in the cytoplasm. Ascorbic acid is an electron donor that reacts quite rapidly with the superoxide anion radical and peroxyl radicals, but it is generally ineffective in detoxifying hydroxyl radicals, because the hydroxyl radical is so extremely reactive that it reacts before it can reach the antioxidant (see Klaassen, 2013). Moreover, ascorbic acid regenerates α-tocopherol in combination with reduced GSH or other compounds capable of donating reducing equivalents (Nimse and Pal, 2015).

References:
Das, K., Roychoudhury, A. (2014). Reactive oxygen species (ROS) and response of antioxidants as ROS-scavengers during environmental stress in plants. Frontiers in Environmental Science 2, 53.
Edreva, A. (2005). Generation and scavenging of reactive oxygen species in chloroplasts: a submolecular approach. Agriculture, Ecosystems & Environment 106, 119-133.
Klaassen, C.D. (2013). Casarett & Doull's Toxicology: The Basic Science of Poisons, Eighth Edition. McGraw-Hill Professional.
Koppenol, W.H., Traynham, J.G. (1996). Say NO to nitric oxide: nomenclature for nitrogen- and oxygen-containing compounds. In: Methods in Enzymology (Vol. 268, pp. 3-7). Academic Press.
Bolton, J.L. (2014). Quinone methide bioactivation pathway: contribution to toxicity and/or cytoprotection? Current Organic Chemistry 18, 61-69.
Nimse, S.B., Pal, D. (2015). Free radicals, natural antioxidants, and their reaction mechanisms. RSC Advances 5, 27986-28006.
Onyango, A.N. (2016). Endogenous generation of singlet oxygen and ozone in human and animal tissues: mechanisms, biological significance, and influence of dietary components. Oxidative Medicine and Cellular Longevity 2016.
Niesink, R.J.M., De Vries, J., Hollinger, M.A. (1996). Toxicology: Principles and Applications. CRC Press.
Smart, R.C., Hodgson, E. (Eds.) (2008). Molecular and Biochemical Toxicology. John Wiley & Sons.

Questions:
- What are the chances of hydroxyl radicals being formed inside the cell?
  On what factors does such formation depend?
- Given two oxygen species: (I) atmospheric oxygen (O2) and (II) singlet oxygen (1O2). Which oxygen species contains one or more unpaired electrons, and therefore has radical properties?
  I and II
  only I
  only II
  neither I nor II
- Which of the following radicals are detoxified by α-tocopherol (vitamin E)?
  I hydroxyl radical, •OH
  II superoxide anion radical, O2-•
  III lipid radical (L•)
  IV lipid peroxyl radical (LOO•)
  I and II
  III and IV
  I, II and III
  II, III and IV
- Given three enzymes: (I) catalase, (II) peroxidase, (III) superoxide dismutase. Which enzyme(s) remove(s) hydrogen peroxide?
  I and II
  I and III
  II and III
  only III

Induction by chemical exposure and possible effects
Author: Frank van Belleghem
Reviewers: Raymond Niesink, Kees van Gestel, Éva Hideg
Keywords: prooxidant-antioxidant balance; bioactivation; oxidative damage

The formation of reactive oxygen species (ROS; see section on Oxidative stress I) may involve endogenous substances and chemical-physiological processes as well as xenobiotics. Experimental evidence has shown that oxidative stress can be considered one of the key mechanisms contributing to the cellular damage caused by many toxicants. Oxidative stress has been defined as "a disturbance in the prooxidant-antioxidant balance in favour of the former", leading to potential damage. It is the point at which the production of ROS exceeds the capacity of antioxidants to prevent damage (Klaassen et al., 2013).

Xenobiotics involved in the formation of the superoxide anion radical are mainly substances that can be taken up in so-called redox cycles. These include quinones and hydroquinones in particular. In the case of quinones, the redox cycle starts with a one-electron reduction step, as in the case of benzoquinone. The resulting benzosemiquinone subsequently passes the received electron on to molecular oxygen, forming a superoxide anion radical. The reduction of quinones is catalyzed by the NADPH-dependent cytochrome P-450 reductase. Conversely, hydroquinones can enter a redox cycle via an oxidative step. This step may be catalyzed by enzymes, for example prostaglandin synthase.

Other types of xenobiotics that can be taken up in a redox cycle are the bipyridyl derivatives. A well-known example is the herbicide paraquat, which causes injury to lung tissue in humans and animals and is bioactivated via such a redox cycle. Further compounds that are taken up in a redox cycle are nitroaromatics, azo compounds, aromatic hydroxylamines, and certain metal (particularly Cu and Zn) chelates.

Xenobiotics can also enhance ROS production if they are able to enter mitochondria, microsomes, or chloroplasts and interact with the electron transport chains, thus blocking the normal electron flow. As a consequence, and especially if the compounds are electron acceptors, they divert the normal electron flow and increase the production of ROS. A typical example is the cytostatic drug doxorubicin, a well-known chemotherapeutic agent used in the treatment of a wide variety of cancers. Doxorubicin has a high affinity for cardiolipin, an important component of the inner mitochondrial membrane, and therefore accumulates at that subcellular location.

Xenobiotics can also cause oxidative damage indirectly by interfering with the antioxidative mechanisms. For instance, it has been suggested that, as a non-Fenton metal, cadmium (Cd) is unable to directly induce ROS.
However, indirectly, Cd induces oxidative stress by displacing redox-active metals, depleting redox scavengers (glutathione), and inhibiting antioxidant enzymes (via binding to protein-bound sulfhydryl groups) (Cuypers et al., 2010; Thévenod, 2009).

As mentioned before, oxidative stress has been defined as "a disturbance in the prooxidant-antioxidant balance in favour of the former". ROS can damage proteins, lipids, and DNA via direct oxidation, or through redox sensors that transduce signals, which in turn can activate cell-damaging processes like apoptosis.

Oxidative protein damage
Xenobiotic-induced generation of ROS can damage proteins through the oxidation of side chains of amino acid residues, the formation of protein-protein cross-links, and fragmentation of proteins due to peptide backbone oxidation. The sulfur-containing amino acids cysteine and methionine are particularly susceptible to oxidation. An example of side chain oxidation is the direct interaction of the superoxide anion radical with sulfhydryl (thiol) groups, thereby forming thiyl radicals (RS●) as intermediates. In this way, glutathione, which is composed of three amino acids (cysteine, glycine, and glutamate) and is an important cellular reducing agent, can be damaged. This means that if the oxidation cannot be compensated or repaired, oxidative stress can lead to depletion of reducing equivalents, which may have detrimental effects on the cell.

Fortunately, antioxidant defence mechanisms limit the oxidative stress and the cell has repair mechanisms to reverse the damage. For example, heat shock proteins (hsp) are able to renature damaged proteins, and oxidatively damaged proteins are degraded by the proteasome.

Oxidative lipid damage
Increased concentrations of reactive oxygen radicals can cause membrane damage due to lipid peroxidation (oxidation of polyunsaturated lipids). This damage may result in altered membrane fluidity, enzyme activity, and membrane permeability and transport characteristics. An important feature characterizing lipid peroxidation is the fact that the initial radical-induced damage at a certain site in a membrane lipid is readily amplified and propagated in a chain-reaction-like fashion, thus dispersing the damage across the cellular membrane. Moreover, the products arising from lipid peroxidation (e.g. alkoxyl radicals or toxic aldehydes) may be equally reactive as the original ROS themselves and damage cells by additional mechanisms. The chain reaction of lipid peroxidation consists of three steps: initiation, propagation, and termination.

In step II (propagation), the peroxidation of biomembranes generates a variety of reactive intermediates and electrophiles, such as epoxides, lipid peroxyl radicals (LOO●), and aldehydes, including malondialdehyde (MDA). MDA is a highly reactive aldehyde that exhibits reactivity toward nucleophiles and can form MDA-MDA dimers. Both MDA and the MDA-MDA dimers are mutagenic and indicative of oxidative damage of lipids by a variety of toxicants.

A classic example of xenobiotic bioactivation to a free radical that initiates lipid peroxidation is the cytochrome P450-dependent conversion of carbon tetrachloride (CCl4), generating first the trichloromethyl radical (●CCl3) and then the trichloromethyl peroxyl radical (CCl3OO●).
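A minimal reaction scheme for these three steps, with LH denoting a polyunsaturated lipid and R● an initiating radical (e.g. ●OH or the CCl3OO● radical just mentioned):

```latex
\begin{aligned}
\text{initiation:}\quad & \mathrm{LH} + \mathrm{R}^{\bullet} \rightarrow \mathrm{L}^{\bullet} + \mathrm{RH}\\
\text{propagation:}\quad & \mathrm{L}^{\bullet} + \mathrm{O_2} \rightarrow \mathrm{LOO}^{\bullet}, \qquad
\mathrm{LOO}^{\bullet} + \mathrm{LH} \rightarrow \mathrm{LOOH} + \mathrm{L}^{\bullet}\\
\text{termination:}\quad & \mathrm{LOO}^{\bullet} + \mathrm{LOO}^{\bullet} \rightarrow \text{non-radical products}
\end{aligned}
```

Because each propagation cycle regenerates a lipid radical L●, a single initiating event can oxidize many lipid molecules, which is why the damage spreads across the membrane in a chain-reaction-like fashion.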
The cytotoxicity of free iron is likewise attributed to its function as an electron donor for the Fenton reaction (see section on Oxidative stress I), for instance via the generation of superoxide anion radicals by paraquat redox cycling, leading to the formation of the highly reactive hydroxyl radical, a known initiator of lipid peroxidation.

Oxidative DNA damage
ROS can also oxidize DNA bases and sugars, produce single- or double-stranded DNA breaks, purine, pyrimidine, or deoxyribose modifications, and DNA crosslinks. A common modification of DNA is the hydroxylation of DNA bases, leading to the formation of oxidized DNA adducts. Although these adducts have been identified in all four DNA bases, guanine is the most susceptible to oxidative damage because it has the lowest oxidation potential of all the DNA bases. The oxidation of guanine by hydroxyl radicals leads to the formation of 8-hydroxy-2'-deoxyguanosine (8-OH-dG).

Oxidation of guanine has a detrimental effect on base pairing, because instead of hydrogen bonding with cytosine as guanine normally does, the oxidized base can form a base pair with adenine. As a result, during DNA replication, DNA polymerase may mistakenly insert an adenine opposite an 8-oxo-2'-deoxyguanosine (8-oxo-dG), resulting in a stable change in DNA sequence, a process known as mutagenesis.

Fortunately, there is an extensive repair mechanism that keeps mutations at a relatively low level. Nevertheless, persistent DNA damage can result in replication errors, induction or inhibition of transcription, induction of signal transduction pathways, and genomic instability, events that are possibly involved in carcinogenesis. It has to be mentioned that mitochondrial DNA is more susceptible to oxidative base damage than nuclear DNA, due to its proximity to the electron transport chain (a source of ROS) and the fact that mitochondrial DNA is not protected by histones and has a limited DNA repair system.

Redox-active metals, e.g. Cu(II), Ag(I), Cr(III), and Cr(VI), may entail, as seen before, the production of hydroxyl radicals. Other (non-redox-active) metals that can induce ROS formation themselves or participate in the reactions leading to endogenously generated ROS are Pb(II), Cd(II), Zn(II), and the metalloids As(III) and As(V). Compounds like polycyclic aromatic hydrocarbons (PAHs), likely the largest family of pollutants with genotoxic effects, require activation by endogenous metabolism to become reactive and capable of modifying DNA. This activation is brought about by so-called Phase I biotransformation (see section on Xenobiotic metabolism and defence).

Detoxifying enzymes, like cytochrome P-450 1A1, are able to hydroxylate hydrophobic substrates. Whereas this reaction normally facilitates the excretion of the modified substance, some polycyclic aromatic hydrocarbons (PAHs), like benzo[a]pyrene, generate semi-stable epoxides that can ultimately react with DNA, forming mutagenic adducts (see section on Xenobiotic metabolism and defence). The main regulator of phase I metabolism in vertebrates, the aryl hydrocarbon receptor (AhR), is a crucial player in this process. Some PAHs, dioxins, and some PCBs (the so-called coplanar congeners; see section on Complex mixtures) bind and activate the AhR and increase the activity of phase I enzymes, including cytochrome P-450 1A1 (CYP1A1), severalfold.
This increased oxidative metabolism enhances the toxic effects of the substances, leading to increased DNA damage and inflammation.

ROS production and oxidative stress can act on both cell proliferation and apoptosis. It has been demonstrated that low levels of ROS influence signal transduction pathways and alter gene expression, for instance via redox-sensitive protein kinases. Activation of these signaling cascades ultimately leads to altered expression of a number of genes, including those affecting proliferation, differentiation, and apoptosis.

References:
Boelsterli, U.A. (2007). Mechanistic toxicology: the molecular basis of how chemicals disrupt biological targets. CRC Press.
Cuypers, A., Plusquin, M., Remans, T., Jozefczak, M., Keunen, E., Gielen, H., ..., Nawrot, T. (2010). Cadmium stress: an oxidative challenge. Biometals 23, 927-940.
Furue, M., Takahara, M., Nakahara, T., Uchi, H. (2014). Role of AhR/ARNT system in skin homeostasis. Archives of Dermatological Research 306, 769-779.
Klaassen, C.D. (2013). Casarett & Doull's Toxicology: The Basic Science of Poisons, Eighth Edition. McGraw-Hill Professional.
Niesink, R.J.M., De Vries, J., Hollinger, M.A. (1996). Toxicology: Principles and Applications. CRC Press.
Thévenod, F. (2009). Cadmium and cellular signaling cascades: to be or not to be? Toxicology and Applied Pharmacology 238, 221-239.

Questions:
- The herbicide paraquat induces oxidative stress due to:
  Its involvement in the redox cycle.
  Its interaction with glutathione.
  Its involvement in the Fenton reaction.
- Which biopolymers can undergo damage from reactive oxygen species?
  only DNA and proteins
  only DNA and membranes
  only proteins and membranes
  DNA, proteins and membranes
- Given the three steps of lipid peroxidation: (I) initiation, (II) propagation, (III) termination. In which step(s) is O2 involved as a reagent or as a product?
  Only I
  Only II
  I and II
  II and III
- Mitochondrial DNA, compared to nuclear DNA, is relatively susceptible to oxidative base damage. Which of the given alternatives is not correct? The increased susceptibility of mitochondrial DNA is due to:
  The proximity of mitochondrial DNA to the electron transport chain
  Mitochondrial DNA is not protected by histones
  The limited levels of antioxidative compounds inside mitochondria
  The limited mitochondrial DNA repair system

Authors: Frank Van Belleghem, Karen Smeets
Reviewers: Timo Hamers, Bas J. Blaauboer
Keywords: cell death; apoptosis; necrosis; caspase activation; mitochondrial permeability transition

Cytotoxicity or cell toxicity is the result of chemical-induced macromolecular damage (see the section on Protein inactivation) or receptor-mediated disturbances (see the section on Receptor interactions). Initial events such as covalent binding to DNA or proteins, loss of calcium control, or oxidative stress (see the sections on Oxidative stress I and II) can compromise key cellular functions or trigger cell death. Cell death is the ultimate endpoint of lethal cell injury and can be caused by chemical compounds, mediator cells (e.g. natural killer cells), or physical/environmental conditions (e.g. radiation, pressure). The multistep process of cell death involves several regulated processes and checkpoints to be passed before the cell eventually reaches a point of no return, leading either to programmed cell death (apoptosis) or to a more accidental form of cell death, called necrosis. This section describes the cytotoxic process itself; in vitro cytotoxicity testing is dealt with in the section on Human toxicity testing - II.
In vitro tests.

Cells can actively maintain the intracellular environment within a narrow range of physiological parameters despite changes in the conditions of the surrounding environment. This internal steady state is termed cellular homeostasis. Exposure to toxic compounds can compromise homeostasis and lead to injury. Cell injury may be direct (primary), when a toxic substance interacts with one or more target molecules of the cell (e.g. damage to enzymes of the electron transport chain), or indirect (secondary), when a toxic substance disturbs the microenvironment of the cell (e.g. decreased supply of oxygen or nutrients). The injury is called reversible when cells can undergo repair or adaptation to achieve a new viable steady state. When the injury persists or becomes too severe, it becomes irreversible and the cell eventually perishes, thereby terminating cellular functions like respiration, metabolism, growth, and proliferation, resulting in cell death (Niesink et al., 1996).

The main factors determining the occurrence of cell death are the nature of the compound and the concentration and duration of exposure. It is important to realize that even "harmless" substances such as glucose or salt may, at sufficient concentrations, lead to cell injury and cell death by disrupting the osmotic homeostasis. Even an essential molecule such as oxygen causes cell injury at sufficiently high partial pressures (see the sections on Oxidative stress I and II). Apart from that, all chemicals exert "baseline toxicity" (also called "narcosis"), as described in the text block "Narcosis and membrane damage" in the section on Toxicodynamics & Molecular Interactions.

The main types of cell death: necrosis and apoptosis
The two most important types of cell death are necrosis, or accidental cell death (ACD), and apoptosis, a form of programmed cell death (PCD) or cell suicide. Cellular imbalances that initiate or promote cell death, alone or in combination, are oxidative stress, mitochondrial injury, and disturbed calcium fluxes. These alterations are reversible at first but, after progressive injury, result in irreversible cell death. Cell death can also be initiated via receptor-mediated signal transduction processes. Apoptotic and necrotic cells differ in both morphological appearance and biochemical characteristics. Necrosis is associated with cell swelling and a rapid loss of membrane integrity, whereas apoptotic cells shrink into small apoptotic bodies. Leaking cells during necrosis induce inflammatory responses, although inflammation is not entirely excluded during the apoptotic process (Rock & Kono, 2008).

Necrosis
Necrosis has been termed accidental cell death because it is a pathological response to cellular injury after exposure to severe physical, chemical, or mechanical stressors. Necrosis is an energy-independent process that corresponds with damage to cell membranes and subsequent loss of ion homeostasis (in particular Ca2+). Essentially, the loss of cell membrane integrity allows enzymes to leak out of the lysosomal membranes, destroying the cell from the inside. Necrosis is characterized by swelling of the cytoplasm and organelles, rupture of the plasma membrane, and chromatin condensation. These morphological appearances are associated with ATP depletion, defects in protein synthesis, cytoskeletal damage, and DNA damage. In addition, cell organelles and cellular debris leak via the damaged membranes into the extracellular space, leading to activation of the immune system and inflammation (Kumar et al., 2015). In contrast to apoptosis, the fragmentation of DNA is a late event.
In a subsequent stage, injury is propagated across the neighbouring tissues via the release of proteolytic and lipolytic enzymes, resulting in larger areas of necrotic tissue. Although necrosis is traditionally considered an uncontrolled form of cell death, emerging evidence points out that the process can also occur in a regulated and genetically controlled manner, termed regulated necrosis (Berghe et al., 2014). Moreover, it can also be an autolytic process of cell disintegration after the apoptotic program is completed in the absence of scavengers (phagocytes), termed post-apoptotic or secondary necrosis (Silva, 2010).

Apoptosis
Apoptosis is a regulated (programmed) physiological process whereby superfluous or potentially harmful cells (for example infected or pre-cancerous cells) are removed in a tightly controlled manner. It is an important process in embryonic development, the immune system, and, in fact, all living tissues. Apoptotic cells shrink and break into small fragments that are phagocytosed by adjacent cells or macrophages without producing an inflammatory response. Apoptosis can be seen as a form of cellular suicide because cell death is the result of induction of active processes within the cell itself. It is an energy-dependent process (it requires ATP) that involves the activation of caspases (cysteine-aspartyl proteases), pro-apoptotic proteins present as zymogens (i.e. inactive enzyme precursors that are activated by hydrolysis). Once activated, they function as cysteine proteases and activate other caspases. Caspases can be distinguished into two groups: the initiator caspases, which start the process, and the effector caspases, which specifically lyse molecules that are essential for cell survival (Blanco & Blanco, 2017). Apoptosis can be triggered by stimuli coming from within the cell (intrinsic pathway) or from the extracellular medium (extrinsic pathway).

The extrinsic pathway activates apoptosis in response to external stimuli, namely extracellular ligands binding to cell-surface death receptors (e.g. the tumour necrosis factor receptor, TNFR), leading to the formation of the death-inducing signalling complex (DISC) and the caspase cascade leading to apoptosis. The intrinsic pathway is activated by cell stressors such as DNA damage, lack of growth factors, endoplasmic reticulum (ER) stress, reactive oxygen species (ROS) burden, replication stress, microtubular alterations, and mitotic defects (Galluzzi et al., 2018). These cellular events cause the release of cytochrome c and other pro-apoptotic proteins from the mitochondria into the cytosol via the mitochondrial permeability transition (MPT) pore. This is a megachannel in the inner membrane of the mitochondria, composed of several protein complexes, that facilitates the release of death proteins such as cytochrome c. Its opening is triggered and tightly regulated by the balance between pro-apoptotic proteins, such as Bax (Bcl-2 associated X protein) and Bak (Bcl-2 antagonist killer), and anti-apoptotic proteins, such as B-cell lymphoma-2 (Bcl-2). The intrinsic and extrinsic pathways are regulated by the apoptosis inhibitor protein (AIP), which directly interacts with caspases and suppresses apoptosis. The release of the death protein cytochrome c induces the formation of a large protein structure formed in the process of apoptosis (the apoptosome complex), activating the caspase cascade leading to apoptosis. Other pro-apoptotic proteins oppose Bcl-2 (SMAC/Diablo) or stimulate caspase activity by interfering with AIP (HtrA2/Omi).
HtrA2/Omi also activates caspases; endonuclease G, another released factor, is responsible for DNA degradation, chromatin condensation, and DNA fragmentation, and the apoptosis-inducing factor (AIF) is likewise involved in chromatin condensation and DNA fragmentation. Many xenobiotics interfere with the MPT pore, and the fate of a cell depends on the balance between pro- and anti-apoptotic agents (Blanco & Blanco, 2017).

What determines the form of cell death caused by chemical substances?
Traditionally, toxic cell death was considered to be uniquely of the necrotic type. The classic example of necrosis is the liver toxicity of carbon tetrachloride (CCl4), caused by the biotransformation of CCl4 to the highly reactive radicals CCl3• and CCl3OO•. Several environmental contaminants, including heavy metals (Cd, Cu, CH3Hg, Pb), organotin compounds and dithiocarbamates, can exert their toxicity via induction of apoptosis, likely mediated by disruption of intracellular Ca2+ homeostasis or induction of mild oxidative stress (Orrenius et al., 2011). In addition, some cytotoxic substances (e.g. arsenic trioxide, As2O3) tend to induce apoptosis at low exposure levels, or early after exposure at high levels, whereas they cause necrosis later at high exposure levels. This implies that the severity of the insult determines the mode of cell death (Klaassen, 2013). In these cases, both apoptosis and necrosis involve the dysfunction of mitochondria, with a central role for the mitochondrial permeability transition (MPT). Normally, the mitochondrial membrane is impermeable to all solutes except the ones having specific transporters. MPT, caused by the opening of mitochondrial permeability transition pores (MPTP) in the inner mitochondrial membrane, allows the entry into the mitochondria of solutes with a molecular weight lower than 1500 Dalton. As these small-molecular-mass solutes equilibrate across the inner mitochondrial membrane, the mitochondrial membrane potential (ΔΨmt) vanishes (mitochondrial depolarization), leading to uncoupling of oxidative phosphorylation and subsequent adenosine triphosphate (ATP) depletion. Moreover, since proteins remain within the matrix at high concentration, the increasing colloidal osmotic pressure results in movement of water into the matrix, which causes swelling of the mitochondria and rupture of the outer membrane. This results in the loss of intermembrane components (like cytochrome c, AIF, HtrA2/Omi, SMAC/Diablo and endonuclease G) to the cytoplasm. When MPT occurs in a few mitochondria, the affected mitochondria are phagocytosed and the cell survives. When more mitochondria are affected, the release of pro-apoptotic compounds leads to caspase activation, resulting in apoptosis. When all mitochondria are affected, ATP becomes depleted and the cell eventually undergoes necrosis (Klaassen, 2013).
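The graded outcome described above amounts to a simple decision rule. The sketch below (in Python, used here only for illustration) makes that logic explicit; the function name, the 0.1 and 0.9 cut-offs and the ATP flag are hypothetical placeholders, not measured values from the cited literature.

def cell_fate(fraction_mpt, atp_depleted):
    # Illustrative decision rule for the outcome of mitochondrial
    # permeability transition (MPT); cut-offs are arbitrary placeholders.
    if atp_depleted or fraction_mpt >= 0.9:
        return "necrosis"    # apoptosis requires ATP, so it is no longer possible
    if fraction_mpt >= 0.1:
        return "apoptosis"   # released cytochrome c and co-factors trigger caspases
    return "survival"        # a few affected mitochondria are simply phagocytosed

print(cell_fate(0.05, False))  # survival
print(cell_fate(0.50, False))  # apoptosis
print(cell_fate(0.95, True))   # necrosis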
Berghe, T.V., Linkermann, A., Jouan-Lanhouet, S., Walczak, H., Vandenabeele, P. (2014). Regulated necrosis: the expanding network of non-apoptotic cell death pathways. Nature Reviews Molecular Cell Biology 15, 135. https://doi.org/10.1038/nrm3737
Blanco, G., Blanco, A. (2017). Chapter 32 - Apoptosis. In: Medical Biochemistry (pp. 791-796). Academic Press. https://doi.org/10.1016/B978-0-12-803550-4.00032-X
Galluzzi, L., Vitale, I., Aaronson, S.A., Abrams, J.M., Adam, D., Agostinis, P., ... Annicchiarico-Petruzzelli, M. (2018). Molecular mechanisms of cell death: recommendations of the Nomenclature Committee on Cell Death 2018. Cell Death & Differentiation, 1. https://doi.org/10.1038/s41418-017-0012-4
Klaassen, C.D., Casarett, L.J., Doull, J. (2013). Casarett and Doull's Toxicology: The Basic Science of Poisons (8th ed.). New York: McGraw-Hill Education / Medical. ISBN 978-0-07-176922-8.
Kumar, V., Abbas, A.K., Aster, J.C. Robbins and Cotran Pathologic Basis of Disease, professional edition. Elsevier Health Sciences. ISBN 978-0-323-26616-1.
Niesink, R.J.M., De Vries, J., Hollinger, M.A. Toxicology: Principles and Applications (1st ed.). CRC Press. ISBN 0-8493-9232-2.
Orrenius, S., Nicotera, P., Zhivotovsky, B. (2011). Cell death mechanisms and their implications in toxicology. Toxicological Sciences 119, 3-19. https://doi.org/10.1093/toxsci/kfq268
Rock, K.L., Kono, H. (2008). The inflammatory response to cell death. Annual Review of Pathology: Mechanisms of Disease 3, 99-126. https://doi.org/10.1146/annurev.pathmechdis.3.121806.151456
Silva, M.T. (2010). Secondary necrosis: the natural outcome of the complete apoptotic program. FEBS Letters 584, 4491-4499. https://doi.org/10.1016/j.febslet.2010.10.046
Toné, S., Sugimoto, K., Tanda, K., Suda, T., Uehira, K., Kanouchi, H., ... Earnshaw, W.C. (2007). Three distinct stages of apoptotic nuclear condensation revealed by time-lapse imaging, biochemical and electron microscopy analysis of cell-free apoptosis. Experimental Cell Research 313, 3635-3644. https://doi.org/10.1016/j.yexcr.2007.06.018

Which of the following is a characteristic of necrosis?
- It does not lead to inflammation

Which of the following is a characteristic of apoptosis?
- Membrane bleb formation
- Rapid loss of membrane integrity
- Swelling of mitochondria
- Cell shrinking

Which cellular organelles are involved in the initiation of the intrinsic pathway of apoptosis?
- Ribosomes
- Lysosomes
- Mitochondria
- Peroxisomes

Author: Jessica Legradi
Reviewers: Timo Hamers, Ellen Fritsche
Learning objectives: You should be able to
Keywords: Nervous system, Signal transmission, Pesticides, Drugs, Developmental Neurotoxicity

Neurotoxicity is defined as the capability of agents to cause adverse effects on the nervous system. Environmental neurotoxicity describes neurotoxicity caused by exposure to chemicals from the environment and mostly refers to human exposure and human neurotoxicity. Ecological neurotoxicity (eco-neurotoxicity) is defined as neurotoxicity resulting from exposure to environmental chemicals in species other than humans (e.g. fish, birds, invertebrates).

The nervous system consists of the central nervous system (CNS), including the brain and the spinal cord, and the peripheral nervous system (PNS). The PNS is divided into the somatic system (voluntary movements), the autonomic (sympathetic and parasympathetic) system and the enteric (gastrointestinal) system. The CNS and PNS are built from two types of nerve cells, i.e. neurons and glial cells. Neurons are cells that receive, process, and transmit information through electrical and chemical signals. Neurons consist of the soma with the surrounding dendrites and one axon with an axon terminal where the signal is transmitted to another cell. Compared to neurons, glial cells can have very different appearances, but they are always found in the tissue surrounding neurons, where they provide metabolites, support and protection to neurons without being directly involved in signal transmission. [Figure: structure of a neuron (left) and of glial cells (right)]

Neurons are connected to each other via synapses. The sending neuron is called the presynaptic neuron, whereas the receiving neuron is the postsynaptic neuron.
In the synapse, a small space exists between the axon terminal of the presynaptic neuron and a dendrite of the postsynaptic neuron. This space is called the synaptic cleft. Both neurons have ion channels in the area of the synapse that can be opened and closed. There are channels selective for chloride, sodium, calcium, potassium, or protons, as well as non-selective channels. The channels can be voltage gated (i.e. they open and close depending on the membrane potential), ligand gated (i.e. they open and close depending on the presence of other molecules binding to the ion channel), or stress activated (i.e. they open and close due to physical stress such as stretching). Ligands that can open or close ion channels are called neurotransmitters. Depending on the ion channel, and on whether it opens or closes upon neurotransmitter binding, a neurotransmitter can inhibit or stimulate membrane depolarization (i.e. act as an inhibitory or excitatory neurotransmitter, respectively). The ligands bind to the ion channel via receptors (link to section on Receptor interaction). Neurotransmitters have very distinct functions and are linked to physical processes like muscle contraction and body heat, and to emotional/cognitive processes like anxiety, pleasure, relaxation and learning. Signal transmission via the synapse (i.e. neurotransmission) depends on the membrane potential: at rest, the inside of the neuron is negatively charged relative to the outside, and the cell membrane is then at its resting potential. When a neuron is signalling, however, changes in ion inflow and outflow lead to a quick depolarization followed by a repolarization of the membrane potential, called an action potential.
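The direction in which an open ion channel pushes the membrane potential can be made explicit with the Nernst equation. The concentrations used below are textbook-typical values for a mammalian neuron, assumed here for illustration; they are not taken from this module.

\[ E_{\text{ion}} = \frac{RT}{zF} \ln \frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}} \]

At body temperature, \(RT/F \approx 26.7\) mV. For potassium (z = +1) with an assumed \([K^+]_{out}\) of 5 mM and \([K^+]_{in}\) of 140 mM, this gives \(E_K \approx 26.7 \ln(5/140) \approx -89\) mV, close to the resting potential. For chloride (z = -1) with an assumed \([Cl^-]_{out}\) of 120 mM and \([Cl^-]_{in}\) of 7 mM, \(E_{Cl} \approx -26.7 \ln(120/7) \approx -76\) mV. Opening chloride channels therefore pulls the membrane potential towards about -76 mV, away from the depolarization threshold, which is the hyperpolarizing mechanism behind inhibitory neurotransmitters such as the GABA discussed below.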
Neurons can be damaged by substances that damage the cell body (neuronopathy), the axon (axonopathy), or the myelin sheath or glial cells (myelinopathy). Aluminium, arsenic, methanol, methylmercury and lead can cause neuronopathy. Acrylamide is known to specifically affect axons and cause axonopathy.

Some of the modes of action relevant for neurotoxicity are disturbance of electrical signal transmission and inhibition of chemical signal transmission, mainly through interference with neurotransmitters. Pesticides are mostly designed to interfere with neurotransmission.

1. Interfering with ion channels (see section on Receptor interaction)
Pesticides such as DDT bind to open sodium channels in neurons, which prevents closing of the channels and leads to over-excitation. Pyrethroids, such as permethrin, increase the opening time of the sodium channels, leading to similar symptoms. Lindane, cyclodiene insecticides like aldrin, dieldrin and endrin ("drins"), and phenyl-pyrazols such as fipronil block GABA-mediated chloride channels and prevent hyperpolarization. GABA (gamma-aminobutyric acid) is an inhibitory neurotransmitter linked to relaxation and calming. It stimulates the opening of chloride channels, causing the transmembrane potential to become more negative (i.e. hyperpolarization), thereby increasing the depolarization threshold for a new action potential. Blockers of GABA-mediated chloride channels prevent the hyperpolarizing effect of GABA, thereby decreasing its inhibitory effect. Neonicotinoids (e.g. imidacloprid) mimic the action of the excitatory neurotransmitter acetylcholine (ACh) by activating the nicotinic acetylcholine receptors (nAChR) in the postsynaptic membrane. These compounds are specifically designed to display a high affinity for insect nAChR.

Many human drugs, like sedatives, also bind to neuro-receptors. Benzodiazepine drugs activate GABA receptors, thereby causing hyperpolarization. Tetrahydrocannabinol (THC), the active ingredient in cannabis, activates the cannabinoid receptors, also causing hyperpolarization. Compounds activating the GABA or cannabinoid receptors induce a strong feeling of relaxation. Nicotine binds to and activates the AChR, which can help to concentrate.

Another very common neurotoxic mode of action is the inhibition of acetylcholinesterase (AChE). Organophosphate insecticides like dichlorvos and carbamate insecticides like propoxur bind to AChE and hence prevent the degradation of acetylcholine in the synaptic cleft, leading to overexcitation of the post-synaptic cell membrane (see also section on Protein interaction).

MDMA (3,4-methylenedioxymethamphetamine, also known as ecstasy or XTC) and cocaine block the re-uptake of serotonin, norepinephrine and, to a lesser extent, dopamine into the pre-synaptic neuron, thereby increasing the amount of these neurotransmitters in the synaptic cleft. Amphetamines also increase the amount of dopamine in the cleft, by stimulating the release of dopamine from the vesicles. Dopamine is a neurotransmitter involved in pleasure and reward feelings. Serotonin, or 5-hydroxytryptamine, is a monoamine neurotransmitter linked to feelings of happiness, learning, reward and memory.

When receptors are continuously activated or when neurotransmitter levels are continuously elevated, the nervous system adapts by becoming less sensitive to the stimulus. This explains why drug addicts have to increase the dose taken to reach the desired state. If no stimulant is taken, withdrawal symptoms occur from the lack of stimulus. In most cases, the nervous system can recover from drug addiction.

Differences in species sensitivity can be explained by differences in metabolic capacities between species. Most compounds need to be bio-activated, i.e. biotransformed into a metabolite that causes the actual toxic effect. For example, most organophosphate insecticides are thio-phosphoesters that require oxidation before they can inhibit AChE. As detoxification is the dominant pathway in mammals and oxidation is the dominant pathway in invertebrates, organophosphate insecticides are typically more toxic to invertebrates than to vertebrates. Other factors important for species sensitivity are uptake and depuration rates.

Developmental neurotoxicity (DNT) particularly refers to the effects of toxicants on the developing nervous system of organisms. The developing brain and nervous system are considered more sensitive to toxic effects than the mature brain and nervous system. DNT studies must consider the temporal and regional occurrence of critical developmental processes of the nervous system, and the fact that early life exposure can lead to long-lasting neurotoxic effects or delays in neurological development. Species differences are also relevant for DNT. Here, developmental timing, speed, or cellular specificities might determine toxicity.

What are the two major cell types found in the nervous system?
What does GABA do?
How does AChE inhibition work?
What makes invertebrates more sensitive to organophosphate insecticides?
Why is DNT important to study?

Author: Nico M. van Straalen
Reviewers: Cornelia Kienle, Henk Schat
Learning objectives: You should be able to
Keywords: Amino acid inhibitor, growth regulator, photosynthesis inhibitor, pre-emergence application, selectivity

Herbicides are pesticides (see section on Crop protection products) that aim to kill unwanted weeds in agricultural systems, and weeds growing on infrastructure such as pavement and train tracks. Herbicides are also applied to the crop itself, e.g. as a pre-harvest treatment in crops like potato and oilseed rape, to prevent growth of pathogens on older plants or to ease mechanical harvest. Such applications are designated "desiccation". In a similar fashion, herbicides are used to destroy the grass of pastures in preparation of their conversion to cropland. Finally, herbicides are used to kill broad-leaved weeds in pure grass fields (e.g. golf courses).

Herbicides represent the largest volume of pesticides applied to date (about 60%), partly because mechanical and hand-executed weed control has declined considerably. The tendency to limit soil tillage (as a strategy to maintain a diverse and healthy soil life) has also stimulated the use of chemical herbicides.

Herbicides are obviously designed to kill plants and therefore act upon biochemical targets that are specific to plants. As the crop itself is also a plant, selectivity is a very important issue in herbicide application. This is achieved in several ways.

The diversity of chemical compounds that have been synthesized to attack specific biochemical targets in plants is enormous. In an attempt to classify herbicides by mode of action, a system of 22 different categories is often used (Sherwani et al., 2015). Here we present a simplified classification specifying only eight categories (Plant & Soil Sciences eLibrary, 2019; Table 1).

Table 1. Classification of herbicides by mode of action
1. Amino acid synthesis inhibitors: sulfonylureas, imidazolones, triazolopyrimidines, EPSP synthase inhibitors (example: glyphosate)
2. Seedling growth inhibitors: carbamothiates, acetamides, dinitroanilines (example: EPTC)
3. Growth regulators, interfering with plant hormones: phenoxy-acetic acids, benzoic acid, carboxylic acids, picolinic acids (example: 2,4-D)
4. Inhibitors of photosynthesis: triazines, uracils, phenylureas, benzothiadiazoles, nitriles, pyridazines (example: atrazine)
5. Lipid synthesis inhibitors: aryloxyphenoxypropionates, cyclohexanediones (example: sethoxydim)
6. Cell membrane disrupters: diphenylethers, aryl triazolinones, phenylphthalamides, bipyridilium compounds (example: paraquat)
7. Inhibitors of protective pigments: isoxazolidones, isoxazoles, pyridazinones (example: mesotrione)
8. Unknown: chemical compounds with proven herbicide efficacy but unknown mode of action (example: ethofumesate)

To illustrate the diversity of herbicidal modes of action, two well-investigated mechanisms are highlighted here.

Plants synthesize aromatic amino acids using the shikimate pathway. Bacteria and fungi also avail of this pathway, but it is not present in animals, which must obtain aromatic amino acids through their diet. The first step in this pathway is the conversion of shikimate-3-phosphate and phosphoenolpyruvate (PEP) to 5-enolpyruvylshikimate-3-phosphate (EPSP), by the enzyme EPSP synthase. EPSP is subsequently dephosphorylated and forms the substrate for the synthesis of aromatic amino acids such as phenylalanine, tyrosine and tryptophan. Glyphosate bears a structural resemblance to PEP and competes with PEP as a substrate for EPSP synthase. However, in contrast to PEP, it binds firmly to the active site of the enzyme and blocks its activity. The ensuing metabolic deficiency quickly leads to loss of growth potential of the plant.
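The competition between glyphosate and PEP can be pictured with the standard rate law for reversible competitive inhibition. Treating glyphosate as a simple competitive inhibitor is an idealization (its binding actually involves the enzyme-shikimate-3-phosphate complex), and the symbols below are generic, not parameters from the cited literature:

\[ v = \frac{V_{max}[S]}{K_m\left(1 + [I]/K_i\right) + [S]} \]

Here [S] is the PEP concentration and [I] the glyphosate concentration. The inhibitor multiplies the apparent \(K_m\) by the factor \(1 + [I]/K_i\); because glyphosate binds very tightly, its \(K_i\) is small, so even a modest glyphosate concentration inflates the apparent \(K_m\) many-fold and the flux through EPSP synthase collapses.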
Another well-investigated mode of herbicidal action is the inhibition of photosynthesis by atrazine and other symmetrical triazines. In contrast to glyphosate, atrazine can only act in aboveground plants with active photosynthesis; sunny weather stimulates the action of such herbicides. The action of atrazine is due to binding to the D1 quinone protein of the electron transport complex of photosystem II, which sits in the inner membrane of the chloroplast (see Giardi and Pace, 2005). Photosystem II (PSII) is a complex of macromolecules with light-harvesting and antenna units, chlorophyll P680, and reaction centers that capture light energy and use it to split water, produce oxygen and transfer electrons to photosystem I, which uses them to eventually produce reduction equivalents. The D1 quinone has a "herbicide binding pocket", and binding of atrazine to this site blocks the function of PSII. A single amino acid in the binding pocket is critical for this; alterations in this amino acid provide a relatively easy possibility for the plant to become resistant to triazines.

Most herbicides are polar compounds with good water solubility, which is a crucial property for them to be taken up by plants. This implies that herbicides, especially the more persistent ones, tend to leach to groundwater and surface water and are sometimes also found in drinking water resources. Given the large volumes applied in agriculture, concern has arisen that such compounds, despite being designed to affect only plants, might harm other, so-called "non-target" organisms.

In agricultural systems and their immediate surroundings, complete removal of weeds will reduce plant biodiversity, with secondary effects on plant-feeding insects and insectivorous birds. In the short term, however, herbicides will increase the amount of dead plant remains on the soil, which may benefit invertebrates that are less susceptible to the herbicidal effect, find shelter in plant litter and feed on dead organic matter. Studies show that there is often a positive effect of herbicides on Collembola, mites and other surface-active arthropods (e.g. Fratello et al., 1985). Other secondary effects may occur when herbicides reach field-bordering ditches, where suppression of macrophytes and algae can affect populations of macro-invertebrates such as gammarids and snails.

Direct toxicity to non-target organisms is expected from broad-spectrum herbicides that kill plants through a general mechanism of toxicity. This holds for paraquat, a bipyridilium herbicide (cf. Table 1) that acts as a contact agent and rapidly damages plant leaves by redox-cycling: enhanced by sunshine, it generates oxygen radicals that disrupt biological membranes. Paraquat is toxic to virtually all life and represents an acute hazard to humans. Consequently, its use as a herbicide has been forbidden in the EU since 2007.
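The redox-cycling mentioned here can be written out as a two-step catalytic cycle. This is the generally accepted chemistry of bipyridilium radical cations rather than a scheme given in this module:

\[ \text{PQ}^{2+} + e^{-} \rightarrow \text{PQ}^{\bullet +} \]
\[ \text{PQ}^{\bullet +} + \text{O}_2 \rightarrow \text{PQ}^{2+} + \text{O}_2^{\bullet -} \]

In illuminated leaves, photosystem I supplies the electrons, so a single paraquat (PQ) cation can cycle many times, converting a steady stream of molecular oxygen into superoxide and downstream reactive oxygen species that peroxidize membrane lipids; this also explains why the damage is enhanced by sunshine.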
In other cases the situation is more complex. Glyphosate, the herbicide with by far the largest application volume worldwide, is suspected of ecological side-effects and has even been labelled "a probable carcinogen" by the IARC (Tarazona et al., 2017). However, glyphosate is an active ingredient contained in various herbicide formulations, e.g. Roundup Ready, Roundup 360 plus, etc. Evidence indicates that most of the toxicity attributed to glyphosate is actually due to adjuvants in the formulation, specifically polyethoxylated tallowamines (Mesnage et al., 2013).

Another case of an unexpected side-effect of a herbicide is due to atrazine. In 2002, a group of American ecologists (Hayes et al., 2002) reported that the incidence of developmental abnormalities in wild frogs was correlated with the volume of atrazine sold in the area where the frogs were monitored, across a large number of sites in the U.S. Male Rana pipiens exposed to atrazine at concentrations higher than 0.1 µg/L during their larval stages showed an increased rate of feminization, i.e. the development of oocytes in the testis. This is thought to be due to induction of aromatase, a cytochrome P450 enzyme responsible for the conversion of testosterone to estradiol.

Finally, the development of resistance may also be considered an undesirable side-effect. There are currently 499 unique cases (255 plant species, combined with 167 active ingredients) of herbicide resistance, indicating the agronomical seriousness of this issue. A full discussion of this topic falls, however, beyond the scope of this module.

Herbicides are currently an indispensable, high-volume component of modern agriculture. They represent a very large number of chemical groups and different modes of action, often plant-specific. While some of the older herbicides (paraquat, atrazine, glyphosate) have raised concern regarding their adverse effects on non-plant targets, the development of new chemicals and the discovery of new biochemical targets in plant-specific metabolic pathways remains an active field of research.

Fratello, B. et al. (1985). Effects of atrazine on soil microarthropods in experimental maize fields. Pedobiologia 28, 161-168.
Giardi, M.T., Pace, E. (2005). Photosynthetic proteins for technological applications. Trends in Biotechnology 23, 257-263.
Hayes, T., Haston, K., Tsui, M., Hoang, A., Haeffele, C., Vonk, A. (2002). Feminization of male frogs in the wild. Nature 419, 895-896.
Mesnage, R., Bernay, B., Séralini, G.-E. (2013). Ethoxylated adjuvants of glyphosate-based herbicides are active principles of human cell toxicity. Toxicology 313, 122-128.
Plant & Soil Sciences eLibrary (2019). https://passel.unl.edu
Sherwani, S.I., Arif, I.A., Khan, H.A. (2015). Modes of action of different classes of herbicides. In: Price, J., Kelton, E., Suranaite, L. (Eds.), Herbicides: Physiological Action and Safety, Chapter 8. IntechOpen.
Tarazona, J.V., Court-Marques, D., Tiramani, M., Reich, H., Pfeil, R., Istace, F., Crivellente, F. (2017). Glyphosate toxicity and carcinogenicity: a review of the scientific basis of the European Union Assessment and its differences with IARC. Archives of Toxicology 91, 2723-2743.

Define what is meant by a pre-emergence herbicide and why this is useful in agronomy.
With a herbicide application in agriculture you want to kill unwanted plants among a crop that is in itself also a plant. How is this possible?
Enumerate the eight major modes of action of herbicides.
Can herbicides cause adverse effects on non-plant species?

Author: Timo Hamers
Reviewer: Frederik-Jan van Schooten
Learning objectives: You should be able to
Key words: Bioactivation; Mutation; Tumour promotion; Tumour progression; Ames test

Cancer is a collective name for multiple diseases sharing a common phenomenon: cell division has escaped control by growth-regulating processes.
The consequent autonomously growing cells are usually concentrated in a neoplasm (often referred to as a tumour), but can also be diffusely dispersed, for instance in the case of leukaemia or a mesothelioma. Benign tumours are neoplasms that are encapsulated and do not spread through the body, whereas malignant tumours cause metastasis, i.e. spreading of cancerous cells through the body, causing new neoplasms at distant sites. The term benign sounds friendlier than it actually is: benign tumours can be very damaging to organs that are limited in available space (e.g. the brain in the skull) or to organs that can be obstructed by the tumour (e.g. the gut system).

The process of developing cancer (carcinogenesis) is traditionally divided into three phases, i.e. initiation, promotion and progression. Chemical carcinogenesis means that a chemical substance is capable of stimulating one or more of these phases. Carcinogenic compounds are often named after the phase that they affect, i.e. initiators (also called mutagens), tumour promotors, and tumour progressors. It is important to realize that many substances and processes naturally occurring in the body can also stimulate the different phases: inflammation and exposure to sunlight may cause mutations, some endogenous hormones can act as very active promotors in hormone-sensitive cancers, and spontaneous mutations may stimulate the tumour progression phase.

Gene mutations (aka point mutations) are permanent changes in the order of the nucleotide base-pairs in the DNA. Based on what happens at the DNA level, point mutations can be divided into three types: the replacement of an original base-pair by another base-pair (base-pair substitution), the insertion of an extra base-pair, or the deletion of an original base-pair. In a coding part of the DNA, three adjacent nucleotides on a DNA strand (i.e. a triplet) form a codon that encodes an amino acid in the ultimate protein. Because insertions and deletions shift these triplet reading frames by one nucleotide to the left or to the right, respectively, these point mutations are also called frame-shift mutations.

Based on what happens at the level of the protein for which a gene encodes, point mutations can also be divided into three types. A missense mutation means that the mutated gene encodes a different protein than the wildtype gene; a nonsense mutation means that the mutation introduces a STOP codon that interrupts translation, resulting in a truncated protein; and a silent mutation means that the mutated gene still encodes exactly the same protein, despite the fact that the genetic code has been changed. Silent mutations are always base-pair substitutions, because the triplet structure of the DNA has not been damaged.

A very illustrative example of the difference between a base-pair substitution and a frameshift mutation at the level of protein expression is the following "wildtype" sentence, consisting of only three-letter words representing the triplets in the genomic DNA:

The fat cat ate the hot dog.

Imagine that the letter t in cat is replaced by an r due to a base-pair substitution. The sentence then reads:

The fat car ate the hot dog.

This sentence clearly has another meaning, i.e. it contains missense information.

Imagine now that the letter a in fat is replaced by an e due to a base-pair substitution. The sentence then reads:

The fet cat ate the hot dog.

This sentence clearly contains a spelling error (i.e. a mutation), but its meaning has not changed, i.e. it contains a silent mutation.

Imagine now that an additional letter m causes a frameshift in the word fat, due to an insertion. The sentence then reads:

The fma tca tat eth eho tdo.

This sentence clearly has another meaning, i.e. it contains missense information.

Similarly, leaving out the letter a in fat also causes a frameshift mutation, due to a deletion. The sentence then reads:

The ftc ata tet heh otd og.

Again, this sentence clearly has another meaning, i.e. it contains missense information.
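The effect of the three mutation types on a reading frame can be reproduced in a few lines of code. The sketch below (Python, chosen here for illustration) applies them to the sentence from the text instead of real codons; the helper names are hypothetical.

def triplets(seq):
    # Chop a sequence into consecutive three-letter "codons";
    # an incomplete fragment at the end is kept as-is.
    return [seq[i:i + 3] for i in range(0, len(seq), 3)]

wildtype = "thefatcatatethehotdog"

variants = {
    "wildtype": wildtype,
    "substitution": wildtype.replace("cat", "car"),  # base-pair substitution
    "insertion": wildtype[:4] + "m" + wildtype[4:],  # insertion: frameshift
    "deletion": wildtype[:4] + wildtype[5:],         # deletion: frameshift
}

for label, seq in variants.items():
    print(f"{label:12s}", " ".join(triplets(seq)))

# wildtype     the fat cat ate the hot dog
# substitution the fat car ate the hot dog   (one codon changed: missense)
# insertion    the fma tca tat eth eho tdo g (every downstream codon garbled)
# deletion     the ftc ata tet heh otd og    (frame shifted the other way)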
Base-pair substitutions are often caused by electrophilic substances, which tend to take up an electron, especially from the nucleophilic guanine base, which tends to donate an electron to form an electron pair. The consequent guanine addition product (adduct) forms a base-pair with thymine, causing a base-pair substitution from G-C to A-T. Alternatively, the guanine adduct may split off from the phosphate-sugar backbone of the DNA, leaving an "empty" nucleotide spot in the triplet that can be taken by any nucleotide during DNA replication. Base-pair substitutions may also be caused by reactive oxygen species (ROS), radical compounds that likewise take up an electron from guanine and form guanine oxidation products (for instance hydroxyl adducts). It should be realized that a DNA adduct can only cause an error in the order of nucleotides (i.e. a mutation) if it is present during DNA replication. Before a cell enters the DNA synthesis phase of the cell cycle, however, the DNA is thoroughly checked, and possible errors are repaired by DNA repair systems.

Exposure to directly mutagenic electrophilic agents rarely occurs, because these substances are so reactive that they immediately bind to proteins and DNA in our food and environment. Therefore, DNA damage by such substances in most cases originates from indirectly mutagenic compounds, which are activated into DNA-binding agents during Phase I of biotransformation. This process of bioactivation is a side-effect of biotransformation, which actually aims at rapid detoxification and elimination of toxic compounds.

Frame-shift mutations are often caused by intercalating agents. Unlike electrophilic agents and ROS, intercalating agents do not form covalent bonds with the DNA bases. Instead, owing to their planar structure, intercalating agents fit exactly between two adjacent nucleotides in the DNA helix. As a consequence, they hinder DNA replication, causing the insertion of an extra nucleotide or the deletion of an original nucleotide in the replicated DNA strand.

Ames test for mutagenicity

As stated above, non-mutagenic carcinogens are involved in stimulating tumour promotion. Tumour promoting substances stimulate cell proliferation and inhibit cell differentiation and apoptosis. Unlike mutagenic compounds, tumour promoting compounds do not interfere directly with DNA, and their effect is reversible. Many endogenous substances (e.g. hormones) may act as tumour promoting agents.

Tumour progression is the result of aberrant transcriptional activity arising from either genetic or epigenetic alterations. Genetic alterations can be caused by substances that damage the DNA (called genotoxic substances) and thereby introduce strand breaks and incorrect chromosomal division after mitosis. This results in the typically unstable chromosomal characteristics of a malignant tumour cell, i.e. a karyotype consisting of reduced and increased numbers of chromosomes (called aneuploidy and polyploidy, respectively) and damaged chromosomal structures (aberrations).
Chemical substances causing aneuploidy are called aneugens, and substances causing chromosomal aberrations are called clastogens. Genotoxic substances are very often also mutagenic compounds. Multiple mutations in so-called proto-oncogenes and tumour suppressor genes are necessary to transform a normal cell into a tumour cell. In a healthy cell, cell proliferation is under the control of proto-oncogenes, which stimulate cell proliferation, and tumour suppressor genes, which inhibit cell proliferation. In a cancer cell, the balance between proto-oncogenes and tumour suppressor genes is disturbed: proto-oncogenes act as oncogenes, meaning that they continuously stimulate cell proliferation due to mutations and polyploidy, whereas tumour suppressor genes have become inactive due to mutations and aneuploidy.

Epigenetic alterations are changes in the DNA, but not in its order of nucleotides. Typical epigenetic changes include changes in DNA methylation, histone modifications, and microRNA expression. Compounds that change the epigenome may stimulate tumour progression, for instance by stimulating the expression of oncogenes and inhibiting the expression of tumour suppressor genes. The role in tumour promotion and progression of substances that are capable of inducing epigenetic changes is a field of ongoing study.

What are the three characteristics of the different cancer development stages?
What is the difference between a direct and an indirect mutagenic substance?
Explain the principle of the Ames test.
What is the difference between a base-pair substitution and a frameshift mutation?

Author: Majorie van Duursen
Reviewers: Timo Hamers, Andreas Kortenkamp
Learning objectives: You should be able to
Keywords: Endocrine system; Endocrine Disrupting Chemical (EDC); DES; Thyroid hormone disruption; Multi- and transgenerational effects

The endocrine system plays an essential role in the short- and long-term regulation of a variety of biochemical and physiological processes, such as behaviour, reproduction, growth, nutritional aspects, gut, cardiovascular and kidney function, and the response to stress. As a consequence, chemicals that cause changes in hormone secretion or in hormone receptor activity may target many different organs and functions and may result in disorders of the endocrine system and adverse health effects. The nature and the size of endocrine effects caused by chemicals depend on the type of chemical, the level and duration of exposure, as well as on the timing of exposure.

The "DES drug disaster" is one of the most striking examples showing that endocrine-active chemicals can have severe adverse health effects in humans. There was a time when the synthetic estrogen diethylstilbestrol (DES) was considered a miracle drug. DES was prescribed from the 1940s to the 1970s to millions of women around the world to prevent miscarriage, abortion and premature labour. However, in the early 1970s it was found that daughters of mothers who took DES during their pregnancy have an increased risk of developing a specific vaginal and cervical cancer type. Other studies later demonstrated that women who had been exposed to DES in the womb (in utero) also had other health problems, like an increased risk of breast cancer, an increased incidence of genital malformations, infertility, miscarriages, and complicated pregnancies. Now, even two generations later, babies are born with reproductive tract malformations that are suspected to be caused by this drug their great-grandmothers took during pregnancy.
The effects of DES are attributed to the fact that it is a synthetic estrogen (i.e. a xenobiotic compound having similar properties to the natural estrogen 17β-estradiol), thereby disrupting normal endocrine regulation as well as epigenetic processes during development (link to section on Developmental Toxicity).

Around the same time as the DES drug disaster, Rachel Carson wrote a New York Times best-seller called Silent Spring. The book focused on the endocrine disruptive properties of persistent environmental contaminants, such as the insecticide DDT (dichloro-diphenyl-trichloroethane). She wrote that these environmental contaminants were poorly degradable in the environment and caused reproductive failure and population decline in a variety of wildlife. At the time the book was published, endocrine disruption was a controversial scientific theory that was met with much scepticism, as empirical evidence was largely lacking. Still, Rachel Carson's book encouraged scientific, societal and political discussions about endocrine disruption. In 1996, another popular scientific book was published that presented more scientific evidence to warn against the effects of endocrine disruption: Our Stolen Future: Are We Threatening Our Fertility, Intelligence, and Survival? A Scientific Detective Story by Theo Colborn, Dianne Dumanoski and John Peterson Myers.

Currently, endocrine disruption is a widely accepted concept, and many scientific studies have demonstrated a wide variety of adverse health effects that are attributed to exposure to endocrine-active compounds in our environment. Human epidemiological studies have shown dramatic increases in the incidence of hormone-related diseases, such as breast, ovarian, testicular and prostate cancer, endometrial diseases, infertility, decreased sperm quality, and metabolic diseases. Considering that hormones play a prominent role in the onset of these diseases, it is highly likely that exposure to endocrine disrupting compounds contributes to these increased disease incidences in humans. In wildlife, the effects of endocrine disruption include feminizing and demasculinizing effects leading to deviant sexual behaviour and reproductive failure in many species, such as fish, frogs, birds and panthers. A striking example of endocrine disruption can be found in the Lake Apopka alligator population. Lake Apopka is the third largest lake in the state of Florida, located a few kilometres northwest of Orlando. In July 1980, heavy rainfall caused the spill of huge amounts of DDT into the lake by a local pesticide manufacturer. After that, the alligator population in Lake Apopka started to show a dramatic decline. Upon closer examination, these alligators had higher estradiol and lower testosterone levels in their blood, causing poorly developed testes and extremely small penises in the male offspring and severely malformed ovaries in females.

Since the early discussions on endocrine disruption, the World Health Organization (WHO) has published several reports presenting the state of the art in scientific evidence on endocrine disruption, the associated adverse health effects and the underlying mechanisms. In 2002, the WHO proposed a definition for an endocrine disrupting compound (EDC) that is still being used. According to the WHO, an EDC can be defined as "an exogenous substance or mixture that alters function(s) of the endocrine system and consequently causes adverse health effects in an intact organism, or its progeny, or (sub) populations."
In 2012, the WHO stated that "EDCs have the capacity to interfere with tissue and organ development and function, and therefore they may alter susceptibility to different types of diseases throughout life. This is a global threat that needs to be resolved." The European Environment Agency concluded in 2012 that "chemically induced endocrine disruption likely affects human and wildlife endocrine health the world over." A recent report (Demeneix & Slama, 2019) that was commissioned by the European Parliament concluded that the lack of EDC consideration in regulatory procedures is "clearly detrimental for the environment, human health, society, sustainability and most probably for our economy".

Higher animals, including humans, have developed an endocrine system that allows them to regulate their internal environment. The endocrine system is interconnected with, and communicates bidirectionally with, the nervous and immune systems. The endocrine system consists of glands that secrete hormones, the hormones themselves, and the targets that respond to the hormones. Glands that secrete hormones include the pituitary, thyroid, adrenals, gonads and pancreas. There are three major classes of hormones: amino-acid-derived hormones (e.g. the thyroid hormones T3 and T4), peptide hormones (e.g. pancreatic hormones) and steroid hormones (e.g. testosterone and estradiol). Hormones elicit a wide variety of biological responses, which almost always start with the binding of a hormone to a receptor in its target tissue. This triggers a chain of intracellular events and eventually a physiological response. Understanding the chemical characteristics of a hormone and its function may help explain the mechanisms by which chemicals can interact with the endocrine system and subsequently cause adverse health effects.

Inherent to the complex nature of the endocrine system, endocrine disruption comes in many shapes and forms. It can occur at the receptor level (link to section on Receptor Interaction), but endocrine disruptors can also disturb the synthesis, metabolism or transport of hormones (locally or throughout the body), or display a combination of multiple mechanisms. For example, DDT can decrease testosterone levels via increased testosterone conversion by the aromatase enzyme, but it also acts as an anti-androgen by blocking the androgen receptor and as an estrogen by activating the estrogen receptor. PCBs (polychlorinated biphenyls) are well-characterized thyroid hormone disrupting chemicals. PCBs are industrial chemicals that were widely used in transformers until their ban in the 1970s, but, due to their persistence, PCBs can still be found ubiquitously in the environment and in human and wildlife blood and tissue samples (link to section on POPs). PCBs are known to interfere with the thyroid system via inhibition of thyroid hormone synthesis and/or increased thyroid hormone metabolism, inhibition of the binding of thyroid hormones to serum binding proteins, or blocking of the binding of thyroid hormones to thyroid hormone receptors. These thyroid disrupting effects can occur in different organs throughout the body (see Gilbert et al., 2012).

In the 16th century, the physician and alchemist Paracelsus phrased the toxicological paradigm: "Everything is a poison. Only the dose makes that something is not a poison" (link to section Concentration-response relationships, and to Introduction). Generally, this is understood as "the effect of the poison increases with the dose".
According to this paradigm, upon determining the exposure levels where the toxic response begins and ends, safety levels can be derived to protect humans, animals and their environment. However, the interpretation and practical implementation of this concept is challenged by issues that have arisen in modern-day toxicology, especially with EDCs, such as non-monotonic dose-response curves and the timing of exposure.

To establish a dose-response relationship, toxicological experiments are traditionally conducted in which adult animals are exposed to very high doses of a chemical. To determine a safe level, one determines the highest test dose at which no toxic effect is seen (the NOAEL, or no observed adverse effect level) and applies an additional "safety" or "uncertainty" factor of usually 100. This factor of 100 accounts for differences between experimental animals and humans, and for differences within the human population (see chapter 6 on Risk assessment). Exposures below the resulting safety level are generally considered safe. Over the past years, however, studies began to show biological effects of endocrine-active chemicals at extremely low concentrations, which were presumed to be safe and which are in the range of human exposure levels. There are several physiological explanations for this phenomenon. It is important to realize that endogenous hormone responses do not act in a linear, monotonic fashion (i.e. with the effect going in one direction only), as illustrated, for example, by the relation between thyroid hormone levels and IQ. There are feedback loops to regulate the endocrine system in case of over- or understimulation of a receptor, and there are clear tissue differences in receptor expression and sensitivity to hormonal actions. Moreover, hormones are messengers, designed to transfer a message across the body. They do this at extremely low concentrations, and small changes in hormone concentration can cause large changes in receptor occupancy and receptor activity, whereas at high concentrations the change in receptor occupancy is only minimal (see the worked example below). This means that the effects at high doses do not always predict the effects of EDCs at lower doses, and vice versa.
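Two small worked examples may make this concrete; all numbers are hypothetical round values chosen for illustration, not measured data. First, the classical derivation of a safe level from a NOAEL of, say, 10 mg per kg body weight per day:

\[ \text{safe level} = \frac{\text{NOAEL}}{100} = \frac{10\ \text{mg/kg/day}}{100} = 0.1\ \text{mg/kg/day} \]

Second, receptor occupancy. For a simple one-site receptor, the occupied fraction is \(\theta = [H]/([H] + K_d)\), with [H] the concentration of the hormone or hormone mimic. Assuming \(K_d = 1\) nM, a ten-fold increase of [H] from 0.01 to 0.1 nM raises \(\theta\) from about 1% to 9%, a nearly nine-fold change, whereas the same ten-fold step from 10 to 100 nM only moves \(\theta\) from 91% to 99%. Responses are thus most sensitive to concentration changes precisely in the low range where endogenous hormones, and EDCs, operate.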
It is becoming increasingly clear that not only the dose, but also the timing of exposure plays an important role in determining the health effects of EDCs. Multi-generational studies show that EDC exposure in utero can affect future generations. Studies on the grandsons and granddaughters whose mothers were exposed prenatally to DES are limited, as these grandchildren are just beginning to reach the age at which relevant health problems, such as fertility, can be studied. However, rodent studies with DES, bisphenol-A and DEHP show that perinatally exposed mothers have grandchildren with malformations of the reproductive tract, as well as an increased susceptibility to mammary tumours in female offspring and testicular cancer and poor semen quality in male offspring. Some studies even show effects in the great-grandchildren (F3 generation), which indicates that endocrine disrupting effects can be passed on to next generations without direct exposure of these generations. These are called trans-generational effects. Long-term, delayed effects of EDCs are thought to arise from epigenetic modifications in (germ) cells and can be irreversible and transgenerational (link to section on Developmental Toxicity). Consequently, safe levels of EDC exposure may vary, depending on the timing of exposure. Adult exposure to EDCs is often considered activational: e.g. an estrogen-like compound such as DES can stimulate the proliferation of estrogen-sensitive breast cells in an adult, leading to breast cancer. When exposure to EDCs occurs during development, the effects are considered organizational: e.g. DES changes germ cell development of perinatally exposed mothers and subsequently leads to genital tract malformations in their grandchildren. Multi-generational effects are clear in rodent studies, but not so clear in humans. This is because it is difficult to characterize EDC exposure in previous generations (which may span over 100 years in humans), and it is challenging to filter out the effect of one specific EDC, as humans are exposed to a myriad of chemicals throughout their lives.

Some well-known examples of EDCs are pesticides (e.g. DDT), plastic softeners (e.g. phthalates, like DEHP), plastic precursors (e.g. bisphenol-A), industrial chemicals (e.g. PCBs), water- and stain-repellents (perfluorinated substances such as PFOS and PFOA) and synthetic hormones (e.g. DES). Exposure to EDCs can occur via air, house dust, leaching into food and feed, and waste- and drinking water. Exposure is often unintentional and at low concentrations, except for hormonal drugs. Clearly, synthetic hormones can also have beneficial effects. Hormonal cancers like breast and prostate cancer can be treated with synthetic hormones. And think about the contraceptive pill, which has changed the lives of many women around the world since the 1960s. Nowadays, no other method is so widely employed in so many countries around the world as the birth control pill, with an estimated 75 million users among reproductive-age women with a partner. An unfortunate side effect of this is the increase in hormonal drug levels in our environment, leading to feminization of male fish swimming in polluted waters. Pharmaceutical hormones, along with naturally produced hormones, are excreted by women and men, and these are not fully removed by conventional wastewater treatment. In addition, several pharmaceuticals that are not considered to act via the endocrine system can in fact display endocrine activity and cause reproductive failure in fish, for example the beta-blocker atenolol, the antidiabetic drug metformin and the analgesic paracetamol.

WHO-IPCS (2002). Global Assessment of the State-of-the-Science of Endocrine Disruptors. https://www.who.int/ipcs/publications/new_issues/endocrine_disruptors/en/
WHO-UNEP (2012). State of the Science of Endocrine Disrupting Chemicals. https://www.who.int/iris/bitstream/10665/78101/1/9789241505031_eng.pdf
European Environment Agency (EEA) (2012). The impacts of endocrine disrupters on wildlife, people and their environments. The Weybridge+15 report. EEA Technical report No 2/2012, EEA, Copenhagen. ISSN 1725-2237. https://www.eea.europa.eu/publications/the-impacts-of-endocrine-disrupters
Demeneix, B., Slama, R. (2019). Endocrine Disruptors: from Scientific Evidence to Human Health Protection. Policy Department for Citizens' Rights and Constitutional Affairs, Directorate General for Internal Policies of the Union, PE 608.866. http://www.europarl.europa.eu/RegData/etudes/STUD/2019/608866/IPOL_STU608866_EN.pdf
What are possible mechanisms to reduce the action of a certain hormone?
Give three target sites for thyroid hormone disruption and name the biological process at each target site that can be affected by an EDC.
What mechanism can cause demasculinization of male alligators by DDT?
Why is timing of exposure important when assessing the risk of EDC exposure?

Authors: Jessica Legradi, Marijke de Cock
Reviewer: Paul Fowler
Learning objectives: You should be able to
Keywords: Teratogenicity, developmental toxicity, DOHaD, epigenetics, transgenerational

Developmental toxicity refers to any adverse effect, caused by environmental factors, that interferes with homeostasis, normal growth, differentiation, or development before conception (in either parent), during prenatal development, or postnatally until puberty. The effects can be reversible or irreversible. Environmental factors that can have an impact on development are lifestyle factors like alcohol, diet, smoking and drugs, environmental contaminants, and physical factors. Anything that can disturb the development of the embryo or foetus and produce a malformation is called a teratogen. Teratogens can terminate a pregnancy or produce adverse effects called congenital malformations (birth defects, anomalies). A malformation refers to any effect on the structural development of a foetus (e.g. delay, misdirection or arrest of developmental processes). Malformations occur mostly early in development and are permanent. Malformations should not be confused with deformations, which are mostly temporary effects caused by mechanical forces (e.g. moulding of the head after birth). One teratogen can induce several different malformations, and all malformations caused by one teratogen together are called a syndrome (e.g. the foetal alcohol syndrome).

In 1959, James G. Wilson published the six principles of teratology. To this day, these principles are seen as the basis of developmental toxicology. The principles include:

Species differences: different species can react with different sensitivities to the same teratogen. For example, thalidomide (Softenon), a drug used to treat morning sickness in pregnant women, causes severe limb malformations in humans, whereas such effects were not seen in rats and mice.
Strain and intra-litter differences: the genetic background of individuals within one species can cause differences in the response to a teratogen.
Interaction of genome and environment: organisms of the same genetic background can react differently to a teratogen in different environments.
Multifactorial causation: the summary of the above; the severity of a malformation depends on the interplay of several genes (inter- and intra-species) and several environmental factors.

During development there are periods in which the foetus is specifically sensitive to a certain malformation. In general, the very early (embryonic) development is more susceptible to teratogenic effects. Every teratogenic agent produces a distinctive malformation pattern.
One example is the foetal alcohol syndrome, which comprises abnormal appearance, short height, low body weight, small head size, poor coordination, low intelligence, behaviour problems, problems with hearing or seeing, and very characteristic facial features (an increased distance between the eyes).

Teratogens can be radiation, infections or chemicals, including drugs. The teratogenic effect depends on the concentration of the teratogen that reaches the embryo. This concentration is influenced by maternal absorption, metabolism and elimination, and by the time the agent needs to reach the embryo, all of which can differ greatly between teratogens. Strong radiation, for example, is a strong teratogen because it easily reaches all tissues of the embryo. This also means that a compound found to be teratogenic in an in vitro test with embryos in a tube might not be teratogenic to an embryo in the uterus of a human or mouse, as the teratogen may not reach the embryo at a critical concentration.

A teratogen can cause minor effects like functional deficits (e.g. reduced IQ) or growth retardation, or adverse effects like malformations or death. Depending on the timing of exposure and the degree of genetic sensitivity, an embryo will have a greater or lesser risk of death or malformations. Very early in development, during the first cell divisions, an embryo is more likely to die than to be implanted and develop further. The number and the severity of effects increase with the concentration of the teratogen, and there is a threshold concentration below which no teratogenic effects occur (no-effect concentration).

The concept of Developmental Origins of Health and Disease (DOHaD) describes how environmental factors early in life contribute to health and disease later in life. The basis of this concept was the Barker hypothesis, which was formulated as an explanation for the rise in cardiovascular disease (CVD) related mortality in the United Kingdom between 1900 and 1950. Barker and colleagues observed that the prevalence of CVD and stroke was correlated with neo- and post-natal mortality. This led them to formulate the hypothesis that poor nutrition early in life leads to an increased risk of cardiovascular disease and stroke later in life. Later, this was developed into the thrifty phenotype hypothesis, stating that poor nutrition in utero programs adaptive mechanisms that allow the organism to deal with nutrient-poor conditions in later life, but that may also result in greater susceptibility to the metabolic syndrome. This thrifty phenotype hypothesis was finally developed into the DOHaD theory.

The effect of early-life nutrition on adult health is clearly illustrated by the Dutch Famine Birth Cohort Study. In this cohort, women and men who were born during or just after the Dutch famine were studied retrospectively. The Dutch famine took place in the Western part of the German-occupied Netherlands in the winter of 1944-1945. Its three-month duration creates the possibility to study the effect of poor nutrition during each individual trimester of pregnancy. Effects on birth weight, for example, may be expected if caloric intake during pregnancy is restricted. This was, however, only the case when the famine occurred during the second or the third trimester. Higher glucose and insulin levels in adulthood were only seen in those exposed in the third trimester, whereas those exposed during the second trimester showed a higher prevalence of obstructive airways disease.
These effects were not observed for the other trimesters, which can be explained by the timing of the caloric restriction during pregnancy: in normal pregnancy, the pancreatic islets develop during the third trimester, while during the second trimester the number of lung cells is known to double.

The DOHaD concept does not merely focus on early-life nutrition, but includes all kinds of environmental stressors during the developmental period that may contribute to adult disease, including exposure to chemical compounds. Chemicals may elicit effects such as endocrine disruption or neurotoxicity, which can lead to permanent morphological and physiological changes when they occur early in life. Well-known examples of such chemicals are diethylstilbestrol (DES) and dichlorodiphenyltrichloroethane (DDT). DES was an estrogenic drug given to women between 1940 and 1970 to prevent miscarriage. It was withdrawn from the market in 1971 because of carcinogenic effects as well as an increased risk of infertility in children who were exposed in utero (link to section on Endocrine disruption). DDT is a pesticide that has been banned in most countries but is still used in some for malaria control. Several studies, including a pooled analysis of seven European cohorts, found associations between in utero DDT exposure levels and infant growth and obesity.

The ubiquitous presence of chemicals in the environment makes it extremely relevant to study health effects in humans, but also makes this very challenging, as virtually no perfect control group exists. This emphasizes the importance of prevention, which is the key message of the DOHaD concept. Adult lifestyle and the corresponding exposure to toxic compounds remain important modifiable factors for both treatment and prevention of disease. However, as developmental plasticity, and therefore the potential for change, is highest early in life, it is important to focus on exposure in the early phases: during pregnancy, infancy, childhood and adolescence. This is reflected by regulators frequently imposing lower tolerable exposure levels for infants compared to adults.

For some compounds, in utero exposure is known to cause effects later in life (see DOHaD) or even to induce effects in the offspring or grand-offspring of the exposed embryo (transgenerational effects). Diethylstilbestrol (DES) is a compound for which transgenerational effects have been reported. DES was given to pregnant women to reduce the risk of spontaneous abortion and other pregnancy complications. Women who took DES during pregnancy have a slightly increased risk of breast cancer. Daughters exposed in utero, on the other hand, had a high tendency to develop rare vaginal tumours. In the third generation, higher incidences of infertility and ovarian cancer and an increased risk of birth defects were observed. However, the data available for the third generation are limited and therefore provide only limited evidence so far. Another compound suspected of causing transgenerational effects is the fungicide vinclozolin, an anti-androgenic endocrine disrupting chemical. It has been shown that exposure to vinclozolin leads to transgenerational effects on testis function in mice.

Transgenerational effects can be induced via genetic alterations (mutations) in the DNA, whereby the order of nucleotides in the genome of the parental gametocyte is altered and this alteration is inherited by the offspring. Alternatively, transgenerational effects can be induced via epigenetic alterations.
Epigenetics is defined as the study of changes in gene expression that occur without changes in the DNA sequence, and which are heritable in the progeny of cells or organisms. Epigenetic changes occur naturally but can also be influenced by lifestyle factors, diseases or environmental contaminants. Epigenetic alterations are a special form of developmental toxicology, as the effects might not cause immediate teratogenic malformations. Instead, the effects may become visible only later in life or in subsequent generations. It is assumed that compounds can induce epigenetic changes and thereby cause transgenerational effects. For DES and vinclozolin epigenetic changes have been reported in mice, and these might explain the transgenerational changes seen in humans. Two main epigenetic mechanisms are generally described as being responsible for transgenerational effects, i.e. DNA methylation and histone modifications.

DNA methylation is the most studied epigenetic modification and describes the methylation of cytosine nucleotides in the genome by DNA methyltransferases (DNMTs). Gene activity generally depends on the degree of methylation of the promoter region: if the promoter is methylated, the gene is usually repressed. One peculiarity of DNA methylation is that it can be wiped and replaced during epigenetic reprogramming events to set up cell- and tissue-specific gene expression patterns. Epigenetic reprogramming occurs very early in development. During this phase epigenetic marks, like methylation marks, are erased and remodelled. Epigenetic reprogramming is necessary because maternal and paternal genomes are differentially marked and must be reprogrammed to assure proper development.

Within the chromosome the DNA is densely packed around histone proteins. Gene transcription can only take place if the DNA packaging around the histones is loosened. Several histone modification processes are involved in loosening this packaging, such as acetylation, methylation, phosphorylation or ubiquitination of the histone molecules. The histone molecules are arranged in such a way that their amino acid tails point out of the package; these tails can be altered, for example via acetylation. If the tails are acetylated, the DNA is packed less tightly and genes can be transcribed; if the tails are not acetylated, the DNA is packed very tightly and gene transcription is hampered.

Barker, D.J., Osmond, C. (1986). Infant mortality, childhood nutrition, and ischaemic heart disease in England and Wales. Lancet 1, 1077-1081.

Describe the difference between malformation and deformation.
What factors can influence the susceptibility of an organism to a teratogen?
Describe how DOHaD differs from the Barker hypothesis.
What are the two epigenetic mechanisms mostly responsible for transgenerational effects?
Name a teratogenic compound.

Author: Nico van den Brink
Reviewer: Manuel E. Ortiz-Santaliestra
Learning objectives:
You should be able to:
Keywords: Immune toxicology, pathogens, innate and adaptive immune system, Lyme disease

The immune system of organisms is very complex, with different cells and other components interacting with each other. Its function is to protect the organism from pathogens and infections. It consists of an innate part, which is active from birth, and an acquired part, which adapts to exposure to pathogens.
The immune system may include different components depending on the species. The main organs involved in the immune system of mammals are the spleen, thymus, bone marrow and lymph nodes. In birds, besides all of the above, there is also the bursa of Fabricius. These organs all play specific roles in the immune defence: the spleen synthesises antibodies and plays an important role in the dynamics of monocytes; the thymus is the organ where T-cells develop; and in the bone marrow lymphoid cells are produced, which are transported to other tissues for further development. The bursa of Fabricius is specific to birds and is essential for B-cell development. Blood is an important tissue to consider because of its role in transporting cells.

The innate system generally provides the first response to infections and pathogens; however, it is not very specific. It consists of several cell types with different functions, like macrophages, neutrophils and mast cells. Macrophages and neutrophils may act against pathogens by phagocytosis (engulfment in cellular lysosomes and destruction of the pathogen). Neutrophils are relatively short-lived and act fast; they can produce a respiratory burst to destroy the pathogen/microbe, which involves a rapid production of Reactive Oxygen Species (ROS). Macrophages generally have a longer life span; they react more slowly but in a more prolonged fashion, and attack mainly via production of nitric oxide and less via ROS. Macrophages produce cytokines to communicate with other members of the immune system, especially cell types of the acquired system. Other members of the innate immune system are mast cells, which can excrete e.g. histamine upon detection of antigens.

Cells of the acquired, or adaptive, immune system mount responses that are more specific to the immune insult, and are therefore generally more effective. Lymphocytes are the cells of the adaptive immune system; they can be classified into B-lymphocytes and T-lymphocytes. B-lymphocytes produce antibodies which can serve as cell-surface antigen receptors, essential in the recognition of e.g. microbes. B-lymphocytes facilitate humoral (extracellular) immune responses against extracellular microbes (in the respiratory and gastrointestinal tracts and in the blood/lymph circulation). Upon recognition of an antigen, B-lymphocytes produce specific antibodies which bind to that antigen. This may decrease the infectivity of pathogens (e.g. microbes, viruses) directly, but also marks them for recognition by phagocytic cells. T-lymphocytes are active against intracellular pathogens and microbes: once inside cells, pathogens are out of reach of the B-lymphocytes. T-lymphocytes may activate macrophages or neutrophils to destroy phagocytosed pathogens, or even destroy infected cells. Both B- and T-lymphocytes are capable of producing an extreme diversity of clones, specific for antigen recognition.

Communication between the different immune cells occurs via the production of e.g. cytokines, including interleukins (ILs), chemokines, interferons (IFs) and Tumour Necrosis Factors (TNFs). Cytokines and TNFs are related to specific responses in the immune system; for instance, IL-6 is involved in activating B-cells to produce immunoglobulins, while TNF-α is involved in the early onset of inflammation and is therefore one of the cytokines inducing acute immune responses. Inflammation is a generic response to pathogens mounted by cells of the innate part of the immune system.
It generally results in increased temperature and swelling of the affected tissue, caused by the infiltration of the tissue by leukocytes and other cells of the innate system. A proper acute inflammatory response is not only essential as a first defence but will also facilitate the activation of the adaptive immune system. Communication between immune cells, via cytokines, not only directs cells to the place of infection but also activates, for instance, cells of the acquired immune system. This is a very short and non-exhaustive description of the immune system; for more details on its functioning see for instance Abbas et al. (2017).

Chemicals may affect the immune system in different ways. Exposure to lead, for instance, may result in immune suppression in waterfowl and raptors (Fairbrother et al., 2004; Vallverdú-Coll et al., 2019). Decreasing spleen weights, lower numbers of white blood cells and a reduced ability to mount a humoral response against a specific antigen (e.g. sheep red blood cells) indicated a lower potential of exposed birds to mount proper immune responses upon infection. Exposure to mercury resulted in decreased proliferation of B-cells in zebra finches (Taeniopygia guttata), affecting the acquired part of the immune system (Lewis et al., 2013). However, augmentation of the immune system upon exposure to e.g. cadmium has also been reported, for instance in small mammals, indicating an enhancement of the immune response (Demenesku et al., 2014). Both immune suppression and immune enhancement may have negative impacts on the organisms involved: the former may decrease the ability of the organism to deal with pathogens or other infections, while immune enhancement may increase the energy demands of the organism and may also result in, for instance, hypersensitivity or even auto-immunity.

Chemicals may affect immune cells via toxicity to mechanisms that are not specific to the immune system. Since many different cell types are involved in the immune system, the sensitivity to these modes of toxicity may vary considerably among cells and among chemicals. This implies that the immune system as a whole may inherently include cells that are sensitive to different chemicals, and as such may be quite sensitive to a range of toxicants. For instance, induction of apoptosis (programmed cell death) is essential to clear the activated cells involved in an immune response once the infection is minimised and the system is returning to a state of homeostasis. Chemicals may induce apoptosis and thus interfere with the kinetics of adaptive immune responses, potentially reducing the longevity of cells.

Toxic effects on mechanisms specific to the immune system may be related to its functioning. The production of ROS and nitric oxide is the effector pathway along which neutrophils and macrophages of the innate system combat pathogens (the oxidative burst). Impacts on the oxidative status of these cells may therefore not only result in general toxicity, potentially affecting a range of cell types, but may also specifically affect the responsiveness of the (innate) immune system. For instance, cadmium has a high affinity for glutathione (GSH), a prominent anti-oxidant in cells, and has been shown to affect acute immune responses in the thymus and spleen of mice via this mechanism (Pathak and Khandelwal, 2007).
A decrease of GSH caused by the binding of chemicals (like cadmium) may modulate macrophages towards a pro-inflammatory response through changes in the redox status of the cells involved, changing not only their activities against pathogens but potentially also their production and release of cytokines (Dong et al., 1998).

GSH is also involved in the modulation of the acquired immune system by affecting so-called antigen-presenting cells (APCs, e.g. dendritic cells). APCs capture microbial antigens that enter the body, transport them to specific immune-active tissues (e.g. lymph nodes) and present them to naive T-lymphocytes, inducing a proper immune response via so-called T-helper cells. T-helper cells include subsets, e.g. T-helper 1 cells (Th1-cells) and T-helper 2 cells (Th2-cells). Th1 responses are important in the defence against intracellular infections, by activating macrophages to ingest microbes. Th2 responses may be initiated by infections with organisms too large to be phagocytosed, and may also be mediated by e.g. allergens. As mentioned, GSH depletion may result in changes in cytokine production by APCs (Dong et al., 1998), generally affecting the release of Th1-response promoting cytokines. Exposure to chemicals interfering with GSH kinetics may therefore result in an imbalance between Th1 and Th2 responses and as such affect the responsiveness of the immune system. Cadmium and other metals have a high affinity for GSH and may therefore reduce Th1 responses, while, in contrast, GSH-promoting chemicals may reduce the organism's ability to initiate Th2 responses (Pathak and Khandelwal, 2008).

The overview presented here of the potential effects of chemicals on the immune system is far from exhaustive. Matters are further complicated because effects may be contextual, meaning that chemicals may have different impacts depending on the situation an organism is in. For instance, the magnitude of immunotoxic effects may depend on the general condition of the organism, and hence some infected animals may show effects from chemical exposure while others may not. Impacts may also differ between types of infection (e.g. Th1- versus Th2-responsive infections). This, together with the complex and dynamic composition of the immune system, limits the development of general dose-response relationships and hazard predictions for chemicals. Furthermore, most of the research on effects of chemicals on the immune system is focussed on humans, based on studies on rats and mice. Little is known about differences among species, especially non-mammalian species, which may have completely differently structured immune systems. Some studies on wildlife have shown effects of trace metals on small mammals (Tersago et al., 2004; Rogival et al., 2006; Tête et al., 2015) and of lead on birds (Vallverdú-Coll et al., 2015), but specific modes of action are still to be resolved under field conditions. Research on immunotoxicity in wildlife is nevertheless essential, not only from a conservation point of view (to protect the organisms and species involved) but also from the perspective of human health. Wildlife plays an important role in the kinetics of zoonotic diseases: for instance, small mammals are the prime reservoir for Borrelia spirochetes, the causative pathogens of Lyme disease, while migrating waterfowl are thought to drive the spread of e.g. avian influenza.
The role of wildlife in the environmental spread of zoonotic diseases is therefore evident, and it may be seriously affected by chemical-induced alterations of their immune systems.

Abbas, A.K., Lichtman, A.H., Pillai, S. (2017). Cellular and Molecular Immunology. 9th Edition. Elsevier, Philadelphia, USA. ISBN 978-0-323-52324-0.
Demenesku, J., Mirkov, I., Ninkov, M., Popov Aleksandrov, A., Zolotarevski, L., Kataranovski, D., Kataranovski, M. (2014). Acute cadmium administration to rats exerts both immunosuppressive and proinflammatory effects in spleen. Toxicology 326, 96-108.
Dong, W., Simeonova, P.P., Gallucci, R., Matheson, J., Flood, L., Wang, S., Hubbs, A., Luster, M.I. (1998). Toxic metals stimulate inflammatory cytokines in hepatocytes through oxidative stress mechanisms. Toxicology and Applied Pharmacology 151, 359-366.
Fairbrother, A., Smits, J., Grasman, K.A. (2004). Avian immunotoxicology. Journal of Toxicology and Environmental Health, Part B 7, 105-137.
Galloway, T., Handy, R. (2003). Immunotoxicity of organophosphorous pesticides. Ecotoxicology 12, 345-363.
Lewis, C.A., Cristol, D.A., Swaddle, J.P., Varian-Ramos, C.W., Zwollo, P. (2013). Decreased immune response in Zebra Finches exposed to sublethal doses of mercury. Archives of Environmental Contamination and Toxicology 64, 327-336.
Pathak, N., Khandelwal, S. (2007). Role of oxidative stress and apoptosis in cadmium induced thymic atrophy and splenomegaly in mice. Toxicology Letters 169, 95-108.
Pathak, N., Khandelwal, S. (2008). Impact of cadmium in T lymphocyte subsets and cytokine expression: differential regulation by oxidative stress and apoptosis. Biometals 21, 179-187.
Rogival, D., Scheirs, J., De Coen, W., Verhagen, R., Blust, R. (2006). Metal blood levels and hematological characteristics in wood mice (Apodemus sylvaticus L.) along a metal pollution gradient. Environmental Toxicology and Chemistry 25, 149-157.
Tersago, K., De Coen, W., Scheirs, J., Vermeulen, K., Blust, R., Van Bockstaele, D., Verhagen, R. (2004). Immunotoxicology in wood mice along a heavy metal pollution gradient. Environmental Pollution 132, 385-394.
Tête, N., Afonso, E., Bouguerra, G., Scheifler, R. (2015). Blood parameters as biomarkers of cadmium and lead exposure and effects in wild wood mice (Apodemus sylvaticus) living along a pollution gradient. Chemosphere 138, 940-946.
Vallverdú-Coll, N., López-Antia, A., Martinez-Haro, M., Ortiz-Santaliestra, M.E., Mateo, R. (2015). Altered immune response in mallard ducklings exposed to lead through maternal transfer in the wild. Environmental Pollution 205, 350-356.
Vallverdú-Coll, N., Mateo, R., Mougeot, F., Ortiz-Santaliestra, M.E. (2019). Immunotoxic effects of lead on birds. Science of the Total Environment 689, 505-515.

Name two general reasons why immunomodulation in organisms may be very sensitive to exposure to environmental chemicals.
Why is it that immunomodulatory chemicals to which humans are not exposed may still have an impact on human health?

Author: Nico M. van Straalen
Reviewers: Philip S. Rainbow, Henk Schat
Learning objectives:
You should be able to:
Keywords: Reactive oxygen species, protein binding, DNA binding, ion pumps

Toxicity of metals on the biochemical level is due to a wide variety of mechanisms, which may be classified as follows, although the categories are not mutually exclusive: 1) generation of reactive oxygen species (Fe, Cu); 2) binding to nucleophilic groups in proteins (Cd, Pb); 3) binding to DNA (Cr, Cd); 4) binding to ion channels or membrane pumps (Pb, Cd); 5) interaction with the function of essential cellular moieties such as phosphate, sulfhydryl groups, iron or calcium (As, Cd, Al, Pb). In addition, these mechanisms may act simultaneously and interact with each other. There are interesting species patterns of susceptibility to metals: e.g. mammals are hardly susceptible to zinc, while plants and crustaceans are; earthworms, gastropods and fungi are quite sensitive to copper, while terrestrial vertebrates are not. In this section we discuss five different categories of metal toxicity as well as some patterns of species differences in sensitivity to metals.

Reactive oxygen species (ROS) are activated forms of oxygen that have one or more unpaired electrons in the outer orbit. The best known are superoxide anion (\(O_2^{\bullet-}\)), singlet oxygen (\(^1\Delta_g O_2\)), hydrogen peroxide (\(H_2O_2\)) and the hydroxyl radical (\(OH^{\bullet}\)) (see the section on Oxidative stress). Transition metals, in particular iron and copper, are effective catalyzers of the formation of reactive oxygen species; this relates to their capacity to engage in redox reactions with transfer of one electron. One of the most famous reactions is the so-called Fenton reaction, catalyzed by reduced iron and copper ions:

\(Fe^{2+} + H_2O_2 \rightarrow Fe^{3+} + OH^{\bullet} + OH^{-}\)

\(Cu^{+} + H_2O_2 \rightarrow Cu^{2+} + OH^{\bullet} + OH^{-}\)

Both reactions produce the highly reactive hydroxyl radical (\(OH^{\bullet}\)), which may trigger severe cellular damage by peroxidation of membrane lipids (see the section on Oxidative stress). Very low concentrations of metal ions can keep this reaction running, because the reduced forms of the metal ions are restored by a second reaction with hydrogen peroxide:

\(Fe^{3+} + H_2O_2 \rightarrow Fe^{2+} + O_2^{\bullet-} + 2H^{+}\)

\(Cu^{2+} + H_2O_2 \rightarrow Cu^{+} + O_2^{\bullet-} + 2H^{+}\)

The overall reaction is a metal-catalyzed degradation of hydrogen peroxide, with superoxide anion and hydroxyl radical as intermediates. Oxidative stress is one of the most important mechanisms of toxicity of metals. This can also be deduced from the metal-induced transcriptome: gene expression profiling has shown that it is not uncommon that more than 10% of the genome responds to sublethal concentrations of cadmium.
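Summing the two steps of the catalytic cycle shown above makes the overall reaction explicit; this is a worked sum of the equations given above, with the hydroxide ion of the first step neutralized by one of the protons released in the second:

\[2\,H_2O_2 \;\xrightarrow{\;Fe\;\text{or}\;Cu\;}\; OH^{\bullet} + O_2^{\bullet-} + H^{+} + H_2O\]

The metal ion cancels out of the sum, which is why trace amounts of iron or copper suffice to sustain ROS production.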
Several metals have a great affinity for sulfhydryl (-SH) groups in the cysteine residues of proteins. Binding to such groups may distort the secondary structure of a protein at sites where SH-groups coordinate to form S-S bridges. The SH-group is a typical example of a nucleophile, that is, a group that easily donates an electron pair to form a chemical bond. The group that accepts the electron pair is called an electrophile. Another amino acid in a protein to which metals are preferentially bound is the imidazole side-chain of histidine. This heterocyclic aromatic group with two nitrogen atoms easily engages in chemical bonds with metal ions. In fact, histidine residues are often used in metalloproteins to coordinate metals at the active site, and histidine is used to transport metals from the roots upwards through the xylem vessels of plants.

A classical case of metal-protein interaction with subsequent toxicity is that of lead binding to δ-aminolevulinic acid dehydratase (δ-ALAD). This is an enzyme involved in the synthesis of hemoglobin. It catalyzes the second step in the biosynthetic pathway, the condensation of two molecules of δ-aminolevulinic acid to one molecule of porphobilinogen, a precursor of porphyrin, the functional unit binding iron in hemoglobin. The enzyme has several sulfhydryl groups that are susceptible to lead. In the erythrocyte more than 80% of lead is in fact bound to the δ-ALAD protein (much more than is bound to hemoglobin). Inhibition of δ-ALAD leads to decreased porphyrin synthesis, insufficient hemoglobin, loss of oxygen uptake capacity, and eventually anemia.

Because the inhibition of δ-ALAD by lead already occurs at very low exposure levels, it makes a very good biomarker of lead exposure. Measurement of δ-ALAD activity in blood has been conducted extensively in workers of metal-processing industries and in people living in metal-polluted environments. Also in fish, birds and several invertebrates (earthworms, planarians) the δ-ALAD assay has been shown to be a useful biomarker of lead exposure. In addition to lead, mercury is known to inhibit δ-ALAD, while the inhibition by both lead and mercury can be alleviated to some extent by zinc.

Chromium, especially as the trivalent (\(Cr^{3+}\)) and hexavalent (\(Cr^{6+}\)) ions, is the most notorious metal known to bind to DNA. Both trivalent and hexavalent chromium may cause mutations, and hexavalent chromium is also a known carcinogen. Although the salts of \(Cr^{6+}\) are only slightly soluble, the reactivity of the \(Cr^{6+}\) ion is so pronounced that even a very small amount of hexavalent chromium salt is already dangerous.

The genotoxicity of trivalent chromium is due to the formation of crosslinks between proteins and DNA. Any DNA molecule is surrounded by proteins (histones, regulatory proteins, chromatin). \(Cr^{3+}\) binds to amino acids such as cysteine, histidine and glutamic acid on the one hand, and to the phosphate groups in DNA on the other, without any preference for a specific nucleotide (base). The result is a covalent bond between DNA and a protein that will inhibit transcription or the regulatory functions of the DNA segment involved.

Another metal known to interact with DNA is nickel. Although the primary effect of nickel is to induce allergic reactions, it is also a known carcinogen. The exact molecular mechanism is not as well known as in the case of chromium. Nickel could crosslink proteins and DNA in the same way as chromium, but it has also been argued that nickel's carcinogenicity is due to oxidative stress resulting in DNA damage. Another suggested mechanism is that nickel interferes with the DNA repair system.

Many metals may compete with essential metals during uptake or transport across membranes. A well-known case is the competition between calcium and cadmium at the \(Ca^{2+}\)-ATPase pump in the basal membrane of fish gills. The gills of fish are a target for many water-borne toxic compounds because of their large contact area with the water, consisting of several membranes, each with infoldings to increase the surface area, and also because of their high metabolic activity, which stems from their important regulatory functions (uptake of oxygen, uptake of nutrients and osmoregulation). The single-layered epithelium has two types of cells: one active in osmoregulation (chloride cells) and one active in transport of nutrients and oxygen (respiratory cells). Strong tight junctions between these cells ensure complete impermeability of the epithelium to ions.
The apical membrane of the respiratory cells has many uptake pumps and channels. Calcium enters the cells through a calcium channel (without energetic costs, following the gradient). The intracellular calcium concentration is regulated by a calcium-ATPase in the basal membrane, which pumps calcium out of the epithelial cells into the blood. (Figure legend: s = serosa (basal side), BP = binding protein, mito = mitochondrion, ER = endoplasmic reticulum; from Verbost et al.)

Water-borne cadmium ions, which resemble calcium ions in their atomic radius, enter the cell through the same apical calcium channels, but subsequently inhibit the basal membrane calcium transporter by direct competition with calcium for the binding site on the ATPase. The consequence is an accumulation of calcium in the respiratory cells and a lack of calcium in the body of the fish, which causes a variety of secondary effects, amongst others hormonal disturbance; a severe decline of plasma calcium is a direct cause of mortality. This effect of cadmium occurs at very low concentrations (nanomolar range), and it explains the high toxicity of this metal to fish. Similar cadmium-induced hypocalcemia mechanisms are present in the gill membranes of crustaceans and most likely also in the gut epithelium cells of many other species.
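The inhibition pattern described above is that of classical competitive inhibition. As an illustration only (a textbook idealization, not a model fitted to this transporter), the pump rate \(v\) under Michaelis-Menten kinetics with cadmium as a competitive inhibitor would read:

\[v = \frac{V_{max}\,[Ca^{2+}]}{K_m \left(1 + \frac{[Cd^{2+}]}{K_i}\right) + [Ca^{2+}]}\]

where \(K_i\) is the inhibition constant of cadmium. Inhibition at nanomolar cadmium concentrations corresponds to a very small \(K_i\), i.e. a very high affinity of cadmium for the calcium binding site.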
There are various cellular ligands outside proteins or DNA that may bind metals. Among these are organic acids (malate, citrate), free amino acids (histidine, cysteine), and glutathione. Metals may also interfere with the cellular functions of phosphate, iron, calcium or zinc, for example by displacing these elements from their normal binding sites in enzymes or other molecules. To illustrate a case of interaction with phosphate we shortly discuss the toxicity of arsenic. Arsenic is, strictly speaking, not a metal, since arsenic oxide may engage in both base-forming and acid-forming reactions. Together with antimony and four other, lesser-known elements, arsenic is classified as a "metalloid".

Arsenic is a potent toxicant; arsenic trioxide (As2O3) is well known for its high mammalian toxicity and its use as a rodenticide and wood preservative. There are also therapeutic applications of arsenic trioxide, against certain leukemias, and arsenic is often applied in homeopathic treatments. Arsenic compounds are easily transported throughout the body, also across the placental barrier in pregnant women.

Arsenic can occur in two different valency states: arsenate (\(As^{5+}\)) and arsenite (\(As^{3+}\)). The terms are also used to indicate the oxy-salts, such as ferric arsenate, FeAsO4, and ferric arsenite, FeAsO3. Inside the body, arsenic may be present in the oxidized as well as the reduced state, depending on the conditions in the cell, and it is enzymatically converted to one or the other state by reductases and oxidases. It may also be methylated by methyltransferases. The two forms of arsenic have quite different toxicity mechanisms. Arsenate (\(AsO_4^{3-}\)) is a powerful analog of phosphate, while arsenite (\(AsO_3^{3-}\)) reacts with SH-groups in proteins, like the metals discussed above. Arsenite is also a known carcinogen; the mechanism seems not to rely on DNA binding, as in the case of chromium, but on the induction of oxidative stress and interference with cellular signaling.

The most common cause of chronic arsenic poisoning is inhibition of the enzyme glyceraldehyde phosphate dehydrogenase (GAPDH). This is a critical enzyme of glycolysis, converting glyceraldehyde-3-phosphate into 1,3-bisphosphoglycerate. However, in the presence of arsenate, GAPDH converts glyceraldehyde-3-phosphate into 1-arseno-3-phosphoglycerate; arsenate effectively acts as a phosphate analog to "fool" the enzyme. The product 1-arseno-3-phosphoglycerate does not engage in the next glycolytic reaction, which normally produces one ATP molecule, but falls back to arsenate and 3-phosphoglycerate without the production of ATP, while the arsenate released can act again on the enzyme in a cyclical manner. The result is that the glycolytic pathway is uncoupled from ATP production. In terms of ATP bookkeeping: glycolysis normally invests two ATP and gains four per glucose (two in the step following GAPDH and two in the pyruvate kinase step), a net yield of two ATP; when the step following GAPDH is bypassed by arsenolysis, the net yield drops to zero. Needless to say, this signifies a severe and often fatal inhibition of energy metabolism.

Animals, plants, fungi, protists and prokaryotes all differ greatly in their susceptibility to metals; recall the examples given in the introduction of this section (mammals are hardly susceptible to zinc while plants and crustaceans are, and earthworms, gastropods and fungi are quite sensitive to copper while terrestrial vertebrates are not). In the end, such patterns must be explained in terms of the presence of susceptible biochemical targets, different strategies for storage and excretion, and differing mechanisms of defence and sequestration. However, at the moment there is no general framework by which to compare the variation in sensitivity across species. Also, there is no relation between accumulation and susceptibility: some species that accumulate metals to a large degree (e.g. copper in isopods) are not sensitive to the same metal, while others, which do not accumulate the metal, are quite sensitive. Accumulation seems to be partly related to a species' feeding strategy (e.g. spiders absorb almost all the (fluid) food they take in, and any metals in the food will accumulate in the midgut gland); accumulation is also related to specific nutrient requirements (e.g. copper in isopods, manganese in some oribatid mites). Finally, some populations of some species have evolved specific tolerances in response to living in a metal-contaminated environment, on top of the already existing accumulation and detoxification strategies.

Metals do not form a homogeneous group. Their toxicity involves reactivity towards a great variety of biochemical targets, and often several mechanisms act simultaneously and interact with each other. Induction of oxidative stress is a common denominator, as is reactivity towards nucleophilic groups in macromolecules. The great variety of metal-induced responses makes metals interesting model compounds for toxicological studies.

Cameron, K.S., Buchner, V., Tchounwou, P.B. (2011). Exploring the molecular mechanisms of nickel-induced genotoxicity and carcinogenicity: a literature review. Reviews of Environmental Health 26, 81-92.
Ernst, W.H.O., Joosse-van Damme, E.N.G. (1983). Umweltbelastung durch Mineralstoffe. Fischer Verlag, Jena.
Singh, A.P., Goel, R.K., Kaur, T. (2011). Mechanisms pertaining to arsenic toxicity. Toxicology International 18, 87-93.
Verbost, P.M. Cadmium toxicity: interaction of cadmium with cellular calcium transport mechanisms. Ph.D. thesis, Radboud Universiteit Nijmegen.

Mention three different classes of primary lesions caused by free metal ions and leading to metal toxicity. Include the metals that are best known for causing each type of lesion.
Is it possible to decide to which type of metal a cell is exposed, based on the kind of cellular disturbance that is observed?
Several invertebrates accumulate metals to a very high degree. Mention a few examples and the metals they accumulate. Are these animals also among the most sensitive to metal toxicity? Please explain.
"Essential metals can be regulated and are therefore less toxic than xenobiotic metals" - Please comment on this thesis.

Author: Nico M. van Straalen
Reviewers: Henk Schat, Jaco Vangronsveld
Learning objectives:
You should be able to:
Keywords: hyperaccumulation, metal uptake mechanisms, microevolution

Some species of plants and animals have evolved metal-tolerant populations that can survive exposures that are lethal to other populations of the same species. Best known is the heavy metal vegetation that grows on metalliferous soils. The study of these cases of "evolution in action" has revealed many aspects of metal trafficking in plants, transport across membranes, metal-scavenging molecules in the cell, and the subcellular distribution of metals, and has shown how these processes have been adapted by natural selection for tolerance. Metal-tolerant plant varieties are usually dependent upon high metal concentrations in the soil and do not grow well in reference soils. In addition, some plant species show an extreme degree of metal accumulation. In animals, metal tolerance has been demonstrated in some invertebrates that live in close contact with metal-containing soils; this is usually achieved by altered regulation of metal-scavenging proteins such as metallothioneins, or by duplication of the corresponding genes. Genomics studies are broadening our perspective, as the adaptation normally does not rely on a single gene but includes hypostatic factors and modifiers.

As metals cannot be degraded or metabolized, the only way to deal with a potentially toxic excess is to store or excrete it. Often both mechanisms are operational, excretion being preceded by storage or scavenging, but animals and plants differ greatly in the emphasis on one or the other mechanism. Both essential and nonessential metals are subject to all kinds of trafficking mechanisms aiming to keep the biologically active, free ion concentration of the metal extremely low. Still, there is hardly any relationship between accumulation and tolerance: some species have low tissue concentrations and are sensitive, others have low tissue concentrations and are tolerant, some accumulate metals and suffer from the high concentrations, and others accumulate and are extremely tolerant.

Like the mechanisms of biotransformation (see the section on Genetic variation), metal trafficking mechanisms show genetic variation, and such variation may be subject to evolution. However, it has to be noted that only a limited number of plant and animal species have evolved metal-tolerant populations. This may be due to the fact that the evolution of metal tolerance makes use of already existing, moderately efficient metal trafficking mechanisms in the ancestral species. This interpretation is suggested by the observation that the non-metal-tolerant varieties of metal-tolerant plants already have a certain degree of metal tolerance (larger than that of species that never evolve metal-tolerant varieties). So the mutational distance to metal tolerance was smaller in the ancestors of metal-tolerant plants than it is in "normal" plants.

Real metal tolerance, where the metal-tolerant population can withstand exposures orders of magnitude larger than reference populations and has become dependent on metal-rich soils, is only found in plants. Metal tolerance in animals is a matter of degree rather than of kind, and does not come with externally recognizable phenotypes.
Most likely the combination of strong selection pressure, the impossibility of escaping by locomotion, and the right pre-existing genetic variation explains why metal tolerance in plants is so much more prominent than in animals.

In this section we discuss the various mechanisms that have been shown to underlie metal tolerance. The evolutionary response to environmental metal exposure is one of the classical examples of "evolution in action", next to insecticide resistance in mosquitoes and industrial melanism in moths.

Metal tolerance in plants

For many years, most likely already since humans started to dig ores and use metals for the manufacture of utensils, pottery and tools, it has been known that naturally metal-rich soils harbour a specific metal-tolerant vegetation. This "Schwermetallvegetation", described in the classical book by the German-Dutch botanist W.H.O. Ernst, consists of a designated collection of plant species, with representatives from various families. Several species also have metal-sensitive populations living in normal soils, but some, like the European zinc violet, Viola calaminaria, are restricted to metal-rich soils. This is also seen in the metal-tolerant vegetations of New Caledonia, Cuba, Zimbabwe and Congo, which to a large degree consist of endemic metal-tolerant species (true metallophytes) that are never found in normal soils. However, some common species have also developed metal-tolerant ecotypes.

Metal-tolerant plant species expanded their range when humans started to dig metal ores, and they can now also be found extensively at mining sites, on metal-enriched stream banks, and around metal smelters. Naturally metal-enriched soils differ from reference soils not only in metal concentration but also in other aspects, e.g. calcium and moisture, so selection for metal tolerance goes hand-in-hand with selection by several other factors.

Metal tolerance is mainly restricted to herbs and forbs and, except on some tropical serpentine soils, does not extend to trees. A heavy metal vegetation is recognizable in the landscape as a "meadow", lacking trees, with relatively few plant species and an abundance of metallophytes. In the past, metal ores were discovered from the presence of such metallophytes, an activity called bioprospecting.

We know from biochemistry that different metals are bound to different ligands and follow different biochemical pathways in biological tissues (see the section on metal accumulation). Some metals (cadmium, copper, mercury) are "sulphur-seekers", others have an affinity for organic acids (zinc), and still others tend to be associated with calcium-rich tissues (lead). Essential metals such as copper, zinc and iron have their own, metal-specific transport mechanisms. From these observations one may conclude that metal tolerance will also be specific to the metal, and that cross-tolerance (tolerance directed to one metal causing tolerance to another metal as a side-effect) is relatively rare. This is indeed the case.

In many cases metal-tolerant plants do not show the same growth characteristics as the non-tolerant varieties of the same species. Loss of growth potential has often been interpreted as a "cost of tolerance". However, genetic research has shown that the lower growth potential of metallophytes is a separate adaptation to deal with the usually infertile metalliferous soils, and that there is no mechanistic link to tolerance. Metabolic costs or negative pleiotropic effects of metal tolerance have not been described.
The fact that metal-tolerant plants do not grow well in clean soils is explained by the constitutive upregulation of trafficking and compartmentalization mechanisms, causing increased metal requirements that cannot be met on non-metalliferous soils.

Another striking fact is that metal tolerances in the same plant species at different sites have evolved independently from each other. The various metal-tolerant populations of a species do not all descend from a single ancestral population, but result from repeated local evolution. That in different populations sometimes the same loci are affected by natural selection is ascribed to the fact that, given the species' genetic background, there are only a limited number of avenues to metal tolerance.

A final general principle is that metal tolerance in plants is often targeted towards proteins that transport metals across membranes (cell membrane, tonoplast). The genes of such transporters may be duplicated, the balance between high-affinity and low-affinity versions may be altered, their expression may be upregulated or downregulated, or the proteins may be targeted to different cellular compartments.

Although many details on the genetic changes responsible for tolerance in plants are still lacking, the work on copper tolerance in bladder campion, Silene vulgaris, illustrates many of the points listed above. The plant has many metal-tolerant populations, of which one, found at Imsbach, Germany, shows an extreme degree of copper tolerance and also some (independently evolved) zinc and cadmium tolerance. The area is known for its "Bergbau", with historical mining activities for copper, silver and cobalt, but also some older calamine deposits, which explains the zinc and cadmium tolerance.

Genetic work by H. Schat and colleagues has shown that two ATP-driven copper transporters, designated HMA5I and HMA5II, are involved in the copper tolerance of Silene. The HMA5I protein resides in the tonoplast, where it relocates copper into the vacuole, while HMA5II resides in the endoplasmic reticulum. When free copper ions appear in the cell, HMA5II relocates from the ER to the cell membrane and starts pumping copper out of the cell. During transport from roots to shoot (in the xylem vessels) copper is bound as a nicotianamine complex. In addition, plant metallothioneins play a role in copper binding and transport in the phloem and during redistribution from senescent leaves. Copper tolerance in Silene illustrates the principle referred to above: metal tolerance is achieved by enhancing transport mechanisms already present, not by evolving new genes.

Some plants accumulate metals to an extreme degree. Well known are metallophytes growing on serpentine soils, which accumulate very large amounts of nickel; copper and cobalt accumulation is also observed in several species of plants. Hyperaccumulators do not exclude metals but preferentially accumulate them when the concentration in the soil is extremely high (> 50,000 mg of copper per kg soil). The copper concentration of the leaves may then reach values of more than 1000 μg/g. A very extreme example is the tree species Sebertia acuminata, growing on the island of New Caledonia in ultramafic soil with 0.85% of nickel, which produces a latex containing 11% of nickel by weight. Such extraordinarily high concentrations impose extreme demands on the efficiency of metal trafficking and have therefore attracted the attention of biological investigators.
In Western Europe's heavy metal vegetation, zinc accumulators are present in several species of the genera Agrostis, Brassica, Thlaspi and Silene. Most of the experimental research is conducted on the brassicacean species Noccaea (Thlaspi) caerulescens and Arabidopsis halleri, with Arabidopsis thaliana as a non-accumulating reference model.

The transport of metals in a plant involves a number of distinct steps, each of which is upregulated in a metal hyperaccumulator. This is illustrated by Verbruggen et al. (2009) for zinc hyperaccumulation in Thlaspi caerulescens. While the basic components of the system are beginning to be known, the question of how the whole machinery is upregulated in a coherent fashion has not yet been answered.

Also in animals, metal-tolerant populations of the same species have been reported; however, there is no specific metal-tolerant community with a designated set of species, as in plants. There are, however, obvious metal accumulators among animals. Best known are terrestrial isopods, which accumulate very high concentrations of copper in designated cells in their hepatopancreas, and some species of oribatid mites, which accumulate very high amounts of manganese and zinc.

One of the factors investigated to explain metal tolerance in animals is the metal-binding protein metallothionein (MT). Duplication of an MT gene has been implicated in the tolerance of Daphnia and Drosophila to copper. In addition, metal tolerance may be due to altered transcriptional regulation. The latter mechanism underlies the evolution of cadmium tolerance in the soil-living springtail Orchesella cincta. Detailed genetic analysis of this model system has revealed that the MT promoter of O. cincta shows a very large degree of polymorphism, with some alleles affecting transcription factor binding sites and causing overexpression of MT. The promoter allele conferring strong overexpression of MT upon exposure to cadmium had a significantly higher frequency in O. cincta populations from metal-contaminated soils.

In addition to springtails, evolution of metal tolerance has also been described for the earthworm Lumbricus rubellus. In a population living in a lead-contaminated deserted mining area in Wales, two lineages were distinguished on the basis of the COI gene and RFLPs. Interestingly, the two lineages had colonized different microhabitats of the area, one of them being unable to survive high lead concentrations. Differential expression was noted for genes involved in phosphate and calcium metabolism, and two crucial mutations in a calcium transport protein suggested that lead tolerance in L. rubellus is due to a modification of calcium transport, a logical target since lead and calcium are often found to interact with each other's transport (see the section on metal accumulation).

The study of metal tolerance is a rewarding topic of evolutionary ecotoxicology. Several crucial genetic mechanisms have been identified, but in none of the study systems is a complete picture of the evolved tolerance mechanisms available. It may be expected that genome-wide studies will be able to identify the full network responsible for tolerance, which most likely includes not only major genes but also hypostatic factors and modifiers.

Andre, J., King, R.A., Stürzenbaum, S.R., Kille, P., Hodson, M.E., Morgan, A.J. (2010). Molecular genetic differentiation in earthworms inhabiting a heterogeneous Pb-polluted landscape. Environmental Pollution 158, 883-890.
Ernst, W.H.O. (1974). Schwermetallvegetation der Erde. Gustav Fischer Verlag, Stuttgart.
Janssens, T.K.S., Roelofs, D., Van Straalen, N.M. (2009). Molecular mechanisms of heavy metal tolerance and evolution in invertebrates. Insect Science 16, 3-18.
Krämer, U. (2010). Metal hyperaccumulation in plants. Annual Review of Plant Biology 61, 517-534.
Li, X., Iqbal, M., Zhang, Q., Spelt, C., Bliek, M., Hakvoort, H.W.J., Quatrocchio, F.M., Koes, R., Schat, H. (2017). Two Silene vulgaris copper transporters residing in different cellular compartments confer copper hypertolerance by distinct mechanisms when expressed in Arabidopsis thaliana. New Phytologist 215, 1102-1114.
Lopes, I., Baird, D.J., Ribeiro, R. (2005). Genetically determined resistance to lethal levels of copper by Daphnia longispina: association with sublethal response and multiple/coresistance. Environmental Toxicology and Chemistry 24, 1414-1419.
Van Straalen, N.M., Janssens, T.K.S., Roelofs, D. (2011). Micro-evolution of toxicant tolerance: from single genes to the genome's tangled bank. Ecotoxicology 20, 574-579.
Verbruggen, N., Hermans, C., Schat, H. (2009). Molecular mechanisms of metal hyperaccumulation in plants. New Phytologist 181, 759-776.

Describe the pathway of zinc ions taken up by plants, from the soil solution to their final destination in the plant, and explain how the various steps have been modified in hyperaccumulating plant species such as Arabidopsis halleri.
Discuss the difference between trans-regulatory change and cis-regulatory change in the evolution of metal tolerance.

Author: Dick Roelofs
Reviewers: Nico van Straalen, Dries Knapen
Learning objectives:
You should be able to:
Keywords: Molecular initiating event, key event, in vitro assay, high throughput assay, pathway

Over the past two decades the availability of molecular, biochemical and genomics data has increased exponentially. Data are now available for a phylogenetically broad range of living organisms, from prokaryotes to humans. This has tremendously advanced our knowledge and mechanistic understanding of biological systems, which is highly beneficial for different fields of biological research, such as genetics, evolutionary biology and the agricultural sciences. Being an applied biological science, toxicology has not yet tapped this wealth of data, because it is difficult to incorporate mechanistic data when assessing chemical safety in relation to human health and the environment. However, society is increasingly concerned about the release of industrial chemicals with little or no hazard or risk information. Consequently, a much larger number of chemicals need to be considered for potential adverse effects on human health and ecosystem functioning. To meet this challenge it is necessary to deploy fast, cost-effective and high-throughput approaches that can predict the potential toxicity of substances and replace traditional tests based on survival and reproduction, which run for weeks or months and are often quite labour-intensive. A major challenge, however, is to link these fast in vitro and in vivo assays to the endpoints used in current risk assessment. This challenge was picked up by defining the adverse outcome pathway (AOP) framework, first proposed by Gerald Ankley and co-workers of the United States Environmental Protection Agency, US-EPA (Ankley et al., 2010).

The AOP framework is defined as an evolution of prior pathway-based concepts, most notably mechanisms and modes of action, for assembling and depicting toxicological data across biological levels of organization (Ankley and Edwards, 2018).
An AOP is a graphical representation of a series of measurable key events (KEs). A key event is a measurable directional change in the state of a biological process. KEs can be linked to one another through key event relationships (KERs). The first KE is depicted as the "molecular initiating event" (MIE) and represents the interaction of the chemical with a biological receptor that activates subsequent key events. The key event relationships should ideally be based on causal evidence. A cascade of key events can eventually result in an adverse outcome (AO) at the individual or population level. The MIE and AO are specialized KEs, but are treated like any other KE in the AOP framework.

The aim of an AOP is to represent and describe, in a simplified way, how responses at the molecular and cellular level are translated into impacts on development, reproduction and survival, which are relevant endpoints in risk assessment (Villeneuve et al., 2014). Five core concepts have been defined in the development of AOPs: 1) AOPs are not chemical-specific; 2) AOPs are modular, being composed of reusable KEs and KERs; 3) an individual AOP is a pragmatic unit of development and evaluation; 4) networks of multiple AOPs sharing KEs and KERs are the functional units of prediction for most real-world scenarios; and 5) AOPs are living documents that evolve as new knowledge becomes available.

Generally, AOPs are simplified linear pathways, but different AOPs can be organized in networks with shared nodes. The AOP networks are actually the functional units of prediction, because they represent the complex biological interactions that occur in response to exposure to a toxicant or a mixture of toxicants. Analysis of the intersections (shared key events) of different AOPs making up a network can reveal unexpected biological connections (Villeneuve et al., 2014).

Typically, an AOP consists of only one MIE and one AO, connected to each other by a potentially unlimited number of KEs and KERs. The MIE is considered to be the first anchor of an AOP at the molecular level, where stressors directly interact with the biological receptor. Identification of the MIE mostly relies on chemical analysis, in silico analysis, or in chemico and in vitro data. For instance, the MIE for AOPs related to estrogen receptor activation involves the binding of chemicals to the estrogen receptor, thereby triggering a cascade of effects in hormone-related metabolism (see the section on Endocrine disruption). The MIE for AOPs related to skin sensitization (see below) involves the covalent interaction of chemicals with proteins in skin cells, an event called haptenization (Vinken, 2013).

A wide range of biological data can support the understanding of KEs. Usually, early KEs (directly linked to MIEs) are assessed using in vitro assays, but may include in vivo data at the cellular level, while intermediate and late KEs rely on tissue-, organ- or whole-organism measurements. Key event measurements can also draw on data from high-throughput screening and/or data generated by different -omics technologies. This is actually where the true value of the AOP framework comes in, since it is currently the only framework able to reach such a high level of data integration in the context of risk assessment. It is even possible to integrate comparative data from phylogenetically divergent organisms into key event measurements valid across species, which could facilitate the evaluation of species sensitivity (LaLone et al., 2018).
The final AO is usually represented by apical responses that are already covered by accepted standard test guidelines and are instrumental in regulatory decision-making; these include endpoints such as development, growth, reproduction and survival.

Development of the AOP framework is currently supported by the US Environmental Protection Agency, the EU Joint Research Centre (JRC) and the Organisation for Economic Co-operation and Development (OECD). Moreover, the OECD has sponsored the development of the open-access searchable database AOPWiki, comprising over 250 AOPs with associated MIEs, KEs and KERs, and more than 400 stressors. New AOPs are added regularly. The database also has a system for specifying the confidence to be placed in an AOP. Where KEs and KERs are supported by direct, specifically designed experimental evidence, high confidence is placed in them. In other cases confidence is considered moderate or low, e.g. when there is a lack of supporting data or conflicting evidence.

Skin sensitization is characterized by a two-step process: a sensitization phase and an elicitation phase. The first contact of electrophilic compounds with the skin covalently modifies skin proteins and generates an immunological memory due to the generated antigen/allergen-specific T-cells. During the elicitation phase, repeated contact with the compound elicits the allergic reaction defined as allergic contact dermatitis, which usually develops into a lifelong effect. This is an important endpoint for the safety assessment of personal care products, traditionally evaluated by in vivo assays. Based on changed public opinion, the European Chemicals Agency (ECHA) decided to move away from whole-animal skin tests and developed alternative assessment strategies.

During sensitization, the MIE takes place when the chemical enters the skin, where it forms a stable complex with skin-specific carrier proteins (hapten complexes), which are immunogenic. A subsequent KE comprises inflammation and oxidative defence via a signaling cascade called the Keap1/Nrf2 pathway (Kelch-like ECH-associated protein 1 / nuclear factor erythroid 2 related factor 2). At the same time, a second KE is defined as dendritic cell activation and maturation. This results in movement of dendritic cells to lymph nodes, where the hapten complex is presented to naive T-cells. The third KE describes the proliferation of hapten-specific T-cells and the subsequent movement of antigen-specific memory cells that circulate in the body. Upon a second contact with the compound, these memory T-cells secrete cytokines that cause an inflammation reaction leading to the AO, including red rash, blisters and burning skin (Vinken et al., 2017). This AOP is designated AOP40 in the database of adverse outcome pathways.

A suite of high-throughput in vitro assays has now been developed to quantify the intermediate KEs in AOP40. These data formed the basis for the development of a Bayesian network analysis that can predict the potential for skin sensitization. This example highlights the use of pathway-derived data organized in an AOP, ultimately leading to an alternative fast screening method that may replace a conventional method using animal experiments.
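To make the chain structure of an AOP such as AOP40 concrete, the sketch below encodes it as an ordered series of key events connected by KERs. This is only an illustration of the data structure described above; the class and variable names are hypothetical and do not correspond to any actual AOP software or to the AOPWiki data format.

```python
from dataclasses import dataclass

@dataclass
class KeyEvent:
    name: str
    level: str  # biological level of organization at which the event is measured

# AOP40 (skin sensitization) as a linear chain: MIE -> KE1 -> KE2 -> KE3 -> AO.
# The MIE and AO are specialized key events, treated like any other KE.
aop40 = [
    KeyEvent("Covalent binding to skin proteins (haptenization)", "molecular"),  # MIE
    KeyEvent("Inflammation and oxidative defence (Keap1/Nrf2)", "cellular"),     # KE
    KeyEvent("Dendritic cell activation and maturation", "cellular"),            # KE
    KeyEvent("Proliferation of hapten-specific T-cells", "tissue/organ"),        # KE
    KeyEvent("Allergic contact dermatitis", "individual"),                       # AO
]

# Key event relationships (KERs) are the directed links between successive KEs;
# pairing each KE with its downstream neighbour yields the KERs of this AOP.
kers = list(zip(aop40, aop40[1:]))

for upstream, downstream in kers:
    print(f"{upstream.name} [{upstream.level}] --> {downstream.name} [{downstream.level}]")
```

Because KEs are modular, two AOPs sharing a key event can be joined into a network simply by letting their chains reference the same KeyEvent object; this is how shared nodes arise in AOP networks.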
Ankley, G.T., Bennett, R.S., Erickson, R.J., Hoff, D.J., Hornung, M.W., Johnson, R.D., Mount, D.R., Nichols, J.W., Russom, C.L., Schmieder, P.K., Serrano, J.A., Tietge, J.E., Villeneuve, D.L. (2010). Adverse outcome pathways: A conceptual framework to support ecotoxicology research and risk assessment. Environmental Toxicology and Chemistry 29, 730-741.
Ankley, G.T., Edwards, S.W. (2018). The adverse outcome pathway: A multifaceted framework supporting 21st century toxicology. Current Opinion in Toxicology 9, 1-7.
LaLone, C.A., Villeneuve, D.L., Doering, J.A., Blackwell, B.R., Transue, T.R., Simmons, C.W., Swintek, J., Degitz, S.J., Williams, A.J., Ankley, G.T. (2018). Evidence for cross species extrapolation of mammalian-based high-throughput screening assay results. Environmental Science and Technology 52, 13960-13971.
Villeneuve, D.L., Crump, D., Garcia-Reyero, N., Hecker, M., Hutchinson, T.H., LaLone, C.A., Landesmann, B., Lettieri, T., Munn, S., Nepelska, M., Ottinger, M.A., Vergauwen, L., Whelan, M. (2014). Adverse outcome pathway development I: Strategies and principles. Toxicological Sciences 142, 312-320.
Vinken, M. (2013). The adverse outcome pathway concept: A pragmatic tool in toxicology. Toxicology 312, 158-165.
Vinken, M., Knapen, D., Vergauwen, L., Hengstler, J.G., Angrish, M., Whelan, M. (2017). Adverse outcome pathways: a concise introduction for toxicologists. Archives of Toxicology 91, 3697-3707.

What is a molecular initiating event?
Where in the chain of events do high-throughput in vitro assays feed into AOPs?
What is the ultimate goal of the AOP concept?
How many intermediate key events are captured in the AOP for skin sensitization?
Is an AOP always represented as a linear chain between MIE and AO?

Author: Nico M. van Straalen
Reviewers: Andrew Whitehead, Frank van Belleghem
Learning objectives:
You should be able to:
Keywords: toxicant susceptibility; genetic variation; biotransformation; evolution of toxicant tolerance

In addition, a basic knowledge of genetics and evolutionary biology is needed to understand this module.

Susceptibility to toxicants often shows inter-individual differences associated with genetic variation. While such differences are considered a nuisance in laboratory toxicity testing, they are an inextricable aspect of toxicant effects in the environment. Variation may be due to polymorphisms in the target site of toxicant action, but more often differences in metabolic enzymes and rates of excretion contribute to inter-individual variation. The structure of genes encoding metabolic enzymes, as well as polymorphisms in the promoter regions of such genes, are common sources of genetic variation. Under strong selection pressure species may evolve toxicant-tolerant populations, for example insects to insecticides and bacteria to antibiotics. In human populations, polymorphisms in drug-metabolizing enzymes are mapped to provide a basis for personal therapies. This module aims to illustrate some of the genetic principles explaining inter-individual variation in toxicant susceptibility and its evolutionary consequences.

For a long time it has been known that human subjects may differ markedly in their responses to drugs: while some patients hardly respond to a certain dosage, others react vehemently. Similar differences exist between the sexes and between ethnic groups. To avoid failure of treatment on the one hand and overdosing on the other, such personal differences have attracted the interest of pharmacological scientists. Also the tendency to develop cancer upon exposure to mutagenic chemicals is partly due to genetics. Since the rise of molecular ecology in the 1990s, ecotoxicologists have noted that inter-individual differences in toxicant responses also exist in the environment.
From quantitative genetics we know that when a trait is due to many genes, each with an independent additive effect on the trait value, the response to selection, R, is linearly related to the selection differential, S, according to the formula \(R = h^2S\), where \(h^2\) is the heritability of the selected trait (the fraction of additive genetic variance relative to total phenotypic variance). Since anthropogenic toxicants can act as very strong selective agents (large \(S\)), adaptation is expected whenever \(h^2 > 0\). However, the effectiveness of "evolutionary rescue" from pollution is limited to those species that have the appropriate genetic variation and the ability to quickly increase in population size.
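To make the selection-response formula concrete, here is a minimal sketch that iterates \(R = h^2S\) over a few generations; the heritability, starting tolerance and selection differential are illustrative assumptions only.

```python
# Iterating the breeder's equation R = h^2 * S (all numbers illustrative)
h2 = 0.4      # heritability of tolerance
trait = 10.0  # population mean tolerance, e.g. an LC50 in mg/L

for gen in range(1, 6):
    S = 0.25 * trait   # selection differential: survivors average 25% above the mean
    trait += h2 * S    # response to selection shifts the population mean
    print(f"generation {gen}: mean tolerance = {trait:.2f}")
```

With a positive \(h^2\) and a persistent selection differential, mean tolerance increases every generation, which is the essence of pollution-driven adaptation.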
One of the most important enzyme systems contributing to the metabolism of xenobiotic chemicals is the cytochrome P450 family, a class of proteins located in the smooth endoplasmic reticulum of the cell and acting in co-operation with several other proteins. Cytochrome P450 oxidizes the substrate and enhances its water-solubility (the phase I reaction), and in many cases activates it for further reactions involving conjugation with an endogenous compound (phase II reactions). These processes generally lead to detoxification and increased excretion of toxic substances. The biochemistry of drug metabolism is discussed in detail in the section on Xenobiotic metabolism and defence.

The human genome has 57 genes encoding a P450 protein. The genes are commonly designated as "CYP". Other organisms, especially insects and plants, have many more CYPs. For example, the Drosophila genome encodes 83 functional P450 genes and the genome of the model plant Arabidopsis has 244 CYPs. Based on sequence similarity, CYPs are classified in 18 families and 43 subfamilies, but there is no agreement yet about the position of various CYP genes of lower invertebrates. The complexity is enhanced by duplications specific to certain evolutionary lineages, creating a complicated pattern of orthologs (homologs by descent from a common ancestor) and paralogs (homologs due to duplication in the same genome). In addition to functional enzymes, it is common to find many CYP pseudogenes in a genome: DNA sequences that resemble functional genes but are mutated such that they do not yield functional proteins.

The expression of CYP enzymes is markedly tissue-specific. Often CYP expression is high in epithelial tissues (lung, intestine) and organs with designated metabolic activity (liver, kidney). In the human body, the liver is the main metabolic organ and is known for its extensive CYP expression. P450 enzymes also differ in their inducibility by classes of chemicals and in their substrate specificity.

It is often assumed that the versatility of an organism's CYP genes is a reflection of its ecology. For example, herbivorous insects that consume plants of different kinds, with many different feeding repellents, must have a wide diversity of CYP genes at their disposal. It has also been shown that the activity of CYP enzymes among terrestrial organisms is, in general, higher than among aquatic organisms, and that plant-eating birds have higher biotransformation activities than predatory birds.

One of the best-investigated CYP genes, especially due to its strong inducibility and involvement in xenobiotic metabolism, is mammalian CYP1A1. In humans, induction of this gene is associated with increased lung cancer risk from smoking, and with other cancers, such as breast cancer and prostate cancer. Human CYP1A1 is located on chromosome 15 and encodes 512 amino acids in seven exons (see Zhou et al., 2009). About 133 single-nucleotide polymorphisms (SNPs, variations in a single nucleotide that occur at a specific position in the genome) have been described for this gene, of which 23 are non-synonymous (causing a substitution of an amino acid in the protein).

Many of these SNPs have a medical relevance. For example, a rather common SNP in exon 7 changes codon 462 from isoleucine into valine. The substituted allele is called CYP1A1*2A, and it occurs at a frequency of 19% in the Caucasian part of the human population. The allelic variant of the enzyme has a higher activity towards 17β-estradiol and is a risk factor for several types of cancer. However, the expression of such traits may vary from one population to another, and may also interact with other risk factors. For example, CYP1A1*2A is a risk factor for cervical cancer in women with a history of smoking in the Polish population, but the same SNP may not be a risk factor in another population or among people with a non-smoking lifestyle. In genetics, such effects are known as epistasis: the phenotypic effect of genetic variation at one locus depends on the genotype of another locus. This is also an example of a genotype-by-environment interaction, where the phenotypic effect of a genetic variant depends on the environment (smoking habit). In toxicology it is known that polymorphisms of phase II biotransformation enzymes may contribute significantly to epistatic interactions with CYP genes. Unraveling all these complicated interactions is a very active field of research in human medical genetics.

Comparison of CYP genes in different species has revealed an enormously rapid evolution of this gene family, with many lineage-specific duplications. This indicates strong selective pressures imposed by the need to detoxify substances ingested with the diet. Especially herbivorous animals are constantly exposed to such compounds, synthesized by plants to deter feeding. We also see profound changes in CYP genes associated with evolutionary transitions such as the colonization of terrestrial habitats by the various lineages of arthropods. Such natural variation, induced by plant toxins and habitat requirements, is also relevant in the responses to toxicants. In general, variation of biotransformation enzymes can be classified into four main categories.

To illustrate the complicated evolution of biotransformation genes, we briefly discuss the CYPs of the common cormorant, Phalacrocorax carbo. This is a bird known for its narrow diet (fish) and extraordinary potential for accumulation of dioxin-related compounds (PCBs, PCDDs and PCDFs). Environmental toxicologists have identified two CYP1A genes in the cormorant, called CYP1A4 and CYP1A5. It turns out that CYP1A4 is homologous by descent (orthologous) to mammalian CYP1A1, while CYP1A5 is an ortholog of mammalian CYP1A2. However, the orthologies are not revealed by common phylogenetic analysis if the whole coding sequence is used in the alignment (see Kubota et al., 2006). This is a consequence of a process called interparalog gene conversion, which tends to homogenize the DNA sequences of gene copies located on the same chromosome. This diminishes sequence variation between the paralogs and creates chimeric gene structures that are more similar to each other than expected from their phylogenetic relations.
If a phylogenetic tree is made using a section of the gene that remained outside the gene conversion, the true phylogenetic relations are revealed (see Kubota et al., 2006).

Cytochrome P450 polymorphisms are also implicated in certain types of insecticide resistance. There are many ways in which insects and other arthropods can become resistant, and several mechanisms may even be present in the same resistant strain. Target site alteration (making the target less susceptible to the insecticide, e.g. altered acetylcholinesterase, substitutions in the GABA receptor, etc.) seems to be the most likely mechanism for resistance; however, such changes often come with substantial costs, as they may diminish the natural function of the target (in genetics this is called pleiotropy). Increased metabolism usually does not carry such metabolic costs, and this is where cytochromes P450 come into play. A model system for investigating the genetics of such mechanisms is DDT resistance in the fruit fly, Drosophila melanogaster.

In a DDT-resistant Drosophila strain, all CYP genes were screened for enhanced expression, and it was shown that DDT resistance was due to a highly upregulated variant of only a single gene, Cyp6g1. Further analysis showed that the gene's promoter carried an insertion with strong similarity to a transposable element of the Accord family. The insertion of this element causes a significant overexpression and a high rate of protein synthesis that allows the fly to quickly degrade a DDT dose. The fact that a simple change, in only one allele, can underlie such a distinctive phenotype as pesticide resistance is a remarkable lesson for molecular toxicology.

A recent study on killifish, Fundulus heteroclitus, along the East coast of the United States has revealed a much more complicated pattern of resistance. Populations of these fish live in estuaries, some with severely polluted sediments containing high concentrations of polychlorinated biphenyls (PCBs) and polycyclic aromatic hydrocarbons (PAHs). Killifish from the polluted environments are much more resistant to toxicity from the model compounds PCB126 and benzo(a)pyrene. This resistance is related to mutations in the gene encoding the aryl hydrocarbon receptor (AHR), the protein that binds PAHs and certain PCB metabolites and activates CYP expression. Mutations in the aryl hydrocarbon receptor-interacting protein (AIP), a protein that combines with AHR to ensure binding of the ligand, also contribute to down-regulation of the CYP1A1 pathway. The net result is that killifish CYP1A1 shows only moderate induction by PCBs and PAHs, and the damaging effects of reactive metabolites are avoided. However, since direct knockdown of CYP1A1 does not provide resistance, it is still unclear whether the beneficial effects of the mutations in AHR actually act through an effect on CYP1A1.

Interestingly, the various killifish populations show at least three different deletions in the AHR genes. In addition, the tolerant populations show various degrees of CYP1A1 duplication; in one population even eight paralogs are present. This can be interpreted as compensatory adaptations ensuring a basal constitutive level of CYP1A1 protein to conduct routine metabolic activities. The killifish example shows a wonderful case of interplay between genetic tinkering and strong selection emanating from a polluted environment.

In this module we have focused on genetic variation in the phase I enzyme, cytochrome P450.
A similar complexity lies behind the phase II enzymes and the various xenobiotic-induced transporters (phase III). Still, the P450 examples suffice to demonstrate that the machinery of xenobiotic metabolism shows a very large degree of genetic variation, as well as species differences due to duplications, deletions, gene conversion and lineage-specific selection. The variation resides in copy number variation, in alterations of coding sequences, and in promoter or enhancer sequences affecting the expression of the enzymes. Such genetic variation is the template for evolution. In polluted environments enhanced expression is sometimes selected for (to neutralize toxic compounds), but sometimes attenuated expression is selected (to avoid production of toxic intermediates). In the human genome, many of the polymorphisms have a medical significance, determining a personal profile of drug metabolism and tendencies to develop cancer.

Bell, G. Evolutionary rescue and the limits of adaptation. Philosophical Transactions of the Royal Society B 368, 20120080.
Daborn, P.J., Yen, J.L., Bogwitz, M.R., Le Goff, G., Feil, E., Jeffers, S., Tijet, N., Perry, T., Heckel, D., Batterham, P., Feyereisen, R., Wilson, T.G., Ffrench-Constant, R.H. A single P450 allele associated with insecticide resistance in Drosophila. Science 297, 2253-2256.
Feyereisen, R. Insect P450 enzymes. Annual Review of Entomology 44, 507-533.
Goldstone, H.M.H., Stegeman, J.J. A revised evolutionary history of the CYP1A subfamily: gene duplication, gene conversion and positive selection. Journal of Molecular Evolution 62, 708-717.
Kubota, A., Iwata, H., Goldstone, H.M.H., Kim, E.-Y., Stegeman, J.J., Tanabe, S. Cytochrome P450 1A1 and 1A5 in common cormorant (Phalacrocorax carbo): evolutionary relationships and functional implications associated with dioxin and related compounds. Toxicological Sciences 92, 394-408.
Reid, N.M., Proestou, D.A., Clark, B.W., Warren, W.C., Colbourne, J.K., Shaw, J.R., Hahn, M., Nacci, D., Oleksiak, M.F., Crawford, D.L., Whitehead, A. The genomic landscape of rapid repeated evolutionary adaptation to toxic pollution in wild fish. Science 354, 1305-1308.
Preissner, S.C., Hoffmann, M.F., Preissner, R., Dunkel, R., Gewiess, A., Preissner, S. Polymorphic cytochrome P450 enzymes (CYPs) and their role in personalized therapy. PLoS ONE 8, e82562.
Roszak, A., Lianeri, M., Sowinska, A., Jagodzinski, P.P. CYP1A1 Ile462Val polymorphism as a risk factor in cervical cancer development in the Polish populations. Molecular Diagnosis and Therapy 18, 445-450.
Taylor, M., Feyereisen, R. Molecular biology and evolution of resistance to toxicants. Molecular Biology and Evolution 13, 719-734.
Walker, C.H., Ronis, M.J. The monooxygenases of birds, reptiles and amphibians. Xenobiotica 19, 1111-1121.
Zhou, S.-F., Liu, J.-P., Chowbay, B. Polymorphism of human cytochrome P450 enzymes and its clinical impact. Drug Metabolism Reviews 41, 89-295.

Comment upon: "In the human cytochrome P450 gene CYP1A1, 133 single nucleotide polymorphisms have been described, of which 23 are non-synonymous".
Some populations of killifish along the Atlantic coast of the U.S. show very high resistance against organic contaminants such as dioxin-like PCBs. Genetic research has shown that these resistant populations have a deletion in the gene encoding the aryl hydrocarbon receptor (Ahr).
Explain why such a mutation can cause resistance to dioxin-like PCBs.
Comment upon the concept of "evolutionary rescue from pollution", i.e. the evolution of tolerance as a way to diminish the toxic effects of pollution.
4.3: Toxicity Testing
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/04%3A_Toxicology/4.03%3A_Toxicity_Testing
Author: Kees van Gestel
Reviewer: Michiel Kraak
Learning objectives: You should be able to
Keywords: single-species toxicity tests, test species selection, concentration-response relationships, endpoints, bioaccumulation testing, epidemiology, standardization, quality control, transcriptomics, metabolomics

Laboratory toxicity tests may provide insight into the potential of chemicals to bioaccumulate in organisms and into their hazard, the latter usually being expressed as toxicity values derived from concentration-response relationships. Section 4.3.1 on Bioaccumulation testing describes how to perform tests to assess the bioaccumulation potential of chemicals in aquatic and terrestrial organisms, under static and dynamic exposure conditions. Basic to toxicity testing is the establishment of a concentration-response relationship, which relates the endpoint measured in the test organisms to exposure concentrations. Section 4.3.2 on Concentration-response relationships elaborates on the calculation of relevant toxicity parameters, like the median lethal concentration (LC50) and the median effective concentration (EC50), from such toxicity tests. It also discusses the pros and cons of different methods for analyzing data from toxicity tests.

Several issues have to be addressed when designing toxicity tests that should enable assessing the environmental or human health hazard of chemicals. This concerns, among others, the selection of test organisms (see section 4.3.4 on the Selection of test organisms for ecotoxicity testing), exposure media, test conditions, test duration and endpoints, but it also requires clear criteria for checking the quality of the toxicity tests performed (see below). Different whole-organism endpoints that are commonly used in standard toxicity tests, like survival, growth, reproduction or avoidance behaviour, are discussed in section 4.3.3 on Endpoints. Sections 4.3.4 to 4.3.7 focus on the selection and performance of tests with organisms representative of aquatic and terrestrial ecosystems. This includes microorganisms (section 4.3.6), plants (section 4.3.5), invertebrates (section 4.3.4) and vertebrate test organisms (e.g. fish: section 4.3.4 on ecotoxicity tests, and birds: section 4.3.7). Testing of vertebrates, including fish (section 4.3.4) and birds (section 4.3.7), is subject to strict regulations, aimed at reducing the use of test animals. Data on the potential hazard of chemicals to human health therefore preferably have to be obtained in other ways, like by using in vitro test methods (section 4.3.8), by using data from post-registration monitoring of exposed humans (section 4.3.9 on Human toxicity testing), or from epidemiological analysis of exposed humans (section 4.3.10).

Traditionally, toxicity tests focus on whole-organism endpoints, with survival, growth and reproduction being the most measured parameters (section 4.3.3). In case of vertebrate toxicity testing, other endpoints may also be used, addressing effects at the level of organs or tissues (section 4.3.9 on human toxicity testing). Behavioural (e.g. avoidance behaviour) and biochemical endpoints, like enzyme activity, are also regularly included in toxicity testing with vertebrates and invertebrates (sections 4.3.3, 4.3.4, 4.3.7, 4.3.9). With the rise of molecular biology, novel techniques have become available that may provide additional information on the effects of chemicals.
Molecular tools may, for instance, be applied in molecular epidemiology (section 4.3.11) to find causal relationships between health effects and the exposure to chemicals. Toxicity testing may also use gene expression responses (transcriptomics; section 4.3.12) or changes in metabolism (metabolomics; section 4.3.13) in relation to chemical exposures, to help unravel the mechanism(s) of action of chemicals. A major challenge still is to explain whole-organism effects from such molecular responses.

The standardization of tests is organized by international bodies like the Organisation for Economic Co-operation and Development (OECD), the International Organization for Standardization (ISO), and ASTM International (formerly known as the American Society for Testing and Materials). Standardization aims at reducing variation in test outcomes by carefully describing the methods for culturing and handling the test organisms, the procedures for performing the test, the properties and composition of test media, the exposure conditions and the analysis of the data. Standardized test guidelines are usually based on extensive testing of a method by different laboratories in a so-called round-robin test. Regulatory bodies generally require that toxicity tests supporting the registration of new chemicals are performed according to internationally standardized test guidelines. In Europe, for instance, all toxicity tests submitted within the framework of REACH have to be performed according to the OECD guidelines for the testing of chemicals (see section on Regulation of chemicals).

Since toxicity tests are performed with living organisms, this inevitably leads to (biological) variation in outcomes. Coping with this variation requires sufficient replication, careful test designs and a good choice of endpoints (section 4.3.3) to enable proper estimates of relevant toxicity data. In order to control the quality of the outcome of toxicity tests, several criteria have been developed, which mainly apply to the performance of the test organisms in the non-exposed controls. These criteria may, for example, require a minimum percentage survival of the control organisms, a minimum growth rate or number of offspring produced by the controls, and limited variation (e.g. <30%) of the replicate control growth or reproduction data (sections 4.3.4, 4.3.5, 4.3.6, 4.3.7). When tests do not meet these criteria, the outcome is open to doubt, as for instance poor control survival makes it hard to draw sound conclusions on the effect of the test chemical on this endpoint. As a consequence, tests that do not meet these validity criteria may not be accepted by other scientists and by regulatory authorities.

In case the test chemical is added to the test medium using a solvent, toxicity tests should also include a solvent control, in addition to a regular non-exposed control (see section 4.3.4 on the selection of test organisms for ecotoxicity testing). In case the response in the solvent control differs significantly from that in the negative control, the solvent control will be used as the control for analyzing the effects of the test chemical. The negative control will then only be used to check if the validity criteria have been met and to monitor the condition of the test organisms. In case the responses in the negative control and the solvent control do not differ significantly, both controls can be pooled for the data analysis. Most test guidelines also require frequent testing of a positive control, a chemical with known toxicity, to check that long-term culturing of the test organisms does not lead to changes in their sensitivity.
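As an illustration of how the validity criteria described above can be checked programmatically, here is a minimal sketch; the thresholds used are illustrative, as the actual criteria differ per test guideline.

```python
import statistics

def controls_valid(control_survival, control_repro, min_survival=0.9, max_cv=0.30):
    """Check two common validity criteria on the untreated controls:
    a minimum fraction of surviving organisms, and limited variation
    (coefficient of variation) among replicate reproduction counts."""
    cv = statistics.stdev(control_repro) / statistics.mean(control_repro)
    return control_survival >= min_survival and cv <= max_cv

# four control replicates: 95% survival, juveniles produced per replicate
print(controls_valid(0.95, [102, 88, 95, 110]))  # True
```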
What are the main endpoints in toxicity testing?
Which are the main groups of organisms used in toxicity testing?
Why is standardization of methods for toxicity testing required?
Which elements are included in the quality control of toxicity tests?

Author: Kees van Gestel
Reviewers: Joop Hermens, Michiel Kraak, Susana Loureiro
Learning objectives: You should be able to
Keywords: bioconcentration, bioaccumulation, uptake and elimination kinetics, test methods, soil, water

Bioaccumulation is defined as the uptake of chemicals in organisms from the environment. The degree of bioaccumulation is usually indicated by the bioconcentration factor (BCF) in case the exposure is via water, or the biota-to-soil/sediment accumulation factor (BSAF) for exposure in soil or sediment (see section on Bioaccumulation). Because of the potential risk of food-chain transfer, experimental determination of the bioaccumulation potential of chemicals is usually required in case of a high lipophilicity (log Kow > 3), unless the chemical has a very low persistence. For very persistent chemicals, experimental determination of bioaccumulation potential may already be triggered at log Kow > 2. The experimental determination of BCF and BSAF values makes use of static or dynamic exposure systems.

In static tests, the medium is dosed once with the test chemical, and organisms are exposed for a certain period of time, after which both the organisms and the test medium are analyzed for the test chemical. The BCF or BSAF is calculated from the measured concentrations. There are a few concerns with this way of bioaccumulation testing. First, exposure concentrations may decrease during the test, e.g. due to (bio)degradation, volatilization, sorption to the walls of the test container, or uptake of the test compound by the test organisms. As a consequence, the concentration in the test medium measured at the start of the test may not be indicative of the actual exposure during the test. To take this into account, exposure concentrations can be measured at the start and the end of the test and also at some intermediate time points. Body concentrations in the test organisms may then be related to time-weighted-average (TWA) exposure concentrations. Alternatively, to overcome the problem of decreasing concentrations in aquatic test systems, continuous-flow systems or passive dosing techniques can be applied. Such methods, however, are not applicable to soil or sediment tests, where repeated transfer of organisms to freshly spiked medium is the only way to guarantee more or less constant exposure concentrations in case of rapidly degrading compounds. To avoid that the uptake of the test chemical by the test organisms leads to decreasing exposure concentrations, the amount of biomass per volume or mass of test medium should be sufficiently low. Second, it is uncertain whether steady state or equilibrium is reached at the end of the exposure period. If this is not the case, the resulting BSAF or BCF values may underestimate the bioaccumulation potential of the chemical. To tackle this problem, a dynamic test may be run, in which BSAF or BCF values are derived from uptake and elimination rate constants (see below).
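A note on the time-weighted-average exposure mentioned above for static designs: the TWA concentration follows from integrating the measured concentrations over time, e.g. with the trapezoidal rule. A minimal sketch with illustrative data:

```python
import numpy as np

t = np.array([0, 2, 7, 14])          # sampling days
c = np.array([10.0, 7.9, 5.1, 3.0])  # measured exposure concentrations (mg/L)

# trapezoidal integration of the concentration-time curve, divided by duration
twa = np.trapz(c, t) / (t[-1] - t[0])
print(f"TWA concentration = {twa:.2f} mg/L")
```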
Such uncertainties also apply to BCF and BSAF values obtained by analyzing organisms collected from the field and comparing body concentrations with exposure levels in the environment. Data from field-exposed organisms carry a large uncertainty on the one hand, as it remains unclear whether equilibrium was reached; on the other hand, they do reflect exposure over time under fluctuating but realistic exposure conditions.

Dynamic tests, also indicated as uptake/elimination or toxicokinetic tests, may overcome some, but not all, of the disadvantages of static tests. In dynamic tests, organisms are exposed for a certain period of time in spiked medium to assess the uptake of the chemical, after which they are transferred to clean medium for determining the elimination of the chemical. During both the uptake and the elimination phase, organisms are sampled and analyzed for the test chemical at different points in time. The medium is also sampled frequently to check for a possible decline of the exposure concentration during the uptake phase. Also in dynamic tests, keeping exposure concentrations as constant as possible is a major challenge, requiring frequent renewal (see above). Toxicokinetic tests should also include controls, consisting of test organisms incubated in clean medium and transferred to clean medium at the same time the organisms from the treated medium are transferred. Such controls may help identify possible irregularities in the test, such as poor health of the test organisms or unexpected (cross-)contamination occurring during the test.

The concentrations of the chemical measured in the test organisms are plotted against the exposure time, and a first-order one-compartment model is fitted to the data to estimate the uptake and elimination rate constants. The (dynamic) BSAF or BCF value is then determined as the ratio of the uptake and elimination rate constants (see section on Bioconcentration and kinetic models). In a toxicokinetics test, usually replicate samples are taken at each point in time, both during the uptake and the elimination phase. The frequency of sampling may be higher at the beginning than at the end of both phases. Since the analysis of toxicokinetics data using the one-compartment model is regression based, it is generally preferred to have more points in time rather than many replicates per sampling time. From that perspective, often no more than 3-4 replicates are used per sampling time, and 5-6 sampling times for the uptake and elimination phases each. Preferably, replicates are independent, so destructively sampled at a specific sampling point. Especially in aquatic ecotoxicology, mass exposures are sometimes used, having all test organisms in one or a few replicate test containers. In this case, at each sampling time some replicate organisms are taken from the test container(s), and at the end of the uptake phase all remaining organisms are transferred to (a) container(s) with clean medium. An example is a test on the uptake and elimination kinetics of molybdenum in the earthworm Eisenia andrei (Diez-Ortiz et al.). From the ratio of the uptake rate constant (k1) to the elimination rate constant (k2), a BSAF of approximately 1.0 could be calculated, suggesting a low bioaccumulation potential of Mo in earthworms in the soil tested.
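A minimal sketch of fitting such a first-order one-compartment model is given below; the body-residue data are illustrative, and a constant exposure concentration is assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

C_W = 1.0   # exposure concentration, assumed constant (illustrative units)
T_C = 14.0  # day of transfer to clean medium (end of uptake phase)

def one_compartment(t, k1, k2):
    """First-order one-compartment model: uptake phase, then elimination."""
    uptake = (k1 / k2) * C_W * (1 - np.exp(-k2 * t))
    elimination = (k1 / k2) * C_W * (np.exp(-k2 * (t - T_C)) - np.exp(-k2 * t))
    return np.where(t <= T_C, uptake, elimination)

# illustrative mean body residues (mg/kg) at each sampling day
t = np.array([1, 2, 4, 7, 10, 14, 15, 16, 18, 21, 24, 28], dtype=float)
c_body = np.array([0.5, 0.9, 1.6, 2.4, 2.9, 3.4, 3.0, 2.7, 2.1, 1.5, 1.0, 0.6])

(k1, k2), _ = curve_fit(one_compartment, t, c_body, p0=[0.5, 0.1])
print(f"k1 = {k1:.3f} /d, k2 = {k2:.3f} /d, kinetic BCF = k1/k2 = {k1 / k2:.2f}")
```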
Another way of assessing the bioaccumulation potential of chemicals in organisms includes the use of radiolabeled chemicals, which may facilitate easy detection of the test chemical. The use of radiolabeled chemicals may, however, overestimate bioaccumulation potential when no distinction is made between the parent compound and potential metabolites. In case of metals, stable isotopes may also offer an opportunity to assess bioaccumulation potential. Such an approach was applied to distinguish the role of dissolved (ionic) Zn from that of ZnO nanoparticles in the bioaccumulation of Zn in earthworms. Earthworms were exposed to soils spiked with mixtures of 64ZnCl2 and 68ZnO nanoparticles. The results showed that dissolution of the nanoparticles was fast and that the earthworms mainly accumulated Zn present in ionic form in the soil solution (Laycock et al., 2017).

Standard test guidelines for assessing the bioaccumulation (kinetics) of chemicals have been published by the Organization for Economic Cooperation and Development (OECD) for sediment-dwelling oligochaetes (OECD, 2008), for earthworms/enchytraeids in soil (OECD, 2010) and for fish (OECD, 2012).

Diez-Ortiz, M., Giska, I., Groot, M., Borgman, E.M., Van Gestel, C.A.M. Influence of soil properties on molybdenum uptake and elimination kinetics in the earthworm Eisenia andrei. Chemosphere 80, 1036-1043.
Laycock, A., Romero-Freire, A., Najorka, J., Svendsen, C., Van Gestel, C.A.M., Rehkämper, M. Novel multi-isotope tracer approach to test ZnO nanoparticle and soluble Zn bioavailability in joint soil exposures. Environmental Science and Technology 51, 12756-12763.
OECD. Guidelines for the testing of chemicals No. 315: Bioaccumulation in Sediment-dwelling Benthic Oligochaetes. Organization for Economic Cooperation and Development, Paris.
OECD. Guidelines for the testing of chemicals No. 317: Bioaccumulation in Terrestrial Oligochaetes. Organization for Economic Cooperation and Development, Paris.
OECD. Guidelines for the testing of chemicals No. 305: Bioaccumulation in Fish: Aqueous and Dietary Exposure. Organization for Economic Cooperation and Development, Paris.

Why may BSAF and BCF values obtained from static tests not reflect the real bioaccumulation potential of chemicals, even if it was possible to keep exposure concentrations constant?
Describe the experimental design of a test for assessing the uptake and elimination kinetics of chemicals in test organisms, in soil or water.
a. What experimental problem may be encountered when determining the bioaccumulation of chemicals in terrestrial or aquatic organisms?
b. And how may this problem be overcome in case of aquatic organisms?
c. Is such a solution also possible for terrestrial organisms?

Author: Kees van Gestel
Reviewers: Michiel Kraak, Thomas Backhaus
Learning goals: You should be able to
Keywords: concentration-related effects, measure of lethal effect, measure of sublethal effect, regression-based analysis

A key paradigm in human and environmental toxicology is that the dose determines the effect. This paradigm goes back to Paracelsus, who stated that any chemical is toxic, but that the dose determines the severity of the effect. In practice, this paradigm is used to quantify the toxicity of chemicals. For that purpose, toxicity tests are performed in which organisms (microbes, plants, invertebrates, vertebrates) or cells are exposed to a range of concentrations of a chemical.
Such tests also include incubations in non-treated control medium. The response of the test organisms is determined by monitoring selected endpoints, like survival, growth, reproduction or other parameters (see section on Endpoints). Endpoints can increase with increasing exposure concentration (e.g. mortality) or decrease (e.g. survival, reproduction, growth). The response of the endpoints is plotted against the exposure concentration, and so-called concentration-response curves are fitted, from which measures of the toxicity of the chemical can be calculated.

The unit of exposure, the concentration or dose, may be expressed differently depending on the exposed subject. Dose is expressed as mg/kg body weight in human toxicology and following single (oral or dermal) exposure events in mammals or birds. For other orally or dermally exposed (invertebrate) organisms, like honey bees, the dose may be expressed per animal, e.g. µg/bee. Environmental exposures generally express exposure as the concentration in mg/kg food, mg/kg soil, mg/L surface, drinking or ground water, or mg/m3 air. Ultimately, it is the concentration (the number of molecules of the chemical) at the target site that determines the effect. Consequently, expressing exposure concentrations on a molar basis (mol/L, mol/kg) is preferred, but less frequently applied.

At low concentrations or doses, the endpoint measured is not affected by exposure. At increasing concentrations, the endpoint shows a concentration-related decrease or increase. From this decrease or increase, different measures of toxicity can be calculated:

ECx/EDx: the "effective concentration" or "effective dose"; "x" denotes the percentage effect relative to an untreated control. This should always be accompanied by the selected endpoint.
LCx/LDx: the same, but specified for one specific endpoint: lethality.
EC50/ED50: the median effective concentration or dose, with "x" set to 50%. This is the most common estimate used in environmental toxicology. This should always be accompanied by the selected endpoint.
LC50/LD50: the same, but specified for one specific endpoint: lethality.

The terms LCx and LDx refer to the fraction of animals responding (dying), while ECx and EDx indicate the degree of reduction of the measured parameter. The ECx/EDx describe the overall average performance of the test organisms in terms of the parameter measured (e.g., growth, reproduction). The meaning of an LCx/LDx seems obvious: it refers to lethality of the test chemical. The use of ECx/EDx, however, always requires explicit mentioning of the endpoint it concerns.

Concentration-response models usually distinguish quantal and continuous data. Quantal data refer to constrained ("yes/no") responses and include, for instance, survival data, but may also be applicable to avoidance responses. Continuous data refer to parameters like growth, reproduction (number of juveniles or eggs produced) or biochemical and physiological measurements. A crucial difference between quantal and continuous responses is that quantal responses are population-level responses, while continuous responses can also be observed at the level of individuals. An organism cannot be half-dead, but it can certainly grow at only half the control rate.

Concentration-response models are usually sigmoidal on a log scale and are characterized by four parameters: minimum, maximum, slope and position. The minimum response is often set to the control level or to zero.
The maximum response is often set to 100%, in relation to the control or to the biologically plausible maximum (e.g. 100% survival). The slope describes the steepness of the curve and determines the distance between the EC50 and the EC10. The position parameter indicates where on the x-axis the curve is placed. The position may equal the EC50, in which case it is the turning point of the curve; this holds, however, only for models that are symmetrical around the EC50. In environmental toxicology, the parameter values are usually presented with 95% confidence intervals indicating the margins of uncertainty. Statistical software packages are used to calculate these 95% confidence intervals. Regression-based test designs require several test concentrations, and the results depend on the statistical model used, especially in the low-effect region. Sometimes it is simply impossible to use a regression-based design because the endpoint does not cover a sufficiently high effect range (>50% effect is typically needed for an accurate fit).

In case of quantal responses, especially survival, the slope of the concentration-response curve is an indication of the sensitivity distribution of the individuals within the population of test organisms. For a very homogeneous population of laboratory test animals having the same age and body size, a steeper concentration-response curve is expected than when using field-collected animals representing a wider range of ages and body sizes.

In addition to ECx values, toxicity tests may also be used to derive other measures of toxicity:
NOEC/NOEL: No-Observed Effect Concentration or Effect Level
LOEC/LOEL: Lowest Observed Effect Concentration or Effect Level
NOAEL: No-Observed Adverse Effect Level. Same as the NOEL, but focusing on effects that are negative (adverse) compared to the control.
LOAEL: Lowest Observed Adverse Effect Level. Same as the LOEL, but focusing on effects that are negative (adverse) compared to the control.

Where the ECx values are derived by curve fitting, the NOEC and LOEC are derived by a statistical test comparing the response at each test concentration with that of the controls. The NOEC is defined as the highest test concentration at which the response does not significantly differ from the control. The LOEC is the next higher concentration, so the lowest concentration tested at which the response significantly differs from the control. Usually an Analysis of Variance (ANOVA) is used, combined with a post-hoc test, e.g. Tukey, Bonferroni or Dunnett, to determine the NOEC and LOEC.

Most available toxicity data are NOECs, hence they are the most common values found in databases and are therefore used for regulatory purposes. From a scientific point of view, however, there are quite some disadvantages related to the use of NOECs: the NOEC may, due to its sensitivity to variation and test design, sometimes be equal to or even higher than the EC50. Because of the disadvantages of the NOEC, it is recommended to use measures of toxicity derived by fitting a concentration-response curve to the data obtained from a toxicity test. As an alternative to the NOEC, usually an EC10 or EC20 is used, which has the advantages that it is obtained using all data from the test and that it has a 95% confidence interval indicating its reliability. Having a 95% confidence interval also allows a statistical comparison of ECx values, which is not possible for NOEC values.
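A minimal sketch of deriving an EC50 and EC10 by fitting a four-parameter log-logistic model is given below; the concentration and reproduction data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, bottom, top, ec50, slope):
    """Four-parameter log-logistic concentration-response model."""
    return bottom + (top - bottom) / (1 + (c / ec50) ** slope)

conc = np.array([0.1, 0.32, 1.0, 3.2, 10.0, 32.0])    # mg/L (illustrative)
resp = np.array([49.2, 48.1, 44.0, 30.5, 12.3, 3.1])  # juveniles per female

(bottom, top, ec50, slope), pcov = curve_fit(
    log_logistic, conc, resp, p0=[0.0, 50.0, 3.0, 2.0])
print(f"EC50 = {ec50:.2f} mg/L (s.e. {np.sqrt(pcov[2, 2]):.2f})")

# any ECx follows from the fitted parameters
def ecx(x):
    return ec50 * (x / (100 - x)) ** (1 / slope)

print(f"EC10 = {ecx(10):.2f} mg/L")
```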
Which four parameters describe a dose-response curve?
What would be the preferred unit of measures of toxicity (e.g. EC20 or EC50) describing the effect of a chemical on the survival of soil invertebrates exposed in a standardized test soil?
Why would you expect that using an age-synchronized laboratory population of test organisms results in a much steeper concentration-response curve for effects of a chemical on survival than a field-collected population of non-synchronized individuals?
Why are EC10 values preferred over NOECs when using the outcomes of toxicity tests for the risk assessment of chemicals?

Author: Michiel Kraak
Reviewers: Kees van Gestel, Carlos Barata
Learning objectives: You should be able to
Keywords: mortality, survival, sublethal endpoints, growth, reproduction, behaviour, photosynthesis

Most toxicity tests performed are short-term, high-dose experiments: acute tests in which mortality is often the only endpoint. Mortality, however, is a crude parameter, responding only to relatively high and therefore often environmentally irrelevant toxicant concentrations. At much lower and therefore environmentally more relevant toxicant concentrations, organisms may suffer from a wide variety of sublethal effects. Hence, toxicity tests gain ecological realism if sublethal endpoints are addressed in addition to mortality. Mortality can be determined in both acute and chronic toxicity tests. In acute tests, mortality is often the only feasible endpoint, although some acute tests take long enough to also measure sublethal endpoints, especially growth. Generally though, this is restricted to chronic toxicity tests, in which a wide variety of sublethal endpoints can be assessed in addition to mortality (Table 1).

Mortality at the end of the exposure period is assessed by simply counting the number of surviving individuals, but it can also be expressed either as a percentage of the initial number of individuals or as a percentage of the corresponding control. The increasing mortality with increasing toxicant concentrations can be plotted in a dose-response relationship from which the LC50 can be derived (see section on Concentration-response relationships). If assessing mortality is non-destructive, for instance when it can be done by visual inspection, it can be scored at different time intervals during a toxicity test. Although repeated observations may take some effort, they generally generate valuable insights into the course of the intoxication process over time.
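When mortality is expressed relative to the control, a common approach is Abbott's correction for background mortality; a minimal sketch with illustrative numbers:

```python
def abbott_corrected_mortality(treated_dead, treated_n, control_dead, control_n):
    """Mortality corrected for background (control) mortality (Abbott's formula)."""
    p_treated = treated_dead / treated_n
    p_control = control_dead / control_n
    return (p_treated - p_control) / (1 - p_control)

# 14/20 dead in the treatment, 2/20 dead in the control -> 0.67
print(round(abbott_corrected_mortality(14, 20, 2, 20), 2))
```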
In acute toxicity tests it is difficult to assess other endpoints than mortality, since effects of toxicants on sublethal endpoints like growth and reproduction need much longer exposure times to become expressed (see section on Chronic toxicity). Incorporating sublethal endpoints in acute toxicity tests thus requires rapidly responding endpoints. Photosynthesis of plants and behaviour of animals are elegant, sensitive and rapidly responding endpoints that can be incorporated into acute toxicity tests (Table 1).

Behaviour is an understudied but sensitive and ecologically relevant endpoint in ecotoxicity testing, since subtle changes in animal behaviour may affect trophic interactions and ecosystem functioning. Several studies reported effects on animal behaviour at concentrations orders of magnitude lower than lethal concentrations. Van der Geest et al. showed that changes in the ventilation behaviour of fifth-instar larvae of the caddisfly Hydropsyche angustipennis occurred at approximately 150 times lower Cu concentrations than mortality of first-instar larvae. Avoidance behaviour of the amphipod Corophium volutator towards contaminated sediments was 1,000 times more sensitive than survival (Hellou et al., 2008). Chevalier et al. tested the effect of twelve compounds covering different modes of action on the swimming behaviour of daphnids and observed that most compounds induced an early and significant increase in swimming speed at concentrations near or below the 10% effective concentration (48-h EC10) of the acute immobilization test. Barata et al. reported that the short-term (24 h) Daphnia magna feeding inhibition assay was on average 50 times more sensitive than acute standardized tests when assessing the toxicity of a mixture of 16 chemicals in different water types. These and many other examples all show that organisms may exhibit altered behaviour at relatively low and therefore often environmentally relevant toxicant concentrations. Behavioural responses to toxicant exposure can also be very fast, allowing organisms to avoid further exposure and subsequent bioaccumulation and toxicity. A wide array of such avoidance responses has been incorporated in ecotoxicity testing (Araújo et al., 2016), including the avoidance of contaminated soil by earthworms (Eisenia fetida) (Rastetter & Gerhardt, 2018), feeding inhibition of the bivalve Corbicula fluminea (Castro et al., 2018), the aversive swimming response to silver nanoparticles by the unicellular green alga Chlamydomonas reinhardtii (Mitzel et al., 2017), and that of daphnids to twelve compounds covering different modes of toxic action (Chevalier et al., 2015).

Photosynthesis is a sensitive and well-studied endpoint that can be applied to identify hazardous effects of herbicides on primary producers. In bioassays with plants or algae, photosynthesis is often quantified using pulse amplitude modulation (PAM) fluorometry, a rapid measurement technique suitable for quick screening purposes. Algal photosynthesis is preferably quantified in light-adapted cells as the effective photosystem II (PSII) efficiency, ΦPSII, typically calculated from the steady-state and maximum fluorescence in light as (Fm′ - F)/Fm′ (Ralph et al., 2007; Sjollema et al., 2014). This endpoint responds most sensitively to herbicide activity, as the most commonly applied herbicides either directly or indirectly affect PSII (see section on Herbicide toxicity).

Besides mortality, growth and reproduction are the most commonly assessed endpoints in ecotoxicity tests (Table 1). Growth can be measured in two ways, as an increase in length and as an increase in weight. Often only the length or weight at the end of the exposure period is determined. This, however, reflects both the growth before and during exposure. It is therefore more informative to measure length or weight at the beginning as well as at the end of the exposure, and then subtract the individual or average initial length or weight from the final individual length or weight. Growth during the exposure period may subsequently be expressed as a percentage of the initial length or weight. Ideally, the initial length or weight is measured on the same individuals that will be exposed, but this is not feasible when organisms have to be sacrificed for the measurement, as is especially the case for dry weight. In that case, a subsample of the individuals is set apart at the beginning of the test.
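A minimal sketch of the growth calculation just described, using the mean of an initial subsample as the reference (all weights illustrative):

```python
import numpy as np

initial_dw = np.array([5.1, 4.8, 5.3, 5.0])          # subsample dry weights at t=0 (mg)
final_dw = np.array([9.2, 8.7, 7.1, 6.4, 5.8, 5.2])  # exposed individuals at test end (mg)

# growth during exposure as a percentage of the mean initial dry weight
growth_pct = (final_dw - initial_dw.mean()) / initial_dw.mean() * 100
print(np.round(growth_pct, 1))
```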
Reproduction is a sensitive and ecologically relevant endpoint in chronic toxicity tests. It is an integrated parameter, incorporating many different aspects of the reproduction process that can be assessed one by one. The first reproduction parameter is the day of first reproduction. This is an ecologically very relevant parameter, as delayed reproduction obviously has strong implications for population growth. The next reproduction parameter is the number of offspring. In this case the number of eggs, seeds, neonates or juveniles can be counted. For organisms that produce egg ropes or egg masses, both the number of egg masses and the number of eggs per mass can be determined. Lastly, the quality of the offspring can be quantified. This can be achieved by determining their physiological status (e.g. fat content), their size, their survival and finally their chance of reaching adulthood.

Table 1. Whole-organism endpoints often used in toxicity tests. Quantal refers to a yes/no endpoint, while graded refers to a continuous endpoint (see section on Concentration-response relationships).

Endpoint | Acute/Chronic | Quantal/Graded
mortality | both | quantal
behaviour | acute | graded
avoidance | acute | quantal
photosynthesis | acute | graded
growth (length and weight) | mostly chronic | graded
reproduction | chronic | graded

A wide variety of other, less commonly applied sublethal whole-organism endpoints can be assessed upon chronic exposure. The possibilities are endless, with some specific endpoints being designed for the effect of a single compound only, or species-specific endpoints, sometimes described for only one organism. Sub-organismal endpoints are described in a separate chapter (see section on Molecular endpoints in toxicity tests).

Araújo, C.V.M., Moreira-Santos, M., Ribeiro, R. Active and passive spatial avoidance by aquatic organisms from environmental stressors: A complementary perspective and a critical review. Environment International 92-93, 405-415.
Barata, C., Alanon, P., Gutierrez-Alonso, S., Riva, M.C., Fernandez, C., Tarazona, J.V. A Daphnia magna feeding bioassay as a cost effective and ecological relevant sublethal toxicity test for environmental risk assessment of toxic effluents. Science of the Total Environment 405, 78-86.
Castro, B.B., Silva, C., Macario, I.P.E., Oliveira, B., Concalves, F., Pereira, J.L. Feeding inhibition in Corbicula fluminea (OF Muller, 1774) as an effect criterion to pollutant exposure: Perspectives for ecotoxicity screening and refinement of chemical control. Aquatic Toxicology 196, 25-34.
Chevalier, J., Harscoët, E., Keller, M., Pandard, P., Cachot, J., Grote, M. Exploration of Daphnia behavioral effect profiles induced by a broad range of toxicants with different modes of action. Environmental Toxicology and Chemistry 34, 1760-1769.
Hellou, J., Cheeseman, K., Desnoyers, E., Johnston, D., Jouvenelle, M.L., Leonard, J., Robertson, S., Walker, P. A non-lethal chemically based approach to investigate the quality of harbor sediments. Science of the Total Environment 389, 178-187.
Mitzel, M.R., Lin, N., Whalen, J.K., Tufenkji, N. Chlamydomonas reinhardtii displays aversive swimming response to silver nanoparticles. Environmental Science: Nano 4, 1328-1338.
Ralph, P.J., Smith, R.A., Macinnis-Ng, C.M.O., Seery, C.R. Use of fluorescence-based ecotoxicological bioassays in monitoring toxicants and pollution in aquatic systems: Review. Toxicological and Environmental Chemistry 89, 589-607.
Rastetter, N., Gerhardt, A. Continuous monitoring of avoidance behaviour with the earthworm Eisenia fetida. Journal of Soils and Sediments 18, 957-967.
Sjollema, S.B., Van Beusekom, S.A.M., Van der Geest, H.G., Booij, P., De Zwart, D., Vethaak, A.D., Admiraal, W. Laboratory algal bioassays using PAM fluorometry: Effects of test conditions on the determination of herbicide and field sample toxicity. Environmental Toxicology and Chemistry 33, 1017-1022.
Van der Geest, H.G., Greve, G.D., De Haas, E.M., Scheper, B.B., Kraak, M.H.S., Stuijfzand, S.C., Augustijn, C.H., Admiraal, W. Survival and behavioural responses of larvae of the caddisfly Hydropsyche angustipennis to copper and diazinon. Environmental Toxicology and Chemistry 18, 1965-1971.

What is the importance of incorporating sublethal endpoints in acute and chronic toxicity tests?
Name one animal and one plant specific sublethal endpoint that can be incorporated in acute toxicity tests.
Name the two most commonly assessed endpoints in chronic toxicity tests.

Author: Michiel Kraak
Reviewers: Kees van Gestel, Jörg Römbke
Learning objectives: You should be able to
Keywords: test organism, standardized laboratory ecotoxicity tests, environmental compartment, habitat, different trophic levels

Standardized laboratory ecotoxicity tests require constant test conditions, standardized endpoints (see section on Endpoints) and good performance in control treatments. In fact, in reliable, reproducible and easy-to-perform toxicity tests, the test compound should be the only variable. This sets high demands on the choice of the test organisms. For a proper risk assessment, it is crucial that test species are representative of the community or ecosystem to be protected. Criteria for the selection of organisms to be used in toxicity tests have been summarized by Van Gestel et al. They include: 1. practical arguments, including feasibility, cost-effectiveness and rapidity of the test; 2. acceptability and standardisation of the tests, including the generation of reproducible results; and 3. ecological significance, including sensitivity, biological validity, etc. The most practical requirement is that the test organism should be easy to culture and maintain, but equally important is that the test species should be sensitive towards different stressors. These two main requirements are, however, frequently conflicting. Species that are easy to culture are often less sensitive, simply because they are mostly generalists, while sensitive species are often specialists, making it much harder to culture them. For scientific and societal support of the choice of the test organisms, they should preferably be both ecologically and economically relevant, or serve as flagship species, but again, these are opposite requirements. Economically relevant species, like crops and cattle, hardly play any role in natural ecosystems, while ecologically highly relevant species have no obvious economic value. This is reflected by the research efforts on these species, since much more is known about economically relevant species than about ecologically relevant species.

There is no species that is most sensitive to all pollutants. Which species is most sensitive depends on the mode of action and possibly also other properties of the chemical, the exposure route, its availability and the properties of the organism (e.g., presence of specific targets, physiology, etc.). It is therefore important to always test a number of species, with different life traits, functions, and positions in the food web.
According to Van Gestel et al., such a battery of test species should be:
1. Representative of the ecosystem to protect, so including organisms having different life-histories, representing different functional groups, different taxonomic groups and different routes of exposure;
2. Representative of responses relevant for the protection of populations and communities; and
3. Uniform, so all tests in a battery should be applicable to the same test media and apply to the same test conditions, e.g. the same range of pH values.

Each environmental compartment, water, air, soil and sediment, requires its specific set of test organisms. The most commonly applied test organisms are daphnids (Daphnia magna) for water, chironomids (Chironomus riparius) for sediments and earthworms (Eisenia fetida) for soil. For air, in the field of inhalation toxicology, humans and rodents are actually the most studied organisms. In ecotoxicology, air testing is mostly restricted to plants, concerning studies on toxic gasses. Besides the most commonly applied organisms, there is a long list of other standard test organisms for which test protocols are available (Table 1; OECD site).

Table 1. Non-exhaustive list of standard ecotoxicity test species, by environmental compartment and organism group ("species of choice" where the guideline leaves the species open).
Water: plants (two guidelines), algae (species of choice), cyanobacteria (species of choice), fish (two guidelines), an amphibian, an insect, a crustacean, snails (two guidelines)
Water-sediment: a plant, an insect, an oligochaete worm
Sediment: anaerobic bacteria (sewage sludge)
Soil: plants (species of choice), oligochaete worms (two guidelines), a collembolan, a mite, microorganisms (natural microbial community)
Dung: insects (two guidelines)
Air-soil: plants (species of choice)
Terrestrial: birds (species of choice), insects (three guidelines), a mite

The use of standard test organisms in standard ecotoxicity tests performed according to internationally accepted protocols strongly reduces the uncertainties in ecotoxicity testing. Yet, there are good reasons for deviating from these protocols. The species in Table 1 are listed according to their corresponding environmental compartment, but this listing ignores differences between ecosystems and habitats. Soils may differ extensively in composition, depending on e.g. the sand, clay or silt content, and in properties, e.g. pH and water content, each harbouring different species. Likewise, stagnant and running water have few species in common. This implies that based on ecological arguments there may be good reasons to select non-standard test organisms. Effects of compounds in streams can be better estimated with riverine insects than with daphnids, which inhabit stagnant water, while the compost worm Eisenia fetida is not necessarily the most appropriate species for sandy soils. The list of non-standard test organisms is of course endless, and if the methods are well documented in the open literature, there are no limitations to employing these alternative species. They do, however, involve experimental challenges, since non-standard test organisms may be hard to culture and maintain under laboratory conditions, and no standard protocols are available for the ecotoxicity test. Thus, increasing the ecological relevance of ecotoxicity tests also increases the logistical and experimental constraints (see chapter 6 on Risk assessment). The vast majority of toxicity tests is performed with a single test species, resulting in large margins of uncertainty concerning the hazardousness of compounds.
To reduce these uncertainties and to increase ecological relevance, it is advised to incorporate more test species belonging to different trophic levels, for water e.g. algae, daphnids and fish. For deriving environmental quality standards from species sensitivity distributions (see section on SSDs), toxicity data are required for a minimum of eight species belonging to different taxonomic groups. This obviously causes tension between the scientific requirements and the available financial resources.

OECD site. //www.oecd-ilibrary.org/enviro...stems_20745761.
Van Gestel, C.A.M., Léon, C.D., Van Straalen, N.M. Evaluation of soil fauna ecotoxicity tests regarding their use in risk assessment. In: Tarradellas, J., Bitton, G., Rossel, D. (Eds). Soil Ecotoxicology. CRC Press, Inc., Boca Raton: 291-317.

Name the requirements for suitable laboratory ecotoxicity test organisms.
List the most commonly used standard test organisms per environmental compartment.
Argue 1) the need for more than one test species, and 2) the need for non-standard test organisms.

Author: J. Arie Vonk
Reviewers: Michiel Kraak, Gertie Arts, Sergi Sabater
Learning objectives: You should be able to
Keywords: test organism, standardized laboratory ecotoxicity test, primary producers, algae, plants, environmental compartment, photosynthesis, growth

Photo-autotrophic primary producers use chlorophyll to convert CO2 and H2O into organic matter through photosynthesis under (sun)light. These primary producers are the basis of the food web and form an essential component of ecosystems. Besides serving as a food source, multicellular photo-autotrophs also form habitat for other primary producers (epiphytes) and many fauna species. Primary producers are a very diverse group, ranging from tiny unicellular picoplankton up to gigantic trees. In standardized ecotoxicity tests, primary producers are represented by (micro)algae, aquatic macrophytes and terrestrial plants. Since herbicides are the largest group of pesticides used globally to maintain high crop production in agriculture, it is important to assess their impact on primary producers (Wang & Freemark, 1995). However, concerning testing intensity, primary producers are understudied in comparison to animals.

Standardized laboratory ecotoxicity tests with primary producers require good control over test conditions, standardized endpoints (Arts et al., 2008; see the section on Endpoints) and growth in the controls (i.e. doubling of cell counts, length and/or biomass within the experimental period). Since the metabolism of primary producers is strongly influenced by light conditions, the availability of water and inorganic carbon (CO2 and/or HCO3- and CO32-), temperature and dissolved nutrient concentrations, all these conditions should be monitored closely. The general criteria for the selection of test organisms are described in the previous chapter (see the section on the Selection of ecotoxicity test organisms). For primary producers, the choice is mainly based on the available test guidelines, test species and the environmental compartment of concern. There are a number of ecotoxicity tests with a variety of primary producers standardized by different organizations, including the OECD and the USEPA (Table 1). Characteristic for most primary producers is that they grow in more than one environmental compartment (soil/sediment; water; air).
As a result, the routes of toxicant uptake by these photo-autotrophs can differ, depending on the chemical and on the compartment where exposure occurs (air, water, sediment/soil).
For both marine and freshwater ecosystems, standardized ecotoxicity tests are available for microalgae (unicellular micro-organisms, sometimes forming larger colonies), including the prokaryotic Cyanobacteria (blue-green algae) and the eukaryotic Chlorophyta (green algae) and Bacillariophyceae (diatoms). Macrophytes (macroalgae and aquatic plants) are multicellular organisms, the latter consisting of differentiated tissues, with a number of species included in standardized ecotoxicity tests. While macroalgae grow in the water compartment only, aquatic plants are divided into groups related to their growth form (emergent; free-floating; submerged and sediment-rooted; floating and sediment-rooted) and can extend from the sediment (roots and root-stocks) through the water into the air. Both macroalgae and aquatic plants comprise a wide range of taxa and are present in both marine and freshwater ecosystems.
Terrestrial higher plants are very diverse, ranging from small grasses to large trees. Plants included in standardized ecotoxicity tests comprise crop and non-crop species. An important distinction among terrestrial plants is that between dicots and monocots, since the two groups differ in their metabolic pathways, which may be reflected in a difference in sensitivity to contaminants.
Table 1. Open source standard guidelines for testing the effect of compounds on primary producers. All tests are laboratory tests, except those marked with *, which are (semi-)field tests.
Primary producer | Species | Compartment | Test number | Organisation
Microalgae & cyanobacteria | various species | Freshwater | 201 | OECD 2011
Microalgae & cyanobacteria | | Freshwater | 850.4550 | USEPA 2012
Microalgae & cyanobacteria | Pseudokirchneriella subcapitata | Freshwater, Marine water | 850.4500 | USEPA 2012
Floating macrophytes | Lemna spp. | Freshwater | 221 | OECD 2006
Floating macrophytes | Lemna spp. | Freshwater | 850.4400 | USEPA 2012
Submerged macrophytes | Myriophyllum spicatum | Freshwater | 238 | OECD 2014
Submerged macrophytes | Myriophyllum spicatum | Sediment (Freshwater) | 239 | OECD 2014
Aquatic plants* | not specified | Freshwater | 850.4450 | USEPA 2012
Terrestrial plants | wide variety of species | Air | 227 | OECD 2006
Terrestrial plants | wide variety of species | Air | 850.4150 | USEPA 2012
Terrestrial plants | wide variety of species (crops and non-crops) | Soil & Air | 850.4230 | USEPA 2012
Terrestrial plants | legumes and rhizobium symbiont | Soil & Air | 850.4600 | USEPA 2012
Terrestrial plants | wide variety of species (crops and non-crops) | Soil | 208 | OECD 2006
Terrestrial plants | wide variety of species (crops and non-crops) | Soil | 850.4100 | USEPA 2012
Terrestrial plants | various crop species | Soil & Air | 850.4800 | USEPA 2012
Terrestrial plants* | not specified | Terrestrial | 850.4300 | USEPA 2012
Since primary producers can take up many compounds directly via their cells and thalli (algae) or via their leaves, stems, roots and rhizomes (plants), different environmental compartments need to be included in ecotoxicity testing, depending on the chemical characteristics of the contaminants. Moreover, the chemical characteristics of the compound under consideration determine whether and how the compound enters the primary producer and how it is transported through the organism.
For all aquatic primary producers, exposure through the water phase is relevant. Air exposure occurs in emergent and floating aquatic plants, while rooted plants and algae with rhizoids may also be exposed through the sediment. Sediment exposure introduces additional challenges for standardized testing conditions, since changes in redox conditions and organic matter content of sediments can alter the behavior of compounds in this compartment.
All terrestrial plants are exposed through air, soil and water (soil moisture, rain, irrigation).
Air exposure and water deposition (rain or spraying) directly expose the aboveground parts of terrestrial plants, while belowground plant parts and seeds are exposed through soil and soil moisture. Soil exposure introduces additional challenges for standardized testing conditions, since changes in the water content or organic matter content of soils can alter the behavior of compounds in this compartment.
Bioaccumulation after uptake, and translocation to specific cell organelles or plant tissues, can result in incorporation of compounds in primary producers. This has been observed for heavy metals, pesticides and other organic chemicals. The accumulated compounds in primary producers can then enter the food chain and be transferred to higher trophic levels (see the section on Biomagnification). Although concentrations in primary producers are indicative of the presence of bioavailable compounds, these concentrations do not necessarily imply adverse effects on these organisms. Bioaccumulation measurements are therefore best combined with one or more of the following endpoint assessments.
Photosynthesis is the most essential metabolic pathway for primary producers. The mode of action of many herbicides is therefore photosynthesis inhibition, whereby different metabolic steps can be targeted (see the section on Herbicide toxicity). This endpoint is suitable for assessing acute effects on chlorophyll electron transport using Pulse-Amplitude-Modulation (PAM) fluorometry, or as a measure of oxygen or carbon production by primary producers.
Growth represents the accumulation of biomass (microalgae) or mass (multicellular primary producers). Growth inhibition is the most important endpoint in tests with primary producers, since it integrates responses over a wide range of metabolic effects into a whole-organism or population response (see the calculation sketch below). However, it takes longer to assess, especially for larger primary producers. Cell counts, increase in size over time of leaves, roots or whole organisms, and (bio)mass (fresh weight and dry weight) are the growth endpoints mostly used.
Seedling emergence reflects the germination and early development of seedlings into plants. This endpoint is especially relevant for perennial and biennial plants, which depend on seed dispersal and successful germination to maintain healthy populations.
Other endpoints include elongation of different plant parts (e.g. roots), necrosis of leaves, or disturbances in plant-microbial symbiont relationships.
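As referred to in the growth paragraph above, growth inhibition is commonly quantified by comparing average specific growth rates between treatments and controls. The following minimal Python sketch illustrates the general logic; the cell densities are invented, and details such as replication and statistics are omitted.

```python
import numpy as np

def specific_growth_rate(n0, nt, days):
    """Average specific growth rate (per day) from the cell density
    at the start (n0) and end (nt) of the exposure period."""
    return (np.log(nt) - np.log(n0)) / days

# Hypothetical 72-h algal test densities (cells/mL) -- invented numbers.
mu_control = specific_growth_rate(1e4, 8e5, 3.0)
mu_treated = specific_growth_rate(1e4, 9e4, 3.0)

# Growth inhibition of the treatment relative to the control, in percent.
inhibition = 100 * (mu_control - mu_treated) / mu_control
print(f"Growth inhibition: {inhibition:.0f}%")
```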
For terrestrial vascular plants, many crop and non-crop species can be used in standardized tests; for the other environmental compartments (aquatic and marine), however, few species are available in standardized test guidelines. Nor are all environmental compartments currently covered by standardized tests for primary producers: in general there are few tests for aquatic sediments, and tests for marine sediments are lacking altogether. Finally, not all major groups of primary producers are represented in standardized toxicity tests; for example, mosses and some major groups of algae are absent.
A challenge in improving ecotoxicity tests with plants is to include more sensitive, early-response endpoints. For soil and sediment exposure of plants to contaminants, the development of endpoints related to root morphology and root metabolism could provide insight into the early impact of substances on exposed plant parts. Also, the development of ecotoxicogenomic endpoints (e.g. metabolomics) (see the section on Metabolomics) in the field of plant toxicity tests would enable us to determine effects on a wider range of plant metabolic pathways.
Arts, G.H.P., Belgers, J.D.M., Hoekzema, C.H., Thissen, J.T.N.M. (2008). Sensitivity of submersed freshwater macrophytes and endpoints in laboratory toxicity tests. Environmental Pollution 153, 199-206.
Wang, W.C., Freemark, K. (1995). The use of plants for environmental monitoring and assessment. Ecotoxicology and Environmental Safety 30, 289-301.
Which conditions need to be controlled carefully in laboratory ecotoxicity tests with primary producers?
List the different groups of primary producers used in standardized tests for each environmental compartment.
Argue why testing of primary producers is relevant in relation to [A] environmental exposure of ecosystems to pesticides and [B] the role of primary producers in ecosystems.
Author: Patrick van Beelen
Reviewers: Kees van Gestel, Erland Bååth, Maria Niklinska
Learning objectives:
You should be able to
Keywords: microorganisms, processes, nitrogen conversion, test methods
Most organisms are microorganisms, which means they are generally too small to see with the naked eye. Nevertheless, microorganisms affect almost all aspects of our lives. Viruses are the smallest of the microorganisms, the prokaryotic bacteria and archaea are bigger (in the micrometer range), and the sizes of eukaryotic microorganisms range from three to a hundred micrometers. The microscopic eukaryotes have larger cells with a nucleus and come in different forms, such as green algae, protists and fungi.
Cyanobacteria and eukaryotic algae perform photosynthesis in the oceans, seas, and brackish and fresh waters. They fix carbon dioxide into biomass and form the basis of the largest aquatic ecosystems. Bacteria and fungi degrade complex organic molecules into carbon dioxide and minerals, which are needed for plant growth.
Plants often live in symbiosis with specialized microorganisms on their roots, which speed up plant growth by enhancing the uptake of water and nutrients. Invertebrate and vertebrate animals, including humans, have bacteria and other microorganisms in their intestines to facilitate the digestion of food. Cows, for example, cannot digest grass without the microorganisms in their rumen. Likewise, termites would not be able to digest lignin, a hard-to-digest wood polymer, without the aid of gut fungi. Leaf cutter ants transport leaves into their nest to feed the fungi on which they depend. Humans, too, consume many foodstuffs in which yeasts, fungi or bacteria serve to preserve the food or give it a pleasant taste. Beer, wine, cheese, yogurt, sauerkraut, vinegar, bread, tempeh, sausage and many other foodstuffs need the right type of microorganisms to be palatable. Having the right type of microorganisms is also vital for human health. Human mother's milk contains oligosaccharides, which are indigestible for the newborn child. These serve as a major food source for the intestinal bacteria of the baby, which reduce the risk of dangerous infections.
This shows that the interactions between specific microorganisms and higher organisms are often highly specific. Marine viruses are very abundant and can limit algal blooms, thereby promoting a more diverse marine phytoplankton. Pathogenic viruses, bacteria, fungi and protists enhance the biodiversity of plants and animals by the following mechanism: the densest populations are more susceptible to diseases, since the transmission of the disease becomes more frequent.
When the most abundant species become less frequent, there is more room for the other species and biodiversity is enhanced. In agriculture, this enhanced biodiversity is unwanted, since the livestock and the crop are the most abundant species. That is why disease control becomes more important in high-intensity livestock farming and in large monocultures of crops. Microorganisms are at the base of all ecosystems and are vital for human health and the environment.
The Microbiology Society has a nice video explaining why microbiology matters.
The functioning of natural ecosystems on earth is threatened by many factors, such as habitat loss, habitat fragmentation, global warming, species extinction, over-fertilization, acidification and pollution. Natural and man-made chemicals can exhibit toxic effects on the different organisms in natural ecosystems. Toxic chemicals released into the environment may have negative effects on biodiversity or on microbial processes. In ecosystems strongly affected by such changes, the abundance of many species may decline. The loss of biodiversity of the species in a specific ecosystem can be used as a measure of the degradation of that ecosystem. Humans benefit from the presence of properly functioning ecosystems. These benefits can be quantified as ecosystem services. Microbial processes contribute heavily to many ecosystem services. Groundwater, for example, is often a suitable source of drinking water because microorganisms have removed pollutants and pathogens from the infiltrating water. See the Section on Ecosystem services and protection goals.
Most environmental toxicity tests are single species tests. Such tests typically determine the toxicity of a chemical to a specific biological species, for example the inhibition of bioluminescence of the bacterium Aliivibrio fischeri in the Microtox test, or the growth inhibition of freshwater algae and cyanobacteria (see Section on Selection of test organisms - Eco plants). These tests are relatively simple, applying a specific toxic chemical to a specific biological species in an optimal setting. The OECD Guidelines for the Testing of Chemicals, Section 2 (Effects on Biotic Systems) give a list of standard tests. Table 1 lists the tests with microorganisms standardized by the Organization for Economic Cooperation and Development (OECD).
Table 1. Generally accepted environmental toxicity tests using microorganisms, standardized by the Organization for Economic Cooperation and Development (OECD).
OECD test No | Title | Medium | Test type
201 | Freshwater algae and cyanobacteria, growth inhibition test (see Section on Selection of test organisms - Eco plants) | Aquatic | Single species
209 | Activated sludge, respiration inhibition test | Sediment | Process
224 (draft guideline) | Determination of the inhibition of the activity of anaerobic bacteria | Sediment | Process
217 | Soil microorganisms: carbon transformation test | Soil | Process
216 | Soil microorganisms: nitrogen transformation test | Soil | Process
The outcome of these tests can be summarized as EC10 values (see Section on Concentration-response relationships), which can be used in risk assessment (see Sections on Predictive risk assessment approaches and tools and on Diagnostic risk assessment approaches and tools). Basically, there are three types of tests: single species tests, community tests and tests using microbial processes.
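To make the EC10 concept concrete, the following minimal Python sketch fits a two-parameter log-logistic concentration-response model to invented test data and derives the EC50 and EC10 from the fitted parameters. This is one common way of computing such summary values, not a prescribed regulatory procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, ec50, slope):
    """Two-parameter log-logistic model: response as a fraction
    of the untreated control (1.0 = no effect)."""
    return 1.0 / (1.0 + (conc / ec50) ** slope)

# Hypothetical process rates relative to the control -- invented data.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # mg/kg soil
response = np.array([0.98, 0.95, 0.84, 0.55, 0.25, 0.08])

(ec50, slope), _ = curve_fit(log_logistic, conc, response, p0=[3.0, 1.0])

# An ECx follows from inverting the model at response = 1 - x/100.
x = 10
ecx = ec50 * (x / (100.0 - x)) ** (1.0 / slope)
print(f"EC50 = {ec50:.2f} mg/kg, EC{x} = {ecx:.2f} mg/kg")
```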
The ecological relevance of a single species test can be a matter of debate. In most cases it is not practical to work with ecologically relevant species, since these can be hard to maintain under laboratory conditions. Each ecosystem will also have its own ecologically relevant species, which would require an extremely large battery of different test species and tests that are difficult to perform in a reproducible way. As a solution to these problems, the test species are assumed to exhibit a sensitivity to toxicants similar to that of the ecologically relevant species. This assumption has been confirmed in a number of cases. If the sensitivity distribution of a given toxicant for a number of test species is similar to the sensitivity distribution for the relevant species in a specific ecosystem, one can use a statistical method to estimate a concentration that is safe for most of the species.
Toxicity tests with short incubation times are often disputed, since it takes time for toxicants to accumulate in the test animals. This is not a problem in microbial toxicity tests, since the small size of the test organisms allows rapid equilibration between the concentrations of the toxicant in the water and in the test organism. In contrast, long incubation times under conditions that promote growth can lead to the emergence of resistant mutants, which will decrease the apparent sensitivity of the test organism. This selection and growth of resistant mutants cannot, however, be regarded as a positive thing, since these mutants are different from the parent strain and might also have different ecological properties. In fact, the selection of antibiotic-resistant microorganisms in the environment is considered a problem, since resistance might be transferred to pathogenic (disease-promoting) microorganisms, which causes problems for patients treated with antibiotics.
The OECD test no. 201, which uses freshwater algae and cyanobacteria, is a well-known and sensitive single species microbial ecotoxicity test. It is explained in more detail in the Section on Selection of test organisms - Eco plants.
Microorganisms have a very wide range of metabolic diversity. This makes it difficult to extrapolate from a single species test to all possible microbial species, including fungi, protists, bacteria, archaea and viruses. One solution is to expose a multitude of species (a whole community) in a single toxicity experiment, although it then becomes more difficult to attribute the decline or increase of species to toxic effects. The rise and decline of species can also be caused by other factors, including species interactions. The method of pollution-induced community tolerance (PICT) is used for the detection of toxic effects on communities. Organisms survive in polluted environments only when they can tolerate the toxic chemical concentrations in their habitat. During exposure to pollution the sensitive species become extinct and tolerant species take over their place and role in the ecosystem. This takeover can be monitored by very simple toxicity tests using a part of the community extracted from the environment. Some tests use the incorporation of building blocks for DNA (thymidine) or protein (leucine). Other tests use different substrates for microbial growth. The observation that this part of the community becomes more tolerant, as measured by these simple toxicity tests, reveals that the pollutant really affects the microbial community. This is especially helpful when complex and diverse environments like biofilms, sediments and soils are studied.
The protection of ecosystem services is fundamentally different from the protection of biodiversity. When one wants to protect biodiversity, all species are equally important and worth protecting.
When one wants to protect ecosystem services, only the species that perform the process have to be protected. Many contributing species can be intoxicated without much impact on the process. An example is nitrogen transformation, which is tested by measuring the conversion of ammonium into nitrite and nitrate (see box).
The inactivation of the most sensitive species can be compensated by the prolonged activity or growth of less sensitive species. The design of microbial process tests aims to protect the process and not the contributing species. Consequently, the process tests from Table 1 seldom play a decisive role in reducing the maximum tolerable concentration of a chemical. The reason is that single species toxicity tests generally are more sensitive, since they use a specific biological species as test organism instead of a process.
Box: Nitrogen transformation test
The OECD test no. 216 Soil Microorganisms: Nitrogen Transformation Test is a very well-known toxicity test using the soil process of nitrogen transformation. For non-agrochemicals, the test is designed to detect persistent adverse effects of a toxicant on the process of nitrogen transformation in soils. Powdered clover meal contains nitrogen, mainly in the form of proteins, which can be degraded and oxidized to produce nitrate. Soil is amended with clover meal and treated with different concentrations of a toxicant. The soil provides both the test organisms and the test medium. A sandy soil with a low organic carbon content is used to minimize sorption of the toxicant to the soil, since sorption can decrease the toxicity of a toxicant in soil. According to the guideline, the soil microorganisms should not have been exposed to fertilizers, crop protection products, biological materials or accidental contamination for at least three months before the soil is sampled. In addition, the soil microorganisms should make up at least 1% of the soil organic carbon, which indicates that the microorganisms are still alive. The soil is incubated with the clover meal and the toxicant under conditions favorable for microbial growth (optimal temperature and moisture). The quantities of nitrate formed are measured after 7 and 28 days of incubation. This allows for the growth of microorganisms resistant to the toxicant during the test, which can make the longer incubation time less sensitive.
The nitrogen in the proteins of the clover meal is converted to ammonia by general degradation processes. This conversion can be performed by a multitude of species and is therefore not very sensitive to inhibition by toxic compounds. The conversion of ammonia to nitrate generally proceeds in two steps: first, ammonia oxidizing bacteria or archaea oxidize ammonia into nitrite; second, nitrite oxidizing bacteria oxidize nitrite into nitrate. These latter two steps are generally much slower than ammonium production, since they require specialized microorganisms, which also have a lower growth rate than the common microorganisms involved in the general degradation of proteins into amino acids. This makes the nitrogen transformation test much more sensitive than the carbon transformation test, which uses more common microorganisms. Under the optimal conditions of the nitrogen transformation test, some minor ammonia or nitrite oxidizing species might seem unimportant, since they do not contribute much to the overall process. Nevertheless, these minor species can become of major importance under less optimal conditions. Under acid conditions, for example, only the archaea oxidize ammonia into nitrite, while the ammonia oxidizing bacteria are inhibited. The nitrogen transformation test has a minimum duration of 28 days at 20°C under optimal moisture conditions, but can be prolonged to 100 days. Shorter incubation times would make the test more sensitive.
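As a simple illustration of how results from such a process test might be summarized, the sketch below computes the percentage inhibition of nitrate formation relative to the control. All values are invented; the guideline itself prescribes the exact experimental design and data treatment.

```python
import numpy as np

# Hypothetical nitrate formed between day 0 and day 28
# (mg NO3-N per kg dry soil) -- invented numbers.
control_replicates = np.array([52.0, 49.5, 54.1])
treated = {1.0: 50.2, 3.2: 41.8, 10.0: 22.3}   # dose (mg/kg): mean nitrate

mean_control = control_replicates.mean()
for dose, nitrate in treated.items():
    inhibition = 100 * (mean_control - nitrate) / mean_control
    print(f"{dose:>5} mg/kg: {inhibition:.0f}% inhibition of nitrate formation")
```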
What is the disadvantage of growth during a toxicity test using a microbial process?
Mention the intermediates during the degradation of clover meal to nitrate.
Why are shorter microbial tests often more sensitive than longer ones?
When one microbial species in the environment is replaced by another one, can this have an effect on animals or plants?
Author: Annegaaike Leopold
Reviewers: Nico van den Brink, Kees van Gestel, Peter Edwards
Learning objectives:
You should be able to
Keywords: birds, risk assessment, habitats, acute, reproduction.
Birds are seen as important models in ecotoxicology for a number of reasons; a few specific physiological features will be discussed here. Birds are oviparous, laying eggs with hard shells. This leads to concentrated exposure (as opposed to exposure via the bloodstream, as in most other vertebrate species) to maternally transferred material and, where relevant, its metabolites. It also means that offspring receive a single supply of nutrients (and not a continuous supply through the blood stream). This makes birds sensitive to contaminants in a different way than non-oviparous vertebrates, since the embryos develop without physiological maternal interference. The bird embryo starts to regulate its own hormone homeostasis early in its development, in contrast to mammalian embryos. As a result, contaminants deposited in the egg by the female bird may disturb the regulation of these embryonic processes (Murk et al., 1996). Birds have a high body temperature (40.6 ºC) and a relatively high metabolic rate, which can affect their response to chemicals. Chicks generally have a rapid growth rate compared to many other vertebrate species. Chicks of precocial (or nidifugous) species leave the nest upon hatching and, while they may follow the parents around, they are fully feathered and feed independently; they typically need a few months to grow to full size. Altricial species are naked, blind and helpless at hatch and require parental care until they fledge the nest. They often grow faster: passerines (such as swallows) can reach full size and fledge 14 days after hatching. Many bird species migrate seasonally over long distances, and the adaptations this requires change their physiology and biochemical processes. Internal concentrations of organic contaminants, for example, may increase significantly due to the use of lipid stores during migration, while changes in biochemistry may increase the sensitivity of birds to the chemical.
Birds function as good biological indicators of environmental quality, largely because of their position in the food chain and their habitat dependence. Protection goals are frequently focused on iconic species, for example the Atlantic puffin, the European turtle dove and the common barn owl (Birdlife International, 2018).
It was recognized early on that exposure of birds to pesticides can take place through many routes of dietary exposure.
Given their association with a wide range of habitats, exposure can take place by feeding on the crop itself, on weeds or (treated) weed seeds, on ground-dwelling or foliage-dwelling invertebrates, or on soil invertebrates such as earthworms, by drinking water from contaminated streams, or by feeding on fish living in contaminated streams. Following the introduction of persistent and highly toxic synthetic pesticides in the 1950s, and prior to safety regulations, the use of many synthetic organic pesticides led to losses of birds, fish, and other wildlife (Kendall and Lacher, 1994). As a result, national and international guidelines for assessing first acute and subacute effects of pesticides on birds were developed in the 1970s. In the early 1980s, tests were developed to study long-term or reproductive effects of pesticides. Current bird testing guidelines focus primarily on active ingredients used in plant protection products, veterinary medicines and biocides. In Europe, the industrial chemicals regulation REACH only requires information on long-term or reproductive toxicity for substances manufactured or imported in quantities of at least 1000 tonnes per annum. These data may be needed to assess the risks of secondary poisoning by a substance that is likely to bioaccumulate and does not degrade rapidly. Secondary poisoning may occur, for example, when raptors consume contaminated fish. In the United States, no bird tests are required under the industrial chemicals legislation.
The objective of performing avian toxicity tests is to inform an avian effects assessment (Hart et al., 2001).
Selection of bird species for toxicity testing occurs primarily on the basis of their ecological relevance, their availability and their ability to adjust to laboratory conditions for breeding and testing. This means most test species have been domesticated over many years. They should have been shown to be relatively sensitive to chemicals through previous experience or published literature, and ideally historical control data should be available for them. The bird species most commonly used in toxicity testing have all been domesticated. Other species of birds are sometimes used for specific, often tailor-designed studies.
Table 1 provides an overview of the avian toxicity tests that have been developed over the past approximately 40 years: the most commonly used guidelines, the recommended species, the endpoints recorded in each of these tests, the typical age of birds at the start of the test, the test duration and the length of exposure.
Table 1. Most common avian toxicity tests with their recommended species and key characteristics.
Avian toxicity test | Guideline | Recommended species | Endpoints | Age at start of test | Length of study | Length of exposure
Acute oral gavage (sequential testing, average 26 birds) | OECD 223 | bobwhite quail, Japanese quail, zebra finch, budgerigar | mortality, clinical signs, body weight, food consumption, gross necropsy | young birds not yet mated, at least 16 weeks old | at least 14 days | single oral dose at beginning of test
Acute oral gavage (60 bird design) | USEPA OCSPP 850.2100 | bobwhite quail; a single passerine species recommended | see above | young birds not yet mated, at least 16 weeks old | 14 days | single oral dose at beginning of test
Sub-acute dietary toxicity* | OCSPP 850.2200 | bobwhite quail, mallard | see above | mallard: 5 days old; bobwhite quail: 10-14 days old | 8 days | 5 days
One-generation reproduction | OECD 206; OCSPP 850.2300 | bobwhite quail, mallard, Japanese quail** | adult body weight, food consumption, egg production, fertility, embryo survival, hatch rate, chick survival | approaching first breeding season: mallard 6 to 12 months old; bobwhite quail 20-24 weeks | 20-22 weeks | 10 weeks
Avoidance testing (pen trials) | OECD Report | as closely related to the species at risk as possible, e.g. sparrow, rock dove, pheasant, grey partridge | food intake, mortality, sublethal effects | young adults if possible (depends on study design) | one to several days, depending on the study design | one to several days, depending on the study design
Two-generation endocrine disruptor test | OCSPP 890.2100 | Japanese quail | in addition to endpoints listed for the one-generation study: male sexual behaviour, biochemical, histological and morphological endpoints | 4 weeks post hatch | 38 weeks | 8 weeks for the adult (F0) generation + 14 weeks for the F1 generation
Field studies to refine food residues in higher tier bird risk assessments | Appendix N of the EFSA Bird and Mammal Guidance | depends on the species at risk in the area of pesticide use | depends on the study design developed | uncontrollable in a field study | depends on the study design developed | depends on the study design developed
* This study is hardly ever asked for anymore.
** Only in the OECD guideline.
To assess the short-term risk to birds, acute toxicity tests must be performed for all pesticides (the active ingredient thereof) to which birds are likely to be exposed, resulting in an LD50 expressed in mg/kg body weight (see section on Concentration-response relationships). The acute oral toxicity test involves gavage or capsule dosing at the start of the study. Care must be taken when dosing birds by oral gavage: some species, including mallard duck, pigeons and some passerine species, can readily regurgitate, leading to uncertainty in the dose given. Table 1 gives the bird species recommended in the OECD and USEPA guidelines, respectively. Gamebirds and passerines are a good combination to take account of phylogeny and a good starting point to better understand the distribution of species sensitivity.
The OECD guideline 223 uses on average 26 birds and is a sequential design (Edwards et al., 2017).
Responses of birds at each stage of the test are combined to estimate, and progressively improve the estimate of, the LD50 and the slope. The testing can be stopped at any stage once the accuracy of the LD50 estimate meets the requirements of the risk assessment, hence using far fewer birds, in compliance with the 3Rs (reduction, refinement and replacement). If toxicity is expected to be low, 5 birds are dosed at the limit dose of 2000 mg/kg (the highest dose considered acceptable, from a humane point of view, for administration by oral gavage). If there is no mortality in the limit test after 14 days, the study is complete and the LD50 is >2000 mg/kg body weight. If there is mortality, a single individual is treated at each of 4 different doses in Stage 1. With these results a working estimate of the LD50 is determined, which is used to select 10 further dose levels for a better estimate of the LD50 in Stage 2. If a slope is required, a further Stage 3 is performed using 10 more birds at a combination of doses selected on the basis of a provisional estimate of the slope.
The USEPA guideline is a single-stage design preceded by a range-finding test (used only to set the concentrations for the main test). The LD50 test uses 60 birds (10 at each of five test concentrations and 10 birds in the control group). Despite the high number of birds used, the ability to estimate a slope is poor compared to OECD 223 (the ability to calculate the LD50 is similar to that of the OECD 223 guideline).
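The individual dose-outcome records that such designs generate can be analysed with a standard dose-mortality model. The following minimal Python sketch estimates an LD50 and slope by maximum likelihood, assuming a log-dose logit model; the doses and outcomes are invented, and the sketch ignores the formal staging and stopping rules of the guideline.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical single-bird results: dose (mg/kg body weight) and
# outcome (1 = died, 0 = survived) -- invented data.
dose = np.array([10, 32, 100, 320, 1000, 56, 180, 560, 240, 420])
died = np.array([0,  0,   0,   1,    1,  0,   1,   1,   0,   1])

def neg_log_lik(params):
    """Negative log-likelihood of a log10(dose) logit mortality model."""
    a, b = params
    p = np.clip(expit(a + b * np.log10(dose)), 1e-9, 1 - 1e-9)
    return -np.sum(died * np.log(p) + (1 - died) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 1.0])
a, b = fit.x
ld50 = 10 ** (-a / b)   # dose at which predicted mortality is 50%
print(f"LD50 ~ {ld50:.0f} mg/kg body weight, slope = {b:.2f}")
```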
For the medium-term risk assessment, an avian dietary toxicity test was regularly performed in the past, exposing juvenile birds (chicks) of bobwhite quail, Japanese quail or mallard to a treated diet. This test determines the median lethal concentration (LC50) of a chemical in response to a 5-day dietary exposure. Given the scientific limitations and animal welfare concerns related to this test (EFSA, 2009), current European regulations recommend performing this test only when the LD50 value measured in the medium-term study is expected to be lower than the acute LD50, i.e. if the chemical is cumulative in its effect.
One-generation reproduction tests in bobwhite quail and/or mallard are requested for the registration of all pesticides to which birds are likely to be exposed during the breeding season. Table 1 presents the two standard studies: OECD Test 206 and the USEPA OCSPP 850.2300 study. The substance to be tested is mixed into the diet from the start of the test. The birds are fed ad libitum for a recommended period of 10 weeks before they begin laying eggs in response to a change in photoperiod. The egg-laying period should last at least ten weeks. Endpoints include adult body weight, food consumption, macroscopic findings at necropsy and reproductive endpoints, with the number of 14-day-old surviving chicks/ducklings as an overall endpoint. The OECD guideline states that the Japanese quail (Coturnix coturnix japonica) is also acceptable.
Avoidance behaviour by birds in the field could be seen as reducing the risk of exposure to a pesticide and could therefore be considered in the risk assessment. However, the occurrence of avoidance in the laboratory has a confounding effect on estimates of toxicity in dietary studies (LC50). Avoidance tests thus far have greatest relevance in the risk assessment of seed treatments. A number of factors need to be taken into account, including the feeding rate and the dietary concentration, which may determine whether avoidance or mortality is the outcome. The following comprehensive OECD report provides an overview of guideline development and research activities that have taken place to date under the OECD flag. Sometimes these studies are done as semi-field (or pen) studies.
Endocrine-disrupting substances can be defined as materials that cause effects on reproduction through the disruption of endocrine-mediated processes. If there is reason to suspect that a substance might have an endocrine effect in birds, a two-generation avian test design aimed specifically at the evaluation of endocrine effects could be performed. This test has been developed by the USEPA (OCSPP 890.2100). It has not, however, been accepted as an OECD test to date. It uses the Japanese quail as the preferred species. The main reasons that the Japanese quail was selected for this test are: 1) the Japanese quail is a precocial species, as mentioned earlier. This means that at hatch Japanese quail chicks are much further in their sexual differentiation and development than chicks of altricial species would be. Hormonal processes occurring in Japanese quail in these early stages of development can be disturbed by chemicals maternally deposited in the egg (Ottinger and Dean, 2011). Conversely, altricial species undergo these same sexual development stages post-hatch, when they can be exposed to chemicals in food that might impact these same hormonal processes. 2) As mentioned above, the young of the year mature and breed within 12 months, which makes the test more efficient than if one used bobwhite quail or mallard.
It is argued among avian toxicologists that it is necessary to develop a zebra finch endocrine assay system alongside the Japanese quail system, as this would allow a more systematic determination of the differences between responses to EDCs in altricial and precocial species, thereby allowing a better evaluation and subsequent risk assessment of potential endocrine effects in birds. Differences in parental care, nesting behaviour and territoriality are examples of aspects that could be incorporated in such an approach (Jones et al., 2013).
Field studies can be used to test for adverse effects on a range of species simultaneously, under conditions of actual exposure in the environment (Hart et al., 2001). The numbers of sites and control fields and the methods used (corpse searches, censusing and radiotracking) need careful consideration for optimal use of field studies in avian toxicology. The field site will define the species studied, and it is important to consider the relevance of those species in other locations. For further reading about techniques and methods used in avian field research, Sutherland et al. and Bibby et al. are recommended.
Bibby, C., Jones, M., Marsden, S. Expedition Field Techniques: Bird Surveys. Birdlife International.
Birdlife International (2018). The Status of the World's Birds. //www.birdlife.org/sites/default/files/attachments/BL_ReportENG_V11_spreads.pdf
Brooks, A.C., Fryer, M., Lawrence, A., Pascual, J., Sharp, R. Reflections on bird and mammal risk assessment for plant protection products in the European Union: Past, present and future. Environmental Toxicology and Chemistry 36, 565-575.
Edwards, P.J., Leopold, A., Beavers, J.B., Springer, T.A., Chapman, P., Maynard, S.K., Hubbard, P. (2017). More or less: Analysis of the performance of avian acute oral guideline OECD 223 from empirical data.
Integrated Environmental Assessment and Management 13, 906-914.
Hart, A., Balluff, D., Barfknecht, R., Chapman, P.F., Hawkes, T., Joermann, G., Leopold, A., Luttik, R. (Eds.) (2001). Avian Effects Assessment: A Framework for Contaminants Studies. A report of a SETAC workshop on 'Harmonised Approaches to Avian Effects Assessment', held with the support of the OECD, in Woudschoten, The Netherlands, September 1999. A SETAC Book.
Jones, P.D., Hecker, M., Wiseman, S., Giesy, J.P. (2013). Birds. Chapter 10 in: Matthiessen, P. (Ed.) Endocrine Disrupters: Hazard Testing and Assessment Methods. Wiley & Sons.
Kendall, R.J., Lacher Jr., T.E. (Eds.) (1994). Wildlife Toxicology and Population Modelling: Integrated Studies of Agroecosystems. Special Publication of SETAC.
Murk, A.J., Boudewijn, T.J., Meininger, P.L., Bosveld, A.T.C., Rossaert, G., Ysebaert, T., Meire, P., Dirksen, S. (1996). Effects of polyhalogenated aromatic hydrocarbons and related contaminants on common tern reproduction: Integration of biological, biochemical, and chemical data. Archives of Environmental Contamination and Toxicology 31, 128-140.
Ottinger, M.A., Dean, K. (2011). Neuroendocrine impacts of endocrine-disrupting chemicals in birds: Life stage and species sensitivities. Journal of Toxicology and Environmental Health, Part B: Critical Reviews.
Sutherland, W.J., Newton, I., Green, R.E. (Eds.). Bird Ecology and Conservation: A Handbook of Techniques. Oxford University Press.
Give three reasons why birds are an important model in ecotoxicology.
What is the objective of avian toxicity testing?
Which avian species are most commonly used and why?
What is the difference between precocial and altricial species?
Which endpoints are typically used in standardised avian toxicity tests?
Name two examples of how one might reduce uncertainty in assessing the risks of chemicals to birds.
Author: Timo Hamers
Reviewer: Arno Gutleb
Learning goals:
You should be able to:
Keywords: ligand binding assay; enzyme inhibition assay; primary cell culture; cell line; stem cell; organ on a chip
In vitro bioassays refer to testing methods making use of tissues, cells, or proteins. The term "in vitro" (meaning "in glass") refers to the test tubes or petri dishes made from glass that were traditionally used to perform these types of toxicity tests. Nowadays, in vitro bioassays are more often performed in plastic microtiter well plates containing multiple (6, 12, 24, 48, 96, 384, or 1536) test containers (called "wells") per plate. In vitro bioassays are usually performed to screen individual substances or samples for specific bioactive properties. As such, in vitro toxicology refers to the science of testing substances or samples for specific toxic properties using tissues, cells, or proteins.
Most in vitro bioassays show a mechanism-specific response, which is for instance indicative of the inhibition of a specific enzyme or the activation of a specific molecular receptor. Moreover, in vitro bioassays are usually performed in small test volumes and have short test durations (incubation periods usually range from 15 minutes to 48 hours). As a consequence, multiple samples can be tested simultaneously in a single experiment, and multiple experiments can be performed in a relatively short test period. This "medium-throughput" characteristic of in vitro bioassays can even be increased to "high-throughput" if the time-limiting steps in the test procedure (e.g.
sample preparation, cell culturing, pipetting, read-out) are further automated.
Toxicity tests making use of bacteria are also often performed in small volumes, allowing short test durations and high throughput. Still, such tests make use of intact organisms and should therefore strictly be considered in vivo bioassays. This holds especially true if bacteria are used to study endpoints like survival or population growth. However, bacterial test systems studying specific toxic mechanisms, such as the Ames test used to screen substances for mutagenic properties (see section on Carcinogenicity and Genotoxicity), are often considered in vitro bioassays, because their test characteristics are similar to those of in vitro toxicity tests with cells derived from higher organisms.
The simplest form of an in vitro binding assay consists of a purified protein that is incubated with a potentially toxic substance or sample. Purified proteins are usually obtained by isolation from an intact organism or from cultures of recombinant bacteria, which are genetically modified to express the protein of interest.
Ligand binding assays are used to determine whether the test substance is capable of binding to the protein, thereby inhibiting the binding of the natural (endogenous) ligand to that protein (see section on Protein Inactivation). Proteins of interest are, for instance, receptor proteins or transporter proteins. Ligand binding assays often make use of a natural ligand that has been labelled with a radioactive isotope. The protein is incubated with the labelled ligand in the presence of different concentrations of the test substance. If binding of the test substance to the protein prevents ligand binding, the free ligand shows a concentration-dependent increase in radioactivity, and consequently the ligand-protein complex shows a concentration-dependent decrease in radioactivity. Alternatively, the natural ligand may be labelled with a fluorescent group. Binding of such a labelled ligand to the protein often causes an increase in fluorescence; consequently, a decrease in fluorescence is observed if a test substance prevents ligand binding to the protein.
Enzyme inhibition assays are used to determine whether a test substance is capable of inhibiting the enzymatic activity of a protein. Enzymatic activity is usually determined as the conversion rate of a substrate into a product. Enzyme inhibition is determined as a decrease in conversion rate, corresponding to lower concentrations of product and higher concentrations of substrate after different periods of incubation. Quantitative measures of substrate disappearance or product formation can be obtained by chemical analysis of the substrate or the product. Preferably, however, the reaction rate is measured by spectrophotometry or fluorescence. This is achieved by performing the reaction with a substrate that has a specific colour or fluorescence by itself, or that yields a product with a specific colour or fluorescence, in some cases after reaction with an additional indicator compound. A well-known example of an enzyme inhibition assay is the acetylcholinesterase inhibition assay (see section on Diagnosis - In vitro bioassays).
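As an illustration of the rate measurement described above, the following minimal Python sketch estimates conversion rates from the increase of product absorbance over time and expresses inhibition as the relative decrease in rate. The absorbance readings are invented.

```python
import numpy as np

def reaction_rate(time_min, absorbance):
    """Conversion rate estimated as the slope (per minute) of the
    product absorbance versus time, by linear least squares."""
    return np.polyfit(time_min, absorbance, 1)[0]

t = np.array([0, 2, 4, 6, 8, 10], dtype=float)   # minutes

# Hypothetical absorbance of a coloured reaction product -- invented data.
control = np.array([0.02, 0.10, 0.19, 0.28, 0.36, 0.45])
with_test_substance = np.array([0.02, 0.05, 0.08, 0.11, 0.13, 0.16])

v0 = reaction_rate(t, control)
vi = reaction_rate(t, with_test_substance)
print(f"Enzyme inhibition: {100 * (1 - vi / v0):.0f}%")
```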
Cell-based bioassays make use of cell cultures that are maintained in the laboratory. Cell culturing starts with mechanical or enzymatic isolation of single cells from a tissue (obtained from an animal or a plant). Subsequently, the cells are grown in cell culture medium, i.e. a liquid that contains all essential nutrients required for optimal cell growth (e.g. growth factors, vitamins, amino acids) and regulates the physicochemical environment of the cells (e.g. pH buffer, salinity). Several types of cell cultures can be distinguished.
Primary cell cultures consist of cells that are directly isolated from a donor organism and maintained in vitro. Typically, such cell cultures consist of either a suspension of non-adherent cells or a monolayer of adherent cells attached to a substrate (often the bottom of the culture vessel). The cells may undergo several cell divisions until the cell suspension becomes too dense or the adherent cells grow on top of each other. The cells can then be further subcultured by transferring part of the cells from the primary culture to a new culture vessel containing fresh medium. This progeny of the primary cell culture is called a cell line, and the event of subculturing is called a passage. Typically, cell lines derived from primary cells undergo senescence and stop proliferating after a limited number of cell divisions. Consequently, such a finite cell line can undergo only a limited number of passages. Primary cell cultures and their subsequent finite cell lines have the advantage that they closely resemble the physiology of the cells in vivo. The disadvantage of such cell cultures for toxicity testing is that they divide relatively slowly, require specific culturing conditions, and are finite. New cultures can only be obtained from new donor organisms, which is time-consuming and expensive, and may introduce genetic variation.
Alternatively, continuous cell lines have been established, which have an indefinite life span because the cells are immortal. Due to genetic mutations, cells from a continuous cell line can undergo an indefinite number of cell divisions and behave like cancer cells. The immortalizing mutations may have been present in the original primary cell culture, if these cells were isolated from malignant tumour tissue. Alternatively, the original finite cell line may have been transformed into a continuous cell line by a virally or chemically induced mutation. The advantage of continuous cell lines is that the cells proliferate quickly and are easy to culture and manipulate (e.g. by genetic modification). The disadvantage is that continuous cell lines have a different genotype and phenotype than the original healthy cells in vivo (e.g. they may have lost enzymatic capacity) and behave like cancer cells (e.g. they have lost their differentiating capacities and the ability to form tight junctions).
To study the toxic effects of compounds in vitro, toxicologists prefer to use cell cultures that resemble differentiated, healthy cells rather than undifferentiated cancer cells. Therefore, differentiation models have gained increasing attention in in vitro toxicology in recent years. Such differentiation models are based on stem cells, which are cells that possess the potency to differentiate into somatic cells. Stem cells can be obtained from embryonic tissues at different stages of normal development, each with their own potency to differentiate into somatic cells (see Berdasco and Esteller, 2011). In the very early embryonic stage, cells from the "morula stage" (i.e. after a few cell divisions of the zygote) are totipotent, meaning that they can differentiate into all cell types of an organism.
Later in development, cells from the inner cell mass of the blastocyst are pluripotent, meaning that they can differentiate into all cell types, except for extra-embryonic cells. During gastrulation, cells from the different germ layers (i.e. ectoderm, mesoderm, and endoderm) are multipotent, meaning that they can differentiate into a restricted number of cell types. Further differentiation results in precursor cells that are unipotent, meaning that they are committed to differentiate into a single, ultimately differentiated cell type.
While remaining undifferentiated, in vitro embryonic stem cell (ESC) cultures can divide indefinitely, because they do not suffer from senescence. However, an ESC cell line cannot be considered a continuous (or immortalized) cell line, because the cells contain no genetic mutations. ESCs can be differentiated into the cell type of interest by manipulating the cell culture conditions in such a way that specific signalling pathways are stimulated or inhibited in the same sequence as happens during in vivo cell type differentiation. Manipulation may consist of the addition of growth factors, transcription factors, cytokines, hormones, stress factors, etc. This approach requires a good understanding of which factors affect the decision steps in the cell lineage of the cell type of interest.
Differentiation of ESCs into differentiated cells is applicable not only in in vitro toxicity testing, but also in drug discovery, regenerative medicine, and disease modelling. Still, the destruction of a human embryo for the purpose of isolating (mainly pluripotent) human ESCs (hESCs) raises ethical issues. Therefore, alternative sources of hESCs have been explored. The isolation and subsequent in vitro differentiation of multipotent stem cells from amniotic fluid (collected during caesarean sections), umbilical cord blood, and adult bone marrow is a very topical field of research.
A revolutionary development in the field of non-embryonic stem cell differentiation models was the discovery that differentiated cells can be reprogrammed into undifferentiated cells with pluripotent capacities, called induced pluripotent stem cells (iPSCs). In 2012, the Nobel Prize in Physiology or Medicine was awarded to John B. Gurdon and Shinya Yamanaka for this ground-breaking discovery. Reprogramming of differentiated cells isolated from an adult donor is achieved by exposing the cells to a mixture of reprogramming factors, consisting of transcription factors typical of pluripotent stem cells. The obtained iPSCs can be differentiated again (similarly to ESCs) into any type of differentiated cell for which the required conditions along the cell lineage are known and can be simulated in vitro.
Whereas iPSC-based differentiation models require a complete reprogramming of a differentiated somatic cell back to the stem cell level, transdifferentiation (or lineage reprogramming) is an alternative technique by which differentiated somatic cells can be transformed into another type of differentiated somatic cells without passing through an intermediate pluripotent stage. Especially fibroblast cell lines are known for their capacity to be transdifferentiated into different cell types, like neurons or adipocytes.
In cell-based in vitro bioassays, the cell cultures are exposed to test compounds or samples and their response is measured. In principle, all types of cell culture models discussed above can be used for in vitro toxicity testing.
For reasons of time, money, and convenience, continuous cell lines are most commonly used, but primary cell lines and iPSC-derived cell lines are used more and more often, because of their higher biological relevance. Endpoints measured in in vitro cell cultures exposed to toxic compounds typically range from effects on cell viability (measured as decreased mitochondrial functioning, increased membrane damage, or changes in cell metabolism; see section on Cytotoxicity) and cell growth, to effects on cell kinetics (absorption, elimination and biotransformation of cell substrates), changes in the cell transcriptome, proteome or metabolome, and effects on cell-type-dependent functioning. In addition, cell differentiation models can be used not only to study the effects of compounds on differentiated cells, but also to study effects on the process of cell differentiation per se, by exposing the cells during differentiation.
A specific type of cell-based bioassay is the reporter gene bioassay, which is often used to screen individual compounds or complex mixtures extracted from environmental samples for their potency to activate or inactivate receptors that control the expression of genes playing an important role in a specific pathway. Reporter gene bioassays make use of genetically modified cell lines or bacteria that contain an incorporated gene construct encoding an easily measurable protein (the reporter protein). This gene construct is developed in such a way that its expression is triggered by a specific interaction between the toxic compound and a cellular receptor. If the receptor is activated by the toxic compound, transcription and translation of the reporter protein take place, which can easily be measured as a change in colour, fluorescence, or luminescence (see section on Diagnosis - In vitro bioassays).
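The raw read-out of a reporter gene bioassay is often expressed as fold induction relative to a solvent control, as in the minimal Python sketch below. The luminescence counts are invented; real assays include full concentration series, reference compounds and quality controls.

```python
import numpy as np

# Hypothetical luminescence counts from a reporter gene assay -- invented.
solvent_control = np.array([210, 195, 205])
sample_wells = {0.1: 230, 0.3: 410, 1.0: 980, 3.0: 2100}   # % extract: counts

baseline = solvent_control.mean()
for fraction, counts in sample_wells.items():
    # Fold induction: reporter signal relative to the solvent control.
    print(f"{fraction:>4}% extract: {counts / baseline:.1f}-fold induction")
```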
Although there is a societal need for a non-toxic environment, there is also a societal demand to Reduce, Refine and Replace animal studies (the three R principles). Replacement of animal studies by in vitro tests requires that the obtained in vitro results are indicative and predictive of what happens in the in vivo situation. It is obvious that a cell culture consisting of a single cell type is not comparable to a complex organism. For instance, toxicokinetic aspects are hardly taken into account in cell-based bioassays. Although some cells might have metabolic capacity, processes like absorption, distribution, and elimination are not represented, as exposure is usually directly on the cells. Moreover, cell cultures often lack the repair mechanisms, feedback loops, and other interactions with other cell types, tissues and organs that are found in intact organisms. To expand the scope of in vitro - in vivo extrapolation (IVIVE), more complex in vitro models are nowadays being developed that resemble the in vivo situation more closely. For instance, whereas cell culturing was traditionally done in 2D monolayers (i.e. layers of one cell thickness), 3D cell culturing is gaining ground. The advantage of 3D culturing is that it represents a more realistic type of cell growth, including cell-cell interactions, polarization, differentiation, extracellular matrix, diffusion gradients, etc. For epithelial cells (e.g. lung cells), such 3D cultures can even be grown at the air-liquid interface, reflecting the in vivo situation. Another development is cell co-culturing, in which different cell types are cultured together. For instance, two cell types that interact in an organ can be co-cultured. Alternatively, a differentiated cell type that has poor metabolic capacity can be co-cultured with liver cells in order to take possible detoxification, or bioactivation after biotransformation, into account. The latest development in increasing the complexity of in vitro test systems are the so-called organ-on-a-chip devices, in which different cell types are co-cultured in miniaturized small channels. The cells can be exposed to different flows representing, for instance, the blood stream, which may contain toxic compounds (see for instance the video clips at //wyss.harvard.edu/technology/human-organs-on-chips/). Based on similar techniques, even human body-on-a-chip devices can be constructed. Such chips contain different miniaturized compartments containing cell co-cultures that represent different organs, all interconnected by channels representing a microfluidic circulatory system. Although such devices are still in their infancy and regularly run into practical difficulties, it is to be expected that these innovative developments will play their part in the near future of toxicity testing.
What is the principle of a ligand binding assay?
What is the principle of an enzyme inhibition assay?
What is the principle of a reporter gene bioassay?
What are the advantages and disadvantages of primary cell cultures versus continuous cell lines?
What is the difference between embryonic stem cells and induced pluripotent stem cells?
In preparation
(Draft)
Author: Nelly Saenen
Reviewers: Karen Smeets, Frank Van Belleghem
Learning objectives:
You should be able to
Keywords: In vitro, toxicity, cytotoxicity, skin
Toxicity tests are required to assess the potential hazards of new compounds to humans. These tests reveal species-, organ- and dose-specific toxic effects of the compound under investigation. Toxicity can be observed either in in vitro studies using cells or cell lines (see section on In vitro bioassays) or by in vivo exposure of laboratory animals, and involves different durations of exposure (acute, subchronic, and chronic). In line with Directive 2010/63/EU on the protection of animals used for scientific purposes, the use of alternatives to animal testing is encouraged (OECD: alternative methods for toxicity testing). The first step towards replacing animals is to use in vitro methods that can predict acute toxicity. In this chapter, we present acute in vitro cytotoxicity tests (cytotoxicity being the quality of being toxic to cells), as well as skin corrosion, irritation, phototoxicity and sensitisation tests, as skin is the largest organ of the body.
The cytotoxicity test is one of the in vitro biological evaluation and screening tests used to observe cell viability. Viability levels of cells are good indicators of cell health. Conventionally used tests for cytotoxicity include dye exclusion or dye uptake assays, such as Trypan Blue Exclusion (TBE) and Neutral Red Uptake (NRU).
The TBE test is used to determine the number of viable cells present in a cell suspension. Live cells possess intact cell membranes that exclude certain dyes, such as trypan blue, whereas dead cells do not. In this assay, a cell suspension incubated with serial dilutions of the test compound under study is mixed with the dye and then visually examined. A viable cell will have a clear cytoplasm, whereas a nonviable cell will have a blue cytoplasm. The number of viable and/or dead cells per unit volume is determined by light microscopy using a hemacytometer counting chamber. This method is simple, inexpensive and a good indicator of membrane integrity, but counting errors (~10%) can occur due to poor dispersion of cells or improper filling of the counting chamber.
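The counting arithmetic behind the TBE test is straightforward: each large hemacytometer square holds 0.1 µL, so the mean count per square times the dilution factor times 10^4 gives cells per mL. A minimal Python sketch with invented counts:

```python
import numpy as np

def cells_per_ml(counts_per_square, dilution_factor):
    """Each large hemacytometer square holds 0.1 uL, so
    cells/mL = mean count per square x dilution factor x 1e4."""
    return np.mean(counts_per_square) * dilution_factor * 1e4

# Hypothetical counts of clear (viable) and blue (dead) cells in four
# large squares, after 1:2 dilution in trypan blue -- invented numbers.
viable_counts = [46, 52, 49, 51]
dead_counts = [4, 6, 5, 5]

viable = cells_per_ml(viable_counts, 2)
dead = cells_per_ml(dead_counts, 2)
print(f"{viable:.2e} viable cells/mL; "
      f"viability = {100 * viable / (viable + dead):.0f}%")
```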
The TBE method is simple, inexpensive and a good indicator of membrane integrity, but counting errors (~10%) can occur due to poor dispersion of cells or improper filling of the counting chamber.

The NRU assay assesses the cellular uptake of a dye (neutral red) in the presence of a particular substance under study (see e.g. Repetto et al., 2008). This test is based on the ability of viable cells to incorporate and bind neutral red in the lysosomes, a process that depends on universal structures and functions of cells (e.g. cell membrane integrity, energy production and metabolism, transportation of molecules, secretion of molecules). Viable cells can take up neutral red via active transport and incorporate the dye into their lysosomes, while non-viable cells cannot. After washing, viable cells release the incorporated dye under acidified extraction conditions. The amount of released dye can be measured by spectrophotometry.

Nowadays, colorimetric assays to assess cell viability have become popular. For example, the MTT assay (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) tests cell viability by assessing the activity of mitochondrial enzymes. NAD(P)H-dependent oxidoreductase enzymes, which under defined conditions reflect the number of viable cells, are capable of reducing the yellow tetrazolium salt to an insoluble purple formazan product. After solubilizing the end product using dimethyl sulfoxide (DMSO), the product can be quantitatively measured by light absorbance at a specific wavelength. This method is easy to use, safe and highly reproducible. One disadvantage is that MTT formazan is insoluble, so DMSO is required to solubilize the crystals.
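In practice, viability in an MTT experiment is expressed relative to untreated control wells after subtracting a blank. A minimal sketch with hypothetical absorbance readings (570 nm is a commonly used wavelength, and the blank wells are assumed to contain medium and MTT but no cells):

```python
# Percent viability from MTT absorbance readings. All values hypothetical.

blank = 0.05                          # mean absorbance of blank wells
control = [0.92, 0.88, 0.95]          # untreated cells
treated = {                           # test concentration (uM) -> absorbances
    10: [0.85, 0.80, 0.83],
    100: [0.45, 0.50, 0.47],
    1000: [0.12, 0.10, 0.14],
}

def mean(values):
    return sum(values) / len(values)

# Background-corrected control signal sets the 100% viability level.
control_signal = mean(control) - blank
for concentration, absorbances in sorted(treated.items()):
    viability = 100 * (mean(absorbances) - blank) / control_signal
    print(f"{concentration:5d} uM: {viability:5.1f} % viability")
```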
Skin corrosion refers to the production of irreversible damage to the skin, namely visible necrosis (= localized death of living cells, see section on Cell death) through the epidermis and into the dermis, occurring after exposure to a substance or mixture. Skin irritation is a less severe effect, in which a local inflammatory reaction of the skin is observed after exposure to a substance or mixture. Examples of such substances are detergents and alkalis, which commonly affect the hands.

The identification and classification of irritant substances has conventionally been achieved by means of skin or eye observation in vivo. Traditional animal testing used rabbits because of their thin skin. In the Draize test, for example, the test substance is applied to the eye or shaved skin of a rabbit and covered for 24 h. After 24 and 72 h, the eye or skin is visually examined and graded subjectively based on the appearance of erythema and edema. As these in vivo tests have been heavily criticized, they are now being phased out in favor of in vitro alternatives.

The Skin Corrosion Test (SCT) and Skin Irritation Test (SIT) are in vitro assays that can be used to identify whether a chemical has the potential to corrode or irritate skin. The method uses a three-dimensional (3D) human skin model (Episkin model), which comprises basal, suprabasal, spinous and granular layers and a functional stratum corneum (the outer barrier layer of the skin). It involves topical application of a test substance and subsequent assessment of cell viability (MTT assay). Test compounds considered corrosive or irritant are identified by their ability to decrease cell viability below a defined threshold level (e.g. the dose at which only 50% of the cells remain viable, LD50).

Phototoxicity (photoirritation) is defined as a toxic response that is elicited after the initial exposure of skin to certain chemicals and subsequent exposure to light (e.g. chemicals that absorb visible or ultraviolet (UV) light energy, which induces toxic molecular changes).

The 3T3 NRU PT assay is based on an immortalised mouse fibroblast cell line called Balb/c 3T3. It compares the cytotoxicity of a chemical in the presence or absence of a non-cytotoxic dose of simulated solar light. The test expresses the concentration-dependent reduction of the uptake of the vital dye neutral red when measured 24 hours after treatment with the chemical and light irradiation. The exposure to irradiation may alter the cell surface and thus may result in decreased uptake and binding of neutral red. These differences can be measured with a spectrophotometer.
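Phototoxic potential in this type of assay is commonly summarized as a photo-irritation factor (PIF): the ratio of the cytotoxic potency without and with irradiation. The sketch below assumes that IC50 values (the concentration reducing neutral red uptake by 50%) have already been interpolated from the two concentration-response curves; the IC50 values are hypothetical, and the interpretation bands are the commonly cited ones for this assay, shown here for illustration only:

```python
# Photo-irritation factor (PIF) from a 3T3 NRU phototoxicity experiment.
# IC50 values (ug/mL) are hypothetical and would normally be interpolated
# from full concentration-response curves with and without irradiation.

ic50_dark = 120.0    # IC50 without simulated solar light
ic50_light = 8.0     # IC50 with simulated solar light

pif = ic50_dark / ic50_light

# Commonly cited interpretation bands (illustrative):
# PIF < 2 -> no phototoxicity; 2 <= PIF <= 5 -> equivocal; PIF > 5 -> phototoxic.
if pif > 5:
    verdict = "phototoxic"
elif pif >= 2:
    verdict = "equivocal"
else:
    verdict = "not phototoxic"

print(f"PIF = {pif:.1f} -> {verdict}")
```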
Skin sensitisation is the regulatory endpoint aiming at the identification of chemicals able to elicit an allergic response in susceptible individuals. In the past, skin sensitisation was assessed by means of guinea pig tests (e.g. the guinea pig maximisation test and the Buehler occluded patch test) or murine tests (e.g. the murine local lymph node assay). The latter is based upon quantification of T-cell proliferation in the draining (auricular) lymph nodes behind the ears of mice after repeated topical application of the test compound.

The key biological events underpinning the skin sensitisation process are well established. In vitro test methods addressing these events include the ARE-Nrf2 Luciferase Test Method (KeratinoSens), the Human Cell Line Activation Test (h-CLAT), the U937 cell line activation test (U-SENS), and the Interleukin-8 Reporter Gene assay (IL-8 Luc assay). Detailed information on these methods can be found on the OECD site: skin sensitization.

EUR-lex site. //eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32010L0063
OECD site. Alternative methods for toxicity testing. //ec.europa.eu/jrc/en/eurl/ecvam/alternative-methods-toxicity-testing
Episkin model. http://www.episkin.com/Episkin
OECD site. Skin sensitization. //ec.europa.eu/jrc/en/eurl/ecvam/alternative-methods-toxicity-testing/validated-test-methods/skin-sensitisation
Repetto, G., Del Peso, A., Zurita, J.L. (2008). Neutral red uptake assay for the estimation of cell viability/cytotoxicity. Nature Protocols 3, 1125-1131.

(Draft)
Author: Jan-Pieter Ploem
Reviewers: Frank van Belleghem
Learning objectives:
You should be able to

The term "carcinogenicity" refers to the property of a substance to induce or increase the incidence of cancer after inhalation, ingestion, injection or dermal application.

Traditionally, carcinogens have been classified according to their mode of action (MoA). Compounds directly interacting with DNA, resulting in DNA damage or chromosomal aberrations, are classified as genotoxic (GTX) carcinogens. Non-genotoxic (NGTX) compounds do not directly affect DNA and are believed to affect gene expression and signal transduction, disrupt cellular structures and/or alter cell cycle regulation. The difference in mechanism of action between GTX and NGTX compounds requires a different approach in many cases.

Genotoxicity itself is considered to be an endpoint in its own right. The occurrence of DNA damage can be observed and determined quite easily by a variety of methods based on both bacterial and mammalian cells. Often a tiered testing strategy is used to evaluate both heritable germ cell damage and carcinogenicity. Currently eight in vitro assays have been granted OECD guidelines, four of which are commonly used.

The gold standard for genotoxicity testing is the Ames test, a test that was developed in the early seventies. The test evaluates the potential of a chemical to induce mutations (base pair substitutions, frame shift induction, oxidative stress, etc.) in Salmonella typhimurium. During the safety assessment process, it is the first test performed unless deemed unsuitable for specific reasons (e.g. during testing of antibacterial substances). With a sensitivity of 70-90% it is a relatively good predictor of genotoxicity. The principle of the test is fairly simple. A bacterial strain with a genetic defect is placed on minimal medium containing the chemical in question. If mutations are induced, the genetic defect in some cells will be restored, rendering these cells able to synthesize the amino acid that the original, defective cells could not produce. The Ames assay is basically a bacterial reverse mutation assay; in a variant of the test, different strains of E. coli, which are deficient in both DNA repair and an amino acid, are used to identify genotoxic chemicals. Often a combination of different bacterial strains is used to increase the sensitivity as much as possible.

Chromosomal mutations can occur in both somatic cells and germ cells, leading to neoplasia or to birth and developmental abnormalities, respectively. There are two types of chromosomal mutations:
Structural changes: stable aberrations such as translocations and inversions, and unstable aberrations such as gaps and breaks.
Numerical changes: aneuploidy (loss or gain of chromosomes) and polyploidy (multiples of the diploid chromosome complement).
To perform the chromosome aberration assay, mammalian cells are exposed in vitro to the potential carcinogen and then harvested. The frequency of aberrations is determined through microscopy. The chromosome aberration assay can be, and is, performed with both rodent and human cells, which benefits the translational power of the assay.

The HPRT assay is a mammalian genotoxicity assay that utilizes the HPRT gene, an X-chromosome-located reporter gene. The test relies on the fact that cells with an intact HPRT gene are susceptible to the toxic effects of 6-thioguanine, while mutants are resistant to this purine analogue. Wild-type cells are sensitive to the cytostatic effect of the compound, while mutants will be able to proliferate in the presence of 6-thioguanine.

Next to the four assays mentioned, there is a more recently developed test, the micronucleus test, that has already proven to be a valuable resource for genotoxicity testing. The test provides an alternative to the chromosome aberration assay but can be evaluated faster, and it allows for automated measurement, as the analysis of the damage is less subjective. Micronuclei are "secondary" nuclei formed as a result of aneugenic or clastogenic damage.

It is important to note that these assays are all described here from an in vitro perspective. However, an in vivo approach can also be used; see the two-year rodent assay. In that case, live animals are exposed to the compound, after which specific cells are harvested. The advantage of this approach is the presence of the natural niche in which susceptible cells normally grow, resulting in a more relevant range of effects. The downside of in vivo assays is the current ethical pressure on these kinds of methods.
Several organisations actively promote the development and usage of in vitro, or more generally non-animal, alternative methods.

For over 50 years, the 2-year rodent carcinogenicity assay has been the gold standard for carcinogenicity testing. The assay relies on exposure to a compound during a major part of an organism's lifespan. During the further development of the assay, a 2-species/2-gender setup became the preferred method, as some compounds showed different results in e.g. rats and mice, and even between male and female individuals. In this approach, model organisms are exposed to a compound for two years. Depending on the possible mode of exposure (i.e. inhalation, ingestion, or skin/eye contact) once the compound enters the relevant industry, a specific mode of exposure for the model is chosen. During this period the health of the model organisms is documented through different parameters, and based on this a conclusion regarding the compound is drawn.

Carcinogens not causing direct DNA damage are classified as NGTX compounds. Because of the large number of potentially malign pathways or effects that could be induced, the identification of NGTX carcinogens is significantly more difficult than that of GTX compounds. The two-year rodent carcinogenicity assay is one of the assays capable of accurately identifying NGTX compounds. The use of transgenic models has greatly increased the sensitivity and specificity of this assay towards both groups of carcinogens, while also improving the refinement of the assay by shortening the time required to reach a conclusion regarding the compound.

In vitro methods to identify NGTX compounds are rare. Not many alternative assays are able to cope with the vast variety of possible effects caused by these compounds, resulting in many false negatives. However, cell morphology based methods, such as the cell transformation assay, can be a good starting point for developing methods for this type of carcinogens.

Authors: Eva Sugeng and Lily Fredrix
Reviewers: Ľubica Murínová and Raymond Niesink
Learning objectives:
You should be able to

1. Definitions of epidemiology

Epidemiology (originating from Ancient Greek: epi - upon, demos - people, logos - the study of) is the study of the distribution and determinants of health-related states or events in specified populations, and the application of this study to the prevention and control of health problems (Last, 2001). Epidemiologists study human populations with measurements at one or more points in time. When a group of people is followed over time, we call this a cohort (originating from Latin cohors, a group of Roman soldiers). In epidemiology, the relationship between a determinant or risk factor and a health outcome variable is investigated. The outcome variable mostly concerns morbidity (a disease, e.g. lung cancer, or a health parameter, e.g. blood pressure) or mortality (death). The determinant is defined as a collective or individual risk factor (or set of factors) that is (causally) related to a health condition, outcome, or other defined characteristic. In human health - and, specifically, in diseases of complex etiology - sets of determinants often act jointly in relatively complex and long-term processes (International Epidemiological Association, 2014). The people that are the subject of interest are the target population.
In most cases, it is impossible and unnecessary to include all people from the target population; therefore, a sample is taken from the target population, which is called the study population. The sample is ideally representative of the target population. To obtain a representative sample, subjects can be recruited at random.

Epidemiologic research can be either observational or experimental. Observational studies do not involve interference (e.g. allocation of subjects into exposed/non-exposed groups), while experimental studies do. Within observational studies, analytical and descriptive studies can be distinguished. Descriptive studies describe the determinant(s) and outcome without making comparisons, while analytical studies compare certain groups and derive inferences.

In a cross-sectional study, determinant and outcome are measured at the same time. For example, pesticide levels in urine (determinant) and hormone levels in serum (outcome) are collected at one point in time. The design is quick and cheap because all measurements take place at the same time. The drawback is that the design does not allow conclusions about causality, that is, whether the determinant precedes the outcome; it might be the other way around, or both might be caused by another factor (lacking Hill's criterion of temporality; see Box 1). This study design is therefore mostly hypothesis generating.

In a case-control study, the sample is selected based on the outcome, while the determinant is measured in the past. In contrast to a cross-sectional study, this design can include measurements at several time points; hence it is a longitudinal study. First, people with the disease (cases) are recruited, and then matched controls (people not affected by the disease), comparable with regard to e.g. age, gender and geographical region, are enrolled in the study. It is important that controls have the same risk of developing the disease as the cases. The determinant is collected retrospectively, meaning that participants are asked about exposure in the past. The retrospective character of the design poses a risk of recall bias: when people are asked about events that happened in the past, they might not remember them correctly. Recall bias is a form of information bias, which occurs when a measurement error results in misclassification. Bias is defined as a systematic deviation of results or inferences from the truth (International Epidemiological Association, 2014). One should be cautious in drawing conclusions about causality with the case-control study design. According to Hill's criterion of temporality (see Box 1), the exposure should precede the outcome, but because the exposure is collected retrospectively, the evidence may be too weak to draw conclusions about a causal relationship. The benefits are that the design is suitable for research on diseases with a low incidence (in a prospective cohort study this would result in a low number of cases), and for research on diseases with a long latency period, that is, a long time between exposure to the determinant and manifestation of the disease (in a prospective cohort study, it would take many years of follow-up before the disease develops).

Hoffman et al. investigated papillary thyroid cancer (PTC) and exposure to flame retardant chemicals (FRs) in the indoor environment. FRs are chemicals that are added to household products in order to limit the spread of fire, but they can leach into house dust, through which residents can be exposed.
FRs are associated with thyroid disease and thyroid cancer. In this case-control study, PTC cases and matched controls were recruited (outcome), and FR exposure (determinant) was assessed by measuring FRs in the house dust of the participants. The study showed that participants with higher exposure to FRs (bromodiphenyl ether-209 concentrations above the median level) had 2.3 times higher odds (see section on Quantifying disease and associations) of having PTC, compared to participants with lower exposure to FRs (bromodiphenyl ether-209 concentrations below the median level).

A cohort study, another type of longitudinal study, includes a group of individuals that are followed over time into the future (prospective) or that are asked about the past (retrospective). In a prospective cohort study, the determinant is measured at the start of the study and the incidence of the disease is calculated after a certain time period, the follow-up. The study design needs to start with people who are at risk for the disease but not yet affected by it. Therefore, the prospective study design allows the conclusion that there may be a causal relationship, since the health outcome follows the determinant in time (Hill's criterion of temporality). However, interference of other factors is still possible; see paragraph 3 about confounding and effect modification. It is possible to look at more than one health outcome, but the design is less suitable for diseases with a low incidence or with a long latency period, because then you either need a large study population to obtain enough cases, or need to follow the participants for a long time before cases occur. A major issue with this study design is attrition (loss to follow-up): the extent to which participants drop out during the course of the study. Selection bias can occur when a certain type of participant drops out more often, so that the research is conducted with a selection of the target population. Selection bias can also occur at the start of a study, when some members of the target population are less likely to be included in the study population than other members, and the sample therefore is not representative of the target population.

De Cock et al. present a prospective cohort study investigating early life exposure to chemicals and health effects in later life, the LInking EDCs in maternal Nutrition to Child health (LINC) study. For this, over 300 pregnant women were recruited during pregnancy. Prenatal exposure to chemicals was measured in, amongst others, cord blood and breast milk, and the children were followed over time, measuring, amongst others, height and weight status. For example, prenatal exposure to dichlorodiphenyl-dichloroethylene (DDE), a metabolite of the pesticide dichlorodiphenyl-trichloroethane (DDT), was assessed by measuring DDE in umbilical cord blood collected at delivery. During the first year, the body mass index (BMI), based on weight and height, was monitored. DDE levels in umbilical cord blood were divided into 4 equal groups, called quartiles. Boys with the lowest DDE concentrations (the first quartile) had a higher BMI growth curve in the first year, compared to boys with the highest DDE concentrations (the fourth quartile) (De Cock et al., 2016).

When a case-control study is carried out within a cohort study, it is called a nested case-control study. Cases in a cohort study are selected, and matching non-cases are selected as controls.
This type of study design is useful in case of a low number of cases in a prospective cohort study.

Engel et al. investigated attention-deficit hyperactivity disorder (ADHD) in children in relation to prenatal phthalate exposure. Phthalates are added to various consumer products to soften plastics. Exposure occurs via ingestion, inhalation or dermal absorption, and sources are for example plastic packaging of food, volatile household products and personal care products (Benjamin et al., 2017). Engel et al. carried out a nested case-control study within the Norwegian Mother and Child Cohort (MoBa). The cohort included 112,762 mother-child pairs, of which only a small number had a clinical ADHD diagnosis. A total of 297 cases were randomly sampled from registrations of clinical ADHD diagnoses. In addition, 553 controls without ADHD were randomly sampled from the cohort. Phthalate metabolites were measured in maternal urine collected at mid-pregnancy, and concentrations were divided into 5 equal groups, called quintiles. Children of mothers in the highest quintile of the sum of metabolites of the phthalate bis(2-ethylhexyl) phthalate (DEHP) had 2.99 times higher odds (95% CI: 1.47-5.49) (see chapter on Quantifying disease and associations) of an ADHD diagnosis in comparison to the lowest quintile.

A randomized controlled trial (RCT) is an experimental study in which participants are randomly assigned to an intervention group or a control group. The intervention group receives an intervention or treatment; the control group receives nothing, usual care or a placebo. Clinical trials that test the effectiveness of medication are an example of an RCT. If the assignment of participants to groups is not randomized, the design is called a non-randomized controlled trial. The latter design provides weaker evidence. When groups of people instead of individuals are randomized, the study design is called a cluster-randomized controlled trial. This is, for example, the case when classrooms of children at school are randomly assigned to the intervention and control group. Variations are used to switch groups between the intervention and control condition. For example, a crossover design makes it possible for participants to be in both the intervention group and the control group in different phases of the study. In order not to withhold the benefits of the intervention from the control group, a waiting list design makes the intervention available to the control group after the research period.

An example of an experimental study design within environmental research is the study of Bae and Hong. In a randomized crossover trial, participants had to drink beverages either from a BPA-containing can or from a BPA-free glass bottle. Besides BPA levels in urine, blood pressure was measured after exposure. The crossover design included 3 periods, with either drinking only canned beverages, both canned and glass-bottled beverages, or only glass-bottled beverages. BPA concentration in urine increased by 1600% after drinking canned beverages in comparison to drinking from glass bottles.

Confounding occurs when a third factor influences both the outcome and the determinant. For example, the number of cigarettes smoked is positively associated with the prevalence of esophageal cancer. However, the number of cigarettes smoked is also positively associated with the number of standard glasses of alcohol consumed. In addition, alcohol consumption is a risk factor for esophageal cancer.
Alcohol consumption is therefore a confounder in the relationship between smoking and esophageal cancer. One can correct for confounders in the statistical analysis, e.g. using stratification (results are presented for the different groups separately). Effect modification occurs when the association between exposure/determinant and outcome is different for certain groups. For example, the risk of lung cancer due to asbestos exposure is about ten times higher for smokers than for non-smokers. A solution to deal with effect modification is stratification as well.

Box 1: Hill's criteria for causation
With epidemiological studies it is often not possible to determine a causal relationship. That is why epidemiological studies often employ a set of criteria, Hill's criteria of causation, formulated by Sir Austin Bradford Hill, that need to be considered before conclusions about causality are justified (Hill, 1965).

Bae, S., Hong, Y.C. (2015). Exposure to bisphenol A from drinking canned beverages increases blood pressure: Randomized crossover trial. Hypertension 65, 313-319. //doi.org/10.1161/HYPERTENSIONAHA.114.04261
Benjamin, S., Masai, E., Kamimura, N., Takahashi, K., Anderson, R.C., Faisal, P.A. (2017). Phthalates impact human health: Epidemiological evidences and plausible mechanism of action. Journal of Hazardous Materials 340, 360-383. //doi.org/10.1016/j.jhazmat.2017.06.036
De Cock, M., De Boer, M.R., Lamoree, M., Legler, J., Van De Bor, M. (2016). Prenatal exposure to endocrine disrupting chemicals and birth weight - A prospective cohort study. Journal of Environmental Science and Health - Part A Toxic/Hazardous Substances and Environmental Engineering 51, 178-185. //doi.org/10.1080/10934529.2015.1087753
De Cock, M., Quaak, I., Sugeng, E.J., Legler, J., Van De Bor, M. (2016). Linking EDCs in maternal Nutrition to Child health (LINC study) - Protocol for a prospective cohort to study early life exposure to environmental chemicals and child health. BMC Public Health 16: 147. //doi.org/10.1186/s12889-016-2820-8
Engel, S.M., Villanger, G.D., Nethery, R.C., Thomsen, C., Sakhi, A.K., Drover, S.S.M., … Aase, H. (2018). Prenatal phthalates, maternal thyroid function, and risk of attention-deficit hyperactivity disorder in the Norwegian mother and child cohort. Environmental Health Perspectives. //doi.org/10.1289/EHP2358
Hill, A.B. (1965). The Environment and Disease: Association or Causation? Journal of the Royal Society of Medicine 58, 295-300. //doi.org/10.1177/003591576505800503
Hoffman, K., Lorenzo, A., Butt, C.M., Hammel, S.C., Henderson, B.B., Roman, S.A., … Sosa, J.A. (2017). Exposure to flame retardant chemicals and occurrence and severity of papillary thyroid cancer: A case-control study. Environment International 107, 235-242. //doi.org/10.1016/j.envint.2017.06.021
International Epidemiological Association (2014). Dictionary of Epidemiology. Oxford University Press.
Last, J.M. (2001). A Dictionary of Epidemiology. 4th edition, Oxford, Oxford University Press.

Researcher A investigates the relation between parabens and breast cancer. She includes 200 women with breast cancer and 200 women without, and asks all women about their use of personal care products that contain parabens in the past 10 years using a questionnaire. What is the study design that is used?
Case-control study
Cross-sectional study
Prospective cohort study
Randomized controlled trial

Researcher B has an opinion about the advantages and disadvantages of the study design that was chosen.
Which of the following advantages of this design is correct?
There is a low chance of recall bias.
The design is able to prove a causal relationship between the exposure and the outcome.
The design takes into account the long latency period for breast cancer.
The Relative Risk (RR) can be used in this design.

Researcher B is involved in a prospective cohort study investigating lead exposure and ADHD in children. Prenatal exposure to lead was measured in umbilical cord blood collected at delivery. At 7 years of age, ADHD symptoms were counted. It was found that higher lead concentrations were associated with more ADHD symptoms for both boys and girls. Boys had more ADHD symptoms than girls, and higher lead levels. What kind of factor is gender?
Determinant
Confounder
Effect modifier
Outcome

Authors: Eva Sugeng and Lily Fredrix
Reviewers: Ľubica Murínová and Raymond Niesink
Learning objectives:
You should be able to

Prevalence is the proportion of a population with an outcome at a certain time point (e.g. currently, 40% of the population is affected by disease Y) and can be calculated in cross-sectional studies. Incidence concerns only new cases, and the cumulative incidence is the proportion of new cases in the population over a certain time span (e.g. 60% new cases of influenza per year). The (cumulative) incidence can only be calculated in prospective study designs, because the population needs to be at risk of developing the disease, and therefore participants should not be affected by the disease at the start of the study.

Population Attributable Risk (PAR) is a measure that expresses the increase in disease in a population due to the exposure. It can be calculated as the incidence in the total population minus the incidence in the unexposed group:

\(PAR = I_{population} - I_{unexposed}\)

Risk ratio or relative risk (RR) is the ratio of the incidence in the exposed group to the incidence in the unexposed group (using the notation of Table 1):

\(RR = \frac{A/(A+B)}{C/(C+D)}\)

The RR can only be used in prospective designs, because it consists of probabilities of an outcome in a population at risk. The RR is 1 if there is no difference in risk, <1 if there is a decreased risk, and >1 if there is an increased risk. For example, researchers find an RR of 0.8 in a hypothetical prospective cohort study on the region children live in (rural vs. urban) and the development of asthma (outcome). This means that children living in rural areas have 0.8 times the risk of developing asthma, compared to children living in urban areas.

Risk difference (RD) is the difference between the risks in the two groups (Table 1):

\(RD = A/(A+B) - C/(C+D)\)

Odds ratio (OR) is the ratio of the odds of the outcome in the exposed group to the odds of the outcome in the unexposed group (Table 1):

\(OR = \frac{A/B}{C/D} = \frac{A \times D}{B \times C}\)

The OR can be used in any study design, but is most frequently used in case-control studies. The OR is 1 if there is no difference in odds, >1 if the odds are higher, and <1 if the odds are lower. For example, researchers find an OR of 2.5 in a hypothetical case-control study on mesothelioma cancer and occupational exposure to asbestos in the past. Patients with mesothelioma cancer had 2.5 times higher odds of having been occupationally exposed to asbestos in the past compared to the healthy controls. The OR can also be expressed in terms of the odds of the disease instead of the exposure; the formula is then (Table 1):

\(OR = \frac{A/C}{B/D} = \frac{A \times D}{B \times C}\)

For example, researchers find an odds ratio of 0.9 in a cross-sectional study investigating mesothelioma cancer in builders working with asbestos, comparing the use of protective clothing and masks.
The builders who used protective clothing and masks had 0.9 times the odds of having mesothelioma cancer in comparison to builders who did not use protective clothing and masks. The calculation of these measures from the cells of Table 1 is illustrated in the sketch below.

Table 1: Concept table used for the calculation of the RR, RD and OR.

                          Disease/outcome +    Disease/outcome -
Exposure/determinant +    A                    B
Exposure/determinant -    C                    D
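A minimal sketch (with hypothetical counts) of how the RR, RD and OR follow from the four cells of Table 1. The 95% confidence interval for the OR uses the common log-based (Woolf) approximation, which is an addition for illustration, not part of the formulas above:

```python
import math

# Hypothetical 2x2 table, laid out as in Table 1:
#                 disease +   disease -
# exposed         a           b
# unexposed       c           d
a, b, c, d = 40, 160, 20, 180

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)

rr = risk_exposed / risk_unexposed          # relative risk
rd = risk_exposed - risk_unexposed          # risk difference
or_ = (a * d) / (b * c)                     # odds ratio

# 95% CI for the OR via the log-odds approximation (Woolf's method).
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(or_) - 1.96 * se_log_or)
upper = math.exp(math.log(or_) + 1.96 * se_log_or)

print(f"RR = {rr:.2f}, RD = {rd:.3f}")
print(f"OR = {or_:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

Note that the RR is only meaningful in prospective designs, as stated above; the code merely shows the arithmetic.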
Mean difference is the difference between the mean in the exposed group and the mean in the unexposed group. This is also applicable to experimental designs with a follow-up, to assess the increase or decrease of the outcome after an intervention: the mean at baseline versus the mean after the intervention. The mean difference can be standardized using the following formula:

\(d = \frac{\bar{x}_1 - \bar{x}_2}{SD}\)

The standard deviation (SD) is a measure of the spread of a set of values. In practice, the SD must be estimated, either from the SD of the control group or from an 'overall' value from both groups. The best-known index for effect size is Cohen's d. The standardized mean difference can have both a negative and a positive value (typically between -2.0 and +2.0). A positive value indicates a beneficial effect of the intervention; with a negative value, the effect is counterproductive. In general, an effect size of, for example, 0.8 indicates a large effect.

Effect measures such as the relative risk, the odds ratio and the mean difference are reported together with statistical significance and/or a confidence interval. Statistical significance is used to retain or reject the null hypothesis. The study starts from the null hypothesis assumption: we assume that there is no difference between variables or groups, e.g. RR = 1 or the difference in means is 0. The statistical test then gives us the probability of obtaining the observed outcome (e.g. OR = 2.3, or mean difference = 1.5) if the null hypothesis were in fact true. If this probability is smaller than 5%, we consider it unlikely that the result arose by chance alone and we may reject the null hypothesis. The 5% probability corresponds to a p-value of 0.05. A p-value cut-off of p<0.05 is generally used, which means that p-values smaller than 0.05 are considered statistically significant.

A 95% confidence interval is a range of values within which you can be 95% certain that the true mean of the population or measure of association lies. For example, in a hypothetical cross-sectional study on smoking (yes or no) and lung cancer, an OR of 2.5 was found, with a 95% CI of 1.1 to 3.5. That means we can say with 95% certainty that the true OR lies between 1.1 and 3.5. This is regarded as statistically significant, since 1, which means no difference in odds, does not lie within the 95% CI. If researchers also studied oesophagus cancer in relation to smoking and found an OR of 1.9 with a 95% CI of 0.6-2.6, this is not regarded as statistically significant, since the 95% CI includes 1.

When the two populations investigated have a different distribution of, for example, age and gender, it is often hard to compare disease frequencies between them. One way to deal with this is to analyse associations between exposure and outcome within strata (groups). This is called stratification. Example: a hypothetical study investigates differences in health (outcome, measured as the number of symptoms, such as shortness of breath while walking) between two groups of elderly people: urban elderly (n=682) and rural elderly (n=143) (determinant). No difference between urban and rural elderly was found; however, there was a difference in the number of women and men in both groups. The results for symptoms for urban and rural elderly are therefore stratified by gender (Table 2). It then appeared that male urban elderly have more symptoms than male rural elderly (p=0.01). The difference is not significant for women (p=0.07). The differences in health of elderly people living in an urban region thus differ between men and women; hence gender is an effect modifier of the association of interest.

Table 2. Number of symptoms (expressed as a percentage) for urban and rural elderly, stratified by gender. Significant differences in bold.

Number of symptoms    Women: Urban    Women: Rural    Men: Urban    Men: Rural
None                  16.0            30.4            16.2          43.5
One                   26.4            30.4            45.2          47.8
Two or more           57.6            39.1            37.8          8.7
N                     125             23              74            23
p-value               0.07                            0.01

Researcher B investigates the relation between parabens and breast cancer using a cross-sectional design. She includes 500 women between 55 and 65 years of age and asks them about their use of personal care products (PCPs) that contain parabens using a questionnaire, which divides the group into frequent users (N=267) and infrequent users (N=233). She additionally asks whether the women have a breast cancer diagnosis and finds that 41 women have breast cancer; of these women, 30 frequently use PCPs. What is a correct outcome of this study?
The Odds Ratio is 3.6.
The Population Attributable Risk is 0.4.
The incidence of breast cancer in this population is 8.2%.
The prevalence of breast cancer in this population is 8.2%.

With the statistical analysis to obtain the OR, Researcher B finds the following numbers: OR=2.6, 95% CI: 0.9-3.5, p=0.07. What is the interpretation?
Frequent PCP users have significantly higher odds of having breast cancer.
Frequent PCP users have higher odds of breast cancer, but not significantly.
Frequent PCP users have a higher risk of breast cancer, but not significantly.
Frequent PCP users have a significantly higher risk of breast cancer.

Researchers investigate an intervention to reduce phthalate exposure. The intervention group receives advice to reduce phthalates in food, personal care products and volatile household products. The control group does not get the intervention. Before and after the intervention period, urine samples from the participants are collected and analyzed for phthalate metabolites. Which statement is incorrect?
The researchers can conclude about causality, because the exposure precedes the outcome.
The researchers can quantify the size of the difference between the intervention and control group using stratification.
The researchers can show whether the intervention reduces (or increases) the exposure with a risk difference.
That there is no difference between the intervention and control group is called the null hypothesis.

Author: Marja Lamoree
Reviewers: Michelle Plusquin and Adrian Covaci
Learning objectives:
You should be able to
Keywords: chemical analysis, human samples, exposure, ethics, cohort

Human biomonitoring (HBM) involves the assessment of human exposure to natural and synthetic chemicals by the quantitative analysis of these compounds, their metabolites or reaction products in samples of human origin. Samples used in HBM can include blood, urine, faeces, saliva, breast milk and sweat, or other tissues such as hair, nails and teeth. The concentrations determined in human samples are a reflection of the exposure of an individual to the compounds analysed, also referred to as the internal dose. HBM data are collected to obtain insight into the population's exposure to chemicals, often with the objective to integrate them with health data for health impact assessment in epidemiological studies.
Often, specific age groups are addressed, such as neonates, toddlers, children, adolescents, adults and the elderly. Human biomonitoring is an established method in occupational and environmental exposure assessment. In several countries, HBM studies have been conducted for decades already, such as the German Environment Survey (GerES) and the National Health and Nutrition Examination Survey (NHANES) program in the United States. HBM programs may also be conducted under the umbrella of the World Health Organization (WHO). Other examples are the Canadian Health Measures Survey, the Flemish Environment and Health Study and the Japan Environment and Children's Study; the latter specifically focuses on young children. Children are considered to be more at risk of the adverse health effects of early exposure to chemical pollutants, because of their rapid growth and development and their limited metabolic capacity to detoxify harmful chemicals.

Table 1. Information sources for Human Biomonitoring (HBM) programmes

German Environment Survey (GerES): www.umweltbundesamt.de/en/topics/health/assessing-environmentally-related-health-risks/german-environmental-survey-geres
National Health and Nutrition Examination Survey (NHANES): //www.cdc.gov/nchs/nhanes/index.htm
WHO: www.euro.who.int/en/data-and-evidence/environment-and-health-information-system-enhis/activities/human-biomonitoring-survey
Canadian Health Measures Survey (CHMS): http://www23.statcan.gc.ca/imdb/p2SV...rvey&Id=148760
Japan Environment and Children's Study (JECS): //www.env.go.jp/en/chemi/hs/jecs/

Studies focusing on the impact of exposure to chemicals on health are conducted with the use of cohorts: groups of people that are enrolled in a certain study and volunteer to take part in the research program. Usually, apart from donating e.g. blood or urine samples, health measures such as blood pressure, body weight and hormone levels, but also data on diet, education, social background, economic status and lifestyle are collected, the latter through the use of questionnaires. A cross-sectional study aims at the acquisition of exposure and health data of the whole (volunteer) group at a defined moment, whereas in a longitudinal study follow-up measurements are conducted with a certain frequency (e.g. every few years) in order to follow and evaluate changes in exposure, describe time trends, and study health and lifestyle in the longer term (see section on Environmental Epidemiology). To obtain sufficient statistical power to derive meaningful relationships between exposure and eventual (health) effects, the number of participants in HBM studies is often very large, ranging up to 100,000 participants.

Because a lot of (sometimes sensitive) data are gathered from many individuals, ethics is an important aspect of any HBM study. Before a study involving HBM can start, a Medical Ethical Approval Committee needs to approve it. Applications to obtain approval require comprehensive documentation of i) the study protocol (what exactly is being investigated), ii) a statement regarding the safeguarding of the privacy and collected data of the individuals, the access of researchers to the data and the safe storage of all information, and iii) an information letter for the volunteers explaining the aim of and procedures used in the study and their rights (e.g. the right to withdraw), so that they can give consent to be included in the study.
Because chemicals often undergo metabolic transformation (see section on Xenobiotic metabolism and defence) after entering the body via ingestion, dermal absorption or inhalation, it is important not to focus only on the parent compound (= the compound to which the individual was exposed), but also to include metabolites. Diet, socio-economic status, occupation, lifestyle and the environment all contribute to the exposure of humans, while age, gender, health status and weight of an individual define the effect of the exposure. HBM data provide an aggregation of all the different routes through which the individual was exposed. For an in-depth investigation of exposure sources, however, chemical analysis of e.g. the diet (including drinking water) and the indoor and outdoor environment is still necessary. Another important source of chemicals to which people are exposed in their day-to-day life is consumer products, such as electronics, furniture and textiles, which may contain flame retardants, stain repellents, colorants and dyes, and preservatives, among others.

The distribution of a chemical in the body is highly dependent on its physico-chemical properties, such as lipophilicity/hydrophilicity and persistence, while phase I and phase II transformation (see section on Xenobiotic metabolism and defence) also play a determining role: storage of lipophilic, persistent compounds occurs in fat tissue, while moderately lipophilic to hydrophilic compounds are excreted after metabolic transformation, or in unchanged form. Based on these considerations, a proper choice of the appropriate matrix for sampling can be made, i.e. some chemicals are best measured in urine, while for others blood may be more suitable. For the design of the sampling campaign, the properties of the compounds to be analyzed should be taken into account. In case of volatility, airtight sampling containers should be used, while for light-sensitive compounds amber coloured glassware is the optimal choice. Ideally, after collection, the samples are stored under the correct conditions as quickly as possible, in order to avoid degradation caused by thermal instability or biodegradation caused by remaining enzyme activity in the sample (e.g. in blood or breast milk samples). Labeling and storage of the large quantities of samples generally included in HBM studies are important parts of the sampling campaign (see for video: //www.youtube.com/watch?v=FQjKKvAhhjM).

Typically, for the determination of the concentrations of compounds to which people are exposed and the corresponding metabolites formed in the human body, analytical techniques such as liquid and gas chromatography (LC and GC, respectively) coupled to mass spectrometry (MS) are applied. Chromatography is used for the separation of the compounds, while MS is used to detect them. Prior to the analysis using LC- or GC-MS, the sample is pretreated (e.g. particles are removed) and extracted, i.e. the compounds to be analysed are concentrated in a small volume while sample matrix constituents that may interfere with the analysis (e.g. lipids, proteins) are removed, resulting in an extract that is ready to be injected onto the chromatographic system. A schematic representation of all steps in the analytical procedure is given in the accompanying figure.
The analytical methods used to quantify concentrations of chemicals in order to assess human exposure need to be of high quality, due to the specific nature of HBM studies. The compounds to be analysed are usually present at very low concentrations (i.e. in the order of pg/L for cord blood), and the sample volumes are small. For some matrices, such as blood, the small sample volume is dictated by the fact that sample availability is not unlimited. Another factor that limits the available sample volume is the cost related to the requirement of dedicated, long-term storage space at -20 °C or even -80 °C to ensure sample integrity and stability.

The compounds on which HBM studies often focus are those to which we are exposed in daily life. This implies that the analytical procedure should be able to deal with contamination of the sample with the compounds to be analysed, due to the presence of these compounds in our surroundings. Higher background contamination leads to a decreased capacity to detect low concentrations, thus negatively impacting the quality of the studies. Examples of compounds that have been monitored frequently in human urine are phthalates, such as diethylhexyl phthalate, or DEHP for short. DEHP is a chemical used in many consumer products, and therefore contamination of the samples with DEHP from the surroundings severely influences the analytical measurements. One way around this is to focus on the metabolites of DEHP formed by phase I or II metabolism: this guarantees that the chemical has passed through the human body and has undergone a metabolic transformation, so that its detection is not due to contamination from the background, which results in a more reliable exposure metric. When the analytical method is designed for the quantitative analysis of metabolites, an enzymatic step for the deconjugation of the phase II metabolites should be included (see section on Xenobiotic metabolism and defence).

Because the generated data, i.e. the concentrations of the compounds in the human samples, are used to determine parameters like average/median exposure levels, the detection frequency of specific compounds and the highest/lowest exposure levels, the accuracy of the measurements should be high. In addition, analytical methods used for HBM should be capable of high throughput, i.e. the time needed per analysis should be low, because of the large numbers of samples that are typically analysed, in the order of a hundred to a few thousand samples, depending on the study.
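As a simple illustration of how such HBM data are typically summarized, the sketch below computes the detection frequency and median for a set of hypothetical urine concentrations, substituting LOD/2 for non-detects, which is one common (though debated) convention for left-censored data:

```python
# Summary statistics for hypothetical HBM measurements (ng/mL).
# None marks a sample below the limit of detection (LOD).

lod = 0.10
measurements = [0.25, None, 0.80, 0.15, None, 1.30, 0.40, None, 0.55, 2.10]

detected = [x for x in measurements if x is not None]
detection_frequency = 100 * len(detected) / len(measurements)

# Substitute LOD/2 for non-detects before computing the median.
imputed = sorted(x if x is not None else lod / 2 for x in measurements)
n = len(imputed)
median = (imputed[n // 2 - 1] + imputed[n // 2]) / 2 if n % 2 == 0 else imputed[n // 2]

print(f"Detection frequency:  {detection_frequency:.0f} %")
print(f"Median concentration: {median:.2f} ng/mL")
```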
Summarizing, HBM data support the assessment of temporal trends and spatial patterns in human exposure, shed light on subpopulations that are at risk, and provide insight into the effectiveness of measures to reduce or even prevent adverse health effects due to chemical exposure.

HBM4EU project info: www.hbm4eu.eu; video: //www.youtube.com/watch?v=DmC1v6EAeAM&feature=youtu.be.

Draw a scheme of a typical analytical procedure for the quantitative determination of the concentration of a compound in a certain sample and name the different steps.
What are the aims of human biomonitoring?
Why is it useful to focus HBM on metabolites of chemicals and not on the original chemical, as is the case for e.g. phthalates?
Mention 5 factors related to chemical analysis in HBM and explain their importance.
Ethics approval from a Medical Ethical Approval Committee is mandatory to carry out an HBM study. Specify what type of information should be included in a dossier in order to obtain approval.

(Draft)
Authors: Karen Vrijens and Michelle Plusquin
Reviewers: Frank Van Belleghem
Learning objectives:
You should be able to

The exposome idea was described by Christopher Wild in 2005 as a measure of all human life-long exposures, including the process of how these exposures relate to health. An important aim of the exposome concept is to explain how non-genetic exposures contribute to the onset or development of important chronic diseases. The concept represents the totality of exposures from three broad domains: internal, specific external and general external (Wild, 2012). The internal exposome includes processes such as metabolism, endogenous circulating hormones, body morphology, physical activity, gut microbiota, inflammation, and aging. The specific external exposures include diverse agents, for example radiation, infections, chemical contaminants and pollutants, diet, lifestyle factors (e.g., tobacco, alcohol) and medical interventions. The wider social, economic and psychological influences on the individual make up the general external exposome, including, but not limited to, social capital, education, financial status, psychological stress, urban-rural environment and climate1.

The exposome concept clearly illustrates the complexity of the environment humans are exposed to nowadays, and how this can impact human health. There is a need for internal biomarkers of exposure (see section on Human biomonitoring) as well as biomarkers of effect, to disentangle the complex interplay between exposures potentially occurring simultaneously and at different concentrations throughout life. Advances in biomedical sciences and molecular biology that allow holistic information to be collected on epigenetics, the transcriptome (see section on Gene expression), the metabolome (see section on Metabolomics), etc. are at the forefront of identifying biomarkers of exposure as well as of effect.

To determine the health effect of environmental exposure, markers that can detect early changes before disease arises are essential and can be implemented in preventative medicine. These types of markers can be seen as intermediate biomarkers of effect, and their discovery relies on large-scale studies at different levels of biology (transcriptomics, genomics, metabolomics). The term "omics" refers to the quantitative measurement of global sets of molecules in biological samples using high-throughput techniques (i.e. automated experiments that enable large-scale repetition)7, in combination with advanced biostatistics and bioinformatics tools8. Given the availability of data from high-throughput omics platforms, together with reliable measurements of external exposures, the use of omics enhances the search for markers playing a role in the biological pathway linking exposure to disease risk.

The meet-in-the-middle (MITM) concept was suggested as a way to address the challenge of identifying causal relationships linking exposures and disease outcomes. The first step of this approach consists of investigating the association between exposure and biomarkers of exposure. The next step consists of studying the relationship between (biomarkers of) exposure and intermediate omics biomarkers of early effects; and third, the relation between the disease outcome and the intermediate omics biomarkers is assessed.
The MITM approach stipulates that the causal nature of an association is reinforced if it is found in all three steps. Molecular markers that indicate susceptibility to certain environmental exposures are beginning to be uncovered and can aid in targeted prevention strategies. This approach is heavily dependent on new developments in molecular epidemiology, in which molecular biology is merged into epidemiological studies. Below, the different levels of molecular biology currently studied to identify markers of exposure and effect are discussed in detail.

Intermediate biomarkers can be identified as measurable indicators of certain biological states at different levels of the cellular machinery, and vary in their response time, duration, site and mechanism of action. Different molecular markers might be preferred depending on the exposure(s) under study. Changes at the mRNA level can be studied following a candidate approach, in which mRNAs with a biological role suspected to be involved in the molecular response to a certain type of exposure (e.g. inflammatory mRNAs in the case of exposure to tobacco smoke) are selected a priori and measured using quantitative PCR technology, or alternatively at the level of the whole genome by means of microarray analyses or Next Generation Sequencing technology10. Changes at the transcriptome level are studied by analysing the totality of RNA molecules present in a cell type or sample. Both types of studies have proven their utility in molecular epidemiology. About a decade ago the first study was published reporting on candidate gene expression profiles that were associated with exposure to diverse carcinogens11. Around the same time, the first studies on transcriptomics were published, including transcriptomic profiles for a dioxin-exposed population12, in association with diesel-exhaust exposure13, and comparing smokers versus non-smokers both in blood14 and in airway epithelium cells15. More recently, attention has been focused on prenatal exposures in association with transcriptomic signatures, as this fits within the scope of the exposome concept. As such, transcriptomic profiles have been described in association with exposure to maternal smoking assessed in placental tissue16, as well as particulate matter exposure in cord blood samples17.

Epigenetics relates to all heritable changes in gene expression that do not directly affect the DNA sequence itself. The most widely studied epigenetic mechanism in the field of environmental epidemiology to date is DNA methylation. DNA methylation refers to the process in which methyl groups are added to a DNA sequence. As such, methylation changes can alter the expression of a DNA segment without altering its sequence. DNA methylation can be studied by a candidate gene approach using a digestion-based design or, more commonly, a bisulfite conversion followed by pyrosequencing, methylation-specific PCR or a bead array. The bisulfite treatment of DNA mediates the deamination of cytosine into uracil, and these converted residues will be read as thymine, as determined by PCR amplification and sequencing. However, 5-methylcytosine (5-mC) residues are resistant to this conversion and will still be read as cytosine. If an untargeted approach is desirable, several strategies can be followed to obtain whole-genome methylation data, including sequencing.
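To make the bisulfite readout concrete: after conversion, unmethylated cytosines are read as thymine, so the fraction of sequencing reads still showing cytosine at a CpG site estimates its methylation level. A minimal sketch with hypothetical read counts:

```python
# Percent methylation at one CpG site after bisulfite conversion.
# Methylated cytosines resist conversion and are still read as C;
# unmethylated cytosines are converted and read as T.

reads_c = 85   # reads showing cytosine (methylated) at this CpG site
reads_t = 115  # reads showing thymine (unmethylated) at this CpG site

methylation_pct = 100 * reads_c / (reads_c + reads_t)
print(f"Methylation at this CpG site: {methylation_pct:.1f} %")
```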
Epigenotyping technologies such as the human methylation BeadChips18 generate a methylation-state-specific 'pseudo-SNP' through bisulfite conversion, thereby translating differences in DNA methylation patterns into sequence differences that can be analyzed using quantitative genotyping methods19.

An interesting characteristic of DNA methylation is that it can have transgenerational effects (i.e. effects that act across multiple generations). This was first shown in a study on a population that was prenatally exposed to famine during the Dutch Hunger Winter of 1944-1945. These individuals had less DNA methylation of the imprinted gene coding for insulin-like growth factor 2 (IGF2), measured six decades later, compared with their unexposed, same-sex siblings. The association was specific for peri-conceptional exposure (i.e. exposure during the period from before conception to early pregnancy), reinforcing that very early mammalian development is a crucial period for establishing and maintaining epigenetic marks20.

Post-translational modifications (i.e. the biochemical modification of proteins following protein biosynthesis) have recently gained more attention, as they are known to be induced by oxidative stress21 (see section on Oxidative stress) and specific inflammatory mediators22. Besides their function in the structure of chromatin in eukaryotic cells, histones have been shown to have toxic and pro-inflammatory activities when they are released into the extracellular space23. Much attention has gone to the associations between metal exposures and histone modifications24, although recently a first human study on the association between particulate matter exposure and histone H3 modifications was published25.

Expression of microRNAs (miRNAs: small noncoding RNAs of ∼22 nt in length which are involved in the regulation of gene expression at the post-transcriptional level by degrading their target mRNAs and/or inhibiting their translation; Ambros, 2004) has also been shown to serve as a valuable marker of exposure: both candidate and untargeted approaches have resulted in the identification of miRNA expression patterns that are associated with exposure to smoking26, particulate matter27, and chemicals such as polychlorinated biphenyls (PCBs)28.

Metabolomics has been proposed as a valuable approach to address the challenges of the exposome. Metabolomics, the study of metabolism at the whole-body level, involves assessment of the entire repertoire of small-molecule metabolic products present in a biological sample. Unlike genes, transcripts and proteins, metabolites are not encoded in the genome. They are also chemically diverse, consisting of carbohydrates, amino acids, lipids, nucleotides and more. Humans are expected to contain a few thousand metabolites, including those they make themselves as well as nutrients and pollutants from their environment and substances produced by microbes in the gut. The study of metabolomics increases knowledge of the interactions between gene and protein expression and the environment29. Metabolomics can provide biomarkers of effect of environmental exposure, as it allows for the full characterization of the biochemical changes that occur during xenobiotic metabolism (see section on Xenobiotic metabolism and defence).
Recent technological developments have allowed downscaling of the sample volume necessary for analysis of the full metabolome, allowing for the assessment of system-wide metabolic changes that occur as a result of an exposure or in conjunction with a health outcome [30]. As for all biomarkers discussed, both targeted metabolomics, in which specific metabolites are measured in order to characterize a pathway of interest, and untargeted metabolomic approaches are available. Among "omics" methodologies, metabolomics interrogates a relatively low number of features: there are about 2,900 known human metabolites versus ~30,000 genes. It therefore has greater statistical power than transcriptome-wide and genome-wide studies [31]. Metabolomics is, therefore, a potentially sensitive method for identifying biochemical effects of external stressors. Even though the developing field of "environmental metabolomics" seeks to employ metabolomic methodologies to characterize the effects of environmental exposures on organism function and health, the relationship between most chemicals and their effects on the human metabolome has not yet been studied.

Limitations of molecular epidemiological studies include the difficulty of obtaining samples to study, the need for large study populations to identify significant relations between exposure and the biomarker, and the need for complex statistical methods to analyse the data. To circumvent the issue of sample collection, much effort has been focused on eliminating the need for blood or serum samples by utilizing saliva samples, buccal cells or nail clippings to read out molecular markers. Although these samples can easily be collected in a non-invasive manner, care must be taken to prove that they indeed accurately reflect the body's response to exposure rather than a local effect. For DNA methylation, it has been shown that this is heavily dependent on the locus under study. For certain CpG sites the correlation in methylation levels across tissues is much higher than for other sites [32]. For those sites that do not correlate well across tissues, it has furthermore been demonstrated that DNA methylation levels can differ in their associations with clinical outcomes [33], so care must be taken in epidemiological study design to overcome these issues.

Describe the 3 components of the exposome.
Explain how biomarkers can be identified using the meet-in-the-middle (MITM) model.
Give 3 examples of molecular (bio)markers of environmental exposure.

Author: Nico M. van Straalen
Reviewers: Dick Roelofs, Dave Spurgeon
Learning objectives:
You should be able to
Keywords: genomics, transcriptomics, proteomics, metabolomics, risk assessment

Low-dose exposure to toxicants induces biochemical changes in an organism, which aim to maintain homoeostasis of the internal environment and to prevent damage. One aspect of these changes is a high abundance of transcripts of biotransformation enzymes, oxidative stress defence enzymes, heat shock proteins and many proteins related to the cellular stress response. Such defence mechanisms are often highly inducible, that is, their activity is greatly upregulated in response to a toxicant. It is also known that most of the stress responses are specific to the type of toxicant. This principle may be reversed: if an upregulated stress response is observed, this implies that the organism is exposed to a certain stress factor; the nature of the stress factor may even be derived from the transcription profile.
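The idea that the nature of the stress factor may be derived from the transcription profile can be illustrated with a toy matching procedure. The sketch below (entirely synthetic reference signatures and an invented cosine-similarity rule, not any published classifier) compares an observed profile against mean profiles recorded for known stressors and reports the best match.

```python
import numpy as np

# Toy illustration: match an observed expression profile against reference
# stressor signatures (synthetic data). The stressor whose signature is most
# similar to the observed profile is the most likely cause.
rng = np.random.default_rng(1)
n_genes = 50
signatures = {  # hypothetical mean log2 fold-change profile per stressor
    "cadmium": rng.normal(0, 1, n_genes),
    "phenanthrene": rng.normal(0, 1, n_genes),
    "drought": rng.normal(0, 1, n_genes),
}

# An observed profile: a noisy version of the cadmium signature.
observed = signatures["cadmium"] + rng.normal(0, 0.3, n_genes)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {name: cosine(observed, sig) for name, sig in signatures.items()}
print(max(scores, key=scores.get))  # 'cadmium'
```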
For this reason microarrays, RNA sequencing and other techniques of transcriptome analysis have been applied in a large variety of contexts, both in laboratory experiments and in field surveys. These studies suggest that transcriptomics scores high on (in decreasing order) rapidity, specificity and sensitivity. While the promises of genomics applications in environmental toxicology are high, most of the applications are in mode-of-action studies rather than in risk assessment.

No organism is defenceless against environmental toxicants. Even at exposures below phenotypically visible no-effect levels, a host of physiological and biochemical defence mechanisms are already active and contribute to the organism's homeostasis. These regulatory mechanisms often involve upregulation of defence mechanisms such as oxidative stress defence, biotransformation (xenobiotic metabolism), heat shock responses, induction of metal-binding proteins, hypoxia response, repair of DNA damage, etc. At the same time, downregulation is observed for energy metabolism and functions related to growth and reproduction. In addition to these targeted regulatory mechanisms, there are usually many secondary effects and dysfunctional changes arising from damage. A comprehensive overview of all these adjustments can be obtained from analysis of the transcriptome.

In this module we will review the various approaches adopted in "omics", with an emphasis on transcriptomics. "Omics" is a container term comprising five different activities. Table 1 provides a list of these approaches and their possible contribution to environmental toxicology. Genomics and transcriptomics deal with DNA and mRNA sequencing, proteomics relies on mass spectrometry, while metabolomics involves a variety of separation and detection techniques, depending on the class of compounds analysed. The various approaches gain strength when applied jointly. For example, proteomics analysis is much more insightful if it can be linked to an annotated genome sequence, and metabolism studies can profit greatly from transcription profiles that include the enzymes responsible for metabolic reactions. Systems biology aims to integrate the different approaches using mathematical models. However, it is fair to say that the correlation between responses at the different levels is often rather poor. Upregulation of a transcript does not always imply more protein, more protein can be generated without transcriptional upregulation, and the concentration of a metabolite is not always correlated with upregulation of the enzymes supposed to produce it. In this module we will focus on transcriptomics only. Metabolomics is dealt with in a separate section.

Table 1.
Overview of the various "omics" approaches.

Genomics: genome sequencing and assembly, comparison of genomes, phylogenetics, evolutionary analysis. Relevance for environmental toxicology: explanation of species and lineage differences in susceptibility from the structure of targets and metabolic potential; relationship between toxicology, evolution and ecology.

Transcriptomics: genome-wide transcriptome (mRNA) analysis, gene expression profiling. Relevance: target and metabolism expression indicating activity, analysis of modes of action, diagnosis of substance-specific effects, early-warning instrument for risk assessment.

Proteomics: analysis of the protein complement of the cell or tissue. Relevance: systemic metabolism and detoxification, diagnosis of physiological status, long-term or permanent effects.

Metabolomics: analysis of all metabolites from a certain class, pathway analysis. Relevance: functional read-out of the physiological state of a cell or tissue.

Systems biology: integration of the various "omics" approaches, network analysis, modelling. Relevance: understanding of coherent responses, extrapolation to whole-body phenotypic responses.

The aim of transcriptomics in environmental toxicology is to gain a complete overview of all changes in mRNA abundance in a cell or tissue as a function of exposure to environmental chemicals. This is usually done in the following sequence of steps:

In the recent past, step 4 was done by microarray hybridization rather than by direct sequencing. In this technique two pools of cDNA (e.g. a control and a treatment) are hybridized to a large number of probes fixed onto a small glass plate. The probes are designed to represent the complete gene complement of the organism. Positive hybridization signals are taken as evidence for upregulated gene expression. Microarray hybridization arose in the years 1995-2005 and has now been largely overtaken by ultrafast and high-throughput next generation sequencing methods; however, due to its cost-efficiency, the relative simplicity of the bioinformatics analysis, and the standardization of the assessed genes, it is still often used.

We illustrate the principles of transcriptomics analysis, and the kind of data analysis that follows it, with an example from the work by Bundy et al. These authors exposed earthworms (Lumbricus rubellus) to soils experimentally amended with copper, quite a toxic element for earthworms. The copper-induced transcriptome was surveyed using a custom-made microarray, and metabolic profiles were established using NMR (nuclear magnetic resonance) spectroscopy. Of the 8,209 probes on the microarray, 329 showed a significant alteration of expression under the influence of copper. The data were plotted in a "heat map" diagram, providing a quick overview of upregulated and downregulated genes. The expression profiles were also analysed in reduced dimensionality using principal component analysis (PCA). This showed that the profiles varied considerably with treatment; especially the highest and the penultimate exposures generated a profile very different from the control. The genes could be allocated to four clusters: genes upregulated by copper over all exposures, genes downregulated by copper, genes upregulated by low exposures but unaffected at higher exposures, and genes upregulated by low exposures but downregulated at higher concentrations.
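The dimension reduction step described here can be sketched in a few lines; the following is an SVD-based PCA on a synthetic samples-by-genes matrix standing in for the microarray data (not the actual Lumbricus rubellus data set):

```python
import numpy as np

# PCA of a samples x genes expression matrix via SVD (synthetic stand-in data).
rng = np.random.default_rng(0)
n_samples, n_genes = 12, 300
X = rng.normal(size=(n_samples, n_genes))
X[6:, :30] += 2.0  # pretend 30 genes respond in the 6 'exposed' samples

Xc = X - X.mean(axis=0)                       # centre each gene
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                                # sample coordinates on the PCs
explained = s**2 / (s**2).sum()

print("variance explained by PC1, PC2:", np.round(explained[:2], 2))
print("PC1 scores:", np.round(scores[:, 0], 1))  # exposed vs control separate
```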
Analysis of gene identity combined with metabolite analysis suggested that the changes were due to an effect of copper on mitochondrial respiration, reducing the amount of energy generated by oxidative phosphorylation. This mechanism underlay the reduction of body growth observed at the phenotypic level.

How could omics technology, especially transcriptomics, contribute to risk assessment of chemicals? Three possible advantages have been put forward: rapidity, specificity and sensitivity. Among these, the second one (specificity) has proven to be the most consistent and possibly brings the largest advantage. This can be illustrated by a study by Dom et al., in which gene expression profiles were generated for Daphnia magna exposed to different alcohols and chlorinated anilines. The profiles of replicates exposed to the same compound were always clustered together, except in one case (ethanol), showing that gene expression is quite specific to the compound. It is possible to reverse this argument: from the gene expression profile, the compound causing it can be deduced. In addition, the example cited showed that the first separation in the cluster analysis was between exposures that did and did not affect energy reserves and growth. So the gene expression profiles are not only indicative of the compound, but also of the type of effects expected.

The claim of rapidity has also proven true; however, this advantage is not always decisive. It may be an issue when quick decisions are crucial (evaluating a truck loaded with suspect contaminated soil, or deciding whether or not to discharge a certain waste stream into a lake), but for regular risk assessment procedures it proved to be less of an advantage than sometimes expected. Finally, greater sensitivity of gene expression, in the sense of lower no-observed effect concentrations than classical endpoints, is a potential advantage, but proves to be less spectacular in practice. However, there are clear examples in which exposures below phenotypic effect levels were shown to induce gene expression responses, indicating that the organism was able to compensate for any negative effects by adjusting its biochemistry.

Another strategy regarding the use of gene expression in risk assessment is not to focus on genome-wide transcriptomes but on selected biomarker genes. In this strategy, gene expressions are aimed for that show consistent dose-dependency, responses over a wide range of contaminants, and correlations with biological damage. For example, De Boer et al. analysed a composite data set including experiments with six heavy metals, six chlorinated anilines, tetrachlorobenzene, phenanthrene, diclofenac and isothiocyanate, all previously used in standardized experiments with the soil-living collembolan Folsomia candida. Across all treatments, a selection was made of 61 genes that were responsive in all cases and fulfilled the three criteria listed above. Some of these marker genes showed a very good and reproducible dose-related response to soil contamination. One experiment with two of these biomarkers, designed to diagnose a field soil with complex unknown contamination, clearly demonstrated the presence of Cyp-inducing organic toxicants.

Of course there are also disadvantages associated with transcriptomics in environmental toxicology. Gene expression analysis has come to occupy a designated niche in environmental toxicology since about 2005. It is a field highly driven by technology, and has shown continuous change over the last years.
It may significantly contribute to risk assessment in the context of mode-of-action studies and as a source of designated biomarker techniques. Finally, transcriptomics data are very suitable to feed into information regarding key events: important biochemical alterations that are causally linked up to the level of the phenotype to form an adverse outcome pathway. We refer to the section on Adverse outcome pathways for further reading.

Bundy, J.G., Sidhu, J.K., Rana, F., Spurgeon, D.J., Svendsen, C., Wren, J.F., Stürzenbaum, S.R., Morgan, A.J., Kille, P. "Systems toxicology" approach identifies coordinated metabolic responses to copper in a terrestrial non-model invertebrate, the earthworm Lumbricus rubellus. BMC Biology 6, 25.
De Boer, T.E., Janssens, T.K.S., Legler, J., Van Straalen, N.M., Roelofs, D. Combined transcriptomics analysis for classification of adverse effects as a potential end point in effect based screening. Environmental Science and Technology 49, 14274-14281.
Dom, N., Vergauwen, L., Vandenbrouck, T., Jansen, M., Blust, R., Knapen, D. Physiological and molecular effect assessment versus physico-chemistry based mode of action schemes: Daphnia magna exposed to narcotics and polar narcotics. Environmental Science and Technology 46, 10-18.
Gibson, G., Muse, S.V. A Primer of Genome Science. Sinauer Associates Inc., Sunderland.
Gibson, G. The environmental contribution to gene expression profiles. Nature Reviews Genetics 9, 575-581.
Roelofs, D., De Boer, M., Agamennone, V., Bouchier, P., Legler, J., Van Straalen, N. Functional environmental genomics of a municipal landfill soil. Frontiers in Genetics 3, 85.
Van Straalen, N.M., Feder, M.E. Ecological and evolutionary functional genomics - how can it contribute to the risk assessment of chemicals? Environmental Science & Technology 46, 3-9.
Van Straalen, N.M., Roelofs, D. Genomics technology for assessing soil pollution. Journal of Biology 7, 19.

Define genomics, transcriptomics, proteomics and metabolomics, and indicate which of these approaches is most useful in the risk assessment of chemicals.
Specify three advantages and at least one disadvantage of the use of genome-wide gene expression as a tool in environmental risk assessment.

Author: Pim E.G. Leonards
Reviewers: Nico van Straalen, Drew Ekman
Learning objectives:
You should be able to:
Keywords: Metabolomics, metabolome, environmental metabolomics, application areas of metabolomics, targeted and untargeted metabolomics, metabolomics analysis and workflow

Metabolomics is the systematic study of small organic molecules (<1000 Da) that are intermediates and products formed in cells and biofluids due to metabolic processes. A great variety of small molecules result from the interaction between genes, proteins and metabolites. The primary types of small organic molecules studied are endogenous metabolites (i.e., those that occur naturally in the cell) such as sugars, amino acids, neurotransmitters, hormones, vitamins, and fatty acids. The total number of endogenous metabolites in an organism is still under study but is estimated to be in the thousands. This number varies considerably between species and cell types. For instance, brain cells contain relatively high levels of neurotransmitters and lipids, although levels can vary greatly between different types of brain tissue. Metabolites function in networks, e.g. the citric acid cycle, in which molecules are converted by enzymes.
The turnover time of the metabolites is regulated by the enzymes present and the amounts of the metabolites themselves.

The field of metabolomics is relatively new compared to genomics, with the first draft of the human metabolome available in 2007. However, the field has grown rapidly since that time due to its recognized ability to reflect molecular changes most closely associated with an organism's phenotype. Indeed, in comparison to other 'omics approaches (e.g., transcriptomics), metabolites are the downstream results of the action of genes and proteins and, as such, provide a direct link with the phenotype. The metabolic status of an organism is directly related to its function (e.g. energetic, oxidative, endocrine, and reproductive status) and phenotype, and is, therefore, uniquely suitable to relate chemical stress to the health status of organisms. Moreover, unlike transcriptomics and proteomics, the identification of metabolites does not require the existence of gene sequences, making it particularly useful for those species which lack a sequenced genome.

The complete set of small molecules in a biological system (e.g. cells, body fluids, tissues, organism) is called the metabolome (Table 1). The term metabolomics was introduced by Oliver et al., who described it "as the complete set of metabolites/low molecular weight compounds which is context dependent, varying according to the physiology, development or pathological state of the cell, tissue, organ or organism". This quote highlights the observation that the levels of metabolites can vary due to internal as well as external factors, including stress resulting from exposure to environmental contaminants. This has resulted in the emergence and growth of the field of environmental metabolomics, which is based on the application of metabolomics to biological systems that are exposed to environmental contaminants and other relevant stressors (e.g., temperature). In addition to endogenous metabolites, some metabolomic studies also measure changes in the biotransformation of environmental contaminants, food additives, or drugs in cells, the collection of which has been termed the xenometabolome.

Table 1: Definitions of metabolomics.

Metabolomics: analysis of small organic molecules (<1000 Da) in biological systems (e.g. cell, tissue, organism). Relevance for environmental toxicology: functional read-out of the physiological state of a cell or tissue, directly related to the phenotype.

Metabolome: measurement of the complete set of small molecules in a biological system. Relevance: discovery of metabolic pathways affected by contaminant exposure.

Environmental metabolomics: metabolomics analysis in biological systems that are exposed to environmental stress, such as exposure to environmental contaminants. Relevance: metabolomics focused on environmental contaminant exposure, for instance to study the mechanism of toxicity or to find a biomarker of exposure or effect.

Xenometabolome: metabolites formed from the biotransformation of environmental contaminants, food additives, or drugs. Relevance: understanding the metabolism of the target contaminant.

Targeted metabolomics: analysis of a pre-selected set of metabolites in a biological system. Relevance: focus on the effects of environmental contaminants on specific metabolic pathways.

Untargeted metabolomics: analysis of all detectable (i.e., not preselected) metabolites in a biological system. Relevance: discovery-based analysis of the metabolic pathways affected by environmental contaminant exposure.

The development and successful application of metabolomics relies heavily on i) currently available analytical techniques that measure metabolites in cells, tissues, and organisms, ii) the identification of the chemical structures of the metabolites, and iii) characterisation of the metabolic variability within cells, tissues, and organisms.

The aim of metabolomics analysis in environmental toxicology can be to quantify a pre-selected set of metabolites after environmental contaminant exposure (targeted metabolomics), or to survey as many metabolites as possible (untargeted metabolomics). In targeted metabolomics a limited number of pre-selected metabolites (typically 1-100) are quantitatively analysed (e.g. nmol dopamine/g tissue). For example, metabolites in the neurotransmitter biosynthetic pathway could be targeted to assess exposures to pesticides. Targeting specific metabolites in this way typically allows for their detection at low concentrations with high accuracy. Conversely, in untargeted metabolomics the aim is to detect as many metabolites as possible, regardless of their identities, so as to assess as much of the metabolome as possible. The largest challenge for untargeted metabolomics is the identification (annotation) of the chemical structures of the detected metabolites. There is currently no single analytical method able to detect all metabolites in a sample, and therefore a combination of different analytical techniques is used to detect the metabolome. Different techniques are required due to the wide range of physical-chemical properties of the metabolites. Metabolites can be grouped in classes, such as fatty acids, and within a class different metabolites can be found.

A general workflow of environmental metabolomics analysis uses the following steps:

Box: Analytical tools for metabolomics analysis

The most frequently used analytical tools for measuring metabolites are mass spectrometry (MS) and nuclear magnetic resonance (NMR) spectroscopy. MS is an analytical tool that generates ions of molecules and then measures their mass-to-charge ratios. This information can be used to generate a "molecular fingerprint" for each molecule, and based on this fingerprint metabolites can be identified. Chromatography is typically used to separate the different metabolites of a mixture found in a sample before it enters the mass spectrometer.
Two main chromatography techniques are used in metabolomics: liquid chromatography and gas chromatography. Due to its high sensitivity, MS is able to measure a large number of different metabolites simultaneously. Moreover, when coupled with a separation method such as chromatography, MS can detect and identify thousands of metabolites.

Mass spectrometry is much more sensitive than NMR, and it can detect a large range of different types of metabolites with different physical-chemical properties. NMR is less sensitive and can therefore detect a lower number of metabolites (typically 50-200). The advantages of NMR are the minimal sample handling, the reproducibility of the measurements (due to high precision), and the relative ease of quantifying metabolite levels. In addition, NMR is a non-destructive technique, such that a sample can often be used for further analyses after the data have been acquired.

Metabolomics has been widely used in drug discovery and medical sciences. More recently, metabolomics has been incorporated into environmental studies, an emerging field of research called environmental metabolomics. Environmental metabolomics is used mainly in five application domains (Table 2). Arguably the most common application is studying the mechanism of toxicity/mode of action (MoA) of contaminants. However, many studies have identified select metabolites that show promise for use as biomarkers of exposure or effect. As a result of its strength in identifying response fingerprints, metabolomics is also finding use in the regulatory toxicology field, particularly for read-across studies. This application is particularly useful for rapidly screening contaminants for toxicity. Metabolomics can also be used in dose-response studies (benchmark dosing) to derive a point of departure (POD), which is especially interesting in regulatory chemical risk assessment. Currently, the field of systems toxicology is being explored by combining data from different omics fields (e.g. transcriptomics, proteomics, metabolomics) to improve our understanding of the relationship between the different omics, chemical exposure, and toxicity, and to better understand the mechanism of toxicity/MoA.

Table 2: Application areas of metabolomics in environmental toxicology.

Mechanism of toxicity/mode of action (MoA): using metabolomics to understand, at the molecular level, the pathways that are affected by exposure to environmental contaminants; discovery of the mode of action of chemicals. In an adverse outcome pathway (AOP), discovery metabolomics is used to identify the key events (KE) by linking chemical exposure at the molecular level to functional endpoints (e.g. reproduction, behaviour).

Biomarker discovery: identification of metabolites that can be used as convenient (i.e., easy and inexpensive to measure) indicators of exposure or effect.

Read-across: in regulatory toxicology, metabolomics is used in read-across studies to provide information on the similarity of the responses between chemicals. This approach is useful to identify the more environmentally toxic chemicals.

Point of departure: metabolomics can be used in dose-response studies (benchmark dosing) to derive a point of departure (POD). This is especially interesting in regulatory chemical risk assessment, although this application is not yet in use.

Systems toxicology: combination of different omics approaches (e.g. transcriptomics, proteomics, metabolomics) to improve our understanding of the relationship between the different omics and chemical exposure, and to better understand the mechanism of toxicity/MoA.

As an illustration of the mechanism of toxicity/mode of action application, Bundy et al. used NMR-based metabolomics to study earthworms (Lumbricus rubellus) exposed to various concentrations of copper in soil (0, 10, 40, 160, 480 mg copper/kg soil). They performed both transcriptomic and metabolomic studies. Both polar (sugars, amino acids, etc.) and apolar (lipids) metabolites were analysed, and fold changes relative to the control group were determined. Plotted as a "heatmap", the fold changes of lipid metabolites (e.g. fatty acids, triacylglycerol) as a function of copper concentration clearly showed that the highest dose group (480 mg/kg) had a lipid metabolite pattern very different from the other groups. The polar metabolite data were analysed using principal component analysis (PCA), a multivariate statistical tool that reduces the number of dimensions of the data. The PCA score plot revealed that the largest differences in metabolite profiles existed between the control and low dose (10 mg Cu/kg) groups, the 40 mg Cu/kg and 160 mg Cu/kg groups, and the highest dose (480 mg Cu/kg) group. These separations indicate that the metabolite patterns in these groups were different as a result of the different copper exposures. Some of the metabolites were upregulated and some were downregulated by the copper exposure. The metabolite data were also combined with gene expression data in a systems toxicology application. This combined analysis showed that the copper exposures led to disruption of energy metabolism, particularly with regard to effects on the mitochondria and oxidative phosphorylation. Bundy et al. associated this effect on energy metabolism with a reduced growth rate of the earthworms. This study effectively showed that metabolomics can be used to understand the metabolite pathways that are affected by copper exposure and are closely linked to phenotypic changes (i.e., reduced growth rate). The transcriptome data collected simultaneously were in good accordance with the metabolome patterns, supporting Bundy et al.'s hypothesis that simultaneous measurement of the transcriptome and metabolome can be used to validate the findings of both approaches, and in turn the value of "systems toxicology".

Several challenges currently exist in the field of metabolomics. From a biological perspective, metabolism is a dynamic process and therefore very time-sensitive. Taking samples at different time points during the development of an organism, or throughout a chemical exposure, can result in quite different metabolite patterns. Sample handling and storage can also be challenging, as some metabolites are very unstable during sample collection and treatment. From an analytical perspective, metabolites possess a wide range of physico-chemical properties and occur in highly varying concentrations, such that capturing the widest portion of the metabolome requires analysis with more than one analytical technique. However, the largest challenge is arguably the identification of the chemical structure of unknown metabolites.
Even with state-of-the-art analytical techniques, only a fraction of the unknown metabolites can be confidently identified.

Metabolomics is a relatively new field in toxicology, but it is rapidly increasing our understanding of the biochemical pathways affected by exposure to environmental contaminants, and in turn their mechanisms of action. Linking the molecular pathways changed by contaminant exposure to phenotypic changes in the organisms is an area of great interest. Continual advances in state-of-the-art analytical tools for metabolite detection and identification will continue this trend and expand the utility of environmental metabolomics for prioritizing contaminants. However, a number of challenges remain for the widespread use of metabolomics in regulatory toxicology. Fortunately, international interest in addressing these challenges has grown recently, and great strides are being made in a variety of applications.

Bundy, J.G., Sidhu, J.K., Rana, F., Spurgeon, D.J., Svendsen, C., Wren, J.F., Sturzenbaum, S.R., Morgan, A.J., Kille, P. 'Systems toxicology' approach identifies coordinated metabolic responses to copper in a terrestrial non-model invertebrate, the earthworm Lumbricus rubellus. BMC Biology 6, 1-21.
Bundy, J.G., Davey, M.P., Viant, M.R. Environmental metabolomics: a critical review and future perspectives. Metabolomics 5, 3-21.
Johnson, C.H., Ivanisevic, J., Siuzdak, G. Metabolomics: beyond biomarkers and towards mechanisms. Nature Reviews Molecular Cell Biology 17, 451-459.

What type of molecules are measured with metabolomics?
a. Proteins
b. Small molecules
c. Genes
d. Polymers

What are typical application areas of environmental metabolomics?
Give four main elements of the workflow of metabolomics, and describe two in more detail.

This page titled 4.3: Toxicity Testing is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.4: Increasing ecological realism in toxicity testing
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/04%3A_Toxicology/4.04%3A_Increasing_ecological_realism_in_toxicity_testing
Author: Michiel Kraak
Reviewer: Kees van Gestel
Learning objectives:
You should be able to
Keywords: single-species toxicity tests, mixture toxicity, multistress, chronic toxicity, multigeneration effects, ecological realism.

The vast majority of single-species toxicity tests reported in the literature concerns acute or short-term exposures to individual chemicals, in which mortality is often the only endpoint. This is in sharp contrast with the actual situation at contaminated sites, where organisms may be exposed to relatively low levels of mixtures of contaminants under suboptimal conditions for their entire life span. Hence there is an urgent need to increase ecological realism in single-species toxicity tests by addressing sublethal endpoints, mixture toxicity, multistress effects, chronic toxicity and multigeneration effects.

Mortality is a crude parameter representing the response of organisms to toxicant exposure. Since organisms are often exposed to relatively low levels of contaminants for their entire life span, the next step to increase ecological realism in single-species toxicity tests is to increase exposure time by performing chronic experiments (see section on Chronic toxicity). Moreover, in chronic toxicity tests a wide variety of sublethal endpoints can be assessed in addition to mortality, the most common ones being growth and reproduction (see section on Endpoints). Given the relatively short duration of the life cycle of many invertebrates and unicellular organisms like bacteria and algae, it would be relevant to prolong the exposure time even further, by exposing the test organisms for their entire life span, i.e. from the egg or juvenile phase until adulthood including their reproductive performance, or for several generations, assessing multigeneration effects (see section on Multigeneration effects).

In contaminated environments, organisms are generally exposed to a wide variety of toxicants under variable and sub-optimal conditions. To further gain ecological realism, mixture toxicity and multistress scenarios should thus be considered (see sections on Mixture toxicity and Multistress). The highest ecological relevance of laboratory toxicity tests may be achieved by addressing the above-mentioned issues all together in one type of experiment: chronic mixture toxicity tests assessing sublethal endpoints.
Yet, even nowadays such studies remain scarce.

Another way of increasing the ecological realism of toxicity testing is by moving towards multispecies test systems that allow for assessing the impacts of chemicals and other stressors on species interactions within communities (see chapter 5 on Population, community and ecosystem ecotoxicology).

Argue the need to increase ecological realism in single-species toxicity tests.
List the consecutive steps to increase ecological realism in single-species toxicity tests.

Authors: Michiel Kraak & Kees van Gestel
Reviewer: Thomas Backhaus
Learning objectives:
You should be able to
· explain the concepts involved in mixture toxicity testing, including Concentration Addition and Response Addition.
· design mixture toxicity experiments and understand how the toxicity of (equitoxic) toxicant mixtures is assessed.
· interpret the results of mixture toxicity experiments and understand the meaning of Concentration Addition and Response Addition, as well as antagonism and synergism as deviations from Concentration Addition and Response Addition.
Key words: Mixture toxicity, TU summation, equitoxicity, Concentration Addition, Response Addition, Independent Action

In contaminated environments, organisms are generally exposed to complex mixtures of toxicants. Hence, there is an urgent need for assessing their joint toxic effects. In theory, there are four classes of joint effects of compounds in a mixture, as proposed by Hewlett and Plackett:

Similar action, no interaction (additive): simple similar action / Concentration Addition
Similar action, interaction (non-additive): complex similar action
Dissimilar action, no interaction (additive): independent action / Response Addition
Dissimilar action, interaction (non-additive): dependent action

The most simple case concerns compounds that share the same mode of action and do not interact (simple similar action). This holds for compounds acting on the same biological pathway, affecting strictly the same molecular target. Hence, the only difference is the relative potency of the compounds. In this case Concentration Addition is taken as the starting point, following the Toxic Unit (TU) approach. This approach expresses the toxic potency of a chemical as TU, which is calculated for each compound in the mixture as:

\(TU = \frac{c}{ECx}\)

with c = the concentration of the compound in the mixture, and ECx = the concentration of the compound where the measured endpoint is affected by X% compared to the non-exposed control. Next, the toxic potency of the mixture is calculated as the sum of the TUs of the individual compounds:

\(TU_{mixture} = \sum_{i} TU_i\)

Imagine that the EC50 of compound A is 300 μg.L-1 and that of compound B 60 μg.L-1. In a mixture of A+B, 30 μg.L-1 A and 30 μg.L-1 B are added. These concentrations represent 30/300 = 0.1 TU of A and 30/60 = 0.5 TU of B. Hence, the mixture consists of 0.1 + 0.5 = 0.6 TU. Yet, the two compounds in this mixture are not represented at equal toxic strength, since this specific mixture is dominated by compound B. To compose mixtures in which the compounds are represented at equal toxic strength, the equitoxicity concept is applied:

1 equitoxic TU A+B = 0.5 TU A + 0.5 TU B
1 equitoxic TU A+B = 150 μg.L-1 A + 30 μg.L-1 B

As in traditional concentration-response relationships, survival or a sublethal endpoint is plotted against the mixture concentration, from which the EC50 value and the corresponding 95% confidence limits can be derived (see section on Concentration-response relationships).
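The Toxic Unit bookkeeping in the example above is easy to verify in a few lines; the following is a minimal sketch of the arithmetic only (concentrations in μg/L as in the text):

```python
# Toxic Unit (TU) arithmetic for the example mixture of compounds A and B.
ec50 = {"A": 300.0, "B": 60.0}   # EC50 values in ug/L, as in the text
conc = {"A": 30.0, "B": 30.0}    # concentrations present in the mixture

tu = {k: conc[k] / ec50[k] for k in ec50}   # TU = c / EC50
print(tu)                                   # {'A': 0.1, 'B': 0.5}
print(sum(tu.values()))                     # 0.6 TU in total

# Composing 1 equitoxic TU of A+B: each compound contributes 0.5 TU.
equitoxic = {k: 0.5 * ec50[k] for k in ec50}
print(equitoxic)                            # {'A': 150.0, 'B': 30.0} ug/L
```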
If the upper and lower 95% confidence limits of the EC50 value of the mixture include 1 TU, the EC50 of the mixture does not differ from 1 TU and the toxicity of the compounds in the mixture is indeed concentration additive.

An experiment appealing to the imagination was performed by Deneer et al., who tested a mixture of 50 narcotic compounds (see section on Toxicodynamics and Molecular interactions) and observed perfect concentration addition, even when the individual compounds were present at only 0.25% (0.0025 TU) of their EC50. This showed in particular that narcotic compounds present at concentrations well below their no-effect level still contribute to the joint toxicity of a mixture (Deneer et al., 1988). This was also shown for metals (Kraak et al., 1999). This is alarming, since even nowadays environmental legislation is still based on a compound-by-compound approach. The study by Deneer et al. also clearly demonstrated the logistical challenges of mixture toxicity testing. Since for composing equitoxic mixtures the EC50 values of the individual compounds need to be known, testing an equitoxic mixture of 50 compounds requires 51 toxicity tests: 50 individual compounds and 1 mixture.

When chemicals have a different mode of action and act on different targets, but still contribute to the same biological endpoint, the mixture is expected to behave according to Response Addition (also termed Independent Action). Such a situation would occur, for example, if one compound inhibits photosynthesis and a second one inhibits DNA replication, but both inhibit the growth of an exposed algal population. To calculate the effect of a mixture of compounds with different modes of action, Response Addition is applied as follows. The probability that a compound, at the concentration at which it is present in the mixture, exerts a toxic effect (scaled from 0 to 1) differs per compound, and the cumulative effect of the mixture is the result of combining these probabilities, according to:

\(E_{mix} = E_A + E_B - E_A E_B\)

where \(E_{mix}\) is the fraction affected by the mixture, and \(E_A\) and \(E_B\) are the fractions affected by the individual compounds A and B at the concentrations at which they occur in the mixture. In fact, this equation sums the fraction affected by compound A and the fraction affected by compound B at the concentrations at which they are present in the mixture, and then corrects for the fact that the fraction already affected by chemical A cannot be affected again by chemical B (or vice versa). The latter part of the equation is needed to account for the fact that the chemicals act independently of each other. The equation can be rewritten as:

\(1 - E_{mix} = (1 - E_A)(1 - E_B)\)

This means that the probability of not being affected by the mixture, \(1 - E_{mix}\), is the product of the probabilities of not being affected by (the specific concentrations of) compound A and compound B. At the EC50, both the affected and the unaffected fraction are 50%, hence \((1 - E_A)(1 - E_B) = 0.5\). If both compounds contribute equally to the effect of the mixture, \(1 - E_A = 1 - E_B\) and thus \((1 - E_A)^2 = 0.5\), so both \(1 - E_A\) and \(1 - E_B\) equal 0.71. Since the probability of not being affected is 0.71 for compound A and compound B, the probability of being affected is 0.29.
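The Response Addition rule and the EC29 result derived above can be checked numerically; a minimal sketch of the formula only:

```python
# Response Addition (Independent Action) for two compounds: the unaffected
# fractions multiply, i.e. E_mix = 1 - (1 - E_A)(1 - E_B).
def response_addition(e_a, e_b):
    """Fraction affected by the mixture, given the fractions affected by A and B."""
    return e_a + e_b - e_a * e_b

# If both compounds contribute equally to a 50% mixture effect:
# (1 - e)^2 = 0.5  =>  e = 1 - sqrt(0.5), roughly 0.29 (the EC29 of each compound)
e = 1 - 0.5 ** 0.5
print(round(e, 2))                        # 0.29
print(round(response_addition(e, e), 2))  # 0.5, i.e. the mixture EC50
```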
Thus, at the EC50 of a mixture of two compounds acting according to Independent Action, both compounds should be present at a concentration equalling their EC29.

Concentration Addition and Response Addition both assume that the compounds in a mixture do not interact. However, in reality, such interactions can occur in all four steps of the toxic action of a mixture. The first step concerns chemical and physicochemical interactions: compounds in the environment may interact, affecting each other's bioavailability. For instance, an excess of Zn causes Cd to be more available in the soil solution as a result of competition for the same binding sites. The second step involves physiological interactions during uptake by an organism, influencing the toxicokinetics of the compounds, for example by competition for uptake sites at the cell membrane. The third step refers to the internal processing of the compounds, e.g. involving effects on each other's biotransformation or detoxification (toxicokinetics). The fourth step concerns interactions at the target site(s), i.e. the toxicodynamics during the actual intoxication process. The typical whole-organism responses that are recorded in many ecotoxicity tests integrate the last three types of interactions, resulting in deviations from the toxicity predictions of Concentration Addition and Response Addition.

If the EC50 of the mixture is higher than 1 TU and the lower 95% confidence limit is also above 1 TU, the toxicity of the compounds in the mixture is less than concentration additive, as more of the mixture is needed than anticipated to cause 50% effect. Correspondingly, if the EC50 of the mixture is lower than 1 TU and the upper 95% confidence limit is also below 1 TU, the toxicity of the compounds in the mixture is more than concentration additive.

When the toxicity of a mixture is more than concentration additive, the compounds enhance each other's toxicity. When the toxicity of a mixture is less than concentration additive, the compounds reduce each other's toxicity. Both types of deviation from additivity can have two different reasons: 1. the compounds have the same mode of action, but do interact; 2. the compounds have different modes of action (Independent Action/Response Addition).

The use of the terms synergism and antagonism may be problematic, because antagonism in relation to Concentration Addition (less than concentration additive) can simply be caused by the compounds behaving according to Response Addition, and not behaving antagonistically. Similarly, deviations from Response Addition could also mean that the chemicals in the mixture do have the same mode of action, so act additively according to Concentration Addition. One can therefore only conclude on synergism/antagonism if the experimental observations are higher/lower than the predictions of both concepts.

Rider, C.V., Simmons, J.E. Chemical Mixtures and Combined Chemical and Nonchemical Stressors: Exposure, Toxicity, Analysis, and Risk. Springer International Publishing AG. ISBN-13: 978-3319562322.
Bopp, S.K., Kienzler, A., Van der Linden, S., Lamon, L., Paini, A., Parissis, N., Richarz, A.N., Triebe, J., Worth, A. Review of case studies on the human and environmental risk assessment of chemical mixtures. JRC Technical Reports EUR 27968 EN, European Union, doi:10.2788/272583.
Berenbaum, M.C.
Criteria for analysing interactions between biologically active agents. Advances in Cancer Research 35, 269-335.
Deneer, J.W., Sinnige, T.L., Seinen, W., Hermens, J.L.M. The joint acute toxicity to Daphnia magna of industrial organic chemicals at low concentrations. Aquatic Toxicology 12, 33-38.
Hewlett, P.S., Plackett, R.L. A unified theory for quantal responses to mixtures of drugs: non-interactive action. Biometrics 15, 591-610.
Kraak, M.H.S., Stuijfzand, S.C., Admiraal, W. Short-term ecotoxicity of a mixture of five metals to the zebra mussel Dreissena polymorpha. Bulletin of Environmental Contamination and Toxicology 63, 805-812.
Van Gestel, C.A.M., Jonker, M.J., Kammenga, J.E., Laskowski, R., Svendsen, C. (Eds.). Mixture toxicity. Linking approaches from ecological and human toxicology. SETAC Press, Society of Environmental Toxicology and Chemistry, Pensacola.

What is the motivation to perform mixture toxicity experiments?
When do you expect concentration addition and when not?
What are the three possible outcomes of a mixture toxicity experiment applying concentration addition?
Calculate the effect concentration at which two compounds with different modes of action showing no interaction equally contribute to a mixture causing 60% effect.
One can only conclude on synergism/antagonism if the experimental observations are higher/lower than the predictions of both concepts (Concentration Addition and Response Addition). Why?

Author: Michiel Kraak
Reviewer: Kees van Gestel
Learning objectives:
You should be able to
· define stress and multistress.
· explain the ecological relevance of multistress scenarios.
Keywords: Stress, multistress, chemical-abiotic interactions, chemical-biotic interactions

In contaminated environments, organisms are generally exposed to a wide variety of toxicants under variable and sub-optimal conditions. To gain ecological realism, multistress scenarios should thus be considered; these are, however, understudied.

Stress is defined as an environmental change that affects the fitness and ecological functioning of species (i.e., growth, reproduction, behaviour), ultimately leading to changes in community structure and ecosystem functioning. Multistress is subsequently defined as a situation in which an organism is exposed both to a toxicant and to stressful environmental conditions. This includes chemical-abiotic interactions, chemical-biotic interactions, as well as combinations of these. Common abiotic stressors are, for instance, pH, drought, salinity and above all temperature, while common biotic stressors include predation, competition, population density and food shortage. Experiments on such stressors typically study, for instance, the effect of increasing temperature or the influence of food availability on the toxicity of compounds.

The present definition of multistress thus excludes mixture toxicity (see section on Mixture toxicity) as well as situations in which organisms are confronted with several suboptimal (a)biotic environmental variables jointly without being exposed to toxicants.
The next chapters deal with chemical-abiotic and chemical-biotic interactions, and with practical issues related to the performance of multistress experiments, respectively.

Give the definitions of stress and multistress.
What is the ecological relevance of testing multistress scenarios?

Authors: Marjolein Van Ginneken and Lieven Bervoets
Reviewers: Michiel Kraak and Martin Holmstrup
Learning objectives:
You should be able to
Keywords: Multistress, chemical-biotic interactions, stressor interactions, bioavailability, behavior, energy trade-off

Generally, organisms have to cope with the joint presence of chemical and natural stressors. Both biotic and abiotic stressors can affect a chemical's bioavailability and toxicokinetics. Additionally, they can influence the behavior and physiology of organisms, which could result in higher or lower toxic effects. Vice versa, chemicals can alter the way organisms react to natural stressors.

By studying the effects of multiple stressors, we can identify potential synergistic, additive or antagonistic interactions, which are essential to adequately assess the risk of chemicals in nature. Relyea, for instance, found that apparently safe concentrations of carbaryl can become deadly to some amphibian species when combined with predator cues. This section focuses on biotic stress, which can be defined as stress caused by living organisms, and includes predation, competition, population density, food availability, pathogens and parasitism. It will describe how biotic stressors and chemicals act and interact.

Biotic stressors can have direct and indirect effects on organisms. For example, predators can change food web structures by consuming their prey and thus altering prey abundance, and can indirectly affect prey growth and development as well, by inducing energetically costly defense mechanisms. Also, behaviors like (foraging) activity can be decreased, and even morphological changes can be induced. For example, Daphnia pulex can develop neck spines when subject to predation. Similarly, parasites can alter host behavior or induce morphological changes, e.g. in coloration, but they usually do not kill their host. Yet, parasitism can compromise the immune system and alter the energy budget of the host.

High population density is a stressor that can affect energy budgets and intraspecific and interspecific competition for space, status or resources. By altering resource availability, changes in growth and size at maturity can result. Additionally, these competition-related stressors can affect behavior, for example by limiting the number of suitable mating partners. Also pathogens (e.g., viruses, bacteria and fungi) can lower fitness and fecundity.

It should be realized that the effects of different biotic stressors cannot be strictly separated from each other. For example, pathogens can spread more rapidly when population densities are high, while predation, on the other hand, can limit competition.

Biotic stressors can alter the bioavailability of chemicals. For example, in the aquatic environment, food level may determine the availability of chemicals to filter feeders, as chemicals may adsorb to particulate organic matter, such as algae. As the exposure route (waterborne or via food) can influence the subsequent toxicokinetic processes, this may also change the chemicals' toxic effects.

Biotic stressors have been reported to cause behavioral effects in organisms that could change the toxic effects of chemicals.
These effects include altered feeding rates and reduced activities. The presence of a predator, for example, reduces prey (foraging) activity to avoid detection by the perceived predator, and so decreases chemical uptake via food. On the other hand, the condition of the prey organisms will decrease due to the lower food consumption, which means less energy is available for other physiological processes (see below).

In addition to biotic stressors, chemicals too can disrupt essential behaviors, by reduction of olfactory receptor sensitivity, cholinesterase inhibition, alterations in brain neurotransmitter levels, and impaired gonadal or thyroid hormone levels. This could lead to disruptive effects on communication, feeding rates and reproduction. An inability to find mating partners, for example, could then be worsened by a low population density. Furthermore, chemicals can alter predator-prey relationships, which might result in trophic cascades. Strong top-down effects will be observed when a predator or grazer is more sensitive to the contaminant than its prey. Alternatively, bottom-up effects are observed when the susceptibility of a prey species to predation is increased. For example, Cu exposure of fish and crustaceans can decrease their response to olfactory cues, making them unresponsive to predator stress and increasing the risk of being detected and consumed (Van Ginneken et al., 2018). Effects on the competition between species may also occur when one species is more sensitive than the other. Thus, both chemical and biotic stressors can alter behavior and result in interactive effects that could change the entire ecosystem structure and function (Fleeger et al., 2003).

Biotic stressors can cause elevated respiration rates, in aquatic organisms leading to a higher toxicant uptake through diffusion. On the other hand, they can also decrease respiration. For example, low food levels decrease metabolic activity and thus respiration. Additionally, a reduced metabolic rate could decrease the toxicity of chemicals which are metabolically activated. Also, certain chemicals, such as metals, can cause a higher or lower oxygen consumption, which might counteract or reinforce the effects of biotic stressors.

Besides affecting respiration, both biotic and chemical stressors can induce physiological damage in organisms. For instance, predator stress and pesticides both cause oxidative stress, leading to synergistic effects on the induction of antioxidant enzymes such as catalase and superoxide dismutase (Janssens and Stoks, 2013). Furthermore, the organism can eliminate or detoxify internal toxicant concentrations, e.g. by transformation via Mixed Function Oxidation enzymes (MFO) or by sequestration, i.e. binding to metallothioneins or storage in inert tissues such as granules. These defensive mechanisms for detoxification and damage control are energetically costly, leading to energy trade-offs. This means less energy can be used for other processes such as growth, locomotion or reproduction. Food availability and lipid reserves can then play an important role, as well-fed organisms that are exposed to toxicants can more easily pay the energy costs than food-deprived organisms.

The possible interactions, i.e. antagonism, synergism or additivity, between effects of stressors are difficult to predict and can differ depending on the stressor combination, the chemical concentration, the endpoint and the species. For Ceriodaphnia dubia, Qin et al.
demonstrated that predator stress influenced the toxic effects of several pesticides differently. While predator cues interacted antagonistically with bifenthrin and thiacloprid, they acted synergistically with fipronil.

It should also be noted that interactive effects in nature might be weaker than those observed in the laboratory, as stress levels fluctuate more rapidly or animals can move away from areas with high predation risk or chemical exposure levels. On the other hand, because generally more than two stressors are present in ecosystems, which could interact in an additive or synergistic way as well, they might be even more important in nature. Understanding interactions among multiple stressors is thus essential to estimate the actual impact of chemicals in nature.

Fleeger, J.W., Carman, K.R., Nisbet, R.M. Indirect effects of contaminants in aquatic ecosystems. Science of the Total Environment 317, 207-233.
Janssens, L., Stoks, R. Synergistic effects between pesticide stress and predator cues: conflicting results from life history and physiology in the damselfly Enallagma cyathigerum. Aquatic Toxicology 132, 92-99.
Qin, G., Presley, S.M., Anderson, T.A., Gao, W., Maul, J.D. Effects of predator cues on pesticide toxicity: toward an understanding of the mechanism of the interaction. Environmental Toxicology and Chemistry 30, 1926-1934.
Relyea, R.A. Predator cues and pesticides: a double dose of danger for amphibians. Ecological Applications 13, 1515-1521.
Van Ginneken, M., Blust, R., Bervoets, L. Combined effects of metal mixtures and predator stress on the freshwater isopod Asellus aquaticus. Aquatic Toxicology 200, 148-157.

Give the definition of biotic stress and give 3 examples.
How can biotic stressors change the toxic effects of chemicals?
Give an example of how chemicals can change the way organisms react to biotic stress.

Author: Martina Vijver
Reviewers: Kees van Gestel, Michiel Kraak, Martin Holmstrup
Learning objectives:
You should be able to
Key words: Stress, ecological niche concept, multistress, chemical-abiotic interactions

The concept of stress can be defined at various levels of biological organization, from biochemistry to species fitness, ultimately leading to changes in community structure and ecosystem functioning. Yet, stress is most often studied in the context of individual organisms. The concept of stress is not absolute and can only be defined with reference to the normal range of ecological functioning. This is the case when organisms are within their range of tolerance (the so-called ecological amplitude) or within their ecological niche, which describes the match of a species to specific environmental conditions. Applying this concept to stress allows it to be defined as a condition evoked in an organism by one or more environmental factors that bring the organism near or over the edges of its ecological niche (Van Straalen, 2003). This includes chemical-abiotic interactions, chemical-biotic interactions (see section Multistress - chemical-biotic interactions) as well as combinations of these. In general, organisms living under conditions close to their environmental tolerance limits appear to be more vulnerable to additional chemical stress. The opposite also holds: if organisms are stressed by exposure to elevated levels of contaminants, their ability to cope with sub-optimal environmental conditions is reduced.

Temperature. One of the predominant environmental factors altering toxic effects obviously is temperature.
For poikilothermic (cold-blooded) organisms, increases in temperature lead to an increase in activity, which may affect both the uptake and the effects of chemicals. In a review by Heugens et al. (2001), studies reporting the effects of chemicals on aquatic organisms in combination with abiotic factors like temperature, nutritional state and salinity were discussed. Generally, toxic effects increased with increasing temperature. Depending on the effect parameter studied, the differences in toxic effects between laboratory and relevant field temperatures ranged from a factor of 2 to 130.

Freezing temperatures may also interfere with chemical effects, as was shown in another influential review, by Holmstrup et al. (2010). Membrane damage is mentioned as an explanation for the synergistic interaction between combinations of metals and temperatures below zero.

Food. Food availability may have a strong effect on the sensitivity of organisms to chemicals (see section on Multistress - chemical-biotic interactions). In general, decreasing food or nutrient levels increased toxicity, resulting in differences in toxicity between laboratory and relevant field situations ranging from a factor of 1.2 to 10 (Heugens et al., 2001). Yet, much larger differences in toxic effects related to food levels have been reported as well. Experiments performed with daphnids in cages placed in outdoor mesocosm ditches (see sections on Cosm studies and In situ bioassays) showed striking differences in sensitivity to the insecticide thiacloprid. Under conditions of low to ambient nutrient concentrations, the observed toxicity, expressed as the lowest observed effect concentration (LOEC) for growth and reproduction, occurred at thiacloprid concentrations that were 2500-fold lower than laboratory-derived LOEC values. Contrary to the low-nutrient treatment, such altered toxicity was often not observed under nutrient-enriched conditions (Barmentlo et al., submitted). The difference was likely attributable to the increased primary production, which allowed for compensatory feeding and perhaps also reduced the bioavailability of the insecticide. Similar results were observed for sub-lethal endpoints measured on the damselfly species Ischnura elegans, for which the response to thiacloprid exposure strongly depended on food availability and quality. Damselflies feeding on natural resources were significantly more affected than those offered high-quality artificial food (Barmentlo et al., submitted).

Salinity. The influence of salinity on toxicity is less clear (Heugens et al., 2001). If salinity pushes the organism towards its niche boundaries, it will worsen the toxic effects the organism is experiencing. If a specific salinity fits within the ecological niche of the organism, processes affecting exposure will predominantly determine the stress it will experience. This for instance means that metal toxicity decreases with increasing salinity, as it is strongly affected by the competition of ions (see section on Metal speciation). The toxic effect induced by organophosphate insecticides, however, increases with increasing salinity. For other chemicals, no clear relationship between toxicity and salinity was observed. A salinity increase from freshwater to marine water decreased toxicity by a factor of 2.1 (Heugens et al., 2001). However, as less extreme salinity changes are more relevant under field conditions, the change in toxicity is probably much smaller.
pH. Many organisms have a species-specific range of pH levels at which they function optimally. At pH values outside the optimal range, organisms may show reduced reproduction and growth, and in extreme cases even reduced survival. In some cases, the effects of pH may be indirect, as pH may also have an important impact on the exposure of organisms to toxicants. This is especially the case for metals and ionizable chemicals: metal speciation, but also the form in which ionizable chemicals occur in the environment and therefore their bioavailability, is highly dependent on pH (see sections on Metal speciation and Ionogenic organic chemicals). An example of the interaction between pH and metal effects was shown by Crommentuijn et al. (1997), who observed a reduced control reproduction of the springtail Folsomia candida, but also the lowest cadmium toxicity, at a soil pH(KCl) of 7.0 compared to pH(KCl) 3.1-5.7.

Drought. In soil, the moisture content (see section on Soil) is an important factor, since drought often limits the suitability of the soil as a habitat for organisms. Holmstrup et al. (2010), reviewing the literature, concluded that chemicals interfering with the drought tolerance of soil organisms, e.g. by affecting the functioning of membranes or the accumulation of sugars, may exacerbate the effects of drought. Earthworms breathe through their skin and can only survive in moist soils, and the eggs of springtails can only survive at a relative air humidity close to 100%. This makes these organisms especially sensitive to drought, which may be enhanced by exposure to chemicals like metals, polycyclic aromatic hydrocarbons or surfactants (Holmstrup et al., 2010).

Many other abiotic conditions, such as oxygen levels, light, turbidity, and organic matter content, can push organisms towards the boundaries of their niche, but not all of these stressors are discussed in this book.

In environmental risk assessment, differences between stress-induced effects as determined in the laboratory under standardized optimal conditions with a single toxicant and the effects induced by multiple stressors are taken into account by applying an uncertainty factor. Yet, the choice of uncertainty factors is based on little ecological evidence. In 2001, Heugens et al. already argued for uncertainty factors that sufficiently protect natural systems without being overprotective. Van Straalen (2003) echoed this, and in current research the question is still raised whether enough understanding has been gained to make accurate laboratory-to-field extrapolations. It remains a challenge to predict toxicant-induced effects on species and even on communities while accounting for variable and suboptimal environmental conditions, even though such conditions are common aspects of natural ecosystems (see for instance the section on Eco-epidemiology).

Barmentlo, S.H., Vriend, L.M., van Grunsven, R.H.A., Vijver, M.G. (submitted). Evidence that neonicotinoids contribute to damselfly decline.
Crommentuijn, T., Doornekamp, A., Van Gestel, C.A.M. (1997). Bioavailability and ecological effects of cadmium on Folsomia candida (Willem) in an artificial soil substrate as influenced by pH and organic matter. Applied Soil Ecology 5, 261-271.
Heugens, E.H., Hendriks, A.J., Dekker, T., Van Straalen, N.M., Admiraal, W. (2001). A review of the effects of multiple stressors on aquatic organisms and analysis of uncertainty factors for use in risk assessment. Critical Reviews in Toxicology 31, 247-284.
Holmstrup, M., Bindesbøl, A.M., Oostingh, G.J., Duschl, A., Scheil, V., Köhler, H.R., Loureiro, S., Soares, A.M.V.M., Ferreira, A.L.G., Kienle, C., Gerhardt, A., Laskowski, R., Kramarz, P.E., Bayley, M., Svendsen, C., Spurgeon, D.J. (2010). Interactions between effects of environmental chemicals and natural stressors: a review. Science of the Total Environment 408, 3746-3762.
Van Straalen, N.M. (2003). Ecotoxicology becomes stress ecology. Environmental Science and Technology 37, 324A-330A.

1. Describe the niche-based definition of stress.

- 85% of the adult chironomid midges from the control treatment should emerge between 12 and 23 days after the start of the experiment (OECD, 2010).
- the mean number of juveniles produced by 10 control collembolans should be at least 100 (OECD, 2016a).

Generally, toxicity increases with increasing exposure time. This is often expressed as the acute-to-chronic ratio (ACR), defined as the LC50 from an acute test divided by the NOEC or EC10 from the chronic test. For some compounds, however, the ACR can be very high. Hence, there is a relationship between the mode of action of a compound and the ACR. Yet, if chronic toxicity has to be extrapolated from acute toxicity data and the mode of action of the compound is unknown, an ACR of 10 is generally applied. It should be realized, though, that this number is chosen quite arbitrarily, potentially leading to underestimation as well as overestimation of the actual ACR.

Since reproductive events and the completion of life cycles are involved, chronic toxicity tests allow an array of sublethal endpoints to be assessed, including growth and reproduction, as well as species-specific endpoints like the emergence (time) of chironomids. Consequently, compounds with different modes of action may cause very diverse sublethal effects on the test organisms during chronic exposure. The polycyclic aromatic compound (PAC) phenanthrene did not affect the completion of the life cycle of the midges, but above a certain exposure concentration the larvae died and no emergence was observed at all, suggesting a non-specific mode of action (narcosis). In contrast, the PAC acridone caused no mortality but delayed adult emergence significantly over a wide range of test concentrations, suggesting a specific mode of action affecting life cycle parameters of the midges (Leon Paumen et al., 2008). This clearly demonstrates that specific effects on life cycle parameters of compounds with different modes of action need time to become expressed.

Chronic toxicity tests are single-species tests, but if the effects of toxicants are assessed on all relevant life-cycle parameters, these can be integrated into effects on the population growth rate (r). For the 21-day daphnid test this is achieved by integrating age-specific data on the probability of survival and on fecundity. The population growth rates calculated from chronic toxicity data are obviously not related to natural population growth rates in the field, but they do allow the construction of dose-response relationships for the effects of toxicants on r, the ultimate endpoint in chronic toxicity testing.
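This integration is commonly done by numerically solving the Euler-Lotka equation, \(\sum_x e^{-rx} l_x m_x = 1\), where \(l_x\) is the probability of surviving to age x and \(m_x\) the fecundity at age x. The sketch below solves this equation in Python for a hypothetical 21-day daphnid data set; the method is standard demography, but all numbers are made up for illustration:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical age-specific data from a 21-day daphnid test (illustrative only):
# lx = probability of surviving to day x, mx = mean offspring per female on day x
days = np.arange(22)
lx = np.ones(22)
lx[10:] = 0.9                                   # 10% mortality after day 10 (hypothetical)
mx = np.zeros(22)
mx[[9, 12, 15, 18, 21]] = [4, 8, 10, 10, 12]    # brood sizes (hypothetical)

def euler_lotka(r):
    # Euler-Lotka: sum over age x of exp(-r*x) * lx * mx equals 1 at the true r
    return np.sum(np.exp(-r * days) * lx * mx) - 1.0

r = brentq(euler_lotka, -1.0, 2.0)  # solve numerically for r (per day)
print(f"Population growth rate r = {r:.3f} per day")
```

A toxicant that reduces survival or delays reproduction lowers the solution for r, which is how a dose-response relationship for r is obtained.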
Several protocols for standardized chronic toxicity tests are available, although less numerous than for acute toxicity testing. For water, the most common test is the 21-day Daphnia reproduction test (OECD, 2012); for sediment, 28-day test guidelines are available for the midge Chironomus riparius (OECD, 2010) and for the worm Lumbriculus variegatus (OECD, 2007). For terrestrial soil, the springtail Folsomia candida (OECD, 2016a) and the earthworm Eisenia fetida (OECD, 2016b) are the most common test species, but a reproduction toxicity test guideline is also available for enchytraeids (OECD, 2016c).

Leon Paumen, M., Borgman, E., Kraak, M.H.S., Van Gestel, C.A.M., Admiraal, W. (2008). Life cycle responses of the midge Chironomus riparius to polycyclic aromatic compound exposure. Environmental Pollution 152, 225-232.
OECD (2007). OECD Guideline for Testing of Chemicals. Test No. 225: Sediment-Water Lumbriculus Toxicity Test Using Spiked Sediment. Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
OECD (2010). OECD Guideline for Testing of Chemicals. Test No. 233: Sediment-Water Chironomid Life-Cycle Toxicity Test Using Spiked Water or Spiked Sediment. Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
OECD (2012). OECD Guideline for Testing of Chemicals. Test No. 211: Daphnia magna Reproduction Test. Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
OECD (2016a). OECD Guideline for Testing of Chemicals. Test No. 232: Collembolan Reproduction Test in Soil. Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
OECD (2016b). OECD Guideline for Testing of Chemicals. Test No. 222: Earthworm Reproduction Test (Eisenia fetida/Eisenia andrei). Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
OECD (2016c). OECD Guideline for Testing of Chemicals. Test No. 220: Enchytraeid Reproduction Test. Section 2: Effects on Biotic Systems. Organization for Economic Co-operation and Development, Paris.
Waaijers, S.L., Bleyenberg, T.E., Dits, A., Schoorl, M., Schütt, J., Kools, S.A.E., De Voogt, P., Admiraal, W., Parsons, J.R., Kraak, M.H.S. (2013). Daphnid life cycle responses to new generation flame retardants. Environmental Science and Technology 47, 13798-13803.

In acute toxicity tests the LC50 is derived after a short exposure time. Mention two outcomes of chronic toxicity tests that cannot be determined in acute toxicity tests.
For which mode of toxic action of compounds do you expect low acute-to-chronic ratios (ACR), and for which do you expect the ACR to be high?

Author: Michiel Kraak
Reviewers: Kees van Gestel, Miriam Leon Paumen
Learning objectives:
You should be able to
· explain how effects of toxicants may propagate during multigeneration exposure.
· describe the experimental challenges and limitations of multigeneration toxicity testing and design multigeneration tests.
· explain the implications of multigeneration testing for ecological risk assessment.
Key words: Multigeneration exposure, extinction, adaptation, test design

It is generally assumed that chronic life cycle toxicity tests are indicative of the actual risk that populations suffer from long-term exposure. Yet, at contaminated sites organisms may be exposed during multiple generations, and the shorter the life cycle of the organism, the more realistic this scenario becomes. Only a few multigeneration studies have been performed, however, due to the obvious time and cost constraints. Since both aquatic and terrestrial life cycle toxicity tests generally last for 28 days (see section on Chronic toxicity), multigeneration testing will take approximately one month per generation.
Moreover, the test compound often affects the life cycle of the test species in a dose-dependent manner. Consequently, the control population could, for example, already be in the 9th generation while an exposed population is still in the 8th generation, due to exposure-related delays in growth and/or development. On top of these experimental challenges, multigeneration experiments are extremely error-prone, simply because the chance that an experiment fails increases with increasing exposure time.

Designing a multigeneration toxicity experiment is challenging. First of all, there is the choice of how many generations the experiment should last, which is most frequently, but completely arbitrarily, set at approximately 10. Test concentrations have to be chosen as well, mostly based on chronic life cycle EC50 and EC10 values (Leon Paumen et al., 2008). Yet, it cannot be anticipated if, and to what extent, toxicity increases (or decreases) during multigeneration exposure. Hence, testing only one or two exposure concentrations increases the risk that the observed effects are not dose-related, but are simply due to stochasticity. If the test concentrations chosen are too high, many treatments may go extinct after a few generations. In contrast, too low test concentrations may show no effect at all. The latter was observed by Marinkovic et al. (2012), who had to increase the exposure concentrations during the experiment. Finally, since a single experimental treatment often consists of an entire population, treatment replication is also challenging.

Once the experiment is running, choices have to be made on the transition from generation to generation. If a replicate is maintained in a single jar, vessel or aquarium, generations may overlap and exposure concentrations may decrease with time. Therefore, most often a new generation is started by exposing offspring from the previously exposed parental generation in a freshly spiked experimental unit. If the aim is to determine how a population recovers when the concentration of the toxicant decreases with time, exposure to a single spiked medium is also an option, which seems most applicable to soils (Ernst et al., 2016; Van Gestel et al., 2017). To assess recovery after several generations of (continuous) exposure to contaminated media, offspring from previously exposed generations may be maintained under control conditions.

A wide variety of endpoints can be selected in multigeneration experiments. In the case of aquatic insects like the non-biting midge Chironomus riparius, these include survival, larval development time, emergence, emergence time, adult life span and reproduction. For terrestrial invertebrates, survival, growth and reproduction can be selected. Only a very limited number of studies evaluated actual population endpoints like population growth rate (Postma and Davids, 1995).

If organisms are exposed for multiple generations, the effects tend to worsen, ultimately leading to extinction, first of the population exposed to the highest concentration, followed by populations exposed to lower concentrations in later generations (Leon Paumen et al., 2008). Yet, it cannot be excluded that extinction occurs due to the relatively small population sizes in multigeneration experiments, while larger populations may pass a bottleneck and recover during later generations. Thresholds have also been reported (Leon Paumen et al., 2008).
Below certain exposure concentrations, the exposed populations perform as well as the controls, generation after generation. Hence, these concentrations may be considered the 'infinite no effect concentration'. A mechanistic explanation may be that the metabolic machinery of the organism is capable of detoxifying or excreting the toxicants, and that this takes so little energy that there is no trade-off regarding growth and reproduction.

It is concluded that the frequently reported worsening of effects during multigeneration toxicant exposure raises concerns about the use of single-generation studies in risk assessment to address long-term population effects of environmental toxicants.

If populations exposed for multiple generations do not go extinct but persist, they may have developed resistance or adaptation. Regular sensitivity testing can therefore be included in multigeneration experiments. Yet, it is still under debate whether such a lower sensitivity is due to genetic adaptation, epigenetics or phenotypic plasticity (Marinkovic et al., 2012).

Ernst, G., Kabouw, P., Barth, M., Marx, M.T., Frommholz, U., Royer, S., Friedrich, S. (2016). Assessing the potential for intrinsic recovery in a Collembola two-generation study: possible implementation in a tiered soil risk assessment approach for plant protection products. Ecotoxicology 25, 1-14.
Leon Paumen, M., Steenbergen, E., Kraak, M.H.S., Van Straalen, N.M., Van Gestel, C.A.M. (2008). Multigeneration exposure of the springtail Folsomia candida to phenanthrene: from dose-response relationships to threshold concentrations. Environmental Science and Technology 42, 6985-6990.
Marinkovic, M., De Bruijn, K., Asselman, M., Bogaert, M., Jonker, M.J., Kraak, M.H.S., Admiraal, W. (2012). Response of the nonbiting midge Chironomus riparius to multigeneration toxicant exposure. Environmental Science and Technology 46, 12105-12111.
Postma, J.F., Davids, C. (1995). Tolerance induction and life-cycle changes in cadmium-exposed Chironomus riparius (Diptera) during consecutive generations. Ecotoxicology and Environmental Safety 30, 195-202.
Van Gestel, C.A.M., De Lima e Silva, C., Lam, T., Koekkoek, J.C., Lamoree, M.H., Verwei, R.A. (2017). Multigeneration toxicity of imidacloprid and thiacloprid to Folsomia candida. Ecotoxicology 26, 320-328.

What is the motivation to perform multigeneration experiments?
What are the two alternative outcomes of multigeneration toxicity experiments?
What are the implications of multigeneration testing for ecological risk assessment?

Authors: Michiel Daam, Jörg Römbke
Reviewers: Kees van Gestel, Michiel Kraak
Learning objectives:
You should be able
· to name the distinctive features of tropical and temperate ecosystems
· to explain their implications for environmental risk assessment in these regions
· to mention some of the main research needs in tropical ecotoxicology
Key words: Environmental risk assessment; pesticides; temperature; contaminant fate; test methods

Introduction
The tropics cover the area of the world (approx. 40%) that lies between the Tropic of Cancer, 23½° north of the equator, and the Tropic of Capricorn, 23½° south of the equator. It is characterized by, on average, higher temperatures and sunlight levels than temperate regions. Based on precipitation patterns, three main tropical climates may be distinguished: tropical rainforest, monsoon and savanna climates. Due to the intrinsic differences between tropical and temperate regions, differences in the risks of chemicals are also likely to occur.
These differences are briefly illustrated below using pesticides as an example, addressing the following subjects: 1) climate-related factors; 2) species sensitivities; 3) testing methods; 4) agricultural practices and legislation.

1. Climate-related factors
Three basic climate factors are essential for pesticide risks when comparing temperate and tropical aquatic agroecosystems: rainfall, temperature and sunlight. For example, high tropical temperatures have been associated with higher microbial activities and hence enhanced microbial pesticide degradation, resulting in lower exposure levels. On the other hand, the toxicity of pesticides to aquatic biota may be higher at increased temperatures. Regarding terrestrial ecosystems, other important abiotic factors to be considered are soil humidity, pH, clay and organic carbon content, and ion exchange capacity (i.e. the capacity of a soil to adsorb certain compounds) (Daam et al., 2019). Although several differences in climatic factors may be distinguished between tropical and temperate areas, these do not lead to a consistently greater or lesser pesticide risk.

2. Species sensitivities
Tropical areas harbour the highest biodiversity in the world and generate nearly 60% of the primary production. This higher species richness, as compared to temperate counterparts, dictates that the possible occurrence of more sensitive species cannot be ignored. However, studies comparing the sensitivity of species from the same taxonomic group did not demonstrate a consistently higher or lower sensitivity of tropical organisms compared to temperate organisms.

3. Testing methods
Given the vast differences in environmental conditions between tropical and temperate regions, the use of test procedures developed for temperate environments to assess pesticide risks in tropical areas has often been disputed. Consequently, methods developed under temperate conditions need to be adapted to tropical environmental conditions, e.g. by using tropical test substrates and by testing at higher temperatures (Niva et al., 2016). As discussed above, tropical and temperate species from the same taxonomic group are not expected to demonstrate consistent differences in sensitivity. However, certain taxonomic groups may be more represented and/or ecologically or economically more important in tropical areas, such as freshwater shrimps (Daam and Rico, 2016) and (terrestrial) termites (Daam et al., 2019). Consequently, the development of test procedures for such species and their incorporation in risk assessment procedures seems imperative.

4. Agricultural practices and legislation
Agricultural practices in tropical countries are likely to lead to a higher pesticide exposure and hence higher risks to aquatic and terrestrial ecosystems under tropical conditions. Some of the main reasons for this include i) unnecessary applications and overuse; ii) use of cheaper but more hazardous pesticides; and iii) dangerous transportation and storage conditions, all often a result of a lack of training of pesticide applicators in the tropics (Daam and Van den Brink, 2010; Daam et al., 2019). Finally, countries in tropical regions usually do not have strict laws and risk assessment regulations in place for the registration and use of pesticides, meaning that pesticides banned in temperate regions for environmental reasons are often still available and used in tropical countries such as Brazil (e.g. Waichman et al., 2002).

References and recommended further reading
Daam, M.A., Van den Brink, P.J. (2010). Implications of differences between temperate and tropical freshwater ecosystems for the ecological risk assessment of pesticides. Ecotoxicology 19, 24-37.
Daam, M.A., Chelinho, S., Niemeyer, J., Owojori, O., de Silva, M., Sousa, J.P., van Gestel, C.A.M., Römbke, J. (2019). Environmental risk assessment of pesticides in tropical terrestrial ecosystems: current status and future perspectives. Ecotoxicology and Environmental Safety 181, 534-547.
Daam, M.A., Rico, A. (2016). Freshwater shrimps as sensitive test species for the risk assessment of pesticides in the tropics. Environmental Science and Pollution Research 25, 13235-13243.
Niemeyer, J.C., Moreira-Santos, M., Nogueira, M.A., Carvalho, G.M., Ribeiro, R., Da Silva, E.M., Sousa, J.P. (2010). Environmental risk assessment of a metal contaminated area in the Tropics. Tier I: screening phase. Journal of Soils and Sediments 10, 1557-1571.
Niva, C.C., Niemeyer, J.C., Rodrigues da Silva Júnior, F.M., Tenório Nunes, M.E., de Sousa, D.L., Silva Aragão, C.W., Sautter, K.D., Gaeta Espindola, E., Sousa, J.P., Römbke, J. (2016). Soil ecotoxicology in Brazil is taking its course. Environmental Science and Pollution Research 23, 363-378.
Waichman, A.V., Römbke, J., Ribeiro, M.O.A., Nina, N.C.S. (2002). Use and fate of pesticides in the Amazon State, Brazil. Risk to human health and the environment. Environmental Science and Pollution Research 9, 423-428.

What are the most important climatic factors affecting the fate and effects of chemicals when comparing temperate and tropical regions?
Can tropical organisms be expected to be more sensitive to chemicals than temperate organisms? Please justify your answer.
Should ecotoxicological test methods be adapted for their use in tropical regions? If yes, please provide two examples of adaptations that should be made.
If a chemical is allowed for use in Europe, would you recommend its use in a tropical country without additional testing? Please justify your answer.
4.5: Availability and bioavailability
Author: Martina Vijver
Reviewers: Kees van Gestel, Ravi Naidu
Learning objectives:
You should be able to:
Keywords: Chemical availability, actual and potential uptake, toxicokinetics, toxicodynamics.

Introduction:
Although many environmental chemists, toxicologists, and engineers claim to know what bioavailability means, the term eludes a consensus definition. Bioavailability may be defined as the fraction of a chemical present in the environment that is, or may become, available for biological uptake by passage across cell membranes. Bioavailability is generally approached from a process-oriented point of view within a toxicological framework, which is applicable to all types of chemicals.

The first process is chemical availability, which can be defined as the fraction of the total concentration of a chemical present in an environmental compartment that contributes to the exposure of an organism. The total concentration in an environmental compartment is not necessarily involved in the exposure, as a smaller or larger fraction of the chemical may be bound to organic or inorganic components of the environment. Organic matter and clay particles, for instance, are important in binding chemicals (see section on Soil), while the presence of cations and the pH are also important factors modifying the partitioning of chemicals between different environmental phases (see section on Metal speciation).

The second process is the actual or potential uptake, described by the toxicokinetics of the substance, which reflects the development over time of its concentration on, and in, the organism (see section on Bioconcentration and kinetics modelling).

The third process describes the internal distribution of the substance leading to its interaction(s) at the cellular site of toxic action. This is sometimes referred to as toxico-availability, and also includes the biochemical and physiological processes resulting from the effects of the chemical at the site of action.

Details on the bioavailability concept described above, as well as on how physico-chemical interactions influence each process, are given in the sections on Metal speciation and Bioconcentration and kinetics modelling.

Kinetics are involved in all three basic processes. The timeframe can vary from very brief (less than seconds) to very long (in the order of hundreds of years). Some fractions of pollutants present in soil or sediment may never contribute to the transport of chemical that could reach the internal site of action during an organism's lifespan. The fractions with different desorption kinetics may relate to different experimental techniques to determine the relevant bioavailability metric.

Box 1: Illustration of how bioavailability influences human fitness
Iron deficiency occurs when the body does not have enough iron to supply its needs. Iron is present in all cells of the human body and has several vital functions. It is a key component of the hemoglobin protein, carrying oxygen from the lungs to the tissues. Iron also plays an important role in oxidation/reduction reactions, which are crucial for the functioning of the cytochrome P450 enzymes that are responsible for the biotransformation of endogenic as well as xenobiotic chemicals.
Iron deficiency can therefore interfere with these vital functions, leading to a lack of energy (feeling tired) and eventually to malfunctioning of muscles and the brain. In case of iron deficiency, a medical doctor will prescribe Fe supplements and iron-rich food such as red meat and green leafy vegetables like spinach. Although this will lead to a higher intake of iron (after all, exposure is higher), it does not necessarily lead to a higher uptake, as here bioavailability becomes important. It is advised to avoid drinking milk or caffeinated drinks together with eating iron-rich products or supplements, because both drinks will reduce the absorption of iron in the intestinal tract. Calcium ions, abundant in milk, will compete with iron ions for the same uptake sites, so excess calcium will reduce iron uptake. Carbonates and caffeine molecules, but also phytate (inositol polyphosphate) present in vegetables, will strongly bind the iron, also reducing its availability for uptake.

For regulatory purposes, it is necessary to use a straightforward approach to assess and prioritize contaminated sites based on their risk to human and environmental health. The bioavailability concept offers a scientifically underpinned basis for use in risk assessment. Examples for inorganic contaminants are second-tier models such as the Biotic Ligand Models (BLMs), while for organic chemicals the Equilibrium Partitioning (EqP) concept (see Box 2 in the section on Sorption) is applied.

A quantitative example is given for copper in different water types in Table 1, in which water chemistry is explicitly accounted for to estimate the available copper concentration. The current Dutch generic quality target for surface waters is 1.5 µg/L total dissolved copper. The bioavailability-corrected risk limits (HC5) for the different water types in most cases exceeded this generic quality target.

Table 1. Bioavailability-adjusted copper 5% Hazardous Concentration (HC5, potentially affecting <5% of relevant species) for different water types.

Water type description | No. | DOC (mg/L) | pH | Average HC5 (µg/L)
Large rivers | 1 | 3.1 ± 0.9 | 7.7 ± 0.2 | 9.6 ± 2.9
Canals, lakes | 2 | 8.4 ± 4.4 | 8.1 ± 0.4 | 35.0 ± 17.9
Streams, brooks | 3 | 18.2 ± 4.3 | 7.4 ± 0.1 | 73.6 ± 18.9
Ditches | 4 | 27.5 ± 12.2 | 6.9 ± 0.8 | 64.1 ± 34.5
Sandy springs | 5 | 2.2 ± 1.0 | 6.7 ± 0.1 | 7.2 ± 3.1

When the calculated HC5 value is lower, the bioavailability of copper is higher, and hence, at the same total copper concentration in water, the risk is higher. The bioavailability-corrected HC5s for Cu differ significantly among water types. The lowest HC5 values were found for sandy springs (water type V) and large rivers (water type I), which appear to be sensitive water bodies. These differences can be explained from partitioning processes (chemical availability) and competition processes (the toxicokinetics step), on which the BLMs are based. Streams and brooks (water type III) can have rather high total copper concentrations without any adverse effects, which can be attributed to the protective effect of relatively high dissolved organic carbon (DOC) concentrations and the neutral to basic pH, causing strong binding of Cu to the DOC.

For risk managers, this water-type-specific risk approach can help to prioritize cleanup activities among sites having elevated copper concentrations.
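As a rough illustration of how such water-type-specific screening could be implemented, the sketch below compares measured dissolved copper concentrations against the HC5 values of Table 1; the measured concentrations are hypothetical and for illustration only:

```python
# Screening measured dissolved Cu against the water-type-specific HC5 values
# of Table 1 (Vijver et al., 2008). Measured concentrations are hypothetical.
HC5_UG_L = {
    "large rivers": 9.6,
    "canals, lakes": 35.0,
    "streams, brooks": 73.6,
    "ditches": 64.1,
    "sandy springs": 7.2,
}
measured_ug_L = {"large rivers": 12.0, "ditches": 20.0, "sandy springs": 5.0}

for water_type, conc in measured_ug_L.items():
    hc5 = HC5_UG_L[water_type]
    verdict = "exceeds HC5: priority site" if conc > hc5 else "below HC5"
    print(f"{water_type}: {conc} vs {hc5} ug Cu/L -> {verdict}")
```

In this example, only the large-rivers site would be flagged, even though the ditch has a higher total copper concentration, illustrating how bioavailability correction changes the prioritization.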
It remains possible that, for extreme environmental situations (e.g., extreme droughts and low water discharges, or extreme rainfall and high runoff), combinations of the water chemistry parameters result in calculated HC5 values that are even lower than the calculated average values. For the latter (important) reason, the generic quality target is more strict.

Hamelink, J., Landrum, P.F., Bergman, H., Benson, W.H. (1994). Bioavailability: Physical, Chemical, and Biological Interactions. CRC Press.
Ortega-Calvo, J.J., Harmsen, J., Parsons, J.R., Semple, K.T., Aitken, M.D., Ajao, C., Eadsforth, C., Galay-Burgos, M., Naidu, R., Oliver, R., Peijnenburg, W.J.G.M., Römbke, J., Streck, G., Versonnen, B. (2015). From bioavailability science to regulation of organic chemicals. Environmental Science and Technology 49, 10255-10264.
Vijver, M.G., De Koning, A., Peijnenburg, W.J.G.M. (2008). Uncertainty of water type-specific hazardous copper concentrations derived with biotic ligand models. Environmental Toxicology and Chemistry 27, 2311-2319.

What are the three processes that define bioavailability?
Explain how metal uptake changes when high concentrations of dissolved organic matter are present in the exposure medium.
Explain why bioavailability is a dynamic concept.

Author: Jose Julio Ortega-Calvo
Reviewers: John Parsons, Gerard Cornelissen
Learning objectives:
You should be able to:
Keywords: Bioavailability, Freely-dissolved concentration, Desorption, Passive sampling, Infinite sink

In many exposure scenarios involving organic chemicals, ranging from a bacterial cell to a fish, or from a sediment bed to a soil profile, the organisms experience the pollution through the water phase. Even when this is not the case, for example when uptake is from sediment consumed as food, the aqueous concentration may be a good indicator of the bioavailable concentration, since ultimately a chemical equilibrium will be established between the solid phase, the aqueous phase (possibly in the intestine), and the organism. Thus, taking an aqueous sample from a given environment and determining the concentration of a certain chemical with the appropriate analytical equipment seems a straightforward approach to assess bioavailability. However, especially for hydrophobic chemicals, which tend to remain sorbed to solid surfaces (see sections on Relevant chemical properties and Sorption of organic chemicals), the determination of the chemicals present in the aqueous phase as a way to assess bioavailability has represented a significant challenge for environmental organic chemistry. The phase exchange among different compartments often leads to equilibrium aqueous concentrations that are very low, because most of the chemical remains associated with the solids and, after sustained exposure, with the biota. These freely dissolved concentrations (Cfree) are very useful to determine bioavailability, as they represent the "tip of the iceberg" under equilibrium exposure, and are what organisms "see". Similar to the balance between gravity and buoyancy forces that makes an iceberg float at a certain level, Cfree is determined by the equilibrium between sorption and desorption, and is connected to the concentration of the sorbed chemical (Csorbed) through a partitioning coefficient.

Biological uptake may also result in the fast removal of the chemical from the aqueous phase, and thus in further desorption from the solids, so that equilibrium is never achieved and actual aqueous concentrations are much lower than the equilibrium Cfree (or even close to zero).
In these situations, bioavailability is driven by the desorption kinetics of the chemical. Usually, desorption occurs as a biphasic process, where a fast desorption phase, occurring during a few hours or days, is followed by a much slower phase, taking months or even years. Therefore, for scenarios involving rapid exposures, or for studies on coupled desorption/biodegradation, the fast-desorbing fraction of the chemical (Ffast) can be used to determine bioavailability. This fraction is often referred to as the bioaccessible fraction. Following the iceberg analogy, Ffast would constitute the upper part of the iceberg, rapidly melting by sun irradiation, with a very minimal "visible" surface (representing the desorbed chemical in the aqueous solution, which is quickly removed by biological uptake). The slowly desorbing (or melting) fraction, Fslow, would remain in the sorbed state within a given time span, having little interaction with the biota.

Cfree can be determined with a passive sampler, in the form of polymer-coated fibers or sheets (membranes) made of a variety of polymers, which establish an additional sorption equilibrium with the aqueous phase in contact with the soil or sediment (Jonker et al., 2018). Depending on the analytes of interest, different polymers, such as polydimethylsiloxane (PDMS) or polyethylene (PE), are used in passive samplers. The passive sampler, enriched in the analyte, can in this way be used to determine indirectly the pollutant concentration present in the aqueous phase, even at very low concentrations, through the appropriate distribution ratio between sampler and water. In bioavailability estimations, passive sampling is designed for equilibrium and non-depletive conditions. This means that the amount of chemical sampled should not alter the solid-water equilibrium, i.e., it is essential that Cfree is not affected significantly by the sampler. Equilibrium achievement is critical, and may take days or weeks.

Cfree can be calculated from the concentration of the pollutant in the passive sampler polymer at equilibrium (Cp) and the polymer-to-water partitioning coefficient (Kpw):

\(C_{free} = \frac{C_p}{K_{pw}}\)

Cfree values can be the basis of predictions of bioaccumulation that use the equilibrium partitioning approach, either directly or through a bioconcentration factor, and of predictions of sediment toxicity in conjunction with actual toxicity tests. Passive sampling methods are well suited for contaminated sediments, and they have already been implemented in regulatory environmental assessments based on bioavailability (Burkhard et al., 2017).
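As a numerical illustration of the relation above (all values hypothetical), the freely dissolved concentration follows directly from the measured polymer concentration:

```python
# Sketch: Cfree from equilibrium passive sampling, Cfree = Cp / Kpw.
# Both input values below are hypothetical and for illustration only.
Cp = 250.0          # analyte concentration in the polymer at equilibrium (ug/kg polymer)
log_Kpw = 4.3       # log polymer-water partition coefficient (L/kg), hypothetical
Kpw = 10 ** log_Kpw

Cfree = Cp / Kpw    # freely dissolved concentration (ug/L)
print(f"Cfree = {Cfree * 1000:.1f} ng/L")   # about 12.5 ng/L for these values
```

The very low result illustrates why direct aqueous analysis of hydrophobic chemicals is so challenging, and why the enrichment of the analyte in the polymer makes the measurement feasible.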
The determination of Ffast can be achieved with the use of methods that trap the desorbed chemical once it appears in the aqueous phase. Far from equilibrium conditions, desorption is driven to its maximum rate by placing a material in the aqueous phase that acts as an infinite sink (comparable to the sun irradiation melting the iceberg in the analogy above). The most accepted materials for these desorption extraction methods are Tenax, a sorptive resin, and cyclodextrin, a solubilizing agent (ISO, 2018). These methods maintain an aqueous chemical concentration of almost zero, and therefore sorption of the chemical back to the soil or sediment can be neglected. Several extraction steps can be used, covering a variable time span that depends on the environmental sample.

The following first-order, two-compartment kinetic model can be used to analyze desorption extraction data:

\(\frac{S_t}{S_0} = F_{fast} \cdot e^{-k_{fast} \cdot t} + F_{slow} \cdot e^{-k_{slow} \cdot t}\)

In this equation, St and S0 (mg) are the soil-sorbed amounts of the chemical at time t (h) and at the start of the experiment, respectively. Ffast and Fslow are the fast- and slow-desorbing fractions, and kfast and kslow (h-1) are the rate constants of fast and slow desorption, respectively. To calculate the values of the different constants and fractions (Ffast, Fslow, kfast, and kslow), exponential curve fitting can be used. The ln form of the equation can be used to simplify curve fitting.
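A minimal sketch of such a fit, using non-linear regression on hypothetical Tenax extraction data (all numbers are made up; Fslow is obtained as 1 - Ffast by mass balance):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical Tenax extraction data: fraction still sorbed, St/S0, versus time
t_h  = np.array([0, 2, 6, 24, 48, 120, 240, 480])                   # hours
frac = np.array([1.00, 0.71, 0.52, 0.38, 0.34, 0.30, 0.26, 0.21])   # St/S0

def biphasic(t, F_fast, k_fast, k_slow):
    # First-order two-compartment model; F_slow = 1 - F_fast by mass balance
    return F_fast * np.exp(-k_fast * t) + (1.0 - F_fast) * np.exp(-k_slow * t)

(F_fast, k_fast, k_slow), _ = curve_fit(
    biphasic, t_h, frac,
    p0=[0.6, 0.2, 0.001],
    bounds=([0.0, 0.0, 0.0], [1.0, 10.0, 1.0]),
)
print(f"Ffast = {F_fast:.2f}, kfast = {k_fast:.3f} 1/h, kslow = {k_slow:.5f} 1/h")
```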
Once the desorption kinetics are known, the method can be simplified for a series of samples by using single time-point extractions. A time period of 20 h has been suggested as sufficient to approximate Ffast. This is highly convenient for operational reasons (ISO, 2018), but indicative at best, since the time needed to extract Ffast tends to vary between chemicals and soils/sediments.

References
Burkhard, L.P., Mount, D.R., Burgess, R.M. (2017). Developing Sediment Remediation Goals at Superfund Sites Based on Pore Water for the Protection of Benthic Organisms from Direct Toxicity to Nonionic Organic Contaminants. EPA/600/R-15/289. U.S. Environmental Protection Agency, Office of Research and Development, Washington, DC.
ISO (2018). Technical Committee ISO/TC 190 Soil quality - Environmental availability of non-polar organic compounds - Determination of the potentially bioavailable fraction and the non-bioavailable fraction using a strong adsorbent or complexing agent. International Organization for Standardization, Geneva, Switzerland.
Jonker, M.T.O., van der Heijden, S.A., Adelman, D., Apell, J.N., Burgess, R.M., Choi, Y., Fernandez, L.A., Flavetta, G.M., Ghosh, U., Gschwend, P.M., Hale, S.E., Jalalizadeh, M., Khairy, M., Lampi, M.A., Lao, W., Lohmann, R., Lydy, M.J., Maruya, K.A., Nutile, S.A., Oen, A.M.P., Rakowska, M.I., Reible, D., Rusina, T.P., Smedes, F., Wu, Y. (2018). Advancing the use of passive sampling in risk assessment and management of sediments contaminated with hydrophobic organic chemicals: results of an international ex situ passive sampling interlaboratory comparison. Environmental Science & Technology 52, 3574-3582.

Describe why the freely dissolved concentration is often used as an indicator of the bioavailability of organic chemicals in soil and sediment.
What is passive sampling and how can this be used to determine the bioavailable concentrations of chemicals in soil or sediment?
What is meant by the fast-desorbing fraction of a chemical in soil or sediment and how can this be determined with desorption extraction methods?

Author: Kees van Gestel
Reviewers: Martina Vijver, Steve Lofts
Learning objectives:
You should be able to:
Keywords: Chemical availability, actual and potential uptake, toxicokinetics, toxicodynamics.

Introduction:
Total concentrations are not very informative about the availability of metals in soils or sediments. The fate and behavior of metals - in general terms, their mobility - as well as their biological uptake and toxicity are highly determined by their speciation. Speciation describes the partitioning of a metal among the various forms in which it may exist (see section on Metal speciation). For assessing the risk of metals to man and the environment, speciation therefore is highly relevant, as it may determine their availability for uptake and effects in organisms.

Several tools have been developed to determine available metal concentrations or their speciation in soils and sediments. As indicated in the section on Availability and bioavailability, such chemical methods are just indicative and to a large extent ignore the dynamics of availability. Moreover, availability is also influenced by biological processes, with the abiotic-biotic interactions influencing the bioavailability process being species- and often even life-stage specific. Nevertheless, chemical extractions may provide useful information to predict or estimate the potential risks of metals and therefore are preferred over the determination of total metal concentrations. The available methods include:

Porewater extraction probably best approaches the readily available fraction of metals in soil, which drives mobility and is the fraction of metals directly experienced by many exposed organisms. In general, pore water is extracted from soil or sediment by centrifugation, followed by filtration over a 0.45 µm (or 0.22 µm) filter to remove larger particles and perhaps some of the dissolved organic matter. Filtration, however, will not remove all complexes, making it impossible to determine solely the dissolved metal fraction in the pore water. Nevertheless, porewater metal concentrations have been shown to correlate significantly with metal uptake (e.g. for copper uptake by barley and tomato; Zhao et al., 2006) and to be useful for predicting toxic threshold concentrations of metals, with correction for pH (e.g. for nickel toxicity to tomato and barley; Rooney et al., 2007).

Extraction with water simulates the immediately available fraction, i.e. the fraction present in the soil solution or pore water. By extracting soil with water, the pore water is diluted, which on the one hand may facilitate metal analysis by creating larger volumes of solution, but on the other hand may lead to differences between measured and actual metal concentrations in the pore water, as the dilution may impact chemical equilibria.

Extraction with diluted salts aims to determine the fraction of metal that is easily available, or may become available, because it is in the exchangeable form. This refers to cationic metals that may be bound to the negatively charged soil particles (see section on Soil). Buffered salt solutions, for instance 1 M NH4-acetate at pH 4.8 (with acetic acid) or at pH 7, may under- or overestimate available metal concentrations because of their interference with soil pH. Unbuffered salt solutions therefore are more widely used, and include for instance 0.001 or 0.01 M CaCl2, 0.1 M NaNO3 or 1 M NH4NO3 (Gupta and Aten, 1993; Novozamsky et al., 1993). Gupta and Aten (1993) showed good correlations between the uptake of some metals in plants and 0.1 M NaNO3-extractable concentrations in soil, while Novozamsky et al. (1993) found similarly well-fitting correlations using 0.01 M CaCl2. The latter method also seemed well capable of predicting metal uptake in soil invertebrates, and therefore has been more widely accepted for predicting metal availability in soil ecotoxicology. Zhang et al. (2019) provide an example with the correlation between Pb toxicity to enchytraeid worms in different soils and 0.01 M CaCl2-extractable concentrations.

Extractions with water (including porewater) and dilute salts are most accurately described as measures of the chemical solubility of the metal in the soil.
The values obtained can be useful indicators of the relative metal reactivity across soils, but tend to be less useful for bioavailability assessment, unless the soils under consideration have a narrow range of soil properties. This is because the solutions obtained from such soils themselves have varying chemical properties (e.g. pH, DOC concentration), which are likely to affect the availability of the measured metal to organisms.

Extraction with chelating agents, such as EDTA (0.01-0.05 M) or DTPA (0.005 M) (as their sodium or ammonium salts), aims at assessing the availability of metals to plants. Many plants have the ability to actively affect metal speciation in the soil by producing root exudates. These extractants may form very stable water-soluble complexes with many different polyvalent cationic metals. It should be noted that the large variation in plant species and corresponding physiologies, as well as their interactions with symbiotic microorganisms (e.g. mycorrhizal fungi), means that no single extraction method is capable of properly predicting metal availability to all plant species.

Extraction with diluted acids has been advocated for predicting the potentially available fraction of metals in soils, i.e. the fraction that may become available in the long run. It is a quite rigorous extraction method that can be executed in a robust way. Metal concentrations determined by extracting soils with 0.43 M HNO3 showed very good correlation with oral bioaccessible concentrations (Rodrigues et al., 2013), probably because this extraction to some degree simulates metal release under acidic stomach conditions.

Both extraction with chelating agents and extraction with diluted acids may also dissolve solids, such as carbonates and Fe- and Al-oxides. This raises concerns as to the interpretation of the results of these extraction systems, and especially as to their generalization to different soil-plant systems (Novozamsky et al., 1993). Extractions with chelating agents and dilute acids are considered methods to estimate the 'geochemically active' metal in soil: the pool of adsorbed metal that can participate in solid-solution adsorption/desorption and exchange equilibria on timescales of hours to days. This pool, along with basic soil properties such as pH, also controls the readily available concentrations obtained with water, weak salt, or porewater extraction. From the bioavailability point of view, these extractions tend to be most useful as inputs to bioavailability/toxicity models such as that of Lofts et al. (2004), which take further account of the effects of metal speciation and soil chemistry on metal bioavailability to environmental organisms.

Sequential extraction combines different extraction methods, and aims at determining either how strongly metals are retained, or to which components of the solid phase they are bound, in soils or sediments. This makes it possible to determine how metals are distributed over different fractions within the same soil or sediment, and allows interpretation of the bioavailability dynamics. By far the most widely used method of sequential extraction is the one proposed by Tessier et al. (1979). Five fractions are distinguished, indicating how metals are interacting with soil or sediment components: (1) exchangeable, (2) bound to carbonates, (3) bound to Fe/Mn oxides, (4) bound to organic matter, and (5) residual. The more easily extracted fractions may, for instance, be indicative of the metal that can become available following gut passage of soil particles (see e.g. Basta and Gradwohl, 2000).

Passive sampling may also be applied to assess available metal concentrations. The best-known method is that of Diffusive Gradients in Thin films (DGT), developed by Zhang et al. (1998).
In this method, a resin (Chelex) with a high affinity for metals is placed in a device and covered with a diffusive gel and a 0.45 µm cellulose nitrate membrane. The membrane is brought into contact with the soil. Metals dissolved in the soil solution will diffuse through the membrane and the diffusive gel and bind to the resin. Based on the thickness of the membrane and gel and the contact time with the soil, the metal concentration in the pore water can be calculated from the amount of metal accumulated in the resin. The method may be indicative of available metal concentrations in soils and sediments, but can only work effectively when the soil is sufficiently moist to guarantee optimal diffusion of metals to the resin. For the same reason, the method probably is better suited for assessing the availability of metals to plants than to invertebrates, especially for animals that are not in continuous contact with the soil solution.
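To make the calculation step concrete: in the standard DGT interpretation (Zhang et al., 1998), the time-averaged porewater concentration follows from C = M Δg / (D A t), where M is the mass of metal accumulated in the resin, Δg the thickness of the diffusion layer (gel plus membrane), D the diffusion coefficient of the metal in the gel, A the area of the exposure window and t the deployment time. A minimal sketch with illustrative numbers (all values hypothetical):

```python
# Sketch of the standard DGT calculation: C = M * dg / (D * A * t).
# All input values below are hypothetical example numbers.
M = 5.0e-8       # metal mass accumulated in the resin (g)
dg = 0.094       # diffusion layer thickness: gel plus filter membrane (cm)
D = 6.1e-6       # diffusion coefficient of the metal in the gel (cm2/s), illustrative
A = 3.14         # area of the exposure window (cm2)
t = 24 * 3600    # deployment time (s), here 24 h

C = M * dg / (D * A * t)              # time-averaged porewater concentration (g/cm3)
print(f"C_DGT = {C * 1e9:.2f} ug/L")  # 1 g/cm3 corresponds to 1e9 ug/L
```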
Several of the above-described methods have been adopted by the International Organization for Standardization (ISO) in (draft) standardized test guidelines for assessing available metal fractions in soils, sediments and waste materials, e.g. to assess the potential for leaching to groundwater or their potential bioaccessibility. These include e.g. ISO/TS 21268-1 "Soil quality - Leaching procedures for subsequent chemical and ecotoxicological testing of soil and soil materials - Part 1: Batch test using a liquid to solid ratio of 2 l/kg dry matter", ISO 19730 "Soil quality - Extraction of trace elements from soil using ammonium nitrate solution" and ISO 17586 "Soil quality - Extraction of trace elements using dilute nitric acid".

References:
Basta, N., Gradwohl, R. (2000). Estimation of Cd, Pb, and Zn bioavailability in smelter-contaminated soils by a sequential extraction procedure. Journal of Soil Contamination 9, 149-164.
Gupta, S.K., Aten, C. (1993). Comparison and evaluation of extraction media and their suitability in a simple model to predict the biological relevance of heavy metal concentrations in contaminated soils. International Journal of Environmental Analytical Chemistry 51, 25-46.
Lofts, S., Spurgeon, D.J., Svendsen, C., Tipping, E. (2004). Deriving soil critical limits for Cu, Zn, Cd, and Pb: a method based on free ion concentrations. Environmental Science and Technology 38, 3623-3631.
Novozamsky, I., Lexmond, Th.M., Houba, V.J.G. (1993). A single extraction procedure of soil for evaluation of uptake of some heavy metals by plants. International Journal of Environmental Analytical Chemistry 51, 47-58.
Rodrigues, S.M., Cruz, N., Coelho, C., Henriques, B., Carvalho, L., Duarte, A.C., Pereira, E., Römkens, P.F. (2013). Risk assessment for Cd, Cu, Pb and Zn in urban soils: chemical availability as the central concept. Environmental Pollution 183, 234-242.
Rooney, C.P., Zhao, F.-J., McGrath, S.P. (2007). Phytotoxicity of nickel in a range of European soils: influence of soil properties, Ni solubility and speciation. Environmental Pollution 145, 596-605.
Tessier, A., Campbell, P.G.C., Bisson, M. (1979). Sequential extraction procedure for the speciation of particulate trace metals. Analytical Chemistry 51, 844-851.
Zhang, H., Davison, W., Knight, B., McGrath, S. (1998). In situ measurements of solution concentrations and fluxes of trace metals in soils using DGT. Environmental Science and Technology 32, 704-710.
Zhang, L., Verweij, R.A., Van Gestel, C.A.M. (2019). Effect of soil properties on Pb bioavailability and toxicity to the soil invertebrate Enchytraeus crypticus. Chemosphere 217, 9-17.
Zhao, F.J., Rooney, C.P., Zhang, H., McGrath, S.P. (2006). Comparison of soil solution speciation and diffusive gradients in thin-films measurement as an indicator of copper bioavailability to plants. Environmental Toxicology and Chemistry 25, 733-742.

Which metal fractions are extracted with the Tessier method, and what does that say about metal availability?
Why is porewater extraction to be preferred over water extraction?
What is the principle of the DGT passive sampling method, and why does it not work for fairly dry soils?
Which method may give the best estimate of potentially available metals for oral uptake by mammals?
What are the pros and cons of chemical extraction methods for assessing the availability of metals in soils or sediments?
4.6: Degradation
Author: John Parsons
Reviewers: Steven Droge, Kristopher McNeill
Learning objectives:
You should be able to:
Keywords: Environmental degradation reactions, Hydrolysis, Reduction, Dehalogenation, Oxidation, Photodegradation

Transformation of organic chemicals in the environment can occur by a variety of reactions. These may be purely chemical reactions, such as hydrolyses or redox reactions, photochemical reactions with the direct or indirect involvement of light, or biochemical reactions. Such transformations can change the biological activity (toxicity) of a molecule; they can change its physico-chemical properties and thus its environmental partitioning; they can change its bioavailability, for example facilitating biodegradation; or they may contribute to the complete removal (mineralization) of the chemical from the environment. In many cases, chemicals may be removed by combinations of these different processes, and it is sometimes difficult to unequivocally identify the contributions of the different mechanisms. Indeed, combinations of different mechanisms are sometimes important, for example in cases where microbial activity is responsible for creating conditions that favour chemical reactions. Here we will focus on two types of reactions: abiotic (dark) reactions and photochemical reactions. Biodegradation reactions are covered elsewhere (see section on Biodegradation).

Hydrolytic reactions are important chemical reactions removing organic contaminants and are particularly important for chemicals containing acid derivatives as functional groups. Common examples of such chemicals are pesticides of the organophosphate and carbamate classes, such as parathion, diazinon, aldicarb and carbaryl. Organophosphate chemicals are also used as flame retardants and are widely distributed in the environment. Hydrolyses use water (hydro-) to break (-lysis) a bond: they are reactions with water that produce an acid and either an alcohol or an amine as products. Hydrolyses can be catalysed by either OH- or H+ ions, and their rates are therefore pH dependent.

For organohalogens, the rates of the corresponding substitution reactions with water vary strongly depending on the structure of the molecule and the halogen substituent (with Br and I being substituted more rapidly than Cl, and much more rapidly than F). In general, the rates of these reactions are too slow to be of more than minor importance, except for tertiary organohalogens and secondary organohalogens with Br and I (Schwarzenbach et al., 2017).

In some cases, other substitution reactions not involving water as a reactant may be important. Examples include Cl- in seawater converting CH3I to CH3Cl, and the reaction of thiols with alkyl bromides in anaerobic groundwater and sediment porewater under sulfate-reducing conditions (Schwarzenbach et al., 2017).
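To make the pH dependence of hydrolysis explicit: the observed pseudo-first-order rate constant is commonly written as k_obs = kA[H+] + kN + kB[OH-], with acid-catalysed, neutral and base-catalysed contributions (cf. Schwarzenbach et al., 2017). A minimal sketch with hypothetical rate constants:

```python
import math

# Sketch: pH dependence of hydrolysis, k_obs = kA*[H+] + kN + kB*[OH-]
# (cf. Schwarzenbach et al., 2017). All rate constants below are hypothetical.
kA = 1e-2   # acid-catalysed constant (1/(M s))
kN = 1e-7   # neutral hydrolysis constant (1/s)
kB = 1e+1   # base-catalysed constant (1/(M s))
Kw = 1e-14  # ion product of water at 25 degrees C

for pH in (3, 5, 7, 9, 11):
    H = 10.0 ** -pH
    k_obs = kA * H + kN + kB * (Kw / H)   # overall pseudo-first-order constant (1/s)
    half_life_d = math.log(2) / k_obs / 86400
    print(f"pH {pH:2d}: k_obs = {k_obs:.2e} 1/s, half-life = {half_life_d:.1f} d")
```

For this hypothetical compound, hydrolysis is fastest at strongly acidic and strongly basic pH, with the longest half-life around slightly acidic to neutral conditions, a pattern typical of many esters.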
Redox (reduction and oxidation) reactions are another important reaction class involved in the degradation of organic chemicals. In the presence of oxygen, the oxidation of organic chemicals is thermodynamically favourable but occurs at insignificant rates unless oxygen is activated in the form of oxygen radicals or peroxides (following light absorption, for example, see below) or unless the reaction is catalysed by transition metals or transition metal-containing enzymes (see the sections on Biodegradation and Xenobiotic metabolism and defence).

Reduction reactions are important redox reactions for environmental contaminants in anaerobic environments such as sediments and groundwater aquifers. Under these conditions, organic chemicals containing reducible functional groups such as carboxylic acids and nitro groups undergo reduction reactions (Table 1).

Table 1: Examples of chemical redox reactions that may occur in the environment (adapted from Schwarzenbach et al. 2017)

Sunlight is an important source of energy to initiate chemical reactions, and photochemical reactions are particularly important in the atmosphere. Aromatic compounds and other chemicals containing unsaturated bonds that are able to absorb light in the frequency range available in sunlight become excited (energized), and this can lead to chemical reactions. These reactions lead to cleavage of bonds between carbon atoms and other atoms, such as halogens, to produce radical species. These radicals are highly reactive and react further to abstract hydrogen or OH radicals from water to produce C-H or C-OH bonds, or may react with each other to produce larger molecules. Well-known examples of atmospheric photochemical reactions are the stratospheric reactions of CFCs that have had a negative impact on the so-called ozone layer, and the photochemical oxidations of hydrocarbons that are involved in the generation of smog.

In the aquatic environment, light penetration is sufficient to lead to photochemical reactions of organic chemicals at the water surface or in the top layer of clear water. The presence of particles in a waterbody reduces light intensity through light scattering, as does dissolved organic matter through light absorption. Photodegradation contributes significantly to removing oil spills and appears to favour the degradation of longer-chain alkanes, compared to the preferential attack of linear and small alkanes by biodegradation (Garrett et al., 1998). Cycloalkanes and aromatic hydrocarbons are also removed by photodegradation (D'Auria et al., 2009). Comparatively little is known about the role of photodegradation of other organic pollutants in the marine environment, although there is, for example, evidence that triclosan is removed by photolysis in the German Bight area of the North Sea (Xie et al., 2008). In the soil environment, there is some evidence that photodegradation may contribute to the removal of a variety of organic chemicals such as pesticides and chemicals present in sewage sludge used as a soil amendment, but the significance of this process is unclear. Similarly, chemicals that have accumulated in ice, for example as a result of long-range transport to polar regions, also seem to be susceptible to photodegradation.

Which organic chemicals would you expect to undergo hydrolytic degradation in the environment? Explain why these reactions depend on the pH.
Reductive dehalogenation reactions are often observed to occur for organochlorine compounds in anaerobic environments. Why are these reactions called reductive dehalogenation?
What products would you expect to be formed by the reductive dehalogenation of tetrachloroethene (Cl2C=CCl2)? Describe two ways in which bacteria are involved in these reactions.
In which environmental compartments is photochemical transformation or photodegradation a potentially important degradation mechanism for organic chemicals, and why is this the case?
Explain with examples the differences between direct and indirect photodegradation in the environment.

Author: John Parsons
Reviewers: Steven Droge, Russell Davenport
Learning objectives: You should be able to:
Keywords: Primary biodegradation, mineralisation, readily biodegradable chemicals, persistent chemicals, oxygenation reactions, reduction reactions

Biodegradation and biotransformation both refer to degradation reactions that are catalyzed by enzymes. In general, biodegradation is used to describe the degradation carried out by microorganisms, and biotransformation often refers to reactions that follow the uptake of chemicals by higher organisms. This distinction is important and arises from the role that bacteria and other microorganisms play in natural biogeochemical cycles. As a result, microorganisms have the capacity to degrade most (perhaps all) naturally occurring organic chemicals in organic matter and convert them to inorganic end products. These reactions supply the microorganisms with the nutrients and energy they need to grow. This broad degradative capacity means that they are able to degrade many anthropogenic chemicals and potentially convert them to inorganic end products, a process referred to as mineralisation.

Although higher organisms are also able to degrade (metabolise) many anthropogenic chemicals, these chemicals are not taken up as a source of nutrients and energy. Many anthropogenic chemicals can disturb cell functioning, and biotransformation has been proposed as a detoxification mechanism. Undesirable chemicals that might otherwise accumulate to potentially harmful levels are converted to products that are more rapidly excreted. In most cases, a polar and/or ionizable unit is attached to the chemical in one or two steps, making the compound more soluble in blood and more readily removed via the kidneys to the urine. This also renders most hazardous chemicals less toxic than the original chemical. Such biotransformation steps always cost the organism energy (ATP, or reducing equivalents such as NADH or NADPH used in the enzymatic reactions). Biotransformation is sometimes also used to describe degradation by microorganisms when this is limited to the conversion of a chemical into a new product.

Biodegradation is for many organic contaminants the major process that removes them from the environment. Measuring rates of biodegradation is therefore a prominent aspect of chemical risk assessment. Internationally recognized standardised protocols have been developed to measure biodegradation rates of chemicals; well-known examples are the OECD Guidelines. These guidelines include screening tests designed to identify chemicals that can be regarded as readily (i.e. rapidly) biodegradable, as well as more complex tests to measure biodegradation rates of chemicals that degrade slowly in a variety of simulated environments.
For more complex mechanistic studies, microorganisms able to degrade specific chemicals are isolated from environmental samples and cultivated in laboratory systems.

In principle, biodegradation of a chemical can be determined either by following the concentration of the chemical during the test or by following the conversion to end products (in most cases by measuring either oxygen consumption or CO2 production). Although measuring the concentration gives the most directly relevant information on a chemical, it requires the availability or development of analytical methods, which is not always within the capability of routine testing laboratories. Measuring the conversion to CO2 is comparatively straightforward, but the production of CO2 from other chemicals present in the test system (such as soil or dissolved organic matter) must be accounted for. This can be done by using 14C-labelled chemicals in the tests, but not all laboratories have facilities for this. The main advantage of this approach is that demonstration of quantitative conversion of a chemical to CO2 etc. means that there is no concern about the accumulation of potentially toxic metabolites.

Since biodegradation is an enzymatically catalysed process, its rates should be modelled using Michaelis-Menten kinetics, or Monod kinetics if growth of the microorganisms is taken into account. In practice, however, first-order kinetics are often used to model biodegradation in the absence of significant growth of the degrading microorganisms. This is more convenient than using Michaelis-Menten kinetics, and there is some justification for the simplification, since the concentrations of chemicals in the environment are in general much lower than the half-saturation concentrations of the degrading enzymes (see the numerical sketch below).

Table 1. Influence of molecular structure on the biodegradability of chemicals in the aerobic environment.

Type of compound or substituent | More biodegradable | Less biodegradable
Hydrocarbons | linear alkanes < C12 | linear alkanes > C12
| alkanes of not too high molecular weight | high molecular weight alkanes
| linear chains | branched chains
| -C-C-C- | -C-O-C-
| aliphatic | aromatic
Aliphatic chlorine | Cl more than 6 carbons from the terminal C | Cl at less than 6 carbons from the terminal C
Substituents on an aromatic ring | -OH, -CO2H, -NH2, -OCH3 | -F, -Cl, -NO2, -CF3

Whether expressed in terms of first-order kinetics or Michaelis-Menten parameters, rates of biodegradation vary widely for different chemicals, showing that chemical structure has a large impact on biodegradation. Large variations in biodegradation rates are, however, often observed for the same chemical in different experimental systems. This shows that environmental properties and conditions also play a key role in determining removal by biodegradation, and it is often almost impossible to distinguish the effects of chemical properties from those of environmental properties. In other words, there is no such thing as an intrinsic biodegradation rate of a chemical. Nevertheless, we can derive some generic relationships between the structure and biodegradability of chemicals, as listed in Table 1. Examples are that branched hydrocarbon structures are degraded more slowly than linear hydrocarbon structures, and cyclic and in particular aromatic chemicals are degraded more slowly than aliphatic (non-aromatic) chemicals. Substituents and functional groups also have a major impact on biodegradability, with halogens and other electron-withdrawing substituents having strongly negative effects. It is therefore no surprise that the list of persistent organic pollutants is dominated by organohalogen compounds, and in particular those with aromatic or alicyclic structures.
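The first-order simplification flagged above can be checked numerically: when the concentration C is well below the half-saturation constant Km, the Michaelis-Menten rate Vmax·C/(Km + C) approaches (Vmax/Km)·C. A minimal sketch with assumed (not measured) parameter values:

```python
# Hedged sketch: Michaelis-Menten kinetics versus its first-order
# approximation at low substrate concentration. Vmax and Km are
# invented illustration values.
Vmax, Km = 1.0, 1e-4          # mol/L/d, mol/L

def mm_rate(C):
    return Vmax * C / (Km + C)     # Michaelis-Menten rate

def first_order_rate(C):
    return (Vmax / Km) * C         # valid when C << Km

for C in (1e-7, 1e-6, 1e-5, 1e-4, 1e-3):
    ratio = first_order_rate(C) / mm_rate(C)   # equals (Km + C)/Km
    print(f"C = {C:.0e} M: first-order/MM = {ratio:.2f}")
```

The two rates agree to within about one percent up to C of roughly 0.01·Km, which is why first-order rate constants are a defensible simplification at typical environmental concentrations.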
It should be recognized that biodegradation rates have often been observed to change over time. Long-term exposure of microbial communities to new chemicals has often been observed to lead to increasing biodegradation rates. This phenomenon is called adaptation or acclimation and is often observed following repeated application of a pesticide at the same location. Atrazine is an example: degradation rates increase following longer exposure to the pesticide. Another recent example is the difference in biodegradation rates of the builder L-GLDA (tetrasodium glutamate diacetate) by activated sludge from different wastewater treatment plants in the USA. Sludge from regions where L-GLDA was not, or only recently, on the market required a long lag time before degradation started, whereas sludge from regions where L-GLDA-containing products had been available for several months required shorter lag phases.

Adaptation can result from i) shifts in the composition or abundances of species in a bacterial community, ii) mutations within single populations, iii) horizontal transfer of DNA, or iv) genetic recombination events, or combinations of these.

Biodegradation of chemicals that we regard as pollutants takes place when these chemicals are incorporated into the metabolism of microorganisms. The reactions involved in biodegradation are therefore similar to those involved in common metabolic reactions, such as hydrolyses, oxidations and reductions. Since the conversion of an organic chemical to CO2 is an overall oxidation reaction, oxidation reactions involving molecular oxygen are probably the most important reactions. These reactions with oxygen are often the first but essential step in degradation and can be regarded as an activation step converting relatively stable molecules into more reactive intermediates. This is particularly important for aromatic chemicals, since oxygenation is required to make aromatic rings susceptible to ring cleavage and further degradation. These reactions are catalysed by enzymes called oxygenases, of which there are, broadly speaking, two classes. Monooxygenases are enzymes catalysing reactions in which one oxygen atom of O2 reacts with an organic molecule to produce a hydroxylated product. Examples are the enzymes of the cytochrome P450 family, which are present in all organisms. These enzymes are, for example, involved in the oxidation of alkanes to carboxylic acids as part of the "beta-oxidation" pathway, which shortens linear alkanoic acids in steps of C2-units.

The absence of molecular oxygen does not preclude oxidation of organic chemicals. Other oxidants (nitrate, sulphate, Fe(III), etc.) may be present in sufficiently high concentrations to act as oxidants and terminal electron acceptors supporting microbial growth. In the absence of oxygen, activation relies on other reactions; the most important seem to be carboxylation or addition of fumarate. An example is the degradation of naphthalene to CO2 in sediment microcosms under sulphate-reducing conditions.

Other important reactions in anaerobic ecosystems (sediments and groundwater plumes) are reductions. These affect functional groups, for example the reduction of acids to aldehydes to alcohols, of nitro groups to amino groups and, particularly important, the substitution of halogens by hydrogen.
The latter reactions can contribute to the conversion of highly chlorinated chemicals, which are resistant to oxidative biodegradation, into less chlorinated products that are more amenable to aerobic biodegradation. Many examples of these reductive dehalogenation reactions have been shown to occur in, for example, tetrachloroethene-contaminated groundwater (e.g. from dry-cleaning processes) and PCB-contaminated sediment. These reactions are energetically favourable (exergonic) under anaerobic conditions, and some microorganisms are able to harvest this energy to support their growth. This can be considered a form of respiration based on dechlorination and is sometimes referred to as chlororespiration.

As is the case for abiotic degradation, hydrolyses are also important reactions in biodegradation pathways, particularly for chemicals that are derivatives of organic acids, such as carbamate, ester and organophosphate pesticides, where hydrolyses are often the first step in their biodegradation. These reactions are similar to those described in the section on Chemical degradation.

Itrich, N.R., McDonough, K.M., van Ginkel, C.G., Bisinger, E.C., LePage, J.N., Schaefer, E.C., Menzies, J.Z., Casteel, K.D., Federle, T.W. (2015). Widespread microbial adaptation to L-glutamate-N,N-diacetate (L-GLDA) following its market introduction in a consumer cleaning product. Environmental Science & Technology 49, 13314-13321.
Janssen, D.B., Dinkla, I.J.T., Poelarends, G.J., Terpstra, P. (2005). Bacterial degradation of xenobiotic compounds: evolution and distribution of novel enzyme activities. Environmental Microbiology 7, 1868-1882.
Kleemann, R., Meckenstock, R.U. (2011). Anaerobic naphthalene degradation by Gram-positive, iron-reducing bacteria. FEMS Microbiology Ecology 78, 488-496.
Schwarzenbach, R.P., Gschwend, P.M., Imboden, D.M. (2017). Environmental Organic Chemistry, Third Edition, Wiley, ISBN 978-1-118-76723-8.
Van Leeuwen, C., Vermeire, T.G. (2007). Risk Assessment of Chemicals: An Introduction (2nd ed.), Springer, ISBN 978-1-4020-6101-1.
Zhou, Q., Chen, L.C., Wang, Z., Wang, J., Ni, S., Qiu, J., Liu, X., Zhang, X., Chen, X. (2017). Fast atrazine degradation by the mixed cultures enriched from activated sludge and analysis of their microbial community succession. Environmental Science & Pollution Research 24, 22152-22157.

Degradation by microorganisms plays an important role in the environmental fate of industrial organic chemicals. Explain briefly why the role of microorganisms is so important.
Biodegradation rates depend, among other factors, on the structure of the chemicals. Mention three structural factors responsible for slow biodegradation.
DDT is one of the original "dirty dozen" Persistent Organic Pollutants (POPs). Explain what these POPs are and why they are labelled as persistent. What structural features are responsible for them being labelled as POPs?
Groundwater used to prepare drinking water is discovered to be contaminated with toluene and tetrachloroethene. Although nothing is known about the geochemical conditions in the groundwater aquifer, you are asked to investigate whether there is any evidence for biodegradation of these compounds occurring in the aquifer.
Suggest which compounds could be analysed as evidence for biodegradation.

Authors: John Parsons
Reviewers: Steven Droge, Russell Davenport
Learning objectives: You should be able to:
Keywords: Environmental fate, chemical degradation, photochemical degradation, biodegradation, mineralisation, degradation rate

Many experimental approaches are possible to measure the environmental degradation of chemicals, ranging from highly controlled laboratory experiments to environmental monitoring studies. While each of these approaches has its advantages and disadvantages, a standardised and relatively straightforward set of protocols has clear advantages, such as suitability for a wide range of laboratories, broad scientific and regulatory acceptance, and comparability for different chemicals.

The system of OECD test guidelines (see links in the reference list of this chapter) is the most important set of standardised protocols, although other test systems may be used in other regulatory contexts. As well as tests covering environmental fate processes, they also cover physical-chemical properties, bioaccumulation, toxicity, etc. These guidelines have been developed in an international context and are adopted officially after extensive validation and testing in different laboratories. This ensures their wide acceptance and application in different regulatory contexts for chemical hazard and risk assessment.

The OECD Guidelines include only two tests specific for chemical degradation. This might seem surprising, but it should not be forgotten that chemical degradation can also contribute to the removal observed in biodegradability tests. The OECD Guidelines for chemical degradation are OECD Test 111: Hydrolysis as a Function of pH (OECD 2004a) and OECD Test 316: Phototransformation of Chemicals in Water - Direct Photolysis (OECD 2008b). If desired, sterilised controls may also be used to determine the contribution of chemical degradation in biodegradability tests.

OECD Test 111 measures hydrolytic transformations of chemicals in aquatic systems at pH values normally found in the environment (pH 4-9). Sterile aqueous buffer solutions of different pH values (pH 4, 7 and 9) containing radio-labelled or unlabelled test substance (below saturation) are incubated in the dark at constant temperature and analysed after appropriate time intervals for the test substance and for hydrolysis products. The preliminary (first-tier) test is carried out for 5 days at 50°C and pH 4.0, 7.0 and 9.0. Second-tier tests study the hydrolysis of unstable substances and the identification of hydrolysis products, and may extend over 30 days.

OECD Test 316 measures direct photolysis rate constants using a xenon arc lamp capable of simulating natural sunlight in the 290-800 nm range, or natural sunlight itself, with the results extrapolated to natural waters. If estimated losses are equal to or greater than 20%, the transformation pathway and the identities, concentrations, and rates of formation and decline of major transformation products are determined.

Biodegradation is in general considered to be the most important removal process for organic chemicals in the environment, and it is therefore no surprise that biodegradability testing plays a key role in assessing the environmental fate and subsequent exposure risks of chemicals. Biodegradation is an extensively researched area, but data from standardised tests are favoured for regulatory purposes as they are assumed to yield reproducible and comparable data.
Standardised tests have been developed internationally, most importantly under the auspices of the OECD, and are part of the wider range of tests to measure physical-chemical, environmental and toxicological properties of chemicals. An overview of these biodegradability tests is given in Table 1.

The way biodegradability testing is implemented can vary in detail depending on the regulatory context, but in general it is based on a tiered approach, with all chemicals being subjected to screening tests to identify those that can be considered readily biodegradable and therefore removed rapidly from wastewater treatment plants (WWTPs) and the environment in general. These tests were originally developed for surfactants and often use activated sludge from WWTPs as a source of microorganisms, since discharge via wastewater treatment is a major conduit of chemical emissions to the environment. The so-called ready biodegradability tests are designed to be stringent, with low bacterial concentrations and a high concentration of the test chemical as the only potential source of carbon and energy. The assumption is that chemicals that show rapid biodegradation under these unfavourable conditions will always be degraded rapidly under environmental conditions. Biodegradation is determined as conversion to CO2 (mineralisation), either by directly measuring the CO2 produced, the consumption of oxygen, or the removal of dissolved organic carbon, as mineralisation is the most desirable outcome of biodegradation. The results that have to be achieved for a chemical to be considered readily biodegradable vary slightly depending on the test; as an example, in the OECD 301D test (OECD 1992a) the consumption of oxygen should reach 60% of that theoretically required for complete mineralisation within 28 days (a worked example of this criterion follows Table 1).

Table 1. The OECD biodegradability tests

OECD test guideline | Parameter measured | Reference
Ready biodegradability tests:
301A: DOC die-away test | DOC | OECD 1992a
301B: CO2 evolution test | CO2 | OECD 1992a
301C: Modified MITI(I) test | O2 | OECD 1992a
301D: Closed bottle test | O2 | OECD 1992a
301E: Modified OECD screening test | DOC | OECD 1992a
301F: Manometric respirometry test | O2 | OECD 1992a
306: Biodegradability in seawater | DOC | OECD 1992c
310: Ready biodegradability - CO2 in sealed vessels (headspace test) | CO2 | OECD 2014
Inherent biodegradability tests:
302A: Modified semi-continuous activated sludge (SCAS) test | DOC | OECD 1981b
302B: Zahn-Wellens test | DOC | OECD 1992b
302C: Modified MITI(II) test | O2 | OECD 2009
Simulation tests:
303A: Activated sludge units | DOC | OECD 2001
303B: Biofilms | DOC | OECD 2001
304A: Inherent biodegradability in soil | 14CO2 | OECD 1981a
307: Aerobic and anaerobic transformation in soil | 14CO2/CO2 | OECD 2002a
308: Aerobic and anaerobic transformation in aquatic sediment systems | 14CO2/CO2 | OECD 2002b
309: Aerobic mineralisation in surface water | 14CO2/CO2 | OECD 2004b
311: Anaerobic biodegradability of organic compounds in digested sludge, by measurement of gas production | CO2 and CH4 | OECD 2006
314: Simulation tests to assess the biodegradability of chemicals discharged in wastewater | Concentration of chemical, 14CO2/CO2 | OECD 2008a
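The oxygen-based pass criterion referred to above can be made concrete with a theoretical oxygen demand (ThOD) calculation. The sketch below uses aniline, a commonly used reference compound in ready biodegradability testing, and assumes that organic nitrogen ends up as ammonia (the convention without nitrification); the stoichiometry for a compound CcHhNnOo follows from the mineralisation reaction given in the comment:

```python
# Hedged sketch: ThOD for CcHhNnOo assuming mineralisation to CO2, H2O and NH3:
#   CcHhNnOo + (c + (h - 3n)/4 - o/2) O2 -> c CO2 + n NH3 + (h - 3n)/2 H2O
c, h, n, o = 6, 7, 1, 0                    # aniline, C6H5NH2
mw = 12.011 * c + 1.008 * h + 14.007 * n + 15.999 * o

mol_o2 = c + (h - 3 * n) / 4 - o / 2       # mol O2 per mol compound (= 7 here)
thod = mol_o2 * 2 * 15.999 / mw            # mg O2 per mg compound (~2.41)
print(f"ThOD(NH3) = {thod:.2f} mg O2/mg")
print(f"60% pass level = {0.6 * thod:.2f} mg O2/mg within 28 days")
```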
These test systems are widely applied for regulatory purposes, but they do have a number of issues. These include practical difficulties when applied to volatile or poorly soluble chemicals, but probably the most important is that for some chemicals the results can be highly variable. This is usually attributed to the source of the microorganisms used to inoculate the system. For many chemicals, there is wide variability in how quickly they are degraded by activated sludge from different WWTPs. This is probably the result of different exposure concentrations and exposure periods to the chemicals, and may also be caused by dependence on small populations of degrading microorganisms, which may not always be included in the sludge samples used in the tests. These issues are not dealt with in any systematic way in biodegradability testing. It has been suggested that a preliminary period of exposure to the chemicals to be tested would allow the sludge to adapt and might yield more reproducible test results. Further suggestions include using a higher, more environmentally relevant concentration of activated sludge as the inoculum.

Failure to meet the pass criteria in ready biodegradability tests does not necessarily mean that the chemical is persistent in the environment, since it is possible that slow biodegradation occurs. These chemicals may therefore be tested further in higher-tier tests, either for what is referred to as inherent biodegradability, in tests performed under more favourable conditions, or in simulation tests representing specific compartments, to determine whether biodegradation may contribute significantly to their removal. These tests are also standardised (see Table 1). Simulation tests are designed to represent environmental conditions in specific compartments, such as redox potential, pH, temperature, microbial community, concentration of test substance, and occurrence and concentration of other substrates.

The criteria used in classifying the biodegradability of chemicals depend on the regulatory context. Biodegradability tests can be used for different purposes; in the EU this includes three distinct purposes: classification and labelling, hazard/persistence assessment, and environmental risk assessment. Recently, regulatory emphasis has shifted to identifying hazardous chemicals, and therefore those chemicals that are less biodegradable and likely to persist in the environment. Examples of the criteria for classification as PBT (persistent, bioaccumulative and toxic) or vPvB (very persistent and very bioaccumulative) chemicals are shown in Table 2. As well as the results of standardised tests, other data such as environmental monitoring data or studies on the microbiology of biodegradation can also be taken into account in evaluations of environmental degradation, in a so-called weight-of-evidence approach.

Table 2. Criteria used to classify chemicals as PBT or vPvB (Van Leeuwen & Vermeire 2007)

Property | PBT criteria | vPvB criteria
Persistence | T1/2 > 60 days in marine water, or > 40 days in fresh/estuarine water, or > 180 days in marine sediment, or > 120 days in fresh/estuarine sediment, or > 120 days in soil | T1/2 > 60 days in marine, fresh or estuarine water, or > 180 days in marine, fresh or estuarine sediment, or > 180 days in soil
Bioaccumulation | BCF > 2000 L/kg | BCF > 5000 L/kg
Toxicity | NOEC < 0.01 mg/L for marine or freshwater organisms, or the substance is classified as carcinogenic, mutagenic or toxic for reproduction, or there is other evidence of chronic toxicity, according to Directive 67/548/EEC | (no toxicity criterion)

The results of biodegradability tests are sometimes also used to derive input data for environmental fate models (see the section on Multicompartment modeling).
It is, however, not always straightforward to translate data measured in what is sometimes a multi-compartment test system into degradation rates in individual compartments, as other processes (e.g. partitioning) need to be taken into account.

OECD, 1981a. OECD Guidelines for the Testing of Chemicals. Test No. 304A: Inherent Biodegradability in Soil.
OECD, 1981b. OECD Guidelines for the Testing of Chemicals. Test No. 302A: Inherent Biodegradability: Modified SCAS Test.
OECD, 1992a. OECD Guidelines for the Testing of Chemicals. Test No. 301: Ready Biodegradability.
OECD, 1992b. OECD Guidelines for the Testing of Chemicals. Test No. 302B: Inherent Biodegradability: Zahn-Wellens/EVPA Test.
OECD, 1992c. OECD Guidelines for the Testing of Chemicals. Test No. 306: Biodegradability in Seawater.
OECD, 2001. OECD Guidelines for the Testing of Chemicals. Test No. 303: Simulation Test - Aerobic Sewage Treatment - A: Activated Sludge Units; B: Biofilms.
OECD, 2002a. OECD Guidelines for the Testing of Chemicals. Test No. 307: Aerobic and Anaerobic Transformation in Soil.
OECD, 2002b. OECD Guidelines for the Testing of Chemicals. Test No. 308: Aerobic and Anaerobic Transformation in Aquatic Sediment Systems.
OECD, 2004a. OECD Guidelines for the Testing of Chemicals. Test No. 111: Hydrolysis as a Function of pH.
OECD, 2004b. OECD Guidelines for the Testing of Chemicals. Test No. 309: Aerobic Mineralisation in Surface Water - Simulation Biodegradation Test.
OECD, 2006. OECD Guidelines for the Testing of Chemicals. Test No. 311: Anaerobic Biodegradability of Organic Compounds in Digested Sludge: by Measurement of Gas Production.
OECD, 2008a. OECD Guidelines for the Testing of Chemicals. Test No. 314: Simulation Tests to Assess the Biodegradability of Chemicals Discharged in Wastewater.
OECD, 2008b. OECD Guidelines for the Testing of Chemicals. Test No. 316: Phototransformation of Chemicals in Water - Direct Photolysis.
OECD, 2009. OECD Guidelines for the Testing of Chemicals. Test No. 302C: Inherent Biodegradability: Modified MITI Test (II).
OECD, 2014. OECD Guidelines for the Testing of Chemicals. Test No. 310: Ready Biodegradability - CO2 in Sealed Vessels (Headspace Test).
Van Leeuwen, C.J., Vermeire, T.G. (2007). Risk Assessment of Chemicals: An Introduction (2nd ed.), Springer, ISBN 978-1-4020-6101-1.

The OECD has published extensive protocols that can be used to evaluate the degradability of chemicals in the environment. Which forms of abiotic (chemical) degradation do these include?
Biodegradation tests used to evaluate the environmental effects of chemicals often measure the mineralisation of the chemicals, for example by measuring the amount of carbon dioxide produced. Explain why this is preferred to measuring the removal of the original chemical (primary biodegradation).
Assessment of the biodegradability of chemicals often follows a tiered system in which they are first screened for their ready biodegradability before undergoing more extensive testing to assess their inherent biodegradability or biodegradation in specific environmental compartments using simulation tests. Explain why this approach is applied.
4.7: Modelling exposure
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/04%3A_Toxicology/4.07%3A_Modelling_exposure
In preparation

Authors: Dik van de Meent and Michael Matthies
Reviewer: John Parsons
Learning objectives: You should be able to
Keywords: mass balance equation, environmental fate model

Multicompartment (or multimedia) mass balance modeling starts from the universal conservation principle, formulated as a balance equation. The governing principle is that the rate of change (of any entity, in any system) equals the difference between the sum of all inputs (of that entity) to the system and the sum of all outputs from it. Environmental modelers use the balance equation to predict exposure concentrations of chemicals in the environment by deduction from knowledge of the rates of input and output processes. This is most easily understood by considering the mass balance equation for one single environmental compartment:

\(\frac{dm_{i,j}}{dt} = input_{i,j} - output_{i,j}\)   (eq. 1)

where dmi,j/dt represents the change of the mass of chemical i in compartment j (kg) over time (s), and inputi,j and outputi,j denote the rates of input and output of chemical to and from compartment j, respectively.

In multimedia mass balance modeling, mass balance equations of the type of equation 1 are formulated for each environmental compartment. Outflows of chemical from the compartments are often proportional to the amounts of chemical present in the compartments, while external inputs (emissions) may often be assumed constant. In such cases, i.e. when first-order kinetics apply (see section 3.3 on Environmental fate of chemicals), mass balance equations take the form of equation 1 in section 3.3. For one compartment (e.g. a lake) only:

\(\frac{dm}{dt} = I - k \cdot m\)   (eq. 2)

in which dm/dt (kg.s-1) is the rate of change of the mass (kg) of chemical in the lake, I (kg.s-1) is the (constant) emission rate, and the product k.m (kg.s-1) denotes the first-order loss rate of the chemical from the lake. It is obvious that eventually a steady state must develop, in which the mass of chemical in the lake reaches a predictable maximum:

\(m_{\infty} = \frac{I}{k}\)   (eq. 3)

When the input rate (emission) is constant, i.e. does not vary with time and is independent of the mass of chemical present, the mass of chemical in the system is expected to increase exponentially, from its initial value at t = 0 towards a steady level as t → ∞, following \(m(t) = \frac{I}{k}\left(1 - e^{-kt}\right)\) for m(0) = 0. According to equation 3, a final mass level equal to I/k is to be expected.

The prefix 'multi' indicates that generally (many) more than one environmental compartment is considered. The Unit World (see below) contains air, water, biota, sediment and soil; more advanced global modeling systems may use hundreds of compartments. The case of three compartments (typically one air, one water and one soil compartment) can be worked out schematically: chemical can be exported from each compartment by degradation or advective outflow, as in the one-compartment model, and in addition chemical can be transported between compartments (simultaneous import-export). All mass flows are characterized by (pseudo) first-order rate constants (see section 3.3 on Environmental fate processes). The three mass balance equations eventually balance to zero at infinite time:

\(0 = I_i + \sum_{j \neq i} k_{j \to i}\, m_j^* - \left(k_{i,loss} + \sum_{j \neq i} k_{i \to j}\right) m_i^*\)   (eq. 4)

where mi* denotes the mass in compartment i at steady state. Sets of n linear equations with n unknowns can be solved algebraically, by manually manipulating equations 4, until clean expressions for each of the three mi values are obtained, which inevitably becomes tedious as soon as more than two mass balance equations are to be solved - this did not withhold one of Prof. Mackay's most famous PhD students from successfully solving a set of 14 equations!
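A minimal numerical sketch of equations 2 and 3 (emission rate and loss rate constant are invented values, not data for any real lake) makes the exponential approach to the steady-state mass concrete:

```python
import math

# Hedged sketch: one-compartment mass balance dm/dt = I - k*m, m(0) = 0,
# with the analytic solution m(t) = (I/k) * (1 - exp(-k*t)).
# I and k are invented illustration values.
I, k = 5.0, 0.1                      # kg/d, 1/d

for t in (0, 10, 20, 30, 60):        # days
    m = (I / k) * (1.0 - math.exp(-k * t))
    print(f"t = {t:3d} d: m = {m:6.2f} kg (steady state I/k = {I / k:.0f} kg)")
```

After roughly 3/k time units the system is at about 95% of the steady-state mass I/k, a handy rule of thumb for the time to steady state.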
An easier way of solving sets of n linear equations with n unknowns is by means of linear algebra. Using linear algebraic vector-matrix calculus, equations 4 can be rewritten into one linear-algebraic equation:

\(\mathbf{A}\,\vec{m} + \vec{e} = \vec{0}\)   (eq. 5)

in which \(\vec{m}\) is the vector of masses in the three compartments, \(\mathbf{A}\) is the model matrix of known rate constants and \(\vec{e}\) is the vector of known emission rates. The solution of equation 5 is

\(\vec{m}^* = -\mathbf{A}^{-1}\,\vec{e}\)

in which \(\vec{m}^*\) is the vector of masses at steady state and \(\mathbf{A}^{-1}\) is the inverse of the model matrix \(\mathbf{A}\). The linear algebraic method of solving linear mass balance equations is easily carried out with spreadsheet software (such as MS Excel, LibreOffice Calc or Google Sheets), which contains built-in array functions for inverting matrices and multiplying them by vectors; a short numerical sketch is given after the discussion of the modeling levels below.

In the late 1970s, pioneering environmental scientists at the USEPA Environmental Research Laboratory in Athens, GA, recognized that the universal (mass) balance equation, applied to compartments of environmental media (air, water, biota, sediment, soil), could serve as a means to analyze and understand differences in the environmental behavior and fate of chemicals. Their 'evaluative Unit World Modeling' (Baughman and Lassiter, 1978; Neely and Blau, 1985) was the start of what is now known as multimedia mass balance modeling. The Unit World concept was further developed and polished by Mackay and co-workers (Neely and Mackay, 1982; Mackay and Paterson, 1982; Mackay et al., 1985; Paterson and Mackay, 1985, 1989). In Unit World modeling, the environment is viewed as a set of well-mixed chemical reactors, each representing one environmental medium (compartment), to and from which chemical flows, driven by 'departure from equilibrium' - chemical technology jargon expressing the degree to which thermodynamic equilibrium properties such as 'chemical potential' or 'fugacity' differ. Mackay and co-workers used fugacity as the central state variable in mass balance modeling. Soon after publication of this 'fugacity approach' (Mackay, 1991), the term 'fugacity model' became widely used for all models of the 'Mackay type' that applied Unit World mass balance modeling, even though most of these models kept using the more traditional chemical mass as a state variable.

While conceptually simple (environmental fate is like a leaking bucket, in the sense that its steady-state water height is predictable from first-order kinetics), the dynamic character of mass balance modeling is often not so intuitive. The abstract mathematical perspective may explain mass balance modeling best, but this may not be practical for all students. In his book about multimedia mass balance modeling, Mackay chose to teach his students the intuitive approach, by means of his famous water tank analogy.

According to this intuitive approach, mass balance modeling can be done at levels of increasing complexity, where the lowest, simplest level that serves the purpose should be regarded as the most suitable. The least complex is level I, which assumes no input and output. A chemical can freely (i.e. without restriction) flow from one environmental compartment to another, until it reaches its state of lowest energy: the state of thermodynamic equilibrium. In this state, the chemical has equal chemical potential and fugacity in all environmental media. The system is at rest; in the hydraulic analogy, water has equal levels in all tanks.
This is the lowest level of model complexity, because this model only requires knowledge of a few thermodynamic equilibrium constants, which can be reasoned from basic physical substance properties.

The more complex modeling level III describes an environment in which the flow of chemical between compartments experiences flow resistance, so that a steady state of balance between outputs and inputs is reached only at the cost of permanent 'departure from equilibrium'. Degradation in all compartments and advective flows, e.g. rainfall or wind and water currents, are also considered. The steady state of level III is one in which the fugacities of the chemical in the compartments are unequal (no thermodynamic equilibrium); in the hydraulic analogy, the water in the tanks rests at different heights. Naturally, solving modeling level III requires detailed knowledge of the inputs (into which compartment(s) is the chemical emitted?), the outputs (at what rates is the chemical degraded in the various compartments?) and the transfer resistances (how rapid or slow is the mass transfer between the various compartments?). Level III modelers are rewarded for this by obtaining more realistic model results.

The fourth level of multimedia mass balance modeling (level IV) produces transient (time-dependent) solutions. Model simulations start (t = 0) with zero chemical (m = 0; empty water tanks). Compartments (tanks) fill up gradually until the system comes to a steady state, in which generally one or more compartments depart from equilibrium, as in level III modeling. Level IV is the most realistic representation of the environmental fate of chemicals, but requires the most detailed knowledge of mass flows and mass transfer resistances. Moreover, time-varying states are least easy to interpret and not always most informative of chemical fate. The most important piece of information to be gained from level IV modeling is the indication of the time to steady state: how long does it take to clear persistent chemicals that are no longer used from the environment?

Mackay describes an intermediate level of complexity (level II), in which outputs (degradation, advective outflows) balance inputs (as in level III), and chemical is allowed to flow freely between compartments (as in level I). A steady state develops in level II and there is thermodynamic equilibrium at all times. Modeling at level II does not require knowledge of mass transfer resistances (other than that resistances are negligible!), but degradation and outflow rates increase the model complexity compared to that of level I. In many situations, level II modeling yields surprisingly realistic results.
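As promised above, the matrix solution of equation 5 can be sketched for a hypothetical three-compartment (air, water, soil) level III-type system. All rate constants and emission rates below are invented for illustration; they are not recommended values for any real chemical or region:

```python
import numpy as np

# Hedged sketch: steady-state solution of dm/dt = A m + e = 0, i.e.
# m* = -A^{-1} e (eq. 5). Compartments: 0 = air, 1 = water, 2 = soil.
k_loss = np.array([0.05, 0.01, 0.001])   # degradation + advective loss (1/d)
k_tr = np.array([                        # transfer i -> j (1/d), invented
    [0.0,   0.02,  0.01],                # air   -> water, soil
    [0.005, 0.0,   0.0 ],                # water -> air
    [0.001, 0.002, 0.0 ]])               # soil  -> air, water
e = np.array([10.0, 1.0, 0.0])           # emission rates (kg/d)

# A[i, j] = k_tr[j, i] for i != j; diagonal = -(loss + total outgoing transfer)
A = k_tr.T - np.diag(k_loss + k_tr.sum(axis=1))
m_star = np.linalg.solve(A, -e)          # steady-state masses (kg)
print("steady-state masses (air, water, soil):", np.round(m_star, 1))
```

Using np.linalg.solve is numerically preferable to explicitly inverting A, but it computes exactly the m* = -A⁻¹e of equation 5; the spreadsheet matrix functions mentioned above do the same job.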
Soon after publication of the first use of 'evaluative Unit World modeling' (Mackay and Paterson, 1982), specific applications of the 'Mackay approach' to multimedia mass balance modeling started to appear. The Mackay group published several models for the evaluation of chemicals in Canada, of which ChemCAN (Mackay et al., 1995) is known best. Even before ChemCAN, the Californian model CalTOX (McKone, 1993) and the Dutch model SimpleBox (Van de Meent, 1993) came out, followed by publication of the model HAZCHEM by the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC, 1994) and the German Umweltbundesamt's model ELPOS (Beyer and Matthies, 2002). Essentially, all these models serve the very same purpose as the original Unit World model, namely providing standardized modeling platforms for evaluating the possible environmental risks of the societal use of chemical substances.

Multimedia mass balance models have become essential tools in regulatory environmental decision making about chemical substances. In Europe, chemical substances can be registered for marketing under the REACH regulation only when it is demonstrated that the chemical can be used safely. Multimedia mass balance modeling with SimpleBox (Hollander et al., 2014) and SimpleTreat (Struijs et al., 2016) plays an important role in registration.

While early multimedia mass balance models all followed in the footsteps of Mackay's Unit World concept (taking the steady-state approach and using one compartment per environmental medium), later models became larger and spatially and temporally explicit, and were used for in-depth analysis of chemical fate. In the late 1990s, Wania and co-workers developed a Global Distribution Model for Persistent Organic Pollutants (GloboPOP). They used their global multimedia mass balance model to explore the so-called cold condensation effect, by which they explained the occurrence of relatively large amounts of persistent organic chemicals in the Arctic, where no one had ever used them (Wania, 1999). Scheringer and co-workers used their CliMoChem model to investigate long-range transport of persistent chemicals into Alpine regions (Scheringer, 1996; Wegmann et al., 2005). MacLeod and co-workers (Toose et al., 2004) constructed a global multimedia mass balance model (BETR World) to study long-range, global transport of pollutants.

Baughman, G.L., Lassiter, R. (1978). Predictions of environmental pollutant concentrations. In: Estimating the Hazard of Chemical Substances to Aquatic Life. ASTM STP 657, pp. 35-54.
Beyer, A., Matthies, M. (2002). Criteria for Atmospheric Long-range Transport Potential and Persistence of Pesticides and Industrial Chemicals. Umweltbundesamt Berichte 7/2002, E. Schmidt-Verlag, Berlin. ISBN 3-503-06685-3.
ECETOC (1994). HAZCHEM, A Mathematical Model for Use in Risk Assessment of Substances. European Centre for Ecotoxicology and Toxicology of Chemicals, Brussels.
Hollander, A., Schoorl, M., Van de Meent, D. SimpleBox 4.0: Improving the model, while keeping it simple... Chemosphere 148, 99-107.
Mackay, D. (1991). Multimedia Environmental Fate Models: The Fugacity Approach. Lewis Publishers, Chelsea, MI.
Mackay, D., Paterson, S. (1982). Calculating fugacity. Environmental Science and Technology 16, 274-278.
Mackay, D., Paterson, S., Cheung, B., Neely, W.B. (1985). Evaluating the environmental behaviour of chemicals with a level III fugacity model. Chemosphere 14, 335-374.
Mackay, D., Paterson, S., Tam, D.D., Di Guardo, A., Kane, D. ChemCAN: A regional Level III fugacity model for assessing chemical fate in Canada. Environmental Toxicology and Chemistry 15, 1638-1648.
McKone, T.E. (1993). CALTOX, A Multimedia Total-Exposure Model for Hazardous Waste Sites. Lawrence Livermore National Laboratory, Livermore, CA.
Neely, W.B., Blau, G.E. (1985). Introduction to Exposure from Chemicals. In: Neely, W.B., Blau, G.E. (Eds.), Environmental Exposure from Chemicals Volume I, CRC Press, Boca Raton, FL, pp. 1-10.
Neely, W.B., Mackay, D. (1982). Evaluative model for estimating environmental fate. In: Modeling the Fate of Chemicals in the Aquatic Environment. Ann Arbor Science, Ann Arbor, MI, pp. 127-144.
Paterson, S. (1985). Equilibrium models for the initial integration of physical and chemical properties.
In: Neely, W.B., Blau, G.E. (Eds.), Environmental Exposure from Chemicals Volume I, CRC Press, Boca Raton, FL, pp. 218-231.
Paterson, S., Mackay, D. (1989). A model illustrating the environmental fate, exposure and human uptake of persistent organic chemicals. Ecological Modelling 47, 85-114.
Scheringer, M. (1996). Persistence and spatial range as endpoints of an exposure-based assessment of organic chemicals. Environmental Science and Technology 30, 1652-1659.
Struijs, J., Van de Meent, D., Schowanek, D., Buchholz, H., Patoux, R., Wolf, T., Austin, T., Tolls, J., Van Leeuwen, K., Galay-Burgos, M. (2016). Adapting SimpleTreat for simulating behaviour of chemical substances during industrial sewage treatment. Chemosphere 159, 619-627.
Toose, L., Woodfine, D.G., MacLeod, M., Mackay, D., Gouin, J. (2004). BETR-World: a geographically explicit model of chemical fate: application to transport of alpha-HCH to the Arctic. Environmental Pollution 128, 223-240.
Van de Meent, D. (1993). SimpleBox: A Generic Multimedia Fate Evaluation Model. National Institute for Public Health and the Environment, RIVM Report 672720 001, Bilthoven, NL.
Van de Meent, D., McKone, T.E., Parkerton, T., Matthies, M., Scheringer, M., Wania, F., Purdy, R., Bennett, D. (2000). Persistence and transport potential of chemicals in a multimedia environment. In: Klecka, G. et al. (Eds.), Evaluation of Persistence and Long-Range Transport Potential of Organic Chemicals in the Environment. SETAC Press, Pensacola, FL, Chapter 5, pp. 169-204.
Van de Meent, D., Hollander, A., Peijnenburg, W., Breure, T. (2011). Fate and transport of contaminants. In: Sánchez-Bayo, F., Van den Brink, P.J., Mann, R.M. (Eds.), Ecological Impacts of Toxic Chemicals, Bentham Science Publishers, pp. 13-42.
Wania, F. (1999). On the origin of elevated levels of persistent chemicals in the environment. Environmental Science and Pollution Research 6, 11-19.
Wegmann, F., Scheringer, M., Hungerbühler, K. (2005). First investigations of mountainous cold condensation effects with the CliMoChem model. Ecotoxicology and Environmental Safety 63, 42-51.

Describe, using your own words, the essential characteristics of mass balance equations.
What is "steady state"? What is "equilibrium"? Use a few lines of text to describe the essentials, indicating differences and commonalities.
Mass balance equations are used in models to calculate concentrations of substances in the environment, given knowledge of the rates of emission. Give a worked-out example for a one-compartment situation, e.g. a fresh-water lake.
Name and describe one (or more) example(s) of multimedia mass balance modeling.

Authors: Wilko Verweij
Reviewers: John Parsons, Stephen Lofts
Learning objectives: You should be able to
Keywords: speciation modeling, solubility, organic complexation

Speciation models allow users to calculate the speciation of a solution, rather than measuring it chemically or assessing it indirectly using bioassays (see section 3.5). As a rule, speciation models take total concentrations as input and calculate species concentrations.

Speciation models use thermodynamic data on chemical equilibria to calculate the speciation. These data, expressed as free energies or as equilibrium constants, can be found in the literature. The term 'constant' is slightly misleading, as equilibrium constants depend on the temperature and ionic strength of the solution. The ionic strength is calculated from the concentrations (C) and charges (Z) of the ions in solution using the equation:

\(I = \frac{1}{2} \sum_i C_i Z_i^2\)

For many equilibria, no information is available to correct for temperature.
To correct for ionic strength, many semi-empirical methods are available, none of which is perfect.

For each equilibrium reaction, an equilibrium constant can be defined. For example, for the reaction

Cu2+ + 4 Cl- ⇌ CuCl42-

the equilibrium constant can be defined as

\(\beta = \frac{[CuCl_4^{2-}]}{[Cu^{2+}]\,[Cl^-]^4}\)

Consequently, when the concentrations of free Cu2+ and free Cl- are known, the concentration of CuCl42- can easily be calculated as:

[CuCl42-] = β * [Cu2+] * [Cl-]4

In fact, the concentrations of free Cu2+ and free Cl- are often NOT known; what is known are the total concentrations of Cu and Cl in the system. In order to find the speciation, a set of mass balance equations needs to be set up, for example:

[total Cu] = [free Cu2+] + [CuOH+] + [Cu(OH)2] + [Cu(OH)3-] (..) + [CuCl+] + [CuCl2] (..) etc.
[total Cl] = (..)

Each concentration of a complex is a function of the free concentrations of the ions that make it up. So if we know the concentrations of all the free ions, we can calculate the concentrations of all the complexes, and then the total concentrations. A solution to the problem cannot be found by simply rearranging the mass balance equations, because they are non-linear. What a speciation model does is repeatedly estimate the free ion concentrations, on each loop adjusting them so that the calculated total concentrations more closely match the known totals. When the calculated and known total concentrations all agree to within a defined precision, the speciation has been calculated. The critical part of the calculation is adjusting the free ion concentrations in a sensible and efficient way, to find the solution as quickly as possible. Several more or less sophisticated methods are available for this, but usually a Newton-Raphson method is applied.
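A minimal sketch of such an iterative calculation, for the Cu/Cl example above reduced to the single complex CuCl42- and with activity corrections ignored; the stability constant and total concentrations are invented for illustration:

```python
import numpy as np

# Hedged sketch of a Newton-Raphson speciation solver for
#   Cu2+ + 4 Cl- <=> CuCl4^2-, with invented log_beta and totals.
log_beta = 5.0
totals_known = np.array([1e-4, 1e-2])        # total Cu, total Cl (mol/L)

def calc_totals(log_free):
    """Total Cu and Cl for given log10 of the free [Cu2+] and [Cl-]."""
    cu, cl = 10.0 ** log_free
    cucl4 = 10.0 ** log_beta * cu * cl ** 4  # [CuCl4^2-] = beta [Cu][Cl]^4
    return np.array([cu + cucl4, cl + 4.0 * cucl4])

log_free = np.log10(totals_known)            # initial guess: everything free
for _ in range(50):
    residual = calc_totals(log_free) - totals_known
    if np.all(np.abs(residual) < 1e-10 * totals_known):
        break                                # mass balances satisfied
    jac = np.empty((2, 2))                   # numerical Jacobian
    for j in range(2):
        step = np.zeros(2); step[j] = 1e-6
        jac[:, j] = (calc_totals(log_free + step)
                     - calc_totals(log_free - step)) / 2e-6
    log_free -= np.linalg.solve(jac, residual)   # Newton-Raphson update

cu_free, cl_free = 10.0 ** log_free
print(f"free Cu2+ = {cu_free:.4e} M, free Cl- = {cl_free:.4e} M")
```

Iterating on the logarithms of the free concentrations keeps the estimates positive, which is one common way real speciation programs stabilise the iteration.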
In fact, the explanation above is too simple. Equilibrium constants are valid under specific conditions of temperature and ionic strength (for example the standard conditions of 25°C and zero ionic strength) and need to be converted to the temperature and ionic strength of the system for which the speciation is being calculated. It is possible to adapt the equilibrium constants to non-standard temperatures, but this requires knowledge of the reaction enthalpy (ΔH) of each equilibrium. That knowledge is often not available. Constants can be converted from 25°C to other temperatures using the Van 't Hoff equation:

\(\ln \frac{K_2}{K_1} = -\frac{\Delta H}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right)\)

where K1 and K2 are the constants, T1 and T2 the temperatures, ΔH is the enthalpy of the reaction and R is the gas constant.

Equilibrium constants are also valid for one specific value of the ionic strength. For conversion from one value of ionic strength to another, many different approaches may be used. This conversion is quite important, because already at relatively low ionic strengths deviations from ideality become significant, and the activity of a species starts to deviate from its concentration. Hence, the intrinsic, or thermodynamic, equilibrium constants (i.e. constants at a hypothetical ionic strength of zero) are no longer valid, and the activity a of ions at non-zero ionic strength needs to be calculated from the concentration and the activity coefficient:

a = γ * c

where γ is the activity coefficient (dimensionless; sometimes also called f) and c is the concentration; a and c are in mol/liter.

The first method to calculate activity coefficients at non-zero ionic strength was proposed by Debye and Hückel in 1923. The Debye-Hückel theory assumes ions are point charges, so it takes into account neither the volume that these ions occupy nor the volume of the shell of ligands and/or water molecules around them. The Debye-Hückel equation gives good approximations up to circa 0.01 M for a 1:1-electrolyte, but only up to circa 0.001 M for a 2:2-electrolyte. When the ionic strength exceeds these values, the activity coefficients predicted by the Debye-Hückel approximation deviate significantly from experimental values. Many environmental applications require conversions for higher ionic strengths, making the Debye-Hückel equation insufficient. To overcome this problem, many researchers have suggested other methods, like the extended Debye-Hückel equation, the Güntelberg equation and the Davies equation, but also the Bromley equation, the Pitzer equation and the Specific Ion Interaction Theory (SIT).

Many programs use the Davies equation, which calculates the activity coefficient γ as follows:

\(\log \gamma = -A z^2 \left( \frac{\sqrt{I}}{1 + \sqrt{I}} - 0.3\, I \right)\)

where z is the charge of the species, I the ionic strength, and A the Debye-Hückel constant (approximately 0.51 for water at 25°C). Sometimes 0.2 instead of 0.3 is used. Basically all these approaches take the Debye-Hückel equation as a starting point and add one or more terms to correct for deviations at higher ionic strengths. Although many of these methods are able to predict the activity of ions fairly well, they are in fact mainly empirical extensions without a solid theoretical basis.
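A direct implementation of the Davies equation as written above, taking A = 0.509 for water at 25°C (the variant with the 0.3 term):

```python
import math

def davies_log_gamma(z, ionic_strength, A=0.509, b=0.3):
    """log10 of the activity coefficient according to the Davies equation."""
    sqrt_i = math.sqrt(ionic_strength)
    return -A * z ** 2 * (sqrt_i / (1.0 + sqrt_i) - b * ionic_strength)

for I in (0.001, 0.01, 0.1, 0.5):
    g1 = 10 ** davies_log_gamma(1, I)    # mono-valent ion
    g2 = 10 ** davies_log_gamma(2, I)    # di-valent ion
    print(f"I = {I:5.3f} M: gamma(z=1) = {g1:.3f}, gamma(z=2) = {g2:.3f}")
```

The quadratic dependence on z shows why activity corrections matter much more for di- and trivalent ions than for monovalent ones.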
Most salts have a limited solubility; in several cases the solubility is also important under conditions that occur in the environment. For instance, for CaCO3 the solubility product is 10-8.48, which means that when [Ca2+] * [CO32-] > 10-8.48, CaCO3 will precipitate, until [Ca2+] * [CO32-] = 10-8.48. But it also works the other way around: if solid CaCO3 is present in a solution where [Ca2+] * [CO32-] < 10-8.48 (note the '<'-sign), solid CaCO3 will dissolve, until [Ca2+] * [CO32-] = 10-8.48. Note that Ca and CO3 in these formulas refer to the free ions. For example, a 10-13 M solution of Ag2S will lead to precipitation of Ag2S. The free concentrations of Ag and S are 6.5*10-15 M and 1.8*10-22 M respectively (which corresponds with the solubility product of 10-50.12), but the dissolved concentrations of Ag and S are 7.1*10-15 M and 3.6*10-15 M respectively - for S seven orders of magnitude higher than the free concentration. This is caused by the formation of S-complexes with protons (HS- and H2S (aq)) and, to a lesser extent, with Ag.

Complexation with Dissolved Organic Carbon (DOC) is different from inorganic complexation or complexation with well-defined compounds such as acetate or NTA: DOC is a heterogeneous mixture of molecules whose exact composition is unknown and variable, it carries a variable electrical charge, and it offers a continuous range of binding sites rather than a set of identical, well-defined ones.

Among the most popular models to assess organic complexation are Model V, VI and VII, also known as WHAM, written by Tipping and co-authors (Tipping & Hurley, 1992; Tipping, 1994, 1998; Tipping, Lofts & Sonke, 2011). All these models assume that two types of binding occur: specific binding and accumulation in the diffuse double layer. Specific binding is the formation of a chemical bond between an ion and a functional group (or groups) on the organic molecule. Diffuse double layer accumulation is the accumulation of ions of opposite electrical charge adjacent to the molecule, without formation of a chemical bond (the charge of the molecule is usually negative, so the ions that accumulate are usually cations).

For specific binding, all these models distinguish fulvic acids (FA) and humic acids (HA), which are treated separately. These two classes of DOC are typically the most abundant components of natural organic matter in the environment - in surface freshwaters, the fulvic acids are typically the most abundant. For each class, eight different discrete binding sites are used in the model. The sites have a range of acid-base properties. Metals bind to these sites, either to one site alone (monodentate), to two sites (bidentate) or, starting with Model VI, to three (tridentate). A fraction of the sites is allowed to form bidentate complexes. Starting with Model VI, for each bidentate and tridentate group three sub-groups are assumed to be present; this further increases the range of metal binding strengths.

Binding constants depend on ionic strength and electrostatic interactions. Conditional constants are calculated in the same way in Models V, VI and VII: an intrinsic binding constant is combined with an electrostatic correction term, so that the conditional constant depends on the charge on the organic acids as well as on the ionic strength. For the binding of metals, the calculation of the conditional constant occurs in a similar way.

The diffuse double layer is usually negatively charged, so it is usually populated by cations, in order to maintain electric neutrality. Calculations for the diffuse double layer are the same in Model V, Model VI and Model VII. The volume of the diffuse double layer is calculated separately for each type of acid, from the size of the acid molecules and the ionic-strength-dependent thickness of the layer. Simply applying this calculation in situations of low ionic strength and high content of organic acid would lead to artifacts (the calculated volume of the diffuse layer can exceed 1 liter per liter of solution). Therefore, some "tricks" are implemented to limit the volume of the diffuse double layer to 25% of the total volume.

When the acid has a negative charge (as it has in most cases), positive and neutral species are allowed to enter the diffuse double layer, just enough to make the diffuse double layer electrically neutral. When the acid has a positive charge, negative and neutral species are present.

The concentration of a species in the diffuse double layer is assumed to depend on its concentration in the bulk solution and on its charge; in formula form, \(c_{DDL} = R^{|z|} \cdot c_{bulk}\), where R is calculated iteratively, to ensure that the diffuse double layer is electrically neutral.

Speciation models can be used for many purposes. Basically, two groups of applications can be distinguished. The first group consists of applications meant to understand the chemical behaviour of any system. The second group focuses on bioavailability.

Speciation models can be helpful in understanding chemical behaviour in either laboratory or field situations. For instance, if you want to add EDTA to a solution to prevent metals from precipitating, the choice of the EDTA salt also determines the pH of the final solution: for a 1 mM EDTA solution, the pH differs strongly between the different EDTA salts, and if you want to end up with a near-neutral solution, the best choice is to add EDTA as the Na3HEDTA salt. Adding a different salt requires adding either acid or base, or more buffer capacity, which in turn will influence the chemical behaviour of the solution.

If you have field measurements of redox potential, speciation models can help to predict whether iron will be present as Fe(II) or Fe(III), which is important because Fe(II) behaves quite differently chemically from Fe(III) and also has a quite different bioavailability.
Phase reactions can be predicted with speciation models, for example the dissolution of carbonate due to the gas-solution reaction of CO2. Another example is the speciation in Dutch Standard Water (DSW), a frequently used test medium for ecotoxicological experiments, which is oversaturated with respect to CaCO3 and therefore displays part of its Ca as a precipitate. The fraction that precipitates is very small (less than 2% of the Ca), so it seems unimportant at first glance, but the precipitate induces a pH shift of 0.22, a factor of almost two in the concentration of free H+.

Many metals are amphoteric and therefore have a minimum solubility at a moderate pH, while dissolving more at both higher and lower pH values. This can easily be seen in the case of Al: the concentration of dissolved Al as a function of pH (on a log scale) passes through a minimum around pH 6.2, while at higher and lower pH values the solubility is (much) higher.

Speciation models can also help to understand differences in the growth of organisms, or adverse effects on organisms, in different chemical solutions. For example, changes in the speciation of boron can be expected only between roughly pH 8 and 10.5, so when you observe a biological difference between pH 7 and 8, it is not likely that boron is the cause. Copper, on the other hand, does display differences in speciation between pH 7 and 8 and is therefore a more likely cause of different biological behaviour.

In field situations, the chemistry is usually much more complex than under laboratory conditions. Decomposition of organisms (including plants) results in a huge variety of organic compounds like fulvic acids, humic acids, proteins, amino acids, carbohydrates, etc. Many of these compounds interact strongly with cations, some also with anions or uncharged molecules. In addition, metals easily adsorb to clay and sand particles that are found everywhere in nature. To make it more complex, suspended matter can contain a high content of organic material which is also capable of binding cations.

For complexation by fulvic and humic acids, Tipping and co-workers have developed a unifying model (Tipping & Hurley, 1992; Tipping, 1994, 1998; Tipping, Lofts & Sonke, 2011). The most recent version, WHAM 7 (Tipping, Lofts & Sonke, 2011), is able to predict cation complexation by fulvic acids and humic acids over a wide range of chemical circumstances, despite the large difference in composition of these acids. This model is now incorporated in several speciation programs.

Suspended matter may be of organic or of inorganic character. Inorganic matter usually consists of (hydr)oxides of metals, such as Mn, Fe, Al, Si or Ti, and clay minerals. In practice, the (hydr)oxides and clays occur together, but the mutual proportions may differ dramatically depending on the source. Since the chemical properties of these metal (hydr)oxides and clays are quite different, there is a huge variation in the chemical properties of inorganic suspended matter at different places and different times. As a consequence, modeling interactions between dissolved constituents and suspended inorganic matter is challenging. Only by measuring some properties of the suspended inorganic matter can modeling be applied successfully.
For suspended organic matter, the variation in properties is also large and modelling is equally challenging.

Speciation models are useful in understanding and assessing the bioavailability of metals and other elements in test media. Test media often contain substances like EDTA to keep metals in solution. EDTA complexes are in general not bioavailable, so in addition to keeping metals in solution they also change their bioavailability. Models can calculate the speciation and help you to assess what is actually happening in a test medium. An often forgotten aspect is the influence of CO2. CO2 from the ambient atmosphere can enter a solution, or carbonate in solution (if in excess over the equilibrium concentration) can escape to the atmosphere. The degree to which this exchange takes place influences the pH of the solution as well as the amount of carbonate that stays in solution (carbonates are often poorly soluble).

Similarly, in field situations models can help to understand the bioavailability of elements. As stated above, the influence of DOC can nowadays be assessed properly in many situations; the influence of suspended matter remains more difficult to assess. Nevertheless, models can deliver insights in seconds that otherwise could be obtained only with great difficulty.

There are many speciation programs available and several of them are freely available. Usually they take a set of total concentrations as input, plus information about parameters such as pH, redox potential, concentration of organic carbon, etc. The programs then calculate the speciation and present it to the user. The equations cannot be solved analytically, so an iterative procedure is required. Although different numerical approaches are used, most programs construct a set of non-linear mass balance equations and solve them by simple or advanced mathematics. A complication in this procedure is that the equilibrium constants depend on the ionic strength of the solution, while this ionic strength can only be calculated once the speciation is known. The same holds for the precipitation of solids. The procedure is shown in Figure 5.
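As a minimal illustration of this mass-balance approach, the sketch below solves a deliberately simple, hypothetical system (one metal M, one ligand L, one complex ML with stability constant K) for the free concentrations by iterative root finding. Real programs solve many coupled balances simultaneously and additionally iterate on ionic strength and activity corrections; all values here are assumptions for illustration:

```python
from scipy.optimize import brentq

K = 1e6        # L/mol, hypothetical stability constant of ML
M_tot = 1e-5   # mol/L, total metal concentration
L_tot = 2e-5   # mol/L, total ligand concentration

def mass_balance(M_free):
    """Residual of the metal mass balance M_tot = [M] + [ML].
    Given free [M], free [L] follows from the ligand balance L_tot = [L] + K[M][L]."""
    L_free = L_tot / (1.0 + K * M_free)
    ML = K * M_free * L_free
    return M_free + ML - M_tot

# Iteratively find the free metal concentration that closes the mass balance.
M_free = brentq(mass_balance, 1e-15, M_tot)
L_free = L_tot / (1.0 + K * M_free)
print(f"free M: {M_free:.2e} M, free L: {L_free:.2e} M, ML: {K * M_free * L_free:.2e} M")
```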
For modeling speciation, thermodynamic data are needed for all relevant equilibrium reactions. For many equilibria this information is available, but not for all, which hampers the usefulness of speciation modeling. In addition, there can be large variations in the thermodynamic values found in the literature, resulting in uncertainty about the correct value; a factor of 10 between the highest and lowest values found is not an exception. This of course influences the reliability of speciation calculations. For many equilibria, thermodynamic data are only available for the standard temperature of 25 °C and no information is available to assess the data at other temperatures, although the effect of temperature can be quite strong. Ionic strength also has a high impact on equilibrium 'constants'; there are many methods available to correct for the effect of ionic strength, but most of them are at best semi-empirical. Simonin (2017) recently proposed a method with a solid theoretical basis; however, the data required for his method are available only for a few complexes so far.

More fundamentally, you should realize that speciation programs typically calculate the equilibrium situation, while some reactions are very slow and, more importantly, nature is in fact a very dynamic system and therefore never in equilibrium. If a system is close to equilibrium, speciation programs can often make a good assessment of the actual situation, but the more dynamic a system is, the more care you should take in believing the programs' results. Nevertheless, it is good to realise that a chemical system will always move towards the equilibrium situation, while organisms may move it away from equilibrium. Phototrophs are able to move a system away from its equilibrium situation, whereas decomposers and heterotrophs generally help to move a system towards its equilibrium state.

Simonin, J.-P. (2017). Thermodynamic consistency in the modeling of speciation in self-complexing electrolytes. Ind. Eng. Chem. Res. 56, 9721-9733.
Tipping, E., Hurley, M.A. (1992). A unifying model of cation binding by humic substances. Geochimica et Cosmochimica Acta 56, 3627-3641.
Tipping, E. (1994). WHAM - A chemical equilibrium model and computer code for waters, sediments, and soils incorporating a discrete site/electrostatic model of ion-binding by humic substances. Computers & Geosciences 20, 973-1023.
Tipping, E. (1998). Humic Ion-Binding Model VI: An Improved Description of the Interactions of Protons and Metal Ions with Humic Substances. Aquatic Geochemistry 4, 3-48.
Tipping, E., Lofts, S., Sonke, J.E. (2011). Humic Ion-Binding Model VII: a revised parameterisation of cation-binding by humic substances. Environmental Chemistry 8, 228-235.
Stumm, W., Morgan, J.J. Aquatic Chemistry. John Wiley & Sons, New York.
Morel, F.M.M., Hering, J.G. Principles and Applications of Aquatic Chemistry. John Wiley & Sons, New York.

Briefly describe what speciation models are.
What are the factors determining speciation and how can these be accounted for?
Give two examples of situations where speciation modeling can be useful.
4.9: Section 4.4. Increasing ecological realism in toxicity testing
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/04%3A_Toxicology/4.09%3A_Section_4.4._Increasing_ecological_realism_in_toxicity_testing
Source: //maken.wikiwijs.nl/162480/4_4__Increasing_ecological_realism_in_toxicity_testing
4.10: Section 4.2. Toxicodynamics and Molecular Interactions
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/04%3A_Toxicology/4.10%3A_Section_4.2._Toxicodynamics_and_Molecular_Interactions
Source: //maken.wikiwijs.nl/162477/4_2__Toxicodynamics___Molecular_Interactions
5.2: Population ecotoxicology in laboratory settings
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/05%3A_Population_Community_and_Ecosystem_Ecotoxicology/5.02%3A_Population_ecotoxicology_in_laboratory_settings
Author: Michiel Kraak
Reviewers: Nico van den Brink and Matthias Liess
Learning objectives:
You should be able to
· motivate the importance of studying ecotoxicology at the population level.
· name the properties of populations unique to this level of biological organisation.
· explain the implications of age- and developmental-stage-specific sensitivities for population responses to toxicant exposure.
Key words: Population ecotoxicology, density, age structure, population growth rate

The motivation to study ecotoxicological effects at the population level is that generally the targets of environmental protection are indeed populations, communities and ecosystems. Additionally, several phenomena are unique to this level, including age-specific sensitivity and interactions between individuals. Compared to the individual level and below, the population level is characterized by a less direct link between chemical exposure and observed effects: individual variability and several feedback loops loosen the dose-response relationships. Research at the population level is thus characterized by an increasing level of uncertainty if these processes are not properly addressed, and by increasing time and effort. Hence, it is not surprising that effects at the population level are understudied. This is even more the case for investigations at higher levels like meta-populations, communities and ecosystems (see sections on meta-populations, communities and ecosystems). It is thus highly important to obtain data and insights into the mechanisms leading to effects at the population level, keeping in mind the relevant interactions with lower and higher levels of organisation.

Properties of populations are unique to this level of biological organization and include social structure (see section on invertebrate community ecotoxicology), genetic composition (see section on genetic variation), density and age structure. This allows for age- and developmental-stage-specific sensitivities to chemicals. For almost all species, young individuals like neonates or first instars are markedly more sensitive than adults or late-instar larvae. This difference may run up to three orders of magnitude, and consequently instar-specific sensitivities may vary as much as species-specific sensitivities. Population-developmental-stage-specific sensitivities have also been reported: exponentially growing daphnid populations exposed to the insecticide fenvalerate recovered much faster than populations that had reached carrying capacity (Pieters and Liess, 2006). Given these age- and developmental-stage-specific sensitivities, the timing of exposure to toxicants in relation to the critical life stage of the organism may seriously affect the extent of the adverse effects, especially in seasonally synchronised populations.

A challenging question in population ecotoxicology is when a population can be considered stable or in steady state. In spite of various types of oscillation, all the populations depicted in the figure can be considered stable; one could even argue that any population that does not go extinct can be considered stable. Hence, a single population can vary considerably in density over time, potentially strongly affecting the impact of exposure to toxicants.

When populations suffer from starvation and crowding due to high densities and intraspecific competition, they are markedly more sensitive to toxicants, sometimes even up to a factor of 100 (Liess et al., 2016). This may even lead to unforeseen, indirect effects. The relative population growth rate (individual/individual/day) of high-density populations of chironomids actually increased upon exposure to Cd, because Cd-induced mortality diminished the food shortage for the surviving larvae. Only at the highest Cd exposure did population growth rate decrease again. For populations at low densities, the anticipated decrease in population growth rate with increasing Cd concentrations was observed. Yet, at all Cd exposure levels the growth rate of low-density populations was markedly higher than that of high-density populations.

In chronic ecotoxicity studies, cohorts of individuals of the same size and age are preferably selected to minimize variation in the outcome of the test, whereas in population ecotoxicology the naturally heterogeneous population composition is taken into account. This does, however, make it harder to interpret the obtained experimental data. Especially when studying populations of higher organisms in the wild, the long life span of these organisms increases the time needed to complete the research and imposes practical limitations (see section on wildlife population ecotoxicology). In the laboratory, this can be circumvented by selecting test species with relatively short life cycles, like algae, bacteria and zooplankton. For algae, a three- or four-day test can be considered a multigeneration experiment, and during 21 days female daphnids may release up to three clutches of neonates. These population ecotoxicity tests offer the unique possibility to calculate the ultimate population parameter, the population growth rate (r). This is a demographic population parameter, integrating survival, maturity time and reproduction (see section on population modeling). Yet, such chronic experiments are typically performed with cohorts and not with natural populations, making them more an extension of chronic toxicity tests than true population ecotoxicity tests.
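As a simple illustration of estimating the population growth rate from such a test, a minimal sketch assuming exponential growth and hypothetical daphnid counts (not data from this section):

```python
import numpy as np

# Assuming exponential growth N(t) = N0 * exp(r t), r can be estimated by
# regressing ln(N) on time. Counts below are hypothetical example values.
t = np.array([0, 7, 14, 21])       # days
N = np.array([10, 24, 61, 148])    # individuals counted

r, ln_N0 = np.polyfit(t, np.log(N), 1)  # slope = r, intercept = ln(N0)
print(f"estimated r = {r:.3f} per day (doubling time {np.log(2) / r:.1f} days)")
```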
Knillmann, S., Stampfli, N.C., Beketov, M.A., Liess, M. (2012). Intraspecific competition increases toxicant effects in outdoor microcosms. Ecotoxicology 21, 1857-1866.
Liess, M., Foit, K., Knillmann, S., Schäfer, R.B., Liess, H.-D. (2016). Predicting the synergy of multiple stress effects. Scientific Reports 6, 32965.
Pieters, B.J., Liess, M. (2006). Population developmental stage determines the recovery potential of Daphnia magna populations after fenvalerate application. Environmental Science and Technology 40, 6157-6162.

Motivate the importance of studying ecotoxicology at the population level and higher.
Name the properties of populations that are unique to this level of biological organisation.
Why is it important to understand the implications of age and developmental stage specific sensitivities for population responses to toxicant exposure?
Explain the results observed in Figure 3.
5.3: Wildlife population ecotoxicology
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/05%3A_Population_Community_and_Ecosystem_Ecotoxicology/5.03%3A_Wildlife_population_ecotoxicology
Author: Nico van den Brink
Reviewers: Ansje Löhr, John Elliott
Learning objectives:
You should be able to
Keywords: Pharmaceuticals, uncertainty, population decline, retrospective monitoring

Historically, vulture populations in India, Pakistan and Nepal were too numerous to be effectively counted. In the mid-1990s, numbers in northern India started to decline catastrophically, as was evidenced in the Keoladeo National Park (Figure 1; Prakash, 1999). Further monitoring of population numbers indicated unprecedented declines of over 90-99% from the mid-1990s to the early 2000s for Oriental White-backed vultures (Gyps bengalensis), Long-billed vultures (Gyps indicus) and also Slender-billed vultures (Gyps tenuirostris) (Prakash, 1999).

In the following years, similar declines were observed in Pakistan and Nepal, indicating that the causative factor was not restricted to a specific country or area. Total losses of vultures were estimated to be in the order of tens of millions. The first ideas about potential causes of those declines focussed on known infectious diseases or the possibility of new diseases to which the vulture population had not been previously exposed. However, no diseases were identified that had shown similar rates of mortality in other bird species, and vultures are considered to have a highly developed immune response, given their diet of scavenging dead and often decaying animals. To obtain insights, initial interdisciplinary ecological studies were performed to provide a basic understanding of background mortality in the species affected. These studies started in large colonies in Pakistan, but were literally races against time, as some populations had already decreased by 50%, while others were already extirpated (Gilbert et al., 2006). Despite those difficulties it was determined that mortalities occurred principally in adult birds and not at the nestling phase. More in-depth studies were performed to discriminate the abnormal mortality from natural mortality, for instance of juvenile fledglings, which may be high in summer just after fledging. After scrutinising the data, no seasonality was observed in the abnormal, high mortality, indicating that it was not related to breeding activities. The investigations also revealed another important factor: these vultures were predominantly feeding on domestic livestock, while telemetric observations, using transmitters to assess flight and activity patterns of the birds, showed that individual birds could range over very long distances (up to over 100 km) to reach carcasses of livestock.

Since no apparent causes of mortality were identified in the ecological studies, more diagnostic investigations were started, focussing on infectious diseases and carried out in Pakistan (Oaks and Watson, 2011). However, that was easier said than done. Since large numbers of birds died, it was deemed essential to establish the logistics necessary to perform the diagnostics, including post-mortems, on all birds found dead. Although high numbers of birds died, hardly any fresh carcasses were available, due to the remoteness of some areas, the presence of other scavengers, and the often hot conditions that fostered rapid decay of carcasses. Post-mortems on a selection of birds revealed that birds suspected of abnormal mortality all suffered from visceral gout, a white pasty smear covering tissues in the body including the liver and heart. In birds, this is indicative of kidney failure.
Birds metabolise nitrogen into uric acid (mammals into urea), which is normally excreted with the faeces. In case of kidney failure, however, the uric acid is not excreted but deposited in the body. Further inspections of more birds confirmed this, and the working hypothesis became that the increased mortality was caused by a factor inducing kidney failure in the birds. Based on the establishment of kidney failure as the causative factor, histological and pathological studies were performed on several birds found dead. These revealed that in birds with visceral gout, kidney lesions were severe, with acute renal tubular necrosis (Oaks et al., 2004), confirming the kidney-failure hypothesis. However, there were no indications of inflammatory cell infiltrations, ruling out infectious diseases. Those observations shifted the focus to potential toxic effects, although no previous case was known of a chemical causing such severe and extremely acute effects. First the usual suspects for kidney failure were addressed, like trace metals (cadmium, lead), but also other acutely toxic chemicals like organophosphorus and carbamate pesticides and organochlorine chemicals. None of those chemicals occurred at levels of concern, so they were ruled out. That left the researchers without leads to any clear causative factor, even after years of study!

Some essential pieces of information were available, however:
1) acute renal failure seemed associated with the mortality;
2) no infectious agent was likely to be causative, pointing to chemical toxicity;
3) since exposure was likely to be via the diet, the chemical had to be related to livestock (the predominant diet of the vultures), pointing to compounds present in livestock such as veterinary products;
4) widespread use of veterinary chemicals had started relatively recently.

After a survey of veterinarians in the affected areas of Pakistan, a single veterinary pharmaceutical matched the criteria: diclofenac. This is a non-steroidal anti-inflammatory drug (NSAID), long used in human medicine but only introduced as a veterinary pharmaceutical in India, Pakistan and surrounding countries in the 1990s. NSAIDs are known nephrotoxic compounds, although no cases were known with such acute and severe impacts. Chemical analyses confirmed that kidneys of vultures with visceral gout contained diclofenac, while kidneys of birds without signs of visceral gout did not. Also kidneys from birds that showed visceral gout and that died in captivity while being studied were positive for diclofenac, as was the meat they had been fed. This all indicated diclofenac toxicity as the cause of the mortality, which was validated in exposure studies dosing captive vultures with diclofenac. The Gyps vulture species appeared extremely sensitive to diclofenac, showing toxic effects at 1% of the therapeutic dose for mammalian livestock species.

The underlying mechanism of that sensitivity has yet to be explained, but initially it was also unclear why the populations were impacted to such a severe extent. That was found to be related to the feeding ecology of the vultures. They were shown to fly long distances in search of carcasses, and as a result they show very aggregated feeding, i.e. many birds on a single carcass (Green et al., 2004). Hence, a single contaminated carcass may expose an unexpectedly large part of the population to diclofenac.
Hence, a combination of extreme sensitivity, foraging ecology and human chemical use caused the onset of extreme population declines of some Asian vulture species of the Gyps genus, the so-called "Old World vultures".

This case demonstrated the challenges involved in attempting to disentangle the stressors causing very apparent population effects, even in iconic species like vultures. It took several groups of excellent researchers years to perform the necessary research and forensic studies (under sometimes difficult conditions). The lesson learned is that even for compounds that have been used for a long time and are thought to be well understood, unexpected effects may become evident. There is consensus that such effects may not be covered in current risk assessments of chemicals prior to their use and application, which draws attention to the need for continued post-market monitoring of organisms for potential exposure and effects. It should be noted that even nowadays, although the use of diclofenac is prohibited in large parts of Asia, continued use still occurs due to its effectiveness in treating livestock and its low cost, which makes it available to farmers. Nevertheless, populations of Gyps vultures have slowly started to recover.

Green, R.E., Newton, I.A., Shultz, S., Cunningham, A.A., Gilbert, M., Pain, D.J., Prakash, V. (2004). Diclofenac poisoning as a cause of vulture population declines across the Indian subcontinent. Journal of Applied Ecology 41, 793-800.
Gilbert, M., Watson, R.T., Virani, M.Z., Oaks, J.L., Ahmed, S., Chaudhry, M.J.I., Arshad, M., Mahmood, S., Ali, A., Khan, A.A. (2006). Rapid population declines and mortality clusters in three Oriental white-backed vulture Gyps bengalensis colonies in Pakistan due to diclofenac poisoning. Oryx 40, 388-399.
Oaks, J.L., Gilbert, M., Virani, M.Z., Watson, R.T., Meteyer, C.U., Rideout, B.A., Shivaprasad, H.L., Ahmed, S., Chaudhry, M.J.I., Arshad, M., Mahmood, S., Ali, A., Khan, A.A. (2004). Diclofenac residues as the cause of vulture population decline in Pakistan. Nature 427, 630-633.
Oaks, J.L., Watson, R.T. (2011). South Asian vultures in crisis: Environmental contamination with a pharmaceutical. In: Elliott, J.E., Bishop, C.A., Morrissey, C.A. (Eds.), Wildlife Ecotoxicology. Springer, New York, NY, pp. 413-441.
Prakash, V. (1999). Status of vultures in Keoladeo National Park, Bharatpur, Rajasthan, with special reference to population crash in Gyps species. Journal of the Bombay Natural History Society 96, 365-378.

Which ecological traits make vulture populations extremely vulnerable to exposure to chemicals like diclofenac?
What indirect effects do you expect from the chemical-induced crashes of the Asian vultures?

Author: Nico van den Brink
Reviewers: Ansje Löhr, Michiel Kraak, Pim Leonards, John Elliott
Learning objectives
You should be able to:
Keywords: Threshold levels, read across, species specific sensitivity

The European otter (Lutra lutra) is a lively species which historically ranged all over Europe. In the second half of the last century, populations declined in North-West Europe, and at the end of the 1980s the species was declared extinct in the Netherlands. Several factors contributed to these declines; exposure to polychlorinated biphenyls (PCBs) and other contaminants was considered a prominent cause. PCBs can have different effects on organisms, primarily Ah-receptor mediated (see section on Receptor interactions).
In order to assess the actual contribution of chemical exposure to the extinction of the otters, and the potential for population recovery, it is essential to gain insight into the ratios between exposure levels and risk thresholds. However, since otters are rare and endangered, limited toxicological data are available on such thresholds. Most toxicological data are therefore inferred from research on another mustelid species, the mink (Mustela vison), a high-trophic-level, piscivorous species often used in toxicological studies (Basu et al., 2007). Several studies show that mink are quite sensitive to PCBs, showing e.g. effects on the length of the baculum of juveniles (Harding et al., 1999) and induction of hepatic enzyme systems and jaw lesions (Folland et al., 2016). Based on such studies, several threshold levels for otters were derived, depending on the toxic endpoints addressed. Based on the number of offspring and kit survival, EC50s of approximately 1.2 to 2.4 mg/kg wet weight were derived (Leonards et al., 1995), while for decreases in vitamin A levels due to PCB exposure, a safety threshold of 4 mg/kg in blood was assessed (Murk et al., 1998).

To re-establish a viable population of otters in the Netherlands, a re-introduction program was started in the mid-1990s, including monitoring of PCBs and other organic contaminants in the otters. Otters were captured in e.g. Belarus, Sweden and the Czech Republic. Initial results showed that these otters already contained < 1 mg/kg PCBs on a wet weight basis (van den Brink & Jansman, 2006), which was considered to be below the threshold limits mentioned before. Individual otters were radio-tagged, and most were recovered later as road-traffic casualties. Over time, PCB concentrations had changed, although not in the same direction for all specimens. Females with high initial concentrations showed declining concentrations, due to lactation, while in male specimens most concentrations increased over time, as would be expected. Nevertheless, concentrations were in the range of the threshold levels, so risks of effects could not be excluded. Since the re-introduction program was established in a relatively lightly contaminated area of the Netherlands, questions were raised about re-introduction plans in more contaminated areas like the Biesbosch, where contaminants may still affect otters.

To assess the potential risks of PCB contamination in e.g. the Biesbosch for otter populations, a modelling study was performed in which concentrations in fish from the Biesbosch were translated into concentrations in otters. Concentrations of PCBs in the fish differed between species (lipid-rich fish such as eel containing greater concentrations than lean white fish), with the size of the fish (larger fish containing greater concentrations than smaller fish) and between locations within the Biesbosch. Using biomagnification factors (BMFs) specific for each PCB congener (see section on Complex mixtures), total PCB concentrations in otter lipids were calculated from the fish concentrations for different compositions of the otters' fish diet (e.g. white fish versus eel, larger fish versus smaller fish, different locations). Different diets resulted in different modelled PCB concentrations in the otters; however, all modelled concentrations were above the earlier mentioned threshold levels (van den Brink and Sluiter, 2015). This would indicate that risks of effects for otters could not be ruled out, and it led to the notion that release of otters in the Biesbosch would not be the best option.
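A schematic sketch of this diet-based calculation is shown below. The diet fractions, fish concentrations and BMF are hypothetical placeholders; the actual congener-specific values are in the study cited above:

```python
# Otter PCB concentration estimated from prey concentrations, diet
# composition and a biomagnification factor (all values hypothetical).
diet = {            # fraction of the otter's diet per prey type
    "eel": 0.4,
    "white fish": 0.6,
}
c_fish = {          # PCB concentration in prey lipids (mg/kg lipid)
    "eel": 2.0,
    "white fish": 0.5,
}
bmf = 5.0           # lipid-based biomagnification factor for one congener

c_otter = bmf * sum(diet[prey] * c_fish[prey] for prey in diet)
print(f"modelled otter concentration: {c_otter:.1f} mg/kg lipid")
```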
However, a major issue in such risk assessments is whether the threshold levels derived from mink are applicable to otters. The resulting threshold levels for otters are rather low, and exceedance of these concentrations has been noticed in several studies. For instance, in well-thriving Scottish otter populations, PCB levels greater than 50 mg/kg lipid weight have been recorded in livers (Kruuk & Conroy, 1996). This is an order of magnitude higher than the threshold levels, which would indicate that even at higher concentrations, at which effects are to be expected based on mink studies, populations of free-ranging otters do not seem to be adversely affected. Based on this, the applicability of mink-derived threshold levels to otters may be open to discussion.

The otter case showed that the derivation of ecologically relevant toxicological threshold levels may be difficult because otters are not regularly used in toxicity tests. The application of data from a related species, in this case the American mink, may however be limited by differences in sensitivity. In this case this could result in overly conservative assessments of the risks, although the situation may be different for other combinations of species. The read-across of information between closely related species should therefore be performed with great care.

Basu, N., Scheuhammer, A.M., Bursian, S.J., Elliott, J., Rouvinen-Watt, K., Chan, H.M. (2007). Mink as a sentinel species in environmental health. Environmental Research 103, 130-144.
Harding, L.E., Harris, M.L., Stephen, C.R., Elliott, J.E. (1999). Reproductive and morphological condition of wild mink (Mustela vison) and river otters (Lutra canadensis) in relation to chlorinated hydrocarbon contamination. Environmental Health Perspectives 107, 141-147.
Folland, W.R., Newsted, J.L., Fitzgerald, S.D., Fuchsman, P.C., Bradley, P.W., Kern, J., Kannan, K., Zwiernik, M.J. (2016). Enzyme induction and histopathology elucidate aryl hydrocarbon receptor-mediated versus non-aryl hydrocarbon receptor-mediated effects of Aroclor 1268 in American mink (Neovison vison). Environmental Toxicology and Chemistry 35, 619-634.
Kruuk, H., Conroy, J.W.H. (1996). Concentrations of some organochlorines in otters (Lutra lutra L.) in Scotland: Implications for populations. Environmental Pollution 92, 165-171.
Leonards, P.E.G., De Vries, T.H., Minnaard, W., Stuijfzand, S., Voogt, P.D., Cofino, W.P., Van Straalen, N.M., Van Hattum, B. (1995). Assessment of experimental data on PCB-induced reproduction inhibition in mink, based on an isomer- and congener-specific approach using 2,3,7,8-tetrachlorodibenzo-p-dioxin toxic equivalency. Environmental Toxicology and Chemistry 14, 639-652.
Murk, A.J., Leonards, P.E.G., Van Hattum, B., Luit, R., Van der Weiden, M.E.J., Smit, M. (1998). Application of biomarkers for exposure and effect of polyhalogenated aromatic hydrocarbons in naturally exposed European otters (Lutra lutra). Environmental Toxicology and Pharmacology 6, 91-102.
Van den Brink, N.W., Jansman, H.A.H. (2006). Applicability of spraints for monitoring organic contaminants in free-ranging otters (Lutra lutra). Environmental Toxicology & Chemistry 25, 2821-2826.
Name three reasons why the assessment of the risks of PCBs to otters is relatively complicated.
How is it possible that, although toxicity threshold levels are exceeded in some otter populations, for example in Scotland, the populations seem to thrive really well?
5.4: Trait-based approaches
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/05%3A_Population_Community_and_Ecosystem_Ecotoxicology/5.04%3A_Trait-based_approaches
Author: Paul J. Van den Brink
Reviewers: Nico van den Brink, Michiel Kraak, Alexa Alexander-Trusiak
Learning objectives:
You should be able to
Keywords: Sensitivity, levels of biological organisation, species traits, recovery, indirect effects

It is impossible to assess the sensitivity of all species to all chemicals. Risk assessment therefore needs methods to extrapolate the sensitivity of a limited number of tested species to all species present in the environment. Statistical approaches, like the species sensitivity distribution (SSD) concept, perform this extrapolation by fitting a statistical distribution (e.g. a log-normal distribution) to a selected set of sensitivity data (e.g. 96h-EC50 data) in order to obtain a distribution of the sensitivity of all species. From this distribution a value associated with the lower end can be chosen and used as a protective threshold value.
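A minimal sketch of this SSD procedure, using hypothetical 96h-EC50 values (not data from the text): fit a log-normal distribution and take the 5th percentile (often called the HC5) as a protective threshold:

```python
import numpy as np
from scipy import stats

# Hypothetical 96h-EC50 values (mg/L) for a set of tested species.
ec50 = np.array([0.3, 1.2, 2.5, 4.0, 7.9, 15.0, 31.0, 60.0])

# A log-normal SSD is a normal distribution fitted to log-transformed data.
mu, sigma = stats.norm.fit(np.log10(ec50))

# HC5: the concentration below which only 5% of species are expected
# to be affected, i.e. the lower end of the fitted distribution.
hc5 = 10 ** stats.norm.ppf(0.05, mu, sigma)
print(f"HC5 = {hc5:.3f} mg/L")
```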
The disadvantage of this approach is that it does not include mechanistic knowledge on what determines species' sensitivity, and that it uses species taxonomy rather than species characteristics. To overcome these and other problems associated with a taxonomy-based approach (see Van den Brink et al., 2011 for a review), traits-based bioassessment approaches have been developed for assessing the effects of chemicals on aquatic ecosystems. In traits-based bioassessment approaches, species are not represented by their taxonomy but by their traits. A trait is a phenotypic or ecological characteristic of an organism, usually measured at the individual level but often applied as the average state or condition of a species. Examples of traits are body size, feeding habits, food preference, mode of respiration and lipid content. Traits describe the physical characteristics, ecological niche and functional role of a species within the ecosystem. The recognized strengths of traits-based bioassessment approaches include: traits add mechanistic and diagnostic knowledge; traits are transferrable across geographies; traits require no new sampling methodology, as data that are currently collected can be used; and the use of traits has a long-standing tradition in ecology and can supplement taxonomic analysis.

When traits are used to study the effects of chemical stressors on ecosystem structure (community composition) and function (e.g. nutrient cycling), it is important to make a distinction between response and effect traits. Response traits are traits that determine the response of a species to exposure to a chemical. An example of a response trait is the size-related surface area of an organism: smaller organisms have a higher surface-to-volume ratio than larger animals, so the uptake rate of the chemical stressor is generally higher in smaller animals than in larger ones (Rubach et al., 2012). Effect traits determine how organisms influence their surrounding environment, by altering the structure and functioning of the ecosystem. An example of an effect trait is the food preference of an organism. For instance, if the small (response trait) and hence sensitive organisms happen to be herbivorous (effect trait), an increase in algal biomass may be expected when these organisms are affected (Van den Brink, 2008). So, to be able to predict ecosystem-level responses, it is important to know the (cor)relations between response and effect traits, as traits are not independent of each other but can be linked phylogenetically or mechanistically and thus form trait syndromes (Van den Brink et al., 2011).

One of the holy grails of ecotoxicology is to find out which species traits make one species more sensitive to a chemical stressor than another. In the past, two approaches have been used to assess the (cor)relationships between species traits and their sensitivity: one based on empirical correlations between species' traits and their sensitivity as represented by EC50s (Rico and Van den Brink, 2015), and one based on a more mechanistic approach using toxicokinetic/toxicodynamic experiments and models (Rubach et al., 2012). Toxicokinetic-toxicodynamic models (TKTD models) simulate the time-course of the processes leading to toxic effects on organisms (Jager et al., 2011). Toxicokinetics describe what an individual does with the chemical and, in their simplest form, include the processes of uptake and elimination, thereby translating an external concentration of a toxicant to an internal body concentration over time (see section on Toxicokinetics and Bioaccumulation). Toxicodynamics describe what the chemical does to the organism, linking the internal concentration to the effect at the level of the individual organism over time (e.g. mortality) (Jager et al., 2011) (see sections on Toxicokinetics and Bioaccumulation and on Toxicodynamics and Molecular Interactions). Rubach et al. (2012) showed that almost 90% of the variation in uptake rates and 80% of the variation in elimination rates of an insecticide in a range of 15 freshwater arthropod species could be explained by four species traits. These traits were: i) surface area (without gills), ii) detritivorous feeding, iii) use of atmospheric oxygen and iv) phylogeny in the case of uptake, and i) thickness of the exoskeleton, ii) complete sclerotization, iii) use of dissolved oxygen and iv) lipid content (% of dry weight) in the case of elimination. For most of these traits, a mechanistic hypothesis on their influence on uptake and elimination can be formulated (Rubach et al., 2012); for instance, a higher surface-area-to-volume ratio increases the uptake of the chemical, so uptake is expected to be higher in small animals than in larger animals. This shows that it is possible to construct mechanistic models that are able to predict the toxicokinetics of chemicals in species, and hence the sensitivity of species to chemicals, based on their traits.
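The toxicokinetic component described above, in its simplest form, is a one-compartment model. The sketch below uses hypothetical rate constants; in the work of Rubach et al. (2012), such constants were related to species traits:

```python
import numpy as np

# One-compartment toxicokinetics: dC_int/dt = k_u * C_ext - k_e * C_int.
# Rate constants below are hypothetical illustration values.
k_u = 50.0    # uptake rate constant (L/kg/day)
k_e = 0.5     # elimination rate constant (1/day)
C_ext = 1.0   # constant external concentration (ug/L)

t = np.linspace(0, 10, 101)  # days
# Analytical solution for C_int(0) = 0 under constant exposure:
C_int = (k_u / k_e) * C_ext * (1 - np.exp(-k_e * t))

print(f"internal concentration after 1 day: {C_int[10]:.1f} ug/kg")
print(f"steady state (k_u/k_e * C_ext):     {k_u / k_e * C_ext:.1f} ug/kg")
```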
Traits determining the way organisms within an ecosystem react to chemical stress are related to the intrinsic sensitivity of the organisms on the one hand (response traits) and their recovery potential and food web relations (effect traits) on the other (Van den Brink, 2008). Recovery of aquatic invertebrates is, for instance, determined by traits like the number of life cycles per year, the presence of insensitive life stages like resting eggs, dispersal ability and having an aerial life stage (Gergs et al., 2011). Besides recovery, effect traits also determine how individual-level effects propagate to higher levels of biological organisation like the community or ecosystem level. For instance, when Daphnia are affected by a chemical, their food preference (algae) will ensure that, under nutrient-rich conditions, the algae are no longer subjected to top-down control and increase in abundance. The latter effects are called indirect effects: they are not a direct result of the exposure to the toxicant but an indirect one, through competition, food-web relationships, etc.

Gergs, A., Classen, S., Hommen, U. (2011). Identification of realistic worst case aquatic macroinvertebrate species for prospective risk assessment using the trait concept. Environmental Science and Pollution Research 18, 1316-1323.
Jager, T., Albert, C., Preuss, T.G., Ashauer, R. (2011). General unified threshold model of survival - a toxicokinetic-toxicodynamic framework for ecotoxicology. Environmental Science and Technology 45, 2529-2540.
Rico, A., Van den Brink, P.J. (2015). Evaluating aquatic invertebrate vulnerability to insecticides based on intrinsic sensitivity, biological traits and toxic mode-of-action. Environmental Toxicology and Chemistry 34, 1907-1917.
Rubach, M.N., Baird, D.J., Boerwinkel, M.-C., Maund, S.J., Roessink, I., Van den Brink, P.J. (2012). Species traits as predictors for intrinsic sensitivity of aquatic invertebrates to the insecticide chlorpyrifos. Ecotoxicology 21, 2088-2101.
Van den Brink, P.J. (2008). Ecological risk assessment: from book-keeping to chemical stress ecology. Environmental Science and Technology 42, 8999-9004.
Van den Brink, P.J., Alexander, A., Desrosiers, M., Goedkoop, W., Goethals, P., Liess, M., Dyer, S. (2011). Traits-based approaches in bioassessment and ecological risk assessment: strengths, weaknesses, opportunities and threats. Integrated Environmental Assessment and Management 7, 198-208.

Which two traits may determine the sensitivity of invertebrates to insecticides?
Will two daphnids which are equal except in their size be equally sensitive to a chemical?
What is the difference between a response and an effect trait?
Which traits are of importance for the recovery of impacted populations?
How will the insecticide-induced death of water fleas propagate to the ecosystem level under eutrophic circumstances, and what are these types of effects called?
5.5: Population models
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/05%3A_Population_Community_and_Ecosystem_Ecotoxicology/5.05%3A_Population_models
Authors: A. Jan Hendriks and Nico van Straalen
Reviewers: Aafke Schipper, John D. Stark and Thomas G. Preuss
Learning objectives
You should be able to
Keywords: intrinsic rate of increase, carrying capacity, exponential growth

Ecological risk assessment of toxicants usually focuses on the risks run by individuals, by comparing exposures with no-effect levels. However, in many cases it is not the protection of individual plants or animals that is of interest but the protection of a viable population of a species in an ecological context. Risk assessment generally does not take into account the quantitative dynamics of populations and communities. Yet, understanding and predicting effects of chemicals at levels beyond that of individuals is urgently needed for several reasons. First, we need to know whether quality standards, when extrapolated from toxicity tests, are sufficiently but not overly protective at the population level. Second, responses of isolated, homogeneous cohorts in the laboratory may be different from those of interacting, heterogeneous populations in the field. Third, to set the right priorities in management, we need to know the relative and cumulative effect of chemicals in relation to other environmental pressures.

Ecological population models for algae, macrophytes, aquatic invertebrates, insects, birds and mammals have been widely used to address the risk of potentially toxic chemicals. However, until recently, these models were only rarely used in the regulatory risk assessment process, due to a lack of connection between model output and risk assessment needs (Schmolke et al., 2010). Here, we will sketch the basic principles of population dynamics for environmental toxicology applications.

Ecological textbooks usually start their chapter on population ecology by introducing exponential and logistic growth. Consider a population of size N. If resources are unlimited, and the per capita birth rate (b) and death rate (d) are constant in a population closed to migration, the number of individuals added to the population per time unit (dN/dt) can be written as:

\[\frac{dN}{dt} = (b - d)N \quad \text{or} \quad \frac{dN}{dt} = rN\]

where r = b - d is called the intrinsic rate of increase. This differential equation can be solved with boundary condition N(0) = N0 to yield:

\[N(t) = N_0 e^{rt}\]

Since toxicants will affect either reproduction or survival, or both, they will also affect the exponential growth rate. This suggests that r can be considered a measure of population performance under toxic stress. But rather than from observed population trajectories, r is usually estimated from life-history data. We know from basic demographic theory that any organism with "time-invariant" vital rates (that is, fertility and survival may depend on age, but not on time) will be growing exponentially at rate r. The intrinsic rate of increase can be derived from age-specific survival and fertility rates using the so-called Euler-Lotka equation, which reads:

\[\int_0^{x_m} e^{-rx}\, l(x)\, m(x)\, dx = 1\]

in which x is age, \(x_m\) maximal age, l(x) survivorship from age zero to age x and m(x) the number of offspring produced per time unit at age x. Unfortunately this equation does not allow for a simple derivation of r; r must be obtained by iteration and the correct value is the one that, when combined with the l(x) and m(x) data, makes the integral equal to 1. Due to this complication approximate approaches are often applied. For example, in many cases a reasonably good estimate for r can be obtained from the age at first reproduction α, survival to first reproduction, S, and reproductive output, m, as:

\[r \approx \frac{\ln(S\, m)}{\alpha}\]

This is due to the fact that for many animals in the environment, especially those with high reproductive output and low juvenile survivorship, age at first reproduction is the dominant variable determining population growth (Forbes and Calow, 1999).
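A minimal sketch of this iterative solution of the Euler-Lotka equation, for a hypothetical life table (all values are illustrative only, not from the text):

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical life table: age classes x, survivorship l(x) from birth
# to age x, and fertility m(x) at age x.
x = np.array([1, 2, 3, 4])
l = np.array([0.50, 0.30, 0.15, 0.05])
m = np.array([0.0, 6.0, 1.0, 0.5])

def euler_lotka(r):
    """Discrete version of the Euler-Lotka equation: sum over age classes minus 1."""
    return np.sum(np.exp(-r * x) * l * m) - 1.0

# r is the value that makes the sum equal to 1, found by iteration
# (here Brent's root-finding method).
r = brentq(euler_lotka, -2.0, 2.0)
print(f"intrinsic rate of increase r = {r:.3f}")

# Rough check against the approximation r = ln(S*m)/alpha given above:
alpha, S, m_alpha = 2.0, 0.30, 6.0
print(f"approximation: r = {np.log(S * m_alpha) / alpha:.3f}")
```

For this life table the iterative solution gives r of about 0.33, while the approximation gives about 0.29; the approximation is crude but captures the dominant role of the age at first reproduction.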
The classical demographic modelling approach, including the Euler-Lotka equation, considers time as a continuous variable and solves the equations by calculus. However, there is an equivalent formalism based on discrete time, in which population events are assumed to take place only at equidistant moments. The vital rates are then summarized in a so-called Leslie matrix: a table of survival and fertility scores for each age class, organized in such a way that when it is multiplied by the age distribution at any moment, the age distribution at the following time point is obtained. This type of modelling lends itself more easily to computer simulation. The outcome is much the same: if the Leslie matrix is time-invariant, the population will grow each time step by a factor λ, which is related to r as ln λ = r (λ = 1 corresponds to r = 0). Mathematically speaking, λ is the dominant eigenvalue of the Leslie matrix. The advantage of the discrete-time version is that λ can be more easily decomposed into its component parts, that is, the life-history traits that are affected by toxicants (Caswell, 1996).
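A small sketch of this discrete-time formulation: projecting a hypothetical Leslie matrix one step and extracting λ as its dominant eigenvalue (with r = ln λ). All matrix entries are illustrative assumptions:

```python
import numpy as np

# Leslie matrix for three age classes: top row holds fertilities,
# the sub-diagonal holds survival probabilities between age classes.
L = np.array([
    [0.0, 4.0, 3.0],   # offspring per individual in age classes 1-3
    [0.5, 0.0, 0.0],   # survival from class 1 to class 2
    [0.0, 0.6, 0.0],   # survival from class 2 to class 3
])

# One projection step: n(t+1) = L @ n(t)
n = np.array([100.0, 20.0, 5.0])
print("age distribution next time step:", L @ n)

# Growth factor lambda = dominant eigenvalue (real and positive for a
# Leslie matrix); r = ln(lambda).
lam = max(np.linalg.eigvals(L).real)
print(f"lambda = {lam:.3f}, r = ln(lambda) = {np.log(lam):.3f}")
```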
The demographic approach to exponential growth has been applied numerous times in environmental toxicology, most often in studies of water fleas (Suhett et al., 2015) and insects (Stark and Banks, 2003). The tests are called "life-table response experiments" (see section on Population ecotoxicology in a laboratory setting): the investigator observes the effects of toxicants on age-specific survival and fertility, and calculates r as a measure of population performance for each exposure concentration. When resources become limiting, population growth levels off towards a carrying capacity K (logistic growth), and toxicant effects can also be assessed on K. Noël et al. (2006), for example, showed that zinc in the diet did not affect the carrying capacity of contained laboratory populations of the springtail Folsomia candida, although there were several interactions below K that were influenced by zinc, including hormesis (growth stimulation by low doses of a toxicant) and Allee effects (loss of growth potential at low density due to lower encounter rates).

Density-dependence is expected to act as a buffering mechanism at the population level, because toxicity-induced population decline diminishes competition; however, the effects very much depend on the details of population regulation. This was demonstrated in a model for peregrine falcons exposed to DDE and PBDEs (Schipper et al., 2013). While the equilibrium size of the population declined under toxic exposure, the probability of individual birds finding a suitable territory increased. However, at the same time the number of non-breeding birds shifting to the breeding stage became limiting, and this resulted in a strong decrease in the equilibrium number of breeders.

To enhance the potential for application of population models in risk assessment, more ecological details of the species under consideration must be included, e.g. effects of dispersal, abiotic factors, predators and parasites, landscape structure and many more. A further step is to track the physiology and ecology of each individual in the population. This is done in the dynamic energy budget (DEB) modelling approach developed by Kooijman and co-workers (Kooijman et al., 2009). By including such details, a model becomes more realistic, and more precise predictions can be made of the effects of toxic exposures. These types of models are generally called "mechanistic effect models" (MEMs). They allow a causal link between the protection goal, a scenario of exposure to toxicants and the adverse population effects generated by model output (Hommen et al., 2016). The European Food Safety Authority (EFSA) in 2014 issued an opinion paper containing detailed guidelines on the development of such models and how to adjust them to be useful in the risk assessment of plant protection products (EFSA, 2014).

Barata, C., Baird, D.G., Amata, F., Soares, A.M.V.M. (2000). Comparing population response to contaminants between laboratory and field: an approach using Daphnia magna ephippial egg banks. Functional Ecology 14, 513-523.
Caswell, H. (1996). Demography meets ecotoxicology: untangling the population level effects of toxic substances. In: Newman, M.C., Jagoe, C.H. (Eds.), Ecotoxicology: A Hierarchical Treatment. Lewis Publishers, Boca Raton, pp. 255-292.
EFSA (2014). Scientific Opinion on good modelling practice in the context of mechanistic effect models for risk assessment of plant protection products. EFSA Panel on Plant Protection Products and their Residues (PPR). EFSA Journal 12, 3589.
Forbes, V.E., Calow, P. (1999). Is the per capita rate of increase a good measure of population-level effects in ecotoxicology? Environmental Toxicology and Chemistry 18, 1544-1556.
Hendriks, A.J., Maas, J.L., Heugens, E.H.W., Van Straalen, N.M. (2005). Meta-analysis of intrinsic rates of increase and carrying capacity of populations affected by toxic and other stressors. Environmental Toxicology and Chemistry 24, 2267-2277.
Hommen, U., Forbes, V., Grimm, V., Preuss, T.G., Thorbek, P., Ducrot, V. (2016). How to use mechanistic effect models in environmental risk assessment of pesticides: case studies and recommendations from the SETAC workshop Modelink. Integrated Environmental Assessment and Management 12, 21-31.
Kooijman, S.A.L.M., Baas, J., Bontje, D., Broerse, M., Van Gestel, C.A.M., Jager, T. (2009). Ecotoxicological applications of Dynamic Energy Budget theory. In: Devillers, J. (Ed.), Ecotoxicology Modeling, Volume 2. Springer, Dordrecht, pp. 237-260.
Noël, H.L., Hopkin, S.P., Hutchinson, T.H., Williams, T.D., Sibly, R.M. (2006). Towards a population ecology of stressed environments: the effects of zinc on the springtail Folsomia candida. Journal of Applied Ecology 43, 325-332.
Schipper, A.M., Hendriks, H.W.M., Kaufmann, M.J., Hendriks, A.J., Huijbregts, M.A.J. (2013). Modelling interactions of toxicants and density dependence in wildlife populations. Journal of Applied Ecology 50, 1469-1478.
Schmolke, A., Thorbek, P., Chapman, P., Grimm, V. (2010). Ecological models and pesticide risk assessment: current modelling practice. Environmental Toxicology and Chemistry 29, 1006-1012.
Stark, J.D., Banks, J.E. (2003). Population effects of pesticides and other toxicants on arthropods. Annual Review of Entomology 48, 505-519.
Suhett, A.L. et al. (2015). An overview of the contribution of studies with cladocerans to environmental stress research. Acta Limnologica Brasiliensia 27, 145-159.
5.6: Metapopulations
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/05%3A_Population_Community_and_Ecosystem_Ecotoxicology/5.06%3A_Metapopulations
Author: Nico van den Brink
Reviewers: Michiel Kraak, Heikki Setälä
Learning objectives
You should be able to
Implications of meta-population dynamics on risks of environmental chemicals

Populations can be defined as a group of organisms from the same species which live in a specific geographical area. These organisms interact and breed with each other. At a higher level, one can define meta-populations, which can be described as a set of spatially separated populations which interact to a certain extent. The populations may function separately, but organisms can migrate between them. Generally the individual populations occur in more or less favourable habitat patches which may be separated by less favourable areas. However, in between populations, good habitats may also occur where populations have not yet established, or where local populations have gone extinct. Exchange between populations within a meta-population depends on i) the distances between the individual populations, ii) the quality of the habitat between the populations, e.g. the availability of so-called stepping stones, areas where organisms may survive for a while but which are too small or of too low habitat quality to support a local population, and iii) the dispersal potential of the species. Due to the interactions between the different populations within a meta-population, chemicals may affect species at levels higher than the (local) population, also at non-contaminated sites.

An important effect of chemicals at the meta-population scale is that local populations may act as a source or sink for other populations within the meta-population. When a chemical affects the survival of organisms in a local population, the local population densities decline. This may increase the immigration of organisms from neighbouring populations within the meta-population, while the decreased local densities would also decrease emigration, resulting in a net influx of organisms into the contaminated site. This is the case when organisms do not sense the contaminants, or when the contaminants do not alter the habitat quality for the organisms. When the outflow of organisms towards the contaminated site is too large to be replaced by reproduction in the source populations, population densities may decline even at the non-contaminated source sites. Consequently, local populations at contaminated sites may act as a sink for other populations within the meta-population, so chemicals may have a much broader impact than just local.

Conversely, when the local population is relatively small, or the chemical stress is not chronic, meta-population dynamics may also mitigate local chemical stress. Population-level impacts of chemicals may be minimised by the influx of organisms from neighbouring populations, potentially restoring population densities to the levels prior to the chemical stress. Such recovery depends on the extent and duration of the chemical impact on the local population and on the capacity of the other populations to replenish the lost organisms.

Meta-population dynamics may thus alter the extent to which contaminants affect local populations through migration between populations. However, chemicals may also affect the total carrying capacity of the meta-population as a whole. This can be illustrated by the modelling approach developed by Levins in the late 1960s (Levins, 1969). A first assumption in this model is that not all patches that can potentially carry a local population are actually occupied, so let F be the fraction of occupied patches (1 - F being the fraction not occupied). Local populations have an average chance of extinction e (day⁻¹ when calculating on a daily basis), while non-occupied patches have a chance c (day⁻¹) of being colonised from the occupied patches. The daily change in the fraction of occupied patches is therefore:

\[\frac{dF}{dt} = cF(1-F) - eF\]

In this formula cF(1-F) equals the fraction of non-occupied patches that become occupied from the occupied patches, while eF equals the fraction of patches that go extinct during the day. From this, the carrying capacity (CC) of the meta-population can be calculated as:

\[CC = 1 - \frac{e}{c}\]

while the growth rate (GR) of the meta-population (at low occupancy) can be calculated as:

\[GR = c - e\]

In case chemicals increase the extinction risk (e) or decrease the chance of establishment in a new patch (c), this will affect the CC (which will decrease, because e/c will increase) as well as the GR (which will decrease, and may even go below 0). However, this model uses average coefficients, which may not be directly applicable to individual contaminated sites within a meta-population. More recent (and more complex) approaches allow the use of local-population-specific parameters and, moreover, include stochasticity, increasing their environmental relevance.
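A minimal numerical sketch of the Levins model above, confirming that the occupied fraction settles at 1 - e/c (the rates used are hypothetical):

```python
# Levins metapopulation model: dF/dt = c*F*(1-F) - e*F, integrated with
# a simple Euler scheme. Colonisation (c) and extinction (e) rates are
# hypothetical daily values.
c, e = 0.10, 0.04   # per day
F = 0.05            # initial fraction of occupied patches
dt = 0.1            # time step (days)

for _ in range(int(2000 / dt)):
    F += (c * F * (1.0 - F) - e * F) * dt

print(f"equilibrium occupancy: {F:.3f} (analytical 1 - e/c = {1 - e / c:.3f})")
```

Raising e or lowering c in this sketch lowers the equilibrium occupancy, and once e exceeds c the meta-population goes extinct, mirroring the CC and GR expressions above.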
A first assumption in this model is that not all patches that can potentially carry a local population are actually occupied, so let F be the fraction of occupied patches (1-F being the fraction not occupied). Local populations have an average chance e of going extinct (day\(^{-1}\) when calculating on a daily basis), while non-occupied patches have a chance c (day\(^{-1}\)) of being colonised from the occupied patches. The daily change in the fraction of occupied patches is therefore:

\[\frac{dF}{dt} = c F (1-F) - e F\]

In this formula, \(c F (1-F)\) equals the fraction of non-occupied patches that becomes occupied from the occupied patches, while \(e F\) equals the fraction of occupied patches that goes extinct during the day. This can be recalculated to a carrying capacity (CC) of

\[CC = 1 - \frac{e}{c}\]

while the growth rate (GR) of the meta-population can be calculated by

\[GR = c - e\]

In case chemicals increase the extinction risk (e) or decrease the chance of establishment in a new patch (c), this will affect the CC (which will decrease because e/c will increase) as well as the GR (which will decrease and may even go below 0); a minimal numerical sketch of this model is given at the end of this section. However, this model uses average coefficients, which may not be directly applicable to individual contaminated sites within a meta-population. More recent (and more complex) approaches include the possibility to use local-population-specific parameters; moreover, such models include stochasticity, increasing their environmental relevance.

Besides affecting populations directly in their habitats, chemicals may also affect the areas between habitat patches. This may reduce the potential of organisms to migrate between patches and thereby decrease the chance that organisms repopulate non-occupied patches, i.e. decrease c, and as such both CC and GR. Hence, in a meta-population setting, chemicals in a non-preferred habitat may even affect long-term meta-population dynamics.

Levins, R. (1969). Some demographic and genetic consequences of environmental heterogeneity for biological control. Bulletin of the Entomological Society of America 15, 237-240.

What are drivers of recovery of local populations that are affected by a chemical stressor in a meta-population setting?

In what type of meta-population would a local population be less affected: one with a small number of relatively large local populations, or a setup with a lot of small local populations?
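The behaviour of the Levins model introduced above is easy to explore numerically. The sketch below integrates the occupancy equation with a simple Euler scheme; the colonisation and extinction rates are illustrative values, not taken from any specific study.

```python
# Minimal sketch of the Levins (1969) model described above:
# dF/dt = c*F*(1-F) - e*F, with carrying capacity CC = 1 - e/c
# and growth rate GR = c - e. Parameter values are illustrative.

def simulate_levins(c, e, f0=0.1, dt=0.1, days=365):
    """Integrate dF/dt = c*F*(1-F) - e*F with a simple Euler scheme."""
    f = f0
    for _ in range(int(days / dt)):
        f += dt * (c * f * (1.0 - f) - e * f)
    return f

c, e = 0.05, 0.02  # colonisation and extinction rates (day^-1), invented
print("CC = 1 - e/c           :", 1 - e / c)                      # 0.6
print("GR = c - e             :", c - e)                          # 0.03
print("occupancy after 1 year :", round(simulate_levins(c, e), 3))

# A chemical that doubles the extinction risk lowers both quantities:
print("stressed CC            :", 1 - 0.04 / c)                   # 0.2
print("stressed GR            :", c - 0.04)                       # 0.01
```

Note how the simulated occupancy converges towards CC regardless of the starting fraction, and how a stressed GR approaching zero signals that the meta-population as a whole can no longer compensate for local extinctions.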
5.7: Community ecotoxicology
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/05%3A_Population_Community_and_Ecosystem_Ecotoxicology/5.07%3A_Community_ecotoxicology
Authors: Michiel Kraak and Ivo Roessink
Reviewers: Kees van Gestel, Nico van den Brink, Ralf B. Schäfer

Learning objectives:
You should be able to:

Keywords: Community ecotoxicology, species interactions, indirect effects, mesocosm, ecosystem processes.

The motivation to study ecotoxicological effects at the community level is that the targets of environmental protection generally are populations, communities and ecosystems. Consequently, when scaling up research from the molecular level, via cells, organs and individual organisms, towards the population, community or even ecosystem level, the ecological and societal relevance of the obtained data strongly increases. Yet, the difficulty of obtaining data increases as well, due to the increasing complexity, lower reproducibility and the increasing time needed to complete the research, which typically involves higher costs. Moreover, when effects are observed in the field, it may be difficult to link these to specific chemicals and to identify the drivers of the observed effects. Not surprisingly, ecotoxicological effects at the community and ecosystem level are understudied.

Community ecology is defined as the study of the organization and functioning of communities, which are assemblages of interacting populations of species living within a particular area or habitat. Building on this definition, community ecotoxicology is defined as the study of the effects of toxicants on patterns of species abundance, diversity, community composition and species interactions. These species interactions are unique to the community and ecosystem level and may cause direct effects of toxicants on specific species to exert indirect effects on other species. It has been estimated that the majority of effects at these levels of biological organization are indirect rather than direct. These indirect effects can be exerted via several routes, as the following example illustrates.

Roessink et al. studied the impact of the fungicide triphenyltin acetate (TPT) on benthic communities in outdoor mesocosms. For several species a dose-related decrease in abundance directly after application was observed, followed by a gradual recovery coinciding with decreasing exposure concentrations, all implying direct effects of the fungicide. For some species, however, opposite results were obtained: abundance increased shortly after application, followed by a gradual decline, as in the example of the Culicidae. In this case, these typical indirect effects were explained by a higher sensitivity of the predators and competitors of the Culicidae. Due to diminished predation and competition and higher food availability, abundances of the Culicidae temporarily increased after toxicant exposure. With the decreasing exposure concentrations over time, the populations of the predators and competitors recovered, leading to a subsequent decline in numbers of the Culicidae.

The indirect effects described above are thus due to species-specific sensitivities to the compound of interest, which influence the interactions between species. Yet, at higher exposure concentrations also the less sensitive species will start to be affected by the chemical. This may lead to an "arch-shaped" relationship between the number of individuals of a certain species and the concentration of a toxicant. In a mesocosm study with the insecticide lambda-cyhalothrin this was observed for Daphnia, which are prey for the more sensitive phantom midge Chaoborus (Roessink et al., 2005).
At low exposure concentrations the indirect effects, such as release from predation by Chaoborus, led to an increase in abundance of the less sensitive Daphnia. At intermediate exposure concentrations there was a balance between the positive indirect effects and the adverse direct effects of the toxicant. At higher exposure concentrations the adverse direct effects overruled the positive indirect effects, resulting in a decline in abundance of the Daphnia. These combined dose-dependent direct and indirect effects are inherent to community-level experiments, but are compound- and species-interaction-specific.

To study community ecotoxicology, experiments have to be scaled up and are therefore often performed in mesocosms, artificial ponds, ditches and streams, or even in the field, sometimes accompanied by the use of in- and exclosures. Assessing the effects of toxicants on communities in such large systems requires meticulous sampling schemes, which often make use of artificial substrates and e.g. emergence traps for aquatic invertebrates with terrestrial adult life stages (see section on Community ecotoxicology in practice).

As an alternative to scaling up the experiments in community ecotoxicology, the size of the communities may be scaled down. Algae and bacteria grown on coin-sized artificial substrates in the field or in experimental settings provide the unique advantage that the experimental unit is actually an entire community.

Given the large scale and complexity of experiments at the community level, such experiments generally generate overwhelming amounts of data, making appropriate analysis of the results challenging. Data analysis focusing on individual responses, so-called univariate analysis, which suffices in single-species experiments, obviously falls short in community ecotoxicology, where cosm or (semi-)field communities sometimes consist of over a hundred different species. Hence, multivariate analysis is often more appropriate, similar to the approaches frequently applied in field studies to identify possible drivers of patterns in species abundances (a minimal sketch contrasting the two approaches is given below). Alternative approaches are also applied, like using ecological indices such as species richness or categorizing the responses of communities into effect classes. To determine whether species under semi-field conditions respond as sensitively to toxicant exposure as in the laboratory, the construction and subsequent comparison of species sensitivity distributions (SSD) (see section on SSDs) may be helpful.

The analysis and interpretation of community ecotoxicity data is also challenged by the dynamic development of each individual replicate cosm, artificial pond, ditch or stream, including those from the control. From the start of the experiment, each control replicate develops independently and matures, and at the end of the experiments, which generally last for several months, control replicates may differ not only from the treatments, but also among each other. The challenge is then to separate the toxic signal from the natural variability in the data.

In experiments that include a recovery phase, it is frequently observed that previously exposed communities do recover, but develop in a different direction than the controls, which actually challenges the definition of recovery. Moreover, recovery can be decelerated or accelerated depending on the dispersal capacity of the species potentially inhabiting the cosms and the distance to nearby populations within a metapopulation (see section on Metapopulations).
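To illustrate the univariate-versus-multivariate point made above, the sketch below contrasts per-species mean comparisons with a bare-bones principal component analysis (via singular value decomposition) on a small abundance matrix. The matrix, the replicate layout and all counts are invented for illustration only.

```python
import numpy as np

# Hypothetical abundance matrix: rows are mesocosm replicates
# (3 controls, then 3 treated), columns are counts of four species.
counts = np.array([
    [52, 18,  7, 33],   # control replicates
    [47, 22,  9, 29],
    [55, 16,  6, 35],
    [12, 41,  8, 30],   # treated: species 1 declines, species 2 increases
    [ 9, 45,  7, 28],
    [15, 38, 10, 32],
], dtype=float)

# Univariate view: compare treatment means species by species.
print("control means:", counts[:3].mean(axis=0))
print("treated means:", counts[3:].mean(axis=0))

# Multivariate view: PCA via SVD on the centred matrix. Replicates
# separate along the first axis when the community as a whole shifts,
# summarising the joint response of all species in one ordination.
centred = counts - counts.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = u * s  # sample scores on the principal axes
print("PC1 scores   :", np.round(scores[:, 0], 1))
```

In real cosm studies, dedicated ordination methods (e.g. redundancy analysis or principal response curves) are typically used instead of this bare-bones PCA, but the underlying idea of summarising many correlated species responses in a few axes is the same.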
Other crucial factors that may affect the impact of a toxicant on communities, as well as their recovery from this toxicant exposure, include habitat heterogeneity and the state of the community in combination with the moment of exposure. Habitat heterogeneity may affect the distribution of toxicants over the different environmental compartments and may provide shelter to organisms. Communities generally exhibit temporal dynamics in species composition and in their contribution to ecosystem processes (see section on Structure versus function), as well as in the life-cycle stages of the individual species. Exponentially growing populations recover much faster than populations that have reached carrying capacity, and for almost all species, young individuals are up to several orders of magnitude more sensitive than adults or late-instar larvae (see section on Population ecotoxicology). Hence, the timing of exposure to toxicants may seriously affect the extent of the adverse effects, as well as the recovery potential of the exposed communities.

When scaling up from the community to the ecosystem level, again unique characteristics emerge: structural characteristics like biodiversity, but also ecosystem processes, quantified by functional endpoints like primary production, ecosystem respiration, nutrient cycling and decomposition. Although a good environmental quality is based on both ecosystem structure and functioning, there is definitely a bias towards ecosystem structure, both in science and in policy (see section on Structure versus function). Levels of biological organisation higher than ecosystems are covered by the field of landscape ecotoxicology (see section on Landscape ecotoxicology) and in a more practical way by the concept of ecosystem services (see section on Ecosystem services).

Roessink, I., Crum, S.J.H., Bransen, F., Van Leeuwen, E., Van Kerkum, F., Koelmans, A.A., Brock, T.C.M. Impact of triphenyltin acetate in microcosms simulating floodplain lakes. I. Influence of sediment quality. Ecotoxicology 15, 267-293.

Roessink, I., Arts, G.H.P., Belgers, J.D.M., Bransen, F., Maund, S.J., Brock, T.C.M. (2005). Effects of lambda-cyhalothrin in two ditch mesocosm systems of different trophic status. Environmental Toxicology and Chemistry 24, 1684-1696.

Clements, W.H., Newman, M.C. Community Ecotoxicology. John Wiley & Sons, Ltd.

Motivate the importance of studying ecotoxicology at the community level.

Define community ecotoxicology and name specific phenomena at the community and ecosystem level.

Roessink et al. studied the impact of the fungicide triphenyltin acetate (TPT) on benthic communities in outdoor mesocosms. For several species they observed a dose-related decrease in abundance directly after application, followed by a gradual recovery, see example in the graph below. For some species, however, opposite results were obtained and the abundance increased shortly after application, followed by a gradual decline, see example in the graph below. Explain the results shown in the lower graph.

Mention three ways to analyze data from experiments at the community level.

Mention one advantage and three disadvantages of cosm experiments.

Author: Martina G. Vijver
Reviewers: Paul J. van den Brink, Kees van Gestel

Learning objectives:
To be able to:

Keywords: microcosms, mesocosms, realism, different biological levels

It is generally anticipated that ecotoxicological tests should provide data useful for making realistic predictions of the fate and effects of chemicals in natural ecosystems (Landner et al., 1989).
The ecotoxicological test, if used in an appropriate way, should be able to identify the potential environmental impact of a chemical before it has caused any damage to the ecosystem. In spite of the considerable amount of work devoted to this problem and the plethora of test methods being published, there is still reason to question whether current procedures for testing and assessing the hazard of chemicals in the environment do answer the questions we ask. Most biologists agree that at each succeeding level of biological organization new properties appear that would not have been evident even by the most intense and careful examination of lower levels of organization (Cairns Jr., 1983). These levels of biological hierarchy might be crudely characterized as subcellular, cellular, organ, organism, population, multispecies, community, and ecosystem. At the lower biological levels, responses are faster than those occurring at higher levels of organization.

Experiments at the lower biological levels are often performed under standard laboratory conditions (see Section on Toxicity testing). The laboratory setting has several advantages: it allows for replication; the relatively simple and controlled conditions yield outcomes that are rather robust across different laboratories; the stressor of interest is more traceable under optimal, stable conditions; and experiments are easy to repeat. As a consequence, at the lower biological levels the responses of organisms to chemical stressors tend to be more tractable, or more causal, than those identified when studying effects at higher levels.

The merit of performing cosm studies, i.e. studies at the higher biological levels, is that the impact of a stressor can be investigated on a variety of species, all interacting with each other. This enables detecting both direct and indirect effects of the chemicals on the structure of species assemblages. Indirect effects can become manifest as disruptions of species interactions, e.g. competition, predator-prey interactions and the like. A second important reason for conducting cosm studies is that abiotic interactions at the level of the ecosystem can be accounted for, allowing for measurement of effects of chemicals under more environmentally realistic exposure conditions. Conditions that likely influence the fate and behavior of a chemical are sorption to sediments and plants, photolysis, changes in pH (see section on Bioavailability for a more detailed description), and other natural fluctuations.

Microcosm or mesocosm (or cosm) studies represent a bridge between the laboratory and the natural world. The difference between micro- and mesocosms is mostly restricted to size (Cooper and Barmuta, 1993). Aquatic microcosms are \(10^{-3}\) to 10 m\(^3\) in size, while mesocosms are 10 to \(10^4\) m\(^3\) or even larger, equivalent to whole natural systems. The originality of cosms is mainly based on the combination of ecological realism, achieved by the introduction of the basic components of natural ecosystems, and facilitated access to a number of physicochemical, biological, and toxicological parameters that can be controlled to some extent. The cosm approach also makes it possible to work with replicated treatments, enabling the study of multiple environmental factors which can be manipulated.
The system allows the establishment of food webs, the assessment of direct and indirect effects, and the evaluation of effects of contamination on multiple trophic and taxonomic levels in an ecologically relevant context. Cosm studies make it possible to assess effects of contaminants by looking at the parts (individuals, populations, communities) and the whole (ecosystems) simultaneously.As given in the OECD guidance document (OECD, 2004), the size to be selected for a meso- or microcosm study will depend on the objectives of the study and the type of ecosystem that is to be simulated. In general, studies in smaller systems are more suitable for short-term studies of up to three to six months and studies with smaller organisms (e.g. planktonic species). Larger systems are more appropriate for long-term studies (e.g. 6 months or longer). Numerous ecosystem-level manipulations have been conducted since the early 1970s (Hurlbert et al., 1972). The Experimental Lakes Area (ELA) situated in Ontario, Canada deserves special attention because of its significant contributions to the understanding of how natural communities respond to chemical stressors. This ELA consists of 46 natural, relatively undisturbed lakes, which were designated specially for ecosystem-level research. Many different questions have been tackled here, e.g. manipulations with nutrients (amongst others Levine and Schindler, 1999), synthetic estrogens (e.g. Kidd et al., 2014) and Wallace with pesticides in the Coweeta district (Wallace et al., 1999). It is nowadays realized that there is a need for testing more than just individual species and to take into account ecosystem elements such as fluctuations of abiotic conditions and biotic interactions when trying to understand the ecological effects of chemicals. Therefore a selection of study parameters is often considered as given by OECD:A cosm approach assist in identifying and quantifying direct as well as indirect effects. Here two different types of responses are described, for more examples it is referred to the Section on Multistress.Joint interactions: Barmentlo et al. used an outdoor mesocosm system consisting of 65 L ponds. Using a full factorial design, they investigated the population responses of macroinvertebrate species assemblages exposed for 35 days to environmentally relevant concentrations of three commonly used agrochemicals (imidacloprid, terbuthylazine, and NPK fertilizers). A detrivorous food chain as well as an algal-driven food chain were inoculated into the cosms. At environmentally realistic concentrations of binary mixtures, the species responses could be predicted based on concentration addition (see Section on Mixture toxicity). Overall, the effects of trinary mixtures were much more variable and counterintuitive. This was nicely illustrated by how the mayfly Cloeon dipterum reacted to the various combinations of the pesticides. Compared to single substance exposures and binary mixtures, extreme low recovery of C. dipterum (3.6% of control recovery for both mixtures) was seen. However, after exposure to the trinary mixture, recovery of C. dipterum no longer deviated from the control, and therefore was was higher than expected. Unexpected effects of the mixtures were also obtained for both zooplankton species (Daphnia magna and Cyclops sp.) As expected, the abundance of both zooplankton species was positively affected by nutrient applications, but pesticide addition did not lower their recovery. 
These types of unexpected results can only be identified when multiple species and multiple stressors are tested; they cannot be detected in a laboratory test with a single species.

Indirect cascading effects: Van den Brink et al. studied the effects of chronic applications of a mixture of the herbicide atrazine and the insecticide lindane in indoor freshwater plankton-dominated microcosms. Both top-down and bottom-up regulation mechanisms of the selected species assemblage were affected by the pesticide mixture. Lindane exposure caused a decrease in sensitive detritivorous macro-arthropods and herbivorous arthropods. This allowed insensitive food competitors like worms, rotifers and snails to increase in abundance (although not always significantly). Atrazine inhibited algal growth and hence also affected the herbivores. Direct results of the inhibition of photosynthesis by atrazine exposure were lower dissolved oxygen and pH levels and an increase in alkalinity, nitrogen and electrical conductivity.

There is a conceptual conflict between realism and replicability when applied to mesocosms. Replicability may be achieved, in part, by a relative simplification of the system. The crucial point in designing a model system may not be to maximize realism, but rather to make sure that ecologically relevant information can be obtained. The reliability of information on ecotoxicological effects of chemicals tested in mesocosms closely depends on the representativeness of the biological processes or structures that are likely to be affected. This means that within cosms key features at both structural and functional levels should be preserved, as they ensure ecological representativeness. Extrapolation from small experimental systems to the real world seems generally more problematic than the use of larger systems, in which more complex interactions can be studied experimentally as well. For that reason, Caquet et al. claim that testing chemicals using mesocosms refines the classical methods of ecotoxicological risk assessment because they provide conditions for a better understanding of environmentally relevant effects of chemicals.

Barmentlo, S.H., Schrama, M., Hunting, E.R., Heutink, R., Van Bodegom, P.M., De Snoo, G.R., Vijver, M.G. Assessing combined impacts of agrochemicals: aquatic macroinvertebrate population responses in outdoor mesocosms. Science of the Total Environment 631-632, 341-347.

Caquet, T., Lagadic, L., Sheffield, S.R. Mesocosms in ecotoxicology: outdoor aquatic systems. Reviews of Environmental Contamination and Toxicology 165, 1-38.

Cairns Jr., J. Are single species toxicity tests alone adequate for estimating environmental hazard? Hydrobiologia 100, 47-57.

Cooper, S.D., Barmuta, L.A. Field experiments in biomonitoring. In: Rosenberg, D.M., Resh, V.H. (Eds.) Freshwater Biomonitoring and Benthic Macroinvertebrates. Chapman and Hall, New York, pp. 399-441.

OECD (2004). Draft Guidance Document on Simulated Freshwater Lentic Field Tests (Outdoor Microcosms and Mesocosms). Organization for Economic Cooperation and Development, Paris. http://www.oecd.org/fr/securitechimique/essais/32612239.pdf

Hurlbert, S.H., Mulla, M.S., Willson, H.R. Effects of an organophosphorus insecticide on the phytoplankton, zooplankton, and insect populations of fresh-water ponds. Ecological Monographs 42, 269-299.

Kidd, K.A., Paterson, M.J., Rennie, M.D., Podemski, C.L., Findlay, D.L., Blanchfield, P.J., Liber, K. (2014).
Direct and indirect responses of a freshwater food web to a potent synthetic oestrogen. Philosophical Transactions of the Royal Society B: Biological Sciences 369, article 20130578, DOI:10.1098/rstb.2013.0578.

Landner, L., Blanck, H., Heyman, U., Lundgren, A., Notini, M., Rosemarin, A., Sundelin, B. Community testing, microcosm and mesocosm experiments: ecotoxicological tools with high ecological realism. In: Chemicals in the Aquatic Environment. Springer, pp. 216-254.

Levine, S.N., Schindler, D.W. Influence of nitrogen to phosphorus supply ratios and physicochemical conditions on cyanobacteria and phytoplankton species composition in the Experimental Lakes Area, Canada. Canadian Journal of Fisheries and Aquatic Sciences 56, 451-466.

Newman, M.C. Ecotoxicology: the history and present directions. In: Jørgensen, S.E., Fath, B.D. (Eds.), Ecotoxicology. Vol. 2 of Encyclopedia of Ecology, 5 vols. Oxford: Elsevier, pp. 1195-1201.

Van den Brink, P.J., Crum, S.J.H., Gylstra, R., Bransen, F., Cuppen, J.G.M., Brock, T.C.M. Effects of a herbicide-insecticide mixture in freshwater microcosms: risk assessment and ecological effect chain. Environmental Pollution 157, 237-249.

Wallace, J.B., Grubaugh, J.W., Whiles, M.R. Biotic indices and stream ecosystem processes: results from an experimental study. Ecological Applications 6, 140-151.

Responses of organisms to long-term exposure can be detected at the sub-organism level by making use of biomarkers and then extrapolating these results to organism fitness and consequences at the population level. Mention two benefits of performing tests making use of biomarkers.

Mention at least two benefits of performing tests at higher biological levels, such as the community or ecosystem level.
5.8: Structure versus function incl. ecosystem services
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/05%3A_Population_Community_and_Ecosystem_Ecotoxicology/5.08%3A_Structure_versus_function_incl._ecosystem_services
Author: Herman Eijsackers
Reviewers: Nico van den Brink, Kees van Gestel, Lorraine Maltby

Learning objectives:
You should be able to:

Keywords: structural biodiversity, functional biodiversity, functional redundancy, food web interactions

In ecology, biodiversity describes the richness of natural life at three levels: genetic diversity, species diversity (the most well-known) and landscape diversity. The most commonly used index, the Shannon-Wiener index, expresses biodiversity in general terms as the number of species in relation to the number of individuals per species. More precisely, the index is the negative sum, over all species present, of the proportional abundance of each species multiplied by its natural logarithm:

\[H' = -\sum_{i} p_i \ln(p_i)\]

with \(p_i = n_i/N\), in which \(n_i\) is the number of individuals of species i and N the total number of individuals of all species combined (a minimal computation sketch is given at the end of this section).

In environmental toxicology, most attention is paid to species diversity. Genetic diversity plays a role in the assessment of more or less sensitive or resistant subspecies or local populations of a species, like in various mining areas. Landscape diversity has only recently received attention and aims primarily at the total load of e.g. pesticides applied in an agronomic landscape (see Section on Landscape ecotoxicology), although it should more logically focus on the interactions between the various ecosystems in a landscape, for instance a lake surrounded partly by a forest and partly by a grassland.

In general, the various types of interactions between species do not play a major role in the study of biodiversity, neither within ecology nor within environmental toxicology. The diversity in interactions described in a food web or food chain is not expressed in a term like the Shannon-Wiener index. However, in aquatic as well as soil ecological research, extensive quantitative descriptions have been made of various ecosystems. These model descriptions, like those made for arable soil, are partly based on the taxonomic background of species groups and partly on their functional role in the food web, expressed as their way of feeding (see for instance the phytophagous nematodes feeding on plants, the fungivorous nematodes eating fungi and the predaceous nematodes eating other nematodes).

Such a scheme depicts a very general soil food web and its different trophic levels. Much more detailed soil food web descriptions are also available, which not only link the different trophic groups but also describe the energy flows within the system and, through these flows, the intensity and thus strength of the interactions that together determine the stability of the system (see e.g. De Ruiter et al., 1995).

Such a food web illustrates that biodiversity has not only a structural side (the various types of species) but also a functional one (which species are involved in the execution of which process). At the species level this functional aspect has been further elaborated in specific feeding guilds. At the ecosystem level this functional aspect has clearly been recognized in the last decades and has resulted in the development of the concept of ecosystem services (see Section on Ecosystem services). However, these services do not trace back to the individual species level and as such not to the functional aspect of biodiversity. Another development to be mentioned is that of trait-based approaches, which attempt to group species according to certain traits that are linked not only to exposure and sensitivity but also to their functioning.
With that, the trait-based approach may enable linking structural and functional biodiversity (see Section on Trait-based approaches).

When effects of contaminants on species are compared to effects on processes, the species effects are mostly more distinct than the process effects. In other words: effects on structural diversity will already be seen at lower concentrations, and probably also sooner, than effects on functional diversity. This can be explained by the fact that processes are executed by more than one species. When, with increasing chemical exposure levels, the most sensitive species disappear, their role is taken over by less sensitive species. This reasoning has been generalized in the concept of "functional redundancy", which postulates that not all species that can perform a specific process are always active, and thus necessary, in a specific situation. Consequently, they are superfluous or redundant. When a sensitive species that can perform a similar function disappears, a redundant species may take over, so the function is still covered. It has to be realized, however, that if this holds in situation A, that does not mean it also holds in situation B, with different environmental conditions and another species composition. Another consequence of functional redundancy is that when functional biodiversity is affected, there is (severe) damage to structural biodiversity: most likely several important species will have gone extinct or are strongly inhibited.

Redundant species are often less efficient in performing a certain function. Tyler observed, in a gradient of Cu contamination around a copper manufacturing plant in Gusum, Sweden, that specific enzyme functions as well as general processes like mineralisation decreased faster than the total fungal biomass. The explanation was provided by experimental research by Rühling et al., who selected a number of these micro-fungi in the field and tested their sensitivity to copper. The various species showed different concentration-effect relationships, all eventually going to zero, except for two species which increased in abundance at the higher concentrations, so that the total biomass stayed more or less the same.

Another example of the importance of a combined approach to structural and functional diversity is provided by the different ecological types of earthworms. According to their behaviour and role, they are classified as anecics (deep-burrowing earthworms moving up and down between deeper soil layers and the soil surface and consuming leaf litter), endogeics (active in the deeper mineral and humus soil layers and consuming fragmented litter material and humus), and epigeics (active in the upper soil litter layer and consuming leaf litter). Adverse effects of contamination will thus result in accumulation of litter at the soil surface (anecics), reduced litter fragmentation (epigeics) and reduced humus formation (endogeics). In various studies it has been shown that these earthworms have different sensitivities to different types of pesticides. However, so far the ranking of more and less sensitive species differs between groups of pesticides. So, there is no general relation between the function of a species (e.g. surface-active earthworms, the epigeics) and its exposure to and sensitivity to pesticides.
Nevertheless, pesticide effects on anecics generally lead to reduced litter removal, effects on endogeics result in slower fragmentation, reduced humification etc., and an effect on earthworm communities in general may hamper soil aeration and lead to soil compaction.

Another example of the impact of contaminants on functional diversity is from microbiological research on the impact of heavy metals by Doelman et al. They isolated fungi, bacteria and actinomycetes from various heavy-metal-contaminated and clean areas, tested these species for their sensitivity to zinc and cadmium, and divided them accordingly into a sensitive and a resistant group. As a next step they measured to what extent both groups were able to degrade and mineralize a series of organic compounds. The results showed that the sensitive group was much more effective in degrading a variety of organic compounds, whereas the heavy-metal-resistant microbes were far less effective. This would indicate that although functional redundancy may alleviate some of the effects that contaminants have on ecosystem functioning, the overall performance of the community generally decreases upon contaminant exposure. The latter example also shows that genetic diversity, expressed as the numbers of sensitive and resistant species, plays a role in the functional stability and sustainability of microbial degradation processes in the soil.

In conclusion, ecosystem services are worth studying in relation to contamination (Faber et al., 2019), but also more specifically in relation to functional diversity at the species level. A promising field of research in this framework would include microorganisms in relation to the variety of degradation processes they are involved in.

De Ruiter, P.C., Neutel, A.-M., Moore, J.C. (1995). Energetics, patterns of interaction strengths and stability in real ecosystems. Science 269, 1257-1260.

Doelman, P., Jansen, E., Michels, M., Van Til, M. Effects of heavy metals in soil on microbial diversity and activity as shown by the sensitivity-resistance index, an ecologically relevant parameter. Biology and Fertility of Soils 17, 177-184.

Faber, J.H., Marshall, S., Van den Brink, P.J., Maltby, L. (2019). Priorities and opportunities in the application of the ecosystem services concept in risk assessment for chemicals in the environment. Science of the Total Environment 651, 1067-1077.

Rühling, Å., Bååth, E., Nordergren, A., Söderström, B. Fungi in a metal-contaminated soil near the Gusum brass mill, Sweden. Ambio 13, 34-36.

Tyler, G. The impact of heavy metal pollution on forests: a case study of Gusum, Sweden. Ambio 13, 18-24.

Describe the structural and functional diversity at the species and at the landscape level.

What is meant by redundancy?

Does redundancy have an impact on the sensitivity of species (structural diversity) versus processes (functional diversity)?
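As announced above, a minimal computation sketch for the Shannon-Wiener index defined in this section; the two example communities are invented for illustration.

```python
import math

def shannon_wiener(counts):
    """H' = -sum(p_i * ln(p_i)) with p_i = n_i / N; zero counts are skipped."""
    n_total = sum(counts)
    return -sum((n / n_total) * math.log(n / n_total) for n in counts if n > 0)

# Two invented communities with the same total number of individuals:
even_community   = [25, 25, 25, 25]   # four equally abundant species
skewed_community = [85, 10, 3, 2]     # one strongly dominant species

print(round(shannon_wiener(even_community), 2))    # 1.39 (= ln 4, the maximum)
print(round(shannon_wiener(skewed_community), 2))  # 0.55, much lower diversity
```

The index thus reflects both species number and evenness: the skewed community contains the same four species, but its dominance structure pulls the index down, which is exactly the property exploited when comparing communities along a contamination gradient.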
6.1: Introduction - The Essence of Risk Assessment
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/06%3A_Risk_Assessment_and_Regulation/6.01%3A_Introduction_-_The_Essence_of_Risk_Assessment
Author: Ad Ragas
Reviewer: Martien Janssen

After this module, you should be able to:

Keywords: Risk, hazard, tiering, problem definition, exposure assessment, effect assessment, risk characterization

We assess risks on a daily basis, although we may not always be aware of it. For example, when we cross the street, we - often implicitly - assess the benefits of crossing and weigh these against the risks of getting hit by a vehicle. If the risks are considered too high, we may decide not to cross the street, or to walk a bit further and cross at a safer spot with traffic lights.

Risk assessment is common practice for a wide range of activities in society, for example for building bridges, protection against floods, insurance against theft and accidents, and the construction of a new industrial plant. The principle is always the same: we use the available knowledge to assess the probability of potential adverse effects of an activity as well as we can. And if these risks are considered too high, we consider options to reduce or avoid the risk.

Risk assessment of chemicals aims to describe the risks resulting from the use of chemicals in our society. In chemical risk assessment, risk is commonly defined as "the probability of an adverse effect after exposure to a chemical". This is a very practical definition that provides natural scientists and engineers the opportunity to quantify risk using "objective" scientific methods, e.g. by quantifying exposure and the likelihood of adverse effects. However, it should be noted that this definition ignores more subjective aspects of risk, typically studied by social scientists, e.g. the perceptions of people and (dealing with) knowledge gaps. This subjective dimension can be important for risk management. For example, risk managers may decide to take action if a risk is perceived as high by a substantial part of the population, even if the associated health risks have been assessed as negligible by natural scientists and engineers.

Next to the term "risk", the term "hazard" is often used. The difference between both terms is subtle, but important. A hazard is defined as the inherent capacity of a chemical (or agent/activity) to cause adverse effects. The labelling of a substance as "carcinogenic" is an example of a hazard-based action. The inherent capacity of the substance to trigger cancer, as for example demonstrated in an in vitro assay or an experiment with rats or mice, can be sufficient reason to label a substance as "carcinogenic". Hazard is thus independent of the actual exposure level of a chemical, whereas risk is not.

Risk assessment is closely related to risk management, i.e. the process of dealing with risks in society. Decisions to accept or reduce risks belong to the risk management domain and involve consideration of the socio-economic implications of the risks as well as the risk management options. Whereas risk assessment is typically performed by natural scientists and engineers, often referred to as "risk assessors", risk management is performed by policy makers, often referred to as "risk managers".

Risk assessment and risk management are often depicted as sequential processes, where assessment precedes management. However, strict separation of both processes is not always possible and management decisions may be needed before risks are assessed. For example, risk assessment requires political agreement on what should be protected and at what level, which is a risk management issue (see Section on Protection Goals).
Similarly, the identification, description and assessment of uncertainties in the assessment is an activity that involves risk assessors as well as risk managers. Finally, it is often more efficient to define alternative management options before performing a risk assessment. This enables the assessment of the current situation and alternative management scenarios (i.e., potential solutions) in one round. The scenario with the maximum risk reduction that is also feasible in practice would then be the preferred management option. This mapping of solutions and concurrent assessment of the associated risks is also known as solution-focused risk assessment.

Chemical risk assessment is typically organized in a limited number of steps, which may vary depending on the regulatory context. Here, we distinguish four steps: problem definition, exposure assessment, effect assessment and risk characterization. The four risk assessment steps are explained in more detail below. They are often repeated multiple times before a final conclusion on the acceptability of the risk is reached. This repetition is called tiering. It typically starts with a simple, conservative assessment; in subsequent tiers, more data are added to the assessment, resulting in less conservative assumptions and risk estimates. Tiering is used to focus the available time and resources for assessing risks on those chemicals that potentially lead to unacceptable risks. Detailed data are gathered only for chemicals showing potential risk in the lower, more conservative tiers.

The order of the exposure and effect assessment steps has been a topic of debate among risk assessors and managers. Some argue that effect assessment should precede exposure assessment because effect information is independent of the exposure scenario and can be used to decide how exposure should be determined; e.g., information on toxicokinetics can be relevant to determine the exposure duration of interest. Others argue that exposure assessment should precede effect assessment since assessing effects is expensive and unnecessary if exposure is negligible. The current consensus is that the order should be determined on a case-by-case basis, with parallel assessment of exposure and effects and exchange of information between the two steps as the preferred option.

The scope of the assessment is determined during the problem definition phase, in which a number of standard questions are typically answered. Problem definition is not a task for risk assessors only, but should preferably be performed in a collaborative effort between risk managers, risk assessors and stakeholders. The problem definition should try to capture the worries of stakeholders as well as possible. This is not always an easy task, as these worries may be very broad and sometimes also poorly articulated. Risk assessors need a clearly demarcated problem and they can only assess those aspects for which assessment methods are available. The dialogue should make transparent which aspects of the stakeholder concerns will be assessed and which not. Being transparent about this can avoid disappointments later in the process, e.g. if aspects considered important by stakeholders were not accounted for because suitable risk assessment methods were lacking.
For example, if stakeholders are worried about the acute and chronic impacts of pesticide exposure, but only the acute impacts will be addressed, this should be made clear at the beginning of the assessment. The problem definition phase results in a risk assessment plan detailing how the risks will be assessed given the available resources and within the available timeframe.

An important aspect of exposure assessment is the determination of an exposure scenario. An exposure scenario describes the situation for which the exposure is being assessed. In some cases, this exposure situation may be evident, e.g. soil organisms living at a contaminated site. However, especially when we want to assess the potential risks of future substance applications, we have to come up with a typical exposure scenario. Such scenarios are for example defined before a substance is allowed to be used as a food additive or before a new pesticide is allowed on the market. Exposure scenarios are often conservative, meaning that the resulting exposure estimate will be higher than the expected average exposure.

The exposure metric used to assess the risk depends on the protection target. For ecosystems, a medium concentration is often used, such as the water concentration for aquatic systems, the sediment concentration for benthic systems and the soil concentration for terrestrial systems. These concentrations can either be measured or predicted using a fate model (see Section 3.8) and may or may not take into account bioavailability (see Section 3.6). For human risk assessment, the exposure metric depends on the exposure route. An air concentration is often used to cover inhalation, the average daily intake from food and water to cover oral exposure, and uptake through skin for dermal exposure. Uptake through multiple routes can also be combined in a dose metric for internal exposure, such as the Area Under the Curve (AUC) in blood (see Section 6.3.1). Exposure metrics for specific wildlife species (e.g. top predators) and farm animals are often similar to those for humans. Measuring and modelling route-specific exposures is generally more complex than quantifying a simple medium concentration, because it does not only require the quantification of the substance concentration in the contact medium (e.g., the concentration in drinking water), but also quantification of the contact intensity (e.g., how much water is consumed per day). Especially oral exposure can be difficult to quantify because it covers a wide range of different contact media (e.g. food products) and intensities, varying from organism to organism.

The aim of the effect assessment is to estimate a reference exposure level, typically an exposure level which is expected to cause no or very limited adverse effects. There are many different types of reference levels in chemical risk assessment, each used in a different context. The most common reference level for ecological risk assessment is the Predicted No Effect Concentration (PNEC). This is the water, soil, sediment or air concentration at which no adverse effects at the ecosystem level are expected. In human risk assessment, a myriad of different reference levels is being used, e.g. the Acceptable Daily Intake (ADI), the oral and inhalatory Reference Dose (RfD), the Derived No Effect Level (DNEL), the Point of Departure (PoD) and the Virtually Safe Dose (VSD). Each of these reference levels is used in a specific context, e.g.
for addressing a specific exposure route (the ADI is oral), regulatory domain (the DNEL is used in the EU for REACH, whereas the RfD is used in the USA), substance type (the VSD is typical for genotoxic carcinogens) or risk assessment method (the PoD is typical for the Margin of Safety approach).

What all reference levels have in common is that they reflect a certain level of protection for a specific protection goal. In ecological risk assessment, the protection goal typically is the ecosystem, but it can also be a specific species or even an organism. In human risk assessment, the protection goal typically comprises all individuals of the human population. The definition of protection goals is a normative issue and therefore is not a task of risk assessors, but of politicians. The protection goals defined by politicians typically involve a high level of abstraction, e.g. "the entire ecosystem and all individuals of the human population should be protected". Such abstract protection goals do not always match the methods used to assess the risks. For example, if one assumes that one molecule of a genotoxic carcinogen can trigger a deadly tumour, 100% protection of all individuals of the human population is feasible only by banning all genotoxic carcinogens (reference level = 0). Likewise, the safe concentration for an ecosystem is infinitely small if one assumes that the sensitivity of the species in the system follows a lognormal distribution which asymptotically approaches the x-axis. Hence, the abstract protection goals have to be operationalized, i.e. defined in more practical terms matching the methods used for assessing effects. This is often done in a dialogue between scientific experts and risk managers. An example is the "one in a million lifetime risk estimated with a conservative dose-response model" which is used by many (inter)national organizations as a basis for setting reference levels for genotoxic carcinogens. Likewise, the concentration at which the no observed effect concentration (NOEC) of only 5% of the species is exceeded is often used as a basis for deriving a PNEC.

Once a protection goal has been operationalized, it must be translated into a corresponding exposure level, i.e. the reference level. This is typically done using the outcomes of (eco)toxicity tests, i.e. tests with laboratory animals such as rats, mice and dogs for human reference levels, and with primary producers, invertebrates and vertebrates for ecological reference levels. Often, the toxicity data are plotted in a graph with the exposure level on the x-axis and the effect or response level on the y-axis. A mathematical function is then fitted to the data: the so-called dose-response relationship. This dose-response relationship is subsequently used to derive an exposure level that corresponds to a predefined effect or response level. Finally, this exposure level is extrapolated to the ultimate protection goal, accounting for phenomena such as differences in sensitivity between laboratory and field conditions, between the tested species and the species to be protected, and the (often very large) variability in sensitivity in the human population or the ecosystem. This extrapolation is done by dividing the exposure level that corresponds to a predefined effect or response level by one or more assessment or safety factors. These assessment factors do not have a purely scientific basis in the sense that they account for physiological differences which have actually been proven to exist.
These factors also account for uncertainties in the assessment and should make sure that the derived reference level is a conservative estimate. The determination of reference levels is an art in itself and is further explained in Sections 6.3.1 for human risk assessment and 6.3.2 for ecological risk assessment.

The aim of risk characterization is to come up with a risk estimate, including the associated uncertainties. A comparison of the actual exposure level with the reference level provides an indication of the risk:

\[\text{risk indicator} = \frac{\text{exposure level}}{\text{reference level}}\]

If the reference level reflects the maximum safe exposure level, then the risk indicator should be below unity (1.0). A risk indicator higher than 1.0 indicates a potential risk. It is a "potential risk" because many conservative assumptions may have been made in the exposure and effect assessments. A risk indicator above 1.0 can thus lead to two different management actions: gathering additional data and performing a higher-tier assessment (if available resources in time and money allow and the assessment was conservative), or considering mitigation options to reduce the risk. Assessment of the uncertainties is very important in this phase, as it reveals how conservative the assessment was and how it can be improved by gathering additional data or applying more advanced risk assessment tools.

Risks can also be estimated using a margin-of-safety approach. In this approach, the reference level used has not yet been extrapolated from the tested species to the protection goal, e.g. by applying assessment factors for interspecies and interindividual differences in sensitivity. As such, the reference level is not a conservative estimate. In this case, the risk indicator reflects the "margin of safety" between the actual exposure and the non-extrapolated reference level. Depending on the situation at hand, the margin of safety typically should be 100 or higher. The main difference between the traditional and the margin-of-safety approach in risk assessment is the timing for addressing the uncertainties in the effect assessment.

The risk assessment paradigm can be illustrated using the DPSIR chain (Section 1.2), showing how reference exposure levels are derived from protection goals, i.e. the maximum level of impact that we consider acceptable. The actual exposure level is either measured or predicted using estimated emission levels and dispersion models. When measured exposure levels are used, this is called retrospective or diagnostic risk assessment: the environment is already polluted and the assessor wants to know whether the risk is acceptable and which substances are contributing to it. When the environment is not yet polluted, predictive tools can be used. This is called prospective risk assessment: the assessor wants to know whether a projected activity will result in unacceptable risks. Even if the environment is already polluted, the risk assessor may still prefer predicted over measured exposure levels, e.g. if measurements are too expensive. This is possible only if the pollution sources are well characterized. Retrospective (diagnostic) and prospective risk assessments can differ substantially in terms of problem definitions and methods used, and are therefore discussed in separate sections in this online book.

The same scheme can also be used to illustrate some important criticism of the current risk assessment paradigm, i.e. the comparison between the actual exposure level and a reference level (a minimal worked example of this comparison is given at the end of this section).
In current assessments, only one point of the dose-response relationship is used to assess risk, i.e. the reference level. Critics argue that this is suboptimal and a waste of resources because the dose-response information is not used to assess the actual risk. A risk indicator with a value of 2.0 implies that the exposure is twice as high as the reference level, but this does not give an indication of how many individuals or species are being affected, or of the intensity of the effect. If the dose-response relationship were used to determine the risk, this would result in a better-informed risk estimate.

A final critical remark that should be made is the fact that risk assessment is often performed on a substance-by-substance basis. Dealing with mixtures of chemicals is difficult because each mixture has a unique composition in terms of compounds and concentration ratios between compounds. This makes it difficult to determine a reference level for mixtures. Mixture toxicology is slowly progressing and several methods are now available to address mixtures, i.e. whole-mixture methods and compound-based approaches (Section 6.3.6). Another promising development is that of effect-based methods (Section 6.4.2). These methods do not assess risk based on chemical concentrations, but on the toxicity measured in an environmental sample. In terms of DPSIR, these methods assess risks at the level of impacts rather than at the level of state or pressures.

Imagine the herbicide glyphosate were banned based on its carcinogenic properties. Would this intervention be risk-based or hazard-based?

Indicate whether the following activities should involve risk assessors, risk managers/politicians and/or stakeholders:

Indicate whether the following risk assessments are retrospective or prospective:

A risk assessment was performed for two different substances, i.e. A and B. The risk indicator value of substance A was 1.5 and that of substance B was 2.0. A risk manager proposes to first address substance B and subsequently substance A. Do you agree? Motivate your answer.
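As announced above, a minimal worked example of the risk characterization step: a risk indicator computed as exposure divided by reference level, followed by a refined (higher-tier) estimate. All concentration values are invented for illustration.

```python
# Minimal sketch of the risk characterization step described above:
# risk indicator = exposure level / reference level. All numbers invented.

def risk_indicator(exposure, reference_level):
    return exposure / reference_level

pec  = 0.8   # predicted environmental concentration (ug/L), hypothetical
pnec = 0.5   # predicted no effect concentration (ug/L), hypothetical

print(f"tier-1 risk indicator : {risk_indicator(pec, pnec):.1f}")  # 1.6 > 1.0: potential risk

# Typical tiered response: refine the conservative exposure estimate
# (e.g. measured instead of modelled) and re-evaluate before concluding.
refined_pec = 0.3
print(f"refined risk indicator: {risk_indicator(refined_pec, pnec):.1f}")  # 0.6 < 1.0
```

This also makes the criticism above concrete: the indicator only reports how far exposure sits from a single point on the dose-response curve, not how severe the effects at 1.6 times the reference level would actually be.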
6.3: Predictive risk assessment approaches and tools
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/06%3A_Risk_Assessment_and_Regulation/6.03%3A_Predictive_risk_assessment_approaches_and_tools
Authors: Jos Boesten, Theo Brock
Reviewers: Ad Ragas, Andreu Rico

Learning objectives:
You should be able to:

Keywords: pesticides, exposure, scenarios, assessment goals, effects

An exposure scenario describes the combination of circumstances needed to estimate exposure by means of models. For example, scenarios for modelling pesticide exposure can be defined as a combination of abiotic (e.g. properties and dimensions of the receiving environment and related soil, hydrological and climate characteristics) and agronomic (e.g. crops and related pesticide application) parameters that are thought to represent a realistic worst-case situation for the environmental context in which the exposure model is to be run. A scenario for exposure of aquatic organisms could be, for example, a ditch with a minimum water depth of 30 cm alongside a crop growing on a clay soil with annual applications of pesticide, using a 20-year time series of weather data and including pesticide exposure via spray drift deposition and leaching from drainpipes. Such a scenario would require modelling of spray drift, leaching from drainpipes and exposure in surface water, ending up in a 20-year time series of the exposure concentration. In this chapter, we explain the use of exposure scenarios in prospective ERA by giving examples for the regulatory assessment of pesticides in particular.

Between about 1995 and 2001, groundwater and surface water scenarios were developed for EU pesticide registration, also referred to as the FOCUS scenarios. The European Commission indicated that these should represent 'realistic worst-cases', a political concept which leaves considerable room for scientific interpretation. Risk assessors and managers agreed that the intention was to generate 90th percentile exposure concentrations. The concept of a 90th percentile exposure concentration assumes a statistical population of concentrations, of which 90% are lower than this 90th percentile (and thus 10% are higher). This 90th percentile approach has since then been followed for most environmental exposure assessments for pesticides at EU level.

The selection of the FOCUS groundwater and surface water scenarios involved a considerable amount of expert judgement, because this selection could not yet be based on well-defined GIS procedures and databases on properties of the receiving environment. The EFSA exposure assessment for soil organisms was the first environmental exposure assessment that could be based on a well-defined GIS procedure, using EU maps of parameters like soil organic matter, density of crops and weather. During the development of this exposure assessment, it became clear that the concept of a 90th percentile exposure concentration is too vague: it is essential to also define the statistical population of concentrations from which this 90th percentile is taken. Based on this insight, the EFSA Panel on Plant Protection Products and their Residues (PPR) developed the concept of exposure assessment goals, which has become the standard within EFSA for developing regulatory exposure scenarios for pesticides.

An exposure assessment goal for the risk assessment of aquatic organisms can be defined following this EFSA procedure using a scheme of boxes, one part specifying the temporal dimensions and the other the spatial dimensions. In box E1, the Ecotoxicologically Relevant type of Concentration (ERC) is defined, e.g. the freely dissolved pesticide concentration in water for pelagic organisms.
In box E2, the temporal dimension of this concentration is defined, e.g. the annual peak or the time-weighted average concentration for a pre-defined period. Based on these elements, the multi-year temporal population of concentrations can be generated for one single water body (box E5), which would consist of e.g. 20 peak concentrations in the case of a 20-year time series. The spatial part requires definition of the type of water body (e.g. ditch, stream or pond; box E3) and the spatial dimension of this body (e.g. having a minimum water depth of 30 cm; box E4). Based on these, the spatial population of water bodies can be defined (box E6), e.g. all ditches with a minimum water depth of 30 cm alongside fields treated with the pesticide. Finally, in box E7, the percentile combination to be taken from the spatial-temporal population of concentrations is defined. Specification of the exposure assessment goals involves not only scientific information but also political choices, because this specification influences the strictness of the exposure assessment. For instance, in the case of exposure via spray drift, a minimum water depth of 30 cm in box E4 leads to an approximately three times lower peak concentration in the water than a minimum water depth of 10 cm. This schematic approach can easily be adapted to other exposure assessment goals.
Interaction between exposure and effect assessment for organisms
Nearly all environmental protection goals for pesticides involve assessment of risk for organisms; only the standards for groundwater and for drinking water produced from surface water are based on a concentration of 0.1 μg/L, which is not related to possible ecotoxicological effects. The risk assessment for organisms is a combination of an exposure assessment and an effect assessment. When linking the two assessments, it has to be ensured that the type of concentration delivered by the exposure assessment is consistent with that required by the effect assessment (e.g. do not use time-weighted average concentrations in acute effect assessment). In the assessment procedure, information always flows from the exposure assessment to the effect assessment, because the risk assessment conclusion is based on the effect assessment.
A relatively new development is to assess exposure and effects at the landscape level. This typically is a combination of higher-tier effect and exposure assessments. In such an approach, the dynamics in exposure are first assessed for the full landscape and then combined with the dynamics of effects, for example based on spatially explicit population models for species typical of that landscape. Such an approach makes a separate definition of the exposure and effect scenario redundant, because it aims to deliver the exposure and effect assessment in an integrated way in space and time. Such an integrated approach requires the definition of "environmental scenarios". Environmental scenarios integrate both the parameters needed to define the exposure (exposure scenario) and those needed to calculate direct and indirect effects and recovery (ecological scenario). However, it will probably take at least a decade before landscape-level approaches, including agreed-upon environmental scenarios, will be implemented for regulatory use in prospective ERA.
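To make the percentile concept concrete, the following minimal Python sketch derives a 90th percentile exposure concentration from a simulated spatial-temporal population of concentrations, in the spirit of boxes E5-E7. All numbers are invented for illustration; real FOCUS/EFSA assessments use mechanistic fate models driven by weather and soil data, not random numbers.

```python
import random
import statistics

random.seed(1)

# Hypothetical population (all values invented): 1000 ditches (spatial,
# box E6), each with 20 annual peak concentrations in ug/L (temporal, box E5).
ditches = [[random.lognormvariate(mu=0.0, sigma=1.0) for _ in range(20)]
           for _ in range(1000)]

# Temporal statistic per ditch, e.g. the median annual peak (boxes E2/E5).
per_ditch = [statistics.median(years) for years in ditches]

# Spatial 90th percentile over all ditches (percentile combination, box E7).
per_ditch.sort()
p90 = per_ditch[int(0.9 * len(per_ditch))]
print(f"90th percentile exposure concentration: {p90:.2f} ug/L")
```

The choice of which temporal statistic to take per water body and which spatial percentile to take over water bodies is exactly the kind of decision fixed in boxes E2 and E7, and it directly affects how strict the resulting assessment is.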
Boesten, J.J.T.I. (2018). Conceptual considerations on exposure assessment goals for aquatic pesticide risks at EU level. Pest Management Science 74, 264-274.
Brock, T.C.M., Alix, A., Brown, C.D., et al. (2010). Linking aquatic exposure and effects: risk assessment of pesticides. SETAC Press & CRC Press, Taylor & Francis Group, Boca Raton, FL, 398 pp.
Rico, A., Van den Brink, P.J., Gylstra, R., Focks, A., Brock, T.C.M. (2016). Developing ecological scenarios for the prospective aquatic risk assessment of pesticides. Integrated Environmental Assessment and Management 12, 510-521.
Why is a detailed specification of exposure assessment goals needed?
Why does specification of the exposure assessment goals include political choices?
Why does the risk assessment of organisms consist of two parallel tiered schemes for effects and exposure?

Authors: Els Smit, Eric Verbruggen
Reviewers: Alexandra Kroll, Inge Werner
Key words: PNEC, quality standards, extrapolation, assessment factor

Introduction
The key question in environmental risk assessment is whether environmental exposure to chemicals leads to unacceptable risks for human and ecosystem health. This is assessed by comparing measured or predicted concentrations in water, soil, sediment, or air with a reference level. Reference levels represent a dose (intake rate) or concentration in water, soil, sediment or air below which unacceptable effects are not expected. The definition of 'no unacceptable effects' may differ between regulatory frameworks, depending on the protection goal. The focus of this section is the derivation of reference levels for aquatic ecosystems as well as for predators feeding on exposed aquatic species (secondary poisoning), but the derivation of reference values for other environmental compartments follows the same principles.
Terminology and concepts
Various technical terms are in use as reference values, e.g. the Predicted No Effect Concentration (PNEC) for ecosystems or the Acceptable Daily Intake (ADI) for humans (Section on Human toxicology). The term "reference level" is broad and generic and can be used independently of the regulatory context or protection goal. In contrast, the term "quality standard" is associated with some kind of legal status, e.g. inclusion in environmental legislation like the Water Framework Directive (WFD). Other terms exist, such as 'guideline value' or 'screening level', which are used in different countries to indicate triggers for further action. While the scientific basis of these reference values may be similar, their implementation and the consequences of exceedance are not. It is therefore very important to clearly define the context of the derivation and the terminology used when deriving and publishing reference levels.
PNEC
A frequently used reference level for ecosystem protection is the Predicted No Effect Concentration (PNEC). The PNEC is the concentration below which adverse effects on the ecosystem are not expected to occur. PNECs are derived per compartment and apply to the organisms that are directly exposed. In addition, for chemicals that accumulate in prey, PNECs for secondary poisoning of predatory birds and mammals are derived. The PNEC for direct ecotoxicity is usually based on results from single-species laboratory toxicity tests. In some cases, data from field studies or mesocosms may be included.
A basic PNEC derivation for the aquatic compartment is based on data from single-species tests with algae, water fleas and fish. Effects on the level of a complex ecosystem are not fully represented by effects on isolated individuals or populations in a laboratory set-up.
However, data from laboratory tests can be used to extrapolate to the ecosystem level if it is assumed that protection of ecosystem structure ensures protection of ecosystem functioning, and that effects on ecosystem structure can be predicted from species sensitivity.
Accounting for Extrapolation Uncertainty: Assessment Factor (AF) Approach
To account for the uncertainty in the extrapolation from single-species laboratory tests to effects on real-life ecosystems, the lowest available test result is divided by an assessment factor (AF). In establishing the size of the AF, a number of uncertainties must be addressed to extrapolate from single-species laboratory data to a multi-species ecosystem under field conditions. These uncertainties relate to intra- and inter-laboratory variation in toxicity data, variation within and between species (biological variance), test duration, and differences between the controlled laboratory set-up and the variable field situation. The value of the AF depends on the number of studies, the diversity of species for which data are available, the type and duration of the experiments, and the purpose of the reference level. Different AFs are needed for reference levels for e.g. intermittent release, short-term concentration peaks or long-term (chronic) exposure. In particular, reference levels for intermittent release and short-term exposure may be derived on the basis of acute studies, but short-term tests are less predictive for a reference level for long-term exposure, and larger AFs are needed to cover this. Table 1 shows the generic AF scheme that is used to derive PNECs for long-term exposure of freshwater organisms in the context of the European regulatory framework for industrial chemicals (REACH; see Section on REACH environment). This scheme is also applied for the authorisation of biocidal products and pharmaceuticals, and for the derivation of long-term water quality standards for freshwater under the EU Water Framework Directive. Further details on the application of this scheme, e.g. how to compare acute and chronic data and how to deal with irregular datasets, are presented in guidance documents (see suggested reading: EC, 2018; ECHA, 2008). Similar schemes exist for marine waters, sediment, and soil. However, for the latter two compartments, often too little experimental information is available, and risk limits have to be calculated by extrapolation from aquatic data using the Equilibrium Partitioning concept. The derivation of Regulatory Acceptable Concentrations (RAC) for plant protection products (PPPs) is also based on the extrapolation of laboratory data, but follows a different approach, focussing on generating data for specific taxonomic groups and taking account of the mode of action of the PPP (see suggested reading: EFSA, 2013).

Table 1. Basic assessment factor scheme used for the derivation of PNECs for freshwater ecosystems in several European regulatory frameworks. Consult the original guidance documents for full schemes and additional information (see suggested reading: EC, 2018; ECHA, 2008).

Available data | Assessment factor
At least one short-term L(E)C50 from each of three trophic levels (fish, invertebrates (preferably Daphnia) and algae) | 1000
One long-term EC10 or NOEC (either fish or Daphnia) | 100
Two long-term results (e.g. EC10 or NOECs) from species representing two trophic levels (fish and/or Daphnia and/or algae) | 50
Long-term results (e.g. EC10 or NOECs) from at least three species (normally fish, Daphnia and algae) representing three trophic levels | 10

Application of Species Sensitivity Distribution (SSD) and Other Additional Data
The AF approach was developed to account for the uncertainty arising from extrapolation from (potentially limited) experimental datasets. If enough data are available for species other than algae, daphnids and fish, statistical methods can be applied to derive a PNEC. Within the concept of the species sensitivity distribution (SSD), the distribution of the sensitivity of the tested species is used to estimate the concentration at which 5% of all species in the ecosystem is affected (HC5; see section on SSDs). When used for regulatory purposes in European regulatory frameworks, the dataset should meet certain requirements regarding the number of data points and the representation of taxa, and an AF is applied to the HC5 to cover the remaining uncertainty of the extrapolation from lab to field.
Where available, results from semi-field experiments (mesocosms, see section on Community ecotoxicology) can also be used, either on their own or to underpin the PNEC derived from the AF or SSD approach. SSDs and mesocosm studies are also used in the context of the authorisation of PPPs.
Reference levels for secondary poisoning
Substances might be toxic to wildlife because of bioaccumulation in prey or a high intrinsic toxicity to birds and mammals. If this is the case, a reference level for secondary poisoning is derived for a simple food chain: water → fish or mussel → predatory bird or mammal. The toxicity data from bird or mammal tests are transformed into safe concentrations in prey. This can be done by simply recalculating concentrations in laboratory feed into concentrations in fish using default conversion factors (see e.g. ECHA, 2008). For the derivation of water quality standards under the WFD, a more sophisticated method was introduced that uses knowledge of the energy demand of predators and the energy content of their food to convert laboratory data to the field situation. The inclusion of other, more complex and sometimes longer food chains is also possible, for which field bioaccumulation factors are used rather than laboratory-derived values.
EC (2018). Common Implementation Strategy for the Water Framework Directive (2000/60/EC). Guidance Document No. 27. Technical Guidance for Deriving Environmental Quality Standards. Updated version 2018. Brussels, Belgium. European Commission. circabc.europa.eu/ui/group/9...7a2a6b/details
ECHA (2008). Guidance on information requirements and chemical safety assessment. Chapter R.10: Characterisation of dose [concentration]-response for environment. Helsinki, Finland. European Chemicals Agency. May 2008. //echa.europa.eu/documents/10162/13632/information_requirements_r10_en.pdf/bb902be7-a503-4ab7-9036-d866b8ddce69
EFSA (2013). Guidance on tiered risk assessment for plant protection products for aquatic organisms in edge-of-field surface waters. EFSA Journal 11: 3290. efsa.onlinelibrary.wiley.com...efsa.2013.3290
Traas, T.P., Van Leeuwen, C. (2007). Ecotoxicological effects. In: Van Leeuwen, C., Vermeire, T.C. (Eds.). Risk Assessment of Chemicals: an Introduction, Chapter 7. Springer.
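As a minimal illustration of how the scheme in Table 1 turns a dataset into a PNEC, the Python sketch below encodes the four rows of the table as rules. This is a deliberate simplification: the real guidance also weighs acute against chronic data, checks which taxa the chronic data cover, and allows expert judgement. All toxicity values in the example are invented.

```python
# Simplified PNEC derivation in the spirit of Table 1.
# Real assessments involve additional checks (see EC, 2018; ECHA, 2008).

def derive_pnec(acute, chronic):
    """Derive a freshwater PNEC (simplified sketch).

    acute:   dict, trophic level -> lowest short-term L(E)C50 (ug/L)
    chronic: dict, trophic level -> lowest long-term EC10/NOEC (ug/L)
    """
    if len(chronic) >= 3:                                   # AF 10
        return min(chronic.values()) / 10
    if len(chronic) == 2:                                   # AF 50
        return min(chronic.values()) / 50
    if len(chronic) == 1:                                   # AF 100
        return min(chronic.values()) / 100
    if {"fish", "invertebrates", "algae"} <= set(acute):    # AF 1000
        return min(acute.values()) / 1000
    raise ValueError("Insufficient data for PNEC derivation")

# Example: one chronic NOEC for an invertebrate of 12 ug/L -> AF of 100.
print(derive_pnec(acute={}, chronic={"invertebrates": 12.0}))  # 0.12 ug/L
```

The design point of the scheme is visible in the code: the more (chronic) trophic levels are covered, the smaller the assessment factor that is applied to the lowest value.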
What is a PNEC?
How is a basic PNEC commonly derived in Europe?
Why are assessment factors applied?
Which aspects are covered by the assessment factor?
Within the EU REACH/WFD regulatory framework, which assessment factor may be applied to derive a PNEC for freshwater if you have one LC50 value for Oncorhynchus mykiss, one EC50 value for Daphnia magna, one EC10 value for Oncorhynchus mykiss, and one NOEC value for Pseudokirchneriella subcapitata?

Authors: Leo Posthuma, Dick de Zwart
Reviewers: Ad Ragas, Keith Solomon
Keywords: Species Sensitivity Distribution (SSD), benchmark concentration, Potentially Affected Fraction of species (PAF)

The relationship between dose or concentration (X) and response (Y) is key in risk assessment of chemicals (see section on Concentration-response relationships). Such relationships are often determined in laboratory toxicity tests: a selected species is exposed under controlled conditions to a series of increasing concentrations to determine endpoints such as the No Observed Effect Concentration (NOEC), the EC50 (the concentration causing 50% effect on a studied endpoint such as growth or reproduction), or the LC50 (the concentration causing 50% lethality). For ecological risk assessment, multiple species are typically tested to characterise the (variation in) sensitivities across species or taxonomic groups within the ecosystem. In the mid-1980s it was observed that, like many natural phenomena, a set of ecotoxicity endpoint data, representing effect concentrations for various species, follows a bell-shaped statistical distribution. The cumulative distribution of these data is a sigmoid (S-shaped) curve. It was recognized that this distribution had particular utility for assessing, managing and protecting environmental quality regarding chemicals. The bell-shaped distribution was thereupon named a species sensitivity distribution (SSD). Since then, the use of SSD models has grown steadily. Currently, the model is used for various purposes, providing important information for decision-making.
Below, the dual utility of SSD models for environmental protection, assessment and management is shown first. Thereupon, the derivation and use of SSD models are elaborated in a stepwise sequence.
A species sensitivity distribution (SSD) is a distribution describing the variation in sensitivity of multiple species exposed to a hazardous compound. The statistical distribution is often plotted using a log-scaled concentration axis (X) and a cumulative probability axis (Y, varying from 0 to 1). In a typical dataset, different species (e.g. three test values for algal species, two for invertebrate species and two for fish species) have different sensitivities to the studied chemical. First, the ecotoxicity data are collected and log10-transformed. Second, the data set can be visually inspected by plotting the bell-shaped distribution of the log-transformed data; deviations from the expected bell shape can be identified visually in this step. They may originate from causes such as a low number of data points, or be indicative of a selective mode of action of the toxicant, such as a high sensitivity of insects to insecticides.
Third, common statistical software for deriving the two parameters of the log-normal model (the mean and the standard deviation of the log-transformed ecotoxicity data) can be applied, or the SSD can be described with a dedicated software tool such as ETX (see below), including a formal evaluation of the goodness of fit of the model to the data. With the estimated parameters, the fitted model can be plotted, often in the intuitively attractive form of the S-shaped cumulative distribution. This curve serves two purposes. First, the curve can be used to derive a so-called hazardous concentration on the X-axis: a benchmark concentration that can be used as a regulatory criterion to protect the environment (Y→X). That is, chemicals with different toxicities have different SSDs, with the more hazardous compounds plotted to the left of the less hazardous compounds. By selecting a protection level on the Y-axis, representing a certain fraction of species affected (e.g. 5%), one derives the compound-specific concentration standard. Second, one can derive the fraction of tested species probably affected at an ambient concentration (X→Y), which can be measured or modelled. Both uses are popular in contemporary environmental protection, risk assessment, and management.
The SSD model for a chemical and an environmental compartment (e.g. surface water, soil or sediment) is derived from pertinent ecotoxicity data. These are typically extracted from the scientific literature or from ecotoxicity databases. Examples of such databases are the U.S. EPA's ECOTOX database, the European REACH data sets and the EnviroTox database, which contains quality-evaluated studies. The researcher selects the chemical and the compartment of interest, and subsequently extracts all test data for the appropriate endpoint (e.g. ECx values). The set of test data is tabulated and ranked from most to least sensitive. Multiple data for the same species are assessed for quality and only the best data are used. If there is more than one toxicity value for a species after the selection process, the geometric mean value is commonly derived and used: a species should only be represented once in the SSD. Data are often available for frequently tested species, representing different taxonomic and/or trophic levels. A well-known triplet of frequently tested species is algae, daphnids and fish, as this triplet is a required minimum set for various regulations in the realm of chemical safety assessment (see section on Regulatory frameworks). For some compounds more than a hundred test data are available, whilst for most compounds only few data of acceptable quality exist.
Standard statistical software (a spreadsheet program) or a dedicated software tool such as ETX can be used to derive an SSD from the available data. Commonly, the fit of the model to the data set is checked to avoid misinterpretation. Misfit may be shown by common statistical testing (goodness-of-fit tests) or by visual inspection and ecological interpretation of the data points. That is, when a chemical specifically affects one group of species (e.g. insects having a high sensitivity to insecticides), the user may decide to derive SSD models for specific groups of species. In doing so, the outcome will consist of two or more SSDs for a single compound (e.g. an SSD for insects and an SSD for the other species when the compound is an insecticide, whilst the latter might be split further if appropriate).
These may show a better goodness of fit of the model to the data but, more importantly, they reflect the use of key knowledge of mode of action and biology prior to 'blindly' applying the model fit procedure.
The oldest use of the SSD model is the derivation of reference levels such as the PNEC (Y→X). That is, given the policy goal to fully protect ecosystems against adverse effects of chemical exposures (see section on Ecosystem services and protection goals), the protective use is as follows. First, the user defines which ecotoxicity data are used. In the context of environmental protection, these have often been NOECs or low-effect levels (ECx with low x, such as EC10) from chronic tests. This yields an SSD-NOEC or SSD-ECx. Then, the user selects a level of Y, that is, the maximum fraction of species for which the defined ecotoxicity endpoint (NOEC or ECx) may be exceeded, e.g. 0.05 (a fraction of 0.05 equals 5% of the species). Next, the user derives the hazardous concentration for 5% of the species (Y→X). At the HC5, 5% of the species are exposed to concentrations greater than their NOEC but, conversely, 95% of the species are exposed to concentrations less than their NOEC. It is often assumed that the structural and functional integrity of ecosystems is sufficiently protected at the HC5 level if the SSD is based on NOECs. Therefore, many authorities use this level to derive regulatory PNECs (Predicted No Effect Concentrations) or Environmental Quality Standards (EQS). Both concepts are used as official reference levels in risk assessment; the first is the preferred term in the context of prospective chemical safety assessments, and the second is used in retrospective environmental quality assessment. Sometimes an extra assessment factor varying between 1 and 5 is applied to the HC5 to account for remaining uncertainties. Using SSDs for a set of compounds yields a set of HC5 values, which, in fact, represent a relative ranking of the chemicals by their potential to cause harm.
The SSD model can also be used to explore how much damage is caused by environmental pollution. In this case, a predicted or measured ambient concentration is used to derive a Potentially Affected Fraction of species (PAF). The fraction ranges from 0 to 1 but, in practice, it is often expressed as a percentage (e.g. "24% of the species are likely affected"). In this approach, users often have monitored or modelled exposure data from various water bodies, or soil or sediment samples, so that they can evaluate whether any of the studied samples contains a concentration higher than the regulatory reference level (previous section) and, if so, how many species are affected. Evidently, the user must clearly express what type of damage is quantified, as damage estimates based on an SSD-NOEC or an SSD-EC50 quantify the fractions of species affected beyond the no-effect level and at the 50% effect level, respectively.
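The two directions of use (Y→X and X→Y) can be made concrete with a small sketch. Assuming a log-normal SSD, the HC5 and the PAF follow directly from the normal distribution of the log10-transformed toxicity data. The toxicity values below are invented; dedicated tools such as ETX additionally provide goodness-of-fit testing and confidence intervals.

```python
import math
from statistics import NormalDist

# Invented chronic NOECs (ug/L), one value per species.
noecs = [3.2, 8.1, 12.0, 25.0, 40.0, 95.0, 210.0]
logs = [math.log10(c) for c in noecs]

mu = sum(logs) / len(logs)
sd = math.sqrt(sum((x - mu) ** 2 for x in logs) / (len(logs) - 1))
ssd = NormalDist(mu, sd)   # log-normal SSD on the log10 scale

# Protective use (Y -> X): the HC5, i.e. the concentration at which
# 5% of species are exposed above their NOEC.
hc5 = 10 ** ssd.inv_cdf(0.05)

# Diagnostic use (X -> Y): the PAF at a measured ambient concentration.
ambient = 20.0  # ug/L, invented monitoring result
paf = ssd.cdf(math.log10(ambient))

print(f"HC5 = {hc5:.2f} ug/L; PAF at {ambient} ug/L = {paf:.0%}")
```

Because the PAF here is derived from an SSD based on NOECs, it quantifies the fraction of species exposed beyond their no-effect level, not the fraction suffering 50% effects.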
This use of SSDs for a set of environmental samples yields a set of PAF values, which, in fact, represent a relative ranking of the pollution levels at the different sites in their potential to cause harm.
SSD model outcomes are thus used in various regulatory and practical contexts: to derive protective benchmark concentrations, to quantify potentially affected fractions of species at ambient concentrations, and to rank chemicals or polluted sites by their potential to cause harm. Today, these three forms of use of SSD models have an important role in the practice of environmental protection, assessment and management on the global scale, which relates to their intuitive meaning, their ease of use, and the availability of a vast number of ecotoxicity data in global databases.
What is the basic concept underlying SSD models?
What is the main assumption underlying SSD models?
What is meant by "the dual utility of an SSD model" in environmental protection, assessment and management?
Given that the SSD model is a statistical description of ecotoxicological differences in sensitivity between species for a chemical, what is a critical step in the derivation and use of SSD model outputs?
Does an SSD describe or explain differences in species sensitivity for a chemical?
6.4: Diagnostic risk assessment approaches and tools
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/06%3A_Risk_Assessment_and_Regulation/6.04%3A_Diagnostic_risk_assessment_approaches_and_tools
Author: Michiel Kraak
Reviewers: Ad Ragas and Kees van Gestel
Keywords: hazard assessment, risk assessment, prognosis, diagnosis, effect-based monitoring, bioassays, effect-directed analysis, mesocosm, biomonitoring, TRIAD approach, eco-epidemiology

To determine whether organisms are at risk when exposed to certain concentrations of hazardous compounds in the field, the toxicity of environmental samples can be analysed. To this purpose, several approaches and techniques have been developed, known as diagnostic tools. The tools described in Sections 6.5.1-6.5.8 have in common that they make use of living organisms to assess environmental quality. This is generally achieved by performing bioassays, in which the selected test species are exposed to (concentrates or dilutions of) environmental samples, after which their performance (survival, growth, reproduction, etc.) is measured. The species selected as test organisms for bioassays are generally the same as those selected for toxicity tests (see section on Selection of ecotoxicity test organisms).
Each level of biological organization has its own battery of test methods. At the lowest level of biological organization, a wide variety of in vitro bioassays is available (see section Effect-based monitoring: in vitro bioassays). These comprise tests based on cell lines, but bacteria and zebrafish embryos are also employed. If the response of a bioassay to an environmental sample exceeds the predefined effect-based trigger value, the response is considered indicative of ecological risks. Yet, the compounds causing the observed toxicity are initially unknown. They can subsequently be elucidated with effect-directed analysis (see section Effect Directed Analysis): the sample causing the effect is subjected to fractionation and the fractions are tested again. This procedure is repeated until the sample is reduced to a few individual compounds, which can then be identified, allowing confirmation of their contribution to the observed toxic effects.
At higher levels of biological organization, a wide variety of in vivo tests and test organisms is available, including terrestrial and aquatic plants and animals (see section Effect-based monitoring: in vivo bioassays). Yet, different test species tend to respond very differently to specific toxicants and specific field-collected samples. Hence, the results of a single-species bioassay may not reliably reflect the risk of exposure to a specific environmental sample. To avoid over- and underestimation of environmental risks, it is therefore advisable to employ a battery of in vitro and in vivo bioassays. In a case study on effect-based water quality assessment, we showed the great potential of this approach, resulting in a ranking of sites based on ecological risks rather than on the absence or presence of compounds (see section Effect-based water quality assessment).
At the higher levels of biological organization, effect-based monitoring tools include bioassays performed in mesocosms (see section Community Ecotoxicology in practice) and in the field itself, the so-called in situ bioassays (see section Biomonitoring: in situ bioassays and contaminant concentrations in organisms). Cosm studies represent a bridge between the laboratory and the natural world.
The strength of mesocosms lies in the combination of ecological realism with the ability to manipulate different environmental parameters while still having the opportunity to replicate treatments.
In the field, the aim of biomonitoring is the in situ assessment of environmental quality on a regular basis in time, using living organisms (see section Biomonitoring: in situ bioassays and contaminant concentrations in organisms). Organisms are collected from reference sites and exposed in cages or on artificial substrates at the study sites, after which they are recollected and either their condition is analysed (in situ bioassay), or the internal concentrations of specific target compounds are measured, or both.
Finally, two approaches are introduced that help bridge policy goals and ecosystem responses to perturbation: the TRIAD approach and eco-epidemiology. The TRIAD approach is a tool for site-specific ecological risk assessment, combining and integrating information on contaminant concentrations, bioassay results and ecological field inventories in a 'weight of evidence' approach (see section TRIAD approach). Eco-epidemiology is defined as the study of the distribution and causation of impacts of multiple stressor exposures in ecosystems, and the application of this knowledge to reduce ecological impacts (see section Eco-epidemiology).
Define and distinguish hazard and risk.
Distinguish predictive tools (toxicity tests) and diagnostic tools (bioassays).
What is the essence of diagnostic risk assessment?
List and briefly describe bioassays at different levels of biological organisation.
Name and briefly describe two approaches that help bridge policy goals and ecosystem responses to perturbation.

Author: Timo Hamers
Reviewer: Beate Escher
Key words: effect-based monitoring; cell line; reporter gene assay; toxicity profile; trigger value

Effect-based monitoring
Diagnosis of the chemical status of the environment is traditionally performed by the analytical detection of a limited number of chemical compounds. Environmental quality is then assessed by making a compound-by-compound comparison between the measured concentration of an individual contaminant and its environmental quality standard (EQS). Such a compound-by-compound approach, however, cannot cover the full spectrum of contaminants, given the unknown identity of the vast majority of compounds released into the environment. It also ignores the presence of unknown breakdown products formed during degradation processes and of compounds with concentrations below the analytical limit of detection. Furthermore, it overlooks combined effects of contaminants present in the complex environmental mixture.
To overcome these shortcomings, effect-based monitoring has been proposed as a comprehensive and cost-effective strategy, complementary to chemical analysis, for the diagnosis of environmental chemical quality. In effect-based monitoring, the toxic potency of the complex mixture is determined as a whole by testing environmental samples in bioassays. Bioassays are defined as "biological test systems that consist of whole organisms or parts of organisms (e.g., tissues, cells, proteins), which show a measurable and potentially biologically relevant response when exposed to natural or xenobiotic compounds, or complex mixtures present in environmental samples" (Hamers et al., 2010).
Bioassays making use of whole organisms are further referred to as in vivo bioassays (in vivo means "while living"). In vivo bioassays have relatively high ecological relevance, as they provide information on survival, reproduction, growth, or behaviour of the species tested. In vivo bioassays are addressed in a separate section.
In vitro bioassays
Bioassays making use of tissues, cells or proteins are called in vitro bioassays (in vitro means "in glass"), as, in the past, they were typically performed in test tubes or petri dishes made from glass. Nowadays, in vitro bioassays are more often performed in microtiter plates containing multiple (6, 12, 24, 48, 96, 384, or 1536) test containers (called "wells") per plate. Most in vitro bioassays show a very mechanism-specific response, which is for instance indicative of the inhibition of a specific enzyme or the activation of a specific molecular receptor.
In addition to providing mechanism-specific information about the complex mixture present in the environment, in vitro bioassays have several other advantages. Small test volumes, for instance, make in vitro assays suitable for testing small samples. If sampling volumes are not restricted, the small volume of in vitro bioassays allows pre-concentrated samples (i.e. extracts) to be tested. Moreover, in vitro bioassays have short test durations (incubation periods usually range from 15 minutes to 48 hours) and can be performed in relatively high throughput, i.e. multiple samples can be tested per microtiter plate experiment. Microtiter plate experiments require an easy read-out (e.g. luminescence, fluorescence, optical density), which is typically a direct measure of the toxic potency to which the bioassay was exposed. Finally, using cells or proteins for toxicity testing raises fewer ethical objections than the use of intact organisms in in vivo bioassays.
Cell-based in vitro bioassays can make use of different types of cells. Cells can be isolated from animal tissue and grown in medium in cell culture flasks. When a flask grows full, cells can be diluted in fresh medium and distributed over several new flasks (i.e. "passaging"). For cells freshly isolated from animal tissue (called primary cells), however, the number of passages is limited, because the cells have a limited number of cell doublings. The use of primary cells in environmental monitoring is therefore not preferred, as the preparation of cell cultures is time-consuming and requires the use of animals. Moreover, the composition and activity of the cells may change from batch to batch. Instead, environmental monitoring often makes use of cell lines. A cell line is a cell culture derived from a single cell which has been immortalized, allowing the cell to divide indefinitely. Immortalization of cells is obtained by selecting a (mutated) cancer cell from a donor animal or human being, or by causing a mutation in a healthy cell after isolation, using chemicals or viruses. The advantage of a cell line is that all cells are genetically identical and can be used for an indefinite number of experiments. The drawback of cell lines is that the cells are cancer cells that do not behave like healthy cells in an intact organism.
For instance, cancer cells have lost their differentiated properties and have a short cell cycle due to increased proliferation (see section on In vitro toxicity testing).
Examples of in vitro bioassays
Reporter gene bioassays are a type of in vitro bioassay frequently used in effect-based monitoring. Such bioassays make use of genetically modified cell lines or bacteria that contain an incorporated gene construct encoding an easily measurable protein (i.e. the reporter protein). This gene construct is developed in such a way that its expression is triggered by a specific interaction between the toxic compound and a cellular receptor. If the receptor is activated by the toxic compound, transcription and translation of the reporter protein take place, which can easily be measured as a change in colour, fluorescence, or luminescence.
The most well-known reporter gene bioassays are steroid hormone-sensitive bioassays. These bioassays are based on the principle by which steroid hormones act, i.e. activation of a receptor protein followed by translocation of the hormone-receptor complex to the nucleus, where it binds to a hormone-responsive element of the DNA, thereby initiating transcription and translation of steroid hormone-dependent genes. In a hormone-responsive reporter gene bioassay, the reporter gene construct is also under transcriptional control of a hormone-responsive element. Activation of the steroid hormone receptor by an endocrine-disrupting compound thus leads to expression of the reporter protein, which can easily be measured. Estrogenic activity, for instance, is typically measured in cell lines in which a plasmid encoding the reporter protein luciferase is stably transfected into the cellular genome. Expression of this enzyme is under transcriptional control of an estrogen-responsive element (ERE). Upon exposure to an environmental sample, estrogenic compounds present in the sample may enter the cell and bind and activate the estrogen receptor (ER). The activated ER forms a dimer with another activated ER and is translocated to the nucleus, where the dimer binds to the ERE, causing transcription and translation of the luciferase reporter gene. After 24 hours, the exposure is terminated and the amount of luciferase enzyme can easily be quantified by lysing the cells and adding the energy source ATP and the substrate luciferin. Luciferin is converted by luciferase, a reaction associated with the emission of light (the same reaction as occurs in fireflies and glow-worms). The amount of light produced by the cells is quantified in a luminometer and is a direct measure of the estrogenic potency of the complex mixture to which the cells were exposed.
Another classic bioassay, for the detection of dioxin-like compounds, is the ethoxyresorufin-O-deethylase (EROD) bioassay. The EROD bioassay is an enzyme induction bioassay that makes use of a hepatic cell line (i.e. derived from liver cells). Similarly as described for estrogenic compounds, dioxin-like compounds can enter these cells upon exposure to an environmental sample, and bind and activate a receptor protein, in this case the aryl hydrocarbon receptor (AhR) (see section on Receptor interactions). The activated AhR is subsequently translocated to the nucleus, where it forms a dimer with another transcription factor (ARNT) that binds to the dioxin-responsive element (DRE), causing transcription and translation of dioxin-responsive genes. One of these genes encodes CYP1A1, a typical Phase I biotransformation enzyme.
Upon lysis of the cells and addition of the substrate ethoxyresorufin, CYP1A1 deethylates this substrate into resorufin, a fluorescent reaction product that can be measured easily. As such, the amount of fluorescence is a direct measure of the dioxin-like potency to which the cells were exposed.
Another classic bioassay is the acetylcholinesterase (AChE) inhibition assay for the detection of organophosphate and carbamate insecticides. By making a covalent bond to the active site of the AChE enzyme, these compounds are capable of inhibiting the hydrolysis of the neurotransmitter acetylcholine (ACh) (see section on Protein inactivation). The in vitro AChE inhibition assay makes use of the principle that AChE can also hydrolyse an alternative substrate, acetylthiocholine (ATCh), into acetic acid and thiocholine (TCh). AChE inhibition leads to a decreased rate of TCh formation, which can be measured using an indicator called Ellman's reagent. This indicator reacts with the thiol (-SH) group of TCh, resulting in a yellow product that can easily be measured photometrically. In the bioassay, purified AChE (commercially available, for instance, from electric eel) is incubated with an environmental sample in the presence of ATCh and Ellman's reagent. A decrease in the rate at which the yellow reaction product is formed is a direct measure of the inhibition of AChE activity.
Another bioassay, used to detect mutagenic compounds in environmental samples, is the Ames assay, which has been described in the section on Carcinogenicity and Genotoxicity.
Interpretation of the toxicity profile
In practice, multiple mechanism-specific in vitro bioassays are often combined into a test battery to cover the spectrum of toxicological endpoints in an (eco)system. As such, the battery can be considered a safety net that signals the presence of toxic compounds at low concentrations. However, the question of which combination of in vitro tests provides sufficient coverage of the toxicological endpoints of concern is still open.
Testing an environmental sample in a battery of mechanism-specific in vitro bioassays yields a toxicity profile of the sample, indicating its toxic potency towards different endpoints. Two main strategies have been described to interpret in vitro toxicity profiles in terms of risk. In the "benchmark strategy", the toxicity profiles are compared to one or more reference profiles. A reference profile may be defined as the profile generally observed in environmental samples from locations with good chemical and/or ecological quality. The benchmark approach indicates to what extent the observed toxicity profile deviates from a toxicity profile corresponding to the desired environmental quality. It also indicates the endpoints that are most affected by the environmental sample.
In the "trigger value strategy", the response of each individual bioassay is compared to a bioassay response level at which chemicals are not expected to cause adverse effects at higher levels of biological organization. This endpoint-specific "safe" bioassay response level is called an effect-based trigger (EBT) value. The method for deriving EBT values is still under development.
It can be based on different criteria, such as laboratory toxicity data, field concentrations, or EU environmental quality standards (EQS) of individual compounds, which are translated into bioassay-specific effect levels (see section on Effect-based water quality assessment).
In addition to the benchmark and trigger value approaches, which focus on environmental risk assessment, effect-based monitoring with in vitro bioassays can also be used for effect-directed analysis (EDA). EDA focuses on samples that cause bioassay responses that cannot be explained by the chemicals that were analysed in these samples. The goal of EDA is to detect and identify emerging contaminants that are responsible for the unexplained bioassay response and that are not chemically analysed because their presence or identity is unknown. In EDA, in vitro bioassay responses to fractionated samples are used to steer the chemical identification of unknown compounds with toxic properties in the bioassays (see section on Effect-Directed Analysis).
Further reading:
Hamers, T., Leonards, P.E.G., Legler, J., Vethaak, A.D., Schipper, C.A. (2010). Toxicity profiling: an integrated effect-based tool for site-specific sediment quality assessment. Integrated Environmental Assessment and Management 6, 761-773.
Name advantages and disadvantages of effect-based and chemical-based monitoring strategies.
Name at least three characteristics that make in vitro bioassays suitable for effect-based monitoring.
Can the principle of the EROD assay also be used to develop a reporter gene assay? Explain your answer.
In the benchmark approach (see text), toxicity profiles from sampling locations are compared to a reference profile. Should the reference profile always correspond to a clean situation? Motivate your answer.

Author: Marja Lamoree
Reviewers: Timo Hamers, Jana Weiss
Keywords: extraction, bioassay testing, fractionation, identification, confirmation

In general, the quality of the environment may be monitored by two complementary approaches: i) quantitative chemical analysis of selected (priority) pollutants and ii) effect-based monitoring using in vitro/in vivo bioassays. Compared to the more classical chemical-analytical approach that has been used for decades, effect-based monitoring is currently applied in an explorative manner and has not yet matured into a routinely implemented monitoring tool anchored in legislation. However, in an international framework, developments to formalize the role of effect-based monitoring and to standardize the use of bioassay testing for environmental quality assessment are underway.
A weakness of the chemical approach is that, because of the preselection of target compounds for quantitative analysis, other compounds of relevance for environmental quality may be missed. In comparison, inclusiveness is one of the advantages of effect-based monitoring: all compounds having a specific effect, and not only a few pre-defined ones, will contribute to the total measured biological activity (see section In vitro bioassays). In turn, the effect-based approach strongly benefits from chemical-analytical support to pinpoint which compounds are responsible for the observed activity and to enable measures for environmental protection, e.g. the reduction of the emission or discharge of a specific toxic compound into the environment.
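One common way to check whether the analysed chemicals explain an observed bioassay response is a mass balance of bioanalytical equivalents under the Concentration Addition concept: the activity predicted from analysed concentrations and relative potencies is compared with the activity measured in the bioassay. The sketch below illustrates the arithmetic; the relative potencies, concentrations and threshold are invented for illustration only.

```python
# Bioanalytical equivalents (BEQ) balance under Concentration Addition.
# BEQ_chem = sum of (relative potency x concentration) over analysed chemicals.
# If BEQ_chem is much lower than the activity measured in the bioassay
# (BEQ_bio), unknown actives are present: a candidate sample for EDA.

# Invented example: estrogenic activity in ng 17beta-estradiol equivalents/L.
analysed = {            # compound: (relative potency vs. estradiol, conc. ng/L)
    "estrone":     (0.2,  1.5),
    "bisphenol A": (1e-4, 800.0),
    "nonylphenol": (1e-5, 2000.0),
}

beq_chem = sum(rep * conc for rep, conc in analysed.values())
beq_bio = 0.9           # invented bioassay result, ng EEQ/L

explained = beq_chem / beq_bio
print(f"Chemically explained activity: {explained:.0%}")
if explained < 0.5:     # illustrative threshold, not a regulatory value
    print("Large unexplained activity -> candidate for effect-directed analysis")
```

This is exactly the situation described in the next paragraphs: when a substantial part of the measured activity remains unexplained, effect-directed analysis can be used to track down the responsible compounds.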
In Effect-Directed Analysis (EDA), the strengths of analytical-chemical techniques and effect-based testing are combined with the aim of identifying novel compounds that show activity in a biological analysis and that would have gone unnoticed using the chemical and the effect-based approaches separately. EDA proceeds in a number of steps, described in more detail below. There is no limitation regarding the sample matrix: EDA has been applied to e.g. water, soil/sediment and biota samples. It is used for in-depth investigations at locations that are suspected to be contaminated but where the compounds responsible for the observed adverse effects are not known. In addition to environmental quality assessment, EDA is applied in the fields of food safety analysis and drug discovery. Table 1 gives examples of EDA studies.
The first step is the preparation of an extract of the sample. For soil/sediment samples, a sieving step prior to the actual extraction may be necessary in order to remove large particles and obtain a sample that is well defined in terms of particle size (e.g. <200 μm). Examples of biota samples are whole-organism homogenates or parts of the organism, such as blood and liver. For the extraction of the samples, analytical techniques such as liquid/liquid or solid-phase extraction are applied to concentrate the compounds of interest and to remove matrix constituents that may interfere with the later steps of the EDA.
The choice of endpoint to include in an EDA study is very important, as it dictates the nature of the toxicity of the compounds that may be identified (see section on Toxicodynamics and Molecular Interaction). For application in EDA, typically in vitro bioassays carried out in multiwell (≥96-well) plates are used, because of their low cost, high throughput and ease of use (see section on In vitro bioassays), although sometimes in vivo assays (see section on In vivo bioassays) are applied too.

Table 1. Examples of EDA studies, including endpoint, type of bioassay, sample matrix and compounds identified.

Endpoint | Type of bioassay | Sample matrix | Type of compounds identified
Estrogenicity | Cell-based reporter gene | Sediment | Endogenic hormones
Anti-androgenicity | Cell-based reporter gene | Sediment | Plasticizers, organophosphorus flame retardants, synthetic fragrances
Anti-androgenicity | Cell-based reporter gene | Water | Pharmaceuticals, pesticides, plasticizers, flame retardants, UV filters
Mutagenicity | Bacterial luminescence reporter strain | Water | Benzotriazoles
Thyroid hormone disruption | Radioligand binding | Polar bear plasma | Metabolites of PCBs, nonylphenols
Photosystem II toxicity | Pulse Amplitude Modulation fluorometry | Water | Pesticides
Endocrine disruption | Snail reproduction | Sediment | Phthalates, synthetic fragrances, alkylphenols

Fractionation of the extract is achieved by the application of chromatography, resulting in the separation of the, in most cases, multitude of different compounds present in an extract of an environmental sample. Chromatographic separation is obtained after the migration of compounds through a sorbent bed. In most cases, the separation principle is based on the distribution of compounds between a liquid mobile phase and a solid stationary phase (liquid chromatography, or LC), but chromatographic separation using the partitioning between the gas phase and a sorbent bed (gas chromatography, or GC) is also possible.
At the end of the separation column, fractions can be collected at specified time intervals; these fractions are simpler in composition than the original extract, i.e. a reduction in the number of compounds per fraction is obtained. The collected fractions are tested in the bioassay, and the responsive fractions are selected for further chemical analysis and identification (step 4). The time intervals for fraction collection vary between a few minutes in older applications and a few seconds in newer applications of EDA, which enables fractionation directly into multiwell plates for high-throughput bioassay testing. Where fractions are collected over time intervals in the order of minutes, the fractions are still so complex that a second round of fractionation, to obtain fractions of reduced complexity, is often necessary for the identification of the compounds responsible for the observed effect.
Chemical analysis for the identification of the compounds that cause the effect in the bioassay is usually done by LC coupled to mass spectrometric (MS) detection. To obtain the high mass accuracy that facilitates compound identification, high-resolution mass spectrometry (HR-MS) is generally applied. Fractions obtained after one or two fractionation steps are injected into the LC-MS system. In studies where fractionation into multiwell plates is used (and thus small fractions in the order of microliters are collected), only one round of fractionation is applied. In these cases, identification and fraction collection can be done in parallel, using a splitter after the chromatographic column that directs part of the eluent from the column to the well plate and the other part to the MS. This is called high-throughput EDA (HT-EDA).
The use of HR-MS is necessary to establish the molecular weight with high accuracy (e.g. 119.12423 Dalton) and to derive the molecular formula (e.g. C6H5N3) of the compound. Optimally, HR-MS instrumentation is equipped with an MS-MS mode, in which compound fragmentation is induced by collisions with other molecules, resulting in fragments that are specific for the original compound. Fragmentation spectra obtained using the MS-MS mode of HR-MS instruments help to elucidate the structure of the compounds eluting from the column. Mass spectrometry vendor software, public/web-based databases and databases compiled in-house enable suspect screening to identify compounds that are known, e.g. because they are applied in consumer products or construction materials. When MS signals cannot be attributed to known compounds or their metabolites/transformation products, the identification approach is called non-target screening, in which additional identification techniques such as Nuclear Magnetic Resonance (NMR) may aid the identification. The identification process is complicated and often time-consuming, and results in a suspect list that needs to be evaluated for further confirmation of the identification.
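To illustrate what high mass accuracy buys in practice, the sketch below computes the mass error in parts per million (ppm) between a measured accurate mass and the theoretical monoisotopic masses of candidate molecular formulas. The candidate masses follow from standard monoisotopic atomic masses; the measured value is invented for illustration.

```python
# Mass error (ppm) between a measured accurate mass and candidate formulas.
# Monoisotopic masses of common elements (u).
MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def mono_mass(counts):
    """Monoisotopic mass for a formula given as {element: count}."""
    return sum(MASS[el] * n for el, n in counts.items())

candidates = {
    "C6H5N3":  {"C": 6, "H": 5, "N": 3},   # e.g. benzotriazole
    "C8H9N":   {"C": 8, "H": 9, "N": 1},
    "C4H9NO3": {"C": 4, "H": 9, "N": 1, "O": 3},
}

measured = 119.0485  # invented measured neutral monoisotopic mass (u)

for name, counts in candidates.items():
    theo = mono_mass(counts)
    ppm = (measured - theo) / theo * 1e6
    print(f"{name}: theoretical {theo:.5f} u, error {ppm:+.1f} ppm")
```

Only C6H5N3 matches within a few ppm here; the other formulas are tens to hundreds of ppm off, which is why sub-5-ppm mass accuracy so strongly narrows down the list of candidate formulas (in real workflows, the adduct or protonation state of the ion must also be taken into account).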
For an unequivocal confirmation of the identity of a tentatively identified compound, it is necessary to obtain a standard of the compound and to investigate whether its analytical-chemical behaviour corresponds to that of the tentatively identified compound in the environmental sample. In addition, the biological activity of the standard should be measured and compared with the earlier obtained data. If both the chemical analysis and the bioassay testing results support the identification, confirmation of the compound identity is achieved.
In principle, the confirmation step of an EDA study is very straightforward, but in current practice standards are mostly not commercially available. Dedicated synthesis is time-consuming and costly; therefore the confirmation step is often a bottleneck in EDA studies.
The application of EDA is suitable for samples collected at specific locations where comprehensive chemical analysis of priority pollutants and other relevant chemicals has already been conducted, and where ecological quality assessment has revealed that the local conditions are compromised (see other sections on Diagnostic risk assessment approaches and tools). Especially samples that show a significant difference between the observed (in vitro) bioassay response and the activity calculated according to the concept of Concentration Addition (see Section on Mixture Toxicity), using the relative potencies and the concentrations of compounds active in that bioassay, need further in-depth investigation. EDA can be implemented at these 'hotspots' of environmental contamination to unravel the identity of compounds that have an effect but were not included in the chemical monitoring of environmental quality. Knowledge of the main drivers of toxicity at a specific location supports the accurate decision-making that is necessary for environmental quality protection.
Draw a scheme of EDA and name the different steps.
What is the aim of the fractionation of an extract?
Describe the confirmation step of EDA.
Give an example of an EDA study with regard to endpoint, bioassay, matrix and type of compounds identified.
Explain why quantitative chemical analysis of known pollutants of e.g. a water sample is complementary to effect-based testing of that sample using e.g. an in vitro bioassay.

Effect-based monitoring: in vivo bioassays
Authors: Michiel Kraak, Carlos Barata
Reviewers: Kees van Gestel, Jörg Römbke
Key words: risk assessment, diagnosis, effect-based monitoring, in vivo bioassays, environmental compartment, bioassay battery

Introduction
To determine whether organisms are at risk when exposed to hazardous compounds present at contaminated field sites, the toxicity of environmental samples can be analysed. To this purpose, several diagnostic tools have been developed, including a wide variety of in vitro, in vivo and in situ bioassays (see sections on In vitro bioassays and on In situ bioassays). In vivo bioassays make use of whole organisms (in vivo means "while living"). The species selected as test organisms for in vivo bioassays are generally the same as those selected for single-species toxicity tests (see sections 4.3.4, 4.3.5, 4.3.6 and 4.3.7 on the Selection of ecotoxicity test organisms). Likewise, the endpoints measured in in vivo bioassays are the same as those in single-species ecotoxicity tests (see section on Endpoints). In vivo bioassays therefore have relatively high ecological relevance, as they provide information on the survival, reproduction, growth, or behaviour of the species tested. A major difference between toxicity tests and bioassays is the selection of the controls. In laboratory toxicity experiments the controls consist of non-spiked 'clean' test medium (see section on Concentration-response relationships). In bioassays, however, the choice of the controls is more complicated.
Non-treated test medium may be incorporated as a control in bioassays to check the health and quality of the test organisms. But control media, like standard test water or artificial soil and sediment, may differ in numerous aspects from natural environmental samples. Therefore, the control should preferably be a test medium that has exactly the same physicochemical properties as the contaminated sample, except for the chemical pollutants being present. This ideal situation, however, hardly ever exists. Hence, it is recommended to also incorporate environmental samples from less contaminated or uncontaminated reference sites into the bioassay, and to compare the responses of the organisms to samples from contaminated sites with those from reference sites. Alternatively, controls can be selected as the least contaminated environmental samples from a pollution gradient, or as the dilution required to obtain no effect; as dilution medium, artificial control medium or medium from a reference site can be used.
The most commonly used in vivo bioassays
For the soil compartment, the earthworms Eisenia fetida, E. andrei and Lumbricus rubellus, the enchytraeid Enchytraeus crypticus and the collembolan Folsomia candida are most frequently selected as in vivo bioassay test organisms. An example is the use of earthworms to assess the ecotoxicological effects of Pb-contaminated field soils taken from a soccer field (S), a bullet plot (B), grassland (G1, G3) and forest (F1-F3) sites near a shooting range. The pH of the grassland soils was near neutral (pH-CaCl2 = 6.5-6.8), but rather low (3.2-3.7) for all other field sites. Earthworms exposed to these soils showed a significantly reduced reproductive output at the most contaminated sites. At the less contaminated sites, earthworm responses were also affected by the difference in soil pH, leading to low juvenile numbers in the acid soil F0 but high numbers in the near-neutral reference R3 and the field soil G3. In fact, earthworm reproduction was highest in the latter soil, even though it contained an elevated concentration of 355 ± 54 mg Pb/kg dry soil. In soil G1, which contained almost twice as much Pb (656 ± 60 mg Pb/kg dry soil), reproduction was much lower, and also reduced compared to the control, suggesting the presence of an additional, unknown stressor (Luo et al., 2014).
For water, predominantly daphnids are employed, mainly Daphnia magna, but sometimes other cladoceran species or other aquatic invertebrates are selected. Bioassays with several primary producers are also available. An example is the exposure of the cladoceran Chydorus sphaericus to water samples in which the concentrations of cholinesterase inhibitors, as a proxy for the presence of insecticides, were also measured: the toxicity of the water samples was higher when the concentrations of insecticides were higher. Hence, in this case, the observed toxicity was well explained by the measured compounds. Yet, it has to be realized that this is an exception rather than a rule, since mostly a large portion of the toxic effects observed in surface waters cannot be attributed to compounds measured by water authorities; moreover, interactions are not covered by such analytical data (see section on Effect-based water quality assessment).
For sediments, oligochaetes and chironomids are selected as test organisms, but sometimes also rooting macrophytes and benthic diatoms.
An example is the exposure of chironomids (Chironomus riparius) to contaminated sediments. Whole-sediment bioassays with chironomids allow the assessment of sensitive, species-specific sublethal endpoints (see section on Chronic toxicity), in this case emergence: more chironomids emerged on the reference sediment than on the contaminated sediment, and the chironomids on the reference sediment also emerged faster than those on the contaminated sediment.

For sediment, benthic diatoms are also selected as in vivo bioassay test organisms. In one study, the growth of the benthic diatom Nitzschia perminuta was measured after 4 days of exposure to 160 sediment samples and compared to control growth. The growth of the diatoms ranged from higher than the controls to no growth at all, raising the question which deviation from the control should be considered a significant adverse effect.

In vivo bioassay batteries
Environmental quality assessments are often performed with a single test species, like the four examples given above. Yet, toxicity is species- and compound-specific, which may result in large margins of uncertainty in environmental quality assessments and, consequently, in over- or underestimation of environmental risks. Obvious examples include the presence of herbicides, which would only induce responses in bioassays with primary producers, and, the other way around, the presence of insecticides, which induce strong effects on insects and to a lesser extent on other animals, but would be completely overlooked in bioassays with primary producers. To reduce these uncertainties and to increase ecological relevance, it is therefore advised to incorporate several test species belonging to different taxa in a bioassay battery (see section on Effect-based water quality assessment).

References
Luo, W., Verweij, R.A., Van Gestel, C.A.M. (2014). Determining the bioavailability and toxicity of lead to earthworms in shooting range soils using a combination of physicochemical and biological assays. Environmental Pollution 185, 1-9.
Pieters, B.J., Bosman-Meijerman, D., Steenbergen, E., Van den Brandhof, E.-J., Van Beelen, P., Van der Grinten, E., Verweij, W., Kraak, M.H.S. Ecological quality assessment of Dutch surface waters using a new bioassay with the cladoceran Chydorus sphaericus. Proceedings Netherlands Entomological Society Meetings 19, 157-164.

Define in vivo bioassays and explain how in vivo bioassays are performed.
Give examples of the most commonly used in vivo bioassays per environmental compartment.
Motivate the necessity to incorporate several in vivo bioassays into a bioassay battery.

Effect-based water quality assessment
Authors: Milo de Baat, Michiel Kraak
Reviewers: Ad Ragas, Ron van der Oost, Beate Escher
Learning objectives:
You should be able to:
Keywords: Effect-based monitoring, water quality assessment, bioassay battery, effect-based trigger values, ecotoxicological risk assessment

Introduction
Traditional chemical water quality assessment is based on the analysis of a limited list of priority substances. Nowadays, the use of many of these compounds is restricted or banned, and concentrations of priority substances in surface waters are therefore decreasing. At the same time, industries have switched to a plethora of alternative compounds, which may enter the aquatic environment and seriously impact water quality.
Hence, priority substances lists are outdated, as the listed compounds are frequently absent, while many compounds of higher relevance are not listed as priority substances. Consequently, a large portion of the toxic effects observed in surface waters cannot be attributed to compounds measured by water authorities, and toxic risks to freshwater ecosystems are thus caused by mixtures of a myriad of (un)known, unregulated compounds. Understanding these risks requires a paradigm shift towards new monitoring methods that do not depend solely on chemical analysis of priority substances, but first consider the biological effects of the entire micropollutant mixture. Therefore, there is a need for effect-based monitoring strategies that employ bioassays to identify environmental risk. Responses in bioassays are caused by all bioavailable (un)known compounds and their metabolites, whether or not they are listed as priority substances.

Table 1. Example of the bioassay battery employed by the SIMONI approach of Van der Oost et al. that can be applied to assess surface water toxicity. Effect-based trigger values (EBT) were previously defined by Escher et al. (PAH, anti-AR and ER CALUX) and Van der Oost et al.

| Type | Bioassay | Endpoint | Reference compound | EBT | Unit |
|---|---|---|---|---|---|
| in situ | Daphnia in situ | Mortality | n/a | 20 | % mortality |
| in vivo | Daphniatox | Mortality | n/a | 0.05 | TU |
| in vivo | Algatox | Algal growth inhibition | n/a | 0.05 | TU |
| in vivo | Microtox | Luminescence inhibition | n/a | 0.05 | TU |
| in vitro CALUX | cytotox | Cytotoxicity | n/a | 0.05 | TU |
| in vitro CALUX | DR | Dioxin(-like) activity | 2,3,7,8-TCDD | 50 | pg TEQ/L |
| in vitro CALUX | PAH | PAH activity | benzo(a)pyrene | 6.21 | ng BapEQ/L |
| in vitro CALUX | PPARγ | Lipid metabolism inhibition | rosiglitazone | 10 | ng RosEQ/L |
| in vitro CALUX | Nrf2 | Oxidative stress | curcumin | 10 | µg CurEQ/L |
| in vitro CALUX | PXR | Toxic compound metabolism | nicardipine | 3 | µg NicEQ/L |
| in vitro CALUX | p53 -S9 | Genotoxicity | n/a | 0.005 | TU |
| in vitro CALUX | p53 +S9 | Genotoxicity (after metabolism) | n/a | 0.005 | TU |
| in vitro CALUX | ER | Estrogenic activity | 17β-estradiol | 0.1 | ng EEQ/L |
| in vitro CALUX | anti-AR | Antiandrogenic activity | flutamide | 14.4 | µg FluEQ/L |
| in vitro CALUX | GR | Glucocorticoid activity | dexamethasone | 100 | ng DexEQ/L |
| in vitro antibiotics | T | Bacterial growth inhibition (tetracyclines) | oxytetracycline | 250 | ng OxyEQ/L |
| in vitro antibiotics | Q | Bacterial growth inhibition (quinolones) | flumequine | 100 | ng FlqEQ/L |
| in vitro antibiotics | B+M | Bacterial growth inhibition (β-lactams and macrolides) | penicillin G | 50 | ng PenEQ/L |
| in vitro antibiotics | S | Bacterial growth inhibition (sulfonamides) | sulfamethoxazole | 100 | ng SulEQ/L |
| in vitro antibiotics | A | Bacterial growth inhibition (aminoglycosides) | neomycin | 500 | ng NeoEQ/L |

Bioassay battery
The routine application of effect-based monitoring largely relies on the ease of use, endpoint specificity, costs and size of the bioassays used, as well as on the ability to interpret the measured responses. To ensure sensitivity to a wide range of potential stressors, while still providing specific endpoint sensitivity, a successful bioassay battery like the example given in Table 1 can include in situ whole-organism assays (see section on Biomonitoring and in situ bioassays), and should include laboratory-based whole-organism in vivo assays (see section on In vivo bioassays) and mechanism-specific in vitro assays (see section on In vitro bioassays). Adverse effects in the whole-organism bioassays point to a general toxic pressure and represent a high ecological relevance. In vitro or small-scale in vivo assays targeting specific drivers of adverse effects allow for the focused identification and subsequent confirmation of (groups of) toxic compounds with specific modes of action. Bioassay selection can also be based on the Adverse Outcome Pathway (AOP) concept (see section on Adverse Outcome Pathways), which describes the relationships between molecular initiating events and adverse outcomes.
Combining different types of bioassays, ranging from whole-organism tests to in vitro assays targeting specific modes of action, can thus greatly aid in narrowing down the number of candidate compounds that cause environmental risks. For example, when bioanalytical responses are observed at a higher level of organisation, responses in specific molecular pathways can help to identify the (groups of) compounds responsible for the observed effects.

Toxic and bioanalytical equivalent concentrations
The severity of the adverse effect of an environmental sample in a bioassay is expressed as a toxic equivalent (TEQ) concentration for toxicity in in vivo assays, or as a bioanalytical equivalent (BEQ) concentration for responses in in vitro bioassays. TEQ and BEQ concentrations represent the joint toxic potency of all unknown chemicals present in the sample that have the same mode of action as the reference compound (see section on Toxicodynamics and molecular interactions) and act concentration-additively (see section on Mixture toxicity). They are expressed as the concentration of the reference compound that causes an effect equal to that of the entire mixture of compounds present in the environmental sample. A typical dose-response curve of a molecular in vitro assay is used for this translation: if, for instance, a water sample induces an effect of 38% in such an assay, this is equivalent to the effect of approximately 0.02 nM bioanalytical equivalents of the reference compound (the sketch below illustrates this interpolation numerically).

Effect-based trigger values
The identification of ecological risks from bioassay battery responses follows from the comparison of the bioanalytical signals to previously determined thresholds, defined as effect-based trigger values (EBT), which should differentiate between acceptable and poor water quality. Since bioassays potentially respond to the mixture of all compounds present in a sample, effect-based trigger values are expressed as toxic or bioanalytical equivalent concentrations of model compounds for the respective bioassay (Table 1).

Ranking of contaminated sites based on effect-based risk assessment
Once the toxic potency of a sample in a bioassay is expressed as a TEQ or BEQ concentration, this response can be compared to the effect-based trigger value for that assay, thus determining whether or not there is a potential ecological risk from contaminants in the investigated water sample. The ecotoxicity profiles of the surface water samples generated by a bioassay battery allow for the calculation and ranking of a cumulative ecological risk for the selected locations: each bioassay response per location is scored as being below or above the effect-based trigger value, and next the cumulative ecological risk per location is calculated. The resulting integrated ecological risk score allows ranking of the selected sites based on the presence of ecotoxicological risks rather than on the presence of a limited number of target compounds. This in turn permits water authorities to invest money where it matters most: the identification of compounds causing adverse effects at locations with indicated ecotoxicological risks.
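The two calculations described above, deriving a BEQ from the reference compound's concentration-response curve and aggregating trigger-value exceedances into a site ranking, can be made concrete in a short numerical sketch. The Python code below is a minimal illustration: it assumes a two-parameter log-logistic (Hill) response curve for the reference compound, and the curve parameters, the per-site BEQ results and the simple summed risk score are invented for illustration (they do not reproduce the SIMONI scoring rules).

```python
import numpy as np

def hill_response(conc, ec50, slope, top=100.0):
    """% effect at a given concentration of the reference compound
    (two-parameter log-logistic / Hill model)."""
    return top / (1.0 + (ec50 / conc) ** slope)

def beq_from_response(effect_pct, ec50, slope, top=100.0):
    """Invert the reference curve: the concentration of reference
    compound producing the same effect as the sample (= BEQ)."""
    return ec50 * (effect_pct / (top - effect_pct)) ** (1.0 / slope)

# Illustrative curve parameters, chosen so that a 38% effect maps to
# ~0.02 nM bioanalytical equivalents, as in the example in the text.
ec50, slope = 0.03, 1.2            # nM, dimensionless

beq = beq_from_response(38.0, ec50, slope)
assert abs(hill_response(beq, ec50, slope) - 38.0) < 1e-9
print(f"BEQ = {beq:.3f} nM reference-compound equivalents")

# Site ranking: compare BEQs to EBTs per assay and sum the ratios.
# Hypothetical results (rows: sites, columns: assays), each column
# expressed in the unit of its own EBT (cf. Table 1: ER, anti-AR, GR).
ebt  = np.array([0.10, 14.4, 100.0])
beqs = np.array([[0.05, 20.0,  30.0],    # site 1
                 [0.25, 10.0, 250.0],    # site 2
                 [0.02,  5.0,  40.0]])   # site 3
risk_ratio = beqs / ebt                  # ratio > 1 = EBT exceeded
cumulative = risk_ratio.sum(axis=1)      # simple cumulative risk score
ranking = np.argsort(cumulative)[::-1] + 1
print("sites, highest cumulative risk first:", ranking)
```

In this sketch, site 2 ranks highest because it exceeds two of the three trigger values. In practice, the curve parameters are fitted to calibration data for the reference compound, and the scoring of exceedances follows the rules of the monitoring framework used.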
Initially, the compounds causing the observed exceedance of effect-based trigger values will not be known; however, they can subsequently be elucidated with targeted or non-target chemical analysis, which will only be necessary at locations with indicated ecological risks. A potential follow-up step is to investigate the drivers of the observed effects by means of effect-directed analysis (see section on Effect-directed analysis).

References
Escher, B.I., Aït-Aïssa, S., Behnisch, P.A., Brack, W., Brion, F., Brouwer, A., et al. Effect-based trigger values for in vitro and in vivo bioassays performed on surface water extracts supporting the environmental quality standards (EQS) of the European Water Framework Directive. Science of the Total Environment 628-629, 748-765.
Van der Oost, R., Sileno, G., Suarez-Munoz, M., Nguyen, M.T., Besselink, H., Brouwer, A. SIMONI (Smart Integrated Monitoring) as a novel bioanalytical strategy for water quality assessment: part I - model design and effect-based trigger values. Environmental Toxicology and Chemistry 36, 2385-2399.

Additional reading
Altenburger, R., Ait-Aissa, S., Antczak, P., Backhaus, T., Barceló, D., Seiler, T.-B., et al. Future water quality monitoring - adapting tools to deal with mixtures of pollutants in water resource management. Science of the Total Environment 512-513, 540-551.
Escher, B.I., Leusch, F.D.L. Bioanalytical Tools in Water Quality Assessment. IWA Publishing, London (UK).
Hamers, T., Legradi, J., Zwart, N., Smedes, F., De Weert, J., Van den Brandhof, E.-J., Van de Meent, D., De Zwart, D. Time-Integrative Passive sampling combined with TOxicity Profiling (TIPTOP): an effect-based strategy for cost-effective chemical water quality assessment. Environmental Toxicology and Pharmacology 64, 48-59.

List 5 advantages of an effect-based approach over a compound-based approach for water quality assessment.
Motivate the necessity of employing a bioassay battery in effect-based monitoring approaches.
Explain how bioassay responses are expressed in terms of toxic equivalents of reference compounds (you may wish to draw a figure).
Translate the outcome of a bioassay battery into a ranking of contaminated sites based on ecotoxicological risk (you may wish to draw a figure).

Biomonitoring
Author: Michiel Kraak
Reviewers: Ad Ragas, Suzanne Stuijfzand, Lieven Bervoets
Learning objectives:
You should be able to:
Key words: Biomonitoring, test organisms, in situ bioassays, contaminant concentrations in organisms, environmental quality

Introduction
Several approaches and tools are available for diagnostic risk assessment. Tools specially developed for field assessments include the TRIAD approach (see section on the TRIAD approach), in situ bioassays and biomonitoring. In ecotoxicology, biomonitoring is defined as the use of living organisms for the in situ assessment of environmental quality. Passive and active biomonitoring are distinguished. For passive biomonitoring, organisms are collected at the site of interest, after which their condition is assessed and/or the concentrations of specific target compounds in their tissues are analysed. By comparing individuals from reference and contaminated sites, an indication of the impact on local biota at the site of interest is obtained. For active biomonitoring, organisms are collected from reference sites and exposed in cages or on artificial substrates at the study sites.
Ideally, reference organisms are simultaneously exposed at the site of origin to control for potential effects of the experimental set-up on the test organisms. As an alternative to field-collected animals, laboratory-cultured organisms may be employed. After exposure at the study sites for a certain period of time, the organisms are recollected and either their condition is analysed (in situ bioassay) or the concentrations of specific target compounds in the organisms are measured, or both.

The results of biomonitoring studies may be used for management decisions, e.g. when accumulation of contaminants has been demonstrated in the field, and especially when the sources of the pollution have been identified. However, the use of biomonitoring studies in environmental management has not been captured in formal protocols or guidelines, unlike for instance the Water Framework Directive (WFD) or - to a lesser extent - the TRIAD approach and effect-based quality assessments. Biomonitoring studies are typically applied on a case-by-case basis, and their application therefore strongly depends on the expertise and resources available for the assessment. The text below explains and discusses the most important aspects of biomonitoring techniques used in diagnostic risk assessment.

Selection of biomonitoring test organisms
The selection of adequate organisms for biomonitoring partly follows the selection of test organisms for toxicity tests (see section on the Selection of test organisms). Suitable biomonitoring organisms:

Based on the above-listed criteria, in the marine environment mussels belonging to the genus Mytilus are predominantly selected. The genus Mytilus has the additional advantage of a global distribution, although represented by different species, which facilitates the comparison of contaminant concentrations in organisms all around the globe. Lugworms have occasionally also been used for biomonitoring in marine systems. For freshwater, the cladoceran Daphnia magna is most frequently employed, although occasionally other species are selected, including mayflies, snails, worms, amphipods, isopods, caddisflies and fish. Given the positive experience with marine mussels, freshwater bivalves are also employed as biomonitoring organisms. Sometimes primary producers have been used, mainly periphyton. Due to the complexity of the sediment and soil compartments, few attempts have been made to expose organisms in situ in these compartments, mainly restricted to chironomids on sediment.

In situ exposure devices
An obvious requirement of in situ exposure devices is that the test organisms do not suffer (sub)lethal effects from the experimental setup. If the organisms are large enough, cages may be used, as for freshwater and marine mussels. For daphnids, a simple glass jar with a permeable lid suffices. For riverine insects, the device should allow the natural flow of the stream to pass, while preventing the organisms from escaping. For periphyton, glass discs are used as artificial substrata for algal attachment. The substrata are placed vertically in the water, parallel to the current, by means of polyethylene racks, each rack supporting a total of 170 discs.
After the colonization period, the periphyton-covered glass discs can be harvested, offering the unique possibility to perform laboratory or field experiments with entire algal and microbial communities, replicated 170 times.

In situ bioassays
After exposure at the study sites for a certain period of time, the organisms are recollected and their condition can be analysed. The endpoint is mostly survival, especially in routine monitoring programs. If the in situ exposure lasts long enough, effects on species-specific sublethal endpoints can also be assessed: for daphnids and snails this is reproduction, and for isopods growth. For aquatic insects (mayflies, caddisflies, damselflies, chironomids), emergence has been assessed as a sensitive, ecologically relevant endpoint (Barmentlo et al., 2018).

In situ bioassays come closest to the actual field situation: organisms are directly exposed at the site of interest and respond to all stressors jointly present. Yet, this is also the limitation of the approach. If organisms do respond, it remains unknown what causes the observed adverse effects; this could be (a combination of) any natural or anthropogenic physical or chemical stress factor. In situ bioassays are therefore best combined with laboratory bioassays (see section on Bioassays) and the analysis of physico-chemical parameters, conform the TRIAD approach (see section on the TRIAD approach). If the adverse effects are also observed in bioassays under controlled laboratory conditions, then poor water quality is most likely the cause, and the water sample may be subjected to suspect target analysis, non-target analysis or effect-directed analysis (EDA). If adverse effects are observed in situ but not in the laboratory, then the presence of hazardous compounds is most likely not the cause; instead, the effects may be attributable to e.g. low pH, low oxygen concentrations or high temperatures, which may be verified by physico-chemical analysis in the field.

Online biomonitoring
A specific application of in situ bioassays are online systems for continuous water quality monitoring. In these systems, behaviour is generally the endpoint (see section on Endpoints). Organisms are exposed in situ in an experimental device (on shore or on a boat) to a continuous flow of surface water. If the water quality changes, the organisms respond by changing their behaviour. Above a certain threshold an alarm may go off and, for instance, the intake of surface water for drinking water production can be temporarily stopped.

Contaminant concentrations in organisms
As an addition or an alternative to analysing the condition of the exposed biomonitoring organisms upon retrieval, contaminant concentrations in the organisms can be analysed. This has several advantages over chemical analysis of environmental samples. Biomonitoring organisms may be exposed for days to weeks at the site of interest, providing time-integrated measurements of contaminant concentrations, in contrast to the chemical analysis of grab samples; in this way, biomonitoring organisms actually serve as 'biological passive samplers' (see section on Experimental methods of assessing available concentrations of organic chemicals). Another advantage of measuring contaminant concentrations in organisms is that organisms only take up the bioavailable (fraction of) substances, which is ecologically highly relevant information that remains unknown if chemical analysis is performed on water, sediment, soil or air samples.
Yet, elevated concentrations in organisms do not necessarily imply toxic effects, and these measurements are therefore best complemented by determining the condition of the organisms, as described above. Moreover, analysing contaminants in organisms may be more expensive than measurements in environmental samples, due to a more complex sample preparation. Weighing the advantages and disadvantages, the explicit strength of biomonitoring programs is that they provide insight into the spatial and temporal variation of bioavailable contaminant concentrations. Two examples illustrate this: PCB concentrations measured in zebra mussels differed markedly between sampling sites in Flanders, Belgium (Bervoets et al., 2004), and biofilms translocated from a reference to a polluted site and vice versa showed rapid (within 2 wk) Cd accumulation and depuration, respectively (Ivorra et al., 1999).

References
Barmentlo, S.H., Parmentier, E.M., De Snoo, G.R., Vijver, M.G. (2018). Thiacloprid-induced toxicity influenced by nutrients: evidence from in situ bioassays in experimental ditches. Environmental Toxicology and Chemistry 37, 1907-1915.
Bervoets, L., Voets, J., Chu, S.G., Covaci, A., Schepens, P., Blust, R. (2004). Comparison of accumulation of micropollutants between indigenous and transplanted zebra mussels (Dreissena polymorpha). Environmental Toxicology and Chemistry 23, 1973-1983.
Blanck, H. A simple, community level, ecotoxicological test system using samples of periphyton. Hydrobiologia 124, 251-261.
Ivorra, N., Hettelaar, J., Tubbing, G.M.J., Kraak, M.H.S., Sabater, S., Admiraal, W. (1999). Translocation of microbenthic algal assemblages used for in situ analysis of metal pollution in rivers. Archives of Environmental Contamination and Toxicology 37, 19-28.
Stuijfzand, S.C., Engels, S., Van Ammelrooy, E., Jonker, M. Caddisflies (Trichoptera: Hydropsychidae) used for evaluating water quality of large European rivers. Archives of Environmental Contamination and Toxicology 36, 186-192.
Vuori, K.M. Species- and population-specific responses of translocated hydropsychid larvae (Trichoptera, Hydropsychidae) to runoff from acid sulphate soils in the River Kyronjoki, western Finland. Freshwater Biology 33, 305-318.

Define biomonitoring.
Explain the difference between passive and active biomonitoring.
What can be measured in biomonitoring organisms after (re)collection?
List the characteristics of suitable biomonitoring organisms.
Name the advantage and disadvantage of in situ bioassays.
Name the advantages and disadvantages of measuring contaminant concentrations in organisms.

The TRIAD approach
Author: Michiel Rutgers
Reviewers: Kees van Gestel, Michiel Kraak, Ad Ragas
Learning goals:
You should be able to:
Keywords: Triad, site-specific ecological risk assessment, weight of evidence

Like the other diagnostic tools described in the previous sections (see sections on Effect-based monitoring: in vivo bioassays and in vitro bioassays, Effect-directed analysis, Effect-based water quality assessment and Biomonitoring), the TRIAD approach is a tool for site-specific ecological risk assessment of contaminated sites (Jensen et al., 2006; Rutgers and Jensen, 2011). Yet, it differs from the previous approaches by combining and integrating different techniques through a 'weight of evidence' approach.
To this purpose, the TRIAD combines information on contaminant concentrations (environmental chemistry), on the toxicity of the mixture of chemicals present at the site ((eco)toxicology), and on observed ecological effects (ecology).

The mere presence of contaminants is only an indication that ecological effects may occur. Additional data can help to better assess the ecological risks. For instance, information on the actual toxicity of a contaminated site can be obtained from the exposure of test organisms to (extracts of) environmental samples (bioassays), while information on ecological effects can be obtained from an inventory of the community composition at the specific site. When these disciplines converge to corresponding levels of ecological effects, a weight of evidence is established, making it possible to finalize the assessment and to support a decision on the management of the contaminated site.

The TRIAD approach thus combines the information obtained from three lines of evidence (LoE): environmental chemistry, (eco)toxicology and ecology. The three lines of evidence form a weight of evidence when they converge: when the independent lines of evidence indicate a comparable risk level, there is sufficient evidence for advising decision makers about the ecological risk at a contaminated site. When the risk information obtained from the three lines of evidence does not converge, uncertainty is large, and further investigations are required to provide an unambiguous advice.

Table 1. Basic data for site-specific environmental risk assessment (SS-ERA), sorted per line of evidence (LoE). Data and methods are described in Van der Waarde et al. and Rutgers et al.
Tests and abbreviations used in the table:

The results of a site-specific ecological risk assessment (SS-ERA) applying the TRIAD approach are first organized in basic tables for each sample and line of evidence separately; Table 1 shows an example. This table also collects supporting data, such as soil pH and organic matter content. Subsequently, these basic data are processed into ecological risk values by applying a risk scale running from zero (no effect) to one (maximum effect). An example of a metric used is the multi-substance Potentially Affected Fraction of species (msPAF) for the mixture of contaminants (see section on SSDs). These risk values are then collected in a TRIAD table (Table 2), for each endpoint separately, integrated per line of evidence, and finally integrated over the three lines of evidence, as illustrated in the sketch below. The level of agreement between the three lines of evidence is also scored. Weighting values are applied, e.g. equal weights for all ecological endpoints (depending on the number of methods and endpoints) and equal weights for each line of evidence (33%). When differential weights are preferred, for instance when some data are judged unreliable or when some endpoints are considered more important than others, the respective weight factors and the arguments for applying them must be provided in the same table and accompanying text.

Table 2. Soil Quality TRIAD table demonstrating scaled risk values for two contaminated sites (A, B) and a reference site (based on real data, for illustration purposes only). Risk values are collected per endpoint, grouped according to the respective Lines of Evidence (LoE), and finally integrated into a TRIAD value for risks. The deviation indicates the level of agreement between the LoE (default threshold 0.4). For site B, a Weight of Evidence (WoE) is demonstrated (D < 0.4), making decision support feasible. By default, equal weights can be used throughout. Differential weights should be indicated in the table and described in the accompanying text.
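The integration step shown in Table 2 can be illustrated with a small numerical sketch. The sketch below assumes risk values that have already been scaled from 0 (no effect) to 1 (maximum effect), applies equal weights within and between the lines of evidence, and uses the standard deviation between the three LoE values as a simple agreement statistic; the numbers are hypothetical, and the exact deviation formula of implementations such as ISO 19204 may differ from this simplification.

```python
import numpy as np

# Hypothetical scaled risk values (0 = no effect, 1 = maximum effect)
# for one site, grouped per line of evidence (LoE).
loe_risks = {
    "chemistry":  [0.6, 0.8],        # e.g. msPAF-based metrics
    "toxicology": [0.4, 0.5, 0.3],   # e.g. bioassay endpoints
    "ecology":    [0.7, 0.6],        # e.g. field community metrics
}

# Equal weights per endpoint within a LoE, equal weights (1/3) per LoE.
loe_means = {loe: np.mean(v) for loe, v in loe_risks.items()}
integrated_risk = np.mean(list(loe_means.values()))

# Simplified agreement statistic: spread between the three LoE values
# (small spread = convergence = weight of evidence).
deviation = np.std(list(loe_means.values()))
verdict = "weight of evidence" if deviation < 0.4 else "no convergence"

print({loe: round(r, 2) for loe, r in loe_means.items()})
print(f"integrated TRIAD risk = {integrated_risk:.2f}")
print(f"deviation between LoE = {deviation:.2f} -> {verdict}")
```

With these numbers, the three lines of evidence converge (deviation of about 0.13, below the 0.4 threshold), so the integrated risk value of about 0.58 could support a management decision; a larger spread would instead call for further investigation.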
References
ISO. ISO 19204: Soil quality - Procedure for site-specific ecological risk assessment of soil contamination (soil quality TRIAD approach). International Standardization Organization, Geneva. https://www.iso.org/standard/63989.html
Jensen, J., Mesman, M. (Eds.) (2006). LIBERATION, Ecological risk assessment of contaminated land, decision support for site specific investigations. ISBN 90-6960-138-9, Report 711701047, RIVM, Bilthoven, The Netherlands.
Rutgers, M., Bogte, J.J., Dirven-Van Breemen, E.M., Schouten, A.J. Locatiespecifieke ecologische risicobeoordeling - praktijkonderzoek met een Triade-benadering. RIVM-rapport 711701026, Bilthoven.
Rutgers, M., Jensen, J. (2011). Site-specific ecological risk assessment. Chapter 15 in: Swartjes, F.A. (Ed.), Dealing with Contaminated Sites - from Theory towards Practical Application. Springer, Dordrecht, pp. 693-720.
Van der Waarde, J.J., Derksen, J.G.M., Peekel, A.F., Keidel, H., Bloem, J., Siepel, H. Risicobeoordeling van bodemverontreiniging met behulp van een triade benadering met chemische analyses, bioassays en biologische veldinventarisaties. Eindrapportage NOBIS 98-1-28, Gouda.

A sediment sample was analyzed for Priority Hazardous Substances (PHSs), but none were detected. Yet, this sediment sample caused high mortality in laboratory bioassays with three sediment-inhabiting species, and at the site where the sample was taken, biodiversity was very low. Explain these observations.
In a sediment sample, the total concentration of Priority Hazardous Substances (PHSs) was shown to be very high. Yet, this sediment sample caused no mortality in laboratory bioassays with three sediment-inhabiting species. Moreover, at the sampling site in the field, species-rich communities were observed. Explain these observations.
What is the added value of using bioassays and field observations over chemical analysis when assessing the potential risk of a contaminated site?
What is the added value of performing an assessment along three independent Lines of Evidence (LoE)?

Eco-epidemiology
Authors: Leo Posthuma, Dick de Zwart
Reviewers: Allan Burton, Ad Ragas
Learning objectives:
You should be able to:
Keywords: eco-epidemiology, mixture pollution, diagnosis, impact magnitude, probable causes, validation

Approaches for environmental protection, assessment and management differ between 'classical' stressors (such as excess nutrients and pH) and chemical pollution. For the 'classical' environmental stress factors, ecologists use monitoring data to develop concepts and methods to prevent and reduce impacts. Although there are some clear-cut examples of chemical pollution impacts [e.g., the decline in vulture populations in South East Asia due to diclofenac (Oaks et al., 2004), and the suite of examples in the book 'Silent Spring' (Carson, 1962)], ecotoxicologists have commonly assessed the stress from chemical pollution by evaluating exposures vis-à-vis laboratory toxicity data. Current pollution often consists of complex mixtures of chemicals, with highly variable patterns in space and time. This poses problems when one wants to evaluate whether observed impacts in ecosystems can be attributed to chemicals or their mixtures. Eco-epidemiological methods have been established to discern such pollution stress. These methods provide the diagnostic tools to identify the impact magnitude and the key chemicals that cause impacts in ecosystems.
The use of these methods is also relevant for validating the laboratory-based risk assessment approaches developed in ecotoxicology.

Risk assessments of chemicals provide insight into expected exposures and impacts, commonly for separate chemicals. These are predictive outcomes with high relevance for decision making on environmental protection and management. The validation of those risk assessments is key to avoiding wrong protection and management decisions, but it is complex: it consists of comparing predicted risk levels to observed effects, which begs the question how to discern the effects of chemical pollution in the field. This question can be answered by combining the principles of ecological bio-assessment with those of human epidemiology. A bio-assessment is a study of stressors and ecosystem attributes, made to delineate causes of impacts via (often statistical) associations between biotic responses and particular stressors. Epidemiology is defined as the study of the distribution and causation of health and disease conditions in specified populations. Applied epidemiology serves as a scientific basis for counteracting the spread of human health problems. Dr. John Snow is often referred to as the 'father of epidemiology'. Based on observations on the incidence, locations and timing of the 1854 cholera outbreak in London, he attributed the disease to contaminated water taken from the Broad Street pump well, counteracting the prevailing idea that the disease was transmitted via air. His proposals to control the disease were effective. Likewise, eco-epidemiology - in its ecotoxicological context - has been defined as the study of the distribution and causation of impacts of multiple stressor exposures in ecosystems. In its applied form, it supports the reduction of ecological impacts of chemical pollution. Human-health eco-epidemiology, in contrast, is concerned with environment-mediated disease.

The first mention of eco-epidemiological analyses of chemical pollution in the literature stems from 1984 (Bro-Rasmussen and Løkke, 1984). Those authors described eco-epidemiology as a discipline necessary to validate the risk assessment models and approaches of ecotoxicology. In its initial years, progress in eco-epidemiological research was slow due to practical constraints such as a lack of monitoring data, computational capacity and epidemiological techniques.

Current eco-epidemiological studies in ecotoxicology aim to diagnose the impacts of chemical pollution in ecosystems, and utilize a combination of approaches to diagnose the role of chemical mixtures in causing ecological impacts in the field. The combination of approaches consists of:
1. Collection of monitoring data on abiotic characteristics and the occurrence and/or abundance of biotic species, for the environmental compartment under study;
2. If needed: data optimization, usually to align abiotic and biotic monitoring data, including the chemicals;
3. Statistical analysis of the data set using eco-epidemiological techniques to delineate impacts and probable causes, according to the approaches followed in 'classical' ecological bio-assessments;
4. Interpretation and use of the outcomes, either for validation of ecotoxicological models and approaches, or for control of the impacts sensu Dr. Snow.
Although impacts of chemicals in the environment were known before 1962, Rachel Carson's book Silent Spring (see section on the History of environmental toxicology) can be seen as an early and comprehensive eco-epidemiological study that synthesized the available information on impacts of chemicals in ecosystems. She considered the effects of chemicals a novel force in natural selection when she wrote: "If Darwin were alive today the insect world would delight and astound him with its impressive verification of his theories of survival of the fittest. Under the stress of intensive chemical spraying the weaker members of the insect populations are being weeded out."

Clear examples of chemical impacts on species are still reported. Amongst the best-known examples is a study on vultures: the population of Indian vultures declined by more than 95% due to exposure to diclofenac, which was intensively used as a veterinary drug (Oaks et al., 2004). The analysis of chemical impacts in nature, however, becomes more complex over time. The diversity of chemicals produced and used has vastly increased, and environmental samples contain thousands of chemicals, often at low concentrations. Hence, contemporary eco-epidemiology is complex. Nonetheless, various studies have demonstrated that contemporary mixture exposures affect species assemblages. Starting from large-scale monitoring data and following the four steps mentioned above, De Zwart et al. were able to show that effects on fish species assemblages could be attributed to both habitat characteristics and chemical mixtures. Kapo and Burton Jr showed the impacts of multiple stressors and chemical mixtures on aquatic species assemblages with similar types of data, but slightly different techniques. Eco-epidemiological studies of the effects of chemicals and their mixtures currently cover different geographies, species groups, stressors and chemicals/mixtures. The potential utility of eco-epidemiological studies was reviewed by Posthuma et al. The review showed that mixture impacts occur, and that they can be separated from natural variability and multiple-stressor impacts. This means that water managers can develop management plans to counteract stressor impacts, whereby the study outcomes are used to prioritize management towards the sites that are most affected and the chemicals that contribute most to those effects. Based on sophisticated statistical analyses, Berger et al. suggested that chemicals can induce effects in the environment at concentrations much lower than expected based on laboratory experiments. Schäfer et al. argued that eco-epidemiological studies covering both mixtures and other stressors are essential for environmental quality assessment and management. In practice, however, the analysis of the potential impacts of chemical mixtures is often still separate from the analysis of the impacts of other stressors.

Various regulations require the collection of monitoring data followed by bio-assessment, such as the EU Water Framework Directive (see section on the Water Framework Directive). Monitoring data sets are therefore increasingly available. The data set is subsequently curated and/or optimized for the analyses. Data curation and management imply, amongst others, that taxonomic names of species are harmonized, and that metrics for abiotic and biotic variables represent the conditions at the same place and time as much as possible. Next, the data set is expanded with novel variables, e.g.
a metric for the toxic pressure exerted by chemical mixtures. An example of such a metric is the multi-substance Potentially Affected Fraction of species (msPAF). This metric translates measured or predicted concentrations into the Potentially Affected Fraction of species (PAF) per chemical, after which these values are aggregated for the total mixture (De Zwart and Posthuma, 2005). This aggregation is crucial, as adding each chemical of interest as a separate variable would imply an ever-expanding number of required sampling sites to maintain the statistical power needed to diagnose impacts and probable causation.

The interpretation of the outcomes of the statistical analyses of the data set is the final step. Here it must be acknowledged that statistical association is not equal to causation, and care must be taken when explaining the findings as indicative of mixture effects. Depending on the context of the study, the findings may trigger a refined assessment, alignment with other methods to collect evidence, or direct use in an environmental management program.

A very basic eco-epidemiological method is quantile regression. Whereas common regression methods explore the magnitude of the change of the mean of the response variable (e.g., biodiversity) in relation to a predictor variable (e.g., pollutant stress), quantile regression looks at the tails of the distribution of the response variable. The principle operates as follows. When a monitoring data set contains one stressor variable at different levels (i.e., a gradient of data), the observations typically take the shape of a common stressor-response relationship (see section on Concentration-effect relationships). If the monitoring sites are affected by an extra stressor, the maximum performance under the first stressor cannot be reached, so that the area under the curve contains the XY-points for this situation. Further addition of stressor variables and levels fills this space under the curve. When the raw data plotted as XY show an 'empty area' lacking XY-points, e.g. in the upper right corner, it is likely that the stressor variable limits the response variable, for example: chemicals limit biodiversity. Quantile regression quantifies this by calculating an upper percentile (e.g., the 95th percentile) of the Y-values in assigned subgroups of X-values ("bins"), thus tracing the upper edge of the data cloud.

For the analysis of monitoring data, more advanced statistical methods have also been developed and applied. These methods are closely associated with those developed for, and utilized in, applied ecology. Well-known examples are 'species distribution models' (SDMs), which are used to describe the abundance or presence of species as a function of multiple environmental variables. A well-known SDM is the bell-shaped curve relating species abundance to water pH: the numbers of individuals of a species are commonly low at low and high pH, and the SDM is characterized as an optimum model for species abundance (Y) versus pH (X). Statistical models can also describe species abundance, presence or biodiversity as a function of multiple stressors, for example via Generalized Linear Models (GLMs). These have the general shape of:

\[\log(\text{Abundance}) = (a \cdot \text{pH} + a' \cdot \text{pH}^2) + (b \cdot \text{OM} + b' \cdot \text{OM}^2) + \ldots + \varepsilon\]

with a, a', b and b' being estimated by fitting the model to the data, whilst pH and OM are the abiotic stressor variables (acidity and organic matter, respectively); the quadratic terms are added to allow for optimum- and minimum-shaped relationships.
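The msPAF metric and the quantile-regression idea can both be sketched in a few lines of code. The example below assumes log-normal SSDs aggregated by response addition across chemicals with different modes of action, in the spirit of De Zwart and Posthuma (2005), and then traces the 95th percentile of a synthetic biodiversity data set in msPAF bins; all SSD parameters, bins and data are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# msPAF: log-normal SSD per chemical, PAF = Phi((log10 C - mu) / sigma),
# aggregated by response addition (chemicals with different modes of action).
ssd_mu    = np.array([1.0, 0.3])     # illustrative log10(ug/L) medians
ssd_sigma = np.array([0.7, 0.5])     # illustrative SSD spreads

def mspaf(conc_ug_per_l):
    paf = norm.cdf((np.log10(conc_ug_per_l) - ssd_mu) / ssd_sigma)
    return 1.0 - np.prod(1.0 - paf)

print(f"msPAF at 10 and 2 ug/L: {mspaf(np.array([10.0, 2.0])):.2f}")

# Quantile regression by binning: biodiversity (taxa counts) limited by
# toxic pressure, with other, unmeasured stressors filling the area
# under the limiting curve.
x = rng.uniform(0.0, 1.0, 500)               # msPAF per site
ceiling = 40.0 * (1.0 - x)                   # maximum under this stressor
y = ceiling * rng.uniform(0.2, 1.0, 500)     # observed taxa counts

bins = np.linspace(0.0, 1.0, 11)             # ten msPAF bins
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (x >= lo) & (x < hi)
    if sel.any():
        print(f"msPAF {lo:.1f}-{hi:.1f}: 95th percentile = "
              f"{np.percentile(y[sel], 95):.1f} taxa")
```

The declining 95th percentile across the bins traces the upper edge of the data cloud, which is the signature of a limiting stressor: the best-case biodiversity drops as the mixture toxic pressure increases, even though individual sites may score low for entirely different reasons.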
When SSD models (see section on Species Sensitivity Distributions) are used to predict the multi-substance Potentially Affected Fraction of species, the resulting mixture-stress proxy can be analysed together with the other stressor variables. Analyses of monitoring data from the United States and the Netherlands have, for example, shown that the abundance of >60% of the taxa is co-affected by mixtures of chemicals. An example study is provided by Posthuma et al.

In addition to the retrospective analysis of monitoring data in search of chemical impacts, recent studies also provide examples of prospective assessments of mixture effects. Different land uses imply different chemical use patterns, summarized as 'signatures'. Agricultural land use, for instance, yields intermittent emissions of crop-specific plant protection products, aligned with the growing season, whereas populated areas show continuous emissions of household chemicals and discontinuous emissions of chemicals in street run-off associated with heavy rain events. The application of emission, fate and ecotoxicity models showed that aquatic ecosystems are subject to these 'signatures', with associated predicted impact magnitudes (Holmes et al., 2018; Posthuma et al., 2018). Although such prospective assessments do not yet prove ecological impacts, they can assist in avoiding impacts by preventing the emission 'signatures' that are identified as potentially most hazardous.

Eco-epidemiological analysis outputs serve two purposes, closely related to prospective and retrospective risk assessment of chemical pollution:
1. Validation of ecotoxicological models and approaches;
2. Derivation of control measures, to reduce the impacts of diagnosed probable causes.
If needed, multiple lines of evidence can be combined, as in the TRIAD approach (see section on TRIAD) or in approaches that consider more than three lines of evidence (Chapman and Hollert, 2006). The higher the importance of a good diagnosis, the more the user may want to rely on multiple lines of evidence.

First, the validation of ecotoxicological models and approaches is crucial to avoid that important environmental protection, assessment and management activities rely on approaches that bear little relationship to field effects. Eco-epidemiological analyses have, for example, been used to validate the protective benchmarks used in chemical-oriented environmental policies.

Second, the outcomes of an eco-epidemiological analysis can be used to control the causes of impacts on ecosystems. Some studies have, for example, identified a statistical association between observed impacts (species expected but absent) and pollution of surface waters with mixtures of metals. Though local experts first doubted this association, given the lack of industrial activities with metals in the area, they later found the association relevant in view of the presence of old spoil heaps from past mining activities: metals appeared to leach into the surface waters at low rates, and the leached mixtures co-varied with the missing species (De Zwart et al., 2006).

References
Berger, E., Haase, P., Oetken, M., Sundermann, A. Field data reveal low critical chemical concentrations for river benthic invertebrates. Science of The Total Environment 544, 864-873.
Bro-Rasmussen, F., Løkke, H. (1984). Ecoepidemiology - a casuistic discipline describing ecological disturbances and damages in relation to their specific causes; exemplified by chlorinated phenols and chlorophenoxy acids. Regulatory Toxicology and Pharmacology 4, 391-399.
Carson, R. (1962). Silent Spring. Houghton Mifflin, Boston.
Chapman, P.M., Hollert, H. (2006). Should the sediment quality triad become a tetrad, a pentad, or possibly even a hexad? Journal of Soils and Sediments 6, 4-8.
De Zwart, D., Dyer, S.D., Posthuma, L., Hawkins, C.P. (2006). Use of predictive models to attribute potential effects of mixture toxicity and habitat alteration on the biological condition of fish assemblages. Ecological Applications 16, 1295-1310.
De Zwart, D., Posthuma, L. (2005). Complex mixture toxicity for single and multiple species: proposed methodologies. Environmental Toxicology and Chemistry 24, 2665-2676.
Holmes, C.M., Brown, C.D., Hamer, M., Jones, R., Maltby, L., Posthuma, L., Silberhorn, E., Teeter, J.S., Warne, M.S.J., Weltje, L. (2018). Prospective aquatic risk assessment for chemical mixtures in agricultural landscapes. Environmental Toxicology and Chemistry 37, 674-689.
Kapo, K.E., Burton Jr, G.A. A geographic information systems-based, weights-of-evidence approach for diagnosing aquatic ecosystem impairment. Environmental Toxicology and Chemistry 25, 2237-2249.
Oaks, J.L., Gilbert, M., Virani, M.Z., Watson, R.T., Meteyer, C.U., Rideout, B.A., Shivaprasad, H.L., Ahmed, S., Chaudhry, M.J., Arshad, M., Mahmood, S., Ali, A., Khan, A.A. (2004). Diclofenac residues as the cause of vulture population decline in Pakistan. Nature 427, 630-633.
Posthuma, L., Brown, C.D., De Zwart, D., Diamond, J., Dyer, S.D., Holmes, C.M., Marshall, S., Burton, G.A. (2018). Prospective mixture risk assessment and management prioritizations for river catchments with diverse land uses. Environmental Toxicology and Chemistry 37, 715-728.
Posthuma, L., De Zwart, D., Keijzers, R., Postma, J. Water systems analysis with the ecological key factor 'toxicity'. Part 2: Calibration. Toxic pressure and ecological effects on macrofauna in the Netherlands. STOWA, Amersfoort, The Netherlands.
Posthuma, L., Dyer, S.D., De Zwart, D., Kapo, K., Holmes, C.M., Burton Jr, G.A. Eco-epidemiology of aquatic ecosystems: separating chemicals from multiple stressors. Science of The Total Environment 573, 1303-1319.
Posthuma, L., Suter II, G.W., Traas, T.P. (Eds.). Species Sensitivity Distributions in Ecotoxicology. Lewis Publishers, Boca Raton, FL, USA.
Schäfer, R.B., Kühn, B., Malaj, E., König, A., Gergs, R. Contribution of organic toxicants to multiple stress in river ecosystems. Freshwater Biology 61, 2116-2128.

Which motivations do you know for executing eco-epidemiological analyses?
Pollution in the 1950s and 1960s showed clear evidence for major impacts of chemicals in nature, whilst regulatory management actions taken since then have reduced the clear recognition of chemical impacts in nature. Which ecotoxicological model has rejuvenated the development of eco-epidemiological methods?
What is a simple approach via which even raw-data plots already show whether a stress factor (such as chemical mixture exposure) is limiting for ecology?

This page titled 6.4: Diagnostic risk assessment approaches and tools is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Sylvia Moes, Kees van Gestel, & Gerco van Beek via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
6.5: Regulatory Frameworks
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/06%3A_Risk_Assessment_and_Regulation/6.05%3A_Regulatory_Frameworks
Regulatory frameworks
Authors: Charles Bodar and Joop de Knecht
Reviewers: Kees van Gestel
Learning objectives:
You should be able to:
Keywords: chemicals, environmental regulations, hazard, risk

Introduction
There is no single, overarching global regulatory framework to manage the risks of all chemicals. Instead, different regulations or directives have been developed for different categories of chemicals, typically related to the usage of the chemicals. Important categories are industrial chemicals (solvents, plasticizers, etc.), plant protection products, biocides, and human and veterinary drugs. Some chemicals may belong to more than one category: zinc, for example, is used in the building industry, but it also has biocidal applications (antifouling agent), and zinc oxide is used as a veterinary drug. In the European Union, each chemical category is subject to specific regulations or directives providing the legal conditions and requirements to guarantee a safe production and use of chemicals. A key element of all legal frameworks is the requirement that sufficient data on a chemical be made available. Valid data on production and identity (e.g. chemical structure), use volumes, emissions, environmental fate properties and the (eco)toxicity of a chemical are the essential building blocks for a sound assessment and management of environmental risks. Rules for the minimum data set that should be provided by the actors involved (e.g. producers or importers) are laid down in the various regulatory frameworks. With these data, both hazard and risk assessments can be carried out according to specified technical guidelines. The outcome of the assessment is then used for risk management, which is focused on minimizing risks by taking measures ranging from requests for additional data to restrictions on a particular use or a full-scale ban of a chemical.

REACH
REACH is a regulation of the European Union, adopted to improve the protection of human health and the environment from the risks that can be posed by chemicals, while enhancing the competitiveness of the EU chemicals industry. REACH stands for Registration, Evaluation, Authorisation and Restriction of Chemicals. The REACH regulation entered into force on 1 June 2007 to streamline and improve the former legislative frameworks on new and existing chemical substances; it replaced approximately forty community regulations and directives by one single regulation.

REACH establishes procedures for collecting and assessing information on the properties, hazards and risks of substances. REACH applies to a very broad spectrum of chemicals, from industrial to household applications and beyond. It requires EU manufacturers and importers to register their chemical substances if produced or imported in annual amounts of more than 1 tonne, unless the substance is exempted from registration under REACH. At quantities above 10 tonnes per year, manufacturers, importers and downstream users are responsible for showing that their substances do not adversely affect human health or the environment.

The amount of standard information required to show safe use depends on the quantity of the substance that is manufactured or imported. Before testing on vertebrate animals like fish and mammals, the use of alternative methods must be considered. The European Chemicals Agency (ECHA) coordinates and facilitates the REACH program.
For production volumes above 10 tonnes per year, industry has to prepare a risk assessment, taking into account all risk management measures envisaged, and document this in a chemical safety assessment (CSA). A CSA should include an exposure assessment, a hazard or dose-response assessment, and a risk characterization showing risk ratios below 1.0, i.e. safe use (see sections on REACH Human and REACH Eco).

Classification, Labelling and Packaging (CLP)
The EU CLP regulation requires manufacturers, importers or downstream users of substances or mixtures to classify, label and package their hazardous chemicals appropriately before placing them on the market. When the relevant information (e.g. ecotoxicity data) on a substance or mixture meets the classification criteria of the CLP regulation, the hazards of the substance or mixture are identified by assigning a certain hazard class and category. An important CLP hazard class is 'Hazardous to the aquatic environment', which is divided into categories based on defined criteria; Category Acute 1, for example, represents the most acutely toxic chemicals (LC50/EC50 ≤ 1 mg/L). CLP also sets detailed criteria for the labelling elements, such as the well-known pictograms.

Plant protection products regulation
Plant protection products (PPPs) are pesticides that are mainly used to keep crops healthy and to prevent them from being damaged by disease and infestation. They include, among others, herbicides, fungicides, insecticides, acaricides, plant growth regulators and repellents (see section on Crop protection products). PPPs fall under EU Regulation (EC) No 1107/2009, which determines that PPPs cannot be placed on the market or used without prior authorization. The European Food Safety Authority (EFSA) coordinates the EU regulation on PPPs.

Biocides regulation
The distinction between biocides and PPPs is not always straightforward, but as a general rule of thumb the PPP regulation applies to substances used by farmers for crop protection, while the biocides regulation covers all other pesticide applications. Different applications of the same active ingredient, one as a PPP and the other as a biocide, may thus fall under different regulations. Biocides are used to protect humans, animals, materials or articles against harmful organisms like pests or bacteria, through the action of the active substances contained in the biocidal product. Examples of biocides are antifouling agents, preservatives and disinfectants.

According to the EU Biocidal Products Regulation (BPR), all biocidal products require an authorization before they can be placed on the market, and the active substances contained in the biocidal product must have been approved beforehand. The European Chemicals Agency (ECHA) coordinates and facilitates the BPR. More or less similar to other legislations, the environmental risk assessment of biocides is mainly performed by comparing predicted environmental concentrations (PEC) with the concentration below which unacceptable effects on organisms will most likely not occur (PNEC).

Veterinary and human pharmaceuticals regulation
Since 2006, EU law requires an environmental risk assessment (ERA) for all new applications for a marketing authorization of human and veterinary pharmaceuticals. For both product types, guidance documents have been developed for conducting an ERA in two phases. The first phase estimates the exposure of the environment to the drug substance; based on an action limit, the assessment may be terminated at this stage.
In the second phase, information about the fate and effects of the substance in the environment is obtained and assessed. For conducting an ERA, a base data set, including ecotoxicity data, is required. For veterinary medicines, the ERA is part of a risk-benefit analysis, in which the positive therapeutic effects are weighed against any environmental risks, whereas for human medicines environmental concerns are excluded from the risk-benefit analysis. The European Medicines Agency (EMA) is responsible for the scientific evaluation, supervision and safety monitoring of medicines in the EU.

Harmonization of testing
Testing chemicals is an important aspect of risk assessment, e.g. testing for toxicity, for degradation, or for a physicochemical property like the Kow (see Chapter 3). The outcome of a test may vary depending on the conditions, e.g. temperature, test medium or light conditions. For this reason, there is an incentive to standardize the test conditions and to harmonize the testing procedures between agencies and countries. This also avoids duplication of testing and leads to a more efficient and effective testing system.

The Organisation for Economic Co-operation and Development (OECD) assists its member governments in developing and implementing high-quality chemicals management policies and instruments. One of the key activities to achieve this goal is the development of harmonized guidelines to test and assess the risks of chemicals, leading to a system of mutual acceptance of chemical safety data among OECD countries. The OECD has also developed Principles of Good Laboratory Practice (GLP) to ensure that studies are of sufficient quality and rigour and are verifiable. In addition, the OECD facilitates the development of new tools, such as the OECD QSAR Toolbox, to obtain more safety information and maintain quality while reducing costs, time and animal testing.

What are the major categories of chemicals for which regulatory frameworks are present controlling the environmental risks of these chemicals?
Is a chemical producer in the EU allowed to put a new chemical on the market without a registration or authorization?
Is the CLP regulation based on the hazard, exposure and/or risk of chemicals?
The amount of minimum information that is required under REACH for making a proper hazard and risk assessment depends on what? And what do you think is the rationale behind this?
Name three European agencies or authorities that coordinate important regulatory frameworks in the EU.

REACH Human
Author: Theo Vermeire
Reviewers: Tim Bowmer
Learning objective:
You should be able to:
Keywords: REACH, chemical safety assessment, human, RCR, DNEL, DMEL

The REACH Regulation aims to ensure a high level of protection of human health and the environment, including the promotion of alternative methods for the assessment of hazards of substances, as well as the free circulation of substances on the internal market, while enhancing competitiveness and innovation. Risk assessment under REACH aims to realize such a level of protection for humans that the likelihood of adverse effects occurring is low, taking into account the nature of the potentially exposed population (including sensitive groups) and the severity of the effect(s). Industry therefore has to prepare a risk assessment (in REACH terminology: chemical safety assessment, CSA) for all relevant stages in the life cycle of the chemical, taking into account all risk management measures envisaged, and document this in the chemical safety report (CSR).
Risk characterization in the context of a CSA is the estimation of the likelihood that adverse effect levels occur due to actual or predicted exposure to a chemical. The human populations considered, or protection goals, are workers, consumers and humans exposed via the environment. In risk characterization, exposure levels are compared to reference levels to yield "risk characterization ratios" (RCRs) for each protection goal. RCRs are derived for all endpoints (e.g. skin and eye irritation, sensitization, repeated dose toxicity) and time scales. It should be noted that these RCRs have to be derived for all stages in the life cycle of a compound. Humans can be exposed through the environment directly via inhalation of indoor and ambient air, soil ingestion and dermal contact, and indirectly via food products and drinking water. REACH does not consider direct exposure via soil. In the REACH exposure scenario, assessment of human exposure through the environment can be divided into three steps, with a possible fourth step: the consideration of aggregated exposure, taking into account exposure to the same substance in consumer products and at the workplace. Moreover, there may be similar substances, acting via the same mechanism of action, that may have to be considered in the exposure assessment, for instance, as a worst case, by applying the concept of dose or concentration addition. The section on Environmental realistic scenarios (PECs) - Human explains the concept of exposure scenarios and how concentrations in environmental compartments are derived.

The aim of hazard identification is to classify chemicals and to select key data for the dose-response assessment to derive a safe reference level, which in REACH terminology is called the DNEL (Derived No Effect Level) or DMEL (Derived Minimal Effect Level). For human endpoints, a distinction is made between substances considered to have a threshold for toxicity and those without a threshold. For threshold substances, a No-Observed-Adverse-Effect Level (NOAEL) or Lowest-Observed-Adverse-Effect Level (LOAEL) is derived, typically from toxicity studies with laboratory animals such as rats and mice. Alternatively, a Benchmark Dose (BMD) can be derived by fitting a dose-response model to all observations. These toxicity values are then extrapolated to a DNEL using assessment factors to correct for uncertainty and variability. The most frequently used assessment factors are those for interspecies differences and those for intraspecies variability (see section on Setting safe standards). Additionally, factors can be applied to account for remaining uncertainties, such as those due to a poor database. For substances considered to exert their effect by a non-threshold mode of action, especially mutagenicity and carcinogenicity, it is generally assumed, as a default, that even at very low levels of exposure residual risks cannot be excluded. That said, recent progress has been made on establishing scientific, 'health-based' thresholds for some genotoxic carcinogens. For non-threshold genotoxic carcinogens it is recommended to derive a DMEL, if the available data allow. A DMEL is a cancer risk value considered to be of very low concern, e.g. a 1 in a million tumour risk after lifetime exposure to the chemical, using a conservative linear dose-response model. There is as yet no EU-wide consensus on acceptable levels of cancer risk.

Safe use of substances is demonstrated when:
• RCRs are below one, both at local and regional level.
For threshold substances, the RCR is the ratio of the estimated exposure (concentration or dose) and the DNEL; for non-threshold substances the DMEL is used.
• The likelihood and severity of an event such as an explosion occurring due to the physicochemical properties of the substance, as determined in the hazard assessment, is negligible.

A risk characterization needs to be carried out for each exposure scenario (see section on Environmental realistic scenarios (PECs) - Human) and human population. The assessment consists of a comparison of the exposure of each human population known to be or likely to be exposed with the appropriate DNELs or DMELs, and an assessment of the likelihood and severity of an event occurring due to the physicochemical properties of the substance.

Example of a deterministic assessment (Vermeire et al., 2001)
Based on an emission estimation for processing of dibutylphthalate (DBP) as a softener in plastics, the concentrations in environmental compartments were estimated by modelling. In the key toxicity study, a reduced number of live pups per litter and decreased pup weights were seen in the absence of maternal toxicity. The lowest dose level of 52 mg·kgbw⁻¹·d⁻¹ was chosen as the NOAEL. The DNEL was derived by the application of an overall assessment factor of 1000, accounting for interspecies differences, human variability and uncertainties due to a non-chronic exposure period. The deterministic estimate of the RCR would be based on the deterministic exposure estimate of 0.093 mg·kgbw⁻¹·d⁻¹ and the deterministic DNEL of 0.052 mg·kgbw⁻¹·d⁻¹. The deterministic RCR would then be 1.8, based on the NOAEL. Since this is higher than one, this assessment indicates a concern, requiring a refinement of the assessment or risk management measures.

Van Leeuwen, C.J., Vermeire, T.G. (Eds.) (2007). Risk Assessment of Chemicals: An Introduction. Springer, Dordrecht, The Netherlands, ISBN 978-1-4020-6102-8 (e-book), https://doi.org/10.1007/978-1-4020-6102-8.
Vermeire, T., Jager, T., Janssen, G., Bos, P., Pieters, M. (2001). A probabilistic human health risk assessment for environmental exposure to dibutylphthalate. Human and Ecological Risk Assessment 7, 1663-1679.

Uncertainty happens! It is inherent to risk assessment. Where, in your view, are the greatest sources of uncertainty in the process of risk assessment?
Are there risks identified in the example for humans indirectly exposed via the environment? Assess to what extent these are potential or realistic risks by asking yourself the following questions:
• Do you have relevant toxicological and exposure data?
• Are these fixed values or not?
• How relevant or adverse are the toxicological effects observed?
• Were appropriate assessment factors used?
What would you recommend as a strategy to reduce the identified risks sufficiently?

Author: Joop de Knecht
Reviewers: Watze de Wolf
Keywords: REACH, European chemicals regulation

REACH establishes procedures for collecting and assessing information on the properties, hazards and risks of substances. At quantities of 10 tonnes or more, the manufacturers, importers and downstream users must show that their substances do not adversely affect human health or the environment for the uses and operational conditions registered. The amount of standard information required to show safe use depends on the quantity of the substance that is manufactured or imported.
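To make the arithmetic of the deterministic DBP example above explicit, the short sketch below reproduces the DNEL derivation and RCR calculation. The numbers come from the example; the function and variable names are ours, not from any REACH tool.

```python
# Deterministic DNEL derivation and RCR calculation for the DBP example.
# Illustrative sketch only; names are not from any official REACH software.

def derive_dnel(noael, assessment_factor):
    """DNEL = NOAEL / overall assessment factor, in mg/kg bw/day."""
    return noael / assessment_factor

def risk_characterisation_ratio(exposure, dnel):
    """RCR = estimated exposure / DNEL; RCR >= 1 indicates a concern."""
    return exposure / dnel

noael = 52.0   # mg/kg bw/day: lowest dose with reduced pup number/weight
af = 1000.0    # interspecies x intraspecies x non-chronic exposure period

dnel = derive_dnel(noael, af)                       # -> 0.052 mg/kg bw/day
rcr = risk_characterisation_ratio(0.093, dnel)      # exposure 0.093 mg/kg bw/day

print(f"DNEL = {dnel:.3f} mg/kg bw/day, RCR = {rcr:.1f}")
# DNEL = 0.052 mg/kg bw/day, RCR = 1.8 -> refine assessment or manage the risk
```

Run as-is, this reproduces the RCR of 1.8 reported above, illustrating how a single exceedance of the DNEL drives the conclusion of the deterministic assessment.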
This section explains how risks to the environment are assessed in REACH. As a minimum requirement, all substances manufactured or imported in quantities of 1 tonne or more need to be tested in acute toxicity tests on Daphnia and algae, while information should also be provided on biodegradability (Table 1). Physical-chemical properties relevant for environmental fate assessment that have to be provided at this tonnage level are water solubility, vapour pressure and the octanol-water partition coefficient. At 10 tonnes or more, this should be supplemented with an acute toxicity test on fish and an activated sludge respiration inhibition test. At this tonnage level, an adsorption/desorption screening and a hydrolysis test should also be performed. If the chemical safety assessment, performed at 100 tonnes or more in case a substance is classified based on hazard information, indicates the need to investigate the effects on aquatic organisms further, the chronic toxicity to these aquatic species should be determined. If the substance has a high potential for bioaccumulation (for instance a log Kow > 3), the bioaccumulation in aquatic species should also be determined. The registrant should also determine the acute toxicity to terrestrial species or, in the absence of these data, consider the equilibrium partitioning method (EPM) to assess the hazard to soil organisms. To further investigate the fate of the substance in surface water, sediment and soil, simulation tests on its degradation should be conducted, and when needed further information on the adsorption/desorption should be provided. At 1000 tonnes or more, chronic tests on terrestrial and sediment-living species should be conducted if further refinement of the safety assessment is needed. Before testing vertebrate animals like fish and mammals, the use of alternative methods and all other options must be considered to comply with the regulations regarding (the reduction of) animal testing.

Table 1. Required ecotoxicological and environmental fate information as defined in REACH (cumulative per tonnage band, as summarised from the text above):
• 1-10 t/y: acute toxicity to Daphnia and algae; biodegradability; water solubility, vapour pressure, octanol-water partition coefficient
• 10-100 t/y: additionally acute toxicity to fish; activated sludge respiration inhibition; adsorption/desorption screening; hydrolysis
• 100-1000 t/y: additionally, where the CSA indicates the need: chronic toxicity to aquatic species; bioaccumulation in aquatic species; acute toxicity to terrestrial species; degradation simulation tests
• ≥ 1000 t/y: additionally, where needed: chronic toxicity to terrestrial and sediment-living species

For substances that are classified based on hazard information, the registrant should assess the environmental safety of a substance by comparing the predicted environmental concentration (PEC) with the Predicted No Effect Concentration (PNEC), resulting in a Risk Characterisation Ratio (RCR = PEC/PNEC). The use of the substance is considered to be safe when the RCR < 1. Chapter 16 of the ECHA guidance offers methods to estimate the PEC based on the tonnage, use and operational conditions, standardised through a set of use descriptors, particularly the Environmental Release Categories (ERCs). These ERCs are linked to conservative default release factors to be used as a starting point for a first-tier environmental exposure assessment. When substances are emitted via waste water, the physical-chemical and fate properties of the chemical substance are used to predict its behaviour in the wastewater treatment plant (WWTP). Subsequently, the release of treated wastewater is used to estimate the concentration in fresh and marine surface water. The concentration in sediment is estimated from the PEC in water and an experimental or estimated sediment-water partitioning coefficient (Kpsed). Soil concentrations are estimated from deposition from air and the application of sludge from a WWTP.
The guidance offers default values for all relevant parameters, so a generic local PEC can be calculated that is considered applicable to all local emissions in Europe, although the default values can be adapted to specific conditions if justified. The local risk for wide-dispersive uses (e.g. from consumers or small, non-industrial companies) is estimated for a default WWTP serving 10,000 inhabitants. In addition, a regional assessment is conducted for a standard area: a region represented by a typical densely populated EU area located in Western Europe (about 20 million inhabitants, distributed over a 200 km × 200 km area). For calculating the regional PECs, a multimedia fate-modelling approach is used (e.g. the SimpleBox model; see section on Multicompartment fate modelling). All releases to each environmental compartment for each use, assumed to constitute a constant and continuous flux, are summed and averaged over the year, and steady-state concentrations in the environmental compartments are calculated. The regional concentrations are used as background concentrations in the calculation of the local concentrations.

The PNEC is calculated using the lowest toxicity value and an assessment factor (AF) related to the amount of information (see section on Setting safe standards or chapter 10 of the REACH guidance). If only the minimum set of aquatic acute toxicity data is available, i.e. LC50s or EC50s for algae, daphnia and fish, a default value of 1000 is used. When one, two, or three or more long-term tests are available, a default AF of 100, 50 or 10, respectively, is applied to the No Observed Effect Concentrations (NOECs). The idea behind lowering the AF when more data become available is that the amount of uncertainty around the PNEC is reduced. In the absence of ecotoxicological data for soil and/or sediment-dwelling organisms, the PNECsoil and/or PNECsed may be provisionally calculated using the EPM. This method uses the PNECwater for aquatic organisms and the suspended matter/water partitioning coefficient as inputs. For substances with a log Kow > 5 (or with a corresponding log Kp value), the PEC/PNEC ratio resulting from the EPM is increased by a factor of 10 to take into account possible uptake through the ingestion of sediment. If the PEC/PNEC is greater than 1, a sediment test must be conducted. If one, two or three long-term No Observed Effect Concentrations (NOECs) from sediment invertebrate species representing different living and feeding conditions are available, the PNEC can be derived using default AFs of 100, 50 and 10, respectively. For data-rich chemicals, the PNEC can be derived using Species Sensitivity Distributions (SSD) or other higher-tier approaches.

Who is responsible within the EU for ensuring that industrial chemicals do not pose a risk to the environment?
In which circumstances will an environmental risk be identified?
Describe how the ecotoxicological safety levels (PNECs) for the aquatic environment are derived, depending on the ecotoxicological information available.

Author: Gerd Maack
Reviewers: Ad Ragas, Julia Fabrega, Rhys Whomsley
Keywords: Human pharmaceuticals, veterinary pharmaceuticals, environmental impact, tiered approach

Pharmaceuticals are a crucial element of modern medicine and confer significant benefits to society. About 4,000 active pharmaceutical ingredients are being administered worldwide in prescription medicines, over-the-counter medicines, and veterinary medicines.
They are designed to be efficacious and stable, as they need to pass different barriers, i.e. the skin, the gastrointestinal tract (GIT), or even the blood-brain barrier, before reaching the target cells. Each target system has a different pH and different lipophilicity, and the GIT is in addition colonised with specific bacteria, specialised to digest, dissolve and disintegrate organic molecules. As a consequence of this stability, most pharmaceutical ingredients are stable in the environment as well and could cause effects on non-target organisms. The active ingredients comprise a variety of synthetic chemicals produced by pharmaceutical companies in both the industrialized and the developing world, at a rate of 100,000 tons per year. While pharmaceuticals are stringently regulated in terms of efficacy and safety for patients, as well as for target animal safety and user and consumer safety, the potential effects on non-target organisms and the environment are comparatively weakly regulated.

The authorisation procedure requires an environmental risk assessment (ERA) to be submitted by the applicants for each new human and veterinary medicinal product. The assessment encompasses the fate and behaviour of the active ingredient in the environment and its ecotoxicity, based on a catalogue of standardised test guidelines. In the case of veterinary pharmaceuticals, constraints to reduce risk and thus ensure safe usage can be stipulated in most cases. In the case of human pharmaceuticals, it is far more difficult to ensure risk reduction through restriction of the drug's use, for practical and ethical reasons. Because of their unique benefits, a restriction is not reasonable. This is reflected in the legal framework, as a potential effect on the environment is not included in the final benefit-risk assessment for a marketing authorisation.

Human pharmaceuticals enter the environment mainly via surface waters, through sewage systems and sewage treatment plants. The main entry routes are excretion and inappropriate disposal. Typically, only a fraction of the medicinal product taken is metabolised by the patient, meaning that the main share of the active ingredient is excreted unchanged into the wastewater system. Furthermore, sometimes the metabolites themselves are pharmacologically active. Yet no wastewater treatment plant is able to degrade all active ingredients, so medicinal products are commonly found in surface water, to some extent in groundwater, and sometimes even in drinking water. However, the concentrations in drinking water are orders of magnitude lower than therapeutic concentrations. An additional exposure pathway for human pharmaceuticals is the spreading of sewage sludge on soil, if the sludge is used as fertilizer on farmland. For more details, see the link "The Drugs We Wash Away: Pharmaceuticals, Drinking Water and the Environment". Veterinary pharmaceuticals, on the other hand, enter the environment mainly via soil, either indirectly, if the slurry and manure from intensive livestock production are spread onto agricultural land as fertiliser, or directly from pasture animals. Moreover, pasture animals might additionally excrete directly into surface water. Pharmaceuticals can also enter the environment via the detour of manure used in biogas plants. Despite the differences mentioned above, the general scheme of the environmental risk assessment of human and veterinary pharmaceuticals is similar. Both assessments start with an exposure assessment.
Only if specific trigger values are reached is an in-depth assessment of the fate, behaviour and effects of the active ingredient necessary. In Europe, an ERA for human pharmaceuticals has to be conducted according to the Guideline on Environmental Risk Assessment of Medicinal Products for Human Use (EMA 2006). This ERA consists of two phases. Phase I is a pre-screening for estimating the exposure in surface water; if this Predicted Environmental Concentration (PEC) does not reach the action limit of 0.01 µg/L, the ERA can in most cases stop. In case this action limit is reached or exceeded, a base set of aquatic toxicology and fate and behaviour data needs to be supplied in Phase II Tier A, and a risk assessment comparing the PEC with the Predicted No Effect Concentration (PNEC) needs to be conducted. If in this step a risk is still identified for a specific compartment, a substance- and compartment-specific refinement and risk assessment in Phase II Tier B needs to be conducted.

In Phase I, the PEC calculation is restricted to the aquatic compartment. The estimation should be based on the drug substance only, irrespective of its route of administration, pharmaceutical form, metabolism and excretion. The initial calculation of the PEC in surface water is based on a set of worst-case default assumptions and uses the following formula:

PECsurfacewater = (DOSEai × Fpen) / (WASTEinhab × DILUTION) [mg/L]

DOSEai = maximum daily dose consumed per capita [mg·inh⁻¹·d⁻¹]
Fpen = fraction of market penetration (= 1% by default)
WASTEinhab = amount of wastewater per inhabitant per day (= 200 L by default)
DILUTION = dilution factor (= 10 by default)

Three factors of this formula, i.e. Fpen, WASTEinhab and the dilution factor, are default values, meaning that the PECsurfacewater in Phase I depends entirely on the dose of the active ingredient. The Fpen can be refined by providing reasonably justified market penetration data, e.g. based on published epidemiological data. If the PECsurfacewater value is equal to or above 0.01 μg/L (mean dose ≥ 2 mg·cap⁻¹·d⁻¹), a Phase II environmental fate and effect analysis should be performed. Otherwise, the ERA can stop. However, in some cases the action limit may not be applicable. For instance, medicinal substances with a log Kow > 4.5 are potential PBT candidates and should be screened for persistence (P), bioaccumulation potential (B) and toxicity (T) independently of the PEC value. Furthermore, some substances may affect vertebrates or lower animals at concentrations lower than 0.01 μg/L. These substances should always enter Phase II, and a tailored risk assessment strategy should be followed which addresses the specific mechanism of action of the substance. This is often true for, e.g., hormonally active substances (see section on Endocrine disruption). The required tests in a Phase II assessment (see below) need to cover the most sensitive life stage, and the most sensitive endpoint needs to be assessed. This means, for instance, that for substances affecting reproduction, the organism needs to be exposed to the substance during gonad development and the reproductive output needs to be assessed. A Phase II assessment is conducted by evaluating the PEC/PNEC ratio based on a base set of data and the predicted environmental concentration from Tier A. If a potential environmental impact is indicated, further testing might be needed to refine PEC and PNEC values in Tier B. Under certain circumstances, effects on sediment-dwelling organisms and a terrestrial environmental fate and effects analysis are also required.
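A minimal sketch of this Phase I screen is given below. The formula and defaults are those listed above; the variable names are ours, not from the EMA guideline, and the example dose of 2 mg per capita per day is chosen because it is exactly the dose at which the action limit is reached.

```python
# Phase I PEC estimation for human pharmaceuticals (defaults from the text).
# Illustrative sketch; names are not from the EMA guideline itself.

def pec_surface_water_mg_per_l(dose_ai_mg_per_inh_day,
                               f_pen=0.01,           # market penetration, 1 %
                               waste_inhab_l=200.0,  # wastewater per inhabitant/day
                               dilution=10.0):       # dilution factor
    """PECsw = (DOSEai * Fpen) / (WASTEinhab * DILUTION), in mg/L."""
    return (dose_ai_mg_per_inh_day * f_pen) / (waste_inhab_l * dilution)

ACTION_LIMIT_UG_PER_L = 0.01

dose = 2.0  # mg per capita per day
pec_ug_per_l = pec_surface_water_mg_per_l(dose) * 1000.0  # mg/L -> µg/L

print(f"PECsw = {pec_ug_per_l:.3f} µg/L")
if pec_ug_per_l >= ACTION_LIMIT_UG_PER_L:
    print("Phase II fate and effect analysis required")
else:
    print("ERA can normally stop (unless PBT or specific-concern triggers apply)")
```

With the three defaults fixed, the screen is a pure function of the dose, which is why the text stresses that only Fpen can meaningfully be refined in Phase I.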
Experimental studies should follow standard test protocols, e.g. OECD guidelines. It is not acceptable to use QSAR estimation, modelling, or extrapolation from e.g. a substance with a similar mode of action and molecular structure (read-across). This is in clear contrast to other regulations such as REACH. Human pharmaceuticals are used all year round, without major fluctuations and peaks. The only exception is substances used against colds and influenza, which show a clear peak in consumption in autumn and winter. In developed countries in Europe and North America, antibiotics display a similar peak, as they are prescribed to support the substances used against viral infections. The guideline reflects this exposure scenario and explicitly asks for long-term effect tests for all three trophic levels: algae, aquatic invertebrates and vertebrates (i.e. fish). In order to assess the physico-chemical fate, the sorption behaviour and the fate in a water/sediment system should be determined, amongst other tests. If, after refinement, the possibility of environmental risks cannot be excluded, precautionary and safety measures may be required; labelling should generally aim at minimising the quantity discharged into the environment by appropriate mitigation measures.

In the EU, an Environmental Risk Assessment (ERA) is conducted for all veterinary medicinal products. The structure of an ERA for Veterinary Medicinal Products (VMPs) is quite similar to the ERA for human medicinal products. It is also tier-based and starts with an exposure assessment in Phase I. Here, the potential for environmental exposure is assessed based on the intended use of the product. It is assumed that products with limited environmental exposure will have negligible environmental effects and can thus stop in Phase I. Some VMPs that might otherwise stop in Phase I as a result of their low environmental exposure may require additional hazard information to address particular concerns associated with their intrinsic properties and use. This approach is comparable to the assessment of human pharmaceutical products, see above. For the exposure assessment, a decision tree was developed. The decision tree consists of a number of questions, and the answers to the individual questions determine the extent of the environmental exposure of the product. The goal is to determine whether environmental exposure is sufficiently significant to require data on hazard properties for characterizing a risk. Products with a low environmental exposure are considered not to pose a risk to the environment, and hence these products do not need further assessment. However, if the outcome of the Phase I assessment is that the use of the product leads to significant environmental exposure, then additional environmental fate and effect data are required. Examples of products with a low environmental exposure are, among others, products for companion animals only and products that result in a Predicted Environmental Concentration in soil (PECsoil) of less than 100 µg/kg, based on a worst-case estimation. A Phase II assessment is necessary if either the trigger of 100 µg/kg in the terrestrial branch or the trigger of 1 µg/L in the aquatic branch is reached. It is also necessary if the substance is a parasiticide for food-producing animals.
A Phase II is also required for substances that would in principle stop in Phase I, but for which there are indications that an environmental risk at very low concentrations is likely due to their hazardous profile (e.g. endocrine-active medicinal products). This is comparable to the assessment for human pharmaceutical products. For veterinary pharmaceutical products, the Phase II assessment is also sub-divided into several tiers. For Tier A, a base set of studies assessing the physical-chemical properties, the environmental fate and the effects of the active ingredient is necessary. For Tier A, acute effect tests are suggested, assuming a more peak-like exposure scenario due to e.g. applying manure and dung on fields and meadows, in contrast to the permanent exposure to human pharmaceuticals. If for a specific trophic level, e.g. dung fauna or algae, a risk is identified (PEC/PNEC ≥ 1) (see Introduction to Chapter 6), long-term tests for this level have to be conducted in Tier B. For the trophic levels without an identified risk, the assessment can stop. If the risk remains with these long-term studies, a further refinement with field studies in Tier C can be conducted. Here, co-operation with a competent authority is strongly recommended, as these field studies are tailored to the individual case. In addition, and independent of this, risk mitigation measures can be imposed to reduce the exposure concentration (PEC). One option, among others, is that animals must remain stabled for a certain amount of time after the treatment, to ensure that the concentration of active ingredient in excreta is low enough to avoid adverse effects on dung fauna and their predators. Alternatively, the treated animals are denied access to water if the active ingredient has harmful effects on aquatic organisms.

The environmental risk assessment of human and veterinary medicinal products is a straightforward, tier-based process with the possibility to exit at several steps in the assessment procedure. Depending on the dose, the physico-chemical properties and the anticipated use, this can be quite early in the procedure. On the other hand, for very potent substances with specific modes of action, the guidelines are flexible enough to allow specific assessments covering these modes of action. The ERA guideline for human medicinal products entered into application in 2006, and many data gaps exist for products approved prior to 2006. Although there is a legal requirement for an ERA dossier for all marketing authorisation applications, new applications for pharmaceuticals on the market before 2006 are only required to submit ERA data under certain circumstances (e.g. a significant increase in usage). Even for some of the blockbusters, like ibuprofen, diclofenac and metformin, full information on fate, behaviour and effects on non-target organisms is currently lacking. Furthermore, systematic post-authorisation monitoring and evaluation of potential unintended ecotoxicological effects does not exist. The market authorisation for pharmaceuticals does not expire, in contrast to e.g. an authorisation of pesticides, which needs to be renewed every 10 years. For veterinary medicinal products, an in-depth ERA is necessary for food-producing animals only. An ERA for non-food animals can stop with question 3 in Phase I, as it is considered that the use of products for companion animals leads to negligible environmental concentrations, which is not necessarily the case.
Here, the guideline does not reflect the state of the art of scientific and regulatory knowledge. For example, the market authorisation as a pesticide or biocide has been withdrawn or strongly restricted for some potent insecticides, like imidacloprid and fipronil, which are both authorised for use in companion animals.

Pharmaceuticals in the Environment: https://www.umweltbundesamt.de/en/publikationen/pharmaceuticals-in-the-environment-the-global
Recommendations for reducing micro-pollutants in waters: https://www.umweltbundesamt.de/publikationen/recommendations-for-reducing-micropollutants-in

Why are pharmaceuticals a problem for non-target organisms and for the environment?
How do human pharmaceuticals enter the environment?
How do veterinary pharmaceuticals enter the environment?
What is the general scheme of the environmental risk assessment of human and veterinary pharmaceuticals?
Why are long-term tests needed for the assessment of human pharmaceuticals, in contrast to the assessment of veterinary pharmaceuticals?

Author: Frank Swartjes
Reviewers: Kees van Gestel, Ad Ragas, Dietmar Müller-Grabherr
Keywords: Policy on soil contamination, Water Framework Directive, screening values comparison, Thematic Soil Strategy, Common Forum

Soil contamination hit the political agenda like a bomb in the United States and in Europe through a number of disasters in the late 1970s and early 1980s. The starting point was the 1978 Love Canal disaster in upper New York State, USA, in which a school and a number of residences had been built on a former landfill for chemical waste disposal holding thousands of tonnes of dangerous chemical wastes; it became a national media event. In Europe in 1979, the residential site of Lekkerkerk in the Netherlands became an infamous national event. Again, a residential area had been built on a former waste dump, which included chemical waste from the paint industry, and with channels and ditches that had been filled in with chemical waste-containing materials. Since these events, soil contamination-related policies emerged one after the other in different countries in the world. Crucial elements of these policies were a benchmark date for a ban on bringing pollutants in or on the soil ('prevention'), including a strict policy, e.g. duty of care, for contaminations caused after the benchmark date, financial liability for polluting activities, tools for assessing the quality of soil and groundwater, and management solutions (remediation technologies and facilities for disposal). Objectives in soil policies often evolve over time, and changes go along with the development of new concepts and approaches for implementing policies. In general, soil policies develop from maximum risk control towards a functional approach. The corresponding tools for implementation usually develop from a set of screening values towards the systematic use of frameworks, enabling sound environmental protection while improving the cost-benefit balance. Consequently, soil policy implementation usually goes through different stages. In general terms, four different stages can be distinguished: maximum risk control, the use of screening values, the use of frameworks, and a functional approach. Maximum risk control follows the precautionary principle and is a stringent way of assessing and managing contamination by trying to avoid any risk.
Procedures based on screening values allow for a distinction between polluted and non-polluted sites, of which the former, the polluted sites, require some kind of intervention (a sketch of this decision logic follows below). The scientific underpinning of the earliest generations of screening values was limited, and expert judgement played an important role. Later, more sophisticated screening values emerged, based on risk assessment. This resulted in screening values for individual contaminants within the contaminant groups metals and metalloids, other inorganic contaminants (e.g. cyanides), polycyclic aromatic hydrocarbons (PAHs), monocyclic aromatic hydrocarbons (including BTEX: benzene, toluene, ethylbenzene, xylene), persistent organic pollutants (including PCBs and dioxins), volatile organic contaminants (including trichloroethylene, tetrachloroethylene, 1,1,1-trichloroethane and vinyl chloride), petroleum hydrocarbons and, in a few countries only, asbestos. For some contaminants such as PAHs, sum screening values for groups were derived in several countries, based on toxicity equivalents. In a procedure based on frameworks, the same screening values generally act as a trigger for further, more detailed site-specific investigations in one or two additional assessment steps. In the functional approach, soil and groundwater must be suited for the land use they relate to (e.g. agricultural or residential land) and the functions (e.g. drinking water abstraction, irrigation water) they perform. Some countries skip the maximum risk control and sometimes also the screening values stages and adopt a framework and/or a functional approach directly.

In Europe, collaboration was strengthened by concerted actions such as CARACAS (concerted action on risk assessment for contaminated sites in the European Union; 1996-1998) and CLARINET (Contaminated Land Rehabilitation Network for Environmental Technologies; 1998-2001). These concerted actions were followed up by fruitful international networks that are still active today: the Common Forum, a network of contaminated land policy makers, regulators and technical advisors from environment authorities in European Union member states and European Free Trade Association countries, and NICOLE (Network for Industrially Co-ordinated Sustainable Land Management in Europe), a leading forum on industrially co-ordinated sustainable land management in Europe. NICOLE promotes co-operation between industry, academia and service providers on the development and application of sustainable technologies.

In 2000, the EU Water Framework Directive (WFD; Directive 2000/60/EC) was adopted, followed by the Groundwater Directive (Directive 2006/118/EC) in 2006 (European Parliament and the Council of the European Union, 2019b). The environmental objectives are defined by the WFD; moreover, 'good chemical status' and the 'no deterioration clause' apply to groundwater bodies. The 'prevent and limit' objective aims to control direct or indirect contaminant inputs to groundwater, and distinguishes between 'preventing' hazardous substances from entering groundwater and 'limiting' inputs of other, non-hazardous substances. Moreover, the European Commission adopted a Soil Thematic Strategy, with soil contamination being one of the seven identified threats.
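To make the role of screening values concrete, the sketch below shows the kind of tiered decision logic that screening-value and framework-based procedures imply. The threshold numbers are invented for illustration only and do not come from any national regulation; real frameworks differ per country, land use and contaminant.

```python
# Illustrative tiered use of soil screening values (hypothetical numbers).
# Only the decision pattern reflects the text; the values are made up.

SCREENING_VALUES_MG_PER_KG = {
    "target": 0.8,   # below: soil considered clean for this contaminant
    "action": 12.0,  # at/above: potentially unacceptable risk
}

def screen_site(measured_mg_per_kg):
    """Return the next step a framework-based procedure would trigger."""
    if measured_mg_per_kg < SCREENING_VALUES_MG_PER_KG["target"]:
        return "no further action"
    if measured_mg_per_kg < SCREENING_VALUES_MG_PER_KG["action"]:
        return "further, more detailed site-specific investigation"
    return "intervention or risk management required"

for conc in (0.3, 5.0, 40.0):
    print(f"{conc:5.1f} mg/kg -> {screen_site(conc)}")
```

In a functional approach, the same pattern applies, but the thresholds would differ per land use (e.g. stricter for residential than for industrial sites), as described in the text.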
A proposal for a Soil Framework Directive, launched in 2006 with the objective of protecting soils across the EU, was formally withdrawn in 2014 because of a lack of support from some countries. Today, most countries in Europe and North America, Australia and New Zealand, and several countries in Asia and Middle and South America have regulations on soil and groundwater contamination. The policies, however, differ substantially in stage, extent and format. Some policies only cover prevention, e.g. blocking or controlling the inputs of chemicals onto the soil surface and into groundwater bodies. Other policies cover prevention, risk-based quality assessment and risk management procedures, and include elaborated technical tools which enable a sound and uniform approach. In the larger countries in particular, such as the USA, Germany and Spain, policies differ between states or provinces within the country. And even in countries with a policy at the federal level, the responsibilities for the different steps in the soil contamination chain are distributed very differently over the layers of authorities (at the national, regional and municipal level).

Several European countries have a procedure based on frameworks (as described above), including risk-based screening values. It is difficult, if not impossible, to summarise all policies on soil and groundwater protection worldwide. Instead, some general aspects of these policies are given here. A first basic element in nearly all soil and groundwater policies, relating to prevention of contamination, is the declaration of a formal point in time after which polluting soil and groundwater is considered an illegal act. For soil and groundwater quality assessment and management, most policies follow the risk-based land management approach as the ultimate form of the functional approach described above. Central in this approach are the risks for specific targets that need to be protected up to a specified level. Different protection targets are considered. Not surprisingly, 'human health' is the primary protection target adopted in nearly all countries with soil and groundwater regulations. Moreover, the ecosystem is an important protection target for soil, while for groundwater the ecosystem as a protection target is under discussion. Another general characteristic of mature soil and groundwater policies is the function-specific approach. The basic principle of this approach is that land must be suited for its purpose. As a consequence, the appraisal of a contaminated site in a residential area, for instance, follows a much more stringent concept than that of an industrial site.

Risk assessment tools often form the technical backbone of policies. Since the late 1980s, risk assessment procedures for soil and groundwater quality appraisal have been developed. In the late 1980s the exposure model CalTOX was developed by the Californian Department of Toxic Substances Control in the USA, followed a few years later by the CSOIL model in the Netherlands (Van den Berg, 1991/1994/1995). These models describe the transfer of contaminants from the soil into contact media, and the direct and indirect exposure of humans. The major exposure pathways are exposure through soil ingestion, crop consumption and inhalation of indoor vapours (Elert et al., 2011). Today, several 'national' European exposure models exist.
However, these exposure models may give quite different exposure estimates for the same exposure scenario (Swartjes, 2007). Moreover, procedures were developed for ecological risk assessment, including Species Sensitivity Distributions (see section on SSDs), based on empirical relations between the concentration in soil or groundwater and the percentage of species or ecological processes that experience adverse effects (PAF: Potentially Affected Fraction). For site-specific risk assessment, the TRIAD approach was developed, based on three lines of evidence: chemically-based, toxicity-based, and using data from ecological field surveys (see section on the TRIAD approach).

In the framework of the HERACLES network, another attempt was made to summarise the different EU policies on polluted soil and groundwater. A strong plea was made for harmonisation of risk assessment tools (Swartjes et al., 2009). The authors also described a procedure for harmonization based on the development of a toolbox with standardized and flexible risk assessment tools. Flexible tools are meant to cover national or regional differences in cultural, climatic and geological (e.g. soil type, depth of the groundwater table) conditions. It is generally acknowledged, however, that policy decisions should be taken at the national level. In 2007, an analysis was published of the differences in soil and groundwater screening values and in the underlying regulatory frameworks, human health and ecological risk assessment procedures (Carlon, 2007). Although screening values are difficult to compare, since the frameworks and objectives of screening values differ significantly, a general conclusion can be drawn for, e.g., the screening values at the potentially unacceptable risk level (often used as 'action' values, i.e. values that trigger further research or intervention when exceeded). For the 20 metals considered, most soil screening values (from 13 countries or regions) differ by a factor of 10 to 100 between the lowest and highest values. For the 23 organic pollutants considered, most soil screening values (from 15 countries or regions) differ by a factor of 100 to 1000, and for some organic pollutants these screening values differ by more than four orders of magnitude. These conclusions are mainly relevant from a policy viewpoint. Technically, they are less relevant, since the screening values are derived from a combination of different protection targets and tools, and are based on different policy decisions. Differences in screening values are explained by differences in geographical, biological and socio-cultural factors in different countries and regions, different national regulatory and policy decisions, and variability in scientific and technical tools.

Carlon, C. (Ed.) (2007). Derivation methods of soil screening values in Europe. A review and evaluation of national procedures towards harmonisation. JRC Scientific and Technical report EUR 22805 EN.
Elert, M., Bonnard, R., Jones, C., Schoof, R.A., Swartjes, F.A. (2011). Human Exposure Pathways. Chapter 11 in: Swartjes, F.A. (Ed.), Dealing with Contaminated Sites. From theory towards practical application. Springer Publishers, Dordrecht.
Swartjes, F.A. (2007). Insight into the variation in calculated human exposure to soil contaminants using seven different European models. Integrated Environmental Assessment and Management 3, 322-332.
Swartjes, F.A., D'Allesandro, M., Cornelis, Ch., Wcislo, E., Müller, D., Hazebrouck, B., Jones, C., Nathanail, C.P. (2009).
Towards consistency in risk assessment tools for contaminated sites management in the EU. The HERACLES strategy from the end of 2009 onwards. National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands, RIVM Report 711701091.
Van den Berg, R. (1991/1994/1995). Exposure of humans to soil contamination. A quantitative and qualitative analysis towards proposals for human toxicological C-quality standards (revised version of the 1991/1994 reports). National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands, RIVM report no. 725201011.
Swartjes, F.A. (Ed.) (2011). Dealing with Contaminated Sites. From theory towards practical application. Springer Publishers, Dordrecht.
Rodríguez-Eugenio, N., McLaughlin, M., Pennock, D. (2018). Soil Pollution: a hidden reality. Rome, FAO.

What is the logical first step in policy on soil and groundwater protection, related to 'prevention'?
What are the most frequently used protection targets in policies on soil and groundwater in the world?
What role should screening values play in sophisticated risk assessment procedures?
What is the ideal approach when developing a new soil and groundwater policy?
Regarding the harmonization of risk assessment tools: why are flexible risk assessment tools necessary?
6.7: Risk perception
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Environmental_Toxicology_(van_Gestel_et_al.)/06%3A_Risk_Assessment_and_Regulation/6.07%3A_Risk_perception
Author: Fred Woudenberg
Reviewers: Ortwin Renn
Key words: Risk perception, fear, worry, risk, context

If risk perception had a first law, like toxicology has with Paracelsus' "Sola dosis facit venenum" (see section on History of Environmental Toxicology), it would be: "People fear things that do not make them sick and get sick from things they do not fear." People can, for instance, worry extremely over a newly discovered soil pollution site in their neighbourhood, which they hear about at a public meeting they have come to with their diesel car, and, when returning home, light an extra cigarette without thinking, to relieve the stress.

Sources: Left: File photo: Kalamazoo City Commissioner Don Cooney and Kalamazoo residents march to protest capping the Allied Paper Landfill, May 2013. https://www.wmuk.org/post/will-bioremediation-work-allied-microbial-ecologist-skeptical. Right: https://desmotivaciones.es/352654/Obama

The explanation for this first law is quite easy. The annual risk of getting sick, being injured or dying has only limited influence on the perception of a risk. Other factors are more important, as summarised in a model of risk perception in its most basic form. In the middle of this model there is a list with several factors which determine risk perception to a large extent. In any given situation, each of them can end up at the left, safe side or at the right, dangerous side. The model is a simplification. Research since the late sixties of the previous century has led to a collection of many more factors that are often connected to each other (for lectures by some well-known researchers see examples 1, 2, 3, 4 and 5). An example can show this interconnection and the discrepancy between the annual health risks (at the top of the model) and the other factors. The risk of harmful health effects for people living on polluted soil is often very small. The factor 'risk' thus ends up at the left, safe side. Most of the other factors end up at the right. People do not voluntarily choose to have soil pollution in their garden. They have absolutely no control over the situation and an eventual sanitation; for this, they depend on authorities and companies. Nowadays, trust in authorities and companies is very small. Many people will suspect that these authorities care more about their money than about the health and well-being of their citizens and neighbours. A newly discovered soil pollution will get local media attention, and this will certainly be the case if there is controversy. If the untrusted authorities share their conclusion that the risks are low, many people will suspect that they withhold information and are not completely open. Especially saying that there is 'no cause for alarm' will only make people worry more (see a funny example). People will not believe the conclusion of authorities that the risk is low, so effectively all factors end up at the dangerous side. For smoking a cigarette, the evaluation is the other way around. Almost everybody knows that smoking is dangerous, but people light their cigarette themselves. Most people at least think they have control over their smoking habit, as they can decide to stop at any moment (but being addicted, they probably highly overestimate their level of control). For their information or for taking measures, people do not depend on others, and no information is withheld about smoking. Some smokers suffer from what is called optimistic bias, the idea that misery only happens to others.
They always have the example of their grandfather who started smoking at 12 and still ran the marathon at 85. People can be upset if they learn that cigarette companies purposely make cigarettes more addictive. It makes them feel the company takes over control, which people greatly resent. This, and not the health effects, can make people decide to quit smoking. This also explains why passive smoking is more effective than active smoking in influencing people's perceptions. Although the risk of passive smoking is 100 times smaller than the risk of active smoking, most factors end up at the right, dangerous side, making passive smoking maybe 100 times more objectionable and worrisome than active smoking. Many people are surprised to find out that calculated or estimated health risks influence risk perception so little. But we experience it in our own daily lives, especially when we add another factor to the model: advantages. All of us perform risky activities because they are necessary, come with advantages, or sometimes out of sheer fun. Most of us take part in daily traffic, with an annual risk of dying far higher than 1 in a million. Once, twice or even more times a year we go on a holiday with a multitude of risks: transport, microbes, robbery, divorce. The thrill-seekers among us go diving, mountain climbing or parachute jumping without even knowing the annual fatality rates. If the stakes are high, people can knowingly risk their life in order to improve it, as the thousands of immigrants trying to cross the Mediterranean illustrate, or even give their life for a higher cause, like soldiers at war (Winston Churchill in 1940: "I have nothing to offer but blood, toil, tears and sweat"). An example from the other side can show it maybe even more clearly. No matter how small the risk, it can be totally unacceptable and nonsensical. Suppose the government starts a new lottery with an extremely small chance of winning, say one in a billion. Every citizen must play and tickets are free. So far nothing strange, but there is a twist. The main and only prize of the lottery is a public execution, broadcast live on national TV. The government will probably not make itself very popular with this absurd lottery. When the government, as still happens, tells people they have to accept a small risk because they accept larger risks from activities they choose themselves, it makes people feel they have been given a ticket in the above-mentioned lottery. This is how people can feel if the government tells them the risk of the polluted soil they live on is extremely small and that it would be wiser for them to quit smoking. A main lesson which can be learned from the study of risk perception is that risks always occur in a context. A risk is always part of a situation or activity which has many more characteristics than only the chance of getting sick, being injured or dying. We do not judge risks; we judge situations and activities of which the risk is often only a small part. Risk perception occurs in a rich environment.
After 50 years of research a lot has been discovered, but predicting how angry or afraid people will be in a new, unknown situation is still a daunting task.

Name at least 5 important determinants of risk perception.
Do people judge risks in isolation, or as part of the situations and activities they belong to?
How large, in general, is the influence of the risk itself on risk perception?
Do experts react differently from lay people if they encounter risks in their own daily lives?
1.1: Composition and Structure of the Earth
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/01%3A_The_Earth_and_its_Lithosphere/1.01%3A_Composition_and_Structure_of_the_Earth
The earth has been in a state of continual change since its formation. The major part of this change, involving volcanism and tectonics, has been driven by heat produced from the decay of radioactive elements within the earth. The other source of change has been solar energy, which acts as the driving force of weathering and is the ultimate source of energy for living organisms. The solar system was probably formed about 4.6 billion years ago, and the oldest known rocks have an age of 3.8 billion years. There is thus a gap of 0.8 billion years for which there is no direct evidence. It is known that the earth was subjected to extensive bombardment earlier in its history; recent computer simulations suggest that the moon could have resulted from an especially massive collision with another body. Although these major collisions have diminished in magnitude as the matter in the solar system has become more consolidated, they continue to occur, with one relatively recent collision being responsible for the annihilation of the dinosaurs and much of the other life on Earth. The lack of many overt signs of these collisions (such as craters, for example) testifies to the dynamic processes at work on the Earth's surface and beneath it.

The earth is composed of 90 chemical elements, of which 81 have at least one stable isotope. The unstable elements are \(_{43}\)Tc and \(_{61}\)Pm, and all elements heavier than \(_{83}\)Bi. Note that the vertical axis of the abundance chart is logarithmic, which has the effect of greatly reducing the visual impression of the differences between the various elements. The chart gives the abundances of the elements present in the solar system, in the earth as a whole, and in the various geospheres. Of particular interest are the differences between the terrestrial and cosmic abundances, which are especially notable in the cases of the lighter elements (H, C, N) and the noble gas elements (He, Ne, Ar, Xe, Kr).

Example \(\PageIndex{1}\)
Given the mix of elements that are present in the earth, how might they combine so as to produce the chemical composition we now observe?
Solution
Thermodynamics allows us to predict the composition that any isolated system will eventually reach at a given temperature and pressure. Of course the earth is not an isolated system, although most parts of it can be considered approximately so in many respects, on time scales sufficient to make thermodynamic predictions reasonably meaningful. The equilibrium states predicted by thermodynamics differ markedly from the observed compositions. The atmosphere, for example, contains 0.03% \(\ce{CO2}\), 78% \(\ce{N2}\) and 21% \(\ce{O2}\); in a world at equilibrium the air would be 99% \(\ce{CO2}\). Similarly, the oceans, containing about 3.5% NaCl, would have a salt content of 35% if they were in equilibrium with the atmosphere and the lithosphere. Trying to understand the mechanisms that maintain these non-equilibrium states is an important part of contemporary environmental geochemistry.

Studies based on the reflection and refraction of the acoustic waves resulting from earthquakes show that the interior of the earth consists of four distinct regions. A combination of physical and chemical processes led to the differentiation of the earth into these major parts. This is believed to have occurred approximately 4 billion years ago. The Earth's core is believed to consist of two regions.
The inner core is solid, while the outer core is liquid. This phase difference probably reflects a difference in pressure and composition, rather than one of temperature. Density estimates obtained from seismological studies indicate that the core is metallic, and mainly iron, with 8-10 percent lighter elements. Hypotheses about the nature of the core must be consistent with the core's role as the source of the earth's magnetic field. This field arises from convective motion of the electrically conductive liquid comprising the outer core. Whether this convection is driven by differences in temperature or composition is not certain. The estimated abundance of radioactive isotopes (mainly \(^{238}\)U and \(^{40}\)K) in the core is sufficient to provide the thermal energy required to drive the convective dynamo. Laboratory experiments on the high-pressure behavior of iron oxides and sulfides indicate that these substances are probably metallic in nature, and hence conductive, at the temperatures (4000-5000 K) and pressures (1.3-3.5 million atm) that are estimated for the core. Their presence in the core, alloyed with the iron, would be consistent with the observed density, and would also resolve the apparent lack of sulfur in the earth, compared to its primordial abundance.

The region extending from the outer part of the core to the crust of the earth is known as the mantle. The mantle is composed of oxides and silicates, i.e., of rock. It was once believed that this rock was molten, and served as a source of volcanic magma. It is now known on the basis of seismological evidence that the mantle is not in the liquid state. Laboratory experiments have shown, however, that when rock is subjected to the high temperatures and pressures believed to exist in the mantle, it can be deformed and flows very much like a liquid. The upper part of the mantle consists of a region of convective cells whose motion is driven by the heat due to decay of radioactive potassium, thorium, and uranium, which were selectively incorporated in the crystal lattices of the lower-density minerals that form the mantle. There are several independent sources of evidence of this motion. First, there are gravitational anomalies; the force of gravity, measured by changes in elevation in the sea surface, is different over upward- and downward-moving regions, and has permitted the mapping of some of the convective cells. Secondly, numerous isotopic-ratio studies have traced the exchange of material between oceanic sediments, upper mantle rock, and back into the continental crust, which forms from melting of the upper mantle. Thirdly, the composition of the basalt formed by upper mantle melting is quite uniform everywhere, suggesting complete mixing of diverse materials incorporated into the mantle over periods of 100 million years. High-pressure studies in the laboratory have revealed that olivine, a highly abundant substance in the mantle composed of Fe, Mg, Si, and O (and also the principal constituent of meteorites), can undergo a reversible phase change between two forms differing in density. Estimates of conditions within the upper mantle suggest that this phase change could occur within this region in such a way as to contribute to convection.
The most apparent effect of mantle convection is the motion it imparts to the earth's crust, as evidenced by the external topography of the earth.

The outermost part of the earth, known also as the lithosphere, is broken up into plates that are supported by the underlying mantle and are moved by the convective cells within the mantle at a rate of a few centimetres per year. New crust is formed where plates move away from each other under the oceans, and old crust is recycled back into the mantle where converging plates collide.

The parts of the crust that contain the world's oceans are very different from the parts that form the continents. The continental crust is 10-70 km thick, while oceanic crust averages only 5-7 km in thickness. Oceanic crust is more dense (3.0-3.1 g cm–3) and therefore "floats" on the mantle at a greater depth than does continental crust (density 2.7-2.8 g cm–3); the sketch at the end of this section makes this buoyancy argument quantitative. Finally, oceanic crust is much younger; the oldest oceanic crust is about 200 million years old, while the most ancient continental rocks were formed 3.8 billion years ago.

New crust is formed from molten material in the upper mantle at the divergent boundaries that exist at undersea ridges. The melting is due to the rise in temperature associated with the nearly adiabatic decompression of the upper 50-70 km of mantle material as separation of the plates reduces the pressure below. The molten material collects in a magma pocket which is gradually exuded in undersea lava flows. The solidified lava is transformed into crust by the effects of heat and by the action of seawater, which selectively dissolves the more soluble components.

Where two plates collide, one generally plunges under the other and returns to the mantle in a process known as subduction. Since the continental plates have a lower density, they tend to float above the oceanic plates and resist subduction. At continental boundaries such as that of the North American west coast, where an oceanic plate pushes under the continental crust, oceanic sediments may be sheared off, resulting in a low coastal mountain range. Also, the injection of water into the subducting material lowers its melting point, resulting in the formation of shallow magma pockets and volcanic activity. Divergent plate boundaries can also cross continents; temporary divergences create rift valleys such as those of the Rhine and Rio Grande, while permanent ones eventually lead to new oceanic basins. Collision of two continental plates can also occur; the most notable example is the one that resulted in the formation of the Himalayan mountain chain.
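The buoyancy ("isostasy") argument promised above can be sketched with the crustal densities and thicknesses just given. This is an illustration, not part of the original text: the upper-mantle density of 3.3 g cm–3 is an assumed value, and the one-dimensional floating-column model is of course a simplification.

```python
# Archimedes applied to the crust: a floating column of thickness t and
# density rho_c rides on mantle of density rho_m with a "freeboard" of
# h = t * (1 - rho_c / rho_m) above the level of the supporting mantle.

RHO_MANTLE = 3.3                      # g/cm^3, assumed upper-mantle density

def freeboard(thickness_km: float, rho_crust: float) -> float:
    """Height of the top of a floating crustal column above the mantle line."""
    return thickness_km * (1 - rho_crust / RHO_MANTLE)

continental = freeboard(35.0, 2.75)   # mid-range continental values from the text
oceanic     = freeboard(6.0, 3.05)    # mid-range oceanic values from the text
print(f"continental freeboard ~ {continental:.1f} km")
print(f"oceanic freeboard     ~ {oceanic:.1f} km")
print(f"difference            ~ {continental - oceanic:.1f} km")
# The ~5 km difference is comparable to the actual relief between the
# continental surfaces and the deep ocean floors.
```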
1.2: Origin of the Elements
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/01%3A_The_Earth_and_its_Lithosphere/1.02%3A_Origin_of_the_Elements
The Earth is composed of 90 chemical elements, of which 81 have at least one stable isotope. Most of these elements have also been detected in stars. Where did these elements come from? The accepted scenario is that the first major element to condense out of the primordial soup was helium, which still comprises about one-quarter of the mass of the known universe.

Primordial Chemistry

According to the "big bang" theory, for which there is now overwhelming evidence, the universe as we know it (that is, all space, time, and matter) had its origin in a point source or singularity that began an explosive expansion about 12-15 billion years ago, and which is still continuing. Following a brief period of extremely rapid expansion called inflation, protons and neutrons condensed out of the initial quantum soup after about \(10^{-32}\) s. Helium and hydrogen became stable during the first few minutes, along with some of the very lightest nuclides up to \(\ce{^{7}Li}\), which were formed through various fusion and neutron-absorption processes. Formation of most heavier elements was delayed for about \(10^6\) years, until nucleosynthesis commenced in the first stars. Hydrogen still accounts for about 93% of the atoms in the universe.

The main lines of observational evidence that support this theory are the 2.7 K background radiation that permeates the cosmos (the cooled-down remnant of the initial explosion), and the abundances of the lightest elements. Conventional physics is able to extrapolate back to about the first \(10^{-33}\) second; what happened before then remains speculative.

Hydrogen is the least thermodynamically stable of the elements, and at very high temperatures it will combine with itself in a reaction known as nuclear fusion to form the next element, \(\ce{^{4}_2He}\). "Heavier" nuclei (that is, those of higher atomic number, indicated by the subscript preceding the element symbol) are more stable than "lighter" ones, so this fusion process can continue up to \(\ce{^{56}_{26}Fe}\), which is the most energetically stable of all the nuclides. Beyond this point, heavier nuclei slowly become less stable, so that fission becomes more likely. Fission, however, is not considered an important mechanism of primordial nucleosynthesis, so other processes must be invoked, as discussed further below.

All elements beyond hydrogen were formed in regions where the concentration of matter was large and the temperature was high; in other words, in stars. The formation of a star begins when the gravitational forces due to a large local concentration of hydrogen bring about a contraction and compression to densities of around \(10^5\) g cm–3. This is a highly exothermic process in which the gravitational potential energy is released as heat, about 1200 kJ per gram, raising the temperature to about \(10^7\) K. Under these conditions, hydrogen nuclei possess sufficient kinetic energy to overcome their electrostatic repulsion and undergo nuclear fusion:

\[\ce{4 ^{1}_1H -> ^{4}_2He + 2\beta^{+} + 2\gamma + 2\nu}\]

There is a net mass loss in the above process, which is therefore highly exothermic; it is known as "hydrogen burning". As hydrogen burning proceeds, the helium collects in the core of the star, raising the density to \(10^8\) g cm–3 and the temperature to \(10^8\) K. This temperature is high enough to initiate helium burning, which proceeds in several steps:

\[\ce{2 ^{4}_2He -> ^{8}_4Be + \gamma}\]

The first product, \(\ce{^{8}_4Be}\), has a half-life of only \(10^{-16}\) sec, but a sufficient amount accumulates to drive the following two reactions:

\[ \ce{^{8}_4Be + ^{4}_2He -> ^{12}_6C + \gamma}\]

\[ \ce{^{12}_6C + ^{1}_1H -> ^{13}_7N -> ^{13}_6C + \beta^{+} + \gamma}\]

The size of a star depends on the balance between the kinetic energy of its matter and the gravitational attraction of its mass. As the helium burning runs its course, the temperature drops and the star begins to contract. The course of further nucleosynthesis, and the subsequent fate of the star itself, depend on the star's mass. If the mass of the star is no greater than 1.4 times the mass of our sun, the star collapses to a white dwarf, and eventually cools to a dark, dense dead star. In larger stars, the gravitational forces are sufficiently strong to overcome the normal repulsion between atoms, and so gravitational collapse continues. The gravitational energy released in this process produces temperatures of \(6 \times 10^8\) K, which are sufficient to initiate a complex series of nuclear reactions known as the carbon-nitrogen cycle.
The net reaction of this cycle is the further fusion of hydrogen to helium, in which \(\ce{^{12}C}\) acts as a catalyst, and various nuclides of nitrogen and oxygen are intermediates. The temperature is sufficiently high, however, to initiate fusion reactions of some of these intermediates:

\[\ce{^{12}_6C + ^{12}_6C -> ^{20}_{10}Ne + ^{4}_2He}\]

\[\ce{2 ^{16}_8O -> ^{28}_{14}Si + ^{4}_2He}\]

\[\ce{2 ^{16}_8O -> ^{31}_{16}S + ^{1}_0n}\]

The intense gamma radiation that is produced in some of these reactions breaks some of the product nuclei into smaller fragments, which can then fuse into a variety of heavier species, up to the limit of \(\ce{^{56}_{26}Fe}\), beyond which fusion is no longer exothermic. The greater relative abundance of elements such as \(\ce{^{12}_6C}\), \(\ce{^{16}_8O}\), and \(\ce{^{20}_{10}Ne}\), which differ by a \(\ce{^{4}_2He}\) nucleus, reflects the participation of the latter species in these processes. These exothermic reactions eventually produce temperatures of \(8 \times 10^9\) K, while contraction continues until the central core is essentially a ball of neutrons having a radius of about 10 km and a density of \(10^{14}\) g cm–3. At the same time the outer shell of the star is blasted away in an explosion known as a supernova.

Note

Only six supernovas have been observed in our galaxy. The supernova of 1987 was the most recent; the one before it occurred in 1604, prior to the invention of the telescope. Tycho Brahe's observation of a supernova in 1572 was crucial in overturning the Aristotelian tradition of the immutability of the "fixed stars", or "firmament". The remains of these supernovas have been detected and studied by X-ray observations. Thus all of the elements in our solar system that are heavier than iron are the recycled remnants of former stars.

Since \(\ce{^{56}_{26}Fe}\) has the highest binding energy per nucleon of any nuclide, there are no exothermic processes which can lead to the formation of heavier elements. Fusion into heavier species is also precluded by the electrostatic repulsion of the highly charged nuclei. However, the process of neutron capture can still take place (this is the same process that is used to make synthetic elements). The neutrons are by-products of a large variety of stellar processes, and are present over a wide range of energies. Two general types of neutron-capture processes are recognized. In an "s" (slow) process, only a single neutron is absorbed, and the product usually decomposes by β-decay into a more proton-rich species:

\[\ce{^{58}_{26}Fe + ^{1}_0n -> ^{59}_{26}Fe -> ^{59}_{27}Co + \beta^-}\]

This process occurs at rates of about \(10^{-5}\) yr–1 (roughly one capture per \(10^5\) years), and accounts for the lighter isotopes of many elements. The other process (the "r", or rapid, process) occurs in regions of high neutron density and involves multiple captures at rates of 0.1-10 per second:

\[\ce{^{56}_{26}Fe + 3 ^{1}_0n -> ^{59}_{26}Fe -> ^{59}_{27}Co + \beta^-}\]

This mechanism favors the heavier, neutron-rich isotopes and the heaviest elements.

A few nuclei are not accounted for by any of the processes mentioned. These are all low-abundance species, and they probably result from processes having low rates. Examples are \(\ce{^{112}Sn}\) and \(\ce{^{114}Sn}\), which may be produced through proton capture, and \(\ce{^{2}H}\), \(\ce{^{6}Li}\), \(\ce{^{7}Li}\), Be, \(\ce{^{10}B}\) and \(\ce{^{11}B}\), which may come from spallation processes resulting from collisions of cosmic-ray particles with heavier elements.
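The energetics of hydrogen burning can be checked directly from the mass defect. The sketch below uses standard atomic masses, which are not given in the text; the 1200 kJ per gram figure for gravitational energy comes from the passage above.

```python
# Energy release of 4 1H -> 4He from the mass defect, E = (delta m) c^2.

M_H  = 1.007825          # u, atomic mass of 1H (standard value)
M_HE = 4.002603          # u, atomic mass of 4He (standard value)
U_TO_KG = 1.66054e-27    # kg per atomic mass unit
C = 2.9979e8             # m/s, speed of light
N_A = 6.022e23           # Avogadro's number

dm = 4 * M_H - M_HE                             # mass defect, ~0.0287 u (~0.7%)
e_per_fusion = dm * U_TO_KG * C**2              # J per 4He formed (~27 MeV)
e_per_gram_h = e_per_fusion * N_A / (4 * M_H)   # J per gram of hydrogen burned
print(f"mass defect: {dm:.4f} u")
print(f"energy: {e_per_gram_h:.1e} J per gram of H")
# ~6e11 J/g, roughly half a million times the ~1.2e6 J/g released as heat
# during the initial gravitational contraction of the star.
```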
1.3: Formation and evolution of the Earth
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/01%3A_The_Earth_and_its_Lithosphere/1.03%3A_Formation_and_evolution_of_the_Earth
The solar system is believed to have formed about 5 billion years ago as a result of the aggregation of cosmic dust and interstellar atoms in a region of space in which the density of such material happened to be greater than average. Over 99.8% of this mass, which consisted mostly of hydrogen, collapsed into a proto-sun; the gravitational energy released in this process raised the temperature sufficiently to initiate the hydrogen fusion reactions discussed above. The remaining material probably formed a disk that rotated around the sun. As the temperature dropped to around 2000 K, some of the most stable combinations of the elements began to condense out. These substances might have been calcium aluminum silicates, followed by the more volatile iron-nickel system, and then magnesium silicates. The further aggregation of these materials, together with the other constituents of the cooling disk, is now believed to be the origin of the planets. Density estimates indicate that the planets closest to the sun are predominantly rocky in nature, and probably condensed first. The outer planets (Uranus, Neptune and Pluto) appear to consist largely of water ice, methane, and ammonia, with a smaller rocky core.

The Earth formed by accretion of the solid and particulate material that remained after the much more massive amounts of hydrogen and helium present in the original protoplanets had been dispersed out of the solar system. Gradually, the heat produced by decay of radioactive elements brought about partial melting of the silicate rocks; these lower-density molten materials migrated upward, leaving the more dense, iron-containing minerals below. This process, which took about 2 million years, was the first of the three stages into which the chemical evolution of the earth is usually divided: primary differentiation, in which the dense metallic elements sank toward the core while the lighter silicates migrated upward; secondary differentiation, in which partial melting of the upper mantle separated out a still lighter crust; and the outgassing of volatile substances, which ultimately produced the atmosphere and the oceans.

This listing should not be taken too literally; all three kinds of processes have probably proceeded simultaneously, and over a number of cycles. Since the earth is losing approximately four times as much heat as is generated by radioactive decay, the principal driving force of primary and secondary differentiation has gradually slowed down.

Partial melting of the upper mantle brought about further fractionation as silicon-containing materials of low density migrated outward to form a crust. In its early stages the stronger granitic rocks had not yet appeared, and the crust was mechanically weak. Upwelling flows of lava would break the surface, and the weight of the solidified lava would cause the crust to subside. In some places, magma would solidify underground, forming low-density rock (batholiths) that would eventually rise by buoyancy and push up the overlying crust. These mountain-building periods probably occurred in 6-8 major episodes, each lasting about 800 million years.

At the same time, outgassing of solids released large amounts of HCl, CO, CO2, H2S, CH4, SO2, and SO3 into the primitive atmosphere. Large amounts of water were present in the primeval rocks in the form of hydrates, which were broken down as a result of the heating. Eventually, when the outer crust cooled enough to permit condensation of the water vapor as rain, a new stage of chemical evolution began. The rain was initially highly acidic, equivalent to about 1 M HCl; this reacted readily with the basic rocks having high contents of K, Na, Mg, and Ca, leaching them away and forming what would eventually evolve into the oceans.
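The gradual slowing of the radioactive driving force mentioned above can be made concrete by asking how much of each major heat-producing isotope has survived to the present. The half-lives in this sketch are standard literature values, not figures from the text.

```python
# Fraction of each long-lived, heat-producing isotope remaining after
# 4.5 billion years: N/N0 = (1/2) ** (t / t_half).

HALF_LIVES_GYR = {"K-40": 1.25, "U-235": 0.704, "U-238": 4.47, "Th-232": 14.0}
AGE_GYR = 4.5

for isotope, t_half in HALF_LIVES_GYR.items():
    remaining = 0.5 ** (AGE_GYR / t_half)
    print(f"{isotope:>6}: {remaining:6.1%} of the original amount remains")
# K-40 and U-235 are down to ~8% and ~1% of their initial abundances, so
# radiogenic heat production was several times greater during the era of
# primary differentiation than it is today.
```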
The partial dissolution of the rocks also resulted in large amounts of sediments, which played their own role in the transformation of the earth's surface.

Within the crust, the lighter materials, being in isostatic equilibrium with the upper mantle, floated higher, and gradually became the nuclei of continents, which grew by accumulating similar material around their boundaries. This picture of continental development is supported by isotopic-ratio studies, which indicate that the nucleus of the North American continent, the Canadian Shield, is over 2.5 billion years old, while the peripheral parts are less than 0.6 billion years of age.

The more traditional geochemical view of primary differentiation begins with the assumption that the core of the earth is in a chemically reduced state, while the metallic elements constituting the mantle are almost entirely oxidized to their lower-free-energy cationic forms. Oxygen and sulfur acted as the major electron acceptors in this process, but the abundance of these elements was insufficient to oxidize much of the nickel or iron.

Iron itself is believed to have played a crucial role in the primary differentiation of other metals and of oxidized metallic elements that iron is able to reduce. As the dense molten iron migrated in toward the core, it dissolved (formed a liquid alloy with) any other metals with which it came in contact, and it reduced (donated electrons to) those metallic cations that are less "active" metals than iron under these conditions. The resulting metal would then mix with more of the migrating liquid iron, and be carried along with it into the core. Accordingly, elements whose reduction potentials are more positive than that of iron (i.e., which are lower-free-energy electron sinks) are called siderophiles; these elements have a low abundance in the crust and upper mantle. The other two important classes of solid-forming elements are the lithophiles and the chalcophiles. These generally have more negative reduction potentials than iron, and are distinguished mainly by their relative affinity for oxygen or sulfur. The chalcophiles, of which Cu, Cd, and Sb are examples, tend to form larger, more polarizable ions which can associate with the sulfide ion. The lithophiles comprise those elements, such as K, Al, Mn, and Si, which have smaller ions and which combine preferentially with oxygen. This broad classification is reflected in the dominant forms in which many of these elements occur in nature.

The differential distribution of the elements within one of the main regions of the earth has been studied in detail only in the portion that is accessible, namely the upper crust. It is clear that fractional crystallization from the cooling magma has played an important role. The relative temperatures at which minerals crystallize are determined in large part by their lattice energies, which are in turn related to ionic sizes and charges. Minerals with small, highly charged ions will have higher melting points and should crystallize first. Thus the sodium-containing feldspar albite (NaAlSi3O8) is found nearer the surface than is its calcium analog anorthite (CaAl2Si2O8). The less abundant elements often do not form minerals of their own, but may replace the ion of a more abundant mineral in its crystal lattice. This is known as isomorphous replacement, and it naturally depends on the relative ionic radii.
Some ion pairs that undergo isomorphous replacement in minerals are K+ and Ba2+, and Si4+ and Ge4+.

The Phase Rule can be invoked to explain, in a very rough way, the differentiation of the elements into distinct solid phases:

\[P = C + 2 - F\]

Taking the number of degrees of freedom \(F\) as 2 (fixed temperature and pressure), the six major elemental components (O, Si, Al, Fe, Mg and Na) can form up to six phases (a worked illustration is given at the end of this section). In fact, more than 99% of igneous rocks are made up of seven principal mineral phases: the silica minerals, feldspars, feldspathoids, olivine, pyroxenes, amphiboles and micas.

The differential deposition of minerals is also influenced by temperature-composition phase relations, as exemplified by the ordinary two-component phase diagram. If the mineral that is rich in one component and which first crystallizes out is also more dense, then the richer ore will occur near the bottom of the deposit, while a more mixed ore (approaching the eutectic) will remain near the top.

Whether an element is concentrated in the crust or elsewhere depends on its chemical behavior and on the physical properties of its stable compounds, as reflected in the lithophile, chalcophile, and siderophile classifications described above.
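As the worked illustration promised above (using no numbers beyond those already quoted):

\[F = C - P + 2 \quad \Longrightarrow \quad P = C + 2 - F = 6 + 2 - 2 = 6\]

With the six major components, and with temperature and pressure regarded as fixed (consuming the two degrees of freedom), no more than six phases can coexist in a single equilibrium assemblage. The seven principal mineral phases refer to igneous rocks taken as a whole; they do not all appear together in any one rock, which is presumably why the phase-rule estimate works only in a rough way.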
1.4: The Earth's crust
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/01%3A_The_Earth_and_its_Lithosphere/1.04%3A_The_Earth's_crust
The structure and composition of the outer part of the lithosphere has been profoundly affected by interactions with the atmosphere over one-quarter of the surface area of the earth, and with the hydrosphere over the remaining area. Further modification of the outermost parts of the crust has occurred as the result of the activities of living organisms. These changes have transformed much of the outermost part of the crust into an unconsolidated surface region called the regolith. Further weathering and translocation of soluble substances often results in a sequence of horizons consisting of sediments, soils, or evaporites.

Chemically, the earth's crust consists of about 80 elements distributed among approximately 2000 compounds or minerals, many of which are of variable composition. Over 99% of the mass of the crustal material is made up of only eight of these elements, however: O, Si, Al, Fe, Ca, Na, K, and Mg.

Table \(\PageIndex{1}\): Average amounts of elements in crustal rocks, mg/g.

The crust has its origin in the upwelling convection currents that bring mantle material near to the surface at the mid-ocean ridges, where the reduced pressure causes it to melt into magma. The magma may solidify before it reaches the surface, forming basalt, or it may emerge from the surface in a volcanic eruption. The oceanic crust consists mostly of the simpler silicate minerals, which are said to be basic or mafic. The more evolved, silicon-rich rocks found in the continental crust are known as acidic or sialic.

Oceanic crust is continually being extruded from regions of the plastic mantle that intrude upward to just beneath the ocean's floor at the mid-ocean ridges. A corresponding amount of this crust is returned to the lithosphere at subduction zones off the west coasts of the Americas, in the process pushing up the mountain ranges that lie along these coasts. The subducted oceanic crust is reheated and combined with sedimentary material to undergo partial remelting and reworking; this is believed by some to be the origin of granite. Subduction proceeds at a rate of a few cm per year, and the complete cycle time is on the order of a few hundred million years.

Figure: The continental and oceanic crusts. This schematic cross section runs in a west-to-east direction, from just off the West coast of South America on the left to the African continent on the right. Oceanic crust is shown in black, continental crust in red. (From Cosmos and Earth Study Guide, NYU)

Both oceanic and continental crusts float on the more dense upper lithosphere, and gradually shift their positions as they push against each other, and in response to the slow convective motions in the medium that supports them. The continental crust is thicker than the oceanic crust, but it is also less dense, which allows it to float higher (and thus differentiates continents from oceans). The lower density also prevents it from being subducted. Recycling can occur indirectly as continental material erodes and is deposited as sediments on the ocean floor, but this is a much slower process, one that takes billions instead of millions of years. Some of the very oldest rocks, found in Greenland and Labrador, have been dated at 3.9 billion years, and thus approach the age of the Earth itself.

When magma crystallizes it forms igneous rock, the major component of the Earth's crust. The crystallization is a complex process which is not entirely understood, due largely to the lack of sufficient thermodynamic data on the various components at high temperatures and pressures.
It is known that the different components of magma have differing melting points and densities, and that the phase behavior of multicomponent systems based on some of these substances is quite complex, involving binary and ternary eutectics, solid solutions, the presence of dissolved water (under pressure), and incongruent melting. One consequence of this complexity is that the composition of the magma changes as crystallization takes place; different substances crystallize at various stages, and the resulting solids may migrate toward the top or bottom of the region if their densities differ greatly from that of the magma.

It is well known that larger crystals form when a melt cools more slowly. This principle affords a simple distinction between the coarser-grained plutonic rocks, which are believed to have been formed by gradual cooling of magma pockets within the crust, and the fine-grained volcanic rocks such as basalt. Under the influence of heat and pressure, particularly at plate boundaries, solid crustal material may undergo partial or complete remelting, followed by cooling and transformation into metamorphic rocks such as gneiss, micas, quartzite, and possibly granites.

Granite was once thought to be an igneous rock, originating from the crystallization of a particular kind of magma. The association of granitic rocks with mountainous regions, and the similarity of their compositions in widely scattered regions, lends credence to the more recent hypothesis that granitic rocks are of metamorphic origin.

Another class of rock is sedimentary rock, formed from the consolidation of material produced by weathering and other chemical and biological processes. Sedimentary rocks cover about three-quarters of the land area of the earth; 80% are shales, 15% sandstones and 5% limestones.

The chemical composition of rocks tends to be complex and variable, and can only be specified in a precise way at the structural level. The traditional way of expressing rock compositions is in terms of the mass percent of the oxides of the elements present in the rock.

Table \(\PageIndex{2}\): Chemical composition of a typical rock (quartz-feldspar-biotite gneiss), in percent by weight. The columns give each oxide, its common name, and its abundance in fresh and in weathered rock.

This does not mean that these oxides, or the structural units they represent, are actually present as such in a rock. In the chemical analysis of rocks, oxygen is generally not determined separately. When it is, however, it is found in an amount that would be expected to combine stoichiometrically with the other elements present. Thus the composition of albite can be written as either NaAlSi3O8 or Na2O·Al2O3·6SiO2. Some rocks contain varying ratios of certain elements. For example olivine, which can be considered a solid solution of Mg2SiO4 and Fe2SiO4, can be represented by (Mg,Fe)2SiO4; this implies that the ratio of metal to silica is constant, and that magnesium is ordinarily present in greater amount than iron.

The major structural elements of rock (both in the crust and in the mantle) are the silicate minerals, built from silicon atoms surrounded tetrahedrally by four oxygens. The simplest of these consist just of SiO44– tetrahedra interspersed with positive ions to achieve electroneutrality; olivine, (Mg,Fe)2SiO4, is a well-known example. More commonly, the silicate groups polymerize by sharing one or more oxygen atoms at adjacent tetrahedral corners.
Depending on the number of joined corners per silicate unit, this can lead to the formation of a wide variety of chains (pyroxenes, amphiboles) and sheets (micas), culminating in the complete tetrahedral polymerization that produces quartz, SiO2. Higher degrees of polymerization are associated with higher ratios of Si to O, smaller quantities of positive ions, and higher melting points. Thus when magma cools, the first silicates to crystallize are the olivines, followed by chain and sheet minerals having progressively higher degrees of polymerization and smaller fractions of cations of metals such as Fe and Mg.

Although some elements are distributed fairly uniformly throughout the crust, others occur at greatly enhanced concentrations in localized areas. There are two general processes that result in these localized excesses, which are called ores when their extraction and refining is economically feasible.

The first of these relates to how well a metallic ion can fit into the silicate lattice structure. Ions having the right charge and size can readily enter this structure, displacing the more common ions of Fe, Al and Mg. Such ions (of which Ga3+ is an example) are readily soluble in other minerals and thus are widely distributed and do not concentrate into ores. Other ions may be too large (Cs and La), too small (Li, Be, B) or too highly charged (Nb, Ta, W) to be accommodated in silicate mineral structures; these elements tend to remain in the magma as it solidifies, finally forming solid minerals only in the last stages of cooling.

The other major source of ores is hydrothermal formation. Magma contains some water itself, and additional water from the surface is able to reach the heated rock near magma chambers. At the very high temperatures and pressures that prevail in these regions, the water can dissolve many compounds, such as sulfides, which are normally considered highly insoluble. When these superheated solutions rise to the surface the solids are re-deposited, often in highly concentrated form. Ores of Cu, Sn, W, and possibly some iron ores, as well as some native metals such as gold, are believed to be formed in this way.

Figure: The hydrothermal vent photographed here emits both black and white "smoke".

Hydrothermal vents known as "black smokers" have been observed at sites of sea-floor spreading; the "smoke" consists of metallic sulfides which precipitate in the cold seawater. The veins of pyrites (FeS2) and similar sulfide minerals that are often observed in rock formations are the result of hydrothermal solutions that once penetrated cracks and fissures in the rock.

The weathering of rocks at the earth's surface is a complex process involving both physical and chemical changes. The latter tend in principle to be rather simple kinds of reactions involving dissolution, reaction with carbon dioxide, hydrolysis, hydration, and oxidation. The difficulty in studying them, and in arriving at a quantitative description, is that these reactions occur very slowly and may never reach an equilibrium state. A comparison of the fresh-rock and weathered-rock columns of the gneiss analysis table above provides some illustration of the overall effect of these changes, although it must be emphasized that these are relative composition data, and thus cannot by themselves show how much of a given component has been lost; the sketch following this paragraph shows how an absolute loss can be estimated.
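The sketch below, with invented numbers purely for illustration, shows the standard way around this limitation: one component (here Al2O3, which, as noted in the next paragraph, decreases very slowly during weathering) is assumed to be immobile, and the weathered analysis is rescaled against it.

```python
# Hypothetical illustration (all numbers invented): recovering absolute
# losses from relative wt-% analyses by assuming Al2O3 was immobile.

fresh     = {"SiO2": 70.0, "Al2O3": 15.0, "CaO": 2.0, "Na2O": 3.0}   # wt %
weathered = {"SiO2": 68.0, "Al2O3": 20.0, "CaO": 0.5, "Na2O": 1.0}   # wt %

# If the Al2O3 content was conserved, 100 g of fresh rock has weathered
# down to (15.0 / 20.0) * 100 = 75 g of residue.
mass_left = fresh["Al2O3"] / weathered["Al2O3"] * 100    # g per 100 g fresh

for oxide, start in fresh.items():
    retained = weathered[oxide] * mass_left / 100        # g per 100 g fresh
    lost_pct = 100 * (start - retained) / start
    print(f"{oxide:>6}: {start:5.1f} g -> {retained:5.2f} g ({lost_pct:3.0f}% lost)")
# Although SiO2 barely changes in relative terms (70% -> 68%), about a
# quarter of it has actually been removed; CaO and Na2O fare far worse.
```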
In general, sodium, calcium and magnesium seem to be lost more rapidly than potassium and silicon, while iron and aluminum decrease very slowly. Individual rates are of course dependent on the particular structural units containing the element, and also vary somewhat with grain size and the condition of the surface.

Water is undoubtedly the most important weathering agent. Not only does it act as a solvent for ionic dissolution products, but it also brings other active agents such as carbon dioxide and oxygen into intimate contact with the rock material. As water percolates into the outermost layers of the crust, it extends the zone of weathering beneath the surface; the effects of this are quite noticeable in a number of buried sedimentary materials such as Paleozoic sandstones, which tend to be depleted of all but the most resistant minerals. Dissolution, the simplest of all the weathering processes, usually results in ionic species, some of which may react with water to yield acidic or alkaline solutions. Dissolution of silica, however, results in the neutral species H4SiO4.

Reactions involving hydration and dehydration are very common, and since the free-energy changes tend to be small, these reactions can usually take place in either direction under slightly different conditions. Thus gypsum and anhydrite are interconvertible at observable rates under common environmental conditions:

CaSO4·2H2O → CaSO4 + 2 H2O

In many cases, however, the reaction products are not very well characterized, thermodynamic data are lacking, and the reactions proceed so slowly that they are not entirely understood. For example, both hydrous and anhydrous iron oxides can be found in similar geologic environments, but little is known about the interconversion process, represented approximately as

Fe2O3 + H2O → 2 FeOOH

Solid carbonates tend to dissolve in acidic solutions, including those produced when atmospheric carbon dioxide dissolves in water. Thus the major surface limestone deposits (largely CaCO3, with some admixture of MgCO3) tend to be highly eroded in non-arid regions, and the local groundwater may have Ca2+ concentrations as high as 0.1-0.2 M.

Thermodynamics can unambiguously predict the most stable oxidation state of a metal ion under given conditions of pH and oxidant concentration. The mechanisms tend to be very uncertain, however. For one thing, both the reactant and product can often exist in various states of hydration, and the dissolved species (which probably undergo the actual oxidation) often consist of polycations and complexed species. Compounds of Fe(II), for example, will always tend to oxidize to Fe(III) in the presence of air; the various oxides of iron are responsible for the bright colors seen in many geological formations, and in certain soils. Some of the net reactions that probably occur are

Fe2SiO4 + 1/2 O2 + 2 H2O → Fe2O3 + H4SiO4

2 FeCO3 + 1/2 O2 + 2 H2O → Fe2O3 + 2 H2CO3

An environmental side effect of the first process is the release of hydrated silica.
Also, where both starting materials are present, the carbonic acid produced in the second reaction is believed to promote the dissolution of ferrous silicate, creating a source of Fe(II) ions that can be rapidly oxidized:

Fe2SiO4 + 4 H2CO3 → 2 Fe2+ + 4 HCO3– + H4SiO4

2 Fe2+ + 4 HCO3– + 1/2 O2 + 2 H2O → Fe2O3 + 4 H2CO3

The oxidation of sulfides can produce strongly acidic solutions:

2 FeS2 + 15/2 O2 + 4 H2O → Fe2O3 + 4 SO42– + 8 H+

The effects of this can be seen in formations containing outcrops of pyrite veins, where the surrounding rocks are heavily stained with yellow and brown Fe(III) oxides, and the groundwater tends to be highly acidic. This process is mediated by microorganisms, and is an important source of the acid pollution associated with mines and mine tailings.

The various components of rocks weather at different rates. The more basic components such as CaO and MgO tend to disappear first, especially if in contact with groundwaters containing high CO2 concentrations. For rocks in general, the first reaction is usually hydration, followed by hydrolysis, which can be summarized by

4 KAlSi3O8(s) + 22 H2O → Al4Si4O10(OH)8(s) + 8 H4SiO4(aq) + 4 K+ + 4 OH–

in which other Group 1 or 2 cations might replace potassium. The product Al4Si4O10(OH)8 is kaolinite, a form of clay (see below). In general, the rocks which crystallized first from the magma (the Ca-feldspars and olivines) weather more rapidly than do the lower-melting rocks.

Clays are the solid end products of the weathering of rocks. They are basically composed of alternating sheets of "SiO2" and "AlO6" units in ratios of 1:1 (kaolinite), 2:1 (montmorillonite and vermiculite) and 2:2 (chlorite). In between the sheets, holding them together by hydrogen bonding, are water molecules. Also present are cations such as K+, Ca2+ and Mg2+, which act to neutralize the negative charges of the oxide ions.

Figure: Structure of a clay, showing two layers of the stacked sheets of kaolinite.

The major agents of physical weathering of exposed rocks are rapid changes in temperature (promoting fracture by differential expansion), the abrasive action of windborne material and glacier movement, and especially the penetration of water into cracks and its subsequent freezing. The expansion of water on freezing can exert a pressure of 150 kg cm–2, whereas the tensile strength of a typical rock is around 120 kg cm–2. The roots of some plants are able to penetrate rock quite effectively, producing comparable expansive pressures in subsurface rocks.

Soils are a product of the interaction of water, air, and living organisms with exposed rocks or sediments at the earth's surface. A typical soil contains about 45% inorganic solids and 5% organic solids by volume; water and air each make up about 20-30%.

Figure: A simple way of classifying soils is based on the relative quantities of clay, silt and sand in the solid component. For ordinary agricultural purposes, loams are considered the most ideal soil type.

Mineral components of soil

The primary inorganic components of soils consist of sand and silt particles that come directly from the parent rocks. This fraction is dominated by quartz and feldspars (aluminosilicates). Secondary components are formed by chemical changes within the soil itself, or in the sediments from which the soil derived.
These are most commonly clays, but may also include calcite, gypsum, and sulfide minerals such as pyrites; the latter are formed by bacterial action under reducing conditions in the presence of organic matter.

The clays have an especially important effect both on the physical properties of the soil and on its ability to store plant nutrients, including trace nutrients such as Mo and Mn. These properties are due to the high ion-exchange capacity of clays. The more highly charged cations such as Al3+ and Fe3+ tend to be more strongly adsorbed within the inter-sheet regions than are Mg2+ or K+. As plants withdraw these latter cations from the soil water, more are released by the clay components, which thus act as nutrient reservoirs. The ion-exchange properties of clays also help to maintain the pH balance of soils, through the exchange of H+ and cations such as Ca2+. The soil pH, in turn, strongly affects the solubility of nutrient cations, and thus their availability to plants. For example, the uptake of phosphorus (in the form of H2PO4–) is efficient only within the rather narrow pH range between 6 and 7. Below 6, dihydrogen phosphates of Fe and Al are precipitated, while insoluble Ca3(PO4)2 forms at higher pH's.

Part of the organic matter of soil consists of organisms (mainly bacteria and fungi) and of roots and root hairs. The remainder is largely in the form of fulvic and humic acids. These substances of indefinite composition are classified on the basis of their solubility behavior; fulvic acids remain in solution at pH 2, but humic acids, having molecular weights of 20,000 to 100,000, are precipitated. Both are flexible polyelectrolytes that interact strongly with their own kind and with inorganic ions.

Figure: Model structure of humic acid (Stevenson 1982).

Associated with the fulvic and humic fractions are a wide variety of smaller molecules such as alkanes, amino acids, amino sugars, and sulfur and phosphorus derivatives of sugars. Part of the organic carbon in a fertile soil is recycled in 1-2 years; plant residues, which are the major source of soil organic matter, have a half-life of days to months. Once carbon becomes incorporated into humic substances, it is locked into a much slower recycling process; the turnover times of fulvic acids are a hundred years or more, while those of humic acids are around a thousand years. For this reason, humic substances are the major reservoir of organic carbon in soil. Organic matter, particularly the polysaccharides, binds strongly to the cation components of clay colloids; the two together act as cementing agents and strongly influence the consistency and structure of the soil.

Soil water is held by capillary action and adsorption with varying degrees of tenacity. This water-binding strength is traditionally expressed in terms of the pressure, or "tension", that would be required to force the water out of the soil. The tension of capillary water varies over a wide range of 0.1-32 atm; only in the lower half of this range will it be available to plants, which can exert an osmotic pressure of up to about 15 atm. Water in excess of the capillary capacity fills larger voids and is called gravitational water. Its presence in surface soils corresponds to a flooded condition that inhibits plant growth by reducing soil aeration.

The gas phase within soil pores generally has a CO2 content 5-50 times that of the atmosphere, owing to the action of organisms. O2 tends to be depleted to roughly the extent that CO2 is present in excess.
Under conditions of poor aeration (i.e., restricted exchange with the atmosphere), considerable quantities of N2O, NO, H2, CH4, C2H4, and H2S may also be present.
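The effect of this CO2-enriched soil air on the acidity of soil water can be estimated from the carbonate equilibria. In the sketch below, the Henry's-law and first-ionization constants are standard 25°C values (they do not appear in the text), further ionization and other soil buffers are neglected, and the atmospheric CO2 level is taken as the 0.03% quoted earlier.

```python
import math

# pH of water equilibrated with CO2-enriched soil air, treating only the
# first ionization of carbonic acid: [H+] ~ sqrt(Ka1 * KH * pCO2).

K_H  = 0.034        # mol/(L*atm), Henry's-law constant for CO2 at 25 C
K_A1 = 4.45e-7      # first ionization constant of carbonic acid at 25 C
P_CO2_AIR = 3.0e-4  # atm, the 0.03% atmospheric level quoted in the text

for factor in (1, 5, 50):              # soil air holds 5-50x atmospheric CO2
    h = math.sqrt(K_A1 * K_H * factor * P_CO2_AIR)
    print(f"pCO2 = {factor:>2} x atmospheric -> pH ~ {-math.log10(h):.1f}")
# Raising pCO2 from 1x to 50x atmospheric drops the pH from ~5.7 to ~4.8,
# part of the reason percolating soil water is such an effective
# weathering agent.
```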
2.1: Water, Water Everywhere...
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/02%3A_The_Hydrosphere/2.01%3A_Water_Water_Everywhere...
Water is the most abundant substance at the earth's surface. Almost all of it is in the oceans, which cover 70% of the surface area of the earth. However, the amounts of water present in the atmosphere and on land (as surface runoff, lakes and streams) are great enough to make it a significant agent in transporting substances between the lithosphere and the oceans. Water interacts with both the atmosphere and the lithosphere, acquiring solutes from each, and thus provides the major chemical link between these two realms. The various transformations undergone by water through the different stages of the hydrologic cycle act to transport both dissolved and particulate substances between different geographic locations.

Table: Volumes of the major water reservoirs. The columns give each reservoir (the oceans, shallow groundwater, etc.), its volume in \(10^6\) km3, and its percent of the total.

The composition of the ocean has attracted the attention of some of the more famous names in science, including Robert Boyle, Antoine Lavoisier and Edmund Halley. Their early investigations tended to be difficult to reproduce, owing to the different conditions under which they crystallized the various salts. As many as 54 salts, double salts and hydrated salts can be obtained by evaporating seawater to dryness. At least 73 elements are now known to be present in seawater.

The best way of characterizing seawater is in terms of its ionic content. The remarkable thing about seawater is the constancy of its relative ionic composition. The overall salt content, known as the salinity (grams of salts contained in 1 kg of seawater), varies only slightly, within the range of 32-37.5 g/kg, corresponding to a total salt concentration of roughly 0.7 M. The ratios of the concentrations of the different ions, however, are quite constant, so that a measurement of the Cl– concentration alone is sufficient to determine the overall composition and total salinity.

Although most elements are found in seawater only at trace levels, marine organisms may selectively absorb them and make them more detectable. Iodine, for example, was discovered in marine algae (seaweeds) 14 years before it was found in seawater. Other elements that were not detected in seawater until after they were found in marine organisms include barium, cobalt, copper, lead, nickel, silver and zinc. \(\ce{^{32}Si}\), presumably deriving from cosmic-ray bombardment of Ar, has been discovered in marine sponges.

pH balance. Reflecting this constant ionic composition is the pH, which is usually maintained in the narrow range of 7.8-8.2, compared with 1.5 to 11 for fresh water. The major buffering action derives from the carbonate system, although ion exchange between Na+ in the water and H+ in clay sediments has recently been recognized to be a significant factor.

The major ionic constituents whose concentrations can be determined from the salinity are known as conservative substances. Their constant relative concentrations are due to the large amounts of these species in the oceans in comparison with their small inputs from river flow. This is another way of saying that their residence times are very large. A number of other species, mostly connected with biological activity, are subject to wide variations in concentration. These include the nutrients NO3–, NO2–, NH4+, and HPO42–, which may become depleted near the surface in regions of warmth and light. As was explained in the preceding subsection on coastal upwelling, offshore prevailing winds tend to drive Western coastal surface waters out to sea, causing deeper and more nutrient-rich water to be drawn to the surface.
This upwelled water can support a large population of phytoplankton and thus of zooplankton and fish. The best-known example of this is the anchovy fishery off the coast of Peru, but the phenomenon occurs to some extent on the West coasts of most continents, including our own.

Other non-conservative components include Ca2+ and SiO44–. These ions are incorporated into the solid parts of marine organisms, which sink to greater depths after the organisms die. The silica gradually dissolves, since the water is everywhere undersaturated in this substance. Calcium carbonate dissolves at intermediate depths, but may reprecipitate in deep waters owing to the higher pressure. Thus the concentrations of Ca2+ and of SiO44– tend to vary with depth. The gases O2 and CO2, being intimately involved with biological activity, are also non-conservative, as are N2O and CO.

Most of the organic carbon in seawater is present as dissolved material, with only about 1-2% in particulates. The total organic carbon content ranges between 0.5 mg/L in deep water and 1.5 mg/L near the surface. There is still considerable disagreement about the composition of the dissolved organic matter; much of it appears to be of high molecular weight, and may be polymeric. Substances qualitatively similar to the humic acids found in soils can be isolated. The greenish color that is often associated with coastal waters is due to a mixture of fluorescent, high-molecular-weight substances of undetermined composition known as "Gelbstoffe". It is likely that the significance of the organic fraction of seawater is much greater than its low abundance would suggest. For one thing, many of these substances are lipid-like and tend to adsorb onto surfaces. It has been shown that any particle entering the ocean is quickly coated with an organic surface film that may influence the rate and extent of its dissolution or decomposition. Certain inorganic ions may be strongly complexed by humic-like substances. The surface of the ocean is mostly covered with an organic film, only a few molecular layers thick. This is believed to consist of hydrocarbons, lipids, and the like, but glycoproteins and proteoglycans have also been reported. If this film is carefully removed from a container of seawater, it will quickly be reconstituted. How significant this film is in its effects on gas exchange with the atmosphere is not known.

The salinity of the ocean appears to have been about the same for at least the last 200 million years. There have been changes in the relative amounts of some species, however; the ratio of Na/K has increased from about 1:1 in ancient ocean sediments to its present value of 28:1. Incorporation of calcium into sediments by the action of marine organisms has reduced the Ca/Mg ratio from 1:1 to 1:3.

If the composition of the ocean has remained relatively unchanged with time, the continual addition of new mineral substances by the rivers and other sources must be exactly balanced by their removal as sediment, possibly passing through one or more biological systems in the process.

In 1715 Edmund Halley suggested that the age of the ocean (and thus presumably of the world) might be estimated from the rate of salt transport by rivers. When this measurement was actually carried out in 1899, it gave an age of only 90 million years. This is somewhat better than the calculation made in 1654 by James Ussher, the Anglican Archbishop of Armagh, Ireland, based on his interpretation of the Biblical book of Genesis, that the world was created at 9 A.M.
on October 23, 4004 BC, but it is still far too recent, being about the time when the dinosaurs became extinct. What Halley actually described was the residence time, which is about right for Na but much too long for some of the minor elements of seawater.

The commonly stated view that the salt content of the oceans derives from surface runoff that contains the products of weathering and soil leaching is not consistent with the known compositions of the major river waters (see table). The halide ions are particularly over-represented in seawater, compared with fresh water. These were once referred to as "excess volatiles", and were attributed to volcanic emissions. With the discovery of plate tectonics, it became apparent that the locations of seafloor spreading, at which fresh basalt flows up into the ocean from the mantle, are also sources of mineral-laden water. Some of this may be seawater that has cycled through a hot porous region and has been able to dissolve some of the mineral material owing to the high temperature. Much of the water, however, is "juvenile" water that was previously incorporated into the mantle material and has never before been in the liquid phase. The substances introduced by this means (and by volcanic activity) are just the elements that are "missing" from river waters. Estimates of what fraction of the total volume of the oceans is due to juvenile water (most of it added in the early stages of mantle differentiation that began some four billion years ago) range from 30 to 90%.

The oceans can be regarded as the product of a giant acid-base titration in which the carbonic acid present in rain reacts with the basic materials of the lithosphere. The juvenile water introduced at locations of ocean-floor spreading is also acidic, and is partly neutralized by the basic components of the basalt with which it reacts. Surface rocks mostly contain aluminum, silicon and oxygen combined with alkali and alkaline-earth metals, mainly potassium, sodium and calcium. The CO2 and volcanic gases in rainwater react with this material to form a solution of the metal ion and HCO3–, in which is suspended some hydrated SiO2. The solid material left behind is a clay such as kaolinite, Al2Si2O5(OH)4. This first forms as a friable coating on the surface of the weathered rock; later it becomes a soil material, then an alluvial deposit, and finally it may reach the sea as a suspended sediment. Here it may undergo a number of poorly-understood transformations to other clay sediments such as the illites. Sea-floor spreading eventually transports these sediments to a subduction region under a continental block, where the high temperatures and pressures permit reactions that transform them into hard rock such as granite, thus completing the geochemical cycle.

Deep-sea hydrothermal vents are now recognized to be another significant route for both the addition and removal of ionic substances from seawater.

Although the relative concentrations of most of the elements in seawater are constant throughout the oceans, certain elements tend to have highly uneven distributions vertically, and to a lesser extent horizontally. Neglecting the highly localized effects of undersea springs and volcanic vents, these variations are direct results of the removal of these elements from seawater by organisms; if the sea were sterile, its chemical composition would be almost uniform.

Plant life can exist only in the upper part of the ocean, where there is sufficient light available to drive photosynthesis.
These plants, together with the animals that consume them, extract nutrients from the water, reducing the concentrations of certain elements in the upper part of the sea. When these organisms die, they fall toward the lower depths of the ocean as particulate material. On the way down, some of the softer particles, deriving from tissue, may be consumed by other animals and recycled. Eventually, however, the nutrient elements that were incorporated into organisms in the upper part of the ocean will end up in the colder, dark, and essentially lifeless lower part.Mixing between the upper and lower reservoirs of the ocean is quite slow, owing to the higher density of the colder water; the average residence time of a water molecule in the lower reservoir is about 1600 years. Since the volume of the upper reservoir is only about 1/20 of that of the lower, a water molecule stays in the upper reservoir for only about 80 years.Except for dissolved oxygen, all elements required by living organisms are depleted in the upper part of the ocean with respect to the lower part. In the case of the major nutrients P, N and Si, the degree of depletion is sufficiently complete (around 95%) to limit the growth of organisms at the surface. These three elements are said to be biolimiting. A few other biointermediate elements show partial depletion in surface waters: Ca (1%), C (15%), Ba (75%).The organic component of plants and animals has the average composition C80N15P . It is remarkable that the ratio of N:P in seawater (both surface and deep) is also 15:1; this raises the interesting question of to what extent the ocean and life have co-evolved.In the deep part of the ocean the elemental ratio corresponds to C800N15P, but of course with much larger absolute amounts of these elements. Eventually some of this deeper water returns to the surface where the N and P are quickly taken up by plants. But since plants can only utilize 80 out of every 800 carbon atoms, 90 percent of the carbon will remain in dissolved form, mostly as HCO3–.To work out the balance of Ca and Si used in the hard parts of organisms, we add these elements to the average composition of the lower reservior to get Ca3200Si50C800N15P. Particulate carbon falls into the deep ocean in the ratio of about two atoms in organic tissue to one atom in the form of calcite. This makes the overall composition of detrital material something like C120N15P; i.e., 80 organic C’s and 40 in CaCO3. Accompanying these 40 calcite units will be 40 Ca atoms, but this represents a minor depletion of the 3200 Ca atoms that eventually return to the surface, so this element is only slightly depleted in the upper waters. Silicon, being far less abundant, is depleted to a much greater extent.A continual rain of particulate material from dead organisms falls through the ocean. This shower is comprised of three major kinds of material: calcite (CaCO3), silica (SiO2), and organic matter. The first two come from the hard parts of both plants and animals (mainly microscopic animals such as foraminifera and radiolarians). The organic matter is derived mainly from the soft tissues of organisms, and from animal fecal material. 
Some of this solid material dissolves before it reaches the ocean floor, but not usually before it enters the deep ocean where it will remain for about 1600 years.The remainder of this material settles onto the floor of the sea, where it forms one component of a layer of sediments that provide important information about the evolution of the sea and of the earth. Over a short time scale of months to years, these sediments are in quasi-equilibrium with the seawater. On a scale of millions of years, the sediments are merely way-stations in the geochemical cycling of material between the earth’s surface and its interior.The oceanic sediments have three main origins:Our main interest lies with the silica and calcium carbonate, since these substances form a crucial part of the biogeological cycle. Also, their distributions in the ocean are not uniform- a fact that must tell us something.The skeletons of diatoms and radiolarians are the principal sources of silica sediments. Since the ocean is everywhere undersaturated with respect to silica, only the most resistant parts of these skeletons reach the bottom of the deep ocean and get incorporated into sediments. Silica sediments are less common in the Atlantic ocean, owing to the lower content of dissolved silica.The parts of the ocean where these sediments are increasing most rapidly correspond to regions of upwelling, where deep water that is rich in dissolved silica rises to the surface where the silica is rapidly fixed by organisms. Where upwelling is absent, the growth of the organisms is limited, and little silica is precipitated. Since deep waters tend to flow from the Atlantic into the Pacific ocean where most of the upwelling occurs, Atlantic waters are depleted in silica, and silica sediments are not commonly found in this ocean.For calcium carbonate, the situation is quite different. In the first place, surface waters are everywhere supersaturated with respect to both calcite and aragonite, the two common crystal forms of CaCO3. Secondly, Ca2+ and HCO3– are never limiting factors in the growth of the coccoliths (plants) and forams (animals) that precipitate CaCO3; their production depends on the availability of phosphate and nitrogen. Because these elements are efficiently recycled before they fall into the deep ocean, their supply does not depend on upwelling, and so the production of solid is more uniformly distributed over the world’s oceans.More importantly, however, the chances that a piece of carbonate skeleton will end up as sediment will be highly dependent on both the local CO32– concentration and the depth of the ocean floor. These factors give rise to small-scale variations in the production of carbonate sediments that can be quite wide-ranging.New crust is being generated and moving away from the crests of the mid-ocean ridges at a rate of a few centimetres per year. Although the crests of these ridges are relatively high points, projecting to within about 3000 m of the surface, the continual injection of new material prevents sediments from accumulating in these areas. Farther from the crests, carbonate sediments do build up, eventually reaching a depth of about 500 m, but by this time the elevation has dropped off below the saturation horizon, so from this point on the carbonate sediments are overlaid by red clay.If we drill a hole down through a part of the ocean floor that is presently below the saturation horizon, the top part of the drill core will consist of clay, followed by CaCO3 at greater depths. 
The core may also contain regions in which silica predominates. Since silica production is very high in equatorial regions, the appearance of such a layer suggests that this particular region of the oceanic crust has moved across the equator.Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual TextbookThis page titled 2.1: Water, Water Everywhere... is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Stephen Lower via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1,012
2.2: The hydrosphere and the oceans
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/02%3A_The_Hydrosphere/2.02%3A_The_hydrosphere_and_the_oceans
"How inappropriate to call this planet Earth, when clearly it is Ocean." — Arthur C. Clarke

Water is the most abundant substance at the earth's surface. Almost all of it is in the oceans, which cover 70% of the surface area of the earth. However, the amounts of water present in the atmosphere and on land (as surface runoff, lakes and streams) are great enough to make water a significant agent in transporting substances between the lithosphere and the oceans. Water interacts with both the atmosphere and the lithosphere, acquiring solutes from each, and thus provides the major chemical link between these two realms. The various transformations undergone by water through the different stages of the hydrologic cycle act to transport both dissolved and particulate substances between different geographic locations.

Table \(\PageIndex{1}\): Inventory of water on Earth, giving the volume (in 10⁶ km³) and percent of the total for each reservoir: oceans, ice caps and glaciers, deep groundwater, and shallow groundwater.

Where did this water come from? It appears to have been bound up in silicate minerals such as the micas and amphiboles which accreted to form the Earth. The heat released during this process would have been sufficient to drive off this water, which amounted to about 0.01% by mass of the primordial material.

The hydrologic cycle refers to the steady state that exists among evaporation, condensation, percolation, runoff, and circulation of water. The cycle is driven by solar energy, mainly through direct vaporization, but also by convective motion induced by uneven heating.

The major interphase transport process of the hydrologic cycle is evaporation of water from the ocean. However, 90% of this vapor falls directly back into the ocean as rain, while 10% is transported over the land. Of the latter, about two-thirds evaporates again and one-third runs off to the ocean.

Figure: The climatic hydrologic cycle at the global scale (diagram and text from the Institute for Global Environment and Society). The movement of water on the earth's surface and through the atmosphere is known as the hydrologic cycle. Water is taken up by the atmosphere from the earth's surface in vapor form through evaporation. It may then be moved from place to place by the wind until it is condensed back to its liquid phase to form clouds. Water then returns to the surface of the earth as either liquid (rain) or solid (snow, sleet, etc.) precipitation. Water transport can also take place on or below the earth's surface by flowing glaciers, rivers, and groundwater flow.

The amounts of water precipitated onto the land and oceans are in approximate proportion to their relative surface areas, but evaporation from the ocean exceeds that from the land by about 37,400 km³ per year. This difference is the amount of water transported to the oceans by river runoff.
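Since runoff is stated above to be one-third of the water precipitated onto land, the runoff figure fixes the rest of the land-side water budget. A minimal sketch of that arithmetic, using no numbers beyond those quoted in the text:

```python
# Land-side water budget implied by the figures in the text:
# river runoff = 37,400 km^3/yr, and runoff is one-third of the
# water precipitated onto land (the other two-thirds re-evaporates).

runoff = 37_400                  # km^3/yr: ocean evaporation minus ocean precipitation
land_precipitation = 3 * runoff  # km^3/yr, since runoff is one-third of land precipitation
land_evaporation = 2 * runoff    # km^3/yr, the remaining two-thirds

print(f"Precipitation onto land:   {land_precipitation:,} km^3/yr")  # 112,200
print(f"Re-evaporation from land:  {land_evaporation:,} km^3/yr")    # 74,800
print(f"River runoff to the ocean: {runoff:,} km^3/yr")              # 37,400
```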
When water condenses from the atmosphere as rain, it is slightly enriched in H₂¹⁸O; repeated cycles of evaporation and condensation therefore leave high-latitude precipitation, and hence glacial ice, depleted in ¹⁸O, so that during epochs of glacial buildup the fraction of ¹⁸O in the oceans increases. Observation of ¹⁸O/¹⁶O ratios in marine sediments is thus one way of studying the timing and extent of past glaciations. Since the degree of heavy-isotope enrichment of condensed water is temperature-dependent, this same method can be used to estimate mean world temperatures in the distant past.

The hydrologic cycle also has important effects on the energy budget of the earth. Atmospheric water vapor (along with carbon dioxide and methane) tends to absorb the long-wavelength infrared radiation emitted by the earth's surface, partially trapping the incoming shorter-wavelength energy and thus maintaining the mean surface temperature about 30 °C higher than would be the case in the absence of water vapor. Of the solar radiation incident on the atmosphere, 51% reaches the earth's surface, and about half of this (23% of the total) is used to evaporate water. During the ten days that an average water molecule resides in the atmosphere, it will travel about 1000 km. The atmospheric transport of water from equatorial to subtropical regions serves as an important mechanism for the transport of thermal energy; at latitudes of about 40°, as much as one-third of the energy input comes from release of latent heat from water vapor formed in equatorial regions.

About 97% of the earth's water is contained in the two reservoirs which comprise the oceans. The upper mixed layer contains about 5% of the total; it is separated from the deeper and colder layer by the thermocline. Mixing between these two stratified layers is very slow: of the total ocean volume of \(6.8 \times 10^{18}\) m³, only about \(0.71 \times 10^{15}\) m³, or about 0.01%, moves between the two layers per year. The mean residence time of a water molecule in the deep layer is about 1600 years.

The large-scale motions of ocean water are the primary means by which chemical substances, especially those taken up and excreted by organisms, are transported within the ocean. An understanding of the general patterns of this circulation is essential in order to analyze the observed distribution of many of the chemical elements in different parts of the ocean and in the oceanic sediments.

The circulation of the surface waters of the ocean is driven by the prevailing winds. The latter arise from uneven heating of the earth's surface, and are arranged in bands that parallel the equator. Although the motions of the waters at the surface of the ocean are driven by the winds, they do not follow them in a simple manner. The reasons are threefold: the Coriolis effect, the presence of land masses, and unevenness in the sea level due to regional differences in temperature and atmospheric pressure.

Figure: Atmospheric winds.

The most intense heat input into the atmosphere occurs near the equator, where the heated air rises and cools, producing intense local precipitation but little surface wind. After cooling and losing moisture, this air moves north and south and descends at a latitude of about 30°. As it descends, it warms (largely by adiabatic compression) and its relative humidity decreases. The extreme dryness of this air gives rise to the subtropical desert regions between about 15° and 30°. Part of this air flows back toward the equator, giving rise to the northeast and southeast trade winds; the deflection to the east or west is caused by the Coriolis effect. Another part of the descending air travels poleward, producing the prevailing westerlies. Eventually these collide with cold air masses moving away from the polar regions, producing a region of unstable air and storm activity known as a polar front. Some of this polar air picks up enough heat to rise and enter into polar cell circulation patterns.

The flow of air in the prevailing westerlies is subject to considerable turbulence which gives rise to planetary waves.
These are moving regions in which warm surface air is lifted to higher levels, producing lines of storms that travel from west to east and exchanging more air between the polar and temperate regions.

Figure: In the Northern hemisphere, the Coriolis effect not only deflects south-moving objects to the east but also causes currents flowing parallel to the equator to veer to the right of the direction of flow, i.e., to the north or south.

In addition, prevailing westerly winds and the eastward rotation of the earth cause water to pile up by a few centimeters at the western edges of the oceans. The resultant downhill flow, interacting with Coriolis forces, produces a western boundary current that runs south-to-north in the northern hemisphere. A similar but opposite effect gives rise to a south-flowing eastern boundary current on continental east coasts.

In contrast to the upper levels of the ocean, the deep ocean is stratified; the density increases with depth so as to inhibit the vertical transport of water. This stratification divides the deep oceans into several distinct water masses which undergo movement in a more or less horizontal plane, with adjacent masses sometimes moving in opposite directions.

Figure: As the cold water fills up the deepest regions and spills over ridges into other deep basins, it creates huge undersea cascades which rival the greatest terrestrial waterfalls in height and the largest rivers in volume. (For more on this, see the February 1989 Scientific American.)

The winds and atmospheric effects outlined above affect only the upper part of the ocean. Below 100 meters or so, oceanic circulation is driven by the density of the seawater, which is determined by its temperature and its salinity. Variations in these two quantities give rise to the thermohaline circulation of the deep currents of the ocean.

It all starts when seasonal ice begins to form in the polar regions. Because the salts dissolved in seawater cannot be accommodated within the ice structure, they are largely excluded from the new ice and remain in solution. This increases the density of the surrounding unfrozen water, causing it to sink toward the bottom of the ocean.

There are two major locations at which surface waters enter the deep ocean. The northern entry point is in the Norwegian Sea off Greenland; this water forms a mass known as the North Atlantic Deep Water (NADW) which flows southward across the equator.

Figure: From Don Reed, Marine Geology 130 at San Jose State University.

Most of the transport into the deep ocean takes place in the Weddell Sea off the coast of Antarctica. The highly saline water flows down the submerged Antarctic Slope to begin a 5000-year trip to the north across the bottom of the ocean. This is the major route by which dissolved CO₂ and O₂ (which are more soluble in this cold water) are transported into the deep ocean, where the water forms a mass known as the Antarctic Bottom Water (AABW) that can be traced into all three oceans.

The Pacific Ocean lacks any major identifiable direct source of cold water, so it is less differentiated and its deep circulation is sluggish and poorly defined.

Figure: Temperature and salinity profiles in a north-south section of the Atlantic Ocean.

As is apparent from the figure above, the vertical profiles of temperature and especially of the salinity are not uniform.
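The way temperature and salinity jointly set seawater density can be roughed out with a linear equation of state. This is only a sketch: the reference state and the expansion coefficients below are typical textbook magnitudes assumed for illustration, not values given in this text.

```python
# Rough linear equation of state for seawater (illustrative coefficients):
#   rho ~ rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
RHO0, T0, S0 = 1027.0, 10.0, 35.0   # kg/m^3, deg C, per-mil reference state (assumed)
ALPHA = 2.0e-4                      # 1/K thermal expansion, typical magnitude (assumed)
BETA = 8.0e-4                       # haline contraction per salinity unit (assumed)

def density(T, S):
    """Approximate seawater density in kg/m^3 near the reference state."""
    return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

# Cold, salty polar water vs. warm equatorial surface water:
print(density(T=-1.0, S=34.8))   # ~1029 kg/m^3 -- dense enough to sink
print(density(T=25.0, S=36.5))   # ~1025 kg/m^3 -- stays at the surface
```

The few-kg/m³ contrast this produces is exactly what drives the sinking of cold, salt-enriched polar water described above.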
To some extent, these two parameters have opposite effects: in equatorial regions, temperatures are higher (leading to lower density) but evaporation rates are also higher (leading to higher density). In polar regions, the formation of sea ice raises the density of the seawater (because only a small proportion of the salt is incorporated into the ice).

The nature and extent of the deep ocean currents differ in the Atlantic, Pacific, and Indian oceans. These currents are much slower than the surface currents, and in fact have not been measured directly; their existence is, however, clearly implied by the chemical composition and temperature of water samples taken from various parts of the ocean. Estimated rates are of the order of kilometers per month, in contrast to the few kilometers per hour of surface waters. The deep currents are the indirect results of processes occurring at the surface in which cold water of high salinity is produced as sea ice forms in the arctic and antarctic regions. This water is so dense that it sinks to the bottom, displacing warmer or less saline water as it moves.

Recirculation of deep water to the surface occurs to a very small extent in many regions, but it is especially pronounced where water entering the Antarctic Bottom mass displaces other bottom water, and where water piles up at the western edges of continents. This latter water flows downhill (forming the western boundary currents mentioned above) and is replaced by colder water from the deep ocean. The deep ocean contains few organisms to deplete the water of the nutrients it receives from the remains of the dead organisms floating down from above; this upwelled water is therefore exceptionally rich in nutrients, and strongly encourages the growth of new organisms that extend up the food chain to fish. Thus the wind-driven upwelling that occurs off the west coast of South America is responsible for the Peruvian fishing and guano fertilizer industry.

About every seven years these prevailing winds disappear for a while, allowing warm equatorial waters to move in. This phenomenon is known as El Niño, and it results in massive kills of plankton and fish. Decomposition of the dead organisms reduces the oxygen content of the water, causing the death of still more fish and allowing reduced compounds such as hydrogen sulfide to accumulate.

Matthias Tomczak's Introduction to physical oceanography is an excellent source of more information on the topics covered above.

This page titled 2.2: The hydrosphere and the oceans is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Stephen Lower via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.3: Chemistry and geochemistry of the oceans
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/02%3A_The_Hydrosphere/2.03%3A_Chemistry_and_geochemistry_of_the_oceans
The composition of the ocean has attracted the attention of some of the more famous names in science, including Robert Boyle, Antoine Lavoisier and Edmund Halley. Their early investigations tended to be difficult to reproduce, owing to the different conditions under which they crystallized the various salts. As many as 54 salts, double salts and hydrated salts can be obtained by evaporating seawater to dryness, and at least 73 elements are now known to be present in seawater.

The best way of characterizing seawater is in terms of its ionic content. Expressed in parts per thousand (grams per kilogram of seawater of 35‰ salinity), the major ions are approximately Cl⁻ 19.35, Na⁺ 10.76, SO₄²⁻ 2.71, Mg²⁺ 1.29, Ca²⁺ 0.41, K⁺ 0.40, and HCO₃⁻ 0.14. The remarkable thing about seawater is the constancy of this relative ionic composition. The overall salt content, known as the salinity (grams of salts contained in 1 kg of seawater), varies slightly within the range of 32–37.5‰, corresponding to an ionic strength of about 0.7 mol kg⁻¹. The ratios of the concentrations of the different ions, however, are quite constant, so that a measurement of the Cl⁻ concentration alone is sufficient to determine the overall composition and total salinity; the classical relation is S = 1.80655 × Cl, with both expressed in parts per thousand.

Although most elements are found in seawater only at trace levels, marine organisms may selectively absorb them and make them more detectable. Iodine, for example, was discovered in marine algae (seaweeds) 14 years before it was found in seawater. Other elements that were not detected in seawater until after they were found in marine organisms include barium, cobalt, copper, lead, nickel, silver and zinc. ³²Si, presumably deriving from cosmic-ray bombardment of Ar, has been discovered in marine sponges.

Reflecting this constant ionic composition is the pH balance: seawater pH is usually maintained within the narrow range of 7.8–8.2, compared with 1.5 to 11 for fresh waters. The major buffering action derives from the carbonate system, although ion exchange between Na⁺ in the water and H⁺ in clay sediments has recently been recognized to be a significant factor.

The major ionic constituents whose concentrations can be determined from the salinity are known as conservative substances. Their constant relative concentrations are due to the large amounts of these species in the oceans in comparison to their small inputs from river flow; this is another way of saying that their residence times are very large. A number of other species, mostly connected with biological activity, are subject to wide variations in concentration. These include the nutrients NO₃⁻, NO₂⁻, NH₄⁺, and HPO₄²⁻, which may become depleted near the surface in regions of warmth and light. As was explained in the preceding subsection on coastal upwelling, offshore prevailing winds tend to drive western coastal surface waters out to sea, causing deeper and more nutrient-rich water to be drawn to the surface. This upwelled water can support a large population of phytoplankton and thus of zooplankton and fish. The best-known example of this is the anchovy fishery off the coast of Peru, but the phenomenon occurs to some extent on the west coasts of most continents, including our own.

Other non-conservative components include Ca²⁺ and dissolved silica. These are incorporated into the solid parts of marine organisms, which sink to greater depths after the organisms die. The silica gradually dissolves, since the water is everywhere undersaturated in this substance. Calcium carbonate, by contrast, is supersaturated in surface waters but dissolves at greater depths, where its solubility is increased by the higher pressure.
Thus the concentrations of Ca and of dissolved silica tend to vary with depth. The gases O₂ and CO₂, being intimately involved with biological activity, are also non-conservative, as are N₂O and CO.

Most of the organic carbon in seawater is present as dissolved material, with only about 1-2% in particulates. The total organic carbon content ranges from 0.5 mg/L in deep water to 1.5 mg/L near the surface. There is still considerable disagreement about the composition of the dissolved organic matter; much of it appears to be of high molecular weight, and may be polymeric. Substances qualitatively similar to the humic acids found in soils can be isolated. The greenish color that is often associated with coastal waters is due to a mixture of fluorescent, high-molecular-weight substances of undetermined composition known as "Gelbstoffe" (German: "yellow substances").

It is likely that the significance of the organic fraction of seawater is much greater than its low abundance would suggest. For one thing, many of these substances are lipid-like and tend to adsorb onto surfaces. It has been shown that any particle entering the ocean is quickly coated with an organic surface film that may influence the rate and extent of its dissolution or decomposition. Certain inorganic ions may be strongly complexed by humic-like substances. The surface of the ocean is mostly covered with an organic film, only a few molecular layers thick. This is believed to consist of hydrocarbons, lipids, and the like, but glycoproteins and proteoglycans have also been reported. If this film is carefully removed from a container of seawater, it will quickly be reconstituted. How significant this film is in its effects on gas exchange with the atmosphere is not known.

The salinity of the ocean appears to have been about the same for at least the last 200 million years. There have been changes in the relative amounts of some species, however: the Na/K ratio has increased from about 1:1 in ancient ocean sediments to its present value of 28:1, and the incorporation of calcium into sediments by the action of marine organisms has reduced the Ca/Mg ratio from 1:1 to 1:3.

Table: Mass balance of P, C, and Ca for the oceans (rows: input to ocean; dissolved in seawater; in dead organisms; loss to sediments; residence time, y).

If the composition of the ocean has remained relatively unchanged with time, the continual addition of new mineral substances by the rivers and other sources must be exactly balanced by their removal as sediment, possibly passing through one or more biological systems in the process.

In 1715 Edmund Halley suggested that the age of the ocean (and thus presumably of the world) might be estimated from the rate of salt transport by rivers. When this measurement was actually carried out in 1899, it gave an age of only 90 million years. This is somewhat better than the calculation made in 1654 by James Ussher, the Anglican Archbishop of Armagh, Ireland, based on his interpretation of the Biblical book of Genesis, that the world was created at 9 A.M. on October 23, 4004 BC; but it is still far too recent, corresponding roughly to the era when the dinosaurs became extinct. What Halley actually described was the residence time, which is about right for Na but much too long for some of the minor elements of seawater.
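The residence-time idea behind Halley's estimate can be made concrete for chloride, whose budget is discussed in section 2.4 below. In the sketch, the river input comes from that discussion (215 Tg/yr natural, about half again as much today); the total ocean mass and mean Cl⁻ content are standard values not given in this text.

```python
# Residence time = stock in the ocean / input flux, applied to chloride.
OCEAN_MASS_KG = 1.4e21           # total mass of the oceans, standard value (assumed)
CL_G_PER_KG = 19.35              # mean Cl- content of 35-permil seawater (standard)
RIVER_INPUT_TG_YR = 215 * 1.5    # text: 215 Tg/yr natural, ~half again as much today

stock_tg = OCEAN_MASS_KG * CL_G_PER_KG / 1e12   # kg * g/kg = g; 1 Tg = 1e12 g
tau_years = stock_tg / RIVER_INPUT_TG_YR
print(f"Cl- residence time: about {tau_years/1e6:.0f} million years")   # ~84 My
```

The result, roughly 84 million years, is essentially the 87-million-year replacement time quoted in section 2.4, and it is also the order of magnitude of Halley's 90-million-year "age of the ocean."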
The commonly stated view that the salt content of the oceans derives from surface runoff containing the products of weathering and soil leaching is not consistent with the known compositions of the major river waters (see Table). The halide ions in particular are over-represented in seawater, compared to fresh water. These were once referred to as "excess volatiles", and were attributed to volcanic emissions. With the discovery of plate tectonics, it became apparent that the locations of seafloor spreading at which fresh basalt flows up into the ocean from the mantle are also sources of mineral-laden water. Some of this may be seawater that has cycled through a hot porous region and has been able to dissolve some of the mineral material owing to the high temperature. Much of the water, however, is "juvenile" water that was previously incorporated into the mantle material and has never before been in the liquid phase. The substances introduced by this means (and by volcanic activity) are just the elements that are "missing" from river waters. Estimates of what fraction of the total volume of the oceans is due to juvenile water (most of it added in the early stages of mantle differentiation, which began more than four billion years ago) range from 30 to 90%.

The oceans can be regarded as the product of a giant acid-base titration in which the carbonic acid present in rain reacts with the basic materials of the lithosphere. The juvenile water introduced at locations of ocean-floor spreading is also acidic, and is partly neutralized by the basic components of the basalt with which it reacts. Surface rocks mostly contain aluminum, silicon and oxygen combined with alkali and alkaline-earth metals, mainly potassium, sodium and calcium. The CO₂ and volcanic gases in rainwater react with this material to form a solution of the metal ion and HCO₃⁻, in which is suspended some hydrated SiO₂. The solid material left behind is a clay such as kaolinite, Al₂Si₂O₅(OH)₄. This first forms as a friable coating on the surface of the weathered rock; later it becomes a soil material, then an alluvial deposit, and finally it may reach the sea as a suspended sediment. Here it may undergo a number of poorly understood transformations to other clay sediments such as illites. Seafloor spreading eventually transports these sediments to a subduction region under a continental block, where the high temperatures and pressures permit reactions that transform them into hard rock such as granite, thus completing the geochemical cycle.

Deep-sea hydrothermal vents are now recognized to be another significant route for both the addition and removal of ionic substances from seawater.
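As a concrete instance of the acid-base weathering just described, the attack of CO₂-charged water on a feldspar such as albite yields exactly the products named above: a dissolved cation, bicarbonate, silicic acid, and kaolinite. (This particular reaction is a standard textbook example; the text itself does not single out albite.)

\[2\,NaAlSi_3O_8 + 2\,CO_2 + 11\,H_2O \rightarrow Al_2Si_2O_5(OH)_4 + 2\,Na^+ + 2\,HCO_3^- + 4\,H_4SiO_4\]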
Eventually, however, the nutrient elements that were incorporated into organisms in the upper part of the ocean will end up in the colder, dark, and essentially lifeless lower part.Mixing between the upper and lower reservoirs of the ocean is quite slow, owing to the higher density of the colder water; the average residence time of a water molecule in the lower reservoir is about 1600 years. Since the volume of the upper reservoir is only about 1/20 of that of the lower, a water molecule stays in the upper reservoir for only about 80 years.Except for dissolved oxygen, all elements required by living organisms are depleted in the upper part of the ocean with respect to the lower part. In the case of the major nutrients P, N and Si, the degree of depletion is sufficiently complete (around 95%) to limit the growth of organisms at the surface. These three elements are said to be biolimiting. A few other biointermediate elements show partial depletion in surface waters: Ca (1%), C (15%), Ba (75%).The organic component of plants and animals has the average composition C80N15P . It is remarkable that the ratio of N:P in seawater (both surface and deep) is also 15:1; this raises the interesting question of to what extent the ocean and life have co-evolved.In the deep part of the ocean the elemental ratio corresponds to C800N15P, but of course with much larger absolute amounts of these elements. Eventually some of this deeper water returns to the surface where the N and P are quickly taken up by plants. But since plants can only utilize 80 out of every 800 carbon atoms, 90 percent of the carbon will remain in dissolved form, mostly as HCO3–.To work out the balance of Ca and Si used in the hard parts of organisms, we add these elements to the average composition of the lower reservior to get Ca3200Si50C800N15P. Particulate carbon falls into the deep ocean in the ratio of about two atoms in organic tissue to one atom in the form of calcite. This makes the overall composition of detrital material something like C120N15P; i.e., 80 organic C’s and 40 in CaCO3. Accompanying these 40 calcite units will be 40 Ca atoms, but this represents a minor depletion of the 3200 Ca atoms that eventually return to the surface, so this element is only slightly depleted in the upper waters. Silicon, being far less abundant, is depleted to a much greater extent.A continual rain of particulate material from dead organisms falls through the ocean. This shower is comprised of three major kinds of material: calcite (CaCO3), silica (SiO2), and organic matter. The first two come from the hard parts of both plants and animals (mainly microscopic animals such as foraminifera and radiolarians). The organic matter is derived mainly from the soft tissues of organisms, and from animal fecal material. Some of this solid material dissolves before it reaches the ocean floor, but not usually before it enters the deep ocean where it will remain for about 1600 years.The remainder of this material settles onto the floor of the sea, where it forms one component of a layer of sediments that provide important information about the evolution of the sea and of the earth. Over a short time scale of months to years, these sediments are in quasi-equilibrium with the seawater. 
On a scale of millions of years, the sediments are merely way-stations in the geochemical cycling of material between the earth’s surface and its interior.The oceanic sediments have three main origins:Our main interest lies with the silica and calcium carbonate, since these substances form a crucial part of the biogeological cycle. Also, their distributions in the ocean are not uniform- a fact that must tell us something.The skeletons of diatoms and radiolarians are the principal sources of silica sediments. Since the ocean is everywhere undersaturated with respect to silica, only the most resistant parts of these skeletons reach the bottom of the deep ocean and get incorporated into sediments. Silica sediments are less common in the Atlantic ocean, owing to the lower content of dissolved silica.The parts of the ocean where these sediments are increasing most rapidly correspond to regions of upwelling, where deep water that is rich in dissolved silica rises to the surface where the silica is rapidly fixed by organisms. Where upwelling is absent, the growth of the organisms is limited, and little silica is precipitated. Since deep waters tend to flow from the Atlantic into the Pacific ocean where most of the upwelling occurs, Atlantic waters are depleted in silica, and silica sediments are not commonly found in this ocean.For calcium carbonate, the situation is quite different. In the first place, surface waters are everywhere supersaturated with respect to both calcite and aragonite, the two common crystal forms of CaCO3. Secondly, Ca2+ and HCO3– are never limiting factors in the growth of the coccoliths (plants) and forams (animals) that precipitate CaCO3; their production depends on the availability of phosphate and nitrogen. Because these elements are efficiently recycled before they fall into the deep ocean, their supply does not depend on upwelling, and so the production of solid is more uniformly distributed over the world’s oceans.More importantly, however, the chances that a piece of carbonate skeleton will end up as sediment will be highly dependent on both the local CO32– concentration and the depth of the ocean floor. These factors give rise to small-scale variations in the production of carbonate sediments that can be quite wide-ranging.New crust is being generated and moving away from the crests of the mid-ocean ridges at a rate of a few centimetres per year. Although the crests of these ridges are relatively high points, projecting to within about 3000 m of the surface, the continual injection of new material prevents sediments from accumulating in these areas. Farther from the crests, carbonate sediments do build up, eventually reaching a depth of about 500 m, but by this time the elevation has dropped off below the saturation horizon, so from this point on the carbonate sediments are overlaid by red clay.If we drill a hole down through a part of the ocean floor that is presently below the saturation horizon, the top part of the drill core will consist of clay, followed by CaCO3 at greater depths. The core may also contain regions in which silica predominates. Since silica production is very high in equatorial regions, the appearance of such a layer suggests that this particular region of the oceanic crust has moved across the equator.Page last modified: 21.01.2008© 1998 by Stephen LowerFor information about this Web site or to contact the author, please see the Chem1 Virtual Textbook home page. 
This page titled 2.3: Chemistry and geochemistry of the oceans is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Stephen Lower via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.4: Chemical budgets of oceanic elements
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/02%3A_The_Hydrosphere/2.04%3A_Chemical_budgets_of_oceanic_elements
To the extent that the composition of the ocean remains constant, the rate at which any one element is introduced into seawater must equal the rate of its removal. A listing of the various routes of addition and removal, together with the estimated rate of each process, constitutes the budget for a given element. If that budget is greatly out of balance and no other transport routes are apparent, then it is likely that the ocean is not in a steady state with respect to that element, at least on a short time scale. It is important to understand, however, that short-term deviations from constant composition are not necessarily inconsistent with a long-term steady state. Deviations from the latter condition are most commonly inferred from geological evidence.

The major input of elements to the oceans is river water; groundwater seepage constitutes a very small secondary source. These were considered the only sources until the 1970s, when the existence of hydrothermal springs at sites of seafloor spreading became known. There are presently no reliable estimates of the magnitude of this source. Pollution represents an additional input, mainly dissolved in river water, but also sometimes in rain and by dry deposition.

Routes of removal are the formation and burial of sediments (with material carried down either in interstitial water or adsorbed onto active surfaces), the formation of evaporite deposits, and direct input to the atmosphere by the sea-salt aerosol transfer associated with bubble-breaking. Reaction with newly formed basalt associated with undersea volcanic activity appears to be an important removal mechanism for some elements.

The major elements undergoing steady-state dynamic change in the ocean are connected with biological processes. The key limiting element in the development of oceanic biomass is phosphorus, in the form of the phosphate ion. For terrestrial plant life, nitrogen, taken up in the form of the nitrate ion, is more commonly the limiting element. In the ocean, however, the ratio of the nitrate ion concentration to that of phosphorus has been found to be everywhere the same; this implies that the concentration of one controls that of the other. The source of nitrate ion is atmospheric N₂, which is freely soluble in water and is thus always present in abundance. The conversion of N₂ to NO₃⁻ is presumed to be biologically mediated, probably by bacteria. The constancy of the NO₃⁻/P concentration ratio implies that the phosphorus concentration controls the activity of the nitrogen-fixing organisms, and thus the availability of nitrogen to oceanic life.

Photosynthetic activity in the upper part of the ocean causes inorganic phosphate to be incorporated into biomass, reducing the concentration of phosphorus; in warm surface waters, phosphate may become totally depleted. A given phosphorus atom may be traded several times among the plant, animal and bacterial populations before it eventually finds itself in biodebris (a dead organism or a fecal pellet) that falls into the deep part of the sea. Only about 1% of the phosphorus atoms that descend into deeper waters actually reach the bottom, where they are incorporated into sediments and permanently removed from circulation. The other 99% are released in the form of soluble phosphate, which is eventually brought to the surface in regions of upwelling. An average phosphorus atom will undergo one cycle of this circulation in about 1000 years; only a few months of this cycle will be spent in biomass.
After an average of 100 such cycles, the atom will be removed from circulation and locked into the bottom sediment, and a new one will have entered the sea with river or juvenile water.

Phosphorus is unique in that its major source of input to the oceans derives ultimately from pollution; in the long term, this represents a transfer of land-based phosphate deposits to the oceans. About half of the phosphorus input is in the form of suspended material, both organic and inorganic, the latter being in a variety of forms including phosphates adsorbed onto clays and iron oxide particles, and calcium phosphate (apatite) eroded from rocks. These various particulates are known to dissolve to some extent once they reach the ocean, but there is considerable uncertainty about the rates of these processes under various conditions.

The major sink for oceanic phosphorus is burial with organic matter; this accounts for about two-thirds of the phosphorus removed. Most of the remainder is due to deposition with CaCO₃. A minor removal route is through reaction with the Fe(II) formed when seawater attacks hot basalt, and in the formation of evaporite deposits. However, there is more phosphorus in evaporite deposits in the western U.S. than in all of the ocean, so it is apparent that the long-term phosphorus budget is still not clearly understood.

Carbon enters the ocean from both the atmosphere (as CO₂) and river water, in which the principal species is HCO₃⁻. Once in solution, the carbonate species are in equilibrium with each other and with H₃O⁺, and the concentrations of all of these are influenced by the partial pressure of atmospheric CO₂. The mass budget for calcium is linked to that of carbon through solubility equilibria with the various solid forms of CaCO₃ (mainly calcite). During photosynthesis, ¹²C is taken up slightly more readily than the rare isotope ¹³C. Since the rate of photosynthesis is controlled by the phosphate concentration, the ¹³C/¹²C ratio in the dissolved carbon dioxide of surface waters is slightly higher than in the ocean as a whole. Observations of carbon isotope ratios in buried sediments have therefore been useful in tracking historical changes in phosphate concentrations.

The ratio of carbon to phosphorus in sea salt is about eight times greater than the same ratio measured in organic debris. This implies that in exhausting the available phosphate, the living organisms in the upper ocean consume only 12.5% of the dissolved carbon. Even this relatively small withdrawal of carbon from the carbonate system is sufficient to noticeably reduce the partial pressure of gaseous CO₂ in equilibrium with the ocean; it has been estimated that if all life in the ocean should suddenly cease, the atmospheric CO₂ content would rise to about three times its present level. The regulation of atmospheric CO₂ pressure by the oceans also works the other way: since the amount of dissolved carbonate in the oceans is so much greater than the amount of CO₂ in the atmosphere, the oceans act to buffer the effects of additions of CO₂ to the atmosphere. Calculations indicate that about half of the CO₂ that has been produced by burning fossil fuel since the Industrial Revolution has ended up in the oceans.
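The phosphorus bookkeeping above is a textbook geometric-survival calculation: if only 1% of descending P atoms reach the sediment per cycle, an atom survives an average of 1/0.01 = 100 cycles, and at roughly 1000 years per cycle its oceanic lifetime is about 10⁵ years. A minimal sketch (the expected-cycles formula is standard probability, not from the text), which also checks the 12.5% carbon-drawdown figure:

```python
# Phosphorus cycling figures from the text, tied together.
burial_prob = 0.01        # ~1% of descending P atoms reach the sediment (text)
years_per_cycle = 1000    # one surface-to-depth-and-back circuit (text)

mean_cycles = 1 / burial_prob               # geometric distribution -> 100 cycles
p_lifetime = mean_cycles * years_per_cycle  # ~100,000 years in the ocean

# Carbon drawdown: C:P in sea salt is ~8x the C:P of organic debris,
# so exhausting the phosphate consumes only 1/8 of the dissolved carbon.
carbon_used = 1 / 8                         # = 12.5%, as stated

print(f"Mean number of cycles before burial: {mean_cycles:.0f}")
print(f"Oceanic lifetime of a P atom: ~{p_lifetime:,.0f} years")
print(f"Fraction of dissolved C consumed: {carbon_used:.1%}")
```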
Bicarbonate ion, HCO₃⁻, is of course the major carbonate species in the ocean. Although it is interconvertible with CO₂ and is thus coupled to the carbon and photosynthetic cycles, HCO₃⁻ itself can neither be taken up nor produced by organisms, and so it can be treated somewhat independently of biological activity. In this sense the only major input of HCO₃⁻ into the oceans is river water. The two removal mechanisms are the formation of CO₂

\[H^+ + HCO_3^- \rightarrow H_2O + CO_2\]

and the (biologically mediated) formation of \(CaCO_3\):

\[Ca^{2+} + HCO_3^- \rightarrow CaCO_3 + H^+\]

Since the pH of the oceans does not change, \(H^+\) is conserved, and the removal of \(HCO_3^-\) by biogenic secretion of \(CaCO_3\) can be expressed by the sum of these reactions:

\[Ca^{2+} + 2\,HCO_3^- \rightarrow CaCO_3 + CO_2 + H_2O\]

whose reverse direction represents the dissolution of the skeletal remains of dead organisms as they fall to lower depths.

The upper parts of the ocean tend to be supersaturated in CaCO₃. Solid CaCO₃, in the form of calcite, is manufactured by a large variety of organisms such as foraminifera, and a constant rain of calcite falls through the ocean as these organisms die. The solubility of CaCO₃ increases with pressure, so only that portion of the calcite that falls onto shallow regions of the ocean floor is incorporated into sediments and removed from circulation; the remainder dissolves after sinking past a depth known as the lysocline. At the present time, the amounts of carbonate and Ca²⁺ supplied by erosion and volcanism appear to be only about one-third as great as the amount of calcite produced by organisms. As the carbonate concentration in a given region of the ocean becomes depleted owing to higher calcite production, the lysocline moves up, tending to replenish the carbonate and reducing the amount that is withdrawn by burial in sediments. Organic residues that fall into the deep sea are mostly oxidized to CO₂, presumably by bacterial activity.

Calcium is removed from seawater solely by biodeposition as CaCO₃, a process whose rate can be determined quite accurately both at present and in the past.

Table \(\PageIndex{1}\): Present-day budget for oceanic calcium (Tg Ca/yr), listing inputs and outputs.

As explained above, the upper part of the ocean is supersaturated in calcite but the lower ocean is not. For this reason, less than 20% of the CaCO₃ that is formed ends up as sediment and is eventually buried.

The main questions about the calcium budget tend to focus on the rates and locales at which dissolution of skeletal carbonates occurs, and on how to interpret the various kinds of existing carbonate sediments. For example, the crystalline form aragonite is less stable than calcite, and should presumably dissolve at a shallower depth. The absence of aragonite-containing pteropod shells in deeper deposits seems to confirm this, but in the absence of rate data it is difficult to know at what elevations these particular organisms originated.

The data in the Table indicate that at the present time there is a net removal of calcium from the oceans. This is due to the rise in sea level since the decline of the most recent glacial epoch during the past 11,000 years. The additional water has covered the continental shelves, increasing the amount of shallow ocean where the growth of organisms is most intense. Over the more distant past (25 million years) the calcium budget appears to be well balanced.

Evidence from geology and paleontology indicates that the salinity, and hence the chloride concentration, of seawater has been quite constant for about 600 million years. There have been periods when climatic conditions and coastal topography have led to episodes of evaporite formation, but these have evidently been largely compensated by the eventual return of the evaporite deposits to the sea.
The natural input of chloride from rivers is about 215 Tg/yr, but the present input is about half again as great (Table 7 on page 33), owing to pollution. Also, there are presently no significant areas where seawater is evaporating to dryness. Thus the oceanic chloride budget is considerably out of balance. However, the replacement time of Cl⁻ in the oceans is so long (87 million years) that this will probably have no long-term effect.

Although sodium is tied to chloride, it is also involved in the formation of silicate minerals, the weathering of rocks, and cation exchange with clay sediments. Its short-term budget is quite out of balance for the same reasons as that of chloride. On a longer time scale, removal of sodium by reaction with the hot basalt associated with undersea volcanic activity may be of importance.

Considerably more sulfate is being added to seawater than is being removed by the major mechanisms of sediment formation (mainly CaSO₄ and pyrites). The natural river input is 82 Tg of S per year, while that due to pollution is 61 Tg/yr from rivers and 17 Tg/yr from rain and dry deposition.

Magnesium is unusual in that its river-water input is balanced mostly by reaction with volcanic basalt; removal through biogenic formation of magnesian calcite (dolomite) accounts for only 11% of its total removal from the ocean. The present-day magnesium budget seems to have been balanced for the past 100 million years. However, most of the extensive dolomite deposits were formed prior to this time, so the longer-term magnesium budget is poorly understood.

The potassium budget of the ocean is not well understood. The element is unusual in that only about 60% of its input is by rivers; the remainder is believed to come from newly formed undersea basalt. The big question about potassium is how it is removed; fixation by ion exchange with illite clays seems to be a major mechanism, and its uptake by basalt (at lower temperatures than are required for its release) is also believed to occur.

About 85% of the silicon input to the oceans comes from river water in the form of silicic acid, H₄SiO₄. The remainder probably comes from basalt. It is removed by biogenic deposition as opaline silica, SiO₂·nH₂O, produced mainly by planktonic organisms (radiolaria and diatoms). Unlike the case for CaCO₃, the ocean is everywhere undersaturated in silica, especially near the surface, where these organisms deplete it with greater efficiency than any other element. Because opaline silica dissolves so rapidly, only a small fraction makes it to the bottom. The major deposits occur in shallower waters where coastal upwelling provides a good supply of N and P nutrients for siliceous organisms; thus over half of the biogenic silica deposits are found in the Antarctic ocean. In spite of the fact that dissolved silica has the shortest replacement time (21,000 years) of any major element in the ocean, its concentration appears to have been remarkably constant over geological time. This is taken as an indication of the ability of siliceous organisms to respond quickly to changes in local concentrations of dissolved silica.

Nitrogen is complicated by its biologically mediated exchange with atmospheric N₂, and by its existence in several oxidation states, all of which are interconvertible. Unlike the other major elements, nitrogen does not form extensive sedimentary deposits; most of the nitrogen present in dead organic material seems to be removed before it can be buried.
Through this mechanism there is extensive cycling of nitrogen between the shallow and deep parts of the oceans. The real difficulty in constructing a budget for oceanic nitrogen is the very large uncertainty in the rates of the major input (fixation) and output. Both of these processes are biologically mediated, but little is known about what organisms are responsible, where they thrive, and how they are affected by local nutrient supply and other conditions.

Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual Textbook

This page titled 2.4: Chemical budgets of oceanic elements is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Stephen Lower via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.1: Structure and Composition of the Atmosphere
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/03%3A_The_Atmosphere/3.01%3A_Structure_and_Composition_of_the_Atmosphere
Life as we know it on the Earth is entirely dependent on the tenuous layer of gas that clings to the surface of the globe, adding about 1% to its diameter and an insignificant amount to its total mass. And yet the atmosphere serves as the earth's window and protective shield, as a medium for the transport of heat and water, and as source and sink for the exchange of carbon, oxygen, and nitrogen with the biosphere. The atmosphere acts as a compressible fluid tied to the earth by gravitation; as a receptor of solar energy and a thermal reservoir, it constitutes the working fluid of a heat engine that transports and redistributes matter and energy over the entire globe. The atmosphere is also a major temporary repository of a number of chemical elements that move in a cyclic manner between the hydrosphere, atmosphere, and the upper lithosphere. Finally, the atmosphere is a site for a large variety of complex photochemically initiated reactions involving both natural and anthropogenic substances.

On the scale of cubic meters the air is a homogeneous mixture of its constituent gases, but on a larger scale the atmosphere is anything but uniform. Variations of temperature, pressure, and moisture content in the layers of air near the earth's surface give rise to the dynamic effects we know as the weather.

Although the density of the atmosphere decreases without limit with increasing height, for most practical purposes one can roughly place its upper boundary at about 500 km. However, half the mass of the atmosphere lies within 5 km, and 99.99% within 80 km, of the surface. The average atmospheric pressure at sea level is \(1.01 \times 10^5\) pascals, or 1010 millibars. A 1-cm² cross section of the earth's surface supports a column weighing 1030 g; the total mass of the atmosphere is about \(5.27 \times 10^{21}\) g. About 80% of the mass of the atmosphere resides in the first 10 km; this well-mixed region of fairly uniform composition is known as the troposphere.

Figure: Solar irradiation of the Earth. The gases ozone, water vapor, and carbon dioxide are only minor components of the atmosphere, but they exert a huge effect on the Earth by absorbing radiation in the ranges indicated by the shading. Ozone in the upper atmosphere filters out the ultraviolet light below about 320 nm that is destructive of life. O₃, H₂O, CO₂ and CH₄ are "greenhouse" gases that trap some of the heat absorbed from the Sun and prevent it from re-radiating back into space.

We commonly think of gas molecules as moving about in a completely random manner, but the Earth's gravitational field causes downward motions to be very slightly favored, so that the molecules in any thin layer of the air collide more frequently with those in the layer below. This gives rise to a pressure gradient that is the most predictable and well-known structural characteristic of the atmosphere. This gradient is described by an exponential law which predicts that the atmospheric pressure should decrease by 50% for every 6 km increase in altitude. This law also predicts that the composition of a gas mixture will change with altitude, the lower-molecular-weight components being increasingly favored at higher altitudes. However, this gravitational fractionation effect is completely obliterated below about 160 km owing to turbulence and convective flows (winds).
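The 50%-per-6-km rule corresponds to an exponential profile \(P(z) = P_0\, e^{-z/H}\) with a scale height \(H = 6\,\mathrm{km}/\ln 2 \approx 8.7\) km. A minimal sketch of that relation; the functional form is the standard barometric formula, and only the 6-km half-height and the ~1010 mb sea-level pressure come from the text:

```python
import math

# Barometric pressure profile implied by "pressure halves every 6 km".
P0 = 1010.0                       # sea-level pressure, millibars (text)
HALF_HEIGHT_KM = 6.0              # interval over which P drops by half (text)
H = HALF_HEIGHT_KM / math.log(2)  # e-folding scale height, ~8.66 km

def pressure(z_km):
    """Pressure in millibars at altitude z_km, assuming an isothermal column."""
    return P0 * math.exp(-z_km / H)

for z in (0, 6, 12, 30):
    print(f"{z:>2} km: {pressure(z):7.1f} mb")   # 6 km -> ~505 mb, 30 km -> ~32 mb
```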
The major source of heat input into the troposphere is long-wave radiation from the earth's surface, while the major loss is radiation into space. At higher elevations the temperature begins to rise with altitude as we move into a region in which heat is produced by exothermic chemical reactions, mainly the decomposition of ozone that is formed photochemically from dioxygen in the stratosphere. At still higher elevations the ozone thins out and the temperature begins to drop; this is the mesosphere, which is finally replaced by the thermosphere, which consists largely of a plasma (gaseous ions). This outer section of the atmosphere, which extends indefinitely to perhaps 2000 km, is heated by absorption of intense ultraviolet radiation from the Sun and also by the solar wind, a continual rain of electrons, protons, and other particles emitted from the Sun's surface.

Figure: Structure of the atmosphere. The main divisions of the atmosphere are defined by the elevations at which the sign of the temperature gradient changes. The chemical formulas at the right show the major species of interest in the various regions. The shaded D- and E-layers are regions of high ion concentrations that reflect radio waves and are important in long-distance communication.

Except for water vapor, whose atmospheric abundance varies from practically zero up to 4%, the fractions of the major atmospheric components N2, O2, and Ar are remarkably uniform below about 100 km. At greater heights, diffusion becomes the principal transport process, and the lighter gases become relatively more abundant. In addition, photochemical processes result in the formation of new species whose high reactivities would preclude their existence in significant concentrations at the higher pressures found at lower elevations.

The atmospheric gases fall into three abundance categories: major, minor, and trace. Nitrogen, the most abundant component, has accumulated over time as a result of its geochemical inertness; a very small fraction of it passes into the other phases as a result of biological activity and natural fixation by lightning. It is believed that denitrifying bacteria in marine sediments may provide the major route for the return of N2 to the atmosphere. Oxygen is almost entirely of biological origin, and cycles through the hydrosphere, the biosphere, and sedimentary rocks. Argon consists mainly of Ar-40, which is a decay product of K-40 in the mantle and crust.

The most abundant of the minor gases aside from water vapor is carbon dioxide, about which more will be said below. Next in abundance are neon and helium. Helium is a decay product of radioactive elements in the earth, but neon and the other inert gases are primordial, and have probably been present in their present relative abundances since the earth's formation. Two of the minor gases, ozone and carbon monoxide, have abundances that vary with time and location. A variable abundance implies an imbalance between the rates of formation and removal. In the case of carbon monoxide, whose major source is anthropogenic (a small amount is produced by biological action), the variation is probably due largely to localized differences in fuel consumption, particularly in internal combustion engines. The nature of the carbon monoxide sink (removal mechanism) is not entirely clear; it may be partly microbial. Ozone is formed by the reaction of O2 with oxygen atoms produced photochemically.
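The photochemical production of ozone just mentioned follows the well-known initial steps of the Chapman mechanism; a minimal sketch, where M denotes any third body (such as N2) that carries away excess energy and hν an ultraviolet photon:

\[\mathrm{O_2} + h\nu \rightarrow 2\,\mathrm{O}\]
\[\mathrm{O} + \mathrm{O_2} + \mathrm{M} \rightarrow \mathrm{O_3} + \mathrm{M}\]

Reactions of this kind, together with the subsequent photolytic destruction of ozone, are the source of the heating of the stratosphere described earlier.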
As a consequence, the abundance of ozone varies with the time of day, the concentration of O atoms from other sources (photochemical smog, for example), and particularly with altitude; at 30 km, the ozone concentration reaches a maximum of 12 ppm.

The concentration of atmospheric carbon dioxide, while fairly uniform globally, is increasing at a rate of 0.2–0.7% per year as a result of fossil fuel burning. The present CO2 content of the atmosphere is about \(2.4 \times 10^{18}\) g. Most of the CO2, however, is of natural origin, and represents the smallest part of the total carbonate reservoir that includes oceanic CO2, HCO3–, and carbonate sediments. The latter contain about 600 times as much CO2 as the atmosphere, and the oceans contain about 50 times as much. These relative amounts are controlled by the rates of the reactions that interconvert the various forms of carbonate.

The surface conditions on the earth are sensitively dependent on the atmospheric CO2 concentration. This is due mainly to the strong infrared absorption of CO2, which promotes the absorption and trapping of solar heat (see below). Since CO2 acts as an acid in aqueous solution, the pH of the oceans is also dependent on the concentration of CO2 in the atmosphere; it has been estimated that if only 1% of the carbonate presently in sediments were still in the atmosphere, the pH of the oceans would be 5.9, instead of the present 8.2.

The amount of energy (the solar flux) impinging on the outer part of the atmosphere is 1367 \(\mathrm{W\ m^{-2}}\). About 30% of this is reflected or scattered back into space by clouds, dust, and the atmospheric gas molecules themselves, and by the earth's surface. About 19% of the radiation is absorbed by clouds or the atmosphere (mainly by H2O and O3, but not CO2), leaving 51% of the incident energy available for absorption by the earth's surface. If one takes into account the uneven illumination of the earth's surface and the small flux of internal heat to the surface, the assumption of thermal equilibrium requires that the earth emit about 240 \(\mathrm{W\ m^{-2}}\). This corresponds to the power that would be emitted by a black body at 255 K, or –18°C, which is the average temperature of the atmosphere at an altitude of 5 km (this figure is verified in the worked example below). The observed mean global surface temperature of the earth is 13°C, and is presumably the temperature required to maintain thermal equilibrium between the earth and the atmosphere.

The energy radiated by the earth has a longer wavelength (maximum about 12 μm) than the incident radiation. Several atmospheric gases absorb radiation in this range quite efficiently, including some, such as CO2 and N2O, that do not absorb the incident radiation. The energy absorbed by atmospheric gases is re-radiated in all directions; some of it therefore escapes into space, but a portion returns to the earth and is reabsorbed, thus raising its temperature. This is commonly called the greenhouse effect. If the amount of an infrared-absorbing gas such as carbon dioxide increases, a larger fraction of the incident solar radiation is trapped, and the mean temperature of the earth will increase.

Any significant increase in the temperature of the oceans would increase the atmospheric concentrations of both water and CO2, producing the possibility of a runaway process that would be catastrophic from a human perspective. Fossil fuel combustion and deforestation during the last two hundred years have increased the atmospheric CO2 concentration by 25%, and this increase is continuing.
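The 255 K figure quoted above follows directly from the Stefan–Boltzmann law applied to the emitted flux; a quick check using the solar constant and 30% albedo given above, where \(\sigma = 5.67 \times 10^{-8}\ \mathrm{W\ m^{-2}\ K^{-4}}\) is the Stefan–Boltzmann constant:

\[F = \frac{(1 - 0.30)(1367\ \mathrm{W\ m^{-2}})}{4} \approx 239\ \mathrm{W\ m^{-2}}, \qquad T = \left(\frac{F}{\sigma}\right)^{1/4} \approx 255\ \mathrm{K}\]

The factor of 4 is the ratio of the earth's total surface area to the cross-sectional area it presents to the sun.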
The same combustion processes responsible for the increasing atmospheric CO2 concentration also introduce considerable quantities of particulate materials into the upper atmosphere. The effect of these would be to scatter more of the incoming solar radiation, reducing the amount that reaches and heats the earth's surface. The extent to which this process counteracts the greenhouse effect is still a matter of controversy; all that is known for sure is that the average temperature of the Earth is increasing.

Carbon dioxide is not the only atmospheric gas of anthropogenic origin that can affect the heat balance of the earth; other examples are SO2 and N2O. Nitrous oxide is of particular interest, since its abundance is fairly high and is increasing at a rate of about 0.5% per year. It is produced mainly by bacteria, and much of the increase is probably connected with the introduction of increased nitrate into the environment through agricultural fertilization and sewage disposal. Besides being a strong infrared absorber, N2O is photochemically active, and can react with ozone. Any significant depletion of the ozone content of the upper atmosphere would permit more ultraviolet radiation to reach the earth. This would have numerous deleterious effects on present life forms, as well as contributing to a temperature increase. The warming effect attributed to anthropogenic additions of greenhouse gases to the atmosphere is estimated to be about 2 \(\mathrm{W\ m^{-2}}\), or about 1.5% of the 150 \(\mathrm{W\ m^{-2}}\) trapped by clouds and atmospheric gases. This is a relatively large perturbation compared to the maximum variation in solar output of 0.5 \(\mathrm{W\ m^{-2}}\) that has been observed during the past century. Continuation of greenhouse gas emission at present levels for another century could increase the atmospheric warming effect by 6–8 \(\mathrm{W\ m^{-2}}\).

A less-appreciated side effect of the increase in atmospheric carbon dioxide (and of other plant nutrients such as nitrates) may be a reduction in plant species diversity, by selectively encouraging the growth of species which are ordinarily held in check by other species that are able to grow well with fewer nutrients. This effect, for which there is already some evidence, could be especially pronounced when the competing species utilize the C3 and C4 photosynthetic pathways, which differ in their sensitivity to CO2.
3.2: Origin and Evolution of the Atmosphere
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/03%3A_The_Atmosphere/3.02%3A_Origin_and_Evolution_of_the_Atmosphere
The atmosphere of the Earth (and also of Venus and Mars) is generally believed to have its origin in relatively volatile compounds that were incorporated into the solids from which these planets accreted. Such compounds could include nitrides (a source of N2), water (which can be taken up by silica, for example), carbides, and hydrogen compounds of nitrogen and carbon. Many of these compounds (and also some noble gases) can form clathrate complexes with water and some minerals which are fairly stable at low temperatures. The high temperatures developed during the later stages of accretion, as well as subsequent heating produced by decay of radioactive elements, presumably released these gases to the surface. Even at the present time, large amounts of CO2, water vapor, N2, HCl, SO2 and H2S are emitted from volcanos. The more reactive of these gases would be selectively removed from the atmosphere by reaction with surface rocks or dissolution in the ocean, leaving an atmosphere enriched in its present major components, with the exception of oxygen, which is discussed in the next section.

Any hydrogen present would tend to escape into space, causing the atmosphere to gradually become less reducing. However, there is now some doubt that hydrogen and other volatiles (mainly the inert gases) were present in the newly accreted planets in anything like their cosmic abundance. The main evidence against this is the observation that gases such as helium, neon and argon, which are among the ten most abundant elements in the universe, are depleted on the earth by factors of \(10^{-7}\) to \(10^{-11}\). This implies that there was a selective removal of these volatiles prior to or during the planetary accretion process.

The overall oxidation state of the earth's mantle is not consistent with what one would expect from equilibration with highly reduced volatiles, and there is no evidence to suggest that the composition of the mantle has changed over geological time. If this is correct, then the primitive atmosphere may well have had about the same composition as the gases emitted by volcanos at the present time. These consist mainly of water and CO2, together with small amounts of N2, H2, H2S, SO2, CO, CH4, NH3, HCl and HF. If water vapor was a major component of outgassing of the accreted earth, it must have condensed quite rapidly into rain; any significant concentration of water vapor in the atmosphere would have led to a runaway greenhouse effect, resulting in temperatures as high as 400°C.

Free oxygen is never more than a trace component of most planetary atmospheres. Thermodynamically, oxygen is much happier when combined with other elements as oxides; the pressure of O2 in equilibrium with basaltic magmas is only about \(10^{-7}\) atm. Photochemical decomposition of gaseous oxides in the upper atmosphere is the major source of O2 on most planets. On Venus, for example, CO2 is broken down into CO and O2. On the earth, the major inorganic source of O2 is the photolysis of water vapor; most of the resulting hydrogen escapes into space, allowing the O2 concentration to build up. An estimated \(2 \times 10^{11}\) g of O2 per year is generated in this way. Integrated over the earth's history, this amounts to less than 3% of the present oxygen abundance. The partial pressure of O2 in the prebiotic atmosphere is estimated to be no more than \(10^{-3}\) atm, and may have been several orders of magnitude less.
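The water-vapor photolysis route just described can be summarized, as a simplified net reaction (hν denotes an ultraviolet photon), by

\[2\,\mathrm{H_2O} + h\nu \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}\]

with most of the H2 subsequently escaping to space, leaving the O2 behind.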
The major source of atmospheric oxygen on the earth is photosynthesis carried out by green plants and certain bacteria:

\[\ce{H2O + CO2 -> (CH2O)_x + O2}\]

A historical view of the buildup of atmospheric oxygen concentration since the beginning of the sedimentary record (\(3.7 \times 10^9\) ybp) can be worked out by making use of the fact that the carbon in the product of the above reaction has a slightly lower C13 content than does carbon of inorganic origin. Isotopic analysis of carbon-containing sediments thus provides a measure of the amounts of photosynthetic O2 produced at various times in the past.

Figure: Evolution of atmospheric oxygen content. Note carefully that the curve plots cumulative O2 production, but that until a few hundred million years ago, most of this was taken up by Fe(II) compounds in the crust and by reduced sulfur; only after this massive "oxygen sink" became filled did free O2 begin to accumulate in the atmosphere.

Carbon dioxide has probably always been present in the atmosphere in the relatively small absolute amounts now observed (around \(54 \times 10^{15}\) mol = 54 Pmol). The reaction of CO2 with silicate-containing rocks to form Precambrian limestones suggests a possible moderating influence on its atmospheric concentration throughout geological time:

\[\ce{CaSiO3 + CO2 -> CaCO3 + SiO2}\]

About ten percent of the atmospheric CO2 is taken up each year by photosynthesis. Of this, all except 0.05 percent is returned by respiration, almost entirely due to microorganisms. The remainder leaks into the slow part of the geochemical cycle, mostly as buried carbonate sediments.

Since the advent of the industrial revolution around 1860, the amount of CO2 in the atmosphere has been increasing. Isotopic analysis shows that most of this has been due to fossil-fuel combustion; in recent years, the mass of carbon released to the atmosphere by this means has been more than ten times the estimated rate of natural removal into sediments. The large-scale destruction of tropical forests, which has accelerated greatly in recent years, is believed to exacerbate this effect by removing a temporary sink for CO2.

The oceans have a large absorptive capacity for CO2 owing to its reaction with carbonate ion:

\[\ce{CO2 + CO3^{2-} <=> 2 HCO3^-}\]

There is about 60 times as much inorganic carbon in the oceans as in the atmosphere. However, efficient transfer takes place only into the topmost (100 m) wind-mixed layer of the ocean, which contains only about one atmosphere equivalent of CO2. Further uptake is limited by the very slow transport of water into the deep ocean, which takes around 1000 years. For this reason, the buffering effect of the oceans on atmospheric CO2 is not very effective; only about ten percent of the added CO2 is taken up by the oceans.
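As a consistency check on the figures quoted above, the 54 Pmol inventory can be converted to a mass and a mixing ratio using the molar masses of CO2 (44 g mol⁻¹) and of air (about 29 g mol⁻¹), together with the total atmospheric mass of \(5.27 \times 10^{21}\) g given in the preceding section:

\[m_{\mathrm{CO_2}} = (54\times10^{15}\ \mathrm{mol})(44\ \mathrm{g\ mol^{-1}}) \approx 2.4\times10^{18}\ \mathrm{g}\]
\[x_{\mathrm{CO_2}} = \frac{54\times10^{15}\ \mathrm{mol}}{(5.27\times10^{21}\ \mathrm{g})/(29\ \mathrm{g\ mol^{-1}})} \approx 3.0\times10^{-4} \approx 300\ \mathrm{ppm}\]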
4.1: Chemistry and Energetics of the Life Process
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/04%3A_The_Biosphere/4.01%3A_Chemistry_and_Energetics_of_the_Life_Process
The biosphere comprises the various regions near the earth's surface that contain and are dynamically affected by the metabolic activity of the approximately 1.5 million animal species and 0.5 million plant species that are presently known and are still being discovered at a rate of about 10,000 per year. The biosphere is the youngest of the dynamical systems of the earth, having had its genesis about 2 billion years ago. It is also the one that has most profoundly affected the other major environmental systems, particularly the atmosphere and the hydrosphere.

Figure: "The tree of life"

About a third of the chemical elements cycle through living organisms, which are responsible for massive deposits of silicon, iron, manganese, sulfur, and carbon. Large quantities of methane and nitrous oxide are introduced into the atmosphere by bacterial action, and plants alone inject about 400,000 tons of volatile substances (including some metals) into the atmosphere annually.

It has been suggested that biological activity might be responsible for the deficiency of hydrogen on Earth, compared to its very high relative abundance in the solar system as a whole. Bacteria capable of converting hydrogen compounds into H2 transform this element into a form in which it can escape from the earth; such bacteria may have been especially active in the reducing atmosphere of the early planet.

A second mechanism might be the microbial production of methane, which presently injects about \(10^9\) tons of CH4 into the atmosphere each year. Some of this reaches the stratosphere, where it is oxidized to CO2 and H2O. The water vapor is photolyzed to H2, which escapes into space. This may be the major mechanism by which water vapor (and thus hydrogen) is transported to the upper atmosphere; the low temperature of the upper atmosphere causes most of the water originating at lower levels to condense before it can migrate to the top of the atmosphere.

The increase in the abundance of atmospheric oxygen from its initial value of essentially zero has without question been the most important single effect of life on earth, and the time scale of this increase parallels the development of life forms from their most primitive stages up to the appearance of the first land animals about 0.5 billion years ago.

All life processes involve the uptake and storage of energy, and its subsequent orderly release in small steps during the metabolic process. This energy is taken up in the combination of ADP with inorganic phosphate to form ATP, in which form the energy is stored and eventually delivered to sites where it can provide the free energy needed for driving non-spontaneous reactions such as protein and carbohydrate synthesis:

\[\ce{ADP + PO4^{3-} -> ATP} \qquad \Delta G^\circ = +30\ \mathrm{kJ}\]

The three main metabolic processes are glycolysis, respiration, and photosynthesis. The first two of these extract free energy from glucose by breaking it up into smaller, more thermodynamically stable fragments. Photosynthesis reverses this process by capturing the energy of sunlight in ATP, which then drives the buildup of glucose from CO2.

As its name implies, glycolysis, the most primitive (and least efficient) of all metabolic processes, is based on the breakdown of a sugar into fragments having a smaller total free energy. Thus the 6-carbon sugar glucose can be broken down into two 3-carbon lactic acid units, or into two 2-carbon ethanol units (plus two molecules of CO2):

\[\ce{C6H12O6 -> 2 CH3CHOHCOOH} \qquad \Delta G^\circ = -197\ \mathrm{kJ}\]

In this process, two molecules of ATP are produced, thus capturing 61 kJ of free energy.
Since the standard free energy of glucose (with respect to its elements) is –2870 kJ, this represents an overall efficiency of about 2 percent. The net reaction of glycolysis is essentially a rearrangement of the atoms initially present in the energy source. This is a form of fermentation, which is defined as the enzymatic breakdown of organic molecules in which other organic compounds serve as electron acceptors. Since there is no need for an external oxidizing or reducing agent, there is no change in the oxidation state of the environment.

When the enzymatic degradation of organic molecules is accompanied by transfer of electrons to an external (and usually better) electron acceptor, the process is known as respiration. The overall reaction of respiration is the oxidation of glucose to carbon dioxide:

\[\ce{C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O} \qquad \Delta G^\circ = -2870\ \mathrm{kJ}\]

In this process, 36 molecules of ADP are converted into ATP, thus capturing 1100 kJ of free energy: an efficiency of 38 percent (these ratios are collected in a worked sketch at the end of this section). The oxidizing agent (electron sink) need not be oxygen; some bacteria reduce nitrates to NO or to N2, and sulfates or sulfur to H2S. These metabolic products can have far-reaching localized environmental effects, particularly if hydrogen ions are involved. "Falling down the respiratory ladder" presents a succinct picture of oxidation-reduction and the role of non-O2 electron sinks in biological energy capture.

In photosynthesis, the energy of sunlight is trapped in the form of an intermediate which is able to deliver electrons to successively lower free-energy levels through the mediation of various molecules (mainly cytochromes) comprising an electron-transport chain. The free energy thus gained is utilized in part to reduce CO2 to glucose, which is then available to supply metabolic energy by glycolysis or respiration. In green plants and eukaryotic algae, the source of hydrogen is water, the net reaction being

\[\ce{6 CO2 + 6 H2O -> C6H12O6 + 6 O2} \qquad \Delta G^\circ = +2830\ \mathrm{kJ}\]

For every CO2 molecule fixed in this way, about 470 kJ of free energy must be supplied. Red light of 680 nm wavelength has an energy of 176 kJ/mol; this implies that about three photons must be absorbed for every carbon atom taken up, but experiment indicates that about ten seem to be required.

Figure: The photosynthesis–respiration cycle. Photosynthesis utilizes the energy of red light to add hydrogen (from H2O) and electrons (from the O in H2O) to CO2, reducing it to carbohydrate. Respiration is the reverse of this process; electrons are removed from the carbohydrate (food is "burned") in small steps, each one releasing a small amount of energy. Some of this energy is liberated as heat, but part of it is used to add a phosphate group to ADP, converting it into ATP. ATP (adenosine triphosphate) is to an organism's energy needs as money is to our material needs; it circulates to wherever it is required in order to bring about energy-requiring reactions or to make muscle cells contract. Each increment of energy given up by ATP converts it back to ADP and phosphate, ready to repeat the cycle.

Green plants are able to operate in both modes during the daylight hours, reverting to respiration-only at night. Animals carry out only the right side of the cycle, and thus require a source of carbohydrate ("food") obtained either directly (by eating plants) or indirectly (by eating other animals that eat plants).

There are many kinds of photosynthetic bacteria, but with one exception (the cyanobacteria) they are incapable of using water as a source of hydrogen for reducing carbon dioxide.
Instead, they consume hydrogen sulfide or other reduced sulfur compounds, organic molecules, or elemental hydrogen itself, excreting the reducing agent in an oxidized state. Green plants, cyanobacteria, green filamentous bacteria and the purple nonsulfur bacteria utilize glucose by respiration during periods of darkness, while the green sulfur bacteria and the purple sulfur bacteria are strictly anaerobic.
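The efficiency figures quoted in this section follow from simple ratios; a sketch, taking ΔG° ≈ 30 kJ per mole of ATP formed and –2870 kJ per mole of glucose fully oxidized, as given above:

\[\text{glycolysis: } \frac{2 \times 30\ \mathrm{kJ}}{2870\ \mathrm{kJ}} \approx 2\%, \qquad \text{respiration: } \frac{36 \times 30\ \mathrm{kJ}}{2870\ \mathrm{kJ}} \approx 38\%\]

Similarly, the photon requirement of photosynthesis follows from \((2830/6)\ \mathrm{kJ} \div 176\ \mathrm{kJ\ mol^{-1}} \approx 2.7\), i.e. about three 680-nm photons per carbon atom fixed, in the thermodynamic limit.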
4.2: Biogeochemical Evolution
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/04%3A_The_Biosphere/4.02%3A_Biogeochemical_Evolution
Present evidence suggests that blue-green algae, and possibly other primitive microbial forms of life, were flourishing 3 billion years ago. This brackets the origin of life to within one billion years; prior to 4 billion years ago, surface temperatures were probably above the melting point of iron, and there was no atmosphere or hydrosphere.

By about 3.8 billion years ago, or one billion years after the earth was formed, cooling had occurred to the point where rain was possible, and primitive warm, shallow oceans had formed. The atmosphere was anoxic and highly reducing, containing mainly CO2, N2, CO, H2O, H2S, traces of H2, NH3, CH4, and less than 1% of the present amount of O2, probably originating from the photolysis of water vapor. This oxygen would have been taken up quite rapidly by the many abundant oxidizable substances such as Fe(II), H2S, and the like.

Figure: Timeline for development of the major life forms.

The fossil record that preserves the structural elements of organisms in sedimentary deposits has for some time provided a reasonably clear picture of the evolution of life during the past 750 million years. In more recent years, this record has been considerably extended, as improved techniques have made it possible to study the impressions made by single-celled microorganisms embedded in rock formations.

The main difficulty in studying fossil microorganisms extending back beyond a billion years is in establishing that the relatively simple structural forms one observes are truly biogenic. There are three major kinds of evidence for this. If all three of these lines of evidence are present in samples that can be shown to be contemporaneous with the sediments in which they are found, then the argument for life is incontrovertible.

One of the most famous of these sites was discovered near Thunder Bay, Ontario in the early 1950s. The Gunflint Formation consists of an exposed layer of chert (largely silica) from which the overlying shale of the Canadian Shield had been removed. Microscopic examination of thin sections of this rock revealed a variety of microbial cell forms, including some resembling present freshwater blue-green algae. Also present in the Gunflint deposits are the oldest known examples of metazoa, or organisms which display a clear differentiation into two or more types of cell. These deposits have been dated at 1.9–2.0 billion years.

Figure: These filaments are believed to be the fossilized imprints of blue-green algae, one of the earliest life forms. They occur in the Bitter Springs Formation in Australia and are about 850 million years old.

The evidence from very old paleomicrobiotic deposits is less clear. Western Australia has yielded fossil forms that are apparently 2.8 billion years old, and other deposits in the same region contain structures resembling living blue-green algae. Other forms, heavily modified by chemical infiltration, bear some resemblance to a present iron bacterium, and are found in sediments laid down 3.5 billion years ago, but evidence that these fossils are contemporaneous with the sediments in which they are found is not convincing. The oldest evidence of early life is the observed depletion of C13 in 3.8-billion-year-old rocks found in southwestern Greenland.

Under the conditions that prevailed at this time, most organic molecules would be thermodynamically stable, and there is every indication that a rich variety of complex molecules would be present.
The most direct evidence of this comes from laboratory experiments that attempt to simulate the conditions of the primitive environment of this period, the first and most famous of these being the one carried out by Stanley Miller in 1953.

Figure: Schematic of the Miller experiment. A mixture of the various reduced gases believed to compose the early atmosphere circulates through an apparatus in which spark discharges (intended to simulate lightning) create a complex mixture of organic compounds. See S.L. Miller, "A Production of Amino Acids under Possible Primitive Earth Conditions," Science 117: 528-529 (1953).

Since that time, other experiments of a similar nature have demonstrated the production of a wide variety of compounds under prebiotic conditions, including nearly all of the monomeric components of the macromolecules present in living organisms. In addition, small macromolecules, including peptides and sugars, as well as structural entities such as lipid-based micelles, have been prepared in this way.

The discovery in 1989 of a number of amino acids in the iridium-rich clay layer at the Cretaceous-Tertiary boundary suggests that bio-precursor molecules can be formed or deposited during a meteoric impact. Although this particular event occurred only 65 million years ago (and is presumed to be responsible for the extinction of the dinosaurs), the Earth has always been subject to meteoric impacts, and it is conceivable that these have played a role in the origin of life.

The presence of clays, whose surfaces are both asymmetric and chemically active, could have favored the formation of species of a particular chirality; a number of experiments have shown that clay surfaces can selectively adsorb amino acids which then form small peptides. It has been suggested that the highly active and ordered surfaces of clays not only played a crucial role in the formation of life, but might have actually served as parts of the first primitive self-replicating life forms, which only later evolved into organic species.

Since no laboratory experiment has yet succeeded in producing a self-replicating species that can be considered living, the mechanism by which this came about in nature must remain speculative. Infectious viruses have been made in the laboratory by simply mixing a variety of nucleotide precursors with a template nucleic acid and a replicase enzyme; the key to the creation of life is how to do the same thing without the template and the enzyme.

Smaller polynucleotides may have formed adventitiously, possibly on the active surface of an inorganic solid. These could form complementary base-paired polymers, which might then serve as the templates for larger molecules. Non-enzymatic template-directed synthesis of nucleotides has been demonstrated in the laboratory, but the resulting polymers have linkages that are not present in natural nucleotides. It has been suggested that these linkages could have been selectively hydrolyzed by a long period of cycling between warm, cool, wet, and dry environmental conditions. The earth at that time was rotating more rapidly than it is now; cycles of hydration-dehydration and of heating-cooling would have been more frequent and more extreme.

The first organisms would of necessity have been heterotrophs; that is, they derived their metabolic energy from organic compounds in the environment.
Their capacity to synthesize molecules was probably very limited, and they would have had to absorb many key substances from their surroundings in order to maintain their metabolic activity. Among the most primitive organisms of this kind are the archaea, which are believed to be predecessors of both bacteria and eucaryotes. DNA sequencing of one such organism, a methane-producer that lives in ocean-bottom sediments at 200 atm and 48–94°C, reveals that only about a third of its genes resemble those of bacteria or eucaryotes. It has been estimated that about 50 genes are required in order to define the minimal biochemical and structural machinery that a hypothetical simplest possible cell would have.

The earliest organisms derived their metabolic energy from the organic substances present in their environment; once they began to reproduce, this nutrient source began to become depleted. Some species had probably by this time developed the ability to reduce carbon dioxide to methane; the hydrogen source could at first have been H2 itself (at that time much more abundant in the atmosphere), and later, various organic metabolites from other species could have served.

Before the food supply neared exhaustion, some of these organisms must have developed at least a rudimentary means of absorbing sunlight and using this energy to synthesize metabolites. The source of hydrogen for the reduction of CO2 was at first small organic molecules; later photosynthetic organisms were able to break this dependence on organic nutrients and obtain the hydrogen from H2S. These bacterial forms were likely the dominant form of life for several hundred million years. Eventually, due perhaps to the failing supply of H2S, plants capable of mediating the photochemical extraction of hydrogen from water developed. This represented a large step in biochemical complexity; it takes 10 times as much energy to abstract hydrogen from water as from hydrogen sulfide, but the supply is virtually limitless. It appears that photosynthesis evolved in a kind of organism whose present-day descendants are known as cyanobacteria.

The five "kingdoms" into which living organisms are classified are Monera, Protista (protozoans, algae), Fungi, Plantae, and Animalia. The genetic (and thus, evolutionary) relations between these and the subcategories within them are depicted below. Superimposed on this, however, is an even more fundamental division between the procaryotes and eucaryotes.

The procaryotes are primitive organisms whose single cells contain no nucleus; the gene-bearing structure is a single long DNA chain that is folded irregularly throughout the cell. Procaryotic cells usually reproduce by budding or division; where sexual reproduction does occur, there is a net transfer of some genetic material from one cell to another, but there is never an equal contribution from both parents. In spite of their primitive nature, procaryotes constitute the majority of organisms in the biosphere. The division between bacteria and archaea within the procaryotic group is a fairly recent one. Archaea are now believed to be the most primitive of all organisms, and include the so-called extremophiles that occupy environmental niches in which life was at one time thought to be impossible; they have been found in sedimentary rocks, hot springs, and highly saline environments.

All other organisms (seaweeds and other algae, protozoa, molds, fungi, animals and plants) are composed of eucaryotic cells.
These all have a membrane-bound nucleus, and with a few exceptions they all reproduce by mitosis, in which the chromosomes split longitudinally and move toward opposite poles. Other organelles unique to eucaryotes are mitochondria, chloroplasts, and structural elements such as microtubules.

Oxygen and biogeochemical evolution

Oxygen is poisonous to all forms of life in the absence of enzymes that can reduce the highly reactive byproducts of oxidation and oxidative metabolism (peroxides, superoxides, etc.). All organic compounds are thermodynamically unstable in the presence of oxygen; carbon-carbon double bonds in lipids are subject to rapid attack. Prebiotic chemical evolution leading to the development of biopolymers was possible only under the reducing, anoxic conditions of the primitive atmosphere.

Figure: The rise of atmospheric oxygen. Once organisms existed that could use water as a hydrogen source for the reduction of carbon dioxide, O2 began to be introduced into the atmosphere. The widespread occurrence of ferrous compounds in surface rocks and sediments provided a sink for this oxygen that probably did not become saturated until about 2 billion years ago, when the atmospheric oxygen abundance first rose above about 1 percent.

As the oxygen concentration began to rise, organisms in contact with the atmosphere had to develop protective mechanisms in order to survive. One indication of such adaptation is the discovery of fossil microbes whose cell walls are unusually thick. A more useful kind of adaptation was the synthesis of compounds that would detoxify oxygen by reacting rapidly either with O2 itself or with peroxides and other active species derived from it. Isoprenoids (the precursors of steroids) and porphyrins are examples of two general classes of compounds that are found in nearly all organisms, and which may have originated in this way. Later, highly efficient oxygen-mediating enzymes such as peroxidase and catalase developed.

The widespread phenomenon of bioluminescence may be the result of a very early adaptation to oxygen. The compound luciferin is a highly efficient oxygen detoxifier, which also happens to be able to emit light under certain conditions. Bioluminescence probably developed as a by-product in early procaryotic organisms, but was gradually lost as more efficient detoxifying mechanisms became available.

In spite of the deleterious effects of oxygen on cell biomolecules, O2 is nevertheless an excellent electron sink, capable of releasing large quantities of energy through the oxidation of glucose. This energy can be efficiently captured through oxidative phosphorylation, the key process in respiration. A cell that utilizes oxygen must have a structural organization that isolates the oxygen-consuming respiratory centers from the other parts of the cell that would be poisoned by oxygen or its reaction products. Some procaryotic organisms have developed in this way; a number of cyanobacteria and other species are facultative anaerobes which can survive both in the presence and absence of oxygen.

It is in the eucaryotic cell, however, that this organization is fully elaborated; here, respiration occurs in membrane-bound organelles called mitochondria. With only a few exceptions, all eucaryotic organisms are obligate aerobes; they can rarely survive and can never reproduce in the absence of oxygen.
Mitotic cell division depends on the contractile properties of the protein actomyosin, which only forms when oxygen is present.

The development of the eucaryotic cell about 1.4 billion years ago is regarded as the most significant event in the evolution of the earth and of the biosphere since the appearance of photosynthesis and the origin of life itself. How did it come about? The present belief, supported by an increasing amount of evidence, suggests that it began when one species of organism engulfed another. The ingested organism possessed biochemical machinery not present in the host, but which was retained in such a way that it conferred a selective evolutionary advantage on the host. Eventually the two organisms became able to reproduce as one, and so effectively became a single organism. This process is known as endosymbiosis.

According to this view, mitochondria represent the remains of a primitive oxygen-tolerant organism that was incorporated into one that could produce the glucose fuel for the oxygen to burn. Chloroplasts were once free-living photosynthesizing procaryotes similar to present-day cyanobacteria. It is assumed that some of these began parasitizing respiratory organisms, conferring upon them the ability to synthesize their carbohydrate food during daylight. The immense selective advantage of this arrangement is evident in the extent of the plant kingdom.

It is interesting that an atmospheric oxygen concentration of about 1 percent, known as the Pasteur point, is both the maximum that obligate anaerobes can tolerate and the minimum required for oxidative phosphorylation. (Louis Pasteur discovered that some bacteria are anaerobic and unable to tolerate oxygen above 1% concentration.)

As was mentioned previously, the oxygen produced by the first photosynthetic organisms was taken up by ferrous iron in sediments and surface minerals. The widespread deposits known as banded iron formations consist of alternating layers of Fe(III)-containing oxides (hematite and magnetite) and silica-rich chert that were laid down between 1 and 2 billion years ago; the layering may reflect changing climatic or other environmental conditions that brought about a cycling of the organism population.

During the buildup of oxygen, an equivalent amount of carbon had to be deposited in sediments in order to avoid the thermodynamically spontaneous back reaction, which would consume the O2 through oxidation of the organic matter. Thus the present levels of atmospheric oxygen are due to a time lag in the geochemical cycling of photosynthetic products.

As the oxygen concentration increased, evolution seems to have speeded up; this may reflect both the increased metabolic efficiency and the greater biochemical complexity of the eucaryotic cell. The oldest metazoan (multiple-celled) fossils are coelenterates that appeared about 700 million years ago. Modern representatives of this group such as marine worms and jellyfish can tolerate oxygen concentrations as low as 7%, thus placing a lower boundary on the atmospheric oxygen content of that era.
The oldest fossil organisms believed to have possessed gills, which function only above 10% oxygen concentration, appeared somewhat later. Carbon dioxide decreased as oxygen increased, as indicated by the prevalence of dolomite over limestone in early marine sediments.
4.3: Gaia - Bioregulation of the Environment
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/04%3A_The_Biosphere/4.03%3A_Gaia_-_Bioregulation_of_the_Environment
The physical conditions under which life as we know it can exist encompass a relatively narrow range of temperature, pH, osmotic pressure, and ultraviolet radiation intensity. It seems remarkable enough that life was able to get started at all; it is even more remarkable that it has continued to thrive in the face of all the perils that have occurred, or could have occurred, during the past 3 billion years or so.

During the time that life has been evolving, the sun has also been going through the process of evolution characteristic of a typical star; one consequence of this is an increase in its energy output by about 30 percent during this time. If the sun's output should suddenly drop to what it was 3 billion years ago, the oceans would freeze. How is it that the earth was not in a frozen state for the first 1.5 billion years of life's existence? Alternatively, if conditions were somehow suitable 3 billion years ago, why have the oceans not long since boiled away?

A rather non-traditional answer to this kind of question is that the biosphere is far from playing a passive role in which it is continually at the mercy of environmental conditions. Instead, the earth's atmosphere, and to a lesser extent the hydrosphere, may be actively maintained and regulated by the biosphere. This view has been championed by the British geochemist J.E. Lovelock, and is known as the Gaia hypothesis. Gaia is another name for the Greek earth-goddess Ge, from which root the sciences of geography, geometry, and geology derive their names. Lovelock's book Gaia: A New Look at Life on Earth (Oxford, 1979) is a short and highly readable discussion of the hypothesis.

Evidence in support of this hypothesis is entirely circumstantial, but nevertheless points to important questions that must be answered: how have the climatic and chemical conditions on the earth remained optimal for life during all this time, and how can the chemical composition of the atmosphere remain in a state that is tens of orders of magnitude from equilibrium?

Note: Although the Gaia hypothesis has received considerable publicity in the popular press, it has never been very well received by the scientific community, many of whom feel that there is no justification for proposing a special hypothesis to describe a set of connections which can be quite adequately explained by conventional geochemical processes. More recently, even Lovelock has backed away from the teleological interpretation of these relations, so that the Gaia hypothesis should now be more properly described as a set of loosely connected effects, rather than as a hypothesis. Nevertheless, these effects and the mechanisms that might act to connect them are sufficiently interesting that it seems worthwhile to provide an overview of the major observations that led to the development of the hypothesis. (Teleology is the doctrine that natural processes operate with a purpose. See "No Longer Willful, Gaia Becomes Respectable," Science 240: 393-395 (1988).)

The increase in the oxygen content of the atmosphere as a result of the development of the eucaryotic cell was discussed above. Why has the oxygen content leveled off at 21 percent? It is interesting to note that if the oxygen concentration in the atmosphere were only four percent higher, even damp vegetation, once ignited by lightning, would continue to burn, enveloping vast areas of the earth in a firestorm. Evidence for such a worldwide firestorm that may be related to the extinction of the dinosaurs has recently been discovered.
The charcoal layers found in widely distributed sediments laid down about 65 million years ago are coincident with the iridium anomaly believed to be due to the collision of a large meteor with the earth.

The input of salts into the sea from streams and rivers is about \(5.4 \times 10^8\) tons per year, into a total ocean volume of about \(1.2 \times 10^9\) km³. Upwelling of juvenile water and hydrothermal action at oceanic ridges provide additional inputs of salts. With a few bizarre exceptions such as the brine shrimp and halophilic bacteria, 6 percent is about the maximum salinity level that organisms can tolerate. The internal salinities of cells must be maintained at much lower levels (around 1%) to prevent denaturation of proteins and other macromolecules whose conformations are dependent on electrostatic forces. At salinities much above this level, the electrostatic interaction between the salt ions and the cell membrane destroys the integrity of the latter, so that it can no longer pump out salt ions that leak in along the osmotic gradient. At the present rate of salt input, the oceans would have reached their present levels of salinity millions of years ago, and would by now have an ionic strength far too high to support life, as is presently the case in the landlocked Dead Sea. (A rough timescale estimate is sketched at the end of this section.)

The present average salinity of seawater is 3.4 percent. The salinity of blood, and of many other intra- and intercellular fluids in animals, is about 0.8 percent. If we assume that the first organisms were approximately in osmotic equilibrium with seawater, then our body fluids might represent "fossilized" seawater as it existed at the time our predecessors moved out of the sea and onto the land.

By what processes is salt removed from the oceans in order to maintain a steady-state salinity? This remains one of the major open questions of chemical oceanography. There are a number of answers, mostly based on strictly inorganic processes, but none is adequately supported by available evidence. For example, Na+ and Mg2+ ions could adsorb to particulate debris as it drops to the seafloor, and become incorporated into sediments. The requirement for charge conservation might be met by the involvement of negatively charged silicate and hydroxyaluminum ions. Another possible mechanism might be the burial of salt beds formed by evaporation in shallow, isolated arms of the sea, such as the Persian Gulf. Extensive underground salt deposits are certainly found on most continents, but it is difficult to see how this very slow mechanism could have led to an unfluctuating salinity over shorter periods of highly variable climatic conditions.

The possibility of biological control of oceanic salinity starts with the observation that about half of the earth's biomass resides in the sea, and that a significant fraction of this consists of diatoms and other organisms that build skeletons of silica. When these organisms die, they sink to the bottom of the sea and add about 300 million tons of silica to sedimentary rocks annually. It is for this reason that the upper levels of the sea are undersaturated in silica, and that the ratio of silica to salt in dead salt lakes is much higher than in the ocean. These facts could constitute a basis for a biological control of the silica content of seawater; any link between silica and salt could lead to the control of the latter substance as well.
For example, the salt ions might adsorb onto the silica skeletons and be carried down with them; if the growth of these silica-containing organisms is itself dependent on salinity, we would have our negative feedback mechanism.

The continual buildup of biogenic sedimentary deposits on the ocean floor might possibly deform the thin oceanic crust by its weight, and cause local heating by its insulating properties. This could conceivably lead to volcanic action and the formation of new land mass, thus linking the lithosphere into Gaia.
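The statement above that the oceans would have reached their present salinity within millions of years can be checked with a rough sketch, assuming a seawater density of about 1 kg L⁻¹ (so that \(1.2 \times 10^9\ \mathrm{km^3} \approx 1.2 \times 10^{21}\ \mathrm{kg}\)) and using the 3.4% salinity and salt-input rate quoted earlier:

\[t \approx \frac{(1.2\times10^{21}\ \mathrm{kg})(0.034)}{(5.4\times10^{8}\ \mathrm{t\ yr^{-1}})(10^{3}\ \mathrm{kg\ t^{-1}})} \approx 7.5\times10^{7}\ \mathrm{yr}\]

That is, about 75 million years, a small fraction of the age of the oceans, so some removal process is clearly required.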
Preface
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/00%3A_Front_Matter/01%3A_TitlePage
Since about the 1990s, “green” has come into widespread use as a term to describe practices and disciplines that deal with sustainability and the maintenance of environmental quality. One area that has been particularly active is chemistry, with green chemistry the subject of large numbers of symposia, international meetings, books, and journal papers. In addition, green chemistry institutes and academic programs have been established in various countries.

Green Chemistry and the Ten Commandments of Sustainability, Third Edition, is a basic book on green chemistry and environmental sustainability designed for readers who have a need to learn about these topics at a fundamental level. Most works on green chemistry have concentrated on aspects of chemical synthesis, especially organic chemical synthesis. Green Chemistry and the Ten Commandments of Sustainability discusses chemistry as a whole, particularly as it relates to the environment and sustainability. In addition to covering green chemistry, the book deals with sustainable science and technology in general. In so doing, it views Earth and its environment as consisting of five highly interactive spheres: the hydrosphere, the atmosphere, the geosphere, the biosphere, and the anthrosphere. It is particularly important to consider the anthrosphere, that part of Earth’s environment made and operated by humans, because of its overwhelming importance in determining Earth’s environment.

Chapter 1, “Sustainability and the Environment,” consists of an introduction to environmental science and the concept of sustainability. It introduces and defines the five environmental spheres. Green science and green technology are introduced and explained. In recognition of the overwhelming importance of energy in sustainability, this chapter includes a section entitled “Sustainable Energy: Away from the Sun and Back Again” that explains how humankind relied on solar energy, such as photosynthetically produced food and wood, for most of its time on Earth, then entered an approximately two-century era in which fossil fuels became dominant energy sources, but now must return to the sun directly and indirectly for basic energy supply.

Chapter 3, “The Key Role of Chemistry and Making Chemistry Green,” outlines the importance and role of chemistry in sustainability. Environmental chemistry is introduced as a key discipline in sustainability. Green chemistry is defined, and the twelve principles of green chemistry are listed and explained. The chapter provides a brief introduction to the most basic aspects of chemistry to aid in understanding chemical concepts in later chapters. Chapters 3 through 7 cover the fundamentals of chemistry from a green chemistry perspective. Chapter 6 is a basic coverage of organic chemistry, and Chapter 7 deals with biochemistry as it relates to green chemistry.

Chapters 8 through 13 are organized according to the five spheres of the environment. Chapter 8, “The Five Environmental Spheres and Biogeochemical Cycles,” defines and explains each of these spheres and how they relate and interact through biogeochemical cycles of matter. Chapter 9, “Water, the Ultimate Green Substance,” covers the hydrosphere. It also emphasizes the unique properties of water as related to the structure of the water molecule and explains the important role of water in green technology.
Chapter 10, “The Atmosphere: Blue Skies for a Green Environment,” deals with air and the atmosphere, the natural capital provided by the atmosphere, the protective role of the atmosphere, and threats to the atmosphere and climate from activities in the anthrosphere, including the combustion of carbonaceous fuels.

Chapter 11, “The Geosphere and a Green Earth,” covers a number of topics related to the geosphere, including aspects of geology, natural hazards of the geosphere (volcanoes, earthquakes), natural capital of the geosphere (minerals), and the geosphere as a repository of wastes. Important sections of this chapter discuss soil, how its productive capacity may be lost through erosion and desertification, and how green technology may prevent these harmful effects.

Chapter 12, “The Biosphere and the Role of Green Chemistry in Feeding a Hungry World,” begins with a basic coverage of biology as it relates to sustainability and among other topics discusses the production of food and fiber by the biosphere (agriculture). It also contains a discussion of agricultural applications of genetically modified organisms as well as a section on how the anthrosphere may be operated in a way that supports and benefits the biosphere.

Chapter 13, “The Anthrosphere, Industrial Ecology, and Green Chemistry,” begins with a discussion of the emerging area of industrial ecology, which treats industrial systems in a manner analogous to natural ecosystems, including industrial metabolism through which materials are processed to produce manufactured products. Life cycles of materials are discussed with respect to sustainability. The role of green chemistry in sustainable manufacturing is explained in this chapter.

As the title implies, Chapter 14, “Feeding the Anthrosphere: Utilizing Renewable and Biological Materials,” discusses how feedstocks for manufacturing may be produced sustainably. Emphasis is placed upon biological sources of feedstocks produced through photosynthesis. Biorefineries and their role in biomass utilization are explained.

Chapter 15, “Sustainable Energy: The Essential Basis of Green Systems,” explains the key importance of sustainable energy in sustainability and how most environmental and sustainability problems can be solved if abundant sources of energy are available and if they can be used without doing unacceptable harm to the environment. Part of the chapter pertains to green technology for efficient energy conversion and utilization. Renewable sources of energy including solar electric, wind, moving water, and biomass are discussed in this chapter.

Chapter 16, “Terrorism, Toxicity, and Vulnerability: Green Chemistry and Technology in Defense of Human Welfare,” discusses the role of green chemistry, science, and technology in dealing with terrorist threats. A major part of the chapter deals with toxic substances and toxicology as they relate to terrorist threats. The chapter also covers potential biohazards in terrorism and protecting water, food, and air.

Chapter 17, “The Ten Commandments of Sustainability and Sensible Measures,” presents ten important rules for the achievement of sustainability upon which part of the book title is based. It concludes with a section on “sensible measures” that might be taken to enhance sustainability. Designed to provoke thought, these suggestions range from small measures to grandiose schemes. The author welcomes input from readers and may be contacted at the following
1.1: Sustainability
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/01%3A_Sustainability_and_the_Environment/1.01%3A_Sustainability
The old Chinese proverb certainly applies to modern civilization and its relationship to the world resources that support it. Evidence abounds that humans are degrading the Earth life support system upon which they depend for their existence. The emission to the atmosphere of carbon dioxide and other greenhouse gases is almost certainly causing global warming and climate change. Discharge of pollutants has degraded the atmosphere, the hydrosphere, and the geosphere in industrialized areas and has placed great stress on parts of the biosphere. Natural resources including minerals, fossil fuels, fresh water, and biomass have become stressed and depleted. The productivity of agricultural land has been diminished by water and soil erosion, deforestation, desertification, contamination, and conversion to non-agricultural uses. Wildlife habitats including woodlands, grasslands, estuaries, and wetlands have been destroyed or damaged.

About 3 billion people (half of the world’s population) live in dire poverty on less than the equivalent of U.S. $2 per day. The majority of these people lack access to sanitary sewers, and the conditions under which they live give rise to debilitating viral, bacterial, and protozoal diseases. At the other end of the living standard scale, a relatively small fraction of the world’s population consumes an inordinate amount of resources, with lifestyles that involve living too far from where they work in energy-wasting houses that are far larger than they need, commuting long distances in large “sport utility vehicles” that consume far too much fuel, and overeating to the point of unhealthy obesity, with accompanying problems of heart disease, diabetes, and other obesity-related maladies.

As We Enter the Anthropocene

Humans have gained an enormous capacity to alter Earth and its support systems. Their influence is so great that we are now entering a new epoch, the Anthropocene, in which human activities have effects that largely determine conditions on the planet. The major effects of humans upon Earth have taken place within a minuscule period of time relative to that during which life has been present on the planet or, indeed, relative to the time that modern humans have existed. These effects are largely unpredictable, but it is essential for humans to be aware of the enormous power in their hands, and of their limitations if they get it wrong and ruin Earth and its climate as life support systems.

Achieving Sustainability

Although the condition of the world and its human stewards outlined above sounds rather grim and pessimistic, this is not a grim and pessimistic book. That is because the will and ingenuity of humans that have given rise to conditions leading to the deterioration of Planet Earth can be, and indeed are being, harnessed to preserve the planet, its resources, and its characteristics that are conducive to healthy and productive human life. The key is sustainability, or sustainable development, defined by the Brundtland Commission in 1987 as industrial progress that meets the needs of the present without compromising the ability of future generations to meet their own needs.1

A key aspect of sustainability is the maintenance of Earth’s carrying capacity, that is, its ability to maintain an acceptable level of human activity and consumption over a sustained period of time. Although change is a normal characteristic of nature, sudden and dramatic change can cause devastating damage to Earth support systems.
Change that occurs faster than such systems can adjust can cause irreversible damage to them. In addition to its main theme of green chemistry, a major purpose of this book is to serve as an overview of the science and technology of sustainability, emphasizing sustainable chemistry as well as the general science and technology of sustainability.

Rethinking Environmentalism and Sustainability

The common view of a good, sustainable environment as a rural, low-population-density area may be misleading. A convincing argument for this proposition is made in the 2009 book Green Metropolis: Why Living Smaller, Living Closer, and Driving Less are the Keys to Sustainability.2 Classified as an “eco-urbanist manifesto,” this book makes the somewhat surprising case that New York City’s Manhattan is a model of sustainability for the modern overpopulated world. This densely populated, compact city emits less than one third as much greenhouse gas per person as the average for the United States. One reason is that the large apartment buildings and other large structures in New York City are very efficient in conserving heat; that which leaks from one tends to end up heating another. Cold air produced by air conditioning in the summer is similarly conserved. Another reason the city is energy-efficient stems from its outrageously congested traffic and lack of affordable parking, meaning that the automobile is impractical for most residents, thereby forcing reliance on far more efficient public transportation. Only about one-fifth of New York City’s residents regularly commute in individual automobiles. In contrast, those who live “close to nature” in rural settings tend to dwell in free-standing houses that are inherently less energy-efficient than apartment buildings, and by necessity they must commute in energy-wasting vehicles. If they live on unimproved roads they may require especially inefficient large, rugged four-wheel-drive vehicles. Such a lifestyle cannot be compensated for by measures advocated by many environmentalists, such as backyard compost piles and fuel-efficient vehicles.

According to Green Metropolis, New York City, which has a population density more than 800 times that of the U.S. as a whole and about 30 times that of Los Angeles, offers a model for a growing world population to exist within the confines of Earth’s limited resources. The prescription for sustainability is to “live smaller, live closer, and drive less.” To that may be added “reproduce less,” in that dense urban environments tend to discourage large families. A major culprit in the development of modern environmental problems is the public obsession with the private automobile, which enables destructive urban sprawl and excessive consumption of gasoline. One of the unintended consequences of the laudable goal of increased fuel economy in automobiles is to make them more affordable to use, thus facilitating destructive urban sprawl. The automobile-based societies of the U.S. and many other industrialized nations have been made possible by the exploitation of relatively abundant and inexpensive petroleum. In years to come, as petroleum inevitably becomes more scarce and expensive, these societies will have to undergo wrenching changes, the best end result of which would be much more sustainable, compact urban societies.
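The density comparison above is easy to sanity-check with rough figures. The short Python sketch below uses approximate, publicly known population and land-area values (my own illustrative numbers, not figures taken from Green Metropolis; the 800-fold claim evidently refers to Manhattan rather than to the city as a whole):

# Rough sanity check of the density ratio cited above.
# Population and area values are approximate, illustrative figures (ca. 2009).
manhattan_pop, manhattan_area_km2 = 1.6e6, 59.0   # Manhattan
us_pop, us_area_km2 = 3.07e8, 9.15e6              # U.S. population and land area

def density(pop, area_km2):
    """People per square kilometer."""
    return pop / area_km2

ratio = density(manhattan_pop, manhattan_area_km2) / density(us_pop, us_area_km2)
print(f"Manhattan is roughly {ratio:.0f} times as dense as the U.S. average")
# prints roughly 800, consistent with the claim of "more than 800 times"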
1.2: The Environment and the Five Environmental Spheres
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/01%3A_Sustainability_and_the_Environment/1.02%3A_The_Environment_and_the_Five_Environmental_Sphere
Since this book deals with the environment, it is important to know what is meant by the environment. Essentially, the environment consists of our surroundings, which may affect us and which, in turn, we may affect. A part of the environment may consist of rock formations several kilometers below Earth’s surface, so deep that humans cannot reach them and of which they are scarcely aware except in those instances when the rock formations shift along a fault line and cause an earthquake, which may be very destructive and take many lives. Another part of the environment is the atmosphere touching Earth’s surface, a part of the environment with which humans are always in contact and which is essential for the life-giving oxygen that they require. In discussing the environment it is helpful to regard it as consisting of five overlapping and strongly interacting spheres: the hydrosphere, the atmosphere, the geosphere, the biosphere, and the anthrosphere. For example, fish are part of the biosphere, dwell in the hydrosphere, and acquire the dissolved oxygen that they need from the atmosphere. Mineral nutrients required by the fish and by the algae upon which the fish feed come from the geosphere. The part of the hydrosphere in which the fish reside may be a reservoir constructed by impounding a stream with a dam that is part of the anthrosphere. Many other such examples may be cited.

Biogeochemical cycles describe the exchange of materials among the five environmental spheres. Aspects of these environmentally crucial cycles are covered in various parts of this book. As the name implies, these cycles involve biological and geochemical phenomena but may also include processes that occur in the atmosphere and the hydrosphere, as well as human influences on them. An important part of these cycles consists of the interfaces between the environmental spheres. The interfaces are often very thin with respect to Earth’s whole environment. An important example of such an interface is the one between the geosphere and the atmosphere where most of the plants that support life on Earth grow. Typically this region extends into the geosphere for only the meter or less penetrated by plant roots and into the atmosphere only to the height of the plants. Within this region there are other interfaces, including the biosphere/geosphere boundary between plant roots and soil and the biosphere/atmosphere boundary across which oxygen and carbon dioxide gas are exchanged between leaf surfaces and the atmosphere.

The study of the environment is environmental science, in its broadest sense the science of the complex interactions that occur among the terrestrial, atmospheric, aquatic, living, and anthropological systems that compose Earth and the surroundings that may affect living things.3 It includes all the disciplines, such as chemistry, biology, ecology, sociology, and government, that affect or describe these interactions. For the purposes of this book, environmental science will be defined as the study of the earth, air, water, and living environments, and the effects of technology thereon. To a significant degree, environmental science has evolved from investigations of the ways by which, and places in which, living organisms carry out their life cycles.
This discipline used to be known as natural history, which later evolved into ecology, the study of environmental factors that affect organisms and of how organisms interact with these factors and with each other.
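The idea that materials cycle among reservoirs in the different spheres can be made concrete with a minimal box model. The Python sketch below uses entirely hypothetical reservoir sizes and first-order exchange rates (none of these numbers come from the text); it steps a single substance among three coupled reservoirs and shows that the total is conserved, the defining feature of a closed biogeochemical cycle:

# Minimal three-box biogeochemical cycle: atmosphere <-> biosphere <-> geosphere.
# All reservoir sizes and rate constants are hypothetical, for illustration only.
reservoirs = {"atmosphere": 800.0, "biosphere": 600.0, "geosphere": 2000.0}
# First-order transfer coefficients (fraction of the source reservoir per year).
transfers = [
    ("atmosphere", "biosphere", 0.15),   # e.g., photosynthetic uptake
    ("biosphere", "atmosphere", 0.18),   # e.g., respiration and decay
    ("biosphere", "geosphere", 0.02),    # e.g., burial in sediments
    ("geosphere", "atmosphere", 0.006),  # e.g., weathering and outgassing
]

def step(state, dt=1.0):
    """Advance the box model one time step using first-order fluxes."""
    flux = {k: 0.0 for k in state}
    for src, dst, k in transfers:
        f = k * state[src] * dt
        flux[src] -= f
        flux[dst] += f
    return {k: state[k] + flux[k] for k in state}

state = dict(reservoirs)
for year in range(200):
    state = step(state)
print({k: round(v, 1) for k, v in state.items()})
print("total:", round(sum(state.values()), 1))  # conserved: matter only moves between spheres

Real biogeochemical cycles involve many more reservoirs and nonlinear fluxes, but the bookkeeping is the same: matter moves among the spheres while the total is conserved.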
1.3: Seeing Green
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/01%3A_Sustainability_and_the_Environment/1.03%3A_Seeing_Green
Given the dependence of humans upon a livable environment, it is essential that it be maintained in a healthy state. The maintenance of a healthy environment is commonly termed sustainability, an area that has seen a great deal of activity in recent years. Earlier efforts in the sustainability arena centered on pollution and its effects. Degradation of the environment has been a concern of thoughtful people for many decades. Dating back to the early 1800s and even before, the widespread use of high-sulfur coal for fuel was noted as a cause of bad air quality and impaired visibility in urban areas such as London. Water polluted by pathogenic microorganisms sickened and killed millions of people over many centuries. By the end of World War II, the atmosphere of Los Angeles had become noxious, irritating, and unhealthy due to the presence of ozone and other chemical oxidants, aldehydes, and small particulate matter. In some respects, this condition resembled the pollution of the London atmosphere observed earlier, a combination of smoke and fog that some called “smog.” So the condition afflicting Los Angeles and a number of similar cities came to be known as smog as well, but a kind of smog that developed in air having low humidity and exposed to intense sunlight, conditions opposite to those under which London smog formed. Chemically, the two kinds of smog are totally different: London smog formed in a reducing atmosphere with high concentrations of chemically reducing SO2, whereas Los Angeles smog is oxidizing, and any SO2 emitted to it is rapidly oxidized to sulfuric acid.

Concern over deterioration of the environment increased with the 1962 publication of Rachel Carson’s classic book Silent Spring,4 the theme of which was that DDT and other mostly pesticidal chemicals were becoming concentrated through the food chain, with the result that birds at the top of the chain, such as eagles and hawks, were producing eggs with soft shells that failed to produce viable baby birds. The implication was that substances harming bird populations might harm humans as well.

By about 1970 it was generally recognized that air, water, and land pollution was reaching intolerable levels. As a result, various countries passed and implemented laws designed to reduce pollutants and to clean up waste chemical sites, at a cost that has easily exceeded one trillion dollars globally.

More recently, concern has extended beyond a narrow focus upon pollution and its effects to include the broader area of sustainability. The achievement of sustainability certainly requires avoiding pollution and counteracting its effects. But it also means maintaining flows of essential materials, energy, food, safe water, healthy air, and the other things that humans and other organisms on Planet Earth require for their survival and well-being.

The term “green” has come to stand for sustainability in its various forms and is used throughout this book. Most of sustainability has to do with matter, and chemistry is the science of matter. It is only natural, therefore, that sustainable chemistry is now known as green chemistry, a discipline that has developed rapidly since about the mid-1990s. This book is about green chemistry.
But the practice of green chemistry more broadly involves green science and technology, which are discussed in this book and related to green chemistry.
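The chemical contrast between the two smog types described above can be summarized with a few representative overall reactions (standard atmospheric chemistry, simplified here for illustration). Combustion of high-sulfur coal releases the reducing SO2 characteristic of London smog; sunlight acting on nitrogen dioxide generates the ozone that makes Los Angeles smog an oxidizing medium; and in such an oxidizing atmosphere SO2 is converted to sulfuric acid:

\[ \mathrm{S(in\ coal) + O_2 \rightarrow SO_2} \qquad \textrm{(reducing SO}_2\textrm{, characteristic of London smog)} \]

\[ \mathrm{NO_2} + h\nu \rightarrow \mathrm{NO + O}, \qquad \mathrm{O + O_2 + M \rightarrow O_3 + M} \qquad \textrm{(photochemical ozone formation)} \]

\[ \mathrm{2SO_2 + O_2 + 2H_2O \rightarrow 2H_2SO_4} \qquad \textrm{(overall oxidation of SO}_2\textrm{ in an oxidizing atmosphere)} \]

Here \(h\nu\) represents a photon of sunlight and M is any third-body molecule (typically N2 or O2) that carries away the excess energy of the reaction.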
1.4: Natural Capital of the Earth
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/01%3A_Sustainability_and_the_Environment/1.04%3A_Natural_Capital_of_the_Earth
All of the very small group of humans who have been privileged to view Earth from outer space have been struck with a sense of awe at the sight. Photographs of Earth taken at altitudes high enough to capture its entirety reveal a marvelous sphere, largely blue in color, white where covered by clouds, with desert regions showing up in shades of brown and red. But Earth is far more than a beautiful globe that inspires artists and poets. In a very practical sense it is the source of the life support systems that sustain humans and all other known forms of life. Earth obviously provides the substances required for life, including water, atmospheric oxygen, and carbon dioxide from which billions of tons of biomass are made each year by photosynthesis, down to the trace levels of micronutrients such as iodine and chromium that organisms require for their metabolic processes. But more than materials are involved. Earth provides temperature conditions conducive to life and a shield against incoming ultraviolet radiation, whose potentially deadly photons are absorbed by molecules in the atmosphere and their energy dissipated as heat. Earth also has a good capacity to deal with waste products that are discharged to the atmosphere, into water, or into the geosphere.

The capacity of Earth to provide materials, protection, and conditions conducive to life is known as its natural capital, which can be regarded as the sum of two major components: natural resources and ecosystem services. These conditions are giving rise to a new business model termed natural capitalism. Early hunter-gatherer and agricultural human societies made few demands upon Earth’s natural capital. As the industrial revolution developed from around 1800, natural resources were abundant and production of material goods was limited largely by labor and the capacity of machines to process materials. But now population is in excess, computerized machines have an enormous capacity to process materials, and the availability of natural capital is the limiting factor in production, including the availability of natural resources, the vital life-support ability of ecological systems, and the capacity of the natural environment to absorb the byproducts of industrial production, most notably the greenhouse gas carbon dioxide.

Rather than the adversarial relationship that has prevailed between the traditional business community and environmentalists with regard to economic development, a functioning system of natural capitalism properly values natural and environmental resources. The goal of natural capitalism is to increase well-being, productivity, wealth, and capital while reducing waste, consumption of resources, and adverse environmental effects. The traditional capitalist economic system has proven powerful in delivering consumer goods and services using the leverage of individual and corporate incentives. A functional system of natural capitalism retains these economic drivers while incorporating sustainable practices such as recycling wastes back into the raw material stream and emphasizing the provision of services rather than just material goods. In so doing, a system of natural capitalism emulates nature’s systems through the practice of industrial ecology, discussed in detail in Chapter 13, and the application of the principles of green chemistry (see Chapter 2).

The development of a functional system of natural capitalism requires several important changes in business practices.
These include the following:

Evolution of the Utilization of Natural Capital

The burden on Earth’s natural capital has grown with the progression of economic development. During pre-industrial times the capacity of humans to deplete natural capital was minimal, largely because of limitations on the rates at which energy could be used. As the industrial revolution developed and humans learned how to harness energy sources, particularly from fossil fuels, Earth’s natural capital was increasingly consumed in areas such as exploitation of depleting resources and utilization of the hydrosphere, geosphere, and atmosphere for the disposal of wastes. As industrialization progressed, it became increasingly obvious that it was causing problems in areas such as air and water pollution, soil erosion exacerbated by the capabilities of fossil-fueled tillage machinery to disturb soil, and depletion of rich ore sources necessitating the mining of much larger amounts of less rich ores to obtain needed quantities of metals and other geospheric resources. As a consequence, laws were passed and regulations put into place to reduce pollution and to conserve resources. Particularly after the devastating Dust Bowl years of the 1930s, in which much productive topsoil was lost to wind erosion, the U.S. government initiated programs of soil conservation with incentives to preserve the essential soil resource. Efforts to reduce air and water pollution concentrated initially on the most obvious pollutants, such as particles emitted from smokestacks, followed by greater emphasis upon more insidious pollutants, such as heavy metals in water. The regulatory approach has been evolving into one that emphasizes pollution prevention, recycling, and conservation of energy and materials. A final phase is sustainable development and utilization of green technology that can support growing economic development while substantially reducing exploitation of Earth’s natural capital.
1.5: Sustainability as a Group Effort- It Takes a (Very Big) Village
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/01%3A_Sustainability_and_the_Environment/1.05%3A_Sustainability_as_a_Group_Effort-_It_Takes_a_(Very_Big)_Village
The achievement of sustainability and the preservation of natural capital require intense efforts by both individuals and groups. This was illustrated centuries ago in England by “the tragedy of the commons.”5 The commons consisted of a pasture shared by village residents to provide forage for their cows, sheep, and horses. An individual family could increase its wealth (in meat, milk, or horsepower) by adding an animal. For example, a one-cow family could double its wealth in cows by buying another and putting it to graze in the pasture. If the pasture was accommodating 100 cows, this would have an apparent cost of only about 1% for the small community as a whole. The natural tendency was for families to keep on adding cows until a point was reached at which the pasture became exhausted and unproductive due to overgrazing, the animals died or had to be slaughtered, and the entire support system for milk and meat based upon the natural capital of the pasture collapsed. During the fourteenth century this unfortunate circumstance became so widespread that the economies of many villages collapsed, with whole populations no longer able to provide for their basic food needs.

History has many examples of the tragedy of the commons. For example, when settlers began to cultivate what was formerly open rangeland in Edwards County, Texas, in the 1880s, the ranchers who had used it for pasture met and proclaimed the following: “Resolved that none of us know, or care to know, anything about grasses, native or otherwise, outside of the fact that for the present, there are lots of them, the best on record, and we are getting the most of them while they last.”6 Soon the combined effects of overgrazing and drought reduced the yield of grass such that the ranchers’ livelihoods were threatened and the newly cultivated land became unproductive. Shortsighted attitudes toward Earth’s natural capital similar to those expressed by the ranchers continue to lead to many tragedies of the commons. In modern times, heavy cultivation of marginal land is turning large areas to desert (desertification), the Amazon rain forest is being cut down and burned to provide a one-time harvest of wood and a few years of crop production (deforestation), severe deterioration of the global ocean fisheries resource is occurring, congested freeways at times become great linear parking lots, and, of much direct concern to many university students and faculty, some parking facilities have become so oversold that their utility is seriously curtailed because paying customers cannot find parking space.

The logic of the commons holds true in modern times, in which the global commons consist of the air humans must breathe, water resources, agricultural lands, mineral resources, the capacity of the natural environment to absorb wastes, and all other facets of natural capital. According to the logic of the commons, each consumer has the right to acquire a segment of natural capital, the cost of which is distributed throughout the commons and shared by all. The natural competition among consumers results in some consumers acquiring relatively more of Earth’s natural capital and becoming wealthier. Within limits this is a healthy consequence of capitalist systems.
However, if enough consumers use too much natural capital, it becomes exhausted and unsustainable, and therefore unable to support the society as a whole, so that all suffer, including those at the top of the consumer food chain.

Automotive transportation illustrates a modern tragedy of the commons. Acquisition of an automobile adds to an individual’s possessions and mobility. The materials required to make a single automobile, the fuel to run it, and its exhaust pollutants make a minuscule impression on Earth’s natural capital. However, when millions of people acquire automobiles, Earth’s natural capital of materials, fuel, and ability to absorb pollutants becomes severely stressed, heavy traffic turns the automobile from a convenience into a burden, and, in some places at some times, the whole transportation system collapses.

These “tragedies of the commons” illustrate the limitations of unregulated, “free-for-all” capitalist economic systems in achieving sustainable development and make a strong case for collective actions in the public sector to ensure that humankind can exist within the limits of Earth’s natural capital. However, the collapse of Communist economic systems around 1990 left a legacy of abandoned, inefficient factories, poverty, and environmental degradation, showing the adverse effects of discouraging private enterprise. In addition to enlightened regulations that ensure preservation of Earth’s essential support systems, successful economic systems require human ingenuity, initiative, and even greed. Getting these and other incentives to work well on a planet on which natural capital is the major limiting economic factor is the huge challenge facing modern and developing economies.

There is an old African proverb that translates to, “It takes a village to raise a child.” The idea, of course, is that successful child-rearing requires the efforts of more than just parents; it requires an entire village. The same principle applies to Planet Earth, except that in this case billions of children are being raised and it will take the efforts of a very large village, the population of the entire world, to preserve the planet and the resources upon which those billions of children must depend for their existence.
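The arithmetic that drives the tragedy of the commons, a private gain of one whole animal against a cost diluted across the whole community, is easy to see in a toy simulation. The Python sketch below is a minimal illustration with hypothetical numbers (family count, pasture capacity, and payoffs are invented for the example, not taken from the text):

# Toy model of the commons: each family weighs its private gain against
# a grazing cost shared by the whole village. All numbers are hypothetical.
FAMILIES = 20
CAPACITY = 100          # cows the pasture can sustain before collapse
cows = [1] * FAMILIES   # each family starts with one cow

def private_incentive(total_cows):
    """A family gains a whole cow; the grazing burden is shared by all."""
    gain = 1.0                            # one more cow of wealth
    shared_cost = 1.0 / max(total_cows, 1)  # the added burden, spread over the herd
    return gain - shared_cost             # always positive -> always "worth it"

year = 0
while sum(cows) <= CAPACITY:
    year += 1
    for i in range(FAMILIES):
        if private_incentive(sum(cows)) > 0:  # each individually rational choice
            cows[i] += 1
print(f"Pasture exhausted in year {year} with {sum(cows)} cows;")
print("every individually rational decision summed to collective ruin.")

Because the gain is private and the cost is diluted across the commons, the incentive never turns negative before the resource fails, which is exactly the logic of the commons described above.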
1.6: Sustainable Energy- Away from the Sun and Back Again
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Green_Chemistry_and_the_Ten_Commandments_of_Sustainability_(Manahan)/01%3A_Sustainability_and_the_Environment/1.06%3A_Sustainable_Energy-_Away_from_the_Sun_and_Back_Again
As discussed in more detail in Chapter 15, the key to sustainability is abundant, environmentally safe energy, and humankind’s utilization of energy has evolved through distinct stages. Until very recently in the history of humankind, we have depended upon the sun to meet our energy needs. The sun has kept most of the land mass of Earth at a temperature that enables human life to exist. Solar radiation has provided the energy for photosynthesis to convert atmospheric carbon dioxide to plant biomass, providing humans with food, fiber, and wood employed for dwelling construction and fuel. Animals feeding on this biomass provided meat for food and hides and wool that humans used for clothing. Eventually humans developed means of using solar energy indirectly. This was especially true of wind, driven by solar heating of air masses and used to propel sailing vessels and eventually to power windmills. The solar-powered hydrologic cycle provided flowing water, the energy of which was harnessed by water wheels. Virtually all the necessities of life came from utilization of solar energy.

Dating from around 1800, humankind began to exploit fossil fuels for its energy needs. Initially, coal was burned for heating and to power newly developed steam engines that supplied mechanical energy for manufacturing and steam locomotives. After about 1900, petroleum developed rapidly as a source of fuel and, with the development of the internal combustion engine, became the energy source of choice for transportation. Somewhat later, natural gas developed as an energy source. The result was a massive shift from solar and biomass energy sources to fossil fuels.

Utilization of fossil carbon-based materials resulted in a revolution that went far beyond energy. One important example was the invention by Carl Bosch and Fritz Haber in Germany in the early 1900s of a process for converting elemental nitrogen from air to ammonia, NH3, by the reaction of N2 with H2 (representative overall reactions are sketched at the end of this section). This high-pressure, high-temperature process required large amounts of fossil fuel both to provide energy and to react with steam to produce elemental hydrogen. The discovery of synthetic nitrogen fixation enabled the production of huge quantities of relatively inexpensive nitrogen fertilizer, and the resulting increase in agricultural production may well have saved Europe, with its rapidly growing population at the time, from widespread starvation. (It also enabled the facile synthesis of great quantities of nitrogen-based explosives that killed millions of people in World War I and subsequent conflicts.) Fossil fuel, which has been described as “fossilized sunshine,”7 resulted in an era of unprecedented material prosperity and an increase in human population from around 1 billion to over 6 billion.

By the year 2000 it had become obvious that the era of fossil fuels was not sustainable. One reason is that fossil fuel is a depleting resource that cannot last indefinitely as the major source of energy for the industrial society to which it has led. Approximately half of the world’s total petroleum resource has already been consumed, so petroleum will continue to become more scarce and expensive and can last for only a few more decades as the dominant fuel and organic chemicals raw material.
Coal is much more abundant, but its utilization points to the second reason that the era of fossil fuels must end: coal combustion is the major source of anthropogenic atmospheric carbon dioxide, greatly increased levels of which will almost certainly lead to global warming and massive climate change. Natural gas (methane, CH4) is an ideal, clean-burning fossil fuel that produces the least carbon dioxide per unit of energy generated. Rapidly expanding discoveries of natural gas, largely from previously inaccessible tight shale formations, mean that it can serve as a “bridging fuel” for several decades until other sources are developed. Nuclear energy, properly used with nuclear fuel reprocessing, can take on a greater share of energy production, especially for base-load electricity generation. But clearly, drastic shifts must occur in the ways in which energy is obtained and used.

With the closing of the brief but spectacular era of fossil hydrocarbons, the story of humankind and its relationship to Planet Earth is becoming one of “from the sun to fossil fuels and back again,” as humankind returns to the sun as the dominant source of energy and of the photosynthetic energy that converts atmospheric carbon dioxide to biomass raw materials. In addition to direct uses for solar heating and for photovoltaic power generation, there is enormous potential to use the sun for the production of energy and materials. Arguably the fastest-growing energy source in the world is wind-generated electricity; the wind is produced when the sun heats masses of air, causing the air to expand. Once the dominant source of energy and materials, biomass produced by solar-powered photosynthesis is beginning to live up to its potential as a source of feedstocks to replace petroleum in petrochemicals manufacture and as a source of energy in synthetic fuels (see Chapter 14, “Feeding the Anthrosphere: Utilizing Renewable and Biological Materials,” and Chapter 15, “Sustainable Energy: The Essential Basis of Sustainable Systems”).

Biomass is still evolving as a practical source of liquid fuels. The two main routes are fermentation to produce ethanol and synthesis of biodiesel fuel from plant lipid oils. Although ethanol made from sugar derived from sugar cane, which grows prolifically in some areas such as Brazil, is an economical gasoline substitute, ethanol derived from cornstarch relies on the grain, the most valuable part of the plant, otherwise used for food and animal feed, and its net energy gain is marginal. The economics of producing synthetic biodiesel fuel from sources such as soybeans may be somewhat better. However, production of this fuel from oil palm trees in countries such as Malaysia is resulting in destruction of rain forests and diversion of palm oil from the food supply.

Practical means do exist to utilize biomass for energy and materials without seriously disrupting the food supply. Arguably the best approach is to thermochemically convert biomass to synthesis gas, a mixture of CO and H2 that can be combined chemically by long-established synthetic routes to produce methane, larger-molecule hydrocarbons, alcohols, and other products (see Chapter 15 and the reactions at the end of this section). The main pathway for doing so is to utilize biomass from renewable non-food biosources, which include crop byproducts (wheat straw, rice straw, and cornstalks produced in surplus during the production of grain) and dedicated crops, among which are highly productive hybrid poplar trees and sawgrass.
Microscopic algae are especially promising as a biomass source because of their much higher productivity than terrestrial plants, their ability to grow in brackish (somewhat saline) water in containments in desert areas, and their ability to utilize sewage as a nutrient source. When biomass is used to produce synthesis gas, the essential nutrients, especially potassium and phosphorus, can be reclaimed from the biomass residues and used as fertilizer to promote the growth of additional biomass.

Future scientific discoveries and technological advances will play key roles in the achievement of energy sustainability. Three areas in which Nobel-level breakthroughs are needed were identified in a February 2009 interview by Dr. Steven Chu, the Nobel Prize-winning physicist who had just been appointed Secretary of Energy in U.S. President Barack Obama’s new administration. The first is in solar power, in which the efficiency of solar energy capture and conversion to electricity needs to improve several-fold. A second area of need is improved electric batteries to store electrical energy generated by renewable means and to enable practical driving ranges in electric vehicles. A third area in need of a quantum leap is improved crops capable of converting solar energy to chemical energy in biomass by photosynthesis at much higher efficiencies; most plants currently convert less than 1% of the solar energy falling on them to chemical energy, so the potential for improvement is enormous. Through genetic engineering, it is likely that this efficiency could be improved several-fold, leading to vastly increased generation of biomass. Clearly, the achievement of sustainability employing high-level scientific developments will be an exciting development in decades to come.
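For reference, the key overall reactions mentioned in this section, standard industrial and biological chemistry rather than anything specific to this text, can be written as follows. Steam reforming of a fossil fuel (here methane) supplies the hydrogen for Haber-Bosch ammonia synthesis; gasification of biomass yields synthesis gas, which can be recombined into fuels such as methane or methanol; and photosynthesis stores solar energy in biomass:

\[ \mathrm{CH_4 + H_2O \rightarrow CO + 3H_2} \qquad \textrm{(steam reforming, the hydrogen source)} \]

\[ \mathrm{N_2 + 3H_2 \rightarrow 2NH_3} \qquad \textrm{(Haber-Bosch ammonia synthesis)} \]

\[ \mathrm{C(biomass) + H_2O \rightarrow CO + H_2} \qquad \textrm{(gasification to synthesis gas)} \]

\[ \mathrm{CO + 3H_2 \rightarrow CH_4 + H_2O} \quad \textrm{(methanation)}, \qquad \mathrm{CO + 2H_2 \rightarrow CH_3OH} \quad \textrm{(methanol synthesis)} \]

\[ \mathrm{6CO_2 + 6H_2O} \xrightarrow{h\nu} \mathrm{C_6H_{12}O_6 + 6O_2} \qquad \textrm{(photosynthesis)} \]

The synthesis-gas reactions are the “long-established synthetic routes” referred to above, and the photosynthesis reaction underlies both biomass production and the sub-1% solar conversion efficiency cited from the Chu interview.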